
graphql-stitching-ruby's Issues

Support field-level authorization

It'd be nice to formally support field-level authorization through the query planner, similar to other federation libraries. A few specs:

  • Unauthorized fields are simply filtered out of the request by default.
  • A setting opts requests with unauthorized fields into returning immediately with an error.

It looks like @mikeharty has been doing some auth work in his custom executor. Mike – any chance you could elaborate here with more on how the feature could/should work with what you're already doing?

Variables with boolean value `false` are overwritten in Request.prepare!

In the Request class, the prepare! method applies default values to variables using conditional assignment:

operation.variables.each do |v|
  @variables[v.name] ||= v.default_value
end

Unfortunately, false is falsey (obviously), so it is always overwritten with the value of v.default_value (which could be nil or true).

This is critical, I'd think, considering it can flip false boolean values to true if the default is true.

Script to reproduce (run from project root):

require_relative 'lib/graphql/stitching'
require_relative 'lib/graphql/stitching/request'

query = <<~GRAPHQL
  query($a: Boolean, $b: Boolean, $c: Boolean = true) {
    base(a: $a, b: $b, c: $c) { id }
  }
GRAPHQL

variables = { "a" => true, "b" => false, "c" => false }
request = GraphQL::Stitching::Request.new(GraphQL.parse(query), variables: variables)
request.prepare!

puts request.variables
# {"a"=>true, "b"=>nil, "c"=>true}

Fix is simple, I'll open a PR in a moment with the fix + a test.
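
For reference, a minimal version of the fix (the actual PR may differ) assigns the default only when the variable key is absent, rather than when its value is falsey:

operation.variables.each do |v|
  # `||=` clobbers legitimate `false` values; check key presence instead.
  @variables[v.name] = v.default_value unless @variables.key?(v.name)
end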

Plan runtime directives, @skip and @include

At present, runtime directives are completely ignored by the planner. Need to:

  • Extract variables from all runtime directives while planning.
  • Confirm that runtime directives are passed through in subqueries.
  • Make planning operations aware of runtime directives on their root scope, and conditionally run operations when an entire operation can be ignored.
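
For example (hypothetical schema), nothing currently extracts $withReviews for a delegated subquery, and the conditional isn't considered when deciding whether a forked operation needs to run at all:

query($withReviews: Boolean!) {
  product(id: "1") {
    name
    reviews @include(if: $withReviews) {
      body
    }
  }
}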

Batch all queries to a location per execution frame

Right now, many operations may have the same after_key yet all target the same location on behalf of different insertion paths. This results in several requests made to the same service during the same execution frame. We should expand batching to write a single query for all of the different operations being delegated during a given frame.
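
Illustratively (operation and field names hypothetical), two same-frame operations targeting the products location should collapse into one request:

# Before: two requests to the products location in one frame
query MyOperation_2 { _2_result: product(upc: "1") { name } }
query MyOperation_3 { _3_result: product(upc: "2") { name } }

# After: one combined request
query MyOperation_2_3 {
  _2_result: product(upc: "1") { name }
  _3_result: product(upc: "2") { name }
}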

Support for multipart form file uploads?

I recently added support for multipart form file uploads in the project where I've implemented this library. I've been debating whether or not it's something that would make sense to incorporate here.

My implementation has deviated pretty far in this particular area; I implemented a custom HttpExecutable in order to swap the client to RestClient (needed a proxy), and to add pre/post-request hook points to manipulate queries and responses. This is also where the multipart form handling happens, I'm using apollo_upload_server-ruby, which does follow the spec, but is a Rails specific implementation.

All of that is essentially to say: maybe this belongs in a custom implementation, if needed. I figured I'd get your take before making that assumption. It also crossed my mind that a plugin-style approach might make sense. Let me know if there's any interest; I'm happy to propose a solution more formally if there's an appetite for it, I just wanted to gauge whether this feels messy or out of scope first.

Arguments are dropped during composition

From @mikeharty:

# Getting double args sometimes... why?
return if owner.arguments.any? { _1.first == argument_name }

This line causes arguments to be dropped entirely. I haven't dug into the root cause of the double arguments, but if I change that return to next, my arguments stop being dropped.

I'm testing this against a reasonably substantial schema (~400 distinct types), and with these changes composition generates a schema identical to the unstitched one.

n+1 / batching

Hello!

I'm considering using this gem for a graphql-ruby / packwerk project. The idea of having in-process graphql stitching is compelling. I'm curious if you have thoughts about how I could avoid n+1 problems. I don't have a great understanding about how gateways / schema stitching tools handle this in general.

Composer directives for omitting elements

Similar to other composer libraries, there should be some controls for prioritizing and hiding elements in a schema. Specifically:

  • @override: prioritize a field from a given location. This can replace the current root_field_location_selector proc. This might make more sense as a @priority directive that can assign a numeric priority to fields. Priority determines the location’s rank in the field’s delegation set (the planner searches field locations from first-to-last, so the first location has priority). It might also be nice to prioritize fields as “-1” to omit them from the delegation set entirely so they are never considered (the planner will still pick non-priority locations when it allows making fewer requests).

  • @inaccessible: eliminates a field or argument from the combined schema. The element is omitted from the supergraph when any subschema marks it as inaccessible. This feature has a few implications:

    • This can lead to empty object scopes (object types with zero fields), which should not be allowed. Raise a composer error if a type resolves with zero fields.
    • This can also lead to unreachable types left in the schema. We’d need to traverse from the root query and mutation types before building the supergraph and prune any types without field/argument usage.
    • Inaccessible elements should still be known to resolver queries, which may operate with subgraph information beyond the scope of the public supergraph. These extended criteria should only ever be selected through exports, so they should be compatible with the current Shaper, but tests should confirm that.
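
A rough sketch of how these could read in a subschema SDL (directive arguments hypothetical):

type Product {
  id: ID!
  name: String @priority(value: 1)    # this location ranks first for `name`
  internalSku: String @inaccessible   # omitted from the combined schema
}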

Root fields with fragments do not export typename

I discovered that when the data for a corresponding type cannot be retrieved from an external server (returning null), the results can vary based on how fragments are used, even when the queries are semantically equivalent.

In my case, access to the corresponding type may be restricted due to user permissions, leading to its unavailability from the external server.

I have created a sample to reproduce the issue and would like to share it with you.

Schema Definition

ServiceA

  • Server facing client applications
  • Manages ParentResource
type Query {
  parentResource(id: ID!): ParentResource

  subResourcesByIds(ids: [ID!]!): [SubResource]!
}

type ParentResource {
  id: ID!
  subResource: SubResource
}

type SubResource {
  id: ID!
}

ServiceB

  • Server behind ServiceA
  • Manages SubResource
  • Viewing SubResource may not be permitted depending on the user
  • Therefore, subResourcesByIds may return null elements
type Query {
  subResourcesByIds(ids: [ID!]!): [SubResource]!
}

type SubResource {
  id: ID!
  serviceBField: String!
}

Query Results

In the following examples, ServiceB returns { "data": { "_0_result": [null] } }

Query that Returns the Expected Result

Query

query {
  parentResource(id: "ParentResource:1000") {
    id
    subResource {
      ...SubResourceFragment
    }
  }
}

fragment SubResourceFragment on SubResource {
  id
  serviceBField
}

Result

{
  "data": {
    "parentResource": {
      "id": "ParentResource:1000",
      "subResource": null
    }
  }
}

Query that Returns the Unexpected Result

Query

query {
  parentResource(id: "ParentResource:1000") {
    ...ParentResourceFragment
  }
}

fragment ParentResourceFragment on ParentResource {
  id
  subResource {
    id
    serviceBField
  }
}

Result

{
  "data": {
    "parentResource": {
      "id": "ParentResource:1000",
      // `serviceBField` is defined as non-nullable, but the field is not being returned
      // This makes the response violate the Schema. I expect `subResource` to be returned as null.
      "subResource": {
        "id": "SubResource:2000",
        "_export_id": "SubResource:2000",
        "_export___typename": "SubResource"
      }
    }
  }
}

I hope this information is helpful in addressing the issue.

Need conditional type checks during execution

Summary

Requests are statically planned up front, which means that fragment selections may generate operations forking from types that are not actually resolved. We need to perform a type check after each execution and (recursively?) eliminate child operations that don't actually apply to the resolved type.
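
For example (hypothetical schema), this request statically plans a forked operation for Product.reviews, which must be discarded at runtime whenever node resolves to anything other than a Product:

query {
  node(id: "1") {
    ... on Product {
      reviews { body }
    }
  }
}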

Argument defaults are ignored during composition

From @mikeharty:

I'm not totally clear on whether "default_value" is officially supported in the GraphQL gem, but we are using it and it does work. It appears to be left out of the implementation here: https://github.com/gmac/graphql-stitching-ruby/blob/main/lib/graphql/stitching/composer.rb#L369

I've worked around this by adding a merge_default_values function to the monkey patch I mentioned above:

def merge_default_values(type_name, members_by_location, argument_name: nil, field_name: nil)
  default_values = members_by_location.values.map(&:default_value)
  
  return nil if default_values.any?(&:nil?)
  
  if default_values.uniq.length != 1
    path = [type_name, field_name, argument_name].compact.join('.')
    raise ComposerError, "Default values for `#{path}` must be the same."
  end
  
  default_values.first
end

I also updated the code at the point I linked to conditionally add the default_value via kwargs when it has a value, so that fields that previously had no default don't get defaulted to null (which caused validation issues otherwise):

type = merge_value_types(type_name, value_types, argument_name: argument_name, field_name: field_name)
default_value = merge_default_values(type_name, arguments_by_location, argument_name: argument_name, field_name: field_name)
kwargs = {}
kwargs[:default_value] = default_value unless default_value.nil? && type.non_null?
schema_argument = owner.argument(
  argument_name,
  description: merge_descriptions(type_name, arguments_by_location, argument_name: argument_name, 
                                                                    field_name: field_name),
  deprecation_reason: merge_deprecations(type_name, arguments_by_location, argument_name: argument_name, 
                                                                           field_name: field_name),
  type: GraphQL::Stitching::Util.unwrap_non_null(type),
  required: type.non_null?,
  camelize: false,
  **kwargs
)

Feat: hoist inlined inputs

Summary

Input values embedded into a GraphQL document make that document unique and prevent it from hashing consistently when looking for a cached query plan:

query { 
  product(id: "1") { name }
}

A nice add-on would be a utility that traverses a request document and extracts input literals and hoists them up to document variables. Then no matter how the request was submitted, it will be subject to plan caching with a normalized body:

query($_hoist_0: ID!){ 
  product(id: $_hoist_0) { name }
}

# variables { "_hoist_0": "1" }

This normalization would be appropriate to happen in the Document object.
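
A rough sketch of that traversal using GraphQL Ruby's AST visitor is below. This is not the library's implementation: a real version would resolve argument types against the schema (the "1" above should hoist as ID!, not String!), whereas this naive version just maps Ruby literal classes to scalar names:

require "graphql"

class HoistLiterals < GraphQL::Language::Visitor
  # Naive literal-class => GraphQL type mapping; schema-aware typing is needed in practice.
  SCALARS = {
    String => "String", Integer => "Int", Float => "Float",
    TrueClass => "Boolean", FalseClass => "Boolean",
  }.freeze

  attr_reader :variables, :definitions

  def initialize(document)
    super
    @variables = {}    # hoisted values keyed by generated variable name
    @definitions = []  # variable definitions to append to the operation
  end

  def on_argument(node, parent)
    if (type_name = SCALARS[node.value.class])
      var_name = "_hoist_#{@variables.length}"
      @variables[var_name] = node.value
      @definitions << GraphQL::Language::Nodes::VariableDefinition.new(
        name: var_name,
        type: GraphQL::Language::Nodes::NonNullType.new(
          of_type: GraphQL::Language::Nodes::TypeName.new(name: type_name)
        )
      )
      # Swap the literal for a variable reference in the rewritten AST:
      node = node.merge(value: GraphQL::Language::Nodes::VariableIdentifier.new(name: var_name))
    end
    super
  end
end

visitor = HoistLiterals.new(GraphQL.parse('query { product(id: "1") { name } }'))
visitor.visit

# Append the collected definitions to the rewritten operation:
op = visitor.result.definitions.first
doc = visitor.result.merge(definitions: [op.merge(variables: op.variables + visitor.definitions)])
puts doc.to_query_string       # query($_hoist_0: String!) { product(id: $_hoist_0) { name } }
puts visitor.variables.inspect # {"_hoist_0"=>"1"}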

Support GraphQL v1.13

There's presently one failing test involving a late-bound type when running with GraphQL v1.13. Let's assess and fix/ignore to expand gem compatibility down to GraphQL Ruby v1.13.

Need a Gateway component

Summary

The library is organized around composable pieces:

Composer -> Supergraph -> Planner -> Executor -> Shaper

These are intentionally discrete so that parts and pieces of the stitching workflow can be mixed and matched (i.e. precomposing and caching the supergraph, caching and restoring query plans, etc.). However, this makes the library difficult to use quickly out of the box. We need a Gateway component that rolls up a boilerplate workflow of all the parts and pieces into one unit.

Desired API

Should be easy to build a stitched gateway and use it to execute requests:

gateway = GraphQL::Stitching::Gateway.new({
  products: {
    schema: GraphQL::Schema.from_definition(movies_schema),
    client: GraphQL::Stitching::RemoteClient.new(url: "http://localhost:3000"),
  },
  showtimes: {
    schema: GraphQL::Schema.from_definition(showtimes_schema),
    client: GraphQL::Stitching::RemoteClient.new(url: "http://localhost:3001"),
  },
  local: {
    schema: MyLocal::GraphQL::Schema
  },
})

gateway.cache_read do |key|
  $redis.get(key) # << 3P code
end

gateway.cache_write do |key, payload|
  $redis.set(key, payload) # << 3P code
end

result = gateway.execute(
  # Same basic arguments as GraphQL Ruby (https://graphql-ruby.org/queries/executing_queries)
  query: query_string,   # or document: parsed_ast
  variables: variables,
)

Steps

This general workflow is mostly laid out in example/gateway.rb. Use that for reference...

  1. Run Composer during gateway initialization; make sure all location names are input as strings.
  2. Also during initialization, add provided clients to the composed Supergraph. A client is anything that responds to .call(). We should consolidate Supergraph's assign_location_url and assign_location_handler into one "assign_location_client" method that accepts any callable object.
  3. The Gateway cache_read and cache_write methods should stash the provided procs as instance variables. These are not required to be set.
  4. Add an execute method; its signature should be a subset of the GraphQL::Schema.execute method. When invoking execute:
     • Generate a Document (stitching lib) from the provided document or query.
     • Validate the document AST against the Supergraph schema. Format and return any validation errors.
     • Bonus: add a variable hoisting routine to normalize the document.
     • If there's a cache reader, then generate a document SHA and request it from the cache accessor. Parse any returned results (currently requires JSON to parse with symbolized keys). If we have a cached plan, we can skip the next step.
     • Plan the submitted request document unless a cached plan was found.
       • If there's a cache writer, then provide the generated plan and the request SHA to the cache writer.
     • Execute the plan with the provided query variables.
     • Pass the raw execution result to the Shaper (in progress).
  5. Needs tests.
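
For reference, the execute flow above might condense to something like this sketch; Document#ast, Document#digest, and the Planner/Executor keyword signatures here are approximations of the steps, not a settled API:

def execute(query: nil, document: nil, variables: {})
  document = GraphQL::Stitching::Document.new(query || document)

  # Static validation against the combined schema; format and return errors:
  validation_errors = @supergraph.schema.validate(document.ast)
  return { "errors" => validation_errors.map(&:to_h) } if validation_errors.any?

  # Check the plan cache first, keyed by a digest of the normalized document:
  plan = if @cache_read
    cached = @cache_read.call(document.digest)
    JSON.parse(cached, symbolize_names: true) if cached
  end

  # Plan on a cache miss, then hand the plan and digest to the cache writer:
  unless plan
    plan = GraphQL::Stitching::Planner.new(supergraph: @supergraph, document: document).perform.to_h
    @cache_write&.call(document.digest, JSON.generate(plan))
  end

  result = GraphQL::Stitching::Executor.new(supergraph: @supergraph, plan: plan, variables: variables).perform
  result # @todo pass through the Shaper once it lands
end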

Issue mapping Enum values to keys in Composer.build_enum_type

Hello,

First, thanks for writing this, the implementation is very clean and easy to walk through.

I have a case where I'm upgrading a pretty stale version of GraphQL and I've run into an issue with Enums. The project makes use of the value property on Enums to translate between GraphQL Enum "labels" and Ruby values, e.g.:

Enum.value('UNSPECIFIED', 'Unspecified value', value: 'none') will be UNSPECIFIED in the Schema, but if passed as a query argument, will appear as "none" in Ruby.

The issue comes up during introspection, when Composer.build_enum_type attempts to build the enum types.

On this line: https://github.com/gmac/graphql-stitching-ruby/blob/main/lib/graphql/stitching/composer.rb#L238
it is constructing a new EnumValue via EnumValue.value, but it passes the value as the first argument, rather than the graphql_name.

In my case, this causes two issues:

  1. Some of my Enum values are not valid EnumValue names, they are Ruby primitives or don't meet the naming validation rules
  2. Those that do pass validation are still detached from their original EnumValue which links the name and value together.

I experimented pretty thoroughly with different approaches to solve this; I was hoping I could achieve it by implementing a custom enum_value_class, but the necessary values aren't passed down. For now, I've resorted to monkey-patching build_enum_type, which gets the job done for my narrow case, but there's likely a better general solution. For my schema, I don't have overlapping types, so I can reliably pick the first location an EnumValue was seen in and pull the graphql_name off of that, which I've done by simply:

enum_values_by_value_location.each do |value, enum_values_by_location|
  # Getting the first location
  location = enum_values_by_location.keys.first
  # Getting the GraphQL name off of it, or falling back to original behavior
  graphql_name = enum_values_by_location[location]&.graphql_name || value
  enum_value = value(graphql_name,
    value: value,
    description: builder.merge_descriptions(type_name, enum_values_by_location, enum_value: value),
    deprecation_reason: builder.merge_deprecations(type_name, enum_values_by_location, enum_value: value))
end

Any tips or thoughts appreciated, happy to offer any additional info as needed.

Support schema visibility controls

GraphQL Ruby supports visibility controls for selectively hiding parts of a schema from view. Stitching should be able to piggy-back on the GraphQL Ruby implementation of the feature to allow portions of the combined schema to be hidden. Visibility controls would make sense as directives structured similar to Apollo authorizations.

Consider renaming `@boundary` to `@stitch`

The @boundary directive is terminology borrowed from Bramble, and while I like it, it's not necessary to carry it over; it's also potentially confusing given that Bramble's boundary annotations work quite differently.

Need async executor execution

Summary

The Executor currently runs request executions synchronously.

def exec!
  # @todo make this async
  next_ops = @queue.select { _1[:after_key].nil? }

  while next_ops.any?
    next_ops.each do |op|
      # Each of these "next operations" should be run in parallel...
      # Also, each individual completion should trigger a next round looking for new operations
      # (we do NOT need to await all operations in this round before looking for followups)
      @status[op[:key]] = perform_operation(op)
    end

    next_ops = @queue.select do |op|
      after_key = op[:after_key]
      after_key && @status[after_key] == :completed && @status[op[:key]].nil?
    end
  end
end

We need to explore async options for running batches of requests concurrently. We'd ideally align with GraphQL Ruby's async implementation to avoid new dependencies, or match GraphQL Batch. Things to look at:

  • GraphQL Ruby dataloader docs
  • The GraphQL Interpreter, which is what all GraphQL::Schema.execute calls go to. All requests are multiplexed (single execution is just a wrapper for a multiplex of one)... and multiplexing uses GraphQL's built-in dataloader.
  • Might want to talk to @swalkinshaw about how GraphQL Batch gets its async event reactor. Following GraphQL Batch would be a good secondary approach.
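
As a strawman (assuming perform_operation is thread-safe), a thread-per-operation pass could look like this; note it still awaits each full round before scanning for followups, which the comments above say we shouldn't need to do:

def exec!
  next_ops = @queue.select { _1[:after_key].nil? }

  until next_ops.empty?
    # Run this frame's operations concurrently, but still await the whole round:
    next_ops.map do |op|
      Thread.new { @status[op[:key]] = perform_operation(op) }
    end.each(&:join)

    next_ops = @queue.select do |op|
      after_key = op[:after_key]
      after_key && @status[after_key] == :completed && @status[op[:key]].nil?
    end
  end
end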

Remote errors are not properly propagated to clients

When one of our locations returns an error, GraphQL::Stitching::Client doesn't propagate that error and instead raises `no implicit conversion of nil into Array (TypeError)` at this line:

@executor.errors.concat(extract_errors!(origin_sets_by_operation, errors)) if errors&.any?

This is because extract_errors!(origin_sets_by_operation, errors) returns nil.

Looking at the implementation,

end
errors_result.flatten!
end

This flatten! is the crux: Array#flatten! returns nil when it makes no changes, so the method returns nil whenever errors_result needed no flattening.
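
A one-line fix is to return the array itself rather than the return value of the in-place mutation, for example:

# Array#flatten! returns nil when nothing changed; tap returns the receiver.
errors_result.tap(&:flatten!)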

Need Shaper component

Summary

Right now, a raw execution result is returned directly. This raw result has many possible inaccuracies:

  • Might contain stitching keys that were automatically added.
  • Might be missing requested fields rather than providing the requested field with a null value.
  • Needs to apply schema nullability constraints to the resolved payload.

We need a final-pass algorithm that traverses the original request, prunes extra payload fields, adds missing payload fields as null, and then bubbles nullability constraints up through the document tree. Same basic idea as the Apollo resultsShaper or Bramble bubbleUpNullValuesInPlace.

There's a dev branch setup for this work here: https://github.com/gmac/graphql-stitching-ruby/compare/dev_shaper?expand=1

Tests run using:

bundle exec rake test TEST=test/graphql/stitching/shaper_test.rb

Example

The user requested this:

query {
  storefront(id: "1") {
    id
    products {
      upc
      name
      price
      nullableField
    }
  }
}

But the raw execution result looks like this:

{
  "data": {
    "storefront": {
      "id": "1",
      "products": [
        {
          "upc": "1",
          "_STITCH_upc": "1",
          "_STITCH_typename": "Product",
          "name": "iPhone",
          "price": 699.99,
          "nullableField": 1
        },
        {
          "upc": "2",
          "_STITCH_upc": "2",
          "_STITCH_typename": "Product",
          "name": "Apple Watch",
          "price": 399.99
        }
      ]
    }
  }
}

In the above, the user didn't request the stitching keys, so they should be removed. They did request nullableField but a value only came back for one record, so the field is missing from the second and should be added as "nullableField": null. Lastly, we'd need to bubble up errors in place based on schema null constraints.
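
A minimal sketch of the prune/backfill pass, assuming a simplified selection tree of { field_name => nested_selections_or_nil } rather than the real request AST (the real Shaper would also apply the nullability bubbling):

def shape(selections, data)
  return nil if data.nil?
  return data.map { |item| shape(selections, item) } if data.is_a?(Array)

  # Build the result from the requested selections only: unrequested keys
  # (including _STITCH_* exports) drop out, and missing fields backfill as nil.
  selections.each_with_object({}) do |(name, nested), shaped|
    value = data[name]
    shaped[name] = nested ? shape(nested, value) : value
  end
end

selections = {
  "storefront" => {
    "id" => nil,
    "products" => { "upc" => nil, "name" => nil, "price" => nil, "nullableField" => nil },
  },
}
shape(selections, raw_result["data"])
# The second product gains "nullableField" => nil, and the _STITCH_* keys are gone.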

Support composite keys/inputs

It would add practical value if composite key selections were allowed. Composite keys would require an additional argument mapping to express how the composite selections map into query arguments:

input WidgetKey {
  group: String!
  name: String!
}

widgets(keys: [WidgetKey!]!, other: String): [Widget]! @stitch(
  key: "scope { group name }",
  arguments: "keys: {group: $scope.group, name: $scope.name}, other: 'Sfoo'"
)

The arguments param is parsed as a GraphQL inner-arguments literal. Then, paths from the key are inserted into the literal as a namespaced path prefixed by "$", i.e. $scope.group. Some scoping rules would apply, as a repeatable key field can only be inserted into a repeatable argument scope.

Arguments

type Widget {
  scope: String!
  name: String!
}

type Query {
  widget1(scope: String!, name: String!): Widget @stitch(
    key: "scope name",
    arguments: "scope: $scope, name: $name"
  )
  widget2(s: String!, n: String!): Widget @stitch(
    key: "scope name",
    arguments: "s: $scope, n: $name"
  )
}

Input objects

type Widget {
  scope: String!
  name: String!
}

input WidgetKey {
  scope: String!
  name: String!
}
input WidgetKey2 {
  s: String!
  n: String!
}
type Query {
  widgets(keys: [WidgetKey!]!): [Widget]! @stitch(
    key: "scope name", 
    arguments: "keys: {scope: $scope, name: $name}"
  )

  widget1(key: WidgetKey!): Widget @stitch(
    key: "scope name", 
    arguments: "key: {scope: $scope, name: $name}"
  )
  widget2(key: WidgetKey2!): Widget @stitch(
    key: "scope name", 
    arguments: "key: {s: $scope, n: $name}"
  )
  
  widget1(key: WidgetKey, other: String): Widget @stitch(
    key: "scope name", 
    arguments: "key: {scope: $scope, name: $name}, other: 'Sfoo'"
  )
  widget2(key: WidgetKey2, other: String): Widget @stitch(
    key: "scope name", 
    arguments: "key: {s: $scope, n: $name}, other: 'Sfoo'"
  )
}

Nested selections

type WidgetScope {
  group: String!
  name: String!
}
type Widget {
  scope: WidgetScope
  title: String
}

input WidgetKey {
  group: String!
  key: String!
}
type Query {
  widgets(keys: [WidgetKey!]!, other: String): [Widget]! @stitch(
    key: "scope { group name }",
    arguments: "keys: {group: $scope.group, name: $scope.name}, other: 'Sfoo'"
  )
}

Entity representations (Apollo Federation protocol)

type WidgetScope {
  group: String!
  name: String!
}
type Widget {
  scope: WidgetScope
  title: String
}

union _Entity = Widget
scalar _Any
type Query {
  # sends keys as JSON blobs:
  # [{"group": "a", name: "b", "__typename": "Widget"}, ...]
  _entities(representations: [_Any!]!): [_Entity]! @stitch(
    key: "scope { group name } __typename", 
    arguments: "representations: { group: $scope.group, name: $scope.name, __typename: $__typename }",
  )
}

Simple parser:

class ArgumentsParser
  class << self
    # "reps: {group: $scope.group, name: $scope.name}, other: 'Sfoo'""
    def parse(template)
      template = template.gsub("'", '"').gsub(/(\$[\w\.]+)/) { %|"#{_1}"| }
      GraphQL.parse("{ f(#{template}) }")
        .definitions.first
        .selections.first
        .arguments
    end
  end
end

Extract batched stitching ids into request variables

At present, executor batching inlines all stitching IDs into their resolver queries:

query MyOperation_2 {
  _0_result: widgets(ids:["a","b","c"]) { ... }
  _1_0_result: sprocket(id:"x") { ... }
  _1_1_result: sprocket(id:"y") { ... }
}

This is not ideal because it creates high request cardinality, which may defeat some backend caches. It would be generally better if requests stayed consistent when possible and submitted keys as request variables:

query MyOperation_2($_0_key: [ID!]!, $_1_0_key: ID!, $_1_1_key: ID!) {
  _0_result: widgets(ids: $_0_key) { ... }
  _1_0_result: sprocket(id: $_1_0_key) { ... }
  _1_1_result: sprocket(id: $_1_1_key) { ... }
}

# variables: { "_0_key": ["a","b","c"], "_1_0_key": "x", "_1_1_key": "y" }

All hashes should use string keys

The library does some work using hashes... this is because we want to facilitate serialization and deserialization of critical data structures (delegation maps and query plans). However, right now some parts use string keys and other parts use symbol keys... this is at odds with running JSON.parse on a cached structure and being ready to go.

While symbol keys look nicer, delegation maps have to use all string keys for name matching. It probably makes the most sense to use string keys everywhere. We could also potentially wrap some common structures like boundaries in Structs, but that just adds a step and more object creation, so I'm not sure it's worth it.
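
For illustration, symbol keys don't survive a cache round-trip:

require "json"

plan = { ops: [{ after_key: nil }] }
JSON.parse(JSON.generate(plan))
# => {"ops"=>[{"after_key"=>nil}]} (symbol keys come back as strings)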

Support foreign key → relation transform

GraphQL foreign keys are commonly handled as Product.imageId is here:

# -- Products schema:

type Product {
  id: ID!
  imageId: ID!
}

# -- Images schema:

type Image {
  id: ID!
  url: String!
}

However, stitching wants this schema to be shaped as:

# -- Products schema:

type Product {
  id: ID!
  image: Image!
}

type Image {
  id: ID!
}

# -- Images schema:

type Image {
  id: ID!
  url: String!
}

Rather than forcing services to be reshaped (assuming we even have ownership and are able to), it would be nice if stitching would handle the transformation of key fields into typed relations, such as:

# -- Products schema:

type Product {
  id: ID!
  imageId: ID! @relation(fieldName: "image", typeName: "Image", foreignKey: "id")
  # --> image: Image! // type Image { id: ID! }
}

# -- Images schema:

type Image {
  id: ID!
  url: String!
}
