Absinthe

GraphQL implementation for Elixir.

Goals:

  • Complete implementation of the GraphQL Working Draft.
  • An idiomatic, readable, and comfortable API for Elixir developers.
  • Extensibility based on small parts that do one thing well.
  • Detailed error messages and documentation.
  • A focus on robustness and production-level performance.

Please see the website at https://absinthe-graphql.org.

Why Use Absinthe?

Absinthe goes far beyond GraphQL specification basics.

Easy-to-Read, Fast-to-Run Schemas

Absinthe schemas are defined using easy-to-read macros that build and verify their structure at compile-time, preventing runtime errors and increasing performance.
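
As a quick illustration, a minimal schema might look like this (the module and field names here are just examples):

defmodule MyAppWeb.Schema do
  use Absinthe.Schema

  query do
    # A single field that resolves to a static value.
    field :hello, :string do
      resolve fn _parent, _args, _resolution ->
        {:ok, "world"}
      end
    end
  end
end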

Pluggability

The entire query processing pipeline is configurable. Add, swap out, or remove the parser, individual validations, or resolution logic at will, even on a per-document basis.
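
As a rough sketch of what that looks like (assuming the Absinthe.Pipeline API; MyApp.Phase.Instrument is a hypothetical phase module you would write yourself):

# Build the standard document pipeline for a schema, then splice in a custom
# phase just before resolution. A phase is a module with a run/2 callback.
pipeline =
  MyAppWeb.Schema
  |> Absinthe.Pipeline.for_document()
  |> Absinthe.Pipeline.insert_before(
    Absinthe.Phase.Document.Execution.Resolution,
    MyApp.Phase.Instrument
  )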

Advanced Resolution

Absinthe includes a number of advanced resolution features, including the following (see the sketch after this list):

  • Asynchronous field resolution
  • Batched field resolution (addressing N+1 query problems)
  • A resolution plugin system supporting further extensibility
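
For example, a batched resolver might look roughly like this (a sketch that assumes an Ecto Repo and a User schema in your application):

defmodule MyAppWeb.Schema.ContentTypes do
  use Absinthe.Schema.Notation
  import Absinthe.Resolution.Helpers, only: [batch: 3]
  import Ecto.Query

  object :post do
    field :title, :string

    field :author, :user do
      resolve fn post, _args, _info ->
        # Collect every requested author id, run one query for all of them,
        # then pick this post's author out of the batched result.
        batch({__MODULE__, :users_by_id}, post.author_id, fn results ->
          {:ok, Map.get(results, post.author_id)}
        end)
      end
    end
  end

  # Receives all of the aggregated ids and returns a map of id => user.
  def users_by_id(_, ids) do
    MyApp.Repo.all(from u in MyApp.User, where: u.id in ^ids)
    |> Map.new(fn user -> {user.id, user} end)
  end
end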

Safety

  • Complexity analysis and configurable limiting
  • Support for precompiled documents/preventing custom documents

Idiomatic Documents, Idiomatic Code

Write your schemas in idiomatic Elixir snake_case notation. Absinthe can transparently translate to camelCase notation for your API clients.

Or, define your own translation scheme by writing a simple adapter.
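
A custom adapter is a module implementing the Absinthe.Adapter callbacks; a minimal sketch (the exact callback set may vary by version, and the module name is just an example):

defmodule MyApp.ShoutingAdapter do
  use Absinthe.Adapter

  # Translate names coming in from clients into internal (schema) names...
  def to_internal_name(nil, _role), do: nil
  def to_internal_name(external_name, _role), do: String.downcase(external_name)

  # ...and internal names back out to clients.
  def to_external_name(nil, _role), do: nil
  def to_external_name(internal_name, _role), do: String.upcase(internal_name)
end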

Frontend Support

We care about support for third-party frameworks, both on the back and front end.

So far, we include specialized support for Phoenix and Plug on the backend, and Relay on the frontend.

Of course we work out of the box with other frontend frameworks and GraphQL clients, too.

Installation

Install from Hex.pm:

def deps do
  [{:absinthe, "~> 1.7.0"}]
end

Note: Absinthe requires Elixir 1.10 or higher.

Upgrading

See CHANGELOG for upgrade steps between versions.

Documentation

Mix Tasks

Absinthe includes a number of useful Mix tasks for extracting schema metadata.

Run mix help in your project and look for tasks starting with absinthe.

Related Projects

See the GitHub organization.

Community

The project is under constant improvement by a growing list of contributors, and your feedback is important. Please join us in Slack (#absinthe-graphql under the Elixir Slack account) or the Elixir Forum (tagged absinthe).

Please remember that all interactions in our official spaces follow our Code of Conduct.

Contribution

Please follow the contribution guide.

License

See LICENSE.md.

Issues

Resolver mechanism

To make defining resolve functions more organized and add support for rescuing and reporting common errors, we've talked about having a Resolver behaviour.

Here's the sketch @benwilson512 and I came up with:

defmodule MyResolver do
  @behaviour Absinthe.Resolver

  # Specs on how it might work:

  @typep resolution_function :: (map, Absinthe.Execution.t -> {:ok, any} | {:error, any})
  @typep mod_fun :: {atom, atom}

  @spec resolver(any) :: {:ok, mod_fun} | {:ok, resolution_function} | {:error, binary}

  @spec transform_error({:error, any}) :: {:error, [binary]}
  # Useful for, eg, transform_error({:error, %Ecto.Changeset{} = changeset})

end

Then, eg:

%Absinthe.Type.ObjectType{
  fields: fields(
    item: [
      type: :item,
      args: args(id: [type: :id]),
      resolve: MyResolver.resolve(:item)
    ]
  )
}
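
For reference, the proposed behaviour module itself might declare something like this (purely a sketch of the proposal above, not an existing Absinthe API):

defmodule Absinthe.Resolver do
  @type resolution_function :: (map, Absinthe.Execution.t -> {:ok, any} | {:error, any})
  @type mod_fun :: {atom, atom}

  # Given an identifier (eg, :item), return something Absinthe can call.
  @callback resolver(any) :: {:ok, mod_fun} | {:ok, resolution_function} | {:error, binary}

  # Normalize common error shapes, eg an %Ecto.Changeset{}, into messages.
  @callback transform_error({:error, any}) :: {:error, [binary]}
end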

Support overriding default resolver

The default resolver does a Map.get/2 to retrieve a field value. In some cases (see #76) it may be desirable to modify the default resolver in some way, and in a single location.

Error parsing reserved words used in variable name

This may already be known and may be related to #62 and #65.

Using a reserved word as a variable name causes the parser to error, for example:

Absinthe.parse("""
  mutation CreateThing($input: Int!) {
    createThing(input: $input) { clientThingId }
  }
""")
# {:error,
#  %{message: "An unknown error occurred during parsing: no function clause matching in :absinthe_parser.extract_binary/1"}}

If you modify this test case in test/lib/absinthe/parser_test.exs to look like this, it also exposes the issue:

it "can parse queries with arguments and variables that are 'reserved words'" do
  @reserved
  |> Enum.each(fn
    arg ->
      assert {:ok, _} = Absinthe.parse("""
        mutation CreateThing($#{arg}: Int!) {
          createThing(#{arg}: $#{arg}) { clientThingId }
        }
        """)
  end)
end

Union Type Support

Only sketched out so far:

  • Type Support
  • Introspection support; anything Union-specific left out of #26

Document the use Absinthe.Schema type_modules option

To support loading types from other modules, you can pass a :type_modules option to use Absinthe.Schema, eg:

defmodule App.Schema do
  use Absinthe.Schema, type_modules: [App.Schema.ObjectTypes, App.Schema.ScalarTypes]

  # ...

end

This needs to be documented.

Inline Fragments

This is largely already done, but needs to be verified:

  • General execution
  • Use with directives

Add middleware capability

Copy-pasting the description from Sangria (http://sangria-graphql.org/learn/#middleware):

Support generic middleware that can be used for different purposes, like performance measurement, metrics collection, security enforcement, etc. on a field and query level. Moreover, it makes it much easier for people to share standard middleware in libraries. Middleware allows you to define callbacks before/after a query and a field.

It would be a great addition to Absinthe.

As in Sangria, middleware is especially useful when you can attach arbitrary metadata to your fields:

In order to ensure generic classification of fields, every field contains a generic list of FieldTags which provides user-defined meta-information about this field (just to highlight a few examples: Permission("ViewOrders"), Authorized, Measured, Cached, etc.)

Atom option for Utils.camelize

I was using the Absinthe.Utils.camelize function and thought it would be nice to have an option to return an atom instead of a binary. Is this something that you would be interested in? If so I would be happy to PR tomorrow.

Example:

    # pass in as opt
    Absinthe.Utils.camelize("foo", lower: true, atom: true)
    #=> :foo

    # instead of piping after
    Absinthe.Utils.camelize("foo", lower: true)
    |> String.to_existing_atom()
    #=> :foo

Default values and missing variables

Suppose:

field :thing, :thing do
  arg :id, :id, default_value: "1"
end

query Foo($id: ID) {
  thing(id: $id)
}

If you don't pass in an id variable, the default is not selected.

Rework definition of enum values

Looking at the need to introspect enumValues in the spec, I see deprecation support included, something that I skipped when building out Type.Enum initially.

I'm thinking this, plus the fact that we're having to provide a mapping of external/internal values when creating an enum, probably points at a need for the type definition to be rethought a little, possibly with the inclusion of a fields/args-like function to more easily define values (and, in conjunction with deprecate, support deprecating them).
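
Something like a block of value definitions with per-value options, eg (a sketch of the direction being described, not a settled notation at the time):

enum :color_channel do
  description "The selected color channel"

  value :red, as: :r
  value :green, as: :g
  value :blue, as: :b
  value :alpha, as: :a, deprecate: "We no longer support opacity settings"
end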

Error parsing relay mutation

When I try to parse a Relay mutation, I get the following exception:

** (FunctionClauseError) no function clause matching in :absinthe_parser.extract_binary/1
    (absinthe) src/absinthe_parser.yrl:210: :absinthe_parser.extract_binary("input")
    (absinthe) src/absinthe_parser.yrl:105: :absinthe_parser.yeccpars2_72/7
    (absinthe) /opt/boxen/homebrew/Cellar/erlang/18.2.1/lib/erlang/lib/parsetools-2.1.1/include/yeccpre.hrl:57: :absinthe_parser.yeccpars0/5
    (absinthe) lib/absinthe.ex:177: Absinthe.parse/1

Here's my schema:

object :user do
  field :id, :id
  field :email, :string
  field :name, :string
end

input_object :create_user_input do
  field :email, non_null(:string)
  field :name, non_null(:string)
end

mutation do
  @desc "Create a user"
  field :create_user, type: :user do
    arg :input, non_null(:create_user_input)

    resolve &Resolver.User.create/2
  end
end

Relay submits a mutation like:

mutation CreateUserMutation($input_0:createUserInput!){createUser(input:$input_0){clientMutationId}}

Dataloader equivalent (or alternative approach)

I discussed this with @benwilson512 in Slack briefly, but having thought about it further, I don't think that the solution to n+1 queries suggested on this page will work well enough to be a generalised solution.

Essentially the solution proposed right now is to use the AST (passed to every resolve function) to look ahead and work out what relations will be followed, and use that information to preload those relationships (with Ecto).

The issue is that this is a very Ecto-centric approach to GraphQL: it won't work at all with a service-oriented architecture, and it won't work with any kind of backend that doesn't support joins, or with backends that use several different storage systems. The main use case I have right now is integrating elasticsearch - I could have a Product type, with related categories, tags, users etc; and I may retrieve the products and facets from elasticsearch, then want to retrieve their relations from my postgresql database.

Dataloader takes the approach of letting you define loaders, that you call with individual IDs, which are aggregated and turned into a single request to whatever backend you may have. The results are then passed back to the relevant callers. I don't know enough about Elixir to know whether this approach is possible (the JS implementation detail relies on certain details about how JavaScript works which probably don't exist in Elixir), so an alternative might be needed (the idea that springs to mind is the ability to define a resolve_batch function against a field). But what's clear to me is that every language that implements a GraphQL server, also needs a solution to this problem.
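
To make the idea concrete, a hypothetical resolve_batch against a field might look like this (none of these names exist today; this is only to illustrate the contract being proposed):

field :categories, list_of(:category) do
  # Hypothetical: receives *all* parent products collected during this
  # resolution pass, issues one request per backend, and returns results
  # in the same order as the parents.
  resolve_batch fn products, _args, _info ->
    ids = Enum.map(products, & &1.id)
    categories_by_product = MyApp.Categories.by_product_ids(ids)

    {:ok, Enum.map(products, &Map.get(categories_by_product, &1.id, []))}
  end
end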

Pre-Release review

@benwilson512 I'd appreciate it if you'd look through the documentation/code and look for any show stoppers that need to be fixed before our soft v0.1.0 release, and add them to the v0.1.0 milestone.

Required argument validation error message not correctly adapted for fields

Noticed the following in my console today. Note organization_id vs organizationId (the latter would be expected) in the field error:

{message: "Field `dashboardResults': 1 required argument (`organization_id') not provided",…}
{message: "Argument `organizationId' (ID): Not provided", locations: [{line: 1, column: 0}]}

Generate error on name/identifier collisions within a type module

Currently we catch name/identifier collisions across modules, but not within modules, which can lead to frustration, especially for new schema developers who aren't used to GraphQL's constraints (and may be doing a lot of copying and pasting while building a schema).

Some cases to check for:

  • Same function name, using @absinthe :type (Elixir warns on the identical function, but we should do more)
  • Different function names, using identical @absinthe type: :same_custom_identifier
  • Multiple matching %Absinthe.Type.Object{name: same_name}

Type naming cleanup

Make the following changes

Remove inconsistent "Type" suffixes

  • ObjectType -> Object
  • InputObjectType -> InputObject
  • InterfaceType -> Interface

Simplify field type

  • FieldDefinition -> Field

Dynamic Default Values

Today I ran into a scenario where I wanted the default value for a field named before to be the current time, but any value we give to default_value is going to be hard-coded, since it's evaluated at compile time.

Since we already go through the effort of handling anonymous functions, I think we ought to allow a default value to also be a zero-arity function that would get called at runtime.
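
Proposed usage might look like this (a sketch of something Absinthe does not support; the :datetime type and field names are just examples):

field :events, list_of(:event) do
  # Evaluated at runtime on each request, instead of being frozen at
  # compile time like a literal default would be.
  arg :before, :datetime, default_value: &DateTime.utc_now/0
end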

Thoughts? @bruce

Error providing InputObjects values as query document variables

Issue originally found in #62, and occurs during value coercion when building the variables for the execution context.

** (MatchError) no match of right hand side value: %Absinthe.Type.InputObject{__reference__: %{identifier: :create_user_input, location: %{file: "/web/graph/schema/type/user.ex", line: 22}, module: Graph.Schema.Type.User}, description: nil, fields: %{email: %Absinthe.Type.Field{__reference__: %{identifier: :email, location: %{file: "/web/graph/schema/type/user.ex", line: 23}, module: Graph.Schema.Type.User}, args: %{}, default_value: nil, deprecation: nil, description: nil, name: "email", resolve: nil, type: %Absinthe.Type.NonNull{of_type: :string}}, name: %Absinthe.Type.Field{__reference__: %{identifier: :name, location: %{file: "/web/graph/schema/type/user.ex", line: 24}, module: Graph.Schema.Type.User}, args: %{}, default_value: nil, deprecation: nil, description: nil, name: "name", resolve: nil, type: %Absinthe.Type.NonNull{of_type: :string}}}, name: "CreateUserInput"}
        (absinthe) lib/absinthe/execution/variables.ex:95: Absinthe.Execution.Variables.coerce/2
        (absinthe) lib/absinthe/execution/variables.ex:66: Absinthe.Execution.Variables.valid/4

It looks like we need to handle this case specifically in Variables.build and Arguments.build.

Optimizations

We should bake the post processing that's done on a schema into the schema module itself more, so that calling .schema is not such an expensive process.

General ideas include:

For type maps

def __query_type_by_name__("User"), do: MyTypeModule.user
def __query_type_by_identifier__(:user), do: MyTypeModule.user

Using on_definition hooks to introspect on the AST of types/queries/mutations as they're created, perform the transformations at that point, and inject the results into other functions.

Things that won't work: unquoting the .schema in its current form into the module somewhere. Anonymous functions can't be escaped, so they can't be unquoted into anything.

List + Non Null Validations

Make sure the following rules from the spec are checked during validation:

  • If the modified type of a List is Non‐Null, then that List may not contain any null items.
  • If the modified type of a Non‐Null is List, then null is not accepted, however an empty list is accepted.
  • If the modified type of a List is a List, then each item in the first List is another List of the second List’s type.
  • A Non‐Null type cannot modify another Non‐Null type.

Separate Validation phase

Currently validation is only implemented in parts (focused on field, variable, and argument validation) and is conflated with execution.

Validation should be rebuilt as a separate (optional, per the spec) phase, using Absinthe.Traversal.reduce/4 if at all possible.

Add query reducers

Sometimes it can be helpful to perform some analysis on a query before executing it. An example is complexity analysis: it aggregates the complexity of all fields in the query and then rejects the query without executing it if the complexity is too high. Another example is gathering all Permission field tags and then fetching extra user auth data from an external service if the query contains protected fields. This needs to be done before the query starts to execute.

Sangria reference: http://sangria-graphql.org/learn/#query-reducers

Protection against malicious queries

Given the nature of GraphQL, a client can easily send infinitely deep queries, which may have a high impact on server performance. Absinthe should help guard against this by analyzing query complexity before executing a query, and should reject it if its complexity goes beyond a given, customizable threshold.

This is the ideal use case / application for #107

Being able to limit query depth would probably be valuable as well.
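
As a sketch of what that could look like at the field and document level (assuming a complexity option in the schema notation and analyze_complexity/max_complexity options at run time; the resolver module is hypothetical):

field :posts, list_of(:post) do
  arg :limit, :integer, default_value: 10

  # The cost of this field scales with how many items the client asks for
  # and how expensive each child selection is.
  complexity fn %{limit: limit}, child_complexity ->
    limit * child_complexity
  end

  resolve &MyApp.Resolvers.posts/3   # hypothetical resolver
end

# Reject the document before resolution if it is too expensive:
Absinthe.run(document, MyAppWeb.Schema, analyze_complexity: true, max_complexity: 50)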

Update @spec in Execution.Arguments

Now that we're using Execution.Arguments to build argument maps for directives, we need to update the @spec types to show that things other than just fields can be passed in.

How can I support camel-cased database data?

I have a RethinkDB table with a document that has the key verificationCode. Since this data already exists it's not easy to migrate it to verification_code.

Is there a way to change the field to take a name? I thought I read somewhere that it's possible but can't seem to find it in the docs.

My current type looks like this (removed other fields for clarity):

defmodule Badger.Types.User do
  use Absinthe.Schema.Notation

  @desc "A registered user"
  object :user do
    field :verification_code, :string
  end
end

I've tried using camel case in the type module, but I get a "Field 'verificationCode': Not present in schema" error.

Is there any way to work around this?
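
One possible workaround, sketched with a per-field resolver (assuming the three-argument resolver form; there may be a more idiomatic name/adapter-based answer in the docs):

object :user do
  field :verification_code, :string do
    # Read the value from the existing camelCase key on the source map,
    # so the stored data doesn't have to be migrated.
    resolve fn parent, _args, _info ->
      {:ok, Map.get(parent, "verificationCode") || Map.get(parent, :verificationCode)}
    end
  end
end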

Support passing the current target to all resolver functions

Right now to get the "current object" in a resolver, you need to do something like this:

resolve: fn
  _args, execution ->
    # do something with execution.resolution.target
end

Given that the object is really the current scope of the schema node in question, that this comes up often enough, and that the internals of Execution.t are best considered a "private API," I think it makes sense to pass it as the first argument to the resolver, eg:

resolve: fn
  obj, _args, _execution ->
    # do something with obj
end

(This would bring it generally in line with other implementations.)

Or, alternately, have some other contract we expect from resolvers.

What do you think, @benwilson512 ?

Refactor Argument building to use Traversal

The argument building code is inarguably the worst in the project. Now that we have Absinthe.Traversal, I think we'd be able to clean it up quite a bit by using Absinthe.Traversal.reduce/4 to walk the schema and extract/coerce the argument values.

Support GraphiQL Introspection

query IntrospectionQuery {
    __schema {
      queryType { name }
      mutationType { name }
      types {
        ...FullType
      }
      directives {
        name
        description
        args {
          ...InputValue
        }
        onOperation
        onFragment
        onField
      }
    }
  }

  fragment FullType on __Type {
    kind
    name
    description
    fields {
      name
      description
      args {
        ...InputValue
      }
      type {
        ...TypeRef
      }
      isDeprecated
      deprecationReason
    }
    inputFields {
      ...InputValue
    }
    interfaces {
      ...TypeRef
    }
    enumValues {
      name
      description
      isDeprecated
      deprecationReason
    }
    possibleTypes {
      ...TypeRef
    }
  }

  fragment InputValue on __InputValue {
    name
    description
    type { ...TypeRef }
    defaultValue
  }

  fragment TypeRef on __Type {
    kind
    name
    ofType {
      kind
      name
      ofType {
        kind
        name
        ofType {
          kind
          name
        }
      }
    }
  }

Check nested InputObjectTypes

Right now, sub-fields inside an InputObjectType are not checked against the schema; only top-level keys are. Note that they do seem to be successfully removed from the parameters, but no error is collected for them.

Did you mean...

Given that field names are in general pretty public, how would you feel about using the built-in string distance functions to include a "did you mean X?" suggestion in "field not present" error messages?
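
A rough sketch of the idea using Elixir's built-in String.jaro_distance/2 (the module name, helper name, and threshold are arbitrary):

defmodule Suggestion do
  # Suggest the known field name closest to the unknown one, if it's close enough.
  def did_you_mean(unknown_field, known_fields) do
    {best, score} =
      known_fields
      |> Enum.map(fn name -> {name, String.jaro_distance(unknown_field, name)} end)
      |> Enum.max_by(fn {_name, score} -> score end)

    if score > 0.8, do: " Did you mean `#{best}'?", else: ""
  end
end

Suggestion.did_you_mean("verificationCde", ["verificationCode", "email", "name"])
#=> " Did you mean `verificationCode'?"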
