exaspark / batch-loader
:zap: Powerful tool for avoiding N+1 DB or HTTP queries
License: MIT License
Hi, firstly, great library. I love every bit of the implementation! It was fascinating learning how batch-loader works.
I have what I can only imagine to be a common use-case for BatchLoader; batching N+1 SQL 1:Many calls. All the examples in the docs appear to be Many:1, or at least X:1, and so they are resolving to single items. Let me explain in code, as all that is probably just confusing things.
# Lets say I have a house, and I want all the photos for that house.
# This doesn't work, because it resolves a single photo to a single house
BatchLoader.for(house.id).batch do |house_ids, loader|
  Photo.where(owner_id: house_ids).each do |photo|
    loader.call(photo.owner_id, photo)
  end
end
# This doesn't work, because if a house has no photos, nil is returned instead of [].
# Also for such a common pattern, this seems like a lot of boilerplate
BatchLoader.for(house.id).batch do |house_ids, loader|
  Photo.where(owner_id: house_ids)
       .group_by(&:owner_id).each do |house_id, photos|
    loader.call(house_id, photos)
  end
end
# This would be my dream API... Note batch takes a default resolve value
BatchLoader.for(house.id).batch([]) do |house_ids, loader|
  Photo.where(owner_id: house_ids).each do |photo|
    # Here .add is used instead of .call to let BatchLoader know not to replace
    # the item but to call
    # @item = (@item || @default) << item
    loader.add(photo.owner_id, photo)
  end
end
# Alternative API... explicit batch_array means loader API can stay the same, no default needed
BatchLoader.for(house.id).batch_array do |house_ids, loader|
  Photo.where(owner_id: house_ids).each do |photo|
    loader.call(photo.owner_id, photo)
  end
end
Is something like this possible?
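For readers landing on this issue: the README documents a close cousin of the dream API above, batch(default_value: []) combined with the block form loader.call(key) { |memo| memo << item }. A gem-free model of those semantics, with a plain Hash standing in for the loader's result cache:

```ruby
# The nil-vs-[] and boilerplate complaints above are what batch(default_value: [])
# plus the memo block form address. A plain Hash with a default models the
# loader's result cache, so this runs without the gem or a database.
photos = [
  { id: 1, owner_id: 10 },
  { id: 2, owner_id: 10 },
  { id: 3, owner_id: 11 },
]

default_value = []  # what batch(default_value: []) supplies per key
results = Hash.new { |hash, key| hash[key] = default_value.dup }

# Equivalent of: loader.call(photo.owner_id) { |memo| memo << photo }
photos.each { |photo| results[photo[:owner_id]] << photo }

results[10].map { |p| p[:id] }  # => [1, 2]
results[99]                     # => [] -- a house with no photos gets the default
```

In the gem itself this looks like BatchLoader.for(house.id).batch(default_value: []) with loader.call(photo.owner_id) { |memo| memo << photo } inside the query loop, so houses without photos resolve to [] rather than nil.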
class Types::SubcategoryType < Types::BaseObject
  def records
    BatchLoader.for(object.category_id).batch do |category_ids, loader|
      records = Record.where(category_id: category_ids).group_by(&:subcategory_id)
      loader.call(object.subcategory_id, records[object.id])
    end
  end
end
The goal is to load all records across all subcategories of a category at once. Currently the example above will return null, because the batch call uses the category_id as the key, but on the loader call I'm using a subcategory_id.
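One way to make the keys line up, assuming Record also carries a subcategory_id column (a sketch, not tested against your schema): batch on the subcategory ids themselves, so the key handed to BatchLoader.for is the same one passed to loader.call, and use default_value: [] so subcategories without records resolve to an empty list. Batching across the category still happens because all sibling subcategory resolvers enqueue into the same batch:

```ruby
class Types::SubcategoryType < Types::BaseObject
  def records
    # object is the subcategory itself here; key by its own id
    BatchLoader.for(object.id).batch(default_value: []) do |subcategory_ids, loader|
      Record.where(subcategory_id: subcategory_ids).each do |record|
        loader.call(record.subcategory_id) { |memo| memo << record }
      end
    end
  end
end
```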
Hi,
I think that cache should only affect fetching via the #batch method, while an already fetched value should be evaluated once. Let me illustrate it with the following code:
require "bundler/inline"
gemfile { gem "rspec", require: 'rspec/autorun'; gem "batch-loader" }
RSpec.describe BatchLoader do
  def value_lazy(arg)
    BatchLoader.for(arg).batch(cache: cache) do |args, loader|
      $batch_method_calls_count += 1
      $data.slice(*args).each { |k, v| loader.call(k, v) }
    end
  end

  before do
    $batch_method_calls_count = 0
    $data = {item: 1}
  end

  context 'when caching is disabled' do
    let(:cache) { false }

    context 'when value is loaded once' do
      let(:loaded_value) { value_lazy(:item) }

      it do
        pending '>>>> ERROR HAPPENS HERE <<<<'
        expect { $data[:item] += 1 }.not_to change { loaded_value }
      end

      it do
        pending '>>>> AND HERE <<<<'
        2.times { loaded_value.inspect }
        expect($batch_method_calls_count).to eq(1)
      end
    end

    context 'when value is loaded on the fly' do
      it do
        expect { $data[:item] += 1 }.to change { value_lazy(:item) }
      end
    end
  end
end
This makes cache: false quite impractical, since using a returned value in a simple computation that references it multiple times can lead to an unintentional DDoS of your own backend.
Another problem related to this effect: it's not clear how to combine disabled caching usage with changing data:
it do
  pending '>>>> ALSO FAILS <<<<'
  value_before = value_lazy(:item)
  $data[:item] += 1
  value_after = value_lazy(:item)
  expect(value_before).not_to eq(value_after)
end
Could you provide your thoughts and correct me if I'm wrong, please? Thanks!
Hi,
I tried the example given here https://github.com/exAspArk/batch-loader#batch-key after updating the Ruby version to 3.0.0.
id = post.association_id
key = post.association_type

BatchLoader.for(id).batch(key: key) do |ids, loader, args|
  model = Object.const_get(args[:key])
  model.where(id: ids).each { |record| loader.call(record.id, record) }
end
It gives me a wrong number of arguments (given 1, expected 0) error when passing arguments to the batch method; now it is not expecting any arguments. Is there an alternative way to do the same?
ruby version: 3.0.0
rails version: 6.0.3.4
graphql version: (>= 1.8)
With the below code, if some_call returns nil, get_stuff returns nil. This is because some_call returns a BatchLoader which never executes, and some_other_call doesn't get called.
def get_stuff
  some_call || some_other_call
end

def some_call
  BatchLoader.for(id) ...
end
This occurred in my ActiveRecord models when called directly, thus failing the tests for the get_stuff method. However, in the controller, which uses a jbuilder that calls get_stuff, it is lazily loaded, and when evaluated it calls some_other_call and works properly.
What's the best way to handle this?
I have to keep bothering you but I'm having another problem.
Delivery: has_many :drivers, through: :deliveries_drivers
Driver: has_many :deliveries, through: :deliveries_drivers
DeliveriesDriver: belongs_to :delivery, belongs_to :driver
resolve ->(delivery, args, ctx) {
  BatchLoader.for(delivery.id).batch do |delivery_ids, loader|
    DeliveriesDriver.includes(:driver).joins(:delivery).where(delivery_id: delivery_ids).each do |dd|
      loader.call(dd.delivery_id) { |memo| memo << dd.driver }
    end
  end
}
I'm getting the error:
NoMethodError (undefined method `id' for #<Array:0x007fd990dc4508>):
app/controllers/graphql_controller.rb:12:in `execute'
We were on GraphQL 1.8.5 and wanted to upgrade to GraphQL 1.8.7, but the batch loader stopped batching requests. This is due to the introduction of the scope option in GraphQL 1.8.7: https://github.com/rmosolgo/graphql-ruby/compare/357fb18..5e387ea#diff-4df5462c469f7f83e661688be477a19bR15
I am opening this issue in the hope of getting this fixed, and in the meantime to show how to fix it if scoping is not needed. It is described in: rmosolgo/graphql-ruby#1778 (comment)
Is there a function to prime items into other batches?
The Facebook dataloader exposes this function; essentially, it allows loaded items to be found by another key. This is especially useful so that, for instance, when you batch load Posts for a User, you could also prime a Category batch where a Comment belongs to a Category by a foreign key.
I was looking at the docs for loading multiple items, but in my testing it doesn't seem to avoid the N+1 query. I don't see how you are collecting the user_ids array in the example, so that could be where I am off. My code is:
I added the batch-loader gem to a Ruby on Rails 5 project I'm working on, and realized that I was running into issues related to caching. When I tried to disable caching between HTTP requests, as suggested in the README documentation, by adding the use BatchLoader::Middleware line into my GraphQL schema file, I got this error:
undefined method `use' for BatchLoader::Middleware:Class
After some googling, I found this post:
https://medium.com/@bajena3/use-lazy-relationships-to-eliminate-n-1-queries-in-your-rails-apps-46ea2ce42162
which suggests configuring the middleware in the Rails application.rb file. When I tried this, it worked for me.
So I guess my question is: does the documentation need to be updated, or am I doing something wrong following what's in the README?
Nice gem - thanks!
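For reference, the fix the Medium post describes (and what the README's Rails section appears to intend) is registering it on the Rack middleware stack rather than on the GraphQL schema; MyApp below is a placeholder for your application module:

```ruby
# config/application.rb
module MyApp
  class Application < Rails::Application
    # BatchLoader::Middleware is Rack middleware: it clears the batch cache
    # around each request, so it belongs on the Rails middleware stack, not
    # in the schema's `use` plugin hook (which is why `use` raised above).
    config.middleware.use BatchLoader::Middleware
  end
end
```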
This isn't an issue per se - I already have a working solution based on the second example from this section in the README: https://github.com/exAspArk/batch-loader#loading-multiple-items.
I'm curious though as to why the following approach does not prevent an N+1 query.
Given these models:
class Post < ApplicationRecord
  has_many :comments
end

class Comment < ApplicationRecord
  belongs_to :post
end
And a GraphQL schema with a postsConnection on the root query type and a commentsConnection on the post type.
The following seems to trigger an N+1 query for the comments association on each post when used to resolve the commentsConnection for each post:
BatchLoader::GraphQL.for(post.id).batch(default_value: []) do |post_ids, loader|
  Post.
    where(id: post_ids).
    includes(:comments).
    each { |post| loader.call(post.id, post.comments) }
end
I use the Searchkick gem in my project, and when I get a post list via GraphQL I run into an N+1 problem.
# demo_schema.rb
class DemoSchema < GraphQL::Schema
  mutation ::Types::MutationType
  query ::Queries::QueryType
  use BatchLoader::GraphQL
end
# post_type.rb
module Types
  class PostType < BaseObject
    field :id, String, null: false
    field :title, String, null: false
    field :images, [Types::ImageType], null: true

    def images
      # object's class is Searchkick::HashWrapper
      BatchLoader::GraphQL.for(object.id).batch(default_value: []) do |post_ids, loader|
        Image.where(post_id: post_ids).each { |i| loader.call(i.post_id, i) }
      end
    end
  end
end
It works well in the console:
SELECT `images`.* FROM `images` WHERE `images`.`post_id` IN (92, 91, 79, 56, 17, 16);
but images is empty in the result:
{
  "data": {
    "posts": {
      "records": [
        {
          "id": 92,
          "title": "Inkscape 1.0 Release Candidate",
          "images": []
        },
        {
          "id": 91,
          "title": "Saving Money on International Payments as a Remote Freelancer",
          "images": []
        },
        {
          "id": 79,
          "title": "Remembering Conway",
          "images": []
        },
        ...
      ]
    }
  }
}
How can I fix this problem with GraphQL & Searchkick?
Thanks.
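An educated guess, not verified against Searchkick: object.id from a Searchkick::HashWrapper may be a String ("92") while i.post_id from the images table is an Integer (92). BatchLoader matches loader keys by equality, so every post would fall back to default_value: []. Normalizing the key type, and using the memo form so a post can hold several images, would look like:

```ruby
def images
  # object.id may be a String here; the images table yields Integer post_ids.
  # Keys must match exactly, hence the .to_i (assumption: ids are numeric).
  BatchLoader::GraphQL.for(object.id.to_i).batch(default_value: []) do |post_ids, loader|
    Image.where(post_id: post_ids).each do |image|
      # memo form, so a post with several images accumulates an Array
      loader.call(image.post_id) { |memo| memo << image }
    end
  end
end
```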
Hi, I have a project that uses GraphQL and batch-loader, but I can't use it in a nested relation.
I have 3 tables in my project (Countries, Provinces, CityDistricts).
When I get all the provinces in a certain country, I can get them without an N+1 problem.
But I can't get the country without an N+1 problem.
Example GraphQL query:
query city_districts {
  cityDistricts {
    edges {
      node {
        id
        name
        country {
          name
        }
        province {
          name
        }
      }
    }
  }
}
FYI: 'object' is the city district itself.
City district GraphQL type:
module Types
  class CityDistrictType < BaseObject
    field :id, ID, null: false
    field :name, String, null: false
    field :province, Types::ProvinceType, null: false
    field :country, Types::CountryType, null: false

    def province
      BatchLoader::GraphQL.for(object.province_id).batch do |province_ids, loader|
        Country.joins(:provinces).where("provinces.id IN (:province_ids)", province_ids: province_ids).group(:id).each { |country| loader.call(country.id, country) }
      end
    end

    def country
      BatchLoader::GraphQL.for(object.province_id).batch do |province_ids, loader|
        Country.joins(:provinces).where("provinces.id IN (:province_ids)", province_ids: province_ids).group(:id).each { |country| loader.call(country.id, country) }
      end
    end
  end
end
Result:
Cannot return null for non-nullable field CityDistrict.country
Any solution?
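An educated guess at the cause (not verified against your schema): the batch is keyed by object.province_id, but loader.call is given country.id, so no key ever matches and the non-nullable field resolves to nil. The fix is to hand the loader the same province id the batch was keyed on, carried through the join. Modeled with plain arrays standing in for Country.joins(:provinces):

```ruby
# Countries and their provinces, as flat data so this runs without a database.
countries = [{ id: 1, name: "A" }, { id: 2, name: "B" }]
provinces = [
  { id: 10, country_id: 1 },
  { id: 11, country_id: 1 },
  { id: 12, country_id: 2 },
]

province_ids = [10, 12]  # what the batch block receives
results = {}

provinces.select { |p| province_ids.include?(p[:id]) }.each do |province|
  country = countries.find { |c| c[:id] == province[:country_id] }
  results[province[:id]] = country  # key by province_id, matching BatchLoader.for
end

results[10][:name]  # => "A"
```

In the real resolver this corresponds to selecting the joining province id alongside the country columns, e.g. .select("countries.*, provinces.id AS province_id"), and then calling loader.call(country.province_id, country).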
Hey all, I'm trying to print out the variable values inside of batch blocks for debugging purposes, but somehow it doesn't work; nothing prints out. I wonder if anyone knows how to do that? Thanks a lot.
BatchLoader.for(object).batch do |meals, loader|
  billables = InternalClient::MakomoClient.billables_by_consumption_owners(
    owners: meals,
    internal_name: MENU_PRICE_NAME
  ).group_by { |b| b.owner_id.to_i }

  puts billables

  meals.each do |meal|
    menu_price = billables[meal.id].result.first.amount
    loader.call(meal, menu_price)
  end
end
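One thing worth ruling out: the batch block is lazy and only runs when something forces a loaded value, so a puts inside it prints nothing until the first menu_price is actually used. Modeled with a plain thunk (not the gem's internals):

```ruby
# A lambda stands in for the batch block: defining it has no side effects;
# only forcing the value executes the body (and therefore the puts).
side_effects = []
lazy_value = lambda do
  side_effects << "batch ran"  # stand-in for the puts inside the batch block
  42
end

side_effects.empty?  # => true -- nothing has run yet
lazy_value.call      # using the value is what executes the block
side_effects         # => ["batch ran"]
```

So if nothing in the query ever reads the lazy value (or an exception such as billables[meal.id] being nil aborts resolution first), the debug output never appears.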
Hi, thanks for the gem ❤️
I am trying to wrap my head around the concept and I have the following model, where one client has many transfers through accounts.
class Client
  has_many :accounts
  has_many :transfers, through: :accounts
end
Imagine ClientType has a field called transfers; this is my attempt to lazy load it (because it sounds very similar to the 1:many example):
ClientType = GraphQL::ObjectType.define do
  ...
  field :transfers, types[TransferType] do
    resolve ->(client, args, ctx) do
      BatchLoader.for(client.id).batch(default_value: []) do |client_ids, loader|
        Transfer.includes(:client).joins(:client).where(clients: { id: client_ids }).each do |transfer|
          loader.call(transfer.client.id) { |memo| memo << transfer }
        end
      end
    end
  end
end
However, this results in multiple queries to the Transfer table, exactly like an N+1 (one per client id).
Any idea what I am missing? Before this I was using Client.includes(:transfers) and it was working as expected, resulting in queries like:
D, [2018-01-30T21:37:54.025760 #19309] DEBUG -- : Client Load (0.8ms) SELECT "clients".* FROM "clients" WHERE "clients"."deleted_at" IS NULL AND "clients"."canonical_id" IN ('user-15', 'user-16', 'user-17', 'user-18', 'user-19', 'user-20', 'user-21', 'user-1', 'user-2', 'user-3', 'user-4', 'user-5', 'user-6', 'user-7', 'user-8', 'user-9', 'user-10', 'user-11', 'user-12', 'user-13')
D, [2018-01-30T21:37:54.092492 #19309] DEBUG -- : Account Load (0.7ms) SELECT "accounts".* FROM "accounts" WHERE "accounts"."deleted_at" IS NULL AND "accounts"."client_id" IN (31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50)
D, [2018-01-30T21:37:54.117644 #19309] DEBUG -- : Transfer Load (1.7ms) SELECT "transfers".* FROM "transfers" WHERE "transfers"."deleted_at" IS NULL AND "transfers"."account_id" IN (16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 32, 33, 34)
I feel like I am almost there; maybe I need an Account query first and a Transfer query right after that.
Does this work with Rails 6? I'm following the examples as per the README (using loader.call), and instead of getting a BatchLoader instance back, I just get nil.
Then if I follow the examples as per the original blog post (using loader.load), I get a BatchLoader instance back, but when I try to access anything on it, I get a private method error:
NoMethodError: private method `load' called for #<Proc:0x00007f955766b928>
I can't work out if it's not working because I'm doing something wrong, because the docs are wrong, or because it doesn't work with Rails 6, etc.
Rough mockup of my code:
app/models/foo.rb
# ..snip..
has_many :bars

def bars_lazy
  BatchLoader.for(id).batch do |foo_ids, batch_loader|
    Bar.where(foo_id: foo_ids).each { |b| batch_loader.load(b.id, b) }
  end
end
# ..snip..
I have a delivery object which has_many workflows.
I have this as my workflows field in my delivery type:
field :workflows, types[Types::WorkflowType] do
  description "All workflows for a delivery"
  resolve ->(obj, args, ctx) {
    BatchLoader.for(obj.id).batch do |delivery_ids, loader|
      Workflow.where(delivery_id: delivery_ids).each { |workflow| loader.call(workflow.delivery_id, workflow) }
    end
  }
end
I'm getting this error when running a query:
NoMethodError (undefined method `each' for #<Workflow:0x007fcccdcd5978>):
app/controllers/graphql_controller.rb:12:in `execute'
I'm not sure where this error is coming from.
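A likely cause (a guess from the error, not a verified diagnosis): each plain loader.call(key, value) replaces the value for that key, so with several workflows per delivery the lazy value resolves to a single Workflow, and the list field then tries to call each on it, which matches the error. The README's one-to-many form (default_value plus the memo block) returns an Array instead; a sketch:

```ruby
field :workflows, types[Types::WorkflowType] do
  description "All workflows for a delivery"
  resolve ->(obj, args, ctx) {
    BatchLoader.for(obj.id).batch(default_value: []) do |delivery_ids, loader|
      Workflow.where(delivery_id: delivery_ids).each do |workflow|
        # Append to this delivery's array instead of overwriting it, so the
        # list field receives an Array rather than a bare Workflow.
        loader.call(workflow.delivery_id) { |memo| memo << workflow }
      end
    end
  }
end
```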
I'm running into a bit of difficulty using GraphQL Pro's Pundit authorization with batch-loader. We've set up a base resolver class like so:
class BaseResolver < GraphQL::Function
  def batch(obj, args, model)
    object_id = "#{obj.class.name.downcase}_id"
    BatchLoader.for(obj.id).batch(default_value: [], key: args.to_h) do |obj_ids, loader|
      scopes(model.in(object_id => obj_ids), args).each do |record|
        loader.call(record[object_id]) { |memo| memo << record }
      end
    end
  end

  def self.extend_type(base_type)
    base_type.define_connection do
      field :totalCount do
        type types.Int
        resolve ->(obj, _args, _ctx) { obj.nodes.size }
      end
    end
  end
end
and then a specific resolver might look like this:
class FindUnicorns < BaseResolver
  type extend_type(Types::UnicornType)

  def call(obj, args, _ctx)
    batch(obj, args, Unicorn)
  end

  private

  def scopes(query, _args)
    query
  end
end
and we have a policy like this:
class UnicornPolicy < ApplicationPolicy
  class Scope
    attr_reader :user, :scope

    def initialize(user, scope)
      @user = user
      @scope = scope
    end

    def resolve
      if user.admin?
        scope.all
      else
        # only unicorns user rides
        scope.where(rider_id: user.id)
      end
    end
  end
end
The trouble is that GraphQL Pro will only invoke the UnicornPolicy::Scope#resolve method if the resolver returns an ActiveRecord::Relation or Mongoid::Criteria, and with batch-loader, we're actually returning a BatchLoader. If I replace our FindUnicorns resolver with this:
class FindUnicornsNoBatch < BaseResolver
  type Types::UnicornType

  def call(_obj, _args, ctx)
    Unicorn.all
  end
end
...then the Scope#resolve method is called as I desire.
My question is: is there a way I can use these two libraries together? Perhaps I should batch after applying the policy? Or maybe patch GraphQL Pro to also invoke the policy scope if the object type is BatchLoader? My goal is to achieve the efficiency of batch loading and caching Unicorn results, while also ensuring that no user receives Unicorns which they do not deserve.
Thanks for an awesome gem! I love the flexibility and the clarity you've achieved by reducing the "magic" involved.
I think I may be missing a key understanding since I can't figure out how I would represent a many to many relationship.
The documentation talks about the memoization being extremely useful for 1:many relationships, and I believe this is because loader.call(1, value) overwrites the value for 1. I think I'm missing how multiple BatchLoader.for(1) calls get grouped together.
The test situation I am working with follows:
# ActiveRecord classes
class Parent < ActiveRecord::Base
  has_and_belongs_to_many :children
end

class Child < ActiveRecord::Base
  has_and_belongs_to_many :parents
end

# Usage
BatchLoader::GraphQL.for(parent.id).batch(default_value: []) do |parent_ids, loader|
  children_for_parent_ids(parent_ids).each do |loaded_child|
    loader.call([unsure]) { |memo| memo << loaded_child }
  end
end

# How I think I could make it work...
BatchLoader::GraphQL.for(parent.id).batch(default_value: []) do |parent_ids, loader|
  children_for_parent_ids(parent_ids).each do |loaded_child|
    parent_ids_for_child(loaded_child).each do |parent_id|
      loader.call(parent_id) { |memo| memo << loaded_child }
    end
  end
end
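For what it's worth, the second sketch above looks right in shape: key by every parent id a child belongs to. One variation that avoids the per-child parent_ids_for_child lookups, assuming a standard HABTM join table, is pulling the joining parent id out with the child in one query (a sketch, not tested):

```ruby
BatchLoader::GraphQL.for(parent.id).batch(default_value: []) do |parent_ids, loader|
  # Each result row is a (child, parent) pair, so a child shared by two
  # parents appears twice and lands in both parents' memo arrays.
  Child.joins(:parents)
       .where(parents: { id: parent_ids })
       .select("children.*, parents.id AS parent_id")
       .each { |child| loader.call(child.parent_id) { |memo| memo << child } }
end
```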
If this isn't the right place to ask question, feel free to close the issue! I would really appreciate any direction though π
Currently, the cache keyword argument is used for two things:
1. #__sync uses it to set a @synced instance variable, which tells later calls to #__sync not to reload the value when another method is called on it (https://github.com/exAspArk/batch-loader/blob/v1.3.0/lib/batch_loader.rb#L49-L55).
2. #__sync! uses it to replace methods on the proxied object with their 'real' equivalents (https://github.com/exAspArk/batch-loader/blob/v1.3.0/lib/batch_loader.rb#L77-L78).
In our use of this gem at GitLab, we've noticed that it's very easy for objects with a large interface to spend a lot of time in #__replace_with!. See https://gitlab.com/gitlab-org/gitlab-ce/issues/60373#note_159582633 and https://gitlab.com/gitlab-org/gitlab-ce/issues/43065#note_160469960 for some examples.
In some exploratory testing, we found that retaining only item 1 from my list above gave us better performance than either disabling the cache entirely, or having both 1 and 2 coupled together.
Could we consider a replace_methods keyword argument? If unset, it would default to the same value as cache, but if set to false, it would skip item 2 above. I am happy to create a PR if that sounds reasonable.
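For anyone finding this issue later: current versions of the gem document an option with exactly these semantics in the README. If I'm reading it right, usage looks like this (post stands for any record carrying a user_id):

```ruby
BatchLoader.for(post.user_id).batch(replace_methods: false) do |user_ids, loader|
  # With replace_methods: false, methods on the proxy keep going through
  # method_missing instead of being defined on the singleton class, trading
  # per-call speed for much cheaper setup on objects with large interfaces.
  User.where(id: user_ids).each { |user| loader.call(user.id, user) }
end
```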
My code for the batch loader:
BatchLoader.for(object.id).batch(default_value: []) do |province_ids, loader|
  City.where(province_id: province_ids).each do |city|
    loader.call(city.province_id) { |data| data << city }
  end
end
Result:
Province Load (0.7ms)  SELECT "provinces".* FROM "provinces"
↳ app/controllers/graphql_controller.rb:11
City Load (0.5ms)  SELECT "cities".* FROM "cities" WHERE "cities"."province_id" = $1 [["province_id", 1]]
↳ app/graphql/types/province_type.rb:21
City Load (1.1ms)  SELECT "cities".* FROM "cities" WHERE "cities"."province_id" = $1 [["province_id", 2]]
↳ app/graphql/types/province_type.rb:21
City Load (0.5ms)  SELECT "cities".* FROM "cities" WHERE "cities"."province_id" = $1 [["province_id", 3]]
Here's my Gemfile:
source 'https://rubygems.org'
git_source(:github) { |repo| "https://github.com/#{repo}.git" }
ruby '2.4.5'
# Bundle edge Rails instead: gem 'rails', github: 'rails/rails'
gem 'rails', '~> 5.2.1'
# Use sqlite3 as the database for Active Record
# gem 'sqlite3'
# Use postgresql as the database for Active Record
gem 'pg'
# Use Puma as the app server
gem 'puma', '~> 3.11'
# Build JSON APIs with ease. Read more: https://github.com/rails/jbuilder
# gem 'jbuilder', '~> 2.5'
# Use Redis adapter to run Action Cable in production
# gem 'redis', '~> 4.0'
# Use ActiveModel has_secure_password
# gem 'bcrypt', '~> 3.1.7'
# Use ActiveStorage variant
# gem 'mini_magick', '~> 4.8'
# Use Capistrano for deployment
# gem 'capistrano-rails', group: :development
# Reduces boot times through caching; required in config/boot.rb
gem 'bootsnap', '>= 1.1.0', require: false
# Use Rack CORS for handling Cross-Origin Resource Sharing (CORS), making cross-origin AJAX possible
# gem 'rack-cors'
group :development, :test do
# Call 'byebug' anywhere in the code to stop execution and get a debugger console
gem 'byebug', platforms: [:mri, :mingw, :x64_mingw]
end
group :development do
gem 'listen', '>= 3.0.5', '< 3.2'
# Spring speeds up development by keeping your application running in the background. Read more: https://github.com/rails/spring
gem 'spring'
gem 'spring-watcher-listen', '~> 2.0.0'
end
# Windows does not include zoneinfo files, so bundle the tzinfo-data gem
gem 'tzinfo-data', platforms: [:mingw, :mswin, :x64_mingw, :jruby]
gem 'graphql'
gem 'batch-loader'
I already use your quick fix from this comment:
#22 (comment)
It solved this problem.
The other solution:
rmosolgo/graphql-ruby#1778 (comment)
But it requires setting the scope attribute to false.
Is there a best practice for solving this problem?
Thank you
I'm using batch-loader in conjunction with the graphql gem, and I've been experimenting with updating the gem version from 0.3.0 to 1.0.2. In doing so, I noticed a performance decrease that I thought I would report.
Our JavaScript client does an expensive GraphQL call after login to download much of the static data it needs to display throughout a single-page app. A couple of the resources requested during this expensive 'getEnums' call rely on batch-loader. On my development machine, if I perform this query with 0.3.0, it anecdotally takes circa 500ms on average. Here are 3 logins:
INFO [16/Sep/2017:09:28:05 +0100]: GraphQL [getCurrentUser] completed in 12 ms
INFO [16/Sep/2017:09:28:05 +0100]: GraphQL [getEnums] completed in 627 ms
INFO [16/Sep/2017:09:28:38 +0100]: GraphQL [getCurrentUser] completed in 12 ms
INFO [16/Sep/2017:09:28:38 +0100]: GraphQL [getEnums] completed in 649 ms
INFO [16/Sep/2017:09:28:59 +0100]: GraphQL [getCurrentUser] completed in 8 ms
INFO [16/Sep/2017:09:28:59 +0100]: GraphQL [getEnums] completed in 491 ms
If I upgrade to 1.0.0, it takes approx 800ms to return.
INFO [16/Sep/2017:09:33:23 +0100]: GraphQL [getCurrentUser] completed in 12 ms
INFO [16/Sep/2017:09:33:24 +0100]: GraphQL [getEnums] completed in 819 ms
INFO [16/Sep/2017:09:33:40 +0100]: GraphQL [getCurrentUser] completed in 11 ms
INFO [16/Sep/2017:09:33:41 +0100]: GraphQL [getEnums] completed in 839 ms
INFO [16/Sep/2017:09:33:58 +0100]: GraphQL [getCurrentUser] completed in 8 ms
INFO [16/Sep/2017:09:33:59 +0100]: GraphQL [getEnums] completed in 732 ms
Same is true for 1.0.1 (actually seems slightly faster in these test runs):
INFO [16/Sep/2017:09:36:20 +0100]: GraphQL [getCurrentUser] completed in 10 ms
INFO [16/Sep/2017:09:36:21 +0100]: GraphQL [getEnums] completed in 852 ms
INFO [16/Sep/2017:09:36:32 +0100]: GraphQL [getCurrentUser] completed in 6 ms
INFO [16/Sep/2017:09:36:33 +0100]: GraphQL [getEnums] completed in 733 ms
INFO [16/Sep/2017:09:36:47 +0100]: GraphQL [getCurrentUser] completed in 6 ms
INFO [16/Sep/2017:09:36:47 +0100]: GraphQL [getEnums] completed in 703 ms
If I upgrade to 1.0.2 however, this shoots up to 2800ms or so.
INFO [16/Sep/2017:09:39:32 +0100]: GraphQL [getCurrentUser] completed in 11 ms
INFO [16/Sep/2017:09:39:35 +0100]: GraphQL [getEnums] completed in 2811 ms
INFO [16/Sep/2017:09:39:49 +0100]: GraphQL [getCurrentUser] completed in 7 ms
INFO [16/Sep/2017:09:39:51 +0100]: GraphQL [getEnums] completed in 2665 ms
INFO [16/Sep/2017:09:40:06 +0100]: GraphQL [getCurrentUser] completed in 12 ms
INFO [16/Sep/2017:09:40:08 +0100]: GraphQL [getEnums] completed in 2751 ms
The only variation between the tests is updating the Gemfile and altering batch_loader.load to batch_loader.call.
In terms of data retrieved using batch-loader, it's currently restricted to a small portion of the data returned in the call. We have Clouds containing SubClouds containing ProductFamilies, and batch-loader is used to fetch these in fewer queries.
graphql/types/cloud.rb
field :subClouds, -> { types[SubCloudType] }, 'The sub-clouds available for this cloud',
      resolve: lambda { |cloud, _args, _ctx|
        BatchLoader.for(cloud.id).batch do |cloud_ids, batch_loader|
          sub_clouds = SubCloud.in(cloud_id: cloud_ids).entries
          cloud_ids.each do |cloud_id|
            sub = sub_clouds.select { |s| s.cloud_id == cloud_id }
            batch_loader.call(cloud_id, sub)
          end
        end
      }
graphql/types/subcloud.rb
field :productFamilies, -> { types[ProductFamilyType] }, 'The product families available for this sub-cloud',
      resolve: lambda { |sub_cloud, _args, _ctx|
        BatchLoader.for(sub_cloud.id).batch do |sub_cloud_ids, batch_loader|
          families = ProductFamily.in(subCloud_id: sub_cloud_ids).entries
          sub_cloud_ids.each do |sub_cloud_id|
            family = families.select { |f| f.subCloud_id == sub_cloud_id }
            batch_loader.call(sub_cloud_id, family)
          end
        end
      }
I'll see if I can log the lambda calls and database queries to see what the underlying differences are.
Hi,
I have two Rails applications connected as follows:
APP1: exposes data through a GraphQL API
APP2: defines objects associated with APP1 data; it consumes the API provided by APP1
Is there a way to avoid the N+1 HTTP requests problem (when loading APP1 data for a collection of APP2 objects)?
I tried implementing something similar to what is presented at https://github.com/exAspArk/batch-loader#restful-api-example; it works (and performs caching) but it doesn't solve the N+1 HTTP requests problem.
BTW: I think the rating_lazy implementation is not correct; it should be:
class Post < ApplicationRecord
  def rating_lazy
    BatchLoader.for(id).batch do |ids, loader|
      Parallel.each(ids, in_threads: 10) { |id| loader.call(id, rating) }
    end
  end

  # ...
end
Many thanks,
Mauro
We're using this gem for preloading related data on ActiveRecord models from another data store. This works well, but I wonder if the interface for AR could be improved.
Consider the following example:
# Pipeline is a collection of CI jobs; for example, an rspec and a rubocop job can be in it
pipelines = Pipeline.last(10)
pipelines.each { |p| p.lazy_commit } # list the commit for batching

# What I would want to write:
pipelines = Pipeline.last(10).with_batched(:lazy_commit)
This way I can forget about when the pipelines are actually fetched from the database. I understand this is a minor issue, and having a hook like that might make more sense to provide in rails/rails than in this code base. What do you think?
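A model-side sketch of the proposed hook (the name with_batched is hypothetical; nothing like it ships in the gem or Rails). Touching each lazy method up front queues every record in the same batch before anything forces a load:

```ruby
module WithBatched
  # Call each lazy method once per record so every loader is enqueued in
  # the same batch; return self so the collection can be used fluently.
  def with_batched(*methods)
    each { |record| methods.each { |m| record.public_send(m) } }
    self
  end
end

# A stub Pipeline stands in for the ActiveRecord model from the example.
Pipeline = Struct.new(:id) do
  attr_reader :queued

  def lazy_commit
    @queued = true  # stand-in for BatchLoader.for(commit_id).batch { ... }
  end
end

pipelines = [Pipeline.new(1), Pipeline.new(2)].extend(WithBatched)
pipelines.with_batched(:lazy_commit)
pipelines.all?(&:queued)  # => true
```

In a real app this would probably live in an ActiveRecord::Relation extension rather than on a plain Array, but the batching effect is the same.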
Example:
I have Users with a type attribute (like reader), and these users have Posts with different states.
I need to select only users with a specific type attribute and the Posts with a specific type... so basically filter the associations' values.
I hope the explanation was clear.
Thanks for the help!
Consider this script:
def modify!(xs)
  xs.map! { |x| x.map(&:succ) }
end

def load_value(x)
  BatchLoader.for(x).batch do |xs, loader|
    p xs
    modify!(xs).each { |x| loader.call(x.map(&:pred), x) }
  end
end

a = [1, 2, 3]
b = [4, 5, 6]

load_value(a)
load_value(b)
load_value(a)
I would expect the last load_value call here to not print anything. It actually prints [[1, 2, 3], [4, 5, 6]].
We discovered this in GitLab because we were doing something similar; see https://gitlab.com/gitlab-org/gitlab-ce/issues/60829#note_163484873 for a fuller explanation. We shouldn't be doing that (and we'll fix it) but it might make sense for BatchLoader to be more robust here, too?
@exAspArk what do you think?
cc @DouweM
Given the below, I was under the impression I would be able to avoid the n+1 request problem. However, the below query makes n+1 calls. Am I using this incorrectly? Is there a way to avoid the n+1 problem when trying to sort?
User.where(canonical_id: args['ids']).sort{ |a, b| a.latest_posts.date <=> b.latest_posts.date }
# user model
def latest_posts
  BatchLoader.for(id).batch do |user_ids, loader|
    Posts.where(user_id: user_ids)
         .group('users.id')
         .group_by(&:user_id)
         .each do |user_id, posts|
      # latest_content = ...some function that finds the post with the latest date
      # latest_date = ...some function that finds the post by the latest date
      loader.call(user_id, OpenStruct.new(user_id: user_id, content: latest_content, date: latest_date))
    end
  end
end
Edit: After some investigation, this appears to be because latest_posts returns an OpenStruct and sort gets called on a field in the struct. For example, User.where(canonical_id: args['ids']).sort { |a, b| a.latest_posts <=> b.latest_posts } does not result in n+1 queries. Also, interestingly, manually invoking latest_posts on each user but NOT fetching a field from the struct also does not do n+1 queries:
User.where(canonical_id: args['ids']).each(&:latest_posts).sort{ |a, b| a.latest_posts.date <=> b.latest_posts.date }
Is this something that this gem can fix? It would be nice to be able to sort on a nested field, but it's a bit silly that I have to randomly throw in a .each(&:latest_posts) just to prevent the n+1 queries.
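A plausible mechanism, modeled without the gem (this is a sketch of lazy batching in general, not the gem's actual internals): every lazy value queues its key in a shared batch, and the first method call on any value flushes only the keys queued at that moment. Sorting forces values one comparison at a time, so each force can trigger its own flush:

```ruby
class Batch
  attr_reader :flush_count

  def initialize(&fetch)
    @fetch = fetch
    @queue = []
    @cache = {}
    @flush_count = 0
  end

  # Hand back a thunk; the key is queued now, fetched on first force.
  def lazy(key)
    @queue << key
    -> { force(key) }
  end

  private

  def force(key)
    unless @cache.key?(key)
      @flush_count += 1
      @fetch.call(@queue.uniq).each { |k, v| @cache[k] = v }
      @queue.clear
    end
    @cache[key]
  end
end

fetch = ->(keys) { keys.map { |k| [k, k * 10] } }

# Forcing each value as soon as it is created (what happens when .date is
# read inside the sort block): one flush per key, i.e. the N+1.
eager = Batch.new(&fetch)
[3, 1, 2].each { |k| eager.lazy(k).call }
eager.flush_count  # => 3

# Queueing everything first (what .each(&:latest_posts) achieves): one flush.
batched = Batch.new(&fetch)
handles = [3, 1, 2].map { |k| batched.lazy(k) }
handles.each(&:call)
batched.flush_count  # => 1
```

That would explain why the seemingly redundant .each(&:latest_posts) pass fixes it: it registers every user's key before the sort forces any value.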
As the title says, I was wondering if it's possible to use batch-loader to query a has-many relationship but specifically get a count of the number of items for a specific parent object.
I've taken a look at the closed issues, but none of them really show a solution. There is one that touches this subject, but more in the context of using values from one loader in another, which is not what I am trying to do per se.
Using the batch loader gives me some warnings now with Ruby 2.7.1.
rvm/gems/ruby-2.7.1/gems/batch-loader-1.5.0/lib/batch_loader/graphql.rb:58: warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call
Looks like this method needs a tweak...
def batch(*args, &block)
  @batch_loader.batch(*args, &block)
  self
end
More notes on a potential fix:
https://bloggie.io/@kinopyo/how-to-fix-ruby-2-7-warning-using-the-last-argument-as-keyword-parameters-is-deprecated
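The tweak the linked article suggests boils down to forwarding keywords explicitly rather than letting them ride along inside *args. A gem-free sketch (Wrapper and target are illustrative stand-ins, not the gem's internals):

```ruby
class Wrapper
  def initialize(target)
    @target = target
  end

  # before: def batch(*args, &block); @target.batch(*args, &block); end
  # Splitting out **kwargs stops Ruby 2.7 folding keywords into a trailing
  # Hash, which is what triggers the deprecation warning.
  def batch(*args, **kwargs, &block)
    @target.batch(*args, **kwargs, &block)
    self
  end
end

target = Object.new
def target.batch(default_value: nil)
  default_value
end

wrapper = Wrapper.new(target)
wrapper.batch(default_value: [])  # forwarded as keywords, no warning
```

On Ruby 2.7 specifically, marking the old single-splat method with ruby2_keywords is the other documented route when the signature can't change.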
This is a small, stylistic issue, but it might be clearer to new users if the methods referencing the IDs were more explicit. Something like this, maybe?
BatchLoader::GraphQL.for_id(post.user_id).batch do |user_ids, loader|
  User.where(id: user_ids).each { |user| loader.match_id(user.id, user) }
end
Just a thought. call is pretty generic and doesn't convey as much meaning as a name that says "ok, when given this ID, reference this object".
Hi,
I'm trying this awesome library and immediately bumped into this error:
/Users/schovi/.rvm/gems/ruby-2.4.0@playground/gems/batch-loader-1.0.0/lib/batch_loader/executor.rb:22:in `block in initialize': uninitialized constant BatchLoader::Executor::Set (NameError)
from /Users/schovi/.rvm/gems/ruby-2.4.0@playground/gems/batch-loader-1.0.0/lib/batch_loader/executor_proxy.rb:46:in `items_to_load'
from /Users/schovi/.rvm/gems/ruby-2.4.0@playground/gems/batch-loader-1.0.0/lib/batch_loader/executor_proxy.rb:16:in `add'
from /Users/schovi/.rvm/gems/ruby-2.4.0@playground/gems/batch-loader-1.0.0/lib/batch_loader.rb:22:in `batch'
from main.rb:4:in `block in <main>'
from main.rb:3:in `times'
from main.rb:3:in `each'
from main.rb:3:in `map'
from main.rb:3:in `<main>'
I tried to minimize the code:
require "batch-loader"

3.times.map do |i|
  BatchLoader.for(i).batch do |somethings, loader|
    somethings.each { |val| loader.call(val, val) }
  end
end
Environment:
ruby 2.4.0
batch-loader 1.0.0
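For anyone hitting this: batch-loader 1.0.0 referenced Set without requiring it, so requiring the stdlib yourself before the gem is a likely workaround (newer releases add the require). A minimal sketch of the underlying cause, independent of the gem:

```ruby
# Set ships with the stdlib but (before Ruby 3.2) is not autoloaded;
# referencing Set without this require raises NameError in a fresh
# process, which is what batch-loader 1.0.0 ran into internally.
require "set"

items = Set.new
items << 1 << 1 << 2
items.size # => 2 (duplicates collapse)
```

So a probable workaround on 1.0.0 is `require "set"` before `require "batch-loader"`.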
Looks like schemas are defined differently now in the graphql-ruby gem:
class FooSchema < GraphQL::Schema
mutation(Types::MutationType)
query(Types::QueryType)
end
As opposed to the docs for batch-loader which have:
Schema = GraphQL::Schema.define do
query QueryType
use BatchLoader::GraphQL
end
Or am I missing something?
Thanks!
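Not authoritative, but class-based schemas kept the plugin hook, so the README's `use BatchLoader::GraphQL` line should translate directly (assuming a current graphql-ruby; `Types::QueryType` etc. are your app's types):

```ruby
class FooSchema < GraphQL::Schema
  query(Types::QueryType)
  mutation(Types::MutationType)

  # Class-based schemas accept plugins through the same `use` hook,
  # so this is the assumed equivalent of the .define-style example.
  use BatchLoader::GraphQL
end
```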
I tried to abstract out my BatchLoader calls a bit to DRY them up. In my GraphQL resolvers, I started using these field declarations:
field :org, !Types::Org, resolve: BatchLoadResolver.new(Org)
field :user, !Types::UserType, resolve: BatchLoadResolver.new(User)
With a BatchLoadResolver that would do the common work of loading a many:1 association:
class BatchLoadResolver
  attr_reader :id_method, :klass

  def initialize(klass, id_method = nil)
    @klass = klass
    @id_method = id_method || :"#{klass.name.underscore}_id"
  end

  def call(parent, args, ctx)
    BatchLoader.for(parent.public_send(id_method)).batch(cache: false) do |ids, loader|
      klass.where(id: ids, org: parent.org).all.each do |child|
        loader.call(child.id, child)
      end
    end
  end
end
This same loading code was working fine when I had the BatchLoader blocks defined individually for each field. However, when doing the above, all of my objects' IDs would get mixed together into a single batch (rather than a batch for each type of object).
It quickly became obvious that with a simple call like BatchLoader.for(object_id) there was no way for this gem to distinguish between the types of objects it would have to load. After some digging, I found the implementation's call to source_location, which sets up a hash key for loaders based on where the loading block was defined. That makes sense given the minimal info I'm passing BatchLoader when executing it, but it does limit the ways this library can be used.
You may not consider this a bug, but I think you might at least want to make this more obvious in the readme. I suspect others will run into the same situation.
If this is something you care to support, maybe there could be another way to choose a hash key.
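The collision described above can be reproduced without the gem: every BatchLoadResolver instance executes the same block in the source, so all the blocks share one source_location, which is the default batch key.

```ruby
# Two blocks created at the same place in the source report the same
# source_location, even though they close over different classes --
# which is exactly why the IDs end up mixed into one batch.
make_loader = ->(klass) { proc { klass } }

string_block  = make_loader.call(String)
integer_block = make_loader.call(Integer)

string_block.source_location == integer_block.source_location # => true
```

If your batch-loader version supports it, passing an explicit key, e.g. `batch(key: klass.name, cache: false)`, should keep each model's IDs in their own batch; the `key:` option is an assumption here, so check your version's README.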
It's quite possible that I'm doing something wrong, because I don't understand 100% how this gem works, but I would appreciate any help.
I'm trying to implement the 1:Many solution (Post has_many Comments) in GraphQL, and here's my Post query field for its comments:
field :comments, types[Types::CommentType] do
  # resolve ->(o, _, _) { o.comments }
  resolve ->(o, _, _) do
    BatchLoader.for(o.id).batch(default_value: []) do |ids, loader|
      Comment.where(post_id: ids).each do |c|
        loader.call(o.id) { |memo| memo << c }
      end
    end
  end
end
Now in my database I have a Comment attached to the Post with id 139. While that comment does appear in the query result, which is successfully lazy, the comment appears attached to the very first Post listed, which has another id. And its actual parent (the Post with id 139) shows no comments in the query results.
What am I doing wrong?
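A likely culprit, judging from the snippet: `loader.call(o.id)` inside the batch block. The block runs once for the whole batch with `o` bound to a single post, so every comment lands on that one post. A minimal closure sketch of the effect:

```ruby
# batch-loader keys blocks by source_location, so effectively a single
# block -- with a single captured binding -- serves the whole batch.
stored_block = nil
[101, 102, 103].each do |post_id|
  stored_block ||= proc { post_id } # only the first closure is kept
end

stored_block.call # => 101, regardless of which post triggered it
```

Keying by the comment's own foreign key instead, `loader.call(c.post_id) { |memo| memo << c }` (as in the README's 1:many example), should distribute the comments correctly.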
I have something that looks like this:
class Group < GraphQL::Schema::Object
  field :members, [String], null: false
  field :members_count, Int, null: false

  def members
    BatchLoader::GraphQL.for(object.id).batch(default_value: []) do |ids, loader|
      # query for all member associations for group IDs
    end
  end

  def members_count
    members.count
  end
end
But members_count reading from members throws an error like:
undefined method `count' for #<BatchLoader::GraphQL:0x000...>
This was working before the 1.3.0 changes.
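A plain BatchLoader behaves like a transparent proxy (method_missing forces the batch and forwards the call), while BatchLoader::GraphQL after 1.3.0 is a thin wrapper without that delegation, which would explain the NoMethodError. A toy model of the difference (these classes are illustrative, not the gem's):

```ruby
# A transparent lazy proxy: any unknown method forces the load, then
# forwards the call to the loaded value.
class LazyProxy
  def initialize(&loader)
    @loader = loader
  end

  def method_missing(name, *args, &blk)
    (@value ||= @loader.call).public_send(name, *args, &blk)
  end

  def respond_to_missing?(name, include_private = false)
    true
  end
end

# A thin wrapper: holds the loader but forwards nothing.
class ThinWrapper
  def initialize(&loader)
    @loader = loader
  end
end

LazyProxy.new { [1, 2, 3] }.count                 # => 3
ThinWrapper.new { [1, 2, 3] }.respond_to?(:count) # => false
```

One hedged workaround: build the plain BatchLoader in a private helper, expose it to the field via `BatchLoader::GraphQL.wrap(...)` if your version provides it, and call `count` on the plain loader inside `members_count`.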
I am just getting started with the gem and have a few questions about the for call to BatchLoader, and whether batching across join tables is its intended purpose. We have a lot of join tables, and support for this model would be crucial. Looks promising and like a great gem!
Hi, in my query the value returned is a single object, Project.find(id).
This schema returns one error:
Failed to build a GraphQL list result for field Project.items at path project.items. Expected #<BatchLoader::GraphQL:0x000055f6101402b0 @batch_loader=#BatchLoader:0x62120> (BatchLoader::GraphQL) to implement .each to satisfy the GraphQL return type [Item!].
field :items, [Types::ItemType], null: true

def items
  BatchLoader::GraphQL.for(object.id).batch(default_value: []) do |project_ids, loader|
    Item.where(project_id: project_ids).each do |data|
      loader.call(data.project_id) { |memo| memo << data }
    end
  end
end
Any idea?
Hi,
I'm using BatchLoader for queries on my ActiveRecord objects, and I'm getting an error when I call Kernel.Array with the BatchLoader proxy:
(This is in a new blank Rails 5.2.2 project with just one model that I haven't done anything to, just to make sure something in my real project wasn't causing the problem)
irb(main):002:0> u = User.first
=> #<User id: 1, ...>
irb(main):003:0> b = BatchLoader.for(1).batch {|ids, loader| loader.call(1, u)}
=> #<BatchLoader:0x94261337109840>
irb(main):004:0> Array(b)
Traceback (most recent call last):
2: from (irb):10
1: from (irb):10:in `Array'
NoMethodError (undefined method `to_ary' for #<User:0x000055baed851f08>)
Did you mean? to_key
This isn't an issue when calling Kernel.Array on the ActiveRecord objects directly, or on the result of __sync on the proxy:
irb(main):005:0> Array(u)
=> [#<User id: 1, ...>]
irb(main):006:0> Array(b.__sync)
=> [#<User id: 1, ...>]
Kernel.Array is a C function that calls to_ary if the object has that method; otherwise it wraps the object in an Array.
What's strange is that my User class doesn't have a to_ary method (and neither does ActiveRecord::Base), so I don't know why Kernel.Array is trying to call it. The way Kernel.Array detects whether an object responds to a method must be different from just calling respond_to?, which BatchLoader does handle:
irb(main):013:0> b = BatchLoader.for(1).batch {|ids, l| l.call(1, u)}
=> #<BatchLoader:0x93972309044200>
irb(main):014:0> b.respond_to?(:to_ary)
=> false
Maybe the low-level C Ruby functions do something different that isn't handled by the proxy correctly?
Also, this seems to affect ActiveRecord objects wrapped in the proxy, but not other types of objects I've tested (Object, String, Rails, Array), for example:
>> b = BatchLoader.for(1).batch {|ids, l| l.call(1, "Hello World")}
=> #<BatchLoader:0x140413397447560>
>> Array(b)
=> ["Hello World"]
Any ideas?
--Caleb
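One way this class of failure arises (a sketch of the mechanism, not a verified trace of the gem's internals): Kernel#Array probes to_ary at the C level, and when a proxy claims to respond to it and then blindly forwards it to a wrapped object that lacks it, the resulting NoMethodError escapes instead of being swallowed. EagerProxy below is illustrative only.

```ruby
class EagerProxy
  def initialize(target)
    @target = target
  end

  def method_missing(name, *args, &blk)
    @target.public_send(name, *args, &blk) # blindly forward everything
  end

  def respond_to_missing?(name, include_private = false)
    true # claims to respond, so Kernel#Array goes ahead and calls to_ary
  end
end

Array(Object.new)                   # fine: plain objects are wrapped in a one-element array
begin
  Array(EagerProxy.new(Object.new)) # the to_ary probe reaches the target and blows up
rescue NoMethodError => e
  e.name # => :to_ary
end
```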
Hi! 👋 This is awesome, thank you so much for making it.
We're trying to add batch-loader for GraphQL and are running into an issue where we're getting a "cannot return null" on non-nullable fields.
{
"message": "Cannot return null for non-nullable field Item.schedule"
},
{
"message": "Cannot return null for non-nullable field Item.schedule"
},
{
"message": "Cannot return null for non-nullable field Item.schedule"
},
If we make the field nullable, then it appears that BatchLoader is just not loading the association at all.
"schedule": null,
We added some logging to see the output, the code:
def batch_load_test(id, klass)
  BatchLoader::GraphQL.for(id).batch do |ids, loader|
    Rails.logger.error("*" * 80)
    Rails.logger.error("Batch Load #{klass}")
    Rails.logger.error("*" * 80)
    klass.where(id: ids).each { |record| loader.call(record.id, record) }
  end
end

def schedule
  batch_load_test(object.schedule_id, Schedule)
end

def location
  batch_load_test(object.zone_id, Location)
end
In the log output, Batch Load Location appears correctly, but Batch Load Schedule doesn't.
I've tested a lot of variations and can't seem to find something that works consistently.
Thanks for reading!
graphql-ruby: 1.11.1
batch-loader: 1.5
ruby: 2.6.6
Hello,
I was happily refactoring away all my association loading in GraphQL by using a combination of key and association reflection. After I changed everything and ran a query, I got a deadlock; recursive locking error, ouch.
Doing some research, it seems that Mutex does not allow reentry, while Monitor does. Changing line 101 in batch_loader.rb to mutex = Monitor.new does fix the issue locally.
I did some research on it and found this related article: https://japgolly.blogspot.com/2012/04/ruby-mutex-reentrancy.html There is a cost to it, but I believe it's marginally low (compared to SQL queries).
So WDYT? Up for an MR, or am I completely missing the point?
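The Mutex/Monitor difference is easy to confirm in plain Ruby:

```ruby
require "monitor"

# Mutex is not reentrant: the owning thread locking it again raises
# ThreadError ("deadlock; recursive locking").
mutex = Mutex.new
error = begin
  mutex.synchronize { mutex.synchronize { } }
  nil
rescue ThreadError => e
  e
end
# error is a ThreadError

# Monitor allows re-entry by the owning thread, which is why swapping
# it in fixes the deadlock.
monitor = Monitor.new
result = monitor.synchronize { monitor.synchronize { :ok } }
# result == :ok
```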
Using Batch Loader and GraphQL v1.8.6
Event Load (0.9ms) SELECT "events".* FROM "events"
↳ app/controllers/api/graphql_controller.rb:11
EventPeriod Load (0.9ms) SELECT "event_periods".* FROM "event_periods" WHERE "event_periods"."event_id" IN ($1, $2, $3) [["event_id", 1], ["event_id", 2], ["event_id", 3]]
↳ app/graphql/resolvers/event_periods_resolver.rb:10
EventPeriod::Grouping Load (0.7ms) SELECT "event_period_groupings".* FROM "event_period_groupings" WHERE "event_period_groupings"."event_period_group_id" IN ($1, $2) [["event_period_group_id", 5], ["event_period_group_id", 12]]
↳ app/graphql/resolvers/event_period_days_resolver.rb:10
EventPeriod::Day Load (0.7ms) SELECT "event_periods".* FROM "event_periods" WHERE "event_periods"."type" IN ('day') AND "event_periods"."id" IN ($1, $2, $3, $4, $5) [["id", 1], ["id", 2], ["id", 3], ["id", 4], ["id", 11]]
↳ app/graphql/resolvers/event_period_days_resolver.rb:10
Using GraphQL v1.8.7
Event Load (0.8ms) SELECT "events".* FROM "events"
↳ app/controllers/api/graphql_controller.rb:11
EventPeriod Load (1.1ms) SELECT "event_periods".* FROM "event_periods" WHERE "event_periods"."event_id" = $1 [["event_id", 1]]
↳ app/graphql/resolvers/event_periods_resolver.rb:10
EventPeriod Load (0.6ms) SELECT "event_periods".* FROM "event_periods" WHERE "event_periods"."event_id" = $1 [["event_id", 2]]
↳ app/graphql/resolvers/event_periods_resolver.rb:10
EventPeriod Load (0.4ms) SELECT "event_periods".* FROM "event_periods" WHERE "event_periods"."event_id" = $1 [["event_id", 3]]
↳ app/graphql/resolvers/event_periods_resolver.rb:10
EventPeriod::Grouping Load (0.8ms) SELECT "event_period_groupings".* FROM "event_period_groupings" WHERE "event_period_groupings"."event_period_group_id" = $1 [["event_period_group_id", 5]]
↳ app/graphql/resolvers/event_period_days_resolver.rb:10
EventPeriod::Day Load (0.8ms) SELECT "event_periods".* FROM "event_periods" WHERE "event_periods"."type" IN ('day') AND "event_periods"."id" IN ($1, $2, $3, $4) [["id", 1], ["id", 2], ["id", 3], ["id", 4]]
↳ app/graphql/resolvers/event_period_days_resolver.rb:10
EventPeriod::Grouping Load (0.4ms) SELECT "event_period_groupings".* FROM "event_period_groupings" WHERE "event_period_groupings"."event_period_group_id" = $1 [["event_period_group_id", 12]]
↳ app/graphql/resolvers/event_period_days_resolver.rb:10
EventPeriod::Day Load (0.4ms) SELECT "event_periods".* FROM "event_periods" WHERE "event_periods"."type" IN ('day') AND "event_periods"."id" IN ($1, $2, $3, $4, $5) [["id", 1], ["id", 2], ["id", 3], ["id", 4], ["id", 11]]
↳ app/graphql/resolvers/event_period_days_resolver.rb:10
Completed 200 OK in 248ms (Views: 0.4ms | ActiveRecord: 19.9ms)
Probably something has changed in GraphQL but not sure if BatchLoader should adapt or GraphQL has a bug. Related issue on GraphQL: rmosolgo/graphql-ruby#1778
Changelog: rmosolgo/graphql-ruby@v1.8.6...master
Hi, everyone.
Can batch-loader work with GraphQL union types?
Hi 👋 Thank you for all of your work on this gem!
I've got a question in the context of GraphQL. Let's take a typical example, a Post has an author.
You could have a Post and a User Type, and load the author (user) in a resolver. Example:
class Types::ProjectType < Types::BaseObject
  field :id, ID, null: true
  field :name, String, null: true
  field :author, resolver: Resolvers::Author
end

module Resolvers
  class Author < Resolvers::Base
    type Types::UserType, null: true

    def resolve
      BatchLoader.for(object.author_id).batch do |user_ids, loader|
        User.where(id: user_ids).each do |user|
          loader.call(user.id, user)
        end
      end
    end
  end
end
The example above works great and the N+1 HTTP/SQL problem does not exist 💯
Due to some reasons, I'd like to load the author_name
inside of the Post type and not expose the Author type at all.
I've tried loading the data in following manner, however unfortunately the N+1 problem starts occurring:
class Types::ProjectType < Types::BaseObject
  field :id, ID, null: true
  field :author_name, String, null: true

  def author_name
    author.name
  end

  private

  def author
    BatchLoader.for(object.author_id).batch do |user_ids, loader|
      User.where(id: user_ids).each do |user|
        loader.call(user.id, user)
      end
    end
  end
end
I'm guessing this is due to the fact that the latter implementation does not load the other author records (just because of how Ruby method calls work), so it starts loading them one by one.
Do you have a suggestion on how to tackle an implementation like this? Thank you!
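A toy model (not batch-loader itself) of why calling `author.name` inside the resolver defeats batching: each lazy value forced immediately runs its own "query", while lazy values returned to the framework and forced at the end share one batch, which is what graphql-ruby's lazy resolution does for fields that return the lazy object.

```ruby
class Lazy
  def self.queries
    @queries ||= []
  end

  def self.pending
    @pending ||= []
  end

  def initialize(key)
    @key = key
    Lazy.pending << self
  end

  def sync
    unless defined?(@value)
      # flush every pending lazy in one "query"
      keys = Lazy.pending.map { |l| l.instance_variable_get(:@key) }
      Lazy.queries << keys
      Lazy.pending.each do |l|
        l.instance_variable_set(:@value, "name-#{l.instance_variable_get(:@key)}")
      end
      Lazy.pending.clear
    end
    @value
  end
end

# Forcing immediately, like calling `.name` on the proxy in the resolver:
(1..3).map { |id| Lazy.new(id).sync }
# Lazy.queries is now [[1], [2], [3]] -- three single-row queries (N+1)

Lazy.queries.clear
# Returning the lazy object and forcing only at the end:
lazies = (1..3).map { |id| Lazy.new(id) }
lazies.map(&:sync)
# Lazy.queries is now [[1, 2, 3]] -- one batched query
```

So one hedged fix is to make `author_name` itself return the lazy value, e.g. `loader.call(user.id, user.name)` inside the batch block, instead of calling `.name` on the proxy inside the resolver.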
I think this bug is a serious problem. Every time BatchLoader.for().batch is called, a new proxy object standing in for nil is returned by default. This can cause some weird bugs in Ruby and is a waste of memory. You can reproduce it as follows:
$ irb
irb(main):001:0> require 'batch-loader'
=> true
irb(main):002:0> a = nil
=> nil
irb(main):003:0> b = BatchLoader.for(1).batch do |ids, loader|
ids.each { |id| loader.call(id, nil) }
end
=> nil
irb(main):004:0> a.object_id
=> 8
irb(main):005:0> b.object_id # why 'b' is a new object of `nil`?
=> 200
irb(main):006:0> b = BatchLoader.for(2).batch do |ids, loader|
ids.each { |id| loader.call(id, nil) }
end
=> nil
irb(main):007:0> b.object_id # a new object of `nil` again
=> 220
irb(main):008:1* unless b
irb(main):009:1* puts 1 # this should be called, but it was skipped because of a new object of `nil`
irb(main):010:0> end
=> nil
irb(main):011:1* unless a
irb(main):012:1* puts 1
irb(main):013:0> end
1
=> nil
env: batch-loader (2.0.1)
This bug results in some weird behavior: unless b cannot be evaluated as true, and it's hard to debug; it confused me a lot.
This created a regression where resolvers that do not return a BatchLoader::GraphQL fail to resolve the BatchLoader::GraphQL calls that the resolver uses.
Originally posted by @maletor in #32 (comment)
I was looking at converting my own analysers to the new Interpreter and stumbled on the fact that if I enabled both use GraphQL::Analysis::AST
and use GraphQL::Execution::Interpreter
BatchLoader::GraphQL is not working anymore.
Reading up on it and a bit in the source code I think I know why since the ability to redefine a proc for a field is removed (in favour of simplicity and speed). More information here: https://graphql-ruby.org/queries/interpreter.html
Any chance this gem can be updated to support the new GraphQL Interpreter?
Great library, this really helped bring our GraphQL response times down!
One thing we are trying to do is have a SQL query with a 1:Many relationship that we then narrow down to a single result. This is because we cannot write a SQL query to obtain the result we want, as it requires additional logic.
We solved this in our model with
def contract
  contracts.order(:start_date).where("end_date >= current_date OR end_date is NULL").first
end
Where because we have ordered the results, we know the first result is correct.
Alternatively, we could return the array and then programmatically check for the result we want by comparing the data returned.
We haven't however been able to solve this with batch-loader in our GraphQL query.
Attempt 1
BatchLoader::GraphQL.for(obj.id).batch(cache: false) do |person_ids, loader|
  Contract.where(person_id: person_ids)
          .where("contracts.end_date >= current_date OR contracts.end_date is NULL")
          .order(:start_date)
          .each { |contract| loader.call(contract.id, contract) }
end
The BatchLoader doesn't seem to respect the order in which it receives the models at all.
If we return two contracts with person_id = 1, it doesn't matter which one we pass to loader first or second. It will always return the lowest contract.id. (Why is this? How does it choose what to return when it has multiple results for the same id?)
Attempt 2
contracts = BatchLoader::GraphQL.for(obj.id).batch(default_value: []) do |person_ids, loader|
  Contract.where(person_id: person_ids).where("end_date >= current_date OR end_date is NULL").order(:start_date).each do |contract|
    loader.call(contract.person_id) { |arr| arr << contract }
  end
end

return contracts.first
This gives the correct result, but now it executes as N+1 again.
Attempt 3
BatchLoader::GraphQL.for(obj.id).batch(default_value: []) do |person_ids, loader|
  Contract.where(person_id: person_ids).where("end_date >= current_date OR end_date is NULL").order(:start_date).each do |contract|
    loader.call(contract.person_id) do |arr|
      arr << contract if arr.empty?
      arr
    end
  end
end
This works, except that it returns an array with a single value. Really we just want to return the value.
I am not sure how we can go about this.
Is there a way that we can have an array result returned and then manipulate it?
Or is there a way we can ensure the first/last value is returned when there are conflicting results?
Am I missing something completely?
(Feel free to close, since this isn't actually a bug or feature request, but more a question about the usage of the library.)
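One detail worth noting in Attempt 1: the loader is keyed by `contract.id` while the batch is built from `obj.id` (a person id), so the keys never match. A hedged alternative to Attempt 3 is to group inside the batch block and pass only the first row per person to `loader.call(person_id, contract)`, so the field resolves to a single record rather than a one-element array. A pure-Ruby sketch of the grouping step (Contract here is an illustrative Struct, not the app model):

```ruby
Contract = Struct.new(:id, :person_id, :start_date)

rows = [
  Contract.new(2, 1, "2024-02-01"),
  Contract.new(1, 1, "2024-01-01"),
  Contract.new(3, 2, "2024-03-01"),
]

# Order first (the SQL ORDER BY), group by the batch key, keep only the
# first row per key -- then `loader.call(person_id, contract)` inside
# the batch block resolves each person to a single record.
firsts = rows.sort_by(&:start_date)
             .group_by(&:person_id)
             .transform_values(&:first)

firsts[1].id # => 1
firsts[2].id # => 3
```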
I am using the method outlined in loading multiple items to batch load a 1:many relationship but it is putting all of the items on the many side of the relationship on the first object.
Our models are:
class Project < ApplicationRecord
  has_many :photos
end

class Photo < ApplicationRecord
  belongs_to :project
end

field :photos, types[Types::PhotoType] do
  argument :limit, types.Int, default_value: 50, prepare: ->(limit, ctx) { [limit, 50].min }
  resolve ->(project, args, ctx) {
    BatchLoader.for(project.id).batch(default_value: []) do |project_ids, loader|
      Image.where(location_id: project_ids).each do |photo|
        loader.call(project.id) { |memo| memo << photo }
      end
    end
  }
end
My query in graphql is:
{
viewer {
company {
projects(limit: 5) {
id
photos(limit: 6) {
id
}
}
}
}
}
Which returns:
{
"data": {
"viewer": {
"company": {
"projects": [
{
"id": "20565654",
"photos": [
{
"id": "207724043"
},
{
"id": "207724044"
},
{
"id": "207724055"
},
{
"id": "207724054"
},
{
"id": "207724053"
},
{
"id": "207724052"
},
{
"id": "207724051"
},
{
"id": "207724050"
},
{
"id": "207724049"
},
{
"id": "207724048"
},
{
"id": "207724047"
}
]
},
{
"id": "20565653",
"photos": []
},
{
"id": "20565652",
"photos": []
},
{
"id": "20565651",
"photos": []
},
{
"id": "20565650",
"photos": []
}
]
}
}
}
}
As you can see, all of the photos are on the first project and not on the subsequent projects. I assume I am missing some detail, but any help would be appreciated.
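The likely culprit is `loader.call(project.id)` inside the batch block: the block runs once with all five project IDs while `project` stays bound to a single project, so every photo lands there. Keying by each photo's own foreign key distributes them; a sketch (Photo as an illustrative Struct — the original query uses `Image.where(location_id: ...)`, so the key may be `location_id`):

```ruby
Photo = Struct.new(:id, :location_id)

rows = [Photo.new(10, 1), Photo.new(11, 2), Photo.new(12, 1)]

# Group by each row's own foreign key -- the equivalent of
# `loader.call(photo.location_id) { |memo| memo << photo }` --
# rather than by the single captured `project.id`.
by_project = Hash.new { |h, k| h[k] = [] }
rows.each { |photo| by_project[photo.location_id] << photo }

by_project[1].map(&:id) # => [10, 12]
by_project[2].map(&:id) # => [11]
```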
Modified printing code:
func = ->(item, value = (no_value = true; nil), &block) do
  p [item, value, no_value, block]
  if no_value && !block
    raise ArgumentError, "Please pass a value or a block"
  elsif block && !no_value
    raise ArgumentError, "Please pass a value or a block, not both"
  end
end

func.call(1, 'haha')

Ruby 2.4.0
Prints [1, "haha", nil, nil]
Ruby 2.1.2
Prints [1, "haha", true, nil]
Raises ArgumentError: Please pass a value or a block
If you have a Post with many Comments and you also want to query individual Comments outside of a given Post, it's not possible to batch all of those Comments together.
I'm running into a situation where I need a universal "cache" of objects; it's not enough to just nest them under one association.
Here's a different example, from a MessageType object with Users and Receipts (a join table for users and messages):
BatchLoader.for(obj.sender_id).batch do |ids, loader|
  User.where(id: ids).each { |record| loader.call(record.id, record) }
end

BatchLoader.for(obj.id).batch(default_value: []) do |message_ids, loader|
  receipts = Receipt.where(message_id: message_ids).includes(:user)
  receipts.each { |r| loader.call(r.message_id) { |memo| memo << r.user } }
end
How do you make it so that when a sender is also a recipient, it's not broken into a separate query? Can you nest BatchLoaders? I wasn't able to get any success with that.