
Chewy

Chewy is an ODM (Object Document Mapper), built on top of the official Elasticsearch client.

Why Chewy?

In this section we'll cover why you might want to use Chewy instead of the official elasticsearch-ruby client gem.

  • Every index is observable by all the related models.

    Most of the indexed models are related to others, and sometimes it is necessary to denormalize this related data and store it in the same object. For example, you might need to index an array of tags together with an article. Chewy allows you to specify an updateable index for every model separately, so the corresponding articles will be reindexed on any tag update.

  • Bulk import everywhere.

    Chewy utilizes the bulk ES API for full reindexing or index updates. It also uses atomic updates. All the changed objects are collected inside the atomic block and the index is updated once at the end with all the collected objects. See Chewy.strategy(:atomic) for more details.

  • Powerful querying DSL.

    Chewy has an ActiveRecord-style query DSL. It is chainable, mergeable and lazy, so you can produce queries in the most efficient way. It also has object-oriented query and filter builders.

  • Support for ActiveRecord.

Installation

Add this line to your application's Gemfile:

gem 'chewy'

And then execute:

$ bundle

Or install it yourself as:

$ gem install chewy

Compatibility

Ruby

Chewy is compatible with MRI 3.0-3.2¹.

¹ Ruby 3 is only supported with Rails 6.1

Elasticsearch compatibility matrix

Chewy version Elasticsearch version
7.2.x 7.x
7.1.x 7.x
7.0.x 6.8, 7.x
6.0.0 5.x, 6.x
5.x 5.x, limited support for 1.x & 2.x

Important: Chewy doesn't follow SemVer, so you should always check the release notes before upgrading. The major version tracks the newest supported Elasticsearch version, and minor version bumps may include breaking changes.

See our migration guide for detailed upgrade instructions between various Chewy versions.

Active Record

Active Record 5.2, 6.0 and 6.1 are supported by all Chewy versions.

Getting Started

Chewy provides functionality for Elasticsearch index handling, document import, mappings, index update strategies and a chainable query DSL.

Minimal client setting

Create config/initializers/chewy.rb with this line:

Chewy.settings = {host: 'localhost:9250'}

And run rails g chewy:install to generate chewy.yml:

# config/chewy.yml
# separate environment configs
test:
  host: 'localhost:9250'
  prefix: 'test'
development:
  host: 'localhost:9200'

Elasticsearch

Make sure you have Elasticsearch up and running. You can install it locally, but the easiest way is to use Docker:

$ docker run --rm --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:7.11.1

Index

Create app/chewy/users_index.rb with the UsersIndex:

class UsersIndex < Chewy::Index
  settings analysis: {
    analyzer: {
      email: {
        tokenizer: 'keyword',
        filter: ['lowercase']
      }
    }
  }

  index_scope User
  field :first_name
  field :last_name
  field :email, analyzer: 'email'
end

Model

Add the User model and table, then migrate:

$ bundle exec rails g model User first_name last_name email
$ bundle exec rails db:migrate

Add update_index to app/models/user.rb:

class User < ApplicationRecord
  update_index('users') { self }
end

Example of data request

  1. Once a record is created (e.g. via the Rails console), the User index is updated as well:
User.create(
  first_name: "test1",
  last_name: "test1",
  email: '[email protected]',
  # other fields
)
# UsersIndex Import (355.3ms) {:index=>1}
# => #<User id: 1, first_name: "test1", last_name: "test1", email: "[email protected]", # other fields>
  2. A query can then be exposed in a UsersController:
def search
  @users = UsersIndex.query(query_string: { fields: [:first_name, :last_name, :email, ...], query: search_params[:query], default_operator: 'and' })
  render json: @users.to_json, status: :ok
end

private

def search_params
  params.permit(:query, :page, :per)
end
  3. So a request against http://localhost:3000/users/[email protected] returns a response like:
[
  {
    "attributes":{
      "id":"1",
      "first_name":"test1",
      "last_name":"test1",
      "email":"[email protected]",
      ...
      "_score":0.9808291,
      "_explanation":null
    },
    "_data":{
      "_index":"users",
      "_type":"_doc",
      "_id":"1",
      "_score":0.9808291,
      "_source":{
        "first_name":"test1",
        "last_name":"test1",
        "email":"[email protected]",
        ...
      }
    }
  }
]

Usage and configuration

Client settings

To configure the Chewy client, add a chewy.rb initializer with a Chewy.settings hash:

# config/initializers/chewy.rb
Chewy.settings = {host: 'localhost:9250'} # do not use environments

And add chewy.yml configuration file.

You can create chewy.yml manually or run rails g chewy:install to generate it:

# config/chewy.yml
# separate environment configs
test:
  host: 'localhost:9250'
  prefix: 'test'
development:
  host: 'localhost:9200'

The resulting config merges both hashes. Client options are passed as is to Elasticsearch::Transport::Client except for the :prefix, which is used internally by Chewy to create prefixed index names:

  Chewy.settings = {prefix: 'test'}
  UsersIndex.index_name # => 'test_users'

The logger may be set explicitly:

Chewy.logger = Logger.new(STDOUT)

See config.rb for more details.

AWS Elasticsearch

If you would like to use AWS's Elasticsearch with an IAM user policy, you will need to sign your requests for the es:* action by injecting the appropriate headers via a proc passed to transport_options. You'll need an additional gem for the Faraday middleware: add gem 'faraday_middleware-aws-sigv4' to your Gemfile.

require 'faraday_middleware/aws_sigv4'

Chewy.settings = {
  host: 'http://my-es-instance-on-aws.us-east-1.es.amazonaws.com:80',
  port: 80, # 443 for https host
  transport_options: {
    headers: { content_type: 'application/json' },
    proc: -> (f) do
        f.request :aws_sigv4,
                  service: 'es',
                  region: 'us-east-1',
                  access_key_id: ENV['AWS_ACCESS_KEY'],
                  secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
    end
  }
}

Index definition

  1. Create /app/chewy/users_index.rb
class UsersIndex < Chewy::Index

end
  2. Define the index scope (you can omit this part if you don't need to specify a scope, i.e. you use PORO objects for import, or options)
class UsersIndex < Chewy::Index
  index_scope User.active # or just the model instead of a scope: index_scope User
end
  3. Add some mappings
class UsersIndex < Chewy::Index
  index_scope User.active.includes(:country, :badges, :projects)
  field :first_name, :last_name # multiple fields without additional options
  field :email, analyzer: 'email' # Elasticsearch-related options
  field :country, value: ->(user) { user.country.name } # custom value proc
  field :badges, value: ->(user) { user.badges.map(&:name) } # passing array values to index
  field :projects do # the same block syntax for multi_field, if `:type` is specified
    field :title
    field :description # default data type is `text`
    # additional top-level objects passed to value proc:
    field :categories, value: ->(project, user) { project.categories.map(&:name) if user.active? }
  end
  field :rating, type: 'integer' # custom data type
  field :created, type: 'date', include_in_all: false,
    value: ->{ created_at } # value proc for source object context
end

See here for mapping definitions.

  4. Add some index-related settings. Analyzer repositories might be used as well; see the Chewy::Index.settings docs for details:
class UsersIndex < Chewy::Index
  settings analysis: {
    analyzer: {
      email: {
        tokenizer: 'keyword',
        filter: ['lowercase']
      }
    }
  }

  index_scope User.active.includes(:country, :badges, :projects)
  root date_detection: false do
    template 'about_translations.*', type: 'text', analyzer: 'standard'

    field :first_name, :last_name
    field :email, analyzer: 'email'
    field :country, value: ->(user) { user.country.name }
    field :badges, value: ->(user) { user.badges.map(&:name) }
    field :projects do
      field :title
      field :description
    end
    field :about_translations, type: 'object' # pass object type explicitly if necessary
    field :rating, type: 'integer'
    field :created, type: 'date', include_in_all: false,
      value: ->{ created_at }
  end
end

See index settings here. See root object settings here.

See mapping.rb for more details.

  5. Add model-observing code
class User < ActiveRecord::Base
  update_index('users') { self } # specifying index and back-reference
                                      # for updating after user save or destroy
end

class Country < ActiveRecord::Base
  has_many :users

  update_index('users') { users } # return single object or collection
end

class Project < ActiveRecord::Base
  update_index('users') { user if user.active? } # you can return even `nil` from the back-reference
end

class Book < ActiveRecord::Base
  update_index(->(book) {"books_#{book.language}"}) { self } # dynamic index name with proc.
                                                             # For book with language == "en"
                                                             # this code will generate `books_en`
end

Also, you can use the second argument for method name passing:

update_index('users', :self)
update_index('users', :users)

In the case of a belongs_to association you may need to update both associated objects, previous and current:

class City < ActiveRecord::Base
  belongs_to :country

  update_index('cities') { self }
  update_index 'countries' do
    previous_changes['country_id'] || country
  end
end

Default import options

Every index has a default_import_options configuration for specifying, unsurprisingly, default import options:

class ProductsIndex < Chewy::Index
  index_scope Post.includes(:tags)
  default_import_options batch_size: 100, bulk_size: 10.megabytes, refresh: false

  field :name
  field :tags, value: -> { tags.map(&:name) }
end

See import.rb for available options.

Multi (nested) and object field types

To define an objects field you can simply nest fields in the DSL:

field :projects do
  field :title
  field :description
end

This will automatically set the type of the root field to object. You may also specify type: 'object' (or 'nested') explicitly.

To define a multi field you have to specify any type except object or nested for the root field:

field :full_name, type: 'text', value: ->{ full_name.strip } do
  field :ordered, analyzer: 'ordered'
  field :untouched, type: 'keyword'
end

Note that the value: option for the internal fields will have no effect.

Geo Point fields

You can use Elasticsearch's geo mapping with the geo_point field type, allowing you to query, filter and order by latitude and longitude. You can use the following hash format:

field :coordinates, type: 'geo_point', value: ->{ {lat: latitude, lon: longitude} }

or by using nested fields:

field :coordinates, type: 'geo_point' do
  field :lat, value: ->{ latitude }
  field :lon, value: ->{ longitude }
end

See the section on Script fields for details on calculating distance in a search.
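
As a simpler sketch that does not use script fields, a plain Elasticsearch geo-distance sort hash can be passed to order; the PlacesIndex name and coordinate values below are hypothetical, but the coordinates field matches the geo_point mapping above:

PlacesIndex.order(
  _geo_distance: {
    coordinates: {lat: 51.5074, lon: -0.1278},
    order: 'asc',
    unit: 'km'
  }
)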

Join fields

You can use a join field to implement parent-child relationships between documents. It replaces the old parent_id based parent-child mapping

To use it, you need to pass relations and join (with type and id) options:

field :hierarchy_link, type: :join, relations: {question: %i[answer comment], answer: :vote, vote: :subvote}, join: {type: :comment_type, id: :commented_id}

assuming you have comment_type and commented_id fields in your model.

Note that when you reindex a parent, its children and grandchildren will be reindexed as well. This may require additional queries to the primary database and to Elasticsearch.

Also note that the join field doesn't support crutches (it should be a field directly defined on the model).

Crutches™ technology

Assume you are defining your index like this (product has_many categories through product_categories):

class ProductsIndex < Chewy::Index
  index_scope Product.includes(:categories)
  field :name
  field :category_names, value: ->(product) { product.categories.map(&:name) } # or shorter just -> { categories.map(&:name) }
end

Then the Chewy reindexing flow will look like the following pseudo-code:

Product.includes(:categories).find_in_batches(1000) do |batch|
  bulk_body = batch.map do |object|
    {name: object.name, category_names: object.categories.map(&:name)}.to_json
  end
  # here we are sending every batch of data to ES
  Chewy.client.bulk bulk_body
end

If you run into complicated cases where associations are not applicable, you can replace Rails associations with Chewy's Crutches™ technology:

class ProductsIndex < Chewy::Index
  index_scope Product
  crutch :categories do |collection| # collection here is a current batch of products
    # data is fetched with a lightweight query without objects initialization
    data = ProductCategory.joins(:category).where(product_id: collection.map(&:id)).pluck(:product_id, 'categories.name')
    # then we have to convert fetched data to appropriate format
    # this will return our data in structure like:
    # {123 => ['sweets', 'juices'], 456 => ['meat']}
    data.each.with_object({}) { |(id, name), result| (result[id] ||= []).push(name) }
  end

  field :name
  # simply use crutch-fetched data as a value:
  field :category_names, value: ->(product, crutches) { crutches[:categories][product.id] }
end

An example flow will look like this:

Product.includes(:categories).find_in_batches(1000) do |batch|
  crutches[:categories] = ProductCategory.joins(:category).where(product_id: batch.map(&:id)).pluck(:product_id, 'categories.name')
    .each.with_object({}) { |(id, name), result| (result[id] ||= []).push(name) }

  bulk_body = batch.map do |object|
    {name: object.name, category_names: crutches[:categories][object.id]}.to_json
  end
  Chewy.client.bulk bulk_body
end

So Chewy Crutches™ technology can increase your indexing performance, in some cases up to a hundredfold or more, depending on the complexity of your associations.

Witchcraft™ technology

One more experimental technology to increase import performance. As you know, Chewy defines a value proc for every imported field in the mapping, so at import time each of these procs is executed on the imported object to extract the resulting document. It would be much better for performance to use one big proc that returns the whole document instead. So the basic idea of Witchcraft™ technology is to compile a single document-returning proc from the index definition.

index_scope Product
witchcraft!

field :title
field :tags, value: -> { tags.map(&:name) }
field :categories do
  field :name, value: -> (product, category) { category.name }
  field :type, value: -> (product, category, crutch) { crutch.types[category.name] }
end

The index definition above will be compiled to something close to:

-> (object, crutches) do
  {
    title: object.title,
    tags: object.tags.map(&:name),
    categories: object.categories.map do |object2|
      {
        name: object2.name,
        type: crutches.types[object2.name]
      }
    end
  }
end

And don't even ask how it is possible; it's witchcraft. Obviously, not every kind of definition can be compiled. There are some restrictions:

  1. Use reasonable formatting to make method_source be able to extract field value proc sources.
  2. Value procs with splat arguments are not supported right now.
  3. If you are generating fields dynamically, use a value proc with arguments; argument-less value procs are not supported yet:
[:first_name, :last_name].each do |name|
  field name, value: -> (o) { o.send(name) }
end

However, it is quite likely that your index definition will be supported by Witchcraft™ technology out of the box in most cases.

Raw Import

Another way to speed up import time is raw imports. This technique is only available with the ActiveRecord adapter. Very often, ActiveRecord model instantiation is what consumes most of the CPU and RAM. Precious time is wasted on converting, say, timestamps from strings and then serializing them back to strings. Chewy can operate on raw hashes of data obtained directly from the database. All you need is to provide a way to convert that hash to a lightweight object that mimics the behaviour of the normal ActiveRecord object.

class LightweightProduct
  def initialize(attributes)
    @attributes = attributes
  end

  # Depending on the database, `created_at` might
  # be in different formats. In PostgreSQL, for example,
  # you might see the following format:
  #   "2016-03-22 16:23:22"
  #
  # Taking into account that Elastic expects something different,
  # one might do something like the following, just to avoid
  # unnecessary String -> DateTime -> String conversion.
  #
  #   "2016-03-22 16:23:22" -> "2016-03-22T16:23:22Z"
  def created_at
    @attributes['created_at'].tr(' ', 'T') << 'Z'
  end
end

index_scope Product
default_import_options raw_import: ->(hash) {
  LightweightProduct.new(hash)
}

field :created_at, 'datetime'

Also, you can pass the :raw_import option to the import method explicitly.
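
For example (a minimal sketch; assuming the definition above lives in a ProductsIndex and reusing the LightweightProduct class):

ProductsIndex.import(raw_import: ->(hash) { LightweightProduct.new(hash) })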

Index creation during import

By default, when you perform an import, Chewy checks whether the index exists and creates it if it's absent. You can turn this feature off to decrease the number of Elasticsearch requests: set the skip_index_creation_on_import parameter to true in your config/chewy.yml.
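
A minimal sketch of what that could look like (the production section and host are illustrative):

# config/chewy.yml
production:
  host: 'localhost:9200'
  skip_index_creation_on_import: true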

Skip record fields during import

You can use ignore_blank: true to skip fields that return true for the .blank? method:

index_scope Country
field :id
field :cities, ignore_blank: true do
  field :id
  field :name
  field :surname, ignore_blank: true
  field :description
end

Default values for different types

By default ignore_blank is false on every type except geo_point.

Journaling

You can record all performed actions in a separate journal index in Elasticsearch. When you create/update/destroy documents, the action is saved in this special index. If you do something with a batch of documents (e.g. during an index reset), it is saved as a single record that includes the primary keys of each affected document. A common journal record looks like this:

{
  "action": "index",
  "object_id": [1, 2, 3],
  "index_name": "...",
  "created_at": "<timestamp>"
}

This feature is turned off by default. You can turn it on by setting the journal option to true in config/chewy.yml. You can also specify the journal index name. For example:

# config/chewy.yml
production:
  journal: true
  journal_name: my_super_journal

Also, you can provide this option when importing an index:

CityIndex.import journal: true

Or as a default import option for an index:

class CityIndex < Chewy::Index
  index_scope City
  default_import_options journal: true
end

You may be wondering why you would need this. The answer is simple: to avoid losing data.

Imagine that you reset your index in a zero-downtime manner (into a separate index) while somebody keeps updating the data frequently (in the old index). All these actions will be written to the journal index, and you'll be able to apply them after the index reset using the Chewy::Journal interface.

When enabled, the journal can grow to an enormous size; consider setting up a cron job that cleans it occasionally using the chewy:journal:clean rake task.

Index manipulation

UsersIndex.delete # destroy index if it exists
UsersIndex.delete!

UsersIndex.create
UsersIndex.create! # use bang or non-bang methods

UsersIndex.purge
UsersIndex.purge! # deletes then creates index

UsersIndex.import # importing with no arguments processes all the data specified in the index_scope definition
UsersIndex.import User.where('rating > 100') # or import a specified users scope
UsersIndex.import User.where('rating > 100').to_a # or import a specified users array
UsersIndex.import [1, 2, 42] # you can even pass ids; they will be handled in the most efficient way
UsersIndex.import User.where('rating > 100'), update_fields: [:email] # if update fields are specified, only their values are updated, using the `update` bulk action
UsersIndex.import! # raises an exception in case of any import errors

UsersIndex.reset! # purges the index and imports the default data

If a passed user is #destroyed?, satisfies a delete_if index_scope option, or the specified id does not exist in the database, import will perform a delete-from-index action for this object.

index_scope User, delete_if: :deleted_at
index_scope User, delete_if: -> { deleted_at }
index_scope User, delete_if: ->(user) { user.deleted_at }

See actions.rb for more details.

Index update strategies

Assume you've got the following code:

class City < ActiveRecord::Base
  update_index 'cities', :self
end

class CitiesIndex < Chewy::Index
  index_scope City
  field :name
end

If you do something like City.first.save! you'll get an UndefinedUpdateStrategy exception instead of the object being saved and the index updated. This exception forces you to choose an appropriate update strategy for the current context.

If you want to return to the pre-0.7.0 behavior, just set Chewy.root_strategy = :bypass.
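
A minimal sketch of doing that in an initializer:

# config/initializers/chewy.rb
Chewy.root_strategy = :bypass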

:atomic

The main strategy here is :atomic. Assume you have to update a lot of records in the db.

Chewy.strategy(:atomic) do
  City.popular.map(&:do_some_update_action!)
end

Using this strategy delays the index update request until the end of the block. Updated records are aggregated and the index update happens with the bulk API. So this strategy is highly optimized.

:sidekiq

This does the same thing as :atomic, but asynchronously using Sidekiq. Patch Chewy::Strategy::Sidekiq::Worker if you need to customize how the index updates are performed.

Chewy.strategy(:sidekiq) do
  City.popular.map(&:do_some_update_action!)
end

The default queue name is chewy; you can customize it via the sidekiq.queue_name setting:

Chewy.settings[:sidekiq] = {queue: :low}

:lazy_sidekiq

This does the same thing as :sidekiq, but with lazy evaluation. Beware that it does not allow you to use any non-persisted record state for indexing and conditions, because the record will be re-fetched from the database asynchronously by Sidekiq. For destroyed records, however, the strategy falls back to :sidekiq, because it's not possible to re-fetch deleted records from the database.

The purpose of this strategy is to improve the response time of the code that updates indexes, as it defers not only the actual ES calls to a background job but also the evaluation of the update_index callbacks (for created and updated objects). Similar to :sidekiq, the index update is asynchronous, so this strategy cannot be used when data and index synchronization is required.

Chewy.strategy(:lazy_sidekiq) do
  City.popular.map(&:do_some_update_action!)
end

The default queue name is chewy; you can customize it via the sidekiq.queue_name setting:

Chewy.settings[:sidekiq] = {queue: :low}

:delayed_sidekiq

It accumulates IDs of records to be reindexed during the latency window in Redis and then performs the reindexing of all accumulated records at once. This strategy is very useful in the case of frequently mutated records. It supports the update_fields option, so it will attempt to select just enough data from the database.

Keep in mind, this strategy does not guarantee reindexing in the event of Sidekiq worker termination or an error during the reindexing phase. This behavior is intentional to prevent continuous growth of Redis db.

There are three options that can be defined in the index:

class CitiesIndex...
  strategy_config delayed_sidekiq: {
    latency: 3,
    margin: 2,
    ttl: 60 * 60 * 24,
    reindex_wrapper: ->(&reindex) {
      ActiveRecord::Base.connected_to(role: :reading) { reindex.call }
    }
    # latency - prevents scheduling identical jobs within the window
    # margin - mainly covers db replication lag
    # ttl - chunk expiration time (in seconds)
    # reindex_wrapper - lambda that wraps the reindex process, e.g. in an AR connection block
  }

  ...
end

You can also define defaults in config/initializers/chewy.rb:

Chewy.settings = {
  strategy_config: {
    delayed_sidekiq: {
      latency: 3,
      margin: 2,
      ttl: 60 * 60 * 24,
      reindex_wrapper: ->(&reindex) {
        ActiveRecord::Base.connected_to(role: :reading) { reindex.call }
      }
    }
  }
}

or in config/chewy.yml

  strategy_config:
    delayed_sidekiq:
      latency: 3
      margin: 2
      ttl: <%= 60 * 60 * 24 %>
      # reindex_wrapper setting is not possible here!!! use the initializer instead

You can use the strategy just like the other strategies:

Chewy.strategy(:delayed_sidekiq) do
  City.popular.map(&:do_some_update_action!)
end

The default queue name is chewy; you can customize it via the sidekiq.queue_name setting:

Chewy.settings[:sidekiq] = {queue: :low}

An explicit reindex call using the :delayed_sidekiq strategy:

CitiesIndex.import([1, 2, 3], strategy: :delayed_sidekiq)

An explicit reindex call using the :delayed_sidekiq strategy with :update_fields support:

CitiesIndex.import([1, 2, 3], update_fields: [:name], strategy: :delayed_sidekiq)

When running tests with the delayed_sidekiq strategy while Sidekiq uses a real Redis instance that is NOT flushed between tests (via e.g. Sidekiq.redis(&:flushdb)), you'll want to clean up some Redis keys between tests to avoid state leaking and flaky tests. Chewy provides a convenience method for that:

# it might be a good idea to also add this to your testing setup, e.g. an RSpec `before` hook
Chewy::Strategy::DelayedSidekiq.clear_timechunks!

:active_job

This does the same thing as :atomic, but using ActiveJob. It inherits the ActiveJob configuration settings, including the active_job.queue_adapter setting for the environment. Patch Chewy::Strategy::ActiveJob::Worker if you need to customize how the index updates are performed.

Chewy.strategy(:active_job) do
  City.popular.map(&:do_some_update_action!)
end

The default queue name is chewy; you can customize it via the active_job.queue_name setting:

Chewy.settings[:active_job] = {queue: :low}

:urgent

The following strategy is convenient if you are going to update documents in your index one by one.

Chewy.strategy(:urgent) do
  City.popular.map(&:do_some_update_action!)
end

This code will perform City.popular.count separate requests to update the ES documents.

It is convenient for use in e.g. the Rails console with non-block notation:

> Chewy.strategy(:urgent)
> City.popular.map(&:do_some_update_action!)

:bypass

When the bypass strategy is active the index will not be automatically updated on object save.

For example, on City.first.save! the cities index would not be updated.
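
A minimal sketch, mirroring the examples above:

Chewy.strategy(:bypass) do
  City.popular.map(&:do_some_update_action!) # no index update requests are issued
end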

Nesting

Strategies are designed to allow nesting, so it is possible to redefine the strategy for nested contexts.

Chewy.strategy(:atomic) do
  city1.do_update!
  Chewy.strategy(:urgent) do
    city2.do_update!
    city3.do_update!
    # there will be 2 update index requests for city2 and city3
  end
  city4.do_update!
  # city1 and city4 will be grouped in one index update request
end

Non-block notation

It is possible to nest strategies without blocks:

Chewy.strategy(:urgent)
city1.do_update! # index updated
Chewy.strategy(:bypass)
city2.do_update! # update bypassed
Chewy.strategy.pop
city3.do_update! # index updated again

Designing your own strategies

See strategy/base.rb for more details. See strategy/atomic.rb for an example.

Rails application strategies integration

There are a couple of predefined strategies for your Rails application. Initially, the Rails console uses the :urgent strategy by default, except in the sandbox case: when you are running a sandboxed console, it switches to the :bypass strategy to avoid polluting the index.

Migrations are wrapped with the :bypass strategy. Because the main workflow implies that indices are reset after a migration, there is no need for extra index updates. Also, indexing might be broken during migrations because of an outdated schema.

Controller actions are wrapped with the strategy configured by Chewy.request_strategy, which defaults to :atomic. This is done at the middleware level to reduce the number of index update requests inside actions.
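
For example, to switch controller actions to the Sidekiq-backed strategy instead (a sketch; put it in config/initializers/chewy.rb):

Chewy.request_strategy = :sidekiq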

It is also a good idea to set up the :bypass strategy inside your test suite, import objects manually only when needed, and use Chewy.massacre to flush the test ES indices before every example. This will minimize unnecessary ES requests and reduce overhead.

RSpec.configure do |config|
  config.before(:suite) do
    Chewy.strategy(:bypass)
  end
end

Elasticsearch client options

All connection options, except :prefix, are passed to Elasticsearch::Client.new (see chewy/lib/chewy.rb).

Here's the relevant Elasticsearch documentation on the subject: https://rubydoc.info/gems/elasticsearch-transport#setting-hosts
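
As a sketch, a combined configuration might look like this (the prefix and timeout values are illustrative):

Chewy.settings = {
  prefix: 'myapp',
  host: 'localhost:9200',
  transport_options: {request: {timeout: 30}}
}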

ActiveSupport::Notifications support

Chewy publishes the following events; a minimal subscription sketch follows the payload descriptions below:

search_query.chewy payload

  • payload[:index]: requested index class
  • payload[:request]: request hash

import_objects.chewy payload

  • payload[:index]: currently imported index name

  • payload[:import]: import stats, i.e. the total count of imported and deleted objects:

    {index: 30, delete: 5}
  • payload[:errors]: may be absent. Contains errors grouped by message, with lists of affected object ids:

    {index: {
      'error 1 text' => ['1', '2', '3'],
      'error 2 text' => ['4']
    }, delete: {
      'delete error text' => ['10', '12']
    }}
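
These events can be consumed with a regular ActiveSupport::Notifications subscriber; a minimal logging sketch (the log format is an assumption):

ActiveSupport::Notifications.subscribe('search_query.chewy') do |name, start, finish, id, payload|
  Rails.logger.info("[chewy] #{payload[:index]} search took #{(finish - start).round(3)}s")
end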

NewRelic integration

To integrate with NewRelic you may use the following example source (config/initializers/chewy.rb):

require 'new_relic/agent/instrumentation/evented_subscriber'

class ChewySubscriber < NewRelic::Agent::Instrumentation::EventedSubscriber
  def start(name, id, payload)
    event = ChewyEvent.new(name, Time.current, nil, id, payload)
    push_event(event)
  end

  def finish(_name, id, _payload)
    pop_event(id).finish
  end

  class ChewyEvent < NewRelic::Agent::Instrumentation::Event
    OPERATIONS = {
      'import_objects.chewy' => 'import',
      'search_query.chewy' => 'search',
      'delete_query.chewy' => 'delete'
    }.freeze

    def initialize(*args)
      super
      @segment = start_segment
    end

    def start_segment
      segment = NewRelic::Agent::Transaction::DatastoreSegment.new product, operation, collection, host, port
      if (txn = state.current_transaction)
        segment.transaction = txn
      end
      segment.notice_sql @payload[:request].to_s
      segment.start
      segment
    end

    def finish
      if (txn = state.current_transaction)
        txn.add_segment @segment
      end
      @segment.finish
    end

    private

    def state
      @state ||= NewRelic::Agent::TransactionState.tl_get
    end

    def product
      'Elasticsearch'
    end

    def operation
      OPERATIONS[name]
    end

    def collection
      payload.values_at(:type, :index)
             .reject { |value| value.try(:empty?) }
             .first
             .to_s
    end

    def host
      Chewy.client.transport.hosts.first[:host]
    end

    def port
      Chewy.client.transport.hosts.first[:port]
    end
  end
end

ActiveSupport::Notifications.subscribe(/.chewy$/, ChewySubscriber.new)

Search requests

Quick introduction.

Composing requests

The request DSL has the same chainable nature as AR. The main class is Chewy::Search::Request.

CitiesIndex.query(match: {name: 'London'})

The main methods of the request DSL are query, filter and post_filter; it is possible to pass raw query hashes or use elasticsearch-dsl.

CitiesIndex
  .filter(term: {name: 'Bangkok'})
  .query(match: {name: 'London'})
  .query.not(range: {population: {gt: 1_000_000}})

You can query a set of indexes at once:

CitiesIndex.indices(CountriesIndex).query(match: {name: 'Some'})

See https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html and https://github.com/elastic/elasticsearch-dsl-ruby for more details.

An important part of request manipulation is merging. There are 4 methods to perform it: merge, and, or, not. See Chewy::Search::QueryProxy for details. Also, the only and except methods help to remove unneeded parts of the request.

Every other request part is covered by a bunch of additional methods, see Chewy::Search::Request for details:

CitiesIndex.limit(10).offset(30).order(:name, {population: {order: :desc}})

The request DSL also provides additional scope actions, like delete_all, exists?, count, pluck, etc.

Pagination

The request DSL supports pagination with Kaminari. An extension is enabled on initialization if Kaminari is available. See Chewy::Search and Chewy::Search::Pagination::Kaminari for details.
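
A minimal sketch, assuming Kaminari is in the Gemfile and its default page/per API:

CitiesIndex.query(match: {name: 'London'}).page(2).per(20)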

Named scopes

Chewy supports named scopes. There is no specialized DSL for defining them; it is simply a matter of defining class methods.

See Chewy::Search::Scoping for details.
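
A minimal sketch of a named scope, assuming the index defines an integer population field:

class CitiesIndex < Chewy::Index
  index_scope City
  field :name
  field :population, type: 'integer'

  def self.popular
    filter(range: {population: {gte: 1_000_000}})
  end
end

CitiesIndex.popular.limit(10)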

Scroll API

The Elasticsearch scroll API is utilized by a bunch of methods: scroll_batches, scroll_hits, scroll_wrappers and scroll_objects.

See Chewy::Search::Scrolling for details.
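
A sketch of iterating over all documents with scroll_batches (the batch size and field access are illustrative):

CitiesIndex.order(:name).scroll_batches(batch_size: 500) do |hits|
  hits.each { |hit| puts hit['_source']['name'] }
end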

Loading objects

It is possible to load ORM/ODM source objects with the objects method. To provide additional loading options, use the load method:

CitiesIndex.load(scope: -> { active }).to_a # to_a returns `Chewy::Index` wrappers.
CitiesIndex.load(scope: -> { active }).objects # An array of AR source objects.

See Chewy::Search::Loader for more details.

When it is necessary to iterate through both the wrappers and the objects simultaneously, the object_hash method helps a lot:

scope = CitiesIndex.load(scope: -> { active })
scope.each do |wrapper|
  scope.object_hash[wrapper]
end

Rake tasks

For a Rails application, some index-maintaining rake tasks are defined.

chewy:reset

Performs zero-downtime reindexing as described here. The rake task creates a new index with a unique suffix and then simply aliases it to the common index name. The previous index is deleted afterwards (see Chewy::Index.reset! for more details).

rake chewy:reset # resets all the existing indices
rake chewy:reset[users] # resets UsersIndex only
rake chewy:reset[users,cities] # resets UsersIndex and CitiesIndex
rake chewy:reset[-users,cities] # resets every index in the application except specified ones

chewy:upgrade

Performs a reset exactly the same way chewy:reset does, but only when the index specification (settings or mappings) has changed.

It works only when the index specification is locked in the Chewy::Stash::Specification index. The first run will reset all indexes and lock their specifications.

See Chewy::Stash::Specification and Chewy::Index::Specification for more details.

rake chewy:upgrade # upgrades all the existing indices
rake chewy:upgrade[users] # upgrades UsersIndex only
rake chewy:upgrade[users,cities] # upgrades UsersIndex and CitiesIndex
rake chewy:upgrade[-users,cities] # upgrades every index in the application except specified ones

chewy:update

It doesn't create indexes; it simply imports everything into the existing ones and fails if an index was not created beforehand.

rake chewy:update # updates all the existing indices
rake chewy:update[users] # updates UsersIndex only
rake chewy:update[users,cities] # updates UsersIndex and CitiesIndex
rake chewy:update[-users,cities] # updates every index in the application except UsersIndex and CitiesIndex

chewy:sync

Provides a way to synchronize outdated indexes with the source quickly, without doing a full reset. By default the updated_at field is used to find outdated records, but this can be customized via outdated_sync_field as described in Chewy::Index::Syncer.

Arguments are similar to the ones taken by chewy:update task.

See Chewy::Index::Syncer for more details.

rake chewy:sync # synchronizes all the existing indices
rake chewy:sync[users] # synchronizes UsersIndex only
rake chewy:sync[users,cities] # synchronizes UsersIndex and CitiesIndex
rake chewy:sync[-users,cities] # synchronizes every index in the application except UsersIndex and CitiesIndex

chewy:deploy

This rake task is especially useful during the production deploy. It is a combination of chewy:upgrade and chewy:sync and the latter is called only for the indexes that were not reset during the first stage.

It is not possible to specify any particular indexes for this task as it doesn't make much sense.

Right now the approach is: if some data was updated but the index definition was not changed (no changes detectable by the synchronization algorithm were made), it is much faster to perform a manual partial index update inside data migrations, or even manually after the deploy.

Also, there is always full reset alternative with rake chewy:reset.

chewy:create_missing_indexes

This rake task creates newly defined indexes in Elasticsearch and skips existing ones. It is useful for production-like environments.
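
The invocation follows the same pattern as the other tasks:

rake chewy:create_missing_indexes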

Parallelizing rake tasks

Every task described above has its own parallel version. Every parallel rake task takes the number of processes for execution as the first argument; the rest of the arguments are exactly the same as for the non-parallel task version.

https://github.com/grosser/parallel gem is required to use these tasks.

If the number of processes is not specified explicitly, the parallel gem tries to derive it automatically.

rake chewy:parallel:reset
rake chewy:parallel:upgrade[4]
rake chewy:parallel:update[4,cities]
rake chewy:parallel:sync[4,-users]
rake chewy:parallel:deploy[4] # performs parallel upgrade and parallel sync afterwards

chewy:journal

This namespace contains two tasks for journal manipulation: chewy:journal:apply and chewy:journal:clean. Both take a time as the first argument (optional for clean) and a list of indexes, exactly like the tasks above. The time can be in any format parseable by ActiveSupport.

rake chewy:journal:apply["$(date -v-1H -u +%FT%TZ)"] # apply journaled changes for the past hour
rake chewy:journal:apply["$(date -v-1H -u +%FT%TZ)",users] # apply journaled changes for the past hour on UsersIndex only

When the journal becomes very large, the classic deletion approach is obstructive and resource-consuming. Fortunately, Chewy internally uses the delete-by-query ES function, which supports async execution with batching and throttling.

The available options, which can be set by ENV variables, are listed below:

  • WAIT_FOR_COMPLETION - a boolean flag. It controls async execution. It waits for completion by default. When set to false (0, f, false or off in any letter case), Elasticsearch performs some preflight checks, launches the request, and returns a task reference you can use to cancel the task or get its status.
  • REQUESTS_PER_SECOND - float. The throttle for this request in sub-requests per second. No throttling is enforced by default.
  • SCROLL_SIZE - integer. The number of documents to be deleted in a single sub-request. The default batch size is 1000.
rake chewy:journal:clean WAIT_FOR_COMPLETION=false REQUESTS_PER_SECOND=10 SCROLL_SIZE=5000

RSpec integration

Just add require 'chewy/rspec' to your spec_helper.rb and you will get additional features:

  • update_index helper
  • mock_elasticsearch_response helper to mock an Elasticsearch response
  • mock_elasticsearch_response_sources helper to mock Elasticsearch response sources
  • build_query matcher to compare a request with an expected query (returns true/false)

To use mock_elasticsearch_response and mock_elasticsearch_response_sources helpers add include Chewy::Rspec::Helpers to your tests.
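
For example, the update_index matcher can be used in a model spec like this (a sketch; the User model and UsersIndex from the Getting Started section are assumed):

RSpec.describe User do
  specify do
    expect { User.create!(first_name: 'a', last_name: 'b', email: 'a@example.com') }
      .to update_index(UsersIndex)
  end
end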

See chewy/rspec/ for more details.

Minitest integration

Add require 'chewy/minitest' to your test_helper.rb, and then for tests which you'd like indexing test hooks, include Chewy::Minitest::Helpers.

You can set the :bypass strategy for your test suite, handle imports for indexes manually, and flush test indices manually using Chewy.massacre. This will help reduce unnecessary ES requests.

But if you need Chewy to index/update models regularly in your test suite, you can specify the :urgent strategy for document indexing. Add Chewy.strategy(:urgent) to test_helper.rb:
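
A minimal sketch of that setup:

# test_helper.rb
require 'chewy/minitest'
Chewy.strategy(:urgent)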

Also, you can use additional helpers:

  • mock_elasticsearch_response to mock an Elasticsearch response
  • mock_elasticsearch_response_sources to mock Elasticsearch response sources
  • assert_elasticsearch_query to compare a request with an expected query (returns true/false)

See chewy/minitest/ for more details.

DatabaseCleaner

If you use DatabaseCleaner in your tests with the transaction strategy, you may run into the problem that ActiveRecord models are not indexed automatically on save, despite the fact that you set up the callbacks to do this with the update_index method. The issue arises because Chewy indexes data in after_commit callbacks by default, and after_commit callbacks are not run with DatabaseCleaner's transaction strategy. You can solve this issue by changing the Chewy.use_after_commit_callbacks option. Just add the following initializer to your Rails application:

#config/initializers/chewy.rb
Chewy.use_after_commit_callbacks = !Rails.env.test?

Pre-request Filter

Should you need to inspect a query prior to it being dispatched to Elasticsearch, you can use before_es_request_filter. It is a callable object, as demonstrated below:

Chewy.before_es_request_filter = -> (method_name, args, kw_args) { ... }

While using the before_es_request_filter, please consider the following:

  • before_es_request_filter acts as a simple proxy before any request made via the Elasticsearch client. The arguments passed to this filter include:
    • method_name - the name of the method being called, e.g. search, count or bulk.
    • args and kw_args - the positional and keyword arguments provided in the method call.
  • The operation is synchronous, so avoid executing any heavy or time-consuming operations within the filter to prevent performance degradation.
  • The return value of the proc is disregarded. This filter is intended for inspection or modification of the query rather than generating a response.
  • Any exception raised inside the callback will propagate upward and halt the execution of the query. It is essential to handle potential errors adequately to ensure the stability of your search functionality.
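
For example, a filter that just logs every outgoing call (a sketch; the logger destination is an assumption):

Chewy.before_es_request_filter = lambda do |method_name, args, kw_args|
  Rails.logger.debug("[chewy] #{method_name} args=#{args.inspect} kw_args=#{kw_args.inspect}")
end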

Import scope clean-up behavior

Whenever you set the import scope for an index, in the case of ActiveRecord, the order, offset and limit options will be removed. You can configure Chewy's behavior before the clean-up itself.

The default behavior is a warning sent to the Chewy logger (:warn). A more restrictive option is raising an exception (:raise). Both options have a negative impact on performance, since verifying whether the code uses any of these options requires building the AREL query.

To avoid this impact, you can skip the check entirely (:ignore):

Chewy.import_scope_cleanup_behavior = :ignore

Contributing

  1. Fork it (http://github.com/toptal/chewy/fork)
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Implement your changes, cover it with specs and make sure old specs are passing
  4. Commit your changes (git commit -am 'Add some feature')
  5. Push to the branch (git push origin my-new-feature)
  6. Create new Pull Request

Use the following Rake tasks to control the Elasticsearch cluster while developing, if you prefer native Elasticsearch installation over the dockerized one:

rake elasticsearch:start # start an Elasticsearch cluster on port 9250 for tests
rake elasticsearch:stop # stop Elasticsearch

Copyright

Copyright (c) 2013-2021 Toptal, LLC. See LICENSE.txt for further details.


chewy's Issues

"wrong constant name" with namespace models

I'm attempting to define an index like this:

module Spree
  class ProductsIndex < Chewy::Index
    define_type Product
  end
end

This shows this stacktrace:

NameError: wrong constant name Spree::Product
    from /Users/ryanbigg/.rbenv/versions/2.1.0/lib/ruby/gems/2.1.0/bundler/gems/chewy-bf2e5bc806b0/lib/chewy/type.rb:14:in `const_set'
    from /Users/ryanbigg/.rbenv/versions/2.1.0/lib/ruby/gems/2.1.0/bundler/gems/chewy-bf2e5bc806b0/lib/chewy/type.rb:14:in `new'
    from /Users/ryanbigg/.rbenv/versions/2.1.0/lib/ruby/gems/2.1.0/bundler/gems/chewy-bf2e5bc806b0/lib/chewy/index.rb:80:in `define_type'
    from /Users/ryanbigg/Projects/test/spree_test/app/chewy/spree/products_index.rb:3:in `<class:ProductsIndex>'
    from /Users/ryanbigg/Projects/test/spree_test/app/chewy/spree/products_index.rb:2:in `<module:Spree>'
    from /Users/ryanbigg/Projects/test/spree_test/app/chewy/spree/products_index.rb:1:in `<top (required)>'

Similarly,

class ProductsIndex < Chewy::Index
  define_type Spree::Product
end

Yields the same error. I think this is because my constant is namespaced inside of a module.

update_index not paying attention to model scope

If I define an index like it is done in the example:
define_type User.active.includes(:country, :badges, :projects) do

If an active user changes to inactive, it is still included in the index. Is this an issue or is this wanted behaviour? In my opinion, since there is no explicit hint in the docs, this is not very intuitive.

Collection returned from query has nil objects

After performing a query (either using term or multi_match), if I call to_a or just iterate over the scope, I notice that there are nil objects in the collection. I'm not sure if this is a Chewy issue or an ES issue. Apologies if this is an ES issue, or a me-not-understanding-ES issue. Thanks for your help!

Can i write a Sidekiq job updating the index async?

Hi there

I'm just getting started with your chewy gem because it looks great and I'd like to try it.
A question. Is it possible to hook into the update_index action and perform the updating inside an async sidekiq job?

Example:

class User <  ActiveRecord::Base
  after_save    { Indexer.perform_async(:index, self.class.name,  self.id) }
  ......
end    

And a Sidekiq worker named "Indexer":

class Indexer
  include Sidekiq::Worker

  def perform(operation, model_name, record_id)
    #update elasticsearch index here...
  end
end

#delete_from_index? doesn't work with scopes

Not sure if this is the correct behaviour or not.

class User
  ...

  def delete_from_index?
    !searchable? || need_completion?
  end
end

UsersIndex.filter{ match_all }.total_count #=> 1001
UsersIndex::User.import User.where(phone: phone) #=> true
UsersIndex.filter{ match_all }.total_count #=> 1001

UsersIndex::User.import User.where(phone: phone).to_a #=> true
UsersIndex.filter{ match_all }.total_count #=> 1000

Support for multiple hosts

Wondering if chewy can handle multiple elasticsearch hosts? I know Elasticsearch::Transport::Client can, but it seems like chewy barfs when trying to add multiple hosts.

Any thoughts? workarounds?

When is my index updated?

Hi

Would you mind explaining when an index is updated? When do I have to use the urgent: true option?

I have an Article model with this line:
update_index("global#news") { self if is_news? && published? }

Whenever I create or update an article, it is indexed correctly.

I also have a Comment model with this logic:
update_index("global#discussion") { commentable if discussion_comment? }
So, if a comment belongs to a Discussion (commentable is polymorphic and can also be Article) it should reindex the Discussion.

In the Comment example the index is NOT updated, unless I add the urgent: true option. Why?

Also, my tests for the Comment model passes regardless of the urgent option.

...
expect { comment.save! }.to update_index("global#discussion")
...

That passes but the index is not updated in reality.

Let me get down and dirty with one to one elasticsearch dsl syntax

It's cool that I can do chaining and stuff with this plugin, but I felt like I had to learn yet another DSL when using it, and I was already new to Elasticsearch, so I was a bit annoyed that I had to learn both.

I really like how simply the "elasticsearch-model" gem handles search: you literally just pass it the same JSON/hash that you would use in Elasticsearch's interactive console, Sense.

I could easily copy snippets from Elasticsearch's website and use them in my code with zero effort.

I'd really like to see chewy implement the same thing.

Faraday::Error::ConnectionFailed error in elastic search

I have installed Elasticsearch and JDK 1.7 on Windows.
When I start Elasticsearch, it starts fine.

C:\elasticsearch-1.1.1\bin>service.bat start
The service 'elasticsearch-service-x64' has been started

http://localhost:9200/ returns proper json in browser
{
  "status" : 200,
  "name" : "Ever",
  "version" : {
    "number" : "1.1.1",
    "build_hash" : "f1585f096d3f3985e73456debdc1a0745f512bbc",
    "build_timestamp" : "2014-04-16T14:27:12Z",
    "build_snapshot" : false,
    "lucene_version" : "4.7"
  },
  "tagline" : "You Know, for Search"
}

I have set my JAVA_HOME to

'C:\Program Files\Java\jdk1.7.0_55\jre'

And my java version
java -version

java version "1.7.0_55"
Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)

But when I try to do a search or import the database into Elasticsearch

FundraiserCamapignsIndex.import

It gives this error.

Faraday::Error::ConnectionFailed: getaddrinfo: No such host is known.
from c:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:762:in `initialize'

Can you tell me what I am missing here?

update_index() does not work!

I have only one index, on the Tour model:

class ToursIndex < Chewy::Index
  define_type Tour.public.includes(:tags, :assets, :prices, days: [:locations]) do
...

On the Tour model I have this:

  update_index('tours') { self }

But the index does not get updated!
If I do

rake chewy:update[tours]

the index is updated successfully.

I have tried:

  update_index('tours') { self }
  update_index('tours#tour') { self }
  update_index('tours') { Tour }

Nothing...
When I save a tour there is no log entry from Chewy...

Any suggestions as to why the index update does not occur?

Nesting of collections, unexpected behavior

Hi there!

So far I'm really enjoying using Chewy and it's working great, except one little nit and reading mappings.rb isn't helping me sort it out. I'm sure that I'm just missing the Right Way™ and hopefully I can at least contribute a doc patch when I get it sorted out :)

I need to have 3 tiers in my document, which look like: Event -> Category ->>> Licenses (licenses are attached to categories, and are important for faceting in presented views).

In plain English, an Event has a Category, and Categories have many Licenses.

The mapping works with:

define_type Event do
  field :category, type: 'object' do
    field :licenses, type: 'object'
  end
end

However, when I want to limit what is exposed on the licenses (say the id and the name), changing it to:

field :licenses, type: 'object' do
  field :id
  field :name
end

I get an error because it seems the value is no longer the record value, but the ActiveRecord collection:

undefined method `id' for #<License::ActiveRecord_Associations_CollectionProxy:0x007fd30e7807b0>

Ok, so reading through I see that I may need to pass values so I changed it to:

field :licenses, type: 'object', value: ->(license) do
  field :id
  field :name
end

But then I get the error:

undefined method `name' for #<Category:0x007fd2f18a60a8>

This is where my understanding and expectations come undone :). When I have a value being passed in, and licenses is a collection, why is it the category and not the license being iterated?

And then I thought, perhaps it is about using the value: :some_field syntax, so I used:

field :licenses, type: 'object' do
  field :id, value: :id
  field :name, value: :name
end

And then I get an error (Even if I put name: 'event' when I declare this index, it makes no difference):

       Index `EventsIndex` doesn't have type named `event`

Ultimately, what I need is a document stored that looks like:

"event": {
  "id": 1,
  "category": {
    "id" : 2,
    "licenses": {
        "id" : 3,
        "name": "Some Name"
    }
  }
}

However, there are other fields in licenses that I don't want to be present (or rather, do not need to be present)

I did finally figure out that I can return an array of hashes, but it gets awkward to manage these variables and I have the distinct feeling I'm doing something wrong.

Thanks very much,
-Jay

Attachment field example

I am using Carrierwave for the s3 file upload and have elasticsearch-mapper-attachments plugin installed for my elasticsearch

# app/chewy/candidate_index.rb
class CandidateIndex < Chewy::Index
  define_type Candidate.includes({:campaign_invitation => :labels}, :campaign_response_details) do
    field :resume, type: 'attachment', value: -> (candidate) {
      if candidate.resume.present?
        Base64.encode64(open(candidate.resume.url) { |doc| doc.read })
      else
        ""
      end
    }
  end
end

# app/models/candidate.rb
class Candidate < ActiveRecord::Base
  mount_uploader :resume, ResumeUploader

  def self.search(options)
    fields = [ 'resume' ]
    CandidateIndex.query(multi_match: { query: options[:keyword], fields: fields })
  end
end

Execute multiple queries on one index in one request

Hi!

I have an index with multiple types in it. Now I want to make a search where I get results for each type.
At first I was thinking about using aggregations but that does not work I think, since it is bound to the result of the query.

To clarify, let's say I have three types in my index, Foo, Bar and Baz.
I mean, if I search for "Lorem Ipsum" with a limit of 10 it will fetch the top 10 results and then group these by type in my aggregations. Maybe the top 5 results are of type Foo and then 4 are Bar and only one Baz which is not what I want...

What I really want is 10 hits of each type. So, the most relevant Foo the most relevant Bar and the most relevant Baz.

Sure, I could do one request per type to get the desired result but it would be cool if it could be done in one request (using something like msearch I guess, https://github.com/elasticsearch/elasticsearch-ruby/blob/5dc6bc61b85cb681b2453e4b9a6afb9a35e1be98/elasticsearch-api/lib/elasticsearch/api/actions/msearch.rb).

Does Chewy support something like this?

Wishlist?

I'd like to work on this library, but I'm not sure where to start. Is there anything specific and maybe smaller that would serve as a good starting point?

I noticed that you have a list of 'coming soon' items at the bottom

  • Typecasting support
  • Advanced (simplified) query DSL: UsersIndex.query { email == '[email protected]' } will produce term query
  • update_all support
  • Other than ActiveRecord ORMs support (Mongoid)
  • Maybe, closer ORM/ODM integration, creating index classes implicitly

Boosting query (demoting certain documents)

Hi!

I'm trying to move away from (re)Tire to some other gem and found Chewy. I wonder if it is possible to create a boosting query somehow?

What I want to do is demote documents that do not match a specific query.
For example, let's say I want articles written by a certain department to get a bump in the score. I could use a boosting query (http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-boosting-query.html) to demote all blog posts NOT written by that department.

{
  query: {
    boosting: {
      positive: [{
        filtered: {
          query: {
            bool: {
              should: [some_query_that_matches_stuff],
              must: [some_other_query],
              minimum_number_should_match: 1
            }
          },
          filter: {}
        }
      }],
      negative: {
        filtered: {
          filter: {
            and: [{ not: { term: { department_id: Department.important.id } } }]
          }
        }
      },
      negative_boost: 0.1
    }
  },
  filter: {},
  facets: {}
}

Is there a way to accomplish something like this in Chewy?
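
For what it's worth, Chewy's query method accepts a raw Elasticsearch query hash, so a hedged sketch (the index name and inner clauses below are placeholders) would be:

# Hedged sketch: pass the boosting query straight through as a hash.
ArticlesIndex.query(
  boosting: {
    positive: { match: { title: 'elasticsearch' } },
    negative: { bool: { must_not: { term: { department_id: 42 } } } },
    negative_boost: 0.1
  }
)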

Typhoeus error / The option: disable_ssl_peer_verification is invalid

Anyone seeing this? During index creation, Chewy is complaining:
Ethon::Errors::InvalidOption: The option: disable_ssl_peer_verification is invalid.
Please try ssl_verifypeer instead of disable_ssl_peer_verification.

from /Users/yan/.rvm/gems/ruby-2.1.1@reverb/gems/typhoeus-0.6.8/lib/typhoeus/hydra/runnable.rb:17:in `block in run'
    from /Users/yan/.rvm/gems/ruby-2.1.1@reverb/gems/chewy-0.4.1/lib/chewy/index/actions.rb:61:in `create!'

Seems like it's some option being passed to Typhoeus? Maybe I have a different version than it expects?

chewy/rspec issue expecting param instead of block

I'm getting the following when trying to use Chewy in my specs:

Failure/Error: specify { expect { product.save! }.to update_index(ProductsIndex::Product) }
       You must pass an argument rather than a block to use the provided matcher (update index ProductsIndex::Product), or the matcher must implement `supports_block_expectations?`.

I also noticed some deprecation warnings. Is there an edge version that's more compatible with Rails 4 and RSpec 3?

index a serialized field

I am using the chewy gem to tie ES to my Rails app. I am new to Chewy, so I am facing a problem when I try to index a field of my model. The field is a text field in the DB which I serialize as a Hash in my model. The hash is dynamic and might have 0 to n elements in it, in the form below. The field name is items. Any help would be much appreciated.

{"0"=>{"property"=>"value","property"=>"value"},"1"=>{"property"=>"value","property"=>"value"}.......}

class ModelNameIndex < Chewy::Index
  define_type ModelName do
    field :user_id, type: 'integer'
    field :enduser_id, type: 'integer'
    field :items, type: 'object'
    field :created, type: 'date', include_in_all: false,
          value: -> { created_at }
  end
end

class ModelName < ActiveRecord::Base
  update_index('IndexName#name') { self }
  belongs_to :user
  serialize :items, Hash
end

Facets?

Couldn't find any info on how to use facets in Chewy... Am I missing something, or is it not supported?

Searching question

I have a model with a one-to-many association. It's indexed on [:start], i.e.
field :start, value: ->(thing) { thing.bars.map(&:start) }

Foo has_many Bars

Bar belongs_to Foo

Each bar has a field called start which is a DateTime. I want to get any foo objects with at least one bar that occurs in the future, so the result should not include any foos with no bars or with only past-dated bars.

Using the Chewy syntax, I'd like to query ElasticSearch to retrieve all foo objects with at least one bar where start > Time.zone.now.

I've tried a few ways but it's not obvious how to do that. (Not sure if this is an "issue" so much as a usage question!)
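
One hedged sketch, assuming the index class is called FoosIndex and the start field is the array shown above, would be a plain range filter:

# Hedged sketch: a range filter on an array field matches when at least
# one element matches, i.e. at least one bar starts in the future.
FoosIndex.filter(range: { start: { gt: Time.zone.now } })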

Reference to calling object in nested fields?

I have a situation where a nested field needs a date from the parent to do a dynamic calculation. This works fine when returning a hash, but I need to apply analyzers to the nested fields, and templates are not working well enough :(

Imagine something like this:

define_type Event do
  field :start_date

  field :rooms, type: 'object' do
    field :name
    field :is_available, value: -> { is_available?(event.start_date) }
  end
end

The event.start_date part is what I can't sort out. Previously I was simply returning a hash from the rooms value block to set the is_available flag, since that block receives the parent object rather than the rooms relationship.

Is there a way to do this without the hash method? The fields/base#compose method calls instance_exec with the object, but the object doesn't have any awareness of the calling method above it and now I'm a bit lost on how to proceed.
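
For context, a minimal sketch of the hash-returning workaround mentioned above (assuming Room responds to name and is_available?):

# Hedged sketch: serialize rooms by hand so the parent event stays in scope;
# this is the workaround being replaced, not a recommended final answer.
field :rooms, value: ->(event) {
  event.rooms.map do |room|
    { name: room.name, is_available: room.is_available?(event.start_date) }
  end
}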

query_spec.rb failing for me

For me, query_spec.rb:148 currently fails. It seems that my version of Elasticsearch (1.2.1) will only insert the key_as_string property if the underlying field (rating) is of type "date". I see it passing on Travis, so I guess it may be related to the Elasticsearch version.

1) Chewy::Query#aggregations results should == {"ratings"=>{"buckets"=>[{"key"=>0, "key_as_string"=>"0", "doc_count"=>4}, {"key"=>1, "key_as_string"=>"1", "doc_count"=>3}, {"key"=>2, "key_as_string"=>"2", "doc_count"=>3}]}}
 Failure/Error: } }
   expected: {"ratings"=>{"buckets"=>[{"key"=>0, "key_as_string"=>"0", "doc_count"=>4}, {"key"=>1, "key_as_string"=>"1", "doc_count"=>3}, {"key"=>2, "key_as_string"=>"2", "doc_count"=>3}]}}
        got: {"ratings"=>{"buckets"=>[{"key"=>0, "doc_count"=>4}, {"key"=>1, "doc_count"=>3}, {"key"=>2, "doc_count"=>3}]}} (using ==)
   Diff:
   @@ -1,2 +1,2 @@
   -"ratings" => {"buckets"=>[{"key"=>0, "key_as_string"=>"0", "doc_count"=>4}, {"key"=>1, "key_as_string"=>"1", "doc_count"=>3}, {"key"=>2, "key_as_string"=>"2", "doc_count"=>3}]}
   +"ratings" => {"buckets"=>[{"key"=>0, "doc_count"=>4}, {"key"=>1, "doc_count"=>3}, {"key"=>2, "doc_count"=>3}]}

Elasticsearch Completion Suggest

I want to use the Elasticsearch completion suggest feature to implement autocomplete. When I do something like this:

index.suggest(mysuggest: { text: 'roy', completion: { field: 'suggest' }})

it sends this request to the elasticsearch server:

GET /console/job,table,column/_search [{"suggest":{"mysuggest":{"text":"roy","completion":{"field":"suggest"}}}}]

Since the search endpoint is used and there is no query given, I think Elasticsearch just assumes a blank query (so it returns all objects in the DB). The result also contains this (when I submit the query with a curl command):

{...
"suggest":{"mysuggest":[{"text":"roy","offset":0,"length":3,"options":[{"text":"Roy Lane","score":1.0}]}]}}

I want to be able to get those suggestion results, but I can't figure out any way to do it. I tried looking into the Chewy code, but I'm new to Ruby and I don't really understand what is going on.

Is there currently any way to retrieve suggestion results? If not, can someone point me to the right places in the code to do this? I think a better way to implement suggest would just be to use the _suggest endpoint of the elasticsearch server so that you are not doing a query if all you want are suggestion results.

Something like this would be good:

GET /console/job,table,column/_suggest [{"mysuggest":{"text":"roy","completion":{"field":"suggest"}}}]
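
Until something like that exists, one hedged workaround is to hit the _suggest endpoint through the low-level transport (the index name console and the field suggest are taken from the request above):

# Hedged sketch: bypass _search and POST the suggest body to _suggest
# directly; the response body then contains the options array.
response = Chewy.client.perform_request(
  'POST', 'console/_suggest', {},
  { mysuggest: { text: 'roy', completion: { field: 'suggest' } } }
).body
response['mysuggest'].first['options'] # => [{"text"=>"Roy Lane", ...}]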

Issue with custom ActiveModelSerializer

I've got a Rails 4 project using AMS and Chewy.

If I define a default AMS for the model, I'm able to serialise just fine. However, if I use a custom AMS, I get an error: NoMethodError (undefined method 'read_attribute_for_serialization' for #<Chewy::Query:0x000000060c1da8>)

Code (works fine):

@search = Search.new(params)
@results = @search.query.load
render json: @results

Code (breaks):

@search = Search.new(params)
@results = @search.query.load
render json: @results, serializer: MySerializer

Both the default model serializer I wrote and MySerializer are identical. There's nothing fancy in there either.

class MySerializer < ActiveModel::Serializer
  attributes :my_attribute, :other_attribute

  has_one :thing

end

I've tried to monkeypatch Chewy::Query:

module Chewy
  class Query
    alias_method :read_attribute_for_serialization, :send
  end
end

but I just get NoMethodError (undefined method 'my_attribute' for #<Chewy::Query:0x000000069af5c8>)
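
One hedged guess, based on that error: render is handing the whole Chewy::Query to a single-resource serializer, so asking AMS to serialize each loaded record instead may be enough:

# Hedged sketch: treat the query as a collection and serialize per item.
render json: @results.to_a, each_serializer: MySerializer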

Use boost factor and decay

Hello

I wanted to use #boost_factor and #decay functions as specified in the readme:
UsersIndex.boost_factor(5, filter: {term: {type: 'Expert'}})

However, those methods are not defined on the index. In my index I had to add this delegator to make it work:
singleton_class.delegate :boost_factor, :decay, to: :all

I guess that is just a mistake and it should be delegated by default?

Thank you for a great gem :)

Conflict with the 'client' method name

Hi guys,

If I have a Rails model called Client and I define a Chewy index for this model, for example:

# client_index.rb
class ClientIndex < Chewy::Index
  define_type Client do
    field :name
  end
end

# client.rb
class Client < ActiveRecord::Base
  update_index('client#client') { self }
end

There is a conflict with the Elasticsearch 'client' method that you are using:

# chewy/lib/chewy/config.rb
def client
  Thread.current[:chewy_client] ||= ::Elasticsearch::Client.new configuration
end

This is the exception:

jruby-1.7.11 :003 > ClientsIndex.client.import
NoMethodError: undefined method `indices' for ClientsIndex::Client:Class
    from /Users/joselo/.rvm/gems/jruby-1.7.11@instafac/gems/chewy-0.5.0/lib/chewy/index/actions.rb:15:in `exists?'

# chewy/lib/chewy/index/actions.rb
def exists?
  client.indices.exists(index: index_name)
end

Multi-model index example

Hi there. I'm trying to piece together how defining and querying a multi-model index might work in Chewy. Is there an example somewhere I can check out and if not, what would a quick and dirty one look like?
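
A quick and dirty sketch, using the same define_type API as the other examples in this thread (the model and field names below are made up):

# Hedged sketch of a multi-model index: one define_type block per model.
class SearchIndex < Chewy::Index
  define_type Article do
    field :title
  end

  define_type Comment do
    field :body
  end
end

SearchIndex.import                            # imports both types
SearchIndex.query(match: { _all: 'chewy' })   # searches across both types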

Thanks!

index_analyzer, search_analyzer issue

Seems like those two options do not work as expected.

Dude, are you there?
field :phone_number, index_analyzer: 'phone', search_analyzer: 'phone' - doesn't work
field :phone_number, analyzer: 'phone' - but this works
Something is definitely off with these options.

Plus, I would want to see an exception if I set something wrong in my options.

WhateverIndex.reset! ignores specified nested fields mapping

I have a nested item specified in my mappings, e.g.:

field :user_qualifications, type: 'nested' do
  field :qualification_level, value: ->(o) { o.qualification.qualification_level }
end

When updating the index by saving the model and using update_index, it indexes as expected. When using WhateverIndex.reset! in the Rails console, it ignores my mappings and indexes everything in the nested model, i.e. id, created_at, updated_at, etc., instead of just qualification_level.

Querying 3 fields on one Index throws "No parser for element" error

I have a basic index on 3 fields and I'm building a simple search text box that searches across all 3 fields and should return results if any of the 3 fields match.

When I do the following query:

scope = PostsIndex.query(term: {body: 'hover', title: 'hover', html_block: 'hover'}).query_mode(:should)
scope.total_count

I get this error:

Elasticsearch::Transport::Transport::Errors::BadRequest: [400] {"error":"SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[S1Eit6J9Sq6SHSiqrXsgjg][posts][3]: SearchParseException[[posts][3]: query[body:hover],from[-1],size[-1]: Parse Failure [Failed to parse source [{\"query\":{\"term\":{\"body\":\"hover\",\"title\":\"hover\",\"html_block\":\"hover\"}}}]]]; nested: SearchParseException[[posts][3]: query[body:hover],from[-1],size[-1]: Parse Failure [No parser for element [html_block]]]; }{[S1Eit6J9Sq6SHSiqrXsgjg][posts][4]: SearchParseException[[posts][4]: query[body:hover],from[-1],size[-1]: Parse Failure [Failed to parse source [{\"query\":{\"term\":{\"body\":\"hover\",\"title\":\"hover\",\"html_block\":\"hover\"}}}]]]; nested: SearchParseException[[posts][4]: query[body:hover],from[-1],size[-1]: Parse Failure [No parser for element [html_block]]]; }{[S1Eit6J9Sq6SHSiqrXsgjg][posts][0]: SearchParseException[[posts][0]: query[body:hover],from[-1],size[-1]: Parse Failure [Failed to parse source [{\"query\":{\"term\":{\"body\":\"hover\",\"title\":\"hover\",\"html_block\":\"hover\"}}}]]]; nested: SearchParseException[[posts][0]: query[body:hover],from[-1],size[-1]: Parse Failure [No parser for element [html_block]]]; }{[S1Eit6J9Sq6SHSiqrXsgjg][posts][1]: SearchParseException[[posts][1]: query[body:hover],from[-1],size[-1]: Parse Failure [Failed to parse source [{\"query\":{\"term\":{\"body\":\"hover\",\"title\":\"hover\",\"html_block\":\"hover\"}}}]]]; nested: SearchParseException[[posts][1]: query[body:hover],from[-1],size[-1]: Parse Failure [No parser for element [html_block]]]; }{[S1Eit6J9Sq6SHSiqrXsgjg][posts][2]: SearchParseException[[posts][2]: query[body:hover],from[-1],size[-1]: Parse Failure [Failed to parse source [{\"query\":{\"term\":{\"body\":\"hover\",\"title\":\"hover\",\"html_block\":\"hover\"}}}]]]; nested: SearchParseException[[posts][2]: query[body:hover],from[-1],size[-1]: Parse Failure [No parser for element [html_block]]]; }]","status":400}

I figured out I could structure the query this way:

scope = PostsIndex.query(term: {body: 'hover'}).query(term: {title: 'hover'}).query(term: {html_block: 'hover'}).query_mode(:should)
scope.total_count

I'm new to Chewy, so maybe there is a better way of accomplishing this, but I couldn't find a clearer way in the documentation.
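
For what it's worth, a single multi_match query across the three fields is another hedged way to express the same search:

# Hedged sketch: multi_match searches all three fields at once,
# so the separate chained term queries are not needed.
scope = PostsIndex.query(multi_match: { query: 'hover', fields: %w[body title html_block] })
scope.total_count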

Also, so far chewy rocks btw! Great work!

WillPaginate Features Are Not Present

Hi there,

I'm trying to paginate my query results using will_paginate, and despite commit 64130ad adding #paginate and #page methods to the query results, I'm getting undefined method errors when I try to use them.

defined?(::WillPaginate) returns "constant", so I'm not sure why the will_paginate module is not being loaded. Gemfile.lock lists chewy (0.5.2) and will_paginate (3.0.5). Any ideas?

expose took

Something similar to:

module Chewy
  class Query
    def took
      _response['took']
    end
  end
end

Field array value issues!

I have Tour with Tags (an M:M relation).
I have defined an index field which is an array of tag ids:

field :tags, value: ->(tour) { tour.tags.map(&:id) }

And inside the "join" model I'm updating the index like this:

class TourTag < ActiveRecord::Base
  update_index('tours', urgent: true) { tour }
end

So if a relation is added or deleted, it should update the Tour with the new tags (the index for this tour should be fully updated). The result is totally unexpected: on tag creation the indexed tags array contains only the last created tag, but on delete it is updated properly...

NameError: uninitialized constant - module namespace conflicting with model name

Given the following index definition:

class AwardsIndex < Chewy::Index
  define_type Award
  define_type Award::Category
end

When trying AwardsIndex.import, I get the error:

NameError: uninitialized constant AwardsIndex::Award::Category

Everything works fine when defining just Award or just Award::Category without the other, but I get the error when defining both.

How to troubleshoot when not getting expected results

When I query Elasticsearch using my web browser I get the following result:

[screenshot: browser query returning the expected document]

Yet, when I do the same query using Chewy I get nothing:

[screenshot: Chewy query returning no results]

What am I missing here? In Chewy, if I remove "baker" and just query using "jimmy", then I get the result I'm looking for. Is there something I need to set in the Chewy or ES config to allow multiple words to be searched?

[screenshot: the Chewy query in question]

Trouble with controller spec

Greetings. I'm having trouble issuing searches within my controller spec.

Here's the spec:

describe 'GET /companies/:company_id/products/search' do
    before do
      @tenant.scope_schema do
        create(:product, company: @company, name: 'Battlefield 4')
        create(:product, company: @company, name: 'League of Legends')
        create(:product, company: @company, name: 'Coors Light')
        ProductsIndex::Product.import
      end

      get :search, company_id: @company.id, q: 'League'
    end

    it 'should respond successfully' do
      expect(response).to be_successful
    end
  end

and the controller action:

def search
    @results = ProductsIndex::Product
      .query(term: { _all: params[:q] })
      .filter{ tenant_id == current_tenant.id }
      .filter{ company_id == company.id }
      .to_a
    binding.pry
  end

Which yields the following error:

1) Api::V1::ProductsController GET /companies/:company_id/products/search should respond successfully
     Failure/Error: get :search, company_id: @company.id, q: 'League'
     Elasticsearch::Transport::Transport::Errors::BadRequest:
       [400] {"error":"SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[uzV7mCJyQ0eN8xtjOHlhbg][test_products][3]: SearchParseException[[test_products][3]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\"query\":{\"filtered\":{\"query\":{\"term\":{\"_all\":\"League\"}},\"filter\":{\"and\":[{\"term\":{\"tenant_id\":{\"name\":\"current_tenant.id\",\"args\":[]}}},{\"term\":{\"company_id\":{\"name\":\"company.id\",\"args\":[]}}}]}}}}]]]; nested: QueryParsingException[[test_products] [term] filter does not support [name]]; }{[uzV7mCJyQ0eN8xtjOHlhbg][test_products][4]: SearchParseException[[test_products][4]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\"query\":{\"filtered\":{\"query\":{\"term\":{\"_all\":\"League\"}},\"filter\":{\"and\":[{\"term\":{\"tenant_id\":{\"name\":\"current_tenant.id\",\"args\":[]}}},{\"term\":{\"company_id\":{\"name\":\"company.id\",\"args\":[]}}}]}}}}]]]; nested: QueryParsingException[[test_products] [term] filter does not support [name]]; }{[uzV7mCJyQ0eN8xtjOHlhbg][test_products][1]: SearchParseException[[test_products][1]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\"query\":{\"filtered\":{\"query\":{\"term\":{\"_all\":\"League\"}},\"filter\":{\"and\":[{\"term\":{\"tenant_id\":{\"name\":\"current_tenant.id\",\"args\":[]}}},{\"term\":{\"company_id\":{\"name\":\"company.id\",\"args\":[]}}}]}}}}]]]; nested: QueryParsingException[[test_products] [term] filter does not support [name]]; }{[uzV7mCJyQ0eN8xtjOHlhbg][test_products][2]: SearchParseException[[test_products][2]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\"query\":{\"filtered\":{\"query\":{\"term\":{\"_all\":\"League\"}},\"filter\":{\"and\":[{\"term\":{\"tenant_id\":{\"name\":\"current_tenant.id\",\"args\":[]}}},{\"term\":{\"company_id\":{\"name\":\"company.id\",\"args\":[]}}}]}}}}]]]; nested: QueryParsingException[[test_products] [term] filter does not support [name]]; }{[uzV7mCJyQ0eN8xtjOHlhbg][test_products][0]: SearchParseException[[test_products][0]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\"query\":{\"filtered\":{\"query\":{\"term\":{\"_all\":\"League\"}},\"filter\":{\"and\":[{\"term\":{\"tenant_id\":{\"name\":\"current_tenant.id\",\"args\":[]}}},{\"term\":{\"company_id\":{\"name\":\"company.id\",\"args\":[]}}}]}}}}]]]; nested: QueryParsingException[[test_products] [term] filter does not support [name]]; }]","status":400}
     # /projects/chewy/lib/chewy/query.rb:807:in `block in _response'
     # /projects/chewy/lib/chewy/query.rb:805:in `_response'
     # /projects/chewy/lib/chewy/query.rb:817:in `_results'
     # /projects/chewy/lib/chewy/query.rb:831:in `_collection'
     # /projects/chewy/lib/chewy/query.rb:29:in `each'
     # ./app/controllers/api/v1/products_controller.rb:41:in `to_a'
     # ./app/controllers/api/v1/products_controller.rb:41:in `search'
     # ./app/models/tenant.rb:29:in `scope_schema'
     # ./app/controllers/api/v1/base_controller.rb:24:in `scope_current_tenant'
     # ./spec/controllers/api/v1/products_controller_spec.rb:129:in `block (3 levels) in <top (required)>'
     # -e:1:in `<main>'

When checking my test_products index in the browser I see no documents. What am I doing wrong?
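
Judging by the serialized filter in that error (term: {tenant_id: {name: "current_tenant.id"}}), the filter DSL block appears to be turning current_tenant.id into a field reference rather than a value. A hedged workaround sketch that sidesteps the block DSL and uses plain hash filters:

# Hedged sketch: capture the values outside the query and pass hash filters,
# so nothing inside a DSL block gets reinterpreted.
tenant = current_tenant.id
company_id = company.id
@results = ProductsIndex::Product
  .query(term: { _all: params[:q] })
  .filter(term: { tenant_id: tenant })
  .filter(term: { company_id: company_id })
  .to_a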

Query timeout option

Is there a way to add the timeout option?

{
  "timeout": "5000ms",
  "query": { ... }
}

match_all gives proper result but query and filter with params/conditions return empty array

Filter with match_all:

[139] milaap_webapp »  FundraiserCampaignsIndex.filter{ match_all }.to_a
=> [
  [0] #<FundraiserCampaignsIndex::FundraiserCampaign:0x108c9050 @attributes={"motivation"=>"I am cycling to help former Devadasi women start independent businesses", "user"=>{"first_name"=>"Mayukh", "last_name"=>"Choudhury"}, "project"=>"Help former Devadasi women start independent businesses", "id"=>"39", "_score"=>1.0, "_explanation"=>nil}>,
  [1] #<FundraiserCampaignsIndex::FundraiserCampaign:0x108c8678 @attributes={"motivation"=>"I am pledging my birthday to help former Devadasi women start independent businesses", "user"=>{"first_name"=>"Satya", "last_name"=>"Kothimangalam"}, "project"=>"Help former Devadasi women start independent businesses", "id"=>"41", "_score"=>1.0, "_explanation"=>nil}>,
  [2] #<FundraiserCampaignsIndex::FundraiserCampaign:0x108c7cc8 @attributes={"motivation"=>"I am fundraising to revive village economy through traditional crafts", "user"=>{"first_name"=>"mayank", "last_name"=>"choudhury"}, "project"=>"Revive village economy through traditional crafts", "id"=>"46", "_score"=>1.0, "_explanation"=>nil}>,
]

Now notice that the last result of match_all is

#<FundraiserCampaignsIndex::FundraiserCampaign:0x108c7cc8 @attributes={"motivation"=>"I am fundraising to revive village economy through traditional crafts", "user"=>{"first_name"=>"mayank", "last_name"=>"choudhury"}, "project"=>"Revive village economy through traditional crafts", "id"=>"46", "_score"=>1.0, "_explanation"=>nil}>,

which includes "mayank".

But when I search for "mayank" I get an empty array, like below:

FundraiserCampaignsIndex.filter{ name == "mayank" }.to_a
=> []

How to define child/parent mappings?

I can't figure out how to define a child/parent mapping in chewy. The following gives a routing error, presumably because column isn't being given a parent _id.

define_type Table do
  field :_id, type: 'integer', value: -> { id }
  field :user_ids, value: -> { server.users_for_table(self) }
end

define_type Column.includes(:table) do
  root _parent: { type: 'table' } do
    field :_parent, value: -> { table.id }
  end
end

I can't find any examples of parent ids being given anywhere other than the url (like the examples here http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/indexing-parent-child.html). Is it possible to specify a parent id in chewy currently or does there need to be some code added?

no block given (yield) with rails 4.1

My code was working fine with Rails 3.2, but it gives the following error with Rails 4.1:

LocalJumpError - no block given (yield):
chewy (0.4.1) lib/chewy/config.rb:129:in `atomic'
chewy (0.4.1) lib/chewy.rb:50:in `atomic'
app/controllers/application_controller.rb:15:in `block in <class:ApplicationController>'

The code in the application controller is (line 15):

around_action { |&block| Chewy.atomic(&block) }

Keep in mind that with Rails 3.2 I was using around_filter, and changed it to around_action for Rails 4.
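
A hedged sketch of the Rails 4 form, where the around callback receives the action as a callable second argument rather than as a bare block:

# Hedged sketch: call the action inside the atomic block instead of
# expecting Rails to pass it through as a block argument.
around_action do |_controller, action|
  Chewy.atomic { action.call }
end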
