olivere / elastic

Deprecated: Use the official Elasticsearch client for Go at https://github.com/elastic/go-elasticsearch

Home Page: https://olivere.github.io/elastic/

License: MIT License

Languages: Go 99.97%, Makefile 0.02%, Shell 0.02%
Topics: elasticsearch, go

elastic's Introduction

Elastic

This is a development branch that is actively being worked on. DO NOT USE IN PRODUCTION! If you want to use stable versions of Elastic, please use Go modules for the 7.x release (or later) or a dependency manager like dep for earlier releases.

Elastic is an Elasticsearch client for the Go programming language.

See the wiki for additional information about Elastic.

Releases

The release branches (e.g. release-branch.v7) are actively being worked on and can break at any time. If you want to use stable versions of Elastic, please use Go modules.

Here's the version matrix:

Elasticsearch version   Elastic version   Package URL                     Remarks
7.x                     7.0               github.com/olivere/elastic/v7   Use Go modules.
6.x                     6.0               github.com/olivere/elastic      Use a dependency manager (see below).
5.x                     5.0               gopkg.in/olivere/elastic.v5     Actively maintained.
2.x                     3.0               gopkg.in/olivere/elastic.v3     Deprecated. Please update.
1.x                     2.0               gopkg.in/olivere/elastic.v2     Deprecated. Please update.
0.9-1.3                 1.0               gopkg.in/olivere/elastic.v1     Deprecated. Please update.

Example:

You have installed Elasticsearch 7.0.0 and want to use Elastic. As listed above, you should use Elastic 7.0 (code is in release-branch.v7).

To use the required version of Elastic in your application, you should use Go modules to manage dependencies. Make sure to use a version such as 7.0.0 or later.

To use Elastic, import:

import "github.com/olivere/elastic/v7"

Elastic 7.0

Elastic 7.0 targets Elasticsearch 7.x which was released on April 10th 2019.

As always with major versions, there are a lot of breaking changes. We will use this as an opportunity to clean up and refactor Elastic, as we did in earlier (major) releases.

Elastic 6.0

Elastic 6.0 targets Elasticsearch 6.x which was released on 14th November 2017.

Notice that there are a lot of breaking changes in Elasticsearch 6.0, and we used this as an opportunity to clean up and refactor Elastic, as we did in the transitions between earlier major versions.

Elastic 5.0

Elastic 5.0 targets Elasticsearch 5.0.0 and later. Elasticsearch 5.0.0 was released on 26th October 2016.

Notice that there are a lot of breaking changes in Elasticsearch 5.0, and we used this as an opportunity to clean up and refactor Elastic, as we did in the transition from Elastic 2.0 (for Elasticsearch 1.x) to Elastic 3.0 (for Elasticsearch 2.x).

Furthermore, the jump in version numbers will give us a chance to be in sync with the Elastic Stack.

Elastic 3.0

Elastic 3.0 targets Elasticsearch 2.x and is published via gopkg.in/olivere/elastic.v3.

Elastic 3.0 will only get critical bug fixes. You should update to a recent version.

Elastic 2.0

Elastic 2.0 targets Elasticsearch 1.x and is published via gopkg.in/olivere/elastic.v2.

Elastic 2.0 will only get critical bug fixes. You should update to a recent version.

Elastic 1.0

Elastic 1.0 is deprecated. You should really update Elasticsearch and Elastic to a recent version.

However, if you cannot update for some reason, don't worry: version 1.0 is still available. All you need to do is go get it and change your import path as described above.
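
For example, following the version matrix above, the import path for Elastic 1.0 would be:

import "gopkg.in/olivere/elastic.v1"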

Status

We have used Elastic in production since 2012. Elastic is stable, but the API changes now and then. We strive for API compatibility; however, Elasticsearch sometimes introduces breaking changes and we sometimes have to adapt.

Having said that, there have been no big API changes that required you to rewrite large parts of your application. More often than not it's a matter of renaming APIs and adding or removing features so that Elastic stays in sync with Elasticsearch.

Elastic has been used in production starting with Elasticsearch 0.90 up to recent 7.x versions. We recently switched to GitHub Actions for testing. Before that, we used Travis CI successfully for years.

Elasticsearch has quite a few features. Most of them are implemented by Elastic. I add features and APIs as required. It's straightforward to implement missing pieces. I'm accepting pull requests :-)

All in all, I hope you find the project useful.

Getting Started

The first thing you do is create a Client. The client connects to Elasticsearch at http://127.0.0.1:9200 by default.

You typically create one client for your app. Here's a complete example of creating a client, creating an index, adding a document, and executing a search.

An example is available here.

Here's a link to a complete working example for v6.
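
In addition to the examples linked above, here is a minimal sketch against the v7 API. The index name ("tweets"), the Tweet struct, and the field names are made up for illustration, and error handling is reduced to panics:

package main

import (
    "context"
    "encoding/json"
    "fmt"

    "github.com/olivere/elastic/v7"
)

// Tweet is a hypothetical document type used only for this sketch.
type Tweet struct {
    User    string `json:"user"`
    Message string `json:"message"`
}

func main() {
    ctx := context.Background()

    // Create one client for the whole application.
    // It connects to http://127.0.0.1:9200 by default.
    client, err := elastic.NewClient()
    if err != nil {
        panic(err)
    }

    // Create the index if it doesn't exist yet.
    exists, err := client.IndexExists("tweets").Do(ctx)
    if err != nil {
        panic(err)
    }
    if !exists {
        if _, err := client.CreateIndex("tweets").Do(ctx); err != nil {
            panic(err)
        }
    }

    // Add a document and wait for it to become searchable.
    _, err = client.Index().
        Index("tweets").
        Id("1").
        BodyJson(Tweet{User: "olivere", Message: "Hello Elastic"}).
        Refresh("wait_for").
        Do(ctx)
    if err != nil {
        panic(err)
    }

    // Execute a search.
    res, err := client.Search().
        Index("tweets").
        Query(elastic.NewTermQuery("user", "olivere")).
        Do(ctx)
    if err != nil {
        panic(err)
    }
    for _, hit := range res.Hits.Hits {
        var t Tweet
        if err := json.Unmarshal(hit.Source, &t); err == nil {
            fmt.Printf("Tweet by %s: %s\n", t.User, t.Message)
        }
    }
}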

Here are a few tips on how to get used to Elastic:

  1. Head over to the Wiki for detailed information and topics such as how to add middleware or how to connect to AWS.
  2. If you are unsure how to implement something, read the tests (all _test.go files). They not only serve as a guard against changes, but also as a reference.
  3. The recipes directory contains small examples of how to implement something, e.g. bulk indexing, scrolling, etc. (a bulk-indexing sketch follows below).
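
As a rough illustration of the bulk-indexing recipe mentioned in tip 3, here is a sketch against the v7 API; the index name, ids, and documents are made up:

import (
    "context"
    "strconv"

    "github.com/olivere/elastic/v7"
)

// bulkIndex queues a few index requests and sends them in one round trip.
func bulkIndex(ctx context.Context, client *elastic.Client) error {
    bulk := client.Bulk()
    for i, msg := range []string{"one", "two", "three"} {
        doc := map[string]interface{}{"message": msg}
        bulk = bulk.Add(elastic.NewBulkIndexRequest().
            Index("tweets").
            Id(strconv.Itoa(i + 1)).
            Doc(doc))
    }
    res, err := bulk.Do(ctx)
    if err != nil {
        return err
    }
    _ = res.Failed() // inspect rejected items here if needed
    return nil
}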

API Status

Document APIs

  • Index API
  • Get API
  • Delete API
  • Delete By Query API
  • Update API
  • Update By Query API
  • Multi Get API
  • Bulk API
  • Reindex API
  • Term Vectors
  • Multi termvectors API

Search APIs

  • Search
  • Search Template
  • Multi Search Template
  • Search Shards API
  • Suggesters
    • Term Suggester
    • Phrase Suggester
    • Completion Suggester
    • Context Suggester
  • Multi Search API
  • Count API
  • Validate API
  • Explain API
  • Profile API
  • Field Capabilities API

Aggregations

  • Metrics Aggregations
    • Avg
    • Boxplot (X-pack)
    • Cardinality
    • Extended Stats
    • Geo Bounds
    • Geo Centroid
    • Matrix stats
    • Max
    • Median absolute deviation
    • Min
    • Percentile Ranks
    • Percentiles
    • Rate (X-pack)
    • Scripted Metric
    • Stats
    • String stats (X-pack)
    • Sum
    • T-test (X-pack)
    • Top Hits
    • Top metrics (X-pack)
    • Value Count
    • Weighted avg
  • Bucket Aggregations
    • Adjacency Matrix
    • Auto-interval Date Histogram
    • Children
    • Composite
    • Date Histogram
    • Date Range
    • Diversified Sampler
    • Filter
    • Filters
    • Geo Distance
    • Geohash Grid
    • Geotile grid
    • Global
    • Histogram
    • IP Range
    • Missing
    • Nested
    • Parent
    • Range
    • Rare terms
    • Reverse Nested
    • Sampler
    • Significant Terms
    • Significant Text
    • Terms
    • Variable width histogram
  • Pipeline Aggregations
    • Avg Bucket
    • Bucket Script
    • Bucket Selector
    • Bucket Sort
    • Cumulative cardinality (X-pack)
    • Cumulative Sum
    • Derivative
    • Extended Stats Bucket
    • Inference bucket (X-pack)
    • Max Bucket
    • Min Bucket
    • Moving Average
    • Moving function
    • Moving percentiles (X-pack)
    • Normalize (X-pack)
    • Percentiles Bucket
    • Serial Differencing
    • Stats Bucket
    • Sum Bucket
  • Aggregation Metadata

Indices APIs

  • Create Index
  • Delete Index
  • Get Index
  • Indices Exists
  • Open / Close Index
  • Shrink Index
  • Rollover Index
  • Put Mapping
  • Get Mapping
  • Get Field Mapping
  • Types Exists
  • Index Aliases
  • Update Indices Settings
  • Get Settings
  • Analyze
    • Explain Analyze
  • Index Templates
  • Indices Stats
  • Indices Segments
  • Indices Recovery
  • Indices Shard Stores
  • Clear Cache
  • Flush
    • Synced Flush
  • Refresh
  • Force Merge

Index Lifecycle Management APIs

  • Create Policy
  • Get Policy
  • Delete Policy
  • Move to Step
  • Remove Policy
  • Retry Policy
  • Get Ilm Status
  • Explain Lifecycle
  • Start Ilm
  • Stop Ilm

cat APIs

  • cat aliases
  • cat allocation
  • cat count
  • cat fielddata
  • cat health
  • cat indices
  • cat master
  • cat nodeattrs
  • cat nodes
  • cat pending tasks
  • cat plugins
  • cat recovery
  • cat repositories
  • cat thread pool
  • cat shards
  • cat segments
  • cat snapshots
  • cat templates

Cluster APIs

  • Cluster Health
  • Cluster State
  • Cluster Stats
  • Pending Cluster Tasks
  • Cluster Reroute
  • Cluster Update Settings
  • Nodes Stats
  • Nodes Info
  • Nodes Feature Usage
  • Remote Cluster Info
  • Task Management API
  • Nodes hot_threads
  • Cluster Allocation Explain API

Rollup APIs (XPack)

  • Create Job
  • Delete Job
  • Get Job
  • Start Job
  • Stop Job

Query DSL

  • Match All Query
  • Inner hits
  • Full text queries
    • Match Query
    • Match Boolean Prefix Query
    • Match Phrase Query
    • Match Phrase Prefix Query
    • Multi Match Query
    • Common Terms Query
    • Query String Query
    • Simple Query String Query
    • Combined Fields Query
    • Intervals Query
  • Term level queries
    • Term Query
    • Terms Query
    • Terms Set Query
    • Range Query
    • Exists Query
    • Prefix Query
    • Wildcard Query
    • Regexp Query
    • Fuzzy Query
    • Type Query
    • Ids Query
  • Compound queries
    • Constant Score Query
    • Bool Query
    • Dis Max Query
    • Function Score Query
    • Boosting Query
  • Joining queries
    • Nested Query
    • Has Child Query
    • Has Parent Query
    • Parent Id Query
  • Geo queries
    • GeoShape Query
    • Geo Bounding Box Query
    • Geo Distance Query
    • Geo Polygon Query
  • Specialized queries
    • Distance Feature Query
    • More Like This Query
    • Script Query
    • Script Score Query
    • Percolate Query
  • Span queries
    • Span Term Query
    • Span Multi Term Query
    • Span First Query
    • Span Near Query
    • Span Or Query
    • Span Not Query
    • Span Containing Query
    • Span Within Query
    • Span Field Masking Query
  • Minimum Should Match
  • Multi Term Query Rewrite

Modules

  • Snapshot and Restore
    • Repositories
    • Snapshot get
    • Snapshot create
    • Snapshot delete
    • Restore
    • Snapshot status
    • Monitoring snapshot/restore status
    • Stopping currently running snapshot and restore
  • Scripting
    • GetScript
    • PutScript
    • DeleteScript

Sorting

  • Sort by score
  • Sort by field
  • Sort by geo distance
  • Sort by script
  • Sort by doc
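
As a rough illustration (a sketch against the v7 API; index and field names are made up), sorters can be combined on a single search via SortBy. Other Sorter implementations (script sorts, geo-distance sorts, ...) are passed the same way:

import (
    "context"

    "github.com/olivere/elastic/v7"
)

// sortedSearch sorts hits by score first, then by a "created" field descending.
func sortedSearch(ctx context.Context, client *elastic.Client) (*elastic.SearchResult, error) {
    return client.Search().
        Index("tweets").
        Query(elastic.NewMatchAllQuery()).
        SortBy(
            elastic.NewScoreSort(),
            elastic.NewFieldSort("created").Desc(),
        ).
        Do(ctx)
}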

Scrolling

Scrolling is supported via a ScrollService. It supports an iterator-like interface. The ClearScroll API is implemented as well.

A pattern for efficiently scrolling in parallel is described in the Wiki.
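
For reference, here is a minimal sketch of the iterator-like pattern against the v7 API (the index name and page size are illustrative). The loop relies on io.EOF to signal the end of the scroll, and ClearScroll can release server-side resources early:

import (
    "context"
    "io"

    "github.com/olivere/elastic/v7"
)

// scrollAll iterates over all documents of an index, 100 at a time.
func scrollAll(ctx context.Context, client *elastic.Client) error {
    scroll := client.Scroll("tweets").Size(100)
    var scrollID string
    for {
        res, err := scroll.Do(ctx)
        if err == io.EOF {
            break // no more results
        }
        if err != nil {
            return err
        }
        scrollID = res.ScrollId
        for _, hit := range res.Hits.Hits {
            _ = hit.Source // process each document here
        }
    }
    // Optionally release server-side scroll resources before the timeout.
    if scrollID != "" {
        _, _ = client.ClearScroll(scrollID).Do(ctx)
    }
    return nil
}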

How to contribute

Read the contribution guidelines.

Credits

Thanks a lot to the great folks working hard on Elasticsearch and Go.

Elastic uses portions of the uritemplates library by Joshua Tacoma, backoff by Cenk Altı and leaktest by Ian Chiles.

LICENSE

MIT License. See the LICENSE file provided in the repository for details.

elastic's People

Contributors

aarontami, c2h5oh, dimfeld, dungnx-teko, enteris, eticzon, garrettkelleyv, jrmycanady, jtdoepke, larrycinnabar, mbalabin, mcos, mdzor, nwolff, olivere, peteclark-ft, phillbaker, qilingzhao, quixoten, rwynn, slawo, telendt, thezeroslave, timbutler, vancexu, veqryn, wedneyyuri, wesleyk, xose, zyqsempai

elastic's Issues

query filter support

I use a structure of aggregations like this:

"aggs": {
    "all": {
      "global": {},
      "aggs": {
        "filtered_all": {
          "filter": {
            "bool": {
              "must": [
                {
                  "query": {
                    "bool": {
                      "should": [
                        {
                          "multi_match": {
                            "query":"iphone"
                            "type": "cross_fields",
                            "fields": [
                              "name^2",
                              "category^2",
                            ]
                          }
                        }
                      ]
                    }
                  }
                }
              ]
          },
          "aggs": ... //some aggs which use current filters
       }
     }
  }
}

So, is it possible to implement this filter:bool:must:query filter? I can't find anything like elastic.NewQueryFilter().

Response body unmarshal error: cannot unmarshal number into Go value of type string

If TermsFacet field are not string type ( int/long/boolean), the search request will throw error:

"json: cannot unmarshal number into Go value of type string"

It happened because searchFacetTerm.Term is defined as "string". It should be "interface{}".

search.go:
type searchFacetTerm struct {
Term interface{} json:"term"
Count int json:"count"
}

Add Support for regexp Filter

It looks like this filter isn't implemented:

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-regexp-filter.html

Would that be possible? I'm trying to create something like the following (regexp lets me deal with both numeric and string fields):

POST logstash-2015.01.08/_search?count=true
{
   "aggregations":{
      "g_eventlog_id":{
         "aggregations":{
            "g_TargetUserName":{
               "aggregations":{
                  "ts":{
                     "date_histogram":{
                        "field":"@timestamp",
                        "interval":"30m"
                     }
                  }
               },
               "terms":{
                  "field":"TargetUserName"
               }
            }
         },
         "terms":{
            "field":"eventlog_id"
         }
      }
   },
   "query": {
       "filtered":{
          "query": { 
              "range": {
                    "@timestamp":{
                        "from":"2015-01-08T19:37:59.091203948Z",
                        "include_lower":true,
                        "include_upper":true,
                        "to":"2015-01-08T20:37:59.091203948Z"
                    }
                }
          },
          "filter": {"regexp":{ "eventlog_id": "4624"}}
       }
   }
}

Delete with an empty id could be dangerous

@olivere, I just ran into an issue and thought to alert you guys. Unless I am mistaken, there isn't a check for an empty id when performing index operations. This might be fine in most cases, as ES would error out, but when an id is empty you can perform a delete call like so:

client.Delete().Index(index).Type(mtype).Id(id).Do()

the entire index is wiped out, which is quite dangerous. I have added checks in my code now, but if it makes sense to do it at the library level, we might save someone some grief.

It's the difference between:
curl -XDELETE localhost:9200/someindex/type/
vs
curl -XDELETE localhost:9200/someindex/type/1

Update a document

Hi!
Is it possible to update a document? (update doc)

In the README.md I see this:

update, err := client.Update().Index("twitter").Type("tweet").Id("1").
    Script("ctx._source.retweets += num").
    ScriptParams(map[string]interface{}{"num": 1}).
    Upsert(map[string]interface{}{"retweets": 0}).
    Do()

But I can't find the Update() method on the Client struct.

ScriptSort source interface reverse incorrectly set

// Source returns the JSON-serializable data.
func (s ScriptSort) Source() interface{} {
    source := make(map[string]interface{})
    x := make(map[string]interface{})
    source["_script"] = x

    x["script"] = s.script
    x["type"] = s.typ
    if !s.ascending {
        x["reverse"] = false
    }

If !s.ascending, reverse should be true:

x["reverse"] = true

missing features

Thanks for what seems like a great library. We are going to investigate switching from elastigo but I just wanted to share what features we are using that would be great to have support for:

  • API: Update
  • Query: script (as text is totally fine)
  • Aggregation: filters (more than 1 filter)
  • Pool connection to multiple hosts (instead of having a fallback)

These features might be straightforward for us to add if we decide to switch libraries, but if by any chance they are super easy for you to take care of, it would be very useful.

I need a GET request, but the default is POST and can't be modified

In search.go, I need a GET request, but the default is POST and can't be modified.

The returned result is not the same when using GET and POST, so GET is required.


2015/03/02 15:23:26 POST /vehicles/_search?pretty=true HTTP/1.1
Host: localhost:9200
User-Agent: elastic/1.3.1 (windows-amd64)
Transfer-Encoding: chunked
Accept: application/json
Content-Type: application/json
Accept-Encoding: gzip

Expose JSON Elastic Query String

I would like to be able to see the Elasticsearch query that is generated by my program (which debug output doesn't really let me do). Could searchSource be exported (or a method added to access it)?

Standardize which repo to use

The linked documentation is for github.com/olivere/elastic, but the instructions are for installing gopkg.in/olivere/elastic.v1, and there's substantial differences between the two.

This threw me off for a while because I was trying to reference functions that exist in the github repo and are used in unit tests, but are missing from the gopkg repo. In hindsight it's obvious, but after having used the lib for a few days, I totally forgot I wasn't using the github repo. I think this will likewise throw off people who don't know what gopkg is.

How about referencing the same repo for both docs and installation instructions, and then adding a note about the existence of the other?

Add Http Basic Auth support to requests

We are using the newly released shield plugin for ES. This requires us to set the authentication header.

I'd like to add that to the client. Would you prefer this as a Request mutator, just like SetBodyJson in https://github.com/olivere/elastic/blob/023773e3454ae460b40687dbf893821c371a033f/request.go, or as a Client setting that is automatically added to each request? Since each request might require different authorisation, a per-request setting would not be the worst thing here...

Any thoughts?

RangeFilter support new format

In ES documentation:
The from, to, include_lower and include_upper parameters have been deprecated in favour of gt,gte,lt,lte.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/0.90/query-dsl-numeric-range-filter.html

In the current documentation, from, to, include_lower and include_upper no longer exist:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-range-filter.html

Does this need to be added, or does it already exist somewhere?

invalid memory address or nil pointer dereference goroutine

I use go-workers, and sometimes I run into problems like this. Could you help me?

JID-38fe0021c6a213377cee972d error: runtime error: invalid memory address or nil pointer dereference
goroutine 99 [running]:
github.com/jrallison/go-workers.func·007()
/root/.jenkins/jobs/chitu/workspace/go/src/github.com/jrallison/go-workers/middleware_logging.go:23 +0x280
github.com/jrallison/go-workers.func·008()
/root/.jenkins/jobs/chitu/workspace/go/src/github.com/jrallison/go-workers/middleware_retry.go:43 +0x566
github.com/jrallison/go-workers.func·009()
/root/.jenkins/jobs/chitu/workspace/go/src/github.com/jrallison/go-workers/middleware_stats.go:13 +0x74
github.com/olivere/elastic.(_Client).PerformRequest(0x0, 0xa92c80, 0x3, 0xc208467300, 0x13, 0xc208473b30, 0xa44f80, 0xc2080b57a0, 0x0, 0x0, ...)
/root/.jenkins/jobs/chitu/workspace/go/src/github.com/olivere/elastic/client.go:785 +0xec5
github.com/olivere/elastic.(_IndexService).Do(0xc2080b5880, 0xa44f80, 0x0, 0x0)
/root/.jenkins/jobs/chitu/workspace/go/src/github.com/olivere/elastic/index.go:206 +0xfba

Unchecked json.Unmarshal results in Blank Keys

In Elasticsearch it is possible to have keys that are not strings. So, for example, in:

func (a *AggregationBucketKeyItem) UnmarshalJSON(data []byte) error {
    var aggs map[string]json.RawMessage
    if err := json.Unmarshal(data, &aggs); err != nil {
        return err
    }
    a.Aggregations = aggs

    json.Unmarshal(aggs["key"], &a.Key)
    json.Unmarshal(aggs["doc_count"], &a.DocCount)
    return nil
}

We get an empty key result because the Unmarshal fails, and since the code isn't checking for errors when unmarshaling, this is a silent error.

Struct not populated on search

    type Search struct {
            Title   string    `json:"title"`
            Url     string    `json:"url"`
            Tags    []string  `json:"tags,omitempty"`
            Created time.Time `json:"created,omitempty"`
    }

    termQuery := elastic.NewTermQuery("title", "avelino")
    searchResult, err := client.Search().
            Index("poorny").
            Query(termQuery).
            From(0).Size(10).
            Pretty(true).
            Do()
    if err != nil {
            panic(err)
    }
    fmt.Printf("Query took %d milliseconds\n", searchResult.TookInMillis)
    if searchResult.Hits != nil {
            for _, hit := range searchResult.Hits.Hits {
                    var s Search
                    hit.Source.MarshalJSON()
                    err := json.Unmarshal(*hit.Source, s)
                    if err != nil {
                    }
                    fmt.Printf("Tweet by %s: %s\n", s.Title, s.Url)
            }
    }

Return:

Query took 1 milliseconds
Tweet by : 
Tweet by : 
Tweet by : 
Tweet by : 
Tweet by : 
Tweet by : 
Tweet by : 
Tweet by : 

Mapping

Hi @olivere,

I am busy adding mapping to the create-index function. You can see the progress of this here. It's a bit quick and dirty and I will create some helper/convenience methods:

https://github.com/emilebosch/elastic/tree/feature/add-mapping

To use it, the code would look like this (which is a pain):

index := es.Index{Mappings: make(map[string]es.Mapping)}
index.Mappings["combi"] = es.Mapping{Properties: make(map[string]es.Property)}
index.Mappings["combi"].Properties["AboProvider"] = es.Property{Index: "not_analyzed", Type: "string"}
index.Mappings["combi"].Properties["AboSoort"] = es.Property{Index: "not_analyzed", Type: "string"}

client.DeleteIndex(INDEX).Do()
client.CreateIndex(INDEX).
  Mapping(index).
  Do()

I was also thinking of maybe decorating the structs, so that you can just define the analyzer etc. at the struct level. Such as:

type NewsItem struct {
  NewsItem       string `json:"newsitem" es:"not_analyzed, string"`
}

What do you think?

fuzzy and fuzzy_like_this?

Hi,
First of all, this package is AWESOME!! It's really comfortable and intuitive :).
I would like to know if you have thought about adding the fuzzy and fuzzy_like_this queries in the near future. I'm really interested in this!

Thanks!

Add Support for regexp Query

Let me know if I'm getting annoying and I can try to get caught up on the library to contribute :-/ But now I'm also looking for the regexp Query.

The regexp filter is working as expected - thanks a lot for that!

How to cook thrift

Hello, can you please suggest where I can start if I want to try Elasticsearch with Thrift?

Problems With Sniffing

I'm running Elasticsearch v1.4.4 in a Docker container. I kept having trouble getting the client to work properly. I was trying to run the sample in the README (obviously pointing to my Docker container instead of localhost). It was taking ~30 seconds to create the client, and then would fail to create the index with the error: no Elasticsearch node available.

As soon as I turned off sniffing when creating the client (elastic.SetSniff(false)), everything worked perfectly. It doesn't really bother me that I have to turn sniffing off, but I wanted to raise this issue to see if anyone else has run into something like it.

P.S. @olivere - The documentation is awesome! 👍

About error handling

Is there any method to access the Error struct?
I'm new to this.
Thanks!

Not obvious behaviour of sorting

Currently you can use either SortBy or Sort/SortWithInfo, but not both together.
I did something like:

engineRequest = engineRequest.Sort("field1", false)
engineRequest.SortBy(elastic.NewScriptSort(DEFAULT_SCORING, "number").Lang("groovy"))
engineRequest = engineRequest.Sort("field2", false)

As a result I only get ordering by the script. I think this is strange behaviour; I expected to get sorts by field1, script, field2.

It happens because of this code:

if len(s.sorters) > 0 {
    sortarr := make([]interface{}, 0)
    for _, sorter := range s.sorters {
        sortarr = append(sortarr, sorter.Source())
    }
    source["sort"] = sortarr
} else if len(s.sorts) > 0 {
    sortarr := make([]interface{}, 0)
    for _, sort := range s.sorts {
        sortarr = append(sortarr, sort.Source())
    }
    source["sort"] = sortarr
}

I understand that I can avoid it by using elastic.NewFieldSort("field1").Desc(), but shouldn't Sort and SortWithInfo then be marked as deprecated?

The type for Max/Min aggregation's Value

Hi again :)

Is there any reason that the Value field on AggregationValueMetric is of type float64? (see here)

I'm using date histogram aggregation with max aggregation (on field which is of type time.Time).

So the results returned from pure ES are:

...
"key_as_string": "2015",
"key": 1420070400000,
"doc_count": 9256,
"latest": {
    "value": 1423589933165
},
...

but in elastic I'm getting 1.422469422e+12 for the latest value... Obviously the original int(64?) is cast to float64.

The ES docs do not specify a type for Min/Max; they only mention "...returns the maximum value among the numeric values extracted from the aggregated documents..."

Ideally, I'd prefer it to return the same type as the original value.

Thanks, Maciek

Generating SubAggregations

I'm having trouble figuring out how to generate an elastic query with nested sub-aggregations. I'm after a query like the following if the user were to give logsource,pid (this list can be 1 to n in length):

{
   "query":{
      "match":{
         "message":"error"
      }
   },
   "aggs":{
      "g_logsource":{
         "terms":{
            "field":"logsource"
         },
         "aggs":{
            "g_pid":{
               "terms":{
                  "field":"pid"
               },
               "aggs":{
                  "ts":{
                     "date_histogram":{
                        "field":"@timestamp",
                        "interval":"1h"
                     }
                  }
               }
            }
         }
      }
   }
}

I've tried the following:

    ts := elastic.NewDateHistogramAggregation().Field("@timestamp").Interval(strings.Replace(interval, "M", "n", -1))
    keys := strings.Split(keystring, ",")
    aggregation := elastic.NewTermsAggregation().Field("g_" + keys[0])
    if len(keys) > 1 {
        for _, key := range keys[1:] {
            aggregation = aggregation.SubAggregation("g_"+key, elastic.NewTermsAggregation().Field(key))
        }
    }
    s = s.Aggregation("aggs", aggregation.SubAggregation("ts", ts))

But it seems to produce adjacent aggregations instead of sub-aggregations - here is the generated query according to the debug output:

{
   "aggregations":{
      "aggs":{
         "aggregations":{
            "g_pid":{
               "terms":{
                  "field":"pid"
               }
            },
            "ts":{
               "date_histogram":{
                  "field":"@timestamp",
                  "interval":"5m"
               }
            }
         },
         "terms":{
            "field":"g_logsource"
         }
      }
   },
   "query":{
      "match":{
         "message":{
            "query":"error"
         }
      }
   }
}

Could you provide guidance on how to do this?

Accessing Sub Aggregation Results?

With the following:

    q := elastic.NewMatchQuery("message", "error")
    dh := elastic.NewDateHistogramAggregation().Field("@timestamp").Interval("5m")
    t := elastic.NewTermsAggregation().Field("logsource").SubAggregation("ts", dh)
    result, err := client.Search().Index("logstash-2014.11.07").Query(q).Aggregation("groups", t).Do()

I'm not really sure how to access the sub aggregations.

fmt.Println(string(result.Aggregations["groups"])) returns {"buckets":[{"key":"ny","doc_count":200851,"ts":{"buckets":[{"key_as_string":"2014-11-07T00:00:00.000Z","key":1415318400000,"doc_count":879}

So the sub aggregation seems to be working, however:

    if groups, ok := result.GetAggregation("groups"); ok {
        _, hmm := groups.GetAggregation("ts")
        fmt.Println(hmm)
    }

returns false

How to construct filter query?

Hi there,

Thanks for a great library :-)

I'm having a bit of trouble constructing a Search API query; in the Query DSL this looks like (truncated for readability):

{
    "filter": {
        "term": {
           "Status": "3"
       }
    },
   "query": {
      "match": {
          "MyField": {
              "query": "something",
              "minimum_should_match": "75%"
          }
      }
   }
}

I have constructed this query using elastic, but I can't figure out how to make the filter work.

q := elastic.NewMatchQuery("MyField", my_query).MinimumShouldMatch("75%")
t := elastic.NewQueryFilter(elastic.NewTermFilter("Status", "3"))
sres, _ := dbc.Search().
    Index("MyIndex").Type("MyType").
    Query(&q).Do()

I don't see a Filter method, but there's PostFilter (too slow in my case).

Perhaps I should somehow pass t to q and then to the Query method?

Would appreciate some hints.

Problems on connect

@DasHaus, I copied it over from #57:

Hi,
I have the same problem here:

panic: main: conn db: no Elasticsearch node available

goroutine 1 [running]:
log.Panicf(0x84de50, 0x11, 0xc2080c7e90, 0x1, 0x1)
    /usr/local/go/src/log/log.go:314 +0xd0
main.init·1()
    /Users/emilio/go/src/monoculum/init.go:40 +0x348
main.init()
    /Users/emilio/go/src/monoculum/main.go:334 +0xa4

goroutine 526 [select]:
net/http.(*persistConn).roundTrip(0xc2088ad1e0, 0xc2086a9d50, 0x0, 0x0, 0x0)
20:30:13 app         |  /usr/local/go/src/net/http/transport.go:1082 +0x7ad
net/http.(*Transport).RoundTrip(0xc20806c000, 0xc2086f6000, 0xc20873ff50, 0x0, 0x0)
20:30:13 app         |  /usr/local/go/src/net/http/transport.go:235 +0x558
20:30:13 app         | net/http.send(0xc2086f6000, 0xed4f18, 0xc20806c000, 0x21, 0x0, 
20:30:13 app         | 0x0)
    /usr/local/go/src/net/http/client.go:219
20:30:13 app         |  +0x4fc
net/http.(*Client).send(0xc08b00, 0xc2086f6000, 0x21
20:30:13 app         | , 0x0, 0x0)
    /usr/local/go/src/net/http/client.go:142 +0x15b
20:30:13 app         | net/http.(*Client).doFollowingRedirects(0xc08b00, 0xc2086f6000, 0x97cd00, 0x0, 0x0, 0x0)
20:30:13 app         |  /usr/local/go/src/net/http/client.go:367 +0xb25
net/http.(*Client).Do(0xc08b00, 0xc2086f6000, 0xc20873fce0, 0x0, 
20:30:13 app         | 0x0)
    /usr/local/go/src/net/http/client.go
20:30:13 app         | :174 +0xa4
github.com/olivere/elastic.(*Client).sniffNode(0xc208659d10, 0xc208569920, 0x15
20:30:13 app         | , 0x0, 0x0, 0x0)
20:30:13 app         |  /Users/emilio/go/src/github.com/olivere/elastic/client.go:543
20:30:13 app         |  +0x16a
20:30:13 app         | 
github.com/olivere/elastic.func·014(0xc208569920, 0x15
20:30:13 app         | )
    /Users/emilio/go/src/github.com/olivere/elastic/client.go:508 +0x47
20:30:13 app         | created by github.com/olivere/elastic.(*Client).sniff
    /Users/emilio/go/src/github.com/olivere/elastic/client.go:508 +0x744

goroutine 525 [chan receive]:
20:30:13 app         | database/sql.(*DB).connectionOpener(0xc2086de960)
    /usr/local/go/src/database/sql/sql.go:589 +0x4c
created by database/sql.Open
    /usr/local/go/src/database/sql/sql.go:452 +0x31c

goroutine 529 [IO wait]:
20:30:13 app         | net.(*pollDesc).Wait(0xc2084fe370, 0x72, 0x0
20:30:13 app         | , 
20:30:13 app         | 0x0)
    /usr/local/go/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc2084fe370, 0x0, 0x0)
    /usr/local/go/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).Read(0xc2084fe310, 0xc208709000, 0x1000, 0x1000, 0x0, 0xed4d48, 0xc2086a9ec8)
    /usr/local/go/src/net/fd_unix.go:242 +0x40f
net.(*conn).Read(0xc20896a800, 0xc208709000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
    /usr/local/go/src/net/net.go:121 +0xdc
net/http.noteEOFReader.Read(0xef0410, 0xc20896a800, 0xc2088ad238, 0xc208709000, 0x1000, 0x1000, 0xeb7010, 0x0, 0x0)
    /usr/local/go/src/net/http/transport.go:1270 +0x6e
net/http.(*noteEOFReader).Read(0xc208569b40, 0xc208709000, 0x1000, 0x1000, 0xc207f6957f, 0x0, 0x0)
    <autogenerated>:125 +0xd4
bufio.(*Reader).fill(0xc2088f3c80)
    /usr/local/go/src/bufio/bufio.go:97 +0x1ce
bufio.(*Reader).Peek(0xc2088f3c80, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
    /usr/local/go/src/bufio/bufio.go:132 +0xf0
net/http.(*persistConn).readLoop(0xc2088ad1e0)
    /usr/local/go/src/net/http/transport.go:842 +0xa4
created by net/http.(*Transport).dialConn
    /usr/local/go/src/net/http/transport.go:660 +0xc9f

goroutine 530 [select]:
net/http.(*persistConn).writeLoop(0xc2088ad1e0)
    /usr/local/go/src/net/http/transport.go:945 +0x41d
created by net/http.(*Transport).dialConn
    /usr/local/go/src/net/http/transport.go:661 +0xcbc

This occurs sometimes... not always...

curl -XGET 127.0.0.1:9200/_nodes/http?pretty=1
{
  "cluster_name" : "elasticsearch",
  "nodes" : {
    "3l_Ing0oSfWu5U63US5kxg" : {
      "name" : "Rattler",
      "transport_address" : "inet[192.168.1.91/192.168.1.91:9300]",
      "host" : "Mac-Emilio",
      "ip" : "192.168.1.91",
      "version" : "1.3.4",
      "build" : "a70f3cc",
      "http_address" : "inet[/192.168.1.91:9200]",
      "http" : {
        "bound_address" : "inet[/0:0:0:0:0:0:0:0:9200]",
        "publish_address" : "inet[/192.168.1.91:9200]",
        "max_content_length_in_bytes" : 104857600
      }
    }
  }
}

How to define the analyzer when creating index?

I'm here to ask a question: how do I define the analyzer when creating an index?

It seems that by default it will not use the default analyzer I defined in elasticsearch.yml.
index.analysis.analyzer.default.type: ik

If I request:
http://localhost:9200/useridx/_search?q=Lin it doesn't use the default analyzer.

And this works:
http://localhost:9200/useridx/_search?analyzer=ik&q=Lin

Infinite Retries

Hi @olivere. I had a problem where my elastic cluster went down, and it seemed like my elastic queries kept retrying themselves without stopping. I see something about retries in one of the latest commits. Was this a known issue with this module - or should I be looking at my usage of the library?

Currently using an older vendored version.

How to create free text query

Following your example in the code provided, I am trying to do a free text search. It is clear that passing elastic.NewTermQuery("user", "olivere") will filter on user field.
I have tried passing a map[string]interface{} to satisfy the Query interface in the code, but it fails. In this example I am looking for the word "leyzaola" matching any field.
This is the code I was playing with:
var qs map[string]string
qs["query"] = "leyzaola"
termQuery := elastic.NewTermFilter("query_string", qs)
searchResults, err := client.Search().Index("users").Query(&termQuery).Sort("name", true).From(0).Size(5).Do()

Could you please tell me how to perform such a query?
Thanks.

Aggregations returned as base64 encoded string (json.RawMessage)

If you json.Marshal(result) on a query result with aggregations, the aggregation results are marshaled as a base64-encoded string, whereas the query hits are not, which is a bit of a hassle. You can test it by marshaling the result from a query execution that uses aggregations. I tested with TermsAggregation.

You can solve it by using result.Aggregations... but not for entire result response.

type Aggregations map[string]json.RawMessage

It's because json.RawMessage is not a pointer (reference: https://groups.google.com/forum/#!topic/Golang-Nuts/38ShOlhxAYY).

Changing it to type Aggregations map[string]*json.RawMessage along with its usage in search_aggs.go solves it :) But I am not sure about other ramifications due to this.

If you like this, I can submit a pull request. For now, I have just forked the repo.

Thanks.

Some problems when using Fields

I'm using a SearchService with some fields. When I get the search hits and want to access the Source, it gives me nil,

so I have to get the data with something like "hit.Fields["userid"].([]interface{})[0].(string)".

Is there another way to get the result from the Search with fields?
Thanks~

json.Marshal() on a result.Aggregations returns invalid JSON

When marshaling the result of an aggregation to JSON, it produces invalid JSON. The code below is the relevant snippet from my code:

termsAgg := elastic.NewTermsAggregation().
        Field("_type").
        OrderByAggregation("topScore", false).
        SubAggregation("topScore", elastic.NewMaxAggregation().Script("_score")).
        SubAggregation("types", elastic.NewTopHitsAggregation().Size(5))

searchRequest := client.Search().
        Index("myindex").
        Query(query).
        Aggregation("top_results", termsAgg)

result, err := searchRequest.Do()
if err != nil {
        //Actually doing real error handling here, but I cut that for readability
        log.Println("Error: SearchHandler:", err)
        return
}

aggRes := result.Aggregations

topResults, found := aggRes.Terms("top_results")
if !found {
        log.Println("Error: no results found")
        return
}

resultJson, err := json.Marshal(topResults)

Result (trimmed):

{  
   "Aggregations":{  
      "buckets":[  ],
      "doc_count_error_upper_bound":0,
      "sum_other_doc_count":0
   },
   "DocCountErrorUpperBound":0,
   "SumOfOtherDocCount":0,
   "Buckets":[  
      {  
         "Aggregations":{  
            "doc_count":39,
            "key":"ne",
            "topScore":{  
               "value":2.060434579849243
            },
            "types":{  
               "hits":{  
                  "total":39,
                  "max_score":2.0604346,
                  "hits":[  
                     {  },
                     {  },
                     {  },
                     {  },
                     {  }
                  ]
               }
            }
         },
         "Key":"ne",
         "KeyNumber":ne,
         "DocCount":39
      }
   ]
}

Notice the unquoted value of KeyNumber.

It looks like this is an oversight from solving #51, but I'm not sure.

Is it possible to unmarshal JSON to a filter that can be then be used with the builders?

I want to use the builders to construct a Scan with a query for use in an export web service. The user can specify a filter that they want to use using JSON. It will consist of a "must" or "should" filter containing an array of term filters, etc. I would like to build a Filter from the user defined JSON and then add it to my Scan and run it. At the moment I can't see how that's possible without writing my own unmarshal code.

Function score weight not supported parsing exception

As per the discussion in #74 (comment), the function weighting is still causing an issue. I am getting the following exception raised when adding the weight onto a score function, as per the test example (https://github.com/olivere/elastic/blob/master/search_queries_fsq_test.go#L72)

        QueryParsingException[[.....] field_value_factor query does not support [weight]

As far as I can tell, this needs to be added to the FunctionScoreQuery, similar to the original PR implementation.

Thanks
