paginator's Issues

cursor fields can't find field inside select ordered by fragment

Hey guys.

I've been having some issues executing queries with select bindings and ordering by these bindings using fragments.
If I have a query that uses :select bindings, for example, and is ordered by one of them using a fragment:

from(
  u in User,
  join: c in Contact,
  on: c.user_id == u.id,
  order_by: fragment("new_field"),
  select: %{
    new_field: c.some_field
  }
)

My cursor fields can't find new_field on my query during pagination; neither of these options worked:

[cursor_fields: [{:new_field, :desc}], limit: 10]
[cursor_fields: [{"new_field", :desc}], limit: 10]

I also tried using fetch_cursor_value_fun, without success:

fetch_cursor_value_fun: fn
  schema, :new_field ->
    schema.new_field

  schema, field ->
    Paginator.default_fetch_cursor_value(schema, field)
end,

Am I making a mistake, or is it just not possible to do this?
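For comparison, a sketch based on the custom cursor field form that appears later in this issue list (the {{binding, fn -> dynamic(...) end}, direction} shape). Whether it covers this select-binding case is an assumption, not something confirmed here; the :contact named binding is added purely for illustration.

import Ecto.Query

# Untested sketch: name the join binding and point the cursor field at a
# dynamic instead of the select alias, since SQL cannot reference select
# aliases in WHERE clauses.
query =
  from(
    u in User,
    join: c in Contact,
    as: :contact,
    on: c.user_id == u.id,
    order_by: [desc: c.some_field, desc: u.id],
    select: %{new_field: c.some_field, id: u.id}
  )

Repo.paginate(query,
  cursor_fields: [
    {{:contact, fn -> dynamic([contact: c], c.some_field) end}, :desc},
    {:id, :desc}
  ],
  fetch_cursor_value_fun: fn
    row, {:contact, _} -> row.new_field
    row, field -> Paginator.default_fetch_cursor_value(row, field)
  end,
  limit: 10
)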

Order of keyword arguments is the reverse of Ecto's

The keyword args: desc: :id vs. id: :desc; the former is Ecto, the latter is paginator.

Is there a reason why they're incongruent, and if not, would y'all accept a PR to switch that to be congruent to what Ecto is currently doing?
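To illustrate the mismatch (the schema and query here are only illustrative):

import Ecto.Query

# Ecto puts the direction first:
query = from u in User, order_by: [desc: :id]

# paginator's cursor_fields put the field first:
Repo.paginate(query, cursor_fields: [id: :desc], limit: 10)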

Name collision with scrivener_ecto

Hello!

This library and scrivener_ecto both provide a metaprogrammed API that generates a paginate function on the user's Repo.

Given they share the same name, what is the recommended method for taking advantage of both libraries in one application? Using multiple repos would result in multiple connection pools, which has performance implications.

Thanks,
Louis
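One way to sidestep the collision, sketched under the assumption that calling Paginator.paginate/4 directly (as shown elsewhere in this list) works without the generated Repo.paginate: let scrivener_ecto own Repo.paginate and skip `use Paginator` in the Repo.

# Sketch: scrivener_ecto keeps the generated MyApp.Repo.paginate/2, while
# paginator is invoked through its module function. MyApp.Repo and the opts
# are illustrative.
Paginator.paginate(query, [cursor_fields: [:id], limit: 10], MyApp.Repo, [])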

Using total_count_limit raises an exception

Hello,
I found a little bug concerning the total_count_limit parameter.

This is what I have in my query params:

params: %{include_total_count: true, limit: 10, total_count_limit: "20"}

And this is what I get when running the query:

[error] #PID<0.802.0> running OssWeb.Endpoint (connection #PID<0.801.0>, stream id 1) terminated
Server: localhost:4013 (http)
Request: GET /v1/oem/test/subscriber?limit=10&include_total_count=true&total_count_limit=20
** (exit) an exception was raised:
    ** (ArithmeticError) bad argument in arithmetic expression
        (paginator 1.1.0) lib/paginator.ex:371: Paginator.total_count/4
        (paginator 1.1.0) lib/paginator.ex:181: Paginator.paginate/4

Looks like lib/paginator.ex:371, which is

      |> limit(^(total_count_limit + 1))

should be replaced with:

      |> limit(^total_count_limit + 1)

What do you think?

Regards,
Jerome
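In the meantime, a caller-side workaround sketch, under the assumption that the ArithmeticError comes from the string "20" reaching the arithmetic: coerce the query-string value to an integer before calling paginate.

# Illustrative normalization, not part of paginator: total_count_limit arrives
# as a string from the query string, so coerce it first.
to_int = fn
  value when is_binary(value) -> String.to_integer(value)
  value when is_integer(value) -> value
end

Repo.paginate(query,
  include_total_count: true,
  limit: params.limit,
  total_count_limit: to_int.(params.total_count_limit)
)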

What is the purpose of `cursor_fields`

I know that they're used to create a cursor and then match on it, but why would you not just use :id, assuming :id is a unique value?

You have [:charged_at, :amount, :id], but assuming :id is unique, what's the value of adding the other two?
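For context, a sketch of the pattern the [:charged_at, :amount, :id] example seems to assume, namely that cursor_fields mirror the query's order_by (the Payment schema here is illustrative):

import Ecto.Query

# If rows are sorted by charged_at and amount, the cursor has to carry those
# values too; knowing only the last :id would not tell the database where in
# the (charged_at, amount) ordering to resume.
query = from p in Payment, order_by: [asc: p.charged_at, asc: p.amount, asc: p.id]

Repo.paginate(query, cursor_fields: [:charged_at, :amount, :id], limit: 50)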

Change description to match actual behaviour

What this library provides is index-based pagination, which is different from cursor-based pagination. This difference may not be relevant for most web developers, but for DBAs it is quite important. The main difference is that cursor-based pagination can be bidirectional, but it comes at a price: you need to maintain a continuous connection to the DB, which means it is not stateless pagination.

Logo contribution

Hi. I am a graphic designer and volunteer to design logos for open source projects. I can design one for you to use in the text file. What do you say?

cursor_fields yields an error

Version: 1.0.2
Cursor Fields: [{{:person, :last_name}, :asc}, {{:person, :first_name}, :asc}]
Error:

** (exit) an exception was raised:
    ** (ArgumentError) expected a keyword list, but an entry in the list is not a two-element tuple with an atom as its first element, got: {{:person, :last_name}, :asc}
        (elixir 1.11.2) lib/keyword.ex:475: Keyword.keys/1
        (paginator 1.0.1) lib/paginator/config.ex:73: Paginator.Config.cursor_values_match_cursor_fields?/2
        (paginator 1.0.1) lib/paginator/config.ex:54: Paginator.Config.validate!/1
        (paginator 1.0.1) lib/paginator.ex:177: Paginator.paginate/4

Expected: Cursors given by this library should also be able to return a result.

I'll directly work on a PR to fix this issue.

Binary cursor field

I am using a UUID as the primary key and have chosen [:inserted_at, :id] as cursor fields.
The cursor successfully gets encoded/decoded, but while executing in Postgres it produces the following error:

(DBConnection.EncodeError) Postgrex expected a binary of 16 bytes, got "6f99aeef-a057-4562-8ef6-d35e310b3593". Please make sure the value you are passing matches the definition in your table or in your query or convert the value accordingly.

So is there a workaround for this? Maybe some way to customize cursor encode/decode?
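One possible direction, as an untested sketch rather than a confirmed fix: use fetch_cursor_value_fun (shown elsewhere in this list) to store the dumped 16-byte UUID in the cursor, so the value sent back to Postgres matches what Postgrex expects.

# Untested sketch: store the raw 16-byte form of the UUID in the cursor.
# Assumes the :id column is a native uuid, so Ecto.UUID.dump/1 applies.
Repo.paginate(query,
  cursor_fields: [:inserted_at, :id],
  fetch_cursor_value_fun: fn
    schema, :id ->
      {:ok, binary_id} = Ecto.UUID.dump(schema.id)
      binary_id

    schema, field ->
      Paginator.default_fetch_cursor_value(schema, field)
  end,
  limit: 50
)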

It should allow a user to pass parameters directly.

Hi,
Because the front-end always passes a map with string keys, I think the library should normalize params from a map, like Scrivener does, since it currently expects a keyword list for parameters.

There are two ways of doing this:

  1. Normalize params from a map to a keyword list (a sketch of this option follows below).
  2. Use the map directly in the config struct.

Which kind do you prefer?
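A minimal sketch of option 1, assuming the caller whitelists the keys it accepts; the module and helper names are illustrative, not part of paginator:

defmodule MyApp.PaginationParams do
  @allowed ~w(limit after before include_total_count)

  # Turn %{"limit" => "10", "after" => "...", ...} from the query string into
  # the keyword list paginator expects.
  def normalize(params) when is_map(params) do
    for {key, value} <- params, key in @allowed do
      {String.to_existing_atom(key), cast(key, value)}
    end
  end

  defp cast("limit", value) when is_binary(value), do: String.to_integer(value)
  defp cast("include_total_count", value), do: value in [true, "true"]
  defp cast(_key, value), do: value
end

# Usage:
# opts = MyApp.PaginationParams.normalize(params) ++ [cursor_fields: [:id]]
# Repo.paginate(query, opts)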

Cursor decode does not work for custom sort field with `fragment`

Custom field with a fragment, passed as an anonymous function:

cursor_fields = [
  {{:vendors,
    fn ->
      dynamic([vendors: v], fragment("LOWER(?)", v.name))
    end}, :asc},
  {:id, :asc}
]

A call without after_cursor works:

opts = [
  limit: args[:limit] || @hard_limit,
  cursor_fields: cursor_fields,
  fetch_cursor_value_fun: fetch_cursor_value_fun
]

%{metadata: %{after: after_cursor}} = Paginator.paginate(query, opts, Db.Repo, [])

However, when paginating with after, &Paginator.Cursor.decode/1 fails:

opts = [
  limit: args[:limit] || @hard_limit,
  cursor_fields: cursor_fields,
  fetch_cursor_value_fun: fetch_cursor_value_fun,
  after: after_cursor
]

Paginator.paginate(query, opts, Db.Repo, []) # <- Fails

Output:

     ** (ArgumentError) cannot deserialize #Function<2.128102115/0 in Clients.Queries.CollectionQuery..../1>, the term is not safe for deserialization
     stacktrace:
       (plug_crypto 1.2.5) lib/plug/crypto.ex:80: Plug.Crypto.non_executable_terms/1
       (plug_crypto 1.2.5) lib/plug/crypto.ex:99: Plug.Crypto.non_executable_tuple/2
       (plug_crypto 1.2.5) lib/plug/crypto.ex:65: anonymous fn/3 in Plug.Crypto.non_executable_terms/1
       (stdlib 5.2) maps.erl:416: :maps.fold_1/4
       (plug_crypto 1.2.5) lib/plug/crypto.ex:51: Plug.Crypto.non_executable_binary_to_term/2
       (paginator 1.2.0) lib/paginator/config.ex:44: Paginator.Config.new/1
       (paginator 1.2.0) lib/paginator.ex:175: Paginator.paginate/4

It is documented already:
https://github.com/duffelhq/paginator/pull/166/files#diff-b335630551682c19a781afebcf4d07bf978fb1f8ac04c6bf87428ed5106870f5R190

The function

fn ->
  dynamic([vendors: v], fragment("LOWER(?)", v.name))
end

is not a safe term to decode in Paginator.Cursor (deps/paginator/lib/paginator/cursor.ex:9).

Idea: maybe support an {:mfa, module, function, args} form as a way to decode after_cursor.
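A purely hypothetical illustration of what the proposed {:mfa, ...} form could look like; this is not an existing paginator option, and the function name below is made up:

# Hypothetical: reference a named function instead of an anonymous one, so the
# cursor never has to (de)serialize a closure.
defmodule Clients.Queries.CollectionQuery do
  import Ecto.Query

  def lower_vendor_name do
    dynamic([vendors: v], fragment("LOWER(?)", v.name))
  end
end

cursor_fields = [
  {{:vendors, {:mfa, Clients.Queries.CollectionQuery, :lower_vendor_name, []}}, :asc},
  {:id, :asc}
]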

Weird behaviour

I created a small test Phoenix app because I had strange results within my main app, but I get weird results here too: I created a simple schema for Tickets containing a subject and a message, generated 100 entries (all message and subject fields contain the id), and this is what I put in my controller:

@paging_factor 5

query = from t in Ticket, order_by: t.subject

Return the first 5 posts:

page1 = Repo.paginate(query, cursor_fields: [:subject, :id], limit: @paging_factor)
IO.puts("first take:")

for item <- page1.entries do
  IO.puts(item.message)
end

Output is:
1
2
3
4
5

So far so good. But now:

Assign the after cursor to a variable:

cursor_after = page1.metadata.after

Return the next 5 posts:

page2 = Repo.paginate(query, after: cursor_after, cursor_fields: [:subject, :id], limit: @paging_factor)
IO.puts("Next take...")

for item <- page2.entries do
  IO.puts(item.message)
end

This results in this output:
7
8
9
10
11

So it skipped the 6th object (and every time I use the next cursor.after it skips one object)... But now it gets really weird:

Assign the before cursor to a variable:

cursor_before = page2.metadata.before

Return the previous 5 posts (if no post was created in between, it should be the same list as in our first call to paginate):

page3 = Repo.paginate(query, before: cursor_before, cursor_fields: [:subject, :id], limit: @paging_factor)
IO.puts("Now go back...")

for item <- page3.entries do
  IO.puts(item.message)
end

Result:
4
3
2
1
0

My deps are:

{:phoenix, "~> 1.3.4"},
{:phoenix_pubsub, "~> 1.0"},
{:phoenix_ecto, "~> 3.2"},
{:postgrex, ">= 0.0.0"},
{:phoenix_html, "~> 2.10"},
{:phoenix_live_reload, "~> 1.0", only: :dev},
{:gettext, "~> 0.11"},
{:plug_cowboy, "~> 1.0"},
{:paginator, "~> 0.5"}
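Possibly relevant, though this is an assumption rather than a confirmed diagnosis: the query orders only by t.subject while cursor_fields is [:subject, :id]. A sketch with order_by mirroring the cursor fields, in case the mismatch is what causes the skipped record:

import Ecto.Query

# Sketch: make the query's order_by mirror cursor_fields exactly.
query = from t in Ticket, order_by: [asc: t.subject, asc: t.id]

page1 = Repo.paginate(query, cursor_fields: [subject: :asc, id: :asc], limit: @paging_factor)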

Cursor vs Keyset-based Pagination

Is there any chance that you could change the documentation (and repo title) and possibly replace and deprecate the cursor_ parameters with keyset_ parameters? Using 'Cursor' to describe what the library does gives the wrong impression about its implementation and leads to teams using inaccurate language, which just leads to unnecessary arguments and confusion when searching on Google without reading the excellent links you provide in the README.

Love the library, we're currently moving our production system to it. Many thanks.

New release

The update of plug_crypto to 1.2.0 (268a97d) was merged a month ago but isn't included in any release, so we have to fall back to using the git dependency.

Maybe you could cut a patch release (1.0.2) so it's available on Hex?

Wrong before and after key values whenever using select in a query

We've run into a problem while using Paginator with Ecto.Query.select/3.
Here is a code sample:

query = from u in User, order_by: [asc: u.inserted_at, asc: u.id], select: %{user: u}
Backend.Repo.paginate(query, cursor_fields: [inserted_at: :asc, id: :asc])

We get this metadata:

%Paginator.Page.Metadata{
    after: "g3QAAAACZAACaWRkAANuaWxkAAtpbnNlcnRlZF9hdGQAA25pbA==",
    before: nil,
    limit: 2,
    total_count: 20,
    total_count_cap_exceeded: false
}

And after applying it to fetch the next page, the before and after keys have the same values as the previous metadata:

%Paginator.Page.Metadata{
    after: "g3QAAAACZAACaWRkAANuaWxkAAtpbnNlcnRlZF9hdGQAA25pbA==",
    before: "g3QAAAACZAACaWRkAANuaWxkAAtpbnNlcnRlZF9hdGQAA25pbA==",
    limit: 2,
    total_count: 20,
    total_count_cap_exceeded: false
}

Any workaround here?
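A sketch of a possible workaround, stated as an assumption rather than a confirmed fix: since select wraps the struct under :user, the default cursor value lookup finds no top-level :inserted_at or :id, so fetch them from the nested struct via fetch_cursor_value_fun.

Backend.Repo.paginate(query,
  cursor_fields: [inserted_at: :asc, id: :asc],
  fetch_cursor_value_fun: fn
    # Entries are %{user: %User{}} maps, so read the cursor values off the
    # nested struct instead of the top-level map.
    %{user: user}, field -> Map.get(user, field)
    row, field -> Paginator.default_fetch_cursor_value(row, field)
  end
)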

null values get dropped from results

The generated SQL queries will silently drop null values, e.g. sort_order = '' and sort_order > '' will both exclude null values. So I guess this should currently only be used on non-nullable columns.

At least for string values. My guess is that other types will be affected as well.

Pagination does not work with joins

If the query has joins, the limit you apply to the query also counts joined records. As a result, you get fewer results than you expect or, if there are a lot of joined records, a single record without all of its joined assocs.

This is a classic error, which can be worked around like this:

select * from (select * from users limit 10 offset 10)  as u
left join files f
   on u.id = f.user_id

But the library sets the limit by itself, which does not allow an application developer to wrap a query in a subquery.
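For completeness, a sketch of one common workaround on the application side (the :files association name is illustrative): paginate the parent rows alone and load the association afterwards, so the limit applies only to users.

import Ecto.Query

# Keep joins out of the paginated query; preload on the returned entries.
page = Repo.paginate(from(u in User), cursor_fields: [:id], limit: 10)
entries = Repo.preload(page.entries, :files)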

Unsafe comparison with nil at Paginator.Ecto.Query

Hello there!

There is an error about unsafe comparison with nil which occurs when using multiple cursor_fields.
I checked the pull requests and found a solution by @0nkery. It was opened more than a month ago, but there has been no reaction so far. Are you going to review it?

Thanks!

Feature suggestion: `.paginated_stream/2` that mimics `Repo.stream/2`

We've just created a similar library ( https://github.com/allegro/ecto-cursor-based-stream ) that adds a cursor_based_stream/2 function that mimics the Repo.stream/2 function.

Just afterwards we noticed that paginator exists. Perhaps we could add a similar function here, e.g. cursor_based_stream or paginated_stream, that returns all items as a stream? Would you be interested in such a code contribution? Then our library wouldn't be needed much anymore.
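For discussion, a hypothetical sketch of what such a function could look like on top of Repo.paginate/2; the module, function name, and options are assumptions, not part of paginator's current API.

defmodule MyApp.PaginatedStream do
  # Lazily walk all pages by feeding metadata.after back into the next call.
  def stream(repo, query, opts \\ []) do
    :start
    |> Stream.unfold(fn
      nil ->
        nil

      cursor ->
        page_opts = if cursor == :start, do: opts, else: Keyword.put(opts, :after, cursor)
        page = repo.paginate(query, page_opts)
        {page.entries, page.metadata.after}
    end)
    |> Stream.flat_map(& &1)
  end
end

# Usage sketch:
# MyApp.PaginatedStream.stream(Repo, query, cursor_fields: [:id], limit: 500)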

Implement Paginator.Cursor in a more configurable way

Use case: encode/decode DateTime in a shorter variant, like Unix timestamp or so.

Example:

a = DateTime.utc_now                              
#DateTime<2019-04-02 11:43:12.567049Z>
a |> :erlang.term_to_binary() |> Base.url_encode64()
"g3QAAAANZAAKX19zdHJ1Y3RfX2QAD0VsaXhpci5EYXRlVGltZWQACGNhbGVuZGFyZAATRWxpeGlyLkNhbGVuZGFyLklTT2QAA2RheWECZAAEaG91cmELZAALbWljcm9zZWNvbmRoAmIACKcJYQZkAAZtaW51dGVhK2QABW1vbnRoYQRkAAZzZWNvbmRhDGQACnN0ZF9vZmZzZXRhAGQACXRpbWVfem9uZW0AAAAHRXRjL1VUQ2QACnV0Y19vZmZzZXRhAGQABHllYXJiAAAH42QACXpvbmVfYWJicm0AAAADVVRD"
a |> DateTime.to_unix(:microsecond) |> :erlang.term_to_binary |> Base.url_encode64()
"g24HAA27XHaShQU="

As you can see, the latter is much shorter.

Order by & preloads

Hi,
My first question is: how can I set order_by on some field? Let's say I've got a query like this:

defp compose_query({"order_by", "title_ascending"}, query) do
    order_by(query, [advert], advert.title)
end

Setting cursor_fields to [title: :asc] doesn't work for me; it seems that the metadata is somehow broken. Repo.paginate responds with a first page with correct entries, and the metadata looks like this:

%Paginator.Page.Metadata{
  after: "g2wAAAABZAADbmlsag==", #this is wrong
  before: nil,
  limit: 4,
  total_count: 7,
  total_count_cap_exceeded: false
}

Any advice? 😄


My second question is about sorting by a preloaded association field. Let's say I've got an advert that belongs to a user & a user has many adverts. I'm trying to paginate adverts ordered by user name.

My question is whether this is possible somehow.
I'm performing a query which looks like this:

defp compose_query({"order_by", "user_email_ascending"}, query) do
    join(query, :left, [advert], user in assoc(advert, :user))
    |> order_by([advert, user], user.email)
end

My guess would be, that I should set cursor_fields to [[:user, :email]], but it doesn't work.

I really hope for a fast answer 😄
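For the second question, a sketch under an assumption: that cursor_fields accepts {binding, field} tuples (as in the {{:person, :last_name}, :asc} example elsewhere in this list) when the join uses a matching named binding. Untested.

import Ecto.Query

# Name the join binding and reference it in cursor_fields; add :id as a
# unique tie-breaker.
query =
  Advert
  |> join(:left, [advert], user in assoc(advert, :user), as: :user)
  |> order_by([advert, user: user], asc: user.email, asc: advert.id)

Repo.paginate(query,
  cursor_fields: [{{:user, :email}, :asc}, {:id, :asc}],
  limit: 4
)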

Calling paginate with after: won't return all results and won't return new metadata.after in output

I'm trying to use the dep to provide a "load more" button that a user can press N times.

When the page loads the first time I call:
Repo.paginate(query, include_total_count: true, cursor_fields: [:priority, :parution_date, :id], limit: 2)

Then I use the metadata.after to trigger a second call:
Repo.paginate(query, after: cursor_after, cursor_fields: [:priority, :parution_date, :id], limit: 2)

This time I get a metadata.before and a metadata.after = nil, which doesn't give me the opportunity to call the method again and again.

Did I misunderstand the objective of the dep, or should a new metadata.after be provided until there are no more records?

Update hex release / broken desc pagination in 0.6.0

I am attempting to paginate results in descending order, and with the version I've installed from Hex (0.6.0), paginator encodes the same values for both before/after. The cursor decodes to [nil, nil].

It looks like this issue no longer exists on the master branch (I vendored this module into my project to test); however, there has not been a Hex release with these changes.

`cursor_fields` with virtual field

Hi there! :)

I'd like my cursor_fields to use a virtual field which is added with a select_merge. How can I accomplish this?

Thanks in advance!

Default fetch cursor value function fails when base schema has the same field as the binding's

I have a cursor with 2 fields, one being a preload, something like: [{:author, :name}, :name] (let's say the base schema is :post).

I think the default fetch cursor value function erroneously tries to fetch the field on the base schema first: https://github.com/duffelhq/paginator/blob/main/lib/paginator.ex#L243

As a result, if the base schema and the preloaded schema have the same field, then the wrong field from the base schema is used.

In the example above, that would result in the cursor having twice the post's name:

%{:id => 92575, :name => "Post's name", {:author, :name} => "Post's name"}

instead of

%{:id => 92575, :name => "Post's name", {:author, :name} => "Author's name"}

If I've not misunderstood how this library works, I'm ready to provide a PR!

help: validate cursors?

I'm using an Ecto embedded_schema in order to validate pagination params passed in the query string of my API.

However, I'm wondering how I can check whether a cursor is indeed valid, without having to rescue ArgumentError from Paginator.paginate/4.


  defp changeset(schema \\ %__MODULE__{}, params) do
    schema
    |> cast(params, @optional)
    |> validate_cursor()
    |> validate_number(:limit, greater_than: 0, message: "must be greater than 0")
  end

  defp validate_cursor(changeset) do
    cursor_before = get_field(changeset, :before)
    cursor_after = get_field(changeset, :after)

    case {cursor_before, cursor_after} do
      {nil, nil} ->
        changeset

      {_cursor_before, nil} ->
        changeset

      {nil, _cursor_after} ->
        changeset

      {_cursor_before, _cursor_after} ->
        changeset
        |> add_error(:before, "use either 'before' or 'after'")
        |> add_error(:after, "use either 'before' or 'after'")
    end
  end
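One option, sketched under the assumption that Paginator.Cursor.decode/1 (referenced elsewhere in this list) raises on malformed input: attempt the decode at validation time and turn a raise into a changeset error.

  # Sketch: reject cursors that cannot be decoded, instead of letting
  # Paginator.paginate/4 raise later.
  defp validate_cursor_format(changeset, field) do
    case get_field(changeset, field) do
      nil ->
        changeset

      cursor ->
        try do
          Paginator.Cursor.decode(cursor)
          changeset
        rescue
          _ -> add_error(changeset, field, "is not a valid cursor")
        end
    end
  end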

Query with a fragment seems to return the after and before keys swapped.

Example query:

Model
|> order_by([o], desc: fragment("date(?)", o.inserted_at))
|> group_by([o], fragment("date(?)", o.inserted_at))
|> select([o], %{fragment("date(?)", o.inserted_at) => sum(o.amount)})
|> Repo.paginate(Keyword.merge(opts, sort_direction: :desc, cursor_fields: [:inserted_at]))

It returns properly and seems to paginate correctly if I pass in the before key. For some reason the after key returns the same element. Any idea what is going on?

Comparison with composite to optimize performance?

If I'm not wrong, currently the where condition for the cursor values is built in this format:

where cursor_field1 > x and cursor_field2 > y

But couldn't we use a composite value (when the fields are compared with the same operator)?

where (cursor_field1, cursor_field2) > (x, y)

A composite is ordered left-to-right with further right fields only breaking ties.

So I think it would be faster, as the fields on the right are only compared in case of ties?
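For illustration, the row-value comparison can be written in Ecto with a fragment; this is a sketch of the suggestion, not what paginator currently generates, and the column names and cursor values are placeholders.

import Ecto.Query

# Composite (row-value) comparison: resumes after the (charged_at, id) pair.
{last_charged_at, last_id} = {~N[2023-01-01 00:00:00], 42}

where(query, [r], fragment("(?, ?) > (?, ?)", r.charged_at, r.id, ^last_charged_at, ^last_id))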

Suggestion: Change metadata

TL;DR

Make after and before always present and add has_next_page and has_previous_page booleans to the metadata.


Hello there 👋

I've recently started using this library to cursor paginate a list in a small Phoenix + Vue project and bumped into an issue that required a lot of work-arounds when using this in an infinite scroll scenario.

Problem

Picture this:
We want to paginate through 5 records ([1, 2, 3, 4, 5]) 3 at a time (limit: 3).
So, the first query returns something like this:

entries: [1, 2, 3], metadata: { after: 3, before: nil, ... }

On the client-side application I push the 3 entries to an items array and store the after cursor in a pagination object.
The second query will then ask for 3 records after the cursor 3, and paginator returns the following:

entries: [4, 5], metadata: { after: nil, before: 4, ... }

On the client-side application I push the 2 entries to the items array, and now the after cursor is null, so if I store and use it the next query will return the first 3 entries again. I can't use the before cursor either, because the next query would then return the 5th item again.

I built a bit of a complex workaround on the client side where I fetch the second-to-last and last items just to get the cursor of the last item.

Suggestion

Make the after and before cursors always present so that consumers of the API always have the beginning and end of the page.
Add two metadata properties to show whether there is a next page and a previous page.
I suggest has_next_page and has_previous_page, mimicking GraphQL's Relay-style pagination (see the sketch below): https://relay.dev/graphql/connections.htm#sec-undefined.PageInfo
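A hypothetical illustration of the proposed metadata shape; these fields do not exist on the current Paginator.Page.Metadata struct, and the cursor strings are illustrative:

# Proposed shape only: after/before are always present, plus explicit
# booleans for page availability.
%{
  after: "g3QAAAAB...",
  before: "g3QAAAAB...",
  has_next_page: false,
  has_previous_page: true,
  limit: 3
}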

What do you think?

Ecto3-RC

Hello,

Can paginator already be used with Ecto3-RC?
