palkan / logidze
Database changes log for Rails
License: MIT License
Hi, a small issue came to mind when looking into the gem for possible use.
When making a change, the timestamps are saved correctly, but the time in log_data seems to be off by ~48,000 years.
Record version:
{"c"=>{"updated_at"=>"2018-06-25 15:18:31.22996"},
"r"=>"37",
"v"=>3,
"ts"=>1529939911230}>
Converting ts:
Time.at(version.time) #=> 50451-11-12 20:40:30 +0000
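For what it's worth, the ts value above looks like milliseconds since the Unix epoch, while Time.at expects seconds. A plain-Ruby check (the literal is the ts from the record above):

```ruby
ts = 1_529_939_911_230 # "ts" from the record above (epoch milliseconds)

# Time.at interprets its argument as seconds, so feeding it milliseconds
# lands tens of thousands of years in the future:
Time.at(ts).utc.year   #=> 50451

# Dividing by 1000 first recovers the timestamp matching updated_at:
Time.at(ts / 1000).utc #=> 2018-06-25 15:18:31 UTC
```

So the stored value itself appears fine; it just needs a milliseconds-to-seconds conversion before being handed to Time.at.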
We do not handle cases such as adding or removing columns. That might lead to undesired behaviour (see #58).
Some thoughts.
When a new column has been added, we might have no initial value for it (i.e., in the first version). Hence, for the versions prior to adding the column, we should (either):
Not sure which one is the right way.
Sometimes it is really handy to save data describing changes in associated objects, but there is no gem for that. PaperTrail tries to do it, but it's an experimental feature that doesn't work very well.
If I had something like:
class User < ActiveRecord::Base
  has_many :attachments
end

class Attachment < ActiveRecord::Base
  belongs_to :user
end
and the user uploads a new attachment, there is not going to be meaningful log data that I can use later.
Best case scenario: I'll get log_data with timestamps (which will most likely be blacklisted) as the only changed attribute.
But I'd like the ability to build a changelog, or to revert to any previous version of User, including the matching versions of associated objects.
Unfortunately, I don't have a clear singular solution, hence a feature request instead of a pull request.
If I had to implement an ad-hoc solution, I'd just serialize every associated object and treat them as nested attributes, but I'm not really sure that's the best way of doing it.
Possibly related to #59
We have this migration:
class ChangeIsAgentToAccountType < ActiveRecord::Migration[5.2]
  def up
    add_column :users, :account_type, :integer, default: 0, null: false
    User.all.each do |user|
      user.update(account_type: user.is_agent ? User.account_types[:agent] : User.account_types[:user])
    end
    remove_column :users, :is_agent
  end

  def down
    add_column :users, :is_agent, :boolean, default: false, null: false
    User.all.each do |user|
      user.update(is_agent: user.account_type_agent? ? true : false)
    end
    remove_column :users, :account_type
  end
end
so the is_agent column is now gone, but it is still in the log data.
In our admin, we print all changes for a specific record:
panel 'Version history' do
  versions = []
  log_size = resource.log_size
  (1..log_size).each do |version|
    versions << resource.at(version: version)
  end
  table_for versions.reverse! do
    column('Version', &:log_version)
    column('Change') { |resource| log_data_change(resource) }
    column('Actions') { |resource| link_to 'Restore', "#{resource.id}/restore?log_version=#{resource.log_version}", method: :put, data: { confirm: t('are_you_sure') } }
  end
end
Error: can't write unknown attribute is_agent
At: resource.at(version: version)
Stacktrace:
activemodel (5.2.0) lib/active_model/attribute.rb:207:in `with_value_from_database'
activemodel (5.2.0) lib/active_model/attribute_set.rb:57:in `write_from_user'
activerecord (5.2.0) lib/active_record/attribute_methods/write.rb:51:in `_write_attribute'
activerecord (5.2.0) lib/active_record/attribute_methods/write.rb:45:in `write_attribute'
logidze (0.6.3) lib/logidze/model.rb:206:in `apply_column_diff'
logidze (0.6.3) lib/logidze/model.rb:198:in `block in apply_diff'
logidze (0.6.3) lib/logidze/model.rb:197:in `each'
logidze (0.6.3) lib/logidze/model.rb:197:in `apply_diff'
logidze (0.6.3) lib/logidze/model.rb:211:in `build_dup'
logidze (0.6.3) lib/logidze/model.rb:112:in `at_version'
logidze (0.6.3) lib/logidze/model.rb:70:in `at'
Seems like this could be fixed by the gem. Is there any temporary fix we could do?
UPDATE:
Temporary fix: resource.log_data.data['h'].map { |l| l['c'].delete('is_agent') }
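To illustrate what that workaround does, here is the same operation against a plain hash mimicking the log_data structure shown earlier (the "h" array of versions, each with its change-set under "c"). This is a sketch, not the gem's API:

```ruby
# A hash in the shape of Logidze's log_data: "h" lists the versions,
# each holding its changes under "c".
log = {
  "h" => [
    { "c" => { "name" => "Alice", "is_agent" => true }, "v" => 1, "ts" => 1 },
    { "c" => { "is_agent" => false },                   "v" => 2, "ts" => 2 }
  ],
  "v" => 2
}

# Strip the dropped column from every version's change-set, so replaying
# old versions no longer tries to write the missing attribute:
log["h"].each { |version| version["c"].delete("is_agent") }

log["h"].map { |v| v["c"].keys } #=> [["name"], []]
```

Like the one-liner above, this only mutates the in-memory hash; the record's log_data column would still need to be persisted afterwards.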
Looking at the generated migration, it looks like row collapsing only occurs when the number of log entries exactly matches the configured row limit.
For example, if a limit of 30 is set, any record that has fewer than 30 log_data entries works as expected; however, if a record already has 31+ entries, it does not collapse future entries and instead keeps adding new entries as if there were no limit.
Is this a good enhancement opportunity?
Currently you need to specify a timestamp to get associations processed by Logidze. If you specify a version (at: version), it will not set logidze_requested_ts and therefore will skip the Logidze processing.
I tried some code to set logidze_requested_ts when using at with a version, but it has issues. I can create an incomplete PR for review.
Hi, I am having some issues getting Logidze working. I have followed your install instructions, and whenever I try to create a record I get the following error:
PG::UndefinedFunction - ERROR: function to_jsonb(positions) does not exist
LINE 1: SELECT logidze_snapshot(to_jsonb(NEW.*), ts_column, columns_...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
QUERY: SELECT logidze_snapshot(to_jsonb(NEW.*), ts_column, columns_blacklist)
CONTEXT: PL/pgSQL function logidze_logger() line 20 at assignment
Any ideas?
I am using the Apartment gem, which tenants my customers using Postgres schemas; perhaps this is the problem?
From: #89 (comment)
The idea is the following: we want to reduce the data loaded from the DB by not loading the log_data column by default (using the ignored_columns feature), and only load log_data when explicitly specified (e.g. via a with_log_data scope).
The proposed API:
class User < ActiveRecord::Base
  has_logidze ignore_log_data: true
end
User.all #=> SELECT id, name FROM users
User.with_log_data #=> SELECT id, name, log_data FROM users
(optional, maybe another PR) Add an easier way to access log_data on a single record, e.g.:
user = User.find(params[:id])
user.log_data #=> raises (or nil?) since column is ignored
user.log_data! #=> loads log_data from DB
I've been saying "lodge-ize", which doesn't seem right. Is there a canonical pronunciation for the gem?
I've installed Logidze in a project without a conventional structure, and I'm receiving an exception:
class Project < Shared::ApplicationRecord
  has_logidze
undefined local variable or method `has_logidze' for Clients::Project(Table doesn't exist):Class
When I move has_logidze to a line after the table name is set in the model, I receive a slightly different exception:
class Project < Shared::ApplicationRecord
  self.table_name = 'projects'
  has_logidze
undefined local variable or method `has_logidze' for #<Class:0x00007fbb160a04c8>
I'm thinking that if I want to handle versioning manually, I'd disable logging globally and then, whenever I want to create a version, call:
MyModel.connection.execute(%(
  EXECUTE PROCEDURE logidze_snapshot(to_jsonb(#{attributes}), '{excluded columns}')
))
Would that work?
Hi,
First of all, I have to say that I'm really happy using this gem. I just wanted to note that when using a long list of column names to whitelist (63 names on a table with 109 columns), it parsed the list badly and included all the columns of the involved model.
I solved it in my particular case by writing the column blacklist directly into the Postgres function in the generated migration.
But maybe it could be solved if the --blacklist= option (and its whitelist equivalent) accepted a comma-separated list between quotes, like:
--whitelist='first_column_name,second_column_name'
I'm sure that this way we would escape the shell behaviour.
Thanks,
So I saw in the wiki that association versioning is available, and I need that feature. The wiki says:
This feature is considered experimental, so it's disabled by default. You can turn it on by setting Logidze.associations_versioning to true. Associations versioning works with belongs_to and has_many associations
Where do I turn on the associations_versioning setting and set it to true?
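For reference, the wiki snippet above sets a module-level flag, and a common place for such a global is an initializer. A minimal sketch (the file name is just a convention, assuming the setting works as quoted):

```ruby
# config/initializers/logidze.rb
# Enable the experimental associations versioning globally,
# as described in the wiki quote above.
Logidze.associations_versioning = true
```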
Is there any plan and/or roadmap to support Ruby versions newer than 2.1? The requirements state only ~> 2.1. Is this because there are hard dependencies on Ruby 2.1?
Actually, I am using the latest patch level, 2.4.1, and I am looking for a logging mechanism. Logidze sounds cooler than the others, but this specific requirement is a road block for me (and for others too, of course).
Let me know if I can help in any form with this issue.
I currently use PaperTrail, and I want to delete its massively verbose table, but I don't want to lose the history.
Is there a way I can migrate my PaperTrail versions over to Logidze?
Logidze is really awesome, but it can't replace PaperTrail in a project I'm working on.
This kind of data does not belong to the resource itself; it would be more pleasant to store it alongside the log.
# application_controller.rb
def info_for_paper_trail
  { ip: request.remote_ip }
end

# model.rb
attr_accessor :comment
[...]
has_paper_trail meta: { reason: comment }
Currently, wrapping execution in a Logidze.with_meta (or Logidze.with_responsible) call also wraps everything into a DB transaction, i.e.:
Logidze.with_meta(data) do
  do_something
end

# structurally equal to
ActiveRecord::Base.transaction do
  set_logidze_meta(data)
  do_something
  reset_logidze_meta
end
That could lead to unexpected behavior, especially when using .with_responsible within an around_action hook: all the DB operations performed within the action could be rolled back in case of an exception. Thus, wrapping everything into .with_responsible significantly changes the way the application works.
We need to investigate the possibility of removing the transaction from .with_meta. The reason it was used is to guarantee the state at the end of the block on the DB side (that's the way SET LOCAL works in transactions).
If it's possible, then we need to upgrade the current behavior by adding a transactional: true|false option for the .with_meta/.with_responsible methods, with true by default.
Also, we might consider adding a global switch (Logidze.transactional_meta = true | false).
I apologize in advance if these questions don't belong here; I'm quite curious about the functionality and performance of this gem. I'm mainly asking these questions from the viewpoint that most of my records would access the most recent version only, very rarely needing to review past changes.
I am using Rails 5.0.0, with production and development servers.
When I pull data from the production server to my local server, the configuration for logidze.disabled is always missing, probably because role-specific config is ignored during exporting/importing.
I even added a rake task to set this config variable, but I am still facing issues in development whenever I update any model that has Logidze installed.
namespace :db_sync do
  desc "Post import task"
  task :post_import do
    ActiveRecord::Base.connection.execute("SET logidze.disabled TO off; select current_setting('logidze.disabled')")
  end
end
Any suggestions?
I understand why this is currently a requirement. I'm not questioning current/past limitations, just thinking about future possibilities.
I was looking at the internals of another gem I use, Scenic (for managing materialized views in particular), and they are able to handle raw SQL-based migration/schema statements without the hard requirement of using the SQL-formatted db/schema.rb.
## Digging deeper, and looking at a working example
Scenic has a helper exposed to the AR::Schema define block, named create_view (or create_materialized_view), that gets called with the SQL passed as a heredoc block param.
(source code of the helpers: https://github.com/scenic-views/scenic/blob/b51ed69a223440076a5fa2d48bc5d7764e450eb9/lib/scenic/statements.rb)
These in turn (Scenic has a DB-adapter abstraction layer) get passed off to the adapter, each one having its own special way of handling the actual execution of the statement, e.g. Scenic.database.create_view(name, sql_definition):
def create_materialized_view(name, sql_definition, no_data: false)
  raise_unless_materialized_views_supported
  execute <<-SQL
    CREATE MATERIALIZED VIEW #{quote_table_name(name)} AS
    #{sql_definition}
    #{'WITH NO DATA' if no_data};
  SQL
end
Obviously there is some filler here that Logidze would not need, but the general idea is one we could use: supply a helper for the schema/migration system to create the trigger(s).
Is there something else I'm missing? It has been a while since I really dove deep into Logidze's codebase, so maybe I'm not seeing another blocker to this general idea.
We would need a helper that essentially takes over the job currently done by the generator, but we have that functionality: Logidze knows the SQL needed for its triggers given a model name. So, theoretically, we should be able to drop the hard requirement of SQL-formatted schema files and, instead of running a generator, write a vanilla migration with a helper such as logidze_model :post, blacklist: [:created_at, :active], timestamp_column: nil.
Not sure what this looks like for updating the definition; some food for thought.
All that said, I don't think doing so will directly or dynamically allow anything new or interesting, other than lowering the bar for alternative database support, such as MySQL/MariaDB, Redshift, etc.
Hi,
I wonder what prevents Logidze from supporting MySQL. Are there technical reasons, or is it just a lack of time and no need for it right now?
I think that with MySQL support this gem could address many more people's problems.
I'm having a problem with any string that appears to be in scientific notation. For instance '557236406134e62000323100'.
The specific error is:
ActiveRecord::RangeError: PG::NumericValueOutOfRange: ERROR: value overflows numeric format
CONTEXT: PL/pgSQL function logidze_logger() line 68 at assignment
Postgres version: 9.6
Edit: I have my suspicions that it is because of the to_jsonb method.
Returns the value as json or jsonb. Arrays and composites are converted (recursively) to arrays and objects; otherwise, if there is a cast from the type to json, the cast function will be used to perform the conversion; otherwise, a scalar value is produced. For any scalar type other than a number, a Boolean, or a null value, the text representation will be used, in such a fashion that it is a valid json or jsonb value.
I believe to_jsonb is erroneously attempting to typecast the string. Could this be a bug in PG?
PostgreSQL's now() returns the transaction start time (https://www.postgresql.org/docs/9.6/static/functions-datetime.html). That means all changes within the same transaction have equal timestamps.
We should use statement_timestamp() for more accurate timestamps.
Hi! It would be awesome if this gem supported not only ActiveRecord but Sequel, too. I fiddled around with it the other day, and it seems that providing Sequel support isn't hard.
What do you think? Should we make an adapter that detects which abstraction is being used, or should we make a separate gem for Sequel?
This library looks very interesting and promising, but one downside I see of storing the snapshots at the same level as the record is that you are totally unaware of any delete operations.
For example, I want to keep an exact log of a join association between two records. Using Logidze, I'm unable to see any historical changes between the two models, as there is no way to see removed associations.
Or am I missing something?
Great gem! Is there a way to add the user_id that is responsible for the change, when current_user is present?
Following up from #45: are there any plans for adding support for has_many :through associations?
I'm taking logidze for a spin and it seems to be a great fit for our needs, but I've encountered what appears to be a bug. With association_versioning and ignore_log_data turned on, after retrieving a copy of the model at a given point in time using at, I'm unable to retrieve the associated records, instead receiving the error ActiveModel::MissingAttributeError: missing attribute: log_data.
Ruby version: 2.3.5
ActiveRecord version: 4.2.10
The issue vanishes if I either turn off ignore_log_data on the associated model (i.e. the Comment model in the example below) or disable association versioning. So far I haven't found a way to work around the issue.
Steps to reproduce, based on the association tracking example:
# Post
class CreatePosts < ActiveRecord::Migration
  def change
    create_table :posts do |t|
      t.string :title
      t.string :content
      t.timestamps null: false
    end
  end
end

# Comment
class CreateComments < ActiveRecord::Migration
  def change
    create_table :comments do |t|
      t.string :body
      t.references :post, index: true, foreign_key: true
      t.timestamps null: false
    end
  end
end

class Post < ActiveRecord::Base
  has_logidze ignore_log_data: true
  has_many :comments
end

class Comment < ActiveRecord::Base
  has_logidze ignore_log_data: true
  belongs_to :post
end
=> #<Post:0x00007fb7ac642c18
id: 4,
title: "New Post",
content: "Bug hunting",
created_at: Mon, 11 Mar 2019 12:28:08 UTC +00:00,
updated_at: Mon, 11 Mar 2019 12:28:08 UTC +00:00,
log_data: nil>
=> #<Post:0x00007fb7ad2bbd78
id: 4,
title: "New Post",
content: "Bug hunting",
created_at: Mon, 11 Mar 2019 12:28:08 UTC +00:00,
updated_at: Mon, 11 Mar 2019 12:28:08 UTC +00:00,
log_data:
#<Logidze::History:0x00007fb7afa619e0
@data=
{"h"=>
[{"c"=>
{"id"=>4,
"title"=>"New Post",
"content"=>"Bug hunting",
"created_at"=>"2019-03-11T12:28:08.089259",
"updated_at"=>"2019-03-11T12:28:08.089259"},
"v"=>1,
"ts"=>1552307288089}],
"v"=>1}>>
=> #<Comment:0x00007fb7af89a620
id: 3,
body: "First comment",
post_id: 4,
created_at: Mon, 11 Mar 2019 12:28:46 UTC +00:00,
updated_at: Mon, 11 Mar 2019 12:28:46 UTC +00:00,
log_data: nil>
=> #<Post:0x00007fb7ad2bbd78
id: 4,
title: "New Post",
content: "Bug hunting",
created_at: Mon, 11 Mar 2019 12:28:08 UTC +00:00,
updated_at: Mon, 11 Mar 2019 12:28:08 UTC +00:00,
log_data:
#<Logidze::History:0x00007fb7afa619e0
@data=
{"h"=>
[{"c"=>
{"id"=>4,
"title"=>"New Post",
"content"=>"Bug hunting",
"created_at"=>"2019-03-11T12:28:08.089259",
"updated_at"=>"2019-03-11T12:28:08.089259"},
"v"=>1,
"ts"=>1552307288089}],
"v"=>1},
@versions=
[#<Logidze::History::Version:0x00007fb7af92ba30
@data=
{"c"=>
{"id"=>4,
"title"=>"New Post",
"content"=>"Bug hunting",
"created_at"=>"2019-03-11T12:28:08.089259",
"updated_at"=>"2019-03-11T12:28:08.089259"},
"v"=>1,
"ts"=>1552307288089}>]>>
ActiveModel::MissingAttributeError: missing attribute: log_data
from /Users/dmills/.rvm/gems/ruby-2.3.5@yesware/gems/activerecord-4.2.10/lib/active_record/attribute_methods/read.rb:93:in `block in _read_attribute'
I hope that helps. Let me know if I can provide any additional details that might be useful. And thanks for the great work on this library!
Issue: I'm getting this error when I try to load the logidze gem in Rails 5.2.0.beta2.
Error:
There was an error while trying to load the gem 'logidze'. (Bundler::GemRequireError)
Gem Load Error is: uninitialized constant ActiveModel::Type::Value
Where it happens: /lib/logidze/history/type.rb:2
What solved it for me:
ActiveSupport.on_load(:active_record) do
  require 'active_model/type/value'
end
I don't fully understand how the loading changed in this new Rails version. I stumbled upon the solution in this commit: https://github.com/iaankrynauw/paranoia/commit/49b9e68ee79d8a8e5fa04add1e93b9d69ccf9315
This is easy to step around, but right now, if a record is created within a Logidze.without_logging block, the log_data will be nil. That's expected, but it would be nice if log_size returned 0 instead of this error:
Module::DelegationError: Document#log_size delegated to log_data.size, but log_data is nil
I have a model with an optional belongs_to. Accessing the history of the associated belongs_to model throws an error when the reference was nil.
I have a PR with tests. It might be too simple, but the tests are all passing.
I track a position in a list and the status of data display in a table row.
execute <<-SQL
  CREATE TRIGGER logidze_on_brokerage_transactions
  BEFORE UPDATE OR INSERT ON brokerage_transactions FOR EACH ROW
  WHEN (coalesce(#{current_setting('logidze.disabled')}, '') <> 'on')
  EXECUTE PROCEDURE logidze_logger(100, 'updated_at', '{created_at, position, stale}');
SQL
The position and stale fields get updated a lot, and they don't actually matter to the audit, so I ended up with a bunch of log records that just have updated_at as the changed field. I then changed the trigger to:
EXECUTE PROCEDURE logidze_logger(100, '', '{created_at, position, updated_at, stale}');
Now I have a bunch of change entries with nothing in them. Basically, I'm trying to store changes only if there is a change in a field other than created_at, position, or stale. What should it look like?
Related to #30.
The log doesn't respect complex data types such as jsonb, array, hstore, etc. (due to the usage of hstore under the hood). The values for these types are just strings.
We have to manually cast them back to the appropriate Ruby structure.
Example:
post = Post.create!(tags: ['some', 'tag']) # tags is an array column
post.update!(tags: ['other'])
post.reload.diff_from(...)
#=> {"old"=>{"tags"=>["some", "tag"]}, "new"=>"{\"tags\": [\"other\"]}"}
# but should be
#=> {"old"=>{"tags"=>["some", "tag"]}, "new"=>{"tags"=>["other"]}}
The problem may also affect the at methods.
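As a sketch of the manual cast-back, in plain Ruby with the standard JSON library and the diff shape shown above (this is an illustration, not the gem's internals):

```ruby
require "json"

# The "new" side comes back as a JSON-encoded string instead of a Hash:
diff = {
  "old" => { "tags" => ["some", "tag"] },
  "new" => "{\"tags\": [\"other\"]}"
}

# Parse any values that are still serialized JSON documents:
casted = diff.transform_values do |value|
  value.is_a?(String) ? JSON.parse(value) : value
end

casted["new"] #=> {"tags"=>["other"]}
```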
It's working fine in development and production mode, but when running tests where the factory_girl gem is loaded, it causes errors. Check the stack trace for more details.
undefined local variable or method `has_logidze' for #<Class:0x007fa12bcc8700> (NameError)
/Users/ankur/.rvm/gems/ruby-2.2.2@project/gems/activerecord-5.0.0/lib/active_record/dynamic_matchers.rb:21:in `method_missing'
/Users/ankur/dev/project/project/app/models/bot.rb:2:in `<class:Bot>'
/Users/ankur/dev/project/project/app/models/bot.rb:1:in `<top (required)>'
/Users/ankur/.rvm/gems/ruby-2.2.2@project/gems/activesupport-5.0.0/lib/active_support/dependencies.rb:293:in `require'
/Users/ankur/.rvm/gems/ruby-2.2.2@project/gems/activesupport-5.0.0/lib/active_support/dependencies.rb:293:in `block in require'
/Users/ankur/.rvm/gems/ruby-2.2.2@project/gems/activesupport-5.0.0/lib/active_support/dependencies.rb:259:in `load_dependency'
/Users/ankur/.rvm/gems/ruby-2.2.2@project/gems/activesupport-5.0.0/lib/active_support/dependencies.rb:293:in `require'
/Users/ankur/.rvm/gems/ruby-2.2.2@project/gems/activesupport-5.0.0/lib/active_support/dependencies.rb:380:in `block in require_or_load'
/Users/ankur/.rvm/gems/ruby-2.2.2@project/gems/activesupport-5.0.0/lib/active_support/dependencies.rb:37:in `block in load_interlock'
/Users/ankur/.rvm/gems/ruby-2.2.2@project/gems/activesupport-5.0.0/lib/active_support/dependencies/interlock.rb:12:in `block in loading'
/Users/ankur/.rvm/gems/ruby-2.2.2@project/gems/activesupport-5.0.0/lib/active_support/concurrency/share_lock.rb:117:in `exclusive'
/Users/ankur/.rvm/gems/ruby-2.2.2@project/gems/activesupport-5.0.0/lib/active_support/dependencies/interlock.rb:11:in `loading'
/Users/ankur/.rvm/gems/ruby-2.2.2@project/gems/activesupport-5.0.0/lib/active_support/dependencies.rb:37:in `load_interlock'
/Users/ankur/.rvm/gems/ruby-2.2.2@project/gems/activesupport-5.0.0/lib/active_support/dependencies.rb:358:in `require_or_load'
/Users/ankur/.rvm/gems/ruby-2.2.2@project/gems/activesupport-5.0.0/lib/active_support/dependencies.rb:511:in `load_missing_constant'
/Users/ankur/.rvm/gems/ruby-2.2.2@project/gems/activesupport-5.0.0/lib/active_support/dependencies.rb:203:in `const_missing'
/Users/ankur/dev/project/project/spec/factories/bots.rb:2:in `block in <top (required)>'
/Users/ankur/.rvm/gems/ruby-2.2.2@project/gems/factory_girl-4.5.0/lib/factory_girl/syntax/default.rb:49:in `instance_eval'
/Users/ankur/.rvm/gems/ruby-2.2.2@project/gems/factory_girl-4.5.0/lib/factory_girl/syntax/default.rb:49:in `run'
/Users/ankur/.rvm/gems/ruby-2.2.2@project/gems/factory_girl-4.5.0/lib/factory_girl/syntax/default.rb:7:in `define'
/Users/ankur/dev/project/project/spec/factories/bots.rb:1:in `<top (required)>'
I have a factory:
FactoryGirl.define do
  factory :bot, class: Bot do |f|
    f.name 'Bot'
    f.description { Faker::Lorem.paragraph }

    factory :public_bot do |f|
      f.remote_avatar_url { Faker::Avatar.image }
    end
  end
end
We need a way to restrict versioning to a subset of columns, in order to reduce the size of the log data.
This can be done in two ways: through whitelisting and blacklisting.
Extend the logidze_logger function (and related functions) to support two more arguments, except and only. Then the CREATE TRIGGER migration looks like:
execute <<-SQL
  CREATE TRIGGER logidze_on_users
  BEFORE UPDATE OR INSERT ON users FOR EACH ROW
  WHEN (current_setting('logidze.disabled') <> 'on')
  EXECUTE PROCEDURE logidze_logger(null, null, '{name, role, email, phone}');
SQL
where the first argument is a limit (as it is now), the second argument is a blacklist of columns, and the third argument is a whitelist of columns.
We should be able to specify these parameters through the generator script:
rails generate logidze:model Post --only="title,user_id,tags"
rails generate logidze:model Post --except="created_at,updated_at"
We also need a way to upgrade existing triggers (maybe through an --upgrade flag).
Sometimes it might be necessary to reset the history for a record (or records), e.g. for some GDPR-related reasons.
Let's add an API to do that. Something like this:
# for single record
record.reset_log_data #=> which is, probably, equal to Logidze.without_logging { record.update_column(:log_data, nil) }
# for relation
User.where(...).reset_log_data #=> Logidze.without_logging { relation.update_all(log_data: nil) }
I am currently working on a way to clean up the history, as I did not use without_logging wisely in the past for some maintenance tasks, and I now want to expose part of the history to users.
My approach is to remove the specific version from the JSON and update the numbers of the other versions.
Is this something that would make sense to added to the gem?
Are there any obvious issues with my approach?
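As an illustration of that approach against a plain hash in Logidze's log_data shape (version numbers under "v", versions under "h"); this is a sketch, not the gem's API:

```ruby
# A hash in the shape of log_data: "h" holds the versions (diffs),
# "v" is the latest version number.
log = {
  "h" => [
    { "c" => { "name" => "a" }, "v" => 1, "ts" => 100 },
    { "c" => { "name" => "b" }, "v" => 2, "ts" => 200 },
    { "c" => { "name" => "c" }, "v" => 3, "ts" => 300 }
  ],
  "v" => 3
}

# Drop version 2, then renumber the remaining versions sequentially
# and refresh the top-level version counter:
log["h"].reject! { |version| version["v"] == 2 }
log["h"].each_with_index { |version, i| version["v"] = i + 1 }
log["v"] = log["h"].last["v"]

log["h"].map { |v| v["v"] } #=> [1, 2]
```

One subtlety the sketch ignores: since each version stores a diff, deleting a middle version silently drops its changes from any replay; merging the removed version's change-set into the following version (with the later values winning) may be safer.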
Thanks for the speedy work on #93!
The recent debounce changes are welcome, but I'm wondering if we could add a feature to do this on demand. I have a background job which generates a table of contents for a document after it's been saved. Since the table of contents links to headers within the document, this job needs to modify the document itself. Currently, the changes this job makes to the document are ignored by Logidze, but it would be better if I could tell Logidze to lump a set of changes into the most recent version in log_data, in a block. It would also aid in generating reasonable history when importing a document.
In the "add responsible id to the changelog" example in the README:
def set_logidze_responsible(&block)
  Logidze.with_responsible(current_user&.id, &block)
end
I haven't seen current_user&.id used before. What's the difference between it and the normal current_user.id?
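For context: &. is Ruby's safe navigation operator (added in Ruby 2.3). current_user&.id returns nil when current_user is nil instead of raising NoMethodError, which matters in a controller when nobody is signed in. A plain-Ruby illustration (the Struct is a stand-in, not part of the gem):

```ruby
User = Struct.new(:id) # stand-in for a signed-in user object

current_user = User.new(42)
current_user&.id #=> 42 (same as current_user.id when non-nil)

current_user = nil
current_user&.id #=> nil (no method call is attempted)

# The plain call raises instead:
begin
  current_user.id
rescue NoMethodError => e
  e.class #=> NoMethodError
end
```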
Is it possible to add the following feature: a configurable time window since the last update, during which the last version is overwritten instead of a new version being created; once the time is up, a new version is created, as with the current behavior. It would be good for autosaving.
I'm having an issue where my log_data is nil until after I reload the model. Should the log_data attribute be populated automatically after save?
If I understood the docs, the timestamp is derived from the updated_at timestamp at the record level. In my case, these are set by ActiveRecord, so they are always available.
If I try to load a version with at(time: <record.updated_at>), I get back nil. Somehow, the timestamps seem to be off by 1 millisecond after inspecting log_data. Simply adding 1 to the timestamp yields the expected result, but I feel I am missing something here.
I simply want to grab the version based on the recorded updated_at. Any tips would be appreciated, thanks!
First of all, awesome gem!
I just tried to deploy to Heroku, and the migration returned this error.
ActiveRecord::StatementInvalid: PG::InsufficientPrivilege: ERROR: permission denied to set parameter "logidze.disabled" CONTEXT: SQL statement "ALTER DATABASE ***DB_NAME*** SET logidze.disabled TO off"
Any ideas on how this might be fixed?
Currently, I get the error 'No such file or directory @ rb_sysopen - path/app/models/underscored_model_name.rb'.
This is because of https://github.com/palkan/logidze/blob/master/lib/generators/logidze/model/model_generator.rb#L91
I propose that 'UnderscoredModelName'.constantize be used instead, so the file path can be retrieved via:
[20] pry(main)> ap name = 'StepDefinitions::Base';
klass = name.constantize;
(klass.methods - ActiveRecord::Base.methods).select { |meth|
  location_path = klass.method(meth).source_location[0];
  location_path.include?('/app/') && location_path.ends_with?("#{name.underscore}.rb")
}.map { |meth|
  [meth, klass.method(meth).source_location]
}
=> [
  [:excluding_type, ["/project_path/app/models/data/step_definitions/base.rb", 149]],
  [:types, ["/project_path/app/models/data/step_definitions/base.rb", 145]]
]
Let's say we have something like:
Logidze.with_responsible(user.id) do
  Post.create
end
It would be really useful to be able to find records by their responsible_id:
Post.where(log_data: { responsible_id: user.id })
Otherwise, to be able to do this, you have to add a real responsible_id column on each of the models and duplicate the with_responsible logic with a slight twist.
Since log_data is JSONB, I think something like this (pseudo-code) is doable, but it requires knowing the structure of log_data:
Post.where("(log_data -> 'h' @> '[r ->> ?]')", user.id)
Hi, I haven't been able to debug much yet but wanted to report this. I enabled Logidze for a model with a CarrierWave field. I'm showing a table with all revisions and providing a button to undo them, so I wanted to use append: true.
While model.undo! and model.redo! work correctly, model.redo! append: true makes CarrierWave fail. Apparently it re-runs the upload process without a file attached, so it can't read any of its attributes (in my particular setup, it can't find the extension to create the filename).
I'll update this issue with my findings :)
Let's make it possible to set ignore_log_data globally instead of per-model:
# application.rb
Rails.application.config.logidze.select_ignore_log_data = true
NOTE: if we explicitly provide ignore_log_data: false with has_logidze, it overrides the global setting.
Also, let's add an API to temporarily change this global setting:
# admin/application_controller.rb
class Admin::ApplicationController < AC::Base
  around_action :enable_logidze_log_data

  def enable_logidze_log_data
    Logidze.with_ignore_log_data(false) { yield }
  end
end
If I have two processes, one which intends to update the log_data (model b) and one which wants to disable logging (model a), would this scenario be possible?
1. disable logging to save a
2. start saving b
3. start saving a
4. finish saving b
5. finish saving a
6. re-enable logging after saving a
Could b's log end up not being updated?
I start with some object
$ o = MyModel.create(a: 1, b: 2)
I add column c to MyModel in a migration
$ ap o
#<MyModel:0x00000> {
:a => 1,
:b => 2,
:c => nil,
:log_data => #<Logidze....>
}
I change the value of c
$ o.update(c: 3)
I request the initial version and expect to see the default value of c
$ o.at_version(1).c
3
Instead I see the most recent value of c.
Is this expectation reasonable? Is there a way to support this behavior if not by default then through configuration?
Ruby Version: ruby 2.6.3p62 (2019-04-16 revision 67580) [x86_64-darwin18]
Rails Version: Rails 6.0.0.rc1
PostgreSQL Version: PostgreSQL 10.6 on x86_64-pc-linux-musl, compiled by gcc (Alpine 8.2.0) 8.2.0, 64-bit
Logidze Version: v0.10.0
Running tests that rely on test fixtures caused a bunch of failures that were seemingly unrelated to the tests being run. Tests would continue to pass.
I have a feeling that this may be related to parallel tests and Rails' stock test fixtures.
I kept my app on v0.9.0 and also got some recent console errors while trying to set a responsible_id in an ActiveJob. The error output was: WARNING: SET LOCAL can only be used in transaction blocks.
I realize this isn't a very actionable error, but I wanted to bring it up in case others run into something similar. Feel free to close if it's not helpful.