intersectmbo / cardano-db-sync
A component that follows the Cardano chain and stores blocks and transactions in PostgreSQL
License: Apache License 2.0
As a user of either cardano-db-sync or cardano-db-sync-extended, I wish to run the cardano-db-tool validate command with arguments to specify the poll interval and credentials, remotely sending results as syslog over TCP, so that service health can be monitored remotely. This protocol and transport make the service compatible with standard tooling, including Graylog, which is used by IOHK for log aggregation.
A service could then be defined in nix and in the nix-based Docker definition, providing feature parity across the platforms.
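The shipping side of this user story can be sketched with the Python standard library: a logger whose handler sends records as syslog lines over a TCP connection rather than the default UDP. The host, port, and message prefix below are illustrative assumptions, not part of cardano-db-tool.

```python
import logging
import logging.handlers
import socket

def make_syslog_logger(host: str, port: int) -> logging.Logger:
    """Build a logger that ships records as syslog lines over TCP."""
    handler = logging.handlers.SysLogHandler(
        address=(host, port),
        facility=logging.handlers.SysLogHandler.LOG_DAEMON,
        socktype=socket.SOCK_STREAM,  # TCP transport, instead of the UDP default
    )
    # Hypothetical message prefix mimicking the validate tool's output.
    handler.setFormatter(logging.Formatter("cardano-db-tool: %(message)s"))
    logger = logging.getLogger("db-sync-validate")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger
```

A periodic poll loop would then call `logger.info(...)` with each validation result, and a Graylog syslog-TCP input would receive the lines unchanged.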
Running the validation on the new Shelley testnet (PGPASSFILE=config/pgpass-shelley-testnet ./cardano-db-tool validate) I got a validation failure:
Total supply plus fees at block 2933 is same as genesis supply: cardano-db-tool: 11000000000 /= 10999982984.8
CallStack (from HasCallStack):
error, called at app/Cardano/Db/App/Validate/TotalSupply.hs:27:12 in main:Cardano.Db.App.Validate.TotalSupply
Needs investigation.
Dropping the DB and resyncing the chain does not fix the problem.
A running instance is missing an epoch row (194); the row may have been missed at the epoch boundary. An instance synced after this boundary did not show the same problem.
I noticed there is already Shelley-specific code in the repo, and there are Shelley-specific issues as well, but in the docs I don't see instructions to run db-sync in that mode. How can I try out cardano-db-sync with the Shelley testnet?
I followed the steps from the building and running document for db-sync, and the last line of output is:
[db-sync-node:Info:1] [2020-03-03 15:36:41.01 UTC] localInitiatorNetworkApplication: connecting to node via "../cardano-node/state-node-staging/state-node-staging/node.socket"
There is no information about inserting blocks anymore, as there used to be for the explorer - is this expected behavior now?
$ PGPASSFILE=config/pgpass db-sync-node/bin/cardano-db-sync --config config/explorer-staging-config.yaml --genesis-file ../cardano-node/configuration/mainnet-genesis-dryrun-with-stakeholders.json --socket-path ../cardano-node/state-node-staging/state-node-staging/node.socket --schema-dir schema/
[db-sync-node:Info:1] [2020-03-03 15:36:31.50 UTC] NetworkMagic: RequiresNoMagic 633343913
[db-sync-node:Info:1] [2020-03-03 15:36:41.00 UTC] Initial genesis distribution populated. Hash c6a004d3d178f600cd8caa10abbebe1549bef878f0665aea2903472d5abf7323
[db-sync-node:Info:1] [2020-03-03 15:36:41.01 UTC] Total genesis supply of Ada: 31112484651.000000
[db-sync-node:Info:1] [2020-03-03 15:36:41.01 UTC] localInitiatorNetworkApplication: connecting to node via "../cardano-node/state-node-staging/state-node-staging/node.socket"
The auto-generated SQL in the migration files has 'CREATe' instead of 'CREATE'.
This is very minor, but can we clean it up?
The error after upgrade is:
Running : migration-2-0002-20200615.sql
psql:/nix/store/7rbfljczglgd45gjv06gg978mf1xgb8r-schema/migration-2-0002-20200615.sql:17: ERROR: column "slots_per_epoch" contains null values
CONTEXT: SQL statement "ALTER TABLE "meta" ADD COLUMN "slots_per_epoch" INT8 NOT NULL"
PL/pgSQL function migrate() line 7 at EXECUTE
ExitFailure 3
This issue focuses on blocks and blocks only: we translate the Byron blocks into their Shelley equivalent.
Transaction insertion should follow.
Checking the cardano-explorer code I found a few nullTracers, which should not be in production code since they make it impossible to debug the application.
[db-sync-node:Info:27] [2020-06-14 09:26:33.59 UTC] Starting chainSyncClient
[db-sync-node:Info:27] [2020-06-14 09:26:33.64 UTC] Cardano.Db tip is at empty (genesis)
[db-sync-node:Info:33] [2020-06-14 09:26:33.64 UTC] Running DB thread
[db-sync-node:Info:33] [2020-06-14 09:26:33.97 UTC] Shelley: Rollbacking to slot 7505, hash 4f6eac703201587503bc9816e2be69a2c4f2c5e55460b0e830c21219d158fcd9
[db-sync-node:Error:33] [2020-06-14 09:26:34.29 UTC] DB lookup fail in insertABlock: block hash 4f6eac703201587503bc9816e2be69a2c4f2c5e55460b0e830c21219d158fcd9
[db-sync-node:Info:33] [2020-06-14 09:26:34.29 UTC] Shutting down DB thread
[db-sync-node.Handshake:Info:51] [2020-06-14 09:26:43.44 UTC] [String "Send MsgProposeVersions (fromList [(NodeToClientV_2,TInt 42)])",String "LocalHandshakeTrace",String "ConnectionId {localAddress = LocalAddress {getFilePath = ""}, remoteAddress = LocalAddress {getFilePath = "/home/ubuntu/.piper"}}"]
[db-sync-node.Handshake:Info:51] [2020-06-14 09:26:43.44 UTC] [String "Recv MsgAcceptVersion NodeToClientV_2 (TInt 42)",String "LocalHandshakeTrace",String "ConnectionId {localAddress = LocalAddress {getFilePath = ""}, remoteAddress = LocalAddress {getFilePath = "/home/ubuntu/.piper"}}"]
[db-sync-node:Info:55] [2020-06-14 09:26:43.44 UTC] Starting chainSyncClient
[db-sync-node:Info:55] [2020-06-14 09:26:43.52 UTC] Cardano.Db tip is at empty (genesis)
[db-sync-node:Info:60] [2020-06-14 09:26:43.52 UTC] Running DB thread
[db-sync-node:Info:60] [2020-06-14 09:26:43.75 UTC] Shelley: Rollbacking to slot 7486, hash f3e9f26c0beab717b81cc1d934f7d55579de2fbc01ac7bf4443d4511ae5290ac
[db-sync-node:Error:60] [2020-06-14 09:26:44.78 UTC] DB lookup fail in insertABlock: block hash f3e9f26c0beab717b81cc1d934f7d55579de2fbc01ac7bf4443d4511ae5290ac
[db-sync-node:Info:60] [2020-06-14 09:26:44.78 UTC] Shutting down DB thread
[db-sync-node:Error:63] [2020-06-14 09:26:44.80 UTC] recvMsgRollForward: AsyncCancelled
Does db-sync support ff-testnet?
This shows the most recent block (epoch 50) doesn't have an epoch listed in the epoch table. This breaks the functionality of the frontend explorer on Shelley environments.
cexplorer=# SELECT tx_count,blk_count,no FROM epoch ORDER BY no DESC LIMIT 1;
tx_count | blk_count | no
----------+-----------+----
315 | 810 | 49
(1 row)
cexplorer=# SELECT id,hash,epoch_no,slot_no,block_no FROM block WHERE id=44145;
id | hash | epoch_no | slot_no | block_no
-------+--------------------------------------------------------------------+----------+---------+----------
44145 | \x661b5facbacca4897b8679fbefd7c4219d06d3b4e34b368adac2e65a69281790 | 50 | 1087736 | 44135
(1 row)
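A gap like this can be detected mechanically with an anti-join between the two tables. The sketch below uses an in-memory SQLite miniature of the `block`/`epoch` tables (simplified columns, toy data) to show the shape of such a query; against the real cexplorer database the same SELECT would run unchanged.

```python
import sqlite3

# Miniature stand-in for the cexplorer schema: blocks carry an epoch_no,
# and the epoch rollup table should have one row per epoch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE block (id INTEGER PRIMARY KEY, epoch_no INTEGER);
    CREATE TABLE epoch (no INTEGER PRIMARY KEY, blk_count INTEGER);
    INSERT INTO block (id, epoch_no) VALUES (1, 49), (2, 49), (3, 50);
    INSERT INTO epoch (no, blk_count) VALUES (49, 2);  -- epoch 50 row missing
""")

# Epochs that appear in block but have no row in epoch.
missing = conn.execute("""
    SELECT DISTINCT b.epoch_no
    FROM block b LEFT JOIN epoch e ON e.no = b.epoch_no
    WHERE e.no IS NULL
    ORDER BY b.epoch_no
""").fetchall()
print(missing)  # -> [(50,)]
```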
This user re-registered their pool a number of times, but the metadata hash always remained at what it was when they first registered it:
cexplorer=# SELECT url, pool_meta_data.* FROM pool LEFT JOIN pool_meta_data ON pool.meta = pool_meta_data.id WHERE pool_meta_data.url='https://stakepool.at/atada.metadata.json';
url | id | url | hash | tx_id
------------------------------------------+----+------------------------------------------+--------------------------------------------------------------------+-------
https://stakepool.at/atada.metadata.json | 99 | https://stakepool.at/atada.metadata.json | \x86bfb1f574da7ce771bb36fc91215e72beb02adcb136f6cb9d02a54ce319f782 | 4669
https://stakepool.at/atada.metadata.json | 99 | https://stakepool.at/atada.metadata.json | \x86bfb1f574da7ce771bb36fc91215e72beb02adcb136f6cb9d02a54ce319f782 | 4669
https://stakepool.at/atada.metadata.json | 99 | https://stakepool.at/atada.metadata.json | \x86bfb1f574da7ce771bb36fc91215e72beb02adcb136f6cb9d02a54ce319f782 | 4669
https://stakepool.at/atada.metadata.json | 99 | https://stakepool.at/atada.metadata.json | \x86bfb1f574da7ce771bb36fc91215e72beb02adcb136f6cb9d02a54ce319f782 | 4669
https://stakepool.at/atada.metadata.json | 99 | https://stakepool.at/atada.metadata.json | \x86bfb1f574da7ce771bb36fc91215e72beb02adcb136f6cb9d02a54ce319f782 | 4669
https://stakepool.at/atada.metadata.json | 99 | https://stakepool.at/atada.metadata.json | \x86bfb1f574da7ce771bb36fc91215e72beb02adcb136f6cb9d02a54ce319f782 | 4669
https://stakepool.at/atada.metadata.json | 99 | https://stakepool.at/atada.metadata.json | \x86bfb1f574da7ce771bb36fc91215e72beb02adcb136f6cb9d02a54ce319f782 | 4669
[db-sync-node:Info:37] [2020-03-05 17:40:12.20 UTC] Total supply at start of epoch 0 is 41999999999.999903 Ada
[db-sync-node:Info:37] [2020-03-05 17:41:14.62 UTC] Total supply at start of epoch 1 is 41999999999.999903 Ada
[db-sync-node:Info:37] [2020-03-05 17:41:14.62 UTC] insertABOBBoundary: epoch 1 hash 73e68ae83ee19defc2bfc21c14cbc5e7c83868f21883a218753324453ae07777
[db-sync-node:Info:37] [2020-03-05 17:41:14.62 UTC] epochPluginInsertBlock: Epoch table updated for epoch 0
[db-sync-node:Error:37] [2020-03-05 17:41:14.63 UTC] Unable to query epoch number 0
[db-sync-node:Info:37] [2020-03-05 17:41:14.63 UTC] Shutting down DB thread
[db-sync-node:Error:41] [2020-03-05 17:41:14.68 UTC] recvMsgRollForward: AsyncCancelled
The process is still running, but the logs say it has given up and halted?
It fails the same way upon wiping everything and restarting.
Switching to the non-extended db-sync appears to have fixed syncing.
See cardano-foundation/cardano-graphql#126 for a query result aimed at testnet
While we no longer have the hardcoded reference in the schema, there is still a hard-coded reference to slots per epoch here.
@KtorZ raised:
There are no Docker images tagged for corresponding node releases of cardano-db-sync - for instance, if one wants to run cardano-db-sync with a matching cardano-node release.
DevOps provided instructions for this:
Currently, the latest cardano-node release is 1.10.1.
Since adding a new validation to cardano-db-tool a couple of days ago, I have been running validation on my db-sync node periodically. I have now seen the first failure:
nix@nix:~/cardano-db-sync$ PGPASSFILE=config/pgpass ./cardano-db-tool validate
Total supply plus fees at block 1515226 is same as genesis supply: ok
Validate total supply decreasing from block 1515226 to block 3770668: ok
Epoch table entries for epochs [0..184] are correct: ok
All txs for blocks in epoch 185 are present: Failed on block no 4010399: expected tx count of 0 but got 1
All txs for blocks in epoch 169 are present: ok
Since the validation always runs on the most recent epoch (currently 185), repeated runs always fail the same way, so this is definitely an error, but there are no errors in the logs that I can see.
Even more strangely, this SQL query gives more than one row with block_no = 4010399:
cexplorer=# select id, slot_no, block_no, previous, slot_leader, time, tx_count from block
where block_no = 4010399 ;
id | slot_no | block_no | previous | slot_leader | time | tx_count
---------+---------+----------+----------+-------------+---------------------+----------
4011986 | 4012493 | 4010399 | 4011984 | 7 | 2020-04-09 17:22:31 | 0
4011985 | 4012492 | 4010399 | 4011984 | 6 | 2020-04-09 17:22:11 | 1
Looking for blocks which have either of those as a previous:
cexplorer=# select id, slot_no, block_no, previous, slot_leader, time, tx_count from block
where previous = 4011986 ;
id | slot_no | block_no | previous | slot_leader | time | tx_count
---------+---------+----------+----------+-------------+---------------------+----------
4011987 | 4012494 | 4010400 | 4011986 | 8 | 2020-04-09 17:22:51 | 0
(1 row)
cexplorer=# select id, slot_no, block_no, previous, slot_leader, time, tx_count from block
where previous = 4011985 ;
id | slot_no | block_no | previous | slot_leader | time | tx_count
----+---------+----------+----------+-------------+------+----------
(0 rows)
This suggests that a rollback happened and indeed, in the logs I see:
[db-sync-node:Info:55] [2020-04-09 17:22:13.56 UTC] insertABlock: slot 4012492, block 4010399, hash ee8b35b3ea713f57c5000b942178cef876d17a90be44a1830f9b2f0e187b9914
[db-sync-node:Info:55] [2020-04-09 17:22:54.90 UTC] No rollback required: chain tip slot is 4012491
[db-sync-node:Info:55] [2020-04-09 17:22:54.91 UTC] insertABlock: slot 4012493, block 4010399, hash c928c2e82a3efbf1e3f0382636e4eac0933510cdc51f65266b85c16084d8feb2
[db-sync-node:Info:55] [2020-04-09 17:22:55.10 UTC] epochPluginInsertBlock: Inserting row in epoch table for epoch 185
[db-sync-node:Info:55] [2020-04-09 17:22:55.11 UTC] insertABlock: slot 4012494, block 4010400, hash ba0be729a025373d23ce419a584711f158ca904d3b86eae26ad094422b33cd77
which suggests something like a restart of the db-sync thread.
I tried deleting the block table entry that no subsequent block used as a previous block, but foreign key constraints make that difficult.
It is not fixed:
select * from pool_hash where hash = '\xf784138ce22b62919022ca3439bef0b0d25518da4427bc5beabf4334'
select * from pool_update where hash_id=105;
the reward_addr_id corresponding to my owner is 22878.
select * from reward where addr_id=22878;
=> nothing; the same for another staking address with addr_id=23200
Values I should have received :
23200 -> 712912434664
22878 -> 1539770088234
Note: I don't know if it's related... When I registered my pool, I tried to set a different --pool-reward-account-verification-key-file than the --pool-owner-stake-verification-key-file. I did not receive rewards on the reward account until I delegated to the pool; I always received rewards on the owner account. I think db-sync should show these rewards. I'm going to resubmit the certificate with the same reward and owner verification key to see if it helps.
db-sync has semver 1.4.0, but the Docker image only has a master version. Why not tag the Docker image with 1.4.0?
As a data consumer I need to know if the initial sync from genesis is completed, so I can avoid running queries that may produce invalid results.
meta.at_tip - boolean, default: false
Add documentation to explain where the incomplete data will be during the initial sync, plus a SQL query to determine this client-side.
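Until a `meta.at_tip` column exists, one possible client-side heuristic is to compare the newest block's timestamp with the wall clock. The sketch below models this against an in-memory SQLite stand-in for the `block` table; the lag threshold and ISO timestamp storage are assumptions for the sketch, not db-sync behaviour.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Heuristic: the DB is considered "at tip" when the newest block's
# timestamp is within max_lag of the current time.
def at_tip(conn, max_lag=timedelta(minutes=5)):
    (newest,) = conn.execute("SELECT MAX(time) FROM block").fetchone()
    if newest is None:
        return False  # empty database: initial sync has not started
    newest_utc = datetime.fromisoformat(newest).replace(tzinfo=timezone.utc)
    return datetime.now(timezone.utc) - newest_utc <= max_lag

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE block (id INTEGER PRIMARY KEY, time TEXT)")
conn.execute("INSERT INTO block (time) VALUES ('2020-05-29 08:58:11')")
print(at_tip(conn))  # a stale tip -> False
```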
To improve service provisioning, it would be helpful to separate the credentials from the current pgpass file, as demonstrated in the following docker-compose snippet:
services:
  cardano-db-sync:
    ...
    environment:
      - POSTGRES_HOST=postgres
      - POSTGRES_PORT=5432
      - POSTGRES_DB_FILE=/run/secrets/postgres_db
      - POSTGRES_PASSWORD_FILE=/run/secrets/postgres_password
      - POSTGRES_USER_FILE=/run/secrets/postgres_user
    secrets:
      - postgres_password
      - postgres_user
      - postgres_db
  postgres:
    image: postgres:11.5-alpine
    environment:
      - POSTGRES_DB_FILE=/run/secrets/postgres_db
      - POSTGRES_PASSWORD_FILE=/run/secrets/postgres_password
      - POSTGRES_USER_FILE=/run/secrets/postgres_user
    secrets:
      - postgres_password
      - postgres_user
      - postgres_db
secrets:
  postgres_db:
    file: ./config/secrets/postgres_db
  postgres_password:
    file: ./config/secrets/postgres_password
  postgres_user:
    file: ./config/secrets/postgres_user
N.B. This pattern should not be seen as specific to Docker, but it does provide Docker with a secure way of managing sensitive information.
The service could read:
POSTGRES_HOST
POSTGRES_PORT
POSTGRES_DB_FILE
POSTGRES_PASSWORD_FILE
POSTGRES_USER_FILE
falling back to pgpass, and setting the ENVs on service init if they are not found.

While testing using a genesis with smaller values for k (EpochSlot effectively reduced to 2000), we found that slot_no was not being adhered to. Upon checking, it seems there are a few hard-coded references that would be good to eliminate:
For the migration schema (here and here).
For the Grafana monitoring dashboard (here and here).
It would be good if these could be fetched from the genesis.json referenced in the config file instead.
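Reading the value from the genesis file is a small amount of code. The sketch below assumes a Shelley-style genesis JSON with an "epochLength" key (the Byron genesis would need a different lookup); the temporary file merely stands in for the genesis.json named in the config.

```python
import json
import tempfile

# Derive slots-per-epoch from the genesis file instead of hard-coding it.
# "epochLength" matches the Shelley genesis format (an assumption here).
def slots_per_epoch(genesis_path):
    with open(genesis_path) as f:
        return json.load(f)["epochLength"]

# Toy genesis file standing in for the real one referenced by the config.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"epochLength": 2000, "slotLength": 1}, f)
print(slots_per_epoch(f.name))  # -> 2000
```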
This issue also impacts input-output-hk/cardano-rest#19
Hi, after a discussion with Kevin, it seems that the metadata field needs to be configured as an ARRAY in the txs table instead of a single field.
This is how it looks as an array:
CREATE TABLE txs (
txid hash PRIMARY KEY,
in_blockid hash NOT NULL REFERENCES blocks (blockid) ON DELETE CASCADE,
fee lovelace NOT NULL,
withdrawals withdrawal ARRAY,
metadata tx_metadata ARRAY
);
CREATE TYPE tx_metadata as (
label uinteger,
metadata jsonb
);
In this case, a transaction has a list of metadata types, each with a label and a JSON value. The metadata described in Appendix E of the Shelley design spec (https://hydra.iohk.io/job/Cardano/cardano-ledger-specs/delegationDesignSpec/latest/download-by-type/doc-pdf/delegation_design_spec), which we can call 'user-defined metadata', is a single map from uint keys to 64-byte values. Hence, declaring metadata as an array suggests that we will have other types of metadata. In that case a transaction will really have an array of 'labeled' maps. Is this correct?
If a single transaction can only have one metadata field, then the correct design would be more like this:
CREATE TABLE txs (
txid hash PRIMARY KEY,
in_blockid hash NOT NULL REFERENCES blocks (blockid) ON DELETE CASCADE,
fee lovelace NOT NULL,
withdrawals withdrawal ARRAY,
metadata jsonb
);
This means there is only one type of metadata, there's no label for it, and it is encoded as JSON.
However, this is the current state:
CREATE TABLE txs (
txid hash PRIMARY KEY,
in_blockid hash NOT NULL REFERENCES blocks (blockid) ON DELETE CASCADE,
fee lovelace NOT NULL,
withdrawals withdrawal ARRAY,
metadata tx_metadata
);
CREATE TYPE tx_metadata as (
label uinteger,
metadata jsonb
);
One transaction has one metadata label and one metadata map (encoded as json). The label in this case loses its purpose.
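The distinction can be illustrated concretely: if user-defined metadata is a single map from uint labels to values, one transaction naturally flattens into several (label, jsonb) pairs, i.e. a tx_metadata ARRAY. The labels and payloads below are made up for the example.

```python
import json

# One transaction's metadata as a map from uint labels to JSON-encodable
# values (hypothetical labels/values, per Appendix E's "map" description).
tx_metadata = {
    1967: {"message": "hello"},
    1968: [1, 2, 3],
}

# Flatten into (label, jsonb) pairs - the rows a tx_metadata ARRAY holds.
rows = [(label, json.dumps(value)) for label, value in sorted(tx_metadata.items())]
print(rows)  # -> [(1967, '{"message": "hello"}'), (1968, '[1, 2, 3]')]
```

Under this reading the label keeps its purpose: it distinguishes the entries of one transaction's metadata map, which the current single `tx_metadata` column cannot represent.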
It seems like rollbacks aren't working as expected. We see multiple blocks in the DB with the same block height. Here's an example (on mainnet):
cexplorer=# SELECT * FROM block WHERE block_no=4224831;
id | hash | slot_no | block_no | previous | merkel_root | slot_leader | size | epoch_no | time | tx_count
---------+--------------------------------------------------------------------+---------+----------+----------+--------------------------------------------------------------------+-------------+------+----------+---------------------+----------
4225044 | \x941a71094c7b10243845a82e53dc45959d8fde5f6d87e463efe660aa9611b965 | 4226979 | 4224831 | 4225043 | \x95549f5fcfc370eb4b24c981a6053f909bdef6418ce896828c6be18fa31ab406 | 6 | 1731 | 195 | 2020-05-29 08:57:51 | 3
4225045 | \x275ee8b9ae441e6e32e1f7f36290e6a048722619d91195773a07b06682418209 | 4226980 | 4224831 | 4225043 | \x6eb04497431d77657a94b0eee70dae59e7403980d4fc9d06ba5295436b8f0f54 | 7 | 1418 | 195 | 2020-05-29 08:58:11 | 2
(2 rows)
Since cardano-node and the CLI started accepting GenesisFile and SocketPath, it would be good to eliminate these from cardano-db-sync as well for consistency.
We need to adapt the existing database format and tools to support the needs of the new GraphQL dashboard/explorer and other internal or external tools. In essence, this involves:
- Agree on the data requirements with the explorer team
- Adapt the existing PostgreSQL database schema for Byron to support the new Shelley-era block, transaction and other data, including the address format and other changes
- Normalise the schema if necessary to optimise data storage and database performance, or extend the existing schema
- Provide any necessary interfacing code to abstract over raw SQL (e.g. db_add_new_block(...)); this may involve changing existing calls to deal with additional information or the new schema
- Include calls in the node to record basic on-chain data: block creation and embedded transactions
- Include calls to record off-chain data obtained from the ledger state: rewards calculations, treasury and reserves levels (may require code to be inserted at the epoch boundary transition - beware of data insertion cost - might need a separate thread to achieve this efficiently)
- Include basic abstractions over transactions: pool registration/deregistration, stake key registration/deregistration, delegations, ... (needs to be done during transaction insertion - make sure this is efficient - it will be on the critical performance path)
- Extend to include more complex transactions: protocol parameter updates (may require replay of the voting mechanism)
Note that the database will need to be migrated to the new format. We will not be able to use the existing database.
SELECT * FROM pool WHERE hash='\xdc240e2e02c1ff7914a840749df10452b900531e43e99d7554e4a59d'
15 "binary data" 12345678000000 22733 14 0.1234 12345000000 424
37 "binary data" 1000000000000 22733 14 0.1234 12345000000 1044
327 "binary data" -2537764290116403776 22733 14 0.9999 -2537764290116403776 50091
This is the first iteration of the Shelley transactions, where we insert just the data we had in Byron (Shelley is a strict superset of that information).
Cardano DB Sync always crashes with the following error.
postgres_1_97b8bbe6330b | 2020-07-11 09:34:58.764 UTC [87] ERROR: column "slots_per_epoch" contains null values
postgres_1_97b8bbe6330b | 2020-07-11 09:34:58.764 UTC [87] CONTEXT: SQL statement "ALTER TABLE "meta" ADD COLUMN "slots_per_epoch" INT8 NOT NULL"
postgres_1_97b8bbe6330b | PL/pgSQL function migrate() line 7 at EXECUTE
postgres_1_97b8bbe6330b | 2020-07-11 09:34:58.764 UTC [87] STATEMENT: SELECT migrate() ;
cardano-db-sync_1_d8338f03636b | ExitFailure 3
cardano-db-sync_1_d8338f03636b |
cardano-db-sync_1_d8338f03636b | Errors in file: /tmp/migrate-2020-07-11T093458.log
cardano-db-sync_1_d8338f03636b |
In the database, I've found my pool and the delegators of my pool, but not the rewards of the delegators (looked up by addr_id found in the delegation table for my pool).
[db-sync-node:Info:26] [2020-06-06 07:26:00.07 UTC] Cardano.Db tip is at slot 7790, block 410
[db-sync-node:Info:31] [2020-06-06 07:26:00.07 UTC] Running DB thread
[db-sync-node:Info:31] [2020-06-06 07:26:00.25 UTC] Shelley: No rollback required: db tip slot is 7790 ledger tip slot is 7790
[db-sync-node:Error:31] [2020-06-06 07:26:01.02 UTC] DB lookup fail in insertTxIn: tx hash 75c62cb4a2bbc63ccff6b518de7f18beef9eb1898829822f421bece94f44bb61
[db-sync-node:Info:31] [2020-06-06 07:26:01.02 UTC] Shutting down DB thread
Hi, the Atala Prism team has started integrating with cardano-db-sync and could not find a way to order transactions within a block. Can we plan on adding a column to track the transaction index? Or, as a workaround or perhaps a definitive way, can we just sort by the sequential tx.id?
Thanks!
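The workaround can be sketched as follows, assuming (as the question does) that db-sync inserts transactions in block order so the sequential primary key preserves in-block order. The schema here is a simplified SQLite stand-in for the real `tx` table.

```python
import sqlite3

# Simplified tx table: a sequential id, a hash, and the containing block.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tx (id INTEGER PRIMARY KEY, hash TEXT, block INTEGER);
    INSERT INTO tx (id, hash, block) VALUES
        (10, 'tx-a', 7), (11, 'tx-b', 7), (12, 'tx-c', 8);
""")

# Order transactions within block 7 by the sequential id.
in_block_order = [
    h for (h,) in conn.execute(
        "SELECT hash FROM tx WHERE block = ? ORDER BY id", (7,)
    )
]
print(in_block_order)  # -> ['tx-a', 'tx-b']
```

An explicit index column would still be the more robust design, since it does not rely on insertion order as an implementation detail.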
cardano-db-sync and possibly other binaries should be linked with -threaded, since it is required by ouroboros-network (at least on OSX).
Hello Team,
I am having an error building cardano-db-sync and cardano-db-sync-extended v2.0.0. When I run cabal install cardano-db-sync-extended I get this error:
Building library for cborg-0.2.3.0..
[ 1 of 14] Compiling Codec.CBOR.ByteArray.Internal ( src/Codec/CBOR/ByteArray/Internal.hs, dist/build/Codec/CBOR/ByteArray/Internal.o )
[ 2 of 14] Compiling Codec.CBOR.ByteArray.Sliced ( src/Codec/CBOR/ByteArray/Sliced.hs, dist/build/Codec/CBOR/ByteArray/Sliced.o )
[ 3 of 14] Compiling Codec.CBOR.ByteArray ( src/Codec/CBOR/ByteArray.hs, dist/build/Codec/CBOR/ByteArray.o )
[ 4 of 14] Compiling Codec.CBOR.Decoding ( src/Codec/CBOR/Decoding.hs, dist/build/Codec/CBOR/Decoding.o )
[ 5 of 14] Compiling Codec.CBOR.Encoding[boot] ( src/Codec/CBOR/Encoding.hs-boot, dist/build/Codec/CBOR/Encoding.o-boot )
[ 6 of 14] Compiling Codec.CBOR.FlatTerm[boot] ( src/Codec/CBOR/FlatTerm.hs-boot, dist/build/Codec/CBOR/FlatTerm.o-boot )
[ 7 of 14] Compiling Codec.CBOR.Encoding ( src/Codec/CBOR/Encoding.hs, dist/build/Codec/CBOR/Encoding.o )
[ 8 of 14] Compiling Codec.CBOR.Magic ( src/Codec/CBOR/Magic.hs, dist/build/Codec/CBOR/Magic.o )
src/Codec/CBOR/Magic.hs:569:9: error:
Ambiguous occurrence ‘copyByteArrayToPtr’
It could refer to either ‘Prim.copyByteArrayToPtr’,
imported from ‘Data.Primitive.ByteArray’ at src/Codec/CBOR/Magic.hs:103:1-49
or ‘Codec.CBOR.Magic.copyByteArrayToPtr’,
defined at src/Codec/CBOR/Magic.hs:597:1
|
569 | copyByteArrayToPtr ba off ptr len
| ^^^^^^^^^^^^^^^^^^
cabal: Failed to build cborg-0.2.3.0 (which is required by exe:cardano-db-sync
from cardano-db-sync-2.0.0). See the build log above for details.
The validate program reports:
All transactions for blocks in epoch 195 are present: Failed on block no 4224831:
expected tx count of 2 but got 3
and a simple query:
select * from block where block_no = 4224831 ;
results in two rows where only one was expected.
# select id, slot_no, block_no,previous,tx_count,time from block where
block_no = 4224831 ;
id | slot_no | block_no | previous | tx_count | time
---------+---------+----------+----------+----------+---------------------
4265744 | 4226980 | 4224831 | 4265742 | 2 | 2020-05-29 08:58:11
4265743 | 4226979 | 4224831 | 4265742 | 3 | 2020-05-29 08:57:51
The block with the lower slot number was obviously orphaned, but the fact that no later block extended that chain was not detected, and hence the block was not removed from the database.
Blocks orphaned like this should either be removed from the database or skipped in the validate process.
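A query that spots such orphans could look like the sketch below: among blocks sharing a block_no, any block that no later block references as `previous` (and that is not the chain tip) is orphaned. The in-memory SQLite data mirrors the duplicate block 4224831 above, with toy ids.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE block (id INTEGER PRIMARY KEY, block_no INTEGER, previous INTEGER);
    INSERT INTO block VALUES
        (1, 100, NULL),   -- common ancestor
        (2, 101, 1),      -- orphaned: nothing extends it
        (3, 101, 1),      -- canonical: id 4 extends it
        (4, 102, 3);      -- chain tip
""")

# Orphans: duplicated block_no, never used as a previous, and not the tip.
orphans = conn.execute("""
    SELECT b.id FROM block b
    WHERE EXISTS (SELECT 1 FROM block d
                  WHERE d.block_no = b.block_no AND d.id <> b.id)
      AND NOT EXISTS (SELECT 1 FROM block c WHERE c.previous = b.id)
      AND b.id <> (SELECT MAX(id) FROM block)
""").fetchall()
print(orphans)  # -> [(2,)]
```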
Maintaining a count of blocks produced in the current epoch would remove the need for a common aggregation query that impacts performance and DB load. In particular, the homepage of the new Cardano Explorer requires this information, and query caching only improves the UX, not the load.
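One way such a counter could be maintained inside the database is a trigger that bumps a per-epoch row on every block insert, so the explorer homepage reads one row instead of running an aggregate. The SQLite sketch below uses simplified stand-in tables; the real schema and trigger dialect would differ.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE block (id INTEGER PRIMARY KEY, epoch_no INTEGER);
    CREATE TABLE epoch_counter (no INTEGER PRIMARY KEY, blk_count INTEGER);

    -- On each block insert, create the epoch's counter row if needed,
    -- then increment it.
    CREATE TRIGGER bump AFTER INSERT ON block BEGIN
        INSERT OR IGNORE INTO epoch_counter (no, blk_count)
            VALUES (NEW.epoch_no, 0);
        UPDATE epoch_counter SET blk_count = blk_count + 1
            WHERE no = NEW.epoch_no;
    END;

    INSERT INTO block (epoch_no) VALUES (195), (195), (195);
""")
count = conn.execute(
    "SELECT blk_count FROM epoch_counter WHERE no = 195"
).fetchone()
print(count)  # -> (3,)
```

Rollback handling would need a matching decrement on delete, which is the part that makes the counter approach more delicate than the aggregate.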
TxIn is missing an index property, so it's currently not possible to determine the sort order of the transaction inputs without using a different property. The index present in TxOut will be used instead, so the interface will vary for an API consumer.
cardano-db-sync service seems to have block divergence issues if the cardano-node service is restarted while it is running.
I mean there is an error with the slot leader table (bad slot leader hashes - there are block hashes instead of leader ids); this is the reason why 1 slot_leader hash corresponds to 1 row in the block table instead of 1 slot_leader corresponding to N rows.
The current solution doesn't make sense to me and is somewhat different from the actual Byron mainnet solution.
It would be useful to have this (e.g. Classic vs OBFT) in the explorer, but it would require support from the network's node-to-client protocol.
Relatively recently, the https://github.com/input-output-hk/ouroboros-network repo merged a PR that adds the new Shelley types, which contain the pool metadata required for https://github.com/input-output-hk/smash.
We need those in order to move forward in general, but also to be able to get the pool metadata.
Tried building the project from the Nix shell using Cabal:
[nix-shell:~/projects/haskell/cardano-db-sync]$ cabal build all
Cloning into '/home/ksaric/projects/haskell/cardano-db-sync/dist-newstyle/src/cardano-b_-c97d51a3264adedf'...
remote: Enumerating objects: 103, done.
remote: Counting objects: 100% (103/103), done.
remote: Compressing objects: 100% (66/66), done.
remote: Total 1268 (delta 36), reused 57 (delta 20), pack-reused 1165
Receiving objects: 100% (1268/1268), 303.02 KiB | 954.00 KiB/s, done.
Resolving deltas: 100% (640/640), done.
HEAD is now at 1222078 Update dependency on cardano-prelude (#86)
Cloning into '/home/ksaric/projects/haskell/cardano-db-sync/dist-newstyle/src/cardano-c_-613c120a593cd275'...
remote: Enumerating objects: 35, done.
remote: Counting objects: 100% (35/35), done.
remote: Compressing objects: 100% (31/31), done.
remote: Total 1457 (delta 5), reused 20 (delta 2), pack-reused 1422
Receiving objects: 100% (1457/1457), 461.02 KiB | 263.00 KiB/s, done.
Resolving deltas: 100% (611/611), done.
HEAD is now at 2547ad1 Merge pull request #65 from dcoutts/dcoutts/drop-vrf-p256-openssl
Cloning into '/home/ksaric/projects/haskell/cardano-db-sync/dist-newstyle/src/cardano-l_-eee106153f3e350d'...
remote: Enumerating objects: 187, done.
remote: Counting objects: 100% (187/187), done.
remote: Compressing objects: 100% (103/103), done.
remote: Total 14119 (delta 77), reused 126 (delta 56), pack-reused 13932
Receiving objects: 100% (14119/14119), 3.87 MiB | 1.20 MiB/s, done.
Resolving deltas: 100% (8611/8611), done.
HEAD is now at 22e89a5 Update to latest cardano-ledger-specs (#754)
Cloning into '/home/ksaric/projects/haskell/cardano-db-sync/dist-newstyle/src/cardano-l_-afc0f56298b7d554'...
remote: Enumerating objects: 48, done.
remote: Counting objects: 100% (48/48), done.
remote: Compressing objects: 100% (44/44), done.
remote: Total 22507 (delta 13), reused 19 (delta 2), pack-reused 22459
Receiving objects: 100% (22507/22507), 21.55 MiB | 1.23 MiB/s, done.
Resolving deltas: 100% (13583/13583), done.
HEAD is now at 28b43814 Merge pull request #1335 from input-output-hk/jc/store-prev-proto-params
Cloning into '/home/ksaric/projects/haskell/cardano-db-sync/dist-newstyle/src/cardano-n_-695a8c1c408f8118'...
remote: Enumerating objects: 148, done.
remote: Counting objects: 100% (148/148), done.
remote: Compressing objects: 100% (114/114), done.
remote: Total 12773 (delta 50), reused 83 (delta 19), pack-reused 12625
Receiving objects: 100% (12773/12773), 4.50 MiB | 1.21 MiB/s, done.
Resolving deltas: 100% (8976/8976), done.
HEAD is now at b5b0a29 Merge #707
Cloning into '/home/ksaric/projects/haskell/cardano-db-sync/dist-newstyle/src/cardano-p_-6616435bbe3d1976'...
remote: Enumerating objects: 1201, done.
remote: Total 1201 (delta 0), reused 0 (delta 0), pack-reused 1201
Receiving objects: 100% (1201/1201), 289.69 KiB | 840.00 KiB/s, done.
Resolving deltas: 100% (534/534), done.
HEAD is now at 3ac22a2 Merge #103
Cloning into '/home/ksaric/projects/haskell/cardano-db-sync/dist-newstyle/src/cardano-s_-40b8d61132c3cbd1'...
remote: Enumerating objects: 139, done.
remote: Counting objects: 100% (139/139), done.
remote: Compressing objects: 100% (99/99), done.
remote: Total 2176 (delta 69), reused 71 (delta 30), pack-reused 2037
Receiving objects: 100% (2176/2176), 2.37 MiB | 1.19 MiB/s, done.
Resolving deltas: 100% (1154/1154), done.
HEAD is now at bc3563c Update CODEOWNERS (#357)
Cloning into '/home/ksaric/projects/haskell/cardano-db-sync/dist-newstyle/src/goblins-f4f7645a78b07334'...
remote: Enumerating objects: 410, done.
remote: Counting objects: 100% (410/410), done.
remote: Compressing objects: 100% (181/181), done.
remote: Total 423 (delta 167), reused 395 (delta 153), pack-reused 13
Receiving objects: 100% (423/423), 119.14 KiB | 709.00 KiB/s, done.
Resolving deltas: 100% (167/167), done.
HEAD is now at 26d35ad Add SeedGoblin instance for `Maybe a`
Cloning into '/home/ksaric/projects/haskell/cardano-db-sync/dist-newstyle/src/iohk-moni_-c4029568d4836e27'...
remote: Enumerating objects: 340, done.
remote: Counting objects: 100% (340/340), done.
remote: Compressing objects: 100% (123/123), done.
remote: Total 8725 (delta 194), reused 256 (delta 154), pack-reused 8385
Receiving objects: 100% (8725/8725), 34.76 MiB | 1.21 MiB/s, done.
Resolving deltas: 100% (4955/4955), done.
HEAD is now at 10877fb Merge pull request #534 from input-output-hk/cad-667-multiple-trace-acceptors
Cloning into '/home/ksaric/projects/haskell/cardano-db-sync/dist-newstyle/src/ouroboros_-86f4da3e33dbe3ab'...
remote: Enumerating objects: 2983, done.
remote: Counting objects: 100% (2983/2983), done.
remote: Compressing objects: 100% (660/660), done.
remote: Total 51096 (delta 2395), reused 2803 (delta 2265), pack-reused 48113
Receiving objects: 100% (51096/51096), 23.34 MiB | 1.23 MiB/s, done.
Resolving deltas: 100% (30957/30957), done.
HEAD is now at e71aa72a Merge #1844
Warning: No remote package servers have been specified. Usually you would have
one specified in the config file.
Resolving dependencies...
cabal: Could not resolve dependencies:
[__0] trying: cardano-binary-test-1.3.0 (user goal)
[__1] unknown package: quickcheck-instances (dependency of
cardano-binary-test)
[__1] fail (backjumping, conflict set: cardano-binary-test,
quickcheck-instances)
After searching the rest of the dependency tree exhaustively, these were the
goals I've had most trouble fulfilling: cardano-binary-test,
quickcheck-instances
Had a similar problem building with Stack, will submit a PR for Stack.
As a database client applying its own migrations to the cardano-db schema, I wish to be told when the database has been set up, so I can tie the two actions together in the application layer and recover gracefully on restarts.
https://www.postgresql.org/docs/12/sql-notify.html
NOTIFY "cardano-db-sync-startup", 'init' in the startup sequence to allow clients to prepare, or cease sending queries, and NOTIFY "cardano-db-sync-startup", 'db-setup' when it's safe to apply migrations.

We need to add the PoolMetaData for the Shelley release, where we will use that information in the https://github.com/input-output-hk/smash project for following the chain.