
concordium / testnet3-challenges


This repo is dedicated to Concordium Incentivized Testnet3.

Home Page: https://developers.concordium.com/

License: Apache License 2.0

testnet rust blockchain-technology cryptography incentivization missions bugs hackathon node proof-of-stake

testnet3-challenges's Introduction

TESTNET 3 CHALLENGES

Contributor Covenant

After two successful testnets in 2020, and with our mainnet launch in sight, we are happy to announce the launch of Concordium Testnet 3 and to invite testers, developers, and users from all over the world to compete to earn up to 10 million GTU.

Status

See the Status Page for the latest updates & notices for the Test Network, Identity Issuers and Challenges.

Please check this page first before reporting any new issues!

Rationale

Concordium Testnet 3 is a collaborative effort intended to stress-test the network, encourage participation from all over the world, and help testers, developers, and users get ready to participate in the Concordium network.

Concordium Testnet 3 will be released on October 6, 2020. The Incentivized Testnet 3 will be launched on October 15, 2020 and will end 6 weeks later.

During this period, node operators, developers, and community members can receive rewards (up to 0.1% of the mainnet supply, i.e. 10,000,000 GTU) for helping to secure, sustain, and grow the Concordium network and its ecosystem.

Rules of Engagement

Concordium does not intend to collect or store any user data from its challenge contributors beyond what is strictly necessary to pay out their rewards on the mainnet once it is launched. Concordium stores only the GitHub user name and the total reward amount earned by each contributor.

The following rules of engagement apply:

  • A contributor who wants to submit a challenge result to the Testnet3-Challenges repository must be a registered user on GitHub.
  • All submissions must be signed with GitHub's built-in commit signature mechanism using GPG.
  • The GitHub user name should not be changed until the rewards are paid out on the mainnet.
  • After the launch of the mainnet, a contributor should create a mainnet account and send a signed submission of the account address to the Testnet3-RewardClaim repository.

To sign your submissions, you must use GitHub's built-in commit signature mechanism using GPG. If a commit has a signature that cannot be verified by GitHub, i.e. it is marked unverified or not marked at all, it does not constitute a valid submission for Concordium. A rough setup sketch is shown below; please see GitHub - Commit Signature for more information.
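
For reference, a typical local GPG signing setup looks roughly like the following. The key ID and commit message are placeholders, and the GitHub documentation linked above remains the authoritative guide:

# Generate a GPG key pair, if you do not already have one.
gpg --full-generate-key

# Find the long key ID of the key you want to sign with.
gpg --list-secret-keys --keyid-format=long

# Tell git to sign every commit with that key by default.
git config --global user.signingkey <KEY_ID>
git config --global commit.gpgsign true

# Export the public key and add it to your GitHub account
# (Settings -> SSH and GPG keys) so that commits show up as "Verified".
gpg --armor --export <KEY_ID>

# Sign an individual commit explicitly.
git commit -S -m "Submission for challenge (B1)"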

This approach enables Concordium to trace a GitHub user name to the rewards earned in the Testnet3-Challenges repository and to a mainnet account in the Testnet3-RewardClaim repository.

Note that in one challenge, namely challenge (ID2) in the Identities mission, a contributor must use a valid real-life identity to issue a Concordium identity. Concordium will only check the identity attributes made public in the submission. The Concordium identity will be erased when Testnet 3 is taken down.

Missions and Challenges

The missions and challenges can be viewed in Project Challenges. Please note that they intentionally provide only a minimal description of the tasks and submission content. For detailed step-by-step guides, troubleshooting, and FAQs, we expect contributors to consult Documentation and Help.

Reward Distribution

  • Concordium accepts one approved submission per challenge per contributor.
  • Each challenge has a GTU amount tag, indicated by the yellow " GTU" label. This amount is rewarded if Concordium approves the submission.
  • Each challenge has a total number of rewards, indicated by the purple " rewards" label. Rewards are paid out on a first-come, first-served basis. When the maximum total number of rewards is reached, Concordium will not review or approve any further submissions for that challenge in the queue.
  • Rejected submissions can be modified and resubmitted. However, they lose the original submission's place in the queue, and the resubmission has to line up at the end of the submission queue.

Submission Process

Please make a separate PR for each challenge submission. This requires a separate branch for each PR; you can use the challenge ID as the branch name, e.g. B1, B2, ID1, etc. A sketch of the workflow is shown below. (If you already made a submission before this clarification was added, we'll accept it as-is.)
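
As a rough sketch, the workflow for a single challenge could look like this, assuming you work from your own fork of this repository (the fork URL, branch name, and commit message are placeholders):

git clone https://github.com/<your-username>/testnet3-challenges.git
cd testnet3-challenges

# One branch per challenge, named after the challenge ID.
git checkout -b B1

# Add your submission files for challenge (B1), then commit with a GPG signature.
git add .
git commit -S -m "Submission for challenge (B1)"

# Push the branch and open a pull request from it on GitHub.
git push origin B1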

Bugs and Improvements

If you encounter a problem, please first check the already reported issues on GitHub in Project Bugs & Improvements and the known issues on the Concordium website under Troubleshooting. To submit a bug report, please go to Issues and provide a short description, steps to reproduce, platform and OS, logs, and the expected and actual results. We also welcome suggestions for improvements.

Documentation and Help

  • Concordium testnet information, step-by-step guides, and troubleshooting can be found here.
  • Frequently asked questions regarding the challenges can be found in Project FAQs.
  • General Github.com help documentation can be found here.

Contact

Disclaimer

By participating in this Testnet, you agree to the Concordium Testnet 3 Terms and Conditions.

License

Open source License

testnet3-challenges's People

Contributors

abizjak, bissembert1618, concordium-cl, lottekh, supermario


testnet3-challenges's Issues

GTU drop

The GTU drop that can be requested on newly created accounts for test purposes has been temporarily increased from 100 GTU to 1000 GTU, to ensure that transfer fees can be paid during the new transaction challenges. Note that the GTU drop button in the mobile app, as well as the related pending transfer details, still indicates 100 GTU. You will, however, receive 1000 GTU on your account when the transfer is finalized.

(T3) Bulk transactions with Concordium Client

Mission:

  • create or reuse identity (mobile wallet Concordium ID)
  • create or reuse accounts (mobile wallet Concordium ID)

Using Concordium Client:

  • send 1000 transfers, one immediately after the other

Note that all transfers come with a fee to be paid from the general balance, so please make sure you have enough money on the general balance of the accounts. Otherwise, you might want to create a new account and get a new GTU drop. See Transfer Fees for more details.
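
One possible way to script the 1000 transfers is a plain shell loop. The send-gtu subcommand, its flags, and the sender/receiver values below are assumptions based on the testnet client and may differ in your version, so please verify them with concordium-client transaction --help first:

# Hypothetical sketch: send 1000 small transfers back to back.
# Subcommand and flag names are assumptions; check: concordium-client transaction --help
for i in $(seq 1 1000); do
  concordium-client transaction send-gtu \
    --amount 1 \
    --sender myAccount \
    --receiver <RECEIVER_ADDRESS> \
    --no-confirm   # assumed flag to skip the interactive confirmation
done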

Submission:

  • submit account addresses
  • submit transaction IDs, or alternatively, time period when the transactions were sent

Successful verification with invalid document in (ID3)

If you manage to receive a successful identity verification with an invalid document in challenge (ID3), please also submit that case with a clear description and screenshots, or create a bug report; see Bugs and Improvements.

We will collect the information and forward it to the underlying identity verifier used by Notabene so that they can improve their services.

abrupt stop of chain synchronization

My node had been up and running for 7 days and everything went smoothly. But today, when I went to the dashboard, I noticed that the last finalization had completed 20 hours ago. At the same time, the uptime field showed that the node was online.
After I noticed this, I went to the node, stopped it, and started it again without rebuilding the chain. After the restart, the node works fine, but the number of received and sent blocks has been reset.
I am attaching the first 1000 lines of the logs and the last ones. The logs were taken before the node was rebooted.

node id ca2459aabfc8b7ef
head logs https://pastebin.com/A2PJaNyu
tail logs https://pastebin.com/URFw9VeK

New Mac release out now!

There was an issue on Mac when trying to add a baker on the node dashboard, which failed with a network error in the first released Mac version.

We have fixed the issue and uploaded a new release for Mac to our Downloads page. Please make sure to download the latest Mac version, released on 16-10-2020.

Please follow the step-by-step guide:

  1. Delete folder ~/Documents/concordium-software .
  2. Download the new release.
  3. Extract the .zip into the folder ~/Documents
  4. Open a terminal in ~/Documents/concordium-software.
  5. Ensure your node is stopped by running ./concordium-node-stop.
  6. Reset the data by running ./concordium-node-reset-data.
  7. Run ./concordium-node and let it catch up.
  8. Follow the regular guides towards becoming a baker.
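
In terminal form, steps 4-7 of the guide look roughly like this:

cd ~/Documents/concordium-software   # step 4: open a terminal in the software folder
./concordium-node-stop               # step 5: make sure the node is stopped
./concordium-node-reset-data         # step 6: reset the data
./concordium-node                    # step 7: start the node and let it catch up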

Note that in all releases you can use the Concordium client for account and baker management, see e.g. concordium-client config account --help and concordium-client baker --help, respectively, to get started.

My node often disappears from the dashboard, but it is running

My node has been online for 14 days. Today I started receiving notifications from my custom monitoring script that the node is missing from the dashboard list. This is not a browser cache issue, because I get the information from your API. At the same time, the console output of my node looks normal. Also, the last finalization value is 2 times higher than on other nodes:

Image

On the first restart I also got the following message on the initial screen:

error sending request for url (https://auth.testnet.concordium.com/token/create): operation timed out Failed to post to authentication server

Problems with B6 logs

The problem is that I retrieved my logs before the new tool was introduced, so I used what I had at that moment: the command concordium-node-retrieve-logs.

It completed successfully and also generated the concordium-testnet-system-report.log file; if it had failed, I couldn't have generated any log at all. After generating the logs, I downloaded them to my computer and deleted the VPS, because I was using one VPS per challenge.

I was waiting for new instructions on how to submit our logs, and now I see the Concordium team has published a new tool to retrieve these logs, but the problem is that I no longer have the VPS and only have the logs.

Another problem is that the concordium-node-retrieve-logs command did not retrieve the full log; it just cut off at some point... and the log makes it look like I ran the node for only 5 days, even though it was running for the full 7 days. The log weighs ~2.0 GB.

I hope I will still be able to participate in the B6 challenge, because it is now too late to start from scratch.

My Pull request number is: #533

Memory allocation (core dumped) error while running the retrieve-logs command

I have 2 nodes running. One has been running for 4 days and the other for 6. Today I ran the retrieve-logs command on both of them and noticed something. Even though both nodes run on devices with the same specs (4 GB RAM, 80 GB disk), one log file was created successfully, but I got a memory allocation error on the other. I wondered why this happened, so I ran the commands again and watched the RAM, disk, and GPU usage graphs on both devices. From this I figured out how the retrieve-logs command works, and why I got the .log file successfully on one node but the memory allocation/core dumped error on the other.
When the retrieve-logs command is run, the writing process starts and keeps all data in RAM. Only after the whole writing/streaming process has completed is the log file written to disk and removed from RAM. This means that if the size of the .log file is smaller than the device's free RAM, the file will be created and written to disk successfully. However, if the log file is larger than the free RAM, RAM usage keeps growing until it reaches 100%, we get the memory allocation/core dumped error, the .log file is not saved, and the retrieve-logs command fails.

In my case, the log file of my 4-day-old node is 2 GB, while it was approximately 3 GB for the 6-day-old node (I can't give an exact size, as I couldn't get that log file). When I ran the command on both nodes, I could get the 2 GB log file because it didn't exceed my free RAM, but I couldn't get the other one because of this RAM issue. If I had 8 GB of RAM on my devices, I would have gotten both log files. But if, for example, I had been running a node for 30 days (for the B1 mission), I would have a log file of around 20 GB, and even 16 GB of RAM wouldn't be enough. Or, thinking at larger scales, how much free RAM would we need to get the log file of a node that has been running for 6 months?

In the official documentation page https://developers.concordium.com/testnet/docs/downloads we are told that 4 GB of RAM is enough on a Linux system. This is why I run these nodes with 4 GB of RAM. However, with the current retrieve-logs command, I can't even get the log file of a 4-day-old node.

I think this is a problem/bug that needs to be fixed, and one that more people will face as their log files grow day by day.

I think it can be fixed with the method I suggest below: check RAM usage while retrieving, and continue the stream directly on disk when memory is full, i.e. a "copy the MemoryStream to a FileStream" approach.

(ID3) Try to create identities with invalid physical ID documents

Mission:
Try to create an identity object with ID documents that are either expired, modified, not supported by Onfido (see https://onfido.com/supported-documents/) etc.

We are interested both in failed verifications (correct behavior) and in successful verifications (incorrect behavior). Please also submit the cases where you managed to get a successful verification with an invalid document, or create a bug report; see Bugs and Improvements. We will collect this information and forward it to the identity issuer Onfido for them to improve their services.

Submission:

  • submit information about the document type and nationality
  • submit device make and model and OS version
  • describe process/experience
  • submit screenshot of error message, if applicable

(ID1) Create identities and accounts using development environment

Mission:

  • create 2 test identities using the development environment for issuing identities (fake IDs)
  • create 2 accounts; one for each identity; make at least 1 identity data attribute public
  • request GTU drops for each account

Submission:

  • submit account addresses
  • submit public data attribute(s)
  • submit device make and model and OS version

[SOLVED] Concordium Client Writes Incorrectly and Checks Different Paths

Edit 3: FOUND THE PROBLEM again. From the console, "concordium-client config account add-keys --account ACCOUNT --keys KEYS" also needs to be executed to get the encrypted file. The dashboard was not creating the key in my case. But the paths in the command logs are misleading in any case and show non-existent paths.

Edit 2: FOUND THE PROBLEM. Adding the account from the dashboard apparently did not create the encSecretKey.json, whereas adding the account from the client does create it. The paths are probably shown as-is, yet the client is still able to find the correct path. But, at least in my case, I needed to reset the key files that I had added from the dashboard, as well as the map files, and re-add them using the client. I am going to mark this as closed and rename it to Solved; however, this issue can be used to make some adjustments for the mainnet.

During challenge B5, we are asked to change the baker passwords. The node asked for an account (even though there already was one), and with the correct command I added the account, which had previously been added from the dashboard, so that part is fine. I got "Warning: Account is already initialized: directory '/var/lib/concordium/config/accounts/XXXXXX'" (account hash omitted). However, there is no "/var/lib/concordium/config/accounts/" directory, and by grepping for the hash I found that the path Concordium actually created was "./.config/concordium/accounts/".

The command for setting the keys is not working and gives "concordium-client: /var/lib/concordium/config/accounts/XXXXXX/encSecretKey.json: openBinaryFile: does not exist (No such file or directory)". It cannot find the files, since it creates them in one path and checks for them in another. The other problem is that it apparently is not able to write them correctly to "./.config/concordium/accounts/XXXXXX" either, since there is no generated encSecretKey.json in it; it is just an empty folder.

Used system: Ubuntu 20.04

Edit: Going to try deleting the account and re-adding it to see if the problem occurs again. Will edit with the result.

Node reset

Hello,
My B1 server stopped today, after 3 days. This is what I saw:

[image]

As you can see there is no error. I checked kern.log and there is nothing there either:

(I sent you the file)

Then I tried concordium-node-retrieve-logs, but it gave me an empty log file, as you can see:

[image]

I tried to retrieve the log two times, but the result was the same:

[image]

After all that, I restarted the node to see at which block it had stopped, but this is what I saw:

[image]

The node has been completely reset.

I'm sorry I can't give you the log, because it's empty.

But I'll send you ([email protected]) the kern.log if you want to see it.

Log size

UPDATED

The size of logs to be submitted often exceeds GitHub's upload limit of 100MB.

We've created a small tool retrieve_minified_logs to sample your full set of log files and output a summary that is acceptable for submission. Please see Logs for more details.

(B5) Register as baker on Concordium client

Mission:

  • run a node (or reuse existing node)
  • apply to bake in the Concordium Client
  • get stake from the node dashboard
  • update baker's reward account in the Concordium Client
  • update baker's keys in the Concordium Client

Submission:

  • submit baker ID
  • submit screenshot for client registration command
  • submit account address
  • submit transaction ID of the transaction used to register as a baker
  • submit blockhash of at least one block you produced
  • submit transaction ID of the transaction that changes the reward account
  • submit transaction ID of the transaction that updated baker's keys
  • submit observations, if any

Pull Requests & Forks

Do challengers need to create a separate fork for each mission, or is one pull request in one fork enough? It seems we can only make one "Pull Request" per fork. Can you please inform us about this?

Soft-banned IP

I saw this error when I restarted the node:

ERROR: Connection attempt from a soft-banned IP (54.169.218.49); rejecting

(B6) Memory and persistent storage

Mission:

  • run a node for 7 days (calendar days)
  • frequently check RAM usage
  • frequently check disk usage of the database
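
For the RAM and disk checks, standard system tools are enough. The database path below is a placeholder, since the actual location depends on your distribution; for the Docker-based setup, docker stats is also useful:

free -h                          # current RAM usage
df -h                            # disk usage per filesystem
du -sh <PATH_TO_NODE_DATABASE>   # size of the node database directory (path is setup-specific)
docker stats --no-stream         # per-container CPU and RAM usage (Docker-based distribution)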

Submission:

  • submit node ID
  • submit machine specs
  • provide feedback on RAM and disk usage and expectations
  • submit log summaries of the run (see Logs for instructions & tooling)

⚠️ Submission Note

We originally asked for full logs, which turned out to be difficult due to log sizes. Please update your submission with the output of the new log summary tool instead.

(T4) Shielded transfers with Concordium Client over a longer period

Mission:

  • create or reuse identity (mobile wallet Concordium ID)
  • create or reuse accounts (mobile wallet Concordium ID)

Using Concordium Client:

  • Send and receive shielded transfers, i.e. transfers from the shielded balance:
    -- 10 times per hour
    -- 3 times a day
    -- 5 days
  • Transfers must be shielded and successful to count.

Note that all transfers come with a fee to be paid from the general balance, so please make sure you have enough money on the general balance of the accounts. Otherwise, you might want to create a new account and get a new GTU drop. See Transfer Fees for more details.

Submission:

  • submit account addresses

(B3) Restart a node with different restart types

Mission:
All restart types must be executed. Measure the time it takes to catch up and note down the current chain length.

  • restart a node using local DB
  • restart a node with --no-block-state-import

Submission:

  • submit node ID
  • submit logs of startup sequence
  • provide feedback on measured time and chain length

concordium-client error

I have two nodes, one running on Arch and one on Ubuntu; both of them have the same problem.
Concordium is running under a (non-root) user, and that user has permission to use Docker.

My nodes are working as expected with no issues.

~/Documents/concordium-software$ ./concordium-client
OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "chdir to cwd ("/home/docker/concordium-client") set in config.json failed: no such file or directory": unknown

version = 0.3.2
Docker version 19.03.13, build 4484c46d9d

Error while restarting node: "p2p_client-cli: too few bytes"

I got this error while restarting my nodes
`2020-10-28T09:41:35.785747599Z: INFO: Starting up the consensus layer
p2p_client-cli: too few bytes
From: Accounts
demandInput

:: ""
CallStack (from HasCallStack):
error, called at src/Concordium/GlobalState/Persistent/BlobStore.hs:268:19 in globalstate-0.1.0.0-inplace:Concordium.GlobalState.Persistent.BlobStore
loadRef, called at src/Concordium/GlobalState/Persistent/BlobStore.hs:769:30 in globalstate-0.1.0.0-inplace:Concordium.GlobalState.Persistent.BlobStore
XU: warning: too many hs_exit()s`

missing log output

I'm trying to retrieve logs with the new script. My node, which I ran for 7 days for the B2 task, generates 1 daily log. What is the reason?
(For the B2 task, this node is turned on and off twice a day.)

Commit & Pull Req Template

Is there any template for commits and pull requests?
I will commit my responses, but if I do it wrong you will not accept them; without a template we will see a lot of conflicts in the repository.

Please share a template for commits and pull requests. @concordium-admin @bissembert1618

(T2) Use Concordium Client to send and receive GTU

Mission:

  • create or reuse identity (mobile wallet Concordium ID)
  • create or reuse accounts A and B (mobile wallet Concordium ID)
  • export identities and accounts from the mobile wallet Concordium ID
  • import id and accounts to the Concordium Client

Using Concordium Client:

  • transfer 3 GTU from account A to account B
  • shield 10 GTU of account A
  • transfer 7 GTU from the shielded balance on account A to B
  • unshield shielded balance on account B
  • check transfers in the respective block on the network dashboard

Note that all transfers come with a fee to be paid from the general balance, so please make sure you have enough money on the general balance of the accounts. Otherwise, you might want to create a new account and get a new GTU drop. See Transfer Fees for more details.
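
As a rough illustration of the first transfer step, a plain transfer with the client might look like the following. The send-gtu subcommand and its flags are assumptions that may differ in your client version, and the shield/unshield steps have their own subcommands, so please check concordium-client transaction --help and concordium-client account --help for the exact names:

# Hypothetical sketch of the plain transfer from account A to account B.
# Verify subcommand and flag names with: concordium-client transaction --help
concordium-client transaction send-gtu \
  --amount 3 \
  --sender accountA \
  --receiver <ACCOUNT_B_ADDRESS>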

Submission:

  • submit account addresses for A and B
  • submit screenshots of the block explorer on the network dashboard with transfer details (see wiki to find block explorer)

concordium-node-retrieve-logs output is truncated after 2 GB

Whenever I try to get logs with concordium-node-retrieve-logs, the created file is just 2 GB, and I noticed that my latest 3 days of logs are not in the concordium-testnet-node.log file.

I then got my logs directly from Docker with the commands below, day by day.

184 docker container logs 49815d8caf87 --since 2020-10-14T00:00:00 --until 2020-15-00T00:00:00 > 14.log
185 docker container logs 49815d8caf87 --since 2020-10-14T00:00:00 --until 2020-10-15T00:00:00 > 14.log
186 docker container logs 49815d8caf87 --since 2020-10-15T00:00:00 --until 2020-10-16T00:00:00 > 15.log
187 docker container logs 49815d8caf87 --since 2020-10-16T00:00:00 --until 2020-10-17T00:00:00 > 16.log
188 docker container logs 49815d8caf87 --since 2020-10-17T00:00:00 --until 2020-10-18T00:00:00 > 17.log
189 docker container logs 49815d8caf87 --since 2020-10-18T00:00:00 --until 2020-10-19T00:00:00 > 18.log
190 docker container logs 49815d8caf87 --since 2020-10-19T00:00:00 --until 2020-10-20T00:00:00 > 19.log
191 docker container logs 49815d8caf87 --since 2020-10-20T00:00:00 --until 2020-10-21T00:00:00 > 20.log
192 docker container logs 49815d8caf87 --since 2020-10-21T00:00:00 --until 2020-10-22T00:00:00 > 21.log
193 docker container logs 49815d8caf87 --since 2020-10-22T00:00:00 --until 2020-10-23T00:00:00 > 22.log

xx@xx:~/Documents/concordium-software$ cat 14.log | grep "Consensus layer started"
2020-10-14T19:46:06.038473951Z: INFO: Consensus layer started
xx@xx:~/Documents/concordium-software$ cat 15.log | grep "Consensus layer started"
xx@xx:~/Documents/concordium-software$ cat 16.log | grep "Consensus layer started"
2020-10-16T08:45:01.627768980Z: INFO: Consensus layer started
2020-10-16T16:39:52.995671261Z: INFO: Consensus layer started
xx@xx:~/Documents/concordium-software$ cat 17.log | grep "Consensus layer started"
2020-10-17T09:46:21.915731601Z: INFO: Consensus layer started
2020-10-17T16:06:42.717900907Z: INFO: Consensus layer started
xx@xx:~/Documents/concordium-software$ cat 18.log | grep "Consensus layer started"
2020-10-18T16:44:12.223313810Z: INFO: Consensus layer started
xx@xx:~/Documents/concordium-software$ cat 19.log | grep "Consensus layer started"
2020-10-19T09:43:11.622628250Z: INFO: Consensus layer started
2020-10-19T16:49:47.926726104Z: INFO: Consensus layer started
xx@xx:~/Documents/concordium-software$ cat 20.log | grep "Consensus layer started"
2020-10-20T07:58:13.377478326Z: INFO: Consensus layer started
2020-10-20T19:25:55.419950605Z: INFO: Consensus layer started
xx@xx:~/Documents/concordium-software$ cat 21.log | grep "Consensus layer started"
2020-10-21T09:38:57.821702249Z: INFO: Consensus layer started
2020-10-21T16:24:17.101517849Z: INFO: Consensus layer started
xx@xx:~/Documents/concordium-software$ cat 22.log | grep "Consensus layer started"
2020-10-22T09:08:05.716988975Z: INFO: Consensus layer started
2020-10-22T17:11:49.271294889Z: INFO: Consensus layer started
xx@xx:~/Documents/concordium-software$ cat concordium-testnet-node.log | grep "Consensus layer started"
2020-10-14T19:46:06.038473951Z: INFO: Consensus layer started
2020-10-16T08:45:01.627768980Z: INFO: Consensus layer started
2020-10-16T16:39:52.995671261Z: INFO: Consensus layer started
2020-10-17T09:46:21.915731601Z: INFO: Consensus layer started
2020-10-17T16:06:42.717900907Z: INFO: Consensus layer started
2020-10-18T16:44:12.223313810Z: INFO: Consensus layer started
2020-10-19T09:43:11.622628250Z: INFO: Consensus layer started
2020-10-19T16:49:47.926726104Z: INFO: Consensus layer started
2020-10-20T07:58:13.377478326Z: INFO: Consensus layer started

xx@xx:~/Documents/concordium-software$ cat /proc/sys/fs/file-max
9223372036854775807
xx@xx:~/Documents/concordium-software$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 7781
max locked memory (kbytes, -l) 65536
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 7781
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

xx@xx:~/Documents/concordium-software$ file concordium-node-retrieve-logs
concordium-node-retrieve-logs: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, BuildID[sha1]=6fcd1a4b831b1cc7f0ff7b2c8223b5e69dad50c5, stripped
xx@xx:~/Documents/concordium-software$ uname -a
Linux burcu 5.4.0-51-generic #56-Ubuntu SMP Mon Oct 5 14:28:49 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

xx@xx:~/Documents/concordium-software$ dd if=/dev/zero of=./LargeFile bs=1024 count=3000000
3000000+0 records in
3000000+0 records out
3072000000 bytes (3.1 GB, 2.9 GiB) copied, 14.1962 s, 216 MB/s
xx@xx:~/Documents/concordium-software$ du -csh ./LargeFile
2.9G ./LargeFile
2.9G total

My system is 64-bit and that binary is compiled as 64-bit, so the 2 GB file problem should not be a 32-bit limitation.

(B2) Restart a node according to schedule.

Mission:

  • restart twice a day for 7 days
    -- stop the node
    -- wait for different periods
    -- restart the node
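
One restart cycle from the list above can be performed with the bundled wrapper scripts (shown here for the Linux/Mac distribution; the wait time is only an example):

cd ~/Documents/concordium-software
./concordium-node-stop    # stop the node
sleep 3600                # wait for a period of your choice (here: 1 hour)
./concordium-node         # restart the node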

Submission:

  • submit node ID
  • submit log summaries of the run (see Logs for instructions & tooling)

⚠️ Submission Note

We originally asked for full logs, which turned out to be difficult due to log sizes. Please update your submission with the output of the new log summary tool instead.

(B4) Register as baker on node dashboard

Mission:

  • run a node (or reuse existing node)
  • apply to bake on the node dashboard
  • get stake on the node dashboard
  • after some time, stop baking on the node dashboard

Submission:

  • submit baker ID
  • submit account address
  • submit observations, if any

concordium-client account import error

After solving issue #692, my concordium-client ELF binary is working, but now I have another problem on the same node that might be related to #692.

I tried with both a regular user and the root account, but the output is still the same.
I also copied the export.concordiumwallet file inside the container to the /root directory, but the result didn't change at all.

xx@xx:~/Documents/concordium-software$ ./concordium-client -v config account import export.concordiumwallet
Base configuration:

  • Verbose: yes
  • Account config dir: /var/lib/concordium/config/accounts
  • Account name map: none

Error: The given file 'export.concordiumwallet' does not exist.
xx@xx:~/Documents/concordium-software$ ls -la export.concordiumwallet
-rwxrwxrwx 1 xxa xxa 277811 Oct 21 11:33 export.concordiumwallet

One pull request per challenge

Please note that every submission to every challenge should have its own pull request; in other words, one pull request per challenge. As each challenge has a limited number of rewards and we use the first-come, first-served principle, we need to see the timestamps of each individual challenge submission.

In other words,
one pull request for each challenge (Ci), where C = {B, ID, T, EI} and i = {1, 2, ...}, i.e.
one pull request for challenge (B1),
one pull request for challenge (B2),
...
one pull request for challenge (X17).

The wording might not have been crystal clear everywhere in the descriptions, so we have tried to smooth that out by now.

(ID2) Create identities and accounts using production environment

Mission:

  • create a real-life identity (using the production environment) with a physical identity paper supported for your country
  • create 2 accounts; make at least 1 identity data attribute public (when choosing an attribute, keep in mind that it will be publicly shown on the blockchain)
  • request GTU drops for each account

Submission:

  • submit document type and nationality
  • submit account addresses
  • submit public data attribute(s)
  • submit device make and model and OS version

Node commands

Invoke concordium-node --help to see the full list of flags and options.

Other node commands are

  • concordium-node-stop
  • concordium-node-retrieve-logs
  • concordium-node-reset-data
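
On Linux and Mac these are typically invoked from the folder the release was extracted into, for example:

cd ~/Documents/concordium-software
./concordium-node --help
./concordium-node-stop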

On Windows, the commands look as follows:
[screenshot: command line]

(B1) Run a node.

Mission:
Run a node for a month (30 calendar days).

Submission:

  • submit node ID
  • submit log summaries of the run (see Logs for instructions & tooling)

⚠️ Submission Note

We originally asked for full logs, which turned out to be difficult due to log sizes. Please update your submission with the output of the new log summary tool instead.

(T1) Use Concordium ID to send and receive GTU

Mission:

  • create or reuse identity (mobile wallet Concordium ID)
  • create or reuse accounts A and B (mobile wallet Concordium ID)

On Concordium ID:

  • transfer 3 GTU from account A to account B
  • shield 10 GTU of account A
  • transfer 7 GTU from the shielded balance on account A to B
  • unshield shielded balance on account B
  • check transfers in the respective block on the network dashboard

Note that all transfers come with a fee to be paid from the general balance, so please make sure you have enough money on the general balance of the accounts. Otherwise, you might want to create a new account and get a new GTU drop. See Transfer Fees for more details.

Submission:

  • submit account addresses for A and B
  • submit device make and model and OS version

GRPC

When I restart the node on Ubuntu 20.04.1, it keeps repeating this:

2020-10-16T23:11:07.915153265Z: ERROR: TreeState: Database invariant violation: Could not read last finalized block
2020-10-16 23:11:07,916 INFO success: p2p-client entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-10-16T23:11:07.917293130Z: ERROR: External: Database invariant violation: Could not read last finalized block
Error: ErrorMessage { msg: "Database invariant violation. See logs for details." }

On the client side it shows:
Oops, something went wrong: Unexpected HTTP response: 400: "Cannot establish connection to GRPC endpoint."

Buggy Alert on Baking

When I start baking again after stopping it, the message below came up on the client. The password is correct, and baking still started despite the alert.

Baker ID is 839

Oops, something went wrong: Unexpected HTTP response: 401: "cannot decrypt signing key with index 0: decryption failure: wrong password"

[image]

Retrieve logs

To get full logs with the original tool, use concordium-node-retrieve-logs, see also Wiki: Logs.
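
For the Docker-based distribution, the original tool is run from the software folder, for example:

cd ~/Documents/concordium-software
./concordium-node-retrieve-logs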

To get smaller pruned logs, please see Logs for more details.

Identity issuer up again

Our identity issuer Notabene has been down since before 20:00 on 2020-10-18. That means that no new identities can be issued at the moment, neither with "Notabene" nor with "Notabene development", and challenges (ID1) - (ID3) cannot be completed.

We have notified Notabene on several channels but haven't heard back so far. We will communicate as soon as we hear back from them.

Stake delegation slot increase

The delegation service has been changed so that it now has 25 slots, each with around 1% of the stake. The duration of a delegation is unchanged; each delegation lasts for 2 hours. With these parameters we expect bakers that have these amounts delegated to them to produce around 7 blocks over the two hours.

Challenge (B3) updated

It came to our attention that in challenge (B3) the line

  • restart a node with --listen-grpc-port set to 9999 and start Concordium Client connecting to port 9999

cannot be executed in a straightforward way, so we decided to delete it from the challenge. The challenge card has been updated and tagged accordingly. There is no need to resubmit (B3). Please merge this change into your working space.

sonerbo

4AsdEaWG5ftyFCEDSuq7PvePc8bzC7QVLbuVXqdGVZ156fVDRz

Xiaomi Note 6 Pro Android

Client commands

Invoke concordium-client --help to see the full list of topics. Include the topic to see the available commands within that topic. The most relevant topics might be

  • concordium-client transaction --help
  • concordium-client account --help
  • concordium-client config --help
  • concordium-client block --help
  • concordium-client baker --help

For information on how to use the concordium client, see Concordium Testnet Documentation: Concordium Client.

Pull request for each task

#323
does not answer the main question. We understood that each task must have a different pull request; this much is clear.

But following this video https://youtu.be/VFGayA6ikU4, we run into a technical limitation when making a different pull request for each task if we only have one branch. Maybe we have to choose "Create a new branch for this commit" instead of "Commit directly to the main branch" in the video?
