temp_safe_network's People

Contributors

b-zee, bochaco, dirvine, grumbach, iancoleman, jacderida, joshuef, maqi, oetyng, rolandsherwin

temp_safe_network's Issues

Add support for relocation retries at nodes

It's possible to time out a join and try it again in case it gets stuck. This is not implemented by routing, but it is easy to do in the upper layers. The same should be possible for relocations, since they can get stuck for the same reasons. Currently the upper layer can detect that a relocation is taking too long (by measuring the time between the RelocationStarted and Relocated events), but it has no way of restarting it without losing the relocation data, which carries the proof of the node's age. We should add such a feature.
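
The upper-layer detection-and-retry described above can be sketched as a plain timeout on the event stream. This is a minimal illustration, not the routing crate's API: the `Event` variants and `await_relocation` are hypothetical stand-ins for the real RelocationStarted/Relocated events.

```rust
use std::sync::mpsc::{Receiver, RecvTimeoutError};
use std::time::Duration;

// Hypothetical stand-ins for the routing layer's relocation events.
enum Event {
    RelocationStarted,
    Relocated,
}

// Wait for Relocated; if it does not arrive within `timeout`, report a
// timeout so the caller can restart the relocation, reusing the saved
// relocation data that carries the proof of the node's age.
fn await_relocation(rx: &Receiver<Event>, timeout: Duration) -> Result<(), &'static str> {
    loop {
        match rx.recv_timeout(timeout) {
            Ok(Event::Relocated) => return Ok(()),
            // A (re)start resets the clock for the next wait.
            Ok(Event::RelocationStarted) => continue,
            Err(RecvTimeoutError::Timeout) => {
                return Err("relocation timed out; retry with saved relocation data")
            }
            Err(RecvTimeoutError::Disconnected) => return Err("event stream closed"),
        }
    }
}
```

The missing piece in the current code is precisely the "retry with saved relocation data" arm: today the data is lost, so a restarted relocation cannot prove the node's age.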

JoinsAllowed logic scrambled

The logic seems to have been scrambled by mistake when translating the original code.

Current:
https://github.com/maidsafe/safe_network/blob/4adaeaff4f07871840397adc3371ec8b3436e7ce/sn/src/routing/core/msg_handling/agreement.rs#L109-L114

Original:

let op = if previous_name.is_some() {
    trace!("A relocated node has joined the section.");
    // Switch joins_allowed off when a new adult joins.
    NodeDuty::SetNodeJoinsAllowed(false)
} else if network_api.our_prefix().await.is_empty() {
    NodeDuty::NoOp
} else {
    NodeDuty::SetNodeJoinsAllowed(false)
};

Originally, the branching on relocation seems to have been there only to log that fact.
The empty-prefix case was there so that joining is never disallowed in the first section.

The current code instead switches joins_allowed on when a brand-new node joins a section other than the first (empty prefix).
AFAIK this is not the intended behaviour.

Suggested replacement for L109-L114, to preserve the original logic:

// Do not disable node joins in the first section.
if !self.network_knowledge.prefix().await.is_empty() {
    // ...otherwise, switch off joins_allowed when a node joins.
    *self.joins_allowed.write().await = false;
}

RUSTSEC-2020-0159: Potential segfault in `localtime_r` invocations

Potential segfault in localtime_r invocations

Details
Package chrono
Version 0.4.19
URL chronotope/chrono#499
Date 2020-11-10

Impact

Unix-like operating systems may segfault due to dereferencing a dangling pointer in specific circumstances. This requires an environment variable to be set in a different thread than the affected functions. This may occur without the user's knowledge, notably in a third-party library.

Workarounds

No workarounds are known.

References

See advisory page for additional details.

RUSTSEC-2021-0079: Integer overflow in `hyper`'s parsing of the `Transfer-Encoding` header leads to data loss

Integer overflow in hyper's parsing of the Transfer-Encoding header leads to data loss

Details
Package hyper
Version 0.13.10
URL GHSA-5h46-h7hh-c6x9
Date 2021-07-07
Patched versions >=0.14.10

When decoding chunk sizes that are too large, hyper's code would encounter an integer overflow. Depending on the situation,
this could lead to data loss from an incorrect total size, or in rarer cases, a request smuggling attack.

To be vulnerable, you must be using hyper for any HTTP/1 purpose, including as a client or server, and consumers must send
requests or responses that specify a chunk size greater than 18 exabytes. For a request smuggling attack to be possible,
any upstream proxies must accept a chunk size greater than 64 bits.
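
The overflow class described here is easy to see in miniature. The sketch below is not hyper's code; it just shows how accumulating the hex digits of a chunk-size line with checked arithmetic rejects values over 64 bits instead of silently wrapping.

```rust
// Parse a chunked-encoding chunk-size line (hex digits only; chunk
// extensions are ignored for brevity). checked_mul/checked_add return
// None on overflow, where unchecked arithmetic would wrap around.
fn parse_chunk_size(line: &str) -> Option<u64> {
    let mut size: u64 = 0;
    for c in line.trim().chars() {
        let digit = u64::from(c.to_digit(16)?);
        size = size.checked_mul(16)?.checked_add(digit)?;
    }
    Some(size)
}
```

For example, parse_chunk_size("fF") yields Some(255), while a 17-hex-digit size (at least 2^64) yields None rather than a wrapped, too-small total.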

See advisory page for additional details.

Nodes on podman container: "Unable to establish a connection"

I start 16 IPv6 nodes in a podman container running Alpine Linux on an RPi4 running Manjaro Linux.
One by one they drop off until only the first one is left.

All errors are of this nature:

 ERROR 2022-03-09T14:44:08.302708Z [sn/src/node/core/bootstrap/join.rs:L164]:
	 ➤ join {network_genesis_key=PublicKey(0dae..b0d0) target_section_key=PublicKey(0dae..b0d0) recipients=[Peer { name: 6ba5e7(01101011).., addr: [2001:983:8610:1:575a:3c3a:a0a0:875b]:12000 }]}
	 ➤ Node cannot join the network since it is not externally reachable: [fdc2:2c37:1a5c:5ad1::1]:12001
 ERROR 2022-03-09T14:44:08.303123Z [sn/src/bin/sn_node.rs:L300]:
	 ➤ Unfortunately we are unable to establish a connection to your machine ([fdc2:2c37:1a5c:5ad1::1]:12001) either through a public IP address, or via IGD on your router. Please ensure that IGD is enabled on your router - if it is and you are still unable to add your node to the testnet, then skip adding a node for this testnet iteration. You can still use the testnet as a client, uploading and downloading content, etc. https://safenetforum.org/

It seems to work fine on IPv4, although I will add that the IPv4 setup is a couple of versions behind, so I don't know if this is a version issue or an IPv4/IPv6 issue.

What's the best action?

RUSTSEC-2021-0060: `aes-soft` has been merged into the `aes` crate

aes-soft has been merged into the aes crate

Details
Status unmaintained
Package aes-soft
Version 0.3.3
URL RustCrypto/block-ciphers#200
Date 2021-04-29

Please use the aes crate going forward. The new repository location is at:

<https://github.com/RustCrypto/block-ciphers/tree/master/aes>

AES-NI is now autodetected at runtime on i686/x86-64 platforms.
If AES-NI is not present, the aes crate will fallback to a constant-time
portable software implementation.

To force the use of a constant-time portable implementation on these platforms,
even if AES-NI is available, use the new force-soft feature of the aes
crate to disable autodetection.
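
For Cargo users, that looks like the following in Cargo.toml (the version number here is illustrative, not prescriptive):

```toml
[dependencies]
# force-soft disables AES-NI autodetection and always uses the
# constant-time portable software implementation.
aes = { version = "0.7", features = ["force-soft"] }
```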

See advisory page for additional details.

Podman container node assigns wrong IP address and port for host node when attempting to connect each other via slirp4netns

Safe network version: 0.52.0 0.47.0 0.40.0

I'm trying to run a Safe Network node inside a rootless podman container and then connect another node to it.

It fails because the host node tells the container node that it can be reached at 10.0.2.100:<random port>,
when it should be 192.168.178.29:12001.

I'm not sure if this is a maidsafe issue or a podman issue, so I'm putting this in both.

The IPs I've chosen

Root node inside container
- Local: Tap0 address:12000
- Public/Published: LAN address or public address:12000

Host
- Local: LAN address:12001
- Public: LAN address:12001

relevant parts of host log

	 ➤ 6da530.. Joining as a new node (PID: 202926) our socket: 192.168.178.29:12001, bootstrapper was: 192.168.178.29:12000, network's genesis key: PublicKey(0de4..893d)
...
	 ➤ Node cannot join the network since it is not externally reachable: 10.0.2.100:53100

relevant parts of container log

INFO 2021-12-29T11:21:28.021787Z [sn/src/routing/core/comm.rs:L183]:
	 ➤ Peer 10.0.2.100:53100 is NOT externally reachable: Send(ConnectionLost(TimedOut))

full host log

 INFO 2021-12-29T11:20:26.259319Z [sn/src/routing/routing_api/mod.rs:L152]:
	 ➤ 6da530.. Bootstrapping a new node.
 INFO 2021-12-29T11:20:26.276224Z [sn/src/routing/routing_api/mod.rs:L166]:
	 ➤ 6da530.. Joining as a new node (PID: 202926) our socket: 192.168.178.29:12001, bootstrapper was: 192.168.178.29:12000, network's genesis key: PublicKey(0de4..893d)
 INFO 2021-12-29T11:20:26.276595Z [sn/src/routing/core/bootstrap/join.rs:L505]:
	 ➤ join {network_genesis_key=PublicKey(0de4..893d) target_section_key=PublicKey(0de4..893d) recipients=[Peer { name: 6da530(01101101).., addr: 192.168.178.29:12000, connection: Some(Connection { id: 944978480, remote_address: 192.168.178.29:12000, .. }) }]}
	 ➤ send_join_requests {join_request=JoinRequest { section_key: PublicKey(0de4..893d), resource_proof_response: None, aggregated: None } recipients=[Peer { name: 6da530(01101101).., addr: 192.168.178.29:12000, connection: Some(Connection { id: 944978480, remote_address: 192.168.178.29:12000, .. }) }] section_key=PublicKey(0de4..893d) should_backoff=false}
	 ➤ Sending JoinRequest { section_key: PublicKey(0de4..893d), resource_proof_response: None, aggregated: None } to [Peer { name: 6da530(01101101).., addr: 192.168.178.29:12000, connection: Some(Connection { id: 944978480, remote_address: 192.168.178.29:12000, .. }) }]
 INFO 2021-12-29T11:20:27.451718Z [sn/src/routing/core/bootstrap/join.rs:L367]:
	 ➤ join {network_genesis_key=PublicKey(0de4..893d) target_section_key=PublicKey(0de4..893d) recipients=[Peer { name: 6da530(01101101).., addr: 192.168.178.29:12000, connection: Some(Connection { id: 944978480, remote_address: 192.168.178.29:12000, .. }) }]}
	 ➤ Setting Node name to 8d1c11.. (age 98)
 INFO 2021-12-29T11:20:27.451765Z [sn/src/routing/core/bootstrap/join.rs:L374]:
	 ➤ join {network_genesis_key=PublicKey(0de4..893d) target_section_key=PublicKey(0de4..893d) recipients=[Peer { name: 6da530(01101101).., addr: 192.168.178.29:12000, connection: Some(Connection { id: 944978480, remote_address: 192.168.178.29:12000, .. }) }]}
	 ➤ Newer Join response for us 8d1c11(10001101).., SAP SectionAuthorityProvider { prefix: Prefix(), public_key_set: PublicKeySet { public_key: PublicKey(0de4..893d), threshold: 0 }, elders: {Peer { name: 2d82ba(00101101).., addr: 192.168.178.29:12000, connection: None }} } from Peer { name: 2d82ba(00101101).., addr: 192.168.178.29:12000, connection: Some(Connection { id: 944978480, remote_address: 192.168.178.29:12000, .. }) }
 INFO 2021-12-29T11:20:28.011536Z [sn/src/routing/core/bootstrap/join.rs:L505]:
	 ➤ join {network_genesis_key=PublicKey(0de4..893d) target_section_key=PublicKey(0de4..893d) recipients=[Peer { name: 6da530(01101101).., addr: 192.168.178.29:12000, connection: Some(Connection { id: 944978480, remote_address: 192.168.178.29:12000, .. }) }]}
	 ➤ send_join_requests {join_request=JoinRequest { section_key: PublicKey(0de4..893d), resource_proof_response: None, aggregated: None } recipients=[Peer { name: 2d82ba(00101101).., addr: 192.168.178.29:12000, connection: Some(Connection { id: 944978480, remote_address: 192.168.178.29:12000, .. }) }] section_key=PublicKey(0de4..893d) should_backoff=true}
	 ➤ Sending JoinRequest { section_key: PublicKey(0de4..893d), resource_proof_response: None, aggregated: None } to [Peer { name: 2d82ba(00101101).., addr: 192.168.178.29:12000, connection: Some(Connection { id: 944978480, remote_address: 192.168.178.29:12000, .. }) }]
 ERROR 2021-12-29T11:21:28.031312Z [sn/src/routing/core/bootstrap/join.rs:L164]:
	 ➤ join {network_genesis_key=PublicKey(0de4..893d) target_section_key=PublicKey(0de4..893d) recipients=[Peer { name: 6da530(01101101).., addr: 192.168.178.29:12000, connection: Some(Connection { id: 944978480, remote_address: 192.168.178.29:12000, .. }) }]}
	 ➤ Node cannot join the network since it is not externally reachable: 10.0.2.100:53100

full container log

 INFO 2021-12-29T10:19:25.359368Z [sn/src/routing/routing_api/mod.rs:L85]:
	 ➤ 2d82ba.. Starting a new network as the genesis node (PID: 1).
 INFO 2021-12-29T10:19:25.477464Z [sn/src/routing/routing_api/mod.rs:L128]:
	 ➤ 2d82ba.. Genesis node started!. Genesis key PublicKey(0de4..893d), hex: ade4e34061a1b0f3d1e81999bf075c0af6046f02bd81178e186a5d593957ca948dc7682d6ce729aa924aeb1a8d6ef97c
 INFO 2021-12-29T10:19:25.477680Z [sn/src/routing/routing_api/dispatcher.rs:L87]:
	 ➤ Starting to probe network
 INFO 2021-12-29T10:19:25.477867Z [sn/src/routing/routing_api/dispatcher.rs:L115]:
	 ➤ Writing our PrefixMap to disk
 INFO 2021-12-29T10:19:25.477911Z [sn/src/routing/core/mod.rs:L212]:
	 ➤ Writing our latest PrefixMap to disk
 INFO 2021-12-29T10:19:25.482805Z [sn/src/node/node_api/mod.rs:L87]:
	 ➤ Node PID: 1, prefix: Prefix(), name: 2d82ba(00101101).., age: 255, connection info: "192.168.178.29:12000"
INFO 2021-12-29T11:21:28.021787Z [sn/src/routing/core/comm.rs:L183]:
	 ➤ Peer 10.0.2.100:53100 is NOT externally reachable: Send(ConnectionLost(TimedOut))

host command

safe networks switch lan-ipv4 && \
  RUST_BACKTRACE=full ~/.safe/node/sn_node -vv \
  --clear-data \
  --skip-auto-port-forwarding \
  --local-addr 192.168.178.29:12001 \
  --public-addr 192.168.178.29:12001 \
  --root-dir=/home/folaht/.safe/node/joinnode-ipv4_12001 \
  --log-dir=/home/folaht/.safe/node/joinnode-ipv4_12001 &

The IP address and port the host node tries to connect to

[folaht@Rezosur-zot joinnode-ipv4_12001]$ safe networks
+----------+--------------+------------------------------------------------------------------------+
| Networks |              |                                                                        |
+----------+--------------+------------------------------------------------------------------------+
| Current  | Network name | Connection info                                                        |
+----------+--------------+------------------------------------------------------------------------+
| *        | lan-ipv4     | /home/folaht/.safe/cli/networks/lan-ipv4_node_connection_info.config   |
+----------+--------------+------------------------------------------------------------------------+
|          | local-ipv4   | /home/folaht/.safe/cli/networks/local-ipv4_node_connection_info.config |
+----------+--------------+------------------------------------------------------------------------+
|          | local-ipv6   | /home/folaht/.safe/cli/networks/local-ipv6_node_connection_info.config |
+----------+--------------+------------------------------------------------------------------------+
[folaht@Rezosur-zot joinnode-ipv4_12001]$ cat /home/folaht/.safe/cli/networks/lan-ipv4_node_connection_info.config
["8520343e22464270f10ef2408902ab87b1186e7410cf57d5390c9a44b8953097e0b36e2bd29d1169eaa446a82966b12d",["192.168.178.29:12000"]]

status container node

[folaht@Rezosur-zot rootnode-ipv4]$ podman ps -a
CONTAINER ID  IMAGE                                COMMAND     CREATED         STATUS             PORTS                                                             NAMES
f4c6a137b93d  localhost/rootnode-ipv4_test:latest              37 minutes ago  Up 37 minutes ago  192.168.178.29:12000->12000/tcp, 192.168.178.29:12000->12000/udp  test_rootnode-ipv4

pid container node

[folaht@Rezosur-zot rootnode-ipv4]$ ps -ef | grep sn_node
10999      51780   51777  0 18:13 ?        00:00:01 sn_node -vv --idle-timeout-msec 5500 --keep-alive-interval-msec 4000 --skip-auto-port-forwarding --clear-data --local-addr 10.0.2.100:12000 --public-addr 192.168.178.29:12000 --log-dir /home/admin/.safe/node/rootnode-ipv4_12000 --root-dir /home/admin/.safe/node/rootnode-ipv4_12000 --first
folaht     52185    4230  0 18:50 pts/1    00:00:00 grep --colour=auto sn_node

I created a second log collection that's more verbose.

https://gist.github.com/Folaht/70ae0329b5acd176cc3ea84b920c1576

main README confusingly states two different licensing terms for (possibly) separate parts of the project.

The file <README.md> contains this:

License

This Safe Network repository is licensed under the General Public License (GPL), version 3 (LICENSE http://www.gnu.org/licenses/gpl-3.0.en.html).

Linking exception

safe_network is licensed under GPLv3 with linking exception. This means you can link to and use the library from any program, proprietary or open source; paid or gratis. However, if you modify safe_network, you must distribute the source to your modified version under the terms of the GPLv3.

See the LICENSE file for more details.

That is ambiguous: does it mean that the project in general is licensed as "GPL-3" and only some subset called "safe_network" has the linking exception?

If (as I guess) the intent is to state one single set of licensing terms, "GPL-3 with linking exception", then I recommend stating it all together in a single paragraph, without a subsection and without varying identifiers.

[routing] Clients attempting to directly contact nodes in another section should receive an Infrastructure error

If a client sends a message (a client message) to a section that is not its own, should that message be handled at routing by telling the client it has contacted the wrong section and redirecting it to the right one?

Yes. Nodes currently do nothing with a client message directed to the wrong section.

Wanted

Clients should receive an infrastructure error, indicating this elder is not in their section.

Nodes should not receive any client message that is not for their section.

TBD:

Should the error include known section elders?

If not, clients will rebootstrap + query.

Is there any reason not to provide this info with the error?

RUSTSEC-2021-0080: Links in archive can create arbitrary directories

Links in archive can create arbitrary directories

Details
Package tar
Version 0.4.35
URL alexcrichton/tar-rs#238
Date 2021-07-19

When unpacking a tarball that contains a symlink the tar crate may create
directories outside of the directory it's supposed to unpack into.

The function errors when it's trying to create a file, but the folders are
already created at this point.

use std::{io, io::Result};
use tar::{Archive, Builder, EntryType, Header};

fn main() -> Result<()> {
    let mut buf = Vec::new();

    {
        let mut builder = Builder::new(&mut buf);

        // symlink: parent -> ..
        let mut header = Header::new_gnu();
        header.set_path("symlink")?;
        header.set_link_name("..")?;
        header.set_entry_type(EntryType::Symlink);
        header.set_size(0);
        header.set_cksum();
        builder.append(&header, io::empty())?;

        // file: symlink/exploit/foo/bar
        let mut header = Header::new_gnu();
        header.set_path("symlink/exploit/foo/bar")?;
        header.set_size(0);
        header.set_cksum();
        builder.append(&header, io::empty())?;

        builder.finish()?;
    };

    Archive::new(&*buf).unpack("demo")
}

This issue was discovered and reported by Martin Michaelis (@mgjm).

See advisory page for additional details.

Register API: Missing a way to get all nodes from Merkle Register CRDT

I might be missing something, but it seems like there is no way to retrieve the low-level crdts::merkle_reg::MerkleReg.

There is a safe_network::client::Client::get_register function that returns a Register type, but this can only be used to query specific entries with a known hash, or the root entries.

(As an aside, I think the term 'entry' is a little confusing in the context of DAGs.)

Implement basic `Log Spent` in nodes

Where

Probably inside a sn/data/spentbook folder

Blockers

This may be clearer with the in-memory DAG, but I don't think it's a blocker to setting up basic validation functionality and integrating the sn_dbc code.

Sketched impl:

type OrderingDAG = MerkleReg<MaidTransaction>;
type InvertedDAG = BTreeMap<Hash, BTreeSet<Hash>>; // used for bootstrapping and traversing the DAG forward

struct MaidTransaction {
    // hash of the MaidTransaction that produced an output with this inputs public key
    inputs: BTreeMap<PublicKey, Hash>,
    ringct: RingCtTransaction, // The actual transaction
}

fn log_spent(&mut self, tx: MaidTransaction, roots: SectionAuth<BTreeSet<Hash>>) {
    self.verify_tx(tx, roots);
    // TODO: hook up to the in-memory DAGs when available.
    // self.ordering_dag.write(tx, roots);
    // for (input, hash) in tx.inputs.iter() {
    //     self.inverted_dag.entry(hash).or_default().insert(tx.hash());
    // }
}

This API will be the core integration point between sn_dbc and sn on the SpentBook side. This is where we will need to figure out the key-management situation (i.e. sn-dbc needs a way to validate and create signatures, it exposes some traits we can implement to do the signing / verifying)
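
The commented-out inverted-DAG bookkeeping in the sketch above is plain map insertion. Here is a stdlib-only illustration, with a placeholder u64 hash type standing in for the real Hash:

```rust
use std::collections::{BTreeMap, BTreeSet};

// Placeholder for the real transaction hash type.
type Hash = u64;

// Record that `spending_tx` spends an output produced by `input_tx`,
// so the DAG can later be traversed forward from the roots.
fn record_spend(
    inverted: &mut BTreeMap<Hash, BTreeSet<Hash>>,
    input_tx: Hash,
    spending_tx: Hash,
) {
    inverted.entry(input_tx).or_default().insert(spending_tx);
}
```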

Add Initial DBC messages

What

Implement initial draft sn/messaging messages for the various DBC flows.

These include:

  • read spentbook from specific section/s
  • write/log tx and spentbook on a section
  • verify a DBC is valid and spent/unspent (a user may want to verify without spending it)
  • reissue DBCs

--- Qs:

  • Are there more messages to come here?

Initial DBC CLI commands

What

We need full CLI flows for pending DBC commands.

(While the messaging/DAG/client implementation is underway, these can purely print some placeholder text.)

Initial versions should just print output to stdout.

Commands

  • reissue DBC(s) (from->to)
  • check validity of DBC

sn_dbc already has many examples of this flow, so ideally it should just be a matter of integrating these existing commands into the CLI itself.

Missing INVALID_NRS_CHARS

safe_network/blob/main/src/url/url_parts.rs
suggests at ln40:
// https://www.unicode.org/Public/UCD/latest/ucd/PropList.txt
// Other_Default_Ignorable_Code_Point

but the code seems to be missing at least
'\u{FFF8}', // reserved

A question then follows: PropList.txt does suggest the ranges below, but it is unclear whether they are excluded for some other reason; if so, the code could do with a comment noting that.

E0000         ; Other_Default_Ignorable_Code_Point # Cn       <reserved-E0000>
E0002..E001F  ; Other_Default_Ignorable_Code_Point # Cn  [30] <reserved-E0002>..<reserved-E001F>
E0080..E00FF  ; Other_Default_Ignorable_Code_Point # Cn [128] <reserved-E0080>..<reserved-E00FF>
E01F0..E0FFF  ; Other_Default_Ignorable_Code_Point # Cn [3600] <reserved-E01F0>..<reserved-E0FFF>

Less obviously, certain "Deprecated" code points are also missing; the full Deprecated list is below, but the code only seems to include 206A..206F. It's not obvious to me why the others would be acceptable. I also wonder about E0001, which Gnome Character Map notes as "strongly discouraged", so perhaps it is a different kind than the ignorables.

0149          ; Deprecated # L&       LATIN SMALL LETTER N PRECEDED BY APOSTROPHE
0673          ; Deprecated # Lo       ARABIC LETTER ALEF WITH WAVY HAMZA BELOW
0F77          ; Deprecated # Mn       TIBETAN VOWEL SIGN VOCALIC RR
0F79          ; Deprecated # Mn       TIBETAN VOWEL SIGN VOCALIC LL
17A3..17A4    ; Deprecated # Lo   [2] KHMER INDEPENDENT VOWEL QAQ..KHMER INDEPENDENT VOWEL QAA
206A..206F    ; Deprecated # Cf   [6] INHIBIT SYMMETRIC SWAPPING..NOMINAL DIGIT SHAPES
2329          ; Deprecated # Ps       LEFT-POINTING ANGLE BRACKET
232A          ; Deprecated # Pe       RIGHT-POINTING ANGLE BRACKET
E0001         ; Deprecated # Cf       LANGUAGE TAG
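
If these ranges do belong in the disallow-list, the match arms are straightforward. This is a sketch of the candidate additions only (whether they should be added is exactly the open question above), not the existing INVALID_NRS_CHARS code:

```rust
// Candidate additions: U+FFF8 plus the reserved
// Other_Default_Ignorable_Code_Point ranges from PropList.txt.
fn is_candidate_invalid_nrs_char(c: char) -> bool {
    matches!(
        c,
        '\u{FFF8}'
            | '\u{E0000}'
            | '\u{E0002}'..='\u{E001F}'
            | '\u{E0080}'..='\u{E00FF}'
            | '\u{E01F0}'..='\u{E0FFF}'
    )
}
```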

look into optional ACKs. After elders have sent on to adults.

We could use ACKs in a non-required fashion.

The worst case is that we fall back to the current behaviour and optionally validate a PUT.

We can send an ACK back once a message has been processed at elders (ie, once the message has gone out to adults to store a chunk, OR once a register has been written).

The ack may then allow a client to stop querying/validating/waiting for AE flows to complete.

(Which in turn may simplify some tests)

Issue with switch networks and uploading file

Hi,

I am having an issue with switching the Safe network to comnet following this guide. Here is the error log:

Feb 28 01:23:45.510 DEBUG safe: Starting Safe CLI...
Feb 28 01:23:45.511 DEBUG safe::cli: Processing command: CmdArgs { cmd: Some(Networks { cmd: Some(Switch { network_name: "comnet" }) }), output_fmt: None, output_json: false, dry: false, xorurl_base: None }
Feb 28 01:23:45.511 DEBUG safe::operations::config: Config settings retrieved from '/home/tamunosiki/.safe/cli/config.json': Settings { networks: {"comnet": ConnInfoLocation("https://sn-comnet.s3.eu-west-2.amazonaws.com/node_connection_info.config")} }
Feb 28 01:23:45.511 DEBUG safe::operations::config: Config settings at '/home/tamunosiki/.safe/cli/config.json' updated with: Settings { networks: {"comnet": ConnInfoLocation("https://sn-comnet.s3.eu-west-2.amazonaws.com/node_connection_info.config")} }
Feb 28 01:23:45.511 DEBUG safe::subcommands::networks: Switching to 'comnet' network...
Switching to 'comnet' network...
Fetching 'comnet' network connection information from 'https://sn-comnet.s3.eu-west-2.amazonaws.com/node_connection_info.config' ...
Error:
   0: expected value at line 1 column 1

Location:
   sn_cli/src/operations/config.rs:412

Backtrace omitted.
Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.

I was able to connect to the local node baby-fleming following the guide from the GitHub repo, but encountered an error when I tried to upload a file. Here is the error log:

Error:
   0: ClientError: Problem finding sufficient elders. A supermajority of responses is unobtainable. 1 were known in this section, 3 needed. Section pk: PublicKey(1067..b7df)
   1: Problem finding sufficient elders. A supermajority of responses is unobtainable. 1 were known in this section, 3 needed. Section pk: PublicKey(1067..b7df)

Location:
   /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/result.rs:1911

Backtrace omitted.
Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.

I am using a fish shell on an arch base linux.

I am super excited about the Safe Network and would like to build my Rust apps on the network. I would like to start playing with the network through the CLI before jumping into the API, so please help me figure out what my issues are.

Creating existing register with different owner does not give error

Calling Client::store_public_register twice with different owners succeeds.

let mut csprng = rand::rngs::OsRng;
let pk1 = Keypair::new_ed25519(&mut csprng).public_key();
let pk2 = Keypair::new_ed25519(&mut csprng).public_key();

let xor = XorName::random();

let (_, wal) = client
    .store_public_register(xor, 0xFF, pk1, Default::default())
    .await
    .unwrap();
client.publish_register_ops(wal).await.unwrap();

let (_, wal) = client
    .store_public_register(xor, 0xFF, pk2, Default::default())
    .await
    .unwrap();
client.publish_register_ops(wal).await.unwrap(); // No panic ...

Is this expected behavior?

Mitigation for "Confusables are worrisome"

There is a comment in
safe_network/blob/main/src/url/url_parts.rs
atm at ln23:
// Confusables are worrisome but not invalid.

Perhaps that could note that confusables need not be a worry here: the solution to confusables lies with NRS registration, i.e. "a confusable url can never be registered". A check could then be done only at NRS registration time, and would not be required for all url usage.

I expect, too, that confusables are not useful against xorurls, because those use a simple, limited set of characters.

Perhaps it is not ideal to code on the expectation that there can never be slippage; that depends on how confident we are about what can never occur.

Aarch64 ARM build

Is there any reason this architecture wasn't included in the ARM support pull request?
It should be very easy to do, correct?
Copy the armv7 job, replace everything with aarch64, and it works?

I'm actually surprised that armv7 was added first.

sn_node: support logrotate style logfile rotation

The sn_node logfiles are currently closed after one hour, and a new file opened with a different name. The files grow quickly in size and each one can be 10MB or more.

Problem:

  • a command such as tail -f or vdash stops following the logs on the first rotation
  • logfiles take up too much space

Proposal: modify the behaviour to support logrotate style naming, compression and thresholds, specifically:

  • the logfile being updated always has the same name
  • rotated files are moved in sequence to new names with .1, .2, .3 ... .N appended to the name of the first logfile
  • rotated files are compressed (ideally optional)
  • rotation is based on a default maximum size (ideally with configurable size or time)
  • the number of logfiles kept is limited by default, say to ten (i.e. max N = 9), ideally configurable

This would keep the total size used more manageable and allow logfile watching programs to continue following the log as the logfiles are rotated. Ideally features will be configurable by CLI parameters on sn_node (and ultimately safe node).

This issue relates to the discussion on the forum here

Implementation Note

The current logging system uses the tracing module with the tracing_appender as here:

https://github.com/maidsafe/safe_network/blob/e9a9cc1096e025d88f19390ad6ba7398f71bc800/sn/src/bin/sn_node.rs#L100

I'm hopeful the new behaviour can be achieved by replacing the RollingFileAppender with a RollingWriter that provides a wrapper for a suitable module with logrotate functionality, such as file_rotation (which implements tokio::io::Write) or file_rotate (which implements std::io::Write).
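
The rename sequence in the proposal is mechanical and can be specified independently of whichever appender crate ends up doing the I/O. A sketch (the filenames and max-kept default are illustrative):

```rust
// Compute the logrotate-style renames, oldest first: .{N-1} -> .N, ...,
// .1 -> .2, then the live file -> .1, so the writer can reopen `base`
// under the same name after every rotation.
fn rotation_renames(base: &str, max_kept: usize) -> Vec<(String, String)> {
    let mut renames = Vec::new();
    for k in (1..max_kept).rev() {
        renames.push((format!("{}.{}", base, k), format!("{}.{}", base, k + 1)));
    }
    renames.push((base.to_string(), format!("{}.1", base)));
    renames
}
```

With base "sn_node.log" and max_kept 3 this yields .2 to .3, .1 to .2, then sn_node.log to .1; compressing each rotated file would happen after its rename.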

[routing] Sybil attack via relocation

A relocated node can create an arbitrary number of new identities, sign them with its old key, and then pretend they are all different nodes joining the destination section. If the node is old enough, it can easily take control of the destination section this way.

Idea for a fix:

We record the previous name as part of the MemberInfo in the SectionPeers container. Then when inserting an entry into the container, we check whether an entry with the same previous_name already exists there. If so, we vote Offline for both of them.
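
The duplicate-previous_name check in the fix idea is a small index lookup at insert time. Here is a stdlib sketch with a u64 placeholder for the real XorName type; on a hit, the caller would vote Offline for both returned names:

```rust
use std::collections::BTreeMap;

// Placeholder for the real XorName.
type Name = u64;

// Index: previous_name -> current member name. Returns Some((existing,
// incoming)) when a second identity claims the same previous_name,
// i.e. the pair that should both be voted Offline.
fn detect_relocation_reuse(
    by_prev_name: &mut BTreeMap<Name, Name>,
    previous_name: Name,
    new_member: Name,
) -> Option<(Name, Name)> {
    match by_prev_name.get(&previous_name) {
        Some(&existing) if existing != new_member => Some((existing, new_member)),
        _ => {
            by_prev_name.insert(previous_name, new_member);
            None
        }
    }
}
```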

RUSTSEC-2021-0062: project abandoned; migrate to the `aes-siv` crate

project abandoned; migrate to the aes-siv crate

Details
Status unmaintained
Package miscreant
Version 0.5.2
URL miscreant/miscreant.rs@5d921f5
Date 2021-02-28

The Miscreant project has been abandoned and archived.

The Rust implementation has been adapted into the new aes-siv crate which
implements both the AES-CMAC-SIV and AES-PMAC-SIV constructions:

<https://github.com/RustCrypto/AEADs/tree/master/aes-siv>

Please migrate to the aes-siv crate.

Alternatively see the aes-gcm-siv crate for a newer, faster construction
which provides similar properties:

<https://github.com/RustCrypto/AEADs/tree/master/aes-gcm-siv>

See advisory page for additional details.

On ARM with Termux on Android 10, safe commands execute but don't return to the prompt

Platform: ARM, using Termux on Android 10 on a Redmi 9AT (see image for specifics).
Screenshot_2022-03-05-10-03-24-281_com android settings

Using the safe commands leads to execution of the command without returning to the CLI prompt.

It gets stuck and needs Ctrl+C after every command!

Only safe -V runs OK, as far as I know.

Tested with 0.50.4.

safe networks, as you can see in the next screenshot, stalls after execution and I have to press Ctrl+C.
IMG_20220305_102235

Also, safe node install just stalls and doesn't execute.

RUSTSEC-2021-0059: `aesni` has been merged into the `aes` crate

aesni has been merged into the aes crate

Details
Status unmaintained
Package aesni
Version 0.6.0
URL RustCrypto/block-ciphers#200
Date 2021-04-29

Please use the aes crate going forward. The new repository location is at:

<https://github.com/RustCrypto/block-ciphers/tree/master/aes>

AES-NI is now autodetected at runtime on i686/x86-64 platforms.
If AES-NI is not present, the aes crate will fallback to a constant-time
portable software implementation.

To prevent this fallback (and have absence of AES-NI result in an illegal
instruction crash instead), continue to pass the same RUSTFLAGS which were
previously required for the aesni crate to compile:

RUSTFLAGS=-Ctarget-feature=+aes,+ssse3

See advisory page for additional details.

RUSTSEC-2020-0036: failure is officially deprecated/unmaintained

failure is officially deprecated/unmaintained

Details
Status unmaintained
Package failure
Version 0.1.8
URL rust-lang-deprecated/failure#347
Date 2020-05-02

The failure crate is officially end-of-life: it has been marked as deprecated
by the former maintainer, who has announced that there will be no updates or
maintenance work on it going forward.

The following are some suggested actively developed alternatives to switch to:

See advisory page for additional details.

RUSTSEC-2020-0071: Potential segfault in the time crate

Potential segfault in the time crate

Details
Package time
Version 0.1.43
URL time-rs/time#293
Date 2020-11-18
Patched versions >=0.2.23
Unaffected versions =0.2.0,=0.2.1,=0.2.2,=0.2.3,=0.2.4,=0.2.5,=0.2.6

Impact

Unix-like operating systems may segfault due to dereferencing a dangling pointer in specific circumstances. This requires an environment variable to be set in a different thread than the affected functions. This may occur without the user's knowledge, notably in a third-party library.

The affected functions from time 0.2.7 through 0.2.22 are:

  • time::UtcOffset::local_offset_at
  • time::UtcOffset::try_local_offset_at
  • time::UtcOffset::current_local_offset
  • time::UtcOffset::try_current_local_offset
  • time::OffsetDateTime::now_local
  • time::OffsetDateTime::try_now_local

The affected functions in time 0.1 (all versions) are:

  • at
  • at_utc

Non-Unix targets (including Windows and wasm) are unaffected.

Patches

Pending a proper fix, the internal method that determines the local offset has been modified to always return None on the affected operating systems. This has the effect of returning an Err on the try_* methods and UTC on the non-try_* methods.

Users and library authors with time in their dependency tree should perform cargo update, which will pull in the updated, unaffected code.

Users of time 0.1 do not have a patch and should upgrade to an unaffected version: time 0.2.23 or greater, or the 0.3 series.

Workarounds

No workarounds are known.

References

time-rs/time#293

See advisory page for additional details.

RUSTSEC-2021-0078: Lenient `hyper` header parsing of `Content-Length` could allow request smuggling

Lenient hyper header parsing of Content-Length could allow request smuggling

Details
Package hyper
Version 0.13.10
URL GHSA-f3pg-qwvg-p99c
Date 2021-07-07
Patched versions >=0.14.10

hyper's HTTP header parser accepted contents inside Content-Length headers that are illegal according to RFC 7230.
Due to this, upstream HTTP proxies that ignore the header may still forward it along if they choose to ignore the error.

To be vulnerable, hyper must be used as an HTTP/1 server behind an upstream HTTP proxy that ignores the header's contents
but still forwards it. Due to all the factors that must line up, an attack exploiting this vulnerability is unlikely.

See advisory page for additional details.

DBC: Implement in-memory MerkleReg backed OrderingDAG

Where

Probably inside sn/data/spentbook

What

We need the following APIs implemented on top of the ordering DAG (which in our case will simply be the MerkleReg):

type OrderingDAG = MerkleReg<MaidTransaction>;
type InvertedDAG = BTreeMap<Hash, BTreeSet<Hash>>; // used for bootstrapping and traversing the DAG forward

struct MaidTransaction {
    // hash of the MaidTransaction that produced the output with this input's public key
    inputs: BTreeMap<PublicKey, Hash>,
    ringct: RingCtTransaction, // the actual transaction
}
/// Return the hashes of the most recent MaidTransactions.
fn spentbook_roots(&self) -> BTreeSet<Hash>;
/// Return the MaidTransaction for the given hash.
fn spentbook_read(&self, tx_hash: Hash) -> Option<MaidTransaction>;
/// Return the transactions that came after the transaction with hash `tx_hash`.
/// This will be used to traverse the SpentBook from the genesis onwards.
fn spentbook_next_transactions(&self, tx_hash: Hash) -> Option<BTreeSet<Hash>>;

The in memory dag would only be written to as/when we have log_spent in place.

This is just a quick issue to get the basics of a spent book in place.
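As a rough illustration of the shape of two of these APIs (not the actual MerkleReg integration — `OrderingDag` and its plain `u64` hash type are placeholders), an in-memory DAG plus its inverted index might look like this:

```rust
use std::collections::{BTreeMap, BTreeSet};

type Hash = u64; // placeholder; the real DAG would use a cryptographic hash

/// Minimal stand-in for MerkleReg<MaidTransaction>: each node records its
/// parent hashes, and the inverted index allows forward traversal.
#[derive(Default)]
struct OrderingDag {
    nodes: BTreeMap<Hash, BTreeSet<Hash>>,    // tx hash -> hashes of its parent txs
    inverted: BTreeMap<Hash, BTreeSet<Hash>>, // parent hash -> child hashes
}

impl OrderingDag {
    /// Insert a transaction hash with the hashes of the transactions it builds on.
    fn insert(&mut self, tx: Hash, parents: BTreeSet<Hash>) {
        for &p in &parents {
            self.inverted.entry(p).or_default().insert(tx);
        }
        self.nodes.insert(tx, parents);
    }

    /// Hashes of the most recent transactions (those with no children yet).
    fn spentbook_roots(&self) -> BTreeSet<Hash> {
        self.nodes
            .keys()
            .filter(|h| !self.inverted.contains_key(*h))
            .copied()
            .collect()
    }

    /// Transactions that came after `tx_hash`, for traversing from genesis onwards.
    fn spentbook_next_transactions(&self, tx_hash: Hash) -> Option<BTreeSet<Hash>> {
        self.inverted.get(&tx_hash).cloned()
    }
}
```

Here the inverted index is maintained eagerly on insert; with a real MerkleReg it could instead be rebuilt once at bootstrap by walking the register's nodes.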

Lacking elders when starting nodes in a podman container.

I run 16 nodes in a podman container.

I run into a few issues and I'm not sure they're related.

I get a lack of elders.

$ ./upload.sh
Error:
   0: ClientError: Problem finding sufficient elders. A supermajority of responses is unobtainable. 1 were known in this section, 7 needed. Section pk: PublicKey(10da..1145)
   1: Problem finding sufficient elders. A supermajority of responses is unobtainable. 1 were known in this section, 7 needed. Section pk: PublicKey(10da..1145)

All the nodes above ten disappear within minutes.

bash-5.1# ps -ef | grep sn_node
    1 root      0:09 sn_node -vvvv --idle-timeout-msec 5500 --keep-alive-interval-msec 4000 --skip-auto-port-forwarding --local-addr 10.88.1.3:12000 --public-addr 83.163.103.119:12000 --first
   37 root      0:12 sn_node -vvv --idle-timeout-msec 5500 --keep-alive-interval-msec 4000 --skip-auto-port-forwarding --local-addr 10.88.1.3:12001 --public-addr 83.163.103.119:12001 --log-dir /root/.safe/node/node_dir_1 --root-dir /root/.safe/node/node_dir_1
   50 root      0:10 sn_node -vvv --idle-timeout-msec 5500 --keep-alive-interval-msec 4000 --skip-auto-port-forwarding --local-addr 10.88.1.3:12002 --public-addr 83.163.103.119:12002 --log-dir /root/.safe/node/node_dir_2 --root-dir /root/.safe/node/node_dir_2
   62 root      0:07 sn_node -vvv --idle-timeout-msec 5500 --keep-alive-interval-msec 4000 --skip-auto-port-forwarding --local-addr 10.88.1.3:12003 --public-addr 83.163.103.119:12003 --log-dir /root/.safe/node/node_dir_3 --root-dir /root/.safe/node/node_dir_3
   78 root      0:15 sn_node -vvv --idle-timeout-msec 5500 --keep-alive-interval-msec 4000 --skip-auto-port-forwarding --local-addr 10.88.1.3:12004 --public-addr 83.163.103.119:12004 --log-dir /root/.safe/node/node_dir_4 --root-dir /root/.safe/node/node_dir_4
   90 root      0:04 sn_node -vvv --idle-timeout-msec 5500 --keep-alive-interval-msec 4000 --skip-auto-port-forwarding --local-addr 10.88.1.3:12005 --public-addr 83.163.103.119:12005 --log-dir /root/.safe/node/node_dir_5 --root-dir /root/.safe/node/node_dir_5
  103 root      0:06 sn_node -vvv --idle-timeout-msec 5500 --keep-alive-interval-msec 4000 --skip-auto-port-forwarding --local-addr 10.88.1.3:12006 --public-addr 83.163.103.119:12006 --log-dir /root/.safe/node/node_dir_6 --root-dir /root/.safe/node/node_dir_6
  128 root      0:00 sn_node -vvv --idle-timeout-msec 5500 --keep-alive-interval-msec 4000 --skip-auto-port-forwarding --local-addr 10.88.1.3:12007 --public-addr 83.163.103.119:12007 --log-dir /root/.safe/node/node_dir_7 --root-dir /root/.safe/node/node_dir_7
  148 root      0:09 sn_node -vvv --idle-timeout-msec 5500 --keep-alive-interval-msec 4000 --skip-auto-port-forwarding --local-addr 10.88.1.3:12008 --public-addr 83.163.103.119:12008 --log-dir /root/.safe/node/node_dir_8 --root-dir /root/.safe/node/node_dir_8
  156 root      0:03 sn_node -vvv --idle-timeout-msec 5500 --keep-alive-interval-msec 4000 --skip-auto-port-forwarding --local-addr 10.88.1.3:12009 --public-addr 83.163.103.119:12009 --log-dir /root/.safe/node/node_dir_9 --root-dir /root/.safe/node/node_dir_9
  173 root      0:08 sn_node -vvv --idle-timeout-msec 5500 --keep-alive-interval-msec 4000 --skip-auto-port-forwarding --local-addr 10.88.1.3:12010 --public-addr 83.163.103.119:12010 --log-dir /root/.safe/node/node_dir_10 --root-dir /root/.safe/node/node_dir_10
  185 root      0:00 sn_node -vvv --idle-timeout-msec 5500 --keep-alive-interval-msec 4000 --skip-auto-port-forwarding --local-addr 10.88.1.3:12011 --public-addr 83.163.103.119:12011 --log-dir /root/.safe/node/node_dir_11 --root-dir /root/.safe/node/node_dir_11
  193 root      0:13 sn_node -vvv --idle-timeout-msec 5500 --keep-alive-interval-msec 4000 --skip-auto-port-forwarding --local-addr 10.88.1.3:12012 --public-addr 83.163.103.119:12012 --log-dir /root/.safe/node/node_dir_12 --root-dir /root/.safe/node/node_dir_12
  360 root      0:00 grep sn_node

I often get errors like these.

	 ➤ Commands in flight (root permit len): 5
 ERROR 2021-12-05T09:09:00.129426Z [sn/src/node/routing/api/dispatcher.rs:L453]:
	 ➤ Error encountered when handling command (cmd_id 1769307206.0.0.0): EmptyRecipientList
 ERROR 2021-12-05T09:09:00.129439Z [sn/src/node/routing/api/dispatcher.rs:L342]:
	 ➤ Failed to handle command "1769307206.0.0.0" with error EmptyRecipientList

This one is also common.

node_dir_7/sn_node.log.2021-12-05-14-        ➤ Newer Join response for us 576e02(01010111).., SAP SectionAuthorityProvider { prefix: Prefix(), public_key_set: PublicKeySet { public_key: PublicKey(1260..3ed1), threshold: 3 }, elders: {Peer { name: 0d567d(00001101).., addr: 83.163.103.119:12004, connection: None }, Peer { name: 0dead7(00001101).., addr: 83.163.103.119:12005, connection: None }, Peer { name: 0f3fd0(00001111).., addr: 83.163.103.119:12001, connection: None }, Peer { name: 8c621c(10001100).., addr: 83.163.103.119:12002, connection: None }, Peer { name: d20175(11010010).., addr: 83.163.103.119:12000, connection: None }} } from Peer { name: 0d567d(00001101).., addr: 83.163.103.119:12004, connection: Some(Connection { id: 613667920, remote_address: 83.163.103.119:12004, .. }) }
node_dir_7/sn_node.log.2021-12-05-14: ERROR 2021-12-05T14:20:56.029401Z [sn/src/node/routing/core/bootstrap/join.rs:L501]:
node_dir_7/sn_node.log.2021-12-05-14-    ➤ join {network_genesis_key=PublicKey(10da..1145) target_section_key=PublicKey(10da..1145) recipients=[Peer { name: cf197e(11001111).., addr: 83.163.103.119:12000, connection: Some(Connection { id: 612059696, remote_address: 83.163.103.119:12000, .. }) }]}

Allow valid AE updates to be processed at nodes, even if sent to an unknown (new but valid, child PK)

As per:

https://maidsafe.discourse.team/t/whether-prefix-map-shall-hold-parent-when-only-know-of-one-child/522/9

If we get a message for an unknown PK, but it's an AE-Update with a valid chain beyond what we know, we should accept it and update.

With this, we should then be able to remove parent keys from the prefix map even if we only know one child key.

(We then also need to test that the prefix map returns siblingB even if siblingA is closest but unknown, as that's the fastest path to the correct prefix.)
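The acceptance rule described above boils down to a chain check. A toy sketch (with `u64` keys and a placeholder `verify` standing in for real BLS section-key signatures, which is where the actual security lives):

```rust
/// Toy section key; the real code uses a bls::PublicKey.
type Key = u64;

/// A link in a proof chain: a key plus a signature over it by the previous key.
struct ChainLink {
    key: Key,
    sig_by_prev: u64, // placeholder for a BLS signature
}

/// Placeholder signature check: a "signature" is valid here if it equals
/// prev XOR key. A real implementation verifies a BLS signature instead.
fn verify(prev: Key, link: &ChainLink) -> bool {
    link.sig_by_prev == (prev ^ link.key)
}

/// Accept an AE-Update for an unknown key iff its proof chain extends from
/// a key we already trust to the update's target key, link by link.
fn chain_extends_to(trusted: Key, target: Key, chain: &[ChainLink]) -> bool {
    let mut current = trusted;
    for link in chain {
        if !verify(current, link) {
            return false;
        }
        current = link.key;
    }
    current == target
}
```

With such a check in place, an AE-Update signed by an unknown-but-provably-descendant key can be accepted rather than dropped.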

licensing linking exception mentioned in (most) README files but not in code headers

The GPL linking exception is not mentioned in code headers.

Licensing statements in code headers are commonly considered as superseding project-wide licensing statements e.g. in a README file.

Licensing in a LICENSE file is commonly considered a collection of licensing information. So when the LICENSE file contains the GPL license plus a linking exception, while code headers state that their licensing is GPL and merely refer to the LICENSE file for details, this can be interpreted to mean that those code files do not permit the linking exception, regardless of its existence in the referenced LICENSE file.

I recommend to expand the licensing statements in code headers to mention linking exception where appropriate.

[routing] require name change when promoted or demoted

This is inherited from the issue maidsafe/sn_routing#2112 :

When a node is about to be promoted to elder, it needs to generate a new name for itself so that the elder sets are always unique.

For example, currently we can have section progression like this:

A, B, C, D
A, B, C, E
A, B, C, D

So now 1 and 3 are identical which might make the section vulnerable to replay attacks (the demotion of D in step 2 could be replayed after step 3 to demote it again). There might be other, more subtle issues with this.

To prevent that, we should require that D changes its name before the promotion, so the progression would look like this (D' is the new name of D):

A, B, C, D
A, B, C, E
A, B, C, D'

The flow could look approximately like this:

  1. The section decides to promote/demote some nodes (after a churn, ...)
  2. The nodes that are not currently elders but would become new elders are identified.
  3. A message is sent to these to-be-elders (signed by the section and accumulated at the destination) asking them to generate the new name.
  4. On receiving and verifying the message, they generate the new name and send back a proof of it - a message containing the new name signed with the old key.
  5. When all proofs are received, the section proceeds with the promotion.

To handle faulty nodes, we should give the to-be-elders a limited time to do the name change; if they fail to do so, we remove them from the promotion, replace them with different nodes, and repeat the process.

note: The name change request message could additionally include a name interval into which the new name must fall which can then be used to keep the section balanced - so when a split happens, roughly half of the elders go into one subsection and half into the other. If this could be made reliable it could allow us to remove some of the complexity which we currently have to handle the case where all the elders end up in only one subsection.
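The interval idea in the note above might be sketched like this (toy 8-bit names and a tiny deterministic PRNG standing in for real key generation, where a name is derived from a fresh keypair):

```rust
/// Toy 8-bit node name; real names are 256-bit XorNames derived from keys.
type Name = u8;

/// Minimal deterministic PRNG (an LCG) standing in for keypair generation.
struct Lcg(u32);

impl Lcg {
    fn next_name(&mut self) -> Name {
        // Numerical Recipes LCG constants; good enough for a sketch.
        self.0 = self.0.wrapping_mul(1664525).wrapping_add(1013904223);
        (self.0 >> 16) as Name
    }
}

/// Keep generating candidate names until one falls inside the requested
/// interval [lo, hi] - e.g. the half of the section's namespace that a
/// to-be-elder was asked to land in, to keep the section balanced.
fn generate_name_in_interval(rng: &mut Lcg, lo: Name, hi: Name) -> Name {
    loop {
        let candidate = rng.next_name();
        if (lo..=hi).contains(&candidate) {
            return candidate;
        }
    }
}
```

In the real protocol the "retry" would mean regenerating a keypair until the derived name falls in the interval, so the proof (new name signed with the old key) still works unchanged.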

additional note: another way to handle this issue could be to force relocation during demotion. That is, when a node gets demoted, it is relocated either within its own section or to a remote section.

Racing issues when setting JoinsAllowed?

https://github.com/maidsafe/safe_network/blob/4adaeaff4f07871840397adc3371ec8b3436e7ce/sn/src/routing/core/msg_handling/agreement.rs#L109

(NB: see issue #889 for the correct logic there)

Consider this case:

Node_a and Node_b drops virtually in the same moment.
Diff: -2, JoinsAllowed: true (all good)
Node_c joins.
Diff: -1, JoinsAllowed: false (not so good?)

We need JoinsAllowed to be true to be able to regain our original node count, which supposedly was necessary since we at some point, due to resource shortage, asked for additional nodes as to reach that number.
Currently we seem to fall back on the StorageLevel analysis to again trigger setting JoinsAllowed to true - is that a sane approach though?

No suggested solution ATM, just raising the Q.

Edit: Unless it's OK to fall back on the StorageLevel analysis, a potentially more nimble solution would be to keep a record of the highest number of nodes that we've had in the section so far. While we are below that number, JoinsAllowed should be true.
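The high-water-mark idea from the edit might be sketched like this (hypothetical names; the real flag lives in the routing layer's section state):

```rust
/// Tracks the most members a section has ever had, and derives JoinsAllowed
/// from it: while below the high-water mark, keep accepting joins.
#[derive(Default)]
struct JoinTracker {
    current: usize,
    max_seen: usize,
}

impl JoinTracker {
    fn member_joined(&mut self) {
        self.current += 1;
        self.max_seen = self.max_seen.max(self.current);
    }

    fn member_left(&mut self) {
        self.current = self.current.saturating_sub(1);
    }

    /// JoinsAllowed stays true until we regain the previous peak, so a join
    /// that only partially refills a double drop doesn't close the door.
    fn joins_allowed(&self) -> bool {
        self.current < self.max_seen
    }
}
```

Unlike the current diff-based toggle, this keeps JoinsAllowed true through the scenario in the issue: after two drops and one rejoin, the section is still one node short of its peak, so joins remain allowed.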
