gottstech / gotts

A blockchain for non-collateralized stablecoins, following the MimbleWimble protocol but with explicit amounts.

Home Page: https://gotts.tech

License: Apache License 2.0

Languages: Rust 99.68%, Shell 0.23%, Dockerfile 0.06%, Nix 0.03%
Topics: cryptocurrency, cryptography, gotts, mimblewimble, rust, stablecoins

gotts's People

Contributors: garyyu

gotts's Issues

peer_connect panicked with "too many open files"

20191120 23:43:29.740 DEBUG gotts_p2p::handshake - Connected! Cumulative 7254@146 offered from PeerAddr(V4(45.118.135.254:13514)) "MW/Gotts 0.0.3" HEADER_HIST | TXHASHSET_HIST | PEER_LIST | TX_KERNEL_HASH | FULL_NODE
20191120 23:43:29.743 ERROR gotts_util::logger -
thread 'peer_connect' panicked at 'clone conn for reader failed: Os { code: 24, kind: Other, message: "Too many open files" }': src/libcore/result.rs:1165stack backtrace:
   0: backtrace::backtrace::trace
   1: backtrace::capture::Backtrace::new
   2: gotts_util::logger::send_panic_to_log::{{closure}}
   3: std::panicking::rust_panic_with_hook
   4: std::panicking::continue_panic_fmt
   5: rust_begin_unwind
   6: core::panicking::panic_fmt
   7: core::result::unwrap_failed
   8: gotts_p2p::conn::listen
   9: gotts_p2p::peer::Peer::new
  10: gotts_p2p::peer::Peer::connect
  11: gotts_p2p::serv::Server::connect
  12: std::sys_common::backtrace::__rust_begin_short_backtrace
  13: std::panicking::try::do_call
  14: __rust_maybe_catch_panic
  15: core::ops::function::FnOnce::call_once{{vtable.shim}}
  16: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once
  17: std::sys::unix::thread::Thread::new::thread_start
  18: _pthread_body
  19: _pthread_start

This log repeated multiple times:

$ grep -B 1 "panicked" gotts-server.log

20191120 23:05:10.364 ERROR gotts_util::logger - 
thread 'peer_read' panicked at 'failed to open /dev/urandom: Os { code: 24, kind: Other, message: "Too many open files" }': src/libcore/result.rs:1165stack backtrace:
--
20191120 23:35:18.941 ERROR gotts_util::logger - 
thread 'peer_connect' panicked at 'clone conn for reader failed: Os { code: 24, kind: Other, message: "Too many open files" }': src/libcore/result.rs:1165stack backtrace:
--
20191120 23:37:23.715 ERROR gotts_util::logger - 
thread 'peer_connect' panicked at 'clone conn for reader failed: Os { code: 24, kind: Other, message: "Too many open files" }': src/libcore/result.rs:1165stack backtrace:
--
20191120 23:38:04.004 ERROR gotts_util::logger - 
thread 'peer_connect' panicked at 'clone conn for reader failed: Os { code: 24, kind: Other, message: "Too many open files" }': src/libcore/result.rs:1165stack backtrace:
--
20191120 23:40:08.595 ERROR gotts_util::logger - 
thread 'peer_connect' panicked at 'clone conn for reader failed: Os { code: 24, kind: Other, message: "Too many open files" }': src/libcore/result.rs:1165stack backtrace:
--
20191120 23:40:48.976 ERROR gotts_util::logger - 
thread 'peer_connect' panicked at 'clone conn for reader failed: Os { code: 24, kind: Other, message: "Too many open files" }': src/libcore/result.rs:1165stack backtrace:
--
20191120 23:42:50.169 ERROR gotts_util::logger - 
thread 'peer_connect' panicked at 'clone conn for reader failed: Os { code: 24, kind: Other, message: "Too many open files" }': src/libcore/result.rs:1165stack backtrace:
--
20191120 23:43:29.743 ERROR gotts_util::logger - 
thread 'peer_connect' panicked at 'clone conn for reader failed: Os { code: 24, kind: Other, message: "Too many open files" }': src/libcore/result.rs:1165stack backtrace:
--
20191120 23:46:11.731 ERROR gotts_util::logger - 
thread 'peer_connect' panicked at 'clone conn for reader failed: Os { code: 24, kind: Other, message: "Too many open files" }': src/libcore/result.rs:1165stack backtrace:
--
20191120 23:49:25.035 ERROR gotts_util::logger - 
thread 'peer_connect' panicked at 'clone conn for reader failed: Os { code: 24, kind: Other, message: "Too many open files" }': src/libcore/result.rs:1165stack backtrace:
--
20191120 23:49:36.470 ERROR gotts_util::logger - 
thread 'peer_connect' panicked at 'clone conn for reader failed: Os { code: 24, kind: Other, message: "Too many open files" }': src/libcore/result.rs:1165stack backtrace:
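
The panic is an unhandled EMFILE (os error 24): the listener clones the TCP stream for its reader thread and unwraps the result, so the peer_connect thread dies as soon as the process exhausts its file-descriptor limit. A minimal sketch of a gentler failure mode, assuming the clone happens via TcpStream::try_clone (split_conn is illustrative, not the actual gotts_p2p code):

	use std::io;
	use std::net::TcpStream;

	/// Split a connection into reader/writer halves without panicking.
	/// Propagating the error lets the caller close this connection (and
	/// free its file descriptor) instead of killing the thread.
	fn split_conn(conn: TcpStream) -> io::Result<(TcpStream, TcpStream)> {
		// try_clone fails with Os { code: 24, .. } (EMFILE) once no file
		// descriptors are left; surface it rather than unwrapping.
		let reader = conn.try_clone()?;
		Ok((reader, conn))
	}

Raising the limit (e.g. ulimit -n 4096 before starting the server) hides the symptom, but the error should still be handled so an fd spike degrades to a refused connection rather than a panic.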

Spent output still has the output position height index info in the database

When querying a spent output, the unwrap() here will panic:

chain/src/txhashset/utxo_view.rs

	/// Find the complete input/s info, use chain database data according to inputs
	pub fn get_complete_inputs(
		&self,
		inputs: &Vec<Input>,
	) -> Result<HashMap<Commitment, OutputEx>, Error> {
		let mut complete_inputs: HashMap<Commitment, OutputEx> = HashMap::new();
		for input in inputs {
			if let Ok(ofph) = self.batch.get_output_pos_height(&input.commitment()) {
				let output = match ofph.features {
					OutputFeatures::Plain | OutputFeatures::Coinbase => self
						.output_i_pmmr
						.get_data(ofph.position)
						.unwrap()        <<<< panic here
						.into_output(),
					OutputFeatures::SigLocked => self
						.output_ii_pmmr
						.get_data(ofph.position)
						.unwrap()        <<<< or panic here
						.into_output(),
				};
				...

The only reason I can think of is that I may have forgotten to clean the output position/height index when an output is spent. The original PR is here.

To be checked and confirmed.
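
If that is confirmed, the fix is to delete the index entry at the point where the input is consumed. A rough sketch of the cleanup, where delete_output_pos_height is a hypothetical counterpart to the existing get_output_pos_height:

	// Hypothetical sketch: while applying a block, drop the position/height
	// index entry for every spent output, so a later lookup fails cleanly
	// instead of returning a stale PMMR position.
	for input in block.inputs() {
		// delete_output_pos_height is an assumed batch method; the real
		// cleanup API may differ.
		batch.delete_output_pos_height(&input.commitment())?;
	}

Independently, the two unwrap() calls above could become ok_or(...)? so that a stale index entry surfaces as an Err from get_complete_inputs rather than a panic.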

debug version often fails to start with a thread 'peer_connect' panic

thread 'peer_connect' panicked at 'called `Option::unwrap()` on a `None` value': src/libcore/option.rs:347stack backtrace:
   0: gotts_util::logger::send_panic_to_log::{{closure}}
             at util/src/logger.rs:241
   1: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:478
   2: std::panicking::continue_panic_fmt
             at src/libstd/panicking.rs:381
   3: rust_begin_unwind
             at src/libstd/panicking.rs:308
   4: core::panicking::panic_fmt
             at src/libcore/panicking.rs:85
   5: core::panicking::panic
             at src/libcore/panicking.rs:49
   6: core::option::Option<T>::unwrap
             at /rustc/a53f9df32fbb0b5f4382caaad8f1a46f36ea887c/src/libcore/macros.rs:12
   7: gotts_store::lmdb::Store::get_ser_access
             at store/src/lmdb.rs:237
   8: gotts_store::lmdb::Batch::get_ser
             at store/src/lmdb.rs:334
   9: gotts_p2p::store::PeerStore::update_state
             at p2p/src/store.rs:183
  10: gotts_p2p::peers::Peers::update_state
             at p2p/src/peers.rs:427
  11: gotts_servers::gotts::seed::listen_for_addrs::{{closure}}
             at servers/src/gotts/seed.rs:335
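
The unwrap at store/src/lmdb.rs:237 fires when the read finds no value for the key, which can legitimately happen if PeerStore::update_state runs for a peer that was never saved. A minimal sketch of the intended pattern, using a HashMap stand-in rather than the real lmdb-zero types: absence of the key becomes Ok(None), and the caller decides.

	use std::collections::HashMap;

	// Illustrative stand-in for the LMDB store; only the return type
	// matters here. A missing key is a normal outcome, never a panic.
	struct Store {
		db: HashMap<Vec<u8>, Vec<u8>>,
	}

	impl Store {
		fn get_ser(&self, key: &[u8]) -> Result<Option<Vec<u8>>, String> {
			Ok(self.db.get(key).cloned())
		}
	}

	// Caller side, in the spirit of PeerStore::update_state: quietly skip
	// peers that have no stored record yet.
	fn update_state(store: &Store, key: &[u8]) -> Result<(), String> {
		if let Some(_peer) = store.get_ser(key)? {
			// ... mutate the record and write it back ...
		}
		Ok(())
	}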

Support SSDP in LAN

Currently we limit connections from the same NAT address: only one connection is allowed.

	/// Checks whether there's any reason we don't want to accept an incoming peer
	/// connection. There can be a few of them:
	/// 1. Accepting the peer connection would exceed the configured maximum allowed
	/// inbound peer count. Note that seed nodes may wish to increase the default
	/// value for PEER_LISTENER_BUFFER_COUNT to help with network bootstrapping.
	/// A default buffer of 8 peers is allowed to help with network growth.
	/// 2. The peer has been previously banned and the ban period hasn't
	/// expired yet.
	/// 3. We're already connected to a peer at the same IP. While there are
	/// many reasons multiple peers can legitimately share identical IP
	/// addresses (NAT), network distribution is improved if they choose
	/// different sets of peers themselves. In addition, it prevents potential
	/// duplicate connections, malicious or not.
	fn check_undesirable(&self, stream: &TcpStream) -> bool {

At this moment, I don't want to change this behaviour, but in case someone has multiple node servers running behind the same NAT, those servers can't connect to each other because of the above limitation.

A simple way to solve this is SSDP (Simple Service Discovery Protocol), which lets the node servers discover each other on the same LAN and connect to each other.

A Rust SSDP crate can be used for this: https://github.com/GGist/ssdp-rs

Note: to avoid publishing LAN IP addresses to the peer network, the GetPeerAddrs protocol response should skip all of these LAN IP addresses.
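
For reference, SSDP is just an HTTP-over-UDP multicast exchange, which the crate above wraps. A bare-bones discovery sketch using only std, where the search target urn:gotts:node:1 is made up for illustration:

	use std::net::UdpSocket;
	use std::time::Duration;

	fn discover_lan_nodes() -> std::io::Result<()> {
		let socket = UdpSocket::bind("0.0.0.0:0")?;
		socket.set_read_timeout(Some(Duration::from_secs(3)))?;

		// Standard SSDP M-SEARCH, multicast to 239.255.255.250:1900.
		let msearch = "M-SEARCH * HTTP/1.1\r\n\
		               HOST: 239.255.255.250:1900\r\n\
		               MAN: \"ssdp:discover\"\r\n\
		               MX: 2\r\n\
		               ST: urn:gotts:node:1\r\n\r\n";
		socket.send_to(msearch.as_bytes(), "239.255.255.250:1900")?;

		// Each response reveals a node's LAN address; per the note above,
		// these addresses must not be relayed in GetPeerAddrs responses.
		let mut buf = [0u8; 1024];
		while let Ok((len, src)) = socket.recv_from(&mut buf) {
			println!("candidate LAN peer {}: {}", src, String::from_utf8_lossy(&buf[..len]));
		}
		// recv_from returns Err on the 3s timeout, ending the loop.
		Ok(())
	}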
