conflux-chain / conflux-rust

The official Rust implementation of Conflux protocol. https://doc.confluxnetwork.org

Home Page: https://doc.confluxnetwork.org

License: GNU General Public License v3.0

Rust 88.90% Shell 0.07% Dockerfile 0.02% Python 8.61% C++ 1.55% CMake 0.05% C 0.04% Solidity 0.75% Batchfile 0.01%
blockchain conflux cryptocurrency p2p rust

conflux-rust's Introduction

Conflux-Rust

Conflux-rust is a Rust-based implementation of the Conflux protocol. It is fast and reliable.

For Users

Please follow the Conflux Documentation to build and run Conflux.

For Developers

For a general overview of the crates, see Project Layout.

Contribution

Thank you for considering helping out with our source code. We appreciate any contributions, even the smallest fixes. Please read the guidelines on how to submit issues and pull requests. Note that if you want to propose significant changes to the Conflux protocol, please submit a CIP.

Unit Tests and Integration Tests

Unit tests come together with the Rust code. They can be invoked via cargo test --release --all. See the Getting Started page for more information.

Integration tests are Python test scripts with the _test.py suffix in the tests directory. To run these tests, first compile Conflux in release mode using cargo build --release and fetch all submodules using git submodule update --remote --recursive --init. Then you can run all integration tests using the script tests/test_all.py.

Resources

License

GNU General Public License v3.0

conflux-rust's People

Contributors

0xfx01, boqiu, burtonqin, chenxingli, conflux-cx, confluxyangz, csdtowards, darwintree, dependabot-preview[bot], dongz9, fanlong, ftiasch, ksqsf, linyufly, lostrating, pana, peilun-conflux, pudongair, resodo, rongma7, royshang, seekstar, shangchenglumetro, sparkmiw, thegaram, wangdayong228, yangzhe1990, yilinhan, yqrashawn, zimpha


conflux-rust's Issues

Call notify() without holding the lock.

Code in Conflux: self.work_ready.notify_all();
File Location: util/io/src/service_mio.rs: 381, 493
Why is this bad: It may lead to a concurrency bug.
Remarks: Lock on "work_ready_mutex"
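A minimal sketch of the recommended pattern: mutate the shared flag while holding the lock, drop the guard, and only then notify, so the woken thread is not immediately blocked on the still-held mutex. All names here are illustrative and not taken from conflux-rust.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Set the flag under the lock, then notify *after* the guard is dropped.
fn signal_ready(pair: &(Mutex<bool>, Condvar)) {
    let (lock, cvar) = pair;
    {
        let mut ready = lock.lock().unwrap();
        *ready = true;
    } // guard dropped here, so a woken waiter can take the lock immediately
    cvar.notify_all();
}

// Standard condvar wait loop guarding against spurious wakeups.
fn wait_ready(pair: &(Mutex<bool>, Condvar)) -> bool {
    let (lock, cvar) = pair;
    let mut ready = lock.lock().unwrap();
    while !*ready {
        ready = cvar.wait(ready).unwrap();
    }
    *ready
}

fn main() {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let waiter = {
        let pair = Arc::clone(&pair);
        thread::spawn(move || wait_ready(&pair))
    };
    signal_ready(&pair);
    assert!(waiter.join().unwrap());
}
```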

Data Races among concurrent calls to MIO lib

Some functions in MIO (like set_readiness() and readiness()) are not thread safe. Concurrent calls to them must be protected.


Call Stack 1 (by multiple threads):

mio::poll::SetReadiness::set_readiness
mio::channel::SenderCtl::inc
mio::deprecated::event_loop::Sender>::send
io::service_mio::IoContext>::message
network::service::NetworkContext<'a> as network::NetworkContext>::register_timer
cfxcore::sync::synchronization_protocol_handler::SynchronizationProtocolHandler as network::NetworkProtocolHandler>::initialize

Call Stack 2 (by multiple threads):

mio::poll::Poll::poll1
mio::poll::Poll::poll
<mio::deprecated::event_loop::EventLoop>::run
<io::service_mio::IoManager>::start
mio::poll::Poll::new
mio::deprecated::event_loop::EventLoopBuilder::build
<io::service_mio::IoService>::start
network::service::NetworkService::start
cfxcore::sync::synchronization_service::SynchronizationService::start

use of `offset` with a `usize` cast to an `isize`

Code in Conflux: (*self.table_ptr.offset(Self::lower_bound(self.bitmap, index) as isize))
File Location: core/src/storage/impls/multi_version_merkle_patricia_trie/merkle_patricia_trie/children_table.rs:56:19 , 68:11
Why is this bad: If we’re always increasing the pointer address, we can avoid the numeric cast by using the add method instead.
Remarks: try
(*self.table_ptr.add(Self::lower_bound(self.bitmap, index)))

This helps readability. The cast may also lead to out-of-bounds access if not handled properly.
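A small self-contained sketch of the two forms: `add` takes a `usize` directly, where `offset` needs an `as isize` cast; for forward offsets both compute the same address. The array stands in for the children table in the original code.

```rust
// Read one element two ways; caller must guarantee `index` is in bounds.
unsafe fn read_at(ptr: *const u32, index: usize) -> (u32, u32) {
    let via_offset = *ptr.offset(index as isize); // needs the numeric cast
    let via_add = *ptr.add(index); // cast-free, same address
    (via_offset, via_add)
}

fn main() {
    let table: [u32; 4] = [10, 20, 30, 40];
    let (a, b) = unsafe { read_at(table.as_ptr(), 2) };
    assert_eq!((a, b), (30, 30));
}
```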

Private chain does not generate new blocks

Hi,

I'm trying to start a private chain in order to run some tests. But conflux d56d2c55338bde7bcb851c3f00dbec4fab0a6a50 doesn't generate any new blocks.

My configuration:

public_address="59.66.209.32:32323"
start_mining=true
mining_author="aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
jsonrpc_tcp_port=12536
jsonrpc_http_port=12537
log_conf="log.yaml"

logs:

https://pastebin.com/QZXbgycS

`.get().unwrap()` on a HashMap.

Code in Conflux: let index = self.node_index.get(&id).unwrap();
File Location: network/src/node_table.rs:783:29
Why is this bad: Using the Index trait ([]) is more clear and more concise.
Remarks: get() may return None, which would cause a panic on unwrap(). Try this: &self.node_index[&id]
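Both forms panic when the key is absent; the Index form just states that intent in one expression. A minimal sketch, where `node_index` is an illustrative stand-in for the field in network/src/node_table.rs:

```rust
use std::collections::HashMap;

// Panics if `id` is absent, exactly like .get(&id).unwrap(), but more concise.
fn lookup(node_index: &HashMap<u64, usize>, id: u64) -> usize {
    node_index[&id]
}

fn main() {
    let mut node_index = HashMap::new();
    node_index.insert(7u64, 42usize);
    assert_eq!(lookup(&node_index, 7), *node_index.get(&7).unwrap());
}
```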

State existence boundary for full nodes.

Ideally the boundary should be an interval. It's OK to start with the left hand side of the interval.

The boundary should be set to infinity at startup if we don't persist the interval;
After syncing, the interval should be set to snapshot point itself.

Receipts of REWARD_EPOCH_COUNT before should be handled specially.
Access to storage beyond the interval should be denied.
Removal of old snapshot moves the left boundary of the interval.

Remove workaround in sync_checkpoint_test.py:70

Didn't seem to receive mining bonus

System Config:

Microsoft Windows [Build 17134]
Windows Subsystem for Linux

console log for the past 2 minutes:

2019-04-08T22:58:01.575270400-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230434432 26151 26149 19375 0 2339908 0
2019-04-08T22:58:06.584616-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230435088 26152 26150 19375 0 2339914 0
2019-04-08T22:58:11.602708700-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230437736 26153 26151 19375 0 2339928 0
2019-04-08T22:58:16.613143500-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230437736 26153 26151 19375 0 2339943 0
2019-04-08T22:58:21.539654400-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230438392 26155 26153 19375 0 2339957 0
2019-04-08T22:58:26.539251400-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230438720 26156 26154 19375 0 2339972 0
2019-04-08T22:58:31.547013600-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230439048 26157 26155 19375 0 2339983 0
2019-04-08T22:58:36.572629400-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230439048 26157 26155 19375 0 2340001 0
2019-04-08T22:58:41.578057600-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230463200 26158 26156 19375 0 2340016 0
2019-04-08T22:58:46.585643600-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230463200 26158 26156 19375 0 2340016 0
2019-04-08T22:58:51.640608900-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230463528 26159 26157 19375 0 2340016 0
2019-04-08T22:58:56.623148400-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230463528 26159 26157 19375 0 2340016 0
2019-04-08T22:59:01.633678-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230463528 26159 26157 19375 0 2340016 0
2019-04-08T22:59:06.750339600-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230463528 26159 26157 19375 0 2340016 0
2019-04-08T22:59:11.549894-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230463856 26160 26158 19375 0 2340016 0
2019-04-08T22:59:16.586198200-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230463856 26160 26158 19375 0 2340016 0
2019-04-08T22:59:21.560062100-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230463856 26160 26158 19375 0 2340016 0
2019-04-08T22:59:26.567145500-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230463856 26160 26158 19375 0 2340016 0
2019-04-08T22:59:31.577901100-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230464184 26161 26159 19375 0 2340016 0
2019-04-08T22:59:36.633956900-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230464512 26162 26160 19375 0 2340016 0
2019-04-08T22:59:41.696795200-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230480216 26163 26161 19375 0 2340016 0
2019-04-08T22:59:46.630989200-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230480216 26163 26161 19375 0 2340016 0
2019-04-08T22:59:51.625480400-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230480216 26163 26161 19375 0 2340016 0
2019-04-08T22:59:56.626987200-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230481200 26166 26165 19375 0 2340016 0
2019-04-08T23:00:01.550719300-04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1230481528 26167 26165 19375 0 2340016 0

default.toml:

public_address="[my public ip address]:32323" 
port=32323
udp_port=32323

start_mining=true
mining_author="37be1438b30f0b99a620ae219bc7a59dc1a88cec"

jsonrpc_tcp_port=12536
jsonrpc_http_port=12537

I have been running the full node for two days, but I have not noticed any increase in the Conflux Wallet web app. Other functions such as transfers and smart contract deployment worked fine.

Make debug state execution great again.

We used to have an option to run a debug state execution when a state root mismatch was found after execution; however, it broke during past refactoring.

In lab settings without malicious nodes, the debug execution is still helpful for debugging. We should clean up the related code and make it work again.

Nodes may be disconnected unexpectedly.

Some peers of a node may be disconnected every few minutes when connected to TestNet.
This is triggered by Hup from mio, but it is unclear whether it is caused by unstable public network conditions or by some internal problem.

Setup of a private Conflux Blockchain

I want to set up Conflux as a private/local blockchain and then make some transactions: contract creation, calling contract functions, transferring CFX from an account to a smart contract, etc. For example, just like the Ganache client: Ganache has 10 accounts with a 100-ether balance each, and that balance can be used in the scenarios mentioned above.

Memory allocated but not freed.

Code in Conflux: let cache = new_cache(size);
File Location: .cargo/registry/src/github.com-1ecc6299db9ec823/rocksdb-0.11.0/src/db_options.rs: 34
Why is this bad: It may cause a memory leak.
Remarks: (called at db/src/kvdb-rocksdb/src/lib.rs:332)

epochNumber is always null in RPCs that return block information

epochNumber is returned, but it is always 0.
Affected functions:
cfx_getBlockByEpochNumber
cfx_getBlockByHash
cfx_getBlockByHashWithPivotAssumption
etc.
Update: only blocks before 200000 have a null epochNumber; for epoch >= 200000 the return value is normal.

explicit counter loop

Code in Conflux: for tx in uncached_trans {
File Location: core/src/sync/synchronization_protocol_handler.rs:2587:23 , core/src/transaction_pool/mod.rs:426:23
Description: The variable start_idx is used as a loop counter. Consider using for (start_idx, item) in uncached_trans.enumerate() or similar iterators.
Why is this bad: Not only is the version using .enumerate() more readable, the compiler is able to remove bounds checks which can lead to faster code in some instances.
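A minimal sketch of the suggested rewrite: `enumerate()` carries the counter for us instead of a manually maintained `start_idx`. `uncached_trans` here is an illustrative stand-in for the transaction list in the original code.

```rust
// Pair each transaction with its index, with no explicit counter variable.
fn index_pairs(uncached_trans: &[&str]) -> Vec<(usize, String)> {
    let mut out = Vec::new();
    for (start_idx, tx) in uncached_trans.iter().enumerate() {
        out.push((start_idx, tx.to_string()));
    }
    out
}

fn main() {
    let pairs = index_pairs(&["tx0", "tx1", "tx2"]);
    assert_eq!(pairs[1], (1, "tx1".to_string()));
}
```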

seems unable to mine behind NAT

System Info:

MacBook Pro (Retina, 15-inch, Mid 2015)
2.2 GHz Intel Core i7
16 GB 1600 MHz DDR3
SSD 512GB
Intel Iris Pro 1536 MB

modified Config:

start_mining=true
 mining_author="1c7bd3841df24bcadf61329f670e890a39566f53"
#public_address="59.66.209.32:32323"

mining duration

roughly 6 hours

verification

http://www.confluxscan.io/accountdetail/0x1c7bd3841df24bcadf61329f670e890a39566f53
No blocks mined

relevant log

2019-06-17T05:28:33.467227+04:00 INFO client - Start mining with pow config: ProofOfWorkConfig { test_mode: false, initial_difficulty: 100000000, block_generation_period: 5000000, difficulty_adjustment_epoch_period: 200 }

recurrent log like:
2019-06-17T11:12:56.654650+04:00 INFO cfxcore::statistics - Statistics: StatisticsInner { sync_graph: SyncGraphStatistics { inserted_block_count: 1 }, consensus_graph: ConsensusGraphStatistics { inserted_block_count: 0 } }
2019-06-17T11:12:56.654716+04:00 INFO cfxcore::sync::synchronization_graph - Before gc cache_size=1736 1 0 0 0 0 0
2019-06-17T11:12:56.751546+04:00 INFO cfxcore::sync::synchronization_protocol_handler - Peer connected: peer=32
2019-06-17T11:12:56.755517+04:00 INFO cfxcore::sync::synchronization_protocol_handler - Peer disconnected: peer=32
2019-06-17T11:12:56.794377+04:00 INFO cfxcore::sync::synchronization_protocol_handler - Peer disconnected: peer=31
2019-06-17T11:12:56.959347+04:00 INFO cfxcore::sync::synchronization_protocol_handler - Peer connected: peer=36
2019-06-17T11:12:56.960411+04:00 INFO cfxcore::sync::synchronization_protocol_handler - Peer disconnected: peer=36
2019-06-17T11:12:57.682022+04:00 INFO cfxcore::sync::synchronization_protocol_handler - Peer connected: peer=31
2019-06-17T11:12:57.682968+04:00 INFO cfxcore::sync::synchronization_protocol_handler - Peer disconnected: peer=31
2019-06-17T11:12:57.913260+04:00 INFO cfxcore::sync::synchronization_protocol_handler - Peer connected: peer=36
2019-06-17T11:12:57.918682+04:00 INFO cfxcore::sync::synchronization_protocol_handler - Peer connected: peer=32
2019-06-17T11:12:57.925160+04:00 INFO cfxcore::sync::synchronization_protocol_handler - Peer disconnected: peer=32
2019-06-17T11:12:58.275087+04:00 INFO cfxcore::sync::synchronization_protocol_handler - Peer disconnected: peer=36
2019-06-17T11:12:58.902882+04:00 INFO cfxcore::storage::impls::multi_version_merkle_patricia_trie::node_memory_manager - trie node allocator: max allowed size: 20200000, configured idle_size: 200000, size: 1200000, allocated: 4
2019-06-17T11:12:58.902955+04:00 INFO cfxcore::storage::impls::multi_version_merkle_patricia_trie::node_memory_manager - number of nodes loaded from db 0
2019-06-17T11:12:58.902983+04:00 INFO cfxcore::storage::impls::multi_version_merkle_patricia_trie::node_memory_manager - number of uncached leaf node loads 0
2019-06-17T11:12:58.903037+04:00 INFO cfxcore::storage::impls::multi_version_merkle_patricia_trie::node_memory_manager - number of db loads for uncached leaf nodes 0
2019-06-17T11:12:58.903144+04:00 INFO cfxcore::storage::impls::multi_version_merkle_patricia_trie::node_memory_manager - number of db loads for merkle computation 0

use of `or_insert` followed by a function call

Code in Conflux: or_insert(self.weight_tree.subtree_weight(prev_me));
File Location: core/src/consensus/mod.rs:327:18
Why is this bad: The function will always be called and potentially allocate an object acting as the default.
Remarks: try this: or_insert_with(|| self.weight_tree.subtree_weight(prev_me))
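The difference is visible by counting how often the default is computed. `expensive_default` below is a hypothetical stand-in for `subtree_weight()`, which is what makes eager evaluation costly in the original code.

```rust
use std::collections::HashMap;

// Stand-in for an expensive default; counts its own invocations.
fn expensive_default(calls: &mut u32) -> i64 {
    *calls += 1;
    100
}

fn main() {
    let mut weights: HashMap<u32, i64> = HashMap::new();
    weights.insert(1, 5);

    // or_insert evaluates its argument even when the key already exists.
    let mut eager_calls = 0;
    weights.entry(1).or_insert(expensive_default(&mut eager_calls));
    assert_eq!(eager_calls, 1); // computed despite the hit

    // or_insert_with only runs the closure on a miss, so nothing happens here.
    let mut lazy_calls = 0;
    weights.entry(1).or_insert_with(|| expensive_default(&mut lazy_calls));
    assert_eq!(lazy_calls, 0);
    assert_eq!(weights[&1], 5); // existing value untouched either way
}
```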

Calls to `std::mem::forget` with a reference instead of an owned value. Forgetting a reference does nothing

Code in Conflux: mem::forget(trie_node);
File Location: core/src/storage/impls/multi_version_merkle_patricia_trie/node_memory_manager.rs:453:21
Why is this bad: Calling forget on a reference will only forget the reference itself, which is a no-op. It will not forget the underlying referenced value, which is likely what was intended.
Remarks: It is called with a reference instead of a value, so the underlying value is still dropped normally.
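The no-op can be demonstrated with a type whose destructor counts its own drops: forgetting a reference still lets the value drop, while forgetting the owned value suppresses the destructor. `Guarded` is purely illustrative.

```rust
use std::mem;
use std::sync::atomic::{AtomicUsize, Ordering};

static DROPPED: AtomicUsize = AtomicUsize::new(0);

struct Guarded;
impl Drop for Guarded {
    fn drop(&mut self) {
        DROPPED.fetch_add(1, Ordering::SeqCst);
    }
}

fn main() {
    {
        let v = Guarded;
        mem::forget(&v); // forgets only the reference: v is still dropped
    }
    assert_eq!(DROPPED.load(Ordering::SeqCst), 1);

    let w = Guarded;
    mem::forget(w); // takes ownership: the destructor never runs
    assert_eq!(DROPPED.load(Ordering::SeqCst), 1);
}
```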

"Internal error" for Deploying Contract through JavaScript

I want to deploy contract through JavaScript. Here is my code

const ConfluxWeb = require('conflux-web');
const provider = 'http://testnet-jsonrpc.conflux-chain.org:12537';
const confluxWeb = new ConfluxWeb(provider);
const account1 = '0xb7be910d098ae4e5989d50787787b7e7a1245fa8'
const privateKey1 = "0x"+'4a98c06e6b1520bd973c7bc...............'

var compiledContract = require('./build/MyContract.json');

	(async () => {

		const contract = new confluxWeb.cfx.Contract(compiledContract.abi);
		const params = {
		    data: '0x' + compiledContract.bytecode,
		    arguments: [account1]
		};
		const transaction = contract.deploy(params);
		const options = {
		    data: transaction.encodeABI(),
		    gas: await transaction.estimateGas({from: account1})
		};
		const signed = await confluxWeb.cfx.accounts.signTransaction(options, privateKey1);
		const receipt = await confluxWeb.cfx.sendSignedTransaction(signed.rawTransaction);
		console.log(`Contract deployed at address: ${receipt.contractAddress}`);
		})();

and here is my output Error;

(node:10864) UnhandledPromiseRejectionWarning: Error: Node error: {"code":-32603,"message":"Internal error"}
    at Function.validate (C:\Users\.......\node_modules\conflux-web-providers\dist\conflux-web-providers.cjs.js:111:18)
    at HttpProvider._callee$ (C:\Users\.......\node_modules\conflux-web-providers\dist\conflux-web-providers.cjs.js:705:61)
    at tryCatch (C:\Users\.......\node_modules\regenerator-runtime\runtime.js:45:40)
    at Generator.invoke [as _invoke] (C:\Users\.......\node_modules\regenerator-runtime\runtime.js:271:22)
    at Generator.prototype.(anonymous function) [as next] (C:\Users\.......\node_modules\regenerator-runtime\runtime.js:97:21)
    at asyncGeneratorStep (C:\Users\.......\node_modules\@babel\runtime\helpers\asyncToGenerator.js:3:24)
    at _next (C:\Users\.......\node_modules\@babel\runtime\helpers\asyncToGenerator.js:25:9)
    at process._tickCallback (internal/process/next_tick.js:68:7)
(node:10864) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:10864) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Note that the above code works on the Ethereum testnet, but not on Conflux.

This loop never actually loops

Code in Conflux: for (i, node_ref) in self.children_table.iter() { return TrieNodeAction::MergePath { child_index: i, child_node_ref: (*node_ref).into(), }; }
File Location: core/src/storage/impls/multi_version_merkle_patricia_trie/merkle_patricia_trie/trie_node.rs:634:9
Why is this bad: This loop never loops, all it does is obfuscating the code.
Remarks: Unnecessary loop; it can be replaced with an if let on the first element.
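Since the loop body always returns on its first iteration, it is really just "look at the first child". A sketch of the rewrite under that assumption; `Action` and `children` are illustrative stand-ins for `TrieNodeAction` and the children table iterator.

```rust
#[derive(Debug, PartialEq)]
enum Action {
    MergePath { child_index: u8 },
    DoNothing,
}

// Equivalent to the never-looping `for`, stated without a fake loop.
fn first_child_action(children: &[(u8, &str)]) -> Action {
    if let Some((i, _node_ref)) = children.iter().next() {
        Action::MergePath { child_index: *i }
    } else {
        Action::DoNothing
    }
}

fn main() {
    assert_eq!(
        first_child_action(&[(3, "node_b"), (5, "node_c")]),
        Action::MergePath { child_index: 3 }
    );
    assert_eq!(first_child_action(&[]), Action::DoNothing);
}
```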

missing steps of pip3 installation in document

Tested two new Ubuntu machines

➜  conflux-rust git:(v0.1.0) ./dev-support/dep_pip3.sh
./dev-support/dep_pip3.sh: line 6: pip3: command not found
./dev-support/dep_pip3.sh: line 7: pip3: command not found

needless pass by value

Code in Conflux: rx: Receiver<Work>, channel: IoChannel,
File Location: util/io/src/worker.rs:85:13
Description: this argument is passed by value, but not consumed in the function body
Why is this bad: Taking arguments by reference is more flexible and can sometimes avoid unnecessary allocations.
Remarks: potentially unnecessary allocation; consider taking a reference instead: &Receiver<Work<Message>>
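The lint's point in miniature: a parameter that is only read can be taken by reference, so the caller keeps ownership and no move or clone is forced. `Work` here is an illustrative type, not the one from util/io.

```rust
struct Work {
    payload: Vec<u8>,
}

// By-value signature: the caller must move (or clone) the Work.
fn len_by_value(w: Work) -> usize {
    w.payload.len()
}

// By-reference signature: the caller keeps the Work afterwards.
fn len_by_ref(w: &Work) -> usize {
    w.payload.len()
}

fn main() {
    let w = Work { payload: vec![0u8; 16] };
    assert_eq!(len_by_ref(&w), 16);
    assert_eq!(len_by_ref(&w), 16); // `w` is still usable
    assert_eq!(len_by_value(w), 16); // consumes `w`
}
```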

use of `expect` followed by a function call

Code in Conflux: .expect(&format!("db get failed, key: {:?}", &key.key() as &[u8]))
File Location: core/src/db.rs:262:14
Why is this bad: The function will always be called.
Remarks: try this: unwrap_or_else(|_| panic!("db get failed, key: {:?}", &key.key() as &[u8]))
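Counting how often the message is formatted makes the cost visible: `expect(&format!(...))` builds the string even on the Ok path, while `unwrap_or_else` defers it to the error path. `message` is a hypothetical stand-in for the `format!` call.

```rust
// Stand-in for the format! call; counts its own invocations.
fn message(evals: &mut u32, key: u32) -> String {
    *evals += 1;
    format!("db get failed, key: {:?}", key)
}

fn main() {
    let ok: Result<u32, ()> = Ok(1);

    let mut eager = 0;
    let _ = ok.expect(&message(&mut eager, 7));
    assert_eq!(eager, 1); // formatted even though the value was Ok

    let mut lazy = 0;
    let _ = ok.unwrap_or_else(|_| panic!("{}", message(&mut lazy, 7)));
    assert_eq!(lazy, 0); // only formatted if the Result is Err
}
```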

Could not transfer cfx to my Account (i.e. wallet) through Smart Contract

I have written following contract;

pragma solidity 0.5.1;

contract MyContract {
    uint256 totalSupply; 
    mapping(address => uint256) public balances;
    address payable wallet;

    constructor(address payable _wallet) public {
        totalSupply = 0;
        wallet = _wallet;
    }
    
    function () external payable{
        buyToken();
    }

    function buyToken() public payable {
        balances[msg.sender] += msg.value;
        wallet.transfer(msg.value);
        totalSupply +=msg.value;

    }
    function getTotalSupply()public view returns  (uint256 ){
        return totalSupply;
    }

}

The above contract deploys successfully on the testnet and all of its functionality works except wallet.transfer(msg.value);. This line tries to send the value received from the caller of buyToken() to wallet. I have tested the contract on Ethereum (using web3 and MetaMask accounts), where the MetaMask account balance changes, but the same is not reflected in the Conflux wallet.
Here is my JavaScript program.

const ConfluxTx = require('confluxjs-transaction') ;
const ConfluxWeb = require('conflux-web');
const confluxWeb = new ConfluxWeb('http://testnet-jsonrpc.conflux-chain.org:12537');
const account1 = '0xb7be910d098ae4e5989d50787787b7e7a1245fa8'
const account2 = '0x3d2792f52f314564b58ecc6368f6f47adb727fb3'
const privateKey1 = Buffer.from('4a98c06e6b1520bd973c7bc5d2ee..............','hex')
const privateKey2 = Buffer.from('8f2016c58e898238dd5b4e003985...............', 'hex')
const contractAddress = '0xd022b78af2d8c2ce983da508cd37fab6753ad401'
const contractABI = [{"constant":true,"inputs":[{"name":"","type":"address"}],"name":"balances","outputs":[{"name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[],"name":"buyToken","outputs":[],"payable":true,"stateMutability":"payable","type":"function"},{"constant":true,"inputs":[],"name":"getTotalSupply","outputs":[{"name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"inputs":[{"name":"_wallet","type":"address"}],"payable":false,"stateMutability":"nonpayable","type":"constructor"},{"payable":true,"stateMutability":"payable","type":"fallback"}];

var contract  = new confluxWeb.cfx.Contract(contractABI, contractAddress)

contract.methods.getTotalSupply().call().then(console.log);
contract.methods.balances(account1).call().then(console.log);
contract.methods.balances(account2).call().then(console.log);

// FOR CALLING buyToken() /////////
confluxWeb.cfx.getTransactionCount(account2, (err, txCount) => {
   const data = contract.methods.buyToken().encodeABI();
      const txObject = {
      nonce:    confluxWeb.utils.toHex(txCount),
      gasLimit: confluxWeb.utils.toHex(1000000),
      gasPrice: confluxWeb.utils.toHex(confluxWeb.utils.toDrip('100', 'gdrip')),
      to: contractAddress,
      value: confluxWeb.utils.toHex(confluxWeb.utils.toDrip('7', 'cfx')),
      data:data
     
    }
    // sign the trx
    const tx = new ConfluxTx(txObject)
    // console.log(tx);
//     const tx = new Tx(txObject, {chain:'testrpc'})
// const tx = new Tx(txObject, {chain:'ropsten', hardfork: 'petersburg'})
tx.sign(privateKey2)
// console.log(tx);
const serializedTx = tx.serialize()
// console.log(serializedTx);
const raw = '0x' + serializedTx.toString('hex')
// console.log(raw);
//broadcast tx
confluxWeb.cfx.sendSignedTransaction (raw, (err, txHash)=> {
    console.log('err:', err)
    console.log('txHash', txHash)
})

  })

///// FOR CONTRACT CREATION//////// 
confluxWeb.cfx.getTransactionCount(account1, (err, txCount) => {
  const data='0x608060405234801561001057600080fd5b506040516020806102a28339810180604052602081101561003057600080fd5b81019080805190602001909291905050506000808190555080600260006101000a81548173ffffffffffffffffffffffffffffffffffffffff021916908373ffffffffffffffffffffffffffffffffffffffff16021790555050610209806100996000396000f3fe608060405260043610610051576000357c01000000000000000000000000000000000000000000000000000000009004806327e235e31461005b578063a4821719146100c0578063c4e41b22146100ca575b6100596100f5565b005b34801561006757600080fd5b506100aa6004803603602081101561007e57600080fd5b81019080803573ffffffffffffffffffffffffffffffffffffffff1690602001909291905050506101bc565b6040518082815260200191505060405180910390f35b6100c86100f5565b005b3480156100d657600080fd5b506100df6101d4565b6040518082815260200191505060405180910390f35b34600160003373ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200190815260200160002060008282540192505081905550600260009054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff166108fc349081150290604051600060405180830381858888f193505050501580156101aa573d6000803e3d6000fd5b50346000808282540192505081905550565b60016020528060005260406000206000915090505481565b6000805490509056fea165627a7a72305820df6b2ff0dfa5f88de9423a5664be62b688c5217b160cd60bc7e3b25f6f9b3eaf0029';
      const txObject = {
      nonce:    confluxWeb.utils.toHex(txCount),
      gasLimit: confluxWeb.utils.toHex(1000000),
      gasPrice: confluxWeb.utils.toHex(confluxWeb.utils.toDrip('10', 'gdrip')),
      data:data
     
    }
    // sign the trx
    const tx = new ConfluxTx(txObject)
    // console.log(tx);
//     const tx = new Tx(txObject, {chain:'testrpc'})
// const tx = new Tx(txObject, {chain:'ropsten', hardfork: 'petersburg'})
tx.sign(privateKey1)
// console.log(tx);
const serializedTx = tx.serialize()
// console.log(serializedTx);
const raw = '0x' + serializedTx.toString('hex')
// console.log(raw);
//broadcast tx
confluxWeb.cfx.sendSignedTransaction (raw, (err, txHash)=> {
    console.log('err:', err)
    console.log('txHash', txHash)
})

  }) 

Using `clone` on a double-reference; this will copy the reference instead of cloning the inner type.

Code in Conflux: .map(|(name, option)| ColumnFamilyDescriptor::new(name.clone(), option))
File Location: db/src/kvdb-rocksdb/src/lib.rs:338:58
Why is this bad: Cloning an &&T copies the inner &T, instead of cloning the underlying T.
Remarks: You may try dereferencing it:
.map(|(name, option)| ColumnFamilyDescriptor::new(&(*name).clone(), option))
or being explicit about what type to clone:
.map(|(name, option)| ColumnFamilyDescriptor::new(&str::clone(name), option))
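A minimal standalone reproduction of the lint: on a double reference, `.clone()` resolves to `Clone` for the reference type and only copies the inner reference, so the underlying data is never cloned until you dereference first.

```rust
fn main() {
    let s = String::from("default");
    let r: &String = &s;
    let rr: &&String = &r;

    let cloned_ref: &String = rr.clone(); // copies the reference, not the data
    let owned: String = (*rr).clone(); // dereference first to clone the String
    let explicit: String = String::clone(*rr); // or name the type explicitly

    assert_eq!(owned, "default");
    assert_eq!(explicit, "default");
    assert_eq!(cloned_ref, &owned);
}
```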

Epoch not increasing

I have built Conflux from source and am running a full node, but the epoch is not increasing. Is this an error or something else?

2019-09-20T06:37:18.777851764+05:00 INFO cfxcore::statistics - Statistics: StatisticsInner { sync_graph: SyncGraphStatistics { inserted_block_count: 1 }, consensus_graph: ConsensusGraphStatistics { inserted_block_count: 0, processed_block_count: 0 } }
2019-09-20T06:37:19.061565987+05:00 INFO cfxcore::storage::impls::state_manager - number of nodes committed to db 3
2019-09-20T06:37:19.779131718+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:20.779865465+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:21.780447768+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:22.780877889+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:23.781271727+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:23.781414351+05:00 INFO cfxcore::statistics - Statistics: StatisticsInner { sync_graph: SyncGraphStatistics { inserted_block_count: 1 }, consensus_graph: ConsensusGraphStatistics { inserted_block_count: 0, processed_block_count: 0 } }
2019-09-20T06:37:23.781463141+05:00 INFO cfxcore::block_data_manager - Before gc cache_size=242 1 0 0 0
2019-09-20T06:37:24.062025342+05:00 INFO cfxcore::storage::impls::state_manager - number of nodes committed to db 3
2019-09-20T06:37:24.781949547+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:25.782615423+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:26.783125529+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:27.783498846+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:28.784069831+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:28.784156829+05:00 INFO cfxcore::block_data_manager - Before gc cache_size=242 1 0 0 0
2019-09-20T06:37:28.784204845+05:00 INFO cfxcore::statistics - Statistics: StatisticsInner { sync_graph: SyncGraphStatistics { inserted_block_count: 1 }, consensus_graph: ConsensusGraphStatistics { inserted_block_count: 0, processed_block_count: 0 } }
2019-09-20T06:37:29.062187727+05:00 INFO cfxcore::storage::impls::state_manager - number of nodes committed to db 3
2019-09-20T06:37:29.784880403+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:30.785262847+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:31.785643293+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:32.786243354+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:33.786941114+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:33.787056495+05:00 INFO cfxcore::statistics - Statistics: StatisticsInner { sync_graph: SyncGraphStatistics { inserted_block_count: 1 }, consensus_graph: ConsensusGraphStatistics { inserted_block_count: 0, processed_block_count: 0 } }
2019-09-20T06:37:33.787105638+05:00 INFO cfxcore::block_data_manager - Before gc cache_size=242 1 0 0 0
2019-09-20T06:37:34.062364269+05:00 INFO cfxcore::storage::impls::state_manager - number of nodes committed to db 3
2019-09-20T06:37:34.787760882+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:35.788372334+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:36.788991793+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:37.789444008+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:38.790060893+05:00 INFO cfxcore::sync::synchronization_protocol_handler - Catch-up mode: true, latest epoch: 0
2019-09-20T06:37:38.790112319+05:00 INFO cfxcore::block_data_manager - Before gc cache_size=242 1 0 0 0
2019-09-20T06:37:38.790145340+05:00 INFO cfxcore::statistics - Statistics: StatisticsInner { sync_graph: SyncGraphStatistics { inserted_block_count: 1 }, consensus_graph: ConsensusGraphStatistics { inserted_block_count: 0, processed_block_count: 0 } }
2019-09-20T06:37:39.062577283+05:00 INFO cfxcore::storage::impls::state_manager - number of nodes committed to db 3

Using peer_id to keep track of requests can introduce inconsistent behavior.

When a peer disconnects, its peer_id may later be reused by another connection.
Currently we memorize the peer_id of in-flight requests in the protocol handler and process them on timeout, so a reused peer_id can cause unexpected behavior in match_request. With issue #35, this bug may now be triggered quite commonly.
One solution is to keep track of in-flight requests in each peer's state; this can be fixed together with smarter synchronization of headers/blocks, since we also need more state there.
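The per-peer bookkeeping suggested above can be sketched roughly like this (PeerState, the request ids, and the message name are all illustrative, not the actual Conflux types):

```rust
use std::collections::HashMap;

// Hypothetical per-peer state: in-flight requests live with the peer,
// not in a global map keyed only by peer_id.
#[derive(Default)]
struct PeerState {
    in_flight: HashMap<u64, &'static str>, // request id -> request kind
}

// Returns the number of stale in-flight requests visible to a connection
// that reuses the peer_id after a disconnect.
fn stale_requests_after_reuse() -> usize {
    let mut peers: HashMap<u32, PeerState> = HashMap::new();
    peers
        .entry(7)
        .or_default()
        .in_flight
        .insert(1, "GetBlockHeaders");

    // The peer disconnects: dropping its state drops its in-flight requests.
    peers.remove(&7);

    // A later connection reuses peer_id 7 and starts with fresh, empty state,
    // so stale requests can never be matched against the new peer.
    peers.entry(7).or_default().in_flight.len()
}

fn main() {
    assert_eq!(stale_requests_after_reuse(), 0);
}
```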

account address definition in protocol specification

In the Conflux Protocol Specification, the account address is defined as the leftmost 160 bits of the Keccak hash of the corresponding public key. However, based on the implementation, it should be the rightmost 160 bits.
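A minimal sketch of the rightmost-160-bit convention (the helper name is hypothetical, and a real implementation would first compute the Keccak-256 hash of the public key):

```rust
// Given the 32-byte Keccak-256 hash of a public key, take the RIGHTMOST
// 160 bits (the last 20 bytes) as the account address, matching the
// implementation rather than the spec's "leftmost" wording.
fn address_from_hash(hash: &[u8; 32]) -> [u8; 20] {
    let mut addr = [0u8; 20];
    addr.copy_from_slice(&hash[12..]); // bytes 12..31 are the rightmost 160 bits
    addr
}

fn main() {
    let mut hash = [0u8; 32];
    for (i, b) in hash.iter_mut().enumerate() {
        *b = i as u8; // dummy hash 0x00..0x1f, for demonstration only
    }
    let addr = address_from_hash(&hash);
    assert_eq!(addr[0], 12); // the address starts at byte 12 of the hash
    assert_eq!(addr[19], 31); // ...and ends at the hash's last byte
}
```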

Reference to uninitialized memory

Code in Conflux: trie_node = mem::uninitialized();
File Location: core/src/storage/impls/multi_version_merkle_patricia_trie/node_memory_manager.rs:435:37
Why is this bad: mem::uninitialized can create invalid values (e.g. null references), which is undefined behavior (UB).
Remarks: The uninitialized value may contain dangling pointers or other invalid raw values; reading or dropping such a value is UB.
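Since Rust 1.36, std::mem::MaybeUninit is the sanctioned replacement for mem::uninitialized; a minimal sketch of the pattern (the u64 slot is purely illustrative, not the trie_node type):

```rust
use std::mem::MaybeUninit;

// Allocate uninitialized storage, write to it, and only then call
// assume_init(); no invalid value of the inner type ever exists.
fn init_demo() -> u64 {
    let mut slot: MaybeUninit<u64> = MaybeUninit::uninit();
    slot.write(42); // fully initialize before reading
    // Safe because the value was written just above.
    unsafe { slot.assume_init() }
}

fn main() {
    assert_eq!(init_demo(), 42);
}
```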

the since field must contain a semver-compliant version

Code in Conflux:
build_rpc_trait! {
    /// Cfx rpc interface.
    pub trait Cfx {
        /// Returns protocol version encoded as a string (quotes are necessary).
    }
}
File Location: client/src/rpc/traits/cfx.rs:17:1 , 144, 196
Why is this bad: To check which version a deprecation applies from, since must be a valid semver string; otherwise the information it contains is useless.
Remarks: in the ::jsonrpc_macros::auto_args::build_rpc_trait macro expansion (29:1), the since field is missing.
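For reference, this is what a semver-valid since looks like on an ordinary #[deprecated] attribute (the function and note below are made up for illustration):

```rust
// `since` must parse as semver so tools can compare it with the crate version.
#[deprecated(since = "0.1.0", note = "illustrative only: use a newer RPC method")]
fn old_rpc() -> u32 {
    0
}

#[allow(deprecated)]
fn main() {
    assert_eq!(old_rpc(), 0);
}
```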

Mutable borrow from immutable input(s)

Code in Conflux: pub unsafe fn get_unchecked_mut(&self, key: usize) -> &mut T {
File Location: core/src/storage/impls/multi_version_merkle_patricia_trie/slab/mod.rs:709:59 , core/src/storage/impls/multi_version_merkle_patricia_trie/slab/mod.rs:713:39 , core/src/storage/impls/multi_version_merkle_patricia_trie/slab/mod.rs:717:45
Why is this bad: This is trivially unsound: it lets a caller create two mutable references from the same (immutable!) source.
Remarks: This is UB: if an optimization assumes the object is immutable while it is in fact mutated (through the reference acquired by this function), anything can go wrong.
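A sound variant takes &mut self, so the borrow checker itself rules out two simultaneous mutable aliases; a toy sketch (this Slab is not the Conflux type):

```rust
struct Slab<T> {
    entries: Vec<T>,
}

impl<T> Slab<T> {
    /// Caller must guarantee `key` is in bounds; uniqueness of the returned
    /// reference is enforced by the `&mut self` receiver, not by the caller.
    unsafe fn get_unchecked_mut(&mut self, key: usize) -> &mut T {
        unsafe { self.entries.get_unchecked_mut(key) }
    }
}

fn main() {
    let mut s = Slab { entries: vec![1, 2, 3] };
    unsafe {
        *s.get_unchecked_mut(1) = 9;
    }
    assert_eq!(s.entries[1], 9);
}
```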

Inconsistent Lock Order (Potential to Introduce Deadlocks)

Thread A: SynchronizationGraph => ConsensusGraph
pub fn insert_block( (core/src/sync/synchronization_graph.rs:1001)
...
let mut inner = self.inner.write(); <---- Lock(SynchronizationGraph.inner.write) (core/src/sync/synchronization_graph.rs:1011)
...
self.consensus.on_new_block_construction_only(&h, &*inner); (core/src/sync/synchronization_graph.rs:1095)
=> let inner = &mut *self.inner.write(); <----- Lock(ConsensusGraph.inner.write) (core/src/consensus/mod.rs:2429)

Thread B: ConsensusGraph => SynchronizationGraph
impl SynchronizationGraph { (core/src/sync/synchronization_graph.rs:512)
Ok(hash) => consensus.on_new_block(&hash, inner.as_ref()), (core/src/sync/synchronization_graph.rs:553)
=>
let mut inner = &mut *self.inner.write(); <---- Lock(ConsensusGraph.inner.write) (core/src/consensus/mod.rs:2532)
...
let difficulty_in_my_epoch =
sync_inner_lock.read().total_difficulty_in_own_epoch(hash); <---- Lock(SynchronizationGraph.inner.read) (core/src/consensus/mod.rs:2535)
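One standard fix is to make every thread acquire the two locks in the same global order; a minimal sketch with std locks (the names are illustrative, and real code would always take the SynchronizationGraph lock before the ConsensusGraph one):

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Both threads lock sync_inner first and consensus_inner second,
// so the circular wait shown above cannot occur.
fn run() -> u64 {
    let sync_inner = Arc::new(RwLock::new(1u64));
    let consensus_inner = Arc::new(RwLock::new(0u64));

    let mut handles = Vec::new();
    for _ in 0..2 {
        let s = Arc::clone(&sync_inner);
        let c = Arc::clone(&consensus_inner);
        handles.push(thread::spawn(move || {
            // Fixed global order: sync lock before consensus lock.
            let sg = s.read().unwrap();
            let mut cg = c.write().unwrap();
            *cg += *sg;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *consensus_inner.read().unwrap();
    total
}

fn main() {
    assert_eq!(run(), 2);
}
```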

redundant closure found

Code in Conflux: self.last_contact.map(|c| c.into_node_contact());
File Location: network/src/node_table.rs:865:47
Why is this bad: Needlessly creating a closure adds code for no benefit and gives the optimizer more work.
Remarks: remove the closure and pass the method directly, i.e. self.last_contact.map(node_table::json::NodeContact::into_node_contact)
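The suggestion amounts to passing the method path instead of wrapping it in a closure; a self-contained sketch with stand-in types:

```rust
#[derive(Debug, PartialEq)]
struct Contact(u32);

#[derive(Debug, PartialEq)]
struct NodeContact(u32);

impl Contact {
    fn into_node_contact(self) -> NodeContact {
        NodeContact(self.0)
    }
}

fn main() {
    let last_contact = Some(Contact(7));
    // Instead of: last_contact.map(|c| c.into_node_contact())
    let nc = last_contact.map(Contact::into_node_contact);
    assert_eq!(nc, Some(NodeContact(7)));
}
```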

https not enforced on online wallet

The online wallet works over both HTTP and HTTPS. The HTTP version should automatically redirect to HTTPS to enforce secure use.

conflux-wallet

(This is not closely related to this repo but I was unable to find a more suitable place to post it.)

Casting from `*const u8` to a more-strictly-aligned pointer (`*const u16`)

Code in Conflux: let o: *const u16 = addr_bytes.as_ptr() as *const u16;
File Location: network/src/node_table.rs:91:37
Why is this bad: Dereferencing the resulting pointer may be undefined behavior.
Remarks: Dereferencing the resulting *const u16 is a misaligned read of bytes beyond x, so the printed value can differ from run to run.
For example:
fn main() {
    let x: u8 = 1;
    let y = (&x as *const u8) as *const u16;
    let z = (&x as *const u8) as *const u8;
    unsafe {
        println!("the value of y is {}", *y);
        println!("the value of z is {}", *z);
    }
}
Two different runs printed:
the value of y is 26369
the value of y is 22273
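A portable fix is to copy the bytes out instead of reinterpreting the pointer, e.g. via u16::from_ne_bytes (a sketch, not the actual node_table code):

```rust
// Reads the first two bytes of a buffer as a native-endian u16 without
// ever creating a misaligned *const u16.
fn read_u16_ne(addr_bytes: &[u8]) -> u16 {
    let mut two = [0u8; 2];
    two.copy_from_slice(&addr_bytes[..2]);
    u16::from_ne_bytes(two)
}

fn main() {
    let bytes = [0x01u8, 0x02, 0x03];
    let v = read_u16_ne(&bytes);
    // Round-trips to the same two bytes regardless of endianness.
    assert_eq!(v.to_ne_bytes(), [0x01, 0x02]);
}
```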

Build failed: Could not compile `journaldb`

I am trying to build Conflux on Windows, but found this error.

error[E0599]: no method named malloc_size_of found for type memory_db::MemoryDB<keccak_hasher::KeccakHasher, memory_db::HashKey<keccak_hasher::KeccakHasher>, elastic_array::ElasticArray128<u8>> in the current scope
  --> C:\Users\Jawad\.cargo\git\checkouts\conflux-parity-deps-2536dd8b4e78cc06\1930617\util\journaldb\src\archivedb.rs:110:16
    |
110 |     self.overlay.malloc_size_of()
    |                  ^^^^^^^^^^^^^^
    |
    = note: the method malloc_size_of exists but the following trait bounds were not satisfied:
            memory_db::MemoryDB<keccak_hasher::KeccakHasher, memory_db::HashKey<keccak_hasher::KeccakHasher>, elastic_array::ElasticArray128<u8>> : parity_util_mem::allocators::MallocSizeOfExt

error[E0599]: no method named size_of found for type memory_db::MemoryDB<keccak_hasher::KeccakHasher, memory_db::HashKey<keccak_hasher::KeccakHasher>, elastic_array::ElasticArray128<u8>> in the current scope
  --> C:\Users\Jawad\.cargo\git\checkouts\conflux-parity-deps-2536dd8b4e78cc06\1930617\util\journaldb\src\earlymergedb.rs:342:16
    |
342 |     self.overlay.size_of(&mut ops) + match self.refs {
    |                  ^^^^^^^
    |
    = note: the method size_of exists but the following trait bounds were not satisfied:
            memory_db::MemoryDB<keccak_hasher::KeccakHasher, memory_db::HashKey<keccak_hasher::KeccakHasher>, elastic_array::ElasticArray128<u8>> : parity_util_mem::malloc_size::MallocSizeOf

error[E0599]: no method named size_of found for type lock_api::rwlock::RwLockReadGuard<'_, parking_lot::raw_rwlock::RawRwLock, std::collections::HashMap<primitive_types::H256, earlymergedb::RefInfo>> in the current scope
  --> C:\Users\Jawad\.cargo\git\checkouts\conflux-parity-deps-2536dd8b4e78cc06\1930617\util\journaldb\src\earlymergedb.rs:343:28
    |
343 |     Some(ref c) => c.read().size_of(&mut ops),
    |                             ^^^^^^^
    |
    = note: the method size_of exists but the following trait bounds were not satisfied:
            std::collections::HashMap<primitive_types::H256, earlymergedb::RefInfo> : parity_util_mem::malloc_size::MallocSizeOf

error[E0277]: the trait bound primitive_types::H256: parity_util_mem::malloc_size::MallocSizeOf is not satisfied
  --> C:\Users\Jawad\.cargo\git\checkouts\conflux-parity-deps-2536dd8b4e78cc06\1930617\util\journaldb\src\overlayrecentdb.rs:136:21
    |
136 | #[derive(PartialEq, MallocSizeOf)]
    |                     ^^^^^^^^^^^^ the trait parity_util_mem::malloc_size::MallocSizeOf is not implemented for primitive_types::H256
    |
    = note: required by parity_util_mem::malloc_size::MallocSizeOf::size_of

error[E0599]: no method named size_of found for type memory_db::MemoryDB<keccak_hasher::KeccakHasher, memory_db::HashKey<keccak_hasher::KeccakHasher>, elastic_array::ElasticArray128<u8>> in the current scope
  --> C:\Users\Jawad\.cargo\git\checkouts\conflux-parity-deps-2536dd8b4e78cc06\1930617\util\journaldb\src\overlayrecentdb.rs:255:42
    |
255 |     let mut mem = self.transaction_overlay.size_of(&mut ops);
    |                                            ^^^^^^^
    |
    = note: the method size_of exists but the following trait bounds were not satisfied:
            memory_db::MemoryDB<keccak_hasher::KeccakHasher, memory_db::HashKey<keccak_hasher::KeccakHasher>, elastic_array::ElasticArray128<u8>> : parity_util_mem::malloc_size::MallocSizeOf

error[E0599]: no method named size_of found for type memory_db::MemoryDB<keccak_hasher::KeccakHasher, memory_db::HashKey<keccak_hasher::KeccakHasher>, elastic_array::ElasticArray128<u8>> in the current scope
  --> C:\Users\Jawad\.cargo\git\checkouts\conflux-parity-deps-2536dd8b4e78cc06\1930617\util\journaldb\src\overlayrecentdb.rs:258:34
    |
258 |     mem += overlay.backing_overlay.size_of(&mut ops);
    |                                    ^^^^^^^
    |
    = note: the method size_of exists but the following trait bounds were not satisfied:
            memory_db::MemoryDB<keccak_hasher::KeccakHasher, memory_db::HashKey<keccak_hasher::KeccakHasher>, elastic_array::ElasticArray128<u8>> : parity_util_mem::malloc_size::MallocSizeOf

error[E0599]: no method named size_of found for type std::collections::HashMap<primitive_types::H256, elastic_array::ElasticArray128<u8>, std::hash::BuildHasherDefault<plain_hasher::PlainHasher>> in the current scope
  --> C:\Users\Jawad\.cargo\git\checkouts\conflux-parity-deps-2536dd8b4e78cc06\1930617\util\journaldb\src\overlayrecentdb.rs:259:34
    |
259 |     mem += overlay.pending_overlay.size_of(&mut ops);
    |                                    ^^^^^^^
    |
    = note: the method size_of exists but the following trait bounds were not satisfied:
            std::collections::HashMap<primitive_types::H256, elastic_array::ElasticArray128<u8>, std::hash::BuildHasherDefault<plain_hasher::PlainHasher>> : parity_util_mem::malloc_size::MallocSizeOf

error[E0599]: no method named size_of found for type std::vec::Vec<primitive_types::H256> in the current scope
  --> C:\Users\Jawad\.cargo\git\checkouts\conflux-parity-deps-2536dd8b4e78cc06\1930617\util\journaldb\src\refcounteddb.rs:109:16
    |
109 |     self.inserts.size_of(&mut ops) + self.removes.size_of(&mut ops)
    |                  ^^^^^^^
    |
    = note: the method size_of exists but the following trait bounds were not satisfied:
            [primitive_types::H256] : parity_util_mem::malloc_size::MallocSizeOf
            std::vec::Vec<primitive_types::H256> : parity_util_mem::malloc_size::MallocSizeOf

error[E0599]: no method named size_of found for type std::vec::Vec<primitive_types::H256> in the current scope
  --> C:\Users\Jawad\.cargo\git\checkouts\conflux-parity-deps-2536dd8b4e78cc06\1930617\util\journaldb\src\refcounteddb.rs:109:49
    |
109 |     self.inserts.size_of(&mut ops) + self.removes.size_of(&mut ops)
    |                                                   ^^^^^^^
    |
    = note: the method size_of exists but the following trait bounds were not satisfied:
            [primitive_types::H256] : parity_util_mem::malloc_size::MallocSizeOf
            std::vec::Vec<primitive_types::H256> : parity_util_mem::malloc_size::MallocSizeOf

error: aborting due to 9 previous errors

Some errors have detailed explanations: E0277, E0599.
For more information about an error, try rustc --explain E0277.
error: Could not compile journaldb.
warning: build failed, waiting for other jobs to finish...

There still seems to be a deadlock problem.

Event:
I saw that a PR fixing a deadlock had been merged, so I updated to the new code, built it, and reconnected to the test net.

On 2019.06.12

  1. I was happy because the test net had been reset, so I only needed to download about 50 MB of data.
  2. After catch-up mode finished, my data matched the data on the root nodes, so I was confident everything ran well.
  3. I shut down the process with CTRL-C and set "start_mining=true".
  4. I restarted the process; after a few minutes the log showed that I had mined a block.
  5. The block I mined was not shown online, so I waited for the next block to be mined.
  6. Testing the JSON-RPC API, I found something unexpected: the epoch number on my local node was much larger than that on the root nodes.
  7. After a few minutes, I found that my node was no longer mining and I could not shut it down normally as usual.

PC:
Ubuntu 16.04, 8 GB RAM

default.toml

jsonrpc_tcp_port=12536
jsonrpc_http_port=12537

ledger_cache_size=256
db_cache_size=16
storage_cache_start_size=126000
storage_cache_size=2400000
storage_recent_lfu_factor=0.5
storage_idle_size=25000
storage_node_map_size=10000000
tx_pool_size=67_000

Build not successful: Patch kvdb-rocksdb v0.1.4 was not used in the crate graph.

I am following the instructions from https://conflux-chain.github.io/conflux-doc/install/, but today I received the following warning and errors:
warning: Patch kvdb-rocksdb v0.1.4 (C:\conflux-rust\db\src\kvdb-rocksdb) was not used in the crate graph.
Check that the patched package version and available features are compatible
with the dependency requirements. If the patch has a different version from
what is locked in the Cargo.lock file, run cargo update to use the new
version. This may also occur with an optional dependency that is not enabled.

error[E0609]: no field disable_wal on type impls::kvdb_rocksdb::DatabaseConfig
--> db\src\rocksdb\mod.rs:94:15
|
94 | db_config.disable_wal = disable_wal;
| ^^^^^^^^^^^ unknown field
|
= note: available fields are: max_open_files, memory_budget, compaction, columns

error[E0277]: the trait bound impls::kvdb_rocksdb::Database: impls::kvdb::KeyValueDB is not satisfied
--> db\src\rocksdb\mod.rs:114:20
|
114 | key_value: Arc::new(db),
| ^^^^^^^^^^^^ the trait impls::kvdb::KeyValueDB is not implemented for impls::kvdb_rocksdb::Database
|
= note: required for the cast to the object type dyn impls::kvdb::KeyValueDB

Casting &T to &mut T may cause undefined behaviour

Code in Conflux: &mut *(self.trie_node as *const TrieNodeDeltaMpt as *mut TrieNodeDeltaMpt)
File Location: core/src/storage/impls/multi_version_merkle_patricia_trie/merkle_patricia_trie/cow_node_ref.rs:86:14, core/src/storage/impls/multi_version_merkle_patricia_trie/slab/mod.rs:714:23
Why is this bad: It’s basically guaranteed to be undefined behaviour.
Remarks: The UnsafeCell type is the only legal way to obtain aliasable data that may be mutated; in general, transmuting an &T into an &mut T is undefined behavior.
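A minimal sketch of the sanctioned pattern (Node and bump are illustrative, not the cow_node_ref types):

```rust
use std::cell::UnsafeCell;

struct Node {
    value: UnsafeCell<u32>,
}

// Mutating through a shared &Node is sound only because the data lives in
// an UnsafeCell; casting &T to &mut T directly is UB even when it appears
// to work.
fn bump(n: &Node) -> u32 {
    unsafe {
        *n.value.get() += 1;
        *n.value.get()
    }
}

fn main() {
    let n = Node { value: UnsafeCell::new(1) };
    assert_eq!(bump(&n), 2);
}
```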
