
web3infra-foundation / mega

Mega is an unofficial open source implementation of Google Piper.

Home Page: https://gitmega.dev

License: Apache License 2.0

Dockerfile 0.06% Shell 0.08% Rust 69.69% JavaScript 25.09% HTML 0.13% CSS 3.56% Starlark 0.84% TypeScript 0.56%
git monorepo rust p2p p2p-git hacktoberfest decentralised buck2 google piper

mega's People

Contributors

andres-nju, benjamin-747, dependabot[bot], genedna, github-merge-queue[bot], ivanbeethoven, kaligo-li, phyknife, qiuzhilei, sonichen, timeprinciple, wujian0327, yutingnie


mega's Issues

The test function creates a new path in the repo

When running cargo test --workspace, the test function test_content_store in database/src/driver/lfs/storage.rs creates a new folder and files under the database directory:

On branch main
Your branch is up to date with 'origin/main'.

Untracked files:
  (use "git add <file>..." to include in what will be committed)
	database/content-store/

nothing added to commit but untracked files present (use "git add" to track)

The tests folder in the repository root is intended for test data produced by the test suite. Use PathBuf::from(env::current_dir().unwrap().parent().unwrap()) in the test function to get the workspace root path and keep the output clean, as sketched below.
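
A minimal sketch of that suggestion, assuming the test runs with the database crate directory as its working directory so that the parent directory is the workspace root (the test name and content-store path here are only illustrative):

    use std::{env, fs, path::PathBuf};

    /// Resolve the workspace root from inside a member crate's test: cargo runs
    /// tests with the crate directory as the working directory, so its parent is
    /// the workspace root.
    fn workspace_root() -> PathBuf {
        PathBuf::from(env::current_dir().unwrap().parent().unwrap())
    }

    #[test]
    fn content_store_writes_into_tests_dir() {
        // Keep generated data under <workspace>/tests instead of database/.
        let out_dir = workspace_root().join("tests").join("content-store");
        fs::create_dir_all(&out_dir).unwrap();
    }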

Issue with action-rs: Failed Tests Due to Git LFS Data Not Being Downloaded

I've encountered this while using action-rs in the GitHub Actions workflow. The problem revolves around cargo tests and their interaction with test data managed via Git LFS (Large File Storage).

When running cargo tests as part of the CI pipeline using action-rs, the test data stored with Git LFS is not downloaded. This leads to failures in tests that rely on these large data files.
Impact: The tests that depend on LFS-tracked files fail due to their unavailability, significantly impacting the reliability of our CI process.

  1. Action-rs Maintenance Status: Action-rs has entered a maintenance phase. Consequently, there is no active issue tracking or patching for this action.
  2. Potential Replacement: I've identified a potential alternative action, Build Rust projects with Cross, which seems like it could be a suitable replacement for action-rs.
  3. Temporary Workaround in #243: As an interim measure, the functionality cargo test --workspace has been removed in #243. This necessitates manual verification of each PR before merge, adding to the workflow complexity and potential for error.

Version conflict between craft's pgp dependency and p2p's libp2p dependency

pgp 0.10.0 (and its later releases 0.10.1 and 0.10.2) conflicts with libp2p 0.51.3: they depend on different versions of curve25519-dalek.
The error message (with pgp 0.10.2 and libp2p 0.51.3) is below:
error: failed to select a version for curve25519-dalek.
... required by package snow v0.9.1
... which satisfies dependency snow = "^0.9.1" of package libp2p-noise v0.42.0
... which satisfies dependency libp2p-noise = "^0.42.0" of package libp2p v0.51.3
... which satisfies dependency libp2p = "^0.51.3" of package p2p v0.1.0 (E:\mega\p2p)
... which satisfies path dependency p2p (locked to 0.1.0) of package mega v0.1.0 (E:\mega)
versions that meet the requirements =4.0.0-rc.0 are: 4.0.0-rc.0

all possible versions conflict with previously selected packages.

previously selected package curve25519-dalek v4.0.0-rc.3
... which satisfies dependency curve25519-dalek = "=4.0.0-rc.3" of package ed25519-dalek v2.0.0-rc.3
... which satisfies dependency ed25519-dalek = "^2.0.0-rc.3" of package pgp v0.10.2
... which satisfies dependency pgp = "^0.10.2" of package git-craft v0.1.0 (E:\mega\craft)
... which satisfies path dependency git-craft (locked to 0.1.0) of package mega v0.1.0 (E:\mega)

failed to select a version for curve25519-dalek which could resolve this conflict

Pushing a git repository hits a `self.param_types.len() <= (u16::MAX as usize)` assertion error

I cloned the repo from https://github.com/biomejs/biome.git, then added a remote with git remote add local http://127.0.0.1:8000/third_parts/crates/biomejs/biome. When I push, I encounter this error:

thread 'tokio-runtime-worker' panicked at /Users/eli/.cargo/registry/src/index.crates.io-6f17d22bba15001f/sqlx-postgres-0.7.1/src/message/parse.rs:31:13:
assertion failed: self.param_types.len() <= (u16::MAX as usize)
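
For context, sqlx's Postgres encoder asserts that a single prepared statement binds at most u16::MAX parameters, and a large push apparently produces one INSERT with more placeholders than that. The usual workaround is to chunk the rows before building each statement; a rough sketch of that idea (function names and the column count are illustrative, not mega's actual code):

    /// Split rows so that rows_per_stmt * columns_per_row stays below the
    /// u16::MAX bind-parameter limit asserted by sqlx-postgres.
    fn chunk_rows<T>(rows: Vec<T>, columns_per_row: usize) -> Vec<Vec<T>> {
        let rows_per_stmt = (u16::MAX as usize) / columns_per_row;
        let mut chunks = Vec::new();
        let mut iter = rows.into_iter().peekable();
        while iter.peek().is_some() {
            chunks.push(iter.by_ref().take(rows_per_stmt).collect());
        }
        chunks
    }

    fn main() {
        // e.g. an object row that binds 5 columns per row
        let rows: Vec<u32> = (0..200_000).collect();
        println!("{} INSERT statements needed", chunk_rows(rows, 5).len());
    }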

Proposal for a New URL Protocol for Sharing Open Source Projects in DHT (Distributed Hash Table)

Proposal

I propose developing a new URL protocol for Git to share open source projects in a DHT (Distributed Hash Table). The benefits of leveraging a DHT include enhanced reliability, fault tolerance, and scalability. Given these advantages, a specific URL protocol for this purpose would significantly advance how developers collaborate on open source projects.

Expected Benefits

  1. Scalability: With DHT, the new URL protocol can handle more projects than current solutions.
  2. Reliability: The DHT's inherent redundancy ensures that projects remain accessible even if several nodes go down.
  3. Decentralization: This protocol would enhance open source openness by making projects less reliant on centralized platforms.

Compilation Error: Failed to Get `winapi` as a Dependency

While trying to compile the mega project, I encountered an error related to the winapi dependency. Below is the output I got:

➜  mega git:(main)
cargo build
    Updating crates.io index
error: failed to get `winapi` as a dependency of package `russh-cryptovec v0.7.0`
    ... which satisfies dependency `russh-cryptovec = "^0.7.0"` of package `russh v0.37.1`
    ... which satisfies dependency `russh = "^0.37.1"` of package `gateway v0.1.0 (/Users/eli/GitMono/mega/gateway)`
    ... which satisfies path dependency `gateway` (locked to 0.1.0) of package `mega v0.1.0 (/Users/eli/GitMono/mega)`

Caused by:
  failed to query replaced source registry `crates-io`

Caused by:
  download of wi/na/winapi failed

Caused by:
  failed to download from `https://index.crates.io/wi/na/winapi`

Caused by:
  [2] Failed initialization ([CONN-1-0] send: no filter connected)

I was trying to compile commit 36c2809 when this error occurred.

It seems like the winapi dependency couldn't be downloaded from the crates.io index. I am not sure what the cause could be, as I have not encountered this issue before.

There is no README.md in the default initial root directory.

I am trying to build the UI of mega, but I need a README.md file to test and present the result.
Please add a README.md file to the default initial root directory.

## Init directory configuration
MEGA_INIT_DIRS = "projects,docs,third_parts" # init these repo directories in mega init command
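
A rough sketch of what seeding a README.md during init could look like, reading the MEGA_INIT_DIRS value shown above (this is only an illustration, not mega's actual init code):

    use std::{env, fs, path::Path};

    fn main() -> std::io::Result<()> {
        // Fall back to the documented default when MEGA_INIT_DIRS is unset.
        let dirs = env::var("MEGA_INIT_DIRS")
            .unwrap_or_else(|_| "projects,docs,third_parts".to_string());
        for dir in dirs.split(',').map(str::trim) {
            fs::create_dir_all(dir)?;
            let readme = Path::new(dir).join("README.md");
            if !readme.exists() {
                // Seed a placeholder so the UI has something to render.
                fs::write(&readme, format!("# {dir}\n"))?;
            }
        }
        Ok(())
    }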

Got `SelectNextSome polled after terminated` Error in container environment

  • Because end-to-end testing of the P2P service requires starting multiple nodes, we use Docker Compose to bring up several containers. However, this approach has run into an issue that is currently blocking development of the end-to-end test code.
  • The issue can be reproduced by using the command: docker compose -f tests/compose/mega_p2p/compose.yaml up --build.
  • Please refer to the attached files for additional error log details.

2024-01-04 15:23:54 StartOptions {
2024-01-04 15:23:54 service: [
2024-01-04 15:23:54 Http,
2024-01-04 15:23:54 P2p,
2024-01-04 15:23:54 ],
2024-01-04 15:23:54 common: CommonOptions {
2024-01-04 15:23:54 host: "0.0.0.0",
2024-01-04 15:23:54 data_source: Postgres,
2024-01-04 15:23:54 },
2024-01-04 15:23:54 http: HttpCustom {
2024-01-04 15:23:54 http_port: 8000,
2024-01-04 15:23:54 https_port: 443,
2024-01-04 15:23:54 https_key_path: None,
2024-01-04 15:23:54 https_cert_path: None,
2024-01-04 15:23:54 },
2024-01-04 15:23:54 ssh: SshCustom {
2024-01-04 15:23:54 ssh_port: 8100,
2024-01-04 15:23:54 ssh_key_path: None,
2024-01-04 15:23:54 ssh_cert_path: None,
2024-01-04 15:23:54 },
2024-01-04 15:23:54 p2p: P2pCustom {
2024-01-04 15:23:54 p2p_port: 8300,
2024-01-04 15:23:54 bootstrap_node: "/ip4/172.17.0.1/tcp/8200",
2024-01-04 15:23:54 secret_key: None,
2024-01-04 15:23:54 relay_server: false,
2024-01-04 15:23:54 },
2024-01-04 15:23:54 }
2024-01-04 15:23:54 2024-01-04T07:23:54.432728Z INFO p2p::peer: Generate keys randomly
2024-01-04 15:23:54 2024-01-04T07:23:54.433848Z INFO p2p::node::client: Local peer id: PeerId("16Uiu2HAm3gVu7nN4b6VQxRbG1sbYQuZrVJx5xjbb4UestFi5Rf48")
2024-01-04 15:23:54 2024-01-04T07:23:54.436535Z INFO libp2p_swarm: local_peer_id=16Uiu2HAm3gVu7nN4b6VQxRbG1sbYQuZrVJx5xjbb4UestFi5Rf48
2024-01-04 15:23:54 2024-01-04T07:23:54.436900Z DEBUG libp2p_core::transport::choice: Failed to listen on address using libp2p_core::transport::map::Map<libp2p_core::transport::upgrade::Multiplexed<libp2p_core::transport::and_then::AndThen<libp2p_core::transport::and_then::AndThen<libp2p_relay::priv_client::transport::Transport, libp2p_core::transport::upgrade::Builder<libp2p_relay::priv_client::transport::Transport>::authenticate<libp2p_relay::priv_client::Connection, libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<libp2p_relay::priv_client::Connection>>, libp2p_noise::Config, libp2p_noise::Error>::{{closure}}>, libp2p_core::transport::upgrade::Authenticated<libp2p_core::transport::and_then::AndThen<libp2p_relay::priv_client::transport::Transport, libp2p_core::transport::upgrade::Builder<libp2p_relay::priv_client::transport::Transport>::authenticate<libp2p_relay::priv_client::Connection, libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<libp2p_relay::priv_client::Connection>>, libp2p_noise::Config, libp2p_noise::Error>::{{closure}}>>::multiplex<libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<libp2p_relay::priv_client::Connection>>, libp2p_yamux::Muxer<multistream_select::negotiated::Negotiated<libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<libp2p_relay::priv_client::Connection>>>>, libp2p_yamux::Config, std::io::error::Error>::{{closure}}>>, libp2p::builder::phase::relay::<impl libp2p::builder::SwarmBuilder<libp2p::builder::phase::provider::AsyncStd, libp2p::builder::phase::relay::RelayPhase<libp2p_core::transport::map::Map<libp2p_core::transport::upgrade::Multiplexed<libp2p_core::transport::and_then::AndThen<libp2p_core::transport::and_then::AndThen<libp2p_tcp::Transport<libp2p_tcp::provider::async_io::Tcp>, libp2p_core::transport::upgrade::Builder<libp2p_tcp::Transport<libp2p_tcp::provider::async_io::Tcp>>::authenticate<async_io::Asyncstd::net::tcp::TcpStream, libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<async_io::Asyncstd::net::tcp::TcpStream>>, libp2p_noise::Config, libp2p_noise::Error>::{{closure}}>, libp2p_core::transport::upgrade::Authenticated<libp2p_core::transport::and_then::AndThen<libp2p_tcp::Transport<libp2p_tcp::provider::async_io::Tcp>, libp2p_core::transport::upgrade::Builder<libp2p_tcp::Transport<libp2p_tcp::provider::async_io::Tcp>>::authenticate<async_io::Asyncstd::net::tcp::TcpStream, libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<async_io::Asyncstd::net::tcp::TcpStream>>, libp2p_noise::Config, libp2p_noise::Error>::{{closure}}>>::multiplex<libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<async_io::Asyncstd::net::tcp::TcpStream>>, libp2p_yamux::Muxer<multistream_select::negotiated::Negotiated<libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<async_io::Asyncstd::net::tcp::TcpStream>>>>, libp2p_yamux::Config, std::io::error::Error>::{{closure}}>>, libp2p::builder::phase::tcp::<impl libp2p::builder::SwarmBuilder<libp2p::builder::phase::provider::AsyncStd, libp2p::builder::phase::tcp::TcpPhase>>::with_tcp<libp2p_noise::Config::new, libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<async_io::Asyncstd::net::tcp::TcpStream>>, libp2p_noise::Error, <libp2p_yamux::Config as core::default::Default>::default, libp2p_yamux::Muxer<multistream_select::negotiated::Negotiated<libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<async_io::Asyncstd::net::tcp::TcpStream>>>>, 
std::io::error::Error>::{{closure}}>>>>::with_relay_client<libp2p_noise::Config::new, libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<libp2p_relay::priv_client::Connection>>, libp2p_noise::Error, <libp2p_yamux::Config as core::default::Default>::default, libp2p_yamux::Muxer<multistream_select::negotiated::Negotiated<libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<libp2p_relay::priv_client::Connection>>>>, std::io::error::Error>::{{closure}}> address=/ip4/0.0.0.0/tcp/8300
2024-01-04 15:23:54 2024-01-04T07:23:54.437082Z DEBUG libp2p_tcp: listening on address address=0.0.0.0:8300
2024-01-04 15:23:54 2024-01-04T07:23:54.437433Z DEBUG netlink_proto::handle: handle: forwarding new request to connection
2024-01-04 15:23:54 2024-01-04T07:23:54.437987Z INFO p2p::node::client: Connect to database
2024-01-04 15:23:55 First setting: IdGeneratorOptions { method: Some(1), base_time: Some(1582136402000), worker_id: Some(1), worker_id_bit_len: Some(6), seq_bit_len: Some(8), max_seq_num: Some(255), min_seq_num: Some(5), top_over_cost_count: Some(2000) }
2024-01-04 15:23:55 First setting: IdGeneratorOptions { method: Some(1), base_time: Some(1582136402000), worker_id: Some(1), worker_id_bit_len: Some(6), seq_bit_len: Some(8), max_seq_num: Some(255), min_seq_num: Some(5), top_over_cost_count: Some(2000) }
2024-01-04 15:23:55 2024-01-04T07:23:55.999521Z DEBUG Swarm::poll: netlink_proto::connection: reading incoming messages
2024-01-04 15:23:55 2024-01-04T07:23:55.999557Z DEBUG Swarm::poll: netlink_proto::codecs: NetlinkCodec: decoding next message
2024-01-04 15:23:55 2024-01-04T07:23:55.999574Z DEBUG Swarm::poll: netlink_proto::connection: forwarding unsolicited messages to the connection handle
2024-01-04 15:23:55 2024-01-04T07:23:55.999577Z DEBUG Swarm::poll: netlink_proto::connection: forwaring responses to previous requests to the connection handle
2024-01-04 15:23:55 2024-01-04T07:23:55.999580Z DEBUG Swarm::poll: netlink_proto::connection: handling requests
2024-01-04 15:23:55 2024-01-04T07:23:55.999653Z DEBUG Swarm::poll: netlink_proto::connection: sending messages
2024-01-04 15:23:56 2024-01-04T07:23:56.000148Z DEBUG Swarm::poll: netlink_proto::connection: reading incoming messages
2024-01-04 15:23:56 2024-01-04T07:23:56.000172Z DEBUG Swarm::poll: netlink_proto::codecs: NetlinkCodec: decoding next message
2024-01-04 15:23:56 2024-01-04T07:23:56.000188Z DEBUG Swarm::poll: netlink_proto::codecs: NetlinkCodec: decoding next message
2024-01-04 15:23:56 2024-01-04T07:23:56.000554Z DEBUG Swarm::poll: netlink_proto::protocol::protocol: handling messages (request id = RequestId { sequence_number: 1, port: 0 })
2024-01-04 15:23:56 2024-01-04T07:23:56.000579Z DEBUG Swarm::poll: netlink_proto::protocol::protocol: handling response to request RequestId { sequence_number: 1, port: 0 }
2024-01-04 15:23:56 2024-01-04T07:23:56.000585Z DEBUG Swarm::poll: netlink_proto::protocol::protocol: done handling response to request RequestId { sequence_number: 1, port: 0 }
2024-01-04 15:23:56 2024-01-04T07:23:56.000588Z DEBUG Swarm::poll: netlink_proto::codecs: NetlinkCodec: decoding next message
2024-01-04 15:23:56 2024-01-04T07:23:56.000596Z DEBUG Swarm::poll: netlink_proto::protocol::protocol: handling messages (request id = RequestId { sequence_number: 1, port: 0 })
2024-01-04 15:23:56 2024-01-04T07:23:56.000599Z DEBUG Swarm::poll: netlink_proto::protocol::protocol: handling response to request RequestId { sequence_number: 1, port: 0 }
2024-01-04 15:23:56 2024-01-04T07:23:56.000601Z DEBUG Swarm::poll: netlink_proto::protocol::protocol: done handling response to request RequestId { sequence_number: 1, port: 0 }
2024-01-04 15:23:56 2024-01-04T07:23:56.000603Z DEBUG Swarm::poll: netlink_proto::codecs: NetlinkCodec: decoding next message
2024-01-04 15:23:56 2024-01-04T07:23:56.000609Z DEBUG Swarm::poll: netlink_proto::codecs: NetlinkCodec: decoding next message
2024-01-04 15:23:56 2024-01-04T07:23:56.000612Z DEBUG Swarm::poll: netlink_proto::protocol::protocol: handling messages (request id = RequestId { sequence_number: 1, port: 0 })
2024-01-04 15:23:56 2024-01-04T07:23:56.000614Z DEBUG Swarm::poll: netlink_proto::protocol::protocol: handling response to request RequestId { sequence_number: 1, port: 0 }
2024-01-04 15:23:56 2024-01-04T07:23:56.000616Z DEBUG Swarm::poll: netlink_proto::protocol::protocol: done handling response to request RequestId { sequence_number: 1, port: 0 }
2024-01-04 15:23:56 2024-01-04T07:23:56.000618Z DEBUG Swarm::poll: netlink_proto::codecs: NetlinkCodec: decoding next message
2024-01-04 15:23:56 2024-01-04T07:23:56.000624Z DEBUG Swarm::poll: netlink_proto::connection: forwarding unsolicited messages to the connection handle
2024-01-04 15:23:56 2024-01-04T07:23:56.000628Z DEBUG Swarm::poll: netlink_proto::connection: forwaring responses to previous requests to the connection handle
2024-01-04 15:23:56 2024-01-04T07:23:56.000632Z DEBUG Swarm::poll: netlink_proto::connection: handling requests
2024-01-04 15:23:56 2024-01-04T07:23:56.000634Z DEBUG Swarm::poll: netlink_proto::connection: sending messages
2024-01-04 15:23:56 2024-01-04T07:23:56.000680Z DEBUG Swarm::poll: libp2p_tcp: New listen address address=/ip4/127.0.0.1/tcp/8300
2024-01-04 15:23:56 2024-01-04T07:23:56.000713Z DEBUG Swarm::poll: libp2p_swarm: New listener address listener=ListenerId(1) address=/ip4/127.0.0.1/tcp/8300
2024-01-04 15:23:56 2024-01-04T07:23:56.000819Z INFO p2p::node::client: Listening on "/ip4/127.0.0.1/tcp/8300"
2024-01-04 15:23:56 2024-01-04T07:23:56.000832Z DEBUG Swarm::poll: netlink_proto::connection: reading incoming messages
2024-01-04 15:23:56 2024-01-04T07:23:56.000835Z DEBUG Swarm::poll: netlink_proto::codecs: NetlinkCodec: decoding next message
2024-01-04 15:23:56 2024-01-04T07:23:56.000840Z DEBUG Swarm::poll: netlink_proto::connection: forwarding unsolicited messages to the connection handle
2024-01-04 15:23:56 2024-01-04T07:23:56.000842Z DEBUG Swarm::poll: netlink_proto::connection: forwaring responses to previous requests to the connection handle
2024-01-04 15:23:56 2024-01-04T07:23:56.000844Z DEBUG Swarm::poll: netlink_proto::connection: handling requests
2024-01-04 15:23:56 2024-01-04T07:23:56.000848Z DEBUG Swarm::poll: netlink_proto::connection: sending messages
2024-01-04 15:23:56 2024-01-04T07:23:56.000852Z DEBUG Swarm::poll: libp2p_tcp: New listen address address=/ip4/172.18.0.5/tcp/8300
2024-01-04 15:23:56 2024-01-04T07:23:56.000857Z DEBUG Swarm::poll: libp2p_swarm: New listener address listener=ListenerId(1) address=/ip4/172.18.0.5/tcp/8300
2024-01-04 15:23:56 2024-01-04T07:23:56.000863Z INFO p2p::node::client: Listening on "/ip4/172.18.0.5/tcp/8300"
2024-01-04 15:23:56 2024-01-04T07:23:56.000869Z DEBUG Swarm::poll: netlink_proto::connection: reading incoming messages
2024-01-04 15:23:56 2024-01-04T07:23:56.000873Z DEBUG Swarm::poll: netlink_proto::codecs: NetlinkCodec: decoding next message
2024-01-04 15:23:56 2024-01-04T07:23:56.000891Z DEBUG Swarm::poll: netlink_proto::connection: forwarding unsolicited messages to the connection handle
2024-01-04 15:23:56 2024-01-04T07:23:56.000905Z DEBUG Swarm::poll: netlink_proto::connection: forwaring responses to previous requests to the connection handle
2024-01-04 15:23:56 2024-01-04T07:23:56.000907Z DEBUG Swarm::poll: netlink_proto::connection: handling requests
2024-01-04 15:23:56 2024-01-04T07:23:56.000910Z DEBUG Swarm::poll: netlink_proto::connection: sending messages
2024-01-04 15:23:57 2024-01-04T07:23:56.999038Z INFO p2p::node::client: Trying to dial bootstrap node"/ip4/172.17.0.1/tcp/8200"
2024-01-04 15:23:57 2024-01-04T07:23:56.999470Z DEBUG libp2p_core::transport::choice: Failed to dial address using libp2p_core::transport::map::Map<libp2p_core::transport::upgrade::Multiplexed<libp2p_core::transport::and_then::AndThen<libp2p_core::transport::and_then::AndThen<libp2p_relay::priv_client::transport::Transport, libp2p_core::transport::upgrade::Builder<libp2p_relay::priv_client::transport::Transport>::authenticate<libp2p_relay::priv_client::Connection, libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<libp2p_relay::priv_client::Connection>>, libp2p_noise::Config, libp2p_noise::Error>::{{closure}}>, libp2p_core::transport::upgrade::Authenticated<libp2p_core::transport::and_then::AndThen<libp2p_relay::priv_client::transport::Transport, libp2p_core::transport::upgrade::Builder<libp2p_relay::priv_client::transport::Transport>::authenticate<libp2p_relay::priv_client::Connection, libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<libp2p_relay::priv_client::Connection>>, libp2p_noise::Config, libp2p_noise::Error>::{{closure}}>>::multiplex<libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<libp2p_relay::priv_client::Connection>>, libp2p_yamux::Muxer<multistream_select::negotiated::Negotiated<libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<libp2p_relay::priv_client::Connection>>>>, libp2p_yamux::Config, std::io::error::Error>::{{closure}}>>, libp2p::builder::phase::relay::<impl libp2p::builder::SwarmBuilder<libp2p::builder::phase::provider::AsyncStd, libp2p::builder::phase::relay::RelayPhase<libp2p_core::transport::map::Map<libp2p_core::transport::upgrade::Multiplexed<libp2p_core::transport::and_then::AndThen<libp2p_core::transport::and_then::AndThen<libp2p_tcp::Transport<libp2p_tcp::provider::async_io::Tcp>, libp2p_core::transport::upgrade::Builder<libp2p_tcp::Transport<libp2p_tcp::provider::async_io::Tcp>>::authenticate<async_io::Asyncstd::net::tcp::TcpStream, libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<async_io::Asyncstd::net::tcp::TcpStream>>, libp2p_noise::Config, libp2p_noise::Error>::{{closure}}>, libp2p_core::transport::upgrade::Authenticated<libp2p_core::transport::and_then::AndThen<libp2p_tcp::Transport<libp2p_tcp::provider::async_io::Tcp>, libp2p_core::transport::upgrade::Builder<libp2p_tcp::Transport<libp2p_tcp::provider::async_io::Tcp>>::authenticate<async_io::Asyncstd::net::tcp::TcpStream, libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<async_io::Asyncstd::net::tcp::TcpStream>>, libp2p_noise::Config, libp2p_noise::Error>::{{closure}}>>::multiplex<libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<async_io::Asyncstd::net::tcp::TcpStream>>, libp2p_yamux::Muxer<multistream_select::negotiated::Negotiated<libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<async_io::Asyncstd::net::tcp::TcpStream>>>>, libp2p_yamux::Config, std::io::error::Error>::{{closure}}>>, libp2p::builder::phase::tcp::<impl libp2p::builder::SwarmBuilder<libp2p::builder::phase::provider::AsyncStd, libp2p::builder::phase::tcp::TcpPhase>>::with_tcp<libp2p_noise::Config::new, libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<async_io::Asyncstd::net::tcp::TcpStream>>, libp2p_noise::Error, <libp2p_yamux::Config as core::default::Default>::default, libp2p_yamux::Muxer<multistream_select::negotiated::Negotiated<libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<async_io::Asyncstd::net::tcp::TcpStream>>>>, 
std::io::error::Error>::{{closure}}>>>>::with_relay_client<libp2p_noise::Config::new, libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<libp2p_relay::priv_client::Connection>>, libp2p_noise::Error, <libp2p_yamux::Config as core::default::Default>::default, libp2p_yamux::Muxer<multistream_select::negotiated::Negotiated<libp2p_noise::io::Output<multistream_select::negotiated::Negotiated<libp2p_relay::priv_client::Connection>>>>, std::io::error::Error>::{{closure}}> address=/ip4/172.17.0.1/tcp/8200
2024-01-04 15:23:57 2024-01-04T07:23:56.999578Z DEBUG libp2p_tcp: dialing address address=172.17.0.1:8200
2024-01-04 15:23:57 2024-01-04T07:23:57.014775Z DEBUG Transport::dial{address=/ip4/172.17.0.1/tcp/8200}: multistream_select::dialer_select: Dialer: Proposed protocol: /noise
2024-01-04 15:23:57 2024-01-04T07:23:57.014820Z DEBUG Transport::dial{address=/ip4/172.17.0.1/tcp/8200}: multistream_select::dialer_select: Dialer: Expecting proposed protocol: /noise
2024-01-04 15:23:57 2024-01-04T07:23:57.021717Z DEBUG Transport::dial{address=/ip4/172.17.0.1/tcp/8200}: multistream_select::negotiated: Negotiated: Received confirmation for protocol: /noise
2024-01-04 15:23:57 2024-01-04T07:23:57.032109Z DEBUG Transport::dial{address=/ip4/172.17.0.1/tcp/8200}: multistream_select::dialer_select: Dialer: Proposed protocol: /yamux/1.0.0
2024-01-04 15:23:57 2024-01-04T07:23:57.032239Z DEBUG Transport::dial{address=/ip4/172.17.0.1/tcp/8200}: multistream_select::dialer_select: Dialer: Expecting proposed protocol: /yamux/1.0.0
2024-01-04 15:23:57 2024-01-04T07:23:57.032308Z DEBUG Transport::dial{address=/ip4/172.17.0.1/tcp/8200}: yamux::connection: new connection: 9500929a (Client)
2024-01-04 15:23:57 2024-01-04T07:23:57.033037Z DEBUG Swarm::poll: libp2p_kad::handler: New outbound connection peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9 mode=client
2024-01-04 15:23:57 2024-01-04T07:23:57.033992Z DEBUG Swarm::poll: libp2p_swarm: Connection established peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9 endpoint=Dialer { address: "/ip4/172.17.0.1/tcp/8200", role_override: Dialer } total_peers=1
2024-01-04 15:23:57 2024-01-04T07:23:57.034079Z INFO p2p::node::client: ConnectionEstablished:16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9 at /ip4/172.17.0.1/tcp/8200/p2p/16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9
2024-01-04 15:23:57 2024-01-04T07:23:57.034132Z DEBUG new_established_connection{remote_addr=/ip4/172.17.0.1/tcp/8200 id=1 peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9}:Connection::poll: yamux::connection::rtt: sending ping 3120215291
2024-01-04 15:23:57 2024-01-04T07:23:57.034104Z INFO p2p::node::client: Behaviour(Kademlia(RoutingUpdated { peer: PeerId("16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9"), is_new_peer: true, addresses: ["/ip4/172.17.0.1/tcp/8200/p2p/16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9"], bucket_range: (Distance(57896044618658097711785492504343953926634992332820282019728792003956564819968), Distance(115792089237316195423570985008687907853269984665640564039457584007913129639935)), old_peer: None }))
2024-01-04 15:23:57 2024-01-04T07:23:57.034219Z DEBUG new_established_connection{remote_addr=/ip4/172.17.0.1/tcp/8200 id=1 peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9}:Connection::poll: yamux::connection: 9500929a: new outbound (Stream 9500929a/1) of (Connection 9500929a Client (streams 0))
2024-01-04 15:23:57 2024-01-04T07:23:57.034251Z DEBUG new_established_connection{remote_addr=/ip4/172.17.0.1/tcp/8200 id=1 peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9}:Connection::poll: multistream_select::dialer_select: Dialer: Proposed protocol: /ipfs/id/1.0.0
2024-01-04 15:23:57 2024-01-04T07:23:57.116052Z DEBUG new_established_connection{remote_addr=/ip4/172.17.0.1/tcp/8200 id=1 peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9}:Connection::poll: multistream_select::negotiated: Negotiated: Received confirmation for protocol: /yamux/1.0.0
2024-01-04 15:23:57 2024-01-04T07:23:57.116261Z DEBUG new_established_connection{remote_addr=/ip4/172.17.0.1/tcp/8200 id=1 peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9}:Connection::poll: yamux::connection::rtt: received pong 3120215291, estimated round-trip-time 82.13025ms
2024-01-04 15:23:57 2024-01-04T07:23:57.116461Z DEBUG new_established_connection{remote_addr=/ip4/172.17.0.1/tcp/8200 id=1 peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9}:Connection::poll: multistream_select::listener_select: Listener: confirming protocol: /ipfs/id/1.0.0
2024-01-04 15:23:57 2024-01-04T07:23:57.116488Z DEBUG new_established_connection{remote_addr=/ip4/172.17.0.1/tcp/8200 id=1 peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9}:Connection::poll: multistream_select::listener_select: Listener: sent confirmed protocol: /ipfs/id/1.0.0
2024-01-04 15:23:57 2024-01-04T07:23:57.116808Z DEBUG new_established_connection{remote_addr=/ip4/172.17.0.1/tcp/8200 id=1 peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9}:Connection::poll: multistream_select::dialer_select: Dialer: Received confirmation for protocol: /ipfs/id/1.0.0
2024-01-04 15:23:57 2024-01-04T07:23:57.117292Z INFO p2p::node::client: Told Bootstrap Node our public address.
2024-01-04 15:23:57 2024-01-04T07:23:57.117334Z INFO p2p::node::client: Bootstrap Node told us our public address: "/ip4/172.18.0.5/tcp/8300"
2024-01-04 15:23:57 2024-01-04T07:23:57.117372Z INFO p2p::node::client: Dial bootstrap node successfully
2024-01-04 15:23:57 2024-01-04T07:23:57.117552Z DEBUG libp2p_kad::behaviour: Switching to server-mode assuming that one of [/ip4/172.17.0.1/tcp/8200/p2p/16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9/p2p-circuit] is externally reachable
2024-01-04 15:23:57 2024-01-04T07:23:57.117561Z DEBUG libp2p_kad::behaviour: Re-configuring 1 established connection
2024-01-04 15:23:57 2024-01-04T07:23:57.119265Z DEBUG p2p::node::client: Event: NewExternalAddrCandidate { address: "/ip4/172.18.0.5/tcp/8300" }
2024-01-04 15:23:57 2024-01-04T07:23:57.119286Z INFO p2p::node::client: loop ends?
2024-01-04 15:23:57 2024-01-04T07:23:57.119383Z INFO p2p::node::client: loop ends?
2024-01-04 15:23:57 2024-01-04T07:23:57.119541Z INFO p2p::node::client: loop ends?
2024-01-04 15:23:57 2024-01-04T07:23:57.121277Z DEBUG new_established_connection{remote_addr=/ip4/172.17.0.1/tcp/8200 id=1 peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9}: libp2p_kad::handler: Changed mode on outbound connection peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9 mode=server
2024-01-04 15:23:57 2024-01-04T07:23:57.121541Z DEBUG new_established_connection{remote_addr=/ip4/172.17.0.1/tcp/8200 id=1 peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9}:Connection::poll: yamux::connection: 9500929a: new outbound (Stream 9500929a/3) of (Connection 9500929a Client (streams 0))
2024-01-04 15:23:57 2024-01-04T07:23:57.121585Z DEBUG new_established_connection{remote_addr=/ip4/172.17.0.1/tcp/8200 id=1 peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9}:Connection::poll: multistream_select::dialer_select: Dialer: Proposed protocol: /rendezvous/1.0.0
2024-01-04 15:23:57 2024-01-04T07:23:57.121623Z DEBUG new_established_connection{remote_addr=/ip4/172.17.0.1/tcp/8200 id=1 peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9}:Connection::poll: yamux::connection: 9500929a: new outbound (Stream 9500929a/5) of (Connection 9500929a Client (streams 1))
2024-01-04 15:23:57 2024-01-04T07:23:57.121643Z DEBUG new_established_connection{remote_addr=/ip4/172.17.0.1/tcp/8200 id=1 peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9}:Connection::poll: multistream_select::dialer_select: Dialer: Proposed protocol: /libp2p/circuit/relay/0.2.0/hop
2024-01-04 15:23:57 2024-01-04T07:23:57.121712Z DEBUG new_established_connection{remote_addr=/ip4/172.17.0.1/tcp/8200 id=1 peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9}:Connection::poll: libp2p_identify::handler: Supported listen protocols changed, pushing to peer peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9 before=/mega/git_info_refs, /mega/nostr, /ipfs/id/1.0.0, /mega/git_upload_pack, /libp2p/circuit/relay/0.2.0/stop, /ipfs/id/push/1.0.0, /mega/git_obj after=/mega/nostr, /ipfs/id/1.0.0, /mega/git_upload_pack, /libp2p/circuit/relay/0.2.0/stop, /mega/git_obj, /ipfs/kad/1.0.0, /mega/git_info_refs, /ipfs/id/push/1.0.0
2024-01-04 15:23:57 2024-01-04T07:23:57.121749Z DEBUG new_established_connection{remote_addr=/ip4/172.17.0.1/tcp/8200 id=1 peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9}:Connection::poll: yamux::connection: 9500929a: new outbound (Stream 9500929a/7) of (Connection 9500929a Client (streams 2))
2024-01-04 15:23:57 2024-01-04T07:23:57.121762Z DEBUG new_established_connection{remote_addr=/ip4/172.17.0.1/tcp/8200 id=1 peer=16Uiu2HAm8QJyknrG9ZuaypuX6mphfrh5WtTx7BbcE6u4pRyKjnf9}:Connection::poll: multistream_select::dialer_select: Dialer: Proposed protocol: /ipfs/id/push/1.0.0
2024-01-04 15:35:41 2024-01-04T07:35:41.080059Z DEBUG request{method=GET uri=/projects/mega.git/info/refs?service=git-receive-pack version=HTTP/1.1}: tower_http::trace::on_request: started processing request
2024-01-04 15:35:41 2024-01-04T07:35:41.120039Z DEBUG request{method=GET uri=/projects/mega.git/info/refs?service=git-receive-pack version=HTTP/1.1}: git::protocol::pack: git_info_refs response: b"001f# service=git-receive-pack\n000000931de82deeff0f9107f7fb9f6a5d93a95b84e951bb HEAD\0report-status report-status-v2 delete-refs quiet atomic side-band-64k ofs-delta agent=mega/0.0.1\n003d1de82deeff0f9107f7fb9f6a5d93a95b84e951bb refs/heads/main\n0000"
2024-01-04 15:35:41 2024-01-04T07:35:41.120075Z DEBUG request{method=GET uri=/projects/mega.git/info/refs?service=git-receive-pack version=HTTP/1.1}: tower_http::trace::on_response: finished processing request latency=42 ms status=200
2024-01-04 15:35:41 2024-01-04T07:35:41.276052Z DEBUG request{method=GET uri=/projects/mega.git/info/refs?service=git-upload-pack version=HTTP/1.1}: tower_http::trace::on_request: started processing request
2024-01-04 15:35:41 2024-01-04T07:35:41.277340Z DEBUG request{method=GET uri=/projects/mega.git/info/refs?service=git-upload-pack version=HTTP/1.1}: git::protocol::pack: git_info_refs response: b"001e# service=git-upload-pack\n000000b21de82deeff0f9107f7fb9f6a5d93a95b84e951bb HEAD\0shallow deepen-since deepen-not deepen-relative multi_ack_detailed no-done include-tag side-band-64k ofs-delta agent=mega/0.0.1\n003d1de82deeff0f9107f7fb9f6a5d93a95b84e951bb refs/heads/main\n0000"
2024-01-04 15:35:41 2024-01-04T07:35:41.277361Z DEBUG request{method=GET uri=/projects/mega.git/info/refs?service=git-upload-pack version=HTTP/1.1}: tower_http::trace::on_response: finished processing request latency=1 ms status=200
2024-01-04 15:35:41 2024-01-04T07:35:41.400089Z DEBUG request{method=POST uri=/projects/mega.git/info/lfs/locks/verify version=HTTP/1.1}: tower_http::trace::on_request: started processing request
2024-01-04 15:35:41 2024-01-04T07:35:41.400218Z INFO request{method=POST uri=/projects/mega.git/info/lfs/locks/verify version=HTTP/1.1}: gateway::lfs: req: Request { method: POST, uri: /projects/mega.git/info/lfs/locks/verify, version: HTTP/1.1, headers: {"host": "localhost:8000", "user-agent": "git-lfs/3.4.0 (GitHub; darwin arm64; go 1.21.1)", "content-length": "34", "accept": "application/vnd.git-lfs+json", "content-type": "application/vnd.git-lfs+json; charset=utf-8", "accept-encoding": "gzip"}, body: Body(UnsyncBoxBody) }
2024-01-04 15:35:41 Locks retrieved: []
2024-01-04 15:35:41 2024-01-04T07:35:41.403420Z DEBUG request{method=POST uri=/projects/mega.git/info/lfs/locks/verify version=HTTP/1.1}: tower_http::trace::on_response: finished processing request latency=3 ms status=200
2024-01-04 15:35:41 2024-01-04T07:35:41.457098Z DEBUG request{method=POST uri=/projects/mega.git/git-receive-pack version=HTTP/1.1}: tower_http::trace::on_request: started processing request
2024-01-04 15:35:41 2024-01-04T07:35:41.457442Z DEBUG request{method=POST uri=/projects/mega.git/git-receive-pack version=HTTP/1.1}: git::protocol::pack: 19129 bytes from client
2024-01-04 15:23:57 thread 'tokio-runtime-worker' panicked at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-util-0.3.30/src/stream/stream/select_next_some.rs:32:9:
2024-01-04 15:23:57 SelectNextSome polled after terminated
2024-01-04 15:23:57 note: run with RUST_BACKTRACE=1 environment variable to display a backtrace
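
For reference, the panic comes from futures-util: SelectNextSome panics when select_next_some() is polled on a FusedStream that has already terminated, which can happen if an event loop keeps polling (for example inside futures::select!) after the underlying swarm stream has ended. A minimal, self-contained reproduction of the message, assuming only the futures crate:

    use futures::{executor::block_on, stream::{self, StreamExt}};

    fn main() {
        block_on(async {
            let mut s = stream::iter(0..2).fuse();

            // Drain the stream until it terminates.
            while let Some(_item) = s.next().await {}

            // Polling again via select_next_some() after termination panics:
            // "SelectNextSome polled after terminated".
            let _ = s.select_next_some().await;
        });
    }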

Design and Implementation of File for AI Data

The emergence of Large Language Models (LLMs) has led researchers to train on a large number of data sets, yet many of the data sets produced by unsupervised LLM training cannot be traced, so it is now hard to verify whether the data sets used actually match the requirements. We intend to establish a standard for Artificial Intelligence (AI) training data sets to facilitate traceability, since such data sets have numerous important properties that should be recorded. Inspired by SBOM in software engineering, we propose AIBOM, which employs standardised formats to simplify the administration of data in various formats. After discussions between the Xihe Platform and lawyers, we agreed that integrating the AIBOM standard can support data governance plans and reduce data governance risks.
The specific content of AIBOM and its explanations are shown in the figure. Interested friends are welcome to contribute.
[Figure: AIBOM]

Next we plan to do the following:

  1. Design the data format as a metadata header
  2. Design data sample
    We assume that changes in training data and annotation will not affect each other, and design the following example:
    2.1 Example of changes to training data and label version updates
    2.2 Sample display of data query
  3. Implementation based on points 1 and 2
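
A rough sketch of how the metadata header from point 1 might be modelled, with training-data and annotation versions tracked independently as assumed in point 2 (field names are illustrative placeholders, not the finalized AIBOM format):

    /// Illustrative AIBOM metadata header for a training data set.
    struct AibomHeader {
        name: String,
        version: String,       // data-set version, bumped when training data changes
        label_version: String, // annotation version, tracked independently
        license: String,
        source: String,        // where the data was collected from
        checksum: String,      // content hash for traceability
    }

    fn main() {
        let header = AibomHeader {
            name: "example-dataset".to_string(),
            version: "1.0.0".to_string(),
            label_version: "1.0.0".to_string(),
            license: "Apache-2.0".to_string(),
            source: "https://example.org/data".to_string(),
            checksum: "sha256:...".to_string(),
        };
        println!(
            "{} v{} (labels v{}, {}, {}, {})",
            header.name, header.version, header.label_version,
            header.license, header.source, header.checksum
        );
    }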

Database Error When Pushing a Repository With a Long Name

When I push a repository that has a long name, I get a PostgreSQL database error:

$ git remote -v
local http://127.0.0.1:8000/projects/crates/a367fef239ee214ddc83474ce5f20cae2837231c2e3174fa118b8c921 (fetch)
local http://127.0.0.1:8000/projects/crates/a367fef239ee214ddc83474ce5f20cae2837231c2e3174fa118b8c921 (push)

thread 'tokio-runtime-worker' panicked at /xx/mega/storage/src/driver/database/storage.rs:213:14:
called `Result::unwrap()` on an `Err` value: Query(SqlxError(Database(PgDatabaseError { severity: Error, code: "22001", message: "value too long for type character varying(64)", detail: None, hint: None, position: None, where: None, schema: None, table: None, column: None, data_type: None, constraint: None, file: Some("varchar.c"), line: Some(632), routine: Some("varchar") })))

The host environment is Arch x86_64 with PostgreSQL 15.4.

Question about starting project

I want to start your project on Windows. I followed the guidance in the README and got stuck at step 5, "Config environment variables for local test". I was confused about what the .env file is, where and how to set environment variables such as DB_USERNAME, DB_SECRET, and DB_HOST, and where I should put the following parameter settings.
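
For context, a .env file is a plain-text file of KEY=value lines placed in the project root; a loader such as the dotenvy crate reads it into the process environment before the variables are consumed. A minimal sketch of how such variables are typically read (the dotenvy call is an assumption about tooling, not necessarily what mega uses):

    use std::env;

    fn main() {
        // dotenvy::dotenv().ok(); // if a .env loader such as dotenvy is used (assumption)
        let username = env::var("DB_USERNAME").expect("DB_USERNAME is not set");
        let secret = env::var("DB_SECRET").expect("DB_SECRET is not set");
        let host = env::var("DB_HOST").expect("DB_HOST is not set");
        println!("connecting to {host} as {username} (secret: {} chars)", secret.len());
    }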

Design and Implement P2P Discovery Ability for Git Based on DHT

Git is an excellent and widely used version control system. However, one potential area of improvement lies in the decentralization of repositories. Git collaboration relies heavily on centralized services such as GitHub, GitLab, Bitbucket, etc. We propose adding a peer-to-peer (P2P) discovery feature based on Distributed Hash Table (DHT) technology to further decentralize Git repositories using Mega.

DHT is a class of decentralized distributed systems that provides a lookup service similar to a hash table: (key, value) pairs are stored in a DHT, and any participating node can efficiently retrieve the value associated with a given key.

  1. Design a DHT schema for Git: This schema would allow Git repositories to be stored and found in a DHT. The key could be a hash of the repository's URL, and the value could be the repository's metadata.

  2. Integrate DHT into Mega: Modify Mega to include DHT functionality. This could involve adding new commands or modifying existing ones to support DHT operations.

  3. Develop a discovery protocol: This protocol would enable Git clients to discover each other and exchange repository information via DHT. #53

This feature could greatly increase the decentralization and resilience of Git, making it even more reliable and versatile for version control.
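
As a concrete illustration of the schema in point 1, a repository could be keyed in the DHT by a hash of its URL, with the stored value carrying the repository's metadata. A minimal sketch assuming SHA-256 via the sha2 crate (the metadata fields and values are illustrative):

    use sha2::{Digest, Sha256};

    /// Illustrative metadata stored as the DHT value for a repository.
    struct RepoMeta {
        url: String,
        head: String, // current HEAD commit id
        maintainers: Vec<String>,
    }

    /// DHT key: SHA-256 of the repository URL.
    fn dht_key(repo_url: &str) -> Vec<u8> {
        Sha256::digest(repo_url.as_bytes()).to_vec()
    }

    fn main() {
        let meta = RepoMeta {
            url: "http://gitmega.dev/projects/mega".to_string(),
            head: "1de82deeff0f9107f7fb9f6a5d93a95b84e951bb".to_string(),
            maintainers: vec!["genedna".to_string()],
        };
        let key: String = dht_key(&meta.url).iter().map(|b| format!("{b:02x}")).collect();
        println!("{} (HEAD {}, {:?}) -> DHT key {key}", meta.url, meta.head, meta.maintainers);
    }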

Partial batch missing issue when inserting git objects during database operations

test command: cargo test --package git --lib -- internal::pack::preload::tests::test_demo_channel --exact --nocapture --ignored
note: database connection information needs to be configured separately, e.g.

   std::env::set_var("MEGA_DATABASE_URL", "mysql://root:123456@localhost:3306/mega");

The database should contain 324 inserted objects, but after the unit test test_demo_channel runs there are only 224.

Delta Object Decode Issue

When I push https://github.com/8go/matrix-commander-rs to third_parts/crates/8go/matrix-commander-rs, I get a delta object decode error:

cargo run https
    Finished dev [unoptimized + debuginfo] target(s) in 0.90s
warning: the following packages contain code that will be rejected by a future version of Rust: buf_redux v0.8.4, nom v4.2.3
note: to see what the problems were, use the option `--future-incompat-report`, or run `cargo report future-incompatibilities --id 1`
     Running `target/debug/mega https`
HttpOptions {
    host: "127.0.0.1",
    port: 8000,
    key_path: None,
    cert_path: None,
    lfs_content_path: Some(
        "/Volumes/Data/objects/mega",
    ),
    data_source: Postgres,
}
First setting: IdGeneratorOptions { method: Some(1), base_time: Some(1582136402000), worker_id: Some(1), worker_id_bit_len: Some(6), seq_bit_len: Some(8), max_seq_num: Some(255), min_seq_num: Some(5), top_over_cost_count: Some(2000) }
2023-10-24T16:54:07.979985Z DEBUG hyper::proto::h1::io: parsed 6 headers
2023-10-24T16:54:07.980020Z DEBUG hyper::proto::h1::conn: incoming body is empty
2023-10-24T16:54:07.981941Z  INFO gateway::https: headers: {"Content-Type": "application/x-git-receive-pack-advertisement", "Cache-Control": "no-cache, max-age=0, must-revalidate"}
2023-10-24T16:54:07.988642Z DEBUG sqlx::query: summary="SELECT * FROM refs …" db.statement="\n\nSELECT\n  *\nFROM\n  refs\nwhere\n  $1 LIKE CONCAT(repo_path, '%')\n" rows_affected=1 rows_returned=1 elapsed=4.756209ms
2023-10-24T16:54:07.995563Z DEBUG sqlx::query: summary="SELECT \"refs\".\"id\", \"refs\".\"repo_path\", \"refs\".\"ref_name\", …" db.statement="\n\nSELECT\n  \"refs\".\"id\",\n  \"refs\".\"repo_path\",\n  \"refs\".\"ref_name\",\n  \"refs\".\"ref_git_id\",\n  \"refs\".\"created_at\",\n  \"refs\".\"updated_at\"\nFROM\n  \"refs\"\nWHERE\n  \"refs\".\"repo_path\" = $1\n" rows_affected=1 rows_returned=1 elapsed=5.199125ms
2023-10-24T16:54:07.995910Z DEBUG git::protocol::pack: git_info_refs response: b"001f# service=git-receive-pack\n000000824f76fa8feb4b4a2b50b168f5a2c5b064e5961185 HEAD\0report-status report-status-v2 delete-refs quiet atomic side-band-64k ofs-delta\n003d4f76fa8feb4b4a2b50b168f5a2c5b064e5961185 refs/heads/main\n0000"
2023-10-24T16:54:07.996290Z DEBUG hyper::proto::h1::io: flushed 541 bytes
2023-10-24T16:54:08.017061Z DEBUG hyper::proto::h1::io: parsed 7 headers
2023-10-24T16:54:08.017078Z DEBUG hyper::proto::h1::conn: incoming body is content-length (6582 bytes)
2023-10-24T16:54:08.017184Z DEBUG hyper::proto::h1::conn: incoming body completed
2023-10-24T16:54:08.018393Z DEBUG git::protocol::pack: init comamnd: RefCommand { ref_name: "refs/heads/main", old_id: "4f76fa8feb4b4a2b50b168f5a2c5b064e5961185", new_id: "42b2deaedbaa3146938e24d54aa3aeeb1482f9a6", status: "ok", error_msg: "", command_type: Update }, caps:[ReportStatusv2, SideBand64k]
2023-10-24T16:54:08.018639Z  INFO git::internal::pack::preload: Start Preload git objects:40
2023-10-24T16:54:08.018646Z  INFO git::internal::pack::preload:  Preloading  git objects:0
2023-10-24T16:54:08.020630Z  INFO git::internal::pack::preload: Preload time cost:2 ms
2023-10-24T16:54:08.020682Z  INFO git::internal::pack::preload: Decode the preload git object
***Git Type Counter Info***
commit:6  tree:3   blob:4  tag:0	ref_delta :12	ofs_dalta:15
base_object:13 	 delta_object:27 	

2023-10-24T16:54:08.020703Z  INFO git::internal::pack::preload: Deal with the object using 10 threads.
2023-10-24T16:54:08.021543Z DEBUG sqlx::query: summary="BEGIN" db.statement="" rows_affected=0 rows_returned=0 elapsed=180.042µs
2023-10-24T16:54:08.021909Z  INFO git::internal::pack::preload: thread begin : 42801
2023-10-24T16:54:08.021915Z  INFO git::internal::pack::preload: thread begin : 31824
2023-10-24T16:54:08.021925Z  INFO git::internal::pack::preload: thread begin : 30878
2023-10-24T16:54:08.021929Z  INFO git::internal::pack::preload: thread begin : 45686
2023-10-24T16:54:08.021929Z  INFO git::internal::pack::preload: thread begin : 34044
2023-10-24T16:54:08.021933Z  INFO git::internal::pack::preload: thread begin : 37361
2023-10-24T16:54:08.021909Z  INFO git::internal::pack::preload: thread begin : 950
2023-10-24T16:54:08.021930Z  INFO git::internal::pack::preload: thread begin : 10025
2023-10-24T16:54:08.021949Z  INFO git::internal::pack::preload: thread begin : 7451
2023-10-24T16:54:08.021986Z  INFO git::internal::pack::preload: thread id  : 31824 run to obj :0
2023-10-24T16:54:08.021988Z  INFO git::internal::pack::preload: thread begin : 64208
thread 'tokio-runtime-worker' panicked at git/src/internal/pack/preload.rs:489:5:
wrong delta decode
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'tokio-runtime-worker' panicked at git/src/internal/pack/preload.rs:489:5:
wrong delta decode
thread 'tokio-runtime-worker' panicked at git/src/internal/pack/preload.rs:489:5:
wrong delta decode
thread 'tokio-runtime-worker' panicked at git/src/internal/pack/preload.rs:489:5:
wrong delta decode
thread 'tokio-runtime-worker' panicked at git/src/internal/pack/preload.rs:489:5:
wrong delta decode
2023-10-24T16:54:08.028623Z DEBUG sqlx::query: summary="SELECT \"git_obj\".\"id\", \"git_obj\".\"git_id\", \"git_obj\".\"object_type\", …" db.statement="\n\nSELECT\n  \"git_obj\".\"id\",\n  \"git_obj\".\"git_id\",\n  \"git_obj\".\"object_type\",\n  \"git_obj\".\"data\",\n  \"git_obj\".\"link\"\nFROM\n  \"git_obj\"\nWHERE\n  \"git_obj\".\"git_id\" = $1\nLIMIT\n  $2\n" rows_affected=0 rows_returned=1 elapsed=5.053459ms
2023-10-24T16:54:08.031698Z DEBUG sqlx::query: summary="SELECT \"git_obj\".\"id\", \"git_obj\".\"git_id\", \"git_obj\".\"object_type\", …" db.statement="\n\nSELECT\n  \"git_obj\".\"id\",\n  \"git_obj\".\"git_id\",\n  \"git_obj\".\"object_type\",\n  \"git_obj\".\"data\",\n  \"git_obj\".\"link\"\nFROM\n  \"git_obj\"\nWHERE\n  \"git_obj\".\"git_id\" = $1\nLIMIT\n  $2\n" rows_affected=0 rows_returned=1 elapsed=8.142833ms
2023-10-24T16:54:08.031804Z DEBUG sqlx::query: summary="SELECT \"git_obj\".\"id\", \"git_obj\".\"git_id\", \"git_obj\".\"object_type\", …" db.statement="\n\nSELECT\n  \"git_obj\".\"id\",\n  \"git_obj\".\"git_id\",\n  \"git_obj\".\"object_type\",\n  \"git_obj\".\"data\",\n  \"git_obj\".\"link\"\nFROM\n  \"git_obj\"\nWHERE\n  \"git_obj\".\"git_id\" = $1\nLIMIT\n  $2\n" rows_affected=0 rows_returned=1 elapsed=7.073125ms
thread 'tokio-runtime-worker' panicked at git/src/internal/pack/preload.rs:489:5:
wrong delta decode
2023-10-24T16:54:08.032603Z DEBUG sqlx::query: summary="INSERT INTO \"mr\" (\"id\", …" db.statement="\n\nINSERT INTO\n  \"mr\" (\n    \"id\",\n    \"mr_id\",\n    \"git_id\",\n    \"object_type\",\n    \"created_at\"\n  )\nVALUES\n  ($1, $2, $3, $4, $5),\n  ($6, $7, $8, $9, $10),\n  ($11, $12, $13, $14, $15),\n  ($16, $17, $18, $19, $20) ON CONFLICT DO NOTHING RETURNING \"id\"\n" rows_affected=4 rows_returned=4 elapsed=7.744ms
2023-10-24T16:54:08.032716Z DEBUG sqlx::query: summary="SELECT \"git_obj\".\"id\", \"git_obj\".\"git_id\", \"git_obj\".\"object_type\", …" db.statement="\n\nSELECT\n  \"git_obj\".\"id\",\n  \"git_obj\".\"git_id\",\n  \"git_obj\".\"object_type\",\n  \"git_obj\".\"data\",\n  \"git_obj\".\"link\"\nFROM\n  \"git_obj\"\nWHERE\n  \"git_obj\".\"git_id\" = $1\nLIMIT\n  $2\n" rows_affected=0 rows_returned=1 elapsed=7.622417ms
2023-10-24T16:54:08.033997Z DEBUG sqlx::query: summary="SELECT \"git_obj\".\"id\", \"git_obj\".\"git_id\", \"git_obj\".\"object_type\", …" db.statement="\n\nSELECT\n  \"git_obj\".\"id\",\n  \"git_obj\".\"git_id\",\n  \"git_obj\".\"object_type\",\n  \"git_obj\".\"data\",\n  \"git_obj\".\"link\"\nFROM\n  \"git_obj\"\nWHERE\n  \"git_obj\".\"git_id\" = $1\nLIMIT\n  $2\n" rows_affected=0 rows_returned=1 elapsed=2.210166ms
2023-10-24T16:54:08.039260Z DEBUG sqlx::query: summary="SELECT \"git_obj\".\"id\", \"git_obj\".\"git_id\", \"git_obj\".\"object_type\", …" db.statement="\n\nSELECT\n  \"git_obj\".\"id\",\n  \"git_obj\".\"git_id\",\n  \"git_obj\".\"object_type\",\n  \"git_obj\".\"data\",\n  \"git_obj\".\"link\"\nFROM\n  \"git_obj\"\nWHERE\n  \"git_obj\".\"git_id\" = $1\nLIMIT\n  $2\n" rows_affected=0 rows_returned=1 elapsed=6.309ms
2023-10-24T16:54:08.041271Z DEBUG sqlx::query: summary="INSERT INTO \"git_obj\" (\"id\", …" db.statement="\n\nINSERT INTO\n  \"git_obj\" (\"id\", \"git_id\", \"object_type\", \"data\", \"link\")\nVALUES\n  ($1, $2, $3, $4, $5),\n  ($6, $7, $8, $9, $10),\n  ($11, $12, $13, $14, $15),\n  ($16, $17, $18, $19, $20) ON CONFLICT DO NOTHING RETURNING \"id\"\n" rows_affected=4 rows_returned=4 elapsed=6.923625ms
await last thread
2023-10-24T16:54:08.041311Z  INFO git::internal::pack::preload: Git Object Produce thread one  time cost:19 ms
2023-10-24T16:54:08.042142Z DEBUG sqlx::query: summary="SELECT \"git_obj\".\"id\", \"git_obj\".\"git_id\", \"git_obj\".\"object_type\", …" db.statement="\n\nSELECT\n  \"git_obj\".\"id\",\n  \"git_obj\".\"git_id\",\n  \"git_obj\".\"object_type\",\n  \"git_obj\".\"data\",\n  \"git_obj\".\"link\"\nFROM\n  \"git_obj\"\nWHERE\n  \"git_obj\".\"git_id\" = $1\nLIMIT\n  $2\n" rows_affected=0 rows_returned=1 elapsed=2.714291ms
2023-10-24T16:54:08.043550Z DEBUG sqlx::query: summary="INSERT INTO \"mr\" (\"id\", …" db.statement="\n\nINSERT INTO\n  \"mr\" (\n    \"id\",\n    \"mr_id\",\n    \"git_id\",\n    \"object_type\",\n    \"created_at\"\n  )\nVALUES\n  ($1, $2, $3, $4, $5),\n  ($6, $7, $8, $9, $10),\n  ($11, $12, $13, $14, $15),\n  ($16, $17, $18, $19, $20) ON CONFLICT DO NOTHING RETURNING \"id\"\n" rows_affected=4 rows_returned=4 elapsed=517.458µs
2023-10-24T16:54:08.044486Z DEBUG sqlx::query: summary="SELECT \"git_obj\".\"id\", \"git_obj\".\"git_id\", \"git_obj\".\"object_type\", …" db.statement="\n\nSELECT\n  \"git_obj\".\"id\",\n  \"git_obj\".\"git_id\",\n  \"git_obj\".\"object_type\",\n  \"git_obj\".\"data\",\n  \"git_obj\".\"link\"\nFROM\n  \"git_obj\"\nWHERE\n  \"git_obj\".\"git_id\" = $1\nLIMIT\n  $2\n" rows_affected=0 rows_returned=1 elapsed=4.221042ms
2023-10-24T16:54:08.046163Z DEBUG sqlx::query: summary="INSERT INTO \"git_obj\" (\"id\", …" db.statement="\n\nINSERT INTO\n  \"git_obj\" (\"id\", \"git_id\", \"object_type\", \"data\", \"link\")\nVALUES\n  ($1, $2, $3, $4, $5),\n  ($6, $7, $8, $9, $10),\n  ($11, $12, $13, $14, $15),\n  ($16, $17, $18, $19, $20) ON CONFLICT DO NOTHING RETURNING \"id\"\n" rows_affected=4 rows_returned=4 elapsed=897.125µs
await last thread
2023-10-24T16:54:08.046194Z  INFO git::internal::pack::preload: Git Object Produce thread one  time cost:24 ms
thread 'tokio-runtime-worker' panicked at git/src/internal/pack/preload.rs:254:32:
called `Result::unwrap()` on an `Err` value: JoinError::Panic(Id(17), ...)
2023-10-24T16:54:08.047281Z DEBUG hyper::proto::h1::io: parsed 7 headers
2023-10-24T16:54:08.047290Z DEBUG hyper::proto::h1::conn: incoming body is content-length (6582 bytes)
2023-10-24T16:54:08.047328Z DEBUG hyper::proto::h1::conn: incoming body completed
2023-10-24T16:54:08.048259Z DEBUG git::protocol::pack: init comamnd: RefCommand { ref_name: "refs/heads/main", old_id: "4f76fa8feb4b4a2b50b168f5a2c5b064e5961185", new_id: "42b2deaedbaa3146938e24d54aa3aeeb1482f9a6", status: "ok", error_msg: "", command_type: Update }, caps:[ReportStatusv2, SideBand64k]
2023-10-24T16:54:08.048284Z  INFO git::internal::pack::preload: Start Preload git objects:40
2023-10-24T16:54:08.048288Z  INFO git::internal::pack::preload:  Preloading  git objects:0
2023-10-24T16:54:08.048491Z DEBUG sqlx::query: summary="INSERT INTO \"mr\" (\"id\", …" db.statement="\n\nINSERT INTO\n  \"mr\" (\n    \"id\",\n    \"mr_id\",\n    \"git_id\",\n    \"object_type\",\n    \"created_at\"\n  )\nVALUES\n  ($1, $2, $3, $4, $5),\n  ($6, $7, $8, $9, $10),\n  ($11, $12, $13, $14, $15),\n  ($16, $17, $18, $19, $20) ON CONFLICT DO NOTHING RETURNING \"id\"\n" rows_affected=4 rows_returned=4 elapsed=590.458µs
2023-10-24T16:54:08.050074Z  INFO git::internal::pack::preload: Preload time cost:1 ms
2023-10-24T16:54:08.050084Z  INFO git::internal::pack::preload: Decode the preload git object
***Git Type Counter Info***
commit:6  tree:3   blob:4  tag:0	ref_delta :12	ofs_dalta:15
base_object:13 	 delta_object:27 	

2023-10-24T16:54:08.050096Z  INFO git::internal::pack::preload: Deal with the object using 10 threads.
2023-10-24T16:54:08.051480Z DEBUG sqlx::query: summary="BEGIN" db.statement="" rows_affected=0 rows_returned=0 elapsed=215.5µs
2023-10-24T16:54:08.051526Z  INFO git::internal::pack::preload: thread begin : 63463
thread '2023-10-24T16:54:08.051539Z  INFO git::internal::pack::preload: thread begin : 27351
tokio-runtime-worker' panicked at git/src/internal/pack/preload.rs:489:5:
wrong delta decode
thread 'tokio-runtime-worker' panicked at git/src/internal/pack/preload.rs:489:5:
wrong delta decode
2023-10-24T16:54:08.051555Z  INFO git::internal::pack::preload: thread begin : 12791
2023-10-24T16:54:08.051571Z  INFO git::internal::pack::preload: thread begin : 28467
2023-10-24T16:54:08.051585Z  INFO git::internal::pack::preload: thread begin : 39810
thread 'tokio-runtime-worker' panicked at git/src/internal/pack/preload.rs:489:5:
wrong delta decode
2023-10-24T16:54:08.051608Z  INFO git::internal::pack::preload: thread begin : 953
2023-10-24T16:54:08.051610Z  INFO git::internal::pack::preload: thread begin : 30406
2023-10-24T16:54:08.051620Z  INFO git::internal::pack::preload: thread begin : 58736
2023-10-24T16:54:08.051628Z  INFO git::internal::pack::preload: thread begin : 18412
2023-10-24T16:54:08.051631Z  INFO git::internal::pack::preload: thread id  : 58736 run to obj :0
2023-10-24T16:54:08.051662Z  INFO git::internal::pack::preload: thread begin : 29233
thread 'tokio-runtime-worker' panicked at git/src/internal/pack/preload.rs:489:5:
wrong delta decode
thread 'tokio-runtime-worker' panicked at git/src/internal/pack/preload.rs:489:5:
wrong delta decode
2023-10-24T16:54:08.056711Z DEBUG sqlx::query: summary="SELECT \"git_obj\".\"id\", \"git_obj\".\"git_id\", \"git_obj\".\"object_type\", …" db.statement="\n\nSELECT\n  \"git_obj\".\"id\",\n  \"git_obj\".\"git_id\",\n  \"git_obj\".\"object_type\",\n  \"git_obj\".\"data\",\n  \"git_obj\".\"link\"\nFROM\n  \"git_obj\"\nWHERE\n  \"git_obj\".\"git_id\" = $1\nLIMIT\n  $2\n" rows_affected=0 rows_returned=1 elapsed=4.036209ms
2023-10-24T16:54:08.057863Z DEBUG sqlx::query: summary="SELECT \"git_obj\".\"id\", \"git_obj\".\"git_id\", \"git_obj\".\"object_type\", …" db.statement="\n\nSELECT\n  \"git_obj\".\"id\",\n  \"git_obj\".\"git_id\",\n  \"git_obj\".\"object_type\",\n  \"git_obj\".\"data\",\n  \"git_obj\".\"link\"\nFROM\n  \"git_obj\"\nWHERE\n  \"git_obj\".\"git_id\" = $1\nLIMIT\n  $2\n" rows_affected=0 rows_returned=1 elapsed=5.089041ms
thread 'tokio-runtime-worker' panicked at git/src/internal/pack/preload.rs:489:5:
wrong delta decode
2023-10-24T16:54:08.058707Z DEBUG sqlx::query: summary="SELECT \"git_obj\".\"id\", \"git_obj\".\"git_id\", \"git_obj\".\"object_type\", …" db.statement="\n\nSELECT\n  \"git_obj\".\"id\",\n  \"git_obj\".\"git_id\",\n  \"git_obj\".\"object_type\",\n  \"git_obj\".\"data\",\n  \"git_obj\".\"link\"\nFROM\n  \"git_obj\"\nWHERE\n  \"git_obj\".\"git_id\" = $1\nLIMIT\n  $2\n" rows_affected=0 rows_returned=1 elapsed=4.021459ms
2023-10-24T16:54:08.063147Z DEBUG sqlx::query: summary="INSERT INTO \"mr\" (\"id\", …" db.statement="\n\nINSERT INTO\n  \"mr\" (\n    \"id\",\n    \"mr_id\",\n    \"git_id\",\n    \"object_type\",\n    \"created_at\"\n  )\nVALUES\n  ($1, $2, $3, $4, $5),\n  ($6, $7, $8, $9, $10),\n  ($11, $12, $13, $14, $15),\n  ($16, $17, $18, $19, $20) ON CONFLICT DO NOTHING RETURNING \"id\"\n" rows_affected=4 rows_returned=4 elapsed=9.421584ms
2023-10-24T16:54:08.064063Z DEBUG sqlx::query: summary="SELECT \"git_obj\".\"id\", \"git_obj\".\"git_id\", \"git_obj\".\"object_type\", …" db.statement="\n\nSELECT\n  \"git_obj\".\"id\",\n  \"git_obj\".\"git_id\",\n  \"git_obj\".\"object_type\",\n  \"git_obj\".\"data\",\n  \"git_obj\".\"link\"\nFROM\n  \"git_obj\"\nWHERE\n  \"git_obj\".\"git_id\" = $1\nLIMIT\n  $2\n" rows_affected=0 rows_returned=1 elapsed=11.4405ms
2023-10-24T16:54:08.064947Z DEBUG sqlx::query: summary="SELECT \"git_obj\".\"id\", \"git_obj\".\"git_id\", \"git_obj\".\"object_type\", …" db.statement="\n\nSELECT\n  \"git_obj\".\"id\",\n  \"git_obj\".\"git_id\",\n  \"git_obj\".\"object_type\",\n  \"git_obj\".\"data\",\n  \"git_obj\".\"link\"\nFROM\n  \"git_obj\"\nWHERE\n  \"git_obj\".\"git_id\" = $1\nLIMIT\n  $2\n" rows_affected=0 rows_returned=1 elapsed=3.193125ms
2023-10-24T16:54:08.066778Z DEBUG sqlx::query: summary="INSERT INTO \"git_obj\" (\"id\", …" db.statement="\n\nINSERT INTO\n  \"git_obj\" (\"id\", \"git_id\", \"object_type\", \"data\", \"link\")\nVALUES\n  ($1, $2, $3, $4, $5),\n  ($6, $7, $8, $9, $10),\n  ($11, $12, $13, $14, $15),\n  ($16, $17, $18, $19, $20) ON CONFLICT DO NOTHING RETURNING \"id\"\n" rows_affected=4 rows_returned=4 elapsed=16.514125ms
await last thread
2023-10-24T16:54:08.066810Z  INFO git::internal::pack::preload: Git Object Produce thread one  time cost:44 ms
2023-10-24T16:54:08.067528Z DEBUG sqlx::query: summary="SELECT \"git_obj\".\"id\", \"git_obj\".\"git_id\", \"git_obj\".\"object_type\", …" db.statement="\n\nSELECT\n  \"git_obj\".\"id\",\n  \"git_obj\".\"git_id\",\n  \"git_obj\".\"object_type\",\n  \"git_obj\".\"data\",\n  \"git_obj\".\"link\"\nFROM\n  \"git_obj\"\nWHERE\n  \"git_obj\".\"git_id\" = $1\nLIMIT\n  $2\n" rows_affected=0 rows_returned=1 elapsed=1.620875ms
2023-10-24T16:54:08.068054Z DEBUG sqlx::query: summary="SELECT \"git_obj\".\"id\", \"git_obj\".\"git_id\", \"git_obj\".\"object_type\", …" db.statement="\n\nSELECT\n  \"git_obj\".\"id\",\n  \"git_obj\".\"git_id\",\n  \"git_obj\".\"object_type\",\n  \"git_obj\".\"data\",\n  \"git_obj\".\"link\"\nFROM\n  \"git_obj\"\nWHERE\n  \"git_obj\".\"git_id\" = $1\nLIMIT\n  $2\n" rows_affected=0 rows_returned=1 elapsed=866.625µs
2023-10-24T16:54:08.074169Z DEBUG sqlx::query: summary="INSERT INTO \"mr\" (\"id\", …" db.statement="\n\nINSERT INTO\n  \"mr\" (\n    \"id\",\n    \"mr_id\",\n    \"git_id\",\n    \"object_type\",\n    \"created_at\"\n  )\nVALUES\n  ($1, $2, $3, $4, $5),\n  ($6, $7, $8, $9, $10),\n  ($11, $12, $13, $14, $15),\n  ($16, $17, $18, $19, $20) ON CONFLICT DO NOTHING RETURNING \"id\"\n" rows_affected=4 rows_returned=4 elapsed=5.733292ms
2023-10-24T16:54:08.074343Z DEBUG sqlx::query: summary="SELECT \"git_obj\".\"id\", \"git_obj\".\"git_id\", \"git_obj\".\"object_type\", …" db.statement="\n\nSELECT\n  \"git_obj\".\"id\",\n  \"git_obj\".\"git_id\",\n  \"git_obj\".\"object_type\",\n  \"git_obj\".\"data\",\n  \"git_obj\".\"link\"\nFROM\n  \"git_obj\"\nWHERE\n  \"git_obj\".\"git_id\" = $1\nLIMIT\n  $2\n" rows_affected=0 rows_returned=1 elapsed=692.458µs
2023-10-24T16:54:08.081446Z DEBUG sqlx::query: summary="INSERT INTO \"git_obj\" (\"id\", …" db.statement="\n\nINSERT INTO\n  \"git_obj\" (\"id\", \"git_id\", \"object_type\", \"data\", \"link\")\nVALUES\n  ($1, $2, $3, $4, $5),\n  ($6, $7, $8, $9, $10),\n  ($11, $12, $13, $14, $15),\n  ($16, $17, $18, $19, $20) ON CONFLICT DO NOTHING RETURNING \"id\"\n" rows_affected=4 rows_returned=4 elapsed=5.541209ms
await last thread
2023-10-24T16:54:08.081472Z  INFO git::internal::pack::preload: Git Object Produce thread one  time cost:59 ms
2023-10-24T16:54:08.084429Z DEBUG sqlx::query: summary="INSERT INTO \"git_obj\" (\"id\", …" db.statement="\n\nINSERT INTO\n  \"git_obj\" (\"id\", \"git_id\", \"object_type\", \"data\", \"link\")\nVALUES\n  ($1, $2, $3, $4, $5),\n  ($6, $7, $8, $9, $10),\n  ($11, $12, $13, $14, $15),\n  ($16, $17, $18, $19, $20) ON CONFLICT DO NOTHING RETURNING \"id\"\n" rows_affected=4 rows_returned=4 elapsed=19.633542ms
await last thread
2023-10-24T16:54:08.084456Z  INFO git::internal::pack::preload: Git Object Produce thread one  time cost:32 ms
2023-10-24T16:54:08.086807Z DEBUG sqlx::query: summary="INSERT INTO \"mr\" (\"id\", …" db.statement="\n\nINSERT INTO\n  \"mr\" (\n    \"id\",\n    \"mr_id\",\n    \"git_id\",\n    \"object_type\",\n    \"created_at\"\n  )\nVALUES\n  ($1, $2, $3, $4, $5),\n  ($6, $7, $8, $9, $10),\n  ($11, $12, $13, $14, $15),\n  ($16, $17, $18, $19, $20) ON CONFLICT DO NOTHING RETURNING \"id\"\n" rows_affected=4 rows_returned=4 elapsed=720.375µs
2023-10-24T16:54:08.089271Z DEBUG sqlx::query: summary="INSERT INTO \"git_obj\" (\"id\", …" db.statement="\n\nINSERT INTO\n  \"git_obj\" (\"id\", \"git_id\", \"object_type\", \"data\", \"link\")\nVALUES\n  ($1, $2, $3, $4, $5),\n  ($6, $7, $8, $9, $10),\n  ($11, $12, $13, $14, $15),\n  ($16, $17, $18, $19, $20) ON CONFLICT DO NOTHING RETURNING \"id\"\n" rows_affected=4 rows_returned=4 elapsed=815.458µs
await last thread
2023-10-24T16:54:08.089301Z  INFO git::internal::pack::preload: Git Object Produce thread one  time cost:37 ms
thread 'tokio-runtime-worker' panicked at git/src/internal/pack/preload.rs:254:32:
called `Result::unwrap()` on an `Err` value: JoinError::Panic(Id(36), ...)
2023-10-24T16:54:08.091477Z DEBUG sqlx::query: summary="INSERT INTO \"mr\" (\"id\", …" db.statement="\n\nINSERT INTO\n  \"mr\" (\n    \"id\",\n    \"mr_id\",\n    \"git_id\",\n    \"object_type\",\n    \"created_at\"\n  )\nVALUES\n  ($1, $2, $3, $4, $5),\n  ($6, $7, $8, $9, $10),\n  ($11, $12, $13, $14, $15),\n  ($16, $17, $18, $19, $20) ON CONFLICT DO NOTHING RETURNING \"id\"\n" rows_affected=4 rows_returned=4 elapsed=489.625µs
2023-10-24T16:54:08.100274Z DEBUG sqlx::query: summary="INSERT INTO \"git_obj\" (\"id\", …" db.statement="\n\nINSERT INTO\n  \"git_obj\" (\"id\", \"git_id\", \"object_type\", \"data\", \"link\")\nVALUES\n  ($1, $2, $3, $4, $5),\n  ($6, $7, $8, $9, $10),\n  ($11, $12, $13, $14, $15),\n  ($16, $17, $18, $19, $20) ON CONFLICT DO NOTHING RETURNING \"id\"\n" rows_affected=4 rows_returned=4 elapsed=6.963083ms
await last thread
2023-10-24T16:54:08.100329Z  INFO git::internal::pack::preload: Git Object Produce thread one  time cost:48 ms
2023-10-24T16:54:08.103461Z DEBUG sqlx::query: summary="INSERT INTO \"mr\" (\"id\", …" db.statement="\n\nINSERT INTO\n  \"mr\" (\n    \"id\",\n    \"mr_id\",\n    \"git_id\",\n    \"object_type\",\n    \"created_at\"\n  )\nVALUES\n  ($1, $2, $3, $4, $5),\n  ($6, $7, $8, $9, $10),\n  ($11, $12, $13, $14, $15),\n  ($16, $17, $18, $19, $20) ON CONFLICT DO NOTHING RETURNING \"id\"\n" rows_affected=4 rows_returned=4 elapsed=1.382166ms
2023-10-24T16:54:08.120868Z DEBUG sqlx::query: summary="INSERT INTO \"git_obj\" (\"id\", …" db.statement="\n\nINSERT INTO\n  \"git_obj\" (\"id\", \"git_id\", \"object_type\", \"data\", \"link\")\nVALUES\n  ($1, $2, $3, $4, $5),\n  ($6, $7, $8, $9, $10),\n  ($11, $12, $13, $14, $15),\n  ($16, $17, $18, $19, $20) ON CONFLICT DO NOTHING RETURNING \"id\"\n" rows_affected=4 rows_returned=4 elapsed=15.390667ms
await last thread
2023-10-24T16:54:08.120935Z  INFO git::internal::pack::preload: Git Object Produce thread one  time cost:69 ms
git remote -v
local	http://127.0.0.1:8000/third_parts/crates/8go/matrix-commander-rs (fetch)
local	http://127.0.0.1:8000/third_parts/crates/8go/matrix-commander-rs (push)
origin	https://github.com/8go/matrix-commander-rs/ (fetch)
origin	https://github.com/8go/matrix-commander-rs/ (push)

git push local main
Enumerating objects: 55, done.
Counting objects: 100% (55/55), done.
Delta compression using up to 10 threads
Compressing objects: 100% (36/36), done.
Writing objects: 100% (40/40), 6.29 KiB | 6.29 MiB/s, done.
Total 40 (delta 27), reused 0 (delta 0), pack-reused 0
error: RPC failed; curl 52 Empty reply from server
send-pack: unexpected disconnect while reading sideband packet
fatal: the remote end hung up unexpectedly
Everything up-to-date

Build a React-Based Web Interface for Mega Project to Support Code Repository Browsing, Code Review, and Merge Request Handling

Mega lacks a user-friendly web interface for browsing the code repository, handling code reviews, and managing Merge Requests (MRs). To address this, we propose building a new web interface based on React.js, a popular JavaScript library for building user interfaces.

While Mega offers excellent version control capabilities, it currently lacks an intuitive web interface that lets users interact with the code repository easily. This limitation hinders users' ability to browse code, leave comments, and manage MRs.

The new interface would include the following features:

  1. Code Repository Browsing: Users should be able to easily navigate through the code repository, view file structures, and read the content of the files.

  2. Code Review and Commenting: Users should be able to leave comments on specific lines or blocks of code for review purposes. This feature is crucial for collaborative coding projects.

  3. Merge Request (MR) Handling: The interface should make it easy to create, review, and merge MRs. It should display the changes made in an MR, allow for comments, and provide a straightforward way to merge the MR once it is approved.

The LFS support error

When I try to import a Git repository that includes LFS objects, I encounter the following errors:

error: remote local already exists.
Locking support detected on remote "local". Consider enabling it with:
  $ git config lfs.http://127.0.0.1:8000/third_parts/crates/AlexanderThaller/ansi2png-rs.git/info/lfs.locksverify true
Uploading LFS objects:   0% (0/54), 0 B | 0 B/s, done.
batch response: Post "http://127.0.0.1:8000/third_parts/crates/AlexanderThaller/ansi2png-rs.git/info/lfs/objects/batch": EOF
error: failed to push some refs to 'http://127.0.0.1:8000/third_parts/crates/AlexanderThaller/ansi2png-rs'
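
For context, the failing request above targets the Git LFS Batch API (POST to .../info/lfs/objects/batch). Below is a minimal sketch of the JSON shapes that endpoint exchanges, written with serde; the struct and field names follow the public LFS batch specification, not Mega's actual types, so treat them as illustrative only (assumes the serde, serde_json crates).

// Sketch of the Git LFS batch API payloads (illustrative; not Mega's real types).
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
struct BatchRequest {
    operation: String,            // "upload" or "download"
    #[serde(default)]
    transfers: Vec<String>,       // e.g. ["basic"]
    objects: Vec<ObjectSpec>,
}

#[derive(Debug, Serialize, Deserialize)]
struct ObjectSpec {
    oid: String,                  // SHA-256 of the LFS object
    size: u64,
}

#[derive(Debug, Serialize, Deserialize)]
struct BatchResponse {
    transfer: String,             // usually "basic"
    objects: Vec<ObjectAction>,
}

#[derive(Debug, Serialize, Deserialize)]
struct ObjectAction {
    oid: String,
    size: u64,
    #[serde(skip_serializing_if = "Option::is_none")]
    actions: Option<serde_json::Value>, // per-object upload/download href, headers, expiry
}

fn main() {
    // Round-trip a sample request to show the expected JSON shape.
    let req = BatchRequest {
        operation: "upload".into(),
        transfers: vec!["basic".into()],
        objects: vec![ObjectSpec { oid: "abc123".into(), size: 54 }],
    };
    println!("{}", serde_json::to_string_pretty(&req).unwrap());
}

If the server closes the connection before returning a response of this shape, the client surfaces it as the "batch response ... EOF" error shown above.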

Improve the documentation for new users

The documentation could use some improvement to help new users get started.

  1. Add a quickstart guide or getting started section. This would help new users install the prerequisites, set up the software, and get off the ground quickly with a basic example. Many open source projects have a quick start guide, which new users greatly appreciate.
  2. Improve the installation and setup documentation. The current installation docs are sparse. It would be good to provide more details on the installation steps for each platform (macOS, Linux, Windows), required packages or dependencies, and any gotchas or troubleshooting tips. Screenshots and code examples would also help.
  3. Add more examples and tutorials. The project needs better examples and tutorials to help new users learn how to use the software. Some tutorials on everyday use cases and examples for beginners would make the project much more accessible.
  4. Improve the API and config file documentation. The API and configuration docs could use more details. Adding overviews, code samples, default values, and descriptions for each option would help new developers use the project more efficiently.
  5. Consider reorganizing the documentation. The current organization of the docs feels haphazard. Grouping related topics, using a standard template for each page, and providing an overview/navigation page would make the documentation easier to read and navigate.

Overall, improving and expanding the documentation is one of the best ways to help new users get started with the project. Please let me know if you would like help improving or reorganizing the documentation!

OOM with Excessive Memory Usage During Git Repository Import

I am writing to report an issue I encountered while using Mega to import a Git repository. The repository in question is https://github.com/AFLplusplus/qemu-libafl-bridge.

During the import process, I noticed that Mega was consuming excessive memory while parsing Pack files. This high memory usage ultimately triggered the Linux system's Out-Of-Memory (OOM) killer, resulting in the termination of the Mega process.

For reference, the system I am using has 16 GB of RAM, with around 13 GB typically available when idle. No specific optimizations had been applied to the system or to Mega's settings before this incident.
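
One general way to keep memory bounded while decoding a large pack is to push decoded entries through a bounded channel so the decoder blocks whenever the downstream writer falls behind. The sketch below only illustrates that backpressure pattern with the standard library's sync_channel; it is not Mega's actual pipeline, and all names are hypothetical.

// Illustrative backpressure sketch: bound the number of in-flight decoded
// objects so a fast decoder cannot exhaust memory ahead of a slow writer.
use std::sync::mpsc::sync_channel;
use std::thread;

// Hypothetical stand-in for a decoded git object.
struct DecodedObject {
    id: usize,
    data: Vec<u8>,
}

fn main() {
    // At most 1024 decoded objects may be buffered at any time.
    let (tx, rx) = sync_channel::<DecodedObject>(1024);

    let producer = thread::spawn(move || {
        for id in 0..10_000 {
            let obj = DecodedObject { id, data: vec![0u8; 4096] };
            // send() blocks once the channel is full, which is the backpressure.
            tx.send(obj).expect("consumer hung up");
        }
    });

    for obj in rx {
        // Stand-in for "write a batch of objects to the database".
        if obj.id % 1_000 == 0 {
            println!("wrote object {} ({} bytes)", obj.id, obj.data.len());
        }
    }
    producer.join().unwrap();
}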

Suggestions for improving Mega's performance

Currently, Mega handles large repositories inefficiently. Here are some potential optimizations:

  1. Adjust the LRU cache size during unpacking so that it is initialized dynamically based on the object size, for example 30% of the object size (further discussion is needed). This can help reduce cache misses and minimize the need for extensive database queries; a sketch of this idea follows the list.

  2. During unpacking, git objects are parsed, but when the mr objects are stored, the original raw data is used. Later, when the mr objects are merged, they are converted back into git objects, which causes redundant parsing. Consider returning the raw data directly during unpacking to avoid this.

  3. Maintain a database index to improve data retrieval speed and overall performance.

  4. When building the tree in memory, load all the required git objects into memory at once, instead of repeatedly querying the database.
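
As a rough sketch of suggestion 1, the snippet below sizes an LRU cache as a fraction of the pack's object count (interpreting "object size" as the number of objects recorded in the pack header). It assumes the lru crate; Mega's real cache type, key/value types, and ratio may differ.

// Illustrative only: size an LRU cache as a fraction of the pack's object count.
use lru::LruCache;
use std::num::NonZeroUsize;

const CACHE_RATIO: f64 = 0.30;      // 30% of the objects in the pack (tunable)
const MIN_CAPACITY: usize = 1_000;  // never shrink below a sane floor

fn cache_for_pack(object_count: usize) -> LruCache<String, Vec<u8>> {
    let capacity = ((object_count as f64 * CACHE_RATIO) as usize).max(MIN_CAPACITY);
    LruCache::new(NonZeroUsize::new(capacity).unwrap())
}

fn main() {
    // Example: a pack with 50,000 objects gets a 15,000-entry cache.
    let mut cache = cache_for_pack(50_000);
    cache.put("abc123".to_string(), vec![1, 2, 3]);
    assert!(cache.get("abc123").is_some());
    println!("cache capacity: {}", cache.cap());
}

Tying the capacity to the pack being processed keeps small imports cheap while giving large packs enough cache to avoid repeated database lookups for frequently referenced base objects.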
