
auto-commit's Introduction


Automagically-generated commit messages

A CLI tool that generates commit messages from your staged changes, built in Rust using OpenAI's GPT-3.5.

Installation

You can install auto-commit by running the following command in your terminal.

curl -fsSL https://raw.githubusercontent.com/m1guelpf/auto-commit/main/install.sh | sh -

Or, if you're an Arch user, you can install it from the AUR using

yay -S auto-commit

You may need to close and reopen your terminal after installation. Alternatively, you can download the binary corresponding to your OS from the latest release.

Usage

auto-commit uses GPT-3.5. To use it, grab an API key from your dashboard and save it to the OPENAI_API_KEY environment variable as follows (you can also add it to your bash/zsh profile for persistence between sessions).

export OPENAI_API_KEY='sk-XXXXXXXX'

Once you have configured your environment, stage some changes by running, for example, git add ., and then run auto-commit.
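Under the hood, a tool like this typically reads the staged changes by shelling out to git. The sketch below illustrates that general approach; it is not auto-commit's actual code, and the function name is hypothetical.

```rust
use std::process::Command;

/// Collect the staged diff via `git diff --cached`.
/// Returns None when git fails or when nothing is staged.
fn staged_diff() -> Option<String> {
    let output = Command::new("git")
        .args(["diff", "--cached"])
        .output()
        .ok()?;
    if !output.status.success() {
        return None;
    }
    let diff = String::from_utf8_lossy(&output.stdout).into_owned();
    if diff.trim().is_empty() {
        None
    } else {
        Some(diff)
    }
}
```

Whatever this returns is what ends up embedded in the prompt sent to the model, which is why staging changes first is required.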

Of course, auto-commit also includes some options, such as editing the message before committing or just printing the message to the terminal.

$ auto-commit --help
Automagically generate commit messages.

Usage: auto-commit [OPTIONS]

Options:
  -v, --verbose...  More output per occurrence
  -q, --quiet...    Less output per occurrence
      --dry-run     Output the generated message, but don't create a commit.
  -r, --review      Edit the generated commit message before committing.
  -h, --help        Print help information
  -V, --version     Print version information

Develop

Make sure you have the latest version of Rust installed (use rustup). Then, you can build the project by running cargo build, and run it with cargo run.

License

This project is open-sourced under the MIT license. See the LICENSE file for more information.

auto-commit's People

Contributors

andrewsb, lucemans, m1guelpf, orgads, thedevminertv


auto-commit's Issues

Read `OPENAI_API_KEY` from config

I've used a handful of tools that use codex, and so far most have been able to read the API key from a file as well as from the environment. This is pretty convenient, since I've been able to point a few of them at the same ~/.config/openaiapirc file, while others have their own folder within ~/.config.
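A common way to support this is to check the environment first and then fall back to a shared config file. The sketch below shows one possible shape; the `key=value` layout assumed for ~/.config/openaiapirc is an illustration, and the real format used by other codex tools may differ.

```rust
use std::env;
use std::fs;
use std::path::PathBuf;

/// Resolve the OpenAI API key: prefer the environment variable,
/// then fall back to a shared config file (assumed `key=value` lines).
fn resolve_api_key() -> Option<String> {
    if let Ok(key) = env::var("OPENAI_API_KEY") {
        if !key.is_empty() {
            return Some(key);
        }
    }
    let mut path = PathBuf::from(env::var("HOME").ok()?);
    path.push(".config");
    path.push("openaiapirc");
    let contents = fs::read_to_string(path).ok()?;
    contents
        .lines()
        .filter_map(|line| line.split_once('='))
        .find(|(k, _)| k.trim() == "OPENAI_API_KEY")
        .map(|(_, v)| v.trim().to_string())
}
```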

Error when running

auto-commit: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory

auto-commit does not auto-install

m1:auto-commit nv$ curl -fsSL https://raw.githubusercontent.com/m1guelpf/auto-commit/main/install.sh | sh -
https://github.com/m1guelpf/auto-commit/releases/latest/download/auto-commit-darwin-aarch64
downloading latest binary
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 5023k  100 5023k    0     0  1009k      0  0:00:04  0:00:04 --:--:-- 1147k
installed - Auto Commit 0.1.5
m1:auto-commit nv$ auto-commit --help
-bash: auto-commit: command not found
m1:auto-commit nv$

Long files cause thread panic

I understand this is a limitation of OpenAI, but I wonder if it's possible to work around (e.g., send an abridged copy). If not, I think it would be a good idea to fail gracefully and maybe make a note in the README.

Thanks for your work!

(venv) C:\[redacted] [feature/[redacted] ↑1 +0 ~3 -0 | +0 ~1 -0 !]> auto-commit --verbose
Loading Data...
🧑⚽️       🧑  Analyzing Codebase...[2023-02-22T00:06:16Z DEBUG openai_api] Request: Request { method: Post, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("api.openai.com")), port: None, path: "/v1/engines/code-davinci-002/completions", query: None, fragment: None }, headers: {"content-type": "application/json"}, version: None, body: Body { reader: "<hidden>", length: Some(50550), bytes_read: 0 }, local_addr: None, peer_addr: None, ext: Extensions, trailers_sender: Some(Sender { .. }), trailers_receiver: Some(Receiver { .. }), has_trailers: false }
[2023-02-22T00:06:16Z DEBUG hyper::client::connect::dns] resolving host="api.openai.com"
[2023-02-22T00:06:16Z DEBUG hyper::client::connect::http] connecting to 52.152.96.252:443
[2023-02-22T00:06:17Z DEBUG hyper::client::connect::http] connected to 52.152.96.252:443
🧑   ⚽️     🧑  Analyzing Codebase...[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::io] flushed 210 bytes
[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::io] flushed 16384 bytes
[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::io] flushed 16384 bytes
[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::io] flushed 16384 bytes
[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::io] flushed 1398 bytes
🧑       ⚽️🧑   Analyzing Codebase...[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::io] read 722 bytes
[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::io] parsed 11 headers
[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::conn] incoming body is content-length (297 bytes)
[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::conn] incoming body completed
[2023-02-22T00:06:17Z DEBUG openai_api] Response: Response { response: Response { status: BadRequest, headers: {"openai-model": "code-davinci-002", "openai-organization": "user-[redacted]", "access-control-allow-origin": "*", "content-length": "297", "strict-transport-security": "max-age=15724800; includeSubDomains", "x-request-id": "[redacted]", "date": "Wed, 22 Feb 2023 00:06:17 GMT", "content-type": "application/json", "connection": "keep-alive", "openai-processing-ms": "262", "openai-version": "2020-10-01"}, version: Some(Http1_1), has_trailers: false, trailers_sender: Some(Sender { .. }), trailers_receiver: Some(Receiver { .. }), upgrade_sender: Some(Sender { .. }), upgrade_receiver: Some(Receiver { .. }), has_upgrade: false, body: Body { reader: "<hidden>", length: Some(297), bytes_read: 0 }, ext: Extensions, local_addr: None, peer_addr: None } }
[2023-02-22T00:06:17Z DEBUG hyper::client::pool] pooling idle connection for ("https", api.openai.com)
thread 'main' panicked at 'Couldn't complete prompt.: Api(ErrorMessage { message: "This model's maximum context length is 8001 tokens, however you requested 14731 tokens (12731 in your prompt; 2000 for the completion). Please reduce your prompt; or completion length.", error_type: "invalid_request_error" })', src\main.rs:137:10
stack backtrace:
   0:     0x7ff75a3c197f - <unknown>
   1:     0x7ff75a3e219a - <unknown>
   2:     0x7ff75a3b99e9 - <unknown>
   3:     0x7ff75a3c3f5b - <unknown>
   4:     0x7ff75a3c3bd5 - <unknown>
   5:     0x7ff75a3c4509 - <unknown>
   6:     0x7ff75a3c440d - <unknown>
   7:     0x7ff75a3c25b7 - <unknown>
   8:     0x7ff75a3c40e9 - <unknown>
🧑      ⚽️  🧑  Analyzing Codebase...   9:     0x7ff75a3f4b45 - <unknown>
  10:     0x7ff75a3f4cc3 - <unknown>
  11:     0x7ff75a131bd9 - <unknown>
  12:     0x7ff75a11a0bb - <unknown>
  13:     0x7ff75a127e7d - <unknown>
  14:     0x7ff75a114a39 - <unknown>
  15:     0x7ff75a11d537 - <unknown>
  16:     0x7ff75a133fb6 - <unknown>
  17:     0x7ff75a1241ec - <unknown>
  18:     0x7ff75a3b39eb - <unknown>
  19:     0x7ff75a11d6d7 - <unknown>
  20:     0x7ff75a3e995c - <unknown>
  21:     0x7fff94527614 - BaseThreadInitThunk
  22:     0x7fff955a26a1 - RtlUserThreadStart
🧑     ⚽️   🧑  Analyzing Codebase...
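One workaround along the lines suggested here is to truncate the staged diff before sending it, so the prompt stays under the context limit. The sketch below is not part of auto-commit itself, and the ~4-characters-per-token ratio is a rough assumption; an exact count would require the model's tokenizer.

```rust
/// Truncate a diff so the prompt stays under the model's context limit.
/// `max_tokens` is a budget for the prompt portion of the request.
fn truncate_diff(diff: &str, max_tokens: usize) -> String {
    // Rough heuristic: ~4 characters per token.
    let max_chars = max_tokens * 4;
    if diff.len() <= max_chars {
        return diff.to_string();
    }
    // Cut on a line boundary so we don't send half a diff hunk.
    let mut out = String::new();
    for line in diff.lines() {
        if out.len() + line.len() + 1 > max_chars {
            break;
        }
        out.push_str(line);
        out.push('\n');
    }
    out.push_str("... [diff truncated to fit the context window]\n");
    out
}
```

Failing that, catching the API error and printing a short hint (instead of panicking) would at least make the failure graceful.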

`invalid_request_error`

m1:ubiquity-dollar nv$ auto-commit
Loading Data...
🙈  Analyzing Codebase...thread 'main' panicked at 'Couldn't complete prompt.: Api(ErrorMessage { message: "That model does not exist", error_type: "invalid_request_error" })', src/main.rs:137:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
🙉  Analyzing Codebase...m1:ubiquity-dollar nv$
m1:ubiquity-dollar nv$ auto-commit --verbose
There are no staged files to commit.
Try running `git add` to stage some files.
Loading Data...
🙈  Analyzing Codebase...[2022-11-01T08:55:12Z DEBUG openai_api] Request: Request { method: Post, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("api.openai.com")), port: None, path: "/v1/engines/code-davinci-002/completions", query: None, fragment: None }, headers: {"content-type": "application/json"}, version: None, body: Body { reader: "<hidden>", length: Some(2686), bytes_read: 0 }, local_addr: None, peer_addr: None, ext: Extensions, trailers_sender: Some(Sender { .. }), trailers_receiver: Some(Receiver { .. }), has_trailers: false }
[2022-11-01T08:55:12Z DEBUG hyper::client::connect::dns] resolving host="api.openai.com"
[2022-11-01T08:55:12Z DEBUG hyper::client::connect::http] connecting to 52.152.96.252:443
[2022-11-01T08:55:12Z DEBUG hyper::client::connect::http] connected to 52.152.96.252:443
🙉  Analyzing Codebase...[2022-11-01T08:55:13Z DEBUG hyper::proto::h1::io] flushed 209 bytes
[2022-11-01T08:55:13Z DEBUG hyper::proto::h1::io] flushed 2686 bytes
[2022-11-01T08:55:13Z DEBUG hyper::proto::h1::io] read 439 bytes
[2022-11-01T08:55:13Z DEBUG hyper::proto::h1::io] parsed 7 headers
[2022-11-01T08:55:13Z DEBUG hyper::proto::h1::conn] incoming body is content-length (158 bytes)
[2022-11-01T08:55:13Z DEBUG hyper::proto::h1::conn] incoming body completed
[2022-11-01T08:55:13Z DEBUG hyper::client::pool] pooling idle connection for ("https", api.openai.com)
[2022-11-01T08:55:13Z DEBUG openai_api] Response: Response { response: Response { status: NotFound, headers: {"connection": "keep-alive", "strict-transport-security": "max-age=15724800; includeSubDomains", "content-length": "158", "date": "Tue, 01 Nov 2022 08:55:13 GMT", "content-type": "application/json; charset=utf-8", "vary": "Origin", "x-request-id": "7dd1ef5d977c0dae864d89fbcbeeaa37"}, version: Some(Http1_1), has_trailers: false, trailers_sender: Some(Sender { .. }), trailers_receiver: Some(Receiver { .. }), upgrade_sender: Some(Sender { .. }), upgrade_receiver: Some(Receiver { .. }), has_upgrade: false, body: Body { reader: "<hidden>", length: Some(158), bytes_read: 0 }, ext: Extensions, local_addr: None, peer_addr: None } }
thread 'main' panicked at 'Couldn't complete prompt.: Api(ErrorMessage { message: "That model does not exist", error_type: "invalid_request_error" })', src/main.rs:137:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
🙊  Analyzing Codebase...m1:ubiquity-dollar nv$

With RUST_BACKTRACE=full

m1:ubiquity-dollar nv$ auto-commit
Loading Data...
🌎  Analyzing Codebase...thread 'main' panicked at 'Couldn't complete prompt.: Api(ErrorMessage { message: "That model does not exist", error_type: "invalid_request_error" })', src/main.rs:137:10
stack backtrace:
   0:        0x100803cb8 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h1543c132bc4e188c
   1:        0x10082077c - core::fmt::write::hda8e8eb84b49cbfc
   2:        0x1007fe498 - std::io::Write::write_fmt::hb84c8996aec7120c
   3:        0x1008054c4 - std::panicking::default_hook::{{closure}}::hdf06011cb093de6a
   4:        0x100805228 - std::panicking::default_hook::hd7ceb942fff7b170
   5:        0x10080595c - std::panicking::rust_panic_with_hook::h053d4067a63a6fcb
   6:        0x100805890 - std::panicking::begin_panic_handler::{{closure}}::hea9e6c546a23e8ff
   7:        0x100804194 - std::sys_common::backtrace::__rust_end_short_backtrace::hd64e012cf32134c6
   8:        0x1008055e8 - _rust_begin_unwind
   9:        0x10083422c - core::panicking::panic_fmt::hbfde5533e1c0592e
  10:        0x100834318 - core::result::unwrap_failed::h68832e989a8867c1
  11:        0x1005f5544 - auto_commit::main::{{closure}}::hde9ffac744f15d7c
  12:        0x1005d8a78 - std::thread::local::LocalKey<T>::with::h8299edffc48b47fb
  13:        0x1005e1498 - tokio::runtime::enter::Enter::block_on::h8cd42799fe53fdaa
  14:        0x1005eb9d4 - tokio::runtime::context::enter::hd622a04884cced71
  15:        0x1005dc83c - tokio::runtime::handle::Handle::enter::hb597e6521843e9f1
  16:        0x1005e26a4 - auto_commit::main::h54076be9b6549311
  17:        0x1005fd8c8 - std::sys_common::backtrace::__rust_begin_short_backtrace::h27a8d6ce065a0fc5
  18:        0x1005e84cc - std::rt::lang_start::{{closure}}::h901a2890ae649abe
  19:        0x1007f976c - std::rt::lang_start_internal::hef2161f9571a51d7
  20:        0x1005e2768 - _main
🌍  Analyzing Codebase...m1:ubiquity-dollar nv$

How to reduce prompt/completion length?

auto-commit
Loading Data...
▗ Analyzing Codebase...thread 'main' panicked at 'Couldn't complete prompt.: Api(ErrorMessage { message: "This model's maximum context length is 8001 tokens, however you requested 11523 tokens (9523 in your prompt; 2000 for the completion). Please reduce your prompt; or completion length.", error_type: "invalid_request_error" })', src/main.rs:137:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
▖ Analyzing Codebase...%

I assume my codebase and commit history are too big and exceed OpenAI's limits. Is there a way to reduce the codebase analysis?

Analyzing Codebase...thread 'main' panicked at 'Couldn't complete prompt.: AsyncProtocol(invalid type: null, expected a string at line 1 column 233)', src\main.rs:137:10

I just added README.md, and tried auto-commit.

$ git status
On branch main
Your branch is up to date with 'origin/main'.

Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        modified:   README.md

$ auto-commit
Loading Data...
🎄 Analyzing Codebase...thread 'main' panicked at 'Couldn't complete prompt.: AsyncProtocol(invalid type: null, expected a string at line 1 column 233)', src\main.rs:137:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
🌲 Analyzing Codebase...

$ 

auto-commit v0.1.5 fails with 'Couldn't complete prompt' error on macOS

Summary:
Auto Commit 0.1.5 encounters a runtime error when analyzing a codebase on macOS (Darwin-aarch64). The error message suggests that the required model 'code-davinci-002' does not exist.

Installed version:
Auto Commit 0.1.5 (Darwin-aarch64)

Installation method:
Installed using the following command:

curl -fsSL https://raw.githubusercontent.com/m1guelpf/auto-commit/main/install.sh | sh -

Error Message

Loading Data...
โ–   โ ˆ    โ–Œ Analyzing Codebase...thread 'main' panicked at 'Couldn't complete prompt.: Api(ErrorMessage { message: "The model: `code-davinci-002` does not exist", error_type: "invalid_request_error" })', src/main.rs:137:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
โ–    โ ‚   โ–Œ Analyzing Codebase...

Steps to reproduce:

  • Install Auto Commit 0.1.5 on macOS (Darwin-aarch64) using the provided installation command.
  • Run auto-commit in a terminal.
  • Observe the error message displayed during the "Analyzing Codebase" step.

Expected behavior:

Auto Commit should analyze the codebase without encountering any errors.

Actual behavior:

Auto Commit fails with a 'Couldn't complete prompt' error related to a missing model 'code-davinci-002'.
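Rather than the panic at src/main.rs:137, the API result could be matched and reported gracefully. The sketch below only illustrates the idea; `ApiError` and `complete_prompt` are hypothetical stand-ins, not the tool's real types or API calls.

```rust
/// Stand-in for the API client's error type (hypothetical).
#[derive(Debug)]
struct ApiError(String);

/// Stub standing in for the real completion request; simulates the
/// failure path for a deprecated model like code-davinci-002.
fn complete_prompt(prompt: &str) -> Result<String, ApiError> {
    if prompt.is_empty() {
        return Err(ApiError(
            "The model `code-davinci-002` does not exist".into(),
        ));
    }
    Ok(format!("Update {prompt}"))
}

/// Surface a readable error message instead of unwrapping and panicking.
fn generate_message(prompt: &str) -> Result<String, String> {
    complete_prompt(prompt)
        .map_err(|ApiError(reason)| format!("auto-commit: request failed: {reason}"))
}
```

The caller can then print the error and exit non-zero instead of dumping a backtrace at the user.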

Common installation expectations

Hi! Thanks for this cool project.

I discovered, and installed this tool from the AUR, and after installation, I couldn't identify what files had been installed as a result.
Files installed via pacman, or from the AUR via a helper are typically discoverable by running pacman -Qql auto-commit, where you'd see the path of the new binary/lib/service/doc as a result of the installation, but that query returned nothing after installing auto-commit.

I looked at the PKGBUILD and saw that it was executing install.sh during the package() step. This works fine, but results in some unexpected behavior:

  • New files added to the system are not known to pacman, so the user cannot query them, nor can the files be removed during an uninstall
  • The install.sh script suggests placing the executable binary in ~/.bin, which is not a standard directory (though maybe common? I've always used a ~/bin, and haven't seen ~/.bin used before)
  • Running makepkg on the PKGBUILD file locally results in a failure, as it appears to contain some errors
  • Installing this from the AUR resulted in a pre-compiled binary being installed, which wasn't what I was expecting (though you could say that I should probably be reviewing every PKGBUILD before installing them)

I normally expect the AUR package name to inform me as to what will be executed:

  • <pkg-name>: I'll be building a tarball of the latest release
  • <pkg-name>-git: I'll be building from the HEAD of the main branch
  • <pkg-name>-bin: I'll be pulling a pre-built binary from the latest release
  • <pkg-name>-bin-git/<pkg-name>-nightly(-bin): I'll be pulling a pre-built binary from some build process that happens more frequently than tagged releases (pre-release/dev/nightly)

I'm not at all suggesting that any of those things are hard rules or anything, but rather I just wanted to open the conversation.
I created a draft PR that contains some possible configurations to see if you felt there was a particular one that you might favor, or all if you'd like. #10
I didn't do any workflow modification since this was more of an exploratory effort.

Just come up with a single line

Is it possible to make auto-commit create a single-line commit message? The second line is usually inaccurate and redundant for me.
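For illustration, a hypothetical `--single-line` option could simply keep the first line of whatever the model returns; this flag does not exist in auto-commit today, and the function name is made up.

```rust
/// Keep only the subject line of a generated commit message,
/// dropping the body that follows the first newline.
fn subject_only(message: &str) -> &str {
    message.lines().next().unwrap_or("").trim()
}
```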

what does --review do?

This is really cool Miguel, thanks for creating it.

What is the -r option supposed to do?

[Screenshot: Screen Shot 2022-12-31 at 11.38.19 PM]
