Comments (14)
Hey @h0dgep0dge!
The main purpose of this project is to create a GitHub-contributions-like calendar for local commits. That means I don't want to always parse the commit history of all the user's projects, but to keep the history in a database-like file.
However, the new version of git-stats (not yet merged into the main branch) does support such a feature.
I'm not sure, but I guess you can install the new version like this:
$ npm i -g IonicaBizau/git-stats#v2.0.0
Then, run git-stats -h
to see the help output. I guess you're looking for the following command (/cc #55):
$ cd to/my/big/repository
$ git-stats --global-activity # or git-stats -g
Then you should see the repository-related activity.
Btw, please check #47 for more information about the new version. I would like to hear your feedback!
from git-stats.
The main purpose of this project is to create a GitHub-contributions-like calendar for local commits. That means I don't want to always parse the commit history of all the user's projects, but to keep the history in a database-like file.
This seems like a non-sequitur to me. Why would you need to keep your own commit history in your own database, when you already have a git repository that is exactly that?
Why would you need to keep your own commit history in your own database, when you already have a git repository that is exactly that?
I guess you're missing the point here. git-stats stores a global history of all the user's projects (those that are imported, at least) and automatically inserts new commits if the user wants it to.
It would be impossible to parse all the user's projects when you run git-stats, because you can't know where they are stored (e.g. not all of them are cloned locally), and it would also take a long time to parse everything.
Does it make sense?
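To make the "database-like file" idea concrete, here is a minimal sketch of what such a store could look like: a map from day to the unique commit hashes seen that day. The layout and the function name are my own illustration, not the actual ~/.git-stats format.

```javascript
// Hypothetical sketch of a "database-like file" for commit history:
// a plain object keyed by date, holding unique commit hashes.
// (Illustrative only; the real git-stats data format may differ.)
function recordCommit(store, hash, dateKey) {
  // Create the bucket for this day if it does not exist yet.
  if (!store[dateKey]) {
    store[dateKey] = [];
  }
  // Unique hashes only, so re-importing a repository is idempotent.
  if (!store[dateKey].includes(hash)) {
    store[dateKey].push(hash);
  }
  return store;
}

const store = {};
recordCommit(store, "a1b2c3", "2015-04-01");
recordCommit(store, "d4e5f6", "2015-04-01");
recordCommit(store, "a1b2c3", "2015-04-01"); // duplicate, ignored
```

With a structure like this, drawing the calendar is just a lookup per day, with no repository parsing at display time.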
It would be impossible to parse all the user's projects when you run git-stats, because you can't know where they are stored (e.g. not all of them are cloned locally), and it would also take a long time to parse everything.
It would be possible to parse all projects if they were listed in a configuration file, and that could include repos that aren't cloned locally (albeit at a large cost), and the parse-time problem could be solved with caching. I suppose it does make sense; I just think it's an architecture decision I might have made differently, and it somewhat conflicts with my understanding of the design philosophy of git itself.
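The configuration-file proposal could look something like the sketch below: an explicit list of repositories to walk on each run, each carrying the last commit already processed. None of these paths or field names exist in git-stats; they only illustrate the idea.

```javascript
// Hypothetical configuration for the proposed approach: repositories to
// track, plus the newest commit hash already parsed for each of them.
// (All paths and field names here are made up for illustration.)
const trackedRepos = [
  { path: "/home/me/projects/git-stats", lastSeen: "a1b2c3" }, // cached before
  { path: "/home/me/projects/other",     lastSeen: null }      // never parsed
];

// Repositories with no cached frontier need one full (expensive) parse;
// the rest can be updated incrementally, which is where caching pays off.
const needFullParse = trackedRepos.filter(repo => repo.lastSeen === null);
```

Repositories that aren't cloned locally would still have to be fetched before they could be walked, which is the "large cost" mentioned above.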
I would like to say again that it is a very cool tool; I just might not have done it exactly the way it's been done here.
@h0dgep0dge And then how do you solve the parsing caching? What happens if the repository doesn't have a remote and gets deleted? Such a configuration file could make sense for the git-stats-importer, but not for git-stats. Peace.
And then how do you solve the parsing caching?
You'd have cached the statistics for a particular repo, plus the last commit those stats cover. When invoked, the program would start at the given commit or branch and work backwards, collecting data until it hits the previously recorded commit. You would only get a noticeable delay if you had committed thousands of times since the last time the program was run.
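The incremental walk described here can be sketched in a few lines. Assumed names: `history` is the repo's commit hashes, newest first (as `git rev-list HEAD` would print them), and `lastSeen` is the hash recorded on the previous run.

```javascript
// Minimal sketch of incremental parsing: walk the history newest-first
// and stop as soon as we reach the commit cached on the previous run.
// (Names are illustrative; this is not git-stats code.)
function newCommitsSince(history, lastSeen) {
  const fresh = [];
  for (const hash of history) {
    if (hash === lastSeen) break; // reached the cached frontier, stop
    fresh.push(hash);
  }
  return fresh;
}

// First run: no cache yet, so everything is parsed once.
newCommitsSince(["c3", "c2", "c1"], null);       // ["c3", "c2", "c1"]
// Later runs only touch what is new since "c2".
newCommitsSince(["c4", "c3", "c2", "c1"], "c2"); // ["c4", "c3"]
```

After processing, `lastSeen` would be updated to the newest hash, so the next run does almost no work.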
What happens if the repository doesn't have a remote and gets deleted?
fatal: repository 'your/repo' does not exist
If the user has deleted the repo, then either they meant to do it, they can clone it back from somewhere else, or they deleted their only copy by accident. In each of these cases, either it's fine, or the user is an idiot with bigger problems than whether git-stats works properly.
Such a configuration file could make sense for the git-stats-importer, but not for git-stats
Why wouldn't a configuration file make sense? Either way you need to store data, and it seems more straightforward to have the user run git-stats track this/repository than to configure each of their repositories with a hook into git-stats. Do most git users even know what hooks are?
Peace.
... not sure if friendly or arrogant ...
And I'm curious where you would store that cache. Probably in a file, right? For speed and functionality, I chose to store all the commits (unique hashes) in a data store file (default: ~/.git-stats).
Parsing tasks should be done in the importer, not here. The git-stats project only takes the input data (in the case of the calendar, the data file) and draws the graph.
No worries, I'm friendly.
What's the difference between reading all the commits from ~/.git-stats and reading all the commits from the repository itself? Isn't it just redundant?
Time. Here are my stats since I started using git: [screenshot of the contribution calendar]
Really, imagine a tool parsing 18k commits every time you run it... As I said before, git-stats runs by default across repositories (not just one).
However, in the case of a single repository (git-stats -g), the data comes from the repo history (not from ~/.git-stats).
imagine a tool parsing 18k commits every time you run it
And like I said before, it wouldn't need to process all of them at once. It would only process the commits made since the last time you ran the program.
Btw, why do you not like keeping the history in a file? The nice thing about it is that I can just copy the file to another machine I'm working on and see the same stats.
I don't have a problem with keeping the history in a file, I have a problem with creating and working off a totally redundant history. Sure, you can copy the file and have the same stats, but you could just copy the repositories and have the same stats.
All I'm saying is that I would have done it differently. If you're trying to get me to talk you out of your architecture, how about the fact that it's centralized? I work on several different machines, my PC, a laptop, and I sometimes use cloud services when I don't have access to my own machines. As it stands, git-stats wouldn't be able to track all of my work, but it would be able to if it read the data out of the repositories themselves.
As I mentioned before, this is the way I implemented it; it was the simplest and cleanest way I could find (independent of any database and still fast enough).
Hope you like the new release!