Comments (14)
My checking GitHub query had the wrong date in its updated-since filter, so actually all the data is there, whoop whoop!
Thank you so much for the fix!
from monocle.
From the crawler log, it seems like there is an unexpected error in the GraphQL response, and the process is likely stuck retrying, which would explain why there is no data stored.
The error is:
(GQLError
  { message = "The additions count for this commit is unavailable."
  , locations = Just [Position {line = 83, column = 11}]
  , path = Just [ PropName "repository", PropName "pullRequests", PropName "nodes", PropIndex 8
                , PropName "commits", PropName "nodes", PropIndex 0, PropName "commit", PropName "additions" ]
  , errorType = Just (Custom "SERVICE_UNAVAILABLE")
  , extensions = Nothing
  } :| [])
The crawler needs a fix to handle that error, e.g. by updating the query/schema or by reporting this issue to the GitHub API bug tracker.
We could also skip such errors (using DontRetry here) to provide at least partial data, but we don't have a way to report missing data in the user interface.
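As a rough illustration of the skip idea (in Python rather than the crawler's Haskell, and with invented names like `is_skippable`), the decision could key off the error type and the failing field:

```python
# Sketch only: decide whether a GraphQL error should be skipped to salvage
# partial data, instead of being retried forever. The names and shapes here
# are illustrative, not Monocle's actual API.
SKIPPABLE_TYPES = {"SERVICE_UNAVAILABLE"}

def is_skippable(error: dict) -> bool:
    """Treat per-field availability errors (like the missing commit
    additions count above) as skippable rather than retryable."""
    return (
        error.get("type") in SKIPPABLE_TYPES
        and error.get("path", [])[-1:] in (["additions"], ["deletions"])
    )

# The error reported in the crawler log, expressed as a plain dict:
error = {
    "type": "SERVICE_UNAVAILABLE",
    "path": ["repository", "pullRequests", "nodes", 8,
             "commits", "nodes", 0, "commit", "additions"],
    "message": "The additions count for this commit is unavailable.",
}
print(is_skippable(error))  # True: skip instead of retrying forever
```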
from monocle.
That could also be a transient error from the GitHub API ("SERVICE_UNAVAILABLE"). Does it still happen?
Since we request Changes in bulk, using DontRetry would skip a lot of Changes (25, AFAIR). We could instead reduce the bulk size before retrying (as we do for the server-side timeout query), and only apply DontRetry once the bulk size reaches 1.
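A minimal sketch of that strategy, assuming a `fetch(cursor, size)` stand-in for the real GraphQL call (the names and the `RuntimeError` signal are illustrative, not Monocle's API):

```python
# Sketch: halve the bulk size on error, and only give up (DontRetry)
# once a single offending item remains.
def fetch_with_shrinking_bulk(fetch, cursor, bulk_size=25):
    while bulk_size >= 1:
        try:
            return fetch(cursor, bulk_size)
        except RuntimeError:
            if bulk_size == 1:
                return None  # DontRetry: skip this single item
            bulk_size //= 2  # retry a smaller window
    return None

# Example with a stand-in fetch that only succeeds for a single item:
def fake_fetch(cursor, size):
    if size > 1:
        raise RuntimeError("SERVICE_UNAVAILABLE")
    return ["pr-74806"]

print(fetch_with_shrinking_bulk(fake_fetch, cursor=None))  # ['pr-74806']
```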
from monocle.
I seem to be getting fewer of those errors today, but still nothing is being stored.
However, doing a manual hit to the GraphQL API, I get an error when trying to fetch 3 or more PRs.
Would it be possible to make that number configurable in the config?
If I knew Haskell I would give it a go (I would normally try to learn and do it myself, but I'm in the middle of 3 months of training and my head has become a shed).
from monocle.
I just did a test with the llvm-project repository on GitHub, and I don't have any issue fetching in bulks of 25 PRs. The crawler reduces by itself the amount of PRs it attempts to fetch when it encounters a server-side timeout. With GitHub this can happen a lot when PRs carry plenty of comments and data; for llvm that does not seem to be the case.
Regarding the other error, I see it, but I don't have a solution for now :( I need some time to experiment with solutions for that issue.
from monocle.
I've run the query in GitHub Explorer with failures as below.
So I've raised a question in the community for help: https://github.com/orgs/community/discussions/79021
GetProjectPullRequests_graphql.txt
GetProjectPullRequests_vars.txt
I'm now getting some data compared to the weekend but not everything :(
"errors": [
  {
    "type": "SERVICE_UNAVAILABLE",
    "path": [
      "repository",
      "pullRequests",
      "nodes",
      0,
      "commits",
      "nodes",
      0,
      "commit",
      "additions"
    ],
    "locations": [
      { "line": 84, "column": 15 }
    ],
    "message": "The additions count for this commit is unavailable."
  }
]
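Incidentally, the error's path pinpoints exactly which PR node in the page failed (index 0 here, PropIndex 8 in the crawler log above), so a client could at least identify the offending items. A small illustrative parser (assumed response shape, not Monocle code):

```python
def failing_pr_indices(errors):
    """Pull the pullRequests node index out of each error path,
    e.g. ["repository", "pullRequests", "nodes", 0, ...] -> 0."""
    indices = set()
    for err in errors:
        path = err.get("path", [])
        for i, part in enumerate(path[:-1]):
            if part == "nodes" and isinstance(path[i + 1], int):
                indices.add(path[i + 1])
                break  # first "nodes" index = the PR position in the page
    return sorted(indices)

errors = [{
    "type": "SERVICE_UNAVAILABLE",
    "path": ["repository", "pullRequests", "nodes", 0,
             "commits", "nodes", 0, "commit", "additions"],
    "message": "The additions count for this commit is unavailable.",
}]
print(failing_pr_indices(errors))  # [0]
```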
from monocle.
Still playing with the query: taking the additions and deletions lines out of commits/commit, I get no errors now.
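For reference, the trimmed commit selection might look something like this fragment (field names taken from the public GitHub GraphQL schema; the surrounding pullRequests query is elided):

```graphql
commits(first: 1) {
  nodes {
    commit {
      oid
      committedDate
      # additions and deletions removed here; the PR-level totals
      # are still requested higher up in the query
    }
  }
}
```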
from monocle.
Thank you very much for the feedback. It looks like working around this issue from the Monocle side is not going to be easy, as we would need to add an extra query (one without the additions/deletions request).
from monocle.
I've found an offending pull request: llvm/llvm-project#74806.
At the time of typing, not even the GitHub front end can deal with it. Is there any way to have the crawler skip a list of pull requests when this happens? I think the crawler reaches this one, stops, restarts, and gets stuck again, and of course the only pull requests that get pulled in are the ones updated since the last run.
It must have worked at some point for the review to have happened.
from monocle.
The pull request query is defined in this module, and the parameters are documented here (search for pullRequests). It does not seem possible to filter out a given PR, at least not in an effective way.
As discussed with @morucci, Monocle could be improved by reducing the query size when such an unexpected error happens and, once the size reaches one, perhaps skipping the offending item by using the endCursor provided in the error body.
The crawler indexes items in chunks of 500, so you should be getting some data by increasing the update_since parameter. Perhaps we should also interpret this error as the end of the stream, so that the crawler submits everything it got up to that point.
Thanks again for investigating this issue; it's great feedback.
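The "end of stream" interpretation could look roughly like this sketch (StreamError and fetch_page are invented stand-ins for the crawler's real types):

```python
class StreamError(Exception):
    """Stand-in for the 'unavailable field' GraphQL error."""

def crawl_all(fetch_page):
    """fetch_page(cursor) -> (items, next_cursor or None); may raise StreamError."""
    collected, cursor = [], None
    while True:
        try:
            items, cursor = fetch_page(cursor)
        except StreamError:
            break  # interpret the error as end of stream...
        collected.extend(items)
        if cursor is None:
            break
    return collected  # ...and still submit everything gathered so far
```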
from monocle.
I'm looking more and I see that there are additions and deletions at different levels, and I wondered whether they are all needed or could be inferred from data already returned in the files part:
- repository > pullRequests > nodes > additions & deletions
- repository > pullRequests > nodes > commits > nodes > additions & deletions
Taking out the second ones gives me a clean run through all 8,490 pull requests using a shabby Python script that makes the same query calls: no failures and 12 timeouts (most towards the end), versus a load of failures before, including one query that never completed, not even in the GraphQL Explorer. It took 35 minutes to run.
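On the inference question: the PR-level totals can be recomputed by summing the per-file counts that are already returned, but the per-commit numbers cannot, since each commit's diff is distinct from the PR's overall file list. A small illustration (assumed response shape, not Monocle code):

```python
def pr_totals_from_files(files):
    """Sum per-file additions/deletions to recover the PR-level totals.
    Note: this does NOT recover per-commit counts, since a commit's diff
    is not the same thing as the PR's overall file list."""
    additions = sum(f["additions"] for f in files)
    deletions = sum(f["deletions"] for f in files)
    return additions, deletions

files = [
    {"path": "a.c", "additions": 10, "deletions": 2},
    {"path": "b.c", "additions": 3, "deletions": 1},
]
print(pr_totals_from_files(files))  # (13, 3)
```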
I can't see where the data at the commits level is used (though I've not clicked everything to find it).
I can see where the top-level one and the files ones are used:
change/[ORG]@[REPO]@[CHANGEID]
As I say, I don't know Haskell, but I've managed to get a build running locally. It's taken me far too long, but hey. (Why not Python, so we can all hack 😉? Other than Python being a lot slower than Haskell, that is. I couldn't see an ADR for that one 😭 and I love to see ADRs being used!)
So if someone can point me at how to update the schema to remove the additions and deletions at the nodes > commits level, I would appreciate it; at least then I can get the data loaded up and move on to the next task :-)
Would love to learn Haskell (if only to help on the project, and to keep my techie side happy), so any pointers to help in 2024 would be awesome. I may be an Agile Coach by trade now, but I'm still a software dev at heart 😆 ❤️ 👨‍💻
from monocle.
> Why not Python
Good catch, this deserves an ADR. You can learn more about this choice here: https://changemetrics.io/posts/2021-06-01-haskell-use-cases.html . The main reason is that the language is statically typed, with an advanced type system.
> update the schema to remove the additions and deletions
The schema that pulls additions/deletions is shared by two queries, and it is defined here. If you remove these attributes, the build will fail in the PullRequests and UserPullRequests modules, and you can replace the missing term with 0 to fix the compile errors. Here are a few notes to help you:
- The GraphQL datatypes are generated from the query. You can run cabal haddock to get the module documentation, e.g. for Lentille.GitHub.PullRequests.
- The schema between the crawler and the API is defined in protobuf here (the Haskell datatypes are also generated from this definition; see the Makefile).
- The usage of the different data types is documented in this module: Monocle.Backend.Documents
> Would love to learn Haskell
I would recommend https://learn-haskell.blog/
from monocle.
@bigmadkev, the related PR is merged. We believe that the indexing issue is fixed/mitigated so please let us know if we can close that issue.
from monocle.
I'll clear my cache, let it run, and see if it's able to get back to its current state (just missing 1 pull request out of 9k+).
Cheers
from monocle.