backstrokeapp / server
:swimmer: A Github bot to keep repository forks up to date with their upstream.
Home Page: https://backstroke.co
License: MIT License
When a user changes which repos a link points to, the webhooks on the old repos aren't deleted.
Currently I use https://ifttt.com/ to trigger the hook every hour. If Backstroke could do it automatically, it would be great.
Solution: only update the link edit view when navigating to a new link edit page.
I'm getting permissions issue when using this on a private fork of a private repo (hook is on the child, not the parent). Is that expected to work?
This is the message
Status 200
Uhh, error: {"message":"Validation Failed","errors":[{"message":"The listed users and repositories cannot be searched either because the resources do not exist or you do not have permission to view them.","resource":"Search","field":"q","code":"invalid"}],"documentation_url":"https://developer.github.com/v3/search/"}
When trying to create a new link, I can't find the master branch in the list of branches. Further, the save button stays disabled. Am I missing something?
I never requested getting unsolicited noise like this: https://github.com/whitequark/conda-build/pull/1
I changed the remote branch, from master to st3, but it keeps asking to merge from master.
The upstream was: BoundInCode/AutoFileName on branch master
The downstream is: evandrocoan/AutoFileName master
Now:
The upstream was: BoundInCode/AutoFileName on branch st3
The downstream is: evandrocoan/AutoFileName master
But when I call the hook URL, it keeps creating merge commits from the master branch, instead of st3.
Of course I saved the changes. evandrocoan/AutoFileName#5
Github has this concept of a "network", which provides an abstraction to explain related repositories. For example, an upstream and a fork or the upstream are in the same network. An annoying limitation of the way that forks work is that a given user account can only have one repository of a given network contained within - this means that I can't have a fork and an upstream in the same user account, for example.
Often, people get around this issue by duplicating a repository, which circumvents Github's rule by not creating a "link" in the network between the duplicated repository and the original repository. Since pull requests can only be created within the bounds of a network, this is a big problem for Backstroke users who want to sync repositories that are in "different" networks but actually share a history.
Historically, I've avoided this problem because it requires a lot of extra book-keeping. I think, though, that I've found a less complex way of approaching the problem.
Let's say we want to sync 1egoman/foo's changes to 1egoman/bar:

1. Fork 1egoman/bar to backstroke-bot/bar, creating a new node in 1egoman/bar's network that the bot has write access to.
2. Clone 1egoman/foo's contents to a temporary directory on the local system (let's say /tmp/contents_of_foo).
3. Push /tmp/contents_of_foo to the repository backstroke-bot/bar, effectively copying the contents of 1egoman/foo into backstroke-bot/bar.
4. Open a pull request from backstroke-bot/bar into 1egoman/bar.

So far, I've written a proof of concept of this process that works. I plan to provide updates here as I make progress on this issue.
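As a sketch, steps 2 and 3 above boil down to two git commands; the helper below (planSync is a hypothetical name, not part of Backstroke) just builds that command plan, since the fork and pull request steps go through the GitHub API:

```javascript
// Hypothetical sketch of steps 2 and 3 above: build the git commands that
// copy the upstream's contents into the bot's fork. The fork (step 1) and
// the pull request (step 4) would go through the GitHub API instead.
function planSync(upstream, botFork, tmpDir) {
  return [
    // Step 2: clone the upstream's contents to a temporary directory
    `git clone https://github.com/${upstream} ${tmpDir}`,
    // Step 3: force-push those contents into the bot's fork, which lives
    // in the target repository's network
    `git -C ${tmpDir} push --force https://github.com/${botFork} master`,
  ];
}

// With the example repositories from the text:
const commands = planSync('1egoman/foo', 'backstroke-bot/bar', '/tmp/contents_of_foo');
```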
Awesome project! Definitely interested in using this for gitlab
I have a fork of user1/repo1/ (of which I am not the owner or a contributor) in my user/repo1 repository.
Within my user/repo1 repository I created a classic Backstroke hook in Settings... Webhooks, with Payload URL = http://backstroke.us/ .
When a new commit comes in, I am not receiving any new pull requests because there is an error in a Recent Delivery in the webhook log:
{
  "message": "Validation Failed",
  "errors": [{
    "resource": "PullRequest",
    "code": "custom",
    "field": "fork_collab",
    "message": "fork_collab Fork collab can't be granted by someone without permission"
  }],
  "documentation_url": "https://developer.github.com/v3/pulls/#create-a-pull-request"
}
Based on the instructions in the readme page, it should work.
Am I missing something?
I have the web hook configured on a repository but there's an error 413 in my "Recent Deliveries". The full output from the request and response, headers and body, can be found on this gist: https://gist.github.com/punkstar/405e8d684aa7bc733e5cdc3bbca3dad4
I am trying to use Backstroke to auto-sync a fork of a repository I do not have permission to create a webhook on, i.e. repo a (no permission) -> repo b (my fork).
I think I can use IFTTT to monitor the upstream commit RSS feed and trigger the Backstroke link but I am unable to create the link as I get an error creating the webhook on the upstream repo.
Is there a way to ignore this? Is there a better solution?
On the repository:
It always creates an empty pull request every time I run the update link:
curl -X POST https://backstroke.us/_dfasdf1asd1f3dsf32df.....
I had pushed a commit to that repository in the past, and now it is coming back as my pull request merged into that repository:
I have a "fork" of my own public repository as a public repository (same GitHub account). As GitHub does not allow this, I just cloned it again, changed the remote master URL to the "fork" repository, and then added an upstream remote URL.
Now backstroke does not accept this "fork", because it does not detect it as a fork.
There should be a checkbox to just merge whatever I want.
In the new look and feel of Backstroke I could not find the webhook URI to copy and paste into the upstream's settings.
I suspect it does not actually work automatically...
@francescobianco
Right now, the build script can only be used in production. In order to be used in staging, we have to manually switch the BACKSTROKE_SERVER value in build:production to the correct value.
Currently, a lot of Backstroke links are added to the queue even though they don't actually have an update that needs to be made. I need to debug this.
Right now, this leaves a weird flash when moving from one link to another.
Backstroke needs a new build process. Currently, it looks like:

1. Run gulp locally
2. git push heroku master

Ideally, CI could do the deploy without a local build step (see #10)
I'm working on a rewrite of Backstroke. This has been a long time coming (over 6 months!) but I feel that it makes the system much more stable and predictable. In its current state, deploying updates to the live system is a challenge (and as a consequence, I haven't done it for months.) This isn't something I'm all that good at, so I'd love for anyone more experienced than me to let me know what I'm doing right and what I'm doing wrong.
What currently exists is deployed on Heroku on a free dyno, using an mlab sandbox database.
In general, I want to try to split the system into a number of smaller services. One of the biggest changes involves link updates - the current plan is to stick all link update operations into a queue with workers at the end that perform the actual updates. As a consequence, the response to curl -X POST https://backstroke.us/_linkid will return something like this:
{
  "status": "ok",
  "enqueuedAs": "id-of-thing-in-queue-here"
}
And then, to get the status of the webhook operation, make a call to https://api.backstroke.us/v1/operations/id-of-thing-in-queue-here, which returns something like this:
{
  "status": "ok",
  "startedAt": "2017-09-01T11:26:06.722Z",
  "finishedAt": "2017-09-01T11:28:06.722Z",
  "output": {
    "many": true
    // anything else returned by the worker
  }
}
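As a client-side sketch (the helper name is hypothetical), the status URL can be derived directly from the enqueue response above:

```javascript
// Hypothetical client helper: build the operation-status URL from the JSON
// returned when POSTing to a link's webhook URL.
function operationStatusUrl(enqueueResponse) {
  return `https://api.backstroke.us/v1/operations/${enqueueResponse.enqueuedAs}`;
}

// A client would POST to the webhook, then poll this URL until the returned
// operation JSON contains a finishedAt timestamp.
const url = operationStatusUrl({ status: 'ok', enqueuedAs: 'id-of-thing-in-queue-here' });
```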
The other large change is less of a reliance on webhooks. They are a side effect that is a pain to manage. Currently, links store two values: the last updated timestamp and the last known SHA at the head of the upstream's branch. Every couple of minutes, a timer runs in the background that finds all links that haven't been updated in 10 minutes (in this way, link updates are staggered so only a subset of all links is updated every couple of minutes). If a link hasn't been updated in 10 minutes, then the SHA of the upstream branch is checked, and if it differs from the stored SHA, an automatic link update is added to the queue. Currently, this functionality lives in the api.backstroke.us service below, but once that service has to be scaled past one instance, that functionality would probably be extracted to another service.
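A minimal sketch of that background sweep, with assumed field names (lastUpdatedAt, lastKnownSha) for the two values each link stores:

```javascript
// Sketch of the background timer logic described above. Field names are
// assumptions; the real schema may differ.
const TEN_MINUTES = 10 * 60 * 1000;

// Stagger updates: only links not touched in the last 10 minutes are checked.
function linksDueForCheck(links, now) {
  return links.filter(link => now - link.lastUpdatedAt >= TEN_MINUTES);
}

// Enqueue an update only when the upstream branch head has actually moved.
function needsUpdate(link, currentUpstreamSha) {
  return link.lastKnownSha !== currentUpstreamSha;
}
```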
Services in green are ones that I have already set up and services in red are ones that haven't been written yet:
NOTE: All green services are actually deployed. Check them out! :) Things may change though, so don't be surprised if I clear the database or something.
backstroke.surge.sh - The new website. Code can be found here. I think it more accurately portrays Backstroke with its upcoming changes.
legacy.backstroke.us - Many people are still using Backstroke Classic. To maintain backward compatibility, I need to run a service that emulates the old behavior. This still needs to be written.
backstroke.us (nginx) - A reverse proxy to run at backstroke.us, directing all POST requests to legacy.backstroke.us and all GET requests to backstroke.surge.sh. Required to keep Backstroke Classic working.
app.backstroke.us - The new dashboard. It simplifies the process of link management significantly. Screenshots and code are here.
api.backstroke.us - Manages user authentication and link CRUD. This is the only service that is connected to the database, which means that it's the only stateful service. This is a massive win. This service also handles adding webhook operations to Redis for the worker, either on a timer or when a user pings a webhook URL.
Backstroke Worker - The worker reads operations from the Redis queue, performs them, and sticks the results back into Redis to be displayed by the api.backstroke.us service. The worker is stateless, small, and well tested.
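For the backstroke.us reverse proxy above, the POST/GET split might be sketched in nginx roughly like this (an untested illustration, not the actual config):

```nginx
# Hypothetical sketch of the backstroke.us reverse proxy: webhook pings from
# Backstroke Classic are POSTs, everything else is browser traffic.
server {
    listen 80;
    server_name backstroke.us;

    location / {
        # POSTs are legacy webhook pings; keep them working.
        if ($request_method = POST) {
            proxy_pass http://legacy.backstroke.us;
        }
        # GETs (and everything else) go to the static site.
        proxy_pass http://backstroke.surge.sh;
    }
}
```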
Before, this service was deployed on Heroku. I'm currently pursuing a sponsorship by DigitalOcean (They've said they'll give Backstroke $350 in free credits, but this was a few months ago. I need to follow up with them.)
If I'm unable to secure the DigitalOcean sponsorship (which is what it is looking like), then deployment is up in the air. I'm currently still deploying all the new services on Heroku as free dynos, utilizing Heroku Postgres and Heroku Redis for the stateful components of the system. Through Gratipay, we have about $4 a month available to put towards infrastructure. I think this could all be hosted on one DigitalOcean droplet of the smallest size, which is $5/mo. AWS, Google Cloud Platform, and other services should be explored too; though I don't have as much experience with them, they could work out as well.
❤️ A thanks to all users - Backstroke has been a fun project to grow over the past year and a half. I hope we can make it better together!
Ryan Gaus, @1egoman
A number of users who have reported issues or commented on issues that may have opinions on these changes: @evandrocoan @thtliife @gaearon @eins78 @radrad @jeremypoulter @johanneskoester @m1guelpf
If a user has private repos registered, they need to log in via /setup/login/private for them to work.
According to:
I created a Python script to loop through all my forks and call the URL:
curl -X POST https://backstroke.us/_dfasdf1asd1f3dsf32df...
Though, they are in my git modules, as follows. Can they be public or do I need to remove them?
[submodule "Packages/ANSIescape"]
path = Packages/ANSIescape
url = https://github.com/evandrocoan/SublimeANSI
upstream = https://github.com/aziz/SublimeANSI
backstroke = https://backstroke.us/_597befd29102c500...
This is the check part of the script:
import urllib.request

upstream   = configParser.get( section, "upstream" )
backstroke = configParser.get( section, "backstroke" )
path       = configParser.get( section, "path" )
# log( 1, upstream )
# log( 1, backstroke )
# https://stackoverflow.com/questions/2018026/what-are-the-differences-between-the-urllib-urllib2-and-requests-module
if len( backstroke ) > 20:
    # In Python 3, Request and urlopen live in urllib.request; a browser
    # User-Agent avoids HTTP 403 from some servers
    # https://stackoverflow.com/questions/28396036/python-3-4-urllib-request-error-http-403
    req = urllib.request.Request( backstroke, headers={'User-Agent': 'Mozilla/5.0'} )
    res = urllib.request.urlopen( req )
    # https://stackoverflow.com/questions/2667509/curl-alternative-in-python
    print( res.read() )
Solution: give backstroke-bot read permission to upstreams that it is pushing to.
Out of interest, is there a way to remove the "merge" commit that is automatically created after accepting a pull request?
There are assets in /assets and in /public/assets/img/ - put everything into the /public/assets/img/ folder.
Also, update the readme to point to the right assets.
We need to redirect http -> https when a user does a GET to backstroke.us. Webhooks get screwed up if this is done at the Cloudflare level, so this needs to be done within Express.
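A minimal sketch of how that could look in Express (assuming the app trusts x-forwarded-proto from the proxy so req.secure is meaningful; the helper names are hypothetical):

```javascript
// Decide whether a request should be redirected to https. Only GETs are
// redirected; webhook POSTs are left alone so they don't get screwed up.
function httpsRedirectTarget(req) {
  if (req.method === 'GET' && !req.secure) {
    return `https://${req.headers.host}${req.originalUrl}`;
  }
  return null;
}

// Express middleware using the helper. The app would also need
// app.enable('trust proxy') so req.secure reflects x-forwarded-proto.
function forceHttps(req, res, next) {
  const target = httpsRedirectTarget(req);
  if (target) return res.redirect(301, target);
  next();
}
```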
Something like
PS. **Hey, there's a new version of Backstroke available. [Check it out here](http://backstroke.us).**
at the end of the PR.
Have to explore this further. I've used both Travis CI and Circle CI, and they seem to work well.
The sass filenames are waaaayyyy out of date.
In every sass file with repo in the name, change repo to link. Also update the includes to match.
Like:
}).catch(error => {
  if (error.message.indexOf('You cannot use a Stripe token more than once') === 0) {
    res.status(400).send({error: `You can't reuse payment source tokens!`});
  } else {
    return Promise.resolve(error);
  }
});
The repository https://github.com/Jayflux/sublime_toml_highlighting is a fork of a fork of ... forked by me, and when I enter both of these I get:
Backstroke probably needs a real homepage.
Options:
- GET backstroke.us

Reproduce:
- Go to /links.
- The Send button isn't disabled for repos that don't exist.

On the StackOverflow question:
It is very easy to sync a forked repository, but the question is how?
In this example I am using the WorldMapGenerator repository.
Go to your forked repository; you can see the Settings button, click on it.
After clicking the Settings button, you can see the Webhooks & services option in the left menu; click on it.
Then you can see the Add Webhook button on the right side; just click on it.
After clicking the Add Webhook button, a details page will open; in the detail page you can see the Payload URL. Enter the http://backstroke.us URL in it.
Now, if any commit goes into the main repository, a pull request will come into your forked repository.
That's it, enjoy :)
For more details: https://github.com/1egoman/backstroke
They show these steps, but I did not understand if they are correct. When they say "Enter the http://backstroke.us URL in it", is it the https://backstroke.us/_5a4ds65f46464s65d4654 URL, or just http://backstroke.us?
If I do this in all my forks, will I get updates automatically? Do I not need to create links and generate the URLs https://backstroke.us/_5a4ds65f46464s65d4654 for all my forks?
Hello,
Can you add a license to the project?
Thank you
This needs to be handled on the frontend and errors need to be displayed.
I've run into a few issues where the queue of link operations has reached a level where it is hard for the workers to recover, given the API rate limiting that GitHub uses.
It'd be great if the job that adds items to the queue could be aware of this and stop adding to the queue when it's really full.
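One way to sketch that backpressure check (the threshold and names are assumptions, not the actual implementation):

```javascript
// Hypothetical backpressure guard: refuse to enqueue new link operations
// once the queue is deeper than the workers can realistically drain under
// GitHub's API rate limits.
const MAX_QUEUE_DEPTH = 500; // assumed threshold; tune against worker throughput

function shouldEnqueue(currentQueueDepth, maxDepth = MAX_QUEUE_DEPTH) {
  return currentQueueDepth < maxDepth;
}

// The timer job would check the Redis queue length (e.g. via LLEN) with this
// before adding each batch of link updates.
```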
Has anyone successfully gotten this to work with a fork of https://github.com/facebookincubator/create-react-app ?
I cannot get it to work in either the standard or classic setup.
In standard mode, I get the following error when trying to save a link:
No permission to add a webhook to the repository facebookincubator/create-react-app. Make sure thtliife has given Backstroke permission to access this organisation or repo via OAuth.
In classic mode, I get the following response...
Uhh, error: {
  "message": "Validation Failed",
  "errors": [{
    "message": "The listed users and repositories cannot be searched either because the resources do not exist or you do not have permission to view them.",
    "resource": "Search",
    "field": "q",
    "code": "invalid"
  }],
  "documentation_url": "https://developer.github.com/v3/search/"
}
Any ideas what I am doing wrong?
The create-react-app is not a private repo, as per facebook/create-react-app#1545 (comment)
I wanted to try out Backstroke with a test repo; unfortunately it is not working at all. I deleted and re-created the link several times with the same result.
According to GitHub, and also when manually testing the hook via curl, the webhook only responds with status 500 and {"error":"Server error."}
When I first set it up, I am sure that I saw a few "successful" webhook pushes from GitHub: the server response had a success message and showed the current number of forks, but no pull requests were sent (I briefly suspected it's a feature since they were all my own forks, but meanwhile other users have forked it as well and get no PRs).
repo in question: https://github.com/Madek/madek-instance
Note: the GitHub username starts with an uppercase character, which is not very common. It might be an issue because GitHub sometimes cares about casing and sometimes not. Also, the repo contains a git submodule; not sure if that could cause problems.
Right now, it's // - it should be http:// and soon https://
It would appear that the current user is hardcoded to 1egoman.
The upstream of my fork can have several forks, like mine. I set this up to monitor the upstream, but I would like to know if any of the other forks also receive updates.
This is very useful for Sublime Text packages because most package developers have gone MIA (missing in action), so the pull requests on the upstream are always stalled. But forks of their repositories are active and receive updates. For example: robertcollier4/REG#2
Related issue:
On save, an error is thrown but the link is turned on? Going back to the all links page refreshes it back to an unsaved link.

Since a webhook is now placed on both repos, we need to rethink how to delete webhooks.