secure-systems-lab / lab-guidelines
How-to guides for Secure Systems Lab (SSL) projects and documents
Home Page: https://ssl.engineering.nyu.edu/
License: MIT License
Hello Team,
I am trying to understand the Uptane demo code, but I cannot figure out why we need to reinstall the dependencies with
pip install --force-reinstall -r dev-requirements.txt
whenever we make even a small change, such as adding a print statement, to the Uptane demo code.
Kindly help me here; I am new to these things.
Thanks in advance.
The README currently contains minimal information. I propose we create links to the other documents and provide a brief outline of what they each cover.
The lab's online communication seems to have moved away from Skype and to Slack and Hangouts exclusively. Shall we mention how to reach us there?
Consistent release version numbers and corresponding git tags are not only expected by users of our software, but are also necessary for automated tools such as dependency scanners and downstream packaging update detectors (see secure-systems-lab/securesystemslib#167 and in-toto/in-toto#286 for discussions).
Versioning
"semantic versioning" (semver) provides a clearly defined de-facto standard, which we already adhere to with TUF, in-toto and securesystemslib (more or less). We should make this a principle and be strict about it.
Tagging
The easiest way seems to be to use the semantic version number as the git tag name. However, @SantiagoTorres has a compelling argument for v-prefixing the semantic version string: command line auto-completion (e.g. git checkout v[tab]).
Most importantly, there should be no switching between conventions, such as a mix of X.Y.Z, vX.Y.Z, or <arbitrary-prefix>vX.Y.Z (see in-toto#releases and securesystemslib#releases).
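As an illustration of one consistently applied convention, here is the v-prefixed tagging flow in a throwaway repository (the version number and commit are made up for the sketch):

```shell
# Sketch: v-prefixed semver tags (example version is hypothetical)
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
# One convention, applied consistently: tag releases as vX.Y.Z
git tag -a v1.2.3 -m "Release v1.2.3"
git tag --list 'v*'   # the v prefix lets `git checkout v<Tab>` complete to release tags only
```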
Is the Git+GitHub guide sufficient to help newcomers get started with our recommended software development platform? Should we have a repository that serves as a sort of Git playground, or should newcomers start contributing right away and learn how to use Git+GitHub that way?
[Copying the following from this comment, which was posted by @aaaaalbert]
How can we help our newcomers to adopt Git quickly (to use it productively for their own work) and in a way that interfaces nicely with our workflows?
Should we send newcomers to @octocat's Spoon-Knife repo to actually try out the theoretical skills acquired through this guide?
Or shall we have a playground repo of our own where people can experiment? "First task on first day on the team: Add your favorite sports/treat/metro line to secure-systems-lab/playground" sort of thing?
The "Submit a pull request." section of our dev workflow doc points to GitHub docs that explain how to keep your fork in sync. It looks like GitHub updated their instruction to use merge
instead of rebase
, which conflicts with our recommendation in our git history guidelines, which strongly favor rebase
.
Possible Solutions:
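One rebase-based way to keep a fork in sync, sketched in throwaway repositories (the layout is a stand-in, and the remote name "upstream" is the common convention, not something the doc prescribes):

```shell
# Sketch: sync a fork with upstream via rebase, keeping history linear
set -e
work=$(mktemp -d)
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
cd "$work"
g init -q upstream
cd upstream
g commit -q --allow-empty -m "upstream: initial"
branch=$(git symbolic-ref --short HEAD)
cd ..
git clone -q upstream fork
cd fork
g commit -q --allow-empty -m "fork: local work"               # fork diverges...
g -C ../upstream commit -q --allow-empty -m "upstream: new work"  # ...and falls behind
git remote add upstream ../upstream
git fetch -q upstream
g rebase -q "upstream/$branch"   # replay local work on top of upstream
git log --oneline --merges       # prints nothing: no merge commit was created
```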
Following discussions with @awwad and @lukpueh, I'd like to suggest a new practice for code reviews: Code authors should voice their expectations of a code review, and reviewers should state the performed actions and areas/topics of focus.
This proposal is motivated by the observation that code reviews are difficult to perform to a degree that exhausts the code quality space -- style, logic, documentation, testing, deployment, etc. Reviews consequently tend to get shelved, which stalls progress. It's natural that not everyone is an expert in everyone else's topic, so I think it's good to be explicit about the depth and dimensions of code reviews.
However, this proposal is not meant to duplicate the existing developer workflow documentation. It rather serves to augment that document's last step, "Request a review".
The necessary prerequisite is that code authors make sure that the changes they propose actually make sense and work at a basic level. (Corollary: It should never happen that neither the author nor the reviewer actually tests the fix.)
When submitting a PR for a working patch, the author may request specific actions or areas of focus for the code review:
"Please do / look for / look at the following: ...."
The reviewer complements this by meta-annotating their review, i.e. supplementing the actual review items with an overall statement:
"Here's what I did to review your code: ...."
For the author:
Step 0: Make sure your proposed change actually works.
Step 0, part 2: This also means making sure it conforms to the code style etc. right away. (Do not waste the reviewer's time with things you can easily get right in the first place!)
Step 1: Send a PR and list what you think are the aspects that require the largest share of a reviewer's attention.
Step 2: Feel free to suggest the review to someone, and/or ping them/the group to remind them of it.
For the reviewer:
Step 0: Inform the world that you are working on this.
Step 1: Perform the review.
Step 2: Mention the areas (topical, regional) you focus on. Feel free to mention areas you cannot review as well.
I think it's good to come up with a list of examples of items/actions that authors could request and reviewers could provide.
I'm happy to go over my list of review comments to start this. Perhaps you have things in mind right away, so let's just post items below. If it makes sense, the list can be incorporated into PR templates at some point.
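If it helps to seed the practice, the two statements could be prompted directly by a PR template along these lines (a hypothetical sketch; the filename follows GitHub's convention, and the checklist items are examples, not an agreed list):

```markdown
<!-- .github/PULL_REQUEST_TEMPLATE.md (hypothetical sketch) -->
### Author: verified before requesting review
- [ ] The change works at a basic level (I ran it / ran the tests)
- [ ] The change conforms to the code style guidelines

### Author: requested review focus
Please do / look for / look at the following: ....

### Reviewer: performed actions and areas of focus
Here's what I did to review your code: ....
(Also mention areas you could not review.)
```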