wh1ter0se / powerup-2018
The FRC 2018 programming repository for FRC Team 3695, Foximus Prime
License: GNU General Public License v3.0
Dudding and I have been talking, and we think it would be a good idea to set up a timelapse cam that takes pictures of the build room periodically.
Ideally, this would mean having it take a picture 20 minutes after the start and 20 minutes before the end of each build session after we make a schedule, and then once every x hours on Saturdays. Realistically, it might just mean taking a picture at a certain time (6:30pm?) every day but Sunday.
It would need to be able to run headless (no screen or user interaction) on a pi, and resume the schedule if the pi reboots (in case of power outage).
If you are good with Pis, PLEASE SPEAK UP. I'll work on this as much as I can, but realistically any help would be good.
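One minimal way to get the "resume after reboot" behavior for free is a cron entry on the Pi, since cron picks its schedule back up on boot. This is just a sketch assuming `fswebcam` is installed and `/home/pi/timelapse` exists; the tool, resolution, and path are all placeholders:

```crontab
# m  h  dom mon dow  command
# 6:30pm every day but Sunday (dow 1-6 = Mon-Sat); % must be escaped in crontab
30 18 *   *   1-6    fswebcam -r 1280x720 /home/pi/timelapse/$(date +\%Y\%m\%d-\%H\%M).jpg
```

If we end up doing the fancier "20 minutes after start / 20 minutes before end" schedule, that would probably mean a small script plus a `@reboot` cron line instead.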
This can be calibrated later, but we want to have it ready as soon as possible. It needs to locate the two pieces of reflective tape using a webcam mounted on the candy cane, and then quickly approach (adjusting left and right as needed), drop the hook on the bar, and then ascend.
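The "adjusting left and right as needed" part boils down to a proportional correction off the tape target's horizontal position in the frame. Here's a rough sketch of that math; the class name, gain, and deadband are all hypothetical and would need tuning once the candy-cane cam is mounted:

```java
public class HookAligner {
    // Hypothetical constants; tune once the camera is mounted.
    static final double kP = 0.01;       // strafe output per pixel of error
    static final double DEADBAND = 5.0;  // pixels of error we consider "lined up"

    /** Returns a [-1, 1] strafe command from the tape target's center x. */
    public static double strafeCommand(double targetX, double imageWidth) {
        double error = targetX - imageWidth / 2.0; // positive = target is to our right
        if (Math.abs(error) < DEADBAND) return 0.0;
        return Math.max(-1.0, Math.min(1.0, kP * error));
    }
}
```

In practice `targetX` would come out of the GRIP pipeline's contour report, and the output would feed the drivetrain while the approach command is running.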
Pretty vague notion, but oh well. We want to incorporate digital RGBW lights on the bot this year (we had analog RGB lights last year, but we didn't work on them much and they didn't work too well).
The plan is to ask Hosey if she can buy a few small Neopixel strips and/or panels and then try to make it work using one of our Arduino unos. If we can do this and show the cool stuff that it's capable of, we may be able to convince her to get more for our bot next year. They're pretty darn inexpensive considering their quality, plus we can reuse them every year.
THIS WOULD LOOK VERY NICE IF WE CAN DO IT
Anybody who is good with or at least knows Arduino, please speak up so we can get to work.
Pretty much translate 2017 code to the 2018 libraries.
Set up a Gitter community for Foximus Prime for at least all the programmers, Matt and Mrs. Hosey.
Add a command for it, and I think I'll need to add a subsystem for it?
We want to be able to make patterns based on things like mast height, robot speed, etc.
We need a list of tasks for the programmer-in-charge each night to go through as they open and close up. Opening tasks should be done before we start each session, and closing tasks should be done before we leave.
We can't afford to waste time with drivers slowly lining up to put our sheet metal hook onto a 12" rung, especially if we want to leave time for teammates to ascend using our buddy bar.
The idea is to make a semi-autonomous (autonomous command that assumes control of the bot only for a few seconds, to help the drivers with individual tasks) command that will approach dead on, adjusting left or right to place our hook on the rung.
This will mean that our claw will have to have an FOV camera.
I encourage anybody who has time to help on this one. This might be difficult, since it will involve both coding and GRIP setup, so we want all hands on for this.
Just found out this thing is on an actuator, should make ascending easier but we need to rewrite the subsystem for it.
Pretty sure this is all on the screw motor, so start by sending the pinion mast to lower limit.
This should be a ButtonCommand (could be called by either a controller or SmartDash). I'm guessing we should just log the motor value (positive or negative) at the last ButtonUp of the limit switch.
TL;DR: track whether the screw moved up or down since it last ran over the mid switch, then reverse that until we are at mid.
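The tracking described above is just a little state machine. A minimal sketch (class name, return speed, and method shapes are all assumptions, not actual code from our repo):

```java
public class MastCenterer {
    // +1 = screw last moved up away from mid, -1 = down, 0 = at mid / unknown.
    private int lastDirectionFromMid = 0;

    /** Call every loop with the screw motor output and the mid limit switch state. */
    public void update(double screwOutput, boolean midSwitchPressed) {
        if (midSwitchPressed) {
            lastDirectionFromMid = 0; // sitting on the mid switch right now
        } else if (screwOutput > 0) {
            lastDirectionFromMid = 1;
        } else if (screwOutput < 0) {
            lastDirectionFromMid = -1;
        }
    }

    /** Motor output that drives the screw back toward the mid switch. */
    public double returnToMidOutput() {
        return -0.4 * lastDirectionFromMid; // 0.4 is a placeholder return speed
    }
}
```

The ButtonCommand would then run `returnToMidOutput()` until `update()` sees the mid switch close.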
So I took a nap earlier and woke up with this idea. Kinda wanna go with it. If we use mecanum wheels this year or in future years (which may or may not happen, depending on the competition), then the robot will be able to strafe (move side to side) without rotating.
A demonstration of mecanum drive
The idea I got was to strafe opposite the direction our bot was manually turning if a certain button was pressed. This would produce a drifting effect, in which we are pointed toward the object we are rotating around. My purpose in writing this is to increase robot maneuverability, allowing us to essentially 'drift' around obstacles without losing our speed or disorienting the drivers.
The difficult part is gonna be coding it so that countersteering (turning into the drift) keeps you going straight (although the bot will be sideways; this is also called powersliding), while turning with the drift essentially makes you do donuts. Once you let go of the drift button, it needs to resume the 'traditional' driving controls, aka quickly transition back to driving straight.
A demonstration of donuts
A demonstration of straightening out
It would let us modify the turning radius of Forza from the driver station. While taking it below 1 would take away our zero-radius turning, it may actually make it easier to control.
FMS provides variables through NetworkTables every year, so we need to figure out if they will provide us with the current state of the scale and each switch (which direction it's tilted). If we plan to make cube dropping semi-autonomous, this would be incredibly useful.
Our repo has evolved to the point that the current labels, as they are, are no longer as helpful.
I got it.
Last year it was pretty simple to implement (not to tune, mind you), so we need to check the migration table to see if it's still easy and how to do it.
See issue #18 for explanation. We need to write the shell code to stream two cams side by side for double FOV.
Self explanatory. This is to avoid kids not loading libraries right and getting a lot of red squigglies. I got this one.
Pretty standard. We need to create a shell subsystem with instantiated motors, method for motor inverts, instance in OI, and anything else needed.
The text over the video should be white, and the rest of the landing page's text should remain the same color. Not sure if this is done with inline styling (adding attributes in the html) or if it should be done with a class or ID and a corresponding section in style.css, but it needs to get done.
If you create a gh-pages branch, I can merge the site from last year (with links updated to 18).
It's just last year's site.
https://kbowen99.github.io/PowerUp-2018/
Create and implement a versioning system (X.X.X) for our code. Maybe use YY.major.minor.
YY = last two digits of year
major = major version (2-4 of these at each major build, 0 being preseason)
minor = minor version (functioning updates to the major)
Future formalities:
All pulls from develop to master should be named the new version
Each d2m (develop to master) pull should be followed by a build release
Issue milestones should have the appropriate deadline
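For consistency, the version string itself is trivial to generate; a sketch (class and method names made up):

```java
public class Version {
    /** Builds a YY.major.minor string, e.g. of(2018, 0, 2) for a preseason update. */
    public static String of(int year, int major, int minor) {
        return String.format("%02d.%d.%d", year % 100, major, minor);
    }
}
```

Something like this could live in a constants file so SmartDash and log output always report the same version.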
Standards.md got rewritten and the skeleton code needs to be checked to make sure it is all standardized.
I want to try to finish this by the end of the night. We need to be able to have an additional programmer, regardless of who has a laptop.
Our mast only has one motor receiving any control values so it won't ascend. We need to make sure this isn't due to a bug in the code. All code for the bot is currently in the main develop branch.
This one's on me, I just need to make a png and size it. If somebody else wants to implement it, I encourage them to go right ahead. Just comment if you wanna put it up yourself and I'll guide you through it.
Kind of a pain, but it's straightforward. Brogan put his Arduino code in a file somewhere in the Edward branch; we just need to figure out a logical place for it (most likely a new branch). My only hesitation is that a branch for a single file is somewhat absurd, but it still seems pretty logical.
#75 NEEDS TO BE FINISHED FIRST
(Please only work on this if you do not have high-priority, time-sensitive issues to work on.)
We need to make a light show to demonstrate the Neopixel capabilities. Show 'em what they paid for.
What do we need to get it working? How feasible is it to get all members to have access to it? Do we need our own CAD files provided or do they provide templates?
Pretty sure Mrs. Fox said that the part @brogan20 needed to wrap up his wiring will be here Tuesday or Wednesday. The sooner we can get that wired the better, because we need to demonstrate that neopixels are viable before we buy a bunch for the bot, and if we wait too long they may go out of stock on all of the useful lights.
I think it's done already, in the Vroomy branch. Once I finish flashing the firmware I'll be able to test it and confirm it's done.
Based on #24
The goal is to start creating some cool patterns that we couldn't before via Arduino. If we can, we should see what it would take to control the 'duino via Java on the RoboRio.
This should pretty much just be the drive folder embedded into a page, but a password protected page.
Based off #38.
The plan is to use the output pins on the Rio and the input pins on the duino.
This is really just a matter of figuring out how to manually change the voltage of a pin in WPILib.
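With a few DIO pins we can signal a pattern number as bits. The encoding is plain Java and easy to sanity-check; the pin count is an assumption, and the WPILib side (shown in comments, using the real `DigitalOutput` class) would only run on the Rio:

```java
public class PatternLink {
    // With 3 DIO pins we can signal 8 patterns (0-7). Pin count is an assumption.
    static final int PIN_COUNT = 3;

    /** Encodes a pattern index into per-pin high/low states, LSB first. */
    public static boolean[] encode(int pattern) {
        boolean[] pins = new boolean[PIN_COUNT];
        for (int i = 0; i < PIN_COUNT; i++) {
            pins[i] = ((pattern >> i) & 1) == 1;
        }
        return pins;
    }

    // On the Rio this would drive WPILib DigitalOutput objects, roughly:
    //   DigitalOutput[] outs = { new DigitalOutput(0), new DigitalOutput(1), new DigitalOutput(2) };
    //   boolean[] pins = encode(pattern);
    //   for (int i = 0; i < outs.length; i++) outs[i].set(pins[i]);
}
```

On the duino side, `digitalRead()` on the matching input pins reassembles the same number.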
We need 35 A current limits on the ascension Talons; not sure about the drivetrain yet, but we should figure it out sooner rather than later.
We aren't selling apple watches so let's finish updating the images and text on the landing page. If anybody would like to take this on, just tell me and I'll forward you the team's media folder.
The top third of the pixels is somewhat brighter, and the pictures seem out of focus.
Self explanatory. Not sure if we use GRIP or something third party. We could take the approach of just getting an ultra-ultra-ultra-wide cam, but we already have some webcams to start playing with. Plus, last year we tried to get one of those, and I can't remember why, but we never did.
This may actually be as simple as putting two camera streams side by side and combining it into one stream, then physically positioning the cams to align. So yeah, we can probably do this with GRIP.
PLEASE ADD TO THIS LIST OR CREATE PATTERNS AS MUCH AS YOU WOULD LIKE TO
We pretty much want to write a library of parameter-free patterns to use whenever we want.
Team color dependent patterns
Independent color patterns
Found some cool ideas in this vid here: (feel free to post some that you find if there are any other patterns)
https://www.youtube.com/watch?v=pQwgZwrXfhc
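As a starting point for the independent-color patterns, here's a sketch of the classic 0-255 color wheel (the same idea as Adafruit's strandtest `Wheel()`) in plain Java, so we can test pattern math off-robot; class and method names are made up:

```java
public class Patterns {
    /**
     * Classic 0-255 color wheel: red -> green -> blue -> red.
     * Returns {r, g, b}, each 0-255.
     */
    public static int[] wheel(int pos) {
        pos = pos & 255;
        if (pos < 85) return new int[] {255 - pos * 3, pos * 3, 0};
        if (pos < 170) { pos -= 85; return new int[] {0, 255 - pos * 3, pos * 3}; }
        pos -= 170;
        return new int[] {pos * 3, 0, 255 - pos * 3};
    }

    /** One frame of a parameter-free rainbow spread across numPixels LEDs. */
    public static int[][] rainbowFrame(int numPixels, int frame) {
        int[][] colors = new int[numPixels][];
        for (int i = 0; i < numPixels; i++) {
            colors[i] = wheel((i * 256 / numPixels + frame) & 255);
        }
        return colors;
    }
}
```

Team-color-dependent patterns could reuse the same frame structure but swap the wheel for alliance red/blue.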
Cough cough.
Should be a simple fix when I get around to it. This is just to emulate actual driving.
The container needs to consistently be the height and width of the browser window, and the video needs to consistently stretch (without distorting) to fill up the container.
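One common way to do this is viewport units on the container plus `object-fit: cover` on the video. A sketch with hypothetical class names; they'd need to match the landing page's actual markup:

```css
/* Hypothetical selectors; match to the real landing page markup. */
.video-container {
  position: relative;
  width: 100vw;   /* always the browser window's width */
  height: 100vh;  /* ...and height */
  overflow: hidden;
}
.video-container video {
  position: absolute;
  top: 50%;
  left: 50%;
  min-width: 100%;
  min-height: 100%;
  transform: translate(-50%, -50%);
  object-fit: cover; /* fill the container without distorting */
}
```

This would go in style.css rather than inline, which also makes the white-text-over-video change easier to keep in one place.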
DriverStation.getInstance().getGameSpecificMessage() will return a string [L/R][L/R][L/R]; per the 2018 game manual, the characters are in the order [our near switch][scale][far switch], each telling us which side of that element belongs to our alliance.
I'm debating between declaring it in constants or in one of the autonomous classes.
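Wherever it ends up living, the parsing itself is tiny. A sketch (class/enum names are made up; per the 2018 game manual, index 0 is our near switch, 1 the scale, 2 the far switch):

```java
public class GameData {
    public enum Side { LEFT, RIGHT }

    /**
     * Parses one element of the FMS game data string (e.g. "LRL").
     * element: 0 = our near switch, 1 = scale, 2 = far switch.
     */
    public static Side plateSide(String gameData, int element) {
        return gameData.charAt(element) == 'L' ? Side.LEFT : Side.RIGHT;
    }
}
```

On the bot, `gameData` would come from `DriverStation.getInstance().getGameSpecificMessage()` at the start of autonomous.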
Regardless of mast, we have our manipulator design. If possible, leave headroom to incorporate the lasers.
For those who do not know, our manipulator will be a head with two pneumatic arms, each with flywheels on the ends to suck in the cube right before clamping down on it. We are talking about adding X amount of Class 1 lasers to detect how close the cube is for automatic pickup.
Most likely able to do this through the browser GUI. If we want to base it off of accelerometer input to maintain a certain amount of g-force however, we'll have to do it in code.
Also hoping to be finished with this by the end of the night. We need to update all talon values, ensure correct wiring, and calibrate pneumatic code. If all goes well, we're done with this by 8-8:30 tonight.