
curb-wheel-archive's Introduction

SharedStreets Builder

The SharedStreets Builder application converts OpenStreetMap data to SharedStreets protocol buffer tiles.

SharedStreets uses this tool to generate and maintain a complete global OSM-derived tile set. Users can run the tool directly on their own OSM data or use a pregenerated global tile set provided by SharedStreets.

Support for non-OSM data sources has been moved to the sharedstreets-conflator tool.

Example use

java -jar ./sharedstreets-builder-0.1-preview.jar --input data/[osm_input_file].pbf --output ./[tile_output_directory]

Notes

The builder application is built on Apache Flink. If processing requirements exceed available memory, Flink falls back to a disk-based cache. Processing large OSM data sets may require several hundred gigabytes of free disk space.

Roadmap

curb-wheel-archive's People

Contributors

dependabot[bot], emilyeros, kpwebb, molliemcardle, morganherlocker, peterqliu, russbiggs


curb-wheel-archive's Issues

change file permissions to ease software updates

Files that I can't currently delete:

  • config/wpa_supplicant.conf.template (the permissions might be set on the whole config directory)
  • all files in tileserver directory (includes: 2017-07-03_us_hawaii.mbtiles, setup-tileserver.sh)
  • all files in src directory (includes: graph.js, server.js)

Validate survey contents before returning to map view

When a user taps the button to complete a survey, the app should check:

That all span-type features have a beginning and an end.

If there are open spans, prompt the user with a dialog box warning about them. We could force the user to return to survey mode and close the spans manually, or we could show two buttons: one to go back to the survey and correct this, and one to close all open zones and continue exiting the survey.

The difference between reference length and surveyed distance.

Conditions for a warning should probably differ depending on whether the surveyed distance is less than or greater than the reference length, due to intersection offsets. Warn if:

  • Surveyed length is less than reference length by more than 20% of the reference length
  • Surveyed length is greater than reference length by more than 10% of the reference length

Either condition should prompt the user with a warning dialog and an option to start over on the street survey or else save and continue (see the sketch below).
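A minimal sketch of that check, assuming hypothetical refLength and surveyedLength values in metres; the thresholds are the 20%/10% figures above:

// Hypothetical validation sketch: flag surveys that are much shorter (>20%)
// or much longer (>10%) than the reference length.
function lengthWarning(refLength, surveyedLength) {
  const diff = surveyedLength - refLength;
  if (diff < -0.2 * refLength) return 'Surveyed length is more than 20% shorter than the reference length.';
  if (diff > 0.1 * refLength) return 'Surveyed length is more than 10% longer than the reference length.';
  return null; // within tolerance, no dialog needed
}

// e.g. lengthWarning(38, 28) warns (10 m short of a 38 m reference), and the
// dialog would offer "start over" or "save and continue".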

I don't think it should warn users if there's a feature that's created that doesn't have a photo. It may be best practice to take a photo for most features, but warning dialogs would get annoying pretty quickly.

Initial web app with map + camera function

Create very first version of web app: a map with ShSt graph edges on top, and the ability to take a photo that's linked back to that ShSt reference ID

(Not sure if this is Kevin or Morgan, so added both of you for awareness)

backwards roll

Allow the wheel to be rolled in reverse. The surveyed data should be incrementally shortened to reflect this.
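A minimal sketch of how the running measurement could handle this, assuming a hypothetical tick event that reports +1 for forward rotation and -1 for reverse (the 0.1 m increment matches the wheel's precision):

// Hypothetical counter sketch: reverse ticks subtract from the surveyed distance.
const TICK_METERS = 0.1;
let surveyedMeters = 0;

function onWheelTick(direction) { // direction: +1 forward, -1 reverse
  surveyedMeters = Math.max(0, surveyedMeters + direction * TICK_METERS);
  // any open span features would also need their end point pulled back here
}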

Design questions:

  • What happens when a user rolls beyond the start of a surveyed feature that has not been completed? Is it deleted?
  • What happens when a user rolls beyond the end of a surveyed feature that has been completed?

split out wifi AP setup steps to allow debugging on local network

  • create a separate ap-setup.sh script containing the wifi AP configuration
  • allow the user to run core setup without enabling AP settings, by connecting to a local wifi network instead
  • a development user can reach the Pi at the raspberrypi.local hostname over the local network, using its DHCP-assigned IP

how to digitize regulations after curb surveying

Context

Assumptions: Output from curb surveying will be:

  1. A set of street segments (GeoJSON line strings, plus points with OSM geometries; the points are for things like fire hydrants that eventually need to be buffered into line strings). Each feature will have properties for ShSt reference ID, side of street, start location (distance in metres), and end location (distance in metres). The features may also have some sort of categorization (e.g. parking, loading, point). An illustrative example follows the list.
  2. A set of photographs whose EXIF data contains information about where they were taken. To be able to drop these on a map, this needs to include geographic coordinates based on the Ref ID and measurement location (adjusted to center the location to account for intersection offsets). Or, if the photos will never be viewed on a map, the EXIF-type data must at least include the Ref ID, side of street, and measurement location (distance in metres, adjusted for centering and offsets).
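For illustration, a hypothetical surveyed span could look like the object below; the property names (shstRefId, sideOfStreet, startMeters, endMeters, category, photos) are placeholders for whatever schema we settle on, not a confirmed format.

const exampleSpan = {
  type: 'Feature',
  geometry: { type: 'LineString', coordinates: [[-118.2437, 34.0522], [-118.2432, 34.0524]] },
  properties: {
    shstRefId: 'hypothetical-ref-id',
    sideOfStreet: 'right',
    startMeters: 12.4,          // distance from segment start, adjusted for offsets
    endMeters: 27.9,
    category: 'parking',
    photos: ['/images/hypothetical-photo.jpg'],
  },
};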

The goal of digitization is to find an efficient and intuitive way for people to view the pictures and use that to create CurbLR properties that will be attached to the associated geometries.

Poking around with existing tools

Existing pathways could involve JOSM (or something else in the OSM ecosystem), QGIS, or ArcGIS. In general, I find data entry to be slower and more cumbersome in GIS systems. JOSM and other OSM tools are more suited to going through volumes of data to add structured attributes... but they are also unfamiliar to city staff and have an off-putting 1980s Atari vibe going on in their UI. I'm not sure any of these approaches are ideal, but I am scoping them anyway for perspective.

I started by making some fake data in JOSM: I drew a handful of regulatory geometries, dropped in some nearby-ish photos, and created XML-based tagging presets to simplify attribute tagging. That looks like this:

[Screenshot: JOSM tagging presets, 2020-03-23]

The best way to use these presets is to create a handful of them, one for each common regulation in a city. We can set default values so that when someone selects a feature and clicks the "Mo-Fr loading zone" button, it tags the feature with all necessary CurbLR properties. Those properties are still editable, but we save some work. Similar guided forms can be created in QGIS and ArcGIS.

General problems

Working through JOSM made me realize that there are inherent UX problems with any map-centric interface:

  • The regulatory geometries will overlap and it'll be very difficult to distinguish them from one another. In the JOSM example, I built in an offset for different sides of the street. Even then, I can't tell how many segments are there or select between them to edit the feature I'm interested in. This is slightly easier in GIS but not by much.
  • It's difficult to tell which photo corresponds to which segment. We would end up with a photograph at the start, end, or middle point of a segment. But to figure out what goes with what, we'd have to select a segment and visually identify the relevant photos and click back and forth between them. GIS systems have the same problem, though we could filter photos by ShSt Ref ID and interact with them in a slightly more sophisticated way. It's still cumbersome.
  • We are limited to a flat data format. This means using the OSM opening_hours convention and a parser for timeSpans, and finding other workarounds for userclass / subclass relationships and other "nested" properties.

Better approach

Overall, digitization involves taking an existing set of geometries, pulling up the relevant photographs, and providing an easy template to populate with CurbLR regulatory properties. I don't think we really need a full-fledged GIS system to do this, and using one actually hinders us.

Instead, how about a very basic web app that cycles through each regulatory geometry in turn? This would contain:

  1. A map with the individual feature showing, on top of some basemap tiles
  2. The photos associated with that individual feature
  3. A form for capturing the CurbLR properties (similar to the JOSM tagging preset, with dropdowns to control for input data quality)
  4. The ability to save changes and cycle through to the next feature

This would save and output the data as GeoJSON compliant with the CurbLR spec. Adding a manifest onto it and saving as JSON creates a valid CurbLR feed (sketched below).
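A minimal sketch of that last step, assuming a hypothetical manifest object; the field names shown are illustrative only, the CurbLR spec defines the real ones:

// Hypothetical sketch: wrap digitized features with a manifest to form a CurbLR feed.
const fs = require('fs');

function toCurbLRFeed(featureCollection, manifest) {
  return { manifest, type: 'FeatureCollection', features: featureCollection.features };
}

const digitized = JSON.parse(fs.readFileSync('digitized.geojson', 'utf8')); // placeholder path
const feed = toCurbLRFeed(digitized, { createdDate: '2020-03-24', timeZone: 'America/Los_Angeles' }); // illustrative manifest fields
fs.writeFileSync('curblr-feed.json', JSON.stringify(feed, null, 2));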

A more developed version might contain:

  • Buttons to populate the CurbLR properties with commonly-used regulations (e.g. a button for curb cut, Mo-Fr loading zone, or no parking zone - similar to how one would create multiple pre-sets in JOSM).
  • A map that lets users select a geometry on the map instead of just cycling though one by one (in case you just want to work on a particular area)
  • A form to add a manifest at the end and export the data as a CurbLR JSON

Like so:

[Screenshot: web app mockup, 2020-03-24]

Resulting data could still be viewed in GIS before/afterwards if geometries need to be adjusted, but digitization would occur in the web app.

Disadvantages: This is more to create and maintain, and it separates the attribute entry from the ability to edit a feature (though that could also be viewed as a good thing).

Handle UI confirmation dialogs gracefully

The frontend uses browser prompts for users to confirm big actions, like deleting/ending curb zones, leaving the current survey, and providing instructions for photography. This brings up two specific challenges:

  • As a user surveys many curbs and regulations in a session, all the confirmation dialogs can become unnecessary and bothersome

  • Some browsers let users disable dialogs entirely. Since these actions are gated on pressing OK in the dialog, disabling them means those actions can't be performed at all. Unfortunately, there's no direct way in JavaScript to tell whether a user has disabled prompts.
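One hedged workaround is a timing heuristic: time how long confirm() blocks, and if it returns almost instantly with false, assume the browser suppressed it and fall back to an in-page confirmation. This is only a heuristic, not a reliable detection; the 50 ms threshold and the fallbackConfirm callback below are assumptions.

// Heuristic sketch: detect a suppressed confirm() by how quickly it returns.
function confirmWithFallback(message, fallbackConfirm) {
  const started = performance.now();
  const ok = window.confirm(message);
  const elapsed = performance.now() - started;
  if (!ok && elapsed < 50) {
    // Returned (almost) instantly: the dialog was probably suppressed,
    // so defer to an in-page confirmation UI supplied by the caller.
    return fallbackConfirm(message);
  }
  return ok;
}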

test: real-world consistency of wheel measurements

We know that the measurement wheel has 0.1m precision, but in the real world we won't be rolling along a perfectly straight line or perfectly flat surface. I wanted to check how much variance there can be when measuring a street segment in different ways.

I picked a short (15m) and a long (38m) section of curb between two points. The short section was straight. The long section was on a curve in the road. I rolled back and forth along each section multiple times to compare consistency (while LARPing with a paper-based app interface). I tried this a bunch of ways:

  • rolling normally
  • rolling extra carefully to stay parallel with the curb
  • rolling super carelessly while looking in a different direction
  • rolling uphill
  • rolling downhill
  • rolling over debris
  • rolling while running
  • rolling while doing a gentle zigzag back and forth across the sidewalk

Results were surprisingly consistent. I was within 0.1m in all “relatively normal” cases. Rolling extra carefully made no difference. Running made a 0.3m difference. Zigzagging back and forth along the sidewalk added a bit (38m became 38.8m).

This was a relatively narrow residential sidewalk; an urban core would have more room for error due to wider sidewalks and more obstacles. But overall, I feel confident about the consistency of the wheel.


upload image POST

Our server needs an image upload endpoint. This endpoint should accept an image file, place the image in a statically served directory, then respond with the URL for accessing the image. That link can then be referenced in the survey's CurbLR regulations.
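A minimal sketch of such an endpoint, assuming Express and multer; the /images path, port, and form field name are placeholders:

// Hypothetical upload endpoint sketch: accept an image, store it in a
// statically served directory, and respond with its URL.
const express = require('express');
const multer = require('multer');
const path = require('path');

const app = express();
const imageDir = path.join(__dirname, 'static', 'images');
const upload = multer({ dest: imageDir });

app.use('/images', express.static(imageDir));

app.post('/images', upload.single('image'), (req, res) => {
  // multer stores the file under a generated filename inside imageDir
  res.json({ url: `/images/${req.file.filename}` });
});

app.listen(3000);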

PARTS.md

Markdown file with a parts purchase list.

notes

  • prefer Adafruit or SparkFun for suggestions on the Pi Zero and microSD (steer away from fakes)
  • Thingiverse link for ordering a pre-printed wheel case
  • add a disclaimer about the golden-path hardware we support

mbtiles admin upload UI

Add a UI form that allows the user to drag and drop an .mbtiles file that will serve vector tiles for the overworld map.

GET bounds

Server endpoint for retrieving all graph edges within a bounding box.
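A minimal sketch, assuming Express and a hypothetical graph.edgesWithin(bbox) helper; the ./graph module and query format are placeholders:

// Hypothetical endpoint sketch: GET /edges?bbox=minLon,minLat,maxLon,maxLat
const express = require('express');
const graph = require('./graph'); // placeholder module that can query edges
const app = express();

app.get('/edges', (req, res) => {
  const bbox = (req.query.bbox || '').split(',').map(Number);
  if (bbox.length !== 4 || bbox.some(Number.isNaN)) {
    return res.status(400).json({ error: 'bbox must be minLon,minLat,maxLon,maxLat' });
  }
  res.json({ type: 'FeatureCollection', features: graph.edgesWithin(bbox) });
});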

cc @kpwebb

Web app design

Current thinking

The current working design for the web app is shown below.

  1. Home screen shows the area of interest with streets color-coded by what has / hasn't been surveyed. The user taps on a blockface on the map to begin surveying.
  2. The user starts at one end of the street, wheeling along the curb with the direction of traffic. (We discussed and discarded the idea of manually tapping on a start location.) When the user reaches a curb asset, they pause the measurement if necessary (they might have to back up to take a clear photo) and tap the start/middle/stop button to enter photo mode.
  3. The user snaps one or more photos of the asset. (The LR and other geo info can be added as a custom EXIF header to help tie the photo to the street segment.)
  4. If the measurement is paused, it must be unpaused in order to resume surveying.

[Image: IMG_0817]

Issue

This interface design is simple, sticking to one surveying screen with a handful of buttons. However, we're not getting enough info here to capture segments - we would end up with LR'd points that mark the start, middle, or end of a segment... but we don't know which points go together. We can't maintain multiple states. Because curb regulations often overlap, we can't make sense of the output with this app design. This potentially doubles the amount of work for the digitization stage, since each point is handled individually instead of as a segment.

Potential solution

To maintain multiple states, we need the buttons to be activated and deactivated, so that we're capturing a segment (and its associated photo(s)) instead of two individual points. Doing this generically (buttons named "State 1", "State 2", etc) is possible but it puts a burden on users to remember what was what. So it's preferable to do this with words, to capture whether the asset is a parking zone, curb cut, etc. Capturing the asset category also reduces the digitization workload later.

The categories need to be specific enough that users can remember what's what, but generalizable enough that we don't have clutter from overly specific buttons (or try to capture all eventualities, and end up with exceptions that don't fit).

Here's a quick sketch:

[Screenshot: button sketch, 2020-03-23]

Thoughts?

script pi configuration

Add a script that does the following:

  1. install nodejs & npm
  2. set up wifi access point
  3. set up ssh access
  4. install curb-wheel firmware
  5. mount a RAM tmp filesystem

cc @kpwebb

Bird

import pygame_sdl2
pygame_sdl2.import_as_pygame()
'''
Android is much faster with hardware rendering
This dirty code shows how it could be done

We have just replaced SCREEN.blit with SCREEN_blit
and pygame.display.update with pygame_display_update
and gained much more playable framerate

Note that this code probably contains memory leaks,
unoptimal reinitialization of Sprites and other bad stuff

It is here just to display how slow is blitting

Writing everything correctly from start (Sprite) is preferred
'''

def SCREEN_blit(what, at):
    global RENDERER, spritecache
    Sprite(RENDERER.load_texture(what)).render(at)

def pygame_display_update():
    RENDERER.render_present()
    RENDERER.clear((0, 0, 0))

# Set this variable to use the deprecated behavior

useswblitting = False

from itertools import cycle
import random
import sys

import pygame
from pygame.locals import *
from pygame.render import *

FPS = 30
SCREENWIDTH = 288 * 2
SCREENHEIGHT = 512 * 2

# amount by which base can maximum shift to left

PIPEGAPSIZE = 100 * 2 # gap between upper and lower part of pipe
BASEY = SCREENHEIGHT * 0.79

# image, sound and hitmask dicts

IMAGES, SOUNDS, HITMASKS = {}, {}, {}

# list of all possible players (tuple of 3 positions of flap)

PLAYERS_LIST = (
    # red bird
    (
        'assets/sprites/redbird-upflap.png',
        'assets/sprites/redbird-midflap.png',
        'assets/sprites/redbird-downflap.png',
    ),
    # blue bird
    (
        'assets/sprites/bluebird-upflap.png',
        'assets/sprites/bluebird-midflap.png',
        'assets/sprites/bluebird-downflap.png',
    ),
    # yellow bird
    (
        'assets/sprites/yellowbird-upflap.png',
        'assets/sprites/yellowbird-midflap.png',
        'assets/sprites/yellowbird-downflap.png',
    ),
)

# list of backgrounds

BACKGROUNDS_LIST = (
    'assets/sprites/background-day.png',
    'assets/sprites/background-night.png',
)

# list of pipes

PIPES_LIST = (
    'assets/sprites/pipe-green.png',
    'assets/sprites/pipe-red.png',
)

try:
    xrange
except NameError:
    xrange = range

def main():
    global SCREEN, FPSCLOCK, RENDERER
    pygame.init()
    FPSCLOCK = pygame.time.Clock()
    SCREEN = pygame.display.set_mode((SCREENWIDTH, SCREENHEIGHT))
    RENDERER = Renderer(None)
    pygame.display.set_caption('Flappy Bird')

    if useswblitting:
        global SCREEN_blit, pygame_display_update
        SCREEN_blit = SCREEN.blit
        pygame_display_update = pygame.display.update

    # numbers sprites for score display
    IMAGES['numbers'] = (
        pygame.image.load('assets/sprites/0.png').convert_alpha(),
        pygame.image.load('assets/sprites/1.png').convert_alpha(),
        pygame.image.load('assets/sprites/2.png').convert_alpha(),
        pygame.image.load('assets/sprites/3.png').convert_alpha(),
        pygame.image.load('assets/sprites/4.png').convert_alpha(),
        pygame.image.load('assets/sprites/5.png').convert_alpha(),
        pygame.image.load('assets/sprites/6.png').convert_alpha(),
        pygame.image.load('assets/sprites/7.png').convert_alpha(),
        pygame.image.load('assets/sprites/8.png').convert_alpha(),
        pygame.image.load('assets/sprites/9.png').convert_alpha(),
    )

    # game over sprite
    IMAGES['gameover'] = pygame.image.load('assets/sprites/gameover.png').convert_alpha()
    # message sprite for welcome screen
    IMAGES['message'] = pygame.image.load('assets/sprites/message.png').convert_alpha()
    # base (ground) sprite
    IMAGES['base'] = pygame.image.load('assets/sprites/base.png').convert_alpha()

    # sounds
    if 'win' in sys.platform:
        soundExt = '.wav'
    else:
        soundExt = '.ogg'

    SOUNDS['die'] = pygame.mixer.Sound('assets/audio/die' + soundExt)
    SOUNDS['hit'] = pygame.mixer.Sound('assets/audio/hit' + soundExt)
    SOUNDS['point'] = pygame.mixer.Sound('assets/audio/point' + soundExt)
    SOUNDS['swoosh'] = pygame.mixer.Sound('assets/audio/swoosh' + soundExt)
    SOUNDS['wing'] = pygame.mixer.Sound('assets/audio/wing' + soundExt)

    while True:
        # select random background sprites
        randBg = random.randint(0, len(BACKGROUNDS_LIST) - 1)
        IMAGES['background'] = pygame.image.load(BACKGROUNDS_LIST[randBg]).convert()

        # select random player sprites
        randPlayer = random.randint(0, len(PLAYERS_LIST) - 1)
        IMAGES['player'] = (
            pygame.image.load(PLAYERS_LIST[randPlayer][0]).convert_alpha(),
            pygame.image.load(PLAYERS_LIST[randPlayer][1]).convert_alpha(),
            pygame.image.load(PLAYERS_LIST[randPlayer][2]).convert_alpha(),
        )

        # select random pipe sprites
        pipeindex = random.randint(0, len(PIPES_LIST) - 1)
        IMAGES['pipe'] = (
            pygame.transform.rotate(pygame.image.load(PIPES_LIST[pipeindex]).convert_alpha(), 180),
            pygame.image.load(PIPES_LIST[pipeindex]).convert_alpha(),
        )

        # hitmask for pipes
        HITMASKS['pipe'] = (
            getHitmask(IMAGES['pipe'][0]),
            getHitmask(IMAGES['pipe'][1]),
        )

        # hitmask for player
        HITMASKS['player'] = (
            getHitmask(IMAGES['player'][0]),
            getHitmask(IMAGES['player'][1]),
            getHitmask(IMAGES['player'][2]),
        )

        movementInfo = showWelcomeAnimation()
        crashInfo = mainGame(movementInfo)
        showGameOverScreen(crashInfo)

def showWelcomeAnimation():
    """Shows welcome screen animation of flappy bird"""
    # index of player to blit on screen
    playerIndex = 0
    playerIndexGen = cycle([0, 1, 2, 1])
    # iterator used to change playerIndex after every 5th iteration
    loopIter = 0

    playerx = int(SCREENWIDTH * 0.2)
    playery = int((SCREENHEIGHT - IMAGES['player'][0].get_height()) / 2)

    messagex = int((SCREENWIDTH - IMAGES['message'].get_width()) / 2)
    messagey = int(SCREENHEIGHT * 0.12)

    basex = 0
    # amount by which base can maximum shift to left
    baseShift = IMAGES['base'].get_width() - IMAGES['background'].get_width()

    # player shm for up-down motion on welcome screen
    playerShmVals = {'val': 0, 'dir': 1}

    while True:
        for event in pygame.event.get():
            if event.type == QUIT or (event.type == KEYDOWN and event.key == K_ESCAPE):
                pygame.quit()
                sys.exit()
            if event.type == KEYDOWN and (event.key == K_SPACE or event.key == K_UP) or event.type == MOUSEBUTTONDOWN:
                # make first flap sound and return values for mainGame
                SOUNDS['wing'].play()
                return {
                    'playery': playery + playerShmVals['val'],
                    'basex': basex,
                    'playerIndexGen': playerIndexGen,
                }

        # adjust playery, playerIndex, basex
        if (loopIter + 1) % 5 == 0:
            playerIndex = next(playerIndexGen)
        loopIter = (loopIter + 1) % 30
        basex = -((-basex + 4) % baseShift)
        playerShm(playerShmVals)

        # draw sprites
        SCREEN_blit(IMAGES['background'], (0, 0))
        SCREEN_blit(IMAGES['player'][playerIndex], (playerx, playery + playerShmVals['val']))
        SCREEN_blit(IMAGES['message'], (messagex, messagey))
        SCREEN_blit(IMAGES['base'], (basex, BASEY))

        pygame_display_update()
        FPSCLOCK.tick(FPS)

def mainGame(movementInfo):
    score = playerIndex = loopIter = 0
    playerIndexGen = movementInfo['playerIndexGen']
    playerx, playery = int(SCREENWIDTH * 0.2), movementInfo['playery']

    basex = movementInfo['basex']
    baseShift = IMAGES['base'].get_width() - IMAGES['background'].get_width()

    # get 2 new pipes to add to upperPipes lowerPipes list
    newPipe1 = getRandomPipe()
    newPipe2 = getRandomPipe()

    # list of upper pipes
    upperPipes = [
        {
            'x': SCREENWIDTH + 200 * 2,
            'y': newPipe1[0]['y']
        },
        {
            'x': SCREENWIDTH + 200 * 2 + (SCREENWIDTH / 2),
            'y': newPipe2[0]['y']
        },
    ]

    # list of lower pipes
    lowerPipes = [
        {
            'x': SCREENWIDTH + 200 * 2,
            'y': newPipe1[1]['y']
        },
        {
            'x': SCREENWIDTH + 200 * 2 + (SCREENWIDTH / 2),
            'y': newPipe2[1]['y']
        },
    ]

    pipeVelX = -4

    # player velocity, max velocity, downward acceleration, acceleration on flap
    playerVelY = -9  # player's velocity along Y, default same as playerFlapped
    playerMaxVelY = 10  # max vel along Y, max descend speed
    playerMinVelY = -8  # min vel along Y, max ascend speed
    playerAccY = 1  # player's downward acceleration
    playerRot = 45  # player's rotation
    playerVelRot = 3  # angular speed
    playerRotThr = 20  # rotation threshold
    playerFlapAcc = -9  # player's speed on flapping
    playerFlapped = False  # True when player flaps

    while True:
        for event in pygame.event.get():
            if event.type == QUIT or (event.type == KEYDOWN and event.key == K_ESCAPE):
                pygame.quit()
                sys.exit()
            if event.type == KEYDOWN and (event.key == K_SPACE or event.key == K_UP) or event.type == MOUSEBUTTONDOWN:
                if playery > -2 * IMAGES['player'][0].get_height():
                    playerVelY = playerFlapAcc
                    playerFlapped = True
                    SOUNDS['wing'].play()

        # check for crash here
        crashTest = checkCrash({'x': playerx, 'y': playery, 'index': playerIndex}, upperPipes, lowerPipes)
        if crashTest[0]:
            return {'y': playery, 'groundCrash': crashTest[1], 'basex': basex, 'upperPipes': upperPipes, 'lowerPipes': lowerPipes, 'score': score, 'playerVelY': playerVelY, 'playerRot': playerRot}

        # check for score
        playerMidPos = playerx + IMAGES['player'][0].get_width() / 2
        for pipe in upperPipes:
            pipeMidPos = pipe['x'] + IMAGES['pipe'][0].get_width() / 2
            if pipeMidPos <= playerMidPos < pipeMidPos + 4:
                score += 1
                SOUNDS['point'].play()

        # playerIndex basex change
        if (loopIter + 1) % 3 == 0:
            playerIndex = next(playerIndexGen)
        loopIter = (loopIter + 1) % 30
        basex = -((-basex + 100) % baseShift)

        # rotate the player
        if playerRot > -90:
            playerRot -= playerVelRot

        # player's movement
        if playerVelY < playerMaxVelY and not playerFlapped:
            playerVelY += playerAccY
        if playerFlapped:
            playerFlapped = False

            # more rotation to cover the threshold (calculated in visible rotation)
            playerRot = 45

        playerHeight = IMAGES['player'][playerIndex].get_height()
        playery += min(playerVelY, BASEY - playery - playerHeight)

        # move pipes to left
        for uPipe, lPipe in zip(upperPipes, lowerPipes):
            uPipe['x'] += pipeVelX
            lPipe['x'] += pipeVelX

        # add new pipe when first pipe is about to touch left of screen
        if 0 < upperPipes[0]['x'] < 5:
            newPipe = getRandomPipe()
            upperPipes.append(newPipe[0])
            lowerPipes.append(newPipe[1])

        # remove first pipe if it's out of the screen
        if upperPipes[0]['x'] < -IMAGES['pipe'][0].get_width():
            upperPipes.pop(0)
            lowerPipes.pop(0)

        # draw sprites
        SCREEN_blit(IMAGES['background'], (0, 0))

        for uPipe, lPipe in zip(upperPipes, lowerPipes):
            SCREEN_blit(IMAGES['pipe'][0], (uPipe['x'], uPipe['y']))
            SCREEN_blit(IMAGES['pipe'][1], (lPipe['x'], lPipe['y']))

        SCREEN_blit(IMAGES['base'], (basex, BASEY))
        # print score so player overlaps the score
        showScore(score)

        # Player rotation has a threshold
        visibleRot = playerRotThr
        if playerRot <= playerRotThr:
            visibleRot = playerRot

        playerSurface = pygame.transform.rotate(IMAGES['player'][playerIndex], visibleRot)
        SCREEN_blit(playerSurface, (playerx, playery))

        pygame_display_update()
        FPSCLOCK.tick(FPS)

def showGameOverScreen(crashInfo):
    """crashes the player down and shows gameover image"""
    score = crashInfo['score']
    playerx = SCREENWIDTH * 0.2
    playery = crashInfo['y']
    playerHeight = IMAGES['player'][0].get_height()
    playerVelY = crashInfo['playerVelY']
    playerAccY = 2
    playerRot = crashInfo['playerRot']
    playerVelRot = 7

    basex = crashInfo['basex']

    upperPipes, lowerPipes = crashInfo['upperPipes'], crashInfo['lowerPipes']

    # play hit and die sounds
    SOUNDS['hit'].play()
    if not crashInfo['groundCrash']:
        SOUNDS['die'].play()

    while True:
        for event in pygame.event.get():
            if event.type == QUIT or (event.type == KEYDOWN and event.key == K_ESCAPE):
                pygame.quit()
                sys.exit()
            if event.type == KEYDOWN and (event.key == K_SPACE or event.key == K_UP) or event.type == MOUSEBUTTONDOWN:
                if playery + playerHeight >= BASEY - 1:
                    return

        # player y shift
        if playery + playerHeight < BASEY - 1:
            playery += min(playerVelY, BASEY - playery - playerHeight)

        # player velocity change
        if playerVelY < 15:
            playerVelY += playerAccY

        # rotate only when it's a pipe crash
        if not crashInfo['groundCrash']:
            if playerRot > -90:
                playerRot -= playerVelRot

        # draw sprites
        SCREEN_blit(IMAGES['background'], (0, 0))

        for uPipe, lPipe in zip(upperPipes, lowerPipes):
            SCREEN_blit(IMAGES['pipe'][0], (uPipe['x'], uPipe['y']))
            SCREEN_blit(IMAGES['pipe'][1], (lPipe['x'], lPipe['y']))

        SCREEN_blit(IMAGES['base'], (basex, BASEY))
        showScore(score)

        playerSurface = pygame.transform.rotate(IMAGES['player'][1], playerRot)
        SCREEN_blit(playerSurface, (playerx, playery))

        FPSCLOCK.tick(FPS)
        pygame_display_update()

def playerShm(playerShm):
    """oscillates the value of playerShm['val'] between 8 and -8"""
    if abs(playerShm['val']) == 8:
        playerShm['dir'] *= -1

    if playerShm['dir'] == 1:
        playerShm['val'] += 1
    else:
        playerShm['val'] -= 1

def getRandomPipe():
    """returns a randomly generated pipe"""
    # y of gap between upper and lower pipe
    gapY = random.randrange(0, int(BASEY * 0.6 - PIPEGAPSIZE))
    gapY += int(BASEY * 0.2)
    pipeHeight = IMAGES['pipe'][0].get_height()
    pipeX = SCREENWIDTH + 10

    return [
        {
            'x': pipeX,
            'y': gapY - pipeHeight
        },  # upper pipe
        {
            'x': pipeX,
            'y': gapY + PIPEGAPSIZE
        },  # lower pipe
    ]

def showScore(score):
    """displays score in center of screen"""
    scoreDigits = [int(x) for x in list(str(score))]
    totalWidth = 0  # total width of all numbers to be printed

    for digit in scoreDigits:
        totalWidth += IMAGES['numbers'][digit].get_width()

    Xoffset = (SCREENWIDTH - totalWidth) / 2

    for digit in scoreDigits:
        SCREEN_blit(IMAGES['numbers'][digit], (Xoffset, SCREENHEIGHT * 0.1))
        Xoffset += IMAGES['numbers'][digit].get_width()

def checkCrash(player, upperPipes, lowerPipes):
    """returns True if player collides with base or pipes."""
    pi = player['index']
    player['w'] = IMAGES['player'][0].get_width()
    player['h'] = IMAGES['player'][0].get_height()

    # if player crashes into ground
    if player['y'] + player['h'] >= BASEY - 1:
        return [True, True]
    else:

        playerRect = pygame.Rect(player['x'], player['y'], player['w'], player['h'])
        pipeW = IMAGES['pipe'][0].get_width()
        pipeH = IMAGES['pipe'][0].get_height()

        for uPipe, lPipe in zip(upperPipes, lowerPipes):
            # upper and lower pipe rects
            uPipeRect = pygame.Rect(uPipe['x'], uPipe['y'], pipeW, pipeH)
            lPipeRect = pygame.Rect(lPipe['x'], lPipe['y'], pipeW, pipeH)

            # player and upper/lower pipe hitmasks
            pHitMask = HITMASKS['player'][pi]
            uHitmask = HITMASKS['pipe'][0]
            lHitmask = HITMASKS['pipe'][1]

            # if bird collided with upipe or lpipe
            uCollide = pixelCollision(playerRect, uPipeRect, pHitMask, uHitmask)
            lCollide = pixelCollision(playerRect, lPipeRect, pHitMask, lHitmask)

            if uCollide or lCollide:
                return [True, False]

    return [False, False]

def pixelCollision(rect1, rect2, hitmask1, hitmask2):
    """Checks if two objects collide and not just their rects"""
    rect = rect1.clip(rect2)

    if rect.width == 0 or rect.height == 0:
        return False

    x1, y1 = rect.x - rect1.x, rect.y - rect1.y
    x2, y2 = rect.x - rect2.x, rect.y - rect2.y

    for x in xrange(rect.width):
        for y in xrange(rect.height):
            if hitmask1[x1 + x][y1 + y] and hitmask2[x2 + x][y2 + y]:
                return True
    return False

def getHitmask(image):
    """returns a hitmask using an image's alpha."""
    mask = []
    for x in xrange(image.get_width()):
        mask.append([])
        for y in xrange(image.get_height()):
            mask[x].append(bool(image.get_at((x, y))[3]))
    return mask

if __name__ == '__main__':
    main()

street orientation selection

I'm working on selection of street orientation. The flow works like this:

  1. allow location access; user navigates to the target street curb
  2. map zooms to user puck, and orients to heading
  3. user clicks road
  4. road broadens and color changes to indicate selection
  5. road symbology added to show arrows flowing in the direction closest to the user's orientation
  6. button revealed with "flip" icon; when pressed, reverses selected road symbology
  7. user accepts and kicks to survey screen

design

  • Drawing arrows along a line is not a straightforward task in Mapbox GL. I'm using the Mapbox Streets one-way symbology rendering as a reference, but working to make the arrows much more aggressive so they can be seen on a phone without being confused with an actual one-way (a rough sketch follows below).


  • I'm experimenting with drawing an opaque circle symbol at the starting curb to indicate to the user where they should go to start, based on the selected orientation.
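A hedged sketch of the arrow layer, assuming Mapbox GL JS with an already-initialized map, a 'selected-road' GeoJSON source holding the chosen edge, and a 'direction-arrow' icon added via map.addImage (all three names are placeholders):

// Hypothetical arrow-symbology sketch: repeat an arrow icon along the selected road.
map.addLayer({
  id: 'selected-road-arrows',
  type: 'symbol',
  source: 'selected-road',
  layout: {
    'symbol-placement': 'line',
    'symbol-spacing': 40,             // dense spacing so arrows read on a phone
    'icon-image': 'direction-arrow',
    'icon-size': 1.5,
    'icon-rotation-alignment': 'map',
    'icon-allow-overlap': true,
  },
});

Flipping the selection would mean reversing the source geometry's coordinate order (or swapping to a mirrored icon) so the arrows point the other way.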

SETUP.md

Markdown file with step-by-step instructions for building and flashing a curb wheel.

save survey POST endpoint

We need an endpoint to save survey info to. This endpoint needs to capture a JSON object with the following information:

Regulations can also have image URLs associated with them. I think we should start with the assumption that images are uploaded and we have links to them served from the Pi before we save the survey, so the regulations are valid from the get-go. This can be done behind the scenes, serially.

saving

For saving the surveys, I think we should do this directly on the graph. For each reference (forward and back, separately), we would have an array of surveys that contains a list of Location objects, each with an array of Regulations. Surveys can be treated as a CRDT, with the user able to select the definitive version at digitization, and/or we can default to the most recent survey. Upon adding a survey, we then save a snapshot of the graph to disk as a checkpoint.
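A minimal sketch of such an endpoint, assuming Express and a hypothetical in-memory graph keyed by reference ID; the route, body fields, and snapshot path are placeholders:

// Hypothetical survey-saving sketch: append a survey to its reference on the
// graph, then snapshot the graph to disk as a checkpoint.
const express = require('express');
const fs = require('fs');

const app = express();
app.use(express.json());

const graph = {}; // placeholder: shstRefId -> { surveys: [...] }

app.post('/surveys', (req, res) => {
  const { shstRefId, surveyedAt, locations } = req.body; // locations: [{ distance, regulations }]
  if (!shstRefId || !Array.isArray(locations)) {
    return res.status(400).json({ error: 'shstRefId and locations are required' });
  }
  graph[shstRefId] = graph[shstRefId] || { surveys: [] };
  graph[shstRefId].surveys.push({ surveyedAt, locations });

  // checkpoint: snapshot the whole graph to disk after each accepted survey
  fs.writeFileSync('graph-snapshot.json', JSON.stringify(graph));
  res.status(201).json({ surveys: graph[shstRefId].surveys.length });
});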

sharedstreets graph setup

on wheel graph setup steps:

  • install shst-js v0.15 (with optional osrm)
  • grab osm graph data (to coordinate with @morganherlocker's pbf extraction process)
  • test integration with node web server

calculate bounds and centroid in extractor

Maps can be auto-centered if we calculate the bounds and weighted center of all roads pulled from the extractor (see the sketch after the list).

  • bounds calculated from minx,miny,maxx,maxy of all way nodes
  • centroid calculated by geometric average ([AVG(lon), AVG(lat)])
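A minimal sketch of both calculations, assuming the extractor hands us the way nodes as an array of [lon, lat] pairs:

// Hypothetical sketch: bounds and geometric-average centroid from all way nodes.
function boundsAndCentroid(nodes) { // nodes: [[lon, lat], ...]
  let minX = Infinity, minY = Infinity, maxX = -Infinity, maxY = -Infinity;
  let sumX = 0, sumY = 0;
  for (const [lon, lat] of nodes) {
    minX = Math.min(minX, lon); maxX = Math.max(maxX, lon);
    minY = Math.min(minY, lat); maxY = Math.max(maxY, lat);
    sumX += lon; sumY += lat;
  }
  return {
    bounds: [minX, minY, maxX, maxY],                      // minx, miny, maxx, maxy
    centroid: [sumX / nodes.length, sumY / nodes.length],  // [AVG(lon), AVG(lat)]
  };
}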

GeoJSON format exports

Export a GeoJSON-style set of files from the graph. This needs to be exposed by the server and surfaced to the user through the admin UI.
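A minimal sketch of an export endpoint, assuming Express and a hypothetical graph.allEdges() helper that yields edges with coordinates, a reference ID, and any attached surveys (all placeholder names):

// Hypothetical export sketch: serialize graph edges as a GeoJSON FeatureCollection.
const express = require('express');
const graph = require('./graph'); // placeholder module
const app = express();

app.get('/export/geojson', (req, res) => {
  const features = graph.allEdges().map((edge) => ({
    type: 'Feature',
    geometry: { type: 'LineString', coordinates: edge.coordinates },
    properties: { shstRefId: edge.refId, surveys: edge.surveys || [] },
  }));
  res.json({ type: 'FeatureCollection', features });
});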

orientation selection tweaks

@peterqliu The orientation selection looks great! I find this style more efficient and intuitive than the flip button I was tinkering with.

[Screenshot: orientation selection, 2020-04-29]

Couple questions on the design:

  • A,B work as placeholder symbols, but I think these could either be dropped or replaced with something that gives some information to the user about each direction. Maybe arrows are all you need? @emilyeros might have some ideas here, but a couple possibilities I can imagine that might make sense:
    • forward / back
    • left / right
    • sharedstreets references
  • red/green color symbology will be difficult for some colorblind users to distinguish, and I think red/green can sometimes have overloaded meanings we might be able to avoid (i.e. on/off, enable/disable, etc.). Any ideas for a different palette?

Render MB tiles on the phone

Few different options:

  • Tessera + MB tiles npm module
  • OpenMapTiles (feedback: bulkier/more complicated)
  • other things on Kevin's list

test: UX for wheel + app (LARPing)

Went out to test some practical stuff for the wheel + Pi method. I used the original wheel and a paper-based "app" interface and did some role-playing along my street.

LARPing learnings:

  1. The photo usually isn't taken at the same location as the asset. We need to be able to roll to the parking sign, pause the measurement, and then back up a few feet and take a photo. Then resume rolling.
  2. Pausing the measurement may involve using the kickstand to regain a hand. The kickstand is reasonably solid for slight inclines/declines. For more significant slopes, it was possible to pivot the wheel sideways and then use the kickstand more reliably.
  3. If using the kickstand, the wheel is going to fall over occasionally. I knocked it over a bunch of times from different angles, onto grass and pavement; it held up fine and measurements stayed within ~0.2m, depending on the angle it fell towards. Durability will be a bigger concern as soon as we make modifications and add a Pi, but the wheel itself is resilient.
  4. This is a two-hand surveying process but can be done by a single person. They may wish to have a backpack with a clipboard and Field Papers in it just in case they need to make any notes. (We aren't using Field Papers for mapping in this case; it's just an easy way to create an atlas of the right size to take notes on)

[Image: IMG_1182]

Default feature label/geom options

The app currently has the following options for adding a "zone" to the street:

  • 'Parking'
  • 'No Parking'
  • 'Stopping'
  • 'No Stopping'
  • 'Loading'
  • 'Standing'
  • 'No Standing'
  • 'Travel Lane'

Let's try out the following for our first round of testing. I know this is a mix of assets and regs, but let's give it a go anyway. I've included the geometry type in parentheses.

  • Parking (span)
  • No parking (span)
  • Loading (span)
  • No stopping (span)
  • Curb cut (span)
  • Paint (span)
  • Other zone (span)
  • Payment device (position)
  • Fire hydrant (position)
  • Other point (position)

The button to add these should be "add feature" instead of "add zone".

pbf admin upload UI

Add a UI form that allows the user to drag and drop an osm.pbf file that will be uploaded to the extract POST endpoint.

Revamp README

Content should act as a front door for both city government staff and developers who want to learn more about the project overall. Move technical instructions elsewhere in the repo.

test: consistency between wheel counter and Pi counter, using parity wiring

I wired up my first prototype wheel using the parity wiring approach. This enabled me to see the measurement as tracked by the wheel's original circuit board, and to compare that to the measurement as tracked by the script on the Pi.

Initially, the Pi's measurement output moved perfectly in sync with the measurement on the wheel's display screen (see video).

In field tests, the measurements were sometimes off-sync by half a 0.1m increment. I think this is because the test script I wrote only updates the measurement according to one sensor; it should be updated to track both sensors independently and to update according to both.
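A hedged sketch of tracking both sensors, assuming they behave like a quadrature pair read on two hypothetical GPIO pins via the onoff module; the pin numbers, and the quadrature assumption itself, are placeholders to verify against the actual wiring:

// Hypothetical two-sensor sketch: update the tick count on a change from either
// sensor, using the other sensor's state to decide direction.
const { Gpio } = require('onoff');
const sensorA = new Gpio(17, 'in', 'both'); // placeholder pin
const sensorB = new Gpio(27, 'in', 'both'); // placeholder pin

let a = sensorA.readSync();
let b = sensorB.readSync();
let ticks = 0; // one tick per state change; scale to metres elsewhere

function onChange(changedA, value) {
  if (changedA) a = value; else b = value;
  // quadrature-style decode: after a change on A, forward when A == B;
  // after a change on B, forward when B != A
  const forward = changedA ? (a === b) : (a !== b);
  ticks += forward ? 1 : -1;
}

sensorA.watch((err, value) => { if (!err) onChange(true, value); });
sensorB.watch((err, value) => { if (!err) onChange(false, value); });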

In any case, my measurements were largely consistent in tests, sometimes exactly the same and sometimes with a difference of up to 0.05m (generally when I rolled backwards during a test).
