
sjyk / deeplens-cv


Visual Stream Processing For The DeepLens System

License: MIT License


deeplens-cv's Introduction

DeepLens

DeepLens is a temporally and spatially partitioned storage manager for video analytics.

Installation

  1. Install Dependencies
sudo apt update
sudo apt install postgresql postgresql-contrib
pip3 install -r requirements
  2. Configure PostgreSQL

Make sure the PostgreSQL server is running and that an empty database named header has been created.
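One way to create that empty database from the shell, assuming a stock PostgreSQL install with the default postgres superuser and local peer authentication (adjust for your setup):

```shell
# Create the empty metadata database named "header"
# (assumes the default "postgres" superuser account).
sudo -u postgres createdb header
```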

  3. Run experiments

python3 experiments/exp1_traffic_camera.py https://example.com/video.mp4

deeplens-cv's People

Contributors

j2zhao, knagaitsev, primkey7607, sjyk, swjz, xuefeng11


deeplens-cv's Issues

Foreign key constraint violation

I got an error when running put() for many videos in parallel. It looks like the background clips are not yet present in the clip table when we insert them into the background table of the metadata DB. This is on the master branch.

Traceback (most recent call last):
  File "/home/azureuser/deeplens-cv/distributed/exp10_queue_worker.py", line 70, in <module>
    main()
  File "/home/azureuser/deeplens-cv/distributed/exp10_queue_worker.py", line 65, in main
    channel.start_consuming()
  File "/home/azureuser/.local/lib/python3.6/site-packages/pika/adapters/blocking_connection.py", line 1865, in start_consuming
    self._process_data_events(time_limit=None)
  File "/home/azureuser/.local/lib/python3.6/site-packages/pika/adapters/blocking_connection.py", line 2026, in _process_data_events
    self.connection.process_data_events(time_limit=time_limit)
  File "/home/azureuser/.local/lib/python3.6/site-packages/pika/adapters/blocking_connection.py", line 833, in process_data_events
    self._dispatch_channel_events()
  File "/home/azureuser/.local/lib/python3.6/site-packages/pika/adapters/blocking_connection.py", line 567, in _dispatch_channel_events
    impl_channel._get_cookie()._dispatch_events()
  File "/home/azureuser/.local/lib/python3.6/site-packages/pika/adapters/blocking_connection.py", line 1493, in _dispatch_events
    evt.properties, evt.body)
  File "/home/azureuser/deeplens-cv/distributed/exp10_queue_worker.py", line 60, in callback
    runFullPut(body.decode("utf-8"), batch_size=72)
  File "/home/azureuser/deeplens-cv/distributed/exp10_queue_worker.py", line 49, in runFullPut
    put()
  File "/home/azureuser/deeplens-cv/distributed/exp10_queue_worker.py", line 44, in put
    'batch_size': batch_size, 'num_processes': 4, 'background_scale': 1})
  File "/home/azureuser/deeplens-cv/deeplens/full_manager/full_manager.py", line 126, in put
    write_video_single(conn, filename, target, self.local_dir, self.remote_dir, self.content_splitter, tagger, stream=stream, args=args, background_scale=args['background_scale'], rows=rows, hwang=hwang)
  File "/home/azureuser/deeplens-cv/deeplens/full_manager/full_videoio.py", line 367, in write_video_single
    full_width, full_height, start_time, start_time + time_block)
  File "/home/azureuser/deeplens-cv/deeplens/full_manager/full_videoio.py", line 98, in _new_headers_batch
    insert_background_header(conn, ids[0], ids[i], name)
  File "/home/azureuser/deeplens-cv/deeplens/full_manager/full_videoio.py", line 511, in insert_background_header
    c.execute("INSERT INTO background VALUES ('%d', '%d', '%s')" % (background_id, clip_id, video_name))
psycopg2.errors.ForeignKeyViolation: insert or update on table "background" violates foreign key constraint "background_clip_id_fkey"
DETAIL:  Key (clip_id, video_name)=(8458666960472462466, test) is not present in table "clip".
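A likely fix is to insert the parent clip row and the dependent background row in the same transaction, parent first, and to use parameterized queries instead of % string formatting. Below is a minimal, self-contained sketch of that ordering; it uses sqlite3 so it runs anywhere (the repo itself talks to PostgreSQL via psycopg2), and the table columns are assumptions inferred from the traceback, not the real schema:

```python
import sqlite3

# Self-contained sketch (sqlite3 stands in for PostgreSQL): insert the clip
# row before any background row that references it, inside one transaction,
# so the foreign key constraint is never violated. Column layout is an
# assumption based on the traceback, not the repo's actual schema.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute(
    "CREATE TABLE clip (clip_id INTEGER, video_name TEXT, "
    "PRIMARY KEY (clip_id, video_name))")
conn.execute(
    "CREATE TABLE background (background_id INTEGER, clip_id INTEGER, "
    "video_name TEXT, FOREIGN KEY (clip_id, video_name) "
    "REFERENCES clip (clip_id, video_name))")

def insert_headers(conn, background_id, clip_id, video_name):
    # Parent row first, then the dependent row; "with conn" wraps both
    # inserts in a single transaction. Parameterized queries (?) avoid
    # the injection risk of building SQL with % formatting.
    with conn:
        conn.execute("INSERT INTO clip VALUES (?, ?)", (clip_id, video_name))
        conn.execute("INSERT INTO background VALUES (?, ?, ?)",
                     (background_id, clip_id, video_name))

insert_headers(conn, 1, 8458666960472462466, "test")
print(conn.execute("SELECT COUNT(*) FROM background").fetchone()[0])  # 1
```

With parallel put() workers, the same idea applies per worker: each worker's clip insert must commit in the same transaction as (and before) its background insert, so no other transaction ever observes the child without the parent.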

TODO list

  • not iterating between batches and starting over; use a new video reader each time with an offset
  • splitting function might not work with real data
  • crop['all'], e.g. frame-by-frame data
  • joining videos isn't working
  • the union of all objects is almost the same as the entire video
  • state of storage: restart each time
  • better queries with overlapping objects
  • make limit work

@j2zhao @swjz

Simple FullStorageManager tagger

The current examples create a FullStorageManager by doing something like this:

manager = FullStorageManager(CustomTagger(FixedCameraBGFGSegmenter().segment, batch_size=100), CropSplitter(), 'videos')

If I don't want the bounding box to be changed at all, what is the simplest way to pass in options that accomplish this? Using FixedCameraBGFGSegmenter().segment seems to create a foreground bounding box that slightly cuts off portions of the frames.
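One workaround would be to swap the segmenter for a function that always labels the entire frame as foreground, so the splitter has nothing to crop. The sketch below is hypothetical: the callable signature and the label/bounding-box format are assumptions modeled loosely on how FixedCameraBGFGSegmenter().segment is wrapped by CustomTagger, not the confirmed interface:

```python
# Hypothetical full-frame "segmenter" (signature and return format are
# assumptions, not the confirmed CustomTagger/segmenter interface).
def full_frame_segment(vstream, batch_size, width=1920, height=1080):
    # Tag the batch with a single bounding box covering the entire frame,
    # so the crop step leaves frames untouched. In practice, width/height
    # should come from the stream's metadata rather than being hardcoded.
    return [{'label': 'foreground', 'bb': (0, 0, width, height)}]

# Hypothetical drop-in usage:
# manager = FullStorageManager(CustomTagger(full_frame_segment, batch_size=100),
#                              CropSplitter(), 'videos')
```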

Issues with FullManager get

I know the API is constantly changing, but the way I was previously using put and get with FullManager no longer seems to work (I made some slight changes for the new put args). Am I doing anything wrong below? I get a video back, but it seems to have no frames.

from deeplens.struct import *
from deeplens.utils import *
from deeplens.dataflow.map import *
from deeplens.full_manager.full_manager import *
from deeplens.full_manager.condition import Condition
from deeplens.full_manager.full_video_processing import CropSplitter
from deeplens.tracking.background import FixedCameraBGFGSegmenter
from experiments.environ import *
from deeplens.utils.testing_utils import *

def do_filter(conn, video_name):
    c = conn.cursor()
    c.execute("SELECT clip_id FROM clip WHERE video_name = '%s'" % (video_name))
    res = c.fetchall()
    print(res)
    return [cl[0] for cl in res]

manager = FullStorageManager(CustomTagger(FixedCameraBGFGSegmenter().segment, batch_size=100), CropSplitter(), 'test_store')
manager.put('test.mp4', 'test1', {'frame_rate': 30, 'encoding': XVID, 'limit': -1, 'sample': 1.0, 'offset': 0, 'batch_size': 100, 'num_processes': 1, 'background_scale': 1}, False, False)

res = manager.get('test1', Condition(label='foreground', custom_filter=do_filter))
vid = res[0]
# doesn't print anything because there are no frames
for frame in vid:
    print('frame')
# doesn't play anything:
play(vid)

Recursion related performance issues

  1. Python's default maximum recursion depth is around 1000. When buffer_size exceeds that number, our system silently drops all subsequent frames. One workaround is to raise the limit with sys.setrecursionlimit().

  2. Our system consumes a huge amount of memory when the recursion is deep, and thus becomes I/O bound. Virtual memory usage can reach 175 GB on my laptop when I set buffer_size to 10000.
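The workaround in point 1 can be sketched as below, sizing the limit from buffer_size with some headroom. The exact headroom required depends on the depth of the pipeline, so treat the numbers as assumptions:

```python
import sys

buffer_size = 10000

# Raise the interpreter's recursion limit so a pipeline buffering
# `buffer_size` frames is not cut off by the default limit (~1000).
# The +100 headroom for non-pipeline stack frames is an assumption;
# deeper call chains may need more.
needed = buffer_size + 100
if sys.getrecursionlimit() < needed:
    sys.setrecursionlimit(needed)
```

Note this only avoids the silent frame drops; it does not address the memory blow-up from point 2, which raising the limit can actually make worse.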

batch_size and other storage filtering questions

It looks like with the recent API changes the put and get operations of managers are supposed to work like this:

def filter(header_data):
    return True
args = {'encoding': XVID, 'sample': 1.0, 'offset': 0, 'limit': 1000, 'batch_size': 10}
manager = SimpleStorageManager('test_store')
manager.put('test.mp4', 'test1', args)
res = manager.get('test1', filter, {'batch_size': 1000})
  1. Why does manager.get need batch_size? It appears currently like it is just grabbing the number of frames specified there from the start of the video, so if you specify {'batch_size': 5} you would get at most 5 frames, regardless of how many frames are actually stored. This does not seem intuitive or useful, but maybe I am missing why it is actually needed.

I think it would be ideal to specify everything using the filter condition function that is passed into get, so if you want frames around numbers 100 to 200 you would say:

def filter(header_data):
    return header_data['start'] >= 100 and header_data['end'] <= 200
  2. Is this the ultimate goal of passing in condition filters, that you can specify exactly what you need, then the manager will ideally only read the storage files that match those conditions?

  3. If so, should this filtering function also be able to grab only part of frames, such as the top 100x100 pixels? Or will it grab the whole frame, then deal with getting only a certain region? My guess is that this has to do with what type of header you are working with in the filter function.
