robmarkcole / hass-deepstack-object

Home Assistant custom component for using Deepstack object detection

Home Page: https://community.home-assistant.io/t/face-and-person-detection-with-deepstack-local-and-free/92041

License: MIT License

Python 100.00%
home-assistant object-detection

hass-deepstack-object's Introduction

HASS-Deepstack-object

Home Assistant custom component for Deepstack object detection. Deepstack is a service which runs in a Docker container and exposes various computer vision models via a REST API. Deepstack object detection can identify 80 different kinds of objects (listed at the bottom of this readme), including people (person), vehicles and animals. Alternatively, a custom object detection model can be used. Deepstack is free to use and fully open source. To run Deepstack you will need a machine with 8 GB RAM, or an NVIDIA Jetson.

On your machine with docker, run Deepstack with the object detection service active on port 80:

docker run -e VISION-DETECTION=True -e API-KEY="mysecretkey" -v localstorage:/datastore -p 80:5000 deepquestai/deepstack
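
If you prefer Docker Compose, the following is a minimal sketch equivalent to the command above (the service name and named volume are illustrative assumptions):

version: "3"
services:
  deepstack:
    image: deepquestai/deepstack
    restart: unless-stopped
    ports:
      - "80:5000"               # expose the API on host port 80
    environment:
      - VISION-DETECTION=True   # enable the object detection endpoint
      - API-KEY=mysecretkey
    volumes:
      - localstorage:/datastore # persist the Deepstack datastore

volumes:
  localstorage: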

Usage of this component

The deepstack_object component adds an image_processing entity whose state is the total count of target objects above a confidence threshold (default 80%). You can have a single target object class, or multiple. The time of the last detection of any target object is in the last target detection attribute. The type and number of objects (of any confidence) are listed in the summary attributes. Optionally, a region of interest (ROI) can be configured, and only objects with their center (represented by an x) within the ROI will be included in the state count. The ROI is displayed as a green box, and objects with their center in the ROI have a red box.
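
As an illustration, here is a minimal sketch of a template binary sensor that turns on when the count is non-zero (the entity name image_processing.deepstack_object_local_file is an assumption; substitute your own):

binary_sensor:
  - platform: template
    sensors:
      deepstack_target_detected:
        friendly_name: Deepstack target detected
        # the entity state is the count of targets; on when at least one is present
        value_template: "{{ states('image_processing.deepstack_object_local_file') | int > 0 }}"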

Also optionally, the processed image can be saved to disk, with bounding boxes showing the locations of detected objects. If save_file_folder is configured, an image with a filename of the format deepstack_object_{source name}_latest.jpg is overwritten on each new detection of a target. Optionally, this image can also be saved with a timestamp in the filename, if save_timestamped_file is configured as True. An event deepstack.object_detected is fired for each detected object that is in the targets list and meets the confidence and ROI criteria. If you are a power user with advanced needs, such as zoning detections or tracking multiple object types, you will need to use the deepstack.object_detected events.

Note that by default the component will not automatically scan images; it requires you to call the image_processing.scan service, e.g. from an automation triggered by motion, as sketched below.
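
A minimal sketch of such an automation (binary_sensor.drive_motion and the image_processing entity name are assumptions):

- alias: Deepstack scan on motion
  trigger:
    - platform: state
      entity_id: binary_sensor.drive_motion
      to: "on"
  action:
    # run a single object detection scan of the camera image
    - service: image_processing.scan
      entity_id: image_processing.deepstack_object_local_file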

Home Assistant setup

Place the custom_components folder in your configuration directory (or add its contents to an existing custom_components folder). Then configure object detection. Important: it is necessary to configure only a single camera per deepstack_object entity. If you want to process multiple cameras, you will therefore need multiple deepstack_object image_processing entities (a sketch follows the example config below).

The component can optionally save snapshots of the processed images. If you would like to use this option, you need to create a folder where the snapshots will be stored. The folder should be in the same folder where your configuration.yaml file is located. In the example below, we have named the folder snapshots.

Add to your Home-Assistant config:

image_processing:
  - platform: deepstack_object
    ip_address: localhost
    port: 80
    api_key: mysecretkey
    # custom_model: mask
    # confidence: 80
    save_file_folder: /config/snapshots/
    save_file_format: png
    save_timestamped_file: True
    always_save_latest_file: True
    scale: 0.75
    # roi_x_min: 0.35
    roi_x_max: 0.8
    # roi_y_min: 0.4
    roi_y_max: 0.8
    crop_to_roi: True
    targets:
      - target: person
      - target: vehicle
        confidence: 60
      - target: car
        confidence: 40
    source:
      - entity_id: camera.local_file
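
If you want to process multiple cameras, repeat the platform entry with one camera per entity; a trimmed sketch (camera.driveway is a hypothetical second camera):

image_processing:
  - platform: deepstack_object
    ip_address: localhost
    port: 80
    targets:
      - target: person
    source:
      - entity_id: camera.local_file
  - platform: deepstack_object
    ip_address: localhost
    port: 80
    targets:
      - target: person
    source:
      - entity_id: camera.driveway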

Configuration variables:

  • ip_address: the ip address of your deepstack instance.
  • port: the port of your deepstack instance.
  • api_key: (Optional) Any API key you have set.
  • timeout: (Optional, default 10 seconds) The timeout for requests to deepstack.
  • custom_model: (Optional) The name of a custom model if you are using one. Don't forget to add the targets from the custom model to the targets list below.
  • confidence: (Optional) The confidence (in %) above which detected targets are counted in the sensor state. Default value: 80
  • save_file_folder: (Optional) The folder to save processed images to. Note that folder path should be added to whitelist_external_dirs
  • save_file_format: (Optional, default jpg, alternatively png) The file format to save images as. png generally results in easier to read annotations.
  • save_timestamped_file: (Optional, default False, requires save_file_folder to be configured) Save the processed image with the time of detection in the filename.
  • always_save_latest_file: (Optional, default False, requires save_file_folder to be configured) Always save the last processed image, even if there were no detections.
  • scale: (optional, default 1.0), range 0.1-1.0, applies a scaling factor to the images that are saved. This reduces the disk space used by saved images, and is especially beneficial when using high resolution cameras.
  • show_boxes: (optional, default True), if False bounding boxes are not shown on saved images
  • roi_x_min: (optional, default 0), range 0-1, must be less than roi_x_max
  • roi_x_max: (optional, default 1), range 0-1, must be more than roi_x_min
  • roi_y_min: (optional, default 0), range 0-1, must be less than roi_y_max
  • roi_y_max: (optional, default 1), range 0-1, must be more than roi_y_min
  • crop_to_roi: (optional, default False), crops the image to the specified roi. May improve object detection accuracy when a region-of-interest is applied
  • source: Must be a camera.
  • targets: The list of target object names and/or object_type, default person. Optionally a confidence can be set per target; if not set, the default confidence is used. Note the minimum possible confidence is 10%.

For the ROI, the (x=0,y=0) position is the top left pixel of the image, and the (x=1,y=1) position is the bottom right pixel of the image. It might seem a bit odd to have y running from top to bottom of the image, but that is the coordinate system used by Pillow.
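
For example, a minimal sketch restricting the count to the bottom-right quadrant of the image (the values are illustrative; these keys go in the deepstack_object platform config shown above):

    roi_x_min: 0.5  # ROI left edge at the horizontal midpoint
    roi_y_min: 0.5  # ROI top edge at the vertical midpoint
    # roi_x_max and roi_y_max default to 1, i.e. the right and bottom edges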

I created an app for exploring the config parameters at https://github.com/robmarkcole/deepstack-ui

Event deepstack.object_detected

An event deepstack.object_detected is fired for each object detected above the configured confidence threshold. This is the recommended way to check the confidence of detections, and to keep track of objects that are not configured as the target (use Developer Tools -> Events -> Listen to events to monitor these events).

An example use case for the event is to get an alert when some rarely appearing object is detected, or to increment a counter. The deepstack.object_detected event payload includes:

  • entity_id : the entity id responsible for the event
  • name : the name of the object detected
  • object_type : the type of the object, from person, vehicle, animal or other
  • confidence : the confidence in detection in the range 0 - 100%
  • box : the bounding box of the object
  • centroid : the centre point of the object
  • saved_file : the path to the saved annotated image, which is the timestamped file if save_timestamped_file is True, or the default saved image if False

An example automation using the deepstack.object_detected event is given below:

- action:
    - data_template:
        caption: "New person detection with confidence {{ trigger.event.data.confidence }}"
        file: "{{ trigger.event.data.saved_file  }}"
      service: telegram_bot.send_photo
  alias: Object detection automation
  condition: []
  id: "1120092824622"
  trigger:
    - platform: event
      event_type: deepstack.object_detected
      event_data:
        name: person
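
To cover the counter use case mentioned above, here is a sketch that increments a counter on each detection (this assumes a counter.cat_detections helper has been created, and cat configured as a target):

- alias: Count cat detections
  trigger:
    - platform: event
      event_type: deepstack.object_detected
      event_data:
        name: cat
  action:
    # increment the helper each time a cat detection event fires
    - service: counter.increment
      entity_id: counter.cat_detections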

Displaying the deepstack latest jpg file

It is easy to display the deepstack_object_{source name}_latest.jpg image with a local_file camera. An example configuration is:

camera:
  - platform: local_file
    file_path: /config/snapshots/deepstack_object_local_file_latest.jpg
    name: deepstack_latest_person

Info on box

The box coordinates and the box center (centroid) can be used to determine whether an object falls within a defined region-of-interest (ROI). This can be useful to include/exclude objects by their location in the image.

  • The box is defined by the tuple (y_min, x_min, y_max, x_max) (equivalent to image top, left, bottom, right) where the coordinates are floats in the range [0.0, 1.0] and relative to the width and height of the image.
  • The centroid is in (x,y) coordinates where (0,0) is the top left hand corner of the image and (1,1) is the bottom right corner of the image.
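
A sketch of an automation condition that passes only when a detected object's centroid falls within a chosen region (this assumes the event's centroid is exposed as a mapping with x and y keys; the bounds are illustrative):

condition:
  - condition: template
    # pass only if the centroid lies in the bottom-right quadrant
    value_template: >
      {{ trigger.event.data.centroid.x > 0.5 and
         trigger.event.data.centroid.y > 0.5 }}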

Browsing saved images in HA

I highly recommend using the Home Assistant Media Player Browser to browse and preview processed images. Add to your config something like:

homeassistant:
  # ... your existing homeassistant configuration ...
  whitelist_external_dirs:
    - /config/images/
  media_dirs:
    local: /config/images/

media_source:

And configure Deepstack to use the above directory for save_file_folder. Saved images can then be browsed from the HA front end via the media browser.

Face recognition

For face recognition with Deepstack use https://github.com/robmarkcole/HASS-Deepstack-face

Support

For code related issues such as suspected bugs, please open an issue on this repo. For general chat or to discuss Home Assistant specific issues related to configuration or use cases, please use this thread on the Home Assistant forums.

Docker tips

Add the -d flag to the docker run command to run the container in the background (detached).

FAQ

Q1: I get the following warning, is this normal?

2019-01-15 06:37:52 WARNING (MainThread) [homeassistant.loader] You are using a custom component for image_processing.deepstack_face which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you do experience issues with Home Assistant.

A1: Yes, this is normal.


Q4: What are the minimum hardware requirements for running Deepstack?

A4: Based on my experience, I would allow 0.5 GB RAM per model.


Q5: Can object detection be configured to detect car/car colour?

A5: The list of detected object classes is given at the end of this page. There is no support for detecting the colour of an object.


Q6: I am getting an error from Home Assistant: Platform error: image_processing - Integration deepstack_object not found

A6: This can happen when you are running in Docker/Hassio, and indicates that one of the dependencies isn't installed. It is necessary to reboot your Hassio device, or rebuild your Docker container. Note that just restarting Home Assistant will not resolve this.


Objects

The following lists all valid target object names:

person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light,
fire hydrant, stop_sign, parking meter, bench, bird, cat, dog, horse, sheep, cow,
elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee,
skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard,
surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana,
apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch,
potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard,
cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase,
scissors, teddy bear, hair dryer, toothbrush.

Objects are grouped by the following object_type:

  • person: person
  • animal: bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe
  • vehicle: bicycle, car, motorcycle, airplane, bus, train, truck
  • other: any object that is not in person, animal or vehicle
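
For example, a sketch of a targets list that counts any animal, plus cars at a higher confidence bar (the values are illustrative):

    targets:
      - target: animal   # matches any class in the animal object_type group
      - target: car
        confidence: 75   # count cars only at 75% confidence or above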

Development

Currently only the helper functions are tested, using pytest.

  • python3 -m venv venv
  • source venv/bin/activate
  • pip install -r requirements-dev.txt
  • venv/bin/py.test custom_components/deepstack_object/tests.py -vv -p no:warnings

Videos of usage

Check out this excellent video of usage from Everything Smart Home.

Also see the video of a presentation I did to the IceVision community on deploying Deepstack on a Jetson nano.

hass-deepstack-object's People

Contributors

artyom-smirnov, covid10, dependabot[bot], dwradcliffe, jodur, priva28, robmarkcole, shbatm, tjntomas, wizmo2


hass-deepstack-object's Issues

Bug: images always saved

Currently images are always being saved, regardless of whether there is a target in the image or not

Add last_detection attribute?

Add attribute to display the time of the last detection of an object? Alternatively we could place the timestamp in the processed image.

image_processing.object_detected events volume quite large

I was poking around at things, and noticed that my Home Assistant database was somewhat larger than I expected. While I have some cleaning up to do elsewhere, there's a very large number of events from this integration. Behold:

sqlite> select event_type,count(event_type) from events group by event_type;
automation_triggered|444
call_service|1468
image_processing.file_saved|37
image_processing.object_detected|110861
logbook_entry|17
state_changed|169292
sqlite>

Most of these are detections with quite low confidence.

sqlite> select * from events where event_type != 'state_changed' order by time_fired desc limit 20;
57841181|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam7_coral", "object": "tie", "confidence": 6.6}|LOCAL|2019-08-25 17:28:03.447058|2019-08-25 17:28:03.470214|dff50d142b354c169c2ac287a6edc783|
57841180|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam7_coral", "object": "tie", "confidence": 6.6}|LOCAL|2019-08-25 17:28:03.446988|2019-08-25 17:28:03.464997|d13d086d11f6457d81800150f8adb10a|
57841179|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam7_coral", "object": "bench", "confidence": 9.0}|LOCAL|2019-08-25 17:28:03.446906|2019-08-25 17:28:03.458613|23e353387a984a20bc2e57064530412e|
57841178|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam7_coral", "object": "bench", "confidence": 12.1}|LOCAL|2019-08-25 17:28:03.446749|2019-08-25 17:28:03.452756|e9e9aa9c854e4e4b8ae270b81ab22cbe|
57841176|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam9_coral", "object": "car", "confidence": 16.0}|LOCAL|2019-08-25 17:28:03.347528|2019-08-25 17:28:03.352460|0c1287f834854763804f7cc52e942105|
57841174|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam8_coral", "object": "tv", "confidence": 6.6}|LOCAL|2019-08-25 17:28:03.274874|2019-08-25 17:28:03.286495|1b18abd6361a4dbea42ee6d641ebe39d|
57841173|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam8_coral", "object": "car", "confidence": 34.0}|LOCAL|2019-08-25 17:28:03.274721|2019-08-25 17:28:03.278598|2b22fe4ae0f64a38970ac3536ffd9db5|
57841171|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam6_coral", "object": "potted plant", "confidence": 12.1}|LOCAL|2019-08-25 17:28:03.143938|2019-08-25 17:28:03.213783|dbc9fa1a333e4ee183a21522f12eb711|
57841170|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam6_coral", "object": "potted plant", "confidence": 12.1}|LOCAL|2019-08-25 17:28:03.143875|2019-08-25 17:28:03.208738|d6f30dedc01d4cf69db04f0cc3a2df72|
57841169|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam6_coral", "object": "car", "confidence": 16.0}|LOCAL|2019-08-25 17:28:03.143812|2019-08-25 17:28:03.202402|976b6e6516454d83b83ac50db4b3520e|
57841168|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam6_coral", "object": "chair", "confidence": 16.0}|LOCAL|2019-08-25 17:28:03.143750|2019-08-25 17:28:03.196763|f4d08f5afb0044fca90bad6ccf80de14|
57841167|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam6_coral", "object": "potted plant", "confidence": 16.0}|LOCAL|2019-08-25 17:28:03.143686|2019-08-25 17:28:03.189965|e904fcf2753947f1adcd8a95954c6451|
57841166|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam6_coral", "object": "car", "confidence": 16.0}|LOCAL|2019-08-25 17:28:03.143623|2019-08-25 17:28:03.183431|65051ac45d7c426e8213a1344f236418|
57841165|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam6_coral", "object": "car", "confidence": 21.1}|LOCAL|2019-08-25 17:28:03.143544|2019-08-25 17:28:03.175894|e18e70b28ab049a8bee3cb6f312057e8|
57841164|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam6_coral", "object": "car", "confidence": 34.0}|LOCAL|2019-08-25 17:28:03.143449|2019-08-25 17:28:03.170582|cc5f2f1170f5440e9b27f8346482f605|
57841163|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam6_coral", "object": "potted plant", "confidence": 50.0}|LOCAL|2019-08-25 17:28:03.143374|2019-08-25 17:28:03.164434|112eee9f915f425a8c4e6c0503961fa9|
57841162|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam6_coral", "object": "chair", "confidence": 58.2}|LOCAL|2019-08-25 17:28:03.143228|2019-08-25 17:28:03.146986|4607dcb4a4344848bd8a83353da42471|
57841160|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam5_coral", "object": "bench", "confidence": 12.1}|LOCAL|2019-08-25 17:28:02.819882|2019-08-25 17:28:02.910808|e083b66394944b52901a86da285e613f|
57841159|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam5_coral", "object": "potted plant", "confidence": 12.1}|LOCAL|2019-08-25 17:28:02.819814|2019-08-25 17:28:02.903817|214259526cc7437aa46d1f8924b0ba87|
57841158|image_processing.object_detected|{"classifier": "deepstack_object", "entity_id": "image_processing.cam5_coral", "object": "potted plant", "confidence": 12.1}|LOCAL|2019-08-25 17:28:02.819745|2019-08-25 17:28:02.887147|037e2411a3984ce18de031efbe093731|
sqlite>

It might be nice to reduce the volume of these events that are generated and thrown through the Home Assistant state machine. What comes to my mind is some combination of these to reduce the overall volume of events generated:

  • generate a single event, carrying a vector of the detected objects.
  • have some minimum threshold in the confidence for an object to be considered
  • maybe provide a list of the object types that are of interest...
  • ...maybe the count of these objects that were detected could be the state value?

I'm not sure what's best in the spirit of things here, or the impact on other models that might work differently. Of course, I can also blacklist that service from the recorder database, but there remains a concern about the default behavior of shoving all the events into the database that we might want to consider.

Consider how to support multiple cameras

From @lmamakos
It’d also be nice to be able to either specify a path for each camera, or to include the camera name in the saved file. If I have multiple cameras going, I think it would be neat to have a file camera set up showing the latest image from each camera. Right now, the same “latest” file is overwritten by each configured camera.

Right now the solution is to configure the integration separately for each camera, so you specify a different save folder. Your suggestion here might be better. However, I am also considering whether the functionality of saving images with bounding boxes should be pushed into the Home Assistant repo, as it could be shared with the tensorflow integration etc.

Automatic Scan?

I have Deepstack all set up and activated along with the components, but I didn't think it was working as nothing was showing. Then I manually ran the image_processing.scan service on image_processing.person_detector and image_processing.face_detector and it started working... but ONLY when I use the image_processing.scan service.

Is this right? I assume because the config has a scan_interval that it should automatically scan.

Refactor required

Need to address:

Integrations need to be in their own folder. Change image_processing/deepstack_object.py to deepstack_object/image_processing.py. This will stop working soon.

Abstractions

What are best abstractions?

  • Classes
  • Target classes
  • Bounding boxes

Break repo in two

Split out face and object, so as to support HACS and improve maintainability

Create sensor for each label?

Rather than defining a target, how about creating a sensor for EACH label/class identified. The state of the sensor would be the last_detection time of that class. This makes sense as we usually want to know when something is seen; often the count is a secondary requirement.

Project discussion

Subject: Discussion of project I wish to write up.
Project: Get alerts when someone is at my front door, similar to this project but with a couple of significant improvements.

(1) Works with SIDE PROFILE.
(2) Works with ONVIF/RTSP camera

From discussion with @OlafenwaMoses, (1) is straightforward so long as side-profile images are used in training; (2) is TBD.

Nice to have (not a requirement): works with a Movidius/Coral USB stick.

lots of errors thrown from component in Home Assistant log

I notice a bunch of errors in the Home Assistant log like this:

2019-08-06 00:05:06 ERROR (MainThread) [homeassistant.helpers.entity] Update for image_processing.cam8_coral fails
Traceback (most recent call last):
  File "/usr/src/app/homeassistant/helpers/entity.py", line 221, in async_update_ha_state
    await self.async_device_update()
  File "/usr/src/app/homeassistant/helpers/entity.py", line 378, in async_device_update
    await self.async_update()
  File "/usr/src/app/homeassistant/components/image_processing/__init__.py", line 132, in async_update
    await self.async_process_image(image.content)
  File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/deepstack_object/image_processing.py", line 157, in process_image
    predictions_json = response.json()["predictions"]
KeyError: 'predictions'

which seems to be coming from here

predictions_json = response.json()["predictions"]

The error occurs when a response comes back from the coral-app.py server and there are no detections at all. The HTTP response is '200 OK', but there doesn't appear to be any 'predictions' key inserted into the response. That is only included in the response at https://github.com/robmarkcole/coral-pi-rest-server/blob/84d54ba69b5b909e7a27ec5cea0b36ab853095c6/coral-app.py#L77 if there are any predictions returned from the engine. It seems like the response would only have the success property as False, but that's not checked for.

So, I'm not sure what the right thing to do here might be, as I'm not sure what the deepstack API does. It looks like the block of code at

self._state = None
self._targets_confidences = []
self._predictions = {}

really only takes effect if there is no response... and is it safe to move that code up above as the default return value? Here's where my unfamiliarity with the Home Assistant internals fails me - should that state be modified or left as-is in an error condition?

I'm also not sure if it makes sense, or is safe, to return a null predictions array from the coral-app.py service. Again, this is also related to the deepstack container's API behavior.

Return object centroid

To determine whether an object is in a region of interest (ROI), the centroid location of the object is required; a simple rule can then be applied to check whether the centroid falls within the ROI. Return the centroid location and consider how best to implement the rule logic (automation?).

If nothing is detected from deepstack, set HA state to 0

If absolutely nothing is detected, the result from the API is:

{"success":true,"predictions":[]}

but HA image_processing.blah shows state `unknown`.

Is there a way to return state = 0? That way, if `unknown` is shown, then other problems with docker are detectable.

Thanks

Error when recognising object

I get this error whenever I run the image_processing.scan service. How do I fix this?

Update for image_processing.person_detector fails
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/homeassistant/helpers/entity.py", line 220, in async_update_ha_state
    await self.async_device_update()
  File "/usr/local/lib/python3.7/site-packages/homeassistant/helpers/entity.py", line 375, in async_device_update
    await self.async_update()
  File "/usr/local/lib/python3.7/site-packages/homeassistant/components/image_processing/__init__.py", line 132, in async_update
    await self.async_process_image(image.content)
  File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/deepstack_object/image_processing.py", line 115, in process_image
    predictions_json = response.json()["predictions"]
KeyError: 'predictions'

Platform error image_processing.deepstack_object - No module named 'deepstack'

Updated to latest Deepstack object module via HACS and now receiving the following error:

Platform error image_processing.deepstack_object - No module named 'deepstack'

Here is my config:

  - platform: deepstack_object
    ip_address: 192.168.1.103
    port: 5000
    scan_interval: 10000
    save_file_folder: /config/www/deepstack_car_images
    target: car
    source:
      - entity_id: camera.front_of_house
        name: car_detector

Module is showing as installed in HACS and I removed and re-installed to make sure.

Can't use server control to reboot due to the error thrown up, but when I force restart HA from docker I can see the following image processing error in the dev logs:

Platform error: image_processing
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/config.py", line 767, in async_process_component_config
    platform = p_integration.get_platform(domain)
  File "/usr/src/homeassistant/homeassistant/loader.py", line 235, in get_platform
    "{}.{}".format(self.pkg_path, platform_name)
  File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/config/custom_components/deepstack_object/image_processing.py", line 15, in <module>
    import deepstack.core as ds
ModuleNotFoundError: No module named 'deepstack'

KeyError: 'predictions'

I see the error below when there is a problem with the camera:

Traceback (most recent call last):
  File "/Users/robincole/Documents/GitHub/home-assistant/homeassistant/helpers/entity.py", line 221, in async_update_ha_state
    await self.async_device_update()
  File "/Users/robincole/Documents/GitHub/home-assistant/homeassistant/helpers/entity.py", line 378, in async_device_update
    await self.async_update()
  File "/Users/robincole/Documents/GitHub/home-assistant/homeassistant/components/image_processing/__init__.py", line 132, in async_update
    await self.async_process_image(image.content)
  File "/Users/robincole/anaconda3/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/Users/robincole/.homeassistant/custom_components/deepstack_object/image_processing.py", line 157, in process_image
    predictions_json = response.json()["predictions"]
KeyError: 'predictions'

KeyError on Face_Detection

Using v0.4 I get the following error:

[homeassistant.helpers.entity] Update for image_processing.face_counter fails
Traceback (most recent call last):
  File "/usr/src/app/homeassistant/helpers/entity.py", line 221, in async_update_ha_state
    await self.async_device_update()
  File "/usr/src/app/homeassistant/helpers/entity.py", line 347, in async_device_update
    await self.async_update()
  File "/usr/src/app/homeassistant/components/image_processing/__init__.py", line 138, in async_update
    await self.async_process_image(image.content)
  File "/usr/local/lib/python3.6/concurrent/futures/thread.py", line 56, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/image_processing/deepstack_face.py", line 167, in process_image
    predictions_json = response.json()["predictions"]
KeyError: 'predictions'

I'm running HA 85.1

My Config:
  - platform: deepstack_face
    ip_address: my_ip
    port: 5000
    scan_interval: 30
    source:
      - entity_id: camera.local_file
        name: face_counter

`target` extra key not accepted. Will become breaking change

Persistent message generated by HA:

Your configuration contains extra keys that the platform does not support (but were silently accepted before 0.88). Please find and remove the following. This will become a breaking change. 

- [target]. See /config/configuration.yaml, line 54).

Docker HA v0.89.0

My installation does not work

Hello, I have followed all the instructions but I cannot get the recognition to work. I have placed the test photo in the save folder but it does not work.

My container log continuously shows:

./run: line 2: 5198 Aborted (core dumped) python3 /app/intelligence.py &> /dev/null
./run: line 2: 5213 Aborted (core dumped) python3 /app/intelligence.py &> /dev/null
./run: line 2: 5228 Aborted (core dumped) python3 /app/intelligence.py &> /dev/null
./run: line 2: 5241 Aborted (core dumped) python3 /app/intelligence.py &> /dev/null
./run: line 2: 5256 Aborted (core dumped) python3 /app/intelligence.py &> /dev/null

Awesome work!! Thanks

Implement save file with bounding boxes

Implement _save_image() so that the image + bounding boxes is saved, allowing it to be used in automations.

  • Limit boxes to only the configured target object? Would prevent multiple overlapping boxes being added
  • Fire an event with the name of the saved file? -> We DO have folder_watcher for this use case, but since we know the file path we might as well publish it, allowing an automation to display it using the local file camera
  • Encode metadata (e.g. number of persons) in filename? Not necessary since we know the target is in the image
