
Comments (7)

NickM-27 commented on September 27, 2024

You cannot mix detectors.


dakotasoukup commented on September 27, 2024

I reran it with just the coral in the config file and it came back with the same thing:

2024-02-15 16:16:34.606281089 Process detector:coral:
2024-02-15 16:16:34.607337884 Traceback (most recent call last):
2024-02-15 16:16:34.607363522 File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
2024-02-15 16:16:34.607365147 self.run()
2024-02-15 16:16:34.607369887 File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
2024-02-15 16:16:34.607372680 self._target(*self._args, **self._kwargs)
2024-02-15 16:16:34.607380237 File "/opt/frigate/frigate/object_detection.py", line 125, in run_detector
2024-02-15 16:16:34.607381720 detections = object_detector.detect_raw(input_frame)
2024-02-15 16:16:34.607383202 File "/opt/frigate/frigate/object_detection.py", line 75, in detect_raw
2024-02-15 16:16:34.607400744 return self.detect_api.detect_raw(tensor_input=tensor_input)
2024-02-15 16:16:34.607402529 File "/opt/frigate/frigate/detectors/plugins/edgetpu_tfl.py", line 59, in detect_raw
2024-02-15 16:16:34.607421888 self.interpreter.set_tensor(self.tensor_input_details[0]["index"], tensor_input)
2024-02-15 16:16:34.607436461 File "/usr/lib/python3/dist-packages/tflite_runtime/interpreter.py", line 572, in set_tensor
2024-02-15 16:16:34.607437949 self._interpreter.SetTensor(tensor_index, value)
2024-02-15 16:16:34.607446205 ValueError: Cannot set tensor: Dimension mismatch. Got 300 but expected 320 for dimension 1 of input 7.

And the cameras did not load this time.

It does say in the Frigate docs that "Frigate provides the following builtin detector types: cpu, edgetpu, openvino, tensorrt, and rknn. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras."


NickM-27 commented on September 27, 2024

Yes, as in multiple corals for example.
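For example, two PCIe Corals would be declared as separate detectors, roughly like this (a sketch based on the docs; the exact device strings depend on how many TPUs you have and how they enumerate):

```yaml
detectors:
  coral1:
    type: edgetpu
    device: pci:0  # first PCIe/M.2 Coral
  coral2:
    type: edgetpu
    device: pci:1  # second PCIe/M.2 Coral
```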

What config do you have right now?


dakotasoukup commented on September 27, 2024

Okay. Current config is:

mqtt:
  host: 192.168.0.170
  port: 1883 # <---- same mqtt broker that home assistant uses
  user: dakota
  password: foster47290

ffmpeg:
  hwaccel_args: preset-vaapi

detectors:
  coral:
    type: edgetpu
    device: pci

detect:
  # Optional: width of the frame for the input with the detect role (default: use native stream resolution)
  width: 1280
  # Optional: height of the frame for the input with the detect role (default: use native stream resolution)
  height: 720
  # Optional: desired fps for your camera for the input with the detect role (default: shown below)
  # NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
  fps: 5
  # Optional: enables detection for the camera (default: True)
  enabled: True
  # Optional: Number of consecutive detection hits required for an object to be initialized in the tracker. (default: 1/2 the frame rate)
  min_initialized: 2
  # Optional: Number of frames without a detection before Frigate considers an object to be gone. (default: 5x the frame rate)
  max_disappeared: 25
  # Optional: Configuration for stationary object tracking
  stationary:
    # Optional: Frequency for confirming stationary objects (default: same as threshold)
    # When set to 1, object detection will run to confirm the object still exists on every frame.
    # If set to 10, object detection will run to confirm the object still exists on every 10th frame.
    interval: 50
    # Optional: Number of frames without a position change for an object to be considered stationary (default: 10x the frame rate or 10s)
    threshold: 50
    # Optional: Define a maximum number of frames for tracking a stationary object (default: not set, track forever)
    # This can help with false positives for objects that should only be stationary for a limited amount of time.
    # It can also be used to disable stationary object tracking. For example, you may want to set a value for person, but leave
    # car at the default.
    # WARNING: Setting these values overrides default behavior and disables stationary object tracking.
    #          There are very few situations where you would want it disabled. It is NOT recommended to
    #          copy these values from the example config into your config unless you know they are needed.
    max_frames:
      # Optional: Default for all object types (default: not set, track forever)
      default: 3000
      # Optional: Object specific values
      objects:
        person: 1000
  # Optional: Milliseconds to offset detect annotations by (default: shown below).
  # There can often be latency between a recording and the detect process,
  # especially when using separate streams for detect and record.
  # Use this setting to make the timeline bounding boxes more closely align
  # with the recording. The value can be positive or negative.
  # TIP: Imagine there is an event clip with a person walking from left to right.
  #      If the event timeline bounding box is consistently to the left of the person
  #      then the value should be decreased. Similarly, if a person is walking from
  #      left to right and the bounding box is consistently ahead of the person
  #      then the value should be increased.
  # TIP: This offset is dynamic so you can change the value and it will update existing
  #      events, this makes it easy to tune.
  # WARNING: Fast moving objects will likely not have the bounding box align.
  annotation_offset: 0
      
#Global Object Settings
objects:
  track:
    - person
    - car
    - dog


record:
  enabled: True
  retain:
    days: 7
    mode: all
  events:
    retain:
      default: 30
      mode: motion

snapshots:
  enabled: True
  retain:
    default: 30


model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  labelmap_path: /openvino-model/coco_91cl_bkgr.txt

birdseye:
  enabled: True
  mode: continuous
  
cameras:
  Backyard: # <--- this will be changed to your actual camera later
    enabled: True
    ffmpeg:
      inputs:
        - path: rtsp://admin:[email protected]/media/video2
          roles:
            - detect
            - rtmp # <- deprecated, recommend using restream instead
        - path: rtsp://admin:[email protected]/media/video1
          roles:
            - record  
    motion:
       mask:    
        - 0,0,0,85,355,79,361,0            
  front: # <--- this will be changed to your actual camera later
    enabled: True
    ffmpeg:
      inputs:
        - path: rtsp://Admin:[email protected]/stream1?username=admin&password=E10ADC3949BA59ABBE56E057F20F883E
          roles:
            - detect
            - rtmp # <- deprecated, recommend using restream instead
        - path: rtsp://Admin:[email protected]/stream0?username=admin&password=E10ADC3949BA59ABBE56E057F20F883E
          roles:
            - record 
    motion:
        mask:  
            - 754,720,743,620,1280,667,1280,720            

  front2: # <--- this will be changed to your actual camera later
    enabled: True
    ffmpeg:
      inputs:
        - path: rtsp://admin:[email protected]:554/Streaming/channels/2
          roles:
            - detect
            - rtmp # <- deprecated, recommend using restream instead
        - path: rtsp://admin:[email protected]:554
          roles:
            - record   
    objects:
      filters:
        car:
          mask:
                 - 0,0,0,251,314,403,600,405,1163,156    
    motion:
         mask:
              - 0,267,0,0,1280,0,1280,511                       

  driveway: # <--- this will be changed to your actual camera later
    enabled: True
    ffmpeg:
      inputs:
        - path: rtsp://admin:[email protected]/media/video2
          roles:
            - detect
            - rtmp # <- deprecated, recommend using restream instead
        - path: rtsp://admin:[email protected]/media/video1
          roles:
            - record   
    record:
      events:
        required_zones:
        - zone_2
    zones:
        zone_1:
          coordinates: 388,0,28,170,371,389,1099,300,1220,136
        zone_2:
          coordinates: 747,335,911,264,1057,341,1016,501,821,635,75,628
    objects:
        filters:
          car:
            mask:
             - 809,27,656,0,383,68,400,81,304,126,582,427,1000,195,1022,124    
    motion:
      mask:
        - 0,0,0,95,288,151,378,0
  garage: # <--- this will be changed to your actual camera later
    enabled: True
    ffmpeg:
      inputs:
        - path: rtsp://admin:[email protected]/media/video2
          roles:
            - detect
            - rtmp # <- deprecated, recommend using restream instead
        - path: rtsp://admin:[email protected]/media/video1
          roles:
            - record 
    motion:
      mask:  
          - 0,0,0,78,314,79,322,0        
    
go2rtc:
  streams:
    front2:
      - rtsp://admin:[email protected]:554/Streaming/channels/2
    front:
     -  rtsp://Admin:[email protected]/stream1?username=admin&password=E10ADC3949BA59ABBE56E057F20F883E
    Backyard:
     - rtsp://admin:[email protected]/media/video2
    driveway:  
    -  rtsp://admin:[email protected]/media/video2
    garage:
    - rtsp://admin:[email protected]/media/video2
![Screenshot from 2024-02-15 16-33-15](https://github.com/blakeblackshear/frigate/assets/155684864/3eda7443-df31-4b43-9663-962ab020134d)
![Screenshot from 2024-02-15 16-33-03](https://github.com/blakeblackshear/frigate/assets/155684864/e94b9e84-dfea-41ea-bd48-56c54c4a08ac)
![Screenshot from 2024-02-15 16-32-48](https://github.com/blakeblackshear/frigate/assets/155684864/9b96b893-882c-455a-8435-a5a001bebb0b)
![Screenshot from 2024-02-15 16-32-27](https://github.com/blakeblackshear/frigate/assets/155684864/0b017047-27fa-4135-b977-5add4a3cc5f2)


NickM-27 commented on September 27, 2024

You still have the model config for openvino set:

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  labelmap_path: /openvino-model/
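That 300x300 openvino model definition overrides the EdgeTPU defaults, which is exactly what the traceback complains about (`Got 300 but expected 320`): Frigate resizes frames to 300x300 while the bundled Coral model expects 320x320 input. For reference, leaving `model:` out makes Frigate fall back to something roughly like this (a sketch only; exact paths and defaults can differ between Frigate versions):

```yaml
model:
  # approximate built-in defaults for the edgetpu detector (assumed, not quoted from the docs)
  path: /edgetpu_model.tflite
  labelmap_path: /labelmap.txt
  width: 320
  height: 320
  input_tensor: nhwc
  input_pixel_format: rgb
```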


dakotasoukup commented on September 27, 2024

Thanks! That fixed it. Do I need to set that for the coral? What would be the proper config? I just deleted it for now and everything is good.


NickM-27 commented on September 27, 2024

No need to set anything for the coral
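So with the `model:` block deleted, the Coral-related part of the config stays as simple as this (a minimal sketch of what is already in the posted config):

```yaml
detectors:
  coral:
    type: edgetpu
    device: pci
# no model: section needed; Frigate uses its bundled 320x320 EdgeTPU model
```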

