
yin-yang-ranch's Introduction

yin-yang-ranch: Software & Raspberry Pis help manage a farm

You can watch my PyCon 2020 talk about this project:

Jeff Bass - Yin Yang Ranch: Building a Distributed Computer Vision Pipeline using Python, OpenCV and ZMQ

  • PyCon 2020 Talk Video about this project
  • PyCon 2020 Talk Presentation slides

If you have a question about my PyCon 2020 talk, open an issue! For more about that see yin-yang-ranch issue 2.

This project is a collection of Python programs and Raspberry Pi hardware projects that help manage a small urban permaculture farm called Yin Yang Ranch. The 2-acre farm is an ongoing science project to build living soil, capture rain in barrels, and grow a variety of plants and fruit trees that can feed birds, bees, butterflies and people. We are in Southern California, about 10 miles from the Malibu coast. Drought and limited rainfall are the toughest climate issues. Monitoring and observation are important, so I built a Raspberry Pi camera system to read the water meter and monitor temperatures to optimize irrigation. I can send a text message to the system ("Susan") to ask about water usage or temperatures:

docs/images/text-messages.png

This repository contains the software and the hardware designs used to build our measurement and monitoring systems. yin-yang-ranch is a continuously evolving project with a lot of hardware hacking and software refactoring. I am open-sourcing everything in case it might be helpful to others. My projects use Raspberry Pi computers, PiCameras, various sensors and related electronics. I control the hardware with Python programs that use computer vision, OpenCV, Numpy, pandas, and the PyZMQ messaging library. I use the Raspberry Pi GPIO Python module to control lights (e.g., to light the water meter) and irrigation valves.

The Yin Yang Ranch project is made up of 4 repositories on GitHub:

  1. yin-yang-ranch: this repository. Overall project design and librarian prototype code.
  2. imageZMQ: Transporting OpenCV images.
  3. imagenode: Capture and Send Images and Sensor Data.
  4. imagehub: Receive and Store Images and Event Logs.

imageZMQ moves images taken by Raspberry Pi computers to hub computers for image processing. imagenode runs on multiple RPi computers, continuously capturing images, detecting motion, and gathering sensor data (e.g. air and soil temperatures). imagehub runs on a Mac or a Linux computer and receives images and event messages from 8-10 Raspberry Pi computers simultaneously. I use a variety of computer vision techniques implemented in Python; I have programs that can read the water meter, or tell whether that critter moving behind the barn is a coyote or a raccoon.

I also have a website at yin-yang-ranch.com that will someday display some dashboards on weather, compost temperatures, solar power generation and when the last coyote was spotted. It is just a few pictures of the ranch for now while I am developing the dashboard software.

The overall system design is a hub and spoke network with ZMQ messaging between Raspberry PiCameras and imagehubs. One image hub can simultaneously receive images from about 10 PiCameras. A librarian program gathers event messages and images from the imagehubs. A communications program uses the event logs to answer queries about images and events, as shown in the SMS text exchange pictured above. By distributing computer vision processing pipelines across Raspberry Pi computers and more powerful computers like Macs, each computer can do what it does best. A Raspberry Pi can take pictures with the PiCamera and adjust camera settings, control additional lighting, crop, flip and grayscale images, as well as detect motion. A Mac can store and index images from many Raspberry Pi computers simultaneously. It can perform more complex image processing like reading the changing digits of the water meter or using image classification techniques to label a coyote or a raccoon in an image stream. My current setup has about a dozen Raspberry Pis with PiCamera modules and 2 linux laptops with webcams attached to a single imagehub.
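The node side of this hub-and-spoke flow can be sketched with imageZMQ. This is a minimal sketch, assuming `imagezmq` and `opencv-python` are installed; the hub address is a placeholder, and the `node_label` and `run_node` helpers are my own illustrative names, not part of the imageZMQ API:

```python
# Minimal imagenode-style sender loop using imageZMQ (illustrative sketch).
# The hub address below is a placeholder; node_label() is a helper of my own.
import socket

def node_label(hostname, camera="P1"):
    """Build the text label a node sends along with each image."""
    return f"{hostname} {camera}"

def run_node(hub_address="tcp://192.168.1.100:5555"):
    import cv2
    import imagezmq

    sender = imagezmq.ImageSender(connect_to=hub_address)
    cap = cv2.VideoCapture(0)            # webcam; a PiCamera node reads differently
    label = node_label(socket.gethostname())
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        sender.send_image(label, frame)  # REQ/REP: blocks until the hub replies OK

# run_node()  # uncomment on a node with a camera and a reachable imagehub
```

The REQ/REP pattern means each node waits for the hub's "OK" before sending the next frame, which throttles senders naturally when one hub serves many nodes.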

docs/images/CVpipeline.png

The project contains code repositories for each part of the design shown above:

  • imagenode: image capture on Raspberry Pi and other computers using PiCameras, webcams and various OpenCV techniques for image rotation, thresholding, dilation, differencing and motion detection. Also sends sensor data, such as temperature and humidity, from sensors attached to the RPis. See imagenode: Capture and Send Images and Sensor Data.
  • imageZMQ: Python classes that transport OpenCV images from imagenodes to imagehubs. The imageZMQ package is pip installable. See imagezmq: Transporting OpenCV images.
  • imagehub: receives event messages, images and sensor data from multiple Raspberry Pi and other computers via imagezmq. Stores them to disk files. Note that the imagenodes don't store any data to the RPi SD cards, but send all their data to the imagehub for storage. See imagehub: Receiving and saving images and event data from multiple Raspberry Pi's.
  • librarian: reads the imagehub event logs and stored images to answer questions about them. A prototype of the librarian code is contained in this repository. It can answer simple queries like those in the SMS texting example above. See The Librarian Prototype. Also, for an excellent alternative to my own librarian design, see this approach.
  • commhub: provides a very simple natural language interface for answering questions about events and images (is the water running? was a coyote sighted today?). It parses the inbound questions and provides simple answers using data from the imagehub event logs. The commhub has methods for different channels of communication with end users. The prototype commhub code in this repository implements 2 communications channels: 1) SMS texting (using Google Voice and its Gmail interface) and 2) a terminal window CLI text interface.
  • commagents: are separate Python programs connecting each communication channel to the commhub. For example, an SMS/texting agent (example shown above), is implemented as gmail_watcher.py in this repository. Future commagents such as a Twilio SMS texting agent, an email agent and a webchat agent are being developed.
  • yin-yang-ranch (this GitHub repository): contains overall project documentation and design. This repository also contains prototype Python programs for the librarian, commhub and an example commagent (in the librarian-prototype folder). There is also example data from my farm in the test-data folder. That folder contains imagehub logs and captured images from my farm (including images of coyotes, a bobcat, the mail truck and an Amazon delivery ;-)
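The commhub's query handling described above can be sketched as a simple keyword matcher over imagehub event log lines. This is a hypothetical sketch: the function name, keywords and log format are illustrative, not the prototype's actual code:

```python
# Hypothetical sketch of commhub-style query answering from event log lines.
# The log format and keywords are illustrative, not the prototype's actual ones.

def answer_query(question, event_log):
    """Answer a simple natural-language question from recent event log lines."""
    q = question.lower()
    if "water" in q:
        # Report the state of the most recent WaterMeter event.
        for line in reversed(event_log):
            if "WaterMeter" in line:
                state = "running" if "flowing" in line else "not running"
                return f"Water is {state} (last event: {line})"
        return "No water meter events found."
    if "coyote" in q:
        count = sum("coyote" in line.lower() for line in event_log)
        return f"{count} coyote sighting(s) in the log."
    return "Sorry, I don't understand that question yet."

log = [
    "2023-06-01 07:02 barn|coyote detected",
    "2023-06-01 08:15 WaterMeter|flowing",
]
print(answer_query("Is the water running?", log))
```

A real commhub would layer channel-specific I/O (SMS, CLI) on top of a shared answering core like this, which is why the prototype keeps communication agents separate from the commhub itself.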

This distributed design allows each computer to do what it does best. A Raspberry Pi with a PiCamera can watch a water meter for needle motion, then transmit only those images that show the water flow changing (from flowing to not flowing or vice versa). The logic for motion detection and image selection runs in the Raspberry Pi imagenode, which only sends relevant images to the imagehub, saving network bandwidth. The imagehub stores the event messages and images from multiple nodes at the same time. The librarian program answers user queries about images and event messages. A more complete "which computer does what" explanation can be found in Distributing tasks among the multiple computers.
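The "send only state-change images" idea can be illustrated with a tiny state tracker: the node keeps the last flowing/not-flowing state and emits an event only when it flips. This is a sketch of the concept, not imagenode's actual code; booleans stand in for the per-frame motion detector result:

```python
# Sketch of "send only state-change images": track flowing/not-flowing and
# emit an event only when the state flips. Illustrative, not imagenode code.

def state_changes(per_frame_flow):
    """Return (frame index, new state) for each flow transition."""
    events = []
    previous = None
    for i, flowing in enumerate(per_frame_flow):
        if previous is not None and flowing != previous:
            events.append((i, "flowing" if flowing else "stopped"))
        previous = flowing
    return events

frames = [False, False, True, True, True, False]
print(state_changes(frames))  # only 2 of 6 frames are worth transmitting
```

Filtering at the node like this is what keeps network traffic proportional to *events* rather than to raw frame rate.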

The system is written in Python and uses these packages. Higher versions will usually work fine, but these specific ones are known to work. See each specific repository above for more software details.

  • Python 3.6 and 3.7
  • OpenCV 3.3 and 4.0+
  • Raspbian Buster
  • PyZMQ 20.0+
  • imagezmq 1.1.1+
  • imutils 0.4.3+ (used to get images from the PiCamera)

The project uses a wide variety of electronics hardware:

  • Raspberry Pi computers with both PiCameras and webcams.
  • Mac and Linux laptops (some with webcams as nodes).
  • Temperature and humidity sensors.
  • Lighting control electronics (e.g., to light the water meter).
  • Motion detection sensors (both PIR and ultrasonic).
  • Infrared lighting arrays (to watch for coyotes and raccoons at night).
  • Irrigation actuators to turn water on and off.
  • Solar panel monitoring hardware with programs to optimize power use and track the daily, monthly and annual sunshine energy reaching the farm. Hours and intensity of sunlight are big factors in photosynthesis, plant growth rates and water requirements.

This is what a water meter looks like:

docs/images/water-meter.jpg

The water meter project uses computer vision to manage water use on the farm. I can use computer vision to determine if water is flowing or not, read the gallons used per hour or per day, and save some of the images for analysis. The project also watches for unusual water flow due to leaks or broken irrigation controls and sends alerts. When the water is flowing, the large analog needle spins clockwise. Each full rotation of the needle causes the rightmost digit of the digital meter to advance by one digit. The small "blue star" dial is a "leak detector" that spins even when a very small amount of water is flowing (like a dripping faucet).
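Detecting needle motion comes down to frame differencing: compare two grayscale frames and count how many pixels changed by more than a threshold. Here is a NumPy-only sketch of the idea; the real system uses OpenCV (e.g. `cv2.absdiff` plus thresholding) on PiCamera frames, and the threshold values below are illustrative:

```python
# Sketch of needle-motion detection by frame differencing (NumPy only).
# Threshold values are illustrative, not the production settings.
import numpy as np

def needle_moving(prev_gray, curr_gray, delta_threshold=25, min_changed=10):
    """True if enough pixels changed between two grayscale frames."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return int((diff > delta_threshold).sum()) >= min_changed

still = np.zeros((32, 32), dtype=np.uint8)
moved = still.copy()
moved[10:20, 10:20] = 200           # simulate the needle sweeping through a region
print(needle_moving(still, moved))  # needle region changed
print(needle_moving(still, still))  # identical frames: no motion
```

Casting to a signed dtype before subtracting avoids the unsigned-wraparound bug that plagues naive `uint8` frame differencing.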

The Raspberry Pi sits in a mason jar on top of the water meter cover. The PiCamera and the array of LED lights is underneath the water meter cover and aimed at the water meter face. Here is a picture of the water meter as seen by the PiCamera:

docs/images/water-meter-cam-view.jpg

For more details on the water meter camera electronics and buildout, see Water Meter Camera Hardware Details.

Raspberry Pi nodes around the farm can monitor temperature and detect motion of critters wandering about. Here is a log that shows motion detected behind the barn, along with a couple of pictures that were taken when the coyote activated the motion detection in the imagenode RPi running in the barn:

docs/images/coyote-events.png

Here is what the back of the barn looks like with the infrared "PiNoir" style PiCamera, a temperature sensor and the infrared floodlight that lights the area after dark without putting out white light:

docs/images/floodlight-cam-sensor.jpg

For more details on the infrared camera, infrared floodlight and temperature sensor, see Critter Infrared Camera and Temperature Sensor Details.

Another PiCamera imagenode watches the driveway and entrance area. It sees the mail truck come and go, and spots an occasional hawk. It uses a Raspberry Pi Zero W computer and a PiCamera that are encased in a "fake security camera" housing that cost about $5:

docs/images/camera-housing.jpg

And here is what it looks like assembled and mounted in our driveway. You can see the PiCamera behind the housing lens:

docs/images/camera-in-place-driveway.jpg

For more details on the Pi Zero based driveway camera and its enclosure, including the assembly pictures and some "action shots", see Driveway Camera Hardware Example.

The yin-yang-ranch projects are in early development and testing. Prototypes for all the modules in the design diagram above are working, and the early experiments have provided a lot of data to help with design changes and code refactoring. I have pushed imageZMQ, imagenode and imagehub to GitHub as separate repositories (see links above).

The librarian and its communications programs have prototypes in this repository. They are documented here. The librarian is currently being refactored with a new design, but the prototype is what was used to generate the SMS texting example above. It has been running for about 3 years. It will eventually be pushed to its own GitHub repository.

The imageZMQ repository contains test programs that show how images can be sent from multiple Raspberry Pi computers simultaneously to a hub computer. The imagenode and imagehub programs are evolutions of the imageZMQ test programs timing_send_jpg_buf.py and timing_receive_jpg_buf.py. The Python code in those two programs is a brief "pseudo code" outline for the code that is in the imagenode and imagehub programs. Links to the full imagenode and imagehub repositories are above.

The yin-yang-ranch projects are in very early development and testing. I welcome questions and comments. The easiest way to make a comment or ask a question about the project is to open an issue. If your issue is specific to imageZMQ, imagenode or imagehub, it will be easiest if you open an issue in the appropriate project. Issues about the overall project design or about my PyCon 2020 presentation should go into this repository.

An imagenode & imagehub user and code contributor @sbkirby has designed a completely different approach to building an imagehub and librarian combination, using a broad mix of tools in addition to Python, including Node-RED, MQTT, MariaDB and OpenCV in Docker containers. He has posted it in this GitHub repository. I like his approach a lot, although I'm still working on a mostly Python approach to my own librarian that is an extension of the prototype librarian in this repository.

  • The Raspberry Pi Foundation and their remarkable Raspberry Pi tiny single board computers. Even their $10 Pi Zero runs Linux and OpenCV and can do serious computer vision image acquisition and processing. Raspberry Pi Foundation
  • Adafruit an amazing resource for electronics makers with helpful tutorials and electronic components of all kinds. Adafruit
  • ZeroMQ is a great network messaging library with great documentation at ZeroMQ.org.
  • OpenCV and its Python bindings provide great scaffolding for computer vision projects large or small: OpenCV.org.
  • PyImageSearch.com is the best resource for installing OpenCV and its Python bindings. Adrian Rosebrock provides many practical OpenCV techniques with tutorials, code examples, blogs and books at PyImageSearch.com. Installing OpenCV on my Raspberry Pi computers, Macs and Linux boxes went from frustrating to easy thanks to his tutorials. I also learned a LOT about computer vision methods and techniques by taking his PyImageSearch Gurus course. Highly recommended.
  • imutils is a collection of Python classes and methods that allows computer vision programs using OpenCV to be cleaner and more compact. It has a very helpful threaded image reader for Raspberry PiCamera modules or webcams. It allowed me to shorten my camera reading programs on the Raspberry Pi by half: imutils on GitHub. imutils is an open source project authored by Adrian Rosebrock.

yin-yang-ranch's People

Contributors

jeffbass


yin-yang-ranch's Issues

commhub

Hi,

Is this still active? If so, where do you work on the commhub stuff?

From your screenshot it looks like it is already working?!

YAML settings to detect birds in flight

Jeff,
I'm attempting to use imagenode with a DIY RPi Sky Camera to detect birds flying/migrating over my house, but I'm not having any success. I've used Test #3 in imagenode/blob/master/docs/testing.rst in an attempt to tune my YAML file. I've adjusted the delta_threshold, min_area, blur_kernel_size and min_motion_frames, and can't seem to get it to trigger satisfactorily. Am I asking too much of the software to detect such small objects?
I've adjusted the min_area to as low as 0.1, and delta_threshold to 1. I'm using a resolution of 1920,1456 and a framerate of 30 with a RPi HQ Camera Module. Are there any adjustments you would recommend to achieve my goal? The raptor migration season is just beginning here in Texas, and we have numerous Mississippi Kites flying over the house in the morning.
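For reference, the parameters mentioned above fit together in an imagenode-style YAML camera/detector section roughly like this. The exact schema and values are illustrative only; the authoritative option names are in imagenode's settings documentation:

```yaml
# Illustrative only -- verify option names against imagenode's documented YAML schema.
cameras:
  P1:
    viewname: SkyCam
    resolution: (1920, 1456)
    framerate: 30
    detectors:
      motion:
        ROI: (0,0),(100,100)      # watch the whole frame
        delta_threshold: 1        # very sensitive to pixel change
        min_area: 0.1             # percent of ROI; small for distant birds
        min_motion_frames: 1      # trigger on a single changed frame
        blur_kernel_size: 7       # a smaller blur preserves small objects
```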

Where are the pitfalls for adding a live stream zmq publisher service to imagenode?

Jeff,

For the TL;DR on this, just scroll to the bottom to get to my questions.

A brief introduction for context...

Have been brainstorming a personal side-project for the past few months, and feel like I'm ready to start putting it together. The motivation was just that this is something that seemed interesting and fun, and also possibly a cool learning vehicle for my grandson.

The goal is a small-scale distributed facial recognition and learning pipeline hosted on a network of Raspberry Pi computers. Something that could easily support presence detection within the context of smart home automation. Have bigger/crazier ideas too, but this was a good place to start.

Had learned about imagezmq from the PyImageSearch blog, and that led me here.

Just being completely honest here, my first reaction to your imagenode and imagehub repositories went something like... Awesome! I'm going to steal a bunch of this stuff.

Well done. And after looking at it for awhile, I've come to recognize that what you've built is a much closer fit to my design than I had initially realized.

SentinelCam Overview

My initial goals here are to be able to recognize people and vehicles (and possibly pets) that are known to the house. Identifying package and mail delivery. Knowing when a strange car has pulled into the driveway.

Significantly, any new/unknown face should automatically be enrolled and subsequently recognized. We can always make a "formal introduction" later by labeling the new face at our leisure. Or deleting any that are not wanted.

  • A full video clip of each motion event is automatically captured, preserving the original data.
  • Will likely support a configurable timestamp on the captured video as an option
  • The motion detector will also perform object tracking
  • Object ids and associated tracking centroids are logged as an outcome of motion detection
  • An optional time-lapse capture is also part of the design

I wanted central logging of errors and exceptions rather than keeping them on the SD card of the camera nodes. Using PyZMQ async logging for both that reason and to capture camera event data. A single detector could potentially generate a number of different result values in sequence: there can be multiple objects.

To support this design pattern, the camera startup goes something like this.

  1. Initialize logging, and bind to the socket as publisher
  2. Connect to the data sink with a ZMQ REQ
  3. Send a "CAMERA UP" command
  4. Data sink notifies cam watcher about the new camera using ZMQ request
  5. Cam watcher connects and subscribes to publishing service of camera (if not already connected)
  6. Cam watcher replies with OK
  7. Data sink responds to camera with OK
  8. Camera initialization completes and the processing loop begins

This allows cameras to be added and removed dynamically. The cameras can push out a periodic heartbeat over the log as a health check. The cameras just need to know which data sink to connect to. The data sink then introduces the cam watcher.
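The handshake above can be sketched as a socket-free simulation. The class and method names here are illustrative inventions; the real design would use ZMQ REQ/REP for the commands and PUB/SUB for the log stream:

```python
# Toy, socket-free walk-through of the camera startup handshake (steps 2-7).
# Class and method names are illustrative; real messages would travel over ZMQ.

class CamWatcher:
    def __init__(self):
        self.subscribed = set()

    def notify(self, camera_id):
        self.subscribed.add(camera_id)   # step 5: subscribe (if not already)
        return "OK"                      # step 6: reply OK to the data sink

class DataSink:
    def __init__(self, watcher):
        self.watcher = watcher

    def handle(self, camera_id, command):
        if command == "CAMERA UP":       # step 3: camera announces itself
            self.watcher.notify(camera_id)  # step 4: sink introduces the watcher
            return "OK"                  # step 7: sink replies to the camera
        return "UNKNOWN COMMAND"

watcher = CamWatcher()
sink = DataSink(watcher)
reply = sink.handle("driveway-cam", "CAMERA UP")
print(reply, sorted(watcher.subscribed))
```

The key property is that only the camera needs to know the data sink's address; the sink brokers the introduction to the cam watcher, so cameras can join and leave dynamically.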

Most inference runs as a batch job on a separate box (or boxes). Some inference can be moved onto specific cameras that have USB accelerators where real-time facial recognition is desired, such as the front door or foyer. All results are stored in a database.

Motion event playback can produce the original video, and supports the inclusion of optional annotations/labeling, i.e. showing the bounding boxes around each face along with a name.

Does any of this design interest you? I guess what I'm trying to ask in a roundabout way... Should I just fork your stuff and move on, or would you like any of this for yourself?

PyCon 2020 Questions

  1. It looks like the imagenode camera detectors run single threaded. Was this a design decision, or is there more to that than meets the eye?

  2. What are the pitfalls for adding a live-stream video imagezmq publishing service on the imagenode?

My thinking on that second question, is that it might be desirable to tap into the live camera feed on demand. This would support not only monitoring on a console or handheld, but would also allow a batch job to analyze a motion event while it is in progress.

Most cameras wouldn't have a subscriber; they would just routinely publish on the socket, and the feed would be available for any application that might want it.

Thanks Jeff!

[email protected]
https://blog.swanriver.dev

Interfacing the imagehub to the librarian

Jeff, I enjoyed your PyCon 2020 Talk about the Yin-Yang-Ranch. Thank you for all the hard work developing imagenode, imagehub and imagezmq. I've been spending some time building a Distributed Computer Vision system at home using your tools, and I'm at the point where I would like to learn more about how you integrated (or plan to integrate) the librarian into the system. You put a huge two-way arrow between the imagehub and the librarian on your illustration. Would you explain the operation you had in mind for that portion of your overall design? Thanks

PyCon 2020 Question? Open an issue!

PyCon 2020 is happening Online because of the COVID-19 crisis. I am one of the speakers who was supposed to give a talk. I was, of course, planning on 5 minutes of Answering Questions at the end of my talk. Here is an alternative. I'm proposing a new "post an issue to pose a question" method -- right here, in this yin-yang-ranch GitHub repository. You'll need a (free) GitHub account.

My PyCon 2020 talk title:
Yin Yang Ranch: Building a Distributed Computer Vision Pipeline using Python, OpenCV and ZMQ

My PyCon presentation video: PyCon Video
My PyCon slides: PyCon Slide Deck
How to Ask a Question about my talk in this GitHub repository:

  1. Click the Issues tab. (2nd tab from the left on main repository page)
  2. Click the green "New Issue" button
  3. In the Title area of the New Issue page, put a SHORT version of your question.
  4. In the Text area of the New Issue page, put a little more about your question. Click the green Submit new issue button
  5. I'll reply fairly quickly; you'll get an email from GitHub letting you know I've replied.
  6. This is Open Source! If you can help with an answer to a question, post a comment in the question's issue thread. You may have a better answer than I do. Thanks!

[Please Remember to Be Courteous. Also, please Stay on the Topic of my Talk. Thanks.]

comment

Hi Jeff, love your project; found it on Adrian's site.
I look as old as you; been 20 years in electronics, and about the same in software. Now doing Windows-based comms, web servers, API servers etc.

If you need some help, yell!
[email protected]
