ehSoja is a new module for recognizing soybean plants through the eSoja app! eSoja is a mobile application for farmers, specifically soy growers. It provides its users with features that help them monitor, control, and obtain forecasts about their planting and harvesting. Our eSoja extension, ehSoja, enhances the application's native functions with an innovation: currently, the user must manually enter the number of pods on a plant so that the application can estimate harvest data. We therefore developed the upload of a soybean plant image, so that information such as the number of pods and grains per pod can be deduced by analyzing the image. This feature brings agility and versatility to the user, who no longer needs to put in effort to obtain an estimate of their harvest.
📱 See the eSoja app with our modifications here! 🌿
🌿 See the eSoja server with our modifications here! 📱
📱 See our dataset here! 🌿
We have four sprints, each lasting three weeks, dedicated to developing the solution to our client's issue. With that in mind, we prioritized the desired features according to the table below, so that each sprint builds on the previous one.
Feature | Status |
---|---|
Basic model training to localize soy plants in an image | ✔️ |
Basic model training to localize pods on soy plants within an image | ✔️ |
Marking the pods found on the soy plant | ✔️ |
Create/change the old plant registration interface to give the user access to the new functionalities | ✔️ |
Create an interface that allows the user to visualize the image analysis result | ✔️ |
Enhance the pod recognition model | ✔️ |
Count how many pods were found on the soy plant | ✔️ |
Calculate the number of seeds in each pod | ✔️ |
Estimate the number of soybeans on a soy plant according to the image analysis | ✔️ |
Sprint | Description | Availability | Read-me | Source code |
---|---|---|---|---|
Sprint 1 | Training a base model to recognize and mark the soy elements on the example images | 18/09 | | |
Sprint 2 | The user can submit images of their planting so they can be analyzed by the algorithm, which returns the results | 09/10 | | |
Sprint 3 | Counting pods and updating their data in the database | 06/11 | | |
Sprint 4 | Estimate how many soybeans there are on the soy plant | 27/11 | | |
For the last sprint, we integrated into the application the dispatch of the images, the counting of pods recognized in the images, and the estimate of grain quantity. You can click the preview option to quickly analyze the chosen image, or finish importing all the images. After the pods in all images have been counted, the user is redirected to the plot statistics page, which shows the number of pods and grains in each sample, as well as the expected production.
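The production estimate built from the per-sample pod and grain counts can be sketched roughly as below. The function name, the input shapes, and the simple mean-based formula are illustrative assumptions, not the app's actual implementation.

```python
def estimate_grains_per_plant(pods_per_sample, grains_per_pod):
    """Rough estimate of grains per plant from image-derived counts.

    pods_per_sample: pod count detected in each sample image.
    grains_per_pod: estimated grain count for each detected pod.
    The mean-of-means formula here is an assumption for illustration.
    """
    avg_pods = sum(pods_per_sample) / len(pods_per_sample)
    avg_grains = sum(grains_per_pod) / len(grains_per_pod)
    # Expected grains per plant = average pods per plant x average grains per pod.
    return avg_pods * avg_grains
```

For example, samples with 10 and 20 pods, and pods averaging 2.5 grains, would yield an estimate of 37.5 grains per plant under this formula.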
We made several attempts to improve the IoU (Intersection over Union), which is computed by dividing the area of overlap between the detection masks and the masks we annotated by the area of their union. In the end, we obtained a mean IoU of 44% between the ground-truth bounding boxes and the predicted bounding boxes.
Check some examples with selected images. First comes the annotated image, then the detected image, and finally the confusion matrix that compares the annotations and the detections in the image.
Below, there's the confusion matrix for all the images in the validation dataset. Our validation dataset contains around 150 images.
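To build confusion counts like these, each detection has to be paired with an annotation. A common approach, sketched below under assumptions (greedy matching, an IoU threshold of 0.5), counts true positives, false positives, and missed annotations; this is an illustration, not our exact evaluation code:

```python
def match_detections(gt_boxes, pred_boxes, iou_threshold=0.5):
    """Greedily match predictions to ground-truth boxes; return (tp, fp, fn).

    Boxes are (x1, y1, x2, y2). The greedy strategy and the 0.5
    threshold are assumptions for illustration.
    """
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    unmatched_gt = list(gt_boxes)
    tp = 0
    for pred in pred_boxes:
        # Best remaining annotation for this detection.
        best = max(unmatched_gt, key=lambda g: iou(pred, g), default=None)
        if best is not None and iou(pred, best) >= iou_threshold:
            unmatched_gt.remove(best)
            tp += 1
    fp = len(pred_boxes) - tp   # detections with no matching annotation
    fn = len(unmatched_gt)      # annotations the model missed
    return tp, fp, fn
```

Summing these counts over the roughly 150 validation images is what fills in an aggregate confusion matrix of this kind.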
- Bárbara dos Santos Port (Scrum Master)
- Rafael Furtado Rodrigues dos Santos (Product Owner)
- Anna Carolina de Oliveira Vale Mendes (Development Team)
- Anna Yukimi Yamada (Development Team)
- Gabriel Azevedo de Souza (Development Team)
- Maria Eduarda Basílio de Oliveira (Development Team)
- Pedro Reginaldo Tomé Silva (Development Team)