Welcome to SafeAR, a privacy-focused solution designed for augmented reality (AR) contexts. Our system processes input from mobile device cameras and returns a sanitized version of the data, ensuring that sensitive information is obscured.
SafeAR Service receives images for obfuscation along with metadata specifying the classes to be obfuscated and the obfuscation method for each. It returns sanitized images to the client.
The repository is organized as follows:
```
safeAR-aaS/
│
├── 🏛️ assets/            # Logos and other visual assets
├── 🚰 src/               # Source code
├── 📁 seg_models/        # Pre-trained instance segmentation models (ONNX format)
├── 🤷 .gitignore         # Git ignore file
├── 🛠️ config.yml         # Configuration file
├── 🐍 main.py            # Main script to run the API
├── 📦 setup.py           # Setup file for the API
├── 📜 README.md          # Readme file
├── 🐳 Dockerfile         # Dockerfile for containerization
└── 📜 requirements.txt   # Required packages
```
Conda Environment:

```shell
# Clone the repository
git clone https://github.com/CIIC-C-T-Polytechnic-of-Leiria/SafeAR.git
cd SafeAR

# Create and activate the conda environment
conda create -n safeAR python=3.10
conda activate safeAR

# Install the required packages
pip install -r requirements.txt
```
Docker Image:

```shell
# Build the Docker image
docker build -t safear .
```
Note: The versions of CUDA, cuDNN, and ONNX Runtime must be compatible with each other and with your GPU. Check the official documentation to ensure compatibility.
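A quick way to verify your stack is to ask ONNX Runtime which execution providers it can actually use. This is a minimal sketch using the standard `onnxruntime` API; if `CUDAExecutionProvider` is missing from the list, the CUDA/cuDNN/ONNX Runtime versions are likely mismatched and inference will silently fall back to CPU:

```python
# Sketch: list the execution providers ONNX Runtime can use on this machine.
try:
    import onnxruntime as ort

    providers = ort.get_available_providers()
except ImportError:
    providers = []  # onnxruntime not installed (pip install onnxruntime-gpu)

print(providers or "onnxruntime is not installed")
if "CUDAExecutionProvider" not in providers:
    print("GPU provider unavailable - check CUDA/cuDNN compatibility")
```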
Yolov5-seg model

You may run this Colab script to download the model and convert it to ONNX format. Afterward, move the exported ONNX model(s) to the seg_models directory.
Yolov8-seg model

You may download the model from the Ultralytics repository: Yolov8 Repository. Afterward, move the exported ONNX model(s) to the seg_models directory.
Yolov9-seg and Gelan models

You may run this Colab script to download the models and convert them to ONNX format. Afterward, move the exported ONNX model(s) to the seg_models directory.
RTMDet model
🚧 Under construction...
Model | Size (MB) | Training Data | Classes | Inference Time CPU (ms)* | Inference Time GPU (ms)* |
---|---|---|---|---|---|
YOLOv5n-seg | 8.5 | COCO 2017 | 80 | - | - |
YOLOv8n-seg | 13.8 | COCO 2017 | 80 | - | ~20 |
YOLOv9c-seg | 111.1 | COCO 2017 | 80 | - | - |
gelan-c-seg | 110.0 | COCO 2017 | 80 | - | - |
RTMDet | - | COCO 2017 | 80 | - | - |
Note: Measured on an HP Victus laptop with 32 GB of memory, an Intel i5-12500H processor (16 threads), an NVIDIA GeForce RTX 4060 GPU, and Pop!_OS 22.04 LTS.
The CLI provides a convenient way to obfuscate images using various obfuscation techniques. Here's an example command to get you started:
```shell
python main.py \
    --model_number 0 \
    --class_id_list 0 \
    --obfuscation_type_list blurring \
    --image_base64_file test_samples/images/img_640x640_base64.txt
```
Parameters | Description | Required |
---|---|---|
--model_number | Model number for object detection (0-based index) | Yes |
--class_id_list | Space-separated list of class IDs to obfuscate | Yes |
--obfuscation_type_list | Space-separated list of obfuscation types (blurring, masking, pixelation) | Yes |
--image_base64_file | Path to the base64-encoded image file | Yes |
--square | Size of the square blocks for the pixelation effect | No |
--sigma | Sigma (standard deviation) value for the blurring effect | No |
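To illustrate what the `--square` parameter controls, here is a minimal, hypothetical sketch of a pixelation pass (not SafeAR's actual implementation): each `square`-sized block of the image is replaced by its mean colour, so larger `--square` values produce coarser pixelation.

```python
import numpy as np


def pixelate(img: np.ndarray, square: int = 16) -> np.ndarray:
    """Replace each square block with its mean colour.

    Hypothetical sketch: `square` plays the role of the --square CLI flag.
    """
    out = img.copy()
    h, w = img.shape[:2]
    for y in range(0, h, square):
        for x in range(0, w, square):
            block = img[y:y + square, x:x + square]
            out[y:y + square, x:x + square] = block.mean(axis=(0, 1))
    return out


# Example: pixelate a random 64x64 RGB frame with 16-pixel blocks
frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
coarse = pixelate(frame, square=16)
```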
You can also use Docker to run the CLI:
```shell
docker run -it safear --model_number 0 \
    --class_id_list 0 \
    --obfuscation_type_list blurring \
    --image_base64_file test_samples/images/img_640x640_base64.txt
```
Note: The Docker command is just an example and may need to be modified to fit your specific use case.
You can also use the SafeARService class directly in your Python scripts for more flexibility and customization.
Here's an example usage:
```python
from safear_service import SafeARService

# Initialize the SafeARService
safe_ar_service = SafeARService()

# Configure the service with the desired model number and obfuscation policies
safe_ar_service.configure(model_number=0, obfuscation_policies={0: "blurring", 1: "blurring"})

# Auxiliary function to read the base64 image from a file
image_base64 = safe_ar_service.read_base64_image("test_samples/images/img_640x640_base64.txt")

# Image obfuscation using the SafeARService
processed_frame_bytes = safe_ar_service.process_frame(image_base64)

# Auxiliary function to save the processed frame to a file
safe_ar_service.save_processed_frame(processed_frame_bytes, "outputs/img_out.png")
```
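Since SafeARService expects base64-encoded input, you also need a way to produce files like `img_640x640_base64.txt` from raw images. A stdlib-only sketch (`write_base64_image` is a hypothetical helper, not part of SafeAR) could look like:

```python
import base64
from pathlib import Path


def write_base64_image(image_path: str, output_path: str) -> str:
    """Encode an image file to base64 text, matching the format that
    SafeARService.read_base64_image expects (hypothetical helper)."""
    encoded = base64.b64encode(Path(image_path).read_bytes()).decode("ascii")
    Path(output_path).write_text(encoded)
    return encoded


# Example round trip with a tiny dummy "image" file
Path("dummy.png").write_bytes(b"\x89PNG\r\n\x1a\n")
b64 = write_base64_image("dummy.png", "dummy_base64.txt")
assert base64.b64decode(b64) == b"\x89PNG\r\n\x1a\n"
```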
Here are the main tasks we plan to tackle in the near future:
- Model selection: Enable users to choose from multiple pre-trained models.
- Metadata anonymization: Implement metadata anonymization for enhanced privacy.
- Sensor data utilization: Leverage mobile device sensor data to boost performance.
- Inpainting obfuscation: Add inpainting as an obfuscation technique.
- Package distribution: Publish SafeAR as a PyPI package for easier installation.
This work is funded by FCT - Fundação para a Ciência e a Tecnologia, I.P., under project reference 2022.09235.PTDC.
This project is licensed under GPLv3.