SoilSight is a desktop application that automates detection and morphometric analysis of microplastic particles from microscopy images and live camera feeds. It reduces manual annotation effort by combining instance segmentation models (PyTorch) with a PyQt6-based GUI and optional cloud integrations (Roboflow, Directus).
This repository contains the GUI, local model artifacts, inference helpers, and service connectors used for data export and remote model hosting.
- Instance Segmentation: Detects particles and displays segmentation masks and confidence scores.
- Morphometrics: Computes area, perimeter, equivalent circular diameter, aspect ratio, circularity, skeleton length, and other shape metrics.
- Color Analysis: Extracts color composition for each detected particle.
- Live & Batch Processing: Works with live camera feeds (microscope cameras) and static image batches.
- Services Integration: Supports Directus for record storage and Roboflow for remote inference/annotations via services/connectors.
- Extensible UI: Separate pages for Camera, Farm (project management), and Samples.
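Several of the shape metrics listed above reduce to short closed-form expressions on the segmentation mask's area, perimeter, and bounding box. A minimal sketch (function names are illustrative, not SoilSight's actual API):

```python
import math

def equivalent_diameter(area: float) -> float:
    """Diameter of a circle with the same area as the particle."""
    return 2.0 * math.sqrt(area / math.pi)

def circularity(area: float, perimeter: float) -> float:
    """4*pi*A / P^2 -- 1.0 for a perfect circle, lower for irregular shapes."""
    return 4.0 * math.pi * area / perimeter ** 2

def aspect_ratio(width: float, height: float) -> float:
    """Ratio of the longer to the shorter bounding-box side."""
    return max(width, height) / min(width, height)

# Sanity check with an ideal circle of radius 10 px:
r = 10.0
print(round(circularity(math.pi * r * r, 2 * math.pi * r), 3))  # -> 1.0
print(round(equivalent_diameter(math.pi * r * r), 3))           # -> 20.0
```

In practice these quantities are computed from per-particle pixel masks (e.g. via contour extraction), and perimeter estimates from rasterized contours make measured circularity slightly noisier than the ideal values above.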
- Python 3.11 or newer
- A GPU is recommended for local inference with PyTorch, but CPU will work for smaller images or testing.
- Roboflow Inference Server must be installed and running — see Roboflow Inference Installation for detailed setup instructions.
Step 1: Install Roboflow Inference (required first)
Follow the official Roboflow Inference installation guide. Then start the server on your device before running SoilSight:
# After installing inference-cli
inference server start --dev
# Server will run on http://localhost:9001
Step 2: Setup and Run SoilSight
python -m venv .venv311
source .venv311/bin/activate # On Windows: .\.venv311\Scripts\Activate.ps1
pip install -r requirements.txt
# Copy and configure environment variables
cp .env.example .env
# Edit .env with your Roboflow API key and workspace
python main.py
If you encounter an "untrusted developer" warning when opening the packaged DMG app, run:
xattr -d com.apple.quarantine /Applications/SoilSight.app
Then launch the app normally.
The app defaults to using a local Roboflow Inference server on http://localhost:9001. This is ideal for offline work and avoids cloud API rate limits.
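Since the app expects the local server to be running before launch, a quick pre-flight check can save a confusing startup failure. A minimal sketch, not part of SoilSight itself; `server_is_up` is a hypothetical helper:

```python
import urllib.request
import urllib.error

def server_is_up(url: str = "http://localhost:9001", timeout: float = 2.0) -> bool:
    """Return True if anything answers at the given URL."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded, even if with an error status, so it is running.
        return True
    except (urllib.error.URLError, OSError):
        return False

if not server_is_up():
    print("Start the inference server first: inference server start --dev")
```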
Quick Start: Using Python CLI
pip install inference-cli
inference server start --dev
# Server runs at http://localhost:9001
Alternative: Using Docker (all platforms)
docker pull roboflow/roboflow-inference-server-cpu:latest
docker run -p 9001:9001 roboflow/roboflow-inference-server-cpu:latest
For complete installation options and troubleshooting, see Roboflow Inference Installation.
Configure SoilSight for Local Inference:
Set your Roboflow credentials in .env (created during Quickstart setup):
ROBOFLOW_API_KEY = "<YOUR_ROBOFLOW_API_KEY>"
ROBOFLOW_WORKSPACE = "soilsight-xstgr"
ROBOFLOW_WORKFLOW = "detect-count-and-visualize"
ROBOFLOW_API_URL = "http://localhost:9001" # Local inference server
Then run python main.py once the inference server is active.
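For reference, `KEY = "value"` lines like the ones above can be parsed with a few lines of Python. This is a minimal sketch of the format only; SoilSight's actual loading lives in the repo (and real projects typically use python-dotenv):

```python
def load_env(text: str) -> dict:
    """Parse KEY = "value" lines from .env-style text (minimal sketch)."""
    env = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop trailing comments
        if "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip().strip('"')
    return env

sample = '''
ROBOFLOW_API_KEY = "abc123"
ROBOFLOW_API_URL = "http://localhost:9001"
'''
cfg = load_env(sample)
print(cfg["ROBOFLOW_API_URL"])  # -> http://localhost:9001
```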
If you prefer cloud-hosted inference, update your .env file:
ROBOFLOW_API_KEY = "<YOUR_ROBOFLOW_API_KEY>"
ROBOFLOW_WORKSPACE = "<YOUR_WORKSPACE>"
ROBOFLOW_WORKFLOW = "<YOUR_WORKFLOW_ID>"
ROBOFLOW_API_URL = "https://serverless.roboflow.com"
Then run:
python main.py
Running the app will open the Qt GUI. The main entry point is main.py, and navigation is handled by ui_nav.py.
- Camera page: start/stop live capture, run real-time inference, save snapshots.
- Farm page: manage projects, metadata, and batch operations.
- Samples page: review saved images, re-run inference, export results.
UI files are located in layouts/ and controllers are in mpcamera/controllers/ (e.g. camera_page.py, farm_page.py, samples_page.py).
Prediction debugging output can be found in prediction_debug.txt (root and mpcamera/).
Local model weights are stored in the models/ folder. Examples:
- optimized-maskrcnn-resnet50.pth
- PH-optimized-maskrcnn-resnet101.pth
To use a local model, set the appropriate model path in the app settings or update utils/local_models_utils.py / utils/inference_utils.py as needed. The app also includes support for Roboflow-hosted models via services/roboflow.py.
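The local-vs-hosted choice amounts to a small dispatch: prefer a local weights file when one is configured and present, otherwise fall back to the Roboflow workflow. A hedged sketch with a hypothetical `select_backend` helper and illustrative settings keys (the real dispatch lives in utils/inference_utils.py and services/roboflow.py):

```python
from pathlib import Path

def select_backend(settings: dict) -> tuple:
    """Return ("local", weights_path) if a usable local model is configured,
    else ("roboflow", workflow_id) for hosted inference."""
    model_path = settings.get("model_path")
    if model_path and Path(model_path).is_file():
        return ("local", model_path)
    return ("roboflow", settings.get("workflow", "detect-count-and-visualize"))

# With no local weights on disk we fall back to the hosted workflow:
print(select_backend({}))  # -> ('roboflow', 'detect-count-and-visualize')
```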
Logs are written to ~/.mpcamera/debug.log and the console. For troubleshooting issues:
# View live logs (development)
tail -f ~/.mpcamera/debug.log
# View packaged app logs (after running DMG)
tail -f ~/.mpcamera/debug.log
All errors are logged with full tracebacks for debugging. Check the logs if:
- Inference doesn't start
- Camera doesn't initialize
- UI pages don't load
- Models fail to load
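A dual file-plus-console setup with full tracebacks can be wired up with the standard logging module. A minimal sketch of the described behavior, with illustrative names (the project's actual configuration is in logging_utils.py):

```python
import logging
from pathlib import Path

def setup_logging(log_path: Path) -> logging.Logger:
    """Log to both a file and the console."""
    log_path.parent.mkdir(parents=True, exist_ok=True)
    logger = logging.getLogger("mpcamera")
    logger.setLevel(logging.DEBUG)
    for handler in (logging.FileHandler(log_path), logging.StreamHandler()):
        handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
    return logger

log = setup_logging(Path.home() / ".mpcamera" / "debug.log")
try:
    raise RuntimeError("camera failed to initialize")
except RuntimeError:
    log.exception("Camera error")  # writes the full traceback to the log
```

`Logger.exception` is what turns a caught error into a logged traceback; calling it outside an `except` block logs no useful stack.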
Application settings are stored at ~/.mpcamera/config.json. The schema is defined in mpcamera/config_schema.json. Settings include:
- Model path (local or Roboflow)
- Camera device selection
- Pixel-to-micrometer scale
- API credentials (can be synced from .env)
To reset to defaults, delete ~/.mpcamera/config.json and restart the app.
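The delete-to-reset behavior follows naturally from merging the saved file over built-in defaults at load time. A minimal sketch; the key names below are illustrative, and the real schema is defined in mpcamera/config_schema.json:

```python
import json
from pathlib import Path

DEFAULTS = {
    "model_path": "",       # empty -> use Roboflow-hosted inference
    "camera_index": 0,
    "um_per_pixel": 1.0,    # pixel-to-micrometer scale
}

def load_config(path: Path) -> dict:
    """Merge saved settings over the defaults; a missing file yields pure defaults."""
    cfg = dict(DEFAULTS)
    if path.is_file():
        cfg.update(json.loads(path.read_text()))
    return cfg

# Deleting the config file and loading again is exactly the "reset" path:
print(load_config(Path("/nonexistent/config.json"))["camera_index"])  # -> 0
```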
Run the test suite:
python -m pytest tests/ -v
GUI (PyQt6) -> Inference layer (PyTorch models + utils/inference_utils.py) -> Morphometrics utilities (utils/morphometrics/*) -> Services (services/directus.py, services/roboflow.py) for export and remote inference.
- main.py — application entry point
- ui_nav.py — navigation and startup logic
- mpcamera/ — main package with controllers, UI helpers, and assets
- controllers/ — page logic (camera, farm, samples, settings)
- services/ — external integrations (Directus, Roboflow)
- utils/ — image processing, inference helpers, and morphometric calculators
- logging_utils.py — centralized logging configuration
- path_utils.py — file path resolution for development and bundled environments
- config.py — application configuration with JSON schema validation
- layouts/ — Qt Designer .ui files
- models/ — local model weights (gitignored)
- build/ — packaging scripts for macOS and Windows