Update markdown
All checks were successful
Build Worker Base and Application Images / check-base-changes (push) Successful in 8s
Build Worker Base and Application Images / build-base (push) Has been skipped
Build Worker Base and Application Images / build-docker (push) Successful in 2m15s
Build Worker Base and Application Images / deploy-stack (push) Successful in 8s
Commit cfc7503a14 (parent 1c21f417ce): 3 changed files with 327 additions and 61 deletions
worker.md (76 changes)
This document outlines the WebSocket-based communication protocol between the CMS backend and a detector worker. As a worker developer, your primary responsibility is to implement a WebSocket server that adheres to this protocol.

The current Python Detector Worker implementation supports advanced computer vision pipelines with:

- Multi-class YOLO detection with parallel processing
- PostgreSQL database integration with automatic schema management
- Redis integration for image storage and pub/sub messaging
- Hierarchical pipeline execution with detection → classification branching

## 1. Connection

The worker must run a WebSocket server, preferably on port `8000`. The backend system, which is managed by a container orchestration service, will automatically discover and establish a WebSocket connection to your worker.
@ -25,14 +31,34 @@ To enable modularity and dynamic configuration, the backend will send you a URL
|
|||
2. Extracting its contents.
3. Interpreting the contents to configure its internal pipeline.
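Steps 2 and 3 can be sketched as follows. This is a minimal illustration, not the actual worker code: it assumes the `.mpta` archive is ZIP-compatible and contains a `pipeline.json`, and the helper name is hypothetical.

```python
# Hedged sketch: extract an .mpta archive and read its pipeline
# configuration. The ZIP format and the presence of pipeline.json are
# assumptions for illustration.
import json
import zipfile
from pathlib import Path


def load_mpta(archive_path: str, dest: str) -> dict:
    """Extract the archive and return the parsed pipeline configuration."""
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(dest)
    # The worker interprets pipeline.json to configure its internal pipeline.
    config_path = next(Path(dest).rglob("pipeline.json"))
    return json.loads(config_path.read_text())
```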
**The current implementation supports comprehensive pipeline configurations including:**
- **AI/ML Models**: YOLO models (.pt files) for detection and classification
- **Pipeline Configuration**: `pipeline.json` defining hierarchical detection→classification workflows
- **Multi-class Detection**: Simultaneous detection of multiple object classes (e.g., Car + Frontal)
- **Parallel Processing**: Concurrent execution of classification branches with ThreadPoolExecutor
- **Database Integration**: PostgreSQL configuration for automatic table creation and updates
- **Redis Actions**: Image storage with region cropping and pub/sub messaging
- **Dynamic Field Mapping**: Template-based field resolution for database operations
**Enhanced MPTA Structure:**

```
pipeline.mpta/
├── pipeline.json            # Main configuration with redis/postgresql settings
├── car_detection.pt         # Primary YOLO detection model
├── brand_classifier.pt      # Classification model for car brands
├── bodytype_classifier.pt   # Classification model for body types
└── ...
```
The `pipeline.json` now supports advanced features like:

- Multi-class detection with `expectedClasses` validation
- Parallel branch processing with `parallel: true`
- Database actions with `postgresql_update_combined`
- Redis actions with region-specific image cropping
- Branch synchronization with `waitForBranches`
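To make these fields concrete, here is a hypothetical `pipeline.json` fragment. Only the field names `expectedClasses`, `parallel`, `postgresql_update_combined`, and `waitForBranches` come from this document; the overall structure, the `redis_save_image` action, and the `expire_seconds` key are illustrative assumptions, not the actual schema.

```json
{
  "modelId": "car_frontal_detection_v1",
  "expectedClasses": ["Car", "Frontal"],
  "branches": [
    { "modelId": "car_brand_cls_v1", "parallel": true },
    { "modelId": "car_bodytype_cls_v1", "parallel": true }
  ],
  "actions": [
    { "type": "redis_save_image", "region": "Frontal", "expire_seconds": 600 },
    {
      "type": "postgresql_update_combined",
      "waitForBranches": ["car_brand_cls_v1", "car_bodytype_cls_v1"]
    }
  ]
}
```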
Essentially, the `.mpta` file is a self-contained package that tells your worker *how* to process the video stream for a given subscription, including complex multi-stage AI pipelines with database persistence.

## 4. Messages from Worker to Backend
@ -79,6 +105,15 @@ Sent when the worker detects a relevant object. The `detection` object should be
|
|||
|
||||
- **Type:** `imageDetection`

**Enhanced Detection Capabilities:**

The current implementation supports multi-class detection with parallel classification processing. When a vehicle is detected, the system:
1. **Multi-Class Detection**: Simultaneously detects "Car" and "Frontal" classes
2. **Parallel Processing**: Runs brand and body type classification concurrently
3. **Database Integration**: Automatically creates and updates PostgreSQL records
4. **Redis Storage**: Saves cropped frontal images with expiration
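Step 2 above can be sketched with the standard library alone. The two classifier functions here are stand-in stubs (the real worker would run the `.pt` models from the `.mpta` archive), but the branch names match the `branch_results` keys used in the payload example.

```python
# Sketch of the parallel-classification step using ThreadPoolExecutor.
# The classifiers are stubs; a real worker would run the YOLO models.
from concurrent.futures import ThreadPoolExecutor


def classify_brand(crop):
    # Stub standing in for the car_brand_cls_v1 model.
    return {"class": "Honda", "confidence": 0.89, "brand": "Honda"}


def classify_body_type(crop):
    # Stub standing in for the car_bodytype_cls_v1 model.
    return {"class": "Sedan", "confidence": 0.85, "body_type": "Sedan"}


def run_branches(crop):
    """Run both classification branches concurrently and collect results."""
    branches = {
        "car_brand_cls_v1": classify_brand,
        "car_bodytype_cls_v1": classify_body_type,
    }
    with ThreadPoolExecutor(max_workers=len(branches)) as pool:
        futures = {name: pool.submit(fn, crop) for name, fn in branches.items()}
        # Analogue of waitForBranches: block until every branch finishes.
        return {name: fut.result() for name, fut in futures.items()}
```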
**Payload Example:**

```json
{
  "type": "imageDetection",
  "timestamp": "2025-07-14T12:34:56.789Z",
  "data": {
    "detection": {
      "class": "Car",
      "confidence": 0.92,
      "carBrand": "Honda",
      "carModel": "Civic",
      "bodyType": "Sedan",
      "branch_results": {
        "car_brand_cls_v1": {
          "class": "Honda",
          "confidence": 0.89,
          "brand": "Honda"
        },
        "car_bodytype_cls_v1": {
          "class": "Sedan",
          "confidence": 0.85,
          "body_type": "Sedan"
        }
      }
    },
    "modelId": 101,
    "modelName": "Car Frontal Detection V1"
  }
}
```
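A worker might assemble this message as shown below. This is a sketch: the envelope fields follow the payload example, but the helper name and its signature are illustrative, not part of the protocol.

```python
# Sketch: serialize one detection result into an imageDetection message.
# Helper name and signature are hypothetical.
import json
from datetime import datetime, timezone


def build_image_detection(detection: dict, model_id: int, model_name: str) -> str:
    """Wrap a detection dict in the imageDetection envelope as JSON."""
    # ISO-8601 UTC timestamp with millisecond precision, e.g. 2025-07-14T12:34:56.789Z
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    message = {
        "type": "imageDetection",
        "timestamp": timestamp,
        "data": {
            "detection": detection,
            "modelId": model_id,
            "modelName": model_name,
        },
    }
    return json.dumps(message)
```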
**Database Integration:**

Each detection automatically:

- Creates a record in the `gas_station_1.car_frontal_info` table
- Generates a unique `session_id` for tracking
- Updates the record with classification results after parallel processing completes
- Stores cropped frontal images in Redis with the `session_id` as key
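The bookkeeping above can be sketched as follows. Only the table name `gas_station_1.car_frontal_info` and the role of `session_id` come from this document; the UUID format, the Redis key prefix, and the column names `car_brand` and `body_type` are assumptions for illustration, and the database and Redis clients are left abstract.

```python
# Hedged sketch of per-detection bookkeeping. Key prefix and column
# names are assumptions; only the table name is from the document.
import uuid


def new_session_id() -> str:
    # Unique session_id correlating the DB row with the Redis image.
    return str(uuid.uuid4())


def redis_image_key(session_id: str) -> str:
    # The cropped frontal image is stored under the session_id as key.
    return f"inference:{session_id}"  # "inference:" prefix is an assumption


def build_update_sql(results: dict) -> tuple:
    """Build the UPDATE applied after all classification branches finish."""
    sql = (
        "UPDATE gas_station_1.car_frontal_info "
        "SET car_brand = %s, body_type = %s WHERE session_id = %s"
    )
    params = [results["brand"], results["body_type"], results["session_id"]]
    return sql, params
```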
### 4.3. Patch Session

> **Note:** Patch messages are only used when the worker can't keep up and needs to retroactively send detections. Normally, detections should be sent in real-time using `imageDetection` messages. Use `patchSession` only to update session data after the fact.