Greetings,
My youngest son has Autism Spectrum Disorder (ASD); he’s 5 now. Recently he developed a habit of taking off his clothes and running around the house without any clothes on! I already have a few cameras around the house, and I was wondering if anyone knows of a solution that can detect when my son is moving around the house without clothes on, so I can trigger an automation that plays a pre-recorded voice note on the house speakers asking him to put his clothes back on.
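The kind of automation I have in mind would look roughly like this (a sketch only — the trigger sensor, speaker entity, and media file are all placeholders, since I don’t have the detection part working yet):

```yaml
# Sketch: when a hypothetical "no clothes" detection sensor turns on,
# play a pre-recorded voice note on a speaker.
automation:
  - alias: "Remind to put clothes back on"
    trigger:
      - platform: state
        entity_id: binary_sensor.nsfw_detected   # placeholder detection sensor
        to: "on"
    action:
      - service: media_player.play_media
        target:
          entity_id: media_player.living_room_speaker   # placeholder speaker
        data:
          media_content_id: "media-source://media_source/local/put_clothes_on.mp3"
          media_content_type: "audio/mpeg"
```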
So I’ve managed to get a live camera feed into Home Assistant using DOODS2 (GitHub - snowzach/doods2: API for detecting objects in images and video streams using Tensorflow). I was also able to detect objects and persons using TensorFlow, and HA saves a snapshot for the labels I want to capture, such as “person” or “people”. That part seems to be working just fine.
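For reference, the HA side of that is roughly the following (camera entity, DOODS host, and output path are placeholders for this post):

```yaml
# Home Assistant side: the DOODS image_processing platform polls the camera,
# asks the DOODS server for detections, and writes a snapshot on a match.
image_processing:
  - platform: doods
    url: "http://<doods-host>:8080"
    detector: default
    source:
      - entity_id: camera.hallway            # placeholder camera entity
    file_out:
      - "/config/www/doods_snapshot.jpg"     # placeholder snapshot path
    confidence: 50
    labels:
      - person
```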
I was looking for pre-trained models to detect nudity / NSFW content and found a few, such as GitHub - minto5050/NSFW-detection: Trained tensorflow model for detecting nudity in images. However, I couldn’t get this model to load and run for some reason. I downloaded the model and label files, placed them in the models folder, and declared them in the config file, but it isn’t working.
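As a sanity check (a small sketch, not part of my setup): a TFLite model file is a FlatBuffer carrying the file identifier `TFL3` at bytes 4–8, while a frozen TensorFlow model (`.pb`) is a serialized GraphDef protobuf with no such magic. A few lines of Python can tell which kind of file a download actually is:

```python
import os


def detect_model_kind(path):
    """Guess whether a model file is a TFLite FlatBuffer or something else.

    TFLite FlatBuffers carry the file identifier b"TFL3" at bytes 4-8;
    a frozen TensorFlow GraphDef (.pb) is a protobuf and has no such magic.
    """
    with open(path, "rb") as f:
        header = f.read(8)
    if header[4:8] == b"TFL3":
        return "tflite"
    return "unknown (possibly a frozen GraphDef .pb)"


# Only check the file if it exists at this (placeholder) path.
if os.path.exists("models/NSFW.tflite"):
    print(detect_model_kind("models/NSFW.tflite"))
```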
Here is how config.yaml looks:
```yaml
doods:
  log: detections
  boxes:
    enabled: True
    boxColor: [0, 255, 0]
    boxThickness: 1
    fontScale: 1.2
    fontColor: [0, 255, 0]
    fontThickness: 1
  regions:
    enabled: True
    boxColor: [255, 0, 255]
    boxThickness: 1
    fontScale: 1.2
    fontColor: [255, 0, 255]
    fontThickness: 1
  globals:
    enabled: True
    fontScale: 1.2
    fontColor: [255, 255, 0]
    fontThickness: 1
  detectors:
    - name: default
      type: tflite
      modelFile: models/coco_ssd_mobilenet_v1_1.0_quant.tflite
      labelFile: models/coco_labels0.txt
    - name: tensorflow
      type: tensorflow
      modelFile: models/faster_rcnn_inception_v2_coco_2018_01_28.pb
      labelFile: models/coco_labels1.txt
    - name: nsfws
      type: tensorflow
      modelFile: models/NSFW.tflite
      labelFile: models/dict.txt
    - name: pytorch
      type: pytorch
      modelFile: ultralytics/yolov5,yolov5s
mqtt:
  metrics: true
  broker:
    host: "mqttBroker"
    #port: 1883
    #user: "username"
    #password: "password"
  requests:
    - id: firstrequest
      detector_name: default
      preprocess: []
      separate_detections: false
      crop: false
      binary_images: false
      detect:
        "*": 50
      regions:
        - top: 0.1
          left: 0.1
          bottom: 0.9
          right: 0.9
          detect:
            "*": 50
          covers: false
      data: rtsp://192.168.2.231/ch0_0.h264
```
After restarting the container, I get this in the log:
```
2022-05-15 16:08:19,836 - doods.doods - INFO - Registered detector type:tflite name:default
2022-05-15 16:08:21,966 - doods.doods - INFO - Registered detector type:tensorflow name:tensorflow
2022-05-15 16:08:21,967 - doods.doods - ERROR - Could not create detector tensorflow/nsfws: Error parsing message with type 'tensorflow.GraphDef'
Using cache found in /root/.cache/torch/hub/ultralytics_yolov5_master
YOLOv5 🚀 2022-5-10 torch 1.10.2+cu102 CPU
Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
Adding AutoShape...
2022-05-15 16:08:24,158 - doods.doods - INFO - Registered detector type:pytorch name:pytorch
2022-05-15 16:08:24,208 - uvicorn.error - INFO - Started server process [1]
2022-05-15 16:08:24,208 - uvicorn.error - INFO - Waiting for application startup.
2022-05-15 16:08:24,208 - uvicorn.error - INFO - Application startup complete.
2022-05-15 16:08:24,209 - uvicorn.error - INFO - Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
```
Only the NSFW detector is giving an error.
Any thoughts on how I can make this model (or any similar model) work for my purpose above?
Or is there another way to achieve my initial objective?
Thanks, much appreciated.
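One guess I haven’t verified: judging by the extension, NSFW.tflite is a TFLite FlatBuffer, but I declared it with `type: tensorflow`, which would try to parse the file as a frozen GraphDef protobuf — and that would match the “Error parsing message with type 'tensorflow.GraphDef'” error. If that’s right, the detector stanza would need to be:

```yaml
- name: nsfws
  type: tflite            # matches the .tflite file format
  modelFile: models/NSFW.tflite
  labelFile: models/dict.txt
```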