# Configuration
OpenDataCam offers several customization options:

- Video input: run from a file, change the webcam resolution, change the camera type (Raspberry Pi cam, USB cam...)
- Neural network: change the YOLO weights files depending on your hardware capacity and desired FPS (tinyYOLOv4, full YOLOv4...)
- Displayed classes: we default to mobility classes (car, bus, person...), but you can change this.
## General

All settings live in the `config.json` file, which you will find in the same directory where you ran the install script.

Any time the `config.json` file is changed, OpenDataCam needs to be restarted with `docker-compose down && docker-compose up -d` (or the corresponding `npm` commands).
## Run OpenDataCam on a video file

By default, OpenDataCam runs on a demo video file. If you want to change it, simply drag & drop the new file onto the UI.

Specificities of running on a file:

- OpenDataCam restarts the video file when it reaches the end.
- When you click on Record, OpenDataCam reloads the file so that the recording starts at the beginning.
- LIMITATION: it will only record from frame nº25 onwards.
## Neural network weights

You can change the YOLO weights files depending on which objects you want to track and which hardware you are running OpenDataCam on.

Lighter weights files bring speed improvements at the cost of accuracy; for example, `yolov4` runs at ~1-2 FPS on Jetson Nano, ~5-6 FPS on Jetson TX2, and ~22 FPS on Jetson Xavier.

From our experiments, the sweet spot for good enough tracking accuracy on cars and other mobility objects is to run YOLO at a minimum of 8-9 FPS.

For a standard install of OpenDataCam, these are the default weights we pick depending on your hardware:

- Jetson Nano: `yolov4-tiny`
- Jetson Xavier: `yolov4`
- Desktop install: `yolov4`

If you want to use other weights, please see the Use custom neural network weights section below.
## Track only specific classes

By default, OpenDataCam tracks all the classes that the neural network is trained to detect. In our case, YOLO is trained on the VOC dataset; here is the complete list of classes.

You can restrict OpenDataCam to specific classes with the `VALID_CLASSES` option in the `config.json` file.

Which classes YOLO detects depends on the weights you are running; for example, `yolov4` is trained on the COCO dataset classes.

Here is a way to track only buses and persons:
```json
{
  "VALID_CLASSES": ["bus", "person"]
}
```
In order to track all the classes (the default), set it to:

```json
{
  "VALID_CLASSES": ["*"]
}
```
Extra note: the tracking algorithm might work better with all classes allowed. In our tests we saw that for some classes, like bike/motorbike, YOLO had a hard time distinguishing them and switched between classes across frames for the same object. By keeping all detection classes we avoid losing some objects; this is discussed here.
## Display custom classes

By default we display the mobility classes. If you want to customize them, modify the `DISPLAY_CLASSES` config:
"DISPLAY_CLASSES": [
{ "class": "bicycle", "hexcode": "1F6B2"},
{ "class": "person", "hexcode": "1F6B6"},
{ "class": "truck", "hexcode": "1F69B"},
{ "class": "motorbike", "hexcode": "1F6F5"},
{ "class": "car", "hexcode": "1F697"},
{ "class": "bus", "hexcode": "1F683"}
]
You can associate any icon that is in the `public/static/icons/openmojis` folder. (They are from https://openmoji.org/; you can search for the hexcode / unicode icon id directly there.)

For example:
"DISPLAY_CLASSES": [
{ "class": "dog", "icon": "1F415"},
{ "class": "cat", "icon": "1F431"}
]
LIMITATION: you can display a maximum of 6 classes; if you add more, only the first 6 will be displayed.
## Customize pathfinder colors

You can change the `PATHFINDER_COLORS` variable in `config.json`. For each new tracked object, the app randomly picks a color from this list. The colors need to be in HEX format:
"PATHFINDER_COLORS": [
"#1f77b4",
"#ff7f0e",
"#2ca02c",
"#d62728",
"#9467bd",
"#8c564b",
"#e377c2",
"#7f7f7f",
"#bcbd22",
"#17becf"
]
For example, with only 2 colors:
"PATHFINDER_COLORS": [
"#1f77b4",
"#e377c2"
]
## Customize counter colors

You can change the `COUNTER_COLORS` variable in `config.json`. As you draw counter lines, the app picks the colors in the order you specified them.

You need to add `"key": "value"` pairs for the counter lines, where the key is the label of the color (without spaces, numbers or special characters) and the value is the color in HEX.

For example, you can modify the default from:
"COUNTER_COLORS": {
"yellow": "#FFE700",
"turquoise": "#A3FFF4",
"green": "#a0f17f",
"purple": "#d070f0",
"red": "#AB4435"
}
to:

```json
"COUNTER_COLORS": {
  "white": "#fff"
}
```
After restarting OpenDataCam, you should get a white line when defining a counter area.

NOTE: if you draw more lines than there are `COUNTER_COLORS` defined, the extra lines will be black.
## Advanced settings

### Video input

OpenDataCam can take several kinds of video input: a pre-recorded file, a USB cam, a Raspberry Pi cam, a remote IP cam, etc.

This is configurable via the `VIDEO_INPUT` and `VIDEO_INPUTS_PARAMS` settings.
"VIDEO_INPUTS_PARAMS": {
"file": "opendatacam_videos/demo.mp4",
"usbcam": "v4l2src device=/dev/video0 ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink",
"raspberrycam": "nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1280, height=720, framerate=30/1, format=NV12 ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=360 ! videoconvert ! video/x-raw, format=BGR ! appsink",
"remote_cam": "YOUR IP CAM STREAM (can be .m3u8, MJPEG ...), anything supported by opencv",
"remote_hls_gstreamer": "souphttpsrc location=http://YOUR_HLSSTREAM_URL_HERE.m3u8 ! hlsdemux ! decodebin ! videoconvert ! videoscale ! appsink"
}
With the default installation, OpenDataCam will have `VIDEO_INPUT` set to `usbcam`. See below how to change this.

Technical note: under the hood, this config entry becomes the input of the darknet process, which is then fed into OpenCV's `VideoCapture()`. As we compile OpenCV with GStreamer support when installing OpenDataCam, you can use any GStreamer pipeline as input, in addition to the other formats `VideoCapture()` supports, such as video files and IP cam streams. You can add your own GStreamer pipeline for your needs by adding an entry to `VIDEO_INPUTS_PARAMS`, as sketched below.
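For instance, a custom entry could look like the following sketch. The key name `my_custom_cam` and the pipeline caps (framerate, width, height) are illustrative, not part of the default config; any key you add here can then be referenced from `VIDEO_INPUT`:

```json
"VIDEO_INPUTS_PARAMS": {
  "my_custom_cam": "v4l2src device=/dev/video0 ! video/x-raw, framerate=15/1, width=320, height=240 ! videoconvert ! appsink"
},
"VIDEO_INPUT": "my_custom_cam"
```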
### Run from a USB cam

- Verify that a USB cam is detected:

```bash
ls /dev/video*
# Output should be: /dev/video0
```

- Change `VIDEO_INPUT` to `"usbcam"`:

```json
"VIDEO_INPUT": "usbcam"
```

- (Optional) If your device is on `video1` or `video2` instead of the default `video0`, change `VIDEO_INPUTS_PARAMS > usbcam` to point to your video device. For example, for `/dev/video1`:

```json
"VIDEO_INPUTS_PARAMS": {
  "usbcam": "v4l2src device=/dev/video1 ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink"
}
```
### Run from a file

You have two options to run from a file:

- EASY SOLUTION: drag & drop the file onto the UI; OpenDataCam will restart on it.
- Read a file from the filesystem directly by setting the path in `config.json`.

For example, say you have a `file.mp4` you want to run OpenDataCam on.

For a Docker (standard) install of OpenDataCam, you need to mount the file into the Docker container:

- copy the file into the folder containing the `docker-compose.yml` file
- create a folder called `opendatacam_videos` and put the file in it
- mount the `opendatacam_videos` folder using `volumes` in the `docker-compose.yml` file:

```yaml
volumes:
  - './config.json:/var/local/opendatacam/config.json'
  - './opendatacam_videos:/var/local/darknet/opendatacam_videos'
```
Once the video file is inside the `opendatacam_videos` folder, modify `config.json` the following way:

- Change `VIDEO_INPUT` to `"file"`:

```json
"VIDEO_INPUT": "file"
```

- Change `VIDEO_INPUTS_PARAMS > file` to the path of your file:

```json
"VIDEO_INPUTS_PARAMS": {
  "file": "opendatacam_videos/file.mp4"
}
```

Once `config.json` is saved, you only need to restart the Docker container:

```bash
sudo docker-compose restart
```
For a non-Docker install of OpenDataCam: same steps as above, but instead of mounting the `opendatacam_videos` folder, you just create it inside the `/darknet` folder.
### Run from an IP cam

- Change `VIDEO_INPUT` to `"remote_cam"`:

```json
"VIDEO_INPUT": "remote_cam"
```

- Change `VIDEO_INPUTS_PARAMS > remote_cam` to your IP cam stream, for example:

```json
"VIDEO_INPUTS_PARAMS": {
  "remote_cam": "http://162.143.172.100:8081/-wvhttp-01-/GetOneShot?image_size=640x480&frame_count=1000000000"
}
```

NB: this IP cam URL won't work; it is just an example. Only use IP cams you own yourself.
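Many IP cameras also expose an RTSP stream; since OpenCV is compiled with GStreamer support, a pipeline along these lines can be used. This is a sketch, not a tested default: the URL, credentials and latency value are placeholders you must adapt to your camera:

```json
"VIDEO_INPUTS_PARAMS": {
  "remote_cam": "rtspsrc location=rtsp://USER:PASSWORD@YOUR_CAM_IP:554/stream latency=100 ! decodebin ! videoconvert ! appsink"
}
```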
### Run from a Raspberry Pi cam (Jetson Nano)

For a Docker (standard) install of OpenDataCam: not supported yet, follow https://github.com/opendatacam/opendatacam/issues/178 for updates.

For a non-Docker install of OpenDataCam:

- Change `VIDEO_INPUT` to `"raspberrycam"`:

```json
"VIDEO_INPUT": "raspberrycam"
```

- Restart the node.js app.
### Change webcam resolution

As explained in the technical note above, you can modify the GStreamer pipeline as you like; by default we use a 640x360 feed from the webcam.

If you want to change this, you need to:

- Find out which resolutions your webcam supports by running `v4l2-ctl --list-formats-ext`. Let's say we will use `1280x720`.
- Change the GStreamer pipeline accordingly:

```
"v4l2src device=/dev/video0 ! video/x-raw, framerate=30/1, width=1280, height=720 ! videoconvert ! appsink"
```

- Restart OpenDataCam.

NOTE: increasing the webcam resolution won't increase OpenDataCam's accuracy: the input of the neural network is 400x400 max. It might also cause the UI to lag, as the MJPEG stream becomes very slow at higher resolutions.
### Use custom neural network weights

For a Docker (standard) install of OpenDataCam, these YOLO weights ship inside the Docker container:

- Jetson Nano: `yolov4-tiny`
- Jetson Xavier: `yolov4`
- Desktop install: `yolov4`

In order to switch to other weights you need to:

- mount the necessary files into the darknet folder of the Docker container, so OpenDataCam has access to the new weights
- change `config.json` accordingly
For example, if you want to use `yolov3-tiny-prn`, you need to:

- download `yolov3-tiny-prn.weights` into the same directory as the `docker-compose.yml` file
- (optional) download the `.cfg`, `.data` and `.names` files if they are custom (not default darknet)
- mount the weights file using `volumes` in the `docker-compose.yml` file:

```yaml
volumes:
  - './config.json:/var/local/opendatacam/config.json'
  - './yolov3-tiny-prn.weights:/var/local/darknet/yolov3-tiny-prn.weights'
```
- (optional) if you have custom `.cfg`, `.data` and `.names` files, mount them too:

```yaml
volumes:
  - './config.json:/var/local/opendatacam/config.json'
  - './yolov3-tiny-prn.weights:/var/local/darknet/yolov3-tiny-prn.weights'
  - './coco.data:/var/local/darknet/cfg/coco.data'
  - './yolov3-tiny-prn.cfg:/var/local/darknet/cfg/yolov3-tiny-prn.cfg'
  - './coco.names:/var/local/darknet/cfg/coco.names'
```
Then change `config.json`:

- add an entry to the `NEURAL_NETWORK_PARAMS` setting:

```json
"yolov3-tiny-prn": {
  "data": "cfg/coco.data",
  "cfg": "cfg/yolov3-tiny-prn.cfg",
  "weights": "yolov3-tiny-prn.weights"
}
```
- change the `NEURAL_NETWORK` param to the key you defined in `NEURAL_NETWORK_PARAMS`:

```json
"NEURAL_NETWORK": "yolov3-tiny-prn"
```
- If you've added new volumes to your `docker-compose.yml`, you need to recreate the container using:

```bash
sudo docker-compose up -d
```

- Otherwise, if you just updated files of existing volumes, you only need to restart the container using:

```bash
sudo docker-compose restart
```
For a non-Docker install of OpenDataCam, the steps are the same as above, but instead of mounting files into the Docker container you directly copy them into the `/darknet` folder:

- copy the `.weights`, `.cfg` and `.data` files into the darknet folder
- follow the same `config.json` steps as above
- restart the node.js app (no need to recompile)
### Tracker settings

You can tweak some settings of the tracker to better optimize OpenDataCam for your needs:

```json
"TRACKER_SETTINGS": {
  "objectMaxAreaInPercentageOfFrame": 80,
  "confidence_threshold": 0.2,
  "iouLimit": 0.05,
  "unMatchedFrameTolerance": 5,
  "fastDelete": true,
  "matchingAlgorithm": "kdTree"
}
```
- `objectMaxAreaInPercentageOfFrame`: filters out objects whose area (width * height) is larger than this percentage of the total frame area.
- `confidence_threshold`: filters out objects whose confidence value (given by the neural network) is below this threshold.
- `iouLimit`: when tracking from frame to frame, excludes an object from being matched as the same object as in the previous frame (same id) if their IOU (Intersection Over Union) is lower than this. More details on how the tracker works here: https://github.com/opendatacam/node-moving-things-tracker/blob/master/README.md#how-does-it-work
- `unMatchedFrameTolerance`: the number of frames we keep predicting the object's trajectory if it is not matched in the next frame's list of detections. Setting this higher causes fewer ID switches, but more potential false positives where an ID jumps to another object.
- `fastDelete`: if `false`, detections will always be kept for `unMatchedFrameTolerance` frames in the buffer. Otherwise, detections are dropped from the tracker buffer if they cannot be matched in the frame after they appeared. Setting this to `false` can help with tracking difficult objects, but may have side effects like more frequent object ID switches or lower FPS, as more objects are kept in the buffer.
- `matchingAlgorithm`: the algorithm used to match tracks with new detections. Can be either `kdTree` or `munkres`. See https://github.com/opendatacam/node-moving-things-tracker/pull/21 for differences in performance of the matching algorithms.
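For instance, if you care more about keeping hold of difficult objects than about raw speed, a sketch like the following (the values are illustrative, not tested recommendations) raises the unmatched-frame tolerance and disables fast deletion:

```json
"TRACKER_SETTINGS": {
  "objectMaxAreaInPercentageOfFrame": 80,
  "confidence_threshold": 0.2,
  "iouLimit": 0.05,
  "unMatchedFrameTolerance": 10,
  "fastDelete": false,
  "matchingAlgorithm": "kdTree"
}
```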
### Counter settings

```json
"COUNTER_SETTINGS": {
  "countingAreaMinFramesInsideToBeCounted": 1,
  "countingAreaVerifyIfObjectEntersCrossingOneEdge": true,
  "minAngleWithCountingLineThreshold": 5,
  "computeTrajectoryBasedOnNbOfPastFrame": 5
}
```
- `countingAreaMinFramesInsideToBeCounted`: the minimum number of frames an object needs to remain inside the area to be counted.
- `countingAreaVerifyIfObjectEntersCrossingOneEdge` (default `true`):
  - if `true`: in order to count a tracked item, the algorithm checks whether the object's trajectory crosses one of the edges of the polygon; otherwise it won't count it. This avoids counting ID reassignments that happen inside the polygon.
  - if `false`: the counting algorithm won't check this; it will simply count an item if it remains more than `countingAreaMinFramesInsideToBeCounted` frames inside the zone, but if its ID is reassigned inside the zone it could be counted twice.
- `minAngleWithCountingLineThreshold`: count items crossing the counting line only if the angle between their trajectory and the counting line is greater than this angle (in degrees). 90 degrees would count nothing (or only perfectly perpendicular objects), whereas 0 will count everything.
- `computeTrajectoryBasedOnNbOfPastFrame`: tells the counting algorithm to compute the trajectory used to determine whether an object crosses the line based on this number of past frames. In reality, the trajectory of the center of the bbox given by YOLO moves a little from frame to frame, so this smooths it out and makes the crossing determination, and the value of the crossing angle, more reliable.

NB: if the object changed ID in the past frames, the algorithm takes the last past frame known with the same ID.
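As an illustration (the values are made up for the example, not recommendations), a stricter setup that requires objects to stay at least 5 frames inside a counting area and ignores shallow line crossings below 20 degrees could look like:

```json
"COUNTER_SETTINGS": {
  "countingAreaMinFramesInsideToBeCounted": 5,
  "countingAreaVerifyIfObjectEntersCrossingOneEdge": true,
  "minAngleWithCountingLineThreshold": 20,
  "computeTrajectoryBasedOnNbOfPastFrame": 5
}
```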
### Database

The database backend can be selected by setting the `DATABASE` key. See below for a list of supported database backends.

#### MongoDB

The following configuration options exist for MongoDB:

- `url`: by default, OpenDataCam uses the MongoDB instance running locally under the same Docker compose file. If you want to persist the data on a remote MongoDB instance, change the `url` setting; see the example below.
- `persistTracker`: if `true`, the raw tracker output will be stored. This allows for in-depth analysis of trajectories.
"DATABASE_PARAMS": {
"mongo": {
"url": "mongodb://my-mongo-server.domain.tld:27017",
"persistTracker": false
}
}
By default, the MongoDB data is persisted in the `/data/db` directory of your host machine.
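If you prefer to persist the data elsewhere on the host, one possible approach is to bind-mount a different host folder for the MongoDB service. This is a sketch assuming the standard docker-compose setup with a service named `mongo`; check your actual `docker-compose.yml`:

```yaml
services:
  mongo:
    volumes:
      - './opendatacam_mongodb:/data/db'
```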
### Ports

You can modify the default ports used by OpenDataCam:

```json
"PORTS": {
  "app": 8080,
  "darknet_json_stream": 8070,
  "darknet_mjpeg_stream": 8090
}
```
### Tracker accuracy display

The tracker accuracy layer shows a heatmap like this one:

This heatmap highlights the areas where the tracker accuracy isn't very good, to help you:

- set counter lines where things are well tracked
- decide whether you should change the camera viewpoint

Under the hood, it displays a tracker metric called "zombies", which represents the predicted bounding boxes used when the tracker isn't able to assign a bounding box from the YOLO detections.

You can tweak all the settings of this display with the `TRACKER_ACCURACY_DISPLAY` setting.
| Setting | Description |
|---|---|
| `nbFrameBuffer` | Number of previous frames displayed on the heatmap |
| `radius` | Radius of the points displayed on the heatmap (in % of the width of the canvas) |
| `blur` | Blur of the points displayed on the heatmap (in % of the width of the canvas) |
| `step` | For each point displayed, how much the point contributes to the increase of the heatmap value (range 0-1); increasing this causes the heatmap to reach the higher values of the gradient faster |
| `gradient` | Color gradient; insert as many stops as you like between 0-1 (hex values supported, e.g. "#fff" or "white") |
| `canvasResolutionFactor` | To improve performance, the tracker accuracy canvas resolution is downscaled by a factor of 10 by default (set a value between 0-1) |
"TRACKER_ACCURACY_DISPLAY": {
"nbFrameBuffer": 300,
"settings": {
"radius": 3.1,
"blur": 6.2,
"step": 0.1,
"gradient": {
"0.4":"orange",
"1":"red"
},
"canvasResolutionFactor": 0.1
}
}
For example, you could change the gradient with:

```json
"gradient": {
  "0.4": "yellow",
  "0.6": "#fff",
  "0.7": "red",
  "0.8": "yellow",
  "1": "red"
}
```
### Use environment variables

Some of the entries in `config.json` can be overwritten using environment variables. Currently these are the `PORTS` object and the `MONGODB_URL` setting. See the file .env.example for an example of how to set them. Make sure to use the exact same names, or OpenDataCam will fall back to `config.json` and, if that is not present, to the general defaults.
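For example, a `.env` file overriding all currently supported values could look like this (the values shown are just the defaults from earlier in this document):

```
PORT_APP=8080
PORT_DARKNET_JSON_STREAM=8070
PORT_DARKNET_MJPEG_STREAM=8090
MONGODB_URL=mongodb://mongo:27017
```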
#### Without Docker

If you are running OpenDataCam without Docker, you can set these variables by:

- adding a file called `.env` to the root of the project; it will be picked up by the dotenv package.
- adding the variables to your `.bashrc` or `.zshrc` (depending on which shell you use), or to any other configuration file that gets loaded into your shell sessions.
- adding them to the command you use to start OpenDataCam, for example in bash:

```bash
MONGODB_URL=mongodb://mongo:27017 PORT_APP=8080 PORT_DARKNET_MJPEG_STREAM=8090 PORT_DARKNET_JSON_STREAM=8070 node server.js
```

If you are on Windows, we suggest using the `cross-env` package to set these variables.
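A sketch of the same command using `cross-env` (assuming you have installed it, e.g. with `npm install --save-dev cross-env`):

```bash
npx cross-env MONGODB_URL=mongodb://mongo:27017 PORT_APP=8080 node server.js
```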
#### With docker-compose

If you are running OpenDataCam with `docker-compose.yml`, you can set the variables in an `environment` section of the `opendatacam` service, as shown below.

```yaml
services:
  opendatacam:
    environment:
      - PORT_APP=8080
```
You can also declare these environment variables in a `.env` file in the folder where the `docker-compose` command is invoked. They will then be available within the `docker-compose.yml` file, and you can pass them through to the container as shown below.

The `.env` file:

```
PORT_APP=8080
```

The `docker-compose.yml` file:

```yaml
services:
  opendatacam:
    environment:
      - PORT_APP
```
There is also the possibility to keep the `.env` file in the directory where the `docker-compose` command is executed and add an `env_file` section to the `docker-compose.yml` configuration:

```yaml
services:
  opendatacam:
    env_file:
      - ./.env
```
Finally, you can set these variables in the shell environment when invoking the `docker-compose` command. (Note that `docker-compose up` itself does not accept a `-e` flag; the variables are read from the calling shell or the `.env` file.)
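For example (a sketch, reusing the `PORT_APP` override from above):

```bash
PORT_APP=8080 docker-compose up -d
```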
### GPS

OpenDataCam can obtain the current position of the tracker via GPS and persist it along with other counter data. This is useful in situations where OpenDataCam is mobile, e.g. used as a dashcam or mounted on a drone.

#### Requirements

To receive GPS positions, a GPS-enabled device must be connected to your Jetson or PC. See GPSD's list of supported devices.

Additionally, you will need GPSD running. GPSD can run either in Docker or as a system service.
#### Running GPSD in Docker

The easiest way to run GPSD is through Docker, using the opensourcefoundries/gpsd image: add the GPSD service to your `docker-compose.yml` the following way.

```yaml
services:
  # Add the following service to your docker compose file
  gpsd:
    image: opensourcefoundries/gpsd
    # List your GPS device here and make sure that it matches the device in the entrypoint line
    devices:
      - /dev/ttyACM0
    entrypoint: ["/bin/sh", "-c", "/sbin/syslogd -S -O - -n & exec /usr/sbin/gpsd -N -n -G /dev/ttyACM0", "--"]
    ports:
      - "2947:2947"
    restart: always
```
Once GPSD has been added to your Docker compose file, change the GPS `hostname` setting in `config.json` to the name of your GPSD service. In the example above this would be `"hostname": "gpsd"`.
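That is, only the `hostname` inside the `GPS` section (shown in full under Configuration below) changes; the other keys keep their values:

```json
"GPS": {
  "hostname": "gpsd"
}
```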
Alternatively, if you don't run OpenDataCam in Docker, you can start just GPSD via the following command:

```bash
# This assumes your device is /dev/ttyACM0. Please change it according to your setup.
GPS_DEVICE=/dev/ttyACM0; docker run -d -p 2947:2947 --device=$GPS_DEVICE opensourcefoundries/gpsd $GPS_DEVICE
```
#### Running GPSD as a system service

Please refer to your operating system's documentation.

#### Configuration

To enable GPS, add the following section to your `config.json`:
"GPS": {
"enabled": true,
"port": 2947,
"hostname": "localhost",
"signalLossTimeoutSeconds": 60,
"csvExportOpenStreetMapsUrl": true
}
Where:

- `enabled` is a flag to enable or disable the feature
- `port` and `hostname` contain the location of the GPS daemon
- `signalLossTimeoutSeconds`: in case of temporary position loss, the old position remains valid for this many seconds
- `csvExportOpenStreetMapsUrl`: besides the raw `lat` and `lon` values, a link to OpenStreetMap may be added to the exported CSV