Technical architecture overview:
Run in simulation mode
Simulation mode is useful for working on the UI and Node.js features without having to run the neural network or a webcam.
Dependency: MongoDB installed (optional, only needed to record data): see tutorial
```bash
# Clone repo
git clone https://github.com/opendatacam/opendatacam.git
cd opendatacam

# Install dependencies
npm install

# Run in dev mode
npm run dev

# Open browser on http://localhost:8080/
```
If you get an error while running `npm install`, it is probably a problem with node-gyp; you need to install additional dependencies depending on your platform: https://github.com/nodejs/node-gyp#on-unix
The new simulation mode feeds pre-recorded YOLO JSON detections into OpenDataCam. For the video, it uses either pre-extracted frames or a video file from which the frames will be extracted using ffmpeg.
The simulation can be customized in the OpenDataCam config by adding it as a new video source.
```json
"simulation": "--yolo_json public/static/placeholder/alexeydetections30FPS.json --video_file_or_folder public/static/placeholder/frames --isLive true --jsonFps 20 --mjpgFps 0.2"
```
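For context, the simulation entry sits alongside the other video sources in `config.json`. A sketch of the surrounding structure (the `VIDEO_INPUT` / `VIDEO_INPUTS_PARAMS` key names are taken from a typical OpenDataCam config and should be verified against your version):

```json
{
  "VIDEO_INPUT": "simulation",
  "VIDEO_INPUTS_PARAMS": {
    "simulation": "--yolo_json public/static/placeholder/alexeydetections30FPS.json --video_file_or_folder public/static/placeholder/frames --isLive true --jsonFps 20 --mjpgFps 0.2"
  }
}
```

Setting `VIDEO_INPUT` to the name of the simulation entry selects it as the active source.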
- `detections`: A relative or absolute path to a MOT challenge, or a JSON file with Darknet detections. For relative paths, the repository root is used as the base.
- `video_file_or_folder`: A file or folder in which to find JPGs. If `detections` points to a MOT challenge, the image folder is taken from MOT's `seqinfo.ini`. If it is a file, the images are extracted using ffmpeg. If it is a folder, the images are expected to be present there in MOT format or short format (`102.jpg`, ...).
- `isLive`: Whether the simulation should behave like a live source (e.g. a webcam) or like a file. If `true`, the simulation silently loops from the beginning without killing the stream. If `false`, the simulation kills the streams at the end of the JSON file, just like Darknet.
- `jsonFps`: Approximate frames per second for the JSON stream.
- `mjpgFps`: Only used with ffmpeg. Approximate frames per second for the MJPG stream. Setting this lower than `jsonFps` makes the video skip a few frames.
- `darknetStdout`: Whether the simulation should mimic Darknet's output on stdout.
- `json_port`: The TCP port for JSON streaming.
- `mjpg_port`: The TCP port for MJPG streaming.
- `yolo_json`: Deprecated. Use `detections` instead.
The simulation JSON and MJPG streams can also be started without OpenDataCam by invoking `node scripts/YoloSimulation.js` from the repository root folder.
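Consumers of the JSON stream receive Darknet-style detection frames. The sketch below shows the expected shape of one frame and a small hypothetical helper that converts relative coordinates to pixels; the field names follow AlexeyAB Darknet's JSON output and should be checked against the bundled `public/static/placeholder/alexeydetections30FPS.json` file.

```javascript
// Shape of a single Darknet-style detection frame as streamed on the JSON port.
// Field names follow AlexeyAB Darknet's JSON output (verify against the
// placeholder file shipped with OpenDataCam).
const frame = {
  frame_id: 42,
  objects: [
    {
      class_id: 2,
      name: 'car',
      // Coordinates are relative to the frame size (0..1), centered on the box
      relative_coordinates: { center_x: 0.5, center_y: 0.6, width: 0.1, height: 0.08 },
      confidence: 0.91,
    },
  ],
};

// Hypothetical helper: convert one detection to absolute pixel coordinates
// (top-left corner plus width/height) for a given frame size.
function toPixels(obj, frameWidth, frameHeight) {
  const r = obj.relative_coordinates;
  return {
    x: Math.round((r.center_x - r.width / 2) * frameWidth),
    y: Math.round((r.center_y - r.height / 2) * frameHeight),
    w: Math.round(r.width * frameWidth),
    h: Math.round(r.height * frameHeight),
  };
}

console.log(toPixels(frame.objects[0], 1280, 720));
// → { x: 576, y: 403, w: 128, h: 58 }
```

This is only a sketch of the data shape; the actual stream framing (how frames are delimited on the TCP socket) should be taken from the Darknet/OpenDataCam sources.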
- For next release only: Set $VERSION instead of master for the Kubernetes install script, see: https://github.com/opendatacam/opendatacam/pull/247
- Make sure that config.json has the TO_REPLACE_VIDEO_INPUT placeholder value that will be replaced by sed on installation
- Search and replace OLD_VERSION with NEW_VERSION in all documentation
- Make sure correct version in config.json > OPENDATACAM_VERSION
- Make sure correct version in package.json
- Make sure correct version in README "Install and start OpenDataCam" wget install script
- Make sure correct version in JETSON_NANO.md "Install OpenDataCam" wget install script
- Make sure correct VERSION in /docker/install-opendatacam.sh
- Generate up-to-date API documentation with `npm run generateapidoc` (not needed anymore since https://github.com/opendatacam/opendatacam/pull/336)
- Add Release on github
After you've added the release to GitHub, a GitHub Actions workflow will create the Docker images and automatically upload them to Docker Hub. It is no longer necessary to create a git tag or Docker images manually.
Markdown table of content generator
List all cams
You can run `npm run lint` to check the whole code base, or `npx eslint yourfile.js` to check only a single file.