
Development Guidelines

Technical architecture overview: see the Technical architecture section below.

Run in simulation mode

Simulation mode is useful for working on the UI and Node.js features without having to run the neural network or a webcam.

Dependency: MongoDB installed (optional, only needed to record data); see the tutorial.

```bash
# Clone the repo
git clone

# Install dependencies
npm i

# Run in dev mode
npm run dev

# Open a browser at http://localhost:8080/
```

If you get an error while running npm install, it is probably a problem with node-gyp; you may need to install additional dependencies depending on your platform (see the node-gyp documentation).

Simulation Mode

The new simulation mode feeds pre-recorded YOLO JSON detections into OpenDataCam. For the video, it uses either pre-extracted frames or a video file from which the frames will be extracted using ffmpeg.

The simulation can be customized in the OpenDataCam config by adding it as a new video source:

```json
"simulation": "--yolo_json public/static/placeholder/alexeydetections30FPS.json --video_file_or_folder public/static/placeholder/frames --isLive true --jsonFps 20 --mjpgFps 0.2"
```
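
In context, this entry sits in config.json alongside the other video sources. A sketch of the relevant fragment (the VIDEO_INPUT / VIDEO_INPUTS_PARAMS key names are assumed from OpenDataCam's standard config layout; check your own config.json for the exact structure):

```json
{
  "VIDEO_INPUT": "simulation",
  "VIDEO_INPUTS_PARAMS": {
    "simulation": "--yolo_json public/static/placeholder/alexeydetections30FPS.json --video_file_or_folder public/static/placeholder/frames --isLive true --jsonFps 20 --mjpgFps 0.2"
  }
}
```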


  • detections: A relative or absolute path to a MOT challenge, or to a JSON file with Darknet detections. For relative paths, the repository root is used as the base.
  • video_file_or_folder: A file or folder in which to find JPGs. If detections points to a MOT challenge, the image folder is taken from MOT's seqinfo.ini. If it is a file, the images will be extracted using ffmpeg. If it is a folder, the images are expected to be present there in MOT format or in short format (001.jpg, 002.jpg, ..., 101.jpg, 102.jpg, ...).
  • isLive: Whether the simulation should behave like a live source (e.g. a webcam) or like a file. If true, the simulation silently loops from the beginning without killing the stream. If false, the simulation kills the streams at the end of the JSON file, just like Darknet.
  • jsonFps: Approximate frames per second for the JSON stream.
  • mjpgFps: Only used with ffmpeg. Approximate frames per second for the MJPG stream. Setting this lower than jsonFps will make the video skip frames.
  • darknetStdout: Whether the simulation should mimic Darknet's output on stdout.
  • json_port: The TCP port for JSON streaming.
  • mjpg_port: The TCP port for MJPG streaming.
  • yolo_json: Deprecated. Use detections instead.
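
If you prepare your own frame folder, the short format above is simply zero-padded sequential file names. A minimal sketch (the padding width of 3 and the helper name are assumptions for illustration):

```javascript
// Sketch: emit zero-padded "short format" frame names (001.jpg, 002.jpg, ...)
// as expected when pointing video_file_or_folder at a folder of JPGs.
// The padding width of 3 matches the examples above; adjust to your frame count.
function shortFormatName(frameNumber, padWidth = 3) {
  return `${String(frameNumber).padStart(padWidth, '0')}.jpg`;
}

const names = [1, 2, 101].map((n) => shortFormatName(n));
console.log(names); // [ '001.jpg', '002.jpg', '101.jpg' ]
```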

The simulation JSON and MJPG streams can also be started without OpenDataCam by invoking node scripts/YoloSimulation.js from the repository root folder.
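
Each frame on the JSON stream follows Darknet's JSON layout: a frame_id plus an objects array whose entries carry relative_coordinates. A minimal Node.js sketch of handling one such frame; the sample values and the 1280x720 frame size are invented for illustration:

```javascript
// Sketch: parse one Darknet-style JSON frame and convert the relative
// bounding-box coordinates to pixels. The field names follow Darknet's
// JSON output; the sample values and frame size below are invented.
const frame = {
  frame_id: 42,
  objects: [
    {
      class_id: 2,
      name: 'car',
      relative_coordinates: { center_x: 0.5, center_y: 0.25, width: 0.1, height: 0.2 },
      confidence: 0.87,
    },
  ],
};

const FRAME_WIDTH = 1280;  // assumed frame size
const FRAME_HEIGHT = 720;

// Scale each detection's relative box to absolute pixel coordinates
const detections = frame.objects.map((obj) => ({
  name: obj.name,
  confidence: obj.confidence,
  x: Math.round(obj.relative_coordinates.center_x * FRAME_WIDTH),
  y: Math.round(obj.relative_coordinates.center_y * FRAME_HEIGHT),
  w: Math.round(obj.relative_coordinates.width * FRAME_WIDTH),
  h: Math.round(obj.relative_coordinates.height * FRAME_HEIGHT),
}));

console.log(detections);
// e.g. [ { name: 'car', confidence: 0.87, x: 640, y: 180, w: 128, h: 144 } ]
```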

Release checklist

  • For the next release only: set $VERSION instead of master in the Kubernetes install script.
  • Make sure config.json contains the TO_REPLACE_VIDEO_INPUT placeholder values that are replaced by sed on installation.
  • Search and replace OLD_VERSION with NEW_VERSION in all documentation
  • Make sure the version in config.json > OPENDATACAM_VERSION is correct.
  • Make sure the version in package.json is correct.
  • Make sure the version in the README "Install and start OpenDataCam" wget install script is correct.
  • Make sure the version in the "Install OpenDataCam" wget install script is correct.
  • Make sure the VERSION in /docker/ is correct.
  • Generate up-to-date API documentation with npm run generateapidoc (not needed anymore).
  • Add a release on GitHub.

After you've added the release to GitHub, a GitHub Actions workflow will create the Docker images and automatically upload them to Docker Hub. It is no longer necessary to create a git tag or Docker images manually.

Markdown table of contents generator

List all cams

```bash
v4l2-ctl --list-devices
```

Technical architecture

(architecture diagram)

Code Style

OpenDataCam enforces its code style with ESLint. You can run npm run lint to check the whole code base, or npx eslint yourfile.js to check only a single file.