uv4l-raspicam

NAME

raspicam – UV4L plug-in driver for all the Raspberry Pi CSI Camera Boards and the TC358743 HDMI to MIPI converter chipset

SYNOPSIS

uv4l [ uv4l-options ] --driver raspicam [ raspicam-options ]

DESCRIPTION

This is the Userspace Video4Linux2 driver for all the Raspberry Pi CSI Camera Boards with support for Stereoscopic vision. See the uv4l manual page for other details and options.
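
For example, a basic invocation that loads the driver and captures 1280x720 H.264 video at 30 frames per second might look like the line below; any uv4l core options (device number, logging and so on) are omitted here and depend on the local setup:

uv4l --driver raspicam --encoding h264 --width 1280 --height 720 --framerate 30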

Global options:

--encoding arg (=jpeg)
GPU video encoding to use. Valid values are: yuv420, nv21, yvu420, rgb565, rgb565x, rgb24, bgr24, rgba, bgra, jpeg, mjpeg, h264

--width arg (=max)
image width

--height arg (=max)
image height

--stereoscopic-mode [=arg(=side_by_side)] (=none)
set up how the stereoscopic image is to be packed - either with the two images side by side, or one above the other (top/bottom). Available modes are "side_by_side", "top_bottom" or "none" (stereo disabled). Note that it is also perfectly valid to independently use each camera module from separate instances of uv4l running at the same time.

--camera-number arg
select the camera to use for the first channel; the second channel always uses the other camera. Valid values are 0 and 1. Default is not specified.

--decimate [=arg(=yes)] (=no)
also sometimes referred to as half/half mode. When a stereoscopic camera mode is enabled, the output frame still ends up the same size as the original (e.g. 1920x1080), but the individual images are squashed (2:1) in one dimension to fit them both in (i.e. each eye becomes either 960x1080 or 1920x540, but with non-square pixels).

--swap-eyes [=arg(=yes)] (=no)
when a stereoscopic camera mode is enabled, this puts the image for the right eye on the left/top and the left eye on the right/bottom. Display and H264 generally want --swap-eyes=0, stereoscopic JPEG wants --swap-eyes=1. This option might have no effect if not implemented in the firmware.
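
For example, on a board with both camera ports populated (e.g. a Compute Module), a side-by-side stereoscopic H.264 capture with each eye squashed to keep the original frame size might be requested as follows; the resolution is illustrative:

uv4l --driver raspicam --stereoscopic-mode side_by_side --decimate --encoding h264 --width 1920 --height 1080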

Video Capture options:

--framerate arg (=30)
Video maximum framerate, from 0 (auto) to 90 (max). The firmware might not allow more than 30fps on high resolutions.

--video-denoise [=arg(=yes)] (=yes)
turn on video denoise

H264 encoding options:

--profile arg (=high)
profile for H264 encoding. Valid values are: baseline, high, main

--level arg (=4)
level for H264 encoding. Valid values are: 4, 4.1, 4.2

--bitrate arg (=17000000)
Constant Video Bitrate

--intra-refresh-mode arg (=dummy)
intra refresh mode: adaptive, both, cyclic, cyclicrows, dummy

--intra-period arg
intra-frame refresh period (key frame rate/GoP size)

--inline-headers [=arg(=yes)] (=no)
insert inline headers (SPS, PPS) to stream

--quantisation-parameter arg
Quantisation Parameter for Variable BitRate from 10 to 40 (alternative to bitrate)
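
As an illustration, the options above can be combined for a low-latency network stream, e.g. a constant 2 Mbit/s bitrate with a key frame every 30 frames and inline SPS/PPS headers; the values are indicative only and should be adapted to the use case:

uv4l --driver raspicam --encoding h264 --profile high --bitrate 2000000 --intra-period 30 --inline-headers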

Video processing and object detection with cascade classifiers:

--text-overlay [=arg(=enabled)] (=disabled)
enable text overlay on the video

--text-filename arg
(=/usr/share/uv4l/raspicam/text.json)
JSON file containing the properties of the text to draw onto the video stream

--object-detection [=arg(=enabled)] (=disabled)
enable real-time object detection or tracking. This option has effect with 'yuv420', 'h264' and 'mjpeg' video encodings only

--object-detection-mode arg (=accurate_tracking)
object detection method: accurate_detection, accurate_tracking

--min-object-size arg (=80 80)
Minimum object size to detect: width height.
The default is adequate for face detection on a 320x240 ~15fps video on a Raspberry Pi 1.

--main-classifier arg (=/usr/share/uv4l/raspicam/lbpcascade_frontalface.xml)
Path to the XML classifier file used for object detection. The default is for face detection; an alternative classifier is 'haarcascade_frontalface_alt2.xml'

--secondary-classifier arg
(=/usr/share/uv4l/raspicam/lbpcascade_frontalface.xml)
Path to the additional XML classifier file used for object tracking
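
For instance, face detection with the default LBP classifier can be enabled on a small MJPEG stream (small frames keep the detector responsive, as noted under --min-object-size); the resolution is illustrative:

uv4l --driver raspicam --encoding mjpeg --width 320 --height 240 --object-detection --object-detection-mode accurate_tracking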

Still Capture options ('jpeg' encoding only):

--raw [=arg(=yes)] (=no)
add raw bayer data to jpeg metadata

--quality arg (=85)
jpeg quality, from 0 to 100

--stills-denoise [=arg(=yes)] (=yes)
turn on denoise for stills

Preview options:

--preview arg (=0 0 1920 1080)
preview window settings: x y width height

--fullscreen [=arg(=yes)] (=yes)
fullscreen preview mode

--opacity arg (=255)
preview window opacity, from 0 to 255

--nopreview [=arg(=yes)] (=yes)
do not display a preview window. The preview is NOT displayed by default.

--display-num arg
display on which to display the preview window (dispmanx/tvservice numbering)

Image parameters:

--sharpness arg (=0)
image sharpness from -100 to 100

--contrast arg (=0)
image contrast from -100 to 100

--brightness arg (=50)
image brightness from 0 to 100

--saturation arg (=0)
image saturation from -100 to 100

--iso arg (=0)
capture ISO. 0 for auto.

--vstab [=arg(=yes)] (=no)
turn on video stabilization

--ev arg (=0)
EV compensation from -10 to 10

--exposure arg (=auto)
exposure mode: antishake, auto, backlight, beach, fireworks, fixedfps, night, nightpreview, snow, sports, spotlight, verylong

--awb arg (=auto)
AWB mode: auto, cloudy, flash, fluorescent, horizon, incandescent, off, shade, sun, tungsten

--red-gain arg (=100)
auto-white balance red gain from 0 to 800, effective when awb is 'off'

--blue-gain arg (=100)
auto-white balance blue gain from 0 to 800, effective when awb is 'off'

--imgfx arg (=none)
image effect: blur, cartoon, colourbalance, colourpoint, colourswap, denoise, emboss, film, gpen, hatch, negative, none, oilpaint, pastel, posterise, saturation, sketch, solarise, washedout, watercolour, colour

--colfx [=arg(=128 128)]
colour effect: U V

--metering arg (=average)
metering mode: average, backlit, matrix, spot

--rotation arg (=0)
image rotation from 0 to 359 degrees

--hflip [=arg(=yes)] (=no)
horizontal flip

--vflip [=arg(=yes)] (=no)
vertical flip

--roi arg (=0 0 1 1)
region of interest x y w h, as normalized coordinates in the interval [0, 1]

--shutter-speed arg (=0)
shutter speed in units of 100 µs

--drc arg (=off)
dynamic range compression strength: high, low, medium, off

--text-annotation arg
basic, static text annotation over video, still and preview. The string must not be longer than 31 characters. For a more advanced feature see the --text-overlay option

--text-annotation-background [=arg(=yes)] (=no)
enable black background for annotated text

--black-level-compensation [=arg(=yes)] (=yes)
use black level compensation block in the ISP

--lens-shading [=arg(=yes)] (=yes)
use lens shading block in the ISP

--automatic-defective-pixel-correlation [=arg(=yes)] (=yes)
use automatic defective pixel correlation block in the ISP

--white-balance-gain [=arg(=yes)] (=yes)
use white balance gain block in the ISP

--crosstalk [=arg(=yes)] (=yes)
use crosstalk block in the ISP

--gamma [=arg(=yes)] (=yes)
use gamma block in the ISP

--sharpening [=arg(=yes)] (=yes)
use sharpening block in the ISP
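
As an example of the image parameters above, the automatic white balance can be switched off and fixed gains applied instead; the gain values below are purely illustrative and not calibrated for any particular sensor or light source:

uv4l --driver raspicam --awb off --red-gain 150 --blue-gain 120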

Advanced options:

--statistics [=arg(=enabled)] (=enabled)
enable statistics (e.g. detected fps)

--output-buffers arg (=3)
number of buffers to be enqueued for video output

TC358743 HDMI to MIPI converter options:

--tc358743 [=arg(=yes)] (=no)
enable the TC358743 camera serial converter chipset (HDMI to MIPI)

--tc358743-i2c-dev arg (=/dev/i2c-1)
device node for the communication with the TC358743 chip over I2C bus

--tc358743-init-command arg (=/usr/share/uv4l/raspicam/tc358743_init.sh)
path to custom helper init script or init command

--tc358743-no-signal-fallthrough arg (=no)
when this option is set, if no signal is detected when the stream starts, fall back to the specified width, height and frame rate; otherwise an error occurs

--tc358743-edid-file arg
read the EDID from the given file in hex format

--record [=arg(=yes)] (=no)
record a raw H264 video while streaming (a new .264 file is created each time the stream starts)

--recording-dir arg (=/usr/share/uv4l/recordings)
path to the directory where recorded video files are stored

--recording-bitrate arg (=800000)
recording bitrate in bit/s
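
For example, a hypothetical setup that captures 1080p30 from an HDMI source through the TC358743 chip, loads an EDID file and records the raw H.264 stream while streaming could look like the following (the EDID path is a placeholder):

uv4l --driver raspicam --tc358743 --tc358743-edid-file /path/to/edid.txt --encoding h264 --width 1920 --height 1080 --framerate 30 --record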

TensorFlow model options:

--tflite-model-file arg
path to the .tflite SSD/CNN model file to be loaded into the Edge TPU module (if present) or run by the CPU. The model must accept RGB input images with width multiple of 32 and height multiple of 16.

--tflite-model-output-topk arg (=5)
only consider the specified top-k predictions at most with the highest confidence (use 0 for maximum number)

--tflite-model-output-threshold arg (=0.25)
minimum confidence threshold for returned predictions

--tflite-overlay-model-output [=arg(=yes)] (=no)
draw model output onto the image: boundary boxes, confidence scores, class ids, etc. (might not be supported by all the video encodings)

--tflite-detection-classids arg
for detection only consider these object class ids among the top-k predictions
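
For instance, a hypothetical SSD model can be run with its output drawn onto the video, keeping at most the five most confident predictions above a 0.5 threshold; the model path is a placeholder and must point to a .tflite file meeting the input constraints described under --tflite-model-file:

uv4l --driver raspicam --encoding h264 --tflite-model-file /path/to/model.tflite --tflite-model-output-topk 5 --tflite-model-output-threshold 0.5 --tflite-overlay-model-output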

Pan/Tilt object tracking with PID controller options:

--tracking-pan-tilt [=arg(=yes)] (=no)
enable object tracking with pan/tilt servos of objects detected by the TensorFlow model

--tracking-strategy arg (=maxarea)
'maxarea': track the detected object with the largest boundary box area; 'all': track the detected objects all together

--tracking-pan-pid-kp arg (=0)
proportional constant for the pan PID controller

--tracking-pan-pid-ki arg (=0)
integrative constant for the pan PID controller

--tracking-pan-pid-kd arg (=0)
derivative constant for the pan PID controller

--tracking-tilt-pid-kp arg (=0)
proportional constant for the tilt PID controller

--tracking-tilt-pid-ki arg (=0)
integrative constant for the tilt PID controller

--tracking-tilt-pid-kd arg (=0)
derivative constant for the tilt PID controller

--tracking-pid-p-on-m [=arg(=PonM)] (=PonM)
select proportional-on-measurement (PonM) instead of proportional-on-error (PonE) for the PID controller

--tracking-pan-home-position arg (=0)
home position for pan servo in degrees from -90 to 90

--tracking-tilt-home-position arg (=0)
home position for tilt servo in degrees from -90 to 90

--tracking-home-init [=arg(=yes)] (=yes)
put pan/tilt servos in home position before capturing starts

--tracking-home-timeout arg (=15000)
put pan/tilt servos in home position if no object has been detected for at least the specified amount of time in ms (0 for no timeout)

--tracking-pan-servo-channel1 [=arg(=yes)] (=yes)
specify whether the pan servo is on channel 1 and the tilt servo is on channel 2, or vice versa

--tracking-hat-i2c-dev arg (=/dev/i2c-1)
pan/tilt hat device node
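
For example, pan/tilt tracking of objects detected by a TensorFlow model could be enabled as follows; the model path is a placeholder and the PID constants are illustrative values that would need tuning for the actual servos and HAT:

uv4l --driver raspicam --tflite-model-file /path/to/model.tflite --tracking-pan-tilt --tracking-strategy maxarea --tracking-pan-pid-kp 0.05 --tracking-tilt-pid-kp 0.05 --tracking-hat-i2c-dev /dev/i2c-1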

Other options:

--driver-config-file arg
path to the configuration file containing driver options. Options specified via command line have higher priority.
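
For example, a configuration file equivalent to passing --encoding h264 --width 1280 --height 720 on the command line could look like the snippet below, assuming the usual uv4l configuration file layout of one 'option = value' pair per line, with the option names as above but without the leading '--' and with '#' starting a comment; the file path is a placeholder:

# /path/to/raspicam.conf (placeholder path)
encoding = h264
width = 1280
height = 720

It would then be loaded with:

uv4l --driver raspicam --driver-config-file /path/to/raspicam.conf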

--custom-sensor-config arg
select one sensor mode. Possible values for Camera v1.x are:
0 for normal mode,
1 for 1080P30 cropped 1-30fps mode,
2 for 5MPix 1-15fps mode,
3 for 5MPix 0.1666-1fps mode,
4 for 2x2 binned 1296x972 1-42fps mode,
5 for 2x2 binned 1296x730 1-49fps mode,
6 for VGA 30-60fps mode,
7 for VGA 60-90fps mode
Possible values for Camera v2.x are:
0 for normal mode,
1 for 1080P30 cropped 0.1-30fps mode,
2 for 8MPix 0.1-15fps mode,
3 for 8MPix 0.1-15fps mode,
4 for 2x2 binned 1640x1232 0.1-40fps mode,
5 for 2x2 binned 1640x922 0.1-40fps mode,
6 for 16:9 1280x720 40-90fps mode,
7 for 4:3 640x480 40-90fps mode
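
For example, the high frame rate VGA mode of a v1.x camera (mode 7 in the first table above) could be selected together with a matching resolution and frame rate; whether the firmware actually delivers the requested rate depends on the sensor and encoding:

uv4l --driver raspicam --custom-sensor-config 7 --width 640 --height 480 --framerate 90 --encoding h264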

SEE ALSO

uv4l(1)

AUTHOR

<info@linux-projects.org> https://linux-projects.org
