├── .github
│   └── ISSUE_TEMPLATE
│       ├── bug_report.md
│       └── feature_request.md
├── .gitignore
├── LICENSE
├── README.md
├── dual_camera.py
├── face_detect.py
├── simple_camera.cpp
└── simple_camera.py
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Issue report
3 | about: Create a report to help us improve
4 | title: "[BUG]"
5 | labels: bug
6 | assignees: ''
7 |
8 | ---
9 |
10 | **Describe the issue**
11 | Please describe the issue
12 |
13 | **What version of L4T/JetPack**
14 | L4T/JetPack version:
15 |
16 | **What version of OpenCV**
17 | OpenCV version:
18 |
19 | **Python Version**
20 | Python version if applicable:
21 |
22 | **To Reproduce**
23 | Steps to reproduce the behavior:
24 | For example, what command line did you run?
25 |
26 | **Expected behavior**
27 | A clear and concise description of what you expected to happen.
28 |
29 | **Additional context**
30 | Add any other context about the problem here.
31 |
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Feature request
3 | about: Suggest an idea for this project
4 | title: "[FEATURE REQUEST]"
5 | labels: ''
6 | assignees: ''
7 |
8 | ---
9 |
10 | **Is your feature request related to a problem? Please describe.**
11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
12 |
13 | **Describe the solution you'd like**
14 | A clear and concise description of what you want to happen.
15 |
16 | **Describe alternatives you've considered**
17 | A clear and concise description of any alternative solutions or features you've considered.
18 |
19 | **Additional context**
20 | Add any other context or screenshots about the feature request here.
21 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | simple_camera
2 | __pycache__
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2019-2022 JetsonHacks
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
23 | OpenCV
24 | https://opencv.org
25 | License
26 | By downloading, copying, installing or using the software you agree to this license. If you do not agree to this license, do not download, install, copy or use the software.
27 |
28 | License Agreement
29 | For Open Source Computer Vision Library
30 | (3-clause BSD License)
31 | Copyright (C) 2000-2019, Intel Corporation, all rights reserved.
32 | Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved.
33 | Copyright (C) 2009-2016, NVIDIA Corporation, all rights reserved.
34 | Copyright (C) 2010-2013, Advanced Micro Devices, Inc., all rights reserved.
35 | Copyright (C) 2015-2016, OpenCV Foundation, all rights reserved.
36 | Copyright (C) 2015-2016, Itseez Inc., all rights reserved.
37 | Third party copyrights are property of their respective owners.
38 |
39 | Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
40 |
41 | Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
42 | Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
43 | Neither the names of the copyright holders nor the names of the contributors may be used to endorse or promote products derived from this software without specific prior written permission.
44 | This software is provided by the copyright holders and contributors "as is" and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall copyright holders or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage.
45 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # CSI-Camera
2 | Simple example of using a MIPI-CSI(2) Camera (like the Raspberry Pi Version 2 camera) with the NVIDIA Jetson Developer Kits with CSI camera ports. This includes the recent Jetson Nano and Jetson Xavier NX. This is support code for the article on JetsonHacks: https://wp.me/p7ZgI9-19v
3 |
4 | For the Nanos and Xavier NX, the camera should be installed in the MIPI-CSI Camera Connector on the carrier board. The pins on the camera ribbon should face the Jetson module; the tape stripe faces outward.
5 |
6 | Some Jetson developer kits have two CSI camera slots. You can use the sensor_id property of the GStreamer nvarguscamerasrc element to specify which camera. Valid values are 0 or 1 (the default is 0 if not specified), e.g.
7 |
8 | ```
9 | nvarguscamerasrc sensor_id=0
10 | ```
11 |
12 | To test the camera:
13 |
14 | ```
15 | # Simple Test
16 | # Ctrl^C to exit
17 | # sensor_id selects the camera: 0 or 1 on Jetson Nano B01
18 | $ gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! nvoverlaysink
19 |
20 | # More specific - width, height and framerate are from supported video modes
21 | # Example also shows the sensor_id parameter to nvarguscamerasrc
22 | # See table below for example video modes of example sensor
23 | $ gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! \
24 | 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1' ! \
25 | nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=540' ! \
26 | nvvidconv ! nvegltransform ! nveglglessink -e
27 | ```
28 |
29 | Note: Your camera may report its modes differently than shown below. You can use the simple gst-launch example above to determine the camera modes that are reported by the sensor you are using.
30 | ```
31 | GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
32 | ```
33 |
34 | Also, the display transform may be sensitive to width and height (in the above example, width=960, height=540). If you experience issues, check that your display width and height use the same aspect ratio as the selected camera frame size (in the above example, 960x540 is 1/4 the size of 1920x1080).
35 |
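As a quick sanity check, you can derive the display size from the capture size so the two always share the same aspect ratio. A minimal Python sketch (the `scaled_display_size` helper is illustrative, not part of this repository):

```python
def scaled_display_size(capture_width, capture_height, divisor=2):
    """Divide both dimensions by the same divisor so the display
    window keeps the aspect ratio of the camera frame."""
    return capture_width // divisor, capture_height // divisor

# 1920x1080 halved per side gives 960x540 (1/4 the pixel count)
print(scaled_display_size(1920, 1080, divisor=2))  # (960, 540)
```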
36 |
37 | ## Samples
38 |
39 |
40 | ### simple_camera.py
41 | simple_camera.py is a Python script which reads from the camera and displays the frame to a window on the screen using OpenCV:
42 | ```
43 | $ python simple_camera.py
44 | ```
45 | ### face_detect.py
46 |
47 | face_detect.py is a python script which reads from the camera and uses Haar Cascades to detect faces and eyes:
48 | ```
49 | $ python face_detect.py
50 | ```
51 | Haar Cascades is a machine learning based approach where a cascade function is trained from a lot of positive and negative images. The function is then used to detect objects in other images.
52 |
53 | See: https://docs.opencv.org/3.3.1/d7/d8b/tutorial_py_face_detection.html
54 |
55 |
56 | ### dual_camera.py
57 | Note: You will need to install numpy for the Dual Camera Python example to work:
58 | ```
59 | $ pip3 install numpy
60 | ```
61 | This example is for the newer Jetson boards (Jetson Nano, Jetson Xavier NX) with two CSI-MIPI camera ports. This is a simple Python program which reads both CSI cameras and displays them in one window. The window is 1920x540. For performance, the script uses a separate thread for reading each camera image stream. To run the script:
62 |
63 | ```
64 | $ python3 dual_camera.py
65 | ```
66 |
67 | ### simple_camera.cpp
68 | The last example is a simple C++ program which reads from the camera and displays to a window on the screen using OpenCV:
69 |
70 | ```
71 | $ g++ -std=c++11 -Wall -I/usr/lib/opencv -I/usr/include/opencv4 simple_camera.cpp -L/usr/lib -lopencv_core -lopencv_highgui -lopencv_videoio -o simple_camera
72 |
73 | $ ./simple_camera
74 | ```
75 | This program is a simple outline and does not include all of the error checking that production code needs. For better C++ code, see https://github.com/dusty-nv/jetson-utils
76 |
77 |
77 | ## Notes
78 |
79 | ### Camera Image Formats
80 | You can use v4l2-ctl to determine the camera capabilities. v4l2-ctl is in the v4l-utils package:
81 | ```
82 | $ sudo apt-get install v4l-utils
83 | ```
84 | For the Raspberry Pi V2 camera, a typical output is (assuming the camera is /dev/video0):
85 |
86 | ```
87 | $ v4l2-ctl --list-formats-ext
88 | ioctl: VIDIOC_ENUM_FMT
89 | Index : 0
90 | Type : Video Capture
91 | Pixel Format: 'RG10'
92 | Name : 10-bit Bayer RGRG/GBGB
93 | Size: Discrete 3280x2464
94 | Interval: Discrete 0.048s (21.000 fps)
95 | Size: Discrete 3280x1848
96 | Interval: Discrete 0.036s (28.000 fps)
97 | Size: Discrete 1920x1080
98 | Interval: Discrete 0.033s (30.000 fps)
99 | Size: Discrete 1280x720
100 | Interval: Discrete 0.017s (60.000 fps)
101 | Size: Discrete 1280x720
102 | Interval: Discrete 0.008s (120.000 fps)
103 | ```
104 |
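The Interval values reported by v4l2-ctl are simply the reciprocal of the frame rate. A small sketch of the conversion (the helper name is illustrative):

```python
def interval_to_fps(interval_seconds):
    """Convert a v4l2-ctl frame interval in seconds to frames per second."""
    return 1.0 / interval_seconds

# 0.033s per frame is ~30 fps, 0.017s is ~60 fps
print(round(interval_to_fps(1 / 30)))  # 30
```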
105 | ### GStreamer Parameter
106 | For the GStreamer pipeline, the nvvidconv flip-method parameter can rotate/flip the image. This is useful when the camera is mounted in a different orientation than the default.
107 |
108 | ```
109 |
110 | flip-method : video flip methods
111 | flags: readable, writable, controllable
112 | Enum "GstNvVideoFlipMethod" Default: 0, "none"
113 | (0): none - Identity (no rotation)
114 | (1): counterclockwise - Rotate counter-clockwise 90 degrees
115 | (2): rotate-180 - Rotate 180 degrees
116 | (3): clockwise - Rotate clockwise 90 degrees
117 | (4): horizontal-flip - Flip horizontally
118 | (5): upper-right-diagonal - Flip across upper right/lower left diagonal
119 | (6): vertical-flip - Flip vertically
120 | (7): upper-left-diagonal - Flip across upper left/lower right diagonal
121 | ```
122 |
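Note that flip methods involving a 90-degree rotation or a diagonal flip (1, 3, 5, 7) transpose the image, so the output width and height are swapped relative to the input; size the downstream caps accordingly. A hedged sketch of that bookkeeping (the helper is illustrative, not part of nvvidconv):

```python
# Flip methods that transpose the image: 90-degree rotations (1, 3)
# and diagonal flips (5, 7) swap the output width and height.
TRANSPOSING_FLIP_METHODS = {1, 3, 5, 7}

def display_dims_for_flip(width, height, flip_method):
    """Return the (width, height) of the frame after nvvidconv flip-method."""
    if flip_method in TRANSPOSING_FLIP_METHODS:
        return height, width
    return width, height

print(display_dims_for_flip(1920, 1080, 3))  # (1080, 1920)
```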
123 | ### OpenCV and Python
124 | Starting with L4T 32.2.1 / JetPack 4.2.2, GStreamer support is built in to OpenCV.
125 | The OpenCV version is 3.3.1 for those versions. Please note that if you are using
126 | earlier versions of OpenCV (most likely installed from the Ubuntu repository), you
127 | will get 'Unable to open camera' errors.
128 |
129 | If you can open the camera in GStreamer from the command line, and have issues opening the camera in Python, check the OpenCV version.
130 |
131 | ```
132 | >>> cv2.__version__
133 | ```
134 |
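To check the version programmatically rather than at the interpreter prompt, you can compare cv2.__version__ against the minimum. A sketch (the parsing helper is illustrative and assumes a plain numeric version string):

```python
def meets_minimum_version(version_string, minimum=(3, 3, 1)):
    """Return True if a dotted version string (e.g. cv2.__version__)
    is at least the minimum version. Assumes numeric components."""
    parts = tuple(int(p) for p in version_string.split(".")[:3])
    return parts >= minimum

# OpenCV 3.3.1+ on L4T has GStreamer support built in
print(meets_minimum_version("4.1.1"))  # True
print(meets_minimum_version("3.2.0"))  # False
```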
135 | ## Release Notes
136 |
137 | v3.2 Release January, 2022
138 | * Add Exception handling to Python code
139 | * Faster GStreamer pipelines, better performance
140 | * Better naming of variables, simplification
141 | * Remove Instrumented examples
142 | * L4T 32.6.1 (JetPack 4.6)
143 | * OpenCV 4.4.1
144 | * Python3
145 | * Tested on Jetson Nano B01, Jetson Xavier NX
146 | * Tested with Raspberry Pi V2 cameras
147 |
148 |
149 | v3.11 Release April, 2020
150 | * Release both cameras in dual camera example (bug-fix)
151 |
152 | v3.1 Release March, 2020
153 | * L4T 32.3.1 (JetPack 4.3)
154 | * OpenCV 4.1.1
155 | * Tested on Jetson Nano B01
156 | * Tested with Raspberry Pi v2 cameras
157 |
158 | v3.0 December 2019
159 | * L4T 32.3.1
160 | * OpenCV 4.1.1.
161 | * Tested with Raspberry Pi v2 camera
162 |
163 | v2.0 Release September, 2019
164 | * L4T 32.2.1 (JetPack 4.2.2)
165 | * OpenCV 3.3.1
166 | * Tested on Jetson Nano
167 |
168 | Initial Release (v1.0) March, 2019
169 | * L4T 32.1.0 (JetPack 4.2)
170 | * Tested on Jetson Nano
171 |
172 |
173 |
--------------------------------------------------------------------------------
/dual_camera.py:
--------------------------------------------------------------------------------
1 | # MIT License
2 | # Copyright (c) 2019-2022 JetsonHacks
3 |
4 | # A simple code snippet
5 | # Using two CSI cameras (such as the Raspberry Pi Version 2) connected to a
6 | # NVIDIA Jetson Nano Developer Kit with two CSI ports (Jetson Nano, Jetson Xavier NX) via OpenCV
7 | # Drivers for the camera and OpenCV are included in the base image in JetPack 4.3+
8 |
9 | # This script will open a window and place the camera stream from each camera in a window
10 | # arranged horizontally.
11 | # The camera streams are each read in their own thread, as when done sequentially there
12 | # is a noticeable lag
13 |
14 | import cv2
15 | import threading
16 | import numpy as np
17 |
18 |
19 | class CSI_Camera:
20 |
21 | def __init__(self):
22 | # Initialize instance variables
23 | # OpenCV video capture element
24 | self.video_capture = None
25 | # The last captured image from the camera
26 | self.frame = None
27 | self.grabbed = False
28 | # The thread where the video capture runs
29 | self.read_thread = None
30 | self.read_lock = threading.Lock()
31 | self.running = False
32 |
33 | def open(self, gstreamer_pipeline_string):
34 | try:
35 | self.video_capture = cv2.VideoCapture(
36 | gstreamer_pipeline_string, cv2.CAP_GSTREAMER
37 | )
38 | # Grab the first frame to start the video capturing
39 | self.grabbed, self.frame = self.video_capture.read()
40 |
41 | except RuntimeError:
42 | self.video_capture = None
43 | print("Unable to open camera")
44 | print("Pipeline: " + gstreamer_pipeline_string)
45 |
46 |
47 | def start(self):
48 | if self.running:
49 | print('Video capturing is already running')
50 | return None
51 | # create a thread to read the camera image
52 |         if self.video_capture is not None:
53 | self.running = True
54 | self.read_thread = threading.Thread(target=self.updateCamera)
55 | self.read_thread.start()
56 | return self
57 |
58 |     def stop(self):
59 |         self.running = False
60 |         # Wait for the reader thread to finish
61 |         if self.read_thread is not None:
62 |             self.read_thread.join()
63 |             self.read_thread = None
63 |
64 | def updateCamera(self):
65 | # This is the thread to read images from the camera
66 | while self.running:
67 | try:
68 | grabbed, frame = self.video_capture.read()
69 | with self.read_lock:
70 | self.grabbed = grabbed
71 | self.frame = frame
72 |             except RuntimeError:
73 |                 print("Could not read image from camera")
74 |                 # Something bad happened; stop the read loop so the
75 |                 # main thread can clean up
76 |                 self.running = False
76 |
77 |     def read(self):
78 |         with self.read_lock:
79 |             frame = self.frame.copy() if self.frame is not None else None
80 |             grabbed = self.grabbed
81 |         return grabbed, frame
82 |
83 | def release(self):
84 |         if self.video_capture is not None:
85 | self.video_capture.release()
86 | self.video_capture = None
87 | # Now kill the thread
88 |         if self.read_thread is not None:
89 | self.read_thread.join()
90 |
91 |
92 | """
93 | gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
94 | Flip the image by setting the flip_method (most common values: 0 and 2)
95 | display_width and display_height determine the size of each camera pane in the window on the screen
96 | Default 1920x1080
97 | """
98 |
99 |
100 | def gstreamer_pipeline(
101 | sensor_id=0,
102 | capture_width=1920,
103 | capture_height=1080,
104 | display_width=1920,
105 | display_height=1080,
106 | framerate=30,
107 | flip_method=0,
108 | ):
109 | return (
110 | "nvarguscamerasrc sensor-id=%d ! "
111 | "video/x-raw(memory:NVMM), width=(int)%d, height=(int)%d, framerate=(fraction)%d/1 ! "
112 | "nvvidconv flip-method=%d ! "
113 | "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
114 | "videoconvert ! "
115 | "video/x-raw, format=(string)BGR ! appsink"
116 | % (
117 | sensor_id,
118 | capture_width,
119 | capture_height,
120 | framerate,
121 | flip_method,
122 | display_width,
123 | display_height,
124 | )
125 | )
126 |
127 |
128 | def run_cameras():
129 | window_title = "Dual CSI Cameras"
130 | left_camera = CSI_Camera()
131 | left_camera.open(
132 | gstreamer_pipeline(
133 | sensor_id=0,
134 | capture_width=1920,
135 | capture_height=1080,
136 | flip_method=0,
137 | display_width=960,
138 | display_height=540,
139 | )
140 | )
141 | left_camera.start()
142 |
143 | right_camera = CSI_Camera()
144 | right_camera.open(
145 | gstreamer_pipeline(
146 | sensor_id=1,
147 | capture_width=1920,
148 | capture_height=1080,
149 | flip_method=0,
150 | display_width=960,
151 | display_height=540,
152 | )
153 | )
154 | right_camera.start()
155 |
156 | if left_camera.video_capture.isOpened() and right_camera.video_capture.isOpened():
157 |
158 | cv2.namedWindow(window_title, cv2.WINDOW_AUTOSIZE)
159 |
160 | try:
161 | while True:
162 | _, left_image = left_camera.read()
163 | _, right_image = right_camera.read()
164 | # Use numpy to place images next to each other
165 | camera_images = np.hstack((left_image, right_image))
166 | # Check to see if the user closed the window
167 | # Under GTK+ (Jetson Default), WND_PROP_VISIBLE does not work correctly. Under Qt it does
168 | # GTK - Substitute WND_PROP_AUTOSIZE to detect if window has been closed by user
169 | if cv2.getWindowProperty(window_title, cv2.WND_PROP_AUTOSIZE) >= 0:
170 | cv2.imshow(window_title, camera_images)
171 | else:
172 | break
173 |
174 |                 # This also acts as a frame rate limiter
175 | keyCode = cv2.waitKey(30) & 0xFF
176 | # Stop the program on the ESC key
177 | if keyCode == 27:
178 | break
179 | finally:
180 |
181 | left_camera.stop()
182 | left_camera.release()
183 | right_camera.stop()
184 | right_camera.release()
185 | cv2.destroyAllWindows()
186 | else:
187 | print("Error: Unable to open both cameras")
188 | left_camera.stop()
189 | left_camera.release()
190 | right_camera.stop()
191 | right_camera.release()
192 |
193 |
194 |
195 | if __name__ == "__main__":
196 | run_cameras()
197 |
--------------------------------------------------------------------------------
/face_detect.py:
--------------------------------------------------------------------------------
1 | # MIT License
2 | # Copyright (c) 2019-2022 JetsonHacks
3 | # See LICENSE for OpenCV license and additional information
4 |
5 | # https://docs.opencv.org/3.3.1/d7/d8b/tutorial_py_face_detection.html
6 | # On the Jetson Nano, OpenCV comes preinstalled
7 | # Data files are in /usr/share/opencv4
8 |
9 | import cv2
10 |
11 | # gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
12 | # Defaults to 1920x1080 @ 30fps
13 | # Flip the image by setting the flip_method (most common values: 0 and 2)
14 | # display_width and display_height determine the size of the window on the screen
15 | # Note that the appsink element drops frames if they arrive faster than they are processed (drop=True)
16 |
17 | def gstreamer_pipeline(
18 | capture_width=1920,
19 | capture_height=1080,
20 | display_width=960,
21 | display_height=540,
22 | framerate=30,
23 | flip_method=0,
24 | ):
25 | return (
26 | "nvarguscamerasrc ! "
27 | "video/x-raw(memory:NVMM), "
28 | "width=(int)%d, height=(int)%d, framerate=(fraction)%d/1 ! "
29 | "nvvidconv flip-method=%d ! "
30 | "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
31 | "videoconvert ! "
32 | "video/x-raw, format=(string)BGR ! appsink drop=True"
33 | % (
34 | capture_width,
35 | capture_height,
36 | framerate,
37 | flip_method,
38 | display_width,
39 | display_height,
40 | )
41 | )
42 |
43 |
44 | def face_detect():
45 | window_title = "Face Detect"
46 | face_cascade = cv2.CascadeClassifier(
47 | "/usr/share/opencv4/haarcascades/haarcascade_frontalface_default.xml"
48 | )
49 | eye_cascade = cv2.CascadeClassifier(
50 | "/usr/share/opencv4/haarcascades/haarcascade_eye.xml"
51 | )
52 | video_capture = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
53 | if video_capture.isOpened():
54 | try:
55 | cv2.namedWindow(window_title, cv2.WINDOW_AUTOSIZE)
56 | while True:
57 |                 ret, frame = video_capture.read()
58 |                 if not ret:
59 |                     break
60 |                 gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
59 | faces = face_cascade.detectMultiScale(gray, 1.3, 5)
60 |
61 | for (x, y, w, h) in faces:
62 | cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
63 | roi_gray = gray[y : y + h, x : x + w]
64 | roi_color = frame[y : y + h, x : x + w]
65 | eyes = eye_cascade.detectMultiScale(roi_gray)
66 | for (ex, ey, ew, eh) in eyes:
67 | cv2.rectangle(
68 | roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2
69 | )
70 | # Check to see if the user closed the window
71 | # Under GTK+ (Jetson Default), WND_PROP_VISIBLE does not work correctly. Under Qt it does
72 | # GTK - Substitute WND_PROP_AUTOSIZE to detect if window has been closed by user
73 | if cv2.getWindowProperty(window_title, cv2.WND_PROP_AUTOSIZE) >= 0:
74 | cv2.imshow(window_title, frame)
75 | else:
76 | break
77 | keyCode = cv2.waitKey(10) & 0xFF
78 | # Stop the program on the ESC key or 'q'
79 | if keyCode == 27 or keyCode == ord('q'):
80 | break
81 | finally:
82 | video_capture.release()
83 | cv2.destroyAllWindows()
84 | else:
85 | print("Unable to open camera")
86 |
87 |
88 | if __name__ == "__main__":
89 | face_detect()
90 |
--------------------------------------------------------------------------------
/simple_camera.cpp:
--------------------------------------------------------------------------------
1 | // simple_camera.cpp
2 | // MIT License
3 | // Copyright (c) 2019-2022 JetsonHacks
4 | // See LICENSE for OpenCV license and additional information
5 | // Using a CSI camera (such as the Raspberry Pi Version 2) connected to a
6 | // NVIDIA Jetson Nano Developer Kit using OpenCV
7 | // Drivers for the camera and OpenCV are included in the base image
8 |
9 | #include <opencv2/opencv.hpp>
10 |
11 | std::string gstreamer_pipeline (int capture_width, int capture_height, int display_width, int display_height, int framerate, int flip_method) {
12 | return "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)" + std::to_string(capture_width) + ", height=(int)" +
13 | std::to_string(capture_height) + ", framerate=(fraction)" + std::to_string(framerate) +
14 | "/1 ! nvvidconv flip-method=" + std::to_string(flip_method) + " ! video/x-raw, width=(int)" + std::to_string(display_width) + ", height=(int)" +
15 | std::to_string(display_height) + ", format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink";
16 | }
17 |
18 | int main()
19 | {
20 | int capture_width = 1280 ;
21 | int capture_height = 720 ;
22 | int display_width = 1280 ;
23 | int display_height = 720 ;
24 | int framerate = 30 ;
25 | int flip_method = 0 ;
26 |
27 | std::string pipeline = gstreamer_pipeline(capture_width,
28 | capture_height,
29 | display_width,
30 | display_height,
31 | framerate,
32 | flip_method);
33 | std::cout << "Using pipeline: \n\t" << pipeline << "\n";
34 |
35 | cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
36 | if(!cap.isOpened()) {
37 |         std::cout << "Failed to open camera." << std::endl;
38 |         return (-1);
39 |     }
40 | 
41 |     cv::namedWindow("CSI Camera", cv::WINDOW_AUTOSIZE);
42 |     cv::Mat img;
43 | 
44 |     std::cout << "Hit ESC to exit" << "\n";
45 |     while (true)
46 |     {
47 |         if (!cap.read(img)) {
48 |             std::cout << "Capture read error" << std::endl;
49 |             break;
50 |         }
51 |         cv::imshow("CSI Camera", img);
52 |         int keycode = cv::waitKey(10) & 0xff;
53 |         // Stop the program on the ESC key
54 |         if (keycode == 27) break;
55 |     }
56 | 
57 |     cap.release();
58 |     cv::destroyAllWindows();
59 |     return 0;
60 | }
--------------------------------------------------------------------------------
/simple_camera.py:
--------------------------------------------------------------------------------
1 | # MIT License
2 | # Copyright (c) 2019-2022 JetsonHacks
3 | 
4 | # Using a CSI camera (such as the Raspberry Pi Version 2) connected to a
5 | # NVIDIA Jetson Nano Developer Kit using OpenCV
6 | # Drivers for the camera and OpenCV are included in the base image
7 | 
8 | import cv2
9 | 
10 | # gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
11 | # Defaults to 1920x1080 @ 30fps, displayed in a 960x540 window
12 | # Flip the image by setting the flip_method (most common values: 0 and 2)
13 | # display_width and display_height determine the size of the window on the screen
14 | 
15 | 
16 | def gstreamer_pipeline(
17 |     capture_width=1920,
18 |     capture_height=1080,
19 |     display_width=960,
20 |     display_height=540,
21 |     framerate=30,
22 |     flip_method=0,
23 | ):
24 |     return (
25 |         "nvarguscamerasrc ! "
26 |         "video/x-raw(memory:NVMM), "
27 |         "width=(int)%d, height=(int)%d, framerate=(fraction)%d/1 ! "
28 |         "nvvidconv flip-method=%d ! "
29 |         "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
30 |         "videoconvert ! "
31 |         "video/x-raw, format=(string)BGR ! appsink"
32 |         % (
33 |             capture_width,
34 |             capture_height,
35 |             framerate,
36 |             flip_method,
37 |             display_width,
38 |             display_height,
39 |         )
40 |     )
41 | 
42 | 
43 | def show_camera():
44 |     window_title = "CSI Camera"
45 |     print(gstreamer_pipeline(flip_method=0))
46 |     video_capture = cv2.VideoCapture(
47 |         gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER
48 |     )
49 |     if video_capture.isOpened():
50 |         try:
51 |             cv2.namedWindow(window_title, cv2.WINDOW_AUTOSIZE)
52 |             while True:
53 |                 ret, frame = video_capture.read()
54 |                 if not ret:
55 |                     break
56 |                 # Check to see if the user closed the window
57 |                 # Under GTK+ (Jetson Default), WND_PROP_VISIBLE does not work correctly. Under Qt it does
58 |                 # GTK - Substitute WND_PROP_AUTOSIZE to detect if window has been closed by user
59 |                 if cv2.getWindowProperty(window_title, cv2.WND_PROP_AUTOSIZE) >= 0:
60 | cv2.imshow(window_title, frame)
61 | else:
62 | break
63 | keyCode = cv2.waitKey(10) & 0xFF
64 | # Stop the program on the ESC key or 'q'
65 | if keyCode == 27 or keyCode == ord('q'):
66 | break
67 | finally:
68 | video_capture.release()
69 | cv2.destroyAllWindows()
70 | else:
71 | print("Error: Unable to open camera")
72 |
73 |
74 | if __name__ == "__main__":
75 | show_camera()
76 |
--------------------------------------------------------------------------------