├── LICENSE ├── README.md ├── raspi_graspinghand.png ├── raspi_sobel.png ├── raspi_wrongformat.png └── vx_cam_test.c /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2017 Tim Lukins 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # OpenVX Running on a Raspberry Pi 2 | 3 | ![Raspberry edges](raspi_graspinghand.png) 4 | 5 | **_N.B. 26th March 2017 - This article was written almost 2 years ago about April 2015. Things have moved on a bit with OpenVX - but not too much - so hopefully this is still mostly relevant. 
I may update it at some point..._** 6 | 7 | ### Introduction 8 | 9 | This post is (was) intended as a quick introduction to the possibility of using OpenVX on that most ubiquitous and appealing of embeddable/single-board computers: the Raspberry Pi. 10 | 11 | However, in the course of writing this all down, it seems to have grown in length to cover everything from accessing video on Linux and modifying the build instructions, to writing a custom OpenVX graph. Hopefully this won't put anyone off! 12 | 13 | The intention is still the same: to allow the Pi enthusiast a way to experience OpenVX as a means of writing a Computer Vision application. 14 | 15 | For simplicity, we give instructions for compiling the OpenVX sample implementation directly on the Pi itself. This may be slower - but it avoids the complexities of cross-compiling, and it will still result in a reasonably optimised executable. 16 | 17 | **However, a key realisation at the conclusion of all this is that the resulting performance lag is rather a hindrance for any serious real-time use. We discuss this further in the final section of this article, and have a few suggestions about what can be done.** 18 | 19 | The article assumes a basic familiarity with the Raspberry Pi command line and general operation, and it will help to know a little `C` programming. Plus, we are assuming the reader has purchased and connected an official Raspberry Pi camera module. Using an alternative camera is possible - but will require locating and using an appropriate Video for Linux kernel module. Furthermore, these instructions should also be roughly compatible with other Debian-based Linux distributions - so they could be followed to perform the installation on other such systems. 
20 | 21 | ### Updating the Pi and Enabling the Camera Module 22 | This article was written while using a Raspberry Pi 1 Model B+ with a Pi NoIR camera module attached (as per the image at the head of the article - with the lovely case provided by [Grasping Hand](http://www.graspinghand.com)). Onto this was performed a clean install of the latest version of Raspbian (wheezy-3.18). 23 | 24 | If you want a small reminder on how to install your system onto the SD card, look [here](https://www.raspberrypi.org/documentation/installation/installing-images/). Following a clean install you will also need to update and configure it, for which the commands are as follows: 25 | 26 | sudo rpi-update 27 | sudo reboot 28 | sudo raspi-config 29 | 30 | The last command will launch you into the configuration utility. Assuming you have already configured this Pi before - e.g. enlarged the filesystem, selected a locale, etc. - you might not have to do much. However, you might not have turned on the camera, in which case go to `Option 5) "Enable Camera"` and enable it. Then choose "Finish" and "Yes" to reboot. 31 | 32 | The documentation about installing [the camera module](https://www.raspberrypi.org/documentation/configuration/camera.md) and using the [control software](https://www.raspberrypi.org/documentation/raspbian/applications/camera.md) tends to omit the fact that there is another way to effectively "talk" to the camera: via a [Video4Linux](http://en.wikipedia.org/wiki/Video4Linux) kernel module (now bundled in as part of the recent updates). Simply type: 33 | 34 | sudo modprobe bcm2835-v4l2 35 | 36 | N.B. If you want this to happen every time you boot the system - add an entry to the `/etc/modules` file with: 37 | 38 | echo "bcm2835-v4l2" | sudo tee -a /etc/modules 39 | 40 | Now if you run `lsmod` you should see this driver loaded alongside the supporting v4l modules, e.g. 
as the top module listed here: 41 | 42 | $ lsmod 43 | Module Size Used by 44 | bcm2835_v4l2 40308 0 45 | videobuf2_vmalloc 3360 1 bcm2835_v4l2 46 | videobuf2_memops 2361 1 videobuf2_vmalloc 47 | videobuf2_core 41981 1 bcm2835_v4l2 48 | v4l2_common 8285 2 bcm2835_v4l2,videobuf2_core 49 | videodev 154352 3 bcm2835_v4l2,v4l2_common,videobuf2_core 50 | media 16088 1 videodev 51 | ... 52 | 53 | What this module ultimately does is create a `/dev/video0` device on the filesystem, and (because this is Linux, where [everything is a file](http://en.wikipedia.org/wiki/Everything_is_a_file)) you can now communicate directly with it. 54 | 55 | Now, for example, if I am remotely accessing my Pi, I can log in and allow the [X server software on my Mac](http://xquartz.macosforge.org/landing/) to display with `export DISPLAY=:0.0` and `ssh -X ip.addr.of.pi`. Then, having installed `mplayer` on the Pi (via `sudo apt-get install mplayer`), I can type: 56 | 57 | mplayer tv:///dev/video0 58 | 59 | This should open a remote window from your Pi, in which you should see some live video from the camera! 60 | 61 | ### Download OpenVX 62 | 63 | At this point we have got the system up and running and checked we can access video from the camera. Now we are ready to download the [OpenVX 1.0 sample implementation](https://www.khronos.org/openvx/) onto the Pi (click on the link there for "Sample implementation (tgz)"). 64 | 65 | Alternatively, you can use these commands to download and unpack it directly... 66 | 67 | wget https://www.khronos.org/registry/vx/sample/openvx_sample_20141217.tar.gz 68 | tar -zxvf openvx_sample_20141217.tar.gz 69 | cd openvx_sample/ 70 | 71 | ### Define a Sobel Graph 72 | 73 | The sample implementation of OpenVX comes with an example tool called `vx_cam_test`. 74 | 75 | We want to add our own definition of an OpenVX graph to this code that will demonstrate the Sobel operator. 
(Note that you can accomplish this without really understanding how to program in `C` or the operation of OpenVX, and just get it working by downloading and replacing the file with the changes already made [here](vx_cam_test.c).) 76 | 77 | Having unpacked the sample implementation, `cd` into the directory `openvx_sample` and locate the file `./sample/tests/vx_cam_test.c`. Copy/define this new function in the file: 78 | 79 | ```c 80 | vx_graph vxSobelGraph(vx_context context, 81 | vx_image input, 82 | vx_threshold thresh, 83 | vx_image output) 84 | { 85 | vx_int32 by = 0; 86 | vx_scalar shift = vxCreateScalar(context,VX_TYPE_INT32,&by); 87 | vx_graph graph = vxCreateGraph(context); 88 | if (graph) 89 | { 90 | vx_uint32 n = 0; 91 | 92 | vx_image virts[] = { 93 | vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_U8), 94 | vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_S16), 95 | vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_S16), 96 | vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_S16), 97 | vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_U8), 98 | }; 99 | 100 | vx_node nodes[] = { 101 | vxChannelExtractNode(graph, input, VX_CHANNEL_Y, virts[0]), 102 | vxSobel3x3Node(graph, virts[0], virts[1], virts[2]), 103 | vxMagnitudeNode(graph, virts[1], virts[2], virts[3]), 104 | vxConvertDepthNode(graph,virts[3],virts[4],VX_CONVERT_POLICY_WRAP,shift), 105 | vxChannelCombineNode(graph, virts[4], virts[4], virts[4], 0, output), 106 | }; 107 | 108 | vxAddParameterToGraphByIndex(graph, nodes[0], 0); //Input 109 | vxAddParameterToGraphByIndex(graph, nodes[4], 4); //Output 110 | 111 | for (n = 0; n < dimof(nodes); n++) 112 | vxReleaseNode(&nodes[n]); 113 | for (n = 0; n < dimof(virts); n++) 114 | vxReleaseImage(&virts[n]); 115 | } 116 | return graph; 117 | } 118 | ``` 119 | 120 | This is one of the fundamental ways to write an OpenVX application. Briefly: the concept is that we set up a **graph** of connected **nodes** representing atomic operations performed on image data. 
These are connected internally by specifying virtual images of the correct type between nodes, and externally by adding parameters to allow the data of certain nodes to be accessed. The great advantage of using graphs is that OpenVX can optimise and distribute the graph operations to available hardware "under the bonnet" so that the programmer doesn't have to. 121 | 122 | After adding this function - we need to "wire it up". Look around lines 466 to 469 in the same file and replace the graph constructed there with a call to this new function. 123 | 124 | // input is the YUYV image. 125 | // output is the RGB image 126 | //graph = vxPseudoCannyGraph(context, images[0], thresh, images[dimof(captures)]); 127 | graph = vxSobelGraph(context, images[0], thresh, images[dimof(captures)]); 128 | 129 | Then look from about line 491 onwards and make sure the input and output image parameters on the graph are set correctly as follows: 130 | 131 | status = vxSetGraphParameterByIndex(graph, 0, (vx_reference)images[camIdx]); 132 | assert(status == VX_SUCCESS); 133 | status = vxSetGraphParameterByIndex(graph, 1, (vx_reference)images[dispIdx]); 134 | assert(status == VX_SUCCESS); 135 | 136 | By default, the `vx_cam_test` tool only processes 10 frames before quitting. You can easily modify the code to process more frames, or indeed a larger frame size, by changing the default values at the top of the main function. 137 | 138 | int main(int argc, char *argv[]) 139 | { 140 | uint32_t width = 320, height = 240, count = 10; 141 | ... 142 | 143 | ### Prepare to Build 144 | 145 | Now we are ready to build and install the OpenVX libraries along with the `vx_cam_test` tool to run our Sobel graph example. 146 | 147 | Start by making sure your Pi's build commands and utilities are up-to-date. The latest version of Raspbian has the git-core, gcc, and build-essential packages already installed by default. 148 | 149 | sudo apt-get update -y && sudo apt-get upgrade -y 150 | 151 | This might take a while. 
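While that upgrade runs, it's worth being clear about what the Sobel stages we defined earlier actually compute for each pixel. The following is a plain-`C` sketch with no OpenVX dependency - the helper names (`convolve3x3`, `isqrt32`, `magnitude`) are purely illustrative, not part of any API, but the arithmetic mirrors what `vxSobel3x3Node` and `vxMagnitudeNode` do internally:

```c
#include <stdint.h>

/* The two 3x3 Sobel kernels: horizontal (GX) and vertical (GY) gradient. */
const int GX[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
const int GY[3][3] = {{-1, -2, -1}, { 0,  0,  0}, { 1,  2,  1}};

/* Apply a 3x3 kernel to a 3x3 greyscale patch (illustrative helper). */
int32_t convolve3x3(const uint8_t patch[3][3], const int k[3][3])
{
    int32_t sum = 0;
    for (int y = 0; y < 3; y++)
        for (int x = 0; x < 3; x++)
            sum += k[y][x] * patch[y][x];
    return sum;
}

/* Integer square root by counting up - slow but fine for values this small. */
int32_t isqrt32(int32_t v)
{
    int32_t r = 0;
    while ((r + 1) * (r + 1) <= v)
        r++;
    return r;
}

/* Gradient magnitude, sqrt(gx^2 + gy^2) - the job of the magnitude node. */
int32_t magnitude(int32_t gx, int32_t gy)
{
    return isqrt32(gx * gx + gy * gy);
}
```

For a patch containing a hard vertical edge - three rows of `{0, 0, 255}` - this gives `gx = 1020` and `gy = 0`, so a strong magnitude response; converted back to `U8` by the depth-convert node, these responses are what light up the edges in the final image.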
When finished, install the `cmake` tool. 152 | 153 | sudo apt-get install cmake 154 | 155 | And the SDL library, which is a dependency of our example. **Note that this is version 1.2 - not 2.** 156 | 157 | sudo apt-get install libsdl1.2-dev 158 | 159 | Also get the `checkinstall` tool, which is a great way to finally package our build (in case we want to remove or upgrade it later). 160 | 161 | sudo apt-get install checkinstall 162 | 163 | Now, before we actually set the build running, there are three quick changes to be made to the `cmake` instructions for it to work on the Raspberry Pi. 164 | 165 | First, edit line 27 of the top-level (i.e. in openvx_sample) `./CMakeLists.txt` to use version 2.8.9 of `cmake` instead of 2.8.12. 166 | 167 | cmake_minimum_required(VERSION 2.8.9) 168 | 169 | Next, change line 43 of `./cmake_utils/CMake_linux_tools.cmake` to remove the reference to `-m32 -march=core2`. Just leave it blank. 170 | 171 | set(ARCH_BIT "" ) 172 | 173 | Finally, the file `./sample/tests/CMakeLists.txt` needs to be updated to include the SDL library locations. Insert the following after the line setting `TARGET_NAME_CAM_TEST` (line 45)... 174 | 175 | set( TARGET_NAME_CAM_TEST vx_cam_test ) 176 | 177 | include( FindSDL ) 178 | include_directories( BEFORE ${SDL_INCLUDE_DIR} ) 179 | 180 | ...and update line 53 to include the `${SDL_LIBRARY}`. 181 | 182 | target_link_libraries( ${TARGET_NAME_CAM_TEST} 183 | openvx-debug-lib openvx-extras-lib openvx-helper 184 | openvx vxu ${SDL_LIBRARY} ) 185 | 186 | Phew. Now we are good to go! 187 | 188 | ### Build It 189 | 190 | Having done all the hard work above, we can now just type the following commands from the top-level directory (i.e. in `openvx_sample`). Notice how we perform what is called an "out of source" build by making a separate folder to keep it all in. 191 | 192 | mkdir build 193 | cd build 194 | cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local -DOPENVX_USE_SDL=1 .. 
195 | make 196 | 197 | You should then see the code being compiled and linked - each line with a percentage telling you how much has been completed. 198 | 199 | When it has finished, you can then type the following to use `checkinstall` to create an official package on the system. 200 | 201 | sudo checkinstall --pkgname openvx --pkgversion 1.0 202 | 203 | Just follow the instructions on screen - e.g. enter "OpenVX" for the summary of the package. Using this mechanism, if you ever want to remove the package - it's as simple as: 204 | 205 | sudo apt-get remove --purge openvx 206 | 207 | ### Run It 208 | 209 | Following installation, the OpenVX libraries and the `vx_cam_test` command will be available. 210 | 211 | To run it, just type: 212 | 213 | vx_cam_test 214 | 215 | However, we are one (final!) step away from making sure this will work. Remember that our input image in the filter code is expected to be in the YUYV format. If the camera kernel module is not currently set to generate this, it will cause misalignment of the image data. 216 | 217 | The `v4l2-ctl` command can be used to fix this. Using it to list all available formats, you should see an option similar to this: 218 | 219 | $ v4l2-ctl --list-formats 220 | ... 221 | Index : 1 222 | Type : Video Capture 223 | Pixel Format: 'YUYV' 224 | Name : 4:2:2, packed, YUYV 225 | ... 226 | 227 | We can change to this format (and set the expected resolution of the camera) by using the following command - where the last number is the index of the format we want to use (1 in this case): 228 | 229 | v4l2-ctl --set-fmt-video="width=320,height=240,pixelformat=1" 230 | 231 | If you forget to do this when running the `vx_cam_test` tool - you'll know it instantly because the video playback will look like this: 232 | 233 | ![Wrong](raspi_wrongformat.png) 234 | 235 | Otherwise, you should see some actual output like this - displaying a live, Sobel-processed view from the camera. 
236 | 237 | ![Correct](raspi_sobel.png) 238 | 239 | N.B. If you are trying to do this remotely and nothing displays - remember to `export DISPLAY=:0.0` and use `ssh -X` to allow your X server permission. 240 | 241 | ### Acceleration 242 | 243 | What you'll immediately notice about even this very simple graph is the lag! It isn't exactly fast - even downsampled to 320x240 pixels. 244 | 245 | Sadly, at this early stage, the OpenVX sample implementation doesn't offer any support for hardware acceleration on the Raspberry Pi. Nor should it - as it is only intended to provide an initial framework for OpenVX programming. Hardware support for specific embedded hardware - for example by Movidius and Nvidia - is only just starting to emerge, and will (of course) be deployed first on other consumer devices. 246 | 247 | However, that doesn't mean the Pi can't exploit its own hardware to provide a faster implementation. 248 | 249 | One trick to achieving this is to shift the processing to the GPU, using OpenGL shaders to perform the calculations required. This has [already been proven](http://robotblogging.blogspot.co.uk/2013/10/gpu-accelerated-camera-processing-on.html) and indeed is widely exploited in other accelerated image processing libraries [on other platforms](https://github.com/BradLarson/GPUImage) (which are also recently available on the Pi). 250 | 251 | Doing this would increase the real-time processing capability of the Pi and open up a world of possibilities - imagine an [existing drone controller](http://www.emlid.com) combined with additional visual information, or other [drone/robotics platforms](https://www.kickstarter.com/projects/ziphius/ziphius-the-aquatic-drone) able to make sense of their world autonomously. Maybe even a way to [spot your neighbour's cat more effectively](http://norris.org.au/cattack/). 
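As a final aside on the pixel format issue from the Run It section: the reason the wrong format scrambles the picture so completely is that YUYV packs two pixels into every four bytes (`Y0 U Y1 V`), the two luma samples sharing one pair of chroma bytes. Here is a minimal plain-`C` sketch of pulling out the luma plane - the same job `vxChannelExtractNode` performs with `VX_CHANNEL_Y` in our graph (the function name is illustrative, not part of any API):

```c
#include <stddef.h>
#include <stdint.h>

/* YUYV (a.k.a. YUY2) stores two pixels in four bytes: Y0, U, Y1, V.
 * Copy just the luma (Y) samples out of a packed YUYV buffer.
 * npixels is the pixel count, which must be even for valid YUYV data. */
void yuyv_extract_y(const uint8_t *yuyv, uint8_t *y, size_t npixels)
{
    for (size_t i = 0; i < npixels; i++)
        y[i] = yuyv[2 * i]; /* every even-offset byte is a luma sample */
}
```

Interpret those same bytes under any other layout - say 24-bit RGB - and chroma bytes land where luma is expected, producing exactly the kind of scrambled output shown in the "wrong format" screenshot.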
252 | 253 | -------------------------------------------------------------------------------- /raspi_graspinghand.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/timlukins/openvx-pi/e13a4bf3cb7177fa5f6e091d717fb2370c4a6279/raspi_graspinghand.png -------------------------------------------------------------------------------- /raspi_sobel.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/timlukins/openvx-pi/e13a4bf3cb7177fa5f6e091d717fb2370c4a6279/raspi_sobel.png -------------------------------------------------------------------------------- /raspi_wrongformat.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/timlukins/openvx-pi/e13a4bf3cb7177fa5f6e091d717fb2370c4a6279/raspi_wrongformat.png -------------------------------------------------------------------------------- /vx_cam_test.c: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2011-2014 The Khronos Group Inc. 3 | * 4 | * Permission is hereby granted, free of charge, to any person obtaining a 5 | * copy of this software and/or associated documentation files (the 6 | * "Materials"), to deal in the Materials without restriction, including 7 | * without limitation the rights to use, copy, modify, merge, publish, 8 | * distribute, sublicense, and/or sell copies of the Materials, and to 9 | * permit persons to whom the Materials are furnished to do so, subject to 10 | * the following conditions: 11 | * 12 | * The above copyright notice and this permission notice shall be included 13 | * in all copies or substantial portions of the Materials. 
14 | * 15 | * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 16 | * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF 17 | * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 18 | * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY 19 | * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, 20 | * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 21 | * MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS. 22 | */ 23 | 24 | #include <stdio.h> 25 | #include <stdlib.h> 26 | 27 | #include <VX/vx.h> 28 | #if defined(EXPERIMENTAL_USE_XML) 29 | #include <VX/vx_khr_xml.h> 30 | #endif 31 | #include <VX/vxu.h> 32 | #include <vx_helper.h> 33 | 34 | #include <SDL/SDL.h> 35 | 36 | #include <stdint.h> 37 | #include <string.h> 38 | #include <assert.h> 39 | #include <fcntl.h> 40 | #include <unistd.h> 41 | #include <sys/ioctl.h> 42 | #include <sys/mman.h> 43 | #include <linux/videodev2.h> 44 | 45 | #define MAX_PATH (256) 46 | #define MAX_DISPLAY (4) 47 | #define MAX_CAPTURE (4) 48 | 49 | vx_graph vxPyramidIntegral(vx_context context, vx_image image, vx_threshold thresh, vx_image output) 50 | { 51 | vx_graph graph = vxCreateGraph(context); 52 | vx_uint32 width, height, i; 53 | vxQueryImage(image, VX_IMAGE_ATTRIBUTE_WIDTH, &width, sizeof(width)); 54 | vxQueryImage(image, VX_IMAGE_ATTRIBUTE_HEIGHT, &height, sizeof(height)); 55 | if (graph) 56 | { 57 | vx_pyramid pyr = vxCreatePyramid(context, 4, 0.5f, width, height, VX_DF_IMAGE_U8); 58 | vx_uint32 shift16 = 16, shift8 = 8; 59 | vx_scalar sshift16 = vxCreateScalar(context, VX_TYPE_UINT32, &shift16); 60 | vx_scalar sshift8 = vxCreateScalar(context, VX_TYPE_UINT32, &shift8); 61 | vx_image images[] = { 62 | vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_VIRT), 63 | vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_VIRT), 64 | vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_VIRT), 65 | vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_VIRT), 66 | vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_S16), 67 | }; 68 | vx_node nodes[] = { 69 | vxGaussianPyramidNode(graph, image, pyr), 70 | vxIntegralImageNode(graph, 
vxGetPyramidLevel(pyr, 0), images[0]), 71 | vxIntegralImageNode(graph, vxGetPyramidLevel(pyr, 1), images[1]), 72 | vxIntegralImageNode(graph, vxGetPyramidLevel(pyr, 2), images[2]), 73 | vxIntegralImageNode(graph, vxGetPyramidLevel(pyr, 3), images[3]), 74 | vxConvertDepthNode(graph, images[0], images[4], VX_CONVERT_POLICY_WRAP, sshift16), 75 | vxConvertDepthNode(graph, images[4], output, VX_CONVERT_POLICY_WRAP, sshift8), 76 | }; 77 | vxAddParameterToGraphByIndex(graph, nodes[0], 0); 78 | vxAddParameterToGraphByIndex(graph, nodes[5], 1); 79 | for (i = 0; i < dimof(images); i++) 80 | vxReleaseImage(&images[i]); 81 | for (i = 0; i < dimof(nodes); i++) 82 | vxReleaseNode(&nodes[i]); 83 | vxReleasePyramid(&pyr); 84 | } 85 | return graph; 86 | } 87 | 88 | vx_graph vxSobelGraph(vx_context context, vx_image input, vx_threshold thresh, vx_image output) 89 | { 90 | vx_int32 by = 0; 91 | vx_scalar shift = vxCreateScalar(context,VX_TYPE_INT32,&by); 92 | vx_graph graph = vxCreateGraph(context); 93 | if (graph) 94 | { 95 | vx_uint32 n = 0; 96 | 97 | vx_image virts[] = { 98 | vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_U8), 99 | vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_S16), 100 | vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_S16), 101 | vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_S16), 102 | vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_U8), 103 | }; 104 | 105 | vx_node nodes[] = { 106 | vxChannelExtractNode(graph, input, VX_CHANNEL_Y, virts[0]), 107 | vxSobel3x3Node(graph, virts[0], virts[1], virts[2]), 108 | vxMagnitudeNode(graph, virts[1], virts[2], virts[3]), 109 | vxConvertDepthNode(graph,virts[3],virts[4],VX_CONVERT_POLICY_WRAP,shift), 110 | vxChannelCombineNode(graph, virts[4], virts[4], virts[4], 0, output), 111 | }; 112 | vxAddParameterToGraphByIndex(graph, nodes[0], 0); //Input 113 | vxAddParameterToGraphByIndex(graph, nodes[4], 4); //Output 114 | 115 | for (n = 0; n < dimof(nodes); n++) 116 | vxReleaseNode(&nodes[n]); 117 | for (n = 0; n < dimof(virts); n++) 
118 | vxReleaseImage(&virts[n]); 119 | } 120 | return graph; 121 | } 122 | 123 | int32_t v4l2_start(int32_t dev) { 124 | uint32_t type = V4L2_BUF_TYPE_VIDEO_CAPTURE; 125 | int32_t ret = ioctl(dev, VIDIOC_STREAMON, &type); 126 | return (ret == 0?1:0); 127 | } 128 | 129 | int32_t v4l2_stop(int32_t dev) { 130 | int32_t type = V4L2_BUF_TYPE_VIDEO_CAPTURE; 131 | int32_t ret = ioctl(dev, VIDIOC_STREAMOFF, &type); 132 | return (ret == 0?1:0); 133 | } 134 | 135 | int32_t v4l2_control_set(int32_t dev, uint32_t control, int32_t value) 136 | { 137 | struct v4l2_control ctrl = {0}; 138 | struct v4l2_queryctrl qctrl = {0}; 139 | qctrl.id = control; 140 | if (ioctl(dev, VIDIOC_QUERYCTRL, &qctrl) == 0) { 141 | if ((qctrl.type == V4L2_CTRL_TYPE_BOOLEAN) || 142 | (qctrl.type == V4L2_CTRL_TYPE_INTEGER)) 143 | { 144 | int min = qctrl.minimum; 145 | int max = qctrl.maximum; 146 | int step = qctrl.step; 147 | printf("Ctrl: %d Min:%d Max:%d Step:%d\n",control,min,max,step); 148 | if ((min <= value) && (value <= max)) { 149 | if (step > 1) 150 | value = value - (value % step); 151 | } else { 152 | value = qctrl.default_value; 153 | } 154 | ctrl.id = control; 155 | ctrl.value = value; 156 | if (ioctl(dev, VIDIOC_S_CTRL, &ctrl) == 0) 157 | return 1; 158 | } 159 | } 160 | return 0; 161 | } 162 | 163 | int32_t v4l2_control_get(int32_t dev, uint32_t control, int32_t *value) 164 | { 165 | struct v4l2_control ctrl = {0}; 166 | struct v4l2_queryctrl qctrl = {0}; 167 | qctrl.id = control; 168 | if (ioctl(dev, VIDIOC_QUERYCTRL, &qctrl) == 0) { 169 | if ((qctrl.type == V4L2_CTRL_TYPE_BOOLEAN) || 170 | (qctrl.type == V4L2_CTRL_TYPE_INTEGER)) { 171 | printf("Ctrl: %s Min:%d Max:%d Step:%d dflt:%d\n", 172 | qctrl.name, 173 | qctrl.minimum, 174 | qctrl.maximum, 175 | qctrl.step, 176 | qctrl.default_value); 177 | ctrl.id = control; 178 | if (ioctl(dev, VIDIOC_G_CTRL, &ctrl) == 0) { 179 | *value = ctrl.value; 180 | printf("Ctrl: %s Value:%d\n",qctrl.name, ctrl.value); 181 | return 1; 182 | } 183 | } 184 | 
} 185 | return 0; 186 | } 187 | 188 | int32_t v4l2_reset_control(int32_t dev, uint32_t control) 189 | { 190 | struct v4l2_control ctrl = {0}; 191 | struct v4l2_queryctrl qctrl = {0}; 192 | int ret = 0; 193 | 194 | qctrl.id = control; 195 | ret = ioctl(dev, VIDIOC_QUERYCTRL, &qctrl); 196 | if ((qctrl.type == V4L2_CTRL_TYPE_BOOLEAN) || 197 | (qctrl.type == V4L2_CTRL_TYPE_INTEGER)) { 198 | ctrl.id = control; 199 | ctrl.value = qctrl.default_value; 200 | ret = ioctl(dev, VIDIOC_S_CTRL, &ctrl); 201 | } 202 | return (ret == 0 ? 1:0); 203 | } 204 | 205 | void v4l2_close(int32_t dev) { 206 | close(dev); 207 | } 208 | 209 | int32_t v4l2_open(uint32_t devnum, uint32_t capabilities) { 210 | char devname[MAX_PATH] = {0}; 211 | struct v4l2_capability cap = {{0},{0},{0},0,0,0}; 212 | int32_t cnt = snprintf(devname, sizeof(devname), "/dev/video%u", devnum); 213 | int32_t dev = open(devname, O_RDWR); 214 | int32_t ret = ioctl(dev, VIDIOC_QUERYCAP, &cap); 215 | if (cnt == 0 || ret || (capabilities & cap.capabilities) != capabilities) { 216 | close(dev); 217 | dev = -1; 218 | } 219 | return dev; 220 | } 221 | 222 | typedef struct _v4l2_pix_to_df_image_lut_t { 223 | uint32_t v4l2; 224 | vx_df_image df_image; 225 | uint32_t stride_x; 226 | vx_enum space; 227 | } V4L2_to_VX_DF_IMAGE_t; 228 | 229 | V4L2_to_VX_DF_IMAGE_t codes[] = { 230 | {V4L2_PIX_FMT_UYVY, VX_DF_IMAGE_UYVY, sizeof(uint16_t), VX_COLOR_SPACE_BT601_625}, 231 | {V4L2_PIX_FMT_YUYV, VX_DF_IMAGE_YUYV, sizeof(uint16_t), VX_COLOR_SPACE_BT601_625}, // capture 232 | {V4L2_PIX_FMT_YUYV, VX_DF_IMAGE_YUYV, sizeof(uint16_t), VX_COLOR_SPACE_BT601_625}, 233 | {V4L2_PIX_FMT_NV12, VX_DF_IMAGE_NV12, sizeof(uint8_t), VX_COLOR_SPACE_BT601_625}, 234 | //{V4L2_PIX_FMT_BGR24, VX_DF_IMAGE_BGR, sizeof(uint8_t), 0}, // I don't think this is supported by capture or displays we use 235 | {V4L2_PIX_FMT_RGB24, VX_DF_IMAGE_RGB, sizeof(uint8_t), 0}, 236 | {V4L2_PIX_FMT_RGB32, VX_DF_IMAGE_RGBX, sizeof(uint8_t), 0}, // we'll use a global alpha 237 | }; 
238 | uint32_t numCodes = dimof(codes); 239 | 240 | void print_buffer(struct v4l2_buffer *desc) { 241 | printf("struct v4l2_buffer [%u] type:%u used:%u len:%u flags=0x%08x field:%u memory:%u offset:%u\n", 242 | desc->index, 243 | desc->type, 244 | desc->bytesused, 245 | desc->length, 246 | desc->flags, 247 | desc->field, 248 | desc->memory, 249 | desc->m.offset); 250 | } 251 | 252 | uint32_t v4l2_dequeue(int32_t dev) { 253 | struct v4l2_buffer buf = {0}; 254 | int32_t ret; 255 | buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE; 256 | buf.memory = V4L2_MEMORY_MMAP; 257 | ret = ioctl(dev, VIDIOC_DQBUF, &buf); 258 | print_buffer(&buf); 259 | return (ret == 0 ? buf.index : UINT32_MAX); 260 | } 261 | 262 | int32_t v4l2_queue(int32_t dev, uint32_t index) { 263 | struct v4l2_buffer buf = {0}; 264 | int32_t ret = 0; 265 | buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE; 266 | buf.index = index; 267 | ret = ioctl(dev, VIDIOC_QUERYBUF, &buf); 268 | buf.bytesused = 0; 269 | buf.field = V4L2_FIELD_ANY; 270 | ret = ioctl(dev, VIDIOC_QBUF, &buf); 271 | return (ret==0?1:0); 272 | } 273 | 274 | // C99 275 | int32_t vxReleaseV4L2Images(vx_uint32 count, vx_image images[count], void *ptrs[count], int32_t dev) { 276 | uint32_t i = 0; 277 | int32_t ret = 0; 278 | struct v4l2_requestbuffers reqbuf = {0}; 279 | for (i = 0; i < count; i++) { 280 | struct v4l2_buffer buf; 281 | buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE; 282 | buf.index = i; 283 | ret = ioctl(dev, VIDIOC_QUERYBUF, &buf); 284 | vxReleaseImage(&images[i]); 285 | munmap(ptrs[i], buf.length); 286 | } 287 | reqbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE; 288 | reqbuf.memory = V4L2_MEMORY_MMAP; 289 | reqbuf.count = 0; 290 | ret = ioctl(dev, VIDIOC_REQBUFS, &reqbuf); 291 | return (ret == 0?1:0); 292 | } 293 | // C99 294 | int32_t vxCreateV4L2Images(vx_context context, vx_uint32 count, vx_image images[count], vx_uint32 width, vx_uint32 height, vx_df_image format, 295 | int32_t dev, void *ptrs[count]) { 296 | uint32_t c, i; 297 | struct v4l2_format fmt = {0}; 
    struct v4l2_requestbuffers reqbuf = {0};
    int32_t ret = 0;
    struct v4l2_buffer buffers[count];

    ret = ioctl(dev, VIDIOC_G_FMT, &fmt);
    fmt.fmt.pix.width = width;
    fmt.fmt.pix.height = height;
    fmt.fmt.pix.pixelformat = 0u;
    fmt.fmt.pix.field = V4L2_FIELD_ANY;
    for (c = 0; c < numCodes; c++) {
        if (format == codes[c].df_image) {
            fmt.fmt.pix.pixelformat = codes[c].v4l2;
            break;
        }
    }
    if (c == numCodes) {
        return 0;
    }
    ret = ioctl(dev, VIDIOC_S_FMT, &fmt);
    reqbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    reqbuf.memory = V4L2_MEMORY_MMAP;
    reqbuf.count = count;
    ret = ioctl(dev, VIDIOC_REQBUFS, &reqbuf);
    for (i = 0; i < count; i++) {
        buffers[i].type = reqbuf.type;
        buffers[i].memory = reqbuf.memory;
        buffers[i].index = i;
        ret = ioctl(dev, VIDIOC_QUERYBUF, &buffers[i]);
        print_buffer(&buffers[i]);
        if (buffers[i].flags != V4L2_BUF_FLAG_MAPPED) {
            ptrs[i] = mmap(NULL, buffers[i].length, PROT_READ | PROT_WRITE, MAP_SHARED, dev, buffers[i].m.offset);
            if (ptrs[i] == MAP_FAILED) {
                // failure
                printf("Failed to map buffer!\n");
                exit(-1);
            } else {
                vx_imagepatch_addressing_t addr = {
                    .dim_x = width,
                    .dim_y = height,
                    .stride_x = codes[c].stride_x,
                    .stride_y = width*codes[c].stride_x,
                    .scale_x = VX_SCALE_UNITY,
                    .scale_y = VX_SCALE_UNITY,
                    .step_x = 1,
                    .step_y = 1,
                };
                images[i] = vxCreateImageFromHandle(context, format, &addr, &ptrs[i], VX_IMPORT_TYPE_HOST);
                vxSetImageAttribute(images[i], VX_IMAGE_ATTRIBUTE_SPACE, &codes[c].space, sizeof(codes[c].space));
                if (vxGetStatus((vx_reference)images[i]) != VX_SUCCESS)
                    vxReleaseImage(&images[i]);
                buffers[i].bytesused = 0;
                buffers[i].field = V4L2_FIELD_ANY;
                print_buffer(&buffers[i]);
                ret = ioctl(dev, VIDIOC_QBUF, &buffers[i]);
                ret = ioctl(dev, VIDIOC_QUERYBUF, &buffers[i]);
                if (!(buffers[i].flags & V4L2_BUF_FLAG_QUEUED)) {
                    printf("Failed to Queue Buffer [%u]!\n", i);
                    print_buffer(&buffers[i]);
                }
            }
        } else {
            printf("Buffer already mapped\n");
        }
    }
    return (ret == 0 ? 1 : 0);
}

int main(int argc, char *argv[])
{
    uint32_t width = 320, height = 240, count = 10;
    vx_df_image format = VX_DF_IMAGE_YUYV;
    SDL_Surface *screen = NULL;
    SDL_Surface *backplanes[MAX_DISPLAY] = {NULL};
    SDL_Rect src, dst;
    void *captures[MAX_CAPTURE];
    vx_image images[dimof(captures) + dimof(backplanes)] = {0};
    vx_uint32 camIdx = 0, dispIdx = 0;
    vx_context context = vxCreateContext();
    vx_status status = VX_SUCCESS;
    int cam = v4l2_open(0, V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING);
    vx_graph graph = 0;
    vx_threshold thresh = vxCreateThreshold(context, VX_THRESHOLD_TYPE_RANGE, VX_TYPE_UINT8);
    vx_char xmlfile[MAX_PATH];

    vxLoadKernels(context, "openvx-debug");
    vxLoadKernels(context, "openvx-extras");
    vxRegisterHelperAsLogReader(context);

    if (cam == -1)
        exit(-1);

    if (argc > 1)
        count = atoi(argv[1]);
    if (argc > 2)
        strncpy(xmlfile, argv[2], MAX_PATH);
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
    {
        exit(-1);
    }
    screen = SDL_SetVideoMode(width, height, 24, SDL_SWSURFACE | SDL_DOUBLEBUF);
    SDL_GetClipRect(screen, &dst);
    printf("Rect: {%d, %d}, %ux%u\n", dst.x, dst.y, dst.w, dst.h);
    SDL_WM_SetCaption("OpenVX", NULL);
    if (context && cam)
    {
        // describes YUYV
        uint32_t s, i = 0;

        if (vxCreateV4L2Images(context, MAX_CAPTURE, images, width, height, format, cam, captures))
        {
            i += MAX_CAPTURE;
            // create the display images as the backplanes
            for (s = 0; s < dimof(backplanes); s++, i++)
            {
                vx_uint32 depth = 24;
                vx_imagepatch_addressing_t map[] = {{
                    width, height,
                    (depth/8)*sizeof(vx_uint8), width*(depth/8)*sizeof(vx_uint8),
                    VX_SCALE_UNITY, VX_SCALE_UNITY,
                    1, 1
                }};
                void *ptrs[1];
                backplanes[s] = SDL_CreateRGBSurface(SDL_SWSURFACE,
                                                     width,
                                                     height,
                                                     depth,
                                                     0x000000FF,
                                                     0x0000FF00,
                                                     0x00FF0000,
                                                     0x00000000);
                assert(backplanes[s] != NULL);
                ptrs[0] = backplanes[s]->pixels;
                map[0].stride_y = backplanes[s]->pitch; // in case mapped to 2D memory
                printf("Mapping %p with stride %d to a vx_image\n", ptrs[0], map[0].stride_y);
                images[i] = vxCreateImageFromHandle(context, VX_DF_IMAGE_RGB, map, ptrs, VX_IMPORT_TYPE_HOST);
                assert(images[i] != 0);
                SDL_GetClipRect(backplanes[s], &src);
                printf("Rect: {%d, %d}, %ux%u\n", src.x, src.y, src.w, src.h);
            }
        }
        {
            uint32_t c, ctrls[][2] = {
                {V4L2_CID_BRIGHTNESS, 0},
                {V4L2_CID_CONTRAST, 0},
                {V4L2_CID_SATURATION, 0},
                {V4L2_CID_GAIN, 0},
            };
            // check the controls
            for (c = 0; c < dimof(ctrls); c++)
            {
                if (v4l2_reset_control(cam, ctrls[c][0]) == 0)
                {
                    printf("Control %u is not supported\n", ctrls[c][0]);
                }
            }
            for (c = 0; c < dimof(ctrls); c++)
            {
                if (v4l2_control_get(cam, ctrls[c][0], (int32_t *)&ctrls[c][1]) == 0)
                {
                    printf("Control %u is not supported\n", ctrls[c][0]);
                }
            }
        }
        assert(i == (dimof(backplanes) + dimof(captures)));
        {
            vx_int32 bounds[2] = {20, 180};
            vxSetThresholdAttribute(thresh, VX_THRESHOLD_ATTRIBUTE_THRESHOLD_LOWER, &bounds[0], sizeof(bounds[0]));
            vxSetThresholdAttribute(thresh, VX_THRESHOLD_ATTRIBUTE_THRESHOLD_UPPER, &bounds[1], sizeof(bounds[1]));
            // input is the YUYV image.
            // output is the RGB image
            //graph = vxPseudoCannyGraph(context, images[0], thresh, images[dimof(captures)]);
            graph = vxSobelGraph(context, images[0], thresh, images[dimof(captures)]);
            //graph = vxPyramidIntegral(context, images[0], thresh, images[dimof(captures)]);
            if (graph) {
                status = vxVerifyGraph(graph);
                if (status != VX_SUCCESS) {
                    printf("Graph failed verification!\n");
                    vxClearLog((vx_reference)graph);
                    exit(-1);
                } else {
                    printf("Graph is verified!\n");
                }
            }
        }
        if (v4l2_start(cam)) {
            do {
                uint32_t cap = (size_t)v4l2_dequeue(cam);
                if (cap != UINT32_MAX) {
                    printf("Index : %u\n", cap);
                    camIdx = cap;
                    dispIdx = cap + dimof(captures);
                    printf("camIdx = %u, dispIdx = %u\n", camIdx, dispIdx);
                    SDL_LockSurface(backplanes[cap]);
                    status = vxSetGraphParameterByIndex(graph, 0, (vx_reference)images[camIdx]);
                    assert(status == VX_SUCCESS);
                    status = vxSetGraphParameterByIndex(graph, 1, (vx_reference)images[dispIdx]);
                    assert(status == VX_SUCCESS);
                    if (vxIsGraphVerified(graph) == vx_false_e) {
                        status = vxVerifyGraph(graph);
                        if (status != VX_SUCCESS) {
                            printf("Verify Status = %d\n", status);
                            exit(-1);
                        }
                    }
                    status = vxProcessGraph(graph);
                    if (status != VX_SUCCESS) {
                        printf("status = %d\n", status);
                        do {
                            char message[VX_MAX_LOG_MESSAGE_LEN] = {0};
                            status = vxGetLogEntry((vx_reference)graph, message);
                            if (status != VX_SUCCESS)
                                printf("Message:[%d] %s\n", status, message);
                        } while (status != VX_SUCCESS);
                    }
                    SDL_UnlockSurface(backplanes[cap]);
                    SDL_BlitSurface(backplanes[cap], &src, screen, &dst);
                    SDL_Flip(screen);
                    if (v4l2_queue(cam, cap) == 0)
                        printf("Failed to queue image!\n");
                } else {
                    printf("Failed to dequeue!\n");
                    break;
                }
            } while (count--);

            v4l2_stop(cam);
#if defined(EXPERIMENTAL_USE_XML)
            status = vxExportToXML(context, xmlfile);
#endif
        } else {
            printf("Failed to start capture!\n");
        }
        vxReleaseV4L2Images(dimof(captures), images, captures, cam);
        for (s = 0; s < dimof(backplanes); s++) {
            SDL_FreeSurface(backplanes[s]);
        }
        SDL_FreeSurface(screen);
        vxReleaseThreshold(&thresh);
        vxReleaseGraph(&graph);
        v4l2_close(cam);
        vxReleaseContext(&context);
    }
    SDL_Quit();
    return 0;
}
--------------------------------------------------------------------------------
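
A note on the buffer geometry used above: for the packed YUYV format that `vx_cam_test.c` captures, the `vx_imagepatch_addressing_t` strides come straight from the fact that YUYV stores two pixels in four bytes, i.e. 2 bytes per pixel. The following is a minimal standalone sketch of that arithmetic; the helper names are illustrative only, not part of the sample code.

```c
#include <stdint.h>

/* Packed YUYV: a Y0-U-Y1-V quad covers two pixels, so 2 bytes per pixel.
 * This is the stride_x the sample fills in for VX_DF_IMAGE_YUYV. */
static const uint32_t yuyv_stride_x = 2;

/* Bytes between the start of consecutive rows (stride_y in the patch). */
uint32_t yuyv_stride_y(uint32_t width)
{
    return width * yuyv_stride_x;
}

/* Total bytes one mmap'd V4L2 frame buffer must hold. */
uint32_t yuyv_frame_bytes(uint32_t width, uint32_t height)
{
    return yuyv_stride_y(width) * height;
}
```

For the 320x240 capture size used in `main`, this gives a row stride of 640 bytes and a 153600-byte frame, which should match the `length` the driver reports for each buffer returned by `VIDIOC_QUERYBUF`.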