├── README.md
└── samples
    ├── ex1_depth_stream.py
    ├── ex2_rgb_stream.py
    ├── ex3_rgbd_stream.py
    ├── ex4_rgbd_syncd_aligned_stream.py
    ├── ex5_rgbd_overlayed.py
    └── ex6_ird_stream.py

/README.md:
--------------------------------------------------------------------------------

This guide assumes Carmine devices and PrimeSense's APIs and drivers (OpenNI2 and NiTE2). Steps to install and configure OpenNI2 with Microsoft's Kinect are being tested. In the meantime, the following link provides an easy-to-follow (untested) guide for using [OpenNI2 with Kinect](http://cupofpixel.blogspot.com/2013/04/how-to-use-openni-2-nite-2-kinect-on.html). The freenect driver link there appears to be broken, so use the direct [link](https://github.com/OpenKinect/libfreenect/tree/master/OpenNI2-FreenectDriver) instead.

# OpenNI2 Python
Compile and install OpenNI2.

Installation instructions and samples using the official Python wrappers for OpenNI2 and OpenCV. Adapted from [Occipital](https://github.com/occipital/OpenNI2).

* Tested on the following systems (running Python 2.7.3, OpenNI 2.2, and OpenCV 2.4.10)

  + Windows 7 x64
  + Linux Ubuntu 14.04 x64
  + PandaBoard ES/BeagleBoard-xM running Ubuntu 12.04 OMAP/ARM
  + [Panda](https://docs.google.com/document/d/1MjW0vVms-r4gm0KSYxb3JSKimJ_kvG0yLDgu_mTnBzk/edit?usp=sharing) Ubuntu installation -- gdoc instructions
  + [Beagle](https://docs.google.com/document/d/1sOKNSICoNeKMtrbIBvHbpXfJDbkfdD-jI5BVD8dfOMc/edit?usp=sharing) Ubuntu installation -- gdoc instructions


NOTE: `root` and `~` are both used to represent the user's home directory.

## Dependencies

Windows: use the wheels from [LFD](http://www.lfd.uci.edu/~gohlke/pythonlibs/)

Linux Ubuntu
* OpenNI

`sudo apt-get install libudev-dev libusb-1.0-0-dev`

`sudo apt-get install gcc-multilib git-core build-essential`

`sudo apt-get install doxygen graphviz default-jdk freeglut3-dev`

* JDK

There is a known issue with x86 architectures and the JDK. Follow this [link](https://www.digitalocean.com/community/tutorials/how-to-install-java-on-ubuntu-with-apt-get) to add the JDK repository.

`sudo apt-get install python-software-properties`

`sudo add-apt-repository ppa:webupd8team/java`

`sudo apt-get update`

**NOTE**: the workaround was tested with JDK 6 (commands to install other versions are on the site).

`sudo apt-get install oracle-java6-installer`

Then finally update the linker flags using the command from this [link](http://stackoverflow.com/questions/25851510/openni2-error-when-running-make)

`export LDFLAGS+="-lc"`


* Python environment

`sudo apt-get -y install python-dev python-numpy`

`sudo apt-get -y install python-scipy python-setuptools`

`sudo apt-get -y install ipython python-pip`

`sudo apt-get -y install libboost-python-dev`
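Before moving on to the OpenNI2 build, it can save time to confirm that the Python environment above imports cleanly. A minimal sanity-check sketch (the version numbers mentioned in the comment are simply the ones this guide was tested with):

```python
# Quick check of the Python environment assumed by the samples
# (Python 2.7.x, NumPy, OpenCV 2.4.x).
import sys
import numpy as np
import cv2

print "Python :", sys.version.split()[0]
print "NumPy  :", np.__version__
print "OpenCV :", cv2.__version__   # the samples were tested with 2.4.10
```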
## OpenNI2 Windows 7 x64 [installation](https://github.com/occipital/OpenNI2) details

OpenNI2: download the MSI installer (`OpenNI-Windows-x64-2.2.0.33`) from structure.io and follow the instructions.

Primesense Python bindings
* Pip, from terminal:

  + `pip install primesense`

* Manual, download from the [python wrapper](https://pypi.python.org/pypi/primesense/2.2.0.30-5)

  + Open an administrator terminal (i.e., right click on a terminal and select "run as administrator")
  + Go to where the bindings were downloaded (e.g., `cd C:\downloads\primesense`)
  + `python setup.py install`
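To confirm the Windows installation, the wrapper can be pointed at the Redist folder created by the MSI. This is only a sketch: the path below is taken from the comments in the samples and may need to be adjusted if the installer used a different location.

```python
# Minimal check that the bindings can load OpenNI2.dll on Windows.
# Adjust the Redist path if OpenNI2 was installed somewhere else.
from primesense import openni2

openni2.initialize("C:\\Program Files\\OpenNI2\\Redist")
print "OpenNI2 initialized:", openni2.is_initialized()
openni2.unload()
```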
## Install OpenNI2 in Ubuntu 14.04
`mkdir -p ~/Install/kinect/openni2`

`cd ~/Install/kinect/openni2`

Clone from the Occipital GitHub repository

`git clone https://github.com/occipital/OpenNI2`

`cd OpenNI2`

`make`

`cd Packaging`

`python ReleaseVersion.py x64 # use x86 for 32-bit systems`

If there are no errors, the compressed installer is created in the "Final" folder (i.e., OpenNI-Linux-x64-2.2.tar.bz2).

`cd Final && cp OpenNI-Linux-x64-2.2.tar.bz2 ~/Install/kinect/openni2`

Extract the contents to OpenNI-Linux-x64-2.2 (e.g., `tar -xjvf OpenNI-Linux-x64-2.2.tar.bz2`) and rename the folder (helps with multiple installations/versions)

`mv ~/Install/kinect/openni2/OpenNI-Linux-x64-2.2 ~/Install/kinect/openni2/OpenNI2-x64`

`cd ~/Install/kinect/openni2/OpenNI2-x64`

Install

`sudo ./install.sh`

## OpenNI2 in PandaBoard-ES and BeagleBoard-xM

`mkdir -p ~/Install/kinect/openni2`

`cd ~/Install/kinect/openni2`

Clone from the Occipital GitHub repository

`git clone https://github.com/occipital/OpenNI2`

`cd OpenNI2`

Compile. First remove the floating-point operation flag

`gedit ThirdParty/PSCommon/BuildSystem/Platform.ARM`

Remove/delete "-mfloat-abi=softfp" (save & close)

`PLATFORM=Arm make # this step takes about 20 minutes`

Create the ARM installer

`cd Packaging/`

`python ReleaseVersion.py Arm`

If there are no errors, the compressed installer is created in the "Final" folder (i.e., OpenNI-Linux-Arm-2.2.tar.bz2).

`cd Final && cp OpenNI-Linux-Arm-2.2.tar.bz2 ~/Install/kinect/openni2`

Extract the contents to OpenNI-Linux-Arm-2.2 (e.g., `tar -xjvf OpenNI-Linux-Arm-2.2.tar.bz2`) and rename the folder (helps with multiple installations/versions)

`mv ~/Install/kinect/openni2/OpenNI-Linux-Arm-2.2 ~/Install/kinect/openni2/OpenNI2-Arm`

`cd ~/Install/kinect/openni2/OpenNI2-Arm`

Install

`sudo ./install.sh`


## Install Python bindings in Ubuntu (12.04 ARM and 14.04 x64)

* Option 1. Via pip -- not preferred

`sudo pip install primesense`

* Option 2. Manual installation -- preferred

Download the [primesense python wrapper](https://pypi.python.org/pypi/primesense/) and save it in the system Downloads folder

Copy the tar.gz file to a known location

`cd ~/Downloads && mv primesense-2.2.0.30-5.tar.gz ~/Install/kinect/openni2`

Extract

`cd ~/Install/kinect/openni2 && tar -xvzf primesense-2.2.0.30-5.tar.gz`

`cd primesense-2.2.0.30-5`

`sudo python setup.py install`

Direct the system to the location of libOpenNI2.so (two methods)

* Method 1. Create a symbolic link to the Redist/libOpenNI2.so inside the renamed install folder, or copy the library to /usr/local/lib/

`sudo cp ~/Install/kinect/openni2/OpenNI2-Arm/Redist/libOpenNI2.so /usr/local/lib # repeat for libNiTE2.so if available`

`sudo ldconfig`

* Method 2. For multiple OpenNI installations, point the code to the library location at initialization time:

`openni2.initialize(root+"/Install/kinect/openni2/OpenNI2-Arm/Redist/")`


## Test the setup using the initialize and is_initialized methods

`python`

`>>> from primesense import openni2`

`>>> openni2.initialize(root + "/Install/kinect/openni2/OpenNI2-Arm/Redist/")  # root = path to your home directory`

`>>> if openni2.is_initialized():`

`...     print "OpenNI2 initialized"`

`... else:`

`...     raise ValueError("OpenNI2 failed to initialize!!")`
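Beyond `is_initialized()`, a quick way to confirm that a device is actually reachable is to open it and print its sensor information, using the same calls the samples below rely on. This is only a sketch; adjust the Redist path for your platform (x64 or Arm) and installation folder.

```python
# Sketch: confirm a PrimeSense/Carmine device is reachable after installation.
from primesense import openni2

openni2.initialize("/home/<user>/Install/kinect/openni2/OpenNI2-x64/Redist/")
dev = openni2.Device.open_any()
print dev.get_sensor_info(openni2.SENSOR_DEPTH)
print dev.get_sensor_info(openni2.SENSOR_COLOR)
openni2.unload()
```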
--------------------------------------------------------------------------------
/samples/ex1_depth_stream.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python
'''
Created on 19Jun2015
Stream depth video using openni2 and opencv-python (cv2)

Requires the following libraries:
    1. OpenNI-Linux 2.2 (x64 or Arm build)
    2. primesense-2.2.0.30
    3. Python 2.7+
    4. OpenCV 2.4.X

Current features:
    1. Convert primesense oni -> numpy
    2. Stream and display depth
    3. Keyboard commands
        press esc to exit
        press s to save current screen and distance map

NOTE:
    1. On-device streams: IR and RGB streams do not work together
        Depth & IR  = OK
        Depth & RGB = OK
        RGB & IR    = NOT OK

    2. Do not synchronize with rgb or the stream will freeze

@author: Carlos Torres
'''

import numpy as np
import cv2
from primesense import openni2#, nite2
from primesense import _openni2 as c_api

## Path of the OpenNI redistribution OpenNI2.so or OpenNI2.dll
# Windows
#dist = 'C:\Program Files\OpenNI2\Redist\OpenNI2.dll'
# OMAP
#dist = '/home/carlos/Install/kinect/OpenNI2-Linux-ARM-2.2/Redist/'
# Linux
dist = '/home/carlos/Install/openni2/OpenNI-Linux-x64-2.2/Redist'

## Initialize openni and check
openni2.initialize(dist)
if (openni2.is_initialized()):
    print "openNI2 initialized"
else:
    print "openNI2 not initialized"

## Register the device
dev = openni2.Device.open_any()

## Create the depth stream
depth_stream = dev.create_depth_stream()

## Configure the depth_stream -- changes automatically based on bus speed
#print 'Get b4 video mode', depth_stream.get_video_mode() # Checks depth video configuration
depth_stream.set_video_mode(c_api.OniVideoMode(pixelFormat=c_api.OniPixelFormat.ONI_PIXEL_FORMAT_DEPTH_1_MM, resolutionX=320, resolutionY=240, fps=30))

## Check and configure the mirroring -- default is True
#print 'Mirroring info1', depth_stream.get_mirroring_enabled()
depth_stream.set_mirroring_enabled(False)


## Start the stream
depth_stream.start()

## Use 'help' to get more info
# help(dev.set_image_registration_mode)

def get_depth():
    """
    Returns numpy ndarrays representing the raw and ranged depth images.
    Outputs:
        dmap: distance map in mm, 1L ndarray, dtype=uint16, min=0, max=2**12-1
        d4d : depth for display, 3L ndarray, dtype=uint8, min=0, max=255
    Note1:
        fromstring is faster than asarray or frombuffer
    Note2:
        .reshape(120,160) # smaller image for faster response;
                            OMAP/ARM default video configuration
        .reshape(240,320) # used to MATCH the RGB image (OMAP/ARM);
                            requires .set_video_mode
    """
    dmap = np.fromstring(depth_stream.read_frame().get_buffer_as_uint16(), dtype=np.uint16).reshape(240, 320)  # Works & It's FAST
    d4d = np.uint8(dmap.astype(float) * 255 / (2**12 - 1))  # Correct the range. Depth images are 12 bits
    d4d = cv2.cvtColor(d4d, cv2.COLOR_GRAY2RGB)
    # Invert: near pixels appear bright, far pixels dark
    d4d = 255 - d4d
    return dmap, d4d
#get_depth


## main loop
s = 0
done = False
while not done:
    ## Read keystrokes
    key = cv2.waitKey(1) & 255
    if key == 27:  # terminate
        print "\tESC key detected!"
        done = True
    elif chr(key) == 's':  # screen capture
        print "\ts key detected. Saving image and distance map {}".format(s)
        cv2.imwrite("ex1_"+str(s)+'.png', d4d)
        np.savetxt("ex1dmap_"+str(s)+'.out', dmap)
        #s+=1 # uncomment for multiple captures
    #if

    ## Streams
    # DEPTH
    dmap, d4d = get_depth()
    #print 'Center pixel is {}mm away'.format(dmap[119,159])

    ## Display the stream
    cv2.imshow('depth', d4d)
# end while

## Release resources
cv2.destroyAllWindows()
depth_stream.stop()
openni2.unload()
print ("Terminated")
--------------------------------------------------------------------------------
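The `s` key above writes the raw distance map with `np.savetxt`. A small post-processing sketch showing how that file can be read back and queried offline (the file name is simply what the sample writes when `s = 0`):

```python
# Sketch: reload a distance map saved by ex1_depth_stream.py and inspect it.
import numpy as np

dmap = np.loadtxt("ex1dmap_0.out")      # 240x320 array of distances in mm
valid = dmap[dmap > 0]                  # zeros mark unknown depth
print "Center pixel: {:.0f} mm".format(dmap[119, 159])
print "Valid range : {:.0f}-{:.0f} mm".format(valid.min(), valid.max())
```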
/samples/ex2_rgb_stream.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python
'''
Created on 19Jun2015
Stream rgb video using openni2 and opencv-python (cv2)

Requires the following libraries:
    1. OpenNI-Linux 2.2 (x64 or Arm build)
    2. primesense-2.2.0.30
    3. Python 2.7+
    4. OpenCV 2.4.X

Current features:
    1. Convert primesense oni -> numpy
    2. Stream and display rgb
    3. Keyboard commands
        press esc to exit
        press s to save current screen

NOTE:
    1. On-device streams: IR and RGB streams do not work together
        Depth & IR  = OK
        Depth & RGB = OK
        RGB & IR    = NOT OK

    2. Do not synchronize with depth or the stream will freeze

@author: Carlos Torres
'''

import numpy as np
import cv2
from primesense import openni2#, nite2
from primesense import _openni2 as c_api

## Path of the OpenNI redistribution OpenNI2.so or OpenNI2.dll
# Windows
#dist = 'C:\Program Files\OpenNI2\Redist\OpenNI2.dll'
# OMAP
#dist = '/home/carlos/Install/kinect/OpenNI2-Linux-ARM-2.2/Redist/'
# Linux
dist = '/home/carlos/Install/openni2/OpenNI-Linux-x64-2.2/Redist'

## Initialize openni and check
openni2.initialize(dist)
if (openni2.is_initialized()):
    print "openNI2 initialized"
else:
    print "openNI2 not initialized"

## Register the device
dev = openni2.Device.open_any()

## Create the rgb stream
rgb_stream = dev.create_color_stream()

## Check and configure the rgb_stream -- set automatically based on bus speed
print 'The rgb video mode is', rgb_stream.get_video_mode()  # Checks rgb video configuration
rgb_stream.set_video_mode(c_api.OniVideoMode(pixelFormat=c_api.OniPixelFormat.ONI_PIXEL_FORMAT_RGB888, resolutionX=320, resolutionY=240, fps=30))

## Start the stream
rgb_stream.start()

## Use 'help' to get more info
# help(dev.set_image_registration_mode)


def get_rgb():
    """
    Returns a numpy 3L ndarray to represent the rgb image.
    """
    bgr = np.fromstring(rgb_stream.read_frame().get_buffer_as_uint8(), dtype=np.uint8).reshape(240, 320, 3)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)  # swap channels so cv2.imshow (which expects BGR) shows correct colors
    return rgb
#get_rgb


## main loop
s = 0
done = False
while not done:
    ## Read keystrokes
    key = cv2.waitKey(1) & 255
    if key == 27:  # terminate
        print "\tESC key detected!"
        done = True
    elif chr(key) == 's':  # screen capture
        print "\ts key detected. Saving image {}".format(s)
        cv2.imwrite("ex2_"+str(s)+'.png', rgb)
        #s+=1 # uncomment for multiple captures
    #if

    ## Streams
    # RGB
    rgb = get_rgb()

    ## Display the stream
    cv2.imshow('rgb', rgb)
# end while

## Release resources
cv2.destroyAllWindows()
rgb_stream.stop()
openni2.unload()
print ("Terminated")
--------------------------------------------------------------------------------
/samples/ex3_rgbd_stream.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python
'''
Created on 19Jun2015
Stream rgb and depth video side-by-side using openni2 and opencv-python (cv2). Streams ARE NOT aligned, mirror-corrected, or synchronized.

Requires the following libraries:
    1. OpenNI-Linux 2.2 (x64 or Arm build)
    2. primesense-2.2.0.30
    3. Python 2.7+
    4. OpenCV 2.4.X

Current features:
    1. Convert primesense oni -> numpy
    2. Stream and display rgb and depth
    3. Keyboard commands
        press esc to exit
        press s to save current screen
    4. Sample mirroring configuration


@author: Carlos Torres
'''

import numpy as np
import cv2
from primesense import openni2#, nite2
from primesense import _openni2 as c_api


## Path of the OpenNI redistribution OpenNI2.so or OpenNI2.dll
# Windows
#dist = 'C:\Program Files\OpenNI2\Redist\OpenNI2.dll'
# OMAP
#dist = '/home/carlos/Install/kinect/OpenNI2-Linux-ARM-2.2/Redist/'
# Linux
dist = '/home/carlos/Install/openni2/OpenNI-Linux-x64-2.2/Redist'

## Initialize openni and check
openni2.initialize(dist)
if (openni2.is_initialized()):
    print "openNI2 initialized"
else:
    print "openNI2 not initialized"

## Register the device
dev = openni2.Device.open_any()

## Create the streams
rgb_stream = dev.create_color_stream()
depth_stream = dev.create_depth_stream()

## Configure the depth_stream -- changes automatically based on bus speed
print 'Get b4 video mode', depth_stream.get_video_mode()  # Checks depth video configuration
depth_stream.set_video_mode(c_api.OniVideoMode(pixelFormat=c_api.OniPixelFormat.ONI_PIXEL_FORMAT_DEPTH_1_MM, resolutionX=320, resolutionY=240, fps=30))


## Check and configure the mirroring -- default is True. See the effects:
#   rgb mirroring   = True
#   depth mirroring = False
print 'Mirroring info1', depth_stream.get_mirroring_enabled()
depth_stream.set_mirroring_enabled(False)
#rgb_stream.set_mirroring_enabled(False)


## Start the streams
rgb_stream.start()
depth_stream.start()

## Use 'help' to get more info
# help(dev.set_image_registration_mode)


def get_rgb():
    """
    Returns a numpy 3L ndarray to represent the rgb image.
    """
    bgr = np.fromstring(rgb_stream.read_frame().get_buffer_as_uint8(), dtype=np.uint8).reshape(240, 320, 3)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    return rgb
#get_rgb


def get_depth():
    """
    Returns numpy ndarrays representing the raw and ranged depth images.
    Outputs:
        dmap: distance map in mm, 1L ndarray, dtype=uint16, min=0, max=2**12-1
        d4d : depth for display, 3L ndarray, dtype=uint8, min=0, max=255
    Note1:
        fromstring is faster than asarray or frombuffer
    Note2:
        .reshape(120,160) # smaller image for faster response;
                            OMAP/ARM default video configuration
        .reshape(240,320) # used to MATCH the RGB image (OMAP/ARM);
                            requires .set_video_mode
    """
    dmap = np.fromstring(depth_stream.read_frame().get_buffer_as_uint16(), dtype=np.uint16).reshape(240, 320)  # Works & It's FAST
    d4d = np.uint8(dmap.astype(float) * 255 / (2**12 - 1))  # Correct the range. Depth images are 12 bits
    d4d = 255 - cv2.cvtColor(d4d, cv2.COLOR_GRAY2RGB)
    return dmap, d4d
#get_depth


## main loop
done = False
s = 0
while not done:
    ## Read keystrokes
    key = cv2.waitKey(1) & 255
    if key == 27:  # terminate
        print "\tESC key detected!"
        done = True
    elif chr(key) == 's':  # screen capture
        print "\ts key detected. Saving image {}".format(s)
        cv2.imwrite("ex3_"+str(s)+'.png', rgbd)
        #s+=1 # uncomment for multiple captures
    #if

    ## Streams
    # RGB
    rgb = get_rgb()

    # DEPTH
    _, d4d = get_depth()

    # Canvas
    rgbd = np.hstack((rgb, d4d))


    ## Display the streams side-by-side
    cv2.imshow('depth || rgb', rgbd)
# end while

## Release resources
cv2.destroyAllWindows()
rgb_stream.stop()
depth_stream.stop()
openni2.unload()
print ("Terminated")
--------------------------------------------------------------------------------
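This sample deliberately leaves the two mirroring flags mismatched so the effect is visible. If a stream was not (or cannot be) configured with `set_mirroring_enabled`, a frame can also be un-mirrored in software with a horizontal flip; a minimal sketch operating on a screenshot written by this sample's `s` key:

```python
# Sketch: undo mirroring in software with a horizontal flip.
# "ex3_0.png" is the canvas this sample saves when s = 0.
import cv2

frame = cv2.imread("ex3_0.png")
unmirrored = cv2.flip(frame, 1)  # flipCode=1 flips around the vertical axis
cv2.imwrite("ex3_0_flipped.png", unmirrored)
```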
/samples/ex4_rgbd_syncd_aligned_stream.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python
'''
Created on 19Jun2015
Stream rgb and depth video side-by-side using openni2 and opencv-python (cv2). Streams ARE aligned, mirror-corrected, and synchronized.

Requires the following libraries:
    1. OpenNI-Linux 2.2 (x64 or Arm build)
    2. primesense-2.2.0.30
    3. Python 2.7+
    4. OpenCV 2.4.X

Current features:
    1. Convert primesense oni -> numpy
    2. Stream and display rgb and depth
    3. Keyboard commands
        press esc to exit
        press s to save current screen
    4. Synced and registered depth & rgb streams
    5. Mirroring corrected

NOTE:
    1. On-device streams: IR and RGB streams do not work together
        Depth & IR  = OK
        Depth & RGB = OK
        RGB & IR    = NOT OK

@author: Carlos Torres
'''

import numpy as np
import cv2
from primesense import openni2#, nite2
from primesense import _openni2 as c_api


## Path of the OpenNI redistribution OpenNI2.so or OpenNI2.dll
# Windows
#dist = 'C:\Program Files\OpenNI2\Redist\OpenNI2.dll'
# OMAP
#dist = '/home/carlos/Install/kinect/OpenNI2-Linux-ARM-2.2/Redist/'
# Linux
dist = '/home/carlos/Install/openni2/OpenNI-Linux-x64-2.2/Redist'

## Initialize openni and check
openni2.initialize(dist)
if (openni2.is_initialized()):
    print "openNI2 initialized"
else:
    print "openNI2 not initialized"

## Register the device
dev = openni2.Device.open_any()

## Create the streams
rgb_stream = dev.create_color_stream()
depth_stream = dev.create_depth_stream()

## Configure the depth_stream -- changes automatically based on bus speed
#print 'Depth video mode info', depth_stream.get_video_mode() # Checks depth video configuration
depth_stream.set_video_mode(c_api.OniVideoMode(pixelFormat=c_api.OniPixelFormat.ONI_PIXEL_FORMAT_DEPTH_1_MM, resolutionX=320, resolutionY=240, fps=30))

## Check and configure the mirroring -- default is True
## Note: mirroring is disabled here for both streams
#print 'Mirroring info1', depth_stream.get_mirroring_enabled()
depth_stream.set_mirroring_enabled(False)
rgb_stream.set_mirroring_enabled(False)

## More info on the depth_ and rgb_ streams
#help(depth_stream)


## Start the streams
rgb_stream.start()
depth_stream.start()

## Synchronize the streams
dev.set_depth_color_sync_enabled(True)

## IMPORTANT: ALIGN DEPTH2RGB (depth warped to match the rgb stream)
dev.set_image_registration_mode(openni2.IMAGE_REGISTRATION_DEPTH_TO_COLOR)


def get_rgb():
    """
    Returns a numpy 3L ndarray to represent the rgb image.
    """
    bgr = np.fromstring(rgb_stream.read_frame().get_buffer_as_uint8(), dtype=np.uint8).reshape(240, 320, 3)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    return rgb
#get_rgb


def get_depth():
    """
    Returns numpy ndarrays representing the raw and ranged depth images.
    Outputs:
        dmap: distance map in mm, 1L ndarray, dtype=uint16, min=0, max=2**12-1
        d4d : depth for display, 3L ndarray, dtype=uint8, min=0, max=255
    Note1:
        fromstring is faster than asarray or frombuffer
    Note2:
        .reshape(120,160) # smaller image for faster response;
                            OMAP/ARM default video configuration
        .reshape(240,320) # used to MATCH the RGB image (OMAP/ARM);
                            requires .set_video_mode
    """
    dmap = np.fromstring(depth_stream.read_frame().get_buffer_as_uint16(), dtype=np.uint16).reshape(240, 320)  # Works & It's FAST
    d4d = np.uint8(dmap.astype(float) * 255 / (2**12 - 1))  # Correct the range. Depth images are 12 bits
    d4d = 255 - cv2.cvtColor(d4d, cv2.COLOR_GRAY2RGB)
    return dmap, d4d
#get_depth


## main loop
s = 0
done = False
while not done:
    ## Read keystrokes
    key = cv2.waitKey(1) & 255
    if key == 27:  # terminate
        print "\tESC key detected!"
        done = True
    elif chr(key) == 's':  # screen capture
        print "\ts key detected. Saving image {}".format(s)
        cv2.imwrite("ex4_"+str(s)+'.png', canvas)
        #s+=1 # uncomment for multiple captures
    #if

    ## Streams
    # RGB
    rgb = get_rgb()

    # DEPTH
    _, d4d = get_depth()

    # Canvas
    canvas = np.hstack((rgb, d4d))

    ## Display the streams side-by-side
    cv2.imshow('depth || rgb', canvas)
# end while

## Release resources
cv2.destroyAllWindows()
rgb_stream.stop()
depth_stream.stop()
openni2.unload()
print ("Terminated")
--------------------------------------------------------------------------------
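Because depth is registered to the color stream here, a pixel (row, col) indexes the same scene point in both frames, so distances can be read directly for any region of the RGB view. A small sketch of that idea (the helper name, window size, and synthetic test array are illustrative only; inside the ex4 loop you would pass the `dmap` returned by `get_depth()`):

```python
# Sketch: median distance over a small window of a registered distance map,
# ignoring zeros (unknown depth). With depth registered to color, (row, col)
# corresponds to the same pixel in the rgb frame.
import numpy as np

def median_distance(dmap, row, col, win=5):
    """Median distance (mm) in a (2*win+1)^2 window centered at (row, col)."""
    patch = dmap[max(row - win, 0):row + win + 1, max(col - win, 0):col + win + 1]
    valid = patch[patch > 0]                 # 0 marks unknown depth
    return float(np.median(valid)) if valid.size else 0.0

# Example with a synthetic 240x320 map of 800 mm everywhere.
dmap = np.ones((240, 320), dtype=np.uint16) * 800
print "Center region is {:.0f} mm away".format(median_distance(dmap, 119, 159))
```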
/samples/ex5_rgbd_overlayed.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python
'''
Created on 19Jun2015
Stream rgb and depth video side-by-side using openni2 and opencv-python (cv2).
RGB is overlayed on top of readable depth. In addition, streams are aligned, mirror-corrected, and synchronized.

Requires the following libraries:
    1. OpenNI-Linux 2.2 (x64 or Arm build)
    2. primesense-2.2.0.30
    3. Python 2.7+
    4. OpenCV 2.4.X

Current features:
    1. Convert primesense oni -> numpy
    2. Stream and display rgb || depth || rgbd overlayed
    3. Keyboard commands
        press esc to exit
        press s to save current screen and distance map
    4. Synced and registered depth & rgb streams
    5. Print distance to center pixel
    6. Masks and overlays the rgb stream on the depth stream

NOTE:
    1. On-device streams: IR and RGB streams do not work together
        Depth & IR  = OK
        Depth & RGB = OK
        RGB & IR    = NOT OK

@author: Carlos Torres
'''

import numpy as np
import cv2
from primesense import openni2#, nite2
from primesense import _openni2 as c_api


## Path of the OpenNI redistribution OpenNI2.so or OpenNI2.dll
# Windows
#dist = 'C:\Program Files\OpenNI2\Redist\OpenNI2.dll'
# OMAP
#dist = '/home/carlos/Install/kinect/OpenNI2-Linux-ARM-2.2/Redist/'
# Linux
dist = '/home/carlos/Install/openni2/OpenNI-Linux-x64-2.2/Redist'


## Initialize openni and check
openni2.initialize(dist)  # accepts the path of the OpenNI redistribution
if (openni2.is_initialized()):
    print "openNI2 initialized"
else:
    print "openNI2 not initialized"

## Register the device
dev = openni2.Device.open_any()

## Create the streams
rgb_stream = dev.create_color_stream()
depth_stream = dev.create_depth_stream()

## Configure the depth_stream
#print 'Get b4 video mode', depth_stream.get_video_mode()
depth_stream.set_video_mode(c_api.OniVideoMode(pixelFormat=c_api.OniPixelFormat.ONI_PIXEL_FORMAT_DEPTH_1_MM, resolutionX=320, resolutionY=240, fps=30))


## Check and configure the mirroring -- default is True
#print 'Mirroring info1', depth_stream.get_mirroring_enabled()
depth_stream.set_mirroring_enabled(False)
rgb_stream.set_mirroring_enabled(False)


## Start the streams
rgb_stream.start()
depth_stream.start()

## Synchronize the streams
dev.set_depth_color_sync_enabled(True)

## IMPORTANT: ALIGN DEPTH2RGB (depth warped to match the rgb stream)
dev.set_image_registration_mode(openni2.IMAGE_REGISTRATION_DEPTH_TO_COLOR)

##help(dev.set_image_registration_mode)


def get_rgb():
    """
    Returns a numpy 3L ndarray to represent the rgb image.
    """
    bgr = np.fromstring(rgb_stream.read_frame().get_buffer_as_uint8(), dtype=np.uint8).reshape(240, 320, 3)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    return rgb
#get_rgb


def get_depth():
    """
    Returns numpy ndarrays representing the raw and ranged depth images.
    Outputs:
        dmap: distance map in mm, 1L ndarray, dtype=uint16, min=0, max=2**12-1
        d4d : depth for display, 3L ndarray, dtype=uint8, min=0, max=255
    Note1:
        fromstring is faster than asarray or frombuffer
    Note2:
        .reshape(120,160) # smaller image for faster response;
                            OMAP/ARM default video configuration
        .reshape(240,320) # used to MATCH the RGB image (OMAP/ARM);
                            requires .set_video_mode
    """
    dmap = np.fromstring(depth_stream.read_frame().get_buffer_as_uint16(), dtype=np.uint16).reshape(240, 320)  # Works & It's FAST
    d4d = np.uint8(dmap.astype(float) * 255 / (2**12 - 1))  # Correct the range. Depth images are 12 bits
    d4d = 255 - cv2.cvtColor(d4d, cv2.COLOR_GRAY2RGB)
    return dmap, d4d
#get_depth


def mask_rgbd(d4d, rgb, th=0):
    """
    Overlays images and optionally uses some blur to slightly smooth the mask
    (3L ndarray, 3L ndarray) -> 3L ndarray
    th := threshold
    """
    mask = d4d.copy()
    #mask = cv2.GaussianBlur(mask, (5,5), 0)
    idx = (mask > th)
    mask[idx] = rgb[idx]
    return mask
#mask_rgbd


## main loop
s = 0
done = False
while not done:
    ## Read keystrokes
    key = cv2.waitKey(1) & 255
    if key == 27:  # terminate
        print "\tESC key detected!"
        done = True
    elif chr(key) == 's':  # screen capture
        print "\ts key detected. Saving image and distance map {}".format(s)
        cv2.imwrite("ex5_"+str(s)+'.png', canvas)
        np.savetxt("ex5dmap_"+str(s)+'.out', dmap)
        #s+=1 # uncomment for multiple captures
    #if

    ## Streams
    # RGB
    rgb = get_rgb()

    # DEPTH
    dmap, d4d = get_depth()

    # Overlay rgb over the depth stream
    rgbd = mask_rgbd(d4d, rgb)

    # Canvas
    canvas = np.hstack((d4d, rgb, rgbd))

    ## Distance map
    print 'Center pixel is {} mm away'.format(dmap[119, 159])

    ## Display the streams
    cv2.imshow('depth || rgb || rgbd', canvas)
# end while

## Release resources
cv2.destroyAllWindows()
rgb_stream.stop()
depth_stream.stop()
openni2.unload()
print ("Terminated")
--------------------------------------------------------------------------------
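The `mask_rgbd` helper above does a hard pixel replacement wherever the depth display is above the threshold. If a softer composite is preferred, the depth and rgb displays can instead be alpha-blended. A minimal sketch that works offline on a canvas written by this sample's `s` key (file name and the 70/30 weights are arbitrary choices):

```python
# Sketch: alpha-blend the depth display and the rgb frame instead of the hard
# mask used by mask_rgbd(). "ex5_0.png" holds d4d | rgb | rgbd stacked
# horizontally (320 columns each); blend its first two panels.
import cv2

canvas = cv2.imread("ex5_0.png")
d4d = canvas[:, 0:320]        # left panel: depth-for-display
rgb = canvas[:, 320:640]      # middle panel: rgb
blended = cv2.addWeighted(rgb, 0.7, d4d, 0.3, 0)   # 70% rgb + 30% depth
cv2.imwrite("ex5_0_blended.png", blended)
```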
/samples/ex6_ird_stream.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python
'''
Official primesense openni2 and nite2 python bindings.

Streams the infra-red camera alongside depth.

ref:
    http://www.eml.ele.cst.nihon-u.ac.jp/~momma/wiki/wiki.cgi/OpenNI/Python.html

@author: Carlos Torres
'''

import cv2

from primesense import openni2#, nite2
import numpy as np
from primesense import _openni2 as c_api

#import matplotlib.pyplot as plt


## Directory where OpenNI2.so is located
dist = '/home/carlos/Install/kinect/OpenNI2-Linux-Arm-2.2/Redist/'

## Initialize openni and check
openni2.initialize(dist)  # accepts the path of the OpenNI redistribution
if (openni2.is_initialized()):
    print "openNI2 initialized"
else:
    print "openNI2 not initialized"

#### Initialize nite and check
##nite2.initialize()
##if (nite2.is_initialized()):
##    print "nite2 initialized"
##else:
##    print "nite2 not initialized"
#### ===============================


dev = openni2.Device.open_any()
print 'Some Device Information'
print '\t', dev.get_sensor_info(openni2.SENSOR_DEPTH)
print '\t', dev.get_sensor_info(openni2.SENSOR_IR)
print '\t', dev.get_sensor_info(openni2.SENSOR_COLOR)
#ut = nite2.UserTracker(dev)

## Streams
# Depth stream
depth_stream = dev.create_depth_stream()

# IR stream
ir_stream = dev.create_ir_stream()

## Set stream speed and resolution
w = 640
h = 480
fps = 30

## Set the video properties
#print 'Get b4 video mode', depth_stream.get_video_mode()
depth_stream.set_video_mode(c_api.OniVideoMode(pixelFormat=c_api.OniPixelFormat.ONI_PIXEL_FORMAT_DEPTH_1_MM, resolutionX=w, resolutionY=h, fps=fps))
#print 'Get after video mode', depth_stream.get_video_mode()

## Start the streams
depth_stream.start()
ir_stream.start()


def get_depth1():
    """
    Returns numpy ndarrays representing raw and ranged depth images.
    Outputs:
        depth: raw depth, 1L ndarray, dtype=uint16, min=0, max=2**12-1
        d4d  : depth for display, 3L ndarray, dtype=uint8, min=0, max=255
    Note1:
        fromstring is faster than asarray or frombuffer
    Note2:
        depth = depth.reshape(120,160) # smaller image for faster response;
                                         NEEDS default video configuration
        depth = depth.reshape(240,320) # used to MATCH the RGB image (OMAP/ARM)
    """
    depth_frame = depth_stream.read_frame()
    depth = np.fromstring(depth_frame.get_buffer_as_uint16(), dtype=np.uint16).reshape(h, w)  # Works & It's FAST
    d4d = np.uint8(depth.astype(float) * 255 / (2**12 - 1))  # Correct the range. Depth images are 12 bits
    #d4d = cv2.cvtColor(d4d, cv2.COLOR_GRAY2RGB)
    d4d = np.dstack((d4d, d4d, d4d))  # faster than cv2 conversion
    return depth, d4d
#get_depth1


def get_depth():
    """
    Returns numpy ndarrays representing the raw and ranged depth images.
    Outputs:
        dmap: distance map in mm, 1L ndarray, dtype=uint16, min=0, max=2**12-1
        d4d : depth for display, 3L ndarray, dtype=uint8, min=0, max=255
    Note1:
        fromstring is faster than asarray or frombuffer
    Note2:
        .reshape(120,160) # smaller image for faster response;
                            OMAP/ARM default video configuration
        .reshape(240,320) # used to MATCH the RGB image (OMAP/ARM);
                            requires .set_video_mode
    """
    dmap = np.fromstring(depth_stream.read_frame().get_buffer_as_uint16(), dtype=np.uint16).reshape(h, w)  # Works & It's FAST
    d4d = np.uint8(dmap.astype(float) * 255 / (2**12 - 1))  # Correct the range. Depth images are 12 bits
    #d4d = cv2.cvtColor(d4d, cv2.COLOR_GRAY2RGB)
    d4d = np.dstack((d4d, d4d, d4d))  # faster than cv2 conversion
    return dmap, d4d
#get_depth

def get_ir():
    """
    Returns numpy ndarrays representing raw and ranged infra-red (IR) images.
    Outputs:
        ir   : raw IR, 1L ndarray, dtype=uint16, min=0, max=2**12-1
        ir4d : IR for display, 3L ndarray, dtype=uint8, min=0, max=255
    """
    ir_frame = ir_stream.read_frame()
    ir_frame_data = ir_frame.get_buffer_as_uint16()
    ir4d = np.ndarray((ir_frame.height, ir_frame.width), dtype=np.uint16, buffer=ir_frame_data).astype(np.float32)
    ir4d = np.uint8((ir4d / ir4d.max()) * 255)  # normalize per frame for display
    ir4d = cv2.cvtColor(ir4d, cv2.COLOR_GRAY2RGB)
    return ir_frame, ir4d
#get_ir

frame_idx = 0

## main loop
done = False
while not done:
    ## Read keystrokes
    key = cv2.waitKey(1)
    if (key & 255) == 27:
        done = True
    ## Read in the streams
    # Depth
    _, d4d = get_depth()
    # Infrared
    _, ir4d = get_ir()
    cv2.imshow("Depth||IR", np.hstack((d4d, ir4d)))
    frame_idx += 1
# end while

## Release resources and terminate
cv2.destroyAllWindows()
depth_stream.stop()
ir_stream.stop()
openni2.unload()
print ("Terminated")
--------------------------------------------------------------------------------