Cleaned up the demos to reflect decisions made from the forum/irc discussions.

1. Abstracted the frame conversion code into frame_convert.py. This prevents the sweeping changes we have been seeing, since all of the duplicated code now lives in one place, and it makes optimization and normalization experiments cleaner to test.
2. Removed demo_ipython and demo_kill_async, as they are mostly duplicates of the other demos.
3. Made the "multi" demo default to using all kinects at once instead of one at a time.
4. Changed the default normalization to make better use of the 8-bit range.

Signed-off-by: Brandyn A. White <[email protected]>
Brandyn A. White authored and qdot committed Dec 29, 2010
1 parent 5fff205 commit 610ec4b
Showing 10 changed files with 136 additions and 88 deletions.
8 changes: 4 additions & 4 deletions wrappers/python/README
@@ -12,17 +12,17 @@ Install
- Global Install: sudo python setup.py install
- Local Directory Install: python setup.py build_ext --inplace

Why do the demos truncate the depth?
The depth is 11 bits; to display it as an 8-bit gray image you have to lose information somewhere. Truncation lets you distinguish local depth differences but, due to ambiguities, doesn't give you absolute depth; normalization gives you the absolute depth differences between pixels, but you lose resolution as the spread between the nearest and farthest depths grows. We feel truncation produces the best visual results for a demo while staying simple. See glview for an example of using colors to extend the range.
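Roughly, the two options look like this (the truncation half mirrors pretty_depth in frame_convert.py below; the normalize variant is only an illustration, not code from this commit):

import numpy as np

def truncate(depth):
    # Clip the raw (up to 11-bit) values to 10 bits, then shift right 2
    # to fit 8 bits: local contrast survives, absolute depth does not
    np.clip(depth, 0, 2**10 - 1, depth)
    return (depth >> 2).astype(np.uint8)

def normalize(depth):
    # Stretch the observed range over 0..255: absolute differences
    # survive, but resolution drops as the depth range widens
    depth = depth.astype(np.float32)
    depth -= depth.min()
    depth *= 255.0 / max(depth.max(), 1.0)
    return depth.astype(np.uint8)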
Why is frame_convert.py there? Why not just use one file?
We started with self-contained single-file demos, but once we began experimenting with optimization and normalization, maintaining the duplicated code became a nightmare. The separate file keeps those changes abstracted in one place.

Do I need to call sync_stop when the program ends?
No, it is not necessary.

Do I need to run everything as root?
No. Use the udev rules available in the project main directory.

Why does sync_multi call sync_stop after each kinect?
The goal is to test multiple kinects, but some machines don't have the USB bandwidth for it. By default, this only lets one run at a time, so you can have many kinects on a hub or a slow laptop. You can comment out the line if your machine can handle it.
Why does sync_multi have trouble with multiple kinects?
The goal is to test multiple kinects, but some machines don't have the USB bandwidth for all of them. By default they all run at once; if you uncomment the sync_stop line, only one runs at a time, so you can have many kinects on a hub or a slow laptop.
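The throttled pattern is roughly (a sketch; see demo_cv_sync_multi.py below for the real loop):

depth = freenect.sync_get_depth(ind)[0]
video = freenect.sync_get_video(ind)[0]
freenect.sync_stop()  # free the device so the next index gets the full bandwidth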

Differences From C Library
Things that are intentionally different to be more Pythonic
20 changes: 5 additions & 15 deletions wrappers/python/demo_cv_async.py
@@ -1,7 +1,7 @@
#!/usr/bin/env python
import freenect
import cv
import numpy as np
import frame_convert

cv.NamedWindow('Depth')
cv.NamedWindow('RGB')
@@ -10,33 +10,23 @@

def display_depth(dev, data, timestamp):
    global keep_running
    data = data.astype(np.uint8)
    image = cv.CreateImageHeader((data.shape[1], data.shape[0]),
                                 cv.IPL_DEPTH_8U,
                                 1)
    cv.SetData(image, data.tostring(),
               data.dtype.itemsize * data.shape[1])
    cv.ShowImage('Depth', image)
    cv.ShowImage('Depth', frame_convert.pretty_depth_cv(data))
    if cv.WaitKey(10) == 27:
        keep_running = False


def display_rgb(dev, data, timestamp):
    global keep_running
    image = cv.CreateImageHeader((data.shape[1], data.shape[0]),
                                 cv.IPL_DEPTH_8U,
                                 3)
    # Note: We swap from RGB to BGR here
    cv.SetData(image, data[:, :, ::-1].tostring(),
               data.dtype.itemsize * 3 * data.shape[1])
    cv.ShowImage('RGB', image)
    cv.ShowImage('RGB', frame_convert.video_cv(data))
    if cv.WaitKey(10) == 27:
        keep_running = False


def body(*args):
    if not keep_running:
        raise freenect.Kill


print('Press ESC in window to stop')
freenect.runloop(depth=display_depth,
                 video=display_rgb,
18 changes: 13 additions & 5 deletions wrappers/python/demo_cv_sync.py
@@ -1,15 +1,23 @@
#!/usr/bin/env python
import freenect
import cv
import numpy as np
import frame_convert

cv.NamedWindow('Depth')
cv.NamedWindow('Video')
print('Press ESC in window to stop')


def get_depth():
    return frame_convert.pretty_depth_cv(freenect.sync_get_depth()[0])


def get_video():
    return frame_convert.video_cv(freenect.sync_get_video()[0])


while 1:
    depth, timestamp = freenect.sync_get_depth()
    rgb, timestamp = freenect.sync_get_video()
    cv.ShowImage('Depth', depth.astype(np.uint8))
    cv.ShowImage('Video', rgb[:, :, ::-1].astype(np.uint8))
    cv.ShowImage('Depth', get_depth())
    cv.ShowImage('Video', get_video())
    if cv.WaitKey(10) == 27:
        break
30 changes: 23 additions & 7 deletions wrappers/python/demo_cv_sync_multi.py
@@ -1,23 +1,39 @@
#!/usr/bin/env python
"""This goes through each kinect on your system, grabs one frame and
displays it. Uncomment the commented line to shut down after each frame
if your system can't handle it (will get very low FPS but it should work).
This will keep trying indeces until it finds one that doesn't work, then it
starts from 0.
"""
import freenect
import cv
import numpy as np
import frame_convert

cv.NamedWindow('Depth')
cv.NamedWindow('Video')
ind = 0
print('Press ESC to stop')
print('%s\nPress ESC to stop' % __doc__)


def get_depth(ind):
    return frame_convert.pretty_depth_cv(freenect.sync_get_depth(ind)[0])


def get_video(ind):
    return frame_convert.video_cv(freenect.sync_get_video(ind)[0])


while 1:
    print(ind)
    try:
        depth, timestamp = freenect.sync_get_depth(ind)
        rgb, timestamp = freenect.sync_get_video(ind)
        depth = get_depth(ind)
        video = get_video(ind)
    except TypeError:
        ind = 0
        continue
    ind += 1
    cv.ShowImage('Depth', depth.astype(np.uint8))
    cv.ShowImage('Video', rgb[:, :, ::-1].astype(np.uint8))
    cv.ShowImage('Depth', depth)
    cv.ShowImage('Video', video)
    if cv.WaitKey(10) == 27:
        break
    freenect.sync_stop()  # NOTE: May remove if you have good USB bandwidth
    #freenect.sync_stop()  # NOTE: Uncomment if your machine can't handle it
10 changes: 9 additions & 1 deletion wrappers/python/demo_cv_thresh_sweep.py
@@ -11,8 +11,16 @@
def disp_thresh(lower, upper):
    depth, timestamp = freenect.sync_get_depth()
    depth = 255 * np.logical_and(depth > lower, depth < upper)
    cv.ShowImage('Depth', depth.astype(np.uint8))
    depth = depth.astype(np.uint8)
    image = cv.CreateImageHeader((depth.shape[1], depth.shape[0]),
                                 cv.IPL_DEPTH_8U,
                                 1)
    cv.SetData(image, depth.tostring(),
               depth.dtype.itemsize * depth.shape[1])
    cv.ShowImage('Depth', image)
    cv.WaitKey(10)


lower = 0
upper = 100
max_upper = 2048
29 changes: 0 additions & 29 deletions wrappers/python/demo_ipython.py

This file was deleted.

14 changes: 0 additions & 14 deletions wrappers/python/demo_kill_async.py

This file was deleted.

6 changes: 4 additions & 2 deletions wrappers/python/demo_mp_async.py
@@ -1,8 +1,8 @@
#!/usr/bin/env python
import freenect
import matplotlib.pyplot as mp
import numpy as np
import signal
import frame_convert

mp.ion()
image_rgb = None
@@ -12,7 +12,7 @@

def display_depth(dev, data, timestamp):
    global image_depth
    data = data.astype(np.uint8)
    data = frame_convert.pretty_depth(data)
    mp.gray()
    mp.figure(1)
    if image_depth:
@@ -40,6 +40,8 @@ def body(*args):
def handler(signum, frame):
    global keep_running
    keep_running = False


print('Press Ctrl-C in terminal to stop')
signal.signal(signal.SIGINT, handler)
freenect.runloop(depth=display_depth,
30 changes: 19 additions & 11 deletions wrappers/python/demo_mp_sync.py
@@ -1,31 +1,39 @@
#!/usr/bin/env python
import freenect
import matplotlib.pyplot as mp
import numpy as np
import frame_convert
import signal

keep_running = True
mp.ion()
mp.figure(1)
mp.gray()
image_depth = mp.imshow(freenect.sync_get_depth()[0].astype(np.uint8),
                        interpolation='nearest', animated=True)
mp.figure(2)
image_rgb = mp.imshow(freenect.sync_get_video()[0],
                      interpolation='nearest', animated=True)


def get_depth():
    return frame_convert.pretty_depth(freenect.sync_get_depth()[0])


def get_video():
    return freenect.sync_get_video()[0]


def handler(signum, frame):
    """Sets up the kill handler, catches SIGINT"""
    global keep_running
    keep_running = False


mp.ion()
mp.gray()
mp.figure(1)
image_depth = mp.imshow(get_depth(), interpolation='nearest', animated=True)
mp.figure(2)
image_rgb = mp.imshow(get_video(), interpolation='nearest', animated=True)
print('Press Ctrl-C in terminal to stop')
signal.signal(signal.SIGINT, handler)

while keep_running:
    mp.figure(1)
    image_depth.set_data(freenect.sync_get_depth()[0].astype(np.uint8))
    image_depth.set_data(get_depth())
    mp.figure(2)
    image_rgb.set_data(freenect.sync_get_video()[0])
    image_rgb.set_data(get_video())
    mp.draw()
    mp.waitforbuttonpress(0.01)
59 changes: 59 additions & 0 deletions wrappers/python/frame_convert.py
@@ -0,0 +1,59 @@
import cv
import numpy as np


def pretty_depth(depth):
    """Converts depth into a 'nicer' format for display

    This is abstracted to allow for experimentation with normalization

    Args:
        depth: A numpy array with 2 bytes per pixel

    Returns:
        A numpy array that has been processed whose datatype is unspecified
    """
    np.clip(depth, 0, 2**10 - 1, depth)
    depth >>= 2
    depth = depth.astype(np.uint8)
    return depth


def pretty_depth_cv(depth):
    """Converts depth into a 'nicer' format for display

    This is abstracted to allow for experimentation with normalization

    Args:
        depth: A numpy array with 2 bytes per pixel

    Returns:
        An opencv image whose datatype is unspecified
    """
    depth = pretty_depth(depth)
    image = cv.CreateImageHeader((depth.shape[1], depth.shape[0]),
                                 cv.IPL_DEPTH_8U,
                                 1)
    cv.SetData(image, depth.tostring(),
               depth.dtype.itemsize * depth.shape[1])
    return image


def video_cv(video):
    """Converts video into a BGR format for opencv

    This is abstracted out to allow for experimentation

    Args:
        video: A numpy array with 1 byte per pixel, 3 channels RGB

    Returns:
        An opencv image whose datatype is 1 byte, 3 channel BGR
    """
    video = video[:, :, ::-1]  # RGB -> BGR
    image = cv.CreateImageHeader((video.shape[1], video.shape[0]),
                                 cv.IPL_DEPTH_8U,
                                 3)
    cv.SetData(image, video.tostring(),
               video.dtype.itemsize * 3 * video.shape[1])
    return image
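
For reference, a minimal usage sketch mirroring demo_cv_sync.py (assumes a kinect is attached; only calls that appear in this commit):

import freenect
import cv
import frame_convert

cv.NamedWindow('Depth')
cv.ShowImage('Depth', frame_convert.pretty_depth_cv(freenect.sync_get_depth()[0]))
cv.WaitKey(0)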
