Runtime Initialization
To use the VideoStream, UseColor must be specified when the Runtime is initialized. Two resolutions are available (Resolution1280x1024 and Resolution640x480) and three image types (Color, ColorYUV, and ColorYUVRaw).
To use the DepthStream, UseDepthAndPlayerIndex must be specified when the Runtime is initialized. Two resolutions are available (Resolution320x240 and Resolution80x60) and only one image type (DepthAndPlayerIndex).
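To make this concrete, here is a minimal initialization sketch in the spirit of the SkeletalViewer sample (the variable name nui and the two-buffer pool size are assumptions modeled on that sample; treat this as an outline, not verbatim sample code):

Runtime nui = new Runtime();

// Request everything the app will use up front; a stream can only be
// opened if its option was passed to Initialize().
nui.Initialize(RuntimeOptions.UseColor |
               RuntimeOptions.UseDepthAndPlayerIndex |
               RuntimeOptions.UseSkeletalTracking);

// Open the streams with one of the resolutions listed above
// (the second argument is the size of the frame buffer pool).
nui.VideoStream.Open(ImageStreamType.Video, 2,
                     ImageResolution.Resolution640x480, ImageType.Color);
nui.DepthStream.Open(ImageStreamType.Depth, 2,
                     ImageResolution.Resolution320x240, ImageType.DepthAndPlayerIndex);

// Wire up the three frame-ready events discussed in the next section.
nui.VideoFrameReady += new EventHandler<ImageFrameReadyEventArgs>(nui_ColorFrameReady);
nui.DepthFrameReady += new EventHandler<ImageFrameReadyEventArgs>(nui_DepthFrameReady);
nui.SkeletonFrameReady += new EventHandler<SkeletonFrameReadyEventArgs>(nui_SkeletonFrameReady);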
At Run Time
When a video frame is ready, the runtime raises the VideoFrameReady event, which invokes nui_ColorFrameReady. The other two events, DepthFrameReady and SkeletonFrameReady, work the same way with their respective EventHandler functions.
All nui_ColorFrameReady does is update the image content of the corresponding control (namely, video).
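For reference, a sketch of what this handler looks like (assuming, as in the sample, a WPF Image control named video):

void nui_ColorFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    // The frame arrives as a Bgr32 bitmap; hand its bytes straight to WPF.
    PlanarImage Image = e.ImageFrame.Image;
    video.Source = BitmapSource.Create(
        Image.Width, Image.Height, 96, 96, PixelFormats.Bgr32, null,
        Image.Bits, Image.Width * Image.BytesPerPixel);
}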
Handling Depth Data
void nui_DepthFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    PlanarImage Image = e.ImageFrame.Image;
    byte[] convertedDepthFrame = convertDepthFrame(Image.Bits);

    depth.Source = BitmapSource.Create(
        Image.Width, Image.Height, 96, 96, PixelFormats.Bgr32, null,
        convertedDepthFrame, Image.Width * 4);

    ++totalFrames;

    DateTime cur = DateTime.Now;
    if (cur.Subtract(lastTime) > TimeSpan.FromSeconds(1))
    {
        int frameDiff = totalFrames - lastFrames;
        lastFrames = totalFrames;
        lastTime = cur;
        frameRate.Text = frameDiff.ToString() + " fps";
    }
}
Roughly, the function does the following: first, it gets the image Image from the event; then it converts the raw 16-bit depth frame into a 32-bit image; next, it sets the depth control's source to convertedDepthFrame, the converted 32-bit depth image; finally, it increments the total frame count and, if more than one second has elapsed since the last update, refreshes the fps reading.
The raw 16-bit depth frame is laid out as follows (question 8 there gives a more detailed explanation): each pixel occupies two bytes, with the player index in the low 3 bits and the 13-bit depth value, in millimeters, in the bits above it.
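For example, unpacking one hypothetical pixel by hand (the byte values are made up for illustration):

// Hypothetical pixel: low byte 0x5A, high byte 0x21.
byte b0 = 0x5A, b1 = 0x21;
int player    = b0 & 0x07;              // = 2: this pixel belongs to player 2
int realDepth = (b1 << 5) | (b0 >> 3);  // = (33 << 5) | (90 >> 3) = 1067 mm, about 1.07 m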
If the depth data were displayed raw or as a gray-scale image, it would be hard to tell the players in the scene apart. So the program renders each player in a different color. First, the code:
// Converts a 16-bit grayscale depth frame which includes player indexes into a 32-bit frame
// that displays different players in different colors
byte[] convertDepthFrame(byte[] depthFrame16)
{
    for (int i16 = 0, i32 = 0; i16 < depthFrame16.Length && i32 < depthFrame32.Length; i16 += 2, i32 += 4)
    {
        int player = depthFrame16[i16] & 0x07;
        int realDepth = (depthFrame16[i16+1] << 5) | (depthFrame16[i16] >> 3);
        // transform 13-bit depth information into an 8-bit intensity appropriate
        // for display (we disregard information in most significant bit)
        byte intensity = (byte)(255 - (255 * realDepth / 0x0fff));

        depthFrame32[i32 + RED_IDX] = 0;
        depthFrame32[i32 + GREEN_IDX] = 0;
        depthFrame32[i32 + BLUE_IDX] = 0;

        // choose different display colors based on player
        switch (player)
        {
            case 0:
                depthFrame32[i32 + RED_IDX] = (byte)(intensity / 2);
                depthFrame32[i32 + GREEN_IDX] = (byte)(intensity / 2);
                depthFrame32[i32 + BLUE_IDX] = (byte)(intensity / 2);
                break;
            case 1:
                depthFrame32[i32 + RED_IDX] = intensity;
                break;
            case 2:
                depthFrame32[i32 + GREEN_IDX] = intensity;
                break;
            case 3:
                depthFrame32[i32 + RED_IDX] = (byte)(intensity / 4);
                depthFrame32[i32 + GREEN_IDX] = (byte)(intensity);
                depthFrame32[i32 + BLUE_IDX] = (byte)(intensity);
                break;
            case 4:
                depthFrame32[i32 + RED_IDX] = (byte)(intensity);
                depthFrame32[i32 + GREEN_IDX] = (byte)(intensity);
                depthFrame32[i32 + BLUE_IDX] = (byte)(intensity / 4);
                break;
            case 5:
                depthFrame32[i32 + RED_IDX] = (byte)(intensity);
                depthFrame32[i32 + GREEN_IDX] = (byte)(intensity / 4);
                depthFrame32[i32 + BLUE_IDX] = (byte)(intensity);
                break;
            case 6:
                depthFrame32[i32 + RED_IDX] = (byte)(intensity / 2);
                depthFrame32[i32 + GREEN_IDX] = (byte)(intensity / 2);
                depthFrame32[i32 + BLUE_IDX] = (byte)(intensity);
                break;
            case 7:
                depthFrame32[i32 + RED_IDX] = (byte)(255 - intensity);
                depthFrame32[i32 + GREEN_IDX] = (byte)(255 - intensity);
                depthFrame32[i32 + BLUE_IDX] = (byte)(255 - intensity);
                break;
        }
    }
    return depthFrame32;
}
To spell out the steps: depthFrame32, RED_IDX, GREEN_IDX and BLUE_IDX are fields of the class (the preallocated output buffer and the byte offsets of the color channels within each 32-bit pixel). For each 16-bit input pixel, the low 3 bits (& 0x07) give the player index, and the remaining 13 bits, reassembled from the two bytes, give the real depth. That depth is then compressed into an 8-bit intensity; since it is subtracted from 255, nearer objects appear brighter. Finally, the switch statement colors the pixel by player: index 0 (no player) is drawn in gray, indexes 1 through 6 each get their own tint, and index 7 gets an inverted gray.
Handling Skeleton Data
void nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    SkeletonFrame skeletonFrame = e.SkeletonFrame;
    int iSkeleton = 0;
    Brush[] brushes = new Brush[6];
    brushes[0] = new SolidColorBrush(Color.FromRgb(255, 0, 0));
    brushes[1] = new SolidColorBrush(Color.FromRgb(0, 255, 0));
    brushes[2] = new SolidColorBrush(Color.FromRgb(64, 255, 255));
    brushes[3] = new SolidColorBrush(Color.FromRgb(255, 255, 64));
    brushes[4] = new SolidColorBrush(Color.FromRgb(255, 64, 255));
    brushes[5] = new SolidColorBrush(Color.FromRgb(128, 128, 255));

    skeleton.Children.Clear();
    foreach (SkeletonData data in skeletonFrame.Skeletons)
    {
        if (SkeletonTrackingState.Tracked == data.TrackingState)
        {
            // Draw bones
            Brush brush = brushes[iSkeleton % brushes.Length];
            skeleton.Children.Add(getBodySegment(data.Joints, brush, JointID.HipCenter, JointID.Spine, JointID.ShoulderCenter, JointID.Head));
            skeleton.Children.Add(getBodySegment(data.Joints, brush, JointID.ShoulderCenter, JointID.ShoulderLeft, JointID.ElbowLeft, JointID.WristLeft, JointID.HandLeft));
            skeleton.Children.Add(getBodySegment(data.Joints, brush, JointID.ShoulderCenter, JointID.ShoulderRight, JointID.ElbowRight, JointID.WristRight, JointID.HandRight));
            skeleton.Children.Add(getBodySegment(data.Joints, brush, JointID.HipCenter, JointID.HipLeft, JointID.KneeLeft, JointID.AnkleLeft, JointID.FootLeft));
            skeleton.Children.Add(getBodySegment(data.Joints, brush, JointID.HipCenter, JointID.HipRight, JointID.KneeRight, JointID.AnkleRight, JointID.FootRight));

            // Draw joints
            foreach (Joint joint in data.Joints)
            {
                Point jointPos = getDisplayPosition(joint);
                Line jointLine = new Line();
                jointLine.X1 = jointPos.X - 3;
                jointLine.X2 = jointLine.X1 + 6;
                jointLine.Y1 = jointLine.Y2 = jointPos.Y;
                jointLine.Stroke = jointColors[joint.ID];
                jointLine.StrokeThickness = 6;
                skeleton.Children.Add(jointLine);
            }
        }
        iSkeleton++;
    } // for each skeleton
}
iSkeleton indicates which skeleton is being drawn, from 0 to 5. The brushes array defines the bone color for each skeleton. The foreach loop then walks the data for every skeleton; if a skeleton is being tracked, its bones and joints are drawn.
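One thing to note: the joint-drawing loop above reads from a jointColors lookup that this excerpt never defines. In the sample it is a field mapping each JointID to a Brush; the sketch below shows its shape, with placeholder colors rather than the sample's actual values:

// One brush per joint, indexed by JointID (placeholder colors; the
// sample assigns a distinct color to each of the joints).
private Dictionary<JointID, Brush> jointColors = new Dictionary<JointID, Brush>()
{
    { JointID.HipCenter,      new SolidColorBrush(Color.FromRgb(169, 176, 155)) },
    { JointID.Spine,          new SolidColorBrush(Color.FromRgb(169, 176, 155)) },
    { JointID.ShoulderCenter, new SolidColorBrush(Color.FromRgb(168, 230,  29)) },
    { JointID.Head,           new SolidColorBrush(Color.FromRgb(200,   0,   0)) },
    // ... one entry for each remaining JointID ...
};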
The documentation explains that getBodySegment() takes three parameters: the JointsCollection holding the skeleton's joint data, the Brush to draw with, and a params array of JointID values naming the joints to connect.
getBodySegment() returns a Polyline connecting the given JointIDs. Its prototype is:
Polyline getBodySegment(Microsoft.Research.Kinect.Nui.JointsCollection joints, Brush brush, params JointID[] ids)
{
    PointCollection points = new PointCollection(ids.Length);
    for (int i = 0; i < ids.Length; ++i)
    {
        points.Add(getDisplayPosition(joints[ids[i]]));
    }

    Polyline polyline = new Polyline();
    polyline.Points = points;
    polyline.Stroke = brush;
    polyline.StrokeThickness = 5;
    return polyline;
}
Inside it there is a function called getDisplayPosition(), which converts a joint from its raw coordinates into coordinates in the program's display area. Here is the explanation.
Skeleton data, color image data, and depth data are each expressed in a different coordinate system. To display consistent data from all three streams, the program has to convert between these coordinate systems. The conversion goes like this:

1. SkeletonEngine.SkeletonToDepthImage() projects the joint's skeleton-space position into normalized depth-image coordinates.
2. Those coordinates are scaled up to the 320x240 depth image.
3. NuiCamera.GetColorPixelCoordinatesFromDepthPixel() maps the depth pixel to the corresponding pixel in the 640x480 color image.
4. The color-image coordinates are scaled to the skeleton control's width and height.

The function prototype is:
private Point getDisplayPosition(Joint joint)
{
    float depthX, depthY;
    nui.SkeletonEngine.SkeletonToDepthImage(joint.Position, out depthX, out depthY);
    depthX = depthX * 320; //convert to 320, 240 space
    depthY = depthY * 240; //convert to 320, 240 space

    int colorX, colorY;
    ImageViewArea iv = new ImageViewArea();
    // only ImageResolution.Resolution640x480 is supported at this point
    nui.NuiCamera.GetColorPixelCoordinatesFromDepthPixel(
        ImageResolution.Resolution640x480, iv, (int)depthX, (int)depthY, (short)0, out colorX, out colorY);

    // map back to skeleton.Width & skeleton.Height
    return new Point((int)(skeleton.Width * colorX / 640.0), (int)(skeleton.Height * colorY / 480));
}
Well, that's everything. See you next time, kids~
Note: the "documentation" referred to throughout this article is: http://research.microsoft.com/en-us/um/redmond/projects/kinectsdk/docs/SkeletalViewer_Walkthrough.pdf