Kinect / PyKinect2
Wrapper to expose Kinect for Windows v2 API in Python
License: MIT License
Hi all,
I am a newbie to PyKinect2 and Python. I have printed both joints and jointPoints in def draw_body(self, joints, jointPoints, color): and get no output.
I also tried to print body in def run(self):
body = self.bodies.bodies[i]
Any recommendation will be much appreciated!
Lots of dependencies. Let's get this packaged up nicely so it's easy to install.
The actual sizeof reported is 80. Please investigate.
Hi, is it possible to retrieve the z coordinate of the skeleton? I'm trying by taking inspiration from the example, but ColorSpacePoint doesn't have it, and if I add it myself I don't retrieve it in the joints later...
depthframe = kinect.get_last_depth_frame()
ptr_depth = numpy.ctypeslib.as_ctypes(depthframe.flatten())
L = kinect._depth_frame_data_capacity.value
S = 1080 * 1920
TYPE_ColorSpacePointArray = PyKinectV2._ColorSpacePoint * S
csps1 = TYPE_ColorSpacePointArray()
x = kinect._mapper.MapDepthFrameToColorSpace(L, ptr_depth, S, csps1)
I get an error => 'The parameter is incorrect'
Can someone please point me in the right direction to get the Z coordinate of a joint?
Is that possible?
[DllImport("user32.dll", EntryPoint = "FindWindowEx")]
public static extern IntPtr FindWindowEx(IntPtr hwndParent, IntPtr hwndChildAfter, string lpszClass, string lpszWindow);
[DllImport("User32.dll")]
public static extern int SendMessage(IntPtr hWnd, int uMsg, int wParam, string lParam);
private void textBox1_TextChanged(object sender, EventArgs e)
{
Process[] notepadprocess = Process.GetProcessesByName("notepad");
IntPtr child = FindWindowEx(notepadprocess[0].MainWindowHandle, new IntPtr(0), "EDIT", null);
SendMessage(child, 0X000C, notepadprocess[0].Id, textBox1.Text);
}
The code above worked for sending my textBox1.Text value from my form to a Notepad instance that was already running,
but my exact question is:
how do I use SendMessage with another process INSTEAD OF NOTEPAD?
Many thanks for your cooperation.
Hi, I'm new to Python and I wanted to try to do something with the new Kinect for a project.
The idea was that the Kinect would track someone in my room, and if they left the room and didn't return after 2 minutes, it would send a signal to my phone.
But I can't find any documentation on PyKinect2, and it's the only module for Kinect v2 in Python.
I'd be glad if somebody could help me.
Thanks
The package should not install on Mac/Linux; we may need to implement a C function that has a #include <windows.h>.
I got x, y in depth space by using kinect.body_joints_to_depth_space(joints),
and I got the depth value corresponding to this x, y in millimeters. So how can I convert x, y, which are pixel coordinates in the depth image, to millimeters in camera space? The only solution I know of requires the focal length of the depth camera. Is it possible to get it somehow? Or is there another way?
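The focal-length approach can be sketched with the pinhole camera model. This is a hedged sketch: the FX/FY/CX/CY values below are typical published intrinsics for the Kinect v2 depth camera (512x424), not your device's calibrated ones, which vary per sensor.

```python
# Back-project a depth pixel (px, py) with depth z_mm into camera space
# using the pinhole model. The intrinsics are APPROXIMATE typical values
# for the Kinect v2 depth camera, not per-device calibration.

FX, FY = 366.1, 366.1   # focal lengths in pixels (approximate)
CX, CY = 258.2, 203.3   # principal point in pixels (approximate)

def depth_pixel_to_camera(px, py, z_mm):
    """Convert a depth-image pixel and its depth (mm) to camera-space metres."""
    z = z_mm / 1000.0               # millimetres -> metres
    x = (px - CX) * z / FX
    y = (py - CY) * z / FY
    return x, y, z

# The principal point at 1 m depth maps to (0, 0, 1).
print(depth_pixel_to_camera(258.2, 203.3, 1000.0))
```

With a sensor attached, the coordinate mapper can do this mapping for you with calibrated intrinsics instead of the approximate constants above.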
Implement the 2D Face APIs. This wrapper should be generated in the same way as PyKinectV2.py. Then this generated file needs to be integrated into the PyKinectRuntime.py file.
There are dependencies for Face within C:\Program Files\Microsoft SDKs\Kinect\v2.0_1409\Redist\Face. All the files in NuiDatabase need to be present for the application to work. We need to figure out the best way to include these dependencies.
Implement the VGB APIs. This wrapper should be generated in the same way as PyKinectV2.py. Then this generated file needs to be integrated into the PyKinectRuntime.py file.
There are dependencies for VGB within C:\Program Files\Microsoft SDKs\Kinect\v2.0_1409\Redist\VGB\x86.
We need to figure out the best way to include these dependencies.
Refer to http://aka.ms/vcpython27 for VC installer in case they run into any issues during install
Hi,
I am having trouble running the example script. Every time I run the script in Python (using Spyder, which comes with Anaconda) I get this error:
File "build\bdist.win-amd64\egg\pykinect2\PyKinectV2.py", line 2216, in
AssertionError: 80
Thanks for your help!
For easier distribution of the library, move the sample out as a separate component. This will allow for an easier install package, and the sample can have its own dependency setup that is independent of the main package.
After playing around for a while with the PyKinect2 BodyGame example and doing some research, I couldn't figure out how to get the z value from the x and y coordinates of the body joints.
In #31 you wrote:
You need to get z from depth frame using x and y you got from body_joints_to_depth_space
which makes sense, but this function doesn't exist anymore (in PyKinectRuntime.py).
I've managed to receive the depth frame and to get the x and y coordinates like
joint_points[PyKinectV2.JointType_Head].x
and respectively
joint_points[PyKinectV2.JointType_Head].y
but what is the meaning of these values? (For example x: 1529.32343242342 and y: 125.3425425.)
They don't fit as indices into a standard image matrix (I could crop them, but I'm not sure that's the right way).
Can you explain how I can reach my goal of getting the z value, and would it be possible to write an API documentation, as mentioned in other issues before?
At least a list of functions with a short explanation of how to use them would be enormously useful.
Thanks for the reply
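A note on those numbers: x values around 1529 only fit the 1920x1080 color image, so they look like color-space points; depth-space coordinates from body_joints_to_depth_space fall inside 512x424 and can index the depth frame directly. A hedged sketch of that lookup, assuming the flat uint16 millimetre array that get_last_depth_frame returns:

```python
import numpy as np

DEPTH_WIDTH, DEPTH_HEIGHT = 512, 424

def joint_depth_mm(depth_frame, depth_point):
    """z in millimetres for a joint's depth-space coordinates.

    depth_frame: flat uint16 array of length 512*424 (as returned by
    get_last_depth_frame); depth_point: object with float .x/.y in
    depth-image pixels (they may be fractional, so round first).
    """
    px, py = int(round(depth_point.x)), int(round(depth_point.y))
    if not (0 <= px < DEPTH_WIDTH and 0 <= py < DEPTH_HEIGHT):
        return None            # joint projected outside the depth image
    return int(depth_frame[py * DEPTH_WIDTH + px])
```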
This is all new so we are standardizing on 3.4+
When running the import lines right after installation:
from pykinect2 import PyKinectV2
from pykinect2.PyKinectV2 import *
from pykinect2 import PyKinectRuntime
I got:
File "", line unknown
SyntaxError: unknown encoding for '....../lib/python3.5/site-packages/pykinect2/PyKinectV2.py': mbcs
I even tried manually rewriting the file, but it did not help. Any ideas?
I am using the line of code below to make the up arrow key type "UP" in Notepad:
SendKeys.Send("UP");
This works exactly as expected in my Notepad process, but I need to use the line above in another application, such as a game.
I guessed that the Notepad process should be changed to the process I want it to work on, but that was not right and it does not work in my game process.
Any suggestion would be useful.
I don't know if that's something you can solve on your side, but I just spent the last hour trying to understand why a couple of scripts that rely on PyKinect2 were failing.
It turns out that, despite initializing PyKinectV2 correctly with PyKinectV2.FrameSourceTypes_Infrared, has_new_infrared_frame was always returning false. Moreover, some scripts aborted because PyKinectRuntime did not have the attribute get_last_infrared_frame.
I had previously installed pykinect2 through pip install pykinect2, so guess what pip show pykinect2 returns?
Name: pykinect2
Version: 0.1.0
Summary: Wrapper to expose Kinect for Windows v2 API in Python
Home-page: https://github.com/Kinect/PyKinect2/
Author: Microsoft Corporation
Author-email: [email protected]
License: MIT
Location: c:\programdata\anaconda3\lib\site-packages\pykinect2-0.1.0-py3.6.egg
Requires: numpy, comtypes
Yes. Version 0.1.0.
Solution? Cloning this repository and installing it through easy_install.
pip install comtypes
git clone https://github.com/Kinect/PyKinect2.git
python -m easy_install PyKinect2
TL;DR: If you are using Anaconda's Python distribution on Windows, installing pykinect2 through pip will result in an older version (0.1.0). To fix that, install it from the repository using easy_install.
AttributeError: 'PyKinectRuntime' object has no attribute 'infrared_frame_desc'
The README states that "Kinect for Windows SDK v2" is one of the dependencies. Does this mean that this does not work on Ubuntu 14.04?
Add to the examples folder
Using the following snippet, where mapper is a CoordinateMapper object, results in a "python has stopped working" error message:
CSP_Count=kinect._depth_frame_data_capacity
CSP_type= _ColorSpacePoint * CSP_Count.value
CSP=ctypes.cast(CSP_type(), ctypes.POINTER(_ColorSpacePoint))
mapper.MapDepthFrameToColorSpace(kinect._depth_frame_data_capacity,kinect._depth_frame_data, CSP_Count, CSP)
That snippet conforms to the profile of the function defined in PyKinectV2.py, which in turn conforms to the documentation.
I got the function to perform properly by changing the profile of the method to use an array of c_float instead of _ColorSpacePoint (PyKinectV2.py line 2114) and changing the cast accordingly.
I'm using Python 2.7.9 32 bit (one of the libs I use does not perform well in 64 bit) and the latest kinect v2 sdk (1409)
(edit: corrected a markdown typo)
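The layout equivalence behind that c_float workaround can be sketched without a sensor: an array of two-float structs and a flat c_float array of twice the length describe the same bytes. The ColorSpacePoint class below is a local stand-in mirroring PyKinectV2's _ColorSpacePoint (two 32-bit floats), not the wrapper's own type.

```python
import ctypes

# Local stand-in mirroring PyKinectV2._ColorSpacePoint: two 32-bit floats.
class ColorSpacePoint(ctypes.Structure):
    _fields_ = [("x", ctypes.c_float), ("y", ctypes.c_float)]

DEPTH_CAPACITY = 512 * 424

# The buffer as the comtypes signature declares it ...
points = (ColorSpacePoint * DEPTH_CAPACITY)()
# ... and the same memory reinterpreted as a flat float array, which is
# what the c_float workaround amounts to: the two views are layout-compatible.
floats = ctypes.cast(points, ctypes.POINTER(ctypes.c_float * (DEPTH_CAPACITY * 2)))

points[0].x, points[0].y = 12.5, -3.0
print(floats.contents[0], floats.contents[1])  # same bytes, seen as floats
```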
Hi,
I am new to Python and Kinect. I have been studying the example given here, as there are no other tutorials or documentation. I am able to get the depth of any joint of the bodies tracked by the Kinect, but I would like to track only the closest body (based on the calculated depth). Any ideas or suggestions? Thank you so much in advance.
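One way to pick the closest body is to compare each tracked body's SpineMid camera-space z, since joint positions in the Kinect v2 SDK are metres from the sensor. A hedged sketch, assuming pykinect2-style body objects with is_tracked and joints[...].Position; JOINT_SPINE_MID stands in for PyKinectV2.JointType_SpineMid (value 1 in the SDK's JointType enum):

```python
JOINT_SPINE_MID = 1  # stand-in for PyKinectV2.JointType_SpineMid

def closest_body(bodies):
    """Return the tracked body with the smallest SpineMid z, or None."""
    tracked = [b for b in bodies if b.is_tracked]
    if not tracked:
        return None
    # Position.z is the camera-space distance from the sensor in metres.
    return min(tracked, key=lambda b: b.joints[JOINT_SPINE_MID].Position.z)
```

In the BodyGame example you would call this on self._bodies.bodies and draw only the returned body.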
I am using a Kinect v2 sensor (not v1) and pykinect2 to acquire skeleton data, especially joint XYZ coordinates, using the "Kinect v2 Body Game" example.
Can I get the X, Y and Z coordinates (xc, yc, zc) of a joint (e.g. SpineMid), in a reference frame fixed on the sensor, with the commands
joint_points = self._kinect.body_joints_to_color_space(joints)
depth_points = self._kinect.body_joints_to_depth_space(joints)
xc= depth_points[PyKinectV2.JointType_SpineMid].x
yc= depth_points[PyKinectV2.JointType_SpineMid].y
zc= self._depth[yc* 512 + xc]
Is that correct? Or what will happen if I write:
xc= joint_points[PyKinectV2.JointType_SpineMid].x
yc= joint_points[PyKinectV2.JointType_SpineMid].y
zc= self._depth[yc* 512 + xc]
I am new to this area; help is highly appreciated.
Thanks
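If sensor-fixed XYZ is the goal, note that in the Kinect v2 SDK each joint already carries a camera-space position in metres, so no depth-frame lookup is needed for a tracked joint (and indexing self._depth with color-space coordinates, as in the second variant, would read the wrong pixel). A hedged sketch, where JOINT_SPINE_MID stands in for PyKinectV2.JointType_SpineMid:

```python
JOINT_SPINE_MID = 1  # stand-in for PyKinectV2.JointType_SpineMid

def spine_mid_xyz(joints):
    """Camera-space (x, y, z) of SpineMid, in metres, straight from the joint."""
    p = joints[JOINT_SPINE_MID].Position
    return p.x, p.y, p.z
```

Here joints is the array taken from a tracked body (body.joints in pykinect2), not the projected joint_points or depth_points.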
Hello,
How can I get the point cloud of the whole image, i.e. the x, y and z coordinates of each pixel?
And the corresponding RGB match for each point?
Thanks
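A point cloud can be sketched from the depth frame alone with the pinhole model. Hedged sketch: FX/FY/CX/CY are typical Kinect v2 depth-camera intrinsics, not per-device calibration; with a sensor attached, the coordinate mapper's depth-to-camera-space mapping is the more accurate route, and the colour match comes from mapping the same depth pixels into the colour frame and sampling the RGB image there.

```python
import numpy as np

FX, FY = 366.1, 366.1   # approximate depth-camera focal lengths (pixels)
CX, CY = 258.2, 203.3   # approximate principal point (pixels)
W, H = 512, 424

def depth_to_point_cloud(depth_frame_mm):
    """Flat uint16 depth frame (millimetres) -> (N, 3) camera-space points (metres)."""
    depth = depth_frame_mm.reshape(H, W).astype(np.float32) / 1000.0
    u, v = np.meshgrid(np.arange(W), np.arange(H))   # pixel coordinates
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]       # drop pixels with no depth reading
```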
Method is stubbed out but not yet implemented
I have a Kinect V2 that works with Processing, but I would like to use it with Python 3 Anaconda 64-bit. I realize that 64-bit is not very well tested, but has anyone gotten it to work?
I am doing a project to track human gait. How can I use this library to get images from the feed to create a dataset?
Hi,
How can I save a timestamp along with the XYZ data? I am new to coding and pykinect2, so sorry for the simple question.
Any demo/example would be much appreciated.
Thanks
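One simple pattern is to write one CSV row per sample, pairing a wall-clock timestamp with the joint's x, y, z. Hedged sketch: the column layout is just an illustrative choice, and a StringIO buffer stands in for a real file opened with open("skeleton_log.csv", "w", newline="").

```python
import csv
import io
import time

buf = io.StringIO()                 # stand-in for a real file handle
writer = csv.writer(buf)
writer.writerow(["timestamp", "x", "y", "z"])

def save_sample(x, y, z, timestamp=None):
    """Append one row: seconds-since-epoch timestamp plus joint x, y, z."""
    if timestamp is None:
        timestamp = time.time()
    writer.writerow([timestamp, x, y, z])

save_sample(0.10, -0.25, 1.83)      # call once per frame in your loop
```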
We will have something that looks very similar to this: https://cla2.dotnetfoundation.org/
I've seen a few questions asking similar things but with no clear solution. I'd like to get the corresponding depth from the depth sensor of a given x, y point in the colour image (MapColorFrameToDepthSpace method from the SDK).
How do I access this in python?
Thanks!
I used pip to install, and it worked.
But when I run the examples, the Anaconda prompt shows the error: ImportError: cannot import name PyKinectV2.
How can I solve this?
Do the work to have PyKinect2 show up in the Python Package Index so it's discoverable, easy to use, etc.
Hello
I am trying to change the example PyKinectBodyGame.py to track only one person at a time.
I set the variable KINECT_MAX_BODY_COUNT to 1 in the PyKinectRuntime object, but it does not limit the number of bodies to 1.
I also limited the for loop at line 146 to 0, but the ID number of the detected body does not start at 0 (I can see that there are detected bodies, but none is drawn).
Is the body ID number stored in the camera? And if so, how could I solve this? I basically need to have only one player at a time, without being distracted by other people entering the scene.
Thanks for your time, any feedback is appreciated.
Mick
I am a beginner in Python; I hope somebody can help.
Hi,
I have been going through the example code and trying to figure out various things. Is there any way to track the state of the hand? I can see that there are the variables HandState_Lasso, HandState_Open, HandState_Closed, HandState_Unknown and HandState_NotTracked. But how exactly do we use these to track each hand, or both hands together for that matter?
In the C++ documentation I found out there is a small snippet:
HandState leftHandState;
hr = body->get_HandLeftState(&leftHandState);
if (SUCCEEDED(hr))
{
if (leftHandState == HandState_Closed) {
std::cout << "CLOSED HAND\n";
}
else if (leftHandState == HandState_Open) {
std::cout << "OPEN HAND\n";
}
else if (leftHandState == HandState_Lasso) {
std::cout << "PEW PEW HANDS\n";
}
else if (leftHandState == HandState_NotTracked) {
std::cout << "HAND IS NOT TRACKED\n";
}
else if (leftHandState == HandState_Unknown) {
std::cout << "HANDS STATE IS UNKNOWN\n";
}
}
Is there anything similar in Pykinect2? I found the C++ snippet online and haven't tried it out.
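The C++ snippet translates fairly directly. Hedged sketch: pykinect2's body objects expose hand_left_state / hand_right_state, and the HandState_* constants are plain integers (the values below follow the SDK's HandState enum); the local constants and the describe_left_hand helper are illustrative stand-ins, not wrapper API.

```python
# Stand-ins for the PyKinectV2.HandState_* constants (SDK enum values).
HandState_Unknown, HandState_NotTracked = 0, 1
HandState_Open, HandState_Closed, HandState_Lasso = 2, 3, 4

HAND_STATE_NAMES = {
    HandState_Unknown: "HANDS STATE IS UNKNOWN",
    HandState_NotTracked: "HAND IS NOT TRACKED",
    HandState_Open: "OPEN HAND",
    HandState_Closed: "CLOSED HAND",
    HandState_Lasso: "PEW PEW HANDS",
}

def describe_left_hand(body):
    """Translate a body's left-hand state into the labels from the C++ sample."""
    return HAND_STATE_NAMES.get(body.hand_left_state, "HANDS STATE IS UNKNOWN")
```

You would call this on each tracked body from get_last_body_frame().bodies, once per frame.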
python -m pip install setuptools --upgrade
depthframe = self._kinect.get_last_depth_frame()
ptr_depth = np.ctypeslib.as_ctypes(depthframe.flatten())
L = depthframe.size
TYPE_CameraSpacePointArray = PyKinectV2._CameraSpacePoint * L
csps1 = TYPE_CameraSpacePointArray()
error_state = self._kinect._mapper.MapColorFrameToCameraSpace(L, ptr_depth, L, csps1)
It doesn't work.
Solved... My bad: the size of the CameraSpacePoint array should be equal to the color frame size.
S = 1080*1920
TYPE_CameraSpacePointArray = PyKinectV2._CameraSpacePoint * S
csps1 = TYPE_CameraSpacePointArray()
error_state = self._kinect._mapper.MapColorFrameToCameraSpace(L, ptr_depth, S, csps1)
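Once the mapper has filled csps1, the ctypes buffer can be viewed as an (N, 3) numpy array without copying. Hedged sketch: CameraSpacePoint below is a local stand-in mirroring PyKinectV2._CameraSpacePoint (three 32-bit floats per point), and S is shrunk from 1080 * 1920 so the example runs standalone.

```python
import ctypes

import numpy as np

# Local stand-in mirroring PyKinectV2._CameraSpacePoint.
class CameraSpacePoint(ctypes.Structure):
    _fields_ = [("x", ctypes.c_float), ("y", ctypes.c_float), ("z", ctypes.c_float)]

S = 4                                   # tiny stand-in for 1080 * 1920
csps1 = (CameraSpacePoint * S)()
csps1[2].x, csps1[2].y, csps1[2].z = 0.5, -0.1, 2.0

# Zero-copy view: each row is one point's (x, y, z) in metres.
xyz = np.frombuffer(csps1, dtype=np.float32).reshape(S, 3)
```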
Method is stubbed out but not yet implemented
I am a newbie to pykinect2, and there are no reliable sources available to learn about the functions included in the library. The example included in the library doesn't help much, and there is no help available when we want to use the depth and IR streams. Moreover, there are significant changes in the library compared to PyKinect, so the examples available for it don't help either.
I would be grateful for any help!
File "C:\Users\David\Anaconda2\lib\site-packages\pykinect2\PyKinectV2.py", line 2216, in
assert sizeof(tagSTATSTG) == 72, sizeof(tagSTATSTG)
First of all, thanks for the code. I am using your code, and I am just wondering whether I can retrieve each joint coordinate (maybe as x and y values, or x, y and z values).
Big thanks in advance. Looking forward to any response.
Add to the examples folder
My system is Ubuntu 18.04; I installed libfreenect2, and debugging works. But I don't know how to save pictures locally, especially using opencv-python.
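The usual step before saving is scaling the depth frame to 8 bits so ordinary image writers accept it. Hedged sketch: the clip range and frame shape are illustrative (libfreenect2 depth frames are 512x424; adjust the dtype and shape to what your pipeline actually delivers), and the cv2.imwrite call is left as a comment so the snippet runs without opencv-python installed.

```python
import numpy as np

W, H = 512, 424
MAX_MM = 4500.0                         # illustrative clip range for display

def depth_to_uint8(depth_mm):
    """Scale a flat depth frame (millimetres) to an 8-bit image for saving."""
    depth = depth_mm.reshape(H, W).astype(np.float32)
    depth = np.clip(depth, 0, MAX_MM) / MAX_MM
    return (depth * 255).astype(np.uint8)

frame = np.full(W * H, 2250, dtype=np.uint16)   # synthetic stand-in frame
img8 = depth_to_uint8(frame)
# With opencv-python installed, save it with:
# cv2.imwrite("depth.png", img8)
```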
Method is stubbed out but not yet implemented