Sample records for Microsoft Kinect sensor

  1. Calibration of Kinect for Xbox One and Comparison between the Two Generations of Microsoft Sensors

    PubMed Central

    Pagliari, Diana; Pinto, Livio

    2015-01-01

    In recent years, the videogame industry has been characterized by a great boost in gesture recognition and motion tracking, following the increasing demand for immersive game experiences. The Microsoft Kinect sensor allows the acquisition of RGB, IR and depth images at a high frame rate. Because of the complementary nature of the information provided, it has proved an attractive resource for researchers with very different backgrounds. In summer 2014, Microsoft launched a new generation of Kinect on the market, based on time-of-flight technology. This paper proposes a calibration of the Kinect for Xbox One imaging sensors, focusing on the depth camera. A mathematical model that describes the error committed by the sensor as a function of the distance between the sensor and the object has been estimated. All the analyses presented here have been conducted for both generations of Kinect, in order to quantify the improvements that characterize each imaging sensor. Experimental results show that the quality of the delivered model improved when applying the proposed calibration procedure, which is applicable to both point clouds and the mesh models created with the Microsoft Fusion Libraries. PMID:26528979
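
    As an illustrative sketch of the distance-dependent error-model idea described in this record (not the authors' actual model or coefficients), the following snippet fits a quadratic error curve to paired reference/measured distances and applies the correction to the depth coordinate of a point cloud; all values are made up.

    ```python
    # Sketch: fit a distance-dependent depth-error model and apply it to depth values,
    # assuming paired (reference, measured) distances are available from a calibration run.
    # Illustrative stand-in only, not the calibration model estimated in the paper.
    import numpy as np

    reference = np.array([0.8, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])          # true distances (m)
    measured = np.array([0.81, 1.02, 1.53, 2.05, 2.57, 3.10, 3.63, 4.17])   # Kinect readings (m)

    error = measured - reference
    coeffs = np.polyfit(measured, error, deg=2)      # error modelled as a quadratic in distance

    def correct_depth(depth_m):
        """Subtract the predicted error from raw depth values (metres)."""
        return depth_m - np.polyval(coeffs, depth_m)

    # Example: correct the Z column of an N x 3 point cloud
    cloud = np.random.uniform(0.8, 4.0, size=(1000, 3))
    cloud[:, 2] = correct_depth(cloud[:, 2])
    ```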

  2. Calibration of Kinect for Xbox One and Comparison between the Two Generations of Microsoft Sensors.

    PubMed

    Pagliari, Diana; Pinto, Livio

    2015-10-30

    In recent years, the videogame industry has been characterized by a great boost in gesture recognition and motion tracking, following the increasing demand for immersive game experiences. The Microsoft Kinect sensor allows the acquisition of RGB, IR and depth images at a high frame rate. Because of the complementary nature of the information provided, it has proved an attractive resource for researchers with very different backgrounds. In summer 2014, Microsoft launched a new generation of Kinect on the market, based on time-of-flight technology. This paper proposes a calibration of the Kinect for Xbox One imaging sensors, focusing on the depth camera. A mathematical model that describes the error committed by the sensor as a function of the distance between the sensor and the object has been estimated. All the analyses presented here have been conducted for both generations of Kinect, in order to quantify the improvements that characterize each imaging sensor. Experimental results show that the quality of the delivered model improved when applying the proposed calibration procedure, which is applicable to both point clouds and the mesh models created with the Microsoft Fusion Libraries.

  3. The validity of the first and second generation Microsoft Kinect™ for identifying joint center locations during static postures.

    PubMed

    Xu, Xu; McGorry, Raymond W

    2015-07-01

    The Kinect™ sensor released by Microsoft is a low-cost, portable, and marker-less motion tracking system for the video game industry. Since the first generation Kinect sensor was released in 2010, many studies have been conducted to examine the validity of this sensor when used to measure body movement in different research areas. In 2014, Microsoft released the second generation Kinect sensor for computer use, with better depth sensor resolution. However, very few studies have performed a direct comparison between all the Kinect sensor-identified joint center locations and their corresponding motion tracking system-identified counterparts, the result of which may provide some insight into the error of Kinect-identified segment lengths and joint angles, as well as the feasibility of applying inverse dynamics to Kinect-identified joint centers. The purpose of the current study is to first propose a method to align the coordinate system of the Kinect sensor with respect to the global coordinate system of a motion tracking system, and then to examine the accuracy of the Kinect sensor-identified coordinates of joint locations during 8 standing and 8 sitting postures of daily activities. The results indicate the proposed alignment method can effectively align the Kinect sensor with respect to the motion tracking system. The accuracy level of the Kinect-identified joint center location is posture-dependent and joint-dependent. For the upright standing posture, the average error across all the participants and all Kinect-identified joint centers is 76 mm and 87 mm for the first and second generation Kinect sensor, respectively. In general, standing postures can be identified with better accuracy than sitting postures, and the identification accuracy of the joints of the upper extremities is better than for the lower extremities. This result may provide some information regarding the feasibility of using the Kinect sensor in future studies. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
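
    The alignment procedure itself is not reproduced in the abstract; as a hedged illustration, a common way to align the Kinect coordinate system with a motion tracking system's global frame is a least-squares rigid transform (Kabsch/Procrustes) estimated from corresponding points, sketched below with synthetic data.

    ```python
    # Sketch of a least-squares rigid alignment between joint centers expressed in the
    # Kinect frame and the same points in the motion tracking system's frame.
    # Illustrative only; not necessarily the paper's exact method.
    import numpy as np

    def rigid_transform(src, dst):
        """Return R (3x3) and t (3,) minimizing ||R @ src_i + t - dst_i||^2."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of centred points
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection solution
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst_c - R @ src_c
        return R, t

    # Corresponding 3D points seen by both systems (synthetic example)
    kinect_pts = np.random.rand(10, 3)
    mocap_pts = (kinect_pts @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T) + [0.5, 0.2, 1.0]
    R, t = rigid_transform(kinect_pts, mocap_pts)
    aligned = kinect_pts @ R.T + t                   # Kinect joints expressed in the mocap frame
    ```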

  4. Enhanced computer vision with Microsoft Kinect sensor: a review.

    PubMed

    Han, Jungong; Shao, Ling; Xu, Dong; Shotton, Jamie

    2013-10-01

    With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers.

  5. Low cost sensing of vegetation volume and structure with a Microsoft Kinect sensor

    NASA Astrophysics Data System (ADS)

    Azzari, G.; Goulden, M.

    2011-12-01

    The market for videogames and digital entertainment has decreased the cost of advanced technology to affordable levels. The Microsoft Kinect sensor for Xbox 360 is an infrared structured-light depth camera designed to track body position and movement at the level of individual joints. Using open source drivers and libraries, we acquired point clouds of vegetation directly from the Kinect sensor. The data were filtered for outliers, co-registered, and cropped to isolate the plant of interest from the surroundings and soil. The volume of single plants was then estimated with several techniques, including fitting with solid shapes (cylinders, spheres, boxes), voxel counts, and 3D convex/concave hulls. Preliminary results are presented here. The volume of a series of wild artichoke plants was measured from nadir using a Kinect on a 3 m-tall tower. The calculated volumes were compared with harvested biomass; comparisons and derived allometric relations will be presented, along with examples of the acquired point clouds. The Kinect sensor shows promise for ground-based, automated biomass measurement systems, and possibly for comparison/validation of remotely sensed LiDAR.
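
    As a minimal sketch of one of the volume techniques mentioned (voxel counts), the snippet below bins a filtered plant point cloud into a regular grid and sums the occupied voxel volumes; the voxel size and the synthetic cloud are assumptions, not values from the study.

    ```python
    # Sketch: voxel-count volume estimate for a (already filtered and cropped) point cloud.
    import numpy as np

    def voxel_volume(points, voxel_size=0.02):
        """Estimate volume (m^3) of a point cloud by counting occupied voxels."""
        idx = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
        occupied = np.unique(idx, axis=0).shape[0]              # number of distinct voxels
        return occupied * voxel_size ** 3

    # Example with a synthetic cloud (coordinates in metres)
    plant = np.random.uniform(0.0, 0.5, size=(5000, 3))
    print("approx. volume:", voxel_volume(plant), "m^3")
    ```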

  6. Validity of the Microsoft Kinect for measurement of neck angle: comparison with electrogoniometry.

    PubMed

    Allahyari, Teimour; Sahraneshin Samani, Ali; Khalkhali, Hamid-Reza

    2017-12-01

    Considering the importance of evaluating working postures, many techniques and tools have been developed to identify and eliminate awkward postures and prevent musculoskeletal disorders (MSDs). The introduction of the Microsoft Kinect sensor, which is a low-cost, easy to set up and markerless motion capture system, offers promising possibilities for postural studies. Considering the Kinect's special ability in head-pose and facial-expression tracking and complexity of cervical spine movements, this study aimed to assess concurrent validity of the Microsoft Kinect against an electrogoniometer for neck angle measurements. A special software program was developed to calculate the neck angle based on Kinect skeleton tracking data. Neck angles were measured simultaneously by electrogoniometer and the developed software program in 10 volunteers. The results were recorded in degrees and the time required for each method was also measured. The Kinect's ability to identify body joints was reliable and precise. There was moderate to excellent agreement between the Kinect-based method and the electrogoniometer (paired-sample t test, p ≥ 0.25; intraclass correlation for test-retest reliability, ≥0.75). Kinect-based measurement was much faster and required less equipment, but accurate measurement with Microsoft Kinect was only possible if the participant was in its field of view.
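
    The study's own software is only summarized above; as an assumed illustration, a neck angle can be derived from Kinect skeleton joints as the angle between the neck-to-head vector and the trunk vector, as sketched below (the joint names and the exact angle definition are assumptions, not the authors' implementation).

    ```python
    # Sketch: neck flexion angle from Kinect skeleton joint positions.
    import numpy as np

    def angle_between(u, v):
        """Angle in degrees between two 3D vectors."""
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    # Hypothetical 3D joint positions (metres) from one skeleton frame
    head = np.array([0.02, 0.65, 2.00])
    neck = np.array([0.00, 0.50, 2.02])
    spine_base = np.array([0.00, 0.00, 2.05])

    neck_angle = angle_between(head - neck, neck - spine_base)
    print(f"neck angle: {neck_angle:.1f} deg")
    ```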

  7. Kinect as a Tool for Gait Analysis: Validation of a Real-Time Joint Extraction Algorithm Working in Side View

    PubMed Central

    Cippitelli, Enea; Gasparrini, Samuele; Spinsante, Susanna; Gambi, Ennio

    2015-01-01

    The Microsoft Kinect sensor has gained attention as a tool for gait analysis for several years. Despite the many advantages the sensor provides, however, the lack of a native capability to extract joints from the side view of a human body still limits the adoption of the device to a number of relevant applications. This paper presents an algorithm to locate and estimate the trajectories of up to six joints extracted from the side depth view of a human body captured by the Kinect device. The algorithm is then applied to extract data that can be exploited to provide an objective score for the “Get Up and Go Test”, which is typically adopted for gait analysis in rehabilitation fields. Starting from the depth-data stream provided by the Microsoft Kinect sensor, the proposed algorithm relies on anthropometric models only, to locate and identify the positions of the joints. Differently from machine learning approaches, this solution avoids complex computations, which usually require significant resources. The reliability of the information about the joint position output by the algorithm is evaluated by comparison to a marker-based system. Tests show that the trajectories extracted by the proposed algorithm adhere to the reference curves better than the ones obtained from the skeleton generated by the native applications provided within the Microsoft Kinect (Microsoft Corporation, Redmond, WA, USA, 2013) and OpenNI (OpenNI organization, Tel Aviv, Israel, 2013) Software Development Kits. PMID:25594588

  8. Using the Microsoft Kinect™ to assess 3-D shoulder kinematics during computer use.

    PubMed

    Xu, Xu; Robertson, Michelle; Chen, Karen B; Lin, Jia-Hua; McGorry, Raymond W

    2017-11-01

    Shoulder joint kinematics has been used as a representative indicator to investigate musculoskeletal symptoms among computer users in office ergonomics studies. The traditional measurement of shoulder kinematics normally requires a laboratory-based motion tracking system, which limits field studies. In the current study, a portable, low cost, and marker-less Microsoft Kinect™ sensor was examined for its feasibility for shoulder kinematics measurement during computer tasks. Eleven healthy participants performed a standardized computer task, and their shoulder kinematics data were measured by a Kinect sensor and a motion tracking system concurrently. The results indicated that placing the Kinect sensor in front of the participants yielded more accurate shoulder kinematics measurements than placing the Kinect sensor 15° or 30° to one side. The results also showed that the Kinect sensor provided a better estimate of shoulder flexion/extension, compared with shoulder adduction/abduction and shoulder axial rotation. The RMSE of the front-placed Kinect sensor for shoulder flexion/extension was less than 10° for both the right and the left shoulder. The measurement error of the front-placed Kinect sensor for shoulder adduction/abduction was approximately 10° to 15°, and the magnitude of error was proportional to the magnitude of that joint angle. After calibration, the RMSE for shoulder adduction/abduction was less than 10°, based on an independent dataset of 5 additional participants. For shoulder axial rotation, the RMSE of the front-placed Kinect sensor ranged from approximately 15° to 30°. The results of the study suggest that the Kinect sensor can provide some insight into shoulder kinematics for improving office ergonomics. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. SU-E-I-92: Accuracy Evaluation of Depth Data in Microsoft Kinect.

    PubMed

    Kozono, K; Aoki, M; Ono, M; Kamikawa, Y; Arimura, H; Toyofuku, F

    2012-06-01

    Microsoft Kinect has potential for use in real-time patient position monitoring in diagnostic radiology and radiotherapy. We evaluated the accuracy of the depth image data and the device-to-device variation in various conditions simulating clinical applications in a hospital. The Kinect sensor consists of an infrared depth camera and an RGB camera. We developed a computer program using OpenNI and OpenCV for measuring quantitative distance data. The program displays the depth image obtained from the Kinect sensor on the screen, and the Cartesian coordinates at an arbitrary point selected by mouse-clicking can be measured. A rectangular box without luster (300 × 198 × 50 mm³) was used as the measuring object. The object was placed on the floor at various distances from the sensor, ranging from 0 to 400 cm in increments of 10 cm, and depth data were measured for 10 points on the planar surface of the box. The measured distance data were calibrated using the least squares method. The device-to-device variations were evaluated using five Kinect sensors. There was an almost linear relationship between true and measured values. The Kinect sensor was unable to measure at distances of less than 50 cm from the sensor. It was found that distance data calibration was necessary for each sensor. The device-to-device variation error for five Kinect sensors was within 0.46% at distances from 50 cm to 2 m from the sensor. The maximum deviation of the distance data after calibration was 1.1 mm at distances from 50 to 150 cm. The overall average error of the five Kinect sensors was 0.18 mm at distances from 50 to 150 cm. The Kinect sensor has a distance accuracy of about 1 mm if each device is properly calibrated. This sensor will be usable for positioning of patients in diagnostic radiology and radiotherapy. © 2012 American Association of Physicists in Medicine.
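
    A minimal sketch of the per-sensor least-squares distance calibration described above, assuming arrays of true and Kinect-reported distances for each device (the readings are fabricated for illustration):

    ```python
    # Sketch: linear least-squares calibration per sensor and residual statistics.
    import numpy as np

    true_cm = np.arange(50, 210, 10, dtype=float)             # 50-200 cm test range
    sensors = {                                               # hypothetical readings per device
        "kinect_A": true_cm * 1.004 - 0.3 + np.random.normal(0, 0.05, true_cm.size),
        "kinect_B": true_cm * 0.997 + 0.5 + np.random.normal(0, 0.05, true_cm.size),
    }

    for name, meas in sensors.items():
        slope, intercept = np.polyfit(meas, true_cm, deg=1)   # calibration: true = a*meas + b
        calibrated = slope * meas + intercept
        residual = calibrated - true_cm
        print(f"{name}: max deviation after calibration = {np.abs(residual).max():.2f} cm")
    ```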

  10. A Review on Technical and Clinical Impact of Microsoft Kinect on Physical Therapy and Rehabilitation

    PubMed Central

    Mousavi Hondori, Hossein; Khademi, Maryam

    2014-01-01

    This paper reviews the technical and clinical impact of the Microsoft Kinect in physical therapy and rehabilitation. It covers studies on patients with neurological disorders, including stroke, Parkinson's disease, cerebral palsy, and MS, as well as elderly patients. Search results in PubMed and Google Scholar reveal increasing interest in using the Kinect in medical applications. Relevant papers are reviewed and divided into three groups: (1) papers which evaluated the Kinect's accuracy and reliability, (2) papers which used the Kinect for a rehabilitation system and provided clinical evaluation involving patients, and (3) papers which proposed a Kinect-based system for rehabilitation but fell short of providing clinical validation. Finally, to serve as a technical comparison and to help future rehabilitation system design, other sensors similar to the Kinect are reviewed. PMID:27006935

  11. A Review on Technical and Clinical Impact of Microsoft Kinect on Physical Therapy and Rehabilitation.

    PubMed

    Mousavi Hondori, Hossein; Khademi, Maryam

    2014-01-01

    This paper reviews the technical and clinical impact of the Microsoft Kinect in physical therapy and rehabilitation. It covers studies on patients with neurological disorders, including stroke, Parkinson's disease, cerebral palsy, and MS, as well as elderly patients. Search results in PubMed and Google Scholar reveal increasing interest in using the Kinect in medical applications. Relevant papers are reviewed and divided into three groups: (1) papers which evaluated the Kinect's accuracy and reliability, (2) papers which used the Kinect for a rehabilitation system and provided clinical evaluation involving patients, and (3) papers which proposed a Kinect-based system for rehabilitation but fell short of providing clinical validation. Finally, to serve as a technical comparison and to help future rehabilitation system design, other sensors similar to the Kinect are reviewed.

  12. Harnessing the potential of the Kinect sensor for psychiatric rehabilitation for stroke survivors.

    PubMed

    Zhang, Melvyn W B; Ho, Roger C M

    2016-03-04

    Domingues et al. in their recent article described how low-cost sensors such as the Microsoft Kinect could be utilized for the measurement of various anthropometric measures. With the recent advances in sensors and sensor-based technology, along with the rapid advancement in E-health, the Microsoft Kinect has been increasingly recognized by researchers and bioengineers as a low-cost sensor that could help in the collection of various measurements and data. A recent systematic review by Da Gama et al. (2015) has looked into the potential of the Kinect for motor rehabilitation. The systematic review highlighted the tremendous potential of the sensor and clearly stated that there is a need for further studies evaluating its potential for rehabilitation. Zhang et al. (2015) in their recent article advocated several reasons why biosensors are pertinent for stroke rehabilitation. Of note, recent studies by the World Health Organization have highlighted that stroke is a growing epidemic. Aside from the utilization of smartphone-based sensors for stroke rehabilitation, as proposed by Zhang et al. (2015), researchers have also investigated the use of other low-cost alternatives, such as the Kinect, to facilitate the rehabilitation of stroke survivors. Whilst it may seem that there has been quite extensive evaluation of the Kinect sensor for stroke rehabilitation, one core area that bioengineers and researchers have not looked into is that of the psychiatric and mental health issues that might at times arise following a stroke. It is thus the aim of this letter to address how such a sensor could be tapped for psychiatric rehabilitation amongst stroke survivors. To this end, the authors have conceptualized a game that could help in cognitive remediation for stroke survivors using low-cost Kinect sensors.

  13. Kinect Fusion improvement using depth camera calibration

    NASA Astrophysics Data System (ADS)

    Pagliari, D.; Menna, F.; Roncella, R.; Remondino, F.; Pinto, L.

    2014-06-01

    Scene 3D modelling, gesture recognition and motion tracking are fields in rapid and continuous development, driven by the growing demand for interactivity in the video-game and e-entertainment market. Starting from the idea of creating a sensor that allows users to play without having to hold any remote controller, the Microsoft Kinect device was created. The Kinect has attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for the Kinect in order to use it not only as a game device but as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow using the device as a 3D scanner, producing meshed polygonal models of a static scene simply by moving the Kinect around. A drawback of this sensor is the geometric quality of the delivered data and the low repeatability. For this reason the authors carried out an investigation to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the IR camera interior and exterior orientation parameters, the Fusion Libraries are corrected and new reconstruction software is created to produce more accurate models.

  14. Design and test of an automated version of the modified Jebsen test of hand function using Microsoft Kinect.

    PubMed

    Simonsen, Daniel; Nielsen, Ida F; Spaich, Erika G; Andersen, Ole K

    2017-05-02

    The present paper describes the design and evaluation of an automated version of the Modified Jebsen Test of Hand Function (MJT) based on the Microsoft Kinect sensor. The MJT was administered twice to 11 chronic stroke subjects with varying degrees of hand function deficits. The test times of the MJT were evaluated manually by a therapist using a stopwatch, and automatically using the Microsoft Kinect sensor. The ground truth times were assessed based on inspection of the video-recordings. The agreement between the methods was evaluated along with the test-retest performance. The results from Bland-Altman analysis showed better agreement between the ground truth times and the automatic MJT time evaluations compared to the agreement between the ground truth times and the times estimated by the therapist. The results from the test-retest performance showed that the subjects significantly improved their performance in several subtests of the MJT, indicating a practice effect. The results from the test showed that the Kinect can be used for automating the MJT.
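
    For readers unfamiliar with the agreement analysis used here, the following sketch computes the Bland-Altman bias and 95% limits of agreement between two timing methods; the times are fabricated, not the study's data.

    ```python
    # Sketch: Bland-Altman bias and 95% limits of agreement between two timing methods.
    import numpy as np

    ground_truth = np.array([12.3, 8.7, 15.1, 9.9, 11.4, 20.2, 7.8])   # seconds (video)
    kinect_times = np.array([12.1, 8.9, 15.4, 9.7, 11.6, 20.0, 8.1])   # seconds (Kinect)

    diff = kinect_times - ground_truth
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)          # half-width of the 95% limits of agreement
    print(f"bias = {bias:.2f} s, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}] s")
    ```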

  15. Vertical dynamic deflection measurement in concrete beams with the Microsoft Kinect.

    PubMed

    Qi, Xiaojuan; Lichti, Derek; El-Badry, Mamdouh; Chow, Jacky; Ang, Kathleen

    2014-02-19

    The Microsoft Kinect is arguably the most popular RGB-D camera currently on the market, partially due to its low cost. It offers many advantages for the measurement of dynamic phenomena since it can directly measure three-dimensional coordinates of objects at video frame rate using a single sensor. This paper presents the results of an investigation into the development of a Microsoft Kinect-based system for measuring the deflection of reinforced concrete beams subjected to cyclic loads. New segmentation methods for object extraction from the Kinect's depth imagery and vertical displacement reconstruction algorithms have been developed and implemented to reconstruct the time-dependent displacement of concrete beams tested in laboratory conditions. The results demonstrate that the amplitude and frequency of the vertical displacements can be reconstructed with submillimetre and milliHz-level precision and accuracy, respectively.
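
    As generic post-processing (not the paper's segmentation or reconstruction algorithms), the dominant frequency and amplitude of a reconstructed vertical-displacement time series can be recovered with an FFT, e.g.:

    ```python
    # Sketch: dominant frequency and amplitude of a displacement time series via FFT.
    import numpy as np

    fs = 30.0                                              # Kinect depth frame rate (Hz)
    t = np.arange(0, 20, 1 / fs)
    displacement = 0.0008 * np.sin(2 * np.pi * 1.5 * t)    # synthetic 0.8 mm, 1.5 Hz signal

    spectrum = np.fft.rfft(displacement - displacement.mean())
    freqs = np.fft.rfftfreq(displacement.size, d=1 / fs)
    k = np.argmax(np.abs(spectrum))
    amplitude = 2 * np.abs(spectrum[k]) / displacement.size   # sine amplitude from bin magnitude

    print(f"dominant frequency: {freqs[k]:.2f} Hz, amplitude: {amplitude * 1000:.2f} mm")
    ```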

  16. Vertical Dynamic Deflection Measurement in Concrete Beams with the Microsoft Kinect

    PubMed Central

    Qi, Xiaojuan; Lichti, Derek; El-Badry, Mamdouh; Chow, Jacky; Ang, Kathleen

    2014-01-01

    The Microsoft Kinect is arguably the most popular RGB-D camera currently on the market, partially due to its low cost. It offers many advantages for the measurement of dynamic phenomena since it can directly measure three-dimensional coordinates of objects at video frame rate using a single sensor. This paper presents the results of an investigation into the development of a Microsoft Kinect-based system for measuring the deflection of reinforced concrete beams subjected to cyclic loads. New segmentation methods for object extraction from the Kinect's depth imagery and vertical displacement reconstruction algorithms have been developed and implemented to reconstruct the time-dependent displacement of concrete beams tested in laboratory conditions. The results demonstrate that the amplitude and frequency of the vertical displacements can be reconstructed with submillimetre and milliHz-level precision and accuracy, respectively. PMID:24556668

  17. Microsoft Kinect Sensor Evaluation

    NASA Technical Reports Server (NTRS)

    Billie, Glennoah

    2011-01-01

    My summer project evaluates the Kinect game sensor input/output and its suitability to perform as part of a human interface for a spacecraft application. The primary objective is to evaluate, understand, and communicate the Kinect system's ability to sense and track fine (human) position and motion. The project will analyze the performance characteristics and capabilities of this game system hardware and its applicability for gross and fine motion tracking. The software development kit for the Kinect was also investigated and some experimentation has begun to understand its development environment. To better understand the software development of the Kinect game sensor, research in hacking communities has brought a better understanding of the potential for a wide range of personal computer (PC) application development. The project also entails the disassembly of the Kinect game sensor. This analysis would involve disassembling a sensor, photographing it, and identifying components and describing its operation.

  18. Developing movement recognition application with the use of Shimmer sensor and Microsoft Kinect sensor.

    PubMed

    Guzsvinecz, Tibor; Szucs, Veronika; Sik Lányi, Cecília

    2015-01-01

    Nowadays the development of virtual reality-based applications is one of the most dynamically growing areas. These applications have a wide user base, and more and more devices providing several kinds of user interaction are available on the market. Devices that do not need to be held in the hand have potential for use in educational, entertainment and rehabilitation applications. The purpose of this paper is to examine the precision and the efficiency of non-handheld devices for user interaction in virtual reality-based applications. The first task of the developed application is to support the rehabilitation process of stroke patients in their homes. A newly developed application is introduced in this paper, which uses two popular devices, the Shimmer sensor and the Microsoft Kinect sensor. To identify and to validate the actions of the user, these sensors work together in parallel. To solve the problem, the application can record a teaching pattern, and then the software compares this pattern to the action of the user. The goal of the current research is to examine the extent of the difference in the recognition of the gestures, i.e., how precisely the two sensors identify the predefined actions. This could affect the rehabilitation process of stroke patients and influence the efficiency of the rehabilitation. The application was developed in the C# programming language and uses the original Shimmer connecting application as a base. With this application it is possible to teach five different movements each with the Shimmer and the Microsoft Kinect sensors. The application can recognize these actions at any later time. The application uses a file-based database and the runtime memory of the application to store the saved data in order to access the actions more easily. The conclusion is that much more precise data were collected from the Microsoft Kinect sensor than from the Shimmer sensors.
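
    The application's recognition logic is only summarized above; as a hedged stand-in, the sketch below compares a performed movement against a recorded teaching pattern by resampling both joint trajectories to a common length and thresholding the RMS distance.

    ```python
    # Sketch: template comparison of a taught gesture and a performed attempt.
    import numpy as np

    def resample(traj, n=100):
        """Linearly resample a (T, 3) trajectory to n samples per axis."""
        old = np.linspace(0, 1, len(traj))
        new = np.linspace(0, 1, n)
        return np.column_stack([np.interp(new, old, traj[:, k]) for k in range(traj.shape[1])])

    def matches(pattern, attempt, tol=0.05):
        """Return (matched, rms): matched is True if RMS point-wise distance is below tol (metres)."""
        a, b = resample(pattern), resample(attempt)
        rms = np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))
        return rms < tol, rms

    pattern = np.cumsum(np.random.normal(0, 0.01, size=(120, 3)), axis=0)   # taught gesture
    attempt = pattern[::2] + np.random.normal(0, 0.005, size=(60, 3))       # noisy repetition
    ok, rms = matches(pattern, attempt)
    print("recognized" if ok else "rejected", f"(rms = {rms:.3f} m)")
    ```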

  19. Detection of Cardiopulmonary Activity and Related Abnormal Events Using Microsoft Kinect Sensor.

    PubMed

    Al-Naji, Ali; Chahl, Javaan

    2018-03-20

    Monitoring of cardiopulmonary activity is a challenge when attempted under adverse conditions, including different sleeping postures, environmental settings, and an unclear region of interest (ROI). This study proposes an efficient remote imaging system based on a Microsoft Kinect v2 sensor for the observation of cardiopulmonary signals and the detection of related abnormal cardiopulmonary events (e.g., tachycardia, bradycardia, tachypnea, bradypnea, and central apnoea) in many possible sleeping postures within varying environmental settings, including total darkness and whether or not the subject is covered by a blanket. The proposed system extracts the signal from the abdominal-thoracic region, where cardiopulmonary activity is most pronounced, using a real-time image sequence captured by the Kinect v2 sensor. The proposed system shows promising results in any sleep posture, regardless of illumination conditions and unclear ROI, even in the presence of a blanket, whilst being reliable, safe, and cost-effective.
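
    As an assumed illustration of the signal-extraction step (the paper's full processing chain is not reproduced here), a respiratory rate can be estimated from the mean depth of an abdominal-thoracic ROI across frames by counting breathing cycles; the signal below is synthetic.

    ```python
    # Sketch: respiratory rate from the mean ROI depth across frames (zero-crossing count).
    import numpy as np

    fs = 30.0                                        # Kinect v2 frame rate (Hz)
    t = np.arange(0, 60, 1 / fs)                     # one minute of frames
    roi_mean_depth = 1.50 + 0.004 * np.sin(2 * np.pi * 0.25 * t)   # 15 breaths/min, 4 mm excursion

    signal = roi_mean_depth - roi_mean_depth.mean()
    crossings = np.where((signal[:-1] < 0) & (signal[1:] >= 0))[0]   # upward zero crossings
    breaths_per_min = len(crossings) * 60.0 / t[-1]
    print(f"estimated respiratory rate: {breaths_per_min:.1f} breaths/min")
    ```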

  20. Detection of Cardiopulmonary Activity and Related Abnormal Events Using Microsoft Kinect Sensor

    PubMed Central

    Al-Naji, Ali; Chahl, Javaan

    2018-01-01

    Monitoring of cardiopulmonary activity is a challenge when attempted under adverse conditions, including different sleeping postures, environmental settings, and an unclear region of interest (ROI). This study proposes an efficient remote imaging system based on a Microsoft Kinect v2 sensor for the observation of cardiopulmonary signals and the detection of related abnormal cardiopulmonary events (e.g., tachycardia, bradycardia, tachypnea, bradypnea, and central apnoea) in many possible sleeping postures within varying environmental settings, including total darkness and whether or not the subject is covered by a blanket. The proposed system extracts the signal from the abdominal-thoracic region, where cardiopulmonary activity is most pronounced, using a real-time image sequence captured by the Kinect v2 sensor. The proposed system shows promising results in any sleep posture, regardless of illumination conditions and unclear ROI, even in the presence of a blanket, whilst being reliable, safe, and cost-effective. PMID:29558414

  21. Accuracy of the Microsoft Kinect for measuring gait parameters during treadmill walking.

    PubMed

    Xu, Xu; McGorry, Raymond W; Chou, Li-Shan; Lin, Jia-Hua; Chang, Chien-Chi

    2015-07-01

    The measurement of gait parameters normally requires motion tracking systems combined with force plates, which limits the measurement to laboratory settings. In some recent studies, the possibility of using the portable, low cost, and marker-less Microsoft Kinect sensor to measure gait parameters on over-ground walking has been examined. The current study further examined the accuracy level of the Kinect sensor for assessment of various gait parameters during treadmill walking under different walking speeds. Twenty healthy participants walked on the treadmill and their full body kinematics data were measured by a Kinect sensor and a motion tracking system, concurrently. Spatiotemporal gait parameters and knee and hip joint angles were extracted from the two devices and were compared. The results showed that the accuracy levels when using the Kinect sensor varied across the gait parameters. Average heel strike frame errors were 0.18 and 0.30 frames for the right and left foot, respectively, while average toe off frame errors were -2.25 and -2.61 frames, respectively, across all participants and all walking speeds. The temporal gait parameters based purely on heel strike have less error than the temporal gait parameters based on toe off. The Kinect sensor can follow the trend of the joint trajectories for the knee and hip joints, though there was substantial error in magnitudes. The walking speed was also found to significantly affect the identified timing of toe off. The results of the study suggest that the Kinect sensor may be used as an alternative device to measure some gait parameters for treadmill walking, depending on the desired accuracy level. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
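
    As an illustrative heuristic (not the paper's event-detection method), heel-strike frames can be identified from a Kinect ankle trajectory as local maxima of the ankle's forward position relative to the pelvis:

    ```python
    # Sketch: heel-strike frames as local maxima of the ankle's anterior-posterior excursion.
    import numpy as np

    fs = 30.0
    t = np.arange(0, 10, 1 / fs)
    # Synthetic anterior-posterior ankle position relative to the spine base (metres)
    ankle_ap = 0.25 * np.sin(2 * np.pi * 0.9 * t)      # ~0.9 Hz stride rate

    # A frame is a heel strike if it is a local maximum of the forward excursion
    is_peak = (ankle_ap[1:-1] > ankle_ap[:-2]) & (ankle_ap[1:-1] > ankle_ap[2:])
    heel_strike_frames = np.where(is_peak)[0] + 1

    stride_times = np.diff(heel_strike_frames) / fs
    print("heel strikes at frames:", heel_strike_frames)
    print("mean stride time: %.2f s" % stride_times.mean())
    ```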

  22. Using the Xbox Kinect sensor for positional data acquisition

    NASA Astrophysics Data System (ADS)

    Ballester, Jorge; Pheatt, Chuck

    2013-01-01

    The Kinect sensor was introduced in November 2010 by Microsoft for the Xbox 360 video game system. It is designed to be positioned above or below a video display to track player body and hand movements in three dimensions (3D). The sensor contains a red, green, and blue (RGB) camera, a depth sensor, an infrared (IR) light source, a three-axis accelerometer, and a multi-array microphone, as well as hardware required to transmit sensor information to an external receiver. In this article, we evaluate the capabilities of the Kinect sensor as a 3D data-acquisition platform for use in physics experiments. Data obtained for a simple pendulum, a spherical pendulum, projectile motion, and a bouncing basketball are presented. Overall, the Kinect sensor is found to be a useful data-acquisition tool for motion studies in the physics laboratory.
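
    As a sketch of one of the textbook analyses this setup enables, a parabola fitted to Kinect-tracked vertical positions of a projectile yields an estimate of g; the data below are synthetic, not the article's measurements.

    ```python
    # Sketch: estimate g by fitting a parabola to vertical projectile positions.
    import numpy as np

    fs = 30.0                                    # Kinect frame rate (Hz)
    t = np.arange(0, 0.8, 1 / fs)                # flight time window (s)
    g_true = 9.81
    y = 1.0 + 3.0 * t - 0.5 * g_true * t**2 + np.random.normal(0, 0.005, t.size)  # metres

    a, b, c = np.polyfit(t, y, deg=2)            # y = a t^2 + b t + c
    print(f"estimated g = {-2 * a:.2f} m/s^2")   # a = -g/2
    ```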

  23. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edmunds, D; Donovan, E

    Purpose: To determine whether the Microsoft Kinect Version 2 (Kinect v2), a commercial off-the-shelf (COTS) depth sensor designed for entertainment purposes, is robust to the radiotherapy treatment environment and could be suitable for monitoring of voluntary breath-hold compliance. This could complement current visual monitoring techniques and be useful for heart-sparing left breast radiotherapy. Methods: In-house software to control Kinect v2 sensors, and capture output information, was developed using the free Microsoft software development kit and the Cinder creative coding C++ library. Each sensor was used with a 12 m USB 3.0 active cable. A solid water block was used as the object. The depth accuracy and precision of the sensors were evaluated by comparing the Kinect-reported distance to the object with a precision laser measurement across a distance range of 0.6 m to 2.0 m. The object was positioned on a high-precision programmable motion platform, moved in two programmed motion patterns, and the Kinect-reported distance was logged. Robustness to the radiation environment was tested by repeating all measurements with a linear accelerator operating over a range of pulse repetition frequencies (6 Hz to 400 Hz) and dose rates of 50 to 1500 monitor units (MU) per minute. Results: The complex, consistent relationship between true and measured distance was unaffected by the radiation environment, as was the ability to detect motion. Sensor precision was < 1 mm and the accuracy between 1.3 mm and 1.8 mm when a distance correction was applied. Both motion patterns were tracked successfully with a root mean squared error (RMSE) of 1.4 and 1.1 mm, respectively. Conclusion: Kinect v2 sensors are capable of tracking pre-programmed motion patterns with an accuracy < 2 mm and appear robust to the radiotherapy treatment environment. A clinical trial using the Kinect v2 sensor for monitoring voluntary breath hold has ethical approval and is open to recruitment. The authors are supported by a National Institute of Health Research (NIHR) Career Development Fellowship (CDF-2013-06-005). Microsoft Corporation donated three sensors. The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the National Institute for Health Research or the Department of Health.

  24. Web GIS in practice X: a Microsoft Kinect natural user interface for Google Earth navigation

    PubMed Central

    2011-01-01

    This paper covers the use of depth sensors such as Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (Personal Computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2, and 3) that contains a 'Kinoogle installation package for Windows PCs'. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces. PMID:21791054

  25. Web GIS in practice X: a Microsoft Kinect natural user interface for Google Earth navigation.

    PubMed

    Boulos, Maged N Kamel; Blanchard, Bryan J; Walker, Cory; Montero, Julio; Tripathy, Aalap; Gutierrez-Osuna, Ricardo

    2011-07-26

    This paper covers the use of depth sensors such as Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (Personal Computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2, and 3) that contains a 'Kinoogle installation package for Windows PCs'. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces.

  26. 3D Capturing Performances of Low-Cost Range Sensors for Mass-Market Applications

    NASA Astrophysics Data System (ADS)

    Guidi, G.; Gonizzi, S.; Micoli, L.

    2016-06-01

    Since the advent of the first Kinect as a motion controller device for the Microsoft XBOX platform (November 2010), several similar active and low-cost range sensing devices have been introduced on the mass market for several purposes, including gesture-based interfaces, 3D multimedia interaction, robot navigation, finger tracking, 3D body scanning for garment design, and proximity sensors for automotive applications. However, given their capability to generate a real-time stream of range images, these devices have been used in some projects also as general-purpose range devices, with performance that might be satisfactory for some applications. This paper shows the working principle of the various devices, analyzing them in terms of systematic errors and random errors to explore their applicability to standard 3D capturing problems. Five actual devices have been tested, featuring three different technologies: i) Kinect V1 by Microsoft, Structure Sensor by Occipital, and Xtion PRO by ASUS, all based on different implementations of the PrimeSense sensor; ii) F200 by Intel/Creative, implementing the RealSense pattern projection technology; iii) Kinect V2 by Microsoft, equipped with the Canesta TOF camera. A critical analysis of the results tries first of all to compare them, and secondly to identify the range of applications for which such devices could actually work as a viable solution.

  27. Patient walk detection in hospital room using Microsoft Kinect V2.

    PubMed

    Liang Liu; Mehrotra, Sanjay

    2016-08-01

    This paper describes a system using a Kinect sensor to detect patient walking automatically in a hospital room setting. The system is especially useful when the patient is alone and nursing staff are absent. Patient activities are represented by features extracted from Kinect V2 skeletons. Analysis of the recognized walking could help us to better understand the health status of the patient and possible hospital acquired infection (HAI), and provide valuable information to healthcare givers for making corresponding treatment decisions and alterations. The Kinect V2 depth sensor provides the ground truth.

  28. Performance analysis of the Microsoft Kinect sensor for 2D Simultaneous Localization and Mapping (SLAM) techniques.

    PubMed

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-12-05

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments; a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of Kinect's depth sensor often causes the map to be inaccurate, especially in featureless areas, therefore the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks.
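
    The core idea behind substituting the Kinect for a laser scanner is converting a horizontal strip of the depth image into planar (bearing, range) pairs via the pinhole model. The sketch below illustrates this with assumed intrinsics and a synthetic depth row; it is not the Gmapping/Hector SLAM integration itself.

    ```python
    # Sketch: one depth-image row converted to 2D laser-scan-like bearings and ranges.
    import numpy as np

    width, fx, cx = 640, 585.0, 320.0                # assumed depth-camera intrinsics (pixels)
    depth_row = np.random.uniform(0.8, 4.0, width)   # depth (m) along the central image row

    u = np.arange(width)
    x = (u - cx) * depth_row / fx                    # lateral offset in metres
    z = depth_row                                    # forward distance in metres

    bearings = np.arctan2(x, z)                      # radians, negative left / positive right
    ranges = np.hypot(x, z)                          # metres, analogous to a planar laser scan
    ```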

  29. Performance Analysis of the Microsoft Kinect Sensor for 2D Simultaneous Localization and Mapping (SLAM) Techniques

    PubMed Central

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-01-01

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments; a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of Kinect's depth sensor often causes the map to be inaccurate, especially in featureless areas, therefore the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks. PMID:25490595

  30. Implementation of facial recognition with Microsoft Kinect v2 sensor for patient verification.

    PubMed

    Silverstein, Evan; Snyder, Michael

    2017-06-01

    The aim of this study was to present a straightforward implementation of facial recognition using the Microsoft Kinect v2 sensor for patient identification in a radiotherapy setting. A facial recognition system was created with the Microsoft Kinect v2, using a facial mapping library distributed with the Kinect v2 SDK as a basis for the algorithm. The system extracts 31 fiducial points representing various facial landmarks, which are used both in the creation of a reference data set and in subsequent evaluations of real-time sensor data in the matching algorithm. To test the algorithm, a database of 39 faces was created, each with 465 vectors derived from the fiducial points, and a one-to-one matching procedure was performed to obtain sensitivity and specificity data for the facial identification system. ROC curves were plotted to display system performance and identify thresholds for match determination. In addition, system performance as a function of ambient light intensity was tested. Using optimized parameters in the matching algorithm, the sensitivity of the system for 5299 trials was 96.5% and the specificity was 96.7%. The results indicate a fairly robust methodology for verifying, in real time, a specific face through comparison with a precollected reference data set. In its current implementation, the process of data collection for each face and the subsequent matching session averaged approximately 30 s, which may be too onerous to provide a realistic supplement to patient identification in a clinical setting. Despite the time commitment, the data collection process was well tolerated by all participants and most robust when consistent ambient light conditions were maintained across both the reference recording session and subsequent real-time identification sessions. A facial recognition system can be implemented for patient identification using the Microsoft Kinect v2 sensor and the distributed SDK. In its present form, the system is accurate, if time consuming, and further iterations of the method could provide a robust, easy to implement, and cost-effective supplement to traditional patient identification methods. © 2017 American Association of Physicists in Medicine.
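
    As a hedged sketch of the matching idea described above (not the study's exact algorithm or thresholds), the 31 fiducial points yield 31·30/2 = 465 pairwise distances that can be compared against stored reference vectors with a distance threshold:

    ```python
    # Sketch: pairwise-distance feature vectors from 31 fiducial points, nearest-neighbour match.
    import numpy as np

    def feature_vector(points):
        """465 pairwise Euclidean distances from 31 (x, y, z) fiducial points."""
        i, j = np.triu_indices(len(points), k=1)
        return np.linalg.norm(points[i] - points[j], axis=1)

    # Hypothetical reference database (synthetic fiducial coordinates)
    reference_faces = {name: np.random.rand(31, 3) for name in ["patient_A", "patient_B"]}
    reference_db = {name: feature_vector(p) for name, p in reference_faces.items()}

    # Live measurement: patient_A with a little sensor noise
    live_points = reference_faces["patient_A"] + np.random.normal(0, 0.001, (31, 3))
    live_vec = feature_vector(live_points)

    scores = {name: np.linalg.norm(live_vec - ref) for name, ref in reference_db.items()}
    best = min(scores, key=scores.get)
    THRESHOLD = 0.05                              # arbitrary match threshold for this sketch
    print("match:", best if scores[best] < THRESHOLD else "no match", scores)
    ```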

  31. First Experiences with Kinect v2 Sensor for Close Range 3D Modelling

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Macher, H.; Mittet, M.-A.; Landes, T.; Grussenmeyer, P.

    2015-02-01

    RGB-D cameras, also known as range imaging cameras, are a recent generation of sensors. As they are suitable for measuring distances to objects at high frame rate, such sensors are increasingly used for 3D acquisitions, and more generally for applications in robotics or computer vision. This kind of sensor became popular especially since the Kinect v1 (Microsoft) arrived on the market in November 2010. In July 2014, Microsoft released a new sensor, the Kinect for Windows v2, based on a different technology from the first device. However, due to its initial development for video games, the quality assessment of this new device for 3D modelling represents a major investigation axis. In this paper first experiences with the Kinect v2 sensor are related, and its ability for close-range 3D modelling is investigated. For this purpose, error sources on output data as well as a calibration approach are presented.

  32. Accuracy of the Microsoft Kinect sensor for measuring movement in people with Parkinson's disease.

    PubMed

    Galna, Brook; Barry, Gillian; Jackson, Dan; Mhiripiri, Dadirayi; Olivier, Patrick; Rochester, Lynn

    2014-04-01

    The Microsoft Kinect sensor (Kinect) is potentially a low-cost solution for clinical and home-based assessment of movement symptoms in people with Parkinson's disease (PD). The purpose of this study was to establish the accuracy of the Kinect in measuring clinically relevant movements in people with PD. Nine people with PD and 10 controls performed a series of movements which were measured concurrently with a Vicon three-dimensional motion analysis system (gold-standard) and the Kinect. The movements included quiet standing, multidirectional reaching and stepping and walking on the spot, and the following items from the Unified Parkinson's Disease Rating Scale: hand clasping, finger tapping, foot, leg agility, chair rising and hand pronation. Outcomes included mean timing and range of motion across movement repetitions. The Kinect measured timing of movement repetitions very accurately (low bias, 95% limits of agreement <10% of the group mean, ICCs >0.9 and Pearson's r>0.9). However, the Kinect had varied success measuring spatial characteristics, ranging from excellent for gross movements such as sit-to-stand (ICC=.989) to very poor for fine movement such as hand clasping (ICC=.012). Despite this, results from the Kinect related strongly to those obtained with the Vicon system (Pearson's r>0.8) for most movements. The Kinect can accurately measure timing and gross spatial characteristics of clinically relevant movements but not with the same spatial accuracy for smaller movements, such as hand clasping. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  33. Reliability and validity of the Microsoft Kinect for assessment of manual wheelchair propulsion.

    PubMed

    Milgrom, Rachel; Foreman, Matthew; Standeven, John; Engsberg, Jack R; Morgan, Kerri A

    2016-01-01

    Concurrent validity and test-retest reliability of the Microsoft Kinect in quantification of manual wheelchair propulsion were examined. Data were collected from five manual wheelchair users on a roller system. Three Kinect sensors were used to assess test-retest reliability with a still pose. Three systems were used to assess concurrent validity of the Kinect to measure propulsion kinematics (joint angles, push loop characteristics): Kinect, Motion Analysis, and Dartfish ProSuite (Dartfish joint angles were limited to shoulder and elbow flexion). Intraclass correlation coefficients revealed good reliability (0.87-0.99) between five of the six joint angles (neck flexion, shoulder flexion, shoulder abduction, elbow flexion, wrist flexion). ICCs suggested good concurrent validity for elbow flexion between the Kinect and Dartfish and between the Kinect and Motion Analysis. Good concurrent validity was revealed for maximum height, hand-axle relationship, and maximum area (0.92-0.95) between the Kinect and Dartfish and maximum height and hand-axle relationship (0.89-0.96) between the Kinect and Motion Analysis. Analysis of variance revealed significant differences (p < 0.05) in maximum length between Dartfish (mean 58.76 cm) and the Kinect (40.16 cm). Results pose promising research and clinical implications for propulsion assessment and overuse injury prevention with the application of current findings to future technology.

  34. Automated Fall Detection With Quality Improvement “Rewind” to Reduce Falls in Hospital Rooms

    PubMed Central

    Rantz, Marilyn J.; Banerjee, Tanvi S.; Cattoor, Erin; Scott, Susan D.; Skubic, Marjorie; Popescu, Mihail

    2014-01-01

    The purpose of this study was to test the implementation of a fall detection and “rewind” privacy-protecting technique using the Microsoft® Kinect™ to not only detect but prevent falls from occurring in hospitalized patients. Kinect sensors were placed in six hospital rooms in a step-down unit and data were continuously logged. Prior to implementation with patients, three researchers performed a total of 18 falls (walking and then falling down or falling from the bed) and 17 non-fall events (crouching down, stooping down to tie shoe laces, and lying on the floor). All falls and non-falls were correctly identified using automated algorithms to process Kinect sensor data. During the first 8 months of data collection, processing methods were perfected to manage data and provide a “rewind” method to view events that led to falls for post-fall quality improvement process analyses. Preliminary data from this feasibility study show that using the Microsoft Kinect sensors provides detection of falls, fall risks, and facilitates quality improvement after falls in real hospital environments unobtrusively, while taking into account patient privacy. PMID:24296567
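
    As an illustrative heuristic only (the study's detection algorithm is more sophisticated and not reproduced here), a basic depth-based fall cue flags a rapid drop of the tracked person's centroid height to near floor level:

    ```python
    # Sketch: flag a fall when the centroid height drops quickly to near the floor.
    import numpy as np

    fs = 30.0                                         # frames per second
    centroid_height = np.concatenate([                # synthetic height above floor (m)
        np.full(90, 1.0),                             # walking for 3 s
        np.linspace(1.0, 0.2, 15),                    # rapid descent over 0.5 s
        np.full(60, 0.2),                             # lying on the floor
    ])

    window = int(0.5 * fs)                            # look for a large drop within 0.5 s
    drop = centroid_height[:-window] - centroid_height[window:]
    fall_frames = np.where((drop > 0.6) & (centroid_height[window:] < 0.4))[0] + window
    print("fall detected at frame:", fall_frames[0] if fall_frames.size else None)
    ```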

  35. Towards a detailed anthropometric body characterization using the Microsoft Kinect.

    PubMed

    Domingues, Ana; Barbosa, Filipa; Pereira, Eduardo M; Santos, Márcio Borgonovo; Seixas, Adérito; Vilas-Boas, João; Gabriel, Joaquim; Vardasca, Ricardo

    2016-01-01

    Anthropometry has been widely used in different fields, providing relevant information for medicine, ergonomics and biometric applications. However, existing solutions present marked disadvantages, limiting the adoption of this type of evaluation. Studies have been conducted to easily determine anthropometric measures from data provided by low-cost sensors, such as the Microsoft Kinect. In this work, a methodology is proposed and implemented for estimating anthropometric measures from the information acquired with this sensor. The measures obtained with this method were compared with the ones from a validation system, Qualisys. Comparing the relative errors with state-of-the-art references, lower errors were verified for some of the estimated measures, and a more complete characterization of the whole body structure was achieved.

  36. The feasibility of using Microsoft Kinect v2 sensors during radiotherapy delivery.

    PubMed

    Edmunds, David M; Bashforth, Sophie E; Tahavori, Fatemeh; Wells, Kevin; Donovan, Ellen M

    2016-11-08

    Consumer-grade distance sensors, such as the Microsoft Kinect devices (v1 and v2), have been investigated for use as marker-free motion monitoring systems for radiotherapy. The radiotherapy delivery environment is challenging for such sensors because of the proximity to electromagnetic interference (EMI) from the pulse forming network which fires the magnetron and electron gun of a linear accelerator (linac) during radiation delivery, as well as the requirement to operate them from the control area. This work investigated whether using Kinect v2 sensors as motion monitors was feasible during radiation delivery. Three sensors were used, each with a 12 m USB 3.0 active cable which replaced the supplied 3 m USB 3.0 cable. Distance output data from the Kinect v2 sensors were recorded under four conditions of linac operation: (i) powered up only, (ii) pulse forming network operating with no radiation, (iii) pulse repetition frequency varied between 6 Hz and 400 Hz, (iv) dose rate varied between 50 and 1450 monitor units (MU) per minute. A solid water block was used as an object and imaged when static, when moved in a set of steps from 0.6 m to 2.0 m from the sensor, and when moving dynamically in two sinusoidal-like trajectories. Few additional image artifacts were observed and there was no impact on the tracking of the motion patterns (root mean squared accuracy of 1.4 and 1.1 mm, respectively). The sensors' distance accuracy varied by 2.0 to 3.8 mm (1.2 to 1.4 mm post distance calibration) across the range measured; the precision was 1 mm. There was minimal effect from the EMI on the distance calibration data: 0 mm or 1 mm reported distance change (2 mm maximum change at one position). Kinect v2 sensors operated with 12 m USB 3.0 active cables appear robust to the radiotherapy treatment environment. © 2016 The Authors.

  37. Integration of Kinect and Low-Cost GNSS for Outdoor Navigation

    NASA Astrophysics Data System (ADS)

    Pagliari, D.; Pinto, L.; Reguzzoni, M.; Rossi, L.

    2016-06-01

    Since its launch on the market, the Microsoft Kinect sensor has represented a great revolution in the field of low-cost navigation, especially for indoor robotic applications. In fact, this system is endowed with a depth camera, as well as a visual RGB camera, at a cost of about 200. The characteristics and the potentiality of the Kinect sensor have been widely studied for indoor applications. The second generation of this sensor has been announced to be capable of acquiring data even outdoors, under direct sunlight. The task of navigating while passing from an indoor to an outdoor environment (and vice versa) is very demanding, because sensors that work properly in one environment are typically unsuitable in the other. In this sense the Kinect could represent an interesting device allowing the navigation solution to be bridged between outdoor and indoor environments. In this work the accuracy and the field of application of the new generation of Kinect sensor have been tested outdoors, considering different lighting conditions and the reflective properties of the emitted ray on different materials. Moreover, an integrated system with a low-cost GNSS receiver has been studied, with the aim of taking advantage of the GNSS positioning when the satellite visibility conditions are good enough. A kinematic test performed outdoors using a Kinect sensor and a GNSS receiver is presented here.

  18. Validation of Attitude and Heading Reference System and Microsoft Kinect for Continuous Measurement of Cervical Range of Motion Compared to the Optical Motion Capture System.

    PubMed

    Song, Young Seop; Yang, Kyung Yong; Youn, Kibum; Yoon, Chiyul; Yeom, Jiwoon; Hwang, Hyeoncheol; Lee, Jehee; Kim, Keewon

    2016-08-01

    To compare an optical motion capture system (MoCap), an attitude and heading reference system (AHRS) sensor, and the Microsoft Kinect for the continuous measurement of cervical range of motion (ROM). Fifteen healthy adult subjects were asked to sit in front of the Kinect camera with optical markers and AHRS sensors attached to the body in a room equipped with optical motion capture cameras. Subjects were instructed to independently perform axial rotation followed by flexion/extension and lateral bending. Each movement was repeated 5 times while being measured simultaneously with the 3 devices. Using the MoCap system as the gold standard, the validity of AHRS and Kinect for measurement of cervical ROM was assessed by calculating correlation coefficients and Bland-Altman plots with 95% limits of agreement (LoA). MoCap and AHRS showed fair agreement (95% LoA<10°), while MoCap and Kinect showed less favorable agreement (95% LoA>10°) for measuring ROM in all directions. Intraclass correlation coefficient (ICC) values between MoCap and AHRS in the -40° to 40° range were excellent for flexion/extension and lateral bending (ICC>0.9). ICC values were also fair for axial rotation (ICC>0.8). ICC values between MoCap and the Kinect system in the -40° to 40° range were fair for all motions. Our study showed the feasibility of using AHRS to measure cervical ROM during continuous motion with an acceptable range of error. The AHRS and Kinect systems can also be used for continuous monitoring of flexion/extension and lateral bending in the ordinary range.
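
    For readers unfamiliar with the agreement statistic used above, the following minimal Python sketch computes the Bland-Altman bias and 95% limits of agreement for paired angle measurements; the example values in the usage line are made up and do not come from the study.

```python
import numpy as np

def bland_altman_loa(reference, test):
    """95% limits of agreement between paired angle measurements (e.g. MoCap vs. AHRS)."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    diff = test - reference                    # per-sample differences
    bias = diff.mean()                         # mean difference (systematic offset)
    half_width = 1.96 * diff.std(ddof=1)       # half-width of the 95% limits of agreement
    return bias, (bias - half_width, bias + half_width)

# Example with fabricated numbers: limits narrower than 10 degrees would correspond
# to the "fair agreement" criterion quoted in the abstract above.
bias, (lo, hi) = bland_altman_loa([10, 20, 30, 40], [11, 19, 32, 41])
```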

  19. Estimating the Dead Space Volume Between a Headform and N95 Filtering Facepiece Respirator Using Microsoft Kinect.

    PubMed

    Xu, Ming; Lei, Zhipeng; Yang, James

    2015-01-01

    N95 filtering facepiece respirator (FFR) dead space is an important factor for respirator design. The dead space refers to the cavity between the internal surface of the FFR and the wearer's facial surface. This article presents a novel method to estimate the dead space volume of FFRs, together with its experimental validation. In this study, six FFRs and five headforms (small, medium, large, long/narrow, and short/wide) are used for various FFR and headform combinations. Microsoft Kinect sensors (Microsoft Corporation, Redmond, WA) are used to scan the headforms without respirators and then to scan the headforms with the FFRs donned. The FFR dead space is formed through geometric modeling software, and finally the volume is obtained through LS-DYNA (Livermore Software Technology Corporation, Livermore, CA). In the experimental validation, water is used to measure the dead space. The simulation and experimental dead space volumes are 107.5-167.5 mL and 98.4-165.7 mL, respectively. Linear regression analysis is conducted to correlate the results from Kinect and water, giving R² = 0.85.
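
    The correlation step can be illustrated with a short, generic least-squares sketch in Python; the helper name and any input values are hypothetical and only show how a coefficient of determination of the kind reported above would be computed.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x
    (e.g. water-measured vs. Kinect-derived dead space volumes, in mL)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)    # least-squares regression line
    y_hat = slope * x + intercept
    ss_res = np.sum((y - y_hat) ** 2)         # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)      # total sum of squares
    return 1.0 - ss_res / ss_tot
```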

  20. Accuracy of Kinect's skeleton tracking for upper body rehabilitation applications.

    PubMed

    Mobini, Amir; Behzadipour, Saeed; Saadat Foumani, Mahmoud

    2014-07-01

    Games and their use in rehabilitation have formed a new and rapidly growing area of research. A critical hardware component of rehabilitation programs is the input device that measures the patients' movements. After Microsoft released the Kinect, extensive research was initiated on its applications as an input device for rehabilitation. However, since most of the work in this area relies on a qualitative determination of the joints' movements rather than an accurate quantitative one, detailed analysis of patients' movements is hindered. The aim of this article is to determine the accuracy of the Kinect's joint tracking. To fulfill this task, a model of the upper body was fabricated. The displacements of the joint centers were estimated by the Kinect at different positions and were then compared with the directly measured displacements. Moreover, the dependency of the Kinect's error on distance and joint type was measured and analyzed. The article thus reports the accuracy of a sensor that can be used directly for monitoring physical therapy exercises; using this sensor facilitates remote rehabilitation.

  1. Modelling and Simulation of the Knee Joint with a Depth Sensor Camera for Prosthetics and Movement Rehabilitation

    NASA Astrophysics Data System (ADS)

    Risto, S.; Kallergi, M.

    2015-09-01

    The purpose of this project was to model and simulate the knee joint. A computer model of the knee joint was first created, which was controlled by Microsoft's Kinect for Windows. The Kinect created a depth map of the knee and lower leg motion, independent of lighting conditions, through an infrared sensor. A combination of open source software such as Blender, Python, the Kinect SDK and NI_Mate was used to create and control the simulated knee based on the movements of a live physical model. A physical model of the knee and lower leg was also created, the movement of which was controlled remotely by the computer model and the Kinect. The real-time communication between the model and the robotic knee was achieved through programming in Python and the Arduino language. The results of this study showed that the Kinect can be used in the modelling of human kinematics and can play a significant role in the development of prosthetics and other assistive technologies.

  2. SU-E-J-66: Evaluation of a Real-Time Positioning Assistance Simulator System for Skull Radiography Using the Microsoft Kinect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurata, T; Ono, M; Kozono, K

    2014-06-01

    Purpose: The purpose of this study is to investigate the feasibility of a low-cost, small-size positioning assistance simulator system for skull radiography using the Microsoft Kinect sensor. A conventional radiographic simulator system can only measure the three-dimensional coordinates of an x-ray tube using angle sensors; it cannot measure the movement of the subject. Therefore, in this study, we developed a real-time simulator system using the Microsoft Kinect to measure both the x-ray tube and the subject, and evaluated its accuracy and feasibility by comparing the simulated and the measured x-ray images. Methods: This system can track a head phantom by using Face Tracking, which is one of the functions of the Kinect. The relative relationship between the Kinect and the head phantom was measured and the projection image was calculated by using the ray casting method and three-dimensional CT head data with 220 slices at 512 × 512 pixels. X-ray images were obtained by using a computed radiography (CR) system. We could then compare the simulated projection images with the measured x-ray images from 0 degrees to 45 degrees at increments of 15 degrees by calculating the cross-correlation coefficient C. Results: The calculation time of the simulated projection images was almost real-time (within 1 second) when using the Graphics Processing Unit (GPU). The cross-correlation coefficients C are 0.916, 0.909, 0.891, and 0.886 at 0, 15, 30, and 45 degrees, respectively. As a result, there were strong correlations between the simulated and measured images. Conclusion: This system can be used to perform head positioning more easily and accurately. It is expected that this system will be useful for students learning radiographic techniques. Moreover, it could also be used for predicting the actual x-ray image prior to x-ray exposure in clinical environments.
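
    The abstract does not give the exact formula used for C; a common choice is the normalized (zero-mean) cross-correlation coefficient, sketched below in Python for two same-size images. The function name is illustrative.

```python
import numpy as np

def cross_correlation(simulated, measured):
    """Normalized cross-correlation coefficient C between a simulated projection
    image and a measured x-ray image of the same size (values in [-1, 1])."""
    a = np.asarray(simulated, dtype=float).ravel()
    b = np.asarray(measured, dtype=float).ravel()
    a -= a.mean()   # remove mean intensity so the result reflects structure, not brightness
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```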

  3. Evaluation of the Microsoft Kinect as a clinical assessment tool of body sway.

    PubMed

    Yeung, L F; Cheng, Kenneth C; Fong, C H; Lee, Winson C C; Tong, Kai-Yu

    2014-09-01

    Total body center of mass (TBCM) is a useful kinematic measurement of body sway. However, expensive equipment and high technical requirements limit the use of motion capture systems in large-scale clinical settings. Center of pressure (CP) measurement obtained from force plates cannot accurately represent TBCM during large body sway movements. The Microsoft Kinect is a rapidly developing, inexpensive, and portable posturographic device, which provides objective and quantitative measurement of TBCM sway. The purpose of this study was to evaluate the Kinect as a clinical assessment tool for TBCM sway measurement. The performance of the Kinect system was compared with a Vicon motion capture system and a force plate. Ten healthy male subjects performed four upright quiet standing tasks: (1) eyes open (EOn), (2) eyes closed (ECn), (3) eyes open standing on foam (EOf), and (4) eyes closed standing on foam (ECf). Our results revealed that the Kinect system produced highly correlated measurement of TBCM sway (mean RMSE=4.38 mm; mean CORR=0.94 in the Kinect-Vicon comparison), as well as intra-session reliability comparable to Vicon. However, the Kinect device consistently overestimated the 95% CL of sway by about 3 mm. This offset could be due to the limited accuracy, resolution, and sensitivity of the Kinect sensors. The Kinect device was more accurate in the medial-lateral than in the anterior-posterior direction, and performed better than the force plate in the more challenging balance tasks, such as ECf, which involve larger TBCM sway. Overall, the Kinect is a cost-effective alternative to a motion capture and force plate system for clinical assessment of TBCM sway. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. The concurrent validity and intrarater reliability of the Microsoft Kinect to measure thoracic kyphosis.

    PubMed

    Quek, June; Brauer, Sandra G; Treleaven, Julia; Clark, Ross A

    2017-09-01

    This study aims to investigate the concurrent validity and intrarater reliability of the Microsoft Kinect to measure thoracic kyphosis against the Flexicurve. Thirty-three healthy individuals (age: 31±11.0 years, men: 17, height: 170.2±8.2 cm, weight: 64.2±12.0 kg) participated, with 29 re-examined for intrarater reliability 1-7 days later. Thoracic kyphosis was measured using the Flexicurve and the Microsoft Kinect consecutively in both standing and sitting positions. Both the kyphosis index and angle were calculated. The Microsoft Kinect showed excellent concurrent validity (intraclass correlation coefficient=0.76-0.82) and reliability (intraclass correlation coefficient=0.81-0.98) for measuring thoracic kyphosis (angle and index) in both standing and sitting postures. This study is the first to show that the Microsoft Kinect has excellent validity and intrarater reliability to measure thoracic kyphosis, which is promising for its use in the clinical setting.

  5. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis

    PubMed Central

    Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin

    2016-01-01

    This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human–machine interaction. PMID:27367687
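
    A minimal sketch of the spectral-analysis step described above: given a region-of-interest time series (image intensity or depth) sampled at a known frame rate, the dominant frequency in a physiologically plausible band is converted to a rate in breaths or beats per minute. The band limits and function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dominant_rate_bpm(signal, fps, min_hz=0.1, max_hz=3.0):
    """Estimate a breathing or heart rate (per minute) from a region-of-interest
    intensity or depth time series sampled at `fps` frames per second."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                                  # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))                 # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= min_hz) & (freqs <= max_hz)      # physiologically plausible band
    peak_hz = freqs[band][np.argmax(spectrum[band])]  # dominant frequency in the band
    return 60.0 * peak_hz
```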

  6. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis.

    PubMed

    Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin

    2016-06-28

    This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human-machine interaction.

  7. Matching the best viewing angle in depth cameras for biomass estimation based on poplar seedling geometry.

    PubMed

    Andújar, Dionisio; Fernández-Quintanilla, César; Dorado, José

    2015-06-04

    In energy crops for biomass production a proper plant structure is important to optimize wood yields. A precise crop characterization in early stages may contribute to the choice of proper cropping techniques. This study assesses the potential of the Microsoft Kinect for Windows v.1 sensor to determine the best viewing angle of the sensor to estimate plant biomass based on poplar seedling geometry. Kinect Fusion algorithms were used to generate a 3D point cloud from the depth video stream. The sensor was mounted in different positions facing the tree in order to obtain depth (RGB-D) images from different angles. Individuals of two different ages, i.e., one month and one year old, were scanned. Four different viewing angles were compared: top view (0°), 45° downwards view, front view (90°) and ground upwards view (-45°). The ground truth used to validate the sensor readings consisted of a destructive sampling in which the height, leaf area and biomass (dry weight basis) were measured for each individual plant. The depth image models agreed well with the 45°, 90° and -45° measurements in one-year poplar trees. Good correlations (0.88 to 0.92) between dry biomass and the area measured with the Kinect were found. In addition, plant height was accurately estimated with an error of a few centimeters. The comparison between different viewing angles revealed that top views showed poorer results because the top leaves occluded the rest of the tree, whereas the other views led to good results. Conversely, small poplars showed better correlations with the actual parameters from the top view (0°). Therefore, although the Microsoft Kinect for Windows v.1 sensor provides good opportunities for biomass estimation, the viewing angle must be chosen taking into account the developmental stage of the crop and the desired parameters. The results of this study indicate that the Kinect is a promising tool for rapid canopy characterization, i.e., for estimating crop biomass production, with several important advantages: low cost, low power needs and a high frame rate when dynamic measurements are required.

  8. Real Time Apnoea Monitoring of Children Using the Microsoft Kinect Sensor: A Pilot Study.

    PubMed

    Al-Naji, Ali; Gibson, Kim; Lee, Sang-Heon; Chahl, Javaan

    2017-02-03

    The objective of this study was to design a non-invasive system for the observation of respiratory rates and detection of apnoea using analysis of real time image sequences captured in any given sleep position and under any light conditions (even in dark environments). A Microsoft Kinect sensor was used to visualize the variations in the thorax and abdomen caused by the respiratory rhythm. These variations were magnified, analyzed and detected at a distance of 2.5 m from the subject. A modified motion magnification system and a frame subtraction technique were used to identify breathing movements by detecting rapid motion areas in the magnified frame sequences. The experimental results on a set of video data from five subjects (3 h for each subject) showed that our monitoring system can accurately measure respiratory rate and therefore detect apnoea in infants and young children. The proposed system is feasible, accurate, safe and of low computational complexity, making it an efficient alternative for non-contact home sleep monitoring systems and advancing health care applications.
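
    The frame subtraction technique mentioned above can be sketched as follows (Python/NumPy); the threshold and function names are illustrative, and the paper's motion magnification stage is assumed to have already been applied to the frames.

```python
import numpy as np

def breathing_motion_mask(prev_frame, curr_frame, threshold=10):
    """Frame subtraction on (motion-magnified) grayscale frames: pixels whose
    intensity changed by more than `threshold` are flagged as rapid-motion areas."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    return diff > threshold

def motion_signal(frames, threshold=10):
    """Per-frame count of moving pixels; its periodicity tracks the breathing rhythm,
    and a sustained drop toward zero would suggest an apnoea episode."""
    return [breathing_motion_mask(a, b, threshold).sum()
            for a, b in zip(frames[:-1], frames[1:])]
```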

  9. Can shoulder range of movement be measured accurately using the Microsoft Kinect sensor plus Medical Interactive Recovery Assistant (MIRA) software?

    PubMed

    Wilson, James D; Khan-Perez, Jennifer; Marley, Dominic; Buttress, Susan; Walton, Michael; Li, Baihua; Roy, Bibhas

    2017-12-01

    This study compared the accuracy of measuring shoulder range of movement (ROM) with a simple laptop-sensor combination vs. trained observers (shoulder physiotherapists and shoulder surgeons), using motion capture (MoCap) laboratory equipment as the gold standard. The Microsoft Kinect sensor (Microsoft Corp., Redmond, WA, USA) tracks 3-dimensional human motion. Ordinarily used with an Xbox (Microsoft Corp.) video game console, Medical Interactive Recovery Assistant (MIRA) software (MIRA Rehab Ltd., London, UK) allows this small sensor to measure shoulder movement with a standard computer. Shoulder movements of 49 healthy volunteers were simultaneously measured by trained observers, MoCap, and the MIRA device. Internal rotation was assessed with the shoulder abducted 90° and external rotation with the shoulder adducted. Visual estimation and MIRA measurements were compared with gold standard MoCap measurements for agreement using Bland-Altman methods. There were 1670 measurements analyzed. The MIRA evaluations of all 4 cardinal shoulder movements were significantly more precise, with narrower limits of agreement, than the measurements of trained observers. MIRA achieved ±11° (95% confidence interval [CI], 8.7°-12.6°) for forward flexion vs. ±16° (95% CI, 14.6°-17.6°) by trained observers. For abduction, MIRA showed ±11° (95% CI, 8.7°-12.8°) against ±15° (95% CI, 13.4°-16.2°) for trained observers. MIRA attained ±10° (95% CI, 8.1°-11.9°) during external rotation measurement, whereas trained observers only reached ±21° (95% CI, 18.7°-22.6°). For internal rotation, MIRA achieved ±9° (95% CI, 7.2°-10.4°), which was again better than trained observers at ±18° (95% CI, 16.0°-19.3°). A laptop combined with a Microsoft Kinect sensor and the MIRA software can measure shoulder movements with acceptable levels of accuracy. This technology, which can be easily set up, may also allow precise shoulder ROM measurement outside the clinic setting. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  10. A Microsoft Kinect-Based Point-of-Care Gait Assessment Framework for Multiple Sclerosis Patients.

    PubMed

    Gholami, Farnood; Trojan, Daria A; Kovecses, Jozsef; Haddad, Wassim M; Gholami, Behnood

    2017-09-01

    Gait impairment is a prevalent and important difficulty for patients with multiple sclerosis (MS), a common neurological disorder. An easy to use tool to objectively evaluate gait in MS patients in a clinical setting can assist clinicians in performing an objective assessment. The overall objective of this study is to develop a framework to quantify gait abnormalities in MS patients using the Microsoft Kinect for Windows sensor, an inexpensive, easy to use, portable camera. Specifically, we aim to evaluate its feasibility for utilization in a clinical setting, assess its reliability, evaluate the validity of the gait indices obtained, and evaluate a novel set of gait indices based on the concept of dynamic time warping. In this study, ten ambulatory MS patients, and ten age- and sex-matched normal controls, were studied at one session in a clinical setting with gait assessment using a Kinect camera. The Expanded Disability Status Scale (EDSS) clinical ambulation score was calculated for the MS subjects, and patients completed the Multiple Sclerosis Walking Scale (MSWS). Based on this study, we established the potential feasibility of using a Microsoft Kinect camera in a clinical setting. Seven out of the eight gait indices obtained using the proposed method were reliable, with intraclass correlation coefficients ranging from 0.61 to 0.99. All eight MS gait indices were significantly different from those of the controls (p-values less than 0.05). Finally, seven out of the eight MS gait indices were correlated with the objective and subjective gait measures (Pearson's correlation coefficients greater than 0.40). This study shows that the Kinect camera is an easy to use tool to assess gait in MS patients in a clinical setting.
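
    Since the gait indices above build on dynamic time warping, a compact reference implementation of the DTW distance between two 1-D signals is sketched below; it is the textbook algorithm, not the authors' specific index definitions.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D gait signals
    (e.g. a joint-angle trajectory from a patient and a reference curve)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```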

  11. A mixture model for robust registration in Kinect sensor

    NASA Astrophysics Data System (ADS)

    Peng, Li; Zhou, Huabing; Zhu, Shengguo

    2018-03-01

    The Microsoft Kinect sensor has been widely used in many applications, but it suffers from the drawback of low registration precision between the color image and the depth image. In this paper, we present a robust method to improve the registration precision using a mixture model that can handle multiple images with a nonparametric model. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). The estimation is performed by the EM algorithm which, by also estimating the variance of the prior model, is able to obtain good estimates. We illustrate the proposed method on a publicly available dataset. The experimental results show that our approach outperforms the baseline methods.

  12. Investigating the Feasibility of Conducting Human Tracking and Following in an Indoor Environment Using a Microsoft Kinect and the Robot Operating System

    DTIC Science & Technology

    2017-06-01

    This thesis investigated the feasibility of conducting human tracking and following with a mobile robot in an indoor environment using a Microsoft Kinect and the Robot Operating System, and outlines future work that could extend the approach.

  13. Validation of the Microsoft Kinect® camera system for measurement of lower extremity jump landing and squatting kinematics.

    PubMed

    Eltoukhy, Moataz; Kelly, Adam; Kim, Chang-Young; Jun, Hyung-Pil; Campbell, Richard; Kuenze, Christopher

    2016-01-01

    Cost effective, quantifiable assessment of lower extremity movement represents a potential improvement over standard tools for evaluation of injury risk. Ten healthy participants completed three trials of a drop jump, overhead squat, and single leg squat task. Peak hip and knee kinematics were assessed using an 8-camera BTS Smart 7000DX motion analysis system and the Microsoft Kinect® camera system. The agreement and consistency between both uncorrected and corrected Kinect kinematic variables and the BTS camera system were assessed using intraclass correlation coefficients. Peak sagittal plane kinematics measured using the Microsoft Kinect® camera system explained a significant amount of variance [Range(hip) = 43.5-62.8%; Range(knee) = 67.5-89.6%] in peak kinematics measured using the BTS camera system. Across tasks, peak knee flexion angle and peak hip flexion angle were found to be consistent and in agreement when the Microsoft Kinect® camera system was directly compared to the BTS camera system, but these values were improved following application of a corrective factor. The Microsoft Kinect® may not be an appropriate surrogate for traditional motion analysis technology, but it may have potential applications as a real-time feedback tool in pathological or high injury risk populations.

  14. Generation of RGB-D data for SLAM using robotic framework V-REP

    NASA Astrophysics Data System (ADS)

    Gritsenko, Pavel S.; Gritsenko, Igor S.; Seidakhmet, Askar Zh.; Abduraimov, Azizbek E.

    2017-09-01

    In this article, we present a methodology for debugging RGB-D SLAM systems and for generating test data. We created a model of a laboratory with an area of 250 m² (25 m × 10 m) containing a set of objects of different types. The V-REP simulation model of the Microsoft Kinect sensor was used as the basis for the robot vision system. The motion path of the sensor model has multiple loops. We wrote a program in V-REP's native language, Lua, to record a data array from the Microsoft Kinect sensor model. The array includes both RGB and depth streams at full resolution (640 × 480) for every 10 cm of the path. The simulated path has absolute accuracy, since it is a simulation, and is represented by an array of 4 × 4 transformation matrices. The length of the data array is 1000 steps, or 100 m. The path simulates cases that occur frequently in SLAM, including loops. It is worth noting that the path was modeled for a mobile robot and is represented by a 2D path parallel to the floor at a height of 40 cm.

  15. Depth estimation of features in video frames with improved feature matching technique using Kinect sensor

    NASA Astrophysics Data System (ADS)

    Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun

    2012-10-01

    Estimating depth has long been a major issue in the field of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. In this paper, we used the Kinect camera developed by Microsoft, which captures color and depth images for further processing. Feature detection and selection are important tasks for robot navigation. Many feature-matching techniques have been proposed previously, and this paper proposes an improved feature matching between successive video frames that uses a neural network methodology to reduce the computation time of feature matching. The extracted features are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. Each extracted feature is assigned a distance based on the Kinect depth data, which can be used by the robot to determine the navigation path, along with obstacle detection applications.

  16. Validation of enhanced kinect sensor based motion capturing for gait assessment

    PubMed Central

    Müller, Björn; Ilg, Winfried; Giese, Martin A.

    2017-01-01

    Optical motion capturing systems are expensive and require substantial dedicated space to be set up. On the other hand, they provide unsurpassed accuracy and reliability. In many situations, however, flexibility is required and the motion capturing system can only be placed temporarily. The Microsoft Kinect v2 sensor is comparatively cheap, and with respect to gait analysis promising results have been published. We here present a motion capturing system that is easy to set up, flexible with respect to the sensor locations, and delivers high accuracy in gait parameters comparable to a gold standard motion capturing system (VICON). Further, we demonstrate that sensor setups which track the person from one side only are less accurate and should be replaced by two-sided setups. With respect to commonly analyzed gait parameters, especially step width, our system shows higher agreement with the VICON system than previous reports. PMID:28410413

  17. User-Centered Design of a Controller-Free Game for Hand Rehabilitation.

    PubMed

    Proffitt, Rachel; Sevick, Marisa; Chang, Chien-Yen; Lange, Belinda

    2015-08-01

    The purpose of this study was to develop and test a hand therapy game using the Microsoft (Redmond, WA) Kinect® sensor with a customized videogame. Using the Microsoft Kinect sensor as an input device, a customized game for hand rehabilitation was developed that required players to perform various gestures to accomplish a virtual cooking task. Over the course of two iterative sessions, 11 participants with different levels of wrist, hand, and finger injuries interacted with the game in a single session, and user perspectives and feedback were obtained via a questionnaire and semistructured interviews. Participants reported high levels of enjoyment, specifically related to the challenging nature of the game and the visuals. Participant feedback from the first iterative round of testing was incorporated to produce a second prototype for the second round of testing. Additionally, participants expressed the desire to have the game adapt and be customized to their unique hand therapy needs. The game tested in this study has the potential to be a unique and cutting-edge method for the delivery of hand rehabilitation for a diverse population.

  18. Real Time Apnoea Monitoring of Children Using the Microsoft Kinect Sensor: A Pilot Study

    PubMed Central

    Al-Naji, Ali; Gibson, Kim; Lee, Sang-Heon; Chahl, Javaan

    2017-01-01

    The objective of this study was to design a non-invasive system for the observation of respiratory rates and detection of apnoea using analysis of real time image sequences captured in any given sleep position and under any light conditions (even in dark environments). A Microsoft Kinect sensor was used to visualize the variations in the thorax and abdomen caused by the respiratory rhythm. These variations were magnified, analyzed and detected at a distance of 2.5 m from the subject. A modified motion magnification system and a frame subtraction technique were used to identify breathing movements by detecting rapid motion areas in the magnified frame sequences. The experimental results on a set of video data from five subjects (3 h for each subject) showed that our monitoring system can accurately measure respiratory rate and therefore detect apnoea in infants and young children. The proposed system is feasible, accurate, safe and of low computational complexity, making it an efficient alternative for non-contact home sleep monitoring systems and advancing health care applications. PMID:28165382

  19. Feasibility of a Customized, In-Home, Game-Based Stroke Exercise Program Using the Microsoft Kinect® Sensor.

    PubMed

    Proffitt, Rachel; Lange, Belinda

    2015-01-01

    The objective of this study was to determine the feasibility of a 6-week, game-based, in-home telerehabilitation exercise program using the Microsoft Kinect® for individuals with chronic stroke. Four participants with chronic stroke completed the intervention based on games designed with the customized Mystic Isle software. The games were tailored to each participant's specific rehabilitation needs to facilitate the attainment of individualized goals determined through the Canadian Occupational Performance Measure. Likert scale questionnaires assessed the feasibility and utility of the game-based intervention. Supplementary clinical outcome data were collected. All participants played the games with moderately high enjoyment. Participant feedback helped identify barriers to use (especially, limited free time) and possible improvements. An in-home, customized, virtual reality game intervention to provide rehabilitative exercises for persons with chronic stroke is practicable. However, future studies are necessary to determine the intervention's impact on participant function, activity, and involvement.

  20. Depth-color fusion strategy for 3-D scene modeling with Kinect.

    PubMed

    Camplani, Massimo; Mantecon, Tomas; Salgado, Luis

    2013-12-01

    Low-cost depth cameras, such as the Microsoft Kinect, have completely changed the world of human-computer interaction through controller-free gaming applications. Depth data provided by the Kinect sensor presents several noise-related problems that have to be tackled to improve the accuracy of the depth data, thus obtaining more reliable game control platforms and broadening its applicability. In this paper, we present a depth-color fusion strategy for 3-D modeling of indoor scenes with Kinect. Accurate depth and color models of the background elements are iteratively built, and used to detect moving objects in the scene. Kinect depth data is processed with an innovative adaptive joint-bilateral filter that efficiently combines depth and color by analyzing an edge-uncertainty map and the detected foreground regions. Results show that the proposed approach efficiently tackles the main Kinect data problems: distance-dependent depth maps, spatial noise, and temporal random fluctuations are dramatically reduced; objects' depth boundaries are refined; and non-measured depth pixels are interpolated. Moreover, a robust depth and color background model and accurate moving object silhouettes are generated.
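
    As a rough sketch of the joint (cross) bilateral filtering idea underlying the proposed fusion, the unoptimized Python function below smooths a depth map using a registered intensity image as the range guide; the paper's adaptive weighting with an edge-uncertainty map and foreground regions is not reproduced here.

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=3, sigma_s=2.0, sigma_r=10.0):
    """Simplified joint (cross) bilateral filter: smooths a Kinect depth map while
    using a registered single-channel color/intensity image of the same size as the
    range guide, so that depth edges follow color edges."""
    depth = depth.astype(float)
    guide = guide.astype(float)
    h, w = depth.shape
    out = np.zeros_like(depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))        # spatial Gaussian term
    pad_d = np.pad(depth, radius, mode='edge')
    pad_g = np.pad(guide, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            win_d = pad_d[y:y + 2*radius + 1, x:x + 2*radius + 1]
            win_g = pad_g[y:y + 2*radius + 1, x:x + 2*radius + 1]
            rng = np.exp(-((win_g - guide[y, x])**2) / (2.0 * sigma_r**2))  # range term on the guide
            weights = spatial * rng
            valid = win_d > 0                   # skip non-measured Kinect pixels (depth == 0)
            wsum = (weights * valid).sum()
            out[y, x] = (weights * valid * win_d).sum() / wsum if wsum > 0 else 0.0
    return out
```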

  1. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian; Kasprzyk, Jerzy

    2016-02-01

    The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.

  2. Reliability and concurrent validity of the Microsoft Xbox One Kinect for assessment of standing balance and postural control.

    PubMed

    Clark, Ross A; Pua, Yong-Hao; Oliveira, Cristino C; Bower, Kelly J; Thilarajah, Shamala; McGaw, Rebekah; Hasanki, Ksaniel; Mentiplay, Benjamin F

    2015-07-01

    The Microsoft Kinect V2 for Windows, also known as the Xbox One Kinect, includes new and potentially far improved depth and image sensors which may increase its accuracy for assessing postural control and balance. The aim of this study was to assess the concurrent validity and reliability of kinematic data recorded using a marker-based three dimensional motion analysis (3DMA) system and the Kinect V2 during a variety of static and dynamic balance assessments. Thirty healthy adults performed two sessions, separated by one week, consisting of static standing balance tests under different visual (eyes open vs. closed) and supportive (single limb vs. double limb) conditions, and dynamic balance tests consisting of forward and lateral reach and an assessment of limits of stability. Marker coordinate and joint angle data were concurrently recorded using the Kinect V2 skeletal tracking algorithm and the 3DMA system. Task-specific outcome measures from each system on Day 1 and 2 were compared. Concurrent validity of trunk angle data during the dynamic tasks and anterior-posterior range and path length in the static balance tasks was excellent (Pearson's r>0.75). In contrast, concurrent validity for medial-lateral range and path length was poor to modest for all trials except single leg eyes closed balance. Within device test-retest reliability was variable; however, the results were generally comparable between devices. In conclusion, the Kinect V2 has the potential to be used as a reliable and valid tool for the assessment of some aspects of balance performance. Copyright © 2015 Elsevier B.V. All rights reserved.
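
    For reference, the static-balance outcomes mentioned above (anterior-posterior and medial-lateral range, and path length) can be computed from a centre-of-mass or trunk-marker trajectory with a few lines of NumPy; the function below is a generic sketch, not the authors' processing pipeline.

```python
import numpy as np

def sway_metrics(com_xy):
    """Common posturographic outcomes from an (N, 2) trajectory of
    [medial-lateral, anterior-posterior] coordinates in millimetres."""
    com_xy = np.asarray(com_xy, dtype=float)
    ml_range = np.ptp(com_xy[:, 0])                   # medial-lateral range
    ap_range = np.ptp(com_xy[:, 1])                   # anterior-posterior range
    steps = np.diff(com_xy, axis=0)                   # frame-to-frame displacements
    path_length = np.sum(np.linalg.norm(steps, axis=1))
    return {'ml_range': ml_range, 'ap_range': ap_range, 'path_length': path_length}
```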

  3. Performance Measurements for the Microsoft Kinect Skeleton

    DTIC Science & Technology

    2012-03-01

    This report measures the performance of skeleton tracking with the Microsoft Kinect for Xbox 360. Tests were conducted on a machine running Windows 7 Ultimate (Service Pack 1) with an Intel Core2 processor, with one, two, and three users present (although only two skeletons may be tracked simultaneously), and included measurements of how close to and how far from the sensor a user can be tracked.

  4. Design and Test of a Closed-Loop FES System for Supporting Function of the Hemiparetic Hand Based on Automatic Detection using the Microsoft Kinect sensor.

    PubMed

    Simonsen, Daniel; Spaich, Erika G; Hansen, John; Andersen, Ole K

    2016-10-26

    This paper describes the design of an FES system automatically controlled in a closed loop using a Microsoft Kinect sensor, for assisting both cylindrical grasping and hand opening. The feasibility of the system was evaluated in real time in stroke patients with hand function deficits. A hand function exercise was designed in which the subjects performed an arm and hand exercise in a sitting position. The subject had to grasp one of two differently sized cylindrical objects and move it forwards or backwards in the sagittal plane. This exercise was performed with each cylinder, with and without FES support. Results showed that the stroke patients were able to perform up to 29% more successful grasps when they were assisted by FES. Moreover, the hand grasp-and-hold and hold-and-release durations were shorter for the smaller of the two cylinders. FES was appropriately timed in more than 95% of all trials, indicating successful closed-loop FES control. Future studies should incorporate options for assisting forward reaching in order to target a larger group of stroke patients.

  5. Rank preserving sparse learning for Kinect based scene classification.

    PubMed

    Tao, Dapeng; Jin, Lianwen; Yang, Zhao; Li, Xuelong

    2013-10-01

    With the rapid development of RGB-D sensors and the promptly growing population of the low-cost Microsoft Kinect sensor, scene classification, which is a hard yet important problem in computer vision, has gained a resurgence of interest recently. That is because the depth information provided by the Kinect sensor opens an effective and innovative way for scene classification. In this paper, we propose a new scheme for scene classification, which applies locality-constrained linear coding (LLC) to local SIFT features for representing the RGB-D samples and classifies scenes through the cooperation between a new rank preserving sparse learning (RPSL) based dimension reduction and a simple classification method. RPSL considers four aspects: 1) it preserves the rank order information of the within-class samples in a local patch; 2) it maximizes the margin between the between-class samples on the local patch; 3) the L1-norm penalty is introduced to obtain the parsimony property; and 4) it minimizes the classification error using a least-squares criterion. Experiments are conducted on the NYU Depth V1 dataset and demonstrate the robustness and effectiveness of RPSL for scene classification.

  6. An evaluation of 3D head pose estimation using the Microsoft Kinect v2.

    PubMed

    Darby, John; Sánchez, María B; Butler, Penelope B; Loram, Ian D

    2016-07-01

    The Kinect v2 sensor supports real-time non-invasive 3D head pose estimation. Because the sensor is small, widely available and relatively cheap it has great potential as a tool for groups interested in measuring head posture. In this paper we compare the Kinect's head pose estimates with a marker-based record of ground truth in order to establish its accuracy. During movement of the head and neck alone (with static torso), we find average errors in absolute yaw, pitch and roll angles of 2.0±1.2°, 7.3±3.2° and 2.6±0.7°, and in rotations relative to the rest pose of 1.4±0.5°, 2.1±0.4° and 2.0±0.8°. Larger head rotations where it becomes difficult to see facial features can cause estimation to fail (10.2±6.1% of all poses in our static torso range of motion tests) but we found no significant changes in performance with the participant standing further away from Kinect - additionally enabling full-body pose estimation - or without performing face shape calibration, something which is not always possible for younger or disabled participants. Where facial features remain visible, the sensor has applications in the non-invasive assessment of postural control, e.g. during a programme of physical therapy. In particular, a multi-Kinect setup covering the full range of head (and body) movement would appear to be a promising way forward. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Feasibility of Using Microsoft Kinect to Assess Upper Limb Movement in Type III Spinal Muscular Atrophy Patients

    PubMed Central

    Siebourg-Polster, Juliane; Wolf, Detlef; Czech, Christian; Bonati, Ulrike; Fischer, Dirk; Khwaja, Omar; Strahm, Martin

    2017-01-01

    Although functional rating scales are being used increasingly as primary outcome measures in spinal muscular atrophy (SMA), sensitive and objective assessment of early-stage disease progression and drug efficacy remains challenging. We have developed a game based on the Microsoft Kinect sensor, specifically designed to measure active upper limb movement. An explorative study was conducted to determine the feasibility of this new tool in 18 ambulant SMA type III patients and 19 age- and gender-matched healthy controls. Upper limb movement was analysed elaborately through derived features such as elbow flexion and extension angles, arm lifting angle, velocity and acceleration. No significant differences were found in the active range of motion between ambulant SMA type III patients and controls. Hand velocity was found to be different but further validation is necessary. This study presents an important step in the process of designing and handling digital biomarkers as complementary outcome measures for clinical trials. PMID:28122039

  8. Arabic sign language recognition based on HOG descriptor

    NASA Astrophysics Data System (ADS)

    Ben Jmaa, Ahmed; Mahdi, Walid; Ben Jemaa, Yousra; Ben Hamadou, Abdelmajid

    2017-02-01

    We present in this paper a new approach for Arabic sign language (ArSL) alphabet recognition using hand gesture analysis. This analysis consists in extracting histogram of oriented gradients (HOG) features from a hand image and then using them to train an SVM model, which is used to recognize the ArSL alphabet in real time from hand gestures captured by a Microsoft Kinect camera. Our approach involves three steps: (i) hand detection and localization using a Microsoft Kinect camera, (ii) hand segmentation, and (iii) feature extraction and Arabic alphabet recognition. For each input image, first obtained using the depth sensor, we apply a method based on hand anatomy to segment the hand and eliminate erroneous pixels. This approach is invariant to scale, rotation and translation of the hand. Experimental results show the effectiveness of our new approach: the proposed ArSL system is able to recognize the ArSL alphabet with an accuracy of 90.12%.
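
    A minimal sketch of the HOG-plus-SVM pipeline described above, using scikit-image and scikit-learn; the HOG parameters shown are common defaults rather than the paper's exact settings, and the training data would come from the segmented Kinect hand images.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_descriptor(hand_image):
    """HOG features for a segmented, fixed-size grayscale hand image."""
    return hog(hand_image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

def train_classifier(hand_images, labels):
    """Train a linear SVM on HOG descriptors of labelled hand images."""
    features = np.array([hog_descriptor(img) for img in hand_images])
    clf = SVC(kernel='linear')
    clf.fit(features, labels)
    return clf

def recognize(clf, hand_image):
    """Predict the alphabet label for a new segmented hand image."""
    return clf.predict([hog_descriptor(hand_image)])[0]
```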

  9. Feasibility of Using Microsoft Kinect to Assess Upper Limb Movement in Type III Spinal Muscular Atrophy Patients.

    PubMed

    Chen, Xing; Siebourg-Polster, Juliane; Wolf, Detlef; Czech, Christian; Bonati, Ulrike; Fischer, Dirk; Khwaja, Omar; Strahm, Martin

    2017-01-01

    Although functional rating scales are being used increasingly as primary outcome measures in spinal muscular atrophy (SMA), sensitive and objective assessment of early-stage disease progression and drug efficacy remains challenging. We have developed a game based on the Microsoft Kinect sensor, specifically designed to measure active upper limb movement. An explorative study was conducted to determine the feasibility of this new tool in 18 ambulant SMA type III patients and 19 age- and gender-matched healthy controls. Upper limb movement was analysed elaborately through derived features such as elbow flexion and extension angles, arm lifting angle, velocity and acceleration. No significant differences were found in the active range of motion between ambulant SMA type III patients and controls. Hand velocity was found to be different but further validation is necessary. This study presents an important step in the process of designing and handling digital biomarkers as complementary outcome measures for clinical trials.

  10. Feasibility of a Customized, In-Home, Game-Based Stroke Exercise Program Using the Microsoft Kinect® Sensor

    PubMed Central

    PROFFITT, RACHEL; LANGE, BELINDA

    2015-01-01

    The objective of this study was to determine the feasibility of a 6-week, game-based, in-home telerehabilitation exercise program using the Microsoft Kinect® for individuals with chronic stroke. Four participants with chronic stroke completed the intervention based on games designed with the customized Mystic Isle software. The games were tailored to each participant’s specific rehabilitation needs to facilitate the attainment of individualized goals determined through the Canadian Occupational Performance Measure. Likert scale questionnaires assessed the feasibility and utility of the game-based intervention. Supplementary clinical outcome data were collected. All participants played the games with moderately high enjoyment. Participant feedback helped identify barriers to use (especially, limited free time) and possible improvements. An in-home, customized, virtual reality game intervention to provide rehabilitative exercises for persons with chronic stroke is practicable. However, future studies are necessary to determine the intervention’s impact on participant function, activity, and involvement. PMID:27563384

  11. The Kinect as an interventional tracking system

    NASA Astrophysics Data System (ADS)

    Wang, Xiang L.; Stolka, Philipp J.; Boctor, Emad; Hager, Gregory; Choti, Michael

    2012-02-01

    This work explores the suitability of low-cost sensors for "serious" medical applications, such as tracking of interventional tools in the OR, for simulation, and for education. Although such tracking - i.e. the acquisition of pose data e.g. for ultrasound probes, tissue manipulation tools, needles, but also tissue, bone etc. - is well established, it relies mostly on external devices such as optical or electromagnetic trackers, both of which mandate the use of special markers or sensors attached to each single entity whose pose is to be recorded, and also require their calibration to the tracked entity, i.e. the determination of the geometric relationship between the marker's and the object's intrinsic coordinate frames. The Microsoft Kinect sensor is a recently introduced device for full-body tracking in the gaming market, but it was quickly hacked - due to its wide range of tightly integrated sensors (RGB camera, IR depth and greyscale camera, microphones, accelerometers, and basic actuation) - and used beyond this area. As its field of view and its accuracy are within reasonable usability limits, we describe a medical needle-tracking system for interventional applications based on the Kinect sensor, standard biopsy needles, and no necessary attachments, thus saving both cost and time. Its twin cameras are used as a stereo pair to detect needle-shaped objects, reconstruct their pose in four degrees of freedom, and provide information about the most likely candidate.

  12. Feedback Control for a Smart Wheelchair Trainer Based on the Kinect Sensor

    NASA Astrophysics Data System (ADS)

    Darling, Aurelia McLaughlin

    This thesis describes a Microsoft Kinect-based feedback controller for a robot-assisted powered wheelchair trainer for children with a severe motor and/or cognitive disability. In one training mode, "computer gaming" mode, the wheelchair is allowed to rotate left and right while the children use a joystick to play video games shown on a screen in front of them. This enables them to learn the use of the joystick in a motivating environment, while experiencing the sensation and dynamics of turning in a safe setting. During initial pilot testing of the device, it was found that the wheelchair would creep forward while children were playing the games. This thesis presents a mathematical model of the wheelchair dynamics that explains the origin of the creep as a center of gravity offset from the wheel axis or a mismatch of the torques applied to the chair. Given these possible random perturbations, a feedback controller was developed to cancel these effects, correcting the system creep. The controller uses a Microsoft Kinect sensor to detect the distance to the screen displaying the computer game, as well as the left-right position (parallel parking concept) with respect to the screen, and then adjusts the wheel torque commands based on this measurement. We show through experimental testing that this controller effectively stops the creep. An added benefit of the feedback controller is that it approximates a washout filter, such as those used in aircraft simulators, to convey a more realistic sense of forward/backward motion during game play.
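
    The creep-cancelling idea can be illustrated with a simple proportional correction based on the Kinect-measured distance to the screen; the gain, sign convention, and function name below are purely illustrative assumptions and are not the controller actually derived in the thesis.

```python
def creep_correction(distance_to_screen, reference_distance, kp=0.5):
    """Proportional correction term added to the wheel torque commands.

    If the Kinect measures the chair drifting toward the screen (distance smaller
    than the reference), the returned adjustment opposes that drift; gains and
    signs here are illustrative only."""
    error = reference_distance - distance_to_screen   # positive when the chair has crept forward
    return -kp * error                                # torque adjustment opposing the creep
```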

  13. Geometric investigation of a gaming active device

    NASA Astrophysics Data System (ADS)

    Menna, Fabio; Remondino, Fabio; Battisti, Roberto; Nocerino, Erica

    2011-07-01

    3D imaging systems are widely available and used for surveying, modeling and entertainment applications, but clear statements regarding their characteristics, performance and limitations are still missing. The VDI/VDE and the ASTM E57 committees are trying to set some standards, but the commercial market is not reacting properly. Since many new users are approaching these 3D recording methodologies, clear statements and information clarifying whether a package or system satisfies certain requirements before investing are fundamental for those users who are not really familiar with these technologies. Recently, small and portable consumer-grade active sensors came on the market, like TOF range-imaging cameras or low-cost triangulation-based range sensors. A quite interesting active system was produced by PrimeSense and launched on the market, thanks to the Microsoft Xbox project, with the name of Kinect. The article reports a geometric investigation of the Kinect active sensor, considering its measurement performance, the accuracy of the retrieved range data and the possibility of using it for 3D modeling applications.

  14. Improved kinect-based spatiotemporal and kinematic treadmill gait assessment.

    PubMed

    Eltoukhy, Moataz; Oh, Jeonghoon; Kuenze, Christopher; Signorile, Joseph

    2017-01-01

    A cost-effective, clinician-friendly gait assessment tool that can automatically track patients' anatomical landmarks can provide practitioners with important information that is useful in prescribing rehabilitative and preventive therapies. This study investigated the validity and reliability of the Microsoft Kinect v2 as a potential inexpensive gait analysis tool. Ten healthy subjects walked on a treadmill at 1.3 and 1.6 m·s⁻¹, as spatiotemporal parameters and kinematics were extracted concurrently using the Kinect and three-dimensional motion analysis. Spatiotemporal measures included step length and width, step and stride times, vertical and mediolateral pelvis motion, and foot swing velocity. Kinematic outcomes included hip, knee, and ankle joint angles in the sagittal plane. The absolute agreement and relative consistency between the two systems were assessed using intraclass correlation coefficients (ICC2,1), while reproducibility between systems was established using Lin's concordance correlation coefficient (rc). Comparison of ensemble curves and the associated 90% confidence intervals (CI90) of the hip, knee, and ankle joint angles was performed to investigate whether the Kinect sensor could consistently and accurately assess lower extremity joint motion throughout the gait cycle. Results showed that the Kinect v2 sensor has the potential to be an effective clinical assessment tool for sagittal plane knee and hip joint kinematics, as well as some spatiotemporal variables including pelvis displacement and step characteristics during the gait cycle. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. A High Spatial Resolution Depth Sensing Method Based on Binocular Structured Light

    PubMed Central

    Yao, Huimin; Ge, Chenyang; Xue, Jianru; Zheng, Nanning

    2017-01-01

    Depth information has been used in many fields because of its low cost and easy availability since the Microsoft Kinect was released. However, the Kinect and Kinect-like RGB-D sensors show limited performance in certain applications that place high demands on the accuracy and robustness of depth information. In this paper, we propose a depth sensing system that contains a laser projector similar to that used in the Kinect, and two infrared cameras located on both sides of the laser projector, to obtain higher spatial resolution depth information. We apply a block-matching algorithm to estimate the disparity. To improve the spatial resolution, we reduce the size of the matching blocks, but smaller matching blocks yield lower matching precision. To address this problem, we combine two matching modes (binocular mode and monocular mode) in the disparity estimation process. Experimental results show that our method can obtain higher spatial resolution depth without loss of range image quality, compared with the Kinect. Furthermore, our algorithm is implemented on a low-cost hardware platform, and the system supports a resolution of 1280 × 960 at up to 60 frames per second for depth image sequences. PMID:28397759
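
    The block-matching step can be illustrated with a basic sum-of-absolute-differences search (Python/NumPy); the paper's combination of binocular and monocular matching modes and its hardware implementation are not reproduced in this sketch.

```python
import numpy as np

def disparity_sad(left, right, block=5, max_disp=64):
    """Basic block matching: for each pixel in the left IR image, find the horizontal
    offset in the right image that minimizes the sum of absolute differences (SAD)
    over a `block` x `block` window."""
    left = left.astype(float)
    right = right.astype(float)
    h, w = left.shape
    r = block // 2
    disparity = np.zeros((h, w), dtype=np.float32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            best_d, best_cost = 0, np.inf
            for d in range(min(max_disp, x - r) + 1):        # stay inside the right image
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disparity[y, x] = best_d
    return disparity
```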

  16. Recognition of a Person Wearing Sport Shoes or High Heels through Gait Using Two Types of Sensors.

    PubMed

    Derlatka, Marcin; Bogdan, Mariusz

    2018-05-21

    Biometrics is currently an area that is both very interesting and rapidly growing. Among the various types of biometrics, human gait recognition seems to be one of the most intriguing. However, one of the greatest problems within this field of biometrics is the change in gait caused by footwear. A change of shoes results in a significant lowering of accuracy in the recognition of people. The following work presents a method which uses data gathered by two sensors, force plates and the Microsoft Kinect v2, to reduce this problem. The Microsoft Kinect is utilized to measure the body height of a person, which allows the set of recognized people to be reduced to only those whose height is similar to that which has been measured. The entire process is preceded by identifying the type of footwear which the person is wearing. The research was conducted on data obtained from 99 people (more than 3400 strides), and the proposed method allowed us to reach a Correct Classification Rate (CCR) greater than 88%, which, in comparison to earlier methods reaching CCRs below 80%, is a significant improvement. The work presents the advantages as well as the limitations of the proposed method.
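
    The height-based pruning step described above can be sketched as a simple filter over an enrolled gallery; the data structure, function name, and tolerance value are assumptions for illustration only.

```python
def candidates_by_height(measured_height, gallery, tolerance=0.03):
    """Restrict gait matching to enrolled subjects whose body height (in metres,
    here taken from a Kinect measurement) is within `tolerance` of the probe's height.

    `gallery` maps subject id -> enrolled height; the structure is illustrative."""
    return [subject for subject, height in gallery.items()
            if abs(height - measured_height) <= tolerance]
```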

  17. Use of the Microsoft Kinect system to characterize balance ability during balance training.

    PubMed

    Lim, Dohyung; Kim, ChoongYeon; Jung, HoHyun; Jung, Dukyoung; Chun, Keyoung Jin

    2015-01-01

    The risk of falling increases significantly in the elderly because of deterioration of the neural musculature regulatory mechanisms. Several studies have investigated methods of preventing falling using real-time systems to evaluate balance; however, it is difficult to monitor the results of such characterizations in real time. Herein, we describe the use of Microsoft's Kinect depth sensor system to evaluate balance in real time. Six healthy male adults (25.5±1.8 years, 173.9±6.4 cm, 71.4±6.5 kg, and 23.6±2.4 kg/m²), with normal balance abilities and with no musculoskeletal disorders, were selected to participate in the experiment. Movements of the participants were induced by controlling the base plane of the balance training equipment in various directions. The dynamic motion of the subjects was measured using two Kinect depth sensor systems and a three-dimensional motion capture system with eight infrared cameras. The two systems yielded similar results for changes in the center of body mass (P>0.05) with a large Pearson's correlation coefficient of γ>0.60. The results for the two systems showed similarity in the mean lower-limb joint angle with flexion-extension movements, and these values were highly correlated (hip joint: within approximately 4.6°; knee joint: within approximately 8.4°) (0.40<γ<0.74) (P>0.05). Large differences with a low correlation were, however, observed for the lower-limb joint angle in relation to abduction-adduction and internal-external rotation motion (γ<0.40) (P<0.05). These findings show that clinical and dynamic accuracy can be achieved using the Kinect system in balance training by measuring changes in the center of body mass and flexion-extension movements of the lower limbs, but not abduction-adduction and internal-external rotation.

  18. Motor Rehabilitation Using Kinect: A Systematic Review.

    PubMed

    Da Gama, Alana; Fallavollita, Pascal; Teichrieb, Veronica; Navab, Nassir

    2015-04-01

    Interactive systems are being developed with the intention to help in the engagement of patients on various therapies. Amid the recent technological advances, Kinect™ from Microsoft (Redmond, WA) has helped pave the way on how user interaction technology facilitates and complements many clinical applications. In order to examine the actual status of Kinect developments for rehabilitation, this article presents a systematic review of articles that involve interactive, evaluative, and technical advances related to motor rehabilitation. Systematic research was performed in the IEEE Xplore and PubMed databases using the key word combination "Kinect AND rehabilitation" with the following inclusion criteria: (1) English language, (2) page number >4, (3) Kinect system for assistive interaction or clinical evaluation, or (4) Kinect system for improvement or evaluation of the sensor tracking or movement recognition. Quality assessment was performed by QualSyst standards. In total, 109 articles were found in the database research, from which 31 were included in the review: 13 were focused on the development of assistive systems for rehabilitation, 3 in evaluation, 3 in the applicability category, 7 on validation of Kinect anatomic and clinical evaluation, and 5 on improvement techniques. Quality analysis of all included articles is also presented with their respective QualSyst checklist scores. Research and development possibilities and future works with the Kinect for rehabilitation application are extensive. Methodological improvements when performing studies on this area need to be further investigated.

  19. Volcanological applications of the Kinect sensor

    NASA Astrophysics Data System (ADS)

    Tortini, R.; Carn, S. A.

    2012-12-01

    The Kinect is a motion capture device designed for the Microsoft Xbox system. The device comprises a visible (RGB) camera and an infrared (IR) camera, refractor and light emitter emitting a known structured light pattern at a near-infrared wavelength of 830 nm, plus a three-axis accelerometer and four microphones. Moreover, by combining the signal from the IR camera and the light emitter it is possible to produce a distance image (depth). Thanks to the efforts of the free and open source software community, the Kinect, although originally intended for videogames, can be exploited as a short-range, low-cost LiDAR sensor by scientists in various fields. The main limitation of the Kinect is its working distance, which ranges from ~0.5 to 15 m, with a distance sensitivity of ~1 mm at 0.5 m and ~8 cm at 5 m estimated by Mankoff et al. (2011). We will present the calibration process for the RGB, depth and IR intensity images after their co-registration, together with a sensitivity analysis of the IR intensity with respect to the color spectrum. We expect the intensity to exhibit a non-linear correlation with distance of the target from the sensor, with lower sensitivity and larger errors at greater distances. We envisage several possible applications of the small-scale, precise topographic data acquired by the Kinect in volcanology, and solicit other ideas from the community. Possible applications could include monitoring of light tephra accumulation to characterize mass flux, monitoring of active lava flows or mapping inactive lava tubes, capturing topographic data on the outcrop scale, mapping surface roughness variations on volcanic mass flow deposits, or visualizing analog volcano models in the lab. As a demonstration, we will present an application of the Kinect as a tool for 3D visualization of volcanic rock samples. Data will be collected with free and open source software, demonstrating the cost-effectiveness of the Kinect for volcanological applications, particularly where conditions may be unsuitable for the deployment of more costly instruments. K.D. Mankoff, T.A. Russo, B.K. Norris, S. Hossainzadeh, L. Beem, J.I. Walter, and S.M. Tulaczyk, "Kinects as sensors in earth science: glaciological, geomorphological, and hydrological applications". AGU Fall Meeting 2012, San Francisco (USA), poster.

  20. Virtual GEOINT Center: C2ISR through an avatar's eyes

    NASA Astrophysics Data System (ADS)

    Seibert, Mark; Tidbal, Travis; Basil, Maureen; Muryn, Tyler; Scupski, Joseph; Williams, Robert

    2013-05-01

    As the number of devices collecting and sending data in the world is increasing, finding ways to visualize and understand that data is becoming more and more of a problem. This has often been coined as the problem of "Big Data." The Virtual Geoint Center (VGC) aims to aid in solving that problem by providing a way to combine the use of the virtual world with outside tools. Using open-source software such as OpenSim and Blender, the VGC uses a visually stunning 3D environment to display the data sent to it. The VGC is broken up into two major components: the Kinect Minimap and the Geoint Map. The Kinect Minimap uses the Microsoft Kinect and its open-source software to make a miniature display of people the Kinect detects in front of it. The Geoint Map collects smartphone sensor information from online databases and displays it in real time onto a map generated by Google Maps. By combining outside tools and the virtual world, the VGC can help a user "visualize" data, and provide additional tools to "understand" the data.

  1. Measuring the Negative Impact of Long Sitting Hours at High School Students Using the Microsoft Kinect.

    PubMed

    Gal-Nadasan, Norbert; Gal-Nadasan, Emanuela Georgiana; Stoicu-Tivadar, Vasile; Poenaru, Dan V; Popa-Andrei, Diana

    2017-01-01

    This paper suggests using the Microsoft Kinect to detect the onset of scoliosis in high school students due to incorrect sitting positions. The measurement is done by assessing the overall posture in the orthostatic position using the Microsoft Kinect. During the measuring process, several key points of the human body, such as the hips and shoulders, are tracked to form the postural data. The test was done on 30 high school students who spend 6 to 7 hours per day at school desks. The postural data are statistically processed with IBM Watson Analytics. The statistical analysis showed that prolonged sitting at such young ages negatively affects the spine and facilitates the development of harmful postures such as scoliosis and lordosis.
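
    As a rough illustration of how tracked shoulder and hip landmarks can be turned into postural data, the sketch below computes a frontal-plane tilt angle from a pair of bilateral joint positions. The function name and the example coordinates are hypothetical; the study's actual processing pipeline is not described in this abstract.

    ```python
    import math

    def frontal_tilt_deg(left_xy, right_xy):
        """Tilt of the line joining two bilateral landmarks, relative to horizontal.

        left_xy, right_xy: (x, y) coordinates in metres for the left and right
        landmark (e.g. shoulders or hips). 0 deg means the landmarks are level;
        a positive value means the left landmark sits higher than the right.
        """
        dy = left_xy[1] - right_xy[1]
        dx = abs(left_xy[0] - right_xy[0])
        return math.degrees(math.atan2(dy, dx))

    # Hypothetical Kinect joint positions (metres, camera space)
    shoulder_tilt = frontal_tilt_deg((-0.18, 1.42), (0.17, 1.39))
    hip_tilt = frontal_tilt_deg((-0.11, 0.95), (0.12, 0.96))
    print(f"shoulder tilt {shoulder_tilt:.1f} deg, hip tilt {hip_tilt:.1f} deg")
    ```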

  2. Kinect the dots: 3D control of optical tweezers

    NASA Astrophysics Data System (ADS)

    Shaw, Lucy; Preece, Daryl; Rubinsztein-Dunlop, Halina

    2013-07-01

    Holographically generated optical traps confine micron- and sub-micron sized particles close to the center of focused light beams. They also provide a way of trapping multiple particles and moving them in three dimensions. However, in many systems the user interface is not always advantageous or intuitive, especially for collaborative work and when depth information is required. We discuss and evaluate a set of multi-beam optical tweezers that utilize off-the-shelf gaming technology to facilitate user interaction. We use the Microsoft Kinect sensor bar as a way of getting the user input required to generate arbitrary optical force fields and control optically trapped particles. We demonstrate that the system can also be used for dynamic light control.

  3. Validity of an Interactive Functional Reach Test.

    PubMed

    Galen, Sujay S; Pardo, Vicky; Wyatt, Douglas; Diamond, Andrew; Brodith, Victor; Pavlov, Alex

    2015-08-01

    Videogaming platforms such as the Microsoft (Redmond, WA) Kinect® are increasingly being used in rehabilitation to improve balance performance and mobility. These gaming platforms do not have built-in clinical measures that offer clinically meaningful data. We have now developed software that will enable the Kinect sensor to assess a patient's balance using an interactive functional reach test (I-FRT). The aim of the study was to test the concurrent validity of the I-FRT and to establish the feasibility of implementing the I-FRT in a clinical setting. The concurrent validity of the I-FRT was tested among 20 healthy adults (mean age, 25.8±3.4 years; 14 women). The Functional Reach Test (FRT) was measured simultaneously by both the Kinect sensor using the I-FRT software and the Optotrak Certus® 3D motion-capture system (Northern Digital Inc., Waterloo, ON, Canada). The feasibility of implementing the I-FRT in a clinical setting was assessed by performing the I-FRT in 10 participants with mild balance impairments recruited from the outpatient physical therapy clinic (mean age, 55.8±13.5 years; four women) and obtaining their feedback using a NASA Task Load Index (NASA-TLX) questionnaire. There was moderate to good agreement between FRT measures made by the two measurement systems. The greatest agreement between the two measurement systems was found with the Kinect sensor placed at a distance of 2.5 m [intraclass correlation coefficient (2,k)=0.786; P<0.001] from the participant. Participants with mild balance impairments whose balance was assessed using the I-FRT software scored their experience favorably by assigning lower scores for the Frustration, Mental Demand, and Temporal Demand subscales on the NASA-TLX questionnaire. FRT measures made using the Kinect sensor I-FRT software provide a valid clinical measure that can be used with the gaming platforms.

  4. Kinect Technology Game Play to Mimic Quake Catcher Network (QCN) Sensor Deployment During a Rapid Aftershock Mobilization Program (RAMP)

    NASA Astrophysics Data System (ADS)

    Kilb, D. L.; Yang, A.; Rohrlick, D.; Cochran, E. S.; Lawrence, J.; Chung, A. I.; Neighbors, C.; Choo, Y.

    2011-12-01

    The Kinect technology allows for hands-free game play, greatly increasing the accessibility of gaming for those uncomfortable using controllers. The Kinect camera transmits invisible near-infrared light and measures its "time of flight" after reflecting off an object, allowing it to distinguish objects within 1 centimeter in depth and 3 mm in height and width. The middleware can also respond to body gestures and voice commands. Here, we use the Kinect Windows SDK software to create a game that mimics how scientists deploy seismic instruments following a large earthquake. The educational goal of the game is to allow the players to explore 3D space as they learn about the Quake Catcher Network's (QCN) Rapid Aftershock Mobilization Program (RAMP). Many of the scenarios within the game are taken from factual RAMP experiences. To date, only the PC platform (or a Mac running PC emulator software) is available for use, but we hope to move to other platforms (e.g., Xbox 360, iPad, iPhone) as they become available. The game is written in the programming language C# using Microsoft XNA and Visual Studio 2010, graphic shading is added using High Level Shader Language (HLSL), and rendering is produced using XNA's graphics libraries. Key elements of the game include selecting sensor locations, adequately installing the sensor, and monitoring the incoming data. During game play, aftershocks can occur unexpectedly, as can other problems that require attention (e.g., power outages, equipment failure, and theft). The player accrues points for quickly deploying the first sensor (recording as many initial aftershocks as possible), correctly installing the sensors (orientation with respect to north, properly securing, and testing), distributing the sensors adequately in the region, and troubleshooting problems. One can also net points for efficient use of game play time. Setting up for game play in your local environment requires: (1) the Kinect hardware (~$145); (2) a computer with a Windows operating system (Mac users can use a Windows emulator); and (3) our free QCN game software (available from http://quakeinfo.ucsd.edu/ dkilb/WEB/QCN/Downloads.html).

  5. Design and test of a Microsoft Kinect-based system for delivering adaptive visual feedback to stroke patients during training of upper limb movement.

    PubMed

    Simonsen, Daniel; Popovic, Mirjana B; Spaich, Erika G; Andersen, Ole Kæseler

    2017-11-01

    The present paper describes the design and test of a low-cost Microsoft Kinect-based system for delivering adaptive visual feedback to stroke patients during the execution of an upper limb exercise. Eleven sub-acute stroke patients with varying degrees of upper limb function were recruited. Each subject participated in a control session (repeated twice) and a feedback session (repeated twice). In each session, the subjects were presented with a rectangular pattern displayed on a vertically mounted monitor embedded in the table in front of the patient. The subjects were asked to move a marker inside the rectangular pattern by using their most affected hand. During the feedback session, the thickness of the rectangular pattern was changed according to the performance of the subject, and the color of the marker changed according to its position, thereby guiding the subject's movements. In the control session, the thickness of the rectangular pattern and the color of the marker did not change. The results showed that the movement similarity and smoothness were higher in the feedback session than in the control session, while the duration of the movement was longer. The present study showed that adaptive visual feedback delivered by use of the Kinect sensor can increase the similarity and smoothness of upper limb movement in stroke patients.

  6. Automated In-Home Fall Risk Assessment and Detection Sensor System for Elders.

    PubMed

    Rantz, Marilyn; Skubic, Marjorie; Abbott, Carmen; Galambos, Colleen; Popescu, Mihail; Keller, James; Stone, Erik; Back, Jessie; Miller, Steven J; Petroski, Gregory F

    2015-06-01

    Falls are a major problem for elderly people, leading to injury, disability, and even death. An unobtrusive, in-home sensor system that continuously monitors older adults for fall risk and detects falls could revolutionize fall prevention and care. A fall risk and detection system was developed and installed in the apartments of 19 older adults at a senior living facility. The system includes pulse-Doppler radar, a Microsoft Kinect, and 2 web cameras. To collect data for comparison with sensor data and for algorithm development, stunt actors performed falls in participants' apartments each month for 2 years and participants completed fall risk assessments (FRAs) using clinically valid, standardized instruments. The FRAs were scored by clinicians and recorded by the sensing modalities. Participants' gait parameters were measured as they walked on a GAITRite mat. These data were used as ground truth, objective data to use in algorithm development and to compare with radar- and Kinect-generated variables. All FRAs are highly correlated (p < .01) with the Kinect gait velocity and Kinect stride length. Radar velocity is correlated (p < .05) to all the FRAs and highly correlated (p < .01) to most. Real-time alerts of actual falls are being sent to clinicians, providing faster responses to urgent situations. The in-home FRA and detection system has the potential to help older adults remain independent, maintain functional ability, and live at home longer. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. Measuring and Inferring the State of the User via the Microsoft Kinect with Application to Cyber Security Research

    DTIC Science & Technology

    2018-01-16

    ARL-TN-0864, January 2018. US Army Research Laboratory. Measuring and Inferring the State of the User via the Microsoft Kinect with Application to Cyber Security Research, by Christopher J Garneau. Approved for public release; distribution is unlimited.

  8. Development and preliminary validation of an interactive remote physical therapy system.

    PubMed

    Mishra, Anup K; Skubic, Marjorie; Abbott, Carmen

    2015-01-01

    In this paper, we present an interactive physical therapy system (IPTS) for remote quantitative assessment of clients in the home. The system consists of two different interactive interfaces connected through a network, for a real-time low latency video conference using audio, video, skeletal, and depth data streams from a Microsoft Kinect. To test the potential of IPTS, experiments were conducted with 5 independent living senior subjects in Kansas City, MO. Also, experiments were conducted in the lab to validate the real-time biomechanical measures calculated using the skeletal data from the Microsoft Xbox 360 Kinect and Microsoft Xbox One Kinect, with ground truth data from a Vicon motion capture system. Good agreements were found in the validation tests. The results show potential capabilities of the IPTS system to provide remote physical therapy to clients, especially older adults, who may find it difficult to visit the clinic.

  9. Kinect-based virtual rehabilitation and evaluation system for upper limb disorders: A case study.

    PubMed

    Ding, W L; Zheng, Y Z; Su, Y P; Li, X L

    2018-04-19

    To help patients with disabilities of the arm and shoulder recover the accuracy and stability of movements, a novel and simple virtual rehabilitation and evaluation system called the Kine-VRES system was developed using Microsoft Kinect. First, several movements and virtual tasks were designed to increase the coordination, control and speed of the arm movements. The movements of the patients were then captured using the Kinect sensor, and kinematics-based interaction and real-time feedback were integrated into the system to enhance the motivation and self-confidence of the patient. Finally, a quantitative evaluation method of upper limb movements was provided using the recorded kinematics during hand-to-hand movement. A preliminary study of this rehabilitation system indicates that the shoulder movements of two participants with ataxia became smoother after three weeks of training (one hour per day). This case study demonstrated the effectiveness of the designed system, which could be promising for the rehabilitation of patients with upper limb disorders.

  10. Comparative analysis of respiratory motion tracking using Microsoft Kinect v2 sensor.

    PubMed

    Silverstein, Evan; Snyder, Michael

    2018-05-01

    To present and evaluate a straightforward implementation of a marker-less, respiratory motion-tracking process utilizing the Kinect v2 camera as a gating tool during 4DCT or during radiotherapy treatments. Utilizing the depth sensor on the Kinect as well as author-written C# code, respiratory motion of a subject was tracked by recording depth values obtained at user-selected points on the subject, with each point representing one pixel on the depth image. As a patient breathes, specific anatomical points on the chest/abdomen will move slightly within the depth image across pixels. By tracking how depth values change for a specific pixel, instead of how the anatomical point moves throughout the image, a respiratory trace can be obtained based on changing depth values of the selected pixel. Tracking these values was implemented via a marker-less setup. Varian's RPM system and the Anzai belt system were used in tandem with the Kinect to compare respiratory traces obtained by each using two different subjects. Analysis of the depth information from the Kinect for purposes of phase- and amplitude-based binning correlated well with the RPM and Anzai systems. Interquartile Range (IQR) values were obtained comparing times correlated with specific amplitude and phase percentages against each product. The IQR time spans indicated the Kinect would measure specific percentage values within 0.077 s for Subject 1 and 0.164 s for Subject 2 when compared to values obtained with RPM or Anzai. For 4DCT scans, these times correlate to less than 1 mm of couch movement and would create an offset of half an acquired slice. By tracking depth values of user-selected pixels within the depth image, rather than tracking specific anatomical locations, respiratory motion can be tracked and visualized utilizing the Kinect with results comparable to those of the Varian RPM and Anzai belt. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
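
    The pixel-tracking idea described above translates directly into a few lines of array code. The original work used author-written C# against the Kinect SDK; the sketch below is a Python/NumPy re-statement of the same idea, with hypothetical pixel coordinates and synthetic frames used only to show the shape of the data.

    ```python
    import numpy as np

    def respiratory_trace(depth_frames, pixels):
        """Build respiratory traces by tracking depth values at fixed pixels.

        depth_frames: iterable of 2D depth images (mm), all of the same shape.
        pixels: list of (row, col) user-selected points on the chest/abdomen.
        Returns an array of shape (n_frames, n_pixels) of baseline-removed depths.
        """
        trace = np.array([[frame[r, c] for (r, c) in pixels] for frame in depth_frames],
                         dtype=float)
        # Remove each pixel's mean depth so the trace oscillates around zero.
        return trace - trace.mean(axis=0)

    # Hypothetical usage: 300 synthetic frames of 424x512 Kinect v2 depth data at 30 fps
    frames = [np.random.randint(500, 1500, (424, 512)) for _ in range(300)]
    trace = respiratory_trace(frames, pixels=[(200, 250), (220, 260)])
    ```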

  11. SU-E-I-91: Development of a Compact Radiographic Simulator Using Microsoft Kinect.

    PubMed

    Ono, M; Kozono, K; Aoki, M; Mizoguchi, A; Kamikawa, Y; Umezu, Y; Arimura, H; Toyofuku, F

    2012-06-01

    A radiographic simulator system is useful for learning radiographic techniques and confirmation of positioning before x-ray irradiation. Conventional x-ray simulators have drawbacks in cost and size, and are only applicable to situations in which the position of the object does not change. Therefore, we have developed a new radiographic simulator system using an infrared-ray based three-dimensional shape measurement device (Microsoft Kinect). We made a computer program using OpenCV and OpenNI for processing of depth image data obtained from Kinect, and calculated the exact distance from Kinect to the object by calibration. The object was measured from various directions, and the positional relationship between the x-ray tube and the object was obtained. X-ray projection images were calculated by projecting x-rays onto the mathematical three-dimensional CT data of a head phantom with almost the same size. The object was rotated from 0 degree (standard position) through 90 degrees in increments of 10 degrees, and the accuracy of the measured rotation angle values was evaluated. In order to improve the computational time, the projection image size was changed (512*512, 256*256, and 128*128). The x-ray simulation images corresponding to the radiographic images produced by using the x-ray tube were obtained. The three-dimensional position of the object was measured with good precision from 0 to 50 degrees, but above 50 degrees, measured position error increased with the increase of the rotation angle. The computational times were 30, 12, and 7 seconds for 512*512, 256*256, and 128*128, respectively. We could measure the three-dimensional position of the object using a properly calibrated Kinect sensor, and obtained projection images at relatively high speed using the three-dimensional CT data. It was suggested that this system can be used for obtaining simulated projection x-ray images before x-ray exposure by attaching this device onto an x-ray tube. © 2012 American Association of Physicists in Medicine.

  12. A Kinect-based system for automatic recording of some pigeon behaviors.

    PubMed

    Lyons, Damian M; MacDonall, James S; Cunningham, Kelly M

    2015-12-01

    Contact switches and touch screens are the state of the art for recording pigeons' pecking behavior. Recording other behavior, however, requires a different sensor for each behavior, and some behaviors cannot easily be recorded. We present a flexible and inexpensive image-based approach to detecting and counting pigeon behaviors that is based on the Kinect sensor from Microsoft. Although the system is as easy to set up and use as the standard approaches, it is more flexible because it can record behaviors in addition to key pecking. In this article, we show how both the fast, fine motion of key pecking and the gross body activity of feeding can be measured. Five pigeons were trained to peck at a lighted contact switch, a pigeon key, to obtain food reward. The timing of the pecks and the food reward signals were recorded in a log file using standard equipment. The Kinect-based system, called BehaviorWatch, also measured the pecking and feeding behavior and generated a different log file. For key pecking, BehaviorWatch had an average sensitivity of 95% and a precision of 91%, which were very similar to the pecking measurements from the standard equipment. For detecting feeding activity, BehaviorWatch had a sensitivity of 95% and a precision of 97%. These results allow us to demonstrate that an advantage of the Kinect-based approach is that it can also be reliably used to measure activity other than key pecking.

  13. Low-dimensional dynamical characterization of human performance of cancer patients using motion data.

    PubMed

    Hasnain, Zaki; Li, Ming; Dorff, Tanya; Quinn, David; Ueno, Naoto T; Yennu, Sriram; Kolatkar, Anand; Shahabi, Cyrus; Nocera, Luciano; Nieva, Jorge; Kuhn, Peter; Newton, Paul K

    2018-05-18

    Biomechanical characterization of human performance with respect to fatigue and fitness is relevant in many settings; however, it is usually limited to either fully qualitative assessments or invasive methods that require a significant experimental setup consisting of numerous sensors, force plates, and motion detectors. Qualitative assessments are difficult to standardize due to their intrinsic subjective nature; invasive methods, on the other hand, provide reliable metrics but are not feasible for large-scale applications. Presented here is a dynamical toolset for detecting performance groups using a non-invasive system based on the Microsoft Kinect motion capture sensor, and a case study of 37 cancer patients performing two clinically monitored tasks before and after therapy regimens. Dynamical features are extracted from the motion time series data and evaluated based on their ability to i) cluster patients into coherent fitness groups using unsupervised learning algorithms and to ii) predict Eastern Cooperative Oncology Group performance status via supervised learning. The unsupervised patient clustering is comparable to clustering based on physician-assigned Eastern Cooperative Oncology Group status in that they both have similar concordance with change in weight before and after therapy as well as unexpected hospitalizations throughout the study. The extracted dynamical features can predict physician, coordinator, and patient Eastern Cooperative Oncology Group status with an accuracy of approximately 80%. The non-invasive Microsoft Kinect sensor and the proposed dynamical toolset, comprising data preprocessing, feature extraction, dimensionality reduction, and machine learning, offer a low-cost and general method for performance segregation and can complement existing qualitative clinical assessments. Copyright © 2018 Elsevier Ltd. All rights reserved.
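
    The abstract's pipeline, feature extraction from motion time series followed by unsupervised clustering, can be outlined as follows. The specific descriptors here (speed and jerk statistics) and the two-cluster KMeans step are illustrative stand-ins for the paper's dynamical toolset, not a reproduction of it.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def dynamical_features(series, fs=30.0):
        """Simple dynamical descriptors of one joint trajectory (n_frames x 3).

        Generic stand-ins (speed and jerk statistics), not the paper's feature set.
        """
        vel = np.gradient(series, 1.0 / fs, axis=0)
        acc = np.gradient(vel, 1.0 / fs, axis=0)
        speed = np.linalg.norm(vel, axis=1)
        jerk = np.linalg.norm(np.gradient(acc, 1.0 / fs, axis=0), axis=1)
        return np.array([speed.mean(), speed.std(), jerk.mean(), jerk.std()])

    # Hypothetical cohort: one synthetic trajectory per patient, clustered into 2 groups
    rng = np.random.default_rng(0)
    X = np.vstack([dynamical_features(rng.normal(size=(300, 3)).cumsum(axis=0))
                   for _ in range(37)])
    groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    ```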

  14. Evaluation of the Microsoft Kinect for screening ACL injury.

    PubMed

    Stone, Erik E; Butler, Michael; McRuer, Aaron; Gray, Aaron; Marks, Jeffrey; Skubic, Marjorie

    2013-01-01

    A study was conducted to evaluate the use of the skeletal model generated by the Microsoft Kinect SDK in capturing four biomechanical measures during the Drop Vertical Jump test. These measures, which include: knee valgus motion from initial contact to peak flexion, frontal plane knee angle at initial contact, frontal plane knee angle at peak flexion, and knee-to-ankle separation ratio at peak flexion, have proven to be useful in screening for future knee anterior cruciate ligament (ACL) injuries among female athletes. A marker-based Vicon motion capture system was used for ground truth. Results indicate that the Kinect skeletal model likely has acceptable accuracy for use as part of a screening tool to identify elevated risk for ACL injury.
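
    A frontal-plane knee angle of the kind screened here can be computed from three Kinect joint positions as the angle between the projected thigh and shank vectors. The sketch below is a generic formulation with hypothetical coordinates; it is not the thresholding or screening logic used in the study.

    ```python
    import numpy as np

    def frontal_plane_knee_angle(hip, knee, ankle):
        """Angle (deg) between thigh and shank projected onto the frontal (x-y) plane.

        hip, knee, ankle: (x, y, z) joint positions, x = mediolateral, y = vertical.
        180 deg is a straight limb; deviation from 180 indicates valgus/varus collapse.
        """
        thigh = np.asarray(hip)[:2] - np.asarray(knee)[:2]
        shank = np.asarray(ankle)[:2] - np.asarray(knee)[:2]
        cosang = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    # Hypothetical Kinect joint positions (metres) at initial contact
    angle_ic = frontal_plane_knee_angle((-0.12, 0.95, 2.1), (-0.10, 0.52, 2.1),
                                        (-0.06, 0.10, 2.1))
    ```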

  15. Microsoft Kinect can distinguish differences in over-ground gait between older persons with and without Parkinson's disease.

    PubMed

    Eltoukhy, Moataz; Kuenze, Christopher; Oh, Jeonghoon; Jacopetti, Marco; Wooten, Savannah; Signorile, Joseph

    2017-06-01

    Gait patterns differ between healthy elders and those with Parkinson's disease (PD). A simple, low-cost clinical tool that can evaluate kinematic differences between these populations would be invaluable diagnostically, since gait analysis in a clinical setting is impractical due to cost and required technical expertise. This study compared between-group differences detected by the Kinect and by a 3D movement analysis system (BTS), and reported the validity and reliability of the Kinect v2 sensor for gait analysis. Nineteen subjects participated, eleven without (C) and eight with PD (PD). Outcome measures included spatiotemporal parameters and kinematics. Ankle range of motion for C was significantly less during ankle swing compared to PD (p=0.04) for the Kinect. Both systems showed significant differences for stride length (BTS (C=1.24±0.16, PD=1.01±0.17, p=0.009), Kinect (C=1.24±0.17, PD=1.00±0.18, p=0.009)), gait velocity (BTS (C=1.06±0.14, PD=0.83±0.15, p=0.01), Kinect (C=1.06±0.15, PD=0.83±0.16, p=0.01)), and swing velocity (BTS (C=2.50±0.27, PD=2.12±0.36, p=0.02), Kinect (C=2.32±0.25, PD=1.95±0.31, p=0.01)) between groups. Agreement (ICC range 0.93-0.99) and consistency (ICC range 0.94-0.99) were excellent between systems for stride length, stance duration, swing duration, gait velocity, and swing velocity. The Kinect v2 was sensitive enough to detect between-group differences and consistently produced results similar to the BTS system. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  16. Low-Cost 3d Devices and Laser Scanners Comparison for the Application in Orthopedic Centres

    NASA Astrophysics Data System (ADS)

    Redaelli, D. F.; Gonizzi Barsanti, S.; Fraschini, P.; Biffi, E.; Colombo, G.

    2018-05-01

    Low-cost 3D sensors are nowadays widely diffused and many different solutions are available on the market. Some of these devices were developed for entertaining purposes, but are used also for acquisition and processing of different 3D data with the aim of documentation, research and study. Given the fact that these sensors were not developed for this purpose, it is necessary to evaluate their use in the capturing process. This paper shows a preliminary research comparing the Kinect 1 and 2 by Microsoft, the Structure Sensor by Occipital and the O&P Scan by Rodin4D in a medical scenario (i.e. human body scans). In particular, these sensors were compared to Minolta Vivid 9i, chosen as reference because of its higher accuracy. Different test objects were analysed: a calibrated flat plane, for the evaluation of the systematic distance error for each device, and three different parts of a mannequin, used as samples of human body parts. The results showed that the use of a certified flat plane is a good starting point in characterizing the sensors, but a complete analysis with objects similar to the ones of the real context of application is required. For example, the Kinect 2 presented the best results among the low-cost sensors on the flat plane, while the Structure Sensor was more reliable on the mannequin parts.
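
    One common way to characterize a depth sensor against a certified flat plane, as done in this comparison, is to fit a least-squares plane to the scanned points and report the residual error. The sketch below shows that computation; the metric and its name are generic assumptions rather than the authors' exact evaluation protocol.

    ```python
    import numpy as np

    def plane_fit_error(points):
        """Fit a least-squares plane to points scanned from a flat target and
        return the RMS of the orthogonal residuals, a proxy for systematic error.

        points: (n, 3) array of x, y, z coordinates in metres.
        """
        centred = points - points.mean(axis=0)
        # Plane normal = right singular vector associated with the smallest singular value.
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        normal = vt[-1]
        residuals = centred @ normal
        return np.sqrt(np.mean(residuals ** 2))
    ```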

  17. An Approach to the Use of Depth Cameras for Weed Volume Estimation

    PubMed Central

    Andújar, Dionisio; Dorado, José; Fernández-Quintanilla, César; Ribeiro, Angela

    2016-01-01

    The use of depth cameras in precision agriculture is increasing day by day. This type of sensor has been used for the plant structure characterization of several crops. However, the discrimination of small plants, such as weeds, is still a challenge within agricultural fields. Improvements in the new Microsoft Kinect v2 sensor can capture the details of plants. The use of a dual methodology using height selection and RGB (Red, Green, Blue) segmentation can separate crops, weeds, and soil. This paper explores the possibilities of this sensor by using Kinect Fusion algorithms to reconstruct 3D point clouds of weed-infested maize crops under real field conditions. The processed models showed good consistency among the 3D depth images and soil measurements obtained from the actual structural parameters. Maize plants were identified in the samples by height selection of the connected faces and showed a correlation of 0.77 with maize biomass. The lower height of the weeds made RGB recognition necessary to separate them from the soil microrelief of the samples, achieving a good correlation of 0.83 with weed biomass. In addition, weed density showed good correlation with volumetric measurements. The canonical discriminant analysis showed promising results for classification into monocots and dicots. These results suggest that estimating volume using the Kinect methodology can be a highly accurate method for crop status determination and weed detection. It offers several possibilities for the automation of agricultural processes by the construction of a new system integrating these sensors and the development of algorithms to properly process the information provided by them. PMID:27347972
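
    The dual height-plus-RGB methodology can be sketched as two thresholding steps over a fused point cloud: an excess-green test to separate vegetation from soil, and a height test to separate tall maize from low-growing weeds. The thresholds and the excess-green index below are illustrative choices, not the values or exact criteria reported in the paper.

    ```python
    import numpy as np

    def split_crop_weed_soil(points, rgb, crop_height=0.25, exg_thresh=0.05):
        """Split a fused Kinect point cloud into crop, weed and soil masks.

        points: (n, 3) array, z = height above ground (m).
        rgb: (n, 3) array of colours in [0, 1].
        Thresholds are illustrative assumptions.
        """
        exg = 2 * rgb[:, 1] - rgb[:, 0] - rgb[:, 2]          # excess-green index
        vegetation = exg > exg_thresh
        crop = vegetation & (points[:, 2] >= crop_height)    # tall plants -> maize
        weed = vegetation & (points[:, 2] < crop_height)     # short vegetation -> weeds
        soil = ~vegetation
        return crop, weed, soil
    ```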

  18. An Approach to the Use of Depth Cameras for Weed Volume Estimation.

    PubMed

    Andújar, Dionisio; Dorado, José; Fernández-Quintanilla, César; Ribeiro, Angela

    2016-06-25

    The use of depth cameras in precision agriculture is increasing day by day. This type of sensor has been used for the plant structure characterization of several crops. However, the discrimination of small plants, such as weeds, is still a challenge within agricultural fields. Improvements in the new Microsoft Kinect v2 sensor can capture the details of plants. The use of a dual methodology using height selection and RGB (Red, Green, Blue) segmentation can separate crops, weeds, and soil. This paper explores the possibilities of this sensor by using Kinect Fusion algorithms to reconstruct 3D point clouds of weed-infested maize crops under real field conditions. The processed models showed good consistency among the 3D depth images and soil measurements obtained from the actual structural parameters. Maize plants were identified in the samples by height selection of the connected faces and showed a correlation of 0.77 with maize biomass. The lower height of the weeds made RGB recognition necessary to separate them from the soil microrelief of the samples, achieving a good correlation of 0.83 with weed biomass. In addition, weed density showed good correlation with volumetric measurements. The canonical discriminant analysis showed promising results for classification into monocots and dicots. These results suggest that estimating volume using the Kinect methodology can be a highly accurate method for crop status determination and weed detection. It offers several possibilities for the automation of agricultural processes by the construction of a new system integrating these sensors and the development of algorithms to properly process the information provided by them.

  19. Development and assessment of a Microsoft Kinect based system for imaging the breast in three dimensions.

    PubMed

    Wheat, J S; Choppin, S; Goyal, A

    2014-06-01

    Three-dimensional surface imaging technologies have been used in the planning and evaluation of breast reconstructive and cosmetic surgery. The aim of this study was to develop a 3D surface imaging system based on the Microsoft Kinect and assess the accuracy and repeatability with which the system could image the breast. A system comprising two Kinects, calibrated to provide a complete 3D image of the mannequin was developed. Digital measurements of Euclidean and surface distances between landmarks showed acceptable agreement with manual measurements. The mean differences for Euclidean and surface distances were 1.9mm and 2.2mm, respectively. The system also demonstrated good intra- and inter-rater reliability (ICCs>0.999). The Kinect-based 3D surface imaging system offers a low-cost, readily accessible alternative to more expensive, commercially available systems, which have had limited clinical use. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  20. Natural User Interface Sensors for Human Body Measurement

    NASA Astrophysics Data System (ADS)

    Boehm, J.

    2012-08-01

    The recent push for natural user interfaces (NUI) in the entertainment and gaming industry has ushered in a new era of low-cost three-dimensional sensors. While the basic idea of using a three-dimensional sensor for human gesture recognition dates back some years, it was not until recently that such sensors became available on the mass market. The current market leader is PrimeSense, who provide their technology for the Microsoft Xbox Kinect. Since these sensors are developed to detect and observe human users, they should be ideally suited to measure the human body. We describe the technology of a line of NUI sensors and assess their performance in terms of repeatability and accuracy. We demonstrate the implementation of a prototype scanner integrating several NUI sensors to achieve full body coverage. We present the results of the obtained surface model of a human body.

  1. Training Classifiers with Shadow Features for Sensor-Based Human Activity Recognition.

    PubMed

    Fong, Simon; Song, Wei; Cho, Kyungeun; Wong, Raymond; Wong, Kelvin K L

    2017-02-27

    In this paper, a novel training/testing process for building/using a classification model based on human activity recognition (HAR) is proposed. Traditionally, HAR has been accomplished by a classifier that learns the activities of a person by training with skeletal data obtained from a motion sensor, such as Microsoft Kinect. These skeletal data are the spatial coordinates (x, y, z) of different parts of the human body. The numeric information forms time series, temporal records of movement sequences that can be used for training a classifier. In addition to the spatial features that describe current positions in the skeletal data, new features called 'shadow features' are used to improve the supervised learning efficacy of the classifier. Shadow features are inferred from the dynamics of body movements, and thereby modelling the underlying momentum of the performed activities. They provide extra dimensions of information for characterising activities in the classification process, and thereby significantly improve the classification accuracy. Two cases of HAR are tested using a classification model trained with shadow features: one is by using wearable sensor and the other is by a Kinect-based remote sensor. Our experiments can demonstrate the advantages of the new method, which will have an impact on human activity detection research.
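
    One way to read the 'shadow feature' idea, extra dimensions derived from the dynamics of the skeletal coordinates, is sketched below: a smoothed frame-to-frame displacement appended to each frame's spatial features before classifier training. The smoothing form and its parameter are assumptions for illustration; the paper's exact formulation may differ.

    ```python
    import numpy as np

    def add_shadow_features(skeleton_frames, alpha=0.5):
        """Append dynamic 'shadow' features to per-frame skeletal coordinates.

        skeleton_frames: (n_frames, n_values) array of x, y, z joint coordinates.
        The shadow feature here is an exponentially smoothed frame-to-frame
        displacement, one illustrative way to encode movement momentum.
        """
        velocity = np.diff(skeleton_frames, axis=0, prepend=skeleton_frames[:1])
        shadow = np.zeros_like(velocity)
        for t in range(1, len(velocity)):
            shadow[t] = alpha * velocity[t] + (1 - alpha) * shadow[t - 1]
        # Classifier input: original spatial features plus the shadow dimensions.
        return np.hstack([skeleton_frames, shadow])
    ```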

  2. Training Classifiers with Shadow Features for Sensor-Based Human Activity Recognition

    PubMed Central

    Fong, Simon; Song, Wei; Cho, Kyungeun; Wong, Raymond; Wong, Kelvin K. L.

    2017-01-01

    In this paper, a novel training/testing process for building/using a classification model based on human activity recognition (HAR) is proposed. Traditionally, HAR has been accomplished by a classifier that learns the activities of a person by training with skeletal data obtained from a motion sensor, such as Microsoft Kinect. These skeletal data are the spatial coordinates (x, y, z) of different parts of the human body. The numeric information forms time series, temporal records of movement sequences that can be used for training a classifier. In addition to the spatial features that describe current positions in the skeletal data, new features called ‘shadow features’ are used to improve the supervised learning efficacy of the classifier. Shadow features are inferred from the dynamics of body movements, and thereby modelling the underlying momentum of the performed activities. They provide extra dimensions of information for characterising activities in the classification process, and thereby significantly improve the classification accuracy. Two cases of HAR are tested using a classification model trained with shadow features: one is by using wearable sensor and the other is by a Kinect-based remote sensor. Our experiments can demonstrate the advantages of the new method, which will have an impact on human activity detection research. PMID:28264470

  3. A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors

    PubMed Central

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255

  4. A multi-resolution approach for an automated fusion of different low-cost 3D sensors.

    PubMed

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-04-24

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory.

  5. Using Kinect to Measure Wave Spectrum

    NASA Astrophysics Data System (ADS)

    Fong, J.; Loose, B.; Lovely, A.

    2012-12-01

    Gas exchange at the air-sea interface is enhanced by aqueous turbulence generated by capillary-gravity waves, affecting the absorption of atmospheric carbon dioxide by the ocean. The mean squared wave slope of these waves correlates strongly with the gas transfer velocity. To measure the energy in capillary-gravity waves, this project aims to use the Microsoft Xbox Kinect to measure the short period wave spectrum. Kinect is an input device for the Xbox 360 with an infrared laser and camera that can be used to map objects at high frequency and spatial resolution, similar to a LiDAR sensor. For air-sea gas exchange, we are interested in the short period gravity waves with a wavenumber of 40 to 100 radians per meter. We have successfully recorded data from Kinect at a sample rate of 30 Hz with 640x480 pixel resolution, consistent with the manufacturer specifications for its scanning capabilities. At 0.5 m distance from the surface, this yields a nominal resolution of approximately 0.7 mm with a theoretical vertical precision of 0.24 mm and a practical 1 σ noise level of 0.91 mm. We have found that Kinect has some limitations in its ability to detect the air-water interface. Clean water proved to be a weaker reflector for the Kinect IR source, whereas a relatively strong signal can be received for liquids with a high concentration of suspended solids. Colloids such as milk and Ca(OH)2 in water proved more suitable media from which height and wave spectra were detectable. Moreover, we will show results from monochromatic as well as wind-wave laboratory studies. With the wave field measurements from Kinect, gas transfer velocities at the air-sea interface can be determined.
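
    Turning a Kinect-derived surface-elevation time series into a short-period wave spectrum is essentially a windowed FFT. The sketch below shows a one-pixel periodogram estimate; the window choice and normalization are generic assumptions, not details taken from this abstract.

    ```python
    import numpy as np

    def wave_spectrum(elevation, fs=30.0):
        """One-sided power spectral density of a surface-elevation time series.

        elevation: 1D array of heights (m) at one pixel, sampled at fs Hz.
        Returns (frequencies in Hz, spectral density). Detrending and a Hann
        window keep drift and leakage from dominating the estimate.
        """
        x = elevation - elevation.mean()
        window = np.hanning(len(x))
        spec = np.fft.rfft(x * window)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        psd = (np.abs(spec) ** 2) / (fs * (window ** 2).sum())
        return freqs, psd
    ```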

  6. Kinect4FOG: monitoring and improving mobility in people with Parkinson's using a novel system incorporating the Microsoft Kinect v2.

    PubMed

    Amini, Amin; Banitsas, Konstantinos; Young, William R

    2018-05-23

    Parkinson's is a neurodegenerative condition associated with several motor symptoms including tremors and slowness of movement. Freezing of gait (FOG); the sensation of one's feet being "glued" to the floor, is one of the most debilitating symptoms associated with advanced Parkinson's. FOG not only contributes to falls and related injuries, but also compromises quality of life as people often avoid engaging in functional daily activities both inside and outside the home. In the current study, we describe a novel system designed to detect FOG and falling in people with Parkinson's (PwP) as well as monitoring and improving their mobility using laser-based visual cues cast by an automated laser system. The system utilizes a RGB-D sensor based on Microsoft Kinect v2 and a laser casting system consisting of two servo motors and an Arduino microcontroller. This system was evaluated by 15 PwP with FOG. Here, we present details of the system along with a summary of feedback provided by PwP. Despite limitations regarding its outdoor use, feedback was very positive in terms of domestic usability and convenience, where 12/15 PwP showed interest in installing and using the system at their homes. Implications for Rehabilitation Providing an automatic and remotely manageable monitoring system for PwP gait analysis and fall detection. Providing an automatic, unobtrusive and dynamic visual cue system for PwP based on laser line projection. Gathering feedback from PwP about the practical usage of the implemented system through focus group events.

  7. Development and Validation of a Portable and Inexpensive Tool to Measure the Drop Vertical Jump Using the Microsoft Kinect V2.

    PubMed

    Gray, Aaron D; Willis, Brad W; Skubic, Marjorie; Huo, Zhiyu; Razu, Swithin; Sherman, Seth L; Guess, Trent M; Jahandar, Amirhossein; Gulbrandsen, Trevor R; Miller, Scott; Siesener, Nathan J

    Noncontact anterior cruciate ligament (ACL) injury in adolescent female athletes is an increasing problem. The knee-ankle separation ratio (KASR), calculated at initial contact (IC) and peak flexion (PF) during the drop vertical jump (DVJ), is a measure of dynamic knee valgus. The Microsoft Kinect V2 has shown promise as a reliable and valid marker-less motion capture device. The Kinect V2 will demonstrate good to excellent correlation between KASR results at IC and PF during the DVJ, as compared with a "gold standard" Vicon motion analysis system. Descriptive laboratory study. Level 2. Thirty-eight healthy volunteer subjects (20 male, 18 female) performed 5 DVJ trials, simultaneously measured by a Vicon MX-T40S system, 2 AMTI force platforms, and a Kinect V2 with customized software. A total of 190 jumps were completed. The KASR was calculated at IC and PF during the DVJ. The intraclass correlation coefficient (ICC) assessed the degree of KASR agreement between the Kinect and Vicon systems. The ICCs of the Kinect V2 and Vicon KASR at IC and PF were 0.84 and 0.95, respectively, showing excellent agreement between the 2 measures. The Kinect V2 successfully identified the KASR at PF and IC frames in 182 of 190 trials, demonstrating 95.8% reliability. The Kinect V2 demonstrated excellent ICC of the KASR at IC and PF during the DVJ when compared with the Vicon system. A customized Kinect V2 software program demonstrated good reliability in identifying the KASR at IC and PF during the DVJ. Reliable, valid, inexpensive, and efficient screening tools may improve the accessibility of motion analysis assessment of adolescent female athletes.
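
    The knee-ankle separation ratio (KASR) itself is a simple ratio of mediolateral separations, which is part of why a marker-less sensor can capture it. The sketch below computes it from four joint positions at a single frame (initial contact or peak flexion); the example coordinates are hypothetical.

    ```python
    def kasr(left_knee, right_knee, left_ankle, right_ankle):
        """Knee-ankle separation ratio: mediolateral knee separation divided by
        mediolateral ankle separation at one frame (initial contact or peak flexion).

        Inputs are (x, y, z) joint positions with x the mediolateral axis.
        Values well below 1 indicate the knees collapsing inward (dynamic valgus).
        """
        knee_sep = abs(left_knee[0] - right_knee[0])
        ankle_sep = abs(left_ankle[0] - right_ankle[0])
        return knee_sep / ankle_sep

    # Hypothetical frame at peak flexion (metres)
    ratio_pf = kasr((-0.10, 0.55, 2.0), (0.11, 0.54, 2.0),
                    (-0.16, 0.12, 2.0), (0.15, 0.11, 2.0))
    ```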

  8. Measuring Waves and Erosion in Underwater Oil Blobs and Monitoring Other Arbitrary Surfaces with a Kinect v2 Time-of-Flight Camera

    NASA Astrophysics Data System (ADS)

    Butkiewicz, T.

    2014-12-01

    We developed free software that enables researchers to utilize Microsoft's new Kinect for Windows v2 sensor for a range of coastal and ocean mapping applications, as well as monitoring and measuring experimental scenes. While the original Kinect device used structured light and had very poor resolution, many geophysical researchers found uses for it in their experiments. The next generation of this sensor uses time-of-flight technology, and can produce higher resolution depth measurements with an order of magnitude more accuracy. It is also capable of measurement through and under water. An analysis tool in our application lets users quickly select any arbitrary surface in the sensor's view. The tool automatically scans the surface, then calibrates and aligns a measurement volume to it. Depth readings from the sensor are converted into 3D point clouds, and points falling within this volume are projected into surface coordinates. Raster images can be output which consist of height fields aligned to the surface, generated from these projected measurements and interpolations between them. Images have a simple 1 pixel = 1 mm resolution and intensity values representing mm in height from the base-plane, which enables easy measurement and calculations to be conducted on the images in other analysis packages. Single snapshots can be taken manually on demand, or the software can monitor the surface automatically, capturing frames at preset intervals. This produces time-lapse animations of dynamically changing surfaces. We apply this analysis tool to an experiment studying the behavior of underwater oil in response to flowing water of different speeds and temperatures. Blobs of viscous oils are placed in a flume apparatus, which circulates water past them. Over the course of a couple of hours, the oil blobs spread out, waves slowly ripple across their surfaces, and erosions occur as smaller blobs break off from the main blob. All of this can be captured in 3D, with mm accuracy, through the water using the Kinect for Windows v2 sensor and our K2MapKit software.
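
    The surface-aligned raster output described above, where 1 pixel = 1 mm and intensity encodes height above the base plane, amounts to binning projected points into a grid. The sketch below shows that projection step in Python/NumPy; it is a simplified stand-in for the K2MapKit processing, with interpolation of empty cells left out.

    ```python
    import numpy as np

    def height_field(points, x_range, y_range, cell_mm=1.0):
        """Project 3D points (mm, surface-aligned coordinates) into a raster where
        1 pixel = 1 mm and pixel intensity = height above the base plane.

        points: (n, 3) array with x, y in the base plane and z the height.
        Cells with no observation are left as NaN.
        """
        nx = int(round((x_range[1] - x_range[0]) / cell_mm))
        ny = int(round((y_range[1] - y_range[0]) / cell_mm))
        grid = np.full((ny, nx), np.nan)
        ix = ((points[:, 0] - x_range[0]) / cell_mm).astype(int)
        iy = ((points[:, 1] - y_range[0]) / cell_mm).astype(int)
        ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
        # Last point written wins per cell; averaging duplicates would be a refinement.
        grid[iy[ok], ix[ok]] = points[ok, 2]
        return grid
    ```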

  9. The efficacy of the Microsoft Kinect™ to assess human bimanual coordination.

    PubMed

    Liddy, Joshua J; Zelaznik, Howard N; Huber, Jessica E; Rietdyk, Shirley; Claxton, Laura J; Samuel, Arjmand; Haddad, Jeffrey M

    2017-06-01

    The Microsoft Kinect has been used in studies examining posture and gait. Despite the advantages of portability and low cost, this device has not been used to assess interlimb coordination. Fundamental insights into movement control, variability, health, and functional status can be gained by examining coordination patterns. In this study, we investigated the efficacy of the Microsoft Kinect to capture bimanual coordination relative to a research-grade motion capture system. Twenty-four healthy adults performed coordinated hand movements in two patterns (in-phase and antiphase) at eight movement frequencies (1.00-3.33 Hz). Continuous relative phase (CRP) and discrete relative phase (DRP) were used to quantify the means (mCRP and mDRP) and variability (sdCRP and sdDRP) of coordination patterns. Between-device agreement was assessed using Bland-Altman bias with 95 % limits of agreement, concordance correlation coefficients (absolute agreement), and Pearson correlation coefficients (relative agreement). Modest-to-excellent relative and absolute agreements were found for mCRP in all conditions. However, mDRP showed poor agreement for the in-phase pattern at low frequencies, due to large between-device differences in a subset of participants. By contrast, poor absolute agreement was observed for both sdCRP and sdDRP, while relative agreement ranged from poor to excellent. Overall, the Kinect captures the macroscopic patterns of bimanual coordination better than coordination variability.
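
    Continuous relative phase (CRP) can be computed in several ways; one common formulation takes phase angles from the Hilbert transform of each centred displacement signal and differences them sample by sample, as sketched below. The study may instead have used a position-velocity phase-plane definition, so treat this as an illustrative variant; the mean and standard deviation of the resulting series correspond to mCRP and sdCRP.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def continuous_relative_phase(x_left, x_right):
        """Continuous relative phase (deg) between two limb displacement signals.

        Phase angles come from the analytic signal (Hilbert transform) of each
        centred time series; CRP is their unwrapped difference at every sample.
        """
        phase_l = np.angle(hilbert(x_left - np.mean(x_left)))
        phase_r = np.angle(hilbert(x_right - np.mean(x_right)))
        return np.degrees(np.unwrap(phase_l - phase_r))

    # Hypothetical antiphase movement at 2 Hz sampled at 100 Hz
    t = np.arange(0, 10, 0.01)
    crp = continuous_relative_phase(np.sin(2 * np.pi * 2 * t),
                                    np.sin(2 * np.pi * 2 * t + np.pi))
    mean_crp, sd_crp = crp.mean(), crp.std()
    ```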

  10. Concurrent validity of the Microsoft Kinect for Windows v2 for measuring spatiotemporal gait parameters.

    PubMed

    Dolatabadi, Elham; Taati, Babak; Mihailidis, Alex

    2016-09-01

    This paper presents a study to evaluate the concurrent validity of the Microsoft Kinect for Windows v2 for measuring the spatiotemporal parameters of gait. Twenty healthy adults performed several sequences of walks across a GAITRite mat under three different conditions: usual pace, fast pace, and dual task. Each walking sequence was simultaneously captured with two Kinect for Windows v2 and the GAITRite system. An automated algorithm was employed to extract various spatiotemporal features including stance time, step length, step time and gait velocity from the recorded Kinect v2 sequences. Accuracy in terms of reliability, concurrent validity and limits of agreement was examined for each gait feature under different walking conditions. The 95% Bland-Altman limits of agreement were narrow enough for the Kinect v2 to be a valid tool for measuring all reported spatiotemporal parameters of gait in all three conditions. An excellent intraclass correlation coefficient (ICC2, 1) ranging from 0.9 to 0.98 was observed for all gait measures across different walking conditions. The inter trial reliability of all gait parameters were shown to be strong for all walking types (ICC3, 1 > 0.73). The results of this study suggest that the Kinect for Windows v2 has the capacity to measure selected spatiotemporal gait parameters for healthy adults. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
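
    The agreement statistics reported here, bias and 95% limits of agreement, reduce to a few lines given paired measurements from the Kinect v2 and the reference system. The sketch below uses hypothetical step-length pairs purely to show the calculation.

    ```python
    import numpy as np

    def bland_altman(kinect, reference):
        """Bland-Altman bias and 95% limits of agreement between two methods.

        kinect, reference: paired 1D arrays of the same gait parameter
        (e.g. step length in cm) measured by the Kinect v2 and the GAITRite mat.
        """
        diff = np.asarray(kinect, dtype=float) - np.asarray(reference, dtype=float)
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)
        return bias, (bias - loa, bias + loa)

    # Hypothetical step-length pairs (cm)
    bias, (lo, hi) = bland_altman([62.1, 58.4, 65.0, 60.2], [61.5, 59.0, 64.1, 60.8])
    ```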

  11. From Psychomotor to "Motorpsycho": Learning through Gestures with Body Sensory Technologies

    ERIC Educational Resources Information Center

    Xu, Xinhao; Ke, Fengfeng

    2014-01-01

    As information and communication technology continues to evolve, body sensory technologies, like the Microsoft Kinect, provide learning designers new approaches to facilitating learning in an innovative way. With the advent of body sensory technology like the Kinect, it is important to use motor activities for learning in good and effective ways.…

  12. Examining the feasibility of a Microsoft Kinect ™ based game intervention for individuals with anterior cruciate ligament injury risk.

    PubMed

    Zhiyu Huo; Griffin, Joseph; Babiuch, Ryan; Gray, Aaron; Willis, Bradley; Skubic, Marjorie; Shining Sun

    2015-01-01

    We describe a feasibility study in which the Microsoft Kinect is used for a game-based exercise to strengthen posterior chain muscles, which are often weak in those at high risk of anterior cruciate ligament (ACL) injury. In the game, subjects perform a single posterior chain strengthening exercise. The game uses a side-scrolling video display driven by a hip abduction exercise performed while the player lies on the floor. Leg lifts beyond a predetermined angle trigger the jumping action of an animated tiger. We describe the scene and game control, which uses depth images from the Kinect. Although Kinect-based skeletal data are used for many games, the skeletal model does not yield good estimates for positions on the floor. Our proposed system instead uses multiple leg angle estimators for different angle regions to recognize the player lying down and capture the angle between the two legs. We conducted an experiment that validates our system against marker-based Vicon ground truth data. We also present results of an end-to-end test using the game, demonstrating feasibility.
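
    The between-leg angle that drives the game can, in principle, be obtained from a pair of 3D leg vectors. The snippet below is a hypothetical illustration assuming a hip-centre point and two ankle positions are available in a common frame; it is not the multi-estimator approach validated in the paper.

      import numpy as np

      def between_leg_angle(hip_center, left_ankle, right_ankle):
          """Angle (degrees) between the two leg vectors, measured at the hips."""
          v_l = np.asarray(left_ankle, float) - np.asarray(hip_center, float)
          v_r = np.asarray(right_ankle, float) - np.asarray(hip_center, float)
          cos_a = np.dot(v_l, v_r) / (np.linalg.norm(v_l) * np.linalg.norm(v_r))
          return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

      # Hypothetical coordinates in metres (player lying on the floor).
      angle = between_leg_angle([0.0, 0.0, 2.5], [-0.25, -0.80, 2.5], [0.25, -0.80, 2.5])
      print(f"{angle:.1f} deg")   # a lift beyond a preset angle would trigger the jump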

  13. Preoperative implant selection for unilateral breast reconstruction using 3D imaging with the Microsoft Kinect sensor.

    PubMed

    Pöhlmann, Stefanie T L; Harkness, Elaine; Taylor, Christopher J; Gandhi, Ashu; Astley, Susan M

    2017-08-01

    This study aimed to investigate whether breast volume measured preoperatively using a Kinect 3D sensor could be used to determine the most appropriate implant size for reconstruction. Ten patients underwent 3D imaging before and after unilateral implant-based reconstruction. Imaging used seven configurations, varying patient pose and Kinect location, which were compared regarding suitability for volume measurement. Four methods of defining the breast boundary for automated volume calculation were compared, and repeatability assessed over five repetitions. The most repeatable breast boundary annotation used an ellipse to track the inframammary fold and a plane describing the chest wall (coefficient of repeatability: 70 ml). The most reproducible imaging position comparing pre- and postoperative volume measurement of the healthy breast was achieved for the sitting patient with elevated arms and Kinect centrally positioned (coefficient of repeatability: 141 ml). Optimal implant volume was calculated by correcting the used implant volume by the observed postoperative asymmetry. It was possible to predict implant size using a linear model derived from preoperative volume measurement of the healthy breast (coefficient of determination R² = 0.78, standard error of prediction 120 ml). Mastectomy specimen weight and experienced surgeons' choice showed similar predictive ability (both: R² = 0.74, standard error: 141/142 ml). A leave-one-out validation showed that in 61% of cases, 3D imaging could predict implant volume to within 10%; however, for 17% of cases the error was >30%. This technology has the potential to facilitate reconstruction surgery planning and implant procurement to maximise symmetry after unilateral reconstruction. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
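
    A linear prediction of implant volume from the preoperative volume of the healthy breast, checked with leave-one-out validation, might be sketched as follows; the volumes are invented placeholders and the code is not the authors' analysis pipeline.

      import numpy as np

      # Hypothetical data: preoperative healthy-breast volume (ml) and the
      # asymmetry-corrected "optimal" implant volume (ml) for each patient.
      preop_volume = np.array([310., 420., 505., 280., 365., 450., 390., 335., 470., 295.])
      implant_vol  = np.array([300., 405., 470., 275., 350., 430., 380., 325., 455., 290.])

      errors = []
      for i in range(len(preop_volume)):
          train = np.arange(len(preop_volume)) != i        # leave patient i out
          slope, intercept = np.polyfit(preop_volume[train], implant_vol[train], 1)
          pred = slope * preop_volume[i] + intercept
          errors.append(pred - implant_vol[i])

      errors = np.array(errors)
      within_10pct = np.mean(np.abs(errors) / implant_vol <= 0.10)
      print(f"LOO RMSE = {np.sqrt(np.mean(errors**2)):.1f} ml, within 10%: {within_10pct:.0%}")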

  14. Use of Pattern Classification Algorithms to Interpret Passive and Active Data Streams from a Walking-Speed Robotic Sensor Platform

    NASA Astrophysics Data System (ADS)

    Dieckman, Eric Allen

    In order to perform useful tasks for us, robots must have the ability to notice, recognize, and respond to objects and events in their environment. This requires the acquisition and synthesis of information from a variety of sensors. Here we investigate the performance of a number of sensor modalities in an unstructured outdoor environment, including the Microsoft Kinect, thermal infrared camera, and coffee can radar. Special attention is given to acoustic echolocation measurements of approaching vehicles, where an acoustic parametric array propagates an audible signal to the oncoming target and the Kinect microphone array records the reflected backscattered signal. Although useful information about the target is hidden inside the noisy time domain measurements, the Dynamic Wavelet Fingerprint process (DWFP) is used to create a time-frequency representation of the data. A small-dimensional feature vector is created for each measurement using an intelligent feature selection process for use in statistical pattern classification routines. Using our experimentally measured data from real vehicles at 50 m, this process is able to correctly classify vehicles into one of five classes with 94% accuracy. Fully three-dimensional simulations allow us to study the nonlinear beam propagation and interaction with real-world targets to improve classification results.

  15. Measuring Patient Mobility in the ICU Using a Novel Noninvasive Sensor.

    PubMed

    Ma, Andy J; Rawat, Nishi; Reiter, Austin; Shrock, Christine; Zhan, Andong; Stone, Alex; Rabiee, Anahita; Griffin, Stephanie; Needham, Dale M; Saria, Suchi

    2017-04-01

    To develop and validate a noninvasive mobility sensor to automatically and continuously detect and measure patient mobility in the ICU. Prospective, observational study. Surgical ICU at an academic hospital. Three hundred sixty-two hours of sensor color and depth image data were recorded and curated into 109 segments, each containing 1,000 images, from eight patients. None. Three Microsoft Kinect sensors (Microsoft, Beijing, China) were deployed in one ICU room to collect continuous patient mobility data. We developed software that automatically analyzes the sensor data to measure mobility and assign the highest level within a time period. To characterize the highest mobility level, a validated 11-point mobility scale was collapsed into four categories: nothing in bed, in-bed activity, out-of-bed activity, and walking. Of the 109 sensor segments, the noninvasive mobility sensor was developed using 26 of these from three ICU patients and validated on 83 remaining segments from five different patients. Three physicians annotated each segment for the highest mobility level. The weighted Kappa (κ) statistic for agreement between automated noninvasive mobility sensor output versus manual physician annotation was 0.86 (95% CI, 0.72-1.00). Disagreement primarily occurred in the "nothing in bed" versus "in-bed activity" categories because "the sensor assessed movement continuously," which was significantly more sensitive to motion than physician annotations using a discrete manual scale. Noninvasive mobility sensor is a novel and feasible method for automating evaluation of ICU patient mobility.
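
    The weighted kappa agreement between the automated sensor output and the physician annotations over the four ordinal mobility categories can be computed with a standard library call, as in this sketch; the category labels below are fabricated for illustration only.

      from sklearn.metrics import cohen_kappa_score

      # Ordinal mobility categories: 0 = nothing in bed, 1 = in-bed activity,
      # 2 = out-of-bed activity, 3 = walking (hypothetical segment labels).
      sensor    = [0, 1, 1, 2, 3, 0, 1, 2, 2, 3, 1, 0]
      physician = [0, 0, 1, 2, 3, 0, 1, 2, 3, 3, 1, 1]

      # Linear weights penalise disagreements by their distance on the ordinal scale.
      kappa = cohen_kappa_score(sensor, physician, weights="linear")
      print(f"weighted kappa = {kappa:.2f}")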

  16. Automation of workplace lifting hazard assessment for musculoskeletal injury prevention.

    PubMed

    Spector, June T; Lieblich, Max; Bao, Stephen; McQuade, Kevin; Hughes, Margaret

    2014-01-01

    Existing methods for practically evaluating musculoskeletal exposures such as posture and repetition in workplace settings have limitations. We aimed to automate the estimation of parameters in the revised United States National Institute for Occupational Safety and Health (NIOSH) lifting equation, a standard manual observational tool used to evaluate back injury risk related to lifting in workplace settings, using depth camera (Microsoft Kinect) and skeleton algorithm technology. A large dataset (approximately 22,000 frames, derived from six subjects) of simultaneous lifting and other motions recorded in a laboratory setting using the Kinect (Microsoft Corporation, Redmond, Washington, United States) and a standard optical motion capture system (Qualysis, Qualysis Motion Capture Systems, Qualysis AB, Sweden) was assembled. Error-correction regression models were developed to improve the accuracy of NIOSH lifting equation parameters estimated from the Kinect skeleton. Kinect-Qualysis errors were modelled using gradient boosted regression trees with a Huber loss function. Models were trained on data from all but one subject and tested on the excluded subject. Finally, models were tested on three lifting trials performed by subjects not involved in the generation of the model-building dataset. Error-correction appears to produce estimates for NIOSH lifting equation parameters that are more accurate than those derived from the Microsoft Kinect algorithm alone. Our error-correction models substantially decreased the variance of parameter errors. In general, the Kinect underestimated parameters, and modelling reduced this bias, particularly for more biased estimates. Use of the raw Kinect skeleton model tended to result in falsely high safe recommended weight limits of loads, whereas error-corrected models gave more conservative, protective estimates. Our results suggest that it may be possible to produce reasonable estimates of posture and temporal elements of tasks such as task frequency in an automated fashion, although these findings should be confirmed in a larger study. Further work is needed to incorporate force assessments and address workplace feasibility challenges. We anticipate that this approach could ultimately be used to perform large-scale musculoskeletal exposure assessment not only for research but also to provide real-time feedback to workers and employers during work method improvement activities and employee training.
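
    An error-correction model of the kind described, gradient-boosted regression trees with a Huber loss that map Kinect-derived estimates toward reference values, could look roughly like the following sketch; the features and targets are synthetic placeholders, not the study's dataset.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(42)

      # Hypothetical features derived from the Kinect skeleton (e.g. horizontal and
      # vertical hand distances, trunk angle) and the reference value of one
      # lifting-equation parameter from the optical motion capture system.
      X_kinect = rng.normal(size=(500, 3))
      y_reference = 1.5 * X_kinect[:, 0] - 0.8 * X_kinect[:, 1] + rng.normal(0, 0.2, 500)

      # The Huber loss makes the boosted trees less sensitive to occasional large
      # skeleton-tracking errors than a squared-error loss would be.
      model = GradientBoostingRegressor(loss="huber", n_estimators=300,
                                        max_depth=3, learning_rate=0.05)
      model.fit(X_kinect[:400], y_reference[:400])     # train on most of the data
      print("held-out R^2:", model.score(X_kinect[400:], y_reference[400:]))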

  17. Development of a robust and cost-effective 3D respiratory motion monitoring system using the kinect device: Accuracy comparison with the conventional stereovision navigation system.

    PubMed

    Bae, Myungsoo; Lee, Sangmin; Kim, Namkug

    2018-07-01

    To develop and validate a robust and cost-effective 3D respiratory monitoring system based on a Kinect device with a custom-made simple marker. A 3D respiratory monitoring system comprising the simple marker and the Microsoft Kinect v2 device was developed. The marker was designed for simple and robust detection, and the tracking algorithm was developed using the depth, RGB, and infra-red images acquired from the Kinect sensor. A Kalman filter was used to suppress movement noise. The major movements of the marker attached to four different locations on the body surface were determined from the initially collected tracking points of the marker while breathing. The signal level of respiratory motion with the tracking point was estimated along the major direction vector. The accuracy of the results was evaluated through a comparison with those of the conventional stereovision navigation system (NDI Polaris Spectra). Sixteen normal volunteers were enrolled to evaluate the accuracy of this system. The correlation coefficients between the respiratory motion signal from the Kinect device and conventional navigation system ranged from 0.970 to 0.999 and from 0.837 to 0.995 at the abdominal and thoracic surfaces, respectively. The respiratory motion signal from this system was obtained at 27-30 frames/s. This system with the Kinect v2 device and simple marker could be used for cost-effective, robust and accurate 3D respiratory motion monitoring. In addition, this system is as reliable for respiratory motion signal generation and as practically useful as the conventional stereovision navigation system and is less sensitive to patient posture. Copyright © 2018 Elsevier B.V. All rights reserved.
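
    Reducing a tracked 3D marker trajectory to a 1D respiratory signal by projecting it onto the major direction of motion can be sketched as below; the trajectory is synthetic and the snippet is only a simplified stand-in for the marker-tracking and Kalman-filtering pipeline described.

      import numpy as np

      def respiratory_signal(marker_xyz):
          """Project 3D marker positions onto their principal motion direction."""
          centred = marker_xyz - marker_xyz.mean(axis=0)
          # First right-singular vector = major direction of the tracked motion.
          _, _, vt = np.linalg.svd(centred, full_matrices=False)
          return centred @ vt[0]

      # Synthetic 30 fps trajectory: breathing mostly along z at about 0.25 Hz.
      rng = np.random.default_rng(1)
      t = np.arange(0, 30, 1 / 30)
      traj = np.column_stack([0.5 * np.sin(2 * np.pi * 0.25 * t),
                              np.zeros_like(t),
                              5.0 * np.sin(2 * np.pi * 0.25 * t)])
      traj += rng.normal(0, 0.1, traj.shape)
      print(respiratory_signal(traj)[:5])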

  18. Three-dimensional assessment of squats and drop jumps using the Microsoft Xbox One Kinect: Reliability and validity.

    PubMed

    Mentiplay, Benjamin F; Hasanki, Ksaniel; Perraton, Luke G; Pua, Yong-Hao; Charlton, Paula C; Clark, Ross A

    2018-03-01

    The Microsoft Xbox One Kinect™ (Kinect V2) contains a depth camera that can be used to manually identify anatomical landmark positions in three dimensions independent of the standard skeletal tracking, and therefore has potential for low-cost, time-efficient three-dimensional movement analysis (3DMA). This study examined inter-session reliability and concurrent validity of the Kinect V2 for the assessment of coronal and sagittal plane kinematics for the trunk, hip and knee during single leg squats (SLS) and drop vertical jumps (DVJ). Thirty young, healthy participants (age = 23 ± 5 yrs, male/female = 15/15) performed a SLS and DVJ protocol that was recorded concurrently by the Kinect V2 and 3DMA during two sessions, one week apart. The Kinect V2 demonstrated good to excellent reliability for all SLS and DVJ variables (ICC ≥ 0.73). Concurrent validity ranged from poor to excellent (ICC = 0.02 to 0.98) during the SLS task, although trunk, hip and knee flexion and two-dimensional measures of knee abduction and frontal plane projection angle all demonstrated good to excellent validity (ICC ≥ 0.80). Concurrent validity for the DVJ task was typically worse, with only two variables exceeding ICC = 0.75 (trunk and hip flexion). These findings indicate that the Kinect V2 may have potential for large-scale screening for ACL injury risk; however, future prospective research is required.

  19. Validation of Foot Placement Locations from Ankle Data of a Kinect v2 Sensor.

    PubMed

    Geerse, Daphne; Coolen, Bert; Kolijn, Detmar; Roerdink, Melvyn

    2017-10-10

    The Kinect v2 sensor may be a cheap and easy to use sensor to quantify gait in clinical settings, especially when applied in set-ups integrating multiple Kinect sensors to increase the measurement volume. Reliable estimates of foot placement locations are required to quantify spatial gait parameters. This study aimed to systematically evaluate the effects of distance from the sensor, side and step length on estimates of foot placement locations based on Kinect's ankle body points. Subjects (n = 12) performed stepping trials at imposed foot placement locations distanced 2 m or 3 m from the Kinect sensor (distance), for left and right foot placement locations (side), and for five imposed step lengths. Body points' time series of the lower extremities were recorded with a Kinect v2 sensor, placed frontoparallelly on the left side, and a gold-standard motion-registration system. Foot placement locations, step lengths, and stepping accuracies were compared between systems using repeated-measures ANOVAs, agreement statistics and two one-sided t-tests to test equivalence. For the right side at the 2 m distance from the sensor we found significant between-systems differences in foot placement locations and step lengths, and evidence for nonequivalence. This distance by side effect was likely caused by differences in body orientation relative to the Kinect sensor. It can be reduced by using Kinect's higher-dimensional depth data to estimate foot placement locations directly from the foot's point cloud and/or by using smaller inter-sensor distances in the case of a multi-Kinect v2 set-up to estimate foot placement locations at greater distances from the sensor.
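
    The two one-sided t-tests (TOST) used to test equivalence can be written directly from the paired between-systems differences, as in the sketch below; the equivalence margin and the difference values are illustrative assumptions, not the study's data.

      import numpy as np
      from scipy import stats

      def paired_tost(diff, margin):
          """Two one-sided t-tests for equivalence of paired differences.

          Returns the larger of the two one-sided p-values; equivalence is
          concluded when this value falls below the chosen alpha.
          """
          diff = np.asarray(diff, float)
          n = diff.size
          se = diff.std(ddof=1) / np.sqrt(n)
          t_lower = (diff.mean() + margin) / se    # H0: mean diff <= -margin
          t_upper = (diff.mean() - margin) / se    # H0: mean diff >= +margin
          p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
          p_upper = stats.t.cdf(t_upper, df=n - 1)
          return max(p_lower, p_upper)

      # Hypothetical Kinect-minus-reference step-length differences (cm), 2 cm margin.
      diffs = np.array([0.4, -0.6, 1.1, 0.2, -0.3, 0.8, -0.9, 0.5, 0.1, -0.4, 0.7, 0.0])
      print("TOST p-value:", round(paired_tost(diffs, margin=2.0), 4))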

  20. A study on validating KinectV2 in comparison of Vicon system as a motion capture system for using in Health Engineering in industry

    NASA Astrophysics Data System (ADS)

    Jebeli, Mahvash; Bilesan, Alireza; Arshi, Ahmadreza

    2017-06-01

    The currently available commercial motion capture systems are constrained by space requirements and thus pose difficulties when used to develop kinematic descriptions of human movements within existing manufacturing and production cells. The Kinect sensor does not share these limitations but is not as accurate. The proposition made in this article is to adopt the Kinect sensor to facilitate the implementation of Health Engineering concepts in industrial environments. This article is an evaluation of the Kinect sensor's accuracy when providing three-dimensional kinematic data. The sensor is thus utilized to assist in modeling and simulation of worker performance within an industrial cell. For this purpose, Kinect 3D data were compared to those of a Vicon motion capture system in a gait analysis laboratory. Results indicated that the Kinect sensor exhibited a coefficient of determination of 0.9996 on the depth axis, 0.9849 along the horizontal axis and 0.2767 on the vertical axis. The results demonstrate the suitability of the Kinect sensor for use in industrial environments.

  1. Measuring Patient Mobility in the ICU Using a Novel Noninvasive Sensor

    PubMed Central

    Ma, Andy J.; Rawat, Nishi; Reiter, Austin; Shrock, Christine; Zhan, Andong; Stone, Alex; Rabiee, Anahita; Griffin, Stephanie; Needham, Dale M.; Saria, Suchi

    2017-01-01

    Objectives To develop and validate a noninvasive mobility sensor to automatically and continuously detect and measure patient mobility in the ICU. Design Prospective, observational study. Setting Surgical ICU at an academic hospital. Patients Three hundred sixty-two hours of sensor color and depth image data were recorded and curated into 109 segments, each containing 1,000 images, from eight patients. Interventions None. Measurements and Main Results Three Microsoft Kinect sensors (Microsoft, Beijing, China) were deployed in one ICU room to collect continuous patient mobility data. We developed software that automatically analyzes the sensor data to measure mobility and assign the highest level within a time period. To characterize the highest mobility level, a validated 11-point mobility scale was collapsed into four categories: nothing in bed, in-bed activity, out-of-bed activity, and walking. Of the 109 sensor segments, the noninvasive mobility sensor was developed using 26 of these from three ICU patients and validated on 83 remaining segments from five different patients. Three physicians annotated each segment for the highest mobility level. The weighted Kappa (κ) statistic for agreement between automated noninvasive mobility sensor output versus manual physician annotation was 0.86 (95% CI, 0.72–1.00). Disagreement primarily occurred in the “nothing in bed” versus “in-bed activity” categories because “the sensor assessed movement continuously,” which was significantly more sensitive to motion than physician annotations using a discrete manual scale. Conclusions Noninvasive mobility sensor is a novel and feasible method for automating evaluation of ICU patient mobility. PMID:28291092

  2. Reliability and validity of the Microsoft Kinect for evaluating static foot posture

    PubMed Central

    2013-01-01

    Background The evaluation of foot posture in a clinical setting is useful to screen for potential injury, however disagreement remains as to which method has the greatest clinical utility. An inexpensive and widely available imaging system, the Microsoft Kinect™, may possess the characteristics to objectively evaluate static foot posture in a clinical setting with high accuracy. The aim of this study was to assess the intra-rater reliability and validity of this system for assessing static foot posture. Methods Three measures were used to assess static foot posture; traditional visual observation using the Foot Posture Index (FPI), a 3D motion analysis (3DMA) system and software designed to collect and analyse image and depth data from the Kinect. Spearman’s rho was used to assess intra-rater reliability and concurrent validity of the Kinect to evaluate foot posture, and a linear regression was used to examine the ability of the Kinect to predict total visual FPI score. Results The Kinect demonstrated moderate to good intra-rater reliability for four FPI items of foot posture (ρ = 0.62 to 0.78) and moderate to good correlations with the 3DMA system for four items of foot posture (ρ = 0.51 to 0.85). In contrast, intra-rater reliability of visual FPI items was poor to moderate (ρ = 0.17 to 0.63), and correlations with the Kinect and 3DMA systems were poor (absolute ρ = 0.01 to 0.44). Kinect FPI items with moderate to good reliability predicted 61% of the variance in total visual FPI score. Conclusions The majority of the foot posture items derived using the Kinect were more reliable than the traditional visual assessment of FPI, and were valid when compared to a 3DMA system. Individual foot posture items recorded using the Kinect were also shown to predict a moderate degree of variance in the total visual FPI score. Combined, these results support the future potential of the Kinect to accurately evaluate static foot posture in a clinical setting. PMID:23566934

  3. In-home fall risk assessment and detection sensor system.

    PubMed

    Rantz, Marilyn J; Skubic, Marjorie; Abbott, Carmen; Galambos, Colleen; Pak, Youngju; Ho, Dominic K C; Stone, Erik E; Rui, Liyang; Back, Jessica; Miller, Steven J

    2013-07-01

    Falls are a major problem in older adults. A continuous, unobtrusive, environmentally mounted (i.e., embedded into the environment and not worn by the individual), in-home monitoring system that automatically detects when falls have occurred or when the risk of falling is increasing could alert health care providers and family members to intervene to improve physical function or manage illnesses that may precipitate falls. Researchers at the University of Missouri Center for Eldercare and Rehabilitation Technology are testing such sensor systems for fall risk assessment (FRA) and detection in older adults' apartments in a senior living community. Initial results comparing ground truth (validated measures) of FRA data and GAITRite System parameters with data captured from the Microsoft® Kinect and pulse-Doppler radar are reported. Copyright 2013, SLACK Incorporated.

  4. An immersive surgery training system with live streaming capability.

    PubMed

    Yang, Yang; Guo, Xinqing; Yu, Zhan; Steiner, Karl V; Barner, Kenneth E; Bauer, Thomas L; Yu, Jingyi

    2014-01-01

    Providing real-time, interactive immersive surgical training has been a key research area in telemedicine. Earlier approaches have mainly adopted videotaped training that can only show imagery from a fixed view point. Recent advances in commodity 3D imaging have enabled a new paradigm for immersive surgical training by acquiring nearly complete 3D reconstructions of actual surgical procedures. However, unlike 2D videotaping that can easily stream data in real-time, 3D imaging-based solutions have so far required pre-capturing and processing the data; surgical training using the data has to be conducted offline after the acquisition. In this paper, we present a new real-time immersive 3D surgical training system. Our solution builds upon the recent multi-Kinect based surgical training system [1] that can acquire and display high-fidelity 3D surgical procedures using only a small number of Microsoft Kinect sensors. We build on top of the system a client-server model for real-time streaming. On the server front, we efficiently fuse multiple Kinect data acquired from different viewpoints and compress and then stream the data to the client. On the client front, we build an interactive space-time navigator to allow remote users (e.g., trainees) to witness the surgical procedure in real-time as if they were present in the room.

  5. Structure Sensor for mobile markerless augmented reality

    NASA Astrophysics Data System (ADS)

    Kilgus, T.; Bux, R.; Franz, A. M.; Johnen, W.; Heim, E.; Fangerau, M.; Müller, M.; Yen, K.; Maier-Hein, L.

    2016-03-01

    3D visualization of anatomical data is an integral part of diagnostics and treatment in many medical disciplines, such as radiology, surgery and forensic medicine. To enable intuitive interaction with the data, we recently proposed a new concept for on-patient visualization of medical data which involves rendering of subsurface structures on a mobile display that can be moved along the human body. The data fusion is achieved with a range imaging device attached to the display. The range data is used to register static 3D medical imaging data with the patient body based on a surface matching algorithm. However, our previous prototype was based on the Microsoft Kinect camera and thus required a cable connection to acquire color and depth data. The contribution of this paper is two-fold. Firstly, we replace the Kinect with the Structure Sensor - a novel cable-free range imaging device - to improve handling and user experience and show that the resulting accuracy (target registration error: 4.8 ± 1.5 mm) is comparable to that achieved with the Kinect. Secondly, a new approach to visualizing complex 3D anatomy based on this device, as well as 3D printed models of anatomical surfaces, is presented. We demonstrate that our concept can be applied to in vivo data and to a 3D printed skull from a forensic case. Our new device is the next step towards clinical integration and shows that the concept can not only be applied during autopsy but also be used for presentation of forensic data to laypeople in court or in medical education.

  6. Facial expression identification using 3D geometric features from Microsoft Kinect device

    NASA Astrophysics Data System (ADS)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions. More recently, the device has been increasingly deployed for supporting scientific investigations. This paper explores the effectiveness of using the device in identifying emotional facial expressions such as surprise, smile, sad, etc. and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
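
    A nearest-neighbour classifier that compares feature-component sequences with a dynamic-time-warping alignment, in the spirit of the approach described (though not the authors' implementation), can be sketched as follows with placeholder sequences.

      import numpy as np

      def dtw_distance(a, b):
          """Dynamic time warping distance between two 1D feature-component sequences."""
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(a[i - 1] - b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      def knn_predict(query, train_seqs, train_labels, k=3):
          """Classify a query sequence by the majority label of its k DTW neighbours."""
          dists = [dtw_distance(query, s) for s in train_seqs]
          nearest = np.argsort(dists)[:k]
          labels = [train_labels[i] for i in nearest]
          return max(set(labels), key=labels.count)

      # Placeholder training sequences (one geometric feature component per frame).
      rng = np.random.default_rng(3)
      train = [np.sin(np.linspace(0, np.pi, 20)) + rng.normal(0, 0.05, 20) for _ in range(5)] + \
              [np.linspace(0, 1, 20) + rng.normal(0, 0.05, 20) for _ in range(5)]
      labels = ["smile"] * 5 + ["surprise"] * 5
      print(knn_predict(np.sin(np.linspace(0, np.pi, 20)), train, labels))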

  7. Estimating Physical Activity Energy Expenditure with the Kinect Sensor in an Exergaming Environment

    PubMed Central

    Nathan, David; Huynh, Du Q.; Rubenson, Jonas; Rosenberg, Michael

    2015-01-01

    Active video games that require physical exertion during game play have been shown to confer health benefits. Typically, energy expended during game play is measured using devices attached to players, such as accelerometers, or portable gas analyzers. Since 2010, active video gaming technology incorporates marker-less motion capture devices to simulate human movement into game play. Using the Kinect Sensor and Microsoft SDK this research aimed to estimate the mechanical work performed by the human body and estimate subsequent metabolic energy using predictive algorithmic models. Nineteen University students participated in a repeated measures experiment performing four fundamental movements (arm swings, standing jumps, body-weight squats, and jumping jacks). Metabolic energy was captured using a Cortex Metamax 3B automated gas analysis system with mechanical movement captured by the combined motion data from two Kinect cameras. Estimations of the body segment properties, such as segment mass, length, centre of mass position, and radius of gyration, were calculated from the Zatsiorsky-Seluyanov's equations of de Leva, with adjustment made for posture cost. GPML toolbox implementation of the Gaussian Process Regression, a locally weighted k-Nearest Neighbour Regression, and a linear regression technique were evaluated for their performance on predicting the metabolic cost from new feature vectors. The experimental results show that Gaussian Process Regression outperformed the other two techniques by a small margin. This study demonstrated that physical activity energy expenditure during exercise, using the Kinect camera as a motion capture system, can be estimated from segmental mechanical work. Estimates for high-energy activities, such as standing jumps and jumping jacks, can be made accurately, but for low-energy activities, such as squatting, the posture of static poses should be considered as a contributing factor. When translated into the active video gaming environment, the results could be incorporated into game play to more accurately control the energy expenditure requirements. PMID:26000460
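
    A Gaussian Process Regression mapping movement-derived feature vectors to metabolic cost, as compared in this study, can be set up in a few lines with scikit-learn; the features, targets and kernel choice below are assumptions for illustration, not the study's data or model.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(7)

      # Hypothetical features (e.g. segmental mechanical work terms) and measured
      # metabolic cost from the gas-analysis system, in arbitrary units.
      X = rng.uniform(0, 1, size=(60, 4))
      y = 2.0 * X[:, 0] + 1.2 * X[:, 1] ** 2 + rng.normal(0, 0.05, 60)

      gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(1e-2),
                                     normalize_y=True)
      gpr.fit(X[:45], y[:45])                        # train on a subset of trials
      mean, std = gpr.predict(X[45:], return_std=True)
      print("held-out RMSE:", np.sqrt(np.mean((mean - y[45:]) ** 2)))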

  8. Gait assessment using the Microsoft Xbox One Kinect: Concurrent validity and inter-day reliability of spatiotemporal and kinematic variables.

    PubMed

    Mentiplay, Benjamin F; Perraton, Luke G; Bower, Kelly J; Pua, Yong-Hao; McGaw, Rebekah; Heywood, Sophie; Clark, Ross A

    2015-07-16

    The revised Xbox One Kinect, also known as the Microsoft Kinect V2 for Windows, includes enhanced hardware which may improve its utility as a gait assessment tool. This study examined the concurrent validity and inter-day reliability of spatiotemporal and kinematic gait parameters estimated using the Kinect V2 automated body tracking system and a criterion reference three-dimensional motion analysis (3DMA) marker-based camera system. Thirty healthy adults performed two testing sessions consisting of comfortable and fast paced walking trials. Spatiotemporal outcome measures related to gait speed, speed variability, step length, width and time, foot swing velocity and medial-lateral and vertical pelvis displacement were examined. Kinematic outcome measures including ankle flexion, knee flexion and adduction and hip flexion were examined. To assess the agreement between the Kinect and 3DMA systems, Bland-Altman plots, relative agreement (Pearson's correlation) and overall agreement (concordance correlation coefficients) were determined. Reliability was assessed using intraclass correlation coefficients, Cronbach's alpha and standard error of measurement. The spatiotemporal measurements had consistently excellent (r≥0.75) concurrent validity, with the exception of modest validity for medial-lateral pelvis sway (r=0.45-0.46) and fast paced gait speed variability (r=0.73). In contrast, kinematic validity was consistently poor to modest, with all associations between the systems being weak (r<0.50). In those measures with acceptable validity, the inter-day reliability was similar between systems. In conclusion, while the Kinect V2 body tracking may not accurately obtain lower body kinematic data, it shows great potential as a tool for measuring spatiotemporal aspects of gait. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Validation of Functional Reaching Volume as an Outcome Measure across the Spectrum of Abilities in Muscular Dystrophy

    DTIC Science & Technology

    2017-09-01

    interactive video game regardless of ambulatory status. The objective of this project is to produce a trial-ready outcome measure that will enable clinical... custom-designed video game using the Microsoft Kinect camera, measures functional reaching volume (FRV) across the spectrum of the disease in DMD... Keywords: Kinect, video game, clinical trial readiness, neuromuscular disease, Soliton, functional reaching volume.

  10. MIT-Skywalker: On the use of a markerless system.

    PubMed

    Goncalves, Rogerio S; Hamilton, Taya; Krebs, Hermano I

    2017-07-01

    This paper describes our efforts to employ the Microsoft Kinect as a low cost vision control system for the MIT-Skywalker, a robotic gait rehabilitation device. The Kinect enables an alternative markerless solution to control the MIT-Skywalker and allows a more user-friendly set-up. A study involving eight healthy subjects and two stroke survivors using the MIT-Skywalker device demonstrates the advantages and challenges of this new proposed approach.

  11. Data fusion of multiple kinect sensors for a rehabilitation system.

    PubMed

    Huibin Du; Yiwen Zhao; Jianda Han; Zheng Wang; Guoli Song

    2016-08-01

    Kinect-like depth sensors have been widely used in rehabilitation systems. However, a single depth sensor handles limb occlusion, data loss and data errors poorly, making it less reliable. This paper focuses on using two Kinect sensors and a data fusion method to solve these problems. First, two Kinect sensors capture the motion data of the healthy arm of the hemiplegic patient; second, the data are merged using a Set-Membership Filter (SMF); the merged motion data are then mirrored across the mid-plane of the body; finally, the mirrored data control a wearable robotic arm driving the patient's paralytic arm, so that the patient can interactively and actively complete a variety of recovery actions prompted by a computer with 3D animation games.
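
    The mirroring step reduces to reflecting 3D joint positions across a plane. A minimal sketch is given below, assuming the mid-plane is specified by a point and a unit normal (an assumption about the representation, not necessarily the paper's formulation).

      import numpy as np

      def mirror_across_plane(points, plane_point, plane_normal):
          """Reflect 3D points across the plane defined by a point and a unit normal."""
          n = np.asarray(plane_normal, float)
          n = n / np.linalg.norm(n)
          p = np.asarray(points, float)
          d = (p - np.asarray(plane_point, float)) @ n   # signed distance to plane
          return p - 2.0 * d[:, None] * n

      # Hypothetical healthy-arm joints (shoulder, elbow, wrist) in metres,
      # mirrored across a sagittal mid-plane at x = 0.
      healthy_arm = np.array([[0.20, 1.40, 2.0],
                              [0.35, 1.15, 2.0],
                              [0.45, 0.95, 2.1]])
      print(mirror_across_plane(healthy_arm, plane_point=[0, 0, 0], plane_normal=[1, 0, 0]))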

  12. Validation of Foot Placement Locations from Ankle Data of a Kinect v2 Sensor

    PubMed Central

    Geerse, Daphne; Coolen, Bert; Kolijn, Detmar; Roerdink, Melvyn

    2017-01-01

    The Kinect v2 sensor may be a cheap and easy to use sensor to quantify gait in clinical settings, especially when applied in set-ups integrating multiple Kinect sensors to increase the measurement volume. Reliable estimates of foot placement locations are required to quantify spatial gait parameters. This study aimed to systematically evaluate the effects of distance from the sensor, side and step length on estimates of foot placement locations based on Kinect’s ankle body points. Subjects (n = 12) performed stepping trials at imposed foot placement locations distanced 2 m or 3 m from the Kinect sensor (distance), for left and right foot placement locations (side), and for five imposed step lengths. Body points’ time series of the lower extremities were recorded with a Kinect v2 sensor, placed frontoparallelly on the left side, and a gold-standard motion-registration system. Foot placement locations, step lengths, and stepping accuracies were compared between systems using repeated-measures ANOVAs, agreement statistics and two one-sided t-tests to test equivalence. For the right side at the 2 m distance from the sensor we found significant between-systems differences in foot placement locations and step lengths, and evidence for nonequivalence. This distance by side effect was likely caused by differences in body orientation relative to the Kinect sensor. It can be reduced by using Kinect’s higher-dimensional depth data to estimate foot placement locations directly from the foot’s point cloud and/or by using smaller inter-sensor distances in the case of a multi-Kinect v2 set-up to estimate foot placement locations at greater distances from the sensor. PMID:28994731

  13. Real-time posture reconstruction for Microsoft Kinect.

    PubMed

    Shum, Hubert P H; Ho, Edmond S L; Jiang, Yang; Takagi, Shu

    2013-10-01

    The recent advancement of motion recognition using Microsoft Kinect stimulates many new ideas in motion capture and virtual reality applications. Utilizing a pattern recognition algorithm, Kinect can determine the positions of different body parts from the user. However, due to the use of a single-depth camera, recognition accuracy drops significantly when the parts are occluded. This hugely limits the usability of applications that involve interaction with external objects, such as sport training or exercising systems. The problem becomes more critical when Kinect incorrectly perceives body parts. This is because applications have limited information about the recognition correctness, and using those parts to synthesize body postures would result in serious visual artifacts. In this paper, we propose a new method to reconstruct valid movement from incomplete and noisy postures captured by Kinect. We first design a set of measurements that objectively evaluates the degree of reliability on each tracked body part. By incorporating the reliability estimation into a motion database query during run time, we obtain a set of similar postures that are kinematically valid. These postures are used to construct a latent space, which is known as the natural posture space in our system, with local principal component analysis. We finally apply frame-based optimization in the space to synthesize a new posture that closely resembles the true user posture while satisfying kinematic constraints. Experimental results show that our method can significantly improve the quality of the recognized posture under severely occluded environments, such as a person exercising with a basketball or moving in a small room.

  14. A Data Set of Human Body Movements for Physical Rehabilitation Exercises.

    PubMed

    Vakanski, Aleksandar; Jun, Hyung-Pil; Paul, David; Baker, Russell

    2018-03-01

    The article presents University of Idaho - Physical Rehabilitation Movement Data (UI-PRMD) - a publicly available data set of movements related to common exercises performed by patients in physical rehabilitation programs. For the data collection, 10 healthy subjects performed 10 repetitions of different physical therapy movements, with a Vicon optical tracker and a Microsoft Kinect sensor used for the motion capturing. The data are in a format that includes positions and angles of full-body joints. The objective of the data set is to provide a basis for mathematical modeling of therapy movements, as well as for establishing performance metrics for evaluation of patient consistency in executing the prescribed rehabilitation exercises.

  15. Comparative efficacy of new interfaces for intra-procedural imaging review: the Microsoft Kinect, Hillcrest Labs Loop Pointer, and the Apple iPad.

    PubMed

    Chao, Cherng; Tan, Justin; Castillo, Edward M; Zawaideh, Mazen; Roberts, Anne C; Kinney, Thomas B

    2014-08-01

    We adapted and evaluated the Microsoft Kinect (touchless interface), Hillcrest Labs Loop Pointer (gyroscopic mouse), and the Apple iPad (multi-touch tablet) for intra-procedural imaging review efficacy in a simulation using MIM Software DICOM viewers. Using each device, 29 radiologists executed five basic interactions to complete the overall task of measuring an 8.1-cm hepatic lesion: scroll, window, zoom, pan, and measure. For each interaction, participants assessed the devices on a 3-point subjective scale (3 = highest usability score). The five individual scores were summed to calculate a subjective composite usability score (max 15 points). Overall task time to completion was recorded. Each user also assessed each device for its potential to jeopardize a sterile field. The composite usability scores were as follows: Kinect 9.9 (out of 15.0; SD = 2.8), Loop Pointer 12.9 (SD = 13.5), and iPad 13.5 (SD = 1.8). Mean task completion times were as follows: Kinect 156.7 s (SD = 86.5), Loop Pointer 51.5 s (SD = 30.6), and iPad 41.1 s (SD = 25.3). The mean hepatic lesion measurements were as follows: Kinect was 7.3 cm (SD = 0.9), Loop Pointer 7.8 cm (SD = 1.1), and iPad 8.2 cm (SD = 1.2). The mean deviations from true hepatic lesion measurement were as follows: Kinect 1.0 cm and for both the Loop Pointer and iPad, 0.9 cm (SD = 0.7). The Kinect had the least and iPad had the most subjective concern for compromising the sterile field. A new intra-operative imaging review interface may be near. Most surveyed foresee these devices as useful in procedures, and most do not anticipate problems with a sterile field. An ideal device would combine iPad's usability and accuracy with the Kinect's touchless aspect.

  16. Repurposing the Microsoft Kinect for Windows v2 for external head motion tracking for brain PET.

    PubMed

    Noonan, P J; Howard, J; Hallett, W A; Gunn, R N

    2015-11-21

    Medical imaging systems such as those used in positron emission tomography (PET) are capable of spatial resolutions that enable the imaging of small, functionally important brain structures. However, the quality of data from PET brain studies is often limited by subject motion during acquisition. This is particularly challenging for patients with neurological disorders or with dynamic research studies that can last 90 min or more. Restraining head movement during the scan does not eliminate motion entirely and can be unpleasant for the subject. Head motion can be detected and measured using a variety of techniques that either use the PET data itself or an external tracking system. Advances in computer vision arising from the video gaming industry could offer significant benefits when re-purposed for medical applications. A method for measuring rigid body type head motion using the Microsoft Kinect v2 is described, with results demonstrating ⩽0.5 mm spatial accuracy. Motion data is measured in real-time at 30 Hz using the KinectFusion algorithm. Non-rigid motion is detected using the residual alignment energy data of the KinectFusion algorithm, allowing for unreliable motion to be discarded. Motion data is aligned to PET listmode data using pulse sequences injected into the PET/CT gantry, allowing for correction of rigid body motion. Pilot data from a clinical dynamic PET/CT examination is shown.

  17. Repurposing the Microsoft Kinect for Windows v2 for external head motion tracking for brain PET

    NASA Astrophysics Data System (ADS)

    Noonan, P. J.; Howard, J.; Hallett, W. A.; Gunn, R. N.

    2015-11-01

    Medical imaging systems such as those used in positron emission tomography (PET) are capable of spatial resolutions that enable the imaging of small, functionally important brain structures. However, the quality of data from PET brain studies is often limited by subject motion during acquisition. This is particularly challenging for patients with neurological disorders or with dynamic research studies that can last 90 min or more. Restraining head movement during the scan does not eliminate motion entirely and can be unpleasant for the subject. Head motion can be detected and measured using a variety of techniques that either use the PET data itself or an external tracking system. Advances in computer vision arising from the video gaming industry could offer significant benefits when re-purposed for medical applications. A method for measuring rigid body type head motion using the Microsoft Kinect v2 is described, with results demonstrating ⩽0.5 mm spatial accuracy. Motion data is measured in real-time at 30 Hz using the KinectFusion algorithm. Non-rigid motion is detected using the residual alignment energy data of the KinectFusion algorithm, allowing for unreliable motion to be discarded. Motion data is aligned to PET listmode data using pulse sequences injected into the PET/CT gantry, allowing for correction of rigid body motion. Pilot data from a clinical dynamic PET/CT examination is shown.

  18. Kinect-based sign language recognition of static and dynamic hand movements

    NASA Astrophysics Data System (ADS)

    Dalawis, Rando C.; Olayao, Kenneth Deniel R.; Ramos, Evan Geoffrey I.; Samonte, Mary Jane C.

    2017-02-01

    A different approach to sign language recognition of static and dynamic hand movements was developed in this study using a normalized correlation algorithm. The goal of this research was to translate fingerspelling sign language into text using MATLAB and the Microsoft Kinect. Digital input images captured by the Kinect device are matched against template samples stored in a database. This Human Computer Interaction (HCI) prototype was developed to help people with communication disabilities express their thoughts with ease. Frame segmentation and feature extraction were used to give meaning to the captured images. Sequential and random testing was used to test both static and dynamic fingerspelling gestures. The researchers also discuss factors they encountered that caused misclassification of some signs.
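
    The template-matching step based on normalized correlation can be illustrated with a small sketch that scores a captured image against stored templates; the images below are random placeholders and this is not the authors' MATLAB implementation.

      import numpy as np

      def normalized_correlation(image, template):
          """Zero-mean normalized correlation between two equally sized grayscale images."""
          a = image.astype(float) - image.mean()
          b = template.astype(float) - template.mean()
          denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
          return float((a * b).sum() / denom) if denom > 0 else 0.0

      def classify(image, templates):
          """Return the label of the best-matching fingerspelling template."""
          scores = {label: normalized_correlation(image, tpl) for label, tpl in templates.items()}
          return max(scores, key=scores.get), scores

      # Placeholder 64x64 templates for two fingerspelled letters.
      rng = np.random.default_rng(5)
      templates = {"A": rng.random((64, 64)), "B": rng.random((64, 64))}
      captured = templates["A"] + rng.normal(0, 0.05, (64, 64))   # noisy capture of "A"
      label, scores = classify(captured, templates)
      print(label, {k: round(v, 3) for k, v in scores.items()})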

  19. Validity of the Kinect for Gait Assessment: A Focused Review

    PubMed Central

    Springer, Shmuel; Yogev Seligmann, Galit

    2016-01-01

    Gait analysis may enhance clinical practice. However, its use is limited due to the need for expensive equipment which is not always available in clinical settings. Recent evidence suggests that Microsoft Kinect may provide a low cost gait analysis method. The purpose of this report is to critically evaluate the literature describing the concurrent validity of using the Kinect as a gait analysis instrument. An online search of PubMed, CINAHL, and ProQuest databases was performed. Included were studies in which walking was assessed with the Kinect and another gold standard device, and consisted of at least one numerical finding of spatiotemporal or kinematic measures. Our search identified 366 papers, from which 12 relevant studies were retrieved. The results demonstrate that the Kinect is valid only for some spatiotemporal gait parameters. Although the kinematic parameters measured by the Kinect followed the trend of the joint trajectories, they showed poor validity and large errors. In conclusion, the Kinect may have the potential to be used as a tool for measuring spatiotemporal aspects of gait, yet standardized methods should be established, and future examinations with both healthy subjects and clinical participants are required in order to integrate the Kinect as a clinical gait analysis tool. PMID:26861323

  20. Automated Tracking and Quantification of Autistic Behavioral Symptoms Using Microsoft Kinect.

    PubMed

    Kang, Joon Young; Kim, Ryunhyung; Kim, Hyunsun; Kang, Yeonjune; Hahn, Susan; Fu, Zhengrui; Khalid, Mamoon I; Schenck, Enja; Thesen, Thomas

    2016-01-01

    The prevalence of autism spectrum disorder (ASD) has risen significantly in the last ten years, and today, roughly 1 in 68 children has been diagnosed. One hallmark set of symptoms in this disorder is stereotypical motor movements. These repetitive movements may include spinning, body-rocking, or hand-flapping, amongst others. Despite the growing number of individuals affected by autism, an effective, accurate method of automatically quantifying such movements remains unavailable. This has negative implications for assessing the outcome of ASD intervention and drug studies. Here we present a novel approach to detecting autistic symptoms using the Microsoft Kinect v.2 to objectively and automatically quantify autistic body movements. The Kinect camera was used to film 12 actors performing three separate stereotypical motor movements each. Visual Gesture Builder (VGB) was implemented to analyze the skeletal structures in these recordings using a machine learning approach. In addition, movement detection was hard-coded in Matlab. Manual grading was used to confirm the validity and reliability of the VGB and Matlab analysis. We found that both methods were able to detect autistic body movements with high probability. The machine learning approach yielded the highest detection rates, supporting its use in automatically quantifying complex autistic behaviors with multi-dimensional input.

  1. Wind tunnel experiments: influence of erosion and deposition on wind-packing of new snow

    NASA Astrophysics Data System (ADS)

    Sommer, Christian G.; Lehning, Michael; Fierz, Charles

    2018-01-01

    Wind sometimes creates a hard, wind-packed layer at the surface of a snowpack. The formation of such wind crusts was observed during wind tunnel experiments with combined SnowMicroPen and Microsoft Kinect sensors. The former provides the hardness of new and wind-packed snow and the latter spatial snow depth data in the test section. Previous experiments showed that saltation is necessary but not sufficient for wind-packing. The combination of hardness and snow depth data now allows the case with saltation to be studied in more detail. The Kinect data require complex processing, but with the appropriate corrections, snow depth changes can be measured with an accuracy of about 1 mm. The Kinect is therefore well suited to quantify erosion and deposition. We found that no hardening occurred during erosion and that a wind crust may or may not form when snow is deposited. Deposition is more efficient at hardening snow in wind-exposed than in wind-sheltered areas. The snow hardness increased more on the windward side of artificial obstacles placed in the wind tunnel. Similarly, the snow was harder in positions with a low Sx parameter. Sx describes how wind-sheltered (high Sx) or wind-exposed (low Sx) a position is and was calculated based on the Kinect data. The correlation between Sx and snow hardness was -0.63. We also found a negative correlation of -0.4 between the snow hardness and the deposition rate. Slowly deposited snow is harder than a rapidly growing accumulation. Sx and the deposition rate together explain about half of the observed variability of snow hardness.

  2. Detection of patient's bed statuses in 3D using a Microsoft Kinect.

    PubMed

    Li, Yun; Berkowitz, Lyle; Noskin, Gary; Mehrotra, Sanjay

    2014-01-01

    Patients spend the vast majority of their hospital stay in an unmonitored bed where various mobility factors can impact patient safety and quality. Specifically, bed positioning and a patient's related mobility in that bed can have a profound impact on risks such as pneumonias, blood clots, bed ulcers and falls. This issue has been exacerbated as the nurse-per-bed (NPB) ratio has decreased in recent years. To help assess these risks, it is critical to monitor a hospital bed's positional status (BPS). Two bed positional statuses, bed height (BH) and bed chair angle (BCA), are of critical interest for bed monitoring. In this paper, we develop a bed positional status detection system using a single Microsoft Kinect. Experimental results show that we are able to achieve 94.5% and 93.0% overall accuracy for the estimated BCA and BH in a simulated patient's room environment.

  3. PATHway: Decision Support in Exercise Programmes for Cardiac Rehabilitation.

    PubMed

    Filos, Dimitris; Triantafyllidis, Andreas; Chouvarda, Ioanna; Buys, Roselien; Cornelissen, Véronique; Budts, Werner; Walsh, Deirdre; Woods, Catherine; Moran, Kieran; Maglaveras, Nicos

    2016-01-01

    Rehabilitation is important for patients with cardiovascular diseases (CVD) to improve health outcomes and quality of life. However, adherence to current exercise programmes in cardiac rehabilitation is limited. We present the design and development of a Decision Support System (DSS) for telerehabilitation, aiming to enhance exercise programmes for CVD patients through ensuring their safety, personalising the programme according to their needs and performance, and motivating them toward meeting their physical activity goals. The DSS processes data originated from a Microsoft Kinect camera, a blood pressure monitor, a heart rate sensor and questionnaires, in order to generate a highly individualised exercise programme and improve patient adherence. Initial results within the EU-funded PATHway project show the potential of our approach.

  4. Augmented Virtual Reality Laboratory

    NASA Technical Reports Server (NTRS)

    Tully-Hanson, Benjamin

    2015-01-01

    Until recently, real-time motion tracking hardware has largely been cost-prohibitive for regular research use. With the release of the Microsoft Kinect in November 2010, researchers now have access to a device that, for a few hundred dollars, is capable of providing red-green-blue (RGB), depth, and skeleton data. It is also capable of tracking multiple people in real time. For its original intended purpose, i.e. gaming with the Xbox 360 and eventually the Xbox One, it performs quite well. However, researchers soon found that although the sensor is versatile, it has limitations in real-world applications. I was brought aboard this summer by William Little in the Augmented Virtual Reality (AVR) Lab at Kennedy Space Center to find solutions to these limitations.

  5. "Kinect-ing" with clinicians: a knowledge translation resource to support decision making about video game use in rehabilitation.

    PubMed

    Levac, Danielle; Espy, Deborah; Fox, Emily; Pradhan, Sujata; Deutsch, Judith E

    2015-03-01

    Microsoft's Kinect for Xbox 360 virtual reality (VR) video games are promising rehabilitation options because they involve motivating, full-body movement practice. However, these games were designed for recreational use, which creates challenges for clinical implementation. Busy clinicians require decision-making support to inform game selection and implementation that address individual therapeutic goals. This article describes the development and preliminary evaluation of a knowledge translation (KT) resource to support clinical decision making about selection and use of Kinect games in physical therapy. The knowledge-to-action framework guided the development of the Kinecting With Clinicians (KWiC) resource. Five physical therapists with VR and video game expertise analyzed the Kinect Adventure games. A consensus-building method was used to arrive at categories to organize clinically relevant attributes guiding game selection and game play. The process and results of an exploratory usability evaluation of the KWiC resource by clinicians through interviews and focus groups at 4 clinical sites is described. Subsequent steps in the evaluation and KT process are proposed, including making the KWiC resource Web-based and evaluating the utility of the online resource in clinical practice. © 2015 American Physical Therapy Association.

  6. Depth map occlusion filling and scene reconstruction using modified exemplar-based inpainting

    NASA Astrophysics Data System (ADS)

    Voronin, V. V.; Marchuk, V. I.; Fisunov, A. V.; Tokareva, S. V.; Egiazarian, K. O.

    2015-03-01

    RGB-D sensors are relatively inexpensive and are commercially available off-the-shelf. However, owing to their low complexity, one encounters several artifacts in the depth map, such as holes, misalignment between the depth and color images, and a lack of sharp object boundaries. Depth maps generated by Kinect cameras also contain a significant amount of missing pixels and strong noise, limiting their usability in many computer vision applications. In this paper, we present an efficient hole filling and damaged region restoration method that improves the quality of the depth maps obtained with the Microsoft Kinect device. The proposed approach is based on a modified exemplar-based inpainting and LPA-ICI filtering by exploiting the correlation between color and depth values in local image neighborhoods. As a result, edges of the objects are sharpened and aligned with the objects in the color image. Several examples considered in this paper show the effectiveness of the proposed approach for large holes removal as well as recovery of small regions on several test images of depth maps. We perform a comparative study and show that statistically, the proposed algorithm delivers superior quality results compared to existing algorithms.

  7. Validation of Static and Dynamic Balance Assessment Using Microsoft Kinect for Young and Elderly Populations.

    PubMed

    Eltoukhy, Moataz A; Kuenze, Christopher; Oh, Jeonghoon; Signorile, Joseph F

    2018-01-01

    Reduction in balance is an indicator of fall risk, and therefore, an accurate and cost-effective balance assessment tool is essential for prescribing effective postural control strategies. This study established the validity of the Kinect v2 sensor in assessing center of mass (CoM) excursion and velocity during single-leg balance and voluntary ankle sway tasks among young and elderly subjects. We compared balance outcome measures (anteroposterior (AP) and mediolateral (ML) CoM excursion and velocity and average sway length) to a traditional three-dimensional motion analysis system. Twenty subjects (10 young, 10 elderly), with no history of lower extremity injury, participated in this study. Subjects performed six randomized trials; four single-leg stand (SLS) and two ankle sway trials. SLS and voluntary ankle sway trials showed consistency (ICC(2, k)) and agreement (ICC(3, k)) for all variables when all subjects were considered, as well as when the elderly and young groups were analyzed separately. Concordance between systems ranged from poor to nearly perfect depending on the group, task, and variable assessed.

  8. Detecting Key Inter-Joint Distances and Anthropometry Effects for Static Gesture Development using Microsoft Kinect

    DTIC Science & Technology

    2013-09-01

    Dates covered: 1 Sep 2013–30 Sep 2013. Supplementary notes: “Nintendo Wii” is a registered trademark of Nintendo Company, Ltd.; “PlayStation” and PlayStation “Move” are registered trademarks of Sony Computer Entertainment; “Kinect” is a registered trademark of Microsoft Corporation.

  9. Microsoft Kinect-based Continuous Performance Test: An Objective Attention Deficit Hyperactivity Disorder Assessment.

    PubMed

    Delgado-Gomez, David; Peñuelas-Calvo, Inmaculada; Masó-Besga, Antonio Eduardo; Vallejo-Oñate, Silvia; Baltasar Tello, Itziar; Arrua Duarte, Elsa; Vera Varela, María Constanza; Carballo, Juan; Baca-García, Enrique

    2017-03-20

    One of the major challenges in mental health care is developing new instruments for an accurate and objective evaluation of attention deficit hyperactivity disorder (ADHD). Early ADHD identification, severity assessment, and prompt treatment are essential to avoid the negative effects associated with this mental condition. The aim of our study was to develop a novel ADHD assessment instrument based on Microsoft Kinect, which identifies ADHD cardinal symptoms in order to provide a more accurate evaluation. A group of 30 children, aged 8-12 years (10.3 [SD 1.4]; male 70% [21/30]), who were referred to the Child and Adolescent Psychiatry Unit of the Department of Psychiatry at Fundación Jiménez Díaz Hospital (Madrid, Spain), were included in this study. Children were required to meet the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) criteria for an ADHD diagnosis. One of the parents or guardians of each child completed the Spanish version of the Strengths and Weaknesses of ADHD Symptoms and Normal Behavior (SWAN) rating scale used in clinical practice. Each child performed a Kinect-based continuous performance test (CPT) in which the reaction time (RT), commission errors, and the time required to complete the reaction (CT) were calculated. The correlations of the 3 predictors, obtained using the Kinect methodology, with the scores of the SWAN scale were calculated. The RT achieved correlations of -.11, -.29, and -.37 with the inattention, hyperactivity, and impulsivity factors of the SWAN scale, respectively. The correlations of commission errors with these 3 factors were -.03, .01, and .24, respectively. Our findings show a relation between the Microsoft Kinect-based version of the CPT and ADHD symptomatology assessed through parental report. The results point to the importance of future research on the development of objective measures for the diagnosis of ADHD among children and adolescents. ©David Delgado-Gomez, Inmaculada Peñuelas-Calvo, Antonio Eduardo Masó-Besga, Silvia Vallejo-Oñate, Itziar Baltasar Tello, Elsa Arrua Duarte, María Constanza Vera Varela, Juan Carballo, Enrique Baca-García. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 20.03.2017.

  10. Navigation of a virtual exercise environment with Microsoft Kinect by people post-stroke or with cerebral palsy.

    PubMed

    Pool, Sean M; Hoyle, John M; Malone, Laurie A; Cooper, Lloyd; Bickel, C Scott; McGwin, Gerald; Rimmer, James H; Eberhardt, Alan W

    2016-04-08

    One approach to encourage and facilitate exercise is through interaction with virtual environments. The present study assessed the utility of Microsoft Kinect as an interface for choosing between multiple routes within a virtual environment through body gestures and voice commands. The approach was successfully tested on 12 individuals post-stroke and 15 individuals with cerebral palsy (CP). Participants rated their perception of difficulty in completing each gesture using a 5-point Likert scale questionnaire. The "most viable" gestures were defined as those with average success rates of 90% or higher and perception of difficulty ranging between easy and very easy. For those with CP, hand raises, hand extensions, and head nod gestures were found most viable. For those post-stroke, the most viable gestures were torso twists, head nods, as well as hand raises and hand extensions using the less impaired hand. Voice commands containing two syllables were viable (>85% successful) for those post-stroke; however, participants with CP were unable to complete any voice commands with a high success rate. This study demonstrated that Kinect may be useful for persons with mobility impairments to interface with virtual exercise environments, but the effectiveness of the various gestures depends upon the disability of the user.

  11. Sci—Thur AM: YIS - 10: Use of the Microsoft Kinect for applications of patient surface data to radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guillet, Dominique; Syme, Alasdair; DeBlois, François

    Current techniques to acquire patient surface data are often very expensive and lack flexibility. In this study, the use of the Microsoft Kinect to reliably acquire 3D scans of patient surface is investigated. A design is presented to make the system easily applicable to the clinic. Potential applications of the device to radiotherapy are also presented. Scan reproducibility was tested by repeatedly scanning an anthropomorphic phantom. Scan accuracy was tested by comparing Kinect scans to the surface extracted from a CT dataset of a Rando® anthropomorphic phantom, which was considered as the true reference surface. Average signed distances of 0.12 ± 2.34 mm and 0.13 ± 2.04 mm were obtained between the compared surfaces for reproducibility and accuracy respectively. This is conclusive, since it indicates that the variations observed come largely from noise distributed around an average distance close to 0 mm. Moreover, the range of the noise is small enough for the system to reliably capture a patient's surface. A system was also designed using two Kinects used together to acquire 3D surfaces in a quick and stable way that is applicable to the clinic. Finally, applications of the device to radiotherapy are demonstrated. Its use to detect local positioning errors is presented, where small local variations difficult to see with the naked eye are clearly visible. The system was also used to predict collisions using gantry and patient scans and thus ensure the safety of unconventional trajectories.
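
    A simplified way to compare a Kinect scan against a reference surface sampled as points is to take nearest-neighbour distances, as sketched below; this yields unsigned distances, whereas the signed distances reported above would additionally require reference surface normals. The function name and synthetic data are illustrative only.

        import numpy as np
        from scipy.spatial import cKDTree

        def surface_distance_stats(kinect_pts, reference_pts):
            """Mean and spread of nearest-neighbour distances (same units as the
            inputs, e.g. mm) between a Kinect scan and a reference surface
            sampled as a point cloud.  Both inputs are (N, 3) arrays.
            Distances here are unsigned; signing them needs surface normals.
            """
            tree = cKDTree(reference_pts)
            d, _ = tree.query(kinect_pts)      # distance to closest reference point
            return d.mean(), d.std()

        # Example with synthetic data (values in millimetres).
        rng = np.random.default_rng(0)
        ref = rng.uniform(0, 500, size=(5000, 3))
        scan = ref[:2000] + rng.normal(0, 2.0, size=(2000, 3))   # noisy rescan
        print(surface_distance_stats(scan, ref))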

  12. Dioptric defocus maps across the visual field for different indoor environments.

    PubMed

    García, Miguel García; Ohlendorf, Arne; Schaeffel, Frank; Wahl, Siegfried

    2018-01-01

    One of the factors proposed to regulate eye growth is the error signal derived from defocus in the retina; this signal might arise not only from defocus in the fovea but from defocus across the whole visual field. Therefore, myopia could be better predicted by spatio-temporally mapping the 'environmental defocus' over the visual field. At present, no devices are available that could provide this information. A 'Kinect sensor v1' camera (Microsoft Corp.) and a portable eye tracker were used to develop a system for quantifying 'indoor defocus error signals' across the central 58° of the visual field. Dioptric differences relative to the fovea (assumed to be in focus) were recorded over the visual field and 'defocus maps' were generated for various scenes and tasks.
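
    The basic quantity behind such maps is the dioptric difference between each depth pixel and the fixated (foveal) distance. The sketch below assumes depth frames are already available as arrays; it is a minimal illustration, not the authors' full pipeline.

        import numpy as np

        def defocus_map(depth_m, foveal_distance_m):
            """Dioptric defocus of every pixel relative to the fixated distance.

            depth_m: 2-D array of scene distances in metres (0 = missing pixel).
            foveal_distance_m: distance of the fixated point, assumed in focus.
            Returns defocus in dioptres (1/m); positive = nearer than fixation.
            """
            depth = np.where(depth_m > 0, depth_m, np.nan)   # mask missing depth
            return 1.0 / depth - 1.0 / foveal_distance_m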

  13. Mobile, Virtual Enhancements for Rehabilitation (MOVER)

    DTIC Science & Technology

    2015-05-31

    patient uses COTS input devices, such as the Microsoft Kinect and the Wii Balance Board, to perform therapeutic exercises that are mapped to controls...in place of having an exercise creation tool for the therapists, we have simplified the process by hardcoding specific, commonly used balance

  14. A wirelessly-powered homecage with animal behavior analysis and closed-loop power control.

    PubMed

    Yaoyao Jia; Zheyuan Wang; Canales, Daniel; Tinkler, Morgan; Chia-Chun Hsu; Madsen, Teresa E; Mirbozorgi, S Abdollah; Rainnie, Donald; Ghovanloo, Maysam

    2016-08-01

    This paper presents a new EnerCage-homecage system, EnerCage-HC2, for longitudinal electrophysiology data acquisition experiments on small freely moving animal subjects, such as rodents. EnerCage-HC2 is equipped with multi-coil wireless power transmission (WPT), closed-loop power control, bidirectional data communication via Bluetooth Low Energy (BLE), and Microsoft Kinect® based animal behavior tracking and analysis. The EnerCage-HC2 achieves a homogeneous power transfer efficiency (PTE) of 14% on average, with ~42 mW power delivered to the load (PDL) at a nominal height of 7 cm by the closed-loop power control mechanism. The Microsoft Kinect® behavioral analysis algorithm can not only track the animal position in real-time but also classify 5 different types of rodent behaviors: standstill, walking, grooming, rearing, and rotating. A proof-of-concept in vivo experiment was conducted on two awake freely behaving rats while successfully operating a one-channel stimulator and generating an ethogram.

  15. Retraining function in people with Parkinson's disease using the Microsoft kinect: game design and pilot testing.

    PubMed

    Galna, Brook; Jackson, Dan; Schofield, Guy; McNaney, Roisin; Webster, Mary; Barry, Gillian; Mhiripiri, Dadirayi; Balaam, Madeline; Olivier, Patrick; Rochester, Lynn

    2014-04-14

    Computer-based gaming systems, such as the Microsoft Kinect (Kinect), can facilitate complex task practice, enhance sensory feedback and action observation in novel, relevant and motivating modes of exercise which can be difficult to achieve with standard physiotherapy for people with Parkinson's disease (PD). However, there is a current need for safe, feasible and effective exercise games that are appropriate for PD rehabilitation. The aims of this study were to i) develop a computer game to rehabilitate dynamic postural control for people with PD using the Kinect; and ii) pilot test the game's safety and feasibility in a group of people with PD. A rehabilitation game aimed at training dynamic postural control was developed through an iterative process with input from a design workshop of people with PD. The game trains dynamic postural control through multi-directional reaching and stepping tasks, with increasing complexity across 12 levels of difficulty. Nine people with PD pilot tested the game for one session. Participant feedback to identify issues relating to safety and feasibility was collected using semi-structured interviews. Participants reported that they felt safe whilst playing the game. In addition, there were no adverse events whilst playing. In general, the participants stated that they enjoyed the game and seven of the nine participants said they could imagine themselves using the game at home, especially if they felt it would improve their balance. The Flow State Scale indicated participants were immersed in the gameplay and enjoyed the experience. However, some participants reported that they found it difficult to discriminate between different types and orientations of visual objects in the game and some also had difficulty with the stepping tasks, especially when performed at the same time as the reaching tasks. Computer-based rehabilitation games using the Kinect are safe and feasible for people with PD although intervention trials are needed to test their safety, feasibility and efficacy in the home.

  16. Retraining function in people with Parkinson’s disease using the Microsoft kinect: game design and pilot testing

    PubMed Central

    2014-01-01

    Background Computer-based gaming systems, such as the Microsoft Kinect (Kinect), can facilitate complex task practice, enhance sensory feedback and action observation in novel, relevant and motivating modes of exercise which can be difficult to achieve with standard physiotherapy for people with Parkinson’s disease (PD). However, there is a current need for safe, feasible and effective exercise games that are appropriate for PD rehabilitation. The aims of this study were to i) develop a computer game to rehabilitate dynamic postural control for people with PD using the Kinect; and ii) pilot test the game’s safety and feasibility in a group of people with PD. Methods A rehabilitation game aimed at training dynamic postural control was developed through an iterative process with input from a design workshop of people with PD. The game trains dynamic postural control through multi-directional reaching and stepping tasks, with increasing complexity across 12 levels of difficulty. Nine people with PD pilot tested the game for one session. Participant feedback to identify issues relating to safety and feasibility was collected using semi-structured interviews. Results Participants reported that they felt safe whilst playing the game. In addition, there were no adverse events whilst playing. In general, the participants stated that they enjoyed the game and seven of the nine participants said they could imagine themselves using the game at home, especially if they felt it would improve their balance. The Flow State Scale indicated participants were immersed in the gameplay and enjoyed the experience. However, some participants reported that they found it difficult to discriminate between different types and orientations of visual objects in the game and some also had difficulty with the stepping tasks, especially when performed at the same time as the reaching tasks. Conclusion Computer-based rehabilitation games using the Kinect are safe and feasible for people with PD although intervention trials are needed to test their safety, feasibility and efficacy in the home. PMID:24731758

  17. Low-cost three-dimensional gait analysis system for mice with an infrared depth sensor.

    PubMed

    Nakamura, Akihiro; Funaya, Hiroyuki; Uezono, Naohiro; Nakashima, Kinichi; Ishida, Yasumasa; Suzuki, Tomohiro; Wakana, Shigeharu; Shibata, Tomohiro

    2015-11-01

    Three-dimensional (3D) open-field gait analysis of mice is an essential procedure in genetic and nerve regeneration research. Existing gait analysis systems are generally expensive and may interfere with the natural behaviors of mice because of optical markers and transparent floors. In contrast, the proposed system captures the subject's shape from beneath using a low-cost infrared depth sensor (Microsoft Kinect) and an opaque infrared pass filter. This means that we can track footprints and 3D paw-tip positions without optical markers or a transparent floor, thereby preventing any behavioral changes. Our experimental results with healthy mice suggest that they are more active on opaque floors and spend more time in the center of the open field when compared with transparent floors. The proposed system detected footprints with performance comparable to existing systems and precisely tracked the 3D paw-tip positions in the depth image coordinates. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  18. “Kinect-ing” With Clinicians: A Knowledge Translation Resource to Support Decision Making About Video Game Use in Rehabilitation

    PubMed Central

    Levac, Danielle; Espy, Deborah; Fox, Emily; Pradhan, Sujata

    2015-01-01

    Microsoft's Kinect for Xbox 360 virtual reality (VR) video games are promising rehabilitation options because they involve motivating, full-body movement practice. However, these games were designed for recreational use, which creates challenges for clinical implementation. Busy clinicians require decision-making support to inform game selection and implementation that address individual therapeutic goals. This article describes the development and preliminary evaluation of a knowledge translation (KT) resource to support clinical decision making about selection and use of Kinect games in physical therapy. The knowledge-to-action framework guided the development of the Kinecting With Clinicians (KWiC) resource. Five physical therapists with VR and video game expertise analyzed the Kinect Adventure games. A consensus-building method was used to arrive at categories to organize clinically relevant attributes guiding game selection and game play. The process and results of an exploratory usability evaluation of the KWiC resource by clinicians through interviews and focus groups at 4 clinical sites is described. Subsequent steps in the evaluation and KT process are proposed, including making the KWiC resource Web-based and evaluating the utility of the online resource in clinical practice. PMID:25256741

  19. Evaluation of the microsoft kinect skeletal versus depth data analysis for timed-up and go and figure of 8 walk tests.

    PubMed

    Hotrabhavananda, Benjamin; Mishra, Anup K; Skubic, Marjorie; Hotrabhavananda, Nijaporn; Abbott, Carmen

    2016-08-01

    We compared the performance of the Kinect skeletal data with the Kinect depth data in capturing different gait parameters during the Timed Up and Go Test (TUG) and Figure of 8 Walk Test (F8W). The gait parameters considered were stride length, stride time, and walking speed for the TUG, and number of steps and completion time for the F8W. A marker-based Vicon motion capture system was used for the ground-truth measurements. Five healthy participants were recruited for the experiment and were asked to perform three trials of each task. Results show that the depth data analysis yields stride length and stride time measures with significantly lower percentage errors than the skeletal data analysis. However, the skeletal and depth data performed similarly, with less than 3% absolute mean percentage error, in determining the walking speed for the TUG and both parameters of the F8W. The results show the potential of Kinect depth data analysis for computing many gait parameters, whereas the Kinect skeletal data can also be used for walking speed in the TUG and for the F8W gait parameters.
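
    Once heel-strike events have been extracted from either the skeletal or the depth stream, the TUG gait parameters mentioned above follow from simple differences, as in the hedged sketch below; event detection itself is not shown and all names are illustrative.

        import numpy as np

        def stride_parameters(heel_strike_times_s, heel_strike_positions_m):
            """Mean stride time, mean stride length and average walking speed
            from successive heel strikes of the same foot.

            heel_strike_times_s:     1-D array of event times in seconds.
            heel_strike_positions_m: (N, 2) array of heel positions on the
                                     floor plane (metres) at those events.
            """
            stride_times = np.diff(heel_strike_times_s)
            stride_lengths = np.linalg.norm(
                np.diff(heel_strike_positions_m, axis=0), axis=1)
            # Average speed over the walk: total path length / total time.
            walking_speed = stride_lengths.sum() / (heel_strike_times_s[-1]
                                                    - heel_strike_times_s[0])
            return stride_times.mean(), stride_lengths.mean(), walking_speed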

  20. Determination of repeatability of kinect sensor.

    PubMed

    Bonnechère, Bruno; Sholukha, Victor; Jansen, Bart; Omelina, Lubos; Rooze, Marcel; Van Sint Jan, Serge

    2014-05-01

    The Kinect™ (Microsoft™, Redmond, WA) sensor, originally developed for gaming purposes, may have interesting possibilities for other fields such as posture and motion assessment. The ability of the Kinect sensor to perform biomechanical measurements has previously been studied and shows promising results. However, interday repeatability of the device is still not known. This study assessed the intra- and interday repeatability of the Kinect sensor compared with a standard stereophotogrammetric device during posture assessment for measuring segment lengths. Forty subjects took part in the study. Five motionless captures were performed in one session to assess posture. Data were simultaneously recorded with both devices. Similar intraclass correlations coefficient (ICC) values were found for intraday (ICC=0.94 for the Kinect device and 0.98 for the stereophotogrammetric device) and interday (ICC=0.88 and 0.87, respectively) repeatability. Results of this study suggest that a cost-effective, easy-to-use, and portable single markerless camera offers the same repeatability during posture assessment as an expensive, time-consuming, and nontransportable marker-based device.

  1. Feasibility study of using a Microsoft Kinect for virtual coaching of wheelchair transfer techniques.

    PubMed

    Hwang, Seonhong; Tsai, Chung-Ying; Koontz, Alicia M

    2017-05-24

    The purpose of this study was to test the concurrent validity and test-retest reliability of the Kinect skeleton tracking algorithm for measurement of trunk, shoulder, and elbow joint angles during a wheelchair transfer task. Eight wheelchair users were recruited for this study. Joint positions were recorded simultaneously by the Kinect and Vicon motion capture systems while subjects transferred from their wheelchairs to a level bench. Shoulder, elbow, and trunk angles recorded with the Kinect system followed a similar trajectory to the angles recorded with the Vicon system, with correlation coefficients larger than 0.71 on both sides (leading arm and trailing arm). The root mean square errors (RMSEs) ranged from 5.18 to 22.46 for the shoulder, elbow, and trunk angles. The 95% limits of agreement (LOA) for the discrepancy between the two systems exceeded the clinically significant level of 5°. For the trunk, shoulder, and elbow angles, the Kinect had very good relative reliability for the measurement of sagittal, frontal and horizontal trunk angles, as indicated by the high intraclass correlation coefficient (ICC) values (>0.90). Small standard error of measurement (SEM) values, indicating good absolute reliability, were observed for all joints except for the leading arm's shoulder joint. Relatively large minimal detectable changes (MDCs) were observed in all joint angles. Kinect motion tracking has promising performance levels for some upper limb joints. However, more accurate measurement of the joint angles may be required; therefore, understanding the limitations in the precision and accuracy of the Kinect is imperative before it is utilized.
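
    The agreement statistics reported above (RMSE and 95% limits of agreement) can be reproduced from two time-synchronised joint-angle traces as in the minimal sketch below; it assumes the Kinect and Vicon signals are already aligned and resampled to the same length, and the function name is illustrative.

        import numpy as np

        def agreement_metrics(kinect_deg, vicon_deg):
            """RMSE, bias and Bland-Altman 95% limits of agreement between two
            time-synchronised joint-angle signals (degrees)."""
            diff = np.asarray(kinect_deg) - np.asarray(vicon_deg)
            rmse = np.sqrt(np.mean(diff ** 2))
            bias = diff.mean()
            loa = (bias - 1.96 * diff.std(ddof=1),
                   bias + 1.96 * diff.std(ddof=1))
            return rmse, bias, loa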

  2. Recognition-Based Physical Response to Facilitate EFL Learning

    ERIC Educational Resources Information Center

    Hwang, Wu-Yuin; Shih, Timothy K.; Yeh, Shih-Ching; Chou, Ke-Chien; Ma, Zhao-Heng; Sommool, Worapot

    2014-01-01

    This study, based on total physical response and cognitive psychology, proposed a Kinesthetic English Learning System (KELS), which utilized Microsoft's Kinect technology to build kinesthetic interaction with life-related contexts in English. A subject test with 39 tenth-grade students was conducted following empirical research method in order to…

  3. Scanning 3D full human bodies using Kinects.

    PubMed

    Tong, Jing; Zhou, Jin; Liu, Ligang; Pan, Zhigeng; Yan, Hao

    2012-04-01

    Depth cameras such as the Microsoft Kinect are much cheaper than conventional 3D scanning devices and can therefore be easily acquired by everyday users. However, the depth data captured by the Kinect over a certain distance are of extremely low quality. In this paper, we present a novel scanning system for capturing 3D full human body models by using multiple Kinects. To avoid interference phenomena, we use two Kinects to capture the upper and lower parts of a human body, respectively, without an overlapping region. A third Kinect is used to capture the middle part of the human body from the opposite direction. We propose a practical approach for registering the various body parts of different views under non-rigid deformation. First, a rough mesh template is constructed and used to deform successive frames pairwise. Second, global alignment is performed to distribute errors in the deformation space, which solves the loop closure problem efficiently. Misalignment caused by complex occlusion can also be handled reasonably by our global alignment algorithm. The experimental results have shown the efficiency and applicability of our system. Our system obtains impressive results in a few minutes with low-price devices and is thus practically useful for generating personalized avatars for everyday users. Our system has been used for 3D human animation and virtual try-on, and can further facilitate a range of home-oriented virtual reality (VR) applications.

  4. Dioptric defocus maps across the visual field for different indoor environments

    PubMed Central

    García, Miguel García; Ohlendorf, Arne; Schaeffel, Frank; Wahl, Siegfried

    2017-01-01

    One of the factors proposed to regulate eye growth is the error signal derived from defocus in the retina; this signal might arise not only from defocus in the fovea but from defocus across the whole visual field. Therefore, myopia could be better predicted by spatio-temporally mapping the ‘environmental defocus’ over the visual field. At present, no devices are available that could provide this information. A ‘Kinect sensor v1’ camera (Microsoft Corp.) and a portable eye tracker were used to develop a system for quantifying ‘indoor defocus error signals’ across the central 58° of the visual field. Dioptric differences relative to the fovea (assumed to be in focus) were recorded over the visual field and ‘defocus maps’ were generated for various scenes and tasks. PMID:29359108

  5. Volume measurement of the leg with the depth camera for quantitative evaluation of edema

    NASA Astrophysics Data System (ADS)

    Kiyomitsu, Kaoru; Kakinuma, Akihiro; Takahashi, Hiroshi; Kamijo, Naohiro; Ogawa, Keiko; Tsumura, Norimichi

    2017-02-01

    Volume measurement of the leg is important in the evaluation of leg edema. Recently, measurement methods using depth cameras have been proposed; however, many depth cameras are expensive. Therefore, we propose a method using the Microsoft Kinect. We obtain a point cloud of the leg with the Kinect Fusion technique and calculate its volume. We measured the leg volume of three healthy students over three days. In each measurement, an increase in volume from morning to evening was confirmed. It is known that leg volume increases during office work, so our experimental results meet this expectation.
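
    The abstract does not detail how the volume is computed from the point cloud; one simple (and slightly overestimating) option, offered here as an assumption rather than the authors' method, is the convex-hull volume of the leg point cloud.

        import numpy as np
        from scipy.spatial import ConvexHull

        def leg_volume_litres(points_m):
            """Approximate leg volume from an (N, 3) point cloud in metres.

            The convex hull slightly overestimates the volume of a concave
            limb; a closed Kinect Fusion mesh would give a tighter estimate.
            """
            hull = ConvexHull(points_m)
            return hull.volume * 1000.0          # m^3 -> litres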

  6. Measurement of Shoulder Range of Motion in Patients with Adhesive Capsulitis Using a Kinect

    PubMed Central

    Chung, Sun Gun; Kim, Hee Chan; Kwak, Youngbin; Park, Hee-won; Kim, Keewon

    2015-01-01

    Range of motion (ROM) measurements are essential for the evaluation and diagnosis of adhesive capsulitis of the shoulder (AC). However, taking these measurements using a goniometer is inconvenient and sometimes unreliable. The Kinect (Microsoft, Seattle, WA, USA) is gaining attention as a new motion detecting device that is nonintrusive and easy to implement. This study aimed to apply the Kinect to measure shoulder ROM in AC; we evaluated its validity by calculating the agreement of the measurements obtained using the Kinect with those obtained using a goniometer and assessed its utility for the diagnosis of AC. Both shoulders of 15 healthy volunteers and the affected shoulders of 12 patients with AC were included in the study. The passive and active ROM of each were measured with a goniometer for flexion, abduction, and external rotation. Their active shoulder motions for each direction were again captured using the Kinect and the ROM values were calculated. The agreement between the two measurements was tested with the intraclass correlation coefficient (ICC). Diagnostic performance using the Kinect ROM was evaluated with Cohen’s kappa value. The cutoff values of the limited ROM were determined in the following ways: the same as passive ROM values, reflecting the mean difference, and based on receiver operating characteristic curves. The ICCs for flexion/abduction/external rotation between goniometric passive ROM and the Kinect ROM were 0.906/0.942/0.911, while those between active ROMs and the Kinect ROMs were 0.864/0.932/0.925. Cohen’s kappa values were 0.88, 0.88, and 1.0 with the cutoff values in the order above. Measurements of shoulder ROM using the Kinect show excellent agreement with those taken using a goniometer. These results indicate that the Kinect can be used to measure shoulder ROM and to diagnose AC as an alternative to the goniometer. PMID:26107943

  7. KINECTATION (Kinect for Presentation): Control Presentation with Interactive Board and Record Presentation with Live Capture Tools

    NASA Astrophysics Data System (ADS)

    Sutoyo, Rhio; Herriyandi; Fennia Lesmana, Tri; Susanto, Edy

    2017-01-01

    Presentation is one of the most common activities performed in various fields of work (e.g., by lecturers, employees, and managers). The purpose of a presentation is to demonstrate or introduce the presenter’s ideas to the attendees. Within the given time and place, presenters must transfer their knowledge and leave a great impression on their audience. Generally, presenters use several handy tools, such as a mouse, a presenter, and a webcam, to help them navigate their slides. Nevertheless, some of these tools have constraints and limitations, such as a lack of portability and no multimedia support. In this research, we develop an application that assists presenters in controlling their presentation materials by using Microsoft KINECT. We manipulate the colour image, the image depth, and the skeleton of the presenter captured by the KINECT, and show the post-processed image results on the projector screen. The KINECT is more useful than other tools because it supports video and audio recording. Moreover, it is also able to capture the presenter’s movements, which can be used as input to interact with and manipulate the content (i.e., by touching the projection wall). Not only does this application provide an alternative for controlling a presentation, it also makes the presentation more efficient and attractive.

  8. Easy and Fast Reconstruction of a 3D Avatar with an RGB-D Sensor.

    PubMed

    Mao, Aihua; Zhang, Hong; Liu, Yuxin; Zheng, Yinglong; Li, Guiqing; Han, Guoqiang

    2017-05-12

    This paper proposes a new easy and fast 3D avatar reconstruction method using an RGB-D sensor. Users can easily implement human body scanning and modeling just with a personal computer and a single RGB-D sensor such as a Microsoft Kinect within a small workspace in their home or office. To make the reconstruction of 3D avatars easy and fast, a new data capture strategy is proposed for efficient human body scanning, which captures only 18 frames from six views with a close scanning distance to fully cover the body; meanwhile, efficient alignment algorithms are presented to locally align the data frames in the single view and then globally align them in multi-views based on pairwise correspondence. In this method, we do not adopt shape priors or subdivision tools to synthesize the model, which helps to reduce modeling complexity. Experimental results indicate that this method can obtain accurate reconstructed 3D avatar models, and the running performance is faster than that of similar work. This research offers a useful tool for the manufacturers to quickly and economically create 3D avatars for products design, entertainment and online shopping.

  9. Comparative abilities of Microsoft Kinect and Vicon 3D motion capture for gait analysis.

    PubMed

    Pfister, Alexandra; West, Alexandre M; Bronner, Shaw; Noah, Jack Adam

    2014-07-01

    Biomechanical analysis is a powerful tool in the evaluation of movement dysfunction in orthopaedic and neurologic populations. Three-dimensional (3D) motion capture systems are widely used, accurate systems, but are costly and not available in many clinical settings. The Microsoft Kinect™ has the potential to be used as an alternative low-cost motion analysis tool. The purpose of this study was to assess concurrent validity of the Kinect™ with Brekel Kinect software in comparison to Vicon Nexus during sagittal plane gait kinematics. Twenty healthy adults (nine male, 11 female) were tracked while walking and jogging at three velocities on a treadmill. Concurrent hip and knee peak flexion and extension and stride timing measurements were compared between Vicon and Kinect™. Although Kinect measurements were representative of normal gait, the Kinect™ generally under-estimated joint flexion and over-estimated extension. Kinect™ and Vicon hip angular displacement correlation was very low and error was large. Kinect™ knee measurements were somewhat better than hip, but were not consistent enough for clinical assessment. Correlation between Kinect™ and Vicon stride timing was high and error was fairly small. Variability in Kinect™ measurements was smallest at the slowest velocity. The Kinect™ has basic motion capture capabilities and with some minor adjustments will be an acceptable tool to measure stride timing, but sophisticated advances in software and hardware are necessary to improve Kinect™ sensitivity before it can be implemented for clinical use.

  10. Data fusion for a vision-aided radiological detection system: Calibration algorithm performance

    NASA Astrophysics Data System (ADS)

    Stadnikia, Kelsey; Henderson, Kristofer; Martin, Allan; Riley, Phillip; Koppal, Sanjeev; Enqvist, Andreas

    2018-05-01

    In order to improve the ability to detect, locate, track and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low-cost data fusion system. The key is to develop an algorithm to fuse the data from multiple radiological and 3D vision sensors as one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available visual sensors. A series of experiments were devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor and a Velodyne HDL-32E High Definition LiDAR Sensor, which is a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a source arranged in a cube with three different distances in each dimension. The source used was Cf-252. The calibration algorithm developed is utilized to calibrate the relative 3D location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human errors. The algorithm can also account for the facility-dependent deviation from ideal data fusion correlation. Use of the vision sensor to determine the location of a sensor would also limit the possible locations and it does not allow for room dependence (facility-dependent deviation) to generate a detector pseudo-location to be used for data analysis later. Using manually measured source location data, our algorithm predicted the offset detector location within an average calibration-difference of 20 cm from its actual location. Calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision sensor data produced an average calibration-difference of 35 cm and the HDL-32E produced an average calibration-difference of 22 cm. Using NaI and He-3 detectors in place of the EJ-309, the calibration-difference was 52 cm for NaI and 75 cm for He-3. The algorithm is not detector dependent; however, from these results it was determined that detector-dependent adjustments are required.

  11. SU-G-JeP1-14: Respiratory Motion Tracking Using Kinect V2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silverstein, E; Snyder, M

    Purpose: Investigate capability and accuracy of the Kinect v2 camera for tracking respiratory motion to use as a tool during 4DCT or in combination with motion management during radiotherapy treatments. Methods: Utilizing the depth sensor on the Kinect as well as code written in C#, the respiratory motion of a patient was tracked by recording the depth (distance) values obtained at several points on the patient. Respiratory traces were also obtained using Varian’s RPM system, which traces the movement of a proprietary marker placed on the patient’s abdomen, as well as an Anzai belt, which utilizes a pressure sensor to track respiratory motion. With the Kinect mounted 60 cm above the patient and pointing straight down, 11 breathing cycles were recorded with each system simultaneously. Relative displacement values during this time period were saved to file. While RPM and the Kinect give displacement values in distance units, the Anzai system has arbitrary units. As such, displacement for all three are displayed relative to the maximum value for the time interval from that system. Additional analysis was performed between RPM and Kinect for absolute displacement values. Results: Analysis of the data from all three systems indicates the relative motion obtained from the Kinect is both accurate and in sync with the data from RPM and Anzai. The absolute displacement data from RPM and Kinect show similar displacement values throughout the acquisition except for the depth obtained from the Kinect during maximum exhalation (largest distance from Kinect). Conclusion: By simply utilizing the depth data of specific points on a patient obtained from the Kinect, respiratory motion can be tracked and visualized with accuracy comparable to that of the Varian RPM and Anzai belt.
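
    The core idea, tracking the depth of fixed abdominal points over time, can be sketched as follows, assuming the depth frames have already been exported as arrays; the original acquisition code was written in C# against the Kinect SDK and is not reproduced here, and all names below are illustrative.

        import numpy as np

        def respiratory_trace(depth_frames_mm, roi_rows, roi_cols):
            """Relative abdominal displacement over time from a stack of depth frames.

            depth_frames_mm: (T, H, W) array of depth frames in millimetres.
            roi_rows, roi_cols: slices selecting a small patch over the abdomen.
            Returns a (T,) trace normalised to its maximum excursion, matching
            the relative-displacement comparison described above.
            """
            patch = depth_frames_mm[:, roi_rows, roi_cols].astype(float)
            patch[patch == 0] = np.nan                   # ignore invalid pixels
            trace = np.nanmean(patch, axis=(1, 2))       # mean depth per frame
            trace -= np.nanmean(trace)
            return trace / np.nanmax(np.abs(trace))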

  12. Handheld real-time volumetric 3-D gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Haefner, Andrew; Barnowski, Ross; Luke, Paul; Amman, Mark; Vetter, Kai

    2017-06-01

    This paper presents the concept of real-time fusion of gamma-ray imaging and visual scene data for a hand-held mobile Compton imaging system in 3-D. The ability to obtain and integrate both gamma-ray and scene data from a mobile platform enables improved capabilities in the localization and mapping of radioactive materials. This not only enhances the ability to localize these materials, but it also provides important contextual information of the scene which once acquired can be reviewed and further analyzed subsequently. To demonstrate these concepts, the high-efficiency multimode imager (HEMI) is used in a hand-portable implementation in combination with a Microsoft Kinect sensor. This sensor, in conjunction with open-source software, provides the ability to create a 3-D model of the scene and to track the position and orientation of HEMI in real-time. By combining the gamma-ray data and visual data, accurate 3-D maps of gamma-ray sources are produced in real-time. This approach is extended to map the location of radioactive materials within objects with unknown geometry.

  13. Learning discriminative features from RGB-D images for gender and ethnicity identification

    NASA Astrophysics Data System (ADS)

    Azzakhnini, Safaa; Ballihi, Lahoucine; Aboutajdine, Driss

    2016-11-01

    The development of sophisticated sensor technologies gave rise to an interesting variety of data. With the appearance of affordable devices, such as the Microsoft Kinect, depth-maps and three-dimensional data became easily accessible. This attracted many computer vision researchers seeking to exploit this information in classification and recognition tasks. In this work, the problem of face classification in the context of RGB images and depth information (RGB-D images) is addressed. The purpose of this paper is to study and compare some popular techniques for gender recognition and ethnicity classification to understand how much depth data can improve the quality of recognition. Furthermore, we investigate which combination of face descriptors, feature selection methods, and learning techniques is best suited to better exploit RGB-D images. The experimental results show that depth data improve the recognition accuracy for gender and ethnicity classification applications in many use cases.

  14. Use of natural user interfaces in water simulations

    NASA Astrophysics Data System (ADS)

    Donchyts, G.; Baart, F.; van Dam, A.; Jagers, B.

    2013-12-01

    Conventional graphical user interfaces, used to edit input and present results of earth science models, have seen little innovation for the past two decades. In most cases model data is presented and edited using 2D projections even when working with 3D data. The emergence of 3D motion sensing technologies, such as Microsoft Kinect and LEAP Motion, opens new possibilities for user interaction by adding more degrees of freedom compared to a classical way using mouse and keyboard. Here we investigate how interaction with hydrodynamic numerical models can be improved using these new technologies. Our research hypothesis (H1) states that properly designed 3D graphical user interface paired with the 3D motion sensor can significantly reduce the time required to setup and use numerical models. In this work we have used a LEAP motion controller combined with a shallow water flow model engine D-Flow Flexible Mesh. Interacting with numerical model using hands

  15. A practical indoor context-aware surveillance system with multi-Kinect sensors

    NASA Astrophysics Data System (ADS)

    Jia, Lili; You, Ying; Li, Tiezhu; Zhang, Shun

    2014-11-01

    In this paper we develop a novel practical application which gives scalable services to end users when abnormal activities occur. The architecture of the application consists of networked infrared cameras and a communication module. In this intelligent surveillance system we use Kinect sensors as the input cameras. The Kinect is an infrared laser camera whose raw infrared sensor stream can be accessed by the user. We install several Kinect sensors in one room to track human skeletons. Each sensor returns the body positions as 15 coordinates in its own coordinate system. We use calibration algorithms to bring all the body position points into one unified coordinate system. With these body position points, we can infer the surveillance context. Furthermore, messages from the metadata index matrix are sent to a mobile phone through the communication module. The user is instantly made aware of an abnormal event in the room without having to check a website. In conclusion, the theoretical analysis and experimental results in this paper show that the proposed system is reasonable and efficient. The application method introduced in this paper not only discourages criminals and assists police in the apprehension of suspects, but also enables end users to monitor indoor environments anywhere and anytime from their phones.
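
    One standard way to bring the skeletons from several Kinects into a unified coordinate system is to estimate a rigid transform from corresponding points observed by two sensors (the Kabsch/SVD solution sketched below); this is a generic method offered for illustration, not necessarily the calibration algorithm used by the authors.

        import numpy as np

        def rigid_transform(src, dst):
            """Least-squares rotation R and translation t with R @ src_i + t ~= dst_i.

            src, dst: (N, 3) arrays of corresponding joint positions seen by two
            Kinect sensors (e.g. the same skeleton observed simultaneously).
            """
            src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
            U, _, Vt = np.linalg.svd(src_c.T @ dst_c)   # 3x3 cross-covariance
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                    # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = dst.mean(0) - R @ src.mean(0)
            return R, t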

  16. The application of a low-cost 3D depth camera for patient set-up and respiratory motion management in radiotherapy

    NASA Astrophysics Data System (ADS)

    Tahavori, Fatemeh

    Respiratory motion induces uncertainty in External Beam Radiotherapy (EBRT), which can result in sub-optimal dose delivery to the target tissue and unwanted dose to normal tissue. The conventional approach to managing patient respiratory motion for EBRT within the area of abdominal-thoracic cancer is through the use of internal radiological imaging methods (e.g. Megavoltage imaging or Cone-Beam Computed Tomography) or via surrogate estimates of tumour position using external markers placed on the patient chest. This latter method uses tracking with video-based techniques, and relies on an assumed correlation or mathematical model between the external surrogate signal and the internal target position. The marker's trajectory can be used in both respiratory gating techniques and real-time tracking methods. Internal radiological imaging methods bring with them limited temporal resolution and additional radiation burden, which can be addressed by external marker-based methods that carry no such issues. Moreover, by including multiple external markers and placing them closer to the internal target organs, the efficiency of correlation algorithms can be increased. However, the quality of such external monitoring methods is underpinned by the performance of the associated correlation model. Therefore, several new approaches to correlation modelling have been developed as part of this thesis and compared using publicly available datasets. Highly competitive results have been obtained when compared against state-of-the-art methods. Marker-based methods also have the disadvantages of requiring manual set-up time for marker placement and patient positioning and potential issues with reproducibility of marker placement. This motivates the investigation of non-contact marker-free methods for use in EBRT, which is the main topic of this thesis. The Microsoft Kinect is used as an example of a low-cost consumer-grade 3D depth camera for capturing and analysing external respiratory motion. This thesis makes the first presentation of detailed studies of external respiratory motion captured using such low-cost technology and demonstrates its potential in a healthcare environment. Firstly, the fundamental performance of a range of Microsoft Kinect sensors is assessed for use in radiotherapy (and potentially other healthcare applications), in terms of static and dynamic performance using both phantoms and volunteers. Then external respiratory motion is captured using the above technology from a group of 32 healthy volunteers and Principal Component Analysis (PCA) is applied to a region of interest encompassing the complete anterior surface to demonstrate breathing style. This work demonstrates that this surface motion can be compactly described by the first two PCA eigenvectors. The reproducibility of subject-specific EBRT set-up using conventional laser-based alignment and marker-based Deep Inspiration Breath Hold (DIBH) methods is also studied using the Microsoft Kinect sensor. A cohort of five healthy female volunteers is repeatedly set up for left-sided breast cancer EBRT and multiple DIBH episodes captured over five separate sessions representing multiple fractionated radiotherapy treatment sessions, but without dose delivery. This provided an independent assessment that subjects were set up and generally achieved variations within currently accepted margins of clinical practice. Moreover, this work demonstrated the potential role of consumer-grade 3D depth camera technology as a possible replacement for marker-based set-up and DIBH management procedures. This brings the additional benefits of low cost and potential throughput gains, as patient set-up could ultimately be fully automated with this technology, and DIBH could be independently monitored without requiring preparatory manual intervention.
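
    The PCA step described above can be illustrated with a short sketch: each depth frame of the anterior-surface region of interest is flattened into one observation, and the leading components then summarise the breathing motion. Frame cropping and acquisition are assumed to have been done already; all names are illustrative.

        import numpy as np
        from sklearn.decomposition import PCA

        def breathing_components(roi_depth_frames_mm, n_components=2):
            """PCA of anterior-surface motion from a (T, H, W) stack of depth
            frames cropped to the chest/abdomen region of interest.

            Returns the fitted PCA object and the per-frame scores of the first
            components; with quiet breathing, the first two components typically
            capture most of the surface motion, as described above.
            """
            T = roi_depth_frames_mm.shape[0]
            X = roi_depth_frames_mm.reshape(T, -1).astype(float)
            X -= X.mean(axis=0)               # remove the static surface shape
            pca = PCA(n_components=n_components)
            scores = pca.fit_transform(X)     # (T, n_components) breathing traces
            return pca, scores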

  17. Accuracy and Reliability of the Kinect Version 2 for Clinical Measurement of Motor Function

    PubMed Central

    Kayser, Bastian; Mansow-Model, Sebastian; Verrel, Julius; Paul, Friedemann; Brandt, Alexander U.; Schmitz-Hübsch, Tanja

    2016-01-01

    Background The introduction of low cost optical 3D motion tracking sensors provides new options for effective quantification of motor dysfunction. Objective The present study aimed to evaluate the Kinect V2 sensor against a gold standard motion capture system with respect to accuracy of tracked landmark movements and accuracy and repeatability of derived clinical parameters. Methods Nineteen healthy subjects were concurrently recorded with a Kinect V2 sensor and an optical motion tracking system (Vicon). Six different movement tasks were recorded with 3D full-body kinematics from both systems. Tasks included walking in different conditions, balance and adaptive postural control. After temporal and spatial alignment, agreement of movements signals was described by Pearson’s correlation coefficient and signal to noise ratios per dimension. From these movement signals, 45 clinical parameters were calculated, including ranges of motions, torso sway, movement velocities and cadence. Accuracy of parameters was described as absolute agreement, consistency agreement and limits of agreement. Intra-session reliability of 3 to 5 measurement repetitions was described as repeatability coefficient and standard error of measurement for each system. Results Accuracy of Kinect V2 landmark movements was moderate to excellent and depended on movement dimension, landmark location and performed task. Signal to noise ratio provided information about Kinect V2 landmark stability and indicated larger noise behaviour in feet and ankles. Most of the derived clinical parameters showed good to excellent absolute agreement (30 parameters showed ICC(3,1) > 0.7) and consistency (38 parameters showed r > 0.7) between both systems. Conclusion Given that this system is low-cost, portable and does not require any sensors to be attached to the body, it could provide numerous advantages when compared to established marker- or wearable sensor based system. The Kinect V2 has the potential to be used as a reliable and valid clinical measurement tool. PMID:27861541

  18. Wii, Kinect, and Move. Heart Rate, Oxygen Consumption, Energy Expenditure, and Ventilation due to Different Physically Active Video Game Systems in College Students

    PubMed Central

    SCHEER, KRISTA S.; SIEBRANT, SARAH M.; BROWN, GREGORY A.; SHAW, BRANDON S.; SHAW, INA

    2014-01-01

    Nintendo Wii, Sony Playstation Move, and Microsoft XBOX Kinect are home video gaming systems that involve player movement to control on-screen game play. Numerous investigations have demonstrated that playing Wii is moderate physical activity at best, but Move and Kinect have not been as thoroughly investigated. The purpose of this study was to compare heart rate, oxygen consumption, and ventilation while playing the games Wii Boxing, Kinect Boxing, and Move Gladiatorial Combat. Heart rate, oxygen consumption, and ventilation were measured at rest and during a graded exercise test in 10 males and 9 females (19.8 ± 0.33 y, 175.4 ± 2.0 cm, 80.2 ± 7.7 kg,). On another day, in a randomized order, the participants played Wii Boxing, Kinect Boxing, and Move Gladiatorial Combat while heart rate, ventilation, and oxygen consumption were measured. There were no differences in heart rate (116.0 ± 18.3 vs. 119.3 ± 17.6 vs. 120.1 ± 17.6 beats/min), oxygen consumption (9.2 ± 3.0 vs. 10.6 ± 2.4 vs. 9.6 ± 2.4 ml/kg/min), or minute ventilation (18.9 ± 5.7 vs. 20.8 ± 8.0 vs. 19.7 ± 6.4 L/min) when playing Wii boxing, Kinect boxing, or Move Gladiatorial Combat (respectively). Playing Nintendo Wii Boxing, XBOX Kinect Boxing, and Sony PlayStation Move Gladiatorial Combat all increase heart rate, oxygen consumption, and ventilation above resting levels but there were no significant differences between gaming systems. Overall, playing a “physically active” home video game system does not meet the minimal threshold for moderate intensity physical activity, regardless of gaming system. PMID:27182399

  19. Combination of Tls Point Clouds and 3d Data from Kinect v2 Sensor to Complete Indoor Models

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    The combination of data from multiple sensors is increasingly applied not only in remote sensing (multi-sensor imagery) but also in cultural heritage and robotics, since it often results in increased robustness and accuracy of the final data. In this paper, the reconstruction of building elements such as window frames or door jambs scanned with a low-cost 3D sensor (Kinect v2) is presented. Their combination within a global point cloud of an indoor scene acquired with a terrestrial laser scanner (TLS) is considered. While the added elements acquired with the Kinect sensor enable a better level of detail to be reached in the final model, an adapted acquisition protocol may also provide several benefits, for example time savings. The paper aims at analyzing whether the two measurement techniques can be complementary in this context. The limitations encountered during the acquisition and reconstruction steps are also investigated.

  20. Comparison of 3D Joint Angles Measured With the Kinect 2.0 Skeletal Tracker Versus a Marker-Based Motion Capture System.

    PubMed

    Guess, Trent M; Razu, Swithin; Jahandar, Amirhossein; Skubic, Marjorie; Huo, Zhiyu

    2017-04-01

    The Microsoft Kinect is becoming a widely used tool for inexpensive, portable measurement of human motion, with the potential to support clinical assessments of performance and function. In this study, the relative osteokinematic Cardan joint angles of the hip and knee were calculated using the Kinect 2.0 skeletal tracker. The pelvis segments of the default skeletal model were reoriented and 3-dimensional joint angles were compared with a marker-based system during a drop vertical jump and a hip abduction motion. Good agreement between the Kinect and marker-based system was found for knee (correlation coefficient = 0.96, cycle RMS error = 11°, peak flexion difference = 3°) and hip (correlation coefficient = 0.97, cycle RMS = 12°, peak flexion difference = 12°) flexion during the landing phase of the drop vertical jump and for hip abduction/adduction (correlation coefficient = 0.99, cycle RMS error = 7°, peak flexion difference = 8°) during isolated hip motion. Nonsagittal hip and knee angles did not correlate well for the drop vertical jump. When limited to activities in the optimal capture volume and with simple modifications to the skeletal model, the Kinect 2.0 skeletal tracker can provide limited 3-dimensional kinematic information of the lower limbs that may be useful for functional movement assessment.
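
    Computing full osteokinematic Cardan angles requires segment coordinate systems; a simpler quantity obtainable directly from three Kinect joint centres is the included knee angle, sketched below as an illustrative simplification rather than the study's method.

        import numpy as np

        def knee_flexion_deg(hip, knee, ankle):
            """Knee flexion (degrees) from 3-D joint-centre positions.

            The included hip-knee-ankle angle is 180 degrees for a fully
            extended knee; flexion is reported as the deviation from full
            extension.  This simplifies the segment-based Cardan angles
            used in the study.
            """
            thigh = np.asarray(hip) - np.asarray(knee)
            shank = np.asarray(ankle) - np.asarray(knee)
            cosang = np.dot(thigh, shank) / (np.linalg.norm(thigh)
                                             * np.linalg.norm(shank))
            included = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
            return 180.0 - included      # 0 = straight leg, larger = more flexion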

  1. Wind Tunnel Experiments: Influence of Erosion and Deposition on Wind-Packing of New Snow

    NASA Astrophysics Data System (ADS)

    Sommer, C.; Fierz, C. G.; Lehning, M.

    2017-12-01

    We observed the formation of wind crusts in wind tunnel experiments. A SnowMicroPen was used to measure the hardness profile of the snow and a Microsoft Kinect provided distributed snow depth data. Earlier experiments showed that no crust forms without saltation and that the dynamics of erosion and deposition may be a key factor to explain wind-packing. The Kinect data could be used to quantify spatial erosion and deposition patterns and the combination with the SnowMicroPen data allowed to study the effect of erosion and deposition on wind-hardening. We found that erosion had no hardening effect on fresh snow and that deposition is a necessary but not sufficient condition for wind crust formation. Deposited snow was only hardened in wind-exposed areas. The Kinect data was used to calculate the wind-exposure parameter Sx. We observed no significant hardening for Sx>0.25. The variability of resulting wind crust hardnesses at Sx<0.25 was still large, however.

  2. Designing an Orthotic Insole by Using Kinect® XBOX Gaming Sensor Scanner and Computer Aided Engineering Software

    NASA Astrophysics Data System (ADS)

    Hafiz Burhan, Mohd; Nor, Nik Hisyamudin Muhd; Yarwindran, Mogan; Ibrahim, Mustaffa; Fahrul Hassan, Mohd; Azwir Azlan, Mohd; Turan, Faiz Mohd; Johan, Kartina

    2017-08-01

    Healthcare and medicine constitute one of the most expensive fields in the modern world. In order to fulfil medical requirements, this study aimed to design an orthotic insole by using the Kinect Xbox gaming sensor scanner and CAE software. The Kinect® XBOX 360 gaming sensor is capable of producing reconstructed 3D geometry with maximum and minimum errors of 3.78% (2.78 mm) and 1.74% (0.46 mm), respectively. The orthotic insole design process was carried out using Autodesk Meshmixer 2.6 and Solidworks 2014 software. The designed orthotic insole was capable of reducing foot pressure, especially in the metatarsal area. Overall, the proposed method showed high potential for insole design, promising low cost, reduced time consumption, and efficiency, since the Kinect® XBOX 360 device is inexpensive compared with other digital 3D scanners and the software needed to run it can be downloaded for free.

  3. Can low-cost motion-tracking systems substitute a Polhemus system when researching social motor coordination in children?

    PubMed

    Romero, Veronica; Amaral, Joseph; Fitzpatrick, Paula; Schmidt, R C; Duncan, Amie W; Richardson, Michael J

    2017-04-01

    Functionally stable and robust interpersonal motor coordination has been found to play an integral role in the effectiveness of social interactions. However, the motion-tracking equipment required to record and objectively measure the dynamic limb and body movements during social interaction has been very costly, cumbersome, and impractical within a non-clinical or non-laboratory setting. Here we examined whether three low-cost motion-tracking options (Microsoft Kinect skeletal tracking of either one limb or whole body and a video-based pixel change method) can be employed to investigate social motor coordination. Of particular interest was the degree to which these low-cost methods of motion tracking could be used to capture and index the coordination dynamics that occurred between a child and an experimenter for three simple social motor coordination tasks in comparison to a more expensive, laboratory-grade motion-tracking system (i.e., a Polhemus Latus system). Overall, the results demonstrated that these low-cost systems cannot substitute the Polhemus system in some tasks. However, the lower-cost Microsoft Kinect skeletal tracking and video pixel change methods were successfully able to index differences in social motor coordination in tasks that involved larger-scale, naturalistic whole body movements, which can be cumbersome and expensive to record with a Polhemus. However, we found the Kinect to be particularly vulnerable to occlusion and the pixel change method to movements that cross the video frame midline. Therefore, particular care needs to be taken in choosing the motion-tracking system that is best suited for the particular research.

  4. Validity and sensitivity of the longitudinal asymmetry index to detect gait asymmetry using Microsoft Kinect data.

    PubMed

    Auvinet, E; Multon, F; Manning, V; Meunier, J; Cobb, J P

    2017-01-01

    Gait asymmetry information is a key point in disease screening and follow-up. Constant Relative Phase (CRP) has been used to quantify a within-stride asymmetry index, which requires noise-free and accurate motion capture that is difficult to obtain in clinical settings. This study explores a new index, the Longitudinal Asymmetry Index (ILong), which is derived using data from a low-cost depth camera (Kinect). ILong is based on depth images averaged over several gait cycles, rather than derived joint positions or angles. This study aims to evaluate (1) the validity of CRP computed with Kinect, (2) the validity and sensitivity of ILong for measuring gait asymmetry based solely on data provided by a depth camera, (3) the clinical applicability of a posteriorly mounted camera system to avoid occlusion caused by the standard front-fitted treadmill consoles and (4) the number of strides needed to reliably calculate ILong. The gait of 15 subjects was recorded concurrently with a marker-based system (MBS) and Kinect, and asymmetry was artificially reproduced by introducing a 5 cm sole attached to one foot. CRP computed with Kinect was not reliable. ILong detected this disturbed gait reliably and could be computed from a posteriorly placed Kinect without loss of validity. A minimum of five strides was needed to achieve a correlation coefficient of 0.9 between the standard MBS and the low-cost depth-camera-based ILong. ILong provides a clinically pragmatic method for measuring gait asymmetry, with application for improved patient care through enhanced disease screening, diagnosis and monitoring. Copyright © 2016. Published by Elsevier B.V.
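
    The abstract does not spell out how ILong is computed from the cycle-averaged depth images, so the following is purely an illustration of a depth-image-based asymmetry measure and not the authors' definition: mirror a cycle-averaged depth map about its vertical midline and compare the two halves over valid pixels.

      import numpy as np

      def depth_asymmetry(avg_depth):
          # Illustrative only (not the ILong definition from the paper):
          # mirror a cycle-averaged depth image about its vertical midline and
          # report the mean absolute left/right difference over valid pixels.
          # avg_depth: 2D array of averaged depth values; zeros mark invalid pixels.
          half = avg_depth.shape[1] // 2
          left = avg_depth[:, :half]
          right = np.fliplr(avg_depth)[:, :half]
          valid = (left > 0) & (right > 0)
          return float(np.abs(left - right)[valid].mean())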

  5. Mobile, Virtual Enhancements for Rehabilitation (MOVER)

    DTIC Science & Technology

    2015-02-28

    patient uses COTS input devices, such as the Microsoft Kinect and the Wii Balance Board , to perform therapeutic exercises that are mapped to...by hardcoding specific, commonly used balance exercises into the system and enabling the therapists to select and customize pre-identified... balance disorder patients. We made these games highly customizable to enable therapists to tune each game to the capabilities of individual patients

  6. Microsoft kinect-based artificial perception system for control of functional electrical stimulation assisted grasping.

    PubMed

    Strbac, Matija; Kočović, Slobodan; Marković, Marko; Popović, Dejan B

    2014-01-01

    We present a computer vision algorithm that incorporates a heuristic model which mimics a biological control system for the estimation of control signals used in functional electrical stimulation (FES) assisted grasping. The developed processing software acquires data from the Microsoft Kinect camera and implements real-time hand tracking and object analysis. This information can be used to identify temporal synchrony and spatial synergy modalities for FES control. Therefore, the algorithm acts as artificial perception which mimics human visual perception by identifying the position and shape of the object with respect to the position of the hand in real time during the planning phase of the grasp. This artificial perception used within the heuristically developed model allows selection of the appropriate grasp and prehension. The experiments demonstrate that the correct grasp modality was selected in more than 90% of tested scenarios/objects. The system is portable, and the components are low in cost and robust; hence, it can be used for FES in a clinical or even home environment. The main application of the system is envisioned for functional electrical therapy, that is, intensive exercise assisted with FES.

  7. Fall detection in homes of older adults using the Microsoft Kinect.

    PubMed

    Stone, Erik E; Skubic, Marjorie

    2015-01-01

    A method for detecting falls in the homes of older adults using the Microsoft Kinect and a two-stage fall detection system is presented. The first stage of the detection system characterizes a person's vertical state in individual depth image frames, and then segments on-ground events from the vertical state time series obtained by tracking the person over time. The second stage uses an ensemble of decision trees to compute a confidence that a fall preceded an on-ground event. Evaluation was conducted in the actual homes of older adults, using a combined nine years of continuous data collected in 13 apartments. The dataset includes 454 falls: 445 performed by trained stunt actors and nine naturally occurring resident falls. The extensive data collection allows for characterization of system performance under real-world conditions to a degree that has not been shown in other studies. Cross validation results are included for standing, sitting, and lying down positions, near (within 4 m) versus far fall locations, and occluded versus not occluded fallers. The method is compared against five state-of-the-art fall detection algorithms and significantly better results are achieved.
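
    The two-stage structure described above (a per-frame vertical state, segmentation of on-ground events from its time series, and an ensemble of decision trees scoring each event) can be sketched as follows. The vertical-state definition, feature set and thresholds below are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def on_ground_events(vertical_state, threshold=0.3, min_frames=15):
          # Stage 1 (sketch): segment contiguous runs of frames in which the tracked
          # person's vertical state (e.g. estimated height of the body centroid above
          # the floor plane, in metres) stays below a threshold.
          vs = np.asarray(vertical_state, dtype=float)
          below = vs < threshold
          events, start = [], None
          for i, b in enumerate(below):
              if b and start is None:
                  start = i
              elif not b and start is not None:
                  if i - start >= min_frames:
                      events.append((start, i))
                  start = None
          if start is not None and len(below) - start >= min_frames:
              events.append((start, len(below)))
          return events

      def event_features(vertical_state, start, end, fps=30.0, pre=30):
          # Illustrative features describing the motion preceding an on-ground event.
          vs = np.asarray(vertical_state, dtype=float)
          pre_seg = vs[max(0, start - pre):start]
          drop = float(pre_seg.max() - vs[start]) if len(pre_seg) else 0.0
          drop_rate = drop * fps / max(len(pre_seg), 1)
          return [drop, drop_rate, (end - start) / fps, float(vs[start:end].mean())]

      # Stage 2 (sketch): an ensemble of trees scores each event with a fall confidence.
      # from sklearn.ensemble import RandomForestClassifier
      # clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)  # labelled events
      # confidence = clf.predict_proba([event_features(vs, s, e)])[0, 1]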

  8. Microsoft Kinect-Based Artificial Perception System for Control of Functional Electrical Stimulation Assisted Grasping

    PubMed Central

    Kočović, Slobodan; Popović, Dejan B.

    2014-01-01

    We present a computer vision algorithm that incorporates a heuristic model which mimics a biological control system for the estimation of control signals used in functional electrical stimulation (FES) assisted grasping. The developed processing software acquires the data from Microsoft Kinect camera and implements real-time hand tracking and object analysis. This information can be used to identify temporal synchrony and spatial synergies modalities for FES control. Therefore, the algorithm acts as artificial perception which mimics human visual perception by identifying the position and shape of the object with respect to the position of the hand in real time during the planning phase of the grasp. This artificial perception used within the heuristically developed model allows selection of the appropriate grasp and prehension. The experiments demonstrate that correct grasp modality was selected in more than 90% of tested scenarios/objects. The system is portable, and the components are low in cost and robust; hence, it can be used for the FES in clinical or even home environment. The main application of the system is envisioned for functional electrical therapy, that is, intensive exercise assisted with FES. PMID:25202707

  9. VOLUMNECT: measuring volumes with Kinect

    NASA Astrophysics Data System (ADS)

    Quintino Ferreira, Beatriz; Griné, Miguel; Gameiro, Duarte; Costeira, João. Paulo; Sousa Santos, Beatriz

    2014-03-01

    This article presents a solution for volume measurement in object packing using 3D cameras (such as the Microsoft Kinect™). We target application scenarios, such as warehouses or distribution and logistics companies, where it is important to promptly compute package volumes, yet high accuracy is not pivotal. Our application automatically detects cuboid objects using the depth camera data, computes their volumes and sorts them, allowing space optimization. The proposed methodology applies simple computer vision and image processing methods to a point cloud, such as connected components, morphological operations and the Harris corner detector, producing encouraging results, namely an accuracy in volume measurement of 8 mm. Aspects that can be further improved are identified; nevertheless, the current solution is already promising, turning out to be cost-effective for the envisaged scenarios.
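
    As an illustration of the final step, computing a cuboid's volume once its points have been segmented from the depth data, the sketch below fits a PCA-oriented bounding box to a segmented point cloud; the segmentation itself (connected components, morphology, corner detection) is omitted and the function name is illustrative.

      import numpy as np

      def cuboid_volume_from_points(points):
          # Sketch: estimate the volume of a segmented cuboid from its 3D points
          # (N x 3 array, metres) by fitting a PCA-oriented bounding box.
          pts = np.asarray(points, dtype=float)
          centred = pts - pts.mean(axis=0)
          # principal axes of the point cloud approximate the cuboid's edges
          _, _, vt = np.linalg.svd(centred, full_matrices=False)
          local = centred @ vt.T              # coordinates in the box frame
          extents = local.max(axis=0) - local.min(axis=0)
          return float(np.prod(extents)), extents

      # Toy check: a 0.4 x 0.3 x 0.2 m box sampled with random interior points
      rng = np.random.default_rng(0)
      box = rng.uniform([0, 0, 0], [0.4, 0.3, 0.2], size=(5000, 3))
      vol, dims = cuboid_volume_from_points(box)
      print(vol, dims)   # roughly 0.024 m^3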

  10. Computer vision for RGB-D sensors: Kinect and its applications.

    PubMed

    Shao, Ling; Han, Jungong; Xu, Dong; Shotton, Jamie

    2013-10-01

    With the Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use as an off-the-shelf technology. This special issue is specifically dedicated to new algorithms and/or new applications based on the Kinect (or similar RGB-D) sensors. In total, we received over ninety submissions from more than twenty countries all around the world. The submissions cover a wide range of areas including object and scene classification, 3-D pose estimation, visual tracking, data fusion, human action/activity recognition, 3-D reconstruction, mobile robotics, and so on. After two rounds of review by at least two (mostly three) expert reviewers for each paper, the Guest Editors have selected twelve high-quality papers to be included in this highly popular special issue. The papers that comprise this issue are briefly summarized.

  11. Object Detection using the Kinect

    DTIC Science & Technology

    2012-03-01

    Kinect camera and point cloud data from the Kinect’s structured light stereo system (figure 1). We obtain reasonable results using a single prototype...same manner we present in this report. For example, at Willow Garage , Steder uses a 3-D feature he developed to classify objects directly from point...detecting backpacks using the data available from the Kinect sensor. 4 3.1 Point Cloud Filtering Dense point clouds derived from stereo are notoriously

  12. Extension of an iterative closest point algorithm for simultaneous localization and mapping in corridor environments

    NASA Astrophysics Data System (ADS)

    Yue, Haosong; Chen, Weihai; Wu, Xingming; Wang, Jianhua

    2016-03-01

    Three-dimensional (3-D) simultaneous localization and mapping (SLAM) is a crucial technique for intelligent robots to navigate autonomously and execute complex tasks. It can also be applied to shape measurement, reverse engineering, and many other scientific or engineering fields. A widespread SLAM algorithm, named KinectFusion, performs well in environments with complex shapes. However, it cannot handle translation uncertainties well in highly structured scenes. This paper improves the KinectFusion algorithm and makes it competent in both structured and unstructured environments. 3-D line features are first extracted according to both color and depth data captured by Kinect sensor. Then the lines in the current data frame are matched with the lines extracted from the entire constructed world model. Finally, we fuse the distance errors of these line-pairs into the standard KinectFusion framework and estimate sensor poses using an iterative closest point-based algorithm. Comparative experiments with the KinectFusion algorithm and one state-of-the-art method in a corridor scene have been done. The experimental results demonstrate that after our improvement, the KinectFusion algorithm can also be applied to structured environments and has higher accuracy. Experiments on two open access datasets further validated our improvements.
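
    The pose estimation step referred to above belongs to the iterative closest point (ICP) family. For readers unfamiliar with the core of such methods, the following is a minimal point-to-point ICP sketch (nearest-neighbour matching plus a closed-form SVD alignment); the paper's actual method additionally fuses 3-D line-feature distance errors into the KinectFusion framework, which this sketch does not reproduce.

      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid_transform(src, dst):
          # Closed-form (Kabsch/SVD) rotation R and translation t minimising
          # ||R @ src_i + t - dst_i||^2 over matched point pairs (N x 3 arrays).
          src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
          H = (src - src_c).T @ (dst - dst_c)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:            # guard against reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          t = dst_c - R @ src_c
          return R, t

      def icp(source, target, n_iter=30):
          # Basic point-to-point ICP: repeatedly match each source point to its
          # nearest target point and solve for the rigid transform.
          tree = cKDTree(target)
          src = source.copy()
          R_total, t_total = np.eye(3), np.zeros(3)
          for _ in range(n_iter):
              _, idx = tree.query(src)
              R, t = best_rigid_transform(src, target[idx])
              src = src @ R.T + t
              R_total, t_total = R @ R_total, R @ t_total + t
          return R_total, t_total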

  13. Heart Rate and Liking During "Kinect Boxing" Versus "Wii Boxing": The Potential for Enjoyable Vigorous Physical Activity Videogames.

    PubMed

    Sanders, Gabriel J; Peacock, Corey A; Barkley, Jacob E; Gish, Brian; Brock, Scott; Volpenhein, Josh

    2015-08-01

    Nintendo(®) (Kyoto, Japan) "Wii™ Sports Boxing" ("Wii Boxing") and Xbox(®) (Microsoft, Redmond, WA) "Kinect(®) Sports Boxing" ("Kinect Boxing") are both boxing simulation videogames that are available for two different active videogame (AVG) systems. Although these AVGs are similar, the style of gameplay required is different (i.e., upper body only versus total body movements) and may alter physical activity intensity and one's preference for playing one game over the other. AVGs that elicit the greatest physiologic challenge and are preferred by users should be identified in an effort to enhance the efficacy of physical activity interventions and programs that include AVGs. The mean heart rate (HRmean) and peak heart rate (HRpeak) for 27 adults (22.7±4.2 years old) were recorded during four 10-minute conditions: seated rest, treadmill walking at 3 miles/hour, "Wii Boxing," and "Kinect Boxing." Upon completion of all four conditions, participants indicated which condition they preferred, and HRmean and HRpeak were calculated as a percentage of age-predicted maximum heart rate to classify physical activity intensity for the three activity conditions (treadmill, "Wii Boxing," and "Kinect Boxing"). "Kinect Boxing" significantly (P<0.001) increased percentage HRmean (64.1±1.6 percent of age-predicted maximum) and percentage HRpeak (76.5±1.9 percent) above all other conditions: Wii HRmean, 53.0±1.2 percent; Wii HRpeak, 61.8±1.5 percent; treadmill HRmean, 52.4±1.2 percent; treadmill HRpeak, 55.2±2.2 percent. Percentage HRpeak for "Kinect Boxing" was great enough to be considered a vigorous-intensity physical activity. There was no difference (P=0.55) in percentage HRmean between "Wii Boxing" and treadmill walking. Participants also preferred "Kinect Boxing" (P<0.001; n=26) to all other conditions ("Wii Boxing," n=1; treadmill n=0). "Kinect Boxing" was the most preferred and the only condition that was physiologically challenging enough to be classified as a vigorous-intensity physical activity.

  14. Feasibility, safety and outcomes of playing Kinect Adventures!™ for people with Parkinson's disease: a pilot study.

    PubMed

    Pompeu, J E; Arduini, L A; Botelho, A R; Fonseca, M B F; Pompeu, S M A A; Torriani-Pasin, C; Deutsch, J E

    2014-06-01

    To assess the feasibility, safety and outcomes of playing Microsoft Kinect Adventures™ for people with Parkinson's disease in order to guide the design of a randomised clinical trial. Single-group, blinded trial. Rehabilitation Center of São Camilo University, Brazil. Seven patients (six males, one female) with Parkinson's disease (Hoehn and Yahr Stages 2 and 3). Fourteen 60-minute sessions, three times per week, playing four games of Kinect Adventures! The feasibility and safety outcomes were patients' game performance and adverse events, respectively. The clinical outcomes were the 6-minute walk test, Balance Evaluation System Test, Dynamic Gait Index and Parkinson's Disease Questionnaire (PDQ-39). Patients' scores for the four games showed improvement. The mean [standard deviation (SD)] scores in the first and last sessions of the Space Pop game were 151 (36) and 198 (29), respectively [mean (SD) difference 47 (7), 95% confidence interval 15 to 79]. There were no adverse events. Improvements were also seen in the 6-minute walk test, Balance Evaluation System Test, Dynamic Gait Index and PDQ-39 following training. Kinect-based training was safe and feasible for people with Parkinson's disease (Hoehn and Yahr Stages 2 and 3). Patients improved their scores for all four games. No serious adverse events occurred during training with Kinect Adventures!, which promoted improvement in activities (balance and gait), body functions (cardiopulmonary aptitude) and participation (quality of life). Copyright © 2013 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  15. Mobile, Virtual Enhancements for Rehabilitation (MOVER)

    DTIC Science & Technology

    2015-08-28

    bottom of the figure. The patient uses COTS input devices, such as the Microsoft Kinect and the Wii Balance Board , to perform therapeutic exercises...specific, commonly used balance exercises into the system and enabling the therapists to select and customize pre-identified parameters for these exercises... balance disorder patients. We made these games highly customizable to enable therapists to tune each game to the capabilities of individual

  16. Investigation of the Microsoft Kinect v2 Sensor as a Multi-Purpose Device for a Radiation Oncology Clinic

    NASA Astrophysics Data System (ADS)

    Silverstein, Evan Asher

    For a radiation oncology clinic, the devices available to assist in the workflow for radiotherapy treatments are quite numerous. Processes such as patient verification, motion management, or respiratory motion tracking can all be improved upon by devices currently on the market. These three specific processes can directly impact patient safety and treatment efficacy and, as such, are important to track and quantify. Most products available will only provide a solution for one of these processes and may be outside the reach of a typical radiation oncology clinic due to difficult implementation and incorporation with already existing hardware. This manuscript investigates the use of the Microsoft Kinect v2 sensor to provide solutions for all three processes all while maintaining a relatively simple and easy-to-use implementation. To assist with patient verification, the Kinect system was programmed to create a facial recognition and recall process. The basis of the facial recognition algorithm was created by utilizing a facial mapping library distributed by Microsoft within the Software Developers Toolkit (SDK). Here, the system extracts 31 fiducial points representing various facial landmarks. 3D vectors are created between each of the 31 points and the magnitude of each vector is calculated by the system. This allows for a face to be defined as a collection of 465 specific vector magnitudes. The 465 vector magnitudes defining a face are then used in both the creation of a facial reference data set and subsequent evaluations of real-time sensor data in the matching algorithm. To test the algorithm, a database of 39 faces was created, each with 465 vectors derived from the fiducial points, and a one-to-one matching procedure was performed to obtain sensitivity and specificity data of the facial identification system. In total, 5299 trials were performed and threshold parameters were created for match determination. Optimization of said parameters in the matching algorithm by way of ROC curves indicated the sensitivity of the system was 96.5% and the specificity was 96.7%. These results indicate a fairly robust methodology for verifying, in real-time, a specific face through comparison with a pre-collected reference data set. In its current implementation, the process of data collection for each face and subsequent matching session averaged approximately 30 seconds, which may be too onerous to provide a realistic supplement to patient identification in a clinical setting. Despite the time commitment, the data collection process was well tolerated by all participants. It was found that ambient light played a crucial role in the accuracy and reproducibility of the facial recognition system. Testing with various light levels found that ambient light greater than 200 lux produced the most accurate results. As such, the acquisition process should be set up in such a way as to ensure consistent ambient light conditions across both the reference recording session and subsequent real-time identification sessions. In developing a motion management process with the Kinect, two separate but complementary processes were created. First, to track large-scale anatomical movements, the automatic skeletal tracking capabilities of the Kinect were utilized. 25 specific body joints (head, elbow, knee, etc.) make up the skeletal frame and are locked to relative positions on the body. 
Using code written in C#, these joints are tracked, in 3D space, and compared to an initial state of the patient allowing for an indication of anatomical motion. Additionally, to track smaller, more subtle movements on a specific area of the body, a user drawn ROI can be created. Here, the depth values of all pixels associated with the body in the ROI are compared to the initial state. The system counts the number of live pixels with a depth difference greater than a specified threshold compared to the initial state and the area of each of those pixels is calculated based on their depth. The percentage of area moved (PAM) compared to the ROI area then becomes an indication of gross movement within the ROI. In this study, 9 specific joints proved to be stable during data acquisition. When moved in orthogonal directions, each coordinate recorded had a relatively linear trend of movement but not the expected 1:1 relationship to couch movement. Instead, calculation of the vector magnitude between the initial and current position proved a better indicator of movement. 5 of the 9 joints (Left/Right Elbow, Left/Right Hip, and Spine-Base) showed relatively consistent values for radial movements of 5mm and 10mm, achieving 20%-25% coefficient of variation. For these 5 joints, this allowed for threshold values for calculated radial distances of 3mm and 7.5 mm to be set for 5mm and 10mm of actual movement, respectively. (Abstract shortened by ProQuest.).
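
    To make the face-signature idea concrete: 31 fiducial points yield 31*30/2 = 465 unique point pairs, and the collection of their pairwise 3D distances forms the signature that is compared against the reference set. The sketch below shows that computation together with a simple distance-based comparison; the actual matching thresholds and ROC-optimized parameters described above are not reproduced, and the function names are illustrative.

      import numpy as np
      from itertools import combinations

      def face_signature(fiducials):
          # 31 (x, y, z) fiducial points -> 465 pairwise vector magnitudes.
          pts = np.asarray(fiducials, dtype=float)          # shape (31, 3)
          return np.array([np.linalg.norm(pts[i] - pts[j])
                           for i, j in combinations(range(len(pts)), 2)])

      def signature_difference(live_sig, reference_sig):
          # Illustrative similarity score: mean absolute difference between the
          # live signature and a stored reference signature (lower = better match).
          return float(np.mean(np.abs(live_sig - reference_sig)))

      # Usage sketch: a match is declared when the difference falls below a
      # threshold tuned, for example, from ROC analysis on a reference database.
      # is_match = signature_difference(face_signature(live_pts),
      #                                 face_signature(ref_pts)) < threshold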

  17. Wii, Kinect, and Move. Heart Rate, Oxygen Consumption, Energy Expenditure, and Ventilation due to Different Physically Active Video Game Systems in College Students.

    PubMed

    Scheer, Krista S; Siebrant, Sarah M; Brown, Gregory A; Shaw, Brandon S; Shaw, Ina

    Nintendo Wii, Sony PlayStation Move, and Microsoft XBOX Kinect are home video gaming systems that involve player movement to control on-screen game play. Numerous investigations have demonstrated that playing Wii is moderate physical activity at best, but Move and Kinect have not been as thoroughly investigated. The purpose of this study was to compare heart rate, oxygen consumption, and ventilation while playing the games Wii Boxing, Kinect Boxing, and Move Gladiatorial Combat. Heart rate, oxygen consumption, and ventilation were measured at rest and during a graded exercise test in 10 males and 9 females (19.8 ± 0.33 y, 175.4 ± 2.0 cm, 80.2 ± 7.7 kg). On another day, in a randomized order, the participants played Wii Boxing, Kinect Boxing, and Move Gladiatorial Combat while heart rate, ventilation, and oxygen consumption were measured. There were no differences in heart rate (116.0 ± 18.3 vs. 119.3 ± 17.6 vs. 120.1 ± 17.6 beats/min), oxygen consumption (9.2 ± 3.0 vs. 10.6 ± 2.4 vs. 9.6 ± 2.4 ml/kg/min), or minute ventilation (18.9 ± 5.7 vs. 20.8 ± 8.0 vs. 19.7 ± 6.4 L/min) when playing Wii boxing, Kinect boxing, or Move Gladiatorial Combat (respectively). Playing Nintendo Wii Boxing, XBOX Kinect Boxing, and Sony PlayStation Move Gladiatorial Combat all increase heart rate, oxygen consumption, and ventilation above resting levels but there were no significant differences between gaming systems. Overall, playing a "physically active" home video game system does not meet the minimal threshold for moderate intensity physical activity, regardless of gaming system.

  18. SU-E-J-197: Investigation of Microsoft Kinect 2.0 Depth Resolution for Patient Motion Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silverstein, E; Snyder, M

    2015-06-15

    Purpose: Investigate the use of the Kinect 2.0 for patient motion tracking during radiotherapy by studying spatial and depth resolution capabilities. Methods: Using code written in C#, depth map data was extracted from the Kinect to create an initial depth map template indicative of the initial position of an object, to be compared to the depth map of the object over time. To test this process, a simple setup was created in which two objects were imaged: a 40 cm × 40 cm board covered in non-reflective material and a 15 cm × 26 cm textbook with a slightly reflective, glossy cover. Each object, imaged and measured separately, was placed on a movable platform with the object-to-camera distance measured. The object was then moved a specified amount to ascertain whether the Kinect’s depth camera would visualize the difference in position of the object. Results: Initial investigations have shown the Kinect depth resolution is dependent on the object-to-camera distance. Measurements indicate that movements as small as 1 mm can be visualized for objects as close as 50 cm away. This depth resolution decreases linearly with object-to-camera distance. At 4 m, the depth resolution had decreased such that the minimum observable movement was 1 cm. Conclusion: The improved resolution and advanced hardware of the Kinect 2.0 allow for increased depth resolution over the Kinect 1.0. Although it is obvious that the depth resolution should decrease with increasing distance from an object, given the decrease in the number of pixels representing said object, the depth resolution at large distances indicates its usefulness in a clinical setting.
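
    The template comparison described in the Methods, a stored depth map of the initial object position compared against live depth frames, can be sketched as follows; the threshold and units are illustrative.

      import numpy as np

      def moved_pixels(template_depth, live_depth, threshold_mm=5.0):
          # Sketch of the template comparison: flag pixels whose live depth differs
          # from the initial template by more than a threshold.
          # Depth maps are 2D arrays in millimetres; zeros mark invalid pixels.
          valid = (template_depth > 0) & (live_depth > 0)
          diff = np.abs(live_depth.astype(float) - template_depth.astype(float))
          mask = valid & (diff > threshold_mm)
          return mask, int(mask.sum())

      # Usage: template = first depth frame of the static object; subsequent frames
      # are passed as live_depth, and a growing moved-pixel count indicates motion.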

  19. Effects of Virtual Reality Training using Xbox Kinect on Motor Function in Stroke Survivors: A Preliminary Study.

    PubMed

    Park, Dae-Sung; Lee, Do-Gyun; Lee, Kyeongbong; Lee, GyuChang

    2017-10-01

    Although the Kinect gaming system (Microsoft Corp, Redmond, WA) has been shown to be of therapeutic benefit in rehabilitation, the applicability of Kinect-based virtual reality (VR) training to improve motor function following a stroke has not been investigated. This study aimed to investigate the effects of VR training, using the Xbox Kinect-based game system, on the motor recovery of patients with chronic hemiplegic stroke. This was a randomized controlled trial. Twenty patients with hemiplegic stroke were randomly assigned to either the intervention group or the control group. Participants in the intervention group (n = 10) received 30 minutes of conventional physical therapy plus 30 minutes of VR training using Xbox Kinect-based games, and those in the control group (n = 10) received 30 minutes of conventional physical therapy only. All interventions consisted of daily sessions for a 6-week period. All measurements using Fugl-Meyer Assessment (FMA-LE), the Berg Balance Scale (BBS), the Timed Up and Go test (TUG), and the 10-meter Walk Test (10mWT) were performed at baseline and at the end of the 6 weeks. The scores on the FMA-LE, BBS, TUG, and 10mWT improved significantly from baseline to post intervention in both the intervention and the control groups after training. The pre-to-post difference scores on BBS, TUG, and 10mWT for the intervention group were significantly more improved than those for the control group (P <.05). Evidence from the present study supports the use of additional VR training with the Xbox Kinect gaming system as an effective therapeutic approach for improving motor function during stroke rehabilitation. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  20. Digitization and visualization of greenhouse tomato plants in indoor environments.

    PubMed

    Li, Dawei; Xu, Lihong; Tan, Chengxiang; Goodman, Erik D; Fu, Daichang; Xin, Longjiao

    2015-02-10

    This paper is concerned with the digitization and visualization of potted greenhouse tomato plants in indoor environments. For the digitization, an inexpensive and efficient commercial stereo sensor, a Microsoft Kinect, is used to separate visual information about tomato plants from the background. Based on the Kinect, a 4-step approach that can automatically detect and segment stems of tomato plants is proposed, including acquisition and preprocessing of image data, detection of stem segments, removing false detections and automatic segmentation of stem segments. Correctly segmented texture samples including stems and leaves are then stored in a texture database for further usage. Two types of tomato plants, the cherry tomato variety and the ordinary variety, are studied in this paper. The stem detection accuracy (under a simulated greenhouse environment) for the cherry tomato variety is 98.4% at a true positive rate of 78.0%, whereas the detection accuracy for the ordinary variety is 94.5% at a true positive rate of 72.5%. In visualization, we combine L-system theory and digitized tomato organ texture data to build realistic 3D virtual tomato plant models that are capable of exhibiting various structures and poses in real time. In particular, we also simulate the growth process on virtual tomato plants by exerting controls on two L-systems via parameters concerning the age and the form of lateral branches. This research may provide useful visual cues for improving intelligent greenhouse control systems and meanwhile may facilitate research on artificial organisms.

  1. Feasibility of Using Low-Cost Motion Capture for Automated Screening of Shoulder Motion Limitation after Breast Cancer Surgery.

    PubMed

    Gritsenko, Valeriya; Dailey, Eric; Kyle, Nicholas; Taylor, Matt; Whittacre, Sean; Swisher, Anne K

    2015-01-01

    To determine if a low-cost, automated motion analysis system using Microsoft Kinect could accurately measure shoulder motion and detect motion impairments in women following breast cancer surgery. Descriptive study of motion measured via 2 methods. Academic cancer center oncology clinic. 20 women (mean age = 60 yrs) were assessed for active and passive shoulder motions during a routine post-operative clinic visit (mean = 18 days after surgery) following mastectomy (n = 4) or lumpectomy (n = 16) for breast cancer. Participants performed 3 repetitions of active and passive shoulder motions on the side of the breast surgery. Arm motion was recorded using motion capture by Kinect for Windows sensor and on video. Goniometric values were determined from video recordings, while motion capture data were transformed to joint angles using 2 methods (body angle and projection angle). Correlation of motion capture with goniometry and detection of motion limitation. Active shoulder motion measured with low-cost motion capture agreed well with goniometry (r = 0.70-0.80), while passive shoulder motion measurements did not correlate well. Using motion capture, it was possible to reliably identify participants whose range of shoulder motion was reduced by 40% or more. Low-cost, automated motion analysis may be acceptable to screen for moderate to severe motion impairments in active shoulder motion. Automatic detection of motion limitation may allow quick screening to be performed in an oncologist's office and trigger timely referrals for rehabilitation.

  2. Development of a novel virtual reality gait intervention.

    PubMed

    Boone, Anna E; Foreman, Matthew H; Engsberg, Jack R

    2017-02-01

    Improving gait speed and kinematics can be a time consuming and tiresome process. We hypothesize that incorporating virtual reality videogame play into variable improvement goals will improve levels of enjoyment and motivation and lead to improved gait performance. To develop a feasible, engaging, VR gait intervention for improving gait variables. Completing this investigation involved four steps: 1) identify gait variables that could be manipulated to improve gait speed and kinematics using the Microsoft Kinect and free software, 2) identify free internet videogames that could successfully manipulate the chosen gait variables, 3) experimentally evaluate the ability of the videogames and software to manipulate the gait variables, and 4) evaluate the enjoyment and motivation from a small sample of persons without disability. The Kinect sensor was able to detect stride length, cadence, and joint angles. FAAST software was able to identify predetermined gait variable thresholds and use the thresholds to play free online videogames. Videogames that involved continuous pressing of a keyboard key were found to be most appropriate for manipulating the gait variables. Five participants without disability evaluated the effectiveness for modifying the gait variables and enjoyment and motivation during play. Participants were able to modify gait variables to permit successful videogame play. Motivation and enjoyment were high. A clinically feasible and engaging virtual intervention for improving gait speed and kinematics has been developed and initially tested. It may provide an engaging avenue for achieving thousands of repetitions necessary for neural plastic changes and improved gait. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Folk Dance Pattern Recognition Over Depth Images Acquired via Kinect Sensor

    NASA Astrophysics Data System (ADS)

    Protopapadakis, E.; Grammatikopoulou, A.; Doulamis, A.; Grammalidis, N.

    2017-02-01

    The possibility of accurate recognition of folk dance patterns is investigated in this paper. System inputs are raw skeleton data, provided by a low-cost sensor. In particular, data were obtained by monitoring three professional dancers, using a Kinect II sensor. A set of six traditional Greek dances (without their variations) constitutes the investigated data. A two-step process was adopted. At first, the most descriptive skeleton data were selected using a combination of density-based and sparse modelling algorithms. Then, the representative data served as a training set for a variety of classifiers.

  4. Topographic Controls on Southern California Ecosystem Function and Post-fire Recovery: a Satellite and Near-surface Remote Sensing Approach

    NASA Astrophysics Data System (ADS)

    Azzari, George

    Southern Californian wildfires can influence climate in a variety of ways, including changes in surface albedo, emission of greenhouse gases and aerosols, and the production of tropospheric ozone. Ecosystem post-fire recovery plays a key role in determining the strength, duration, and relative importance of these climate forcing agents. Southern California's ecosystems vary markedly with topography, creating sharp transitions with elevation, aspect, and slope. Little is known about the ways topography influences ecosystem properties and function, particularly in the context of post-fire recovery. We combined images from the USGS satellite Landsat 5 with flux tower measurements to analyze pre- and post-fire albedo and carbon exchanged by Southern California's ecosystems in the Santa Ana Mountains. We reduced the sources of external variability in Landsat images using several correction methods for topographic and bidirectional effects. We used time series of corrected images to infer the Net Ecosystem Exchange and surface albedo, and calculated the radiative forcing due to CO2 emissions and albedo changes. We analyzed the patterns of recovery and radiative forcing on north- and south-facing slopes, stratified by vegetation classes including grassland, coastal sage scrub, chaparral, and evergreen oak forest. We found that topography strongly influenced post-fire recovery and radiative forcing. Field observations are often limited by the difficulty of collecting ground validation data. Current instrumentation networks do not provide adequate spatial resolution for landscape-level analysis. The deployment of consumer-market technology could reduce the cost of near-surface measurements, allowing the installation of finer-scale instrument networks. We tested the performance of the Microsoft Kinect sensor for measuring vegetation structure. We used Kinect to acquire 3D vegetation point clouds in the field, and used these data to compute plant height, crown diameter, and volume. We found good agreement between Kinect-derived and manual measurements.

  5. Mobile, Virtual Enhancements for Rehabilitation (MOVER)

    DTIC Science & Technology

    2015-08-28

    The patient uses COTS input devices, such as the Microsoft Kinect and the Wii Balance Board , to perform therapeutic exercises that are mapped to...motion and balance disorder patients. We made these games highly customizable to enable therapists to tune each game to the capabilities of individual...settings. Figure 5 shows the setting for the target graphic styles. Figure 6 shows the setting for which foot the patient must balance on during the

  6. Efficient source separation algorithms for acoustic fall detection using a microsoft kinect.

    PubMed

    Li, Yun; Ho, K C; Popescu, Mihail

    2014-03-01

    Falls have become a common health problem among older adults. In a previous study, we proposed an acoustic fall detection system (acoustic FADE) that employed a microphone array and beamforming to provide automatic fall detection. However, the previous acoustic FADE had difficulties in detecting the fall signal in environments where interference comes from the fall direction, where the number of interferences exceeds FADE's ability to handle them, or where a fall is occluded. To address these issues, in this paper, we propose two blind source separation (BSS) methods for extracting the fall signal out of the interferences to improve the fall classification task. We first propose the single-channel BSS by using nonnegative matrix factorization (NMF) to automatically decompose the mixture into a linear combination of several basis components. Based on the distinct patterns of the bases of falls, we identify them efficiently and then construct the interference-free fall signal. Next, we extend the single-channel BSS to the multichannel case through a joint NMF over all channels followed by a delay-and-sum beamformer for additional ambient noise reduction. In our experiments, we used the Microsoft Kinect to collect the acoustic data in real-home environments. The results show that in environments with high interference and background noise levels, the fall detection performance is significantly improved using the proposed BSS approaches.
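
    As a minimal illustration of the single-channel step, decomposing a mixture into a small set of nonnegative basis components, the sketch below applies NMF to a magnitude spectrogram with scikit-learn; selecting which bases correspond to fall sounds, as well as the multichannel joint NMF and the delay-and-sum beamforming, are not shown.

      import numpy as np
      from scipy.signal import stft, istft
      from sklearn.decomposition import NMF

      def nmf_components(signal, fs, n_components=8):
          # Sketch: NMF decomposition of a single-channel mixture. Returns the
          # spectral bases W (freq x components), activations H (components x frames)
          # and the STFT phase for later reconstruction.
          f, t, Z = stft(signal, fs=fs, nperseg=512)
          mag, phase = np.abs(Z), np.angle(Z)
          model = NMF(n_components=n_components, init="nndsvda", max_iter=400)
          W = model.fit_transform(mag)          # spectral bases
          H = model.components_                 # time activations
          return W, H, phase

      def reconstruct(W, H, phase, fs, component_idx):
          # Re-synthesise the signal explained by a chosen list of components
          # (e.g. those whose bases match the characteristic pattern of falls).
          mag_sel = W[:, component_idx] @ H[component_idx, :]
          _, x = istft(mag_sel * np.exp(1j * phase), fs=fs, nperseg=512)
          return x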

  7. Development and evaluation of low cost game-based balance rehabilitation tool using the Microsoft Kinect sensor.

    PubMed

    Lange, Belinda; Chang, Chien-Yen; Suma, Evan; Newman, Bradley; Rizzo, Albert Skip; Bolas, Mark

    2011-01-01

    The use of the commercial video games as rehabilitation tools, such as the Nintendo WiiFit, has recently gained much interest in the physical therapy arena. Motion tracking controllers such as the Nintendo Wiimote are not sensitive enough to accurately measure performance in all components of balance. Additionally, users can figure out how to "cheat" inaccurate trackers by performing minimal movement (e.g. wrist twisting a Wiimote instead of a full arm swing). Physical rehabilitation requires accurate and appropriate tracking and feedback of performance. To this end, we are developing applications that leverage recent advances in commercial video game technology to provide full-body control of animated virtual characters. A key component of our approach is the use of newly available low cost depth sensing camera technology that provides markerless full-body tracking on a conventional PC. The aim of this research was to develop and assess an interactive game-based rehabilitation tool for balance training of adults with neurological injury.

  8. Illumination-invariant hand gesture recognition

    NASA Astrophysics Data System (ADS)

    Mendoza-Morales, América I.; Miramontes-Jaramillo, Daniel; Kober, Vitaly

    2015-09-01

    In recent years, human-computer interaction (HCI) has received a lot of interest in industry and science because it provides new ways to interact with modern devices through voice, body, and facial/hand gestures. The application range of HCI extends from easy control of home appliances to entertainment. Hand gesture recognition is a particularly interesting problem because the shape and movement of hands are usually complex and flexible enough to codify many different signs. In this work we propose a three-step algorithm: first, detection of hands in the current frame is carried out; second, hand tracking across the video sequence is performed; finally, robust recognition of gestures across subsequent frames is made. The recognition rate depends strongly on non-uniform illumination of the scene and occlusion of hands. In order to overcome these issues we use two Microsoft Kinect devices, utilizing combined information from RGB and infrared sensors. The algorithm performance is tested in terms of recognition rate and processing time.

  9. Kinect-based posture tracking for correcting positions during exercise.

    PubMed

    Guerrero, Cesar; Uribe-Quevedo, Alvaro

    2013-01-01

    The Kinect sensor has opened the path for developing numerous applications in several different areas. Medical and health applications are benefiting from the Kinect as it allows non-invasive body motion capture that can be used in motor rehabilitation and phobia treatment. A major advantage of the Kinect is that it allows developing solutions that can be used at home or even in the office, thus expanding the user's freedom to interact with solutions complementary to their physical activities without requiring any traveling. This paper presents Kinect-based posture tracking software for assisting the user in successfully matching postures required in some exercises for strengthening body muscles. Unlike several available video games, this tool offers a user interface for customizing posture parameters, so it can be tuned by healthcare professionals or, under their guidance, by the user.

  10. Keeping up with video game technology: objective analysis of Xbox Kinect™ and PlayStation 3 Move™ for use in burn rehabilitation.

    PubMed

    Parry, Ingrid; Carbullido, Clarissa; Kawada, Jason; Bagley, Anita; Sen, Soman; Greenhalgh, David; Palmieri, Tina

    2014-08-01

    Commercially available interactive video games are commonly used in rehabilitation to aid in physical recovery from a variety of conditions and injuries, including burns. Most video games were not originally designed for rehabilitation purposes and although some games have shown therapeutic potential in burn rehabilitation, the physical demands of more recently released video games, such as Microsoft Xbox Kinect™ (Kinect) and Sony PlayStation 3 Move™ (PS Move), have not been objectively evaluated. Video game technology is constantly evolving and demonstrating different immersive qualities and interactive demands that may or may not have therapeutic potential for patients recovering from burns. This study analyzed the upper extremity motion demands of Kinect and PS Move using three-dimensional motion analysis to determine their applicability in burn rehabilitation. Thirty normal children played each video game while real-time movement of their upper extremities was measured to determine maximal excursion and amount of elevation time. Maximal shoulder flexion, shoulder abduction and elbow flexion range of motion were significantly greater while playing Kinect than the PS Move (p≤0.01). Elevation time of the arms above 120° was also significantly longer with Kinect (p<0.05). The physical demands for shoulder and elbow range of motion while playing the Kinect, and to a lesser extent PS Move, are comparable to functional motion needed for daily tasks such as eating with a utensil and hair combing. Therefore, these more recently released commercially available video games show therapeutic potential in burn rehabilitation. Objectively quantifying the physical demands of video games commonly used in rehabilitation aids clinicians in integrating them into practice and lays the framework for further research on their efficacy. Copyright © 2013 Elsevier Ltd and ISBI. All rights reserved.

  11. Energy Expenditure During Xbox Kinect Play in Early Adolescents: The Relationship with Player Mode and Game Enjoyment.

    PubMed

    Verhoeven, Katrien; Abeele, Vero Vanden; Gers, Brent; Seghers, Jan

    2015-12-01

    There has been growing interest in the use of active videogames to influence levels of physical activity. Most studies have investigated energy expenditure in general, without taking into account moderating factors such as player mode and game enjoyment. This study therefore examines whether children's energy expenditure and game enjoyment are higher when games are played in a two-player mode than in a single-player mode. Forty-three children from the 7th grade who exhibited an inactive lifestyle engaged in six sports exergames on an Xbox(®) Kinect(®) (Microsoft, Redmond, WA) console. The player mode (single-player or two-player mode) was manipulated (within-subjects design). The primary parameters were "energy expenditure," which was measured with a SenseWear(®) device (Bodymedia Inc., Pittsburgh, PA), and "game enjoyment," which was assessed through self-report. On average, Kinect play elicits moderate physical activity (approximately 4 metabolic equivalents of task). Games that are played in a two-player mode elicit more energy than games that are played in a single-player mode. However, this was only the case for simultaneous play (boxing, dancing, and tennis), not for turn-based play (bowling, baseball, and golf). Furthermore, participants generally liked exergaming, regardless of their sex or the player mode. Finally, no significant correlation was found between energy expenditure and game enjoyment. This study has shown that Kinect play elicits physical activity of moderate intensity. Furthermore, Kinect play is generally enjoyed by both boys and girls. Simultaneous play may be the best suited to increase levels of physical activity in early adolescents who exhibit an inactive lifestyle.

  12. HoloHands: games console interface for controlling holographic optical manipulation

    NASA Astrophysics Data System (ADS)

    McDonald, C.; McPherson, M.; McDougall, C.; McGloin, D.

    2012-10-01

    The increased application of holographic optical manipulation techniques within the life sciences has sparked the development of accessible interfaces for control of holographic optical tweezers. Of particular interest are those that employ familiar, commercially available technologies. Here we present the use of a low cost games console interface, the Microsoft Kinect for the control of holographic optical tweezers and a study into the effect of using such a system upon the quality of trap generated.

  13. Heart Rate Detection Using Microsoft Kinect: Validation and Comparison to Wearable Devices.

    PubMed

    Gambi, Ennio; Agostinelli, Angela; Belli, Alberto; Burattini, Laura; Cippitelli, Enea; Fioretti, Sandro; Pierleoni, Paola; Ricciuti, Manola; Sbrollini, Agnese; Spinsante, Susanna

    2017-08-02

    Contactless detection is one of the new frontiers of technological innovation in the field of healthcare, enabling unobtrusive measurements of biomedical parameters. Compared to conventional methods for Heart Rate (HR) detection that employ expensive and/or uncomfortable devices, such as the Electrocardiograph (ECG) or pulse oximeter, contactless HR detection offers fast and continuous monitoring of heart activities and provides support for clinical analysis without the need for the user to wear a device. This paper presents a validation study for a contactless HR estimation method exploiting RGB (Red, Green, Blue) data from a Microsoft Kinect v2 device. This method, based on Eulerian Video Magnification (EVM), Photoplethysmography (PPG) and Videoplethysmography (VPG), can achieve performance comparable to classical approaches exploiting wearable systems, under specific test conditions. The output given by a Holter, which represents the gold-standard device used in the test for ECG extraction, is considered as the ground-truth, while a comparison with a commercial smartwatch is also included. The validation process is conducted with two modalities that differ for the availability of a priori knowledge about the subjects' normal HR. The two test modalities provide different results. In particular, the HR estimation differs from the ground-truth by 2% when the knowledge about the subject's lifestyle and his/her HR is considered and by 3.4% if no information about the person is taken into account.
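
    The full pipeline (EVM combined with PPG/VPG) is involved, but the core videoplethysmography idea, recovering the pulse from small periodic colour changes in skin pixels, can be sketched very simply: average the green channel over a skin region in each frame and locate the dominant frequency in the physiological band. This is an illustrative simplification, not the authors' method.

      import numpy as np

      def estimate_hr_bpm(green_means, fps, band=(0.75, 3.0)):
          # Sketch: estimate heart rate from a per-frame series of mean green-channel
          # values over a skin ROI. Looks for the dominant spectral peak between
          # 0.75 and 3.0 Hz (45-180 beats/min).
          x = np.asarray(green_means, dtype=float)
          x = x - x.mean()                      # remove the DC component
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
          power = np.abs(np.fft.rfft(x)) ** 2
          in_band = (freqs >= band[0]) & (freqs <= band[1])
          peak = freqs[in_band][np.argmax(power[in_band])]
          return 60.0 * peak

      # Usage sketch: green_means[i] = mean green value of the face/skin ROI in frame i,
      # sampled at the colour stream's frame rate (e.g. 30 fps).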

  14. Heart Rate Detection Using Microsoft Kinect: Validation and Comparison to Wearable Devices

    PubMed Central

    Agostinelli, Angela; Belli, Alberto; Cippitelli, Enea; Fioretti, Sandro; Pierleoni, Paola; Ricciuti, Manola

    2017-01-01

    Contactless detection is one of the new frontiers of technological innovation in the field of healthcare, enabling unobtrusive measurements of biomedical parameters. Compared to conventional methods for Heart Rate (HR) detection that employ expensive and/or uncomfortable devices, such as the Electrocardiograph (ECG) or pulse oximeter, contactless HR detection offers fast and continuous monitoring of heart activities and provides support for clinical analysis without the need for the user to wear a device. This paper presents a validation study for a contactless HR estimation method exploiting RGB (Red, Green, Blue) data from a Microsoft Kinect v2 device. This method, based on Eulerian Video Magnification (EVM), Photoplethysmography (PPG) and Videoplethysmography (VPG), can achieve performance comparable to classical approaches exploiting wearable systems, under specific test conditions. The output given by a Holter, which represents the gold-standard device used in the test for ECG extraction, is considered as the ground-truth, while a comparison with a commercial smartwatch is also included. The validation process is conducted with two modalities that differ for the availability of a priori knowledge about the subjects’ normal HR. The two test modalities provide different results. In particular, the HR estimation differs from the ground-truth by 2% when the knowledge about the subject’s lifestyle and his/her HR is considered and by 3.4% if no information about the person is taken into account. PMID:28767091

  15. Validation of a method for real time foot position and orientation tracking with Microsoft Kinect technology for use in virtual reality and treadmill based gait training programs.

    PubMed

    Paolini, Gabriele; Peruzzi, Agnese; Mirelman, Anat; Cereatti, Andrea; Gaukrodger, Stephen; Hausdorff, Jeffrey M; Della Croce, Ugo

    2014-09-01

    The use of virtual reality for the provision of motor-cognitive gait training has been shown to be effective for a variety of patient populations. The interaction between the user and the virtual environment is achieved by tracking the motion of the body parts and replicating it in the virtual environment in real time. In this paper, we present the validation of a novel method for tracking foot position and orientation in real time, based on the Microsoft Kinect technology, to be used for gait training combined with virtual reality. The validation of the motion tracking method was performed by comparing the tracking performance of the new system against a stereo-photogrammetric system used as gold standard. Foot position errors were in the order of a few millimeters (average RMSD from 4.9 to 12.1 mm in the medio-lateral and vertical directions, from 19.4 to 26.5 mm in the anterior-posterior direction); the foot orientation errors were also small (average %RMSD from 5.6% to 8.8% in the medio-lateral and vertical directions, from 15.5% to 18.6% in the anterior-posterior direction). The results suggest that the proposed method can be effectively used to track feet motion in virtual reality and treadmill-based gait training programs.

  16. Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics.

    PubMed

    Patrizi, Alfredo; Pennestrì, Ettore; Valentini, Pier Paolo

    2016-01-01

    The paper deals with the comparison between a high-end marker-based acquisition system and a low-cost marker-less methodology for the assessment of the human posture during working tasks. The low-cost methodology is based on the use of a single Microsoft Kinect V1 device. The high-end acquisition system is the BTS SMART that requires the use of reflective markers to be placed on the subject's body. Three practical working activities involving object lifting and displacement have been investigated. The operational risk has been evaluated according to the lifting equation proposed by the American National Institute for Occupational Safety and Health. The results of the study show that the risk multipliers computed from the two acquisition methodologies are very close for all the analysed activities. In agreement to this outcome, the marker-less methodology based on the Microsoft Kinect V1 device seems very promising to promote the dissemination of computer-aided assessment of ergonomics while maintaining good accuracy and affordable costs. PRACTITIONER’S SUMMARY: The study is motivated by the increasing interest for on-site working ergonomics assessment. We compared a low-cost marker-less methodology with a high-end marker-based system. We tested them on three different working tasks, assessing the working risk of lifting loads. The two methodologies showed comparable precision in all the investigations.
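
    For context, the NIOSH lifting equation referred to above computes a recommended weight limit (RWL) as a load constant scaled by six multipliers, and the lifting index is the actual load divided by the RWL. A sketch of the metric formulation follows; the frequency and coupling multipliers normally come from the published NIOSH tables, so fixed illustrative values are used here.

      def niosh_rwl(h_cm, v_cm, d_cm, a_deg, fm=1.0, cm=1.0):
          # Sketch of the revised NIOSH lifting equation (metric form):
          # RWL = LC * HM * VM * DM * AM * FM * CM, with LC = 23 kg.
          # FM (frequency) and CM (coupling) are normally taken from the NIOSH tables;
          # fixed values are used here purely for illustration.
          lc = 23.0                                   # load constant, kg
          hm = 25.0 / max(h_cm, 25.0)                 # horizontal multiplier
          vm = 1.0 - 0.003 * abs(v_cm - 75.0)         # vertical multiplier
          dm = 0.82 + 4.5 / max(d_cm, 25.0)           # distance multiplier
          am = 1.0 - 0.0032 * a_deg                   # asymmetry multiplier
          return lc * hm * vm * dm * am * fm * cm

      def lifting_index(load_kg, rwl_kg):
          # LI > 1 indicates an elevated risk for the lifting task.
          return load_kg / rwl_kg

      # Example: hands 40 cm from the body, starting 30 cm above the floor,
      # lifted 50 cm, with 30 degrees of trunk twist, for a 10 kg load.
      rwl = niosh_rwl(h_cm=40, v_cm=30, d_cm=50, a_deg=30)
      print(rwl, lifting_index(10.0, rwl))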

  17. Passive in-home measurement of stride-to-stride gait variability comparing vision and Kinect sensing.

    PubMed

    Stone, Erik E; Skubic, Marjorie

    2011-01-01

    We present an analysis of measuring stride-to-stride gait variability passively, in a home setting using two vision based monitoring techniques: anonymized video data from a system of two web-cameras, and depth imagery from a single Microsoft Kinect. Millions of older adults fall every year. The ability to assess the fall risk of elderly individuals is essential to allowing them to continue living safely in independent settings as they age. Studies have shown that measures of stride-to-stride gait variability are predictive of falls in older adults. For this analysis, a set of participants were asked to perform a number of short walks while being monitored by the two vision based systems, along with a marker based Vicon motion capture system for ground truth. Measures of stride-to-stride gait variability were computed using each of the systems and compared against those obtained from the Vicon.

  18. Development of a Kinect-based exergaming system for motor rehabilitation in neurological disorders

    NASA Astrophysics Data System (ADS)

    Estepa, A.; Sponton Piriz, S.; Albornoz, E.; Martínez, C.

    2016-04-01

    The development of videogames for physical therapy, known as exergames, has gained much interest in recent years. In this work, a system for rehabilitation and clinical evaluation of neurological patients is presented. The Microsoft Kinect device is used to track the full body of patients, and three games were developed to exercise and assess different aspects of balance and gait rehabilitation. The system provides visual feedback by means of an avatar that follows the movements of the patients, and sound and visual stimuli for giving orders during the experience. Also, the system includes a database and management tools for further analysis and monitoring of therapies. The results obtained show, on the one hand, a great reception and strong interest from patients in using the system. On the other hand, the specialists considered the collected data and the quantitative analysis provided by the system very useful, and it was subsequently adopted into the clinical routine.

  19. Getting the point across: exploring the effects of dynamic virtual humans in an interactive museum exhibit on user perceptions.

    PubMed

    Rivera-Gutierrez, Diego; Ferdig, Rick; Li, Jian; Lok, Benjamin

    2014-04-01

    We have created “You, M.D.”, an interactive museum exhibit in which users learn about topics in public health literacy while interacting with virtual humans. You, M.D. is equipped with a weight sensor, a height sensor and a Microsoft Kinect that gather basic user information. Conceptually, You, M.D. could use this user information to dynamically select the appearance of the virtual humans in the interaction, attempting to improve learning outcomes and user perception for each particular user. For this concept to be possible, a better understanding of how different elements of the visual appearance of a virtual human affect user perceptions is required. In this paper, we present the results of an initial user study with a large sample size (n = 333) run using You, M.D. The study measured users’ reactions based on the user’s gender and body-mass index (BMI) when facing virtual humans with BMI either concordant or discordant with the user’s BMI. The results of the study indicate that concordance between the users’ BMI and the virtual human’s BMI affects male and female users differently. The results also show that female users rate virtual humans as more knowledgeable than male users rate the same virtual humans.

  20. Reliability and comparison of Kinect-based methods for estimating spatiotemporal gait parameters of healthy and post-stroke individuals.

    PubMed

    Latorre, Jorge; Llorens, Roberto; Colomer, Carolina; Alcañiz, Mariano

    2018-04-27

    Different studies have analyzed the potential of the off-the-shelf Microsoft Kinect, in its different versions, to estimate spatiotemporal gait parameters as a portable markerless low-cost alternative to laboratory grade systems. However, variability in populations, measures, and methodologies prevents accurate comparison of the results. The objective of this study was to determine and compare the reliability of the existing Kinect-based methods to estimate spatiotemporal gait parameters in healthy and post-stroke adults. Forty-five healthy individuals and thirty-eight stroke survivors participated in this study. Participants walked five meters at a comfortable speed and their spatiotemporal gait parameters were estimated from the data retrieved by a Kinect v2, using the most common methods in the literature, and by visual inspection of the videotaped performance. Errors between both estimations were computed. For both healthy and post-stroke participants, highest accuracy was obtained when using the speed of the ankles to estimate gait speed (3.6-5.5 cm/s), stride length (2.5-5.5 cm), and stride time (about 45 ms), and when using the distance between the sacrum and the ankles and toes to estimate double support time (about 65 ms) and swing time (60-90 ms). Although the accuracy of these methods is limited, these measures could occasionally complement traditional tools. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. On the Use of a Low-Cost Thermal Sensor to Improve Kinect People Detection in a Mobile Robot

    PubMed Central

    Susperregi, Loreto; Sierra, Basilio; Castrillón, Modesto; Lorenzo, Javier; Martínez-Otzeta, Jose María; Lazkano, Elena

    2013-01-01

    Detecting people is a key capability for robots that operate in populated environments. In this paper, we adopt a hierarchical approach that combines classifiers created using supervised learning in order to identify whether or not a person is in the view-scope of the robot. Our approach makes use of vision, depth and thermal sensors mounted on top of a mobile platform. The sensor suite combines the rich data source offered by a Kinect sensor, which provides vision and depth at low cost, with a thermopile array sensor. Experimental results obtained with a mobile platform in a manufacturing shop floor and in a science museum show that combining the cues drastically reduces the false positive rate obtained with any single cue. The performance of our algorithm improves on other well-known approaches, such as C4 and histograms of oriented gradients (HOG). PMID:24172285

  2. Semantic Mapping and Motion Planning with Turtlebot Roomba

    NASA Astrophysics Data System (ADS)

    Aslam Butt, Rizwan; Usman Ali, Syed M.

    2013-12-01

    In this paper, we successfully demonstrate semantic mapping and motion planning experiments on a Turtlebot robot using the Microsoft Kinect in the ROS environment. Moreover, we also perform comparative studies of various sampling-based motion planning algorithms with the Turtlebot in the Open Motion Planning Library. Our comparative analysis reveals that Expansive Space Trees (EST) outperformed all other approaches with respect to memory occupation and processing time. We also summarize the related concepts of autonomous robotics, which we hope will be helpful for beginners.

  3. Kinect-Based Five-Times-Sit-to-Stand Test for Clinical and In-Home Assessment of Fall Risk in Older People.

    PubMed

    Ejupi, Andreas; Brodie, Matthew; Gschwind, Yves J; Lord, Stephen R; Zagler, Wolfgang L; Delbaere, Kim

    2015-01-01

    Accidental falls remain an important problem in older people. The five-times-sit-to-stand (5STS) test is commonly used as a functional test to assess fall risk. Recent advances in sensor technologies hold great promise for more objective and accurate assessments. The aims of this study were: (1) to examine the feasibility of a low-cost and portable Kinect-based 5STS test to discriminate between fallers and nonfallers and (2) to investigate whether this test can be used for supervised clinical, supervised and unsupervised in-home fall risk assessments. A total of 94 community-dwelling older adults were assessed by the Kinect-based 5STS test in the laboratory and 20 participants were tested in their own homes. An algorithm was developed to automatically calculate timing- and speed-related measurements from the Kinect-based sensor data to discriminate between fallers and nonfallers. The associations of these measurements with standard clinical fall risk tests and the results of supervised and unsupervised in-home assessments were examined. Fallers were significantly slower than nonfallers on Kinect-based measures. The mean velocity of the sit-to-stand transitions discriminated well between the fallers and nonfallers based on 12-month retrospective fall data. The Kinect-based measures collected in the laboratory correlated strongly with those collected in the supervised (r = 0.704-0.832) and unsupervised (r = 0.775-0.931) in-home assessments. In summary, we found that the Kinect-based 5STS test discriminated well between the fallers and nonfallers and was feasible to administer in clinical and supervised in-home settings. This test may be useful in clinical settings for identifying high-risk fallers for further intervention or for regular in-home assessments in the future. © 2015 S. Karger AG, Basel.

  4. Tensor body: real-time reconstruction of the human body and avatar synthesis from RGB-D.

    PubMed

    Barmpoutis, Angelos

    2013-10-01

    Real-time 3-D reconstruction of the human body has many applications in anthropometry, telecommunications, gaming, fashion, and other areas of human-computer interaction. In this paper, a novel framework is presented for reconstructing the 3-D model of the human body from a sequence of RGB-D frames. The reconstruction is performed in real time while the human subject moves arbitrarily in front of the camera. The method employs a novel parameterization of cylindrical-type objects using Cartesian tensor and b-spline bases along the radial and longitudinal dimension respectively. The proposed model, dubbed tensor body, is fitted to the input data using a multistep framework that involves segmentation of the different body regions, robust filtering of the data via a dynamic histogram, and energy-based optimization with positive-definite constraints. A Riemannian metric on the space of positive-definite tensor splines is analytically defined and employed in this framework. The efficacy of the presented methods is demonstrated in several real-data experiments using the Microsoft Kinect sensor.

  5. Hybrid Orientation Based Human Limbs Motion Tracking Method

    PubMed Central

    Glonek, Grzegorz; Wojciechowski, Adam

    2017-01-01

    One of the key technologies behind human–machine interaction and human motion diagnosis is limb motion tracking. To make limb tracking efficient, it must be able to estimate a precise and unambiguous position of each tracked human joint and the resulting body part pose. In recent years, body pose estimation has become very popular and broadly available to home users because of easy access to cheap tracking devices. Its robustness can be improved by fusing data from different tracking modes. The paper defines a novel approach, orientation-based data fusion, instead of the position-based approach dominating in the literature, for two classes of tracking devices: depth sensors (i.e., Microsoft Kinect) and inertial measurement units (IMUs). A detailed analysis of their working characteristics allowed us to elaborate a new method that fuses limb orientation data from both devices more precisely and compensates for their imprecisions. The paper presents a series of experiments that verified the method's accuracy. This novel approach outperformed the precision of position-based joint tracking, the methods dominating in the literature, by up to 18%. PMID:29232832

  6. NeuroKinect: A Novel Low-Cost 3Dvideo-EEG System for Epileptic Seizure Motion Quantification

    PubMed Central

    Cunha, João Paulo Silva; Choupina, Hugo Miguel Pereira; Rocha, Ana Patrícia; Fernandes, José Maria; Achilles, Felix; Loesch, Anna Mira; Vollmar, Christian; Hartl, Elisabeth; Noachtar, Soheyl

    2016-01-01

    Epilepsy is a common neurological disorder which affects 0.5–1% of the world population. Its diagnosis relies both on Electroencephalogram (EEG) findings and on characteristic seizure-induced body movements, called seizure semiology. Thus, synchronous EEG and 2D video recording systems (known as Video-EEG) are the most accurate tools for epilepsy diagnosis. Despite the establishment of several quantitative methods for EEG analysis, seizure semiology is still analyzed by visual inspection, based on epileptologists’ subjective interpretation of the movements of interest (MOIs) that occur during recorded seizures. In this contribution, we present NeuroKinect, a low-cost, easy-to-set-up and easy-to-operate solution for a novel 3D video-EEG system. It is based on an RGB-D sensor (Microsoft Kinect camera) and performs 24/7 monitoring of an Epilepsy Monitoring Unit (EMU) bed. It does not require the attachment of any reflectors or sensors to the patient’s body and has a very low maintenance load. To evaluate its performance and usability, we mounted a state-of-the-art 6-camera motion-capture system and our low-cost solution over the same EMU bed. A comparative study of seizure-simulated MOIs showed an average correlation of the resulting 3D motion trajectories of 84.2%. Then, we used our system in the routine of an EMU and collected 9 different seizures, from which we could perform 3D kinematic analysis of 42 MOIs arising from the temporal (TLE) (n = 19) and extratemporal (ETE) brain regions (n = 23). The obtained results showed that movement displacement and movement extent discriminated both seizure MOI groups at statistically significant levels (mean = 0.15 m vs. 0.44 m, p<0.001; mean = 0.068 m3 vs. 0.14 m3, p<0.05, respectively). Furthermore, TLE MOIs were significantly shorter than ETE MOIs (mean = 23 seconds vs 35 seconds, p<0.01) and presented higher jerking levels (mean = 345 ms-3 vs 172 ms-3, p<0.05). Our newly implemented 3D approach is 87.5% faster in extracting body motion trajectories than a 2D frame-by-frame tracking procedure. We conclude that this new approach provides a more comfortable (both for patients and clinical professionals), simpler, faster and lower-cost procedure than previous approaches, therefore providing a reliable tool to quantitatively analyze MOI patterns of epileptic seizures in the routine of EMUs around the world. We hope this study encourages other EMUs to adopt similar approaches so that more quantitative information is used to improve epilepsy diagnosis. PMID:26799795

  7. Automated Assessment of Postural Stability (AAPS)

    DTIC Science & Technology

    2017-10-01

    evaluation capability, 15 healthy subjects (7 male, 8 female) were required to perform the BESS test, while simultaneously being tracked by a Kinect 2.0...scale, specific behaviors corresponding to deficits in postural control while simultaneously spotting the subject to prevent falls. The subject under...of the error detection algorithm, we simultaneously collected data using a Kinect sensor and a 12-Camera Qualisys system. Qualisys data have been post

  8. Validity and reliability of head posture measurement using Microsoft Kinect.

    PubMed

    Oh, Baek-Lok; Kim, Jongmin; Kim, Jongshin; Hwang, Jeong-Min; Lee, Jehee

    2014-11-01

    To investigate the validity and reliability of Microsoft Kinect-based head tracker (KHT) for measuring head posture. Considering the cervical range of motion (CROM) as a reference, one-dimensional and three-dimensional (1D and 3D) head postures of 12 normal subjects (28-58 years of age; 6 women and 6 men) were obtained using the KHT. The KHT was validated by Pearson's correlation coefficient and intraclass correlation (ICC) coefficient. Test-retest reliability of the KHT was determined by its 95% limit of agreement (LoA) with the Bland-Altman plot. Face recognition success rate was evaluated for each head posture. Measurements of 1D and 3D head posture performed using the KHT were very close to those of the CROM with correlation coefficients of 0.99 and 0.97 (p<0.05), respectively, as well as with an ICC of >0.99 and 0.98, respectively. The reliability tests of the KHT in terms of 1D and 3D head postures had 95% LoA angles of approximately ±2.5° and ±6.5°, respectively. The KHT showed good agreement with the CROM and relatively favourable test-retest reliability. Considering its high performance, convenience and low cost, KHT could be clinically used as a head posture-measuring system. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  9. Dactyl Alphabet Gesture Recognition in a Video Sequence Using Microsoft Kinect

    NASA Astrophysics Data System (ADS)

    Artyukhin, S. G.; Mestetskiy, L. M.

    2015-05-01

    This paper presents an efficient framework for solving the problem of static gesture recognition based on data obtained from web cameras and the Kinect depth sensor (RGB-D data). Each gesture is given by a pair of images: a color image and a depth map. The database stores gestures by their feature descriptions, generated frame by frame for each gesture of the alphabet. The recognition algorithm takes as input a video sequence (a sequence of frames) to be labeled, puts each frame of the sequence in correspondence with a gesture from the database, or decides that there is no suitable gesture in the database. First, each frame of the video sequence is classified separately, without interframe information. Then, a run of consecutive frames labeled with the same gesture is grouped into a single static gesture. We propose a method for combined segmentation of a frame using the depth map and the RGB image. The primary segmentation is based on the depth map; it provides information about the position of the hand and a rough hand border. Then, based on the color image, the border is refined and the shape of the hand is analyzed. The continuous skeleton method is used to generate features. We propose a method based on terminal skeleton branches, which makes it possible to determine the positions of the fingers and the wrist. The classification features for a gesture describe the positions of the fingers relative to the wrist. Experiments were carried out with the developed algorithm on the example of American Sign Language. An American Sign Language gesture has several components, including the shape of the hand, its orientation in space and the type of movement. The accuracy of the proposed method is evaluated on a collected gesture database consisting of 2700 frames.
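    The coarse depth-then-colour segmentation described above can be illustrated with a short sketch. The depth band around the nearest object and the crude RGB skin test below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def segment_hand(depth_mm, rgb, hand_depth_band=120):
    """Coarse hand segmentation: depth first, colour refinement second.

    depth_mm : (H, W) uint16 depth map in millimetres (0 = no reading)
    rgb      : (H, W, 3) uint8 colour image registered to the depth map
    """
    valid = depth_mm > 0
    nearest = depth_mm[valid].min()                      # hand assumed closest to camera
    rough = valid & (depth_mm < nearest + hand_depth_band)

    # very crude skin-colour test, used only to refine the rough border
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

    return rough & skin                                  # final binary hand mask
```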

  10. A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots

    PubMed Central

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-01-01

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance from its visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system. PMID:25856331
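    The second-order motion model mentioned above corresponds to a constant-acceleration Kalman filter. The sketch below shows a generic predict/update step for the target's image position; the state layout, frame interval and noise covariances are assumptions for illustration rather than the parameters used in the paper.

```python
import numpy as np

dt = 1.0 / 30.0  # assumed frame interval

# state: [x, y, vx, vy, ax, ay]  (constant-acceleration model)
F = np.eye(6)
for i in range(2):
    F[i, i + 2] = dt
    F[i, i + 4] = 0.5 * dt ** 2
    F[i + 2, i + 4] = dt

H = np.zeros((2, 6))
H[0, 0] = H[1, 1] = 1.0          # only the target position is observed
Q = np.eye(6) * 1e-2             # process noise (assumed)
R = np.eye(2) * 4.0              # measurement noise in pixels (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle; z is the measured (x, y) target position."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P   # the predicted position can be used to place CT candidate patches
```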

  11. A Kinect-based real-time compressive tracking prototype system for amphibious spherical robots.

    PubMed

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-04-08

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance from its visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.

  12. Game Design to Measure Reflexes and Attention Based on Biofeedback Multi-Sensor Interaction

    PubMed Central

    Ortiz-Vigon Uriarte, Inigo de Loyola; Garcia-Zapirain, Begonya; Garcia-Chimeno, Yolanda

    2015-01-01

    This paper presents a multi-sensor system for implementing biofeedback as a human-computer interaction technique in a game involving driving cars in risky situations. The sensors used are: an eye tracker, Kinect, pulsometer, respirometer, electromyography (EMG) and galvanic skin resistance (GSR). An algorithm has been designed which gives rise to an interaction logic with the game according to the set of physiological constants obtained from the sensors. The results show a score of 72.333 on the System Usability Scale (SUS), a significant difference of p = 0.026 in GSR values between the start and end of the game, and a correlation of r = 0.659 (p = 0.008), while playing with the Kinect, between the breathing level and the energy and joy factor. All the sensors used had an impact on the end results, so none of them should be disregarded in future lines of research, even though it would be interesting to obtain breathing values separately from the cardio signal. PMID:25789493

  13. Kinect, a Novel Cutting Edge Tool in Pavement Data Collection

    NASA Astrophysics Data System (ADS)

    Mahmoudzadeh, A.; Firoozi Yeganeh, S.; Golroo, A.

    2015-12-01

    Pavement roughness and surface distress detection is of interest to decision makers due to vehicle safety, user satisfaction, and cost savings. Data collection, as a core component of pavement management systems, is required for these detections. There are two major types of data collection: traditional/manual data collection and automated/semi-automated data collection. This paper studies different non-destructive tools for detecting cracks and potholes. For this purpose, recently utilized automated data collection tools are discussed and their applications are critiqued. The main issue with them is the significant capital investment needed to buy the data collection vehicle. The main scope of this paper is to study approaches and related tools that are not only cost-effective but also precise and accurate. The new sensor, called Kinect, has all of these specifications. It can capture both RGB images and depth, which are of significant use in measuring cracks and potholes. This sensor is able to take images of surfaces with adequate resolution to detect cracks, along with measuring the distance between the sensor and the obstacles in front of it, which yields the depth of defects. This technology has only recently been studied by a few researchers in different fields such as project management and biomedical engineering. Pavement management has not paid enough attention to the use of Kinect in monitoring and detecting distresses. This paper is aimed at providing a thorough literature review on the usage of Kinect in pavement management and finally proposing the best approach, one which is cost-effective and precise.

  14. Kinect2 - respiratory movement detection study.

    PubMed

    Rihana, Sandy; Younes, Elie; Visvikis, Dimitris; Fayad, Hadi

    2016-08-01

    Radiotherapy is one of the main cancer treatments. It consists of irradiating tumor cells to destroy them while sparing healthy tissue. The treatment is planned based on Computed Tomography (CT) and is delivered over fractions during several days. One of the main challenges is repositioning the patient in the same position every day in order to irradiate the tumor volume while sparing healthy tissues. Many patient positioning techniques are available, but they are either invasive or inaccurate, performed using tattooed markers on the patient's skin aligned with a laser system calibrated in the treatment room, or using X-ray imaging. Current systems such as Vision RT use two time-of-flight cameras. Time-of-flight cameras have the advantage of a very fast acquisition rate, which allows real-time monitoring of patient movement and patient repositioning. The purpose of this work is to test the Microsoft Kinect2 camera for potential use in patient positioning and respiration triggering. This type of time-of-flight camera is non-invasive and low-cost, which facilitates its transfer to clinical practice.

  15. The development and evaluation of a program for leg-strengthening exercises and balance assessment using Kinect.

    PubMed

    Choi, Jin-Seung; Kang, Dong-Won; Seo, Jeong-Woo; Kim, Dae-Hyeok; Yang, Seung-Tae; Tack, Gye-Rae

    2016-01-01

    [Purpose] In this study, a program was developed for leg-strengthening exercises and balance assessment using Microsoft Kinect. [Subjects and Methods] The program consists of three leg-strengthening exercises (knee flexion, hip flexion, and hip extension) and the one-leg standing test (OLST). The program recognizes the correct exercise posture by comparison with the range of motion of the hip and knee joints and provides a number of correct action examples to improve training. The program measures the duration of the OLST and presents this as the balance-age. The accuracy of the program was analyzed using the data of five male adults. [Results] In terms of the motion recognition accuracy, the sensitivity and specificity were 95.3% and 100%, respectively. For the balance assessment, the time measured using the existing method with a stopwatch had an absolute error of 0.37 sec. [Conclusion] The developed program can be used to enable users to conduct leg-strengthening exercises and balance assessments at home.

  16. Kazakh Traditional Dance Gesture Recognition

    NASA Astrophysics Data System (ADS)

    Nussipbekov, A. K.; Amirgaliyev, E. N.; Hahn, Minsoo

    2014-04-01

    Full body gesture recognition is an important and interdisciplinary research field which is widely used in many application areas, including dance gesture recognition. The rapid growth of technology in recent years has contributed much to this domain. However, it remains a challenging task. In this paper we implement Kazakh traditional dance gesture recognition. We use a Microsoft Kinect camera to obtain human skeleton and depth information. Then we apply a tree-structured Bayesian network and the Expectation Maximization algorithm with K-means clustering to calculate conditional linear Gaussians for classifying poses. Finally, we use a Hidden Markov Model to detect dance gestures. Our main contribution is that we extend the Kinect skeleton by adding the headwear as a new skeleton joint, which is calculated from the depth image. This novelty allows us to significantly improve the accuracy of head gesture recognition of a dancer, which in turn plays a considerable role in whole body gesture recognition. Experimental results show the efficiency of the proposed method and that its performance is comparable to that of state-of-the-art systems.

  17. A Tool for the Automated Collection of Space Utilization Data: Three Dimensional Space Utilization Monitor

    NASA Technical Reports Server (NTRS)

    Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.

    2017-01-01

    Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP) and the Behavioral Health and Performance (BHP) Element are conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to crew for required activities, as well as layout and accommodations within the volume. NASA needs methods to unobtrusively collect NHV data without impacting crew time. Data required include metrics such as location and orientation of crew, volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments such methods exist, yet many are obtrusive and require significant post-processing. Examples used in terrestrial settings include infrared (IR) retro-reflective marker based motion capture, GPS sensor tracking, inertial tracking, and multi-camera methods. Due to the constraints of space operations, many such methods are infeasible: inertial tracking systems typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems are large and require extensive calibration. However, multiple technologies have not been applied to space operations for these purposes. Two of these are 3D Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and depth imaging systems which allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect or Light Detection and Ranging / Light-Radar systems, referred to as LIDAR).

  18. Developing a Natural User Interface and Facial Recognition System With OpenCV and the Microsoft Kinect

    NASA Technical Reports Server (NTRS)

    Gutensohn, Michael

    2018-01-01

    The task for this project was to design, develop, test, and deploy a facial recognition system for the Kennedy Space Center Augmented/Virtual Reality Lab. This system will serve as a means of user authentication as part of the NUI of the lab. The overarching goal is to create a seamless user interface that will allow the user to initiate and interact with AR and VR experiences without ever needing to use a mouse or keyboard at any step in the process.

  19. Multiple object, three-dimensional motion tracking using the Xbox Kinect sensor

    NASA Astrophysics Data System (ADS)

    Rosi, T.; Onorato, P.; Oss, S.

    2017-11-01

    In this article we discuss the capability of the Xbox Kinect sensor to acquire three-dimensional motion data of multiple objects. Two experiments regarding fundamental features of Newtonian mechanics are performed to test the tracking abilities of our setup. Particular attention is paid to checking and visualising the conservation of linear momentum, angular momentum and energy. In both experiments, two objects are tracked while falling in the gravitational field. The obtained data are visualised in a 3D virtual environment to help students understand the physics behind the performed experiments. The proposed experiments were analysed with a group of university students who are aspiring physics and mathematics teachers. Their comments are presented in this paper.

  20. Gaze Estimation Method Using Analysis of Electrooculogram Signals and Kinect Sensor

    PubMed Central

    Tanno, Koichi

    2017-01-01

    A gaze estimation system is one of the communication methods for severely disabled people who cannot perform gestures and speech. We previously developed an eye tracking method using a compact and light electrooculogram (EOG) signal, but its accuracy is not very high. In the present study, we conducted experiments to investigate the EOG component strongly correlated with the change of eye movements. The experiments in this study are of two types: experiments to see objects only by eye movements and experiments to see objects by face and eye movements. The experimental results show the possibility of an eye tracking method using EOG signals and a Kinect sensor. PMID:28912800

  1. Kinecting Physics: Conceptualization of Motion Through Visualization and Embodiment

    NASA Astrophysics Data System (ADS)

    Anderson, Janice L.; Wall, Steven D.

    2016-04-01

    The purpose of this work was to share our findings in using the Kinect technology to facilitate the understanding of basic kinematics with middle school science classrooms. This study marks the first three iterations of this design-based research that examines the pedagogical potential of using the Kinect technology. To this end, we explored the impact of using the Kinect in conjunction with an SDK Physical Virtual Graphing program on students' understanding of displacement, velocity and acceleration compared to students who conducted more traditional inquiry of the same concepts. Results of this study show that, while there may be some affordances to be gained from integrating this technology, there is a need for a scaffolded approach that helps students to understand the "messiness" of the data collected. Further, meta-cognitive activities, such as reflective opportunities, should be integrated into the inquiry experiences in order to scaffold student learning and reinforce concepts being presented. While the Kinect did work to generate large-scale visualization and embodied interactions that served as a mechanism for student understanding, this study also suggests that a complementary approach that includes both the use of hands-on inquiry and the use of the Kinect sensor, with each activity informing the other, could be a powerful technique for supporting students' learning of kinematics.

  2. Self-esteem recognition based on gait pattern using Kinect.

    PubMed

    Sun, Bingli; Zhang, Zhan; Liu, Xingyun; Hu, Bin; Zhu, Tingshao

    2017-10-01

    Self-esteem is an important aspect of an individual's mental health. When subjects are not able to complete a self-report questionnaire, behavioral assessment is a good supplement. In this paper, we propose to use gait data collected by Kinect as an indicator to recognize self-esteem. 178 graduate students without disabilities participated in our study. First, all participants completed the 10-item Rosenberg Self-Esteem Scale (RSS) to acquire a self-esteem score. After completing the RSS, each participant walked for two minutes naturally on a rectangular red carpet, and the gait data were recorded using a Kinect sensor. After data preprocessing, we extracted a number of behavioral features to train a predicting model by machine learning. Based on these features, we built predicting models to recognize self-esteem. For self-esteem prediction, the best correlation coefficient between the predicted score and the self-report score is 0.45 (p<0.001). Dividing the participants by gender, the correlation coefficient is 0.43 (p<0.001) for males and 0.59 (p<0.001) for females. Using gait data captured by the Kinect sensor, we find that gait patterns can be used to recognize self-esteem with fairly good criterion validity. The gait predicting model can be taken as a good supplementary method to measure self-esteem. Copyright © 2017 Elsevier B.V. All rights reserved.
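    A minimal version of the predicting-model step might look like the following: gait features are regressed onto the self-esteem score, and criterion validity is reported as the Pearson correlation between cross-validated predictions and self-report scores. The feature matrix and scores below are random placeholders, and ridge regression is only one plausible choice of learner, not necessarily the one used in the paper.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

# X: per-participant gait features extracted from Kinect skeletons
#    (e.g. stride time, arm swing amplitude, trunk sway) -- placeholder data here
# y: Rosenberg Self-Esteem Scale scores (placeholder values)
rng = np.random.default_rng(0)
X = rng.normal(size=(178, 12))              # 178 participants, 12 assumed features
y = rng.normal(loc=30, scale=5, size=178)   # placeholder RSS scores

model = Ridge(alpha=1.0)
y_pred = cross_val_predict(model, X, y, cv=10)   # out-of-sample predictions
r, p = pearsonr(y_pred, y)
print(f"criterion validity: r = {r:.2f}, p = {p:.3g}")
```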

  3. Automated extraction and validation of children's gait parameters with the Kinect.

    PubMed

    Motiian, Saeid; Pergami, Paola; Guffey, Keegan; Mancinelli, Corrie A; Doretto, Gianfranco

    2015-12-02

    Gait analysis for therapy regimen prescription and monitoring requires patients to physically access clinics with specialized equipment. The timely availability of such infrastructure at the right frequency is especially important for small children. Besides being very costly, this is a challenge for many children living in rural areas. This is why this work develops a low-cost, portable, and automated approach for in-home gait analysis, based on the Microsoft Kinect. A robust and efficient method for extracting gait parameters is introduced, which copes with the high variability of noisy Kinect skeleton tracking data experienced across the population of young children. This is achieved by temporally segmenting the data with an approach based on coupling a probabilistic matching of stride template models, learned offline, with the estimation of their global and local temporal scaling. A preliminary study conducted on healthy children between 2 and 4 years of age is performed to analyze the accuracy, precision, repeatability, and concurrent validity of the proposed method against the GAITRite when measuring several spatial and temporal children's gait parameters. The method has excellent accuracy and good precision, with segmenting temporal sequences of body joint locations into stride and step cycles. Also, the spatial and temporal gait parameters, estimated automatically, exhibit good concurrent validity with those provided by the GAITRite, as well as very good repeatability. In particular, on a range of nine gait parameters, the relative and absolute agreements were found to be good and excellent, and the overall agreements were found to be good and moderate. This work enables and validates the automated use of the Kinect for children's gait analysis in healthy subjects. In particular, the approach makes a step forward towards developing a low-cost, portable, parent-operated in-home tool for clinicians assisting young children.

  4. Remotely controlling of mobile robots using gesture captured by the Kinect and recognized by machine learning method

    NASA Astrophysics Data System (ADS)

    Hsu, Roy CHaoming; Jian, Jhih-Wei; Lin, Chih-Chuan; Lai, Chien-Hung; Liu, Cheng-Ting

    2013-01-01

    The main purpose of this paper is to use a machine learning method together with the Kinect and its body sensing technology to design a simple, convenient, yet effective robot remote control system. In this study, a Kinect sensor is used to capture the human body skeleton with depth information, and a gesture training and identification method is designed using a back-propagation neural network to remotely command a mobile robot to perform certain actions via Bluetooth. The experimental results show that the designed mobile robot remote control system can achieve, on average, more than 96% accurate identification of 7 types of gestures and can effectively control a real e-puck robot with the designed commands.
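    A back-propagation network of the kind described above can be sketched with scikit-learn's multilayer perceptron. The data below are random placeholders standing in for normalized Kinect skeleton vectors; the network size and the mapping of predictions to Bluetooth commands are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Each sample: a flattened vector of normalized 3D joint coordinates for one pose.
# Random placeholder data stands in for skeletons captured with the Kinect.
rng = np.random.default_rng(1)
X = rng.normal(size=(700, 20 * 3))          # 700 samples, 20 joints x (x, y, z)
y = rng.integers(0, 7, size=700)            # 7 gesture classes, as in the paper

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# a small multilayer perceptron trained with back-propagation
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print("gesture recognition accuracy:", clf.score(X_test, y_test))
# the predicted class would then be mapped to a Bluetooth command for the robot
```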

  5. Using Data From the Microsoft Kinect 2 to Quantify Upper Limb Behavior: A Feasibility Study.

    PubMed

    Dehbandi, Behdad; Barachant, Alexandre; Harary, David; Long, John Davis; Tsagaris, K Zoe; Bumanlag, Silverio Joseph; He, Victor; Putrino, David

    2017-09-01

    The objective of this study was to assess whether the novel application of a machine learning approach to data collected from the Microsoft Kinect 2 (MK2) could be used to classify differing levels of upper limb impairment. Twenty-four healthy subjects completed items of the Wolf Motor Function Test (WMFT), which is a clinically validated metric of upper limb function for stroke survivors. Subjects completed the WMFT three times: 1) as a healthy individual; 2) emulating mild impairment; and 3) emulating moderate impairment. A MK2 was positioned in front of participants, and collected kinematic data as they completed the WMFT. A classification framework, based on Riemannian geometry and the use of covariance matrices as feature representation of the MK2 data, was developed for these data, and its ability to successfully classify subjects as either "healthy," "mildly impaired," or "moderately impaired" was assessed. Mean accuracy for our classifier was 91.7%, with a specific accuracy breakdown of 100%, 83.3%, and 91.7% for the "healthy," "mildly impaired," and "moderately impaired" conditions, respectively. We conclude that data from the MK2 is of sufficient quality to perform objective motor behavior classification in individuals with upper limb impairment. The data collection and analysis framework that we have developed has the potential to disrupt the field of clinical assessment. Future studies will focus on validating this protocol on large populations of individuals with actual upper limb impairments in order to create a toolkit that is clinically validated and available to the clinical community.
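    One simple way to prototype a covariance-based, Riemannian-geometry classifier like the one described above is a nearest-neighbour rule under the affine-invariant metric, as sketched below. This is a simplified stand-in for the authors' framework; the regularization constant and the nearest-neighbour decision rule are assumptions.

```python
import numpy as np
from scipy.linalg import eigvalsh

def covariance_feature(trial):
    """trial: (n_joints*3, n_frames) kinematic time series -> SPD covariance matrix."""
    trial = trial - trial.mean(axis=1, keepdims=True)
    cov = trial @ trial.T / trial.shape[1]
    return cov + 1e-6 * np.eye(cov.shape[0])     # regularize to stay positive definite

def riemannian_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B."""
    w = eigvalsh(B, A)                           # generalized eigenvalues of (B, A)
    return np.sqrt(np.sum(np.log(w) ** 2))

def classify(trial, reference_trials, reference_labels):
    """Nearest-neighbour classification in the space of covariance matrices."""
    C = covariance_feature(trial)
    dists = [riemannian_distance(C, covariance_feature(t)) for t in reference_trials]
    return reference_labels[int(np.argmin(dists))]
```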

  6. Generation of binary holograms with a Kinect sensor for a high speed color holographic display

    NASA Astrophysics Data System (ADS)

    Leportier, Thibault; Park, Min-Chul; Yano, Sumio; Son, Jung-Young

    2017-05-01

    The Kinect sensor is a device that enables to capture a real scene with a camera and a depth sensor. A virtual model of the scene can then be obtained with a point cloud representation. A complex hologram can then be computed. However, complex data cannot be used directly because display devices cannot handle amplitude and phase modulation at the same time. Binary holograms are commonly used since they present several advantages. Among the methods that were proposed to convert holograms into a binary format, the direct-binary search (DBS) not only gives the best performance, it also offers the possibility to choose the display parameters of the binary hologram differently than the original complex hologram. Since wavelength and reconstruction distance can be modified, compensation of chromatic aberrations can be handled. In this study, we examine the potential of DBS for RGB holographic display.

  7. Automatic Non-Destructive Growth Measurement of Leafy Vegetables Based on Kinect

    PubMed Central

    Hu, Yang; Wang, Le; Xiang, Lirong; Wu, Qian; Jiang, Huanyu

    2018-01-01

    Non-destructive plant growth measurement is essential for plant growth and health research. As a 3D sensor, the Kinect v2 has huge potential in agricultural applications, benefiting from its low price and strong robustness. The paper proposes a Kinect-based automatic system for non-destructive growth measurement of leafy vegetables. The system uses a turntable to acquire multi-view point clouds of the measured plant. A series of suitable algorithms is then applied to obtain a fine 3D reconstruction of the plant, while measuring the key growth parameters including relative/absolute height, total/projected leaf area and volume. In the experiments, 63 pots of lettuce in different growth stages were measured. The results show that the Kinect-measured height and projected area have a good linear relationship with the reference measurements, while the measured total area and volume both follow power-law relationships with the reference data. All these fits show good goodness of fit (R2 = 0.9457–0.9914). In the study of biomass correlations, the Kinect-measured volume was found to have a good power-law relationship (R2 = 0.9281) with fresh weight. In addition, the practicality of the system was validated by performance and robustness analysis. PMID:29518958
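    The reported power-law relation between Kinect-measured volume and fresh weight can be fitted by linear regression in log-log space, as in the sketch below. The numeric values are placeholders; only the fitting procedure is illustrated.

```python
import numpy as np

# placeholder measurements: Kinect-estimated plant volume (cm^3) and fresh weight (g)
volume = np.array([120.0, 260.0, 410.0, 690.0, 980.0, 1500.0])
weight = np.array([8.0, 19.0, 31.0, 55.0, 80.0, 130.0])

# fit weight = a * volume^b by linear regression in log-log space
b, log_a = np.polyfit(np.log(volume), np.log(weight), 1)
a = np.exp(log_a)

pred = a * volume ** b
ss_res = np.sum((np.log(weight) - np.log(pred)) ** 2)
ss_tot = np.sum((np.log(weight) - np.log(weight).mean()) ** 2)
print(f"weight ~= {a:.3g} * volume^{b:.2f}, R^2 (log space) = {1 - ss_res / ss_tot:.3f}")
```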

  8. A cyber-physical system for senior collapse detection

    NASA Astrophysics Data System (ADS)

    Grewe, Lynne; Magaña-Zook, Steven

    2014-06-01

    Senior Collapse Detection (SCD) is a system that uses cyber-physical techniques to create a "smart home" system to predict and detect the falling of senior/geriatric participants in home environments. This software application addresses the needs of millions of senior citizens who live at home by themselves and can find themselves in situations where they have fallen and need assistance. We discuss how SCD uses imagery, depth and audio to fuse and interact in a system that does not require the senior to wear any devices allowing them to be more autonomous. The Microsoft Kinect Sensor is used to collect imagery, depth and audio. We will begin by discussing the physical attributes of the "collapse detection problem". Next, we will discuss the task of feature extraction resulting in skeleton and joint tracking. Improvements in error detection of joint tracking will be highlighted. Next, we discuss the main module of "fall detection" using our mid-level skeleton features. Attributes including acceleration, position and room environment factor into the SCD fall detection decision. Finally, how a detected fall and the resultant emergency response are handled will be presented. Results in a home environment will be given.
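    A minimal skeleton-based fall test in the spirit of the system described above can be written as a height-and-velocity rule on a trunk joint, as sketched below. The thresholds and frame rate are illustrative assumptions; the actual SCD decision also draws on acceleration, room environment and audio cues, which are omitted here.

```python
import numpy as np

FPS = 30.0  # assumed Kinect frame rate

def detect_fall(spine_y, floor_y=0.0, height_thresh=0.45, speed_thresh=1.2):
    """Flag a probable fall from the vertical trajectory of the spine joint.

    spine_y : (n_frames,) height of the spine-base joint above the floor, in metres.
    A fall is flagged when the joint drops quickly and ends up close to the floor.
    Threshold values are assumptions for illustration only.
    """
    v = np.diff(spine_y) * FPS                       # vertical velocity (m/s)
    fast_drop = v < -speed_thresh                    # rapid downward movement
    near_floor = (spine_y[1:] - floor_y) < height_thresh
    return bool(np.any(fast_drop & near_floor))
```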

  9. The role of fluctuations and interactions in pedestrian dynamics

    NASA Astrophysics Data System (ADS)

    Corbetta, Alessandro; Meeusen, Jasper; Benzi, Roberto; Lee, Chung-Min; Toschi, Federico

    Understanding quantitatively the statistical behaviour of pedestrians walking in crowds is a major scientific challenge of paramount societal relevance. Walking humans exhibit a rich (stochastic) dynamics whose small and large deviations are driven, among others, by their own will as well as by environmental conditions. Via 24/7 automatic pedestrian tracking from multiple overhead Microsoft Kinect depth sensors, we collected large ensembles of pedestrian trajectories (on the order of tens of millions) in different real-life scenarios. These scenarios include both narrow corridors and large urban hallways, enabling us to cover and compare a wide spectrum of typical pedestrian dynamics. We investigate the pedestrian motion by measuring the PDFs, e.g. those of position, velocity and acceleration, at unprecedentedly high statistical resolution. We consider the dependence of the PDFs on flow conditions, focusing on diluted dynamics and pair-wise interactions ("collisions") for mutual avoidance. By means of Langevin-like models we model the measured data, including typical fluctuations and rare events. This work is part of the JSTP research programme "Vision driven visitor behaviour analysis and crowd management" with Project Number 341-10-001, which is financed by the Netherlands Organisation for Scientific Research (NWO).

  10. A low-cost rapid upper limb assessment method in manual assembly line based on somatosensory interaction technology

    NASA Astrophysics Data System (ADS)

    Jiang, Shengqian; Liu, Peng; Fu, Danni; Xue, Yiming; Luo, Wentao; Wang, Mingjie

    2017-04-01

    As an effective survey method for upper limb disorders, rapid upper limb assessment (RULA) is widely applied in industry. However, it is very difficult to rapidly evaluate an operator's postures in a real, complex workplace. In this paper, a real-time RULA method is proposed to accurately assess the potential risk of an operator's postures based on somatosensory data collected from a Kinect sensor, a motion sensing input device by Microsoft. First, the static position information of each bone point is collected to obtain the effective angles of body parts using joint-angle-based calculation methods. Second, a whole-body RULA score is obtained to assess the risk level of the current posture in real time. Third, these RULA scores are compared with the results provided by a group of ergonomic practitioners who were asked to observe the same static postures. All the experiments were carried out in an ergonomics lab. The results show that the proposed method can detect an operator's postures more accurately. Moreover, the method works in real time, which can improve evaluation efficiency.
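    The joint-angle calculation underlying such a RULA score can be sketched directly from three tracked Kinect joints: the angle at the middle joint is the angle between the two segment vectors. This is a generic illustration of that step, not the paper's full scoring procedure.

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` formed by the segments joint->proximal and joint->distal.

    Each argument is a 3D position from the Kinect skeleton, e.g.
    shoulder, elbow, wrist for the elbow flexion angle used in RULA scoring.
    """
    u = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
    v = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# example: elbow angle from three (x, y, z) joint positions in metres
print(joint_angle((0.1, 1.40, 2.0), (0.1, 1.15, 2.0), (0.3, 1.05, 2.0)))
```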

  11. Validity of the microsoft kinect system in assessment of compensatory stepping behavior during standing and treadmill walking.

    PubMed

    Shani, Guy; Shapiro, Amir; Oded, Goldstein; Dima, Kagan; Melzer, Itshak

    2017-01-01

    Rapid compensatory stepping plays an important role in preventing falls when balance is lost; however, these responses cannot be accurately quantified in the clinic. The Microsoft Kinect™ system provides real-time anatomical landmark position data in three dimensions (3D), which may bridge this gap. Compensatory stepping reactions were evoked in 8 young adults by a sudden horizontal motion of the platform on which the subject stood or walked on a treadmill. The movements were recorded with both a 3D-APAS motion capture system and the Microsoft Kinect™ system. The outcome measures consisted of compensatory step times (milliseconds) and lengths (centimeters). The average values of two standing and walking trials for the Microsoft Kinect™ and 3D-APAS systems were compared using t-tests, Pearson's correlations, Bland-Altman plots, and the average root mean square error (RMSE) of joint positions. The Microsoft Kinect™ had high correlations for the compensatory step times (r = 0.75-0.78, p = 0.04) during standing and moderate correlations during walking (r = 0.53-0.63, p = 0.05). The step length, however, had very high correlations for both standing and walking (r > 0.97, p = 0.01). The RMSE showed acceptable differences during the perturbation trials, with the smallest relative error in the anterior-posterior direction (2-3%) and the highest in the vertical direction (11-13%). No systematic bias was evident in the Bland and Altman graphs. The Microsoft Kinect™ system provides comparable data to a video-based 3D motion analysis system when assessing step length, and less accurate but still clinically acceptable data for step times during balance recovery when balance is lost and a fall is initiated.
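    A rough version of the step-time and step-length measures compared above can be derived from the ankle trajectory alone, as in the sketch below. The frame rate and the velocity threshold separating swing from stance are assumptions for illustration.

```python
import numpy as np

FPS = 30.0            # assumed Kinect frame rate
VEL_THRESH = 0.25     # m/s, assumed threshold separating swing from stance

def compensatory_step(ankle_xyz):
    """Step time (ms) and step length (cm) of the first step after a perturbation.

    ankle_xyz : (n_frames, 3) positions of the stepping ankle, in metres.
    """
    horiz = ankle_xyz[:, [0, 2]]                       # ignore the vertical axis
    speed = np.linalg.norm(np.diff(horiz, axis=0), axis=1) * FPS
    moving = np.where(speed > VEL_THRESH)[0]
    if moving.size == 0:
        return None
    start = moving[0]
    # the step ends at the first frame after `start` where the ankle slows down again
    after = np.where(speed[start:] <= VEL_THRESH)[0]
    end = start + (after[0] if after.size else len(speed) - start)
    step_time_ms = (end - start) / FPS * 1000.0
    step_length_cm = np.linalg.norm(horiz[end] - horiz[start]) * 100.0
    return step_time_ms, step_length_cm
```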

  12. HoloHands: games console interface for controlling holographic optical manipulation

    NASA Astrophysics Data System (ADS)

    McDonald, C.; McPherson, M.; McDougall, C.; McGloin, D.

    2013-03-01

    The increasing number of applications for holographic manipulation techniques has sparked the development of more accessible control interfaces. Here, we describe a holographic optical tweezers experiment which is controlled by gestures that are detected by a Microsoft Kinect. We demonstrate that this technique can be used to calibrate the tweezers using the Stokes drag method and compare this to automated calibrations. We also show that multiple particle manipulation can be handled. This is a promising new line of research for gesture-based control which could find applications in a wide variety of experimental situations.

  13. Affordable, automatic quantitative fall risk assessment based on clinical balance scales and Kinect data.

    PubMed

    Colagiorgio, P; Romano, F; Sardi, F; Moraschini, M; Sozzi, A; Bejor, M; Ricevuti, G; Buizza, A; Ramat, S

    2014-01-01

    The problem of a correct fall risk assessment is becoming more and more critical with the ageing of the population. In spite of the available approaches allowing a quantitative analysis of the human movement control system's performance, the clinical assessment and diagnostic approach to fall risk assessment still relies mostly on non-quantitative exams, such as clinical scales. This work documents our current effort to develop a novel method to assess balance control abilities through a system implementing an automatic evaluation of exercises drawn from balance assessment scales. Our aim is to overcome the classical limits characterizing these scales i.e. limited granularity and inter-/intra-examiner reliability, to obtain objective scores and more detailed information allowing to predict fall risk. We used Microsoft Kinect to record subjects' movements while performing challenging exercises drawn from clinical balance scales. We then computed a set of parameters quantifying the execution of the exercises and fed them to a supervised classifier to perform a classification based on the clinical score. We obtained a good accuracy (~82%) and especially a high sensitivity (~83%).

  14. A Tool for the Automated Collection of Space Utilization Data: Three Dimensional Space Utilization Monitor

    NASA Technical Reports Server (NTRS)

    Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.

    2015-01-01

    Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP), in collaboration with the Behavioral Health and Performance (BHP) Element, is conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to crew for required activities, as well as layout and accommodations within that volume. NASA is looking for innovative methods to unobtrusively collect NHV data without impacting crew time. Data required includes metrics such as location and orientation of crew, volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments methods for collecting such data exist yet many are obtrusive and require significant post-processing. Example technologies used in terrestrial settings include infrared (IR) retro-reflective marker based motion capture, GPS sensor tracking, inertial tracking, and multiple camera filmography. However due to constraints of space operations many such methods are infeasible, such as inertial tracking systems which typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems which are large and require extensive calibration. However multiple technologies have not yet been applied to space operations for these explicit purposes. Two of these include 3-Dimensional Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and depth imaging systems which allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect or Light Detection and Ranging / Light-Radar systems, referred to as LIDAR).

  15. A method for rapid 3D scanning and replication of large paleontological specimens

    PubMed Central

    Das, Anshuman J.; Murmann, Denise C.; Cohrn, Kenneth; Raskar, Ramesh

    2017-01-01

    We demonstrate a fast and cost-effective technique to perform three dimensional (3D) scanning and replication of large paleontological specimens, in this case the entire skull of a Tyrannosaurus rex (T.rex) with a volume in the range of 2 m3. The technique involves time-of-flight (TOF) depth sensing using the Kinect scanning module commonly used in gesture recognition in gaming. Raw data from the Kinect sensor was captured using open source software and the reconstruction was done rapidly making this a viable method that can be adopted by museums and researchers in paleontology. The current method has the advantage of being low-cost as compared to industrial scanners and photogrammetric methods but also of accurately scanning a substantial volume range which is well suited for large specimens. The depth resolution from the Kinect sensor was measured to be around 0.6 mm which is ideal for scanning large specimens with reasonable structural detail. We demonstrate the efficacy of this method on the skull of FMNH PR 2081, also known as SUE, a near complete T.rex at the Field Museum of Natural History. PMID:28678817

  16. A method for rapid 3D scanning and replication of large paleontological specimens.

    PubMed

    Das, Anshuman J; Murmann, Denise C; Cohrn, Kenneth; Raskar, Ramesh

    2017-01-01

    We demonstrate a fast and cost-effective technique to perform three dimensional (3D) scanning and replication of large paleontological specimens, in this case the entire skull of a Tyrannosaurus rex (T.rex) with a volume in the range of 2 m3. The technique involves time-of-flight (TOF) depth sensing using the Kinect scanning module commonly used in gesture recognition in gaming. Raw data from the Kinect sensor was captured using open source software and the reconstruction was done rapidly making this a viable method that can be adopted by museums and researchers in paleontology. The current method has the advantage of being low-cost as compared to industrial scanners and photogrammetric methods but also of accurately scanning a substantial volume range which is well suited for large specimens. The depth resolution from the Kinect sensor was measured to be around 0.6 mm which is ideal for scanning large specimens with reasonable structural detail. We demonstrate the efficacy of this method on the skull of FMNH PR 2081, also known as SUE, a near complete T.rex at the Field Museum of Natural History.

  17. Measurement of body joint angles for physical therapy based on mean shift tracking using two low cost Kinect images.

    PubMed

    Chen, Y C; Lee, H J; Lin, K H

    2015-08-01

    Range of motion (ROM) is commonly used to assess a patient's joint function in physical therapy. Because motion capture systems are generally very expensive, physical therapists mostly use simple rulers to measure patients' joint angles in clinical diagnosis, which suffers from low accuracy, low reliability, and subjectivity. In this study we used color and depth image features from two sets of low-cost Microsoft Kinect sensors to reconstruct 3D joint positions, and then calculated movable joint angles to assess the ROM. A Gaussian background model is first used to segment the human body from the depth images. The 3D coordinates of the joints are reconstructed from both color and depth images. To track the location of joints throughout the sequence more precisely, we adopt the mean shift algorithm to find the center of the voxels on each joint. The two Kinect sets are placed three meters away from each other, facing the subject. The movable joint angles and the motion data are calculated from the positions of the joints frame by frame. To verify the results of our system, we take the results from a motion capture system called VICON as the gold standard. Our 150 test results showed that the deviation of movable joint angles between those obtained by VICON and by our system is about 4 to 8 degrees in six different upper limb exercises, which is acceptable in a clinical environment.
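    The mean-shift refinement of a joint position over nearby voxels can be sketched as repeated Gaussian-weighted averaging, as below. The bandwidth and convergence settings are assumed values, not those used in the study.

```python
import numpy as np

def mean_shift_joint(points, start, bandwidth=0.08, n_iter=20, tol=1e-4):
    """Refine a joint position by mean shift over nearby 3D points (voxels).

    points    : (N, 3) body-surface points near the joint, in metres
    start     : initial 3D joint estimate (e.g. from the Kinect skeleton)
    bandwidth : radius of the Gaussian kernel, in metres (assumed value)
    """
    center = np.asarray(start, dtype=float)
    for _ in range(n_iter):
        d2 = np.sum((points - center) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))       # Gaussian weights
        new_center = (points * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(new_center - center) < tol:
            break
        center = new_center
    return center
```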

  18. Markerless Knee Joint Position Measurement Using Depth Data during Stair Walking

    PubMed Central

    Mita, Akira; Yorozu, Ayanori; Takahashi, Masaki

    2017-01-01

    Climbing and descending stairs are demanding daily activities, and the monitoring of them may reveal the presence of musculoskeletal diseases at an early stage. A markerless system is needed to monitor such stair walking activity without mentally or physically disturbing the subject. Microsoft Kinect v2 has been used for gait monitoring, as it provides a markerless skeleton tracking function. However, few studies have used this device for stair walking monitoring, and the accuracy of its skeleton tracking function during stair walking has not been evaluated. Moreover, skeleton tracking is not likely to be suitable for estimating body joints during stair walking, as the form of the body is different from what it is when it walks on level surfaces. In this study, a new method of estimating the 3D position of the knee joint was devised that uses the depth data of Kinect v2. The accuracy of this method was compared with that of the skeleton tracking function of Kinect v2 by simultaneously measuring subjects with a 3D motion capture system. The depth data method was found to be more accurate than skeleton tracking. The mean error of the 3D Euclidian distance of the depth data method was 43.2 ± 27.5 mm, while that of the skeleton tracking was 50.4 ± 23.9 mm. This method indicates the possibility of stair walking monitoring for the early discovery of musculoskeletal diseases. PMID:29165396

  19. Developing a multi-Kinect-system for monitoring in dairy cows: object recognition and surface analysis using wavelets.

    PubMed

    Salau, J; Haas, J H; Thaller, G; Leisen, M; Junge, W

    2016-09-01

    Camera-based systems for dairy cattle have been intensively studied over recent years. In contrast to this study, previous work presented single-camera systems with a limited range of applications, mostly using 2D cameras. This study presents current steps in the development of a camera system comprising multiple 3D cameras (six Microsoft Kinect cameras) for monitoring purposes in dairy cows. An early prototype was constructed, and alpha versions of software for recording, synchronizing, sorting and segmenting images and transforming the 3D data into a joint coordinate system have already been implemented. This study introduces the application of two-dimensional wavelet transforms as a method for object recognition and surface analysis. The method is explained in detail, and four differently shaped wavelets were tested with respect to their reconstruction error on Kinect-recorded depth maps from different camera positions. The high-frequency parts of the images, reconstructed from wavelet decompositions using the haar and the biorthogonal 1.5 wavelets, were statistically analyzed with regard to the effects of image foreground or background and of cows' or persons' surfaces. Furthermore, binary classifiers based on the local high frequencies were implemented to decide whether a pixel belongs to the image foreground and whether it is located on a cow or a person. Classifiers distinguishing between image regions showed high (⩾0.8) values of Area Under the receiver operating characteristic Curve (AUC). The classification by species showed maximal AUC values of 0.69.
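    The two-dimensional wavelet step can be reproduced with PyWavelets: a single-level dwt2 yields the horizontal, vertical and diagonal detail sub-bands, whose local magnitude can feed a per-pixel classifier. The wavelet names 'haar' and 'bior1.5' follow the abstract; the rest of the sketch, including the random stand-in depth map, is illustrative.

```python
import numpy as np
import pywt

def local_high_frequency(depth_map, wavelet="haar"):
    """Magnitude of the high-frequency wavelet content of a Kinect depth map.

    Returns a half-resolution map combining the horizontal, vertical and
    diagonal detail coefficients of a single-level 2D wavelet decomposition.
    """
    _, (cH, cV, cD) = pywt.dwt2(np.asarray(depth_map, dtype=float), wavelet)
    return np.sqrt(cH ** 2 + cV ** 2 + cD ** 2)

# example: compare the two wavelets mentioned in the abstract on random stand-in data
depth = np.random.default_rng(0).normal(size=(424, 512))   # Kinect v2 depth resolution
for name in ("haar", "bior1.5"):
    hf = local_high_frequency(depth, name)
    print(name, hf.shape, float(hf.mean()))
```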

  20. Adherence to a Videogame-Based Physical Activity Program for Older Adults with Schizophrenia.

    PubMed

    Leutwyler, Heather; Hubbard, Erin M; Dowling, Glenna A

    2014-08-01

    Adults with schizophrenia are a growing segment of the older adult population. Evidence suggests that they engage in limited physical activity. Interventions are needed that are tailored around their unique limitations. An active videogame-based physical activity program that can be offered at a treatment facility can overcome these barriers and increase motivation to engage in physical activity. The purpose of this report is to describe the adherence to a videogame-based physical activity program using the Kinect(®) for Xbox(®) 360 game system (Microsoft(®), Redmond, WA) in older adults with schizophrenia. This was a descriptive longitudinal study among 34 older adults with schizophrenia to establish the adherence to an active videogame-based physical activity program. In our ongoing program, once a week for 6 weeks, participants played an active videogame, using the Kinect for Xbox 360 game system, for 30 minutes. Adherence was measured with a count of sessions attended and with the total minutes attended out of the possible total minutes of attendance (180 minutes). Thirty-four adults with schizophrenia enrolled in the study. The mean number of groups attended was five out of six total (standard deviation=2), and the mean total minutes attended were 139 out of 180 possible (standard deviation=55). Fifty percent had perfect attendance. Older adults with schizophrenia need effective physical activity programs. Adherence to our program suggests that videogames that use the Kinect for Xbox 360 game system are an innovative way to make physical activity accessible to this population.

  1. Multiview point clouds denoising based on interference elimination

    NASA Astrophysics Data System (ADS)

    Hu, Yang; Wu, Qian; Wang, Le; Jiang, Huanyu

    2018-03-01

    Newly emerging low-cost depth sensors offer huge potential for three-dimensional (3-D) modeling, but their high noise levels restrict them from obtaining accurate results. Thus, we proposed a method for denoising registered multiview point clouds with high noise to solve that problem. The proposed method is aimed at fully using redundant information to eliminate the interferences among point clouds of different views based on an iterative procedure. In each iteration, noisy points are either deleted or moved to their weighted average targets in accordance with two cases. Simulated data and practical data captured by a Kinect v2 sensor were tested in experiments qualitatively and quantitatively. Results showed that the proposed method can effectively reduce noise and recover local features from highly noisy multiview point clouds with good robustness, compared to the truncated signed distance function (TSDF) and moving least squares (MLS). Moreover, the resulting low-noise point clouds can be further smoothed by the MLS to achieve improved results. This study demonstrates the feasibility of obtaining fine 3-D models with high-noise devices, especially depth sensors such as the Kinect.
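
    The paper's core idea of moving noisy points toward weighted average targets can be illustrated with a much-simplified neighborhood-averaging pass. The sketch below is not the authors' algorithm, just a minimal stand-in showing the kind of iterative smoothing involved, with an assumed radius and synthetic data.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def smooth_points(points, radius=0.01, iterations=3):
        """Iteratively move each point toward the centroid of its neighbors within `radius`."""
        points = np.asarray(points, dtype=float).copy()
        for _ in range(iterations):
            tree = cKDTree(points)
            neighbors = tree.query_ball_point(points, r=radius)
            points = np.array([points[idx].mean(axis=0) for idx in neighbors])
        return points

    # Hypothetical noisy multiview cloud: a plane plus Gaussian noise (meters).
    rng = np.random.default_rng(1)
    cloud = np.column_stack([rng.uniform(0, 1, 5000),
                             rng.uniform(0, 1, 5000),
                             rng.normal(0, 0.005, 5000)])
    print(smooth_points(cloud).std(axis=0))  # z-spread shrinks after smoothing
    ```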

  2. The effect of sensor-based exercise at home on functional performance associated with fall risk in older people - a comparison of two exergame interventions.

    PubMed

    Gschwind, Yves J; Schoene, Daniel; Lord, Stephen R; Ejupi, Andreas; Valenzuela, Trinidad; Aal, Konstantin; Woodbury, Ashley; Delbaere, Kim

    2015-01-01

    There is good evidence that balance-challenging exercises can reduce falls in older people. However, older people often find it difficult to incorporate such programs into their daily lives. Videogame technology has been proposed to promote enjoyable, balance-challenging exercise. As part of a larger analysis, we compared the feasibility and efficacy of two exergame interventions: step-mat-training (SMT) and Microsoft-Kinect® (KIN) exergames. 148 community-dwelling people, aged 65+ years, participated in two exergame studies in Sydney, Australia (KIN: n = 57, SMT: n = 91). Both interventions were delivered as unsupervised exercise programs in participants' homes for 16 weeks. Assessment measures included overall physiological fall risk, muscle strength, finger-press reaction time, proprioception, vision, balance and executive functioning. For participants allocated to the intervention arms, the median time played each week was 17 min (IQR 32) for KIN and 48 min (IQR 94) for SMT. Compared to the control group, SMT participants improved their fall risk score (p = 0.036), proprioception (p = 0.015), reaction time (p = 0.003), sit-to-stand performance (p = 0.011) and executive functioning (p = 0.001), while KIN participants improved their muscle strength (p = 0.032) and vision (p = 0.010), and showed a trend towards improved fall risk scores (p = 0.057). The findings suggest that it is feasible for older people to conduct an unsupervised exercise program at home using exergames. Both interventions reduced fall risk and SMT additionally improved specific cognitive functions. However, further refinement of the systems is required to improve adherence and maximise the benefits of exergames to deliver fall prevention programs in older people's homes. ACTRN12613000671763 (Step Mat Training RCT) ACTRN12614000096651 (MS Kinect RCT).

  3. The Performance Analysis of AN Indoor Mobile Mapping System with Rgb-D Sensor

    NASA Astrophysics Data System (ADS)

    Tsai, G. J.; Chiang, K. W.; Chu, C. H.; Chen, Y. L.; El-Sheimy, N.; Habib, A.

    2015-08-01

    Over the years, Mobile Mapping Systems (MMSs) have been widely applied to urban mapping, path management and monitoring, cyber city applications, and more. The key concept of mobile mapping is based on positioning technology and photogrammetry, and multi-sensor integrated mapping technology has been clearly established to achieve this integration. In recent years, robotic technology has developed rapidly. Another mapping technology, based on low-cost sensors and generally used in robotic systems, is known as Simultaneous Localization and Mapping (SLAM). The objective of this study is to develop a prototype indoor MMS for mobile mapping applications, particularly to reduce costs, enhance the efficiency of data collection, and validate direct georeferencing (DG) performance. The proposed indoor MMS is composed of a tactical grade Inertial Measurement Unit (IMU), a Kinect RGB-D sensor, and a light detection and ranging (LIDAR) sensor mounted on a robot. In summary, this paper designs the payload of an indoor MMS to generate floor plans. The first part concentrates on comparing different positioning algorithms in the indoor environment. Next, indoor floor plans are generated by the two sensors, the Kinect RGB-D sensor and the LIDAR on the robot. Finally, the generated floor plan is compared with the known plan for validation and verification.

  4. Validity of the Microsoft Kinect for assessment of postural control.

    PubMed

    Clark, Ross A; Pua, Yong-Hao; Fortin, Karine; Ritchie, Callan; Webster, Kate E; Denehy, Linda; Bryant, Adam L

    2012-07-01

    Clinically feasible methods of assessing postural control such as timed standing balance and functional reach tests provide important information; however, they cannot accurately quantify specific postural control mechanisms. The Microsoft Kinect™ system provides real-time anatomical landmark position data in three dimensions (3D), and given that it is inexpensive, portable and simple to set up, it may bridge this gap. This study assessed the concurrent validity of the Microsoft Kinect™ against a benchmark reference, a multiple-camera 3D motion analysis system, in 20 healthy subjects during three postural control tests: (i) forward reach, (ii) lateral reach, and (iii) single-leg eyes-closed standing balance. For the reach tests, the outcome measures consisted of distance reached and trunk flexion angle in the sagittal (forward reach) and coronal (lateral reach) planes. For the standing balance test, the range and deviation of movement in the anatomical landmark positions for the sternum, pelvis, knee and ankle and the lateral and anterior trunk flexion angle were assessed. The Microsoft Kinect™ and 3D motion analysis systems had comparable inter-trial reliability (ICC difference=0.06±0.05; range, 0.00-0.16) and excellent concurrent validity, with Pearson's r-values >0.90 for the majority of measurements (r=0.96±0.04; range, 0.84-0.99). However, ordinary least products analyses demonstrated proportional biases for some outcome measures associated with the pelvis and sternum. These findings suggest that the Microsoft Kinect™ can validly assess kinematic strategies of postural control. Given the potential benefits, it could therefore become a useful tool for assessing postural control in the clinical setting. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Implementation of Headtracking and 3D Stereo with Unity and VRPN for Computer Simulations

    NASA Technical Reports Server (NTRS)

    Noyes, Matthew A.

    2013-01-01

    This paper explores low-cost hardware and software methods to provide depth cues traditionally absent in monocular displays. The use of a VRPN server in conjunction with a Microsoft Kinect and/or Nintendo Wiimote to provide head tracking information to a Unity application, and NVIDIA 3D Vision for retinal disparity support, is discussed. Methods are suggested to implement this technology with NASA's EDGE simulation graphics package, along with potential caveats. Finally, future applications of this technology to astronaut crew training, particularly when combined with an omnidirectional treadmill for virtual locomotion and NASA's ARGOS system for reduced gravity simulation, are discussed.

  6. 3D Perception Technologies for Surgical Operating Theatres.

    PubMed

    Beyl, T; Schreiter, L; Nicolai, P; Raczkowsky, J; Wörn, H

    2016-01-01

    3D Perception technologies have been explored in various fields. This paper explores the application of such technologies for surgical operating theatres. Clinical applications can be found in workflow detection, tracking and analysis, collision avoidance with medical robots, perception of interaction between participants of the operation, training of the operating room crew, patient calibration and many more. In this paper a complete perception solution for the operating room is shown. The system is based on the ToF technology integrated into the Microsoft Kinect One and implements a multi-camera approach. Special emphasis is placed on the tracking of personnel and on the evaluation of system performance and accuracy.

  7. Exploring the Potential of the iPad and Xbox Kinect for Cognitive Science Research.

    PubMed

    Rolle, Camarin E; Voytek, Bradley; Gazzaley, Adam

    2015-06-01

    Many studies have validated consumer-facing hardware platforms as efficient, cost-effective, and accessible data collection instruments. However, there are few reports that have assessed the reliability of these platforms as assessment tools compared with traditional data collection platforms. Here we evaluated performance on a spatial attention paradigm obtained by our standard in-lab data collection platform, the personal computer (PC), and compared performance with that of two widely adopted, consumer technology devices: the Apple (Cupertino, CA) iPad(®) 2 and Microsoft (Redmond, WA) Xbox(®) Kinect(®). The task assessed spatial attention, a fundamental ability that we use to navigate the complex sensory input we face daily in order to effectively engage in goal-directed activities. Participants were presented with a central spatial cue indicating where on the screen a stimulus would appear. We manipulated spatial cueing such that, on a given trial, the cue presented one of four levels of information indicating the upcoming target location. Based on previous research, we hypothesized that as information of the cued spatial area decreased (i.e., larger area of possible target location) there would be a parametric decrease in performance, as revealed by slower response times and lower accuracies. Identical paradigm parameters were used for each of the three platforms, and testing was performed in a single session with a counterbalanced design. We found that performance on the Kinect and iPad showed a stronger parametric effect across the cued-information levels than that on the PC. Our results suggest that not only can the Kinect and iPad be reliably used as assessment tools to yield research-quality behavioral data, but that these platforms exploit mechanics that could be useful in building more interactive, and therefore effective, cognitive assessment and training designs. We include a discussion on the possible contributing factors to the differential effects between platforms, as well as potential confounds of the study.

  8. Using Free Internet Videogames in Upper Extremity Motor Training for Children with Cerebral Palsy.

    PubMed

    Sevick, Marisa; Eklund, Elizabeth; Mensch, Allison; Foreman, Matthew; Standeven, John; Engsberg, Jack

    2016-06-07

    Movement therapy is one type of upper extremity intervention for children with cerebral palsy (CP) to improve function. It requires high-intensity, repetitive and task-specific training. Tedium and lack of motivation are substantial barriers to completing the training. An approach to overcome these barriers is to couple the movement therapy with videogames. This investigation: (1) tested the feasibility of delivering a free Internet videogame upper extremity motor intervention to four children with CP (aged 8-17 years) with mild to moderate limitations to upper limb function; and (2) determined the level of intrinsic motivation during the intervention. The intervention used free Internet videogames in conjunction with the Microsoft Kinect motion sensor and the Flexible Action and Articulated Skeleton Toolkit (FAAST) software. Results indicated that the intervention could be successfully delivered in the laboratory and the home, and pre- and post-intervention assessments of impairment, function and performance were possible. Results also indicated a high level of motivation among the participants. It was concluded that the use of inexpensive hardware and software in conjunction with free Internet videogames has the potential to be very motivating in helping to improve the upper extremity abilities of children with CP. Future work should include results from additional participants and from a control group in a randomized controlled trial to establish efficacy.

  9. Computer Games as Therapy for Persons with Stroke.

    PubMed

    Lauterbach, Sarah A; Foreman, Matt H; Engsberg, Jack R

    2013-02-01

    Stroke affects approximately 800,000 individuals each year, with 65% having residual impairments. Studies have demonstrated that mass practice leads to regaining motor function in affected extremities; however, traditional therapy does not include the repetitions needed for this recovery. Videogames have been shown to be good motivators to complete repetitions. Advances in technology and low-cost hardware bring new opportunities to use computer games during stroke therapy. This study examined the use of the Microsoft (Redmond, WA) Kinect™ and Flexible Action and Articulated Skeleton Toolkit (FAAST) software as a therapy tool to play existing free computer games on the Internet. Three participants attended a 1-hour session where they played two games with upper extremity movements as game controls. Video was taken for analysis of movement repetitions, and questions were answered about participant history and their perceptions of the games. Participants remained engaged through both games; regardless of previous computer use, all participants successfully played both games. Five minutes of game play averaged 34 repetitions of the affected extremity. The Intrinsic Motivation Inventory showed a high level of satisfaction in two of the three participants. The Kinect sensor with the FAAST software has the potential to be an economical tool to be used alongside traditional therapy to increase the number of repetitions completed in a motivating and engaging way for clients.

  10. Women with fibromyalgia's experience with three motion-controlled video game consoles and indicators of symptom severity and performance of activities of daily living.

    PubMed

    Mortensen, Jesper; Kristensen, Lola Qvist; Brooks, Eva Petersson; Brooks, Anthony Lewis

    2015-01-01

    Little is known about Motion-Controlled Video Games (MCVGs) as an intervention for people with chronic pain. The aim of this study was to explore the experience of women with fibromyalgia syndrome (FMS) using commercially available MCVGs, and to investigate indicators of symptom severity and performance of activities of daily living (ADL). Of 15 female participants diagnosed with FMS, 7 completed a program of five sessions with Nintendo Wii (Wii), five sessions with PlayStation 3 Move (PS3 Move) and five sessions with Microsoft Xbox Kinect (Xbox Kinect). Interviews were conducted at baseline and post-intervention and were supported by data from observation and self-reported assessment. Participants experienced play with MCVGs as a way to be distracted from pain symptoms while doing fun and manageable exercise. They enjoyed the slow pace and familiarity of the Wii, while some considered the PS3 Move to be too fast paced. Xbox Kinect was reported as the best console for exercise. There was no indication of general improvement in symptom severity or performance of ADL. This study demonstrated MCVGs as an effective healthcare intervention for the women with FMS who completed the program, with regard to temporary pain relief and enjoyable low impact exercise. Implications for Rehabilitation: Exercise is recommended in the management of fibromyalgia syndrome (FMS). People with FMS often find it counterintuitive to exercise because of pain exacerbation, which may influence adherence to an exercise program. Motion-controlled video games may offer temporary pain relief and fun low impact exercise for women with FMS.

  11. Cost-effective surgical registration using consumer depth cameras

    NASA Astrophysics Data System (ADS)

    Potter, Michael; Yaniv, Ziv

    2016-03-01

    The high costs associated with technological innovation have been previously identified as both a major contributor to the rise of health care expenses, and as a limitation for widespread adoption of new technologies. In this work we evaluate the use of two consumer grade depth cameras, the Microsoft Kinect v1 and 3DSystems Sense, as a means for acquiring point clouds for registration. These devices have the potential to replace professional grade laser range scanning devices in medical interventions that do not require sub-millimetric registration accuracy, and may do so at a significantly reduced cost. To facilitate the use of these devices we have developed a near real-time (1-4 sec/frame) rigid registration framework combining several alignment heuristics with the Iterative Closest Point (ICP) algorithm. Using nearest neighbor registration error as our evaluation criterion we found the optimal scanning distances for the Sense and Kinect to be 50-60cm and 70-80cm respectively. When imaging a skull phantom at these distances, RMS error values of 1.35mm and 1.14mm were obtained. The registration framework was then evaluated using cranial MR scans of two subjects. For the first subject, the RMS error using the Sense was 1.28 +/- 0.01 mm. Using the Kinect this error was 1.24 +/- 0.03 mm. For the second subject, whose MR scan was significantly corrupted by metal implants, the errors increased to 1.44 +/- 0.03 mm and 1.74 +/- 0.06 mm but the system nonetheless performed within acceptable bounds.
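
    Rigid alignment of a depth-camera point cloud to a surface extracted from the MR scan is the kind of task the Iterative Closest Point (ICP) step performs. A hedged sketch using the Open3D library is shown below; the synthetic clouds, correspondence distance and identity initialization are placeholders, not details from the paper's registration framework.

    ```python
    import numpy as np
    import open3d as o3d

    # Synthetic stand-ins: `target` is a hypothetical surface patch from the MR scan,
    # `source` is the same surface seen by the depth camera with noise and a small offset.
    rng = np.random.default_rng(0)
    surface = rng.uniform(-0.1, 0.1, size=(2000, 3))
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(surface)
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(
        surface + [0.005, -0.003, 0.002] + rng.normal(0, 0.001, surface.shape))

    # Rigid point-to-point ICP from an identity initial guess; 1 cm correspondence distance.
    result = o3d.pipelines.registration.registration_icp(
        source, target, 0.01, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())

    print(result.transformation)   # estimated 4x4 rigid transform
    print(result.inlier_rmse)      # nearest-neighbor RMS error of matched points
    ```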

  12. Gesture controlled human-computer interface for the disabled.

    PubMed

    Szczepaniak, Oskar M; Sawicki, Dariusz J

    2017-02-28

    Enabling a disabled person to use a computer is one of the difficult problems of human-computer interaction (HCI), while professional activity (employment) is one of the most important factors affecting quality of life, especially for disabled people. The aim of the project was to propose a new HCI system that would allow people who have lost the ability to operate a standard computer to resume employment. The basic requirement was to replace all functions of a standard mouse without the need to perform precise hand movements or use the fingers. Microsoft's Kinect motion controller was selected as the device to recognize hand movements. Several tests were made in order to create an optimal working environment with the new device. The new communication system, consisting of the Kinect device and the appropriate software, was built. The proposed system was tested by means of standard subjective evaluations and objective metrics according to the standard ISO 9241-411:2012. The overall rating of the new HCI system shows acceptance of the solution. The objective tests show that although the new system is a bit slower, it may effectively replace the computer mouse. The new HCI system fulfilled its task for a specific disabled person, enabling a return to work. Additionally, the project confirmed the possibility of effective but nonstandard use of the Kinect device. Med Pr 2017;68(1):1-21. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  13. Adherence to a Videogame-Based Physical Activity Program for Older Adults with Schizophrenia

    PubMed Central

    Hubbard, Erin M.; Dowling, Glenna A.

    2014-01-01

    Abstract Objectives: Adults with schizophrenia are a growing segment of the older adult population. Evidence suggests that they engage in limited physical activity. Interventions are needed that are tailored around their unique limitations. An active videogame-based physical activity program that can be offered at a treatment facility can overcome these barriers and increase motivation to engage in physical activity. The purpose of this report is to describe the adherence to a videogame-based physical activity program using the Kinect® for Xbox® 360 game system (Microsoft®, Redmond, WA) in older adults with schizophrenia. Materials and Methods: This was a descriptive longitudinal study among 34 older adults with schizophrenia to establish the adherence to an active videogame-based physical activity program. In our ongoing program, once a week for 6 weeks, participants played an active videogame, using the Kinect for Xbox 360 game system, for 30 minutes. Adherence was measured with a count of sessions attended and with the total minutes attended out of the possible total minutes of attendance (180 minutes). Results: Thirty-four adults with schizophrenia enrolled in the study. The mean number of groups attended was five out of six total (standard deviation=2), and the mean total minutes attended were 139 out of 180 possible (standard deviation=55). Fifty percent had perfect attendance. Conclusions: Older adults with schizophrenia need effective physical activity programs. Adherence to our program suggests that videogames that use the Kinect for Xbox 360 game system are an innovative way to make physical activity accessible to this population. PMID:26192371

  14. A Kinect-Based Sign Language Hand Gesture Recognition System for Hearing- and Speech-Impaired: A Pilot Study of Pakistani Sign Language.

    PubMed

    Halim, Zahid; Abbas, Ghulam

    2015-01-01

    Sign language provides hearing- and speech-impaired individuals with an interface to communicate with other members of society. Unfortunately, sign language is not understood by most people. For this, a gadget based on image processing and pattern recognition can provide a vital aid for detecting sign language and translating it into a vocal language. This work presents a system for detecting and understanding sign language gestures with a custom-built software tool and then translating the gestures into a vocal language. For the purpose of recognizing a particular gesture, the system employs a Dynamic Time Warping (DTW) algorithm, and an off-the-shelf software tool is used for vocal language generation. The Microsoft(®) Kinect is the primary tool used to capture the video stream of a user. The proposed method is capable of successfully detecting gestures stored in the dictionary with an accuracy of 91%. The proposed system has the ability to define and add custom-made gestures. Based on an experiment in which 10 individuals with impairments used the system to communicate with 5 people with no disability, 87% agreed that the system was useful.
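
    Dynamic Time Warping compares two gesture sequences of different lengths by finding the cheapest frame-to-frame alignment. A minimal, textbook implementation is sketched below; the per-frame features (hand positions from the Kinect skeleton stream) and the example sequences are hypothetical.

    ```python
    import numpy as np

    def dtw_distance(seq_a, seq_b):
        """Classic Dynamic Time Warping distance between two sequences of feature vectors."""
        a, b = np.asarray(seq_a, dtype=float), np.asarray(seq_b, dtype=float)
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])   # local distance between frames
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

    # Hypothetical gesture data: per-frame (x, y, z) hand positions from the skeleton stream.
    template = np.array([[0.0, 0.0, 2.0], [0.1, 0.1, 2.0], [0.2, 0.2, 2.0]])
    observed = np.array([[0.0, 0.0, 2.0], [0.05, 0.05, 2.0], [0.1, 0.1, 2.0], [0.2, 0.2, 2.0]])
    print(dtw_distance(template, observed))
    ```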

  15. The Public Nights Program at Appalachian State University's Dark Sky Observatory Cline Visitor Center: Our First Year’s Results

    NASA Astrophysics Data System (ADS)

    Caton, Daniel B.; Smith, A. B.; Hawkins, R. L.

    2013-01-01

    We have completed our first year of public nights at our Dark Sky Observatory’s 32-inch telescope and the adjacent Cline Visitor Center. Our monthly public nights are composed of two groups of 60 visitors each that arrive for 1.5-hour sessions. Shorter summer nights limit us to one session. We use two large (70-inch) flat panel displays in the Center for a brief pre-observing discussion and to entertain visitors while they await their turn at the telescope’s eyepiece. One of them runs a Beta version of Microsoft’s Worldwide Telescope for Kinect. While the facility is fully ADA compliant, with eyepiece access via a DFM Engineering Articulated Relay Eyepiece, and a wheelchair lift if needed, we have only had one occasion to use this capability. We present some of our experiences in this poster and encourage readers to offer suggestions. The Visitor Center was established with the support of Mr. J. Donald Cline, for which we are very grateful. The Kinect system was donated by Marley Gray, at Microsoft/Charlotte. The telescope was partially funded by the National Science Foundation.

  16. Development of a system based on 3D vision, interactive virtual environments, ergonometric signals and a humanoid for stroke rehabilitation.

    PubMed

    Ibarra Zannatha, Juan Manuel; Tamayo, Alejandro Justo Malo; Sánchez, Angel David Gómez; Delgado, Jorge Enrique Lavín; Cheu, Luis Eduardo Rodríguez; Arévalo, Wilson Alexander Sierra

    2013-11-01

    This paper presents a stroke rehabilitation (SR) system for the upper limbs, developed as an interactive virtual environment (IVE) based on a commercial 3D vision system (a Microsoft Kinect), a humanoid robot (an Aldebaran Nao), and devices producing ergonometric signals. In one environment, the rehabilitation routines, developed by specialists, are presented to the patient simultaneously by the humanoid and an avatar inside the IVE. The patient follows the rehabilitation task, while his avatar copies his gestures that are captured by the Kinect 3D vision system. The information about the patient's movements, together with the signals obtained from the ergonometric measurement devices, is also used to supervise and evaluate the rehabilitation progress. The IVE can also present an RGB image of the patient. In another environment that uses the same base elements, four game routines--Touch the Balls 1 and 2, Simon Says, and Follow the Point--are used for rehabilitation. These environments are designed to create a positive influence on the rehabilitation process, reduce costs, and engage the patient. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Posture recognition associated with lifting of heavy objects using Kinect and Adaboost

    NASA Astrophysics Data System (ADS)

    Raut, Sayli; Navaneethakrishna, M.; Ramakrishnan, S.

    2017-12-01

    Lifting heavy objects is a common task in industry. Recent statistics from the Bureau of Labor indicate that back injuries account for one of every five workplace injuries. Eighty per cent of these injuries occur to the lower back and are associated with manual materials handling tasks. According to industrial ergonomic safety manuals, squatting is the correct posture for lifting a heavy object. In this work, an attempt has been made to monitor the posture of workers during squat and stoop lifting using 3D motion capture and machine learning techniques. For this, a Microsoft Kinect V2 is used for capturing the depth data. Further, Dynamic Time Warping and Euclidean distance algorithms are used for the extraction of features. The AdaBoost algorithm is used to classify stoop and squat postures. The results show that the 3D image data is large and complex to analyze. The application of nonlinear and linear metrics captures the variation in the lifting pattern. Additionally, the features extracted from these metrics resulted in classification accuracies of 85% and 81%, respectively. This framework may be applied to alert workers in industrial ergonomic environments.
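
    A boosted classifier over lift features of the kind described (e.g., a DTW-based measure and a Euclidean-distance measure) can be set up with scikit-learn's AdaBoostClassifier. The sketch below uses synthetic two-feature data purely for illustration; it is not the authors' feature set or dataset.

    ```python
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical feature matrix: one row per lift, e.g. DTW distance to a squat template
    # and a Euclidean trunk-displacement measure; labels 0 = squat, 1 = stoop.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal([1.0, 0.2], 0.3, (100, 2)),    # squat-like lifts
                   rng.normal([3.0, 0.8], 0.3, (100, 2))])   # stoop-like lifts
    y = np.array([0] * 100 + [1] * 100)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
    print("accuracy:", clf.score(X_test, y_test))
    ```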

  18. Development of a Low-Cost, Noninvasive, Portable Visual Speech Recognition Program.

    PubMed

    Kohlberg, Gavriel D; Gal, Ya'akov Kobi; Lalwani, Anil K

    2016-09-01

    Loss of speech following tracheostomy and laryngectomy severely limits communication to simple gestures and facial expressions that are largely ineffective. To facilitate communication in these patients, we seek to develop a low-cost, noninvasive, portable, and simple visual speech recognition program (VSRP) to convert articulatory facial movements into speech. A Microsoft Kinect-based VSRP was developed to capture spatial coordinates of lip movements and translate them into speech. The articulatory speech movements associated with 12 sentences were used to train an artificial neural network classifier. The accuracy of the classifier was then evaluated on a separate, previously unseen set of articulatory speech movements. The VSRP was successfully implemented and tested in 5 subjects. It achieved an accuracy rate of 77.2% (65.0%-87.6% for the 5 speakers) on a 12-sentence data set. The mean time to classify an individual sentence was 2.03 milliseconds (1.91-2.16). We have demonstrated the feasibility of a low-cost, noninvasive, portable VSRP based on Kinect to accurately predict speech from articulation movements in clinically trivial time. This VSRP could be used as a novel communication device for aphonic patients. © The Author(s) 2016.
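
    One plausible way to train an artificial neural network classifier on lip-movement coordinates, as described, is a small multilayer perceptron over flattened landmark trajectories. The sketch below uses scikit-learn's MLPClassifier with entirely synthetic data and an assumed feature layout; the authors' actual network and features are not specified in the abstract.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Hypothetical training data: each utterance is flattened into a fixed-length vector of
    # lip-landmark (x, y, z) coordinates over time; labels index the 12 trained sentences.
    rng = np.random.default_rng(0)
    n_utterances, n_features, n_sentences = 240, 20 * 3 * 30, 12
    X = rng.normal(size=(n_utterances, n_features))
    y = rng.integers(0, n_sentences, size=n_utterances)

    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X, y)
    print(clf.predict(X[:3]))   # predicted sentence indices for three utterances
    ```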

  19. Development of a Kinect Software Tool to Classify Movements during Active Video Gaming.

    PubMed

    Rosenberg, Michael; Thornton, Ashleigh L; Lay, Brendan S; Ward, Brodie; Nathan, David; Hunt, Daniel; Braham, Rebecca

    2016-01-01

    While it has been established that using full body motion to play active video games results in increased levels of energy expenditure, there is little information on the classification of human movement during active video game play in relation to fundamental movement skills. The aim of this study was to validate software utilising Kinect sensor motion capture technology to recognise fundamental movement skills (FMS) during active video game play. Two human assessors rated jumping and side-stepping and these assessments were compared to the Kinect Action Recognition Tool (KART), to establish a level of agreement and determine the number of movements completed during five minutes of active video game play, for 43 children (m = 12 years 7 months ± 1 year 6 months). During five minutes of active video game play, inter-rater reliability, when examining the two human raters, was found to be higher for the jump (r = 0.94, p < .01) than the sidestep (r = 0.87, p < .01), although both were excellent. Excellent reliability was also found between human raters and the KART system for the jump (r = 0.84, p < .01) and moderate reliability for the sidestep (r = 0.6983, p < .01) during game play, demonstrating that both humans and KART had higher agreement for jumps than sidesteps in the game play condition. The results of the study provide confidence that the Kinect sensor can be used to count the number of jumps and sidesteps during five minutes of active video game play with a similar level of accuracy as human raters. However, in contrast to humans, the KART system required a fraction of the time to analyse and tabulate the results.

  20. Development of a Kinect Software Tool to Classify Movements during Active Video Gaming

    PubMed Central

    Rosenberg, Michael; Lay, Brendan S.; Ward, Brodie; Nathan, David; Hunt, Daniel; Braham, Rebecca

    2016-01-01

    While it has been established that using full body motion to play active video games results in increased levels of energy expenditure, there is little information on the classification of human movement during active video game play in relation to fundamental movement skills. The aim of this study was to validate software utilising Kinect sensor motion capture technology to recognise fundamental movement skills (FMS) during active video game play. Two human assessors rated jumping and side-stepping and these assessments were compared to the Kinect Action Recognition Tool (KART), to establish a level of agreement and determine the number of movements completed during five minutes of active video game play, for 43 children (m = 12 years 7 months ± 1 year 6 months). During five minutes of active video game play, inter-rater reliability, when examining the two human raters, was found to be higher for the jump (r = 0.94, p < .01) than the sidestep (r = 0.87, p < .01), although both were excellent. Excellent reliability was also found between human raters and the KART system for the jump (r = 0.84, p < .01) and moderate reliability for the sidestep (r = 0.6983, p < .01) during game play, demonstrating that both humans and KART had higher agreement for jumps than sidesteps in the game play condition. The results of the study provide confidence that the Kinect sensor can be used to count the number of jumps and sidesteps during five minutes of active video game play with a similar level of accuracy as human raters. However, in contrast to humans, the KART system required a fraction of the time to analyse and tabulate the results. PMID:27442437

  1. Application of 3-D imaging sensor for tracking minipigs in the open field test.

    PubMed

    Kulikov, Victor A; Khotskin, Nikita V; Nikitin, Sergey V; Lankin, Vasily S; Kulikov, Alexander V; Trapezov, Oleg V

    2014-09-30

    The minipig is a promising model in neurobiology and psychopharmacology. However, automated tracking of minipig behavior is still an unresolved problem. The study was carried out on white, agouti and black (or spotted) minipiglets (n=108) bred at the Institute of Cytology and Genetics. The new method of automated minipig tracking is based on the Microsoft Kinect 3-D image sensor and 3-D image reconstruction with EthoStudio software. The algorithms for evaluating distance run and time in the center were adapted for 3-D image data, and a new algorithm for quantifying vertical activity was developed. The 3-D imaging system successfully detects white, black, spotted and agouti pigs in the open field test (OFT). No effect of sex or color on horizontal activity (distance run), vertical activity, or time in the center was shown. Agouti pigs explored the arena more intensively than white or black animals. The OFT behavioral traits were compared with the fear reaction to the experimenter. Time in the center of the OFT was positively correlated with fear reaction rank (ρ=0.21, p<0.05). Black pigs were significantly more fearful compared with white or agouti animals. The 3-D imaging system has three advantages over existing automated tracking systems: it avoids perspective distortion, distinguishes animals of any color from any background and automatically evaluates vertical activity. The 3-D imaging system can be successfully applied for automated measurement of minipig behavior in neurobiological and psychopharmacological experiments. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Kinect-based choice reaching and stepping reaction time tests for clinical and in-home assessment of fall risk in older people: a prospective study.

    PubMed

    Ejupi, Andreas; Gschwind, Yves J; Brodie, Matthew; Zagler, Wolfgang L; Lord, Stephen R; Delbaere, Kim

    2016-01-01

    Quick protective reactions such as reaching or stepping are important to avoid a fall or minimize injuries. We developed Kinect-based choice reaching and stepping reaction time tests (Kinect-based CRTs) and evaluated their ability to differentiate between older fallers and non-fallers and the feasibility of administering them at home. A total of 94 community-dwelling older people were assessed on the Kinect-based CRTs in the laboratory and were followed up for falls for 6 months. Additionally, a subgroup (n = 20) conducted the Kinect-based CRTs at home. Signal processing algorithms were developed to extract features for reaction time, movement time and total time from the Kinect skeleton data. Nineteen participants (20.2%) reported a fall in the 6 months following the assessment. The reaction time (fallers: 797 ± 136 ms, non-fallers: 714 ± 89 ms), movement time (fallers: 392 ± 50 ms, non-fallers: 358 ± 51 ms) and total time (fallers: 1189 ± 170 ms, non-fallers: 1072 ± 109 ms) of the reaching reaction time test differentiated well between the fallers and non-fallers. The stepping reaction time test did not significantly discriminate between the two groups in the prospective study. The correlations between the laboratory and in-home assessments were 0.689 for the reaching reaction time and 0.860 for the stepping reaction time. The study findings indicate that the Kinect-based CRT tests are feasible to administer in clinical and in-home settings, and thus represent an important step towards the development of sensor-based fall risk self-assessments. With further validation, the assessments may prove useful as a fall risk screen and as home-based assessment measures for monitoring changes over time and effects of fall prevention interventions.
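
    Extracting a reaction time from Kinect skeleton data amounts to detecting when the tracked hand first starts moving after the stimulus. The simplified sketch below uses a velocity threshold on a hypothetical hand trajectory; the threshold and sampling rate are assumptions, not the study's signal-processing pipeline.

    ```python
    import numpy as np

    def reaction_time(positions, timestamps, stimulus_time, velocity_threshold=0.2):
        """Delay (s) from stimulus onset until hand speed first exceeds `velocity_threshold` (m/s)."""
        positions = np.asarray(positions, dtype=float)
        timestamps = np.asarray(timestamps, dtype=float)
        speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / np.diff(timestamps)
        moving = np.where((timestamps[1:] >= stimulus_time) & (speed > velocity_threshold))[0]
        return None if len(moving) == 0 else timestamps[1:][moving[0]] - stimulus_time

    # Hypothetical 30 Hz hand-joint trajectory that starts moving about 0.5 s after the stimulus.
    t = np.arange(0, 2, 1 / 30)
    pos = np.column_stack([np.clip(t - 0.5, 0, None) * 0.5, np.zeros_like(t), np.zeros_like(t)])
    print(reaction_time(pos, t, stimulus_time=0.0))   # roughly 0.5 s
    ```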

  3. Commercially available gaming systems as clinical assessment tools to improve value in the orthopaedic setting: a systematic review.

    PubMed

    Ruff, Jessica; Wang, Tiffany L; Quatman-Yates, Catherine C; Phieffer, Laura S; Quatman, Carmen E

    2015-02-01

    Commercially available gaming systems (CAGS) such as the Wii Balance Board (WBB) and Microsoft Xbox with Kinect (Xbox Kinect) are increasingly used as balance training and rehabilitation tools. The purpose of this review was to answer the question, "Are commercially available gaming systems valid and reliable instruments for use as clinical diagnostic and functional assessment tools in orthopaedic settings?" and provide a summary of relevant studies, identify their strengths and weaknesses, and generate conclusions regarding general validity/reliability of the WBB and Xbox Kinect in orthopaedics. A systematic search was performed using MEDLINE (1996-2013) and Scopus (1996-2013). Inclusion criteria were a minimum of 5 subjects, a full manuscript provided in English or translated, and investigation of CAG measurement properties. Exclusion criteria included reviews, systematic reviews, summary/clinical commentaries, or case studies; conference proceedings/presentations; cadaveric studies; studies of non-reversible, non-orthopaedic-related musculoskeletal disease; non-human trials; and therapeutic studies not reporting comparative evaluation to already established functional assessment criteria. All studies meeting inclusion and exclusion criteria were appraised for quality by two independent reviewers. Evidence levels (I-V) were assigned to each study based on established methodological criteria. Three Level II, seven Level III, and one Level IV studies met the inclusion criteria and provided information related to the use of the WBB and Xbox Kinect as clinical assessment tools in the field of orthopaedics. Studies have used the WBB in a variety of clinical applications, including the measurement of center of pressure (COP), measurement of medial-to-lateral (M/L) or anterior-to-posterior (A/P) symmetry, assessment of anatomic landmark positioning, and assessment of fall risk. However, no uniform protocols or outcomes were used to evaluate the quality of the WBB as a clinical assessment tool; therefore a wide range of sensitivities, specificities, accuracies, and validities was reported. Currently, it is not possible to make a universal generalization about the clinical utility of CAGS in the field of orthopaedics. However, there is evidence to support using the WBB and the Xbox Kinect as tools to obtain reliable and valid COP measurements. The Wii Fit Game may specifically provide reliable and valid measurements for predicting fall risk. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Virtual Exercise Training Software System

    NASA Technical Reports Server (NTRS)

    Vu, L.; Kim, H.; Benson, E.; Amonette, W. E.; Barrera, J.; Perera, J.; Rajulu, S.; Hanson, A.

    2018-01-01

    The purpose of this study was to develop and evaluate a virtual exercise training software system (VETSS) capable of providing real-time instruction and exercise feedback during exploration missions. A resistive exercise instructional system was developed using a Microsoft Kinect depth-camera device, which provides markerless 3-D whole-body motion capture with a small form factor and minimal setup effort. It was hypothesized that subjects using the newly developed instructional software tool would perform the deadlift exercise with more optimal kinematics and more consistent technique than those without the instructional software. Following a comprehensive evaluation in the laboratory, the system was deployed for testing and refinement in the NASA Extreme Environment Mission Operations (NEEMO) analog.

  5. Discriminative exemplar coding for sign language recognition with Kinect.

    PubMed

    Sun, Chao; Zhang, Tianzhu; Bao, Bing-Kun; Xu, Changsheng; Mei, Tao

    2013-10-01

    Sign language recognition is a growing research area in the field of computer vision. A challenge within it is to model various signs, which vary in time resolution, visual manual appearance, and so on. In this paper, we propose a discriminative exemplar coding (DEC) approach, utilizing the Kinect sensor, to model various signs. The proposed DEC method can be summarized in three steps. First, a number of class-specific candidate exemplars are learned from the sign language videos in each sign category by considering their discrimination. Then, every sign video is described as a set of similarities between its frames and the candidate exemplars. Instead of simply using a heuristic distance measure, the similarities are determined by a set of exemplar-based classifiers through multiple instance learning, in which a positive (or negative) video is treated as a positive (or negative) bag and the frames similar to the given exemplar in Euclidean space as instances. Finally, we formulate the selection of the most discriminative exemplars into a framework and simultaneously produce a sign video classifier to recognize signs. To evaluate our method, we collect an American Sign Language dataset, which includes approximately 2000 phrases, with each phrase captured by the Kinect sensor with color, depth, and skeleton information. Experimental results on our dataset demonstrate the feasibility and effectiveness of the proposed approach for sign language recognition.

  6. Proof of concept of the ability of the kinect to quantify upper extremity function in dystrophinopathy.

    PubMed

    Lowes, Linda P; Alfano, Lindsay N; Yetter, Brent A; Worthen-Chaudhari, Lise; Hinchman, William; Savage, Jordan; Samona, Patrick; Flanigan, Kevin M; Mendell, Jerry R

    2013-03-14

    Individuals with dystrophinopathy lose upper extremity strength in proximal muscles followed by those more distal. Current upper extremity evaluation tools fail to fully capture changes in upper extremity strength and function across the disease spectrum as they tend to focus solely on distal ability. The Kinect by Microsoft is a gaming interface that can gather positional information about an individual's upper extremity movement which can be used to determine functional reaching volume, velocity of movement, and rate of fatigue while playing an engaging video game. The purpose of this study was to determine the feasibility of using the Kinect platform to assess upper extremity function in individuals with dystrophinopathy across the spectrum of abilities. Investigators developed a proof-of-concept device, ACTIVE (Abilities Captured Through Interactive Video Evaluation), to measure functional reaching volume, movement velocity, and rate of fatigue. Five subjects with dystrophinopathy and 5 normal controls were tested using ACTIVE during one testing session. A single subject with dystrophinopathy was simultaneously tested with ACTIVE and a marker-based motion analysis system to establish preliminary validity of measurements. ACTIVE proof-of-concept ranked the upper extremity abilities of subjects with dystrophinopathy by Brooke score, and also differentiated them from performance of normal controls for the functional reaching volume and velocity tests. Preliminary test-retest reliability of the ACTIVE for 2 sequential trials was excellent for functional reaching volume (ICC=0.986, p<0.001) and velocity trials (ICC=0.963, p<0.001). The data from our pilot study with ACTIVE proof-of-concept demonstrates that newly available gaming technology has potential to be used to create a low-cost, widely-accessible and functional upper extremity outcome measure for use with children and adults with dystrophinopathy.
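
    Functional reaching volume can be approximated as the volume of the convex hull swept out by the tracked hand joint, which is one plausible reading of the metric rather than the authors' exact definition. A short SciPy sketch with synthetic hand positions:

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    def reaching_volume(hand_positions):
        """Convex hull volume (m^3) of the hand-joint trajectory."""
        return ConvexHull(np.asarray(hand_positions, dtype=float)).volume

    # Hypothetical hand-joint samples (meters) recorded while the player sweeps the arm around.
    rng = np.random.default_rng(0)
    points = rng.uniform(low=[-0.4, 0.8, 1.5], high=[0.4, 1.6, 2.3], size=(500, 3))
    print(reaching_volume(points))
    ```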

  7. Kinect Xbox 360 as a therapeutic modality for children with cerebral palsy in a school environment: a preliminary study.

    PubMed

    Luna-Oliva, Laura; Ortiz-Gutiérrez, Rosa María; Cano-de la Cuerda, Roberto; Piédrola, Rosa Martínez; Alguacil-Diego, Isabel M; Sánchez-Camarero, Carlos; Martínez Culebras, María Del Carmen

    2013-01-01

    Limited evidence is available about the effectiveness of virtual reality using low-cost commercial consoles for children with developmental delay. The aim of this preliminary study is to evaluate the usefulness of a videogame system based on non-immersive virtual reality technology (Xbox 360 Kinect™) to support conventional rehabilitation treatment of children with cerebral palsy, and secondarily to objectify changes in the psychomotor status of children with cerebral palsy after receiving rehabilitation treatment supplemented with this last-generation game console. Eleven children with cerebral palsy were included in the study. Baseline, post-treatment and follow-up assessments were performed for motor and process skills, balance, gait speed, running and jumping, and fine and manual finger dexterity. All participants completed 8 weeks of videogame treatment, added to their conventional physiotherapy treatment, with the Xbox 360 Kinect™ (Microsoft) game console. The Friedman test showed significant differences among the three assessments for each variable: GMFM (p = 0.001), AMPS motor (p = 0.001), AMPS process (p = 0.010), PRT (p = 0.005) and 10 MW (p = 0.029). The Wilcoxon test showed statistically significant differences between pre- and post-treatment values for all variables. Similarly, results revealed significant differences between the baseline and follow-up assessments. There were no statistical differences between the post-treatment and follow-up evaluations, indicating long-term maintenance of the improvements achieved after treatment. Low-cost video games based on motion capture are potential tools in the rehabilitation of children with CP. Our Kinect Xbox 360 protocol showed improvements in balance and ADL in CP participants in a school environment, but further studies are needed to validate the potential benefits of these video game systems as a supplement to rehabilitation for children with CP.

  8. Ami - The chemist's amanuensis.

    PubMed

    Brooks, Brian J; Thorn, Adam L; Smith, Matthew; Matthews, Peter; Chen, Shaoming; O'Steen, Ben; Adams, Sam E; Townsend, Joe A; Murray-Rust, Peter

    2011-10-14

    The Ami project was a six month Rapid Innovation project sponsored by JISC to explore the Virtual Research Environment space. The project brainstormed with chemists and decided to investigate ways to facilitate monitoring and collection of experimental data. A frequently encountered use-case was identified of how the chemist reaches the end of an experiment, but finds an unexpected result. The ability to replay events can significantly help make sense of how things progressed. The project therefore concentrated on collecting a variety of dimensions of ancillary data - data that would not normally be collected due to practicality constraints. There were three main areas of investigation: 1) Development of a monitoring tool using infrared and ultrasonic sensors; 2) Time-lapse motion video capture (for example, videoing 5 seconds in every 60); and 3) Activity-driven video monitoring of the fume cupboard environs. The Ami client application was developed to control these separate logging functions. The application builds up a timeline of the events in the experiment and around the fume cupboard. The videos and data logs can then be reviewed after the experiment in order to help the chemist determine the exact timings and conditions used. The project experimented with ways in which a Microsoft Kinect could be used in a laboratory setting. Investigations suggest that it would not be an ideal device for controlling a mouse, but it shows promise for usages such as manipulating virtual molecules.

  9. Distance error correction for time-of-flight cameras

    NASA Astrophysics Data System (ADS)

    Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian

    2017-06-01

    The measurement accuracy of time-of-flight cameras is limited due to properties of the scene and systematic errors. These errors can accumulate to multiple centimeters which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of the range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip allows to acquire a large amount of distance measurements for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.
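
    The correction scheme trains a random forest regressor to map per-pixel features to a distance-error estimate, which is then subtracted from the raw measurement. The sketch below shows that pattern with scikit-learn; the feature set and the synthetic error model are assumptions, not the paper's tailored feature vector.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical per-pixel training features (measured distance, amplitude, local depth
    # variance) and targets (reference distance minus measured distance).
    rng = np.random.default_rng(0)
    n_pixels = 5000
    features = np.column_stack([rng.uniform(0.5, 4.5, n_pixels),    # measured distance (m)
                                rng.uniform(0.0, 1.0, n_pixels),    # normalized amplitude
                                rng.normal(0.0, 0.01, n_pixels)])   # local depth variance
    errors = 0.02 * np.sin(4 * features[:, 0]) + 0.01 * (1 - features[:, 1])  # synthetic error model

    forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(features, errors)

    # Correct a new measurement: subtract the predicted error from the raw distance.
    raw = np.array([[2.0, 0.7, 0.005]])
    print(raw[0, 0] - forest.predict(raw)[0])
    ```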

  10. User acceptance of a touchless sterile system to control virtual orthodontic study models.

    PubMed

    Wan Hassan, Wan Nurazreena; Abu Kassim, Noor Lide; Jhawar, Abhishek; Shurkri, Norsyafiqah Mohd; Kamarul Baharin, Nur Azreen; Chan, Chee Seng

    2016-04-01

    In this article, we present an evaluation of user acceptance of our innovative hand-gesture-based touchless sterile system for interaction with and control of a set of 3-dimensional digitized orthodontic study models using the Kinect motion-capture sensor (Microsoft, Redmond, Wash). The system was tested on a cohort of 201 participants. Using our validated questionnaire, the participants evaluated 7 hand-gesture-based commands that allowed the user to adjust the model in size, position, and aspect and to switch the image on the screen to view the maxillary arch, the mandibular arch, or models in occlusion. Participants' responses were assessed using Rasch analysis so that their perceptions of the usefulness of the hand gestures for the commands could be directly referenced against their acceptance of the gestures. Their perceptions of the potential value of this system for cross-infection control were also evaluated. Most participants endorsed these commands as accurate. Our designated hand gestures for these commands were generally accepted. We also found a positive and significant correlation between our participants' level of awareness of cross infection and their endorsement to use this system in clinical practice. This study supports the adoption of this promising development for a sterile touch-free patient record-management system. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  11. Using Free Internet Videogames in Upper Extremity Motor Training for Children with Cerebral Palsy

    PubMed Central

    Sevick, Marisa; Eklund, Elizabeth; Mensch, Allison; Foreman, Matthew; Standeven, John; Engsberg, Jack

    2016-01-01

    Movement therapy is one type of upper extremity intervention for children with cerebral palsy (CP) to improve function. It requires high-intensity, repetitive and task-specific training. Tedium and lack of motivation are substantial barriers to completing the training. An approach to overcome these barriers is to couple the movement therapy with videogames. This investigation: (1) tested the feasibility of delivering a free Internet videogame upper extremity motor intervention to four children with CP (aged 8–17 years) with mild to moderate limitations to upper limb function; and (2) determined the level of intrinsic motivation during the intervention. The intervention used free Internet videogames in conjunction with the Microsoft Kinect motion sensor and the Flexible Action and Articulated Skeleton Toolkit (FAAST) software. Results indicated that the intervention could be successfully delivered in the laboratory and the home, and pre- and post-intervention assessments of impairment, function and performance were possible. Results also indicated a high level of motivation among the participants. It was concluded that the use of inexpensive hardware and software in conjunction with free Internet videogames has the potential to be very motivating in helping to improve the upper extremity abilities of children with CP. Future work should include results from additional participants and from a control group in a randomized controlled trial to establish efficacy. PMID:27338485

  12. Ami - The chemist's amanuensis

    PubMed Central

    2011-01-01

    The Ami project was a six month Rapid Innovation project sponsored by JISC to explore the Virtual Research Environment space. The project brainstormed with chemists and decided to investigate ways to facilitate monitoring and collection of experimental data. A frequently encountered use-case was identified of how the chemist reaches the end of an experiment, but finds an unexpected result. The ability to replay events can significantly help make sense of how things progressed. The project therefore concentrated on collecting a variety of dimensions of ancillary data - data that would not normally be collected due to practicality constraints. There were three main areas of investigation: 1) Development of a monitoring tool using infrared and ultrasonic sensors; 2) Time-lapse motion video capture (for example, videoing 5 seconds in every 60); and 3) Activity-driven video monitoring of the fume cupboard environs. The Ami client application was developed to control these separate logging functions. The application builds up a timeline of the events in the experiment and around the fume cupboard. The videos and data logs can then be reviewed after the experiment in order to help the chemist determine the exact timings and conditions used. The project experimented with ways in which a Microsoft Kinect could be used in a laboratory setting. Investigations suggest that it would not be an ideal device for controlling a mouse, but it shows promise for usages such as manipulating virtual molecules. PMID:21999587

  13. Kinematic Validation of a Multi-Kinect v2 Instrumented 10-Meter Walkway for Quantitative Gait Assessments.

    PubMed

    Geerse, Daphne J; Coolen, Bert H; Roerdink, Melvyn

    2015-01-01

    Walking ability is frequently assessed with the 10-meter walking test (10MWT), which may be instrumented with multiple Kinect v2 sensors to complement the typical stopwatch-based time to walk 10 meters with quantitative gait information derived from Kinect's 3D body point's time series. The current study aimed to evaluate a multi-Kinect v2 set-up for quantitative gait assessments during the 10MWT against a gold-standard motion-registration system by determining between-systems agreement for body point's time series, spatiotemporal gait parameters and the time to walk 10 meters. To this end, the 10MWT was conducted at comfortable and maximum walking speed, while 3D full-body kinematics was concurrently recorded with the multi-Kinect v2 set-up and the Optotrak motion-registration system (i.e., the gold standard). Between-systems agreement for body point's time series was assessed with the intraclass correlation coefficient (ICC). Between-systems agreement was similarly determined for the gait parameters' walking speed, cadence, step length, stride length, step width, step time, stride time (all obtained for the intermediate 6 meters) and the time to walk 10 meters, complemented by Bland-Altman's bias and limits of agreement. Body point's time series agreed well between the motion-registration systems, particularly so for body points in motion. For both comfortable and maximum walking speeds, the between-systems agreement for the time to walk 10 meters and all gait parameters except step width was high (ICC ≥ 0.888), with negligible biases and narrow limits of agreement. Hence, body point's time series and gait parameters obtained with a multi-Kinect v2 set-up match well with those derived with a gold standard in 3D measurement accuracy. Future studies are recommended to test the clinical utility of the multi-Kinect v2 set-up to automate 10MWT assessments, thereby complementing the time to walk 10 meters with reliable spatiotemporal gait parameters obtained objectively in a quick, unobtrusive and patient-friendly manner.
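    For readers who want to reproduce the agreement statistics quoted above, the sketch below computes a two-way random, single-measure ICC(2,1) and a Bland-Altman bias with 95% limits of agreement for any paired gait parameter (e.g., Kinect-derived versus Optotrak-derived step length). It is an illustrative re-implementation of standard formulas, not the authors' analysis code; the two-column input layout is an assumption.

      import numpy as np

      def icc_2_1(ratings):
          """Two-way random, single-measure, absolute-agreement ICC(2,1).

          ratings: (n_subjects, k_systems) array, e.g. a gait parameter measured
          by the multi-Kinect set-up (column 0) and the Optotrak system (column 1).
          """
          ratings = np.asarray(ratings, dtype=float)
          n, k = ratings.shape
          grand_mean = ratings.mean()
          row_means = ratings.mean(axis=1)
          col_means = ratings.mean(axis=0)
          # classical two-way ANOVA decomposition
          ss_rows = k * np.sum((row_means - grand_mean) ** 2)
          ss_cols = n * np.sum((col_means - grand_mean) ** 2)
          ss_total = np.sum((ratings - grand_mean) ** 2)
          ss_error = ss_total - ss_rows - ss_cols
          ms_rows = ss_rows / (n - 1)
          ms_cols = ss_cols / (k - 1)
          ms_error = ss_error / ((n - 1) * (k - 1))
          return (ms_rows - ms_error) / (
              ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

      def bland_altman(a, b):
          """Bias and 95% limits of agreement between two measurement systems."""
          diff = np.asarray(a, float) - np.asarray(b, float)
          bias = diff.mean()
          half_width = 1.96 * diff.std(ddof=1)
          return bias, (bias - half_width, bias + half_width)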

  14. Augmented reality & gesture-based architecture in games for the elderly.

    PubMed

    McCallum, Simon; Boletsis, Costas

    2013-01-01

    Serious games for health and, more specifically, for elderly people have developed rapidly in recent years. The recent popularization of novel interaction methods of consoles, such as the Nintendo Wii and Microsoft Kinect, has provided an opportunity for the elderly to engage in computer and video games. These interaction methods, however, still present various challenges for elderly users. To address these challenges, we propose an architecture consisting of Augmented Reality (as an output mechanism) combined with gesture-based devices (as an input method). The intention of this work is to provide a theoretical justification for using these technologies and to integrate them into an architecture, acting as a basis for potentially creating suitable interaction techniques for elderly players.

  15. The Automated Assessment of Postural Stability: Balance Detection Algorithm.

    PubMed

    Napoli, Alessandro; Glass, Stephen M; Tucker, Carole; Obeid, Iyad

    2017-12-01

    Impaired balance is a common indicator of mild traumatic brain injury, concussion and musculoskeletal injury. Given the clinical relevance of such injuries, especially in military settings, it is paramount to develop more accurate and reliable on-field evaluation tools. This work presents the design and implementation of the automated assessment of postural stability (AAPS) system, for on-field evaluations following concussion. The AAPS is a computer system, based on inexpensive off-the-shelf components and custom software, that aims to automatically and reliably evaluate balance deficits, by replicating a known on-field clinical test, namely, the Balance Error Scoring System (BESS). The AAPS main innovation is its balance error detection algorithm that has been designed to acquire data from a Microsoft Kinect® sensor and convert them into clinically-relevant BESS scores, using the same detection criteria defined by the original BESS test. In order to assess the AAPS balance evaluation capability, a total of 15 healthy subjects (7 male, 8 female) were required to perform the BESS test, while simultaneously being tracked by a Kinect 2.0 sensor and a professional-grade motion capture system (Qualisys AB, Gothenburg, Sweden). High definition videos with BESS trials were scored off-line by three experienced observers for reference scores. AAPS performance was assessed by comparing the AAPS automated scores to those derived by three experienced observers. Our results show that the AAPS error detection algorithm presented here can accurately and precisely detect balance deficits with performance levels that are comparable to those of experienced medical personnel. Specifically, agreement levels between the AAPS algorithm and the human average BESS scores ranging between 87.9% (single-leg on foam) and 99.8% (double-leg on firm ground) were detected. Moreover, statistically significant differences in balance scores were not detected by an ANOVA test with alpha equal to 0.05. Despite some level of disagreement between human and AAPS-generated scores, the use of an automated system yields important advantages over currently available human-based alternatives. These results underscore the value of using the AAPS, which can be quickly deployed in the field and/or in outdoor settings with minimal set-up time. Finally, the AAPS can record multiple error types and their time course with extremely high temporal resolution. These features are not achievable by humans, who cannot keep track of multiple balance errors with such a high resolution. Together, these results suggest that computerized BESS calculation may provide more accurate and consistent measures of balance than those derived from human experts.
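    To give a flavour of the kind of rule such an error-detection algorithm applies, the sketch below counts discrete error events by thresholding one Kinect-derived joint-angle series (hip abduction or flexion beyond 30 degrees is one of the standard BESS error criteria). This is not the AAPS algorithm itself; the threshold, frame rate and refractory window are assumed values used only for illustration.

      import numpy as np

      def count_threshold_errors(angle_deg, threshold=30.0, fs=30, refractory_s=0.5):
          """Count discrete error events where a joint angle exceeds a threshold.

          angle_deg: per-frame hip abduction/flexion angle (degrees) over one
          20 s BESS stance; consecutive above-threshold frames within the
          refractory window are merged into a single error.
          """
          above = np.asarray(angle_deg) > threshold
          errors, last_error_frame = 0, -np.inf
          for i, flag in enumerate(above):
              if flag and (i - last_error_frame) > refractory_s * fs:
                  errors += 1                  # start of a new error event
                  last_error_frame = i
              elif flag:
                  last_error_frame = i         # extend the current error event
          return errors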

  16. Comparison of the different approaches to generate holograms from data acquired with a Kinect sensor

    NASA Astrophysics Data System (ADS)

    Kang, Ji-Hoon; Leportier, Thibault; Ju, Byeong-Kwon; Song, Jin Dong; Lee, Kwang-Hoon; Park, Min-Chul

    2017-05-01

    Data of real scenes acquired in real-time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point cloud approach is that the computation process is well established, since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation makes it possible to reduce the number of elements necessary to represent the object. Then, even though the computation time for the contribution of a single element increases compared to a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex, since propagation of elemental polygons between non-parallel planes must be implemented. Finally, since a depth map of the scene is acquired at the same time as the intensity image, a depth layer approach can also be adopted. This technique is appropriate for fast computation, since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with the depth layer approach is convenient for real-time applications, but the point cloud method is more appropriate when high resolution is needed. In this study, since the Kinect can be used to obtain both a point cloud and a depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
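    To make the depth-layer approach concrete, the sketch below slices an RGB-D frame into layers and propagates each layer to the hologram plane with the FFT-based angular spectrum method, accumulating the complex field. It is a simplified illustration rather than the authors' implementation; the wavelength, pixel pitch, number of layers and random initial phase are all assumed choices.

      import numpy as np

      def angular_spectrum(field, wavelength, pitch, distance):
          """Propagate a complex field by `distance` with the angular spectrum method."""
          ny, nx = field.shape
          fx = np.fft.fftfreq(nx, d=pitch)
          fy = np.fft.fftfreq(ny, d=pitch)
          FX, FY = np.meshgrid(fx, fy)
          arg = 1.0 / wavelength**2 - FX**2 - FY**2
          kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent waves dropped
          return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

      def depth_layer_hologram(intensity, depth, wavelength=532e-9, pitch=8e-6,
                               n_layers=32):
          """Slice an RGB-D frame into depth layers and sum their propagated fields.

          intensity, depth: 2D arrays from the Kinect (depth in metres); the
          hologram plane is assumed to sit at depth 0.
          """
          edges = np.linspace(depth.min(), depth.max(), n_layers + 1)
          hologram = np.zeros(intensity.shape, dtype=complex)
          for i in range(n_layers):
              mask = (depth >= edges[i]) & (depth < edges[i + 1])
              if not mask.any():
                  continue
              # a random initial phase is a common trick to spread object information
              field = intensity * mask * np.exp(
                  2j * np.pi * np.random.rand(*intensity.shape))
              hologram += angular_spectrum(field, wavelength, pitch,
                                           0.5 * (edges[i] + edges[i + 1]))
          return hologram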

  17. A dual-Kinect approach to determine torso surface motion for respiratory motion correction in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heß, Mirco, E-mail: mirco.hess@uni-muenster.de; Büther, Florian; Dawood, Mohammad

    2015-05-15

    Purpose: Respiratory gating is commonly used to reduce blurring effects and attenuation correction artifacts in positron emission tomography (PET). Established clinically available methods that employ body-attached hardware for acquiring respiration signals rely on the assumption that external surface motion and internal organ motion are well correlated. In this paper, the authors present a markerless method comprising two Microsoft Kinects for determining the motion on the whole torso surface and aim to demonstrate its validity and usefulness—including the potential to study the external/internal correlation and to provide useful information for more advanced correction approaches. Methods: The data of two Kinects are used to calculate 3D representations of a patient’s torso surface with high spatial coverage. Motion signals can be obtained for any position by tracking the mean distance to a virtual camera with a view perpendicular to the surrounding surface. The authors have conducted validation experiments including volunteers and a moving high-precision platform to verify the method’s suitability for providing meaningful data. In addition, the authors employed it during clinical 18F-FDG PET scans and exemplarily analyzed the acquired data of ten cancer patients. External signals of abdominal and thoracic regions as well as data-driven signals were used for gating and compared with respect to detected displacement of present lesions. Additionally, the authors quantified signal similarities and time shifts by analyzing cross-correlation sequences. Results: The authors’ results suggest a Kinect depth resolution of approximately 1 mm at 75 cm distance. Accordingly, valid signals could be obtained for surface movements with small amplitudes in the range of only a few millimeters. In this small sample of ten patients, the abdominal signals were better suited for gating the PET data than the thoracic signals and the correlation of data-driven signals was found to be stronger with abdominal signals than with thoracic signals (average Pearson correlation coefficients of 0.74 ± 0.17 and 0.45 ± 0.23, respectively). In all cases, except one, the abdominal respiratory motion preceded the thoracic motion—a maximum delay of approximately 600 ms was detected. Conclusions: The method provides motion information with sufficiently high spatial and temporal resolution. Thus, it enables meaningful analysis in the form of comparisons between amplitudes and phase shifts of signals from different regions. In combination with a large field-of-view, as given by combining the data of two Kinect cameras, it yields surface representations that might be useful in the context of motion correction and motion modeling.
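    The time shifts between abdominal and thoracic surface signals reported above can be estimated from the lag at which their cross-correlation sequence peaks. A minimal sketch of that step, assuming both signals are uniformly sampled at a common frame rate; it is not the authors' analysis code.

      import numpy as np
      from scipy.signal import correlate

      def signal_delay(abdominal, thoracic, fs):
          """Time shift (s) by which the abdominal signal leads the thoracic one."""
          a = (abdominal - np.mean(abdominal)) / np.std(abdominal)
          t = (thoracic - np.mean(thoracic)) / np.std(thoracic)
          xcorr = correlate(a, t, mode="full")
          lags = np.arange(-len(t) + 1, len(a))      # sample lags of each xcorr value
          lag = lags[np.argmax(xcorr)]
          # positive return value: abdominal motion precedes thoracic motion
          return -lag / fs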

  18. Data Acquisition Using Xbox Kinect Sensor

    ERIC Educational Resources Information Center

    Ballester, Jorge; Pheatt, Charles B.

    2012-01-01

    The study of motion is central in physics education and has taken many forms as technology has provided numerous methods to acquire data. For example, the analysis of still or moving images is particularly effective in discussions of two-dimensional motion. Introductory laboratory measurement methods have progressed through water clocks, spark…

  19. Kinect system in home-based cardiovascular rehabilitation.

    PubMed

    Vieira, Ágata; Gabriel, Joaquim; Melo, Cristina; Machado, Jorge

    2017-01-01

    Cardiovascular diseases lead to a high consumption of financial resources. An important part of the recovery process is the cardiovascular rehabilitation. This study aimed to present a new cardiovascular rehabilitation system to 11 outpatients with coronary artery disease from a Hospital in Porto, Portugal, later collecting their opinions. This system is based on a virtual reality game system, using the Kinect sensor while performing an exercise protocol which is integrated in a home-based cardiovascular rehabilitation programme, with a duration of 6 months and at the maintenance phase. The participants responded to a questionnaire asking for their opinion about the system. The results demonstrated that 91% of the participants (n = 10) enjoyed the artwork, while 100% (n = 11) agreed on the importance and usefulness of the automatic counting of the number of repetitions, moreover 64% (n = 7) reported motivation to continue performing the programme after the end of the study, and 100% (n = 11) recognized Kinect as an instrument with potential to be an asset in cardiovascular rehabilitation. Criticisms included limitations in motion capture and gesture recognition, 91% (n = 10), and the lack of home space, 27% (n = 3). According to the participants' opinions, the Kinect has the potential to be used in cardiovascular rehabilitation; however, several technical details require improvement, particularly regarding the motion capture and gesture recognition.

  20. The NASA Augmented/Virtual Reality Lab: The State of the Art at KSC

    NASA Technical Reports Server (NTRS)

    Little, William

    2017-01-01

    The NASA Augmented Virtual Reality (AVR) Lab at Kennedy Space Center is dedicated to the investigation of Augmented Reality (AR) and Virtual Reality (VR) technologies, with the goal of determining potential uses of these technologies as human-computer interaction (HCI) devices in an aerospace engineering context. Begun in 2012, the AVR Lab has concentrated on commercially available AR and VR devices that are gaining in popularity and use in a number of fields such as gaming, training, and telepresence. We are working with such devices as the Microsoft Kinect, the Oculus Rift, the Leap Motion, the HTC Vive, motion capture systems, and the Microsoft Hololens. The focus of our work has been on human interaction with the virtual environment, which in turn acts as a communications bridge to remote physical devices and environments which the operator cannot or should not control or experience directly. Particularly in reference to dealing with spacecraft and the oftentimes hazardous environments they inhabit, it is our hope that AR and VR technologies can be utilized to increase human safety and mission success by physically removing humans from those hazardous environments while virtually putting them right in the middle of those environments.

  1. Kinect-Based Virtual Game for the Elderly that Detects Incorrect Body Postures in Real Time

    PubMed Central

    Saenz-de-Urturi, Zelai; Garcia-Zapirain Soto, Begonya

    2016-01-01

    Poor posture can result in loss of physical function, which is necessary to preserving independence in later life. Its decline is often the determining factor for loss of independence in the elderly. To avoid this, a system to correct poor posture in the elderly, designed for Kinect-based indoor applications, is proposed in this paper. Due to the importance of maintaining a healthy life style in senior citizens, the system has been integrated into a game which focuses on their physical stimulation. The game encourages users to perform physical activities while the posture correction system helps them to adopt proper posture. The system captures limb node data received from the Kinect sensor in order to detect posture variations in real time. The DTW algorithm compares the original posture with the current one to detect any deviation from the original correct position. The system was tested and achieved a successful detection percentage of 95.20%. Experimental tests performed in a nursing home with different users show the effectiveness of the proposed solution. PMID:27196903
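    The deviation check described above rests on dynamic time warping between a reference joint trajectory and the live one. Below is a minimal, generic DTW distance (not the authors' implementation); the per-frame distance between Kinect limb-node vectors is a Euclidean norm, and the deviation threshold mentioned in the comment is an assumed parameter.

      import numpy as np

      def dtw_distance(reference, current):
          """Dynamic time warping distance between two joint-position sequences.

          Each sequence is an (n_frames, n_features) array, e.g. flattened 3D
          coordinates of the tracked limb nodes for every Kinect frame.
          """
          n, m = len(reference), len(current)
          cost = np.full((n + 1, m + 1), np.inf)
          cost[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  d = np.linalg.norm(reference[i - 1] - current[j - 1])
                  cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                       cost[i, j - 1],       # deletion
                                       cost[i - 1, j - 1])   # match
          return cost[n, m]

      # Hypothetical usage: flag a posture deviation when the warped distance
      # exceeds an empirically chosen threshold.
      # if dtw_distance(correct_posture, live_window) > THRESHOLD: alert_user()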

  2. Evaluation of sensors for inputting data in exergames for the elderly.

    PubMed

    Hors-Fraile, Santiago; Browne, James; Brox, Ellen; Evertsen, Gunn

    2013-01-01

    We aim to determine which off-the-shelf motion sensor device is the most suitable for extensive use in open-source PC exergames for the elderly. To address this question, we studied the specifications of the market-available sensors to reduce the initial, broad set of sensors to only two candidates: the Nintendo Wii controllers and the Microsoft® Kinect™ camera. The capabilities of these two are tested with a demo implementation. We take into account both the movement-detection accuracy of the sensors and the software-related issues. Our results indicate that the Microsoft® Kinect™ camera currently provides the best solution for our purpose. This study can help researchers choose the device that best suits their project needs, removing the sensor-selection task from their schedule.

  3. 2.5D multi-view gait recognition based on point cloud registration.

    PubMed

    Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan

    2014-03-28

    This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.

  4. Laser radar: historical prospective-from the East to the West

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl; McManamon, Paul; Steinvall, Ove; Kobayashi, Takao; Chen, Weibiao

    2017-03-01

    This article discusses the history of laser radar development in America, Europe, and Asia. Direct detection laser radar is discussed for range finding, designation, and topographic mapping of Earth and of extraterrestrial objects. Coherent laser radar is discussed for environmental applications, such as wind sensing and for synthetic aperture laser radar development. Gated imaging is discussed through scattering layers for military, medical, and security applications. Laser microradars have found applications in intravascular studies and in ophthalmology for vision correction. Ghost laser radar has emerged as a new technology in theoretical and simulation applications. Laser radar is now emerging as an important technology for applications such as self-driving cars and unmanned aerial vehicles. It is also used by police to measure speed, and in gaming, such as the Microsoft Kinect.

  5. Appearance-based multimodal human tracking and identification for healthcare in the digital home.

    PubMed

    Yang, Mau-Tsuen; Huang, Shen-Yen

    2014-08-05

    There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare.

  6. Appearance-Based Multimodal Human Tracking and Identification for Healthcare in the Digital Home

    PubMed Central

    Yang, Mau-Tsuen; Huang, Shen-Yen

    2014-01-01

    There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare. PMID:25098207

  7. Efficacy of an Exercise Game Based on Kinect in Improving Physical Performances of Fall Risk Factors in Community-Dwelling Older Adults.

    PubMed

    Kayama, Hiroki; Okamoto, Kazuya; Nishiguchi, Shu; Yukutake, Taiki; Tanigawa, Takanori; Nagai, Koutatsu; Yamada, Minoru; Aoyama, Tomoki

    2013-08-01

    The purpose of this study was to demonstrate whether a 12-week program of training with dual-task Tai Chi (DTTC), which is a new concept game we developed using Kinect (Microsoft, Redmond, WA), would be effective in improving physical functions of fall risk factors. This study examined balance, muscle strength, locomotive ability, and dual-task ability in community-dwelling older adults (75.4±6.3 years) before and after 12 weeks of DTTC training (training group [TG]; n=32) or standardized training (control group [CG]; n=41). Primary end points were based on the difference in physical functions between the TG and the CG. Significant differences were observed between the two groups with significant group×time interaction for the following physical function measures: timed up-and-go (TUG) (P<0.01), one-leg standing (OLS) (P<0.05), and 5 chair stand (5-CS) (P<0.05). There were no significant differences among the other measures: 10-m walking time under standard conditions, manual-task conditions, and cognitive-task conditions, 10-m maximal walking time, and Functional Reach test scores. Thus, the scores of TUG, OLS, and 5-CS in the TG improved significantly with DTTC training compared with the CG. The results suggest that the DTTC training is effective in improving balance ability and mobility, which are risk factors for falls.

  8. Detection and Compensation of Degeneracy Cases for IMU-Kinect Integrated Continuous SLAM with Plane Features †

    PubMed Central

    Cho, HyunGi; Yeon, Suyong; Choi, Hyunga; Doh, Nakju

    2018-01-01

    In a group of general geometric primitives, plane-based features are widely used for indoor localization because of their robustness against noise. However, a lack of linearly independent planes may lead to a non-trivial estimation problem. This, in turn, can cause a degenerate state in which not all states can be estimated. To solve this problem, this paper first proposed a degeneracy detection method. A compensation method that could fix orientations by projecting an inertial measurement unit’s (IMU) information was then explained. Experiments were conducted using an IMU-Kinect v2 integrated sensor system prone to fall into degenerate cases owing to its narrow field-of-view. Results showed that the proposed framework could enhance map accuracy by successful detection and compensation of degenerated orientations. PMID:29565287
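    One simple way to detect the degeneracy described above is to check whether the normals of the currently observed planes span all three directions; directions they do not span can then be constrained with IMU information. The sketch below tests this with an SVD of the stacked unit normals. It illustrates the idea only and is not the authors' method; the singular-value threshold is an assumed value.

      import numpy as np

      def degenerate_directions(plane_normals, sv_threshold=1e-3):
          """Return the translation directions that the observed planes cannot constrain.

          plane_normals: (k, 3) array of unit normals of the currently tracked planes.
          A direction is unconstrained when the normals have (numerically) no
          component along it, i.e. the corresponding singular value is ~0.
          """
          N = np.asarray(plane_normals, dtype=float)
          _, s, vt = np.linalg.svd(N, full_matrices=True)
          if len(s) < 3:                       # fewer than three planes observed
              s = np.concatenate([s, np.zeros(3 - len(s))])
          weak = s < sv_threshold * max(s.max(), 1.0)
          return vt[weak]                      # rows are the unobservable directions

      # Hypothetical compensation step: whenever degenerate_directions(...) is
      # non-empty, freeze the estimate along those directions (or replace the
      # orientation update with the IMU-propagated orientation).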

  9. Kinematic Validation of a Multi-Kinect v2 Instrumented 10-Meter Walkway for Quantitative Gait Assessments

    PubMed Central

    Geerse, Daphne J.; Coolen, Bert H.; Roerdink, Melvyn

    2015-01-01

    Walking ability is frequently assessed with the 10-meter walking test (10MWT), which may be instrumented with multiple Kinect v2 sensors to complement the typical stopwatch-based time to walk 10 meters with quantitative gait information derived from Kinect’s 3D body point’s time series. The current study aimed to evaluate a multi-Kinect v2 set-up for quantitative gait assessments during the 10MWT against a gold-standard motion-registration system by determining between-systems agreement for body point’s time series, spatiotemporal gait parameters and the time to walk 10 meters. To this end, the 10MWT was conducted at comfortable and maximum walking speed, while 3D full-body kinematics was concurrently recorded with the multi-Kinect v2 set-up and the Optotrak motion-registration system (i.e., the gold standard). Between-systems agreement for body point’s time series was assessed with the intraclass correlation coefficient (ICC). Between-systems agreement was similarly determined for the gait parameters’ walking speed, cadence, step length, stride length, step width, step time, stride time (all obtained for the intermediate 6 meters) and the time to walk 10 meters, complemented by Bland-Altman’s bias and limits of agreement. Body point’s time series agreed well between the motion-registration systems, particularly so for body points in motion. For both comfortable and maximum walking speeds, the between-systems agreement for the time to walk 10 meters and all gait parameters except step width was high (ICC ≥ 0.888), with negligible biases and narrow limits of agreement. Hence, body point’s time series and gait parameters obtained with a multi-Kinect v2 set-up match well with those derived with a gold standard in 3D measurement accuracy. Future studies are recommended to test the clinical utility of the multi-Kinect v2 set-up to automate 10MWT assessments, thereby complementing the time to walk 10 meters with reliable spatiotemporal gait parameters obtained objectively in a quick, unobtrusive and patient-friendly manner. PMID:26461498

  10. RehabGesture: An Alternative Tool for Measuring Human Movement.

    PubMed

    Brandão, Alexandre F; Dias, Diego R C; Castellano, Gabriela; Parizotto, Nivaldo A; Trevelin, Luis Carlos

    2016-07-01

    Systems for range of motion (ROM) measurement such as OptoTrak, Motion Capture, Motion Analysis, Vicon, and Visual 3D are so expensive that they become impracticable in public health systems and even in private rehabilitation clinics. Telerehabilitation is a branch within telemedicine intended to offer ways to increase motor and/or cognitive stimuli, aimed at faster and more effective recovery of given disabilities, and to measure kinematic data such as the improvement in ROM. In the development of the RehabGesture tool, we used the gesture recognition sensor Kinect® (Microsoft, Redmond, WA) and the concepts of Natural User Interface and Open Natural Interaction. RehabGesture can measure and record the ROM during rehabilitation sessions while the user interacts with the virtual reality environment. The software allows the measurement of the ROM (in the coronal plane) from 0° extension to 145° flexion of the elbow joint, as well as from 0° adduction to 180° abduction of the glenohumeral (shoulder) joint, leaving the standing position. The proposed tool has application in the fields of training and physical evaluation of professional and amateur athletes in clubs and gyms and may have application in rehabilitation and physiotherapy clinics for patients with compromised motor abilities. RehabGesture represents a low-cost solution to measure the movement of the upper limbs, as well as to stimulate the process of teaching and learning in disciplines related to the study of human movement, such as kinesiology.
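    The elbow and shoulder ROM values reported by a tool of this kind can be computed directly from three Kinect joint positions as the angle between the two adjoining segment vectors. A minimal sketch under that assumption; it is not the RehabGesture source code, and the joint names in the comments are illustrative.

      import numpy as np

      def joint_angle(proximal, joint, distal):
          """Angle (degrees) at `joint` formed by the proximal and distal segments.

          For elbow flexion: proximal = shoulder, joint = elbow, distal = wrist,
          each given as a 3D position from the Kinect skeleton stream.
          """
          u = np.asarray(proximal, float) - np.asarray(joint, float)
          v = np.asarray(distal, float) - np.asarray(joint, float)
          cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
          return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

      # Example: full elbow extension gives roughly 180 deg between the segments,
      # so flexion ROM can be reported as 180 - joint_angle(shoulder, elbow, wrist).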

  11. Three-Dimensional Object Recognition and Registration for Robotic Grasping Systems Using a Modified Viewpoint Feature Histogram

    PubMed Central

    Chen, Chin-Sheng; Chen, Po-Chun; Hsu, Chih-Ming

    2016-01-01

    This paper presents a novel 3D feature descriptor for object recognition and six-degree-of-freedom pose identification in mobile manipulation and grasping applications. Firstly, a Microsoft Kinect sensor is used to capture 3D point cloud data. A viewpoint feature histogram (VFH) descriptor for the 3D point cloud data then encodes the geometry and viewpoint, so an object can be simultaneously recognized and registered in a stable pose and the information is stored in a database. The VFH is robust to a large degree of surface noise and missing depth information, so it is reliable for stereo data. However, the pose estimation for an object fails when the object is placed symmetrically to the viewpoint. To overcome this problem, this study proposes a modified viewpoint feature histogram (MVFH) descriptor that consists of two parts: a surface shape component that comprises an extended fast point feature histogram, and an extended viewpoint direction component. The MVFH descriptor characterizes an object’s pose and enhances the system’s ability to identify objects with mirrored poses. Finally, once the object has been recognized and its pose roughly estimated by the MVFH descriptor against the registered database, the pose is refined using the iterative closest point algorithm. The estimation results demonstrate that the MVFH feature descriptor allows more accurate pose estimation. The experiments also show that the proposed method can be applied in vision-guided robotic grasping systems. PMID:27886080

  12. Multilevel depth and image fusion for human activity detection.

    PubMed

    Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng

    2013-10-01

    Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods only rely on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. In the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. In the next level of interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for an activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to the previous methods.

  13. Accuracy analysis for triangulation and tracking based on time-multiplexed structured light.

    PubMed

    Wagner, Benjamin; Stüber, Patrick; Wissel, Tobias; Bruder, Ralf; Schweikard, Achim; Ernst, Floris

    2014-08-01

    The authors' research group is currently developing a new optical head tracking system for intracranial radiosurgery. This tracking system utilizes infrared laser light to measure features of the soft tissue on the patient's forehead. These features are intended to offer highly accurate registration with respect to the rigid skull structure by means of compensating for the soft tissue. In this context, the system also has to be able to quickly generate accurate reconstructions of the skin surface. For this purpose, the authors have developed a laser scanning device which uses time-multiplexed structured light to triangulate surface points. The accuracy of the authors' laser scanning device is analyzed and compared for different triangulation methods. These methods are given by the Linear-Eigen method and a nonlinear least squares method. Since Microsoft's Kinect camera represents an alternative for fast surface reconstruction, the authors' results are also compared to the triangulation accuracy of the Kinect device. Moreover, the authors' laser scanning device was used for tracking of a rigid object to determine how this process is influenced by the remaining triangulation errors. For this experiment, the scanning device was mounted to the end-effector of a robot to be able to calculate a ground truth for the tracking. The analysis of the triangulation accuracy of the authors' laser scanning device revealed a root mean square (RMS) error of 0.16 mm. In comparison, the analysis of the triangulation accuracy of the Kinect device revealed a RMS error of 0.89 mm. It turned out that the remaining triangulation errors only cause small inaccuracies for the tracking of a rigid object. Here, the tracking accuracy was given by a RMS translational error of 0.33 mm and a RMS rotational error of 0.12°. This paper shows that time-multiplexed structured light can be used to generate highly accurate reconstructions of surfaces. Furthermore, the reconstructed point sets can be used for high-accuracy tracking of objects, meeting the strict requirements of intracranial radiosurgery.
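    As a numerical illustration of what triangulating a surface point from structured-light observations involves, the sketch below computes the least-squares 3D point closest to a set of camera/projector rays. This generic ray-intersection stands in for, and is not identical to, the Linear-Eigen and nonlinear least-squares formulations evaluated in the paper.

      import numpy as np

      def triangulate_point(origins, directions):
          """Least-squares 3D point closest to a set of rays (origin + direction).

          Solves sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i, i.e. the point
          minimizing the summed squared perpendicular distance to all rays.
          """
          A = np.zeros((3, 3))
          b = np.zeros(3)
          for o, d in zip(origins, directions):
              d = np.asarray(d, float)
              d = d / np.linalg.norm(d)
              P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
              A += P
              b += P @ np.asarray(o, float)
          return np.linalg.solve(A, b)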

  14. A real time biofeedback using Kinect and Wii to improve gait for post-total knee replacement rehabilitation: a case study report.

    PubMed

    Levinger, Pazit; Zeina, Daniel; Teshome, Assefa K; Skinner, Elizabeth; Begg, Rezaul; Abbott, John Haxby

    2016-01-01

    This study aimed to develop a low-cost real-time biofeedback system to assist with rehabilitation for patients following total knee replacement (TKR) and to assess its feasibility of use in a post-TKR patient case study design with a comparison group. The biofeedback system consisted of a Microsoft Kinect™ and a Nintendo Wii balance board with dedicated software. A six-week inpatient rehabilitation program was augmented by biofeedback and tested in a single patient following TKR. Three patients underwent six weeks of standard rehabilitation with no biofeedback and served as a control group. Gait, function and pain were assessed and compared before and after the rehabilitation. The biofeedback software incorporated real-time visual feedback to correct limb alignment, movement pattern and weight distribution. Improvements in pain, function and quality of life were observed in both groups. The strong improvement in the knee moment pattern demonstrated in the case study indicates feasibility of the biofeedback-augmented intervention. This novel biofeedback software used simple, commercially accessible equipment that can feasibly be incorporated to augment a post-TKR rehabilitation program. Our preliminary results indicate the potential of this biofeedback-assisted rehabilitation to improve knee function during gait. Research is required to test this hypothesis. Implications for Rehabilitation The real-time biofeedback system developed integrated custom-made software and simple low-cost commercially accessible equipment such as the Kinect and Wii board to provide augmented information during rehabilitation following TKR. The software incorporated key rehabilitation principles and visual feedback to correct alignment of the lower legs, pelvis and trunk, as well as providing feedback on limb weight distribution. The case study patient demonstrated greater improvement in knee function, with a more normal biphasic knee moment achieved following the six-week biofeedback intervention.

  15. Evaluation of Children Playing a New-Generation Motion-Sensitive Active Videogame by Accelerometry and Indirect Calorimetry.

    PubMed

    Reading, Stacey A; Prickett, Karel

    2013-06-01

    New-generation active videogames (AVGs) use motion-capture video cameras to connect a player's arm, leg, and body movements through three-dimensional space to on-screen activity. We sought to determine if the whole-body movements required to play the AVG elicited moderate-intensity physical activity (PA) in children. A secondary aim was to examine the utility of using accelerometry to measure the activity intensity of AVG play in this age group. The PA levels of boys (n=26) and girls (n=15) 5-12 years of age were measured by triaxial accelerometry (n=25) or accelerometry and indirect calorimetry (IC) (n=16) while playing the "Kinect Adventures!" videogame for the Xbox Kinect (Microsoft®, Redmond, WA) gaming system. The experiment simulated a typical 20-minute in-home free-play gaming session. Using 10-second recording epochs, the average (mean±standard deviation) PA intensity over 20 minutes was 4.4±0.9, 3.2±0.7, and 3.3±0.6 metabolic equivalents (METs) when estimated by IC or vertical axis (Crouter et al. intermittent lifestyle equation for vertical axis counts/10 seconds [Cva2RM]) and vector magnitude (Crouter et al. intermittent lifestyle equation for vector magnitude counts/10 seconds [Cvm2RM]) accelerometry. In total, 16.9±3.2 (IC), 10.6±4.5 (Cva2RM), and 11.1±3.9 (Cvm2RM) minutes of game playing time were at a 3 MET intensity or higher. In this study, children played the Xbox Kinect AVG at moderate-intensity PA levels. The study also showed that current accelerometry-based methods underestimated the PA of AVG play compared with IC. With proper guidance and recommendations for use, video motion-capture AVG systems could reduce sedentary screen time and increase total daily moderate PA levels for children. Further study of these AVG systems is warranted.

  16. Using data from the Microsoft Kinect 2 to determine postural stability in healthy subjects: A feasibility trial

    PubMed Central

    Smeragliuolo, Anna H.; Long, John Davis; Bumanlag, Silverio Joseph; He, Victor; Lampe, Anna

    2017-01-01

    The objective of this study was to determine whether kinematic data collected by the Microsoft Kinect 2 (MK2) could be used to quantify postural stability in healthy subjects. Twelve subjects were recruited for the project, and were instructed to perform a sequence of simple postural stability tasks. The movement sequence was performed as subjects were seated on top of a force platform, and the MK2 was positioned in front of them. This sequence of tasks was performed by each subject under three different postural conditions: “both feet on the ground” (1), “One foot off the ground” (2), and “both feet off the ground” (3). We compared force platform and MK2 data to quantify the degree to which the MK2 was returning reliable data across subjects. We then applied a novel machine-learning paradigm to the MK2 data in order to determine the extent to which data from the MK2 could be used to reliably classify different postural conditions. Our initial comparison of force plate and MK2 data showed a strong agreement between the two devices, with strong Pearson correlations between the trunk centroids “Spine_Mid” (0.85 ± 0.06), “Neck” (0.86 ± 0.07) and “Head” (0.87 ± 0.07), and the center of pressure centroid inferred by the force platform. Mean accuracy for the machine learning classifier from MK2 was 97.0%, with a specific classification accuracy breakdown of 90.9%, 100%, and 100% for conditions 1 through 3, respectively. Mean accuracy for the machine learning classifier derived from the force platform data was lower at 84.4%. We conclude that data from the MK2 has sufficient information content to allow us to classify sequences of tasks being performed under different levels of postural stability. Future studies will focus on validating this protocol on large populations of individuals with actual balance impairments in order to create a toolkit that is clinically validated and available to the medical community. PMID:28196139
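    The device-agreement figures above come down to correlating an MK2 trunk-centroid trace with the force-platform centre-of-pressure trace. A minimal sketch of that comparison, resampling the Kinect signal onto the force-plate timeline before computing Pearson's r; it is illustrative only and not the study's analysis pipeline.

      import numpy as np

      def pearson_agreement(kinect_t, kinect_pos, plate_t, plate_cop):
          """Pearson correlation between an MK2 centroid trace and force-plate COP.

          The two devices run at different rates, so the Kinect trace is linearly
          interpolated onto the force-plate timestamps before correlating.
          """
          kinect_resampled = np.interp(plate_t, kinect_t, kinect_pos)
          return np.corrcoef(kinect_resampled, plate_cop)[0, 1]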

  17. Using data from the Microsoft Kinect 2 to determine postural stability in healthy subjects: A feasibility trial.

    PubMed

    Dehbandi, Behdad; Barachant, Alexandre; Smeragliuolo, Anna H; Long, John Davis; Bumanlag, Silverio Joseph; He, Victor; Lampe, Anna; Putrino, David

    2017-01-01

    The objective of this study was to determine whether kinematic data collected by the Microsoft Kinect 2 (MK2) could be used to quantify postural stability in healthy subjects. Twelve subjects were recruited for the project, and were instructed to perform a sequence of simple postural stability tasks. The movement sequence was performed as subjects were seated on top of a force platform, and the MK2 was positioned in front of them. This sequence of tasks was performed by each subject under three different postural conditions: "both feet on the ground" (1), "One foot off the ground" (2), and "both feet off the ground" (3). We compared force platform and MK2 data to quantify the degree to which the MK2 was returning reliable data across subjects. We then applied a novel machine-learning paradigm to the MK2 data in order to determine the extent to which data from the MK2 could be used to reliably classify different postural conditions. Our initial comparison of force plate and MK2 data showed a strong agreement between the two devices, with strong Pearson correlations between the trunk centroids "Spine_Mid" (0.85 ± 0.06), "Neck" (0.86 ± 0.07) and "Head" (0.87 ± 0.07), and the center of pressure centroid inferred by the force platform. Mean accuracy for the machine learning classifier from MK2 was 97.0%, with a specific classification accuracy breakdown of 90.9%, 100%, and 100% for conditions 1 through 3, respectively. Mean accuracy for the machine learning classifier derived from the force platform data was lower at 84.4%. We conclude that data from the MK2 has sufficient information content to allow us to classify sequences of tasks being performed under different levels of postural stability. Future studies will focus on validating this protocol on large populations of individuals with actual balance impairments in order to create a toolkit that is clinically validated and available to the medical community.

  18. NeuroCognitive Patterns

    DTIC Science & Technology

    2016-10-28

    that exploits environmental and contextual information to provide likely interpretations for those neural signals. Innovative models for event...users. While the neural signals are vital to this architecture, contextual and environmental information is also needed in order to best anticipate...what action is intended. In order to obtain that contextual and environmental information, a commercially available Kinect Sensor is used to capture

  19. A study on the ergonomic assessment in the workplace

    NASA Astrophysics Data System (ADS)

    Tee, Kian Sek; Low, Eugene; Saim, Hashim; Zakaria, Wan Nurshazwani Wan; Khialdin, Safinaz Binti Mohd; Isa, Hazlita; Awad, M. I.; Soon, Chin Fhong

    2017-09-01

    Ergonomics has recently gained attention and consideration from workers in many different fields of work. It has a large impact on worker comfort, which directly affects work efficiency and productivity. Workers report suffering from painful postures and injuries in the workplace. Musculoskeletal disorders (MSDs) are the problem most frequently reported by workers. This problem occurs due to workers' lack of knowledge of, and alertness to, the ergonomics of their surroundings. This paper reviews the approaches and instruments used in previous research on the evaluation of ergonomics. The two main assessment methods often used for ergonomic evaluation are Rapid Upper Limb Assessment (RULA) and Rapid Entire Body Assessment (REBA). Popular devices are Inertial Measurement Units (IMU) and the Microsoft Kinect.

  20. 2.5D Multi-View Gait Recognition Based on Point Cloud Registration

    PubMed Central

    Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan

    2014-01-01

    This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM. PMID:24686727

  1. A New Calibration Method for Commercial RGB-D Sensors.

    PubMed

    Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu

    2017-05-24

    Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high quality 3D is required, i.e., 3D building models of centimeter‑level accuracy, accurate and reliable calibrations of these sensors are required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges.

  2. A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor.

    PubMed

    Zhang, Liang; Shen, Peiyi; Zhu, Guangming; Wei, Wei; Song, Houbing

    2015-08-14

    The Internet of Things (IoT) is driving innovation in an ever-growing set of application domains such as intelligent processing for autonomous robots. For an autonomous robot, one grand challenge is how to sense its surrounding environment effectively. Simultaneous Localization and Mapping with an RGB-D Kinect camera sensor on the robot, called RGB-D SLAM, has been developed for this purpose, but some technical challenges must be addressed. Firstly, the efficiency of the algorithm cannot satisfy real-time requirements; secondly, the accuracy of the algorithm is unacceptable. In order to address these challenges, this paper proposes a set of novel improvement methods as follows. Firstly, the Oriented FAST and Rotated BRIEF (ORB) method is used in feature detection and descriptor extraction. Secondly, a bidirectional Fast Library for Approximate Nearest Neighbors (FLANN) k-Nearest Neighbor (KNN) algorithm is applied to feature matching. Then, the improved RANdom SAmple Consensus (RANSAC) estimation method is adopted in the motion transformation. Meanwhile, high-precision Generalized Iterative Closest Point (GICP) is utilized to register the point cloud in the motion transformation optimization. To improve the accuracy of SLAM, the reduced dynamic covariance scaling (DCS) algorithm is formulated as a global optimization problem under the G2O framework. The effectiveness of the improved algorithm has been verified by testing on standard data and comparing with the ground truth obtained on Freiburg University's datasets. The Dr Robot X80 equipped with a Kinect camera was also applied in a building corridor to verify the correctness of the improved RGB-D SLAM algorithm. With the above experiments, it can be seen that the proposed algorithm achieves higher processing speed and better accuracy.
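    As a rough illustration of the front end described above (ORB features, FLANN matching, RANSAC motion estimation), the sketch below estimates the inter-frame camera motion of an RGB-D pair with OpenCV. It simplifies the paper's pipeline considerably: GICP refinement and the DCS/G2O back end are omitted, solvePnPRansac stands in for the paper's transformation estimation, and all parameter values are assumptions.

      import cv2
      import numpy as np

      def rgbd_frame_motion(gray1, depth1, gray2, K):
          """Estimate camera motion between two Kinect frames (ORB + FLANN + RANSAC).

          gray1/gray2: 8-bit grayscale images; depth1: depth of frame 1 in metres;
          K: 3x3 intrinsic matrix. Returns (rvec, tvec) of frame 2 w.r.t. frame 1.
          """
          orb = cv2.ORB_create(nfeatures=1000)
          kp1, des1 = orb.detectAndCompute(gray1, None)
          kp2, des2 = orb.detectAndCompute(gray2, None)

          # FLANN with an LSH index, the variant suited to binary ORB descriptors
          flann = cv2.FlannBasedMatcher(
              dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1),
              dict(checks=50))
          matches = flann.knnMatch(des1, des2, k=2)

          obj_pts, img_pts = [], []
          fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
          for pair in matches:
              if len(pair) < 2 or pair[0].distance > 0.75 * pair[1].distance:
                  continue                      # Lowe's ratio test
              u, v = kp1[pair[0].queryIdx].pt
              z = depth1[int(v), int(u)]
              if z <= 0:
                  continue                      # no valid depth for this keypoint
              obj_pts.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
              img_pts.append(kp2[pair[0].trainIdx].pt)

          if len(obj_pts) < 6:
              return None, None                 # too few matches for a robust pose
          ok, rvec, tvec, inliers = cv2.solvePnPRansac(
              np.float32(obj_pts), np.float32(img_pts), K.astype(np.float64), None)
          return (rvec, tvec) if ok else (None, None)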

  3. SAMuS: Service-Oriented Architecture for Multisensor Surveillance in Smart Homes

    PubMed Central

    Van de Walle, Rik

    2014-01-01

    The design of a service-oriented architecture for multisensor surveillance in smart homes is presented as an integrated solution enabling automatic deployment, dynamic selection, and composition of sensors. Sensors are implemented as Web-connected devices, with a uniform Web API. RESTdesc is used to describe the sensors, and a novel solution is presented to automatically compose Web APIs that can be applied with existing Semantic Web reasoners. We evaluated the solution by building a smart Kinect sensor that is able to dynamically switch between IR and RGB and to optimize person detection by incorporating feedback from pressure sensors, thus demonstrating the collaboration among sensors to enhance the detection of complex events. The performance results show that the platform scales for many Web APIs, as composition time remains limited to a few hundred milliseconds in almost all cases. PMID:24778579

  4. Assessing video games to improve driving skills: a literature review and observational study.

    PubMed

    Sue, Damian; Ray, Pradeep; Talaei-Khoei, Amir; Jonnagaddala, Jitendra; Vichitvanichphong, Suchada

    2014-08-07

    For individuals, especially older adults, playing video games is a promising tool for improving driving skills. The ease of use, wide availability, and interactivity of gaming consoles make them an attractive simulation tool. The objective of this study was to examine the feasibility and effects of installing video game consoles in the homes of individuals looking to improve their driving skills. A systematic literature review was conducted to assess the effect of playing video games on improving driving skills. An observational study was performed to evaluate the feasibility of using an Xbox 360 Kinect console for improving driving skills. Twenty-nine articles discussing the use of video games to improve driving skills were found in the literature. In our study, it was found that the Xbox 360 with Kinect is capable of improving physical and mental activities. Xbox video games were introduced to engage players in physical, visual and cognitive activities including endurance, postural sway, reaction time, eyesight, eye movement, attention and concentration, difficulties with orientation, and semantic fluency. However, manual dexterity, visuo-spatial perception and binocular vision could not be addressed by these games. It was observed that the Xbox Kinect (by incorporating Kinect sensor facilities) combines physical, visual and cognitive engagement of players. These results were consistent with those from the literature review. From the research that has been carried out, we can conclude that video game consoles are a viable solution for improving users' physical and mental state. In future work, we propose to carry out a thorough evaluation of the effects of video games on the driving skills of elderly people.

  5. Marker-less respiratory motion modeling using the Microsoft Kinect for Windows

    NASA Astrophysics Data System (ADS)

    Tahavori, F.; Alnowami, M.; Wells, K.

    2014-03-01

    Patient respiratory motion is a major problem during external beam radiotherapy of the thoracic and abdominal regions due to the associated organ and target motion. In addition, such motion introduces uncertainty in both radiotherapy planning and delivery and may potentially vary between the planning and delivery sessions. The aim of this work is to examine subject-specific external respiratory motion and its associated drift from an assumed average cycle which is the basis for many respiratory motion compensated applications including radiotherapy treatment planning and delivery. External respiratory motion data were acquired from a group of 20 volunteers using a marker-less 3D depth camera, Kinect for Windows. The anterior surface encompassing thoracic and abdominal regions were subject to principal component analysis (PCA) to investigate dominant variations. The first principal component typically describes more than 70% of the motion data variance in the thoracic and abdominal surfaces. Across all of the subjects used in this study, 58% of subjects demonstrate largely abdominal breathing and 33% exhibited largely thoracic dominated breathing. In most cases there is observable drift in respiratory motion during the 300s capture period, which is visually demonstrated using Kernel Density Estimation. This study demonstrates that for this cohort of apparently healthy volunteers, there is significant respiratory motion drift in most cases, in terms of amplitude and relative displacement between the thoracic and abdominal respiratory components. This has implications for the development of effective motion compensation methodology.
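    The dominant-motion analysis described above amounts to a principal component analysis of the per-frame surface samples. A minimal sketch, assuming the depth frames have already been cropped to the anterior torso region and flattened into one row per frame; it is not the authors' code.

      import numpy as np

      def respiratory_pca(frames, n_components=3):
          """PCA of an (n_frames, n_pixels) matrix of torso depth values.

          Returns the fraction of variance explained by each retained component and
          the corresponding temporal scores (the 'breathing signals').
          """
          X = frames - frames.mean(axis=0)          # remove the static surface shape
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          explained = s**2 / np.sum(s**2)
          scores = U[:, :n_components] * s[:n_components]
          return explained[:n_components], scores

      # e.g. explained[0] > 0.7 corresponds to the observation that the first
      # principal component typically explains more than 70% of the variance.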

  6. Active Gaming as a Form of Exercise to Induce Hypoalgesia.

    PubMed

    Carey, Christopher; Naugle, Keith E; Aqeel, Dania; Ohlman, Thomas; Naugle, Kelly M

    2017-08-01

    An acute bout of moderate-to-vigorous exercise temporarily reduces pain sensitivity in healthy adults. Recently, active gaming has been rising in popularity as a means of light-to-moderate exercise and may be particularly suitable for deconditioned individuals. Whether the physical activity elicited in active games can produce a hypoalgesic effect remains unknown. The purpose of this study was to determine whether active videogames can reduce pressure and heat pain sensitivity in healthy adults. We also evaluated the relationship between the physical activity elicited by the games and the magnitude of the hypoalgesic response. Twenty-one healthy adults played four different active games on separate days, including Microsoft® Kinect Xbox® One's Fighter Within and Sports Rival's Tennis, and Nintendo® Wii™ Sports' Boxing and Tennis. Heat pain thresholds on the forearm and pressure pain thresholds (PPTs) on the trapezius and forearm were assessed immediately before and after a 15-minute active gaming or control session. Minutes spent in sedentary time and moderate-to-vigorous physical activity (MVPA) during active gaming were measured with an accelerometer. The analyses revealed that PPTs at the forearm and trapezius significantly increased from pretest to posttest following Kinect Fighter Within. PPTs at the trapezius also significantly increased from pretest to posttest following Wii Boxing. The magnitude of the hypoalgesic response was significantly correlated with MVPA and sedentary time during gameplay. These results suggest that an active gaming session played at a moderate intensity is capable of temporarily reducing pain sensitivity.

  7. Active Videogaming for Individuals with Severe Movement Disorders: Results from a Community Study.

    PubMed

    Chung, Peter J; Vanderbilt, Douglas L; Schrager, Sheree M; Nguyen, Eugene; Fowler, Eileen

    2015-06-01

    Active videogaming (AVG) has potential to provide positive health outcomes for individuals with cerebral palsy (CP), but their use for individuals with severe motor impairments is limited. Our objective was to evaluate the accessibility and enjoyment of videogames using the Kinect™ (Microsoft, Redmond, WA) with the Flexible Action and Articulated Skeleton Toolkit (FAAST) system (University of Southern California Institute for Creative Technologies, Los Angeles, CA) for individuals with severely limiting CP. A videogaming system was installed in a community center serving adults with CP, and a staff member was instructed in its use. Participants completed a baseline survey assessing demographics, mobility, and prior videogame experience; they then used the FAAST system with Kinect and completed a 5-point Likert survey to assess their experience. Descriptive statistics assessed overall enjoyment of the system, and Mann-Whitney U tests were conducted to determine whether responses differed by demographic factors, mobility, or prior videogame experience. Twenty-two subjects were recruited. The enjoyment scale demonstrated high internal consistency (Cronbach's alpha=0.88). The mean total enjoyment score was 4.24 out of 5. Median scores did not significantly differ by ethnicity, gender, CP severity, or previous videogame exposure. The FAAST with Kinect is a low-cost system that engages individuals with severe movement disorders across a wide range of physical ability and videogame experience. Further research should be conducted on in-home use, therapeutic applications, and potential benefits for socialization.

  8. Automatic detection of measurement points for non-contact vibrometer-based diagnosis of cardiac arrhythmias

    NASA Astrophysics Data System (ADS)

    Metzler, Jürgen; Kroschel, Kristian; Willersinn, Dieter

    2017-03-01

    Monitoring of the heart rhythm is the cornerstone of the diagnosis of cardiac arrhythmias. It is done by means of electrocardiography, which relies on electrodes attached to the skin of the patient. We present a new system approach based on the so-called vibrocardiogram that allows automatic non-contact registration of the heart rhythm. Because of the contactless principle, the technique offers potential advantages in medical fields such as emergency medicine (burn patients) or premature baby care, where adhesive electrodes are not easily applicable. A laser-based, mobile, contactless vibrometer for on-site diagnostics that works on the principle of laser Doppler vibrometry allows the acquisition of vital functions in the form of a vibrocardiogram. Preliminary clinical studies at the Klinikum Karlsruhe have shown that the region around the carotid artery and the chest region are appropriate for this purpose. However, the challenge is to find a suitable measurement point in these parts of the body, which differs from person to person due to, e.g., physiological properties of the skin. Therefore, we propose a new Microsoft Kinect-based approach. When a suitable measurement area on the appropriate parts of the body is detected by processing the Kinect data, the vibrometer is automatically aligned on an initial location within this area. Then, vibrocardiograms at different locations within this area are successively acquired until a sufficient measuring quality is achieved. This optimal location is found by exploiting the autocorrelation function.

  9. Processing Uav and LIDAR Point Clouds in Grass GIS

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing; a simple grid-based decimation is sketched below. We implemented and compared several decimation techniques in regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility not only by the scientific community but also by the original authors themselves.
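
    As an illustration of the decimation step mentioned above, the following minimal Python sketch thins a dense point cloud by keeping one point per cell of a regular 3D grid. It is a sketch under stated assumptions, not the GRASS GIS implementation; the function name grid_decimate and the cell size are illustrative.

        import numpy as np

        def grid_decimate(points, cell_size):
            """Keep one representative point per occupied grid cell."""
            # Assign every point to an integer grid cell.
            cells = np.floor(points / cell_size).astype(np.int64)
            # Keep the first point encountered in each occupied cell.
            _, keep_idx = np.unique(cells, axis=0, return_index=True)
            return points[np.sort(keep_idx)]

        # Example: thin a synthetic stand-in for a dense UAV/SfM point cloud.
        rng = np.random.default_rng(0)
        dense = rng.uniform(0, 10, size=(100000, 3))
        thinned = grid_decimate(dense, cell_size=0.5)
        print(len(dense), "->", len(thinned), "points")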

  10. A New Calibration Method for Commercial RGB-D Sensors

    PubMed Central

    Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu

    2017-01-01

    Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high quality 3D is required, i.e., 3D building models of centimeter-level accuracy, accurate and reliable calibrations of these sensors are required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges. PMID:28538695

  11. Machine learning-based augmented reality for improved surgical scene understanding.

    PubMed

    Pauly, Olivier; Diotte, Benoit; Fallavollita, Pascal; Weidert, Simon; Euler, Ekkehard; Navab, Nassir

    2015-04-01

    In orthopedic and trauma surgery, AR technology can support surgeons in the challenging task of understanding the spatial relationships between the anatomy, the implants and their tools. In this context, we propose a novel augmented visualization of the surgical scene that intelligently mixes the different sources of information provided by a mobile C-arm combined with a Kinect RGB-Depth sensor. To this end, we introduce a learning-based paradigm that aims at (1) identifying the relevant objects or anatomy in both Kinect and X-ray data, and (2) creating an object-specific pixel-wise alpha map that permits relevance-based fusion of the video and the X-ray images within one single view. In 12 simulated surgeries, we show very promising results, providing surgeons with a better understanding of the surgical scene as well as improved depth perception. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Technical Assessment: Autonomy

    DTIC Science & Technology

    2015-02-23

    low-cost sensors for automotive applications, mobile devices, and video games. If DoD develops CONOPS for lower-performance systems, there is an...advancement in this area is Microsoft's Kinect technology. While originally designed for the Xbox video game platform, it is now being used or developed for...One area worthy of consideration is applied game theory, which may allow systems to effectively respond to adversary actions. Recommendation 4

  13. Towards NIRS-based hand movement recognition.

    PubMed

    Paleari, Marco; Luciani, Riccardo; Ariano, Paolo

    2017-07-01

    This work reports preliminary results on hand movement recognition with Near InfraRed Spectroscopy (NIRS) and surface ElectroMyoGraphy (sEMG). Whether based on physical contact (touchscreens, data-gloves, etc.), vision techniques (Microsoft Kinect, Sony PlayStation Move, etc.), or other modalities, hand movement recognition is a pervasive function in today's environment and is at the base of many gaming, social, and medical applications. Although, in recent years, the use of muscle information extracted by sEMG has spread out from medical applications into the consumer world, this technique still falls short when dealing with movements of the hand. We tested NIRS as a technique to obtain another point of view on muscle phenomena and showed that, within a specific selection of movements, NIRS can be used to recognize movements and return information regarding muscles at different depths. Furthermore, we propose three different multimodal movement recognition approaches and compare their performances.

  14. Object and Facial Recognition in Augmented and Virtual Reality: Investigation into Software, Hardware and Potential Uses

    NASA Technical Reports Server (NTRS)

    Schulte, Erin

    2017-01-01

    As augmented and virtual reality grow in popularity, and more researchers focus on their development, other fields of technology have grown in the hope of integrating with the up-and-coming hardware currently on the market. Namely, there has been a focus on how to make an intuitive, hands-free human-computer interaction (HCI) utilizing AR and VR that allows users to control their technology with little to no physical interaction with hardware. Computer vision, which is utilized in devices such as the Microsoft Kinect, webcams and other similar hardware, has shown potential in assisting with the development of an HCI system that requires next to no human interaction with computing hardware and software. Object and facial recognition are two subsets of computer vision, both of which can be applied to HCI systems in the fields of medicine, security, industrial development and other similar areas.

  15. Assessing Video Games to Improve Driving Skills: A Literature Review and Observational Study

    PubMed Central

    Sue, Damian; Vichitvanichphong, Suchada

    2014-01-01

    Background For individuals, especially older adults, playing video games is a promising tool for improving their driving skills. The ease of use, wide availability, and interactivity of gaming consoles make them an attractive simulation tool. Objective The objective of this study was to look at the feasibility and effects of installing video game consoles in the homes of individuals looking to improve their driving skills. Methods A systematic literature review was conducted to assess the effect of playing video games on improving driving skills. An observational study was performed to evaluate the feasibility of using an Xbox 360 Kinect console for improving driving skills. Results Twenty-nine articles discussing the implementation of video games in improving driving skills were found in the literature. In our study, it was found that the Xbox 360 with Kinect is capable of improving physical and mental activities. Xbox video games were introduced to engage players in physical, visual and cognitive activities including endurance, postural sway, reaction time, eyesight, eye movement, attention and concentration, difficulties with orientation, and semantic fluency. However, manual dexterity, visuo-spatial perception and binocular vision could not be addressed by these games. It was observed that the Xbox Kinect (by incorporating the Kinect sensor facilities) combines physical, visual and cognitive engagement of players. These results were consistent with those from the literature review. Conclusions From the research that has been carried out, we can conclude that video game consoles are a viable solution for improving users' physical and mental state. In the future we propose to carry out a thorough evaluation of the effects of video games on driving skills in elderly people. PMID:25654355

  16. Collision prediction software for radiotherapy treatments.

    PubMed

    Padilla, Laura; Pearson, Erik A; Pelizzari, Charles A

    2015-11-01

    This work presents a method of collision prediction for external beam radiotherapy using surface imaging. The present methodology focuses on collision prediction during treatment simulation to evaluate the clearance of a patient's treatment position and allow for its modification if necessary. A Kinect camera (Microsoft, Redmond, WA) is used to scan the patient and immobilization devices in the treatment position at the simulator. The surface is reconstructed using the Skanect software (Occipital, Inc., San Francisco, CA). The treatment isocenter is marked using simulated orthogonal lasers projected on the surface scan. The point cloud of this surface is then shifted to isocenter and converted from Cartesian to cylindrical coordinates. A slab models the treatment couch. A cylinder with a radius equal to the normal distance from isocenter to the collimator plate, and a height defined by the collimator diameter, is used to estimate collisions. Points within the cylinder clear through a full gantry rotation with the treatment couch at 0°, while points outside of it collide. The angles of collision are reported. This methodology was experimentally verified using a mannequin positioned in an alpha cradle with both arms up. A planning CT scan of the mannequin was performed, two isocenters were marked in Pinnacle, and this information was exported to AlignRT (VisionRT, London, UK)--a surface imaging system for patient positioning. This was used to ensure accurate positioning of the mannequin in the treatment room, when available. Collision calculations were performed for the two treatment isocenters and the results compared to the collisions detected in the room. The accuracy of the Kinect-Skanect surface was evaluated by comparing it to the external surface of the planning CT scan. Experimental verification results showed that the predicted angles of collision matched those recorded in the room within 0.5° in most cases (largest deviation -1.2°). The accuracy study for the Kinect-Skanect surface showed an average discrepancy between the CT external contour and the surface scan of 2.2 mm. This methodology provides fast and reliable collision predictions using surface imaging. The use of the Kinect-Skanect system allows for a comprehensive modeling of the patient topography, including all the relevant anatomy and immobilization devices that may lead to collisions. The use of this tool at the treatment simulation stage may allow therapists to evaluate the clearance of a patient's treatment position and optimize it before the planning CT scan is performed. This can allow for safer treatments for the patients due to better collision predictions and improved clinical workflow by minimizing replanning and resimulations due to unforeseen clearance issues.
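
    The clearance test described above reduces to a simple cylindrical-coordinate check once the surface points have been shifted to the isocenter. The sketch below is an illustrative reconstruction of that check, not the authors' code; the choice of the y axis as the gantry rotation axis and all names are assumptions.

        import numpy as np

        def collision_angles(points_iso, clearance_radius, collimator_halfheight):
            """Return the gantry angles (degrees) at which surface points would collide.

            points_iso: (N, 3) surface points shifted so the isocenter is the origin,
                        with the gantry rotation axis along y.
            clearance_radius: normal distance from isocenter to the collimator plate.
            collimator_halfheight: half the collimator diameter along the rotation axis.
            """
            x, y, z = points_iso[:, 0], points_iso[:, 1], points_iso[:, 2]
            r = np.hypot(x, z)                             # cylindrical radius of each point
            swept = np.abs(y) <= collimator_halfheight     # inside the band swept by the collimator
            collides = swept & (r > clearance_radius)      # outside the clearance cylinder
            return np.degrees(np.arctan2(z[collides], x[collides]))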

  17. "Alien Health Game": An Embodied Exergame to Instruct in Nutrition and MyPlate.

    PubMed

    Johnson-Glenberg, Mina C; Hekler, Eric B

    2013-12-01

    A feasibility study was run on an immersive, embodied exergame ("Alien Health Game") designed to teach 4th-12th-grade students about nutrition and several U.S. Department of Agriculture MyPlate guidelines. This study assessed acceptability and limited efficacy. Students learned about the amount of nutrients and optimizers in common food items and practiced making rapid food choices while engaging in short cardiovascular activities. Nineteen 4th graders played a "mixed reality" game that included both digital components (projected graphics on the floor) and tangible, physical components (hand-held motion-tracking wands). Players made food choices and experienced immediate feedback on how each item affected the Alien avatar's alertness/health state. One member of the playing dyad had to run short distances to make the game work. The final level included a digital projection of the MyPlate icon, and each food item filled the appropriate quadrant dynamically. All students remained engaged with the game after approximately 1 hour of play. Significant learning gains were seen on a pretest and posttest that assessed nutrition knowledge (paired t18=4.13, P<0.001). In addition, significant learning gains were also seen in knowledge regarding MyPlate (paired t18=3.29, P<0.004). Results suggest preliminary feasibility via demonstrated acceptability and improved within-group content knowledge. Future research should explore improved measures of knowledge gains, alternative mechanisms for supporting the game mechanics to increase the scalability of the system (i.e., via Kinect® [Microsoft®, Redmond, WA] sensors), and the formal evaluation of the system via a randomized controlled trial.

  18. The design of a purpose-built exergame for fall prediction and prevention for older people.

    PubMed

    Marston, Hannah R; Woodbury, Ashley; Gschwind, Yves J; Kroll, Michael; Fink, Denis; Eichberg, Sabine; Kreiner, Karl; Ejupi, Andreas; Annegarn, Janneke; de Rosario, Helios; Wienholtz, Arno; Wieching, Rainer; Delbaere, Kim

    2015-01-01

    Falls in older people represent a major age-related health challenge facing our society. Novel methods for delivery of falls prevention programs are required to increase effectiveness and adherence to these programs while containing costs. The primary aim of the Information and Communications Technology-based System to Predict and Prevent Falls (iStoppFalls) project was to develop innovative home-based technologies for continuous monitoring and exercise-based prevention of falls in community-dwelling older people. The aim of this paper is to describe the components of the iStoppFalls system. The system comprised 1) a TV, 2) a PC, 3) the Microsoft Kinect, 4) a wearable sensor and 5) assessment and training software as the main components. The iStoppFalls system implements existing technologies to deliver a tailored home-based exercise and education program aimed at reducing fall risk in older people. A risk assessment tool was designed to identify fall risk factors. The content and progression rules of the iStoppFalls exergames were developed from evidence-based fall prevention interventions targeting muscle strength and balance in older people. The iStoppFalls fall prevention program, used in conjunction with the multifactorial fall risk assessment tool, aims to provide a comprehensive, individualised, yet novel fall risk assessment and prevention program that is feasible for widespread use to prevent falls and fall-related injuries. This work provides a new approach to engage older people in home-based exercise programs to complement or provide a potentially motivational alternative to traditional exercise to reduce the risk of falling.

  19. Analysis of muscle activation in lower extremity for static balance.

    PubMed

    Chakravarty, Kingshuk; Chatterjee, Debatri; Das, Rajat Kumar; Tripathy, Soumya Ranjan; Sinha, Aniruddha

    2017-07-01

    Balance plays an important role in human bipedal locomotion. Degeneration of balance control is prominent in stroke patients, elderly adults and even the majority of obese people. Designing a personalized balance training program, in order to strengthen muscles, requires the analysis of muscle activation during an activity. In this paper we propose an affordable and portable approach to analyze the relationship between the static balance strategy and the activation of various lower extremity muscles. To do so, we use the Microsoft Kinect for Xbox 360 as a motion sensing device and a Wii balance board for measuring external force information. For analyzing the muscle activation pattern related to static balance, participants are asked to perform the single limb stance (SLS) exercise on the balance board in front of the Kinect. Static optimization to minimize the overall muscle activation is carried out using OpenSim, an open-source musculoskeletal simulation software; a toy version of this optimization is sketched below. The study is done on ten normal and ten obese people, grouped according to body mass index (BMI). Results suggest that lower extremity muscles such as the biceps femoris, psoas major, sartorius and iliacus play the major role both in maintaining balance on one limb and in maintaining the flexion of the other limb during SLS. Further investigations reveal that the higher muscle activations of the flexed leg for the normal group demonstrate higher strength. Moreover, the lower muscle activation of the standing leg for the normal group demonstrates more headroom for the biceps femoris short head and psoas major to withstand the load, and hence better static balance control.
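
    The static optimization mentioned above is, at its core, a constrained minimization of muscle activations. The following toy sketch illustrates that idea with SciPy rather than OpenSim; the maximum isometric forces, moment arms and required joint moment are made-up illustrative values, not data from the study.

        import numpy as np
        from scipy.optimize import minimize

        # Toy static optimization: find activations a_i in [0, 1] that minimize
        # sum(a_i^2) while the muscles together produce a required joint moment.
        f_max = np.array([2000.0, 1500.0, 900.0])      # N, per muscle (made up)
        moment_arm = np.array([0.04, 0.05, 0.03])      # m, per muscle (made up)
        required_moment = 60.0                         # N*m, e.g. from inverse dynamics

        objective = lambda a: np.sum(a ** 2)
        constraints = [{"type": "eq",
                        "fun": lambda a: np.dot(a * f_max, moment_arm) - required_moment}]
        bounds = [(0.0, 1.0)] * len(f_max)

        result = minimize(objective, x0=np.full(len(f_max), 0.5),
                          bounds=bounds, constraints=constraints)
        print("activations:", np.round(result.x, 3))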

  20. An evaluation of the Kinect-Ed presentation, a motivating nutrition and cooking intervention for young adolescents in grades 6-8.

    PubMed

    Santarossa, Sara; Ciccone, Jillian; Woodruff, Sarah J

    2015-09-01

    Recently, public health messaging has included having more family meals and involving young adolescents (YAs) with meal preparation to improve healthful diets and family dinner frequency (FDF). Kinect-Ed, a motivational nutrition education presentation, was created to encourage YAs (grades 6-8) to help with meal preparation and ultimately improve FDF. The purpose of this study was to evaluate the Kinect-Ed presentation, with the goals of the presentation being to improve self-efficacy for cooking (SE), food preparation techniques (TECH), food preparation frequency (PREP), family meal attitudes and behaviours, and ultimately increase FDF. A sample of YAs (n = 219) from Southern Ontario, Canada, completed pre- and postpresentation surveys, measuring FDF, PREP, SE, and TECH. Kinect-Ed successfully improved participants' FDF (p < 0.01), PREP (p < 0.01), SE (p < 0.01), and TECH (p < 0.01). Overall, the goals of the presentation were met. Encouraging YAs to help prepare meals and get involved in the kitchen may reduce the time needed from parents to prepare meals and, in turn, allow more time for frequent family dinners.

  1. Acquisition and Neural Network Prediction of 3D Deformable Object Shape Using a Kinect and a Force-Torque Sensor.

    PubMed

    Tawbe, Bilal; Cretu, Ana-Maria

    2017-05-11

    The realistic representation of deformations is still an active area of research, especially for deformable objects whose behavior cannot be simply described in terms of elasticity parameters. This paper proposes a data-driven neural-network-based approach for implicitly capturing and predicting the deformations of an object subject to external forces. Visual data, in the form of 3D point clouds gathered by a Kinect sensor, are collected over an object while forces are exerted by means of the probing tip of a force-torque sensor. A novel approach based on neural gas fitting is proposed to describe the particularities of a deformation over the selectively simplified 3D surface of the object, without requiring knowledge of the object material. An alignment procedure, a distance-based clustering, and inspiration from stratified sampling support this process. The resulting representation is denser in the region of the deformation (an average of 96.6% perceptual similarity with the collected data in the deformed area), while still preserving the object's overall shape (86% similarity over the entire surface) and using on average only 40% of the number of vertices in the mesh. A series of feedforward neural networks is then trained to predict the mapping between the force parameters characterizing the interaction with the object and the change in the object shape, as captured by the fitted neural gas nodes. This series of networks allows for the prediction of the deformation of an object when subject to unknown interactions.
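
    For readers unfamiliar with neural gas fitting, the sketch below shows the core update rule: nodes are ranked by distance to each sampled surface point and pulled toward it with a strength that decays with rank. It is a generic illustration with assumed parameters, not the authors' adaptation (which adds alignment, clustering and stratified sampling).

        import numpy as np

        def neural_gas(points, n_nodes=200, n_iter=20000,
                       eps=(0.5, 0.01), lam=(10.0, 0.5), seed=0):
            """Fit a set of neural gas nodes to a 3D point cloud.

            points: (N, 3) array sampled from the object surface.
            eps, lam: (initial, final) learning rate and neighbourhood range,
                      both annealed exponentially over the iterations.
            """
            rng = np.random.default_rng(seed)
            nodes = points[rng.choice(len(points), n_nodes, replace=False)].copy()
            for t in range(n_iter):
                frac = t / n_iter
                eps_t = eps[0] * (eps[1] / eps[0]) ** frac
                lam_t = lam[0] * (lam[1] / lam[0]) ** frac
                x = points[rng.integers(len(points))]
                # Rank all nodes by distance to the sample and pull each node toward it,
                # with an update strength that decays with its rank.
                dists = np.linalg.norm(nodes - x, axis=1)
                ranks = np.argsort(np.argsort(dists))
                nodes += (eps_t * np.exp(-ranks / lam_t))[:, None] * (x - nodes)
            return nodes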

  2. Model-Based Reinforcement of Kinect Depth Data for Human Motion Capture Applications

    PubMed Central

    Calderita, Luis Vicente; Bandera, Juan Pedro; Bustos, Pablo; Skiadopoulos, Andreas

    2013-01-01

    Motion capture systems have recently experienced a strong evolution. New cheap depth sensors and open source frameworks, such as OpenNI, allow for perceiving human motion on-line without using invasive systems. However, these proposals do not evaluate the validity of the obtained poses. This paper addresses this issue using a model-based pose generator to complement the OpenNI human tracker. The proposed system enforces kinematic constraints, eliminates odd poses and filters sensor noise, while learning the real dimensions of the performer's body. The system is composed of a PrimeSense sensor, an OpenNI tracker and a kinematics-based filter, and has been extensively tested. Experiments show that the proposed system improves pure OpenNI results at a very low computational cost. PMID:23845933

  3. Intensive strength and balance training with the Kinect console (Xbox 360) in a patient with CMT1A.

    PubMed

    Pagliano, Emanuela; Foscan, Maria; Marchi, Alessia; Corlatti, Alice; Aprile, Giorgia; Riva, Daria

    2017-08-01

    Effective drugs for type 1A Charcot-Marie-Tooth (CMT1A) disease are not available. Various forms of moderate exercise are beneficial, but few data are available on the effectiveness of exercise in children with CMT1A. To investigate the feasibility and effectiveness of exercises to improve ankle strength and limb function in a child with CMT1A. Outpatient clinic. Nine-year-old boy with CMT1A. The rehabilitation program consisted of ankle exercises and Kinect videogame-directed physical activities (using an Xbox 360 console/movement sensor) aimed at improving balance and limb strength. The program was given 3 times a week for 5 weeks. The child was assessed at baseline, after 5 weeks, and 3 and 6 months later. By the end of follow-up, the child's balance and endurance had improved, but ankle strength had not. The encouraging results for balance and endurance justify further studies on videogame-directed activities in children/adolescents with CMT1A.

  4. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    PubMed

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at the joint calibration of multiple sensors (more than four devices), which is normally a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method achieves satisfactory performance in a real-time project system and that its accuracy is higher than the manufacturer's calibration.

  5. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras

    PubMed Central

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-01

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at the joint calibration of multiple sensors (more than four devices), which is normally a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method achieves satisfactory performance in a real-time project system and that its accuracy is higher than the manufacturer's calibration. PMID:28672823

  6. Collision prediction software for radiotherapy treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Padilla, Laura; Pearson, Erik A.; Pelizzari, Charles A., E-mail: c-pelizzari@uchicago.edu

    2015-11-15

    Purpose: This work presents a method of collision prediction for external beam radiotherapy using surface imaging. The present methodology focuses on collision prediction during treatment simulation to evaluate the clearance of a patient's treatment position and allow for its modification if necessary. Methods: A Kinect camera (Microsoft, Redmond, WA) is used to scan the patient and immobilization devices in the treatment position at the simulator. The surface is reconstructed using the Skanect software (Occipital, Inc., San Francisco, CA). The treatment isocenter is marked using simulated orthogonal lasers projected on the surface scan. The point cloud of this surface is then shifted to isocenter and converted from Cartesian to cylindrical coordinates. A slab models the treatment couch. A cylinder with a radius equal to the normal distance from isocenter to the collimator plate, and a height defined by the collimator diameter, is used to estimate collisions. Points within the cylinder clear through a full gantry rotation with the treatment couch at 0°, while points outside of it collide. The angles of collision are reported. This methodology was experimentally verified using a mannequin positioned in an alpha cradle with both arms up. A planning CT scan of the mannequin was performed, two isocenters were marked in Pinnacle, and this information was exported to AlignRT (VisionRT, London, UK)—a surface imaging system for patient positioning. This was used to ensure accurate positioning of the mannequin in the treatment room, when available. Collision calculations were performed for the two treatment isocenters and the results compared to the collisions detected in the room. The accuracy of the Kinect-Skanect surface was evaluated by comparing it to the external surface of the planning CT scan. Results: Experimental verification results showed that the predicted angles of collision matched those recorded in the room within 0.5° in most cases (largest deviation −1.2°). The accuracy study for the Kinect-Skanect surface showed an average discrepancy between the CT external contour and the surface scan of 2.2 mm. Conclusions: This methodology provides fast and reliable collision predictions using surface imaging. The use of the Kinect-Skanect system allows for a comprehensive modeling of the patient topography, including all the relevant anatomy and immobilization devices that may lead to collisions. The use of this tool at the treatment simulation stage may allow therapists to evaluate the clearance of a patient's treatment position and optimize it before the planning CT scan is performed. This can allow for safer treatments for the patients due to better collision predictions and improved clinical workflow by minimizing replanning and resimulations due to unforeseen clearance issues.

  7. A virtual pointer to support the adoption of professional vision in laparoscopic training.

    PubMed

    Feng, Yuanyuan; McGowan, Hannah; Semsar, Azin; Zahiri, Hamid R; George, Ivan M; Turner, Timothy; Park, Adrian; Kleinsmith, Andrea; Mentis, Helena M

    2018-05-23

    To assess a virtual pointer in supporting surgical trainees' development of professional vision in laparoscopic surgery. We developed a virtual pointing and telestration system utilizing the Microsoft Kinect movement sensor as an overlay for any imaging system. Training with the application was compared to a standard condition, i.e., verbal instruction with unmediated gestures, in a laparoscopic training environment. Seven trainees performed four simulated laparoscopic tasks guided by an experienced surgeon as the trainer. Trainee performance was subjectively assessed by the trainee and trainer, and objectively measured by number of errors, time to task completion, and economy of movement. No significant differences in errors and time to task completion were obtained between the virtual pointer and standard conditions. Economy of movement in the non-dominant hand was significantly improved when using the virtual pointer ([Formula: see text]). The trainers perceived a significant improvement in trainee performance in the virtual pointer condition ([Formula: see text]), while the trainees perceived no difference. The trainers' perception of economy of movement was similar between the two conditions in the initial three runs and became significantly improved in the virtual pointer condition in the fourth run ([Formula: see text]). Results show that the virtual pointer system improves the trainer's perception of trainee performance, and this is reflected in the objective performance measures in the third and fourth training runs. The benefit of a virtual pointing and telestration system may be perceived by the trainers early on in training, but this is not evident in objective trainee performance until further mastery has been attained. In addition, the performance improvement in economy of motion specifically shows that the virtual pointer improves the adoption of professional vision: an improved ability to see and use the laparoscopic video results in more direct instrument movement.

  8. Fault tolerant multi-sensor fusion based on the information gain

    NASA Astrophysics Data System (ADS)

    Hage, Joelle Al; El Najjar, Maan E.; Pomorski, Denis

    2017-01-01

    In the last decade, multi-robot systems have been used in several applications, for example by the military, in intervention areas presenting danger to human life, in the management of natural disasters, and in environmental monitoring, exploration and agriculture. The integrity of the robots' localization must be ensured in order to achieve their mission in the best conditions. Robots are equipped with proprioceptive (encoders, gyroscope) and exteroceptive sensors (Kinect). However, these sensors can be affected by various fault types that can be assimilated to erroneous measurements, bias, outliers, drifts, etc. In the absence of a sensor fault diagnosis step, the integrity and the continuity of the localization are affected. In this work, we present a multi-sensor fusion approach with Fault Detection and Exclusion (FDE) based on information theory. In this context, we are interested in the information gain given by an observation, which may be relevant when dealing with the fault tolerance aspect. Moreover, threshold optimization based on the quantity of information given by a decision on the true hypothesis is highlighted.

  9. A New Multi-Sensor Fusion Scheme to Improve the Accuracy of Knee Flexion Kinematics for Functional Rehabilitation Movements.

    PubMed

    Tannous, Halim; Istrate, Dan; Benlarbi-Delai, Aziz; Sarrazin, Julien; Gamet, Didier; Ho Ba Tho, Marie Christine; Dao, Tien Tuan

    2016-11-15

    Exergames have been proposed as a potential tool to improve the current practice of musculoskeletal rehabilitation. Inertial or optical motion capture sensors are commonly used to track the subject's movements. However, the use of these motion capture tools suffers from a lack of accuracy in estimating joint angles, which could lead to wrong data interpretation. In this study, we proposed a real-time quaternion-based fusion scheme, based on the extended Kalman filter, between inertial and visual motion capture sensors, to improve the estimation accuracy of joint angles. The fusion outcome was compared to angles measured using a goniometer. The fusion output shows a better estimation when compared to inertial measurement unit and Kinect outputs. We noted a smaller error (3.96°) compared to the one obtained using inertial sensors (5.04°). The proposed multi-sensor fusion system is therefore accurate enough to be applied, in future works, to our serious game for musculoskeletal rehabilitation.
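
    The paper's filter is a quaternion-based extended Kalman filter, which is too long to reproduce here; the sketch below is a deliberately simplified scalar Kalman filter that fuses a gyroscope-propagated knee angle with a Kinect-measured angle, purely to illustrate the predict/update structure. All noise values, the frame rate and the function name are assumptions.

        import numpy as np

        def fuse_knee_angle(imu_rate_deg_s, kinect_angle_deg, dt=1/30,
                            q_process=0.5, r_kinect=9.0, angle0=0.0, p0=4.0):
            """Scalar Kalman filter: predict the knee angle from the IMU angular rate,
            then correct it with the Kinect-measured angle at every frame."""
            angle, p = angle0, p0
            fused = []
            for rate, z in zip(imu_rate_deg_s, kinect_angle_deg):
                # Predict: integrate the gyroscope rate over one frame.
                angle += rate * dt
                p += q_process
                # Update: blend in the Kinect observation according to its noise.
                k = p / (p + r_kinect)
                angle += k * (z - angle)
                p *= (1 - k)
                fused.append(angle)
            return np.array(fused)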

  10. Kinect V2 Performance Assessment in Daily-Life Gestures: Cohort Study on Healthy Subjects for a Reference Database for Automated Instrumental Evaluations on Neurological Patients

    PubMed Central

    Malosio, Matteo; Molinari Tosatti, Lorenzo

    2017-01-01

    Background The increase in healthcare costs related to poststroke rehabilitation requires new sustainable and cost-effective strategies for promoting autonomous and dehospitalized motor training. In the Riprendo@Home and Future Home for Future Communities research projects, the promising approach of introducing low-cost technologies that promote home rehabilitation is exploited. In order to provide a reliable evaluation of patients, a reference database of healthy people's performances is required, and it should account for the variability of healthy people's performances. Methods 78 healthy subjects performed several repetitions of daily-life gestures, the reaching movement (RM) and the hand-to-mouth movement (HtMM), with both the dominant and nondominant upper limbs. Movements were recorded with a Kinect V2. A synthetic biomechanical protocol based on kinematic, dynamic, and motor control parameters was used to assess the motor performance of the healthy people. The investigation was conducted by clustering participants depending on their limb dominance (right/left), gender (male/female), and age (young/middle/senior) as sources of variability. Results Results showed that limb dominance has minor relevance in affecting RM and HtMM; gender has relevance in affecting HtMM; age has a major effect on RM and HtMM. Conclusions An investigation of healthy subjects' upper limb performances during daily-life gestures was performed with the Kinect V2 sensor. The findings will be the basis for a database of normative data for the motor evaluation of neurological patients. PMID:29358893

  11. An interactive VR system based on full-body tracking and gesture recognition

    NASA Astrophysics Data System (ADS)

    Zeng, Xia; Sang, Xinzhu; Chen, Duo; Wang, Peng; Guo, Nan; Yan, Binbin; Wang, Kuiru

    2016-10-01

    Most current virtual reality (VR) interactions are realized with hand-held input devices, which leads to a low degree of presence. There are other solutions using sensors like Leap Motion to recognize the gestures of users in order to interact in a more natural way, but navigation in these systems is still a problem, because they fail to map actual walking to virtual walking when only a partial body of the user is represented in the synthetic environment. Therefore, we propose a system in which users can walk around in the virtual environment as a humanoid model, selecting menu items and manipulating virtual objects using natural hand gestures. With a Kinect depth camera, the system tracks the joints of the user, mapping them to a full virtual body which follows the movement of the tracked user. The movements of the feet can be detected to determine whether the user is in a walking state, so that the walking of the model in the virtual world can be activated and stopped by means of animation control in the Unity engine. This method frees the hands of users compared to the traditional navigation approach using a hand-held device. We use the point cloud data obtained from the Kinect depth camera to recognize the gestures of users, such as swiping, pressing and manipulating virtual objects. Combining full body tracking and gesture recognition using Kinect, we achieve our interactive VR system in the Unity engine with a high degree of presence.

  12. The Impact of a Videogame-Based Pilot Physical Activity Program in Older Adults with Schizophrenia on Subjectively and Objectively Measured Physical Activity.

    PubMed

    Leutwyler, Heather; Hubbard, Erin; Cooper, Bruce; Dowling, Glenna

    2015-01-01

    The purpose of this report is to describe the impact of a videogame-based pilot physical activity program using the Kinect for Xbox 360 game system (Microsoft, Redmond, WA, USA) on physical activity in older adults with schizophrenia. In this one-group pretest-posttest pilot study, 20 participants played an active videogame for 30 min, once a week for 6 weeks. Physical activity was measured by self-report with the Yale Physical Activity Survey and objectively with the SenseWear Pro armband at enrollment and at the end of the 6-week program. There was a significant increase in the frequency of self-reported vigorous physical activity. We did not detect a statistically significant difference in objectively measured physical activity, although the changes in number of steps and sedentary activity were in the desired direction. These results suggest that participants' perception of physical activity intensity differs from the intensity objectively captured with a valid and reliable physical activity monitor.

  13. Children with developmental coordination disorder play active virtual reality games differently than children with typical development.

    PubMed

    Gonsalves, Leandra; Campbell, Amity; Jensen, Lynn; Straker, Leon

    2015-03-01

    Active virtual reality gaming (AVG) may be useful for children with developmental coordination disorder (DCD) to practice motor skills if their movement patterns are of good quality while engaged in AVG. This study aimed to examine: (1) the quality of motor patterns of children with DCD participating in AVG by comparing them with children with typical development (TD) and (2) whether differences existed in the motor patterns utilized with 2 AVG types: Sony PlayStation 3 Move and Microsoft Xbox 360 Kinect. This was a quasi-experimental, biomechanical laboratory-based study. Twenty-one children with DCD, aged 10 to 12 years, and 19 age- and sex-matched children with TD played a match of table tennis on each AVG type. Hand path, wrist angle, and elbow angle were recorded using a motion analysis system. Linear mixed-model analyses were used to determine differences between DCD and TD groups and Move and Kinect AVG type for forehands and backhands. Children with DCD utilized a slower hand path speed (backhand mean difference [MD]=1.20 m/s; 95% confidence interval [95% CI]=0.41, 1.98); greater wrist extension (forehand MD=34.3°; 95% CI=22.6, 47.0); and greater elbow flexion (forehand MD=22.3°; 95% CI=7.4, 37.1) compared with children with TD when engaged in AVG. There also were differences in movement patterns utilized between AVG types. Only simple kinematic measures were compared, and no data regarding movement outcome were assessed. If a therapeutic treatment goal is to promote movement quality in children with DCD, clinical judgment is required to select the most appropriate AVG type and determine whether movement quality is adequate for unsupervised practice. © 2015 American Physical Therapy Association.

  14. Comparison Between RGB and Rgb-D Cameras for Supporting Low-Cost Gnss Urban Navigation

    NASA Astrophysics Data System (ADS)

    Rossi, L.; De Gaetani, C. I.; Pagliari, D.; Realini, E.; Reguzzoni, M.; Pinto, L.

    2018-05-01

    Pure GNSS navigation is often unreliable in urban areas because of the presence of obstructions, which prevent correct reception of the satellite signal. The bridging of GNSS outages, as well as the vehicle attitude reconstruction, can be achieved by using complementary information, such as visual data acquired by RGB-D or RGB cameras. In this work, the possibility of integrating low-cost GNSS and visual data by means of an extended Kalman filter has been investigated. The focus is on the comparison between the use of RGB-D and RGB cameras. In particular, a Microsoft Kinect device (second generation) and a mirrorless Canon EOS M RGB camera have been compared. The former is an interesting RGB-D camera because of its low cost, ease of use and raw data accessibility. The latter has been selected for the high quality of the acquired images and for the possibility of mounting fixed focal length lenses with a lower weight and cost with respect to a reflex camera. The designed extended Kalman filter takes as input the GNSS-only trajectory and the relative orientation between subsequent pairs of images. Depending on the visual data acquisition system, the filter is different, because RGB-D cameras acquire both RGB and depth data, allowing the scale problem, which is typical of image-only solutions, to be solved. The two systems and filtering approaches were assessed by ad-hoc experimental tests, showing that the use of a Kinect device for supporting a u-blox low-cost receiver led to a trajectory with decimeter-level accuracy, which is 15% better than the one obtained when using the Canon EOS M camera.

  15. Mobile Monitoring Stations and Web Visualization of Biotelemetric System - Guardian II

    NASA Astrophysics Data System (ADS)

    Krejcar, Ondrej; Janckulik, Dalibor; Motalova, Leona; Kufel, Jan

    The main area of interest of our project is to provide a solution which can be used in different areas of health care and which will be available through PDAs (Personal Digital Assistants), web browsers or desktop clients. The realized system deals with an ECG sensor connected to mobile equipment, such as a PDA/Embedded device, based on the Microsoft Windows Mobile operating system. The whole system is based on the architecture of the .NET Compact Framework and Microsoft SQL Server. Visualization possibilities of the web interface and ECG data are also discussed, and a final suggestion is made for a Microsoft Silverlight solution along with current screenshots of the implemented solution. The project was successfully tested in a real environment in a cryogenic room (-136 °C).

  16. A Kinect(™) camera based navigation system for percutaneous abdominal puncture.

    PubMed

    Xiao, Deqiang; Luo, Huoling; Jia, Fucang; Zhang, Yanfang; Li, Yong; Guo, Xuejun; Cai, Wei; Fang, Chihua; Fan, Yingfang; Zheng, Huimin; Hu, Qingmao

    2016-08-07

    Percutaneous abdominal puncture is a popular interventional method for the management of abdominal tumors. Image-guided puncture can help interventional radiologists improve targeting accuracy. The second generation of Kinect™ was released recently; we developed an optical navigation system to investigate its feasibility for guiding percutaneous abdominal puncture, and to compare its performance on needle insertion guidance with that of the first-generation Kinect™. For physical-to-image registration in this system, two surfaces extracted from preoperative CT and intraoperative Kinect™ depth images were matched using an iterative closest point (ICP) algorithm. A 2D shape image-based correspondence searching algorithm was proposed for generating a close initial position before ICP matching. Evaluation experiments were conducted on an abdominal phantom and six beagles in vivo. For the phantom study, a two-factor experiment was designed to evaluate the effect of the operator's skill and trajectory on the target positioning error (TPE). A total of 36 needle punctures were tested on a Kinect™ for Windows version 2 (Kinect™ V2). The target registration error (TRE), user error, and TPE are 4.26 ± 1.94 mm, 2.92 ± 1.67 mm, and 5.23 ± 2.29 mm, respectively. No statistically significant differences in TPE regarding operator's skill and trajectory are observed. Additionally, a Kinect™ for Windows version 1 (Kinect™ V1) was tested with 12 insertions, and the TRE evaluated with the Kinect™ V1 is statistically significantly larger than that with the Kinect™ V2. For the animal experiment, fifteen artificial liver tumors were targeted under guidance of the navigation system. The TPE was evaluated as 6.40 ± 2.72 mm, and its lateral and longitudinal components were 4.30 ± 2.51 mm and 3.80 ± 3.11 mm, respectively. This study demonstrates that the navigation accuracy of the proposed system is acceptable, that the second-generation Kinect™-based navigation is superior to the first-generation Kinect™, and that it has potential for clinical application in percutaneous abdominal puncture.
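
    The surface-matching step described above uses a standard ICP alignment after an initial pose estimate. A minimal sketch of that step using the open-source Open3D library is given below; the file names, the 10 mm correspondence distance and the identity initial pose are assumptions (in the paper a 2D shape-based search supplies the initial position).

        import numpy as np
        import open3d as o3d

        # Hypothetical inputs: the two surfaces exported as point clouds beforehand.
        source = o3d.io.read_point_cloud("kinect_surface.ply")   # intraoperative depth surface
        target = o3d.io.read_point_cloud("ct_surface.ply")       # preoperative CT skin surface

        trans_init = np.eye(4)   # placeholder; a coarse pre-alignment would go here
        result = o3d.pipelines.registration.registration_icp(
            source, target, 10.0, trans_init,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())

        print("fitness:", result.fitness)
        print("physical-to-image transformation:\n", result.transformation)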

  17. Research on virtual Guzheng based on Kinect

    NASA Astrophysics Data System (ADS)

    Li, Shuyao; Xu, Kuangyi; Zhang, Heng

    2018-05-01

    There is a great deal of research on virtual instruments, but little on classical Chinese instruments, and the techniques used are very limited. This paper uses Unity 3D and a Kinect camera, combined with virtual reality technology and a gesture recognition method, to design a virtual playing system for the Guzheng, a traditional Chinese musical instrument, with a demonstration function. In this paper, the real scene obtained by the Kinect camera is fused with the virtual Guzheng in Unity 3D. The depth data obtained by the Kinect and the Suzuki85 algorithm are used to recognize the relative position of the user's right hand and the virtual Guzheng, and the hand gesture of the user is recognized by the Kinect.

  18. Assessment of Application Technology of Natural User Interfaces in the Creation of a Virtual Chemical Laboratory

    NASA Astrophysics Data System (ADS)

    Jagodziński, Piotr; Wolski, Robert

    2015-02-01

    Natural User Interfaces (NUI) are now widely used in electronic devices such as smartphones, tablets and gaming consoles. We have tried to apply this technology in the teaching of chemistry in middle school and high school. A virtual chemical laboratory was developed in which students can simulate the performance of laboratory activities similar to those that they perform in a real laboratory. A Kinect sensor was used for the detection and analysis of the student's hand movements, which is an example of NUI. The studies conducted confirmed the effectiveness of the educational virtual laboratory. The extent to which the use of this teaching aid increased the students' progress in learning chemistry was examined. The results indicate that the use of NUI creates opportunities to both enhance and improve the quality of chemistry education. Working in a virtual laboratory using the Kinect interface results in greater emotional involvement and an increased sense of self-efficacy in laboratory work among students. As a consequence, students are getting higher marks and are more interested in the subject of chemistry.

  19. Depth-aware image seam carving.

    PubMed

    Shen, Jianbing; Wang, Dapeng; Li, Xuelong

    2013-10-01

    An image seam carving algorithm should preserve important and salient objects as much as possible when changing the image size, while not removing the secondary objects in the scene. However, it is still difficult to determine the important and salient objects so as to avoid the distortion of these objects after resizing the input image. In this paper, we develop a novel depth-aware single image seam carving approach by taking advantage of modern depth cameras such as the Kinect sensor, which captures the RGB color image and its corresponding depth map simultaneously. By considering both the depth information and the just noticeable difference (JND) model, we develop an efficient JND-based significance computation approach using multiscale graph cut based energy optimization. Our method achieves better seam carving performance by cutting fewer seams through near objects while removing more seams from distant objects. To the best of our knowledge, our algorithm is the first work to use the true depth map captured by the Kinect depth camera for single image seam carving. The experimental results demonstrate that the proposed approach produces better seam carving results than previous content-aware seam carving methods.
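
    The heart of any depth-aware seam carving approach is an energy map biased by depth, so that seams prefer distant regions. The sketch below is an illustrative stand-in that weights a simple gradient energy by nearness; it is not the authors' JND-based, graph-cut-optimized formulation, and the weighting factor is an assumption.

        import numpy as np

        def depth_aware_energy(gray, depth, depth_weight=2.0):
            """Combine image gradient energy with a depth prior.

            gray:  (H, W) grayscale image, float in [0, 1]
            depth: (H, W) depth map from the RGB-D sensor (larger = farther)
            Near pixels receive a larger multiplier, so seams prefer distant regions.
            """
            gy, gx = np.gradient(gray)
            gradient_energy = np.abs(gx) + np.abs(gy)
            # Normalise depth to [0, 1] and invert it so near objects weigh more.
            d = (depth - depth.min()) / (depth.ptp() + 1e-9)
            nearness = 1.0 - d
            return gradient_energy * (1.0 + depth_weight * nearness)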

  20. Generation of binary holograms for deep scenes captured with a camera and a depth sensor

    NASA Astrophysics Data System (ADS)

    Leportier, Thibault; Park, Min-Chul

    2017-01-01

    This work presents binary hologram generation from images of a real object acquired from a Kinect sensor. Since hologram calculation from a point-cloud or polygon model presents a heavy computational burden, we adopted a depth-layer approach to generate the holograms. This method enables us to obtain holographic data of large scenes quickly. Our investigations focus on the performance of different methods, iterative and noniterative, to convert complex holograms into binary format. Comparisons were performed to examine the reconstruction of the binary holograms at different depths. We also propose to modify the direct binary search algorithm to take into account several reference image planes. Then, deep scenes featuring multiple planes of interest can be reconstructed with better efficiency.

  1. Automated Technology for In-home Fall Risk Assessment and Detection Sensor System

    PubMed Central

    Rantz, Marilyn J.; Skubic, Marjorie; Abbott, Carmen; Galambos, Colleen; Pak, Youngju; Ho, Dominic K.C.; Stone, Erik E.; Rui, Liyang; Back, Jessica; Miller, Steven J.

    2013-01-01

    Falls are a major problem for older adults. A continuous, unobtrusive, environmentally mounted in-home monitoring system that automatically detects when falls have occurred or when the risk of falling is increasing could alert health care providers and family members so they could intervene to improve physical function or manage illnesses that are precipitating falls. Researchers at the University of Missouri (MU) Center for Eldercare and Rehabilitation Technology are testing such sensor systems for fall risk assessment and detection in older adults' apartments in a senior living community. Initial results comparing ground truth fall risk assessment data and GAITRite gait parameters with gait parameters captured from the Microsoft Kinect and pulsed-Doppler radar are reported. PMID:23675644

  2. Compensation method for the influence of angle of view on animal temperature measurement using thermal imaging camera combined with depth image.

    PubMed

    Jiao, Leizi; Dong, Daming; Zhao, Xiande; Han, Pengcheng

    2016-12-01

    In this study, we propose an animal surface temperature measurement method based on a Kinect sensor and an infrared thermal imager to facilitate the screening of animals with febrile diseases. Due to the random motion and small surface temperature variation of animals, the influence of the angle of view on temperature measurement is significant. The proposed method can compensate for the temperature measurement error caused by the angle of view. Firstly, we analyzed the relationship between the measured temperature and the angle of view and established a mathematical model for compensating the influence of the angle of view, with a correlation coefficient above 0.99. Secondly, a fusion method for depth and infrared thermal images was established for synchronous image capture with the Kinect sensor and the infrared thermal imager, and the angle of view of each pixel was calculated. According to the experimental results, without compensation, the temperature image measured at an angle of view of 74° to 76° showed a difference of more than 2°C compared with that measured at an angle of view of 0°. After compensation, however, the temperature difference range was only 0.03-1.2°C. This method is applicable to real-time compensation of errors caused by the angle of view during temperature measurement with an infrared thermal imager. Copyright © 2016 Elsevier Ltd. All rights reserved.
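
    A rough sketch of the per-pixel angle-of-view computation from the fused depth image is given below, assuming a pinhole depth camera already registered to the thermal image; the compensation function at the end is a made-up placeholder polynomial, not the model fitted in the paper.

        import numpy as np

        def angle_of_view(depth, fx, fy, cx, cy):
            """Per-pixel angle between the viewing ray and the surface normal,
            estimated from a metric depth map and pinhole intrinsics."""
            h, w = depth.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            # Back-project pixels to 3D camera coordinates.
            x = (u - cx) * depth / fx
            y = (v - cy) * depth / fy
            pts = np.dstack([x, y, depth])
            # Surface normals from local gradients of the 3D points.
            du = np.gradient(pts, axis=1)
            dv = np.gradient(pts, axis=0)
            normals = np.cross(du, dv)
            normals /= np.linalg.norm(normals, axis=2, keepdims=True) + 1e-9
            view = -pts / (np.linalg.norm(pts, axis=2, keepdims=True) + 1e-9)
            cosang = np.clip(np.abs(np.sum(normals * view, axis=2)), 0.0, 1.0)
            return np.degrees(np.arccos(cosang))

        def compensate(t_measured, angle_deg, a=1.2e-4, b=3.0e-4):
            """Illustrative additive correction, polynomial in the angle of view;
            the coefficients here are made up (the paper fits its own model, r > 0.99)."""
            return t_measured + a * angle_deg + b * angle_deg ** 2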

  3. Application of the Augmented Reality in prototyping the educational simulator in sport - the example of judo

    NASA Astrophysics Data System (ADS)

    Cieślńiski, Wojciech B.; Sobecki, Janusz; Piepiora, Paweł A.; Piepiora, Zbigniew N.; Witkowski, Kazimierz

    2016-04-01

    Mental training (Galloway, 2011) is one of the measures of psychological preparation in sport. A discipline such as judo particularly requires mental training, because judo is a combat sport based on the direct, physical confrontation of two opponents. Hence mental preparation should be an essential element of preparing for a sports fight. The article describes the basics of AR systems and presents selected elements of such systems: Vuzix glasses, the Kinect sensor and the Multitap interactive floor. Next, scenarios are proposed for using AR in mental training, based on both the Vuzix head-mounted glasses and the Multitap interactive floor. All variants, except for the last, make use of the Kinect sensor. In addition, these variants differ as to the primary user of the system: it can be the competitor, the coach, or both the competitor and the coach at the same time. The end of the article presents methods for exploring the effectiveness, usefulness, and/or User Experience of the proposed prototypes. Three prototype models of an educational training simulator in sport (judo) are presented, and their functionality is described on the basis of the theory of sports training (the cyclical nature of sports training) and the theory of subtle interactions, enabling an explanation of the effects of sports training using augmented reality technology.

  4. Gait parameters extraction by using mobile robot equipped with Kinect v2

    NASA Astrophysics Data System (ADS)

    Ogawa, Ami; Mita, Akira; Yorozu, Ayanori; Takahashi, Masaki

    2016-04-01

    The need for in-home monitoring systems is growing because of the increase in single-person households due to low birth rates and longevity. Among other measures, gait parameters are under the spotlight, as their relations with several diseases have been reported. It is known that the gait parameters obtained in a walking test differ from those obtained in daily life. Thus, a system is needed which can measure gait parameters in the real living environment. Generally, gait ability is evaluated by a measurement test, such as the Timed Up and Go test or the 6-minute walking test. However, these methods need human assessors, so the accuracy depends on them and a lack of objectivity has been pointed out. Although a precise motion capture system can be used for more objective measurement, it is hard to use for daily measurement, because subjects have to wear markers on their body. To solve this problem, markerless sensors, such as the Kinect, have been developed and used for gait information acquisition. When they are attached to a mobile robot, there is no limitation of distance. However, they still present calibration challenges for gait parameters, and the important gait parameters to be acquired have not been well examined. Therefore, in this study, we extract the important parameters for gait analysis, which correlate with diseases and age differences, and propose gait parameter extraction from depth data acquired by a Kinect v2 mounted on a mobile robot, aiming at application in the living environment.

  5. Multiplayer Kinect Serious Games: A Review

    ERIC Educational Resources Information Center

    Alshammari, Ali; Whittinghill, David

    2015-01-01

    Single and multiplayer serious Kinect games have been used in many different areas, including education. Due to its relative newness as a technology, a dearth of literature exists concerning the requirements for the use of Kinect games in educational settings. A comprehensive review was conducted to include various perspectives in order to provide…

  6. Low Cost and Efficient 3d Indoor Mapping Using Multiple Consumer Rgb-D Cameras

    NASA Astrophysics Data System (ADS)

    Chen, C.; Yang, B. S.; Song, S.

    2016-06-01

    Driven by the miniaturization and light weight of positioning and remote sensing sensors, as well as the urgent need to fuse indoor and outdoor maps for next generation navigation, 3D indoor mapping from mobile scanning is a hot research and application topic. The point clouds with auxiliary data such as colour and infrared images derived from a 3D indoor mobile mapping suite can be used in a variety of novel applications, including indoor scene visualization, automated floorplan generation, gaming, reverse engineering, navigation, and simulation. State-of-the-art 3D indoor mapping systems equipped with multiple laser scanners produce accurate point clouds of building interiors containing billions of points. However, these laser scanner based systems are mostly expensive and not portable. Low cost consumer RGB-D cameras provide an alternative way to address the core challenge of indoor mapping, that is, capturing the detailed underlying geometry of building interiors. Nevertheless, RGB-D cameras have a very limited field of view, resulting in low efficiency in the data collection stage and incomplete datasets missing major building structures (e.g. ceilings, walls). Attempting to collect a complete scene without data gaps using a single RGB-D camera is not technically sound because of the large amount of human labour required and the number of position parameters to be solved. To find an efficient and low cost way to solve 3D indoor mapping, in this paper we present an indoor mapping suite prototype built upon a novel calibration method which calibrates the internal and external parameters of multiple RGB-D cameras. Three Kinect sensors are mounted on a rig with different view directions to form a large field of view. The calibration procedure is threefold: (1) the internal parameters of the colour and infrared cameras inside each Kinect are calibrated using a chessboard pattern; (2) the external parameters between the colour and infrared cameras inside each Kinect are calibrated using a chessboard pattern; (3) the external parameters between the Kinects are first calculated using a pre-set calibration field and further refined by an iterative closest point algorithm. Experiments are carried out to validate the proposed method on RGB-D datasets collected by the indoor mapping suite prototype. The effectiveness and accuracy of the proposed method are evaluated by comparing the point clouds derived from the prototype with ground truth data collected by a commercial terrestrial laser scanner at ultra-high density. The overall analysis of the results shows that the proposed method achieves seamless integration of multiple point clouds from different RGB-D cameras collected at 30 frames per second.
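
    The first calibration step (per-camera intrinsics from a chessboard) is a standard procedure; a minimal OpenCV sketch of that step is shown below. The board size, square size, and image folder are placeholder assumptions, and the inter-Kinect extrinsics and iterative closest point refinement described in the abstract are not covered here.

        import glob
        import cv2
        import numpy as np

        PATTERN = (9, 6)        # inner-corner grid of the printed chessboard (assumed)
        SQUARE_SIZE = 0.025     # square size in metres (assumed)

        # 3D template of the board corners in the board's own coordinate frame.
        objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

        obj_points, img_points = [], []
        for path in glob.glob("kinect_colour/*.png"):   # hypothetical image folder
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, PATTERN)
            if found:
                corners = cv2.cornerSubPix(
                    gray, corners, (11, 11), (-1, -1),
                    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
                obj_points.append(objp)
                img_points.append(corners)

        # Intrinsic matrix and distortion coefficients for this camera.
        rms, K, dist, _, _ = cv2.calibrateCamera(
            obj_points, img_points, gray.shape[::-1], None, None)
        print("reprojection RMS:", rms)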

  7. A Navigation System for the Visually Impaired: A Fusion of Vision and Depth Sensor

    PubMed Central

    Kanwal, Nadia; Bostanci, Erkan; Currie, Keith; Clark, Adrian F.

    2015-01-01

    For a number of years, scientists have been trying to develop aids that can make visually impaired people more independent and aware of their surroundings. Computer-based automatic navigation tools are one example of this, motivated by the increasing miniaturization of electronics and the improvement in processing power and sensing capabilities. This paper presents a complete navigation system based on low cost and physically unobtrusive sensors such as a camera and an infrared sensor. The system is based around corners and depth values from Kinect's infrared sensor. Obstacles are found in images from a camera using corner detection, while input from the depth sensor provides the corresponding distance. The combination is both efficient and robust. The system not only identifies hurdles but also suggests a safe path (if available) to the left or right side and tells the user to stop, move left, or move right. The system has been tested in real time by both blindfolded and blind people at different indoor and outdoor locations, demonstrating that it operates adequately. PMID:27057135
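
    The abstract outlines the fusion logic: corners from the RGB image, distances from the depth sensor, and a stop/left/right suggestion. A much simplified sketch of that decision step is given below; the distance threshold and the left/right split at the image midline are assumptions made for illustration, not the authors' parameters.

        import cv2
        import numpy as np

        def suggest_direction(rgb, depth_m, obstacle_dist=1.2, max_corners=100):
            """rgb: HxWx3 colour image, depth_m: HxW depth in metres (0 = no reading)."""
            gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
            corners = cv2.goodFeaturesToTrack(gray, max_corners, 0.01, 10)
            if corners is None:
                return "move forward"
            mid = rgb.shape[1] // 2
            near_left = near_right = 0
            for x, y in corners.reshape(-1, 2).astype(int):
                d = depth_m[y, x]
                if 0 < d < obstacle_dist:        # corner backed by a near depth reading
                    if x < mid:
                        near_left += 1
                    else:
                        near_right += 1
            if near_left == 0 and near_right == 0:
                return "move forward"
            if near_left and near_right:
                return "stop"
            return "move right" if near_left else "move left"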

  8. A Tale of Two Observing Systems: Interoperability in the World of Microsoft Windows

    NASA Astrophysics Data System (ADS)

    Babin, B. L.; Hu, L.

    2008-12-01

    Louisiana Universities Marine Consortium's (LUMCON) and Dauphin Island Sea Lab's (DISL) Environmental Monitoring Systems provide a unified coastal ocean observing system. These two systems are mirrored to maintain autonomy while offering an integrated data sharing environment. Both systems collect data via Campbell Scientific data loggers, store the data in Microsoft SQL servers, and disseminate the data in real-time on the World Wide Web via Microsoft Internet Information Servers and Active Server Pages (ASP). The utilization of Microsoft Windows technologies presented many challenges to these observing systems as open source tools for interoperability grow, because the current open source tools often require the installation of additional software. In order to make data available through common standards formats, "home grown" software has been developed. One example of this is the development of software to generate XML files for transmission to the National Data Buoy Center (NDBC). OOSTethys partners develop, test and implement easy-to-use, open-source, OGC-compliant software, and have created a working prototype of networked, semantically interoperable, real-time data systems. Partnering with OOSTethys, we are developing a cookbook to implement OGC web services. The implementation will be written in ASP, will run in a Microsoft Windows operating system environment, and will serve data via Sensor Observation Services (SOS). This cookbook will give observing systems running Microsoft Windows the tools to easily participate in the Open Geospatial Consortium (OGC) Oceans Interoperability Experiment (OCEANS IE).

  9. Supporting Foreign Language Vocabulary Learning through Kinect-Based Gaming

    ERIC Educational Resources Information Center

    Urun, Mehmet Fatih; Aksoy, Hasan; Comez, Rasim

    2017-01-01

    This study aimed to explore the effectiveness of a Kinect-based game called Tom Clancy's Ghost Recon: Future Soldier to investigate possible contributions of game-based learning in a virtual language classroom at a state university in Ankara, Turkey. A quasi-experimental design where the treatment group (N= 26) was subjected to kinect-based…

  10. Active Video Game Playing in Children and Adolescents With Cystic Fibrosis: Exercise or Just Fun?

    PubMed

    Salonini, Elena; Gambazza, Simone; Meneghelli, Ilaria; Tridello, Gloria; Sanguanini, Milva; Cazzarolli, Clizia; Zanini, Alessandra; Assael, Baroukh M

    2015-08-01

    Xbox Kinect has been proposed as an exercise intervention in cystic fibrosis (CF), but its potential has not been compared with standard training modalities. Using a crossover design, subjects were randomized to 2 intervention groups: Xbox Kinect and a traditional stationary cycle. Heart rate, SpO2, dyspnea, and fatigue were measured. Subject satisfaction was tested. Thirty subjects with CF (11 males, mean ± SD age of 12 ± 2.5 y, mean ± SD FEV1 of 73 ± 16% of predicted) were enrolled. Xbox Kinect provided a cardiovascular demand similar to a stationary cycle, although the modality was different (interval vs. continuous). Maximum heart rates were similar (P = .2). Heart rate target was achieved more frequently with a stationary cycle (P = .02). Xbox Kinect caused less dyspnea (P = .001) and fatigue (P < .001) and was more enjoyable than a stationary cycle (P < .001). Subjects preferred Xbox Kinect for its interactivity. Xbox Kinect has the potential to be employed as an exercise intervention in young subjects with CF, but investigation over longer periods is needed. Copyright © 2015 by Daedalus Enterprises.

  11. The valuable use of Microsoft Kinect™ sensor 3D kinematic in the rehabilitation process in basketball

    NASA Astrophysics Data System (ADS)

    Braidot, Ariel; Favaretto, Guillermo; Frisoli, Melisa; Gemignani, Diego; Gumpel, Gustavo; Massuh, Roberto; Rayan, Josefina; Turin, Matías

    2016-04-01

    Subjects who practice sports, either as professionals or amateurs, have a high incidence of knee injuries. Few publications present kinematic studies of lateral knee structure injuries, including meniscal tears or chondral injury, without anterior cruciate ligament rupture. The use of standard motion capture systems for measuring outdoor sports is hard to implement for many operational reasons. Recently released, the Microsoft Kinect™ is a sensor that was developed to track movements for gaming purposes and has seen increased use in clinical applications. The fact that this device is a simple and portable tool allows the acquisition of data on common sport movements in the field. The development and testing of a set of protocols for 3D kinematic measurement using the Microsoft Kinect™ system is presented in this paper. The 3D kinematic evaluation algorithms were developed from available information and with the use of Microsoft's Software Development Kit 1.8 (SDK). Along with this, an algorithm for calculating the lower limb joint angles was implemented. Thirty healthy adult volunteers were measured, using five different recording protocols for sport-characteristic gestures which involve high knee injury risk in athletes.
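
    The joint-angle algorithm itself is not detailed in the abstract; a common way to obtain, for example, the knee angle from the Kinect SDK's hip, knee, and ankle joint positions is the vector angle below. This is a generic sketch, not the authors' exact implementation.

        import numpy as np

        def joint_angle(proximal, joint, distal):
            """Angle in degrees at `joint` between the two adjacent limb segments."""
            a = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
            b = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
            cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

        # Made-up Kinect joint coordinates (metres, camera space), roughly collinear:
        hip, knee, ankle = (0.10, 0.45, 2.0), (0.10, 0.05, 2.0), (0.10, -0.35, 2.0)
        print(round(joint_angle(hip, knee, ankle), 1))   # 180.0 = fully extended leg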

  12. Kinecting Physics: Conceptualization of Motion through Visualization and Embodiment

    ERIC Educational Resources Information Center

    Anderson, Janice L.; Wall, Steven D.

    2016-01-01

    The purpose of this work was to share our findings in using the Kinect technology to facilitate the understanding of basic kinematics with middle school science classrooms. This study marks the first three iterations of this design-based research that examines the pedagogical potential of using the Kinect technology. To this end, we explored the…

  13. A depth enhancement strategy for kinect depth image

    NASA Astrophysics Data System (ADS)

    Quan, Wei; Li, Hua; Han, Cheng; Xue, Yaohong; Zhang, Chao; Hu, Hanping; Jiang, Zhengang

    2018-03-01

    Kinect is a motion sensing input device which is widely used in computer vision and other related fields. However, there are many inaccurate depth values in Kinect depth images, even with Kinect v2. In this paper, an algorithm is proposed to enhance Kinect v2 depth images. According to the principle of its depth measurement, the foreground and the background are considered separately. For the background, holes are filled according to the depth data in the neighborhood. For the foreground, a filling algorithm based on the color image, which takes into account both spatial and color information, is proposed. An adaptive joint bilateral filtering method is used to reduce noise. Experimental results show that the processed depth images have a clean background and clear edges, and the results are better than those of traditional strategies. The method can be applied in 3D reconstruction to preprocess depth images in real time and obtain accurate results.
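
    As a rough illustration of the kind of processing described (neighbourhood-based hole filling plus a colour-guided joint bilateral filter), the sketch below uses OpenCV. It collapses the paper's separate background/foreground treatments into a single inpainting step, the jointBilateralFilter call requires the opencv-contrib build, and the filter parameters are illustrative rather than the paper's adaptive values.

        import cv2
        import numpy as np

        def enhance_depth(depth_mm, color_bgr):
            """depth_mm: HxW uint16 depth image where 0 marks invalid pixels."""
            depth = depth_mm.astype(np.float32)
            holes = (depth_mm == 0).astype(np.uint8)

            # Fill holes from the valid neighbourhood (simple stand-in for the
            # paper's separate background and foreground filling strategies).
            norm = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
            filled = cv2.inpaint(norm, holes, 5, cv2.INPAINT_TELEA).astype(np.float32)
            filled *= depth.max() / 255.0          # approximate rescaling to depth units

            # Colour-guided joint bilateral filter: smooths noise, keeps object edges.
            guide = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
            return cv2.ximgproc.jointBilateralFilter(guide, filled, 9, 25.0, 9.0)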

  14. Usability testing of gaming and social media applications for stroke and cerebral palsy upper limb rehabilitation.

    PubMed

    Valdés, Bulmaro A; Hilderman, Courtney G E; Hung, Chai-Ting; Shirzad, Navid; Van der Loos, H F Machiel

    2014-01-01

    As part of the FEATHERS (Functional Engagement in Assisted Therapy Through Exercise Robotics) project, two motion tracking applications and one social networking application were developed for upper limb rehabilitation of stroke survivors and teenagers with cerebral palsy. The project aims to improve the engagement of clients during therapy by using video games and a social media platform. The applications allow users to control a cursor on a personal computer through bimanual motions, and to interact with their peers and therapists through social media. The tracking applications use either a Microsoft Kinect or a PlayStation Eye camera, and the social media application was developed on Facebook. This paper presents usability testing of these applications conducted with therapists from two rehabilitation clinics, using the "Cognitive Walkthrough" and "Think Aloud" methods. The objectives of the study were to investigate the ease of use and potential issues or improvements of the applications, as well as the factors that facilitate and impede the adoption of technology in current rehabilitation programs.

  15. Continuous-scanning laser Doppler vibrometry: Extensions to arbitrary areas, multi-frequency and 3D capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weekes, B.; Ewins, D.; Acciavatti, F.

    2014-05-27

    To date, differing implementations of continuous scan laser Doppler vibrometry have been demonstrated by various academic institutions, but since the scan paths were defined using step or sine functions from function generators, the paths were typically limited to 1D line scans or 2D areas such as raster paths or Lissajous trajectories. The excitation was previously often limited to a single frequency due to the specific signal processing performed to convert the scan data into an ODS. In this paper, a configuration of continuous-scan laser Doppler vibrometry is demonstrated which permits scanning of arbitrary areas, with the benefit of allowing multi-frequency/broadband excitation. Various means of generating scan paths to inspect arbitrary areas are discussed and demonstrated. Further, full 3D vibration capture is demonstrated by the addition of a range-finding facility to the described configuration, and iteratively relocating a single scanning laser head. Here, the range-finding facility was provided by a Microsoft Kinect, an inexpensive piece of consumer electronics.

  16. Accurate Fall Detection in a Top View Privacy Preserving Configuration.

    PubMed

    Ricciuti, Manola; Spinsante, Susanna; Gambi, Ennio

    2018-05-29

    Fall detection is one of the most investigated themes in the research on assistive solutions for aged people. In particular, a false-alarm-free discrimination between falls and non-falls is indispensable, especially to assist elderly people living alone. Current technological solutions designed to monitor several types of activities in indoor environments can guarantee absolute privacy to the people that decide to rely on them. Devices integrating RGB and depth cameras, such as the Microsoft Kinect, can ensure privacy and anonymity, since the depth information is considered to extract only meaningful information from video streams. In this paper, we propose an accurate fall detection method investigating the depth frames of the human body using a single device in a top-view configuration, with the subjects located under the device inside a room. Features extracted from depth frames train a classifier based on a binary support vector machine learning algorithm. The dataset includes 32 falls and 8 activities considered for comparison, for a total of 800 sequences performed by 20 adults. The system showed an accuracy of 98.6% and only one false positive.
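
    The classification stage described (features extracted from depth frames feeding a binary support vector machine) can be sketched with scikit-learn as follows; the features and labels here are placeholders, not the paper's descriptors or dataset.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # X: one row of depth-derived features per sequence (e.g. centroid height,
        # vertical velocity, silhouette spread); y: 1 = fall, 0 = other activity.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(800, 6))                # placeholder for 800 sequences
        y = (X[:, 1] < -0.5).astype(int)             # placeholder labelling rule

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))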

  17. Impact of a Pilot Videogame-Based Physical Activity Program on Walking Speed in Adults with Schizophrenia.

    PubMed

    Leutwyler, H; Hubbard, E; Cooper, B A; Dowling, G

    2017-11-10

    The purpose of this report is to describe the impact of a videogame-based physical activity program using the Kinect for Xbox 360 game system (Microsoft, Redmond, WA) on walking speed in adults with schizophrenia. In this randomized controlled trial, 28 participants played either an active videogame for 30 min (intervention group) or a sedentary videogame for 30 min (control group), once a week for 6 weeks. Walking speed was measured objectively with the Short Physical Performance Battery at enrollment and at the end of the 6-week program. The intervention group (n = 13) showed an average improvement in walking speed of 0.08 m/s and the control group (n = 15) showed an average improvement in walking speed of 0.03 m/s. Although the change in walking speed was not statistically significant, the intervention group had between a small and substantial clinically meaningful change. The results suggest a videogame-based physical activity program provides clinically meaningful improvement in walking speed, an important indicator of health status.

  18. Video game-based coordinative training improves ataxia in children with degenerative ataxia.

    PubMed

    Ilg, Winfried; Schatton, Cornelia; Schicks, Julia; Giese, Martin A; Schöls, Ludger; Synofzik, Matthis

    2012-11-13

    Degenerative ataxias in children present a rare condition where effective treatments are lacking. Intensive coordinative training based on physiotherapeutic exercises improves degenerative ataxia in adults, but such exercises have drawbacks for children, often including a lack of motivation for high-frequent physiotherapy. Recently developed whole-body controlled video game technology might present a novel treatment strategy for highly interactive and motivational coordinative training for children with degenerative ataxias. We examined the effectiveness of an 8-week coordinative training for 10 children with progressive spinocerebellar ataxia. Training was based on 3 Microsoft Xbox Kinect video games particularly suitable to exercise whole-body coordination and dynamic balance. Training was started with a laboratory-based 2-week training phase and followed by 6 weeks training in children's home environment. Rater-blinded assessments were performed 2 weeks before laboratory-based training, immediately prior to and after the laboratory-based training period, as well as after home training. These assessments allowed for an intraindividual control design, where performance changes with and without training were compared. Ataxia symptoms were significantly reduced (decrease in Scale for the Assessment and Rating of Ataxia score, p = 0.0078) and balance capacities improved (dynamic gait index, p = 0.04) after intervention. Quantitative movement analysis revealed improvements in gait (lateral sway: p = 0.01; step length variability: p = 0.01) and in goal-directed leg placement (p = 0.03). Despite progressive cerebellar degeneration, children are able to improve motor performance by intensive coordination training. Directed training of whole-body controlled video games might present a highly motivational, cost-efficient, and home-based rehabilitation strategy to train dynamic balance and interaction with dynamic environments in a large variety of young-onset neurologic conditions. This study provides Class III evidence that directed training with Xbox Kinect video games can improve several signs of ataxia in adolescents with progressive ataxia as measured by SARA score, Dynamic Gait Index, and Activity-specific Balance Confidence Scale at 8 weeks of training.

  19. Identifying Features of Bodily Expression As Indicators of Emotional Experience during Multimedia Learning

    PubMed Central

    Riemer, Valentin; Frommel, Julian; Layher, Georg; Neumann, Heiko; Schrader, Claudia

    2017-01-01

    The importance of emotions experienced by learners during their interaction with multimedia learning systems, such as serious games, underscores the need to identify sources of information that allow the recognition of learners’ emotional experience without interrupting the learning process. Bodily expression is gaining in attention as one of these sources of information. However, to date, the question of how bodily expression can convey different emotions has largely been addressed in research relying on acted emotion displays. Following a more contextualized approach, the present study aims to identify features of bodily expression (i.e., posture and activity of the upper body and the head) that relate to genuine emotional experience during interaction with a serious game. In a multimethod approach, 70 undergraduates played a serious game relating to financial education while their bodily expression was captured using an off-the-shelf depth-image sensor (Microsoft Kinect). In addition, self-reports of experienced enjoyment, boredom, and frustration were collected repeatedly during gameplay, to address the dynamic changes in emotions occurring in educational tasks. Results showed that, firstly, the intensities of all emotions indeed changed significantly over the course of the game. Secondly, by using generalized estimating equations, distinct features of bodily expression could be identified as significant indicators for each emotion under investigation. A participant keeping their head more turned to the right was positively related to frustration being experienced, whereas keeping their head more turned to the left was positively related to enjoyment. Furthermore, having their upper body positioned more closely to the gaming screen was also positively related to frustration. Finally, increased activity of a participant’s head emerged as a significant indicator of boredom being experienced. These results confirm the value of bodily expression as an indicator of emotional experience in multimedia learning systems. Furthermore, the findings may guide developers of emotion recognition procedures by focusing on the identified features of bodily expression. PMID:28798717
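
    The statistical step mentioned (generalized estimating equations relating posture features to repeated emotion self-reports within participants) could be set up in statsmodels roughly as below; the file name, column names, and the exchangeable working correlation are assumptions for illustration.

        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # One row per self-report prompt, with posture features aggregated over the
        # preceding gameplay interval (column names are invented for the sketch).
        df = pd.read_csv("gameplay_posture_reports.csv")   # hypothetical file

        model = smf.gee(
            "frustration ~ head_yaw + upper_body_distance + head_activity",
            groups="participant_id",                       # repeated measures per person
            data=df,
            family=sm.families.Gaussian(),
            cov_struct=sm.cov_struct.Exchangeable(),
        )
        print(model.fit().summary())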

  20. Optimizing Distributed Sensor Placement for Border Patrol Interdiction Using Microsoft Excel

    DTIC Science & Technology

    2007-04-01

    Sensors can be affected by weather conditions and can be evaded by using techniques which minimize heat signatures; lasers and other technologies can be used day or night for border security. Maier [2004] developed a seismic intrusion sensor technology which uses fiber optic cables and lasers. A mapping program originally developed by Keyhole, later acquired by Google Inc., provides the satellite images used as the base map for the sensor network.

  1. Statistical Validation for Clinical Measures: Repeatability and Agreement of Kinect™-Based Software.

    PubMed

    Lopez, Natalia; Perez, Elisa; Tello, Emanuel; Rodrigo, Alejandro; Valentinuzzi, Max E

    2018-01-01

    The rehabilitation process is a fundamental stage in the recovery of people's capabilities. However, the evaluation of the process is performed by physiatrists and medical doctors mostly on the basis of their observations, that is, a subjective appreciation of the patient's evolution. This paper proposes a platform for tracking the movement of an individual's upper limb using Kinect sensor(s), to be applied to the patient during the rehabilitation process. The main contribution is the development of quantifying software and the statistical validation of its performance, repeatability, and clinical use in the rehabilitation process. The software determines joint angles and upper limb trajectories for the construction of a specific rehabilitation protocol and quantifies the treatment evolution. In turn, the information is presented via a graphical interface that allows the recording, storage, and reporting of the patient's data. For clinical purposes, the software information is statistically validated with three different methodologies, comparing the measures with a goniometer in terms of agreement and repeatability. The agreement of joint angles measured with the proposed software and the goniometer is evaluated with Bland-Altman plots; all measurements fell well within the limits of agreement, meaning the two techniques are interchangeable. Additionally, the results of the Bland-Altman analysis of repeatability show 95% confidence. Finally, the physiotherapists' qualitative assessment shows encouraging results for clinical use. The main conclusion is that the software is capable of offering a clinical history of the patient and is useful for quantification of rehabilitation success. The simplicity, low cost, and visualization possibilities encourage the use of the Kinect-based software for rehabilitation and other applications, and the experts' opinion endorses the choice of our approach for clinical practice. Comparison of the new measurement technique with established goniometric methods shows that the proposed software agrees sufficiently to be used interchangeably.
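
    The agreement analysis mentioned (Bland-Altman comparison of software-derived and goniometer angles) can be reproduced in a few lines of matplotlib; the paired arrays passed in are placeholders for the study's measurements.

        import numpy as np
        import matplotlib.pyplot as plt

        def bland_altman(software_deg, goniometer_deg):
            """Plot agreement between two paired angle measurements (degrees)."""
            a = np.asarray(software_deg, dtype=float)
            b = np.asarray(goniometer_deg, dtype=float)
            mean, diff = (a + b) / 2.0, a - b
            bias = diff.mean()
            loa = 1.96 * diff.std(ddof=1)              # 95% limits of agreement

            plt.scatter(mean, diff, s=12)
            for y in (bias, bias + loa, bias - loa):
                plt.axhline(y, linestyle="--")
            plt.xlabel("Mean of the two methods (deg)")
            plt.ylabel("Software - goniometer (deg)")
            plt.show()
            return bias, (bias - loa, bias + loa)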

  2. Obstacle Detection and Avoidance of a Mobile Robotic Platform Using Active Depth Sensing

    DTIC Science & Technology

    2014-06-01

    At the price of nearly one tenth of a laser range finder, the Xbox Kinect uses an infrared projector and camera to capture images of its environment in three dimensions.

  3. Parametric Human Body Reconstruction Based on Sparse Key Points.

    PubMed

    Cheng, Ke-Li; Tong, Ruo-Feng; Tang, Min; Qian, Jing-Ye; Sarkis, Michel

    2016-11-01

    We propose an automatic parametric human body reconstruction algorithm which can efficiently construct a model using a single Kinect sensor. A user needs to stand still in front of the sensor for a couple of seconds to measure the range data. The user's body shape and pose will then be automatically constructed in several seconds. Traditional methods optimize dense correspondences between range data and meshes. In contrast, our proposed scheme relies on sparse key points for the reconstruction. It employs regression to find the corresponding key points between the scanned range data and some annotated training data. We design two kinds of feature descriptors as well as corresponding regression stages to make the regression robust and accurate. Our scheme follows with dense refinement where a pre-factorization method is applied to improve the computational efficiency. Compared with other methods, our scheme achieves similar reconstruction accuracy but significantly reduces runtime.

  4. Systematic review of Kinect applications in elderly care and stroke rehabilitation

    PubMed Central

    2014-01-01

    In this paper we present a review of the most current avenues of research into Kinect-based elderly care and stroke rehabilitation systems to provide an overview of the state of the art, limitations, and issues of concern as well as suggestions for future work in this direction. The central purpose of this review was to collect all relevant study information into one place in order to support and guide current research as well as inform researchers planning to embark on similar studies or applications. The paper is structured into three main sections, each one presenting a review of the literature for a specific topic. Elderly Care section is comprised of two subsections: Fall detection and Fall risk reduction. Stroke Rehabilitation section contains studies grouped under Evaluation of Kinect’s spatial accuracy, and Kinect-based rehabilitation methods. The third section, Serious and exercise games, contains studies that are indirectly related to the first two sections and present a complete system for elderly care or stroke rehabilitation in a Kinect-based game format. Each of the three main sections conclude with a discussion of limitations of Kinect in its respective applications. The paper concludes with overall remarks regarding use of Kinect in elderly care and stroke rehabilitation applications and suggestions for future work. A concise summary with significant findings and subject demographics (when applicable) of each study included in the review is also provided in table format. PMID:24996956

  5. Pre-impact fall detection system using dynamic threshold and 3D bounding box

    NASA Astrophysics Data System (ADS)

    Otanasap, Nuth; Boonbrahm, Poonpong

    2017-02-01

    Fall prevention and detection systems must overcome many challenges to be efficient. Some of the difficult problems in vision-based systems are obtrusion, occlusion and overlay; other associated issues are privacy, cost, noise, computational complexity and the definition of threshold values. Estimating human motion with vision-based methods usually involves partial overlay, caused by the viewpoint direction between objects or body parts and the camera, and these issues have to be taken into consideration. This paper proposes a dynamic-threshold, bounding-box posture analysis method with a multiple Kinect camera setup for human posture analysis and fall detection. The proposed work uses only two Kinect cameras to acquire distributed values and differentiate normal activities from falls. If the peak value of head velocity is greater than the dynamic threshold value, bounding box posture analysis is used to confirm fall occurrence. Furthermore, information captured by multiple Kinects placed at a right angle addresses the skeleton overlay problem of a single Kinect. This work contributes the fusion of multiple Kinect-based skeletons based on dynamic threshold and bounding box posture analysis, which has not been reported so far.
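
    A stripped-down version of the two-stage rule described (peak head velocity against a dynamic threshold, then a bounding-box posture check to confirm the fall) might look like the following; the threshold scaling and the box aspect-ratio test are illustrative assumptions, not the paper's exact rules.

        import numpy as np

        def head_speed(head_positions, fps=30.0):
            """Frame-to-frame speed (m/s) of the head joint; positions shaped (N, 3)."""
            return np.linalg.norm(np.diff(head_positions, axis=0), axis=1) * fps

        def is_fall(head_positions, skeleton_points, baseline_speed, fps=30.0, k=3.0):
            """skeleton_points: (M, 3) joint positions of the most recent frame."""
            # Stage 1: dynamic threshold scaled from the subject's recent baseline speed.
            if head_speed(head_positions, fps).max() < k * baseline_speed:
                return False
            # Stage 2: bounding-box check - a lying posture yields a box that is much
            # wider along the ground plane than it is tall.
            extents = skeleton_points.max(axis=0) - skeleton_points.min(axis=0)
            height = extents[1]                        # vertical axis assumed to be y
            width = max(extents[0], extents[2])
            return bool(width > 1.5 * height)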

  6. Usability evaluation of low-cost virtual reality hand and arm rehabilitation games.

    PubMed

    Seo, Na Jin; Arun Kumar, Jayashree; Hur, Pilwon; Crocher, Vincent; Motawar, Binal; Lakshminarayanan, Kishor

    2016-01-01

    The emergence of lower-cost motion tracking devices enables home-based virtual reality rehabilitation activities and increased accessibility to patients. Currently, little documentation on patients' expectations for virtual reality rehabilitation is available. This study surveyed 10 people with stroke for their expectations of virtual reality rehabilitation games. This study also evaluated the usability of three lower-cost virtual reality rehabilitation games using a survey and House of Quality analysis. The games (kitchen, archery, and puzzle) were developed in the laboratory to encourage coordinated finger and arm movements. Lower-cost motion tracking devices, the P5 Glove and Microsoft Kinect, were used to record the movements. People with stroke were found to desire motivating and easy-to-use games with clinical insights and encouragement from therapists. The House of Quality analysis revealed that the games should be improved by obtaining evidence for clinical effectiveness, including clinical feedback regarding improving functional abilities, adapting the games to the user's changing functional ability, and improving usability of the motion-tracking devices. This study reports the expectations of people with stroke for rehabilitation games and usability analysis that can help guide development of future games.

  7. Performance on naturalistic virtual reality tasks depends on global cognitive functioning as assessed via traditional neurocognitive tests.

    PubMed

    Oliveira, Jorge; Gamito, Pedro; Alghazzawi, Daniyal M; Fardoun, Habib M; Rosa, Pedro J; Sousa, Tatiana; Picareli, Luís Felipe; Morais, Diogo; Lopes, Paulo

    2017-08-14

    This investigation sought to understand whether performance in naturalistic virtual reality tasks for cognitive assessment relates to the cognitive domains that are supposed to be measured. The Shoe Closet Test (SCT) was developed based on a simple visual search task involving attention skills, in which participants have to match each pair of shoes with the colors of the compartments in a virtual shoe closet. The interaction within the virtual environment was made using the Microsoft Kinect. The measures consisted of concurrent paper-and-pencil neurocognitive tests for global cognitive functioning, executive functions, attention, psychomotor ability, and the outcomes of the SCT. The results showed that the SCT correlated with global cognitive performance as measured with the Montreal Cognitive Assessment (MoCA). The SCT explained one third of the total variance of this test and revealed good sensitivity and specificity in discriminating scores below one standard deviation in this screening tool. These findings suggest that performance of such functional tasks involves a broad range of cognitive processes that are associated with global cognitive functioning and that may be difficult to isolate through paper-and-pencil neurocognitive tests.

  8. Design and Evaluation of an Interactive Exercise Coaching System for Older Adults: Lessons Learned

    PubMed Central

    Ofli, Ferda; Kurillo, Gregorij; Obdržálek, Štěpán; Bajcsy, Ruzena; Jimison, Holly; Pavel, Misha

    2016-01-01

    Although the positive effects of exercise on the well-being and quality of independent living for older adults are well-accepted, many elderly individuals lack access to exercise facilities, or the skills and motivation to perform exercise at home. To provide a more engaging environment that promotes physical activity, various fitness applications have been proposed. Many of the available products, however, are geared toward a younger population and are not appropriate or engaging for an older population. To address these issues, we developed an automated interactive exercise coaching system using the Microsoft Kinect. The coaching system guides users through a series of video exercises, tracks and measures their movements, provides real-time feedback, and records their performance over time. Our system consists of exercises to improve balance, flexibility, strength, and endurance, with the aim of reducing fall risk and improving performance of daily activities. In this paper, we report on the development of the exercise system, discuss the results of our recent field pilot study with six independently-living elderly individuals, and highlight the lessons learned relating to the in-home system setup, user tracking, feedback, and exercise performance evaluation. PMID:25594988

  9. Hand pose estimation in depth image using CNN and random forest

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Cao, Zhiguo; Xiao, Yang; Fang, Zhiwen

    2018-03-01

    Thanks to the availability of low cost depth cameras like the Microsoft Kinect, 3D hand pose estimation has attracted special research attention in recent years. Due to the large variations in hand viewpoint and the high dimensionality of hand motion, 3D hand pose estimation is still challenging. In this paper we propose a two-stage framework which combines a CNN and a Random Forest to boost the performance of hand pose estimation. First, we use a standard Convolutional Neural Network (CNN) to regress the hand joint locations. Second, we use a Random Forest to refine the joints from the first stage. In the second stage, we propose a pyramid feature which merges the information flow of the CNN. Specifically, we obtain the rough joint locations from the first stage, then rotate the convolutional feature maps (and image). After this, for each joint, we map its location to each feature map (and image), crop features around that location on each feature map (and image), and finally feed the extracted features to the Random Forest for refinement. Experimentally, we evaluate the proposed method on the ICVL dataset and obtain a mean error of about 11 mm; the method also runs in real time on a desktop.

  10. Real-time skeleton tracking for embedded systems

    NASA Astrophysics Data System (ADS)

    Coleca, Foti; Klement, Sascha; Martinetz, Thomas; Barth, Erhardt

    2013-03-01

    Touch-free gesture technology is beginning to become more popular with consumers and may have a significant future impact on interfaces for digital photography. However, almost every commercial software framework for gesture and pose detection is aimed at either desktop PCs or high-powered GPUs, making mobile implementations of gesture recognition an attractive area for research and development. In this paper we present an algorithm for hand skeleton tracking and gesture recognition that runs on an ARM-based platform (Pandaboard ES, OMAP 4460 architecture). The algorithm uses self-organizing maps to fit a given topology (skeleton) into a 3D point cloud. This is a novel way of approaching the problem of pose recognition as it does not employ complex optimization techniques or data-based learning. After an initial background segmentation step, the algorithm is run in parallel with heuristics which detect and correct artifacts arising from insufficient or erroneous input data. We then optimize the algorithm for the ARM platform using fixed-point computation and the NEON SIMD architecture that the OMAP4460 provides. We tested the algorithm with two different depth-sensing devices (Microsoft Kinect, PMD Camboard). For both input devices we were able to accurately track the skeleton at the native framerate of the cameras.

  11. Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models.

    PubMed

    Liu, Zhiguang; Zhou, Liuyang; Leung, Howard; Shum, Hubert P H

    2016-11-01

    Depth sensor based 3D human motion estimation hardware such as Kinect has made interactive applications more popular recently. However, it is still challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and self-occluding action performed by the user. In this paper, we propose a new real-time probabilistic framework to enhance the accuracy of live captured postures that belong to one of the action classes in the database. We adopt the Gaussian Process model as a prior to leverage the position data obtained from Kinect and marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the accurate parts of the observed posture, we embed a set of joint reliability measurements into the optimization framework. A major drawback of Gaussian Process is its cubic learning complexity when dealing with a large database due to the inverse of a covariance matrix. To solve the problem, we propose a new method based on a local mixture of Gaussian Processes, in which Gaussian Processes are defined in local regions of the state space. Due to the significantly decreased sample size in each local Gaussian Process, the learning time is greatly reduced. At the same time, the prediction speed is enhanced as the weighted mean prediction for a given sample is determined by the nearby local models only. Our system also allows incrementally updating a specific local Gaussian Process in real time, which enhances the likelihood of adapting to run-time postures that are different from those in the database. Experimental results demonstrate that our system can generate high quality postures even under severe self-occlusion situations, which is beneficial for real-time applications such as motion-based gaming and sport training.

  12. MaLT - Combined Motor and Language Therapy Tool for Brain Injury Patients Using Kinect.

    PubMed

    Wairagkar, Maitreyee; McCrindle, Rachel; Robson, Holly; Meteyard, Lotte; Sperrin, Malcom; Smith, Andy; Pugh, Moyra

    2017-03-23

    The functional connectivity and structural proximity of elements of the language and motor systems result in frequent co-morbidity post brain injury. Although rehabilitation services are becoming increasingly multidisciplinary and "integrated", treatment for language and motor functions often occurs in isolation. Thus, behavioural therapies which promote neural reorganisation do not reflect the high intersystem connectivity of the neurologically intact brain. As such, there is a pressing need for rehabilitation tools which better reflect and target the impaired cognitive networks. The objective of this research is to develop a combined high dosage therapy tool for language and motor rehabilitation. The rehabilitation therapy tool developed, MaLT (Motor and Language Therapy), comprises a suite of computer games targeting both language and motor therapy that use the Kinect sensor as an interaction device. The games developed are intended for use in the home environment over prolonged periods of time. In order to track patients' engagement with the games and their rehabilitation progress, the game records patient performance data for the therapist to interrogate. MaLT incorporates Kinect-based games, a database of objects and language parameters, and a reporting tool for therapists. Games have been developed that target four major language therapy tasks involving single word comprehension, initial phoneme identification, rhyme identification and a naming task. These tasks have 8 levels each increasing in difficulty. A database of 750 objects is used to programmatically generate appropriate questions for the game, providing both targeted therapy and unique gameplay every time. The design of the games has been informed by therapists and by discussions with a Public Patient Involvement (PPI) group. Pilot MaLT trials have been conducted with three stroke survivors for the duration of 6 to 8 weeks. Patients' performance is monitored through MaLT's reporting facility presented as graphs plotted from patient game data. Performance indicators include reaction time, accuracy, number of incorrect responses and hand use. The resultant games have also been tested by the PPI with a positive response and further suggestions for future modifications made. MaLT provides a tool that innovatively combines motor and language therapy for high dosage rehabilitation in the home. It has demonstrated that motion sensor technology can be successfully combined with a language therapy task to target both upper limb and linguistic impairment in patients following brain injury. The initial studies on stroke survivors have demonstrated that the combined therapy approach is viable and the outputs of this study will inform planned larger scale future trials.

  13. Person-Generated Health Data in Simulated Rehabilitation Using Kinect for Stroke: Literature Review.

    PubMed

    Dimaguila, Gerardo Luis; Gray, Kathleen; Merolli, Mark

    2018-05-08

    Person- or patient-generated health data (PGHD) are health, wellness, and clinical data that people generate, record, and analyze for themselves. There is potential for PGHD to improve the efficiency and effectiveness of simulated rehabilitation technologies for stroke. Simulated rehabilitation is a type of telerehabilitation that uses computer technologies and interfaces to allow the real-time simulation of rehabilitation activities or a rehabilitation environment. A leading technology for simulated rehabilitation is Microsoft's Kinect, a video-based technology that uses infrared to track a user's body movements. This review attempts to understand to what extent Kinect-based stroke rehabilitation systems (K-SRS) have used PGHD and to what benefit. The review is conducted in two parts. In part 1, aspects of relevance for PGHD were searched for in existing systematic reviews on K-SRS. The following databases were searched: IEEE Xplore, Association of Computing Machinery Digital Library, PubMed, Biomed Central, Cochrane Library, and Campbell Collaboration. In part 2, original research papers that presented or used K-SRS were reviewed in terms of (1) types of PGHD, (2) patient access to PGHD, (3) PGHD use, and (4) effects of PGHD use. The search was conducted in the same databases as part 1 except Cochrane and Campbell Collaboration. Reference lists on K-SRS of the reviews found in part 1 were also included in the search for part 2. There was no date restriction. The search was closed in June 2017. The quality of the papers was not assessed, as it was not deemed critical to understanding PGHD access and use in studies that used K-SRS. In part 1, 192 papers were identified, and after assessment only 3 papers were included. Part 1 showed that previous reviews focused on technical effectiveness of K-SRS with some attention on clinical effectiveness. None of those reviews reported on home-based implementation or PGHD use. In part 2, 163 papers were identified and after assessment, 41 papers were included. Part 2 showed that there is a gap in understanding how PGHD use may affect patients using K-SRS and a lack of patient participation in the design of such systems. This paper calls specifically for further studies of K-SRS-and for studies of technologies that allow patients to generate their own health data in general-to pay more attention to how patients' own use of their data may influence their care processes and outcomes. Future studies that trial the effectiveness of K-SRS outside the clinic should also explore how patients and carers use PGHD in home rehabilitation programs. ©Gerardo Luis Dimaguila, Kathleen Gray, Mark Merolli. Originally published in JMIR Rehabilitation and Assistive Technology (http://rehab.jmir.org), 08.05.2018.

  14. Mechanism of Kinect-based virtual reality training for motor functional recovery of upper limbs after subacute stroke.

    PubMed

    Bao, Xiao; Mao, Yurong; Lin, Qiang; Qiu, Yunhai; Chen, Shaozhen; Li, Le; Cates, Ryan S; Zhou, Shufeng; Huang, Dongfeng

    2013-11-05

    The Kinect-based virtual reality system for the Xbox 360 enables users to control and interact with the game console without the need to touch a game controller, and provides rehabilitation training for stroke patients with lower limb dysfunctions. However, the underlying mechanism remains unclear. In this study, 18 healthy subjects and five patients after subacute stroke were included. The five patients were scanned using functional MRI prior to training, 3 weeks after training and at a 12-week follow-up, and then compared with healthy subjects. The Fugl-Meyer Assessment and Wolf Motor Function Test scores of the hemiplegic upper limbs of stroke patients were significantly increased 3 weeks after training and at the 12-week follow-up. Functional MRI results showed that contralateral primary sensorimotor cortex was activated after Kinect-based virtual reality training in the stroke patients compared with the healthy subjects. Contralateral primary sensorimotor cortex, the bilateral supplementary motor area and the ipsilateral cerebellum were also activated during hand-clenching in all 18 healthy subjects. Our findings indicate that Kinect-based virtual reality training could promote the recovery of upper limb motor function in subacute stroke patients, and brain reorganization by Kinect-based virtual reality training may be linked to the contralateral sensorimotor cortex.

  15. Mechanism of Kinect-based virtual reality training for motor functional recovery of upper limbs after subacute stroke

    PubMed Central

    Bao, Xiao; Mao, Yurong; Lin, Qiang; Qiu, Yunhai; Chen, Shaozhen; Li, Le; Cates, Ryan S.; Zhou, Shufeng; Huang, Dongfeng

    2013-01-01

    The Kinect-based virtual reality system for the Xbox 360 enables users to control and interact with the game console without the need to touch a game controller, and provides rehabilitation training for stroke patients with lower limb dysfunctions. However, the underlying mechanism remains unclear. In this study, 18 healthy subjects and five patients after subacute stroke were included. The five patients were scanned using functional MRI prior to training, 3 weeks after training and at a 12-week follow-up, and then compared with healthy subjects. The Fugl-Meyer Assessment and Wolf Motor Function Test scores of the hemiplegic upper limbs of stroke patients were significantly increased 3 weeks after training and at the 12-week follow-up. Functional MRI results showed that contralateral primary sensorimotor cortex was activated after Kinect-based virtual reality training in the stroke patients compared with the healthy subjects. Contralateral primary sensorimotor cortex, the bilateral supplementary motor area and the ipsilateral cerebellum were also activated during hand-clenching in all 18 healthy subjects. Our findings indicate that Kinect-based virtual reality training could promote the recovery of upper limb motor function in subacute stroke patients, and brain reorganization by Kinect-based virtual reality training may be linked to the contralateral sensorimotor cortex. PMID:25206611

  16. Non-contact and noise tolerant heart rate monitoring using microwave doppler sensor and range imagery.

    PubMed

    Matsunaga, Daichi; Izumi, Shintaro; Okuno, Keisuke; Kawaguchi, Hiroshi; Yoshimoto, Masahiko

    2015-01-01

    This paper describes a non-contact and noise-tolerant heart beat monitoring system. The proposed system comprises a microwave Doppler sensor and range imagery using a Microsoft Kinect™. A possible application of the proposed system is driver health monitoring. We introduce a sensor fusion approach to minimize the heart beat detection error. The proposed algorithm can subtract the body motion artifact from the Doppler sensor output using time-frequency analysis. The body motion artifact is a crucially important problem for biosignal monitoring using a microwave Doppler sensor. The body motion speed is obtainable from range imagery, which has 5-mm resolution at 30-cm distance. Measurement results show that the success rate of heart beat detection is improved about 75% on average when the Doppler wave is degraded by the body motion artifact.

  17. Development of paper-based sensor coupled with smartphone detector for simple creatinine determination

    NASA Astrophysics Data System (ADS)

    Tambaru, David; Rupilu, Reski Helena; Nitti, Fidelis; Gauru, Imanuel; Suwari

    2017-03-01

    Creatinine level in urine is one of the most important indicators of kidney disease. A routine assay for this compound is vital, especially for those who suffer from kidney malfunction. However, the existing methods are mostly expensive, impractical and time consuming. Herein, we report research on the development of a sensor for creatinine analysis using cheap materials such as paper, coupled with a smartphone as the detector, leading to an inexpensive and instrument-free method. The approach is based on the Jaffe reaction, in which creatinine is reacted with picric acid in basic solution to form an orange-red creatinine-picrate complex. The red-green-blue intensity of the complex, captured with a smartphone, was measured and then digitized with the free Microsoft Visual C# 2010 Express application as the analytical response. The proposed method was evaluated for its precision, accuracy, percent recovery and limit of detection, which were found to be 5.55%, 0.74%, 96.73 ± 6.12% and 8.02 ppm, respectively. It can be concluded that the paper-based sensor with a digital imaging approach using Microsoft Visual C# 2010 Express, with its simplicity and affordability, can be applied for on-site determination of creatinine levels.
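
    The colorimetric step (digitizing the red-green-blue intensity of the sensing zone and relating it to concentration) was implemented by the authors in Microsoft Visual C# Express; a rough Python equivalent for building a calibration curve from standard solutions is sketched below, with invented intensities and the green channel chosen arbitrarily as the response.

        import numpy as np

        # Mean green-channel intensity of the detection zone for standard solutions
        # (values are illustrative, not the paper's calibration data).
        concentration_ppm = np.array([0, 25, 50, 100, 150], dtype=float)
        mean_intensity    = np.array([212.0, 196.5, 181.0, 150.5, 120.0])

        # Linear calibration: intensity drops as the coloured complex deepens.
        slope, intercept = np.polyfit(concentration_ppm, mean_intensity, 1)

        def creatinine_ppm(region_rgb):
            """region_rgb: HxWx3 array cropped around the paper sensing zone."""
            intensity = region_rgb[..., 1].mean()      # green channel as the response
            return (intensity - intercept) / slope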

  18. Development of automatic body condition scoring using a low-cost 3-dimensional Kinect camera.

    PubMed

    Spoliansky, Roii; Edan, Yael; Parmet, Yisrael; Halachmi, Ilan

    2016-09-01

    Body condition scoring (BCS) is a farm-management tool for estimating dairy cows' energy reserves. Today, BCS is performed manually by experts. This paper presents a 3-dimensional algorithm that provides a topographical understanding of the cow's body to estimate BCS. An automatic BCS system consisting of a Kinect camera (Microsoft Corp., Redmond, WA) triggered by a passive infrared motion detector was designed and implemented. Image processing and regression algorithms were developed and included the following steps: (1) image restoration, the removal of noise; (2) object recognition and separation, identification and separation of the cows; (3) movie and image selection, selection of movies and frames that include the relevant data; (4) image rotation, alignment of the cow parallel to the x-axis; and (5) image cropping and normalization, removal of irrelevant data, setting the image size to 150×200 pixels, and normalizing image values. All steps were performed automatically, including image selection and classification. Fourteen individual features per cow, derived from the cows' topography, were automatically extracted from the movies and from the farm's herd-management records. These features appear to be measurable in a commercial farm. Manual BCS was performed by a trained expert and compared with the output of the training set. A regression model was developed, correlating the features with the manual BCS references. Data were acquired for 4 d, resulting in a database of 422 movies of 101 cows. Movies containing cows' back ends were automatically selected (389 movies). The data were divided into a training set of 81 cows and a test set of 20 cows; both sets included the identical full range of BCS classes. Accuracy tests gave a mean absolute error of 0.26, median absolute error of 0.19, and coefficient of determination of 0.75, with 100% correct classification within 1 step and 91% correct classification within a half step for BCS classes. Results indicated good repeatability, with all standard deviations under 0.33. The algorithm is independent of the background and requires 10 cows for training with approximately 30 movies of 4 s each. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
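
    The final modelling step (regressing the 14 automatically extracted features onto the expert's manual BCS) can be sketched with scikit-learn as below; the 81/20 split mirrors the abstract, but the feature values and the plain linear model are placeholders rather than the published pipeline.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import mean_absolute_error

        rng = np.random.default_rng(1)
        X = rng.normal(size=(101, 14))                     # 14 topographic features per cow
        y = 2.5 + 0.3 * X[:, 0] + rng.normal(0, 0.1, 101)  # placeholder BCS labels

        X_train, y_train = X[:81], y[:81]                  # 81 cows for training
        X_test, y_test = X[81:], y[81:]                    # 20 cows held out

        model = LinearRegression().fit(X_train, y_train)
        pred = model.predict(X_test)
        print("mean absolute error:", round(mean_absolute_error(y_test, pred), 3))
        print("within half a BCS class:", float(np.mean(np.abs(pred - y_test) <= 0.25)))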

  19. SU-E-J-12: An Image-Guided Soft Robotic Patient Positioning System for Maskless Head-And-Neck Cancer Radiotherapy: A Proof-Of-Concept Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogunmolu, O; Gans, N; Jiang, S

    Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressured air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion along the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs the control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e. regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduces to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to the capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control will be pursued.
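
    As a toy illustration of the control idea (the depth-camera head-position error driving the pneumatic valve command), a proportional-integral loop over an idealised first-order bladder model is simulated below. The gains, time constant, and set-point are invented, and the real system's position-based visual servoing and pneumatic dynamics are considerably richer.

        import numpy as np

        def simulate_head_regulation(target_mm=20.0, steps=400, dt=0.05):
            """PI regulation of head displacement via an idealised air-bladder model."""
            kp, ki = 0.6, 0.4        # controller gains (illustrative)
            tau = 2.0                # pneumatic time constant in seconds (illustrative)
            position, integral = 0.0, 0.0
            trace = []
            for _ in range(steps):
                error = target_mm - position            # depth-camera measurement error
                integral += error * dt
                command = kp * error + ki * integral    # valve actuation command
                # First-order response of the inflatable air bladder to the command.
                position += (command - position) * dt / tau
                trace.append(position)
            return np.array(trace)

        trace = simulate_head_regulation()
        print("final displacement (mm):", round(float(trace[-1]), 2))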

  20. Markerless motion capture systems as training device in neurological rehabilitation: a systematic review of their use, application, target population and efficacy.

    PubMed

    Knippenberg, Els; Verbrugghe, Jonas; Lamers, Ilse; Palmaers, Steven; Timmermans, Annick; Spooren, Annemie

    2017-06-24

    Client-centred task-oriented training is important in neurological rehabilitation but is time consuming and costly in clinical practice. Technology, especially motion capture systems (MCS), which are low cost and easy to apply in clinical practice, may be used to support this kind of training, but knowledge and evidence of their use for training are scarce. The present review aims to investigate 1) which motion capture systems are used as training devices in neurological rehabilitation, 2) how they are applied, 3) in which target population, 4) what the content of the training is, and 5) what the efficacy of training with MCS is. A computerised systematic literature review was conducted in four databases (PubMed, Cinahl, Cochrane Database and IEEE). The following MeSH terms and key words were used: Motion, Movement, Detection, Capture, Kinect, Rehabilitation, Nervous System Diseases, Multiple Sclerosis, Stroke, Spinal Cord, Parkinson Disease, Cerebral Palsy and Traumatic Brain Injury. Van Tulder's quality assessment was used to score the methodological quality of the selected studies. The descriptive analysis is reported by MCS, target population, training parameters and training efficacy. Eighteen studies were selected (mean Van Tulder score = 8.06 ± 3.67). Based on methodological quality, six studies were selected for analysis of training efficacy. The most commonly used MCS was the Microsoft Kinect, and training was mostly conducted in upper limb stroke rehabilitation. Training programs varied in intensity, frequency and content. None of the studies reported an individualised training program based on a client-centred approach. Motion capture systems are training devices with potential in neurological rehabilitation to increase motivation during training and may assist improvement on one or more International Classification of Functioning, Disability and Health (ICF) levels. Although client-centred task-oriented training is important in neurological rehabilitation, the client-centred approach was not included. Future technological developments should take up the challenge to combine MCS with the principles of a client-centred task-oriented approach and prove efficacy using randomised controlled trials with long-term follow-up. Prospero registration number: 42016035582.

  1. A comparison of the upper limb movement kinematics utilized by children playing virtual and real table tennis.

    PubMed

    Bufton, Amy; Campbell, Amity; Howie, Erin; Straker, Leon

    2014-12-01

    Active virtual games (AVG) may facilitate gross motor skill development, depending on their fidelity. This study compared the movement patterns of nineteen 10-12 yr old children, while playing table tennis on three AVG consoles (Nintendo Wii, Xbox Kinect, Sony Move) and as a real world task. Wrist and elbow joint angles and hand path distance and speed were captured. Children playing real table tennis had significantly smaller (e.g. Wrist Angle Forehand Real-Kinect: Mean Difference (MD): -18.2°, 95% Confidence Interval (CI): -26.15 to -10.26) and slower (e.g. Average Speed Forehand Real-Kinect: MD: -1.98 ms(-1), 95% CI: -2.35 to -1.61) movements than when using all three AVGs. Hand path distance was smaller in forehand and backhand strokes (e.g. Kinect-Wii: MD: 0.46 m, 95% CI: 0.13-0.79) during playing with Kinect than Move and Wii. The movement patterns when playing real and virtual table tennis were different and this may impede the development of real world gross motor skills. Several elements, including display, input and task characteristics, may have contributed to the differences in movement patterns observed. Understanding the interface components for AVGs may help development of higher fidelity games to potentially enhance the development of gross motor skill and thus participation in PA. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Interacting With A Near Real-Time Urban Digital Watershed Using Emerging Geospatial Web Technologies

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Fazio, D. J.; Abdelzaher, T.; Minsker, B.

    2007-12-01

    The value of real-time hydrologic data dissemination including river stage, streamflow, and precipitation for operational stormwater management efforts is particularly high for communities where flash flooding is common and costly. Ideally, such data would be presented within a watershed-scale geospatial context to portray a holistic view of the watershed. Local hydrologic sensor networks usually lack comprehensive integration with sensor networks managed by other agencies sharing the same watershed due to administrative, political, but mostly technical barriers. Recent efforts on providing unified access to hydrological data have concentrated on creating new SOAP-based web services and common data formats (e.g. WaterML and Observation Data Model) for users to access the data (e.g. HIS and HydroSeek). Geospatial Web technology including OGC sensor web enablement (SWE), GeoRSS, Geo tags, Geospatial browsers such as Google Earth and Microsoft Virtual Earth and other location-based service tools provides possibilities for us to interact with a digital watershed in near-real-time. OGC SWE proposes a revolutionary concept towards web-connected/controllable sensor networks. However, these efforts have not provided the capability to allow dynamic data integration/fusion among heterogeneous sources, data filtering and support for workflows or domain specific applications where both push and pull modes of retrieving data may be needed. We propose a lightweight integration framework by extending SWE with an open source Enterprise Service Bus (e.g., mule) as a backbone component to dynamically transform, transport, and integrate both heterogeneous sensor data sources and simulation model outputs. We will report our progress on building such a framework, in which multi-agency sensor data and hydro-model outputs (with map layers) will be integrated and disseminated in a geospatial browser (e.g. Microsoft Virtual Earth). This is a collaborative project among NCSA, the USGS Illinois Water Science Center, and the Computer Science Department at UIUC, funded by the Adaptive Environmental Infrastructure Sensing and Information Systems initiative at UIUC.

  3. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System

    PubMed Central

    Qian, Jun; Zi, Bin; Ma, Yangang; Zhang, Dan

    2017-01-01

    In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields. PMID:28891964

  4. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System.

    PubMed

    Qian, Jun; Zi, Bin; Wang, Daoming; Ma, Yangang; Zhang, Dan

    2017-09-10

    In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields.
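    A generic sketch of the kind of sensor fusion described above is given below: a minimal 2-D pose extended Kalman filter in Python in which wheel-encoder odometry drives the prediction step and a Kinect-derived pose acts as the measurement. The noise covariances and the identity measurement model are illustrative assumptions rather than the authors' formulation.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

class PoseEKF:
    """Minimal 2-D pose EKF: wheel odometry drives the prediction,
    a Kinect-derived pose (x, y, theta) is used as the measurement."""
    def __init__(self):
        self.x = np.zeros(3)                     # [x, y, theta]
        self.P = np.eye(3) * 0.1                 # state covariance
        self.Q = np.diag([0.02, 0.02, 0.01])     # odometry (process) noise
        self.R = np.diag([0.05, 0.05, 0.03])     # Kinect (measurement) noise

    def predict(self, v, omega, dt):
        x, y, th = self.x
        self.x = np.array([x + v * dt * np.cos(th),
                           y + v * dt * np.sin(th),
                           wrap_angle(th + omega * dt)])
        # Jacobian of the motion model with respect to the state
        F = np.array([[1, 0, -v * dt * np.sin(th)],
                      [0, 1,  v * dt * np.cos(th)],
                      [0, 0,  1]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        # Measurement model is the identity: the Kinect provides the full pose.
        innovation = z - self.x
        innovation[2] = wrap_angle(innovation[2])
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)
        self.x = self.x + K @ innovation
        self.x[2] = wrap_angle(self.x[2])
        self.P = (np.eye(3) - K) @ self.P

if __name__ == "__main__":
    ekf = PoseEKF()
    ekf.predict(v=0.2, omega=0.1, dt=0.1)         # one odometry step
    ekf.update(np.array([0.02, 0.0, 0.01]))       # one Kinect pose fix
    print(ekf.x)
```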

  5. Statistical Validation for Clinical Measures: Repeatability and Agreement of Kinect™-Based Software

    PubMed Central

    Tello, Emanuel; Rodrigo, Alejandro; Valentinuzzi, Max E.

    2018-01-01

    Background The rehabilitation process is a fundamental stage for recovery of people's capabilities. However, the evaluation of the process is performed by physiatrists and medical doctors, mostly based on their observations, that is, a subjective appreciation of the patient's evolution. This paper proposes a platform that tracks the movement of an individual's upper limb using Kinect sensor(s), to be applied to the patient during the rehabilitation process. The main contribution is the development of quantifying software and the statistical validation of its performance, repeatability, and clinical use in the rehabilitation process. Methods The software determines joint angles and upper limb trajectories for the construction of a specific rehabilitation protocol and quantifies the treatment evolution. In turn, the information is presented via a graphical interface that allows the recording, storage, and report of the patient's data. For clinical purposes, the software information is statistically validated with three different methodologies, comparing the measures with a goniometer in terms of agreement and repeatability. Results The agreement of joint angles measured with the proposed software and goniometer is evaluated with Bland-Altman plots; all measurements fell well within the limits of agreement, meaning interchangeability of both techniques. Additionally, the results of Bland-Altman analysis of repeatability show 95% confidence. Finally, the physiotherapists' qualitative assessment shows encouraging results for clinical use. Conclusion The main conclusion is that the software is capable of offering a clinical history of the patient and is useful for quantification of the rehabilitation success. The simplicity, low cost, and visualization possibilities enhance the use of the Kinect-based software for rehabilitation and other applications, and the expert's opinion endorses the choice of our approach for clinical practice. Comparison of the new measurement technique with established goniometric methods determines that the proposed software agrees sufficiently to be used interchangeably. PMID:29750166
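    As a small illustration of the Bland-Altman agreement analysis mentioned in the results, the following Python sketch computes the mean bias and 95% limits of agreement between two sets of paired angle measurements; the sample values are hypothetical.

```python
import numpy as np

def bland_altman(measure_a, measure_b):
    """Return the mean bias and 95% limits of agreement between two
    measurement techniques (e.g. Kinect-derived and goniometer angles)."""
    a = np.asarray(measure_a, dtype=float)
    b = np.asarray(measure_b, dtype=float)
    diffs = a - b
    bias = diffs.mean()
    sd = diffs.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

if __name__ == "__main__":
    # Hypothetical elbow-flexion angles (degrees) from the two instruments.
    kinect     = [92.1, 134.0, 45.3, 88.7, 120.5, 61.2]
    goniometer = [90.0, 133.2, 47.0, 87.5, 121.8, 60.0]
    bias, (lo, hi) = bland_altman(kinect, goniometer)
    print(f"bias = {bias:.2f} deg, limits of agreement = [{lo:.2f}, {hi:.2f}] deg")
```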

  6. A Kinect based intelligent e-rehabilitation system in physical therapy.

    PubMed

    Gal, Norbert; Andrei, Diana; Nemeş, Dan Ion; Nădăşan, Emanuela; Stoicu-Tivadar, Vasile

    2015-01-01

    This paper presents an intelligent e-rehabilitation system based on the Kinect and a fuzzy inference system. The Kinect can detect the posture and motion of the patients while the fuzzy inference system can interpret the acquired data on the cognitive level. The system is capable of assessing the initial posture and motion ranges of 20 joints. Using angles to describe the motion of the joints, exercise patterns can be developed for each patient. Using the exercise descriptors, the fuzzy inference system can track the patient and deliver real-time feedback to maximize the efficiency of the rehabilitation. The first laboratory tests confirm the utility of this system for initial posture detection, motion range assessment and exercise tracking.
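    A minimal Python sketch of the joint-angle computation implied above, in which an angle is derived from three 3-D joint positions of a Kinect skeleton; the joint names and coordinates in the example are hypothetical.

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` formed by the segments joint->proximal and
    joint->distal, each given as a 3-D position from the Kinect skeleton."""
    u = np.asarray(proximal, float) - np.asarray(joint, float)
    v = np.asarray(distal, float) - np.asarray(joint, float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

if __name__ == "__main__":
    shoulder, elbow, wrist = (0.0, 0.3, 2.0), (0.0, 0.0, 2.0), (0.25, 0.0, 1.95)
    print(f"elbow flexion angle: {joint_angle(shoulder, elbow, wrist):.1f} deg")
```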

  7. Therapeutic hypertension system based on a microbreathing pressure sensor system.

    PubMed

    Diao, Ziji; Liu, Hongying; Zhu, Lan; Gao, Xiaoqiang; Zhao, Suwen; Pi, Xitian; Zheng, Xiaolin

    2011-01-01

    A novel therapeutic system for the treatment of hypertension was developed on the basis of a slow-breath training mechanism, using a microbreathing pressure sensor device attached to the abdomen for the detection of human respiratory signals. The system utilizes a single-chip AT89C51 microcomputer as a core processor, programmed by Microsoft Visual C++ 6.0 to communicate with a PC via a full-speed PDIUSBD12 interface chip. The programming is based on a slow-breath guided algorithm in which the respiratory signal serves as a physiological feedback parameter. Inhalation and exhalation by the subject are guided by music signals. Our study indicates that this microbreathing sensor system may assist in slow-breath training and may help to decrease blood pressure.

  8. Intelligent lead: a novel HRI sensor for guide robots.

    PubMed

    Cho, Keum-Bae; Lee, Beom-Hee

    2012-01-01

    This paper addresses the introduction of a new Human Robot Interaction (HRI) sensor for guide robots. Guide robots for geriatric patients or the visually impaired should follow the user's control commands, keeping a certain desired distance and allowing the user to work freely. Therefore, it is necessary to acquire control commands and the user's position on a real-time basis. We suggest a new sensor fusion system to achieve this objective and we will call this sensor the "intelligent lead". The objective of the intelligent lead is to acquire a stable distance from the user to the robot, speed-control volume and turn-control volume, even when the robot platform with the intelligent lead is shaken on uneven ground. In this paper we explain a precise Extended Kalman Filter (EKF) procedure for this. The intelligent lead physically consists of a Kinect sensor, a serial linkage fitted with eight rotary encoders, and an IMU (Inertial Measurement Unit), and their measurements are fused by the EKF. A mobile robot was designed to test the performance of the proposed sensor system. After installing the intelligent lead in the mobile robot, several tests were conducted to verify that the mobile robot with the intelligent lead is capable of achieving its goal points while maintaining the appropriate distance between the robot and the user. The results show that the intelligent lead proposed in this paper can be used as a new HRI sensor that combines a joystick and a distance measure in mobile environments in which the robot and the user are moving at the same time.

  9. Application of Kinect™ and wireless technology for patient data recording and viewing system in the course of surgery

    NASA Astrophysics Data System (ADS)

    Ong, Aira Patrice R.; Bugtai, Nilo T.; Aldaba, Luis Miguel M.; Madrangca, Astrid Valeska H.; Que, Giselle V.; Que, Miles Frederick L.; Tan, Kean Anderson. S.

    2017-02-01

    In modern operating room (OR) conditions, a patient's computed tomography (CT) or magnetic resonance imaging (MRI) scans are some of the most important resources during surgical procedures. In practice, the surgeon is compelled to scrub out and back in every time he needs to scroll through scan images in mid-operation. To prevent leaving the operating table, many surgeons rely on assistants or nurses and give instructions to manipulate the computer for them, which can be cumbersome and frustrating. As a motivation for this study, a touchless (non-contact) gesture-based interface is incorporated into medical practice to allow aseptic interactions with the computer systems and with the patient's data. The system presented in this paper is composed of three main parts: the Trek Ai-Ball Camera, the Microsoft Kinect™, and the computer software. The incorporation of these components and the developed software allows the user to perform 13 hand gestures, which have been tested to be 100 percent accurate. Based on the results of the tests of system performance, the conclusions regarding the time efficiency of the viewing system and the quality and safety of the recording system have gained positive feedback from consulting doctors.

  10. Virtual Character Animation Based on Affordable Motion Capture and Reconfigurable Tangible Interfaces.

    PubMed

    Lamberti, Fabrizio; Paravati, Gianluca; Gatteschi, Valentina; Cannavo, Alberto; Montuschi, Paolo

    2018-05-01

    Software for computer animation is generally characterized by a steep learning curve, due to the entanglement of both sophisticated techniques and interaction methods required to control 3D geometries. This paper proposes a tool designed to support computer animation production processes by leveraging the affordances offered by articulated tangible user interfaces and motion capture retargeting solutions. To this aim, orientations of an instrumented prop are recorded together with animator's motion in the 3D space and used to quickly pose characters in the virtual environment. High-level functionalities of the animation software are made accessible via a speech interface, thus letting the user control the animation pipeline via voice commands while focusing on his or her hands and body motion. The proposed solution exploits both off-the-shelf hardware components (like the Lego Mindstorms EV3 bricks and the Microsoft Kinect, used for building the tangible device and tracking animator's skeleton) and free open-source software (like the Blender animation tool), thus representing an interesting solution also for beginners approaching the world of digital animation for the first time. Experimental results in different usage scenarios show the benefits offered by the designed interaction strategy with respect to a mouse & keyboard-based interface both for expert and non-expert users.

  11. Assessment of a Microsoft Kinect-based 3D scanning system for taking body segment girth measurements: a comparison to ISAK and ISO standards.

    PubMed

    Clarkson, Sean; Wheat, Jon; Heller, Ben; Choppin, Simon

    2016-01-01

    Use of anthropometric data to infer sporting performance is increasing in popularity, particularly within elite sport programmes. Measurement typically follows standards set by the International Society for the Advancement of Kinanthropometry (ISAK). However, such techniques are time consuming, which reduces their practicality. Schranz et al. recently suggested 3D body scanners could replace current measurement techniques; however, current systems are costly. Recent interest in natural user interaction has led to a range of low-cost depth cameras capable of producing 3D body scans, from which anthropometrics can be calculated. A scanning system comprising 4 depth cameras was used to scan 4 cylinders, representative of the body segments. Girth measurements were calculated from the 3D scans and compared to gold standard measurements. Requirements of a Level 1 ISAK practitioner were met in all 4 cylinders, and ISO standards for scan-derived girth measurements were met in the 2 larger cylinders only. A fixed measurement bias was identified that could be corrected with a simple offset factor. Further work is required to determine comparable performance across a wider range of measurements performed upon living participants. Nevertheless, findings of the study suggest such a system offers many advantages over current techniques, having a range of potential applications.
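    The Python sketch below shows one plausible way to derive a girth from such a scan: take a thin horizontal slice of the 3D point cloud and approximate the girth by the perimeter of the slice's convex hull. This is not necessarily the processing used in the study; the slice thickness and the synthetic test cylinder are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def cross_section_girth(points, height, tolerance=0.005):
    """Estimate the girth (m) of a body segment from a 3-D scan.

    points    : (N, 3) array of scan points in metres (x, y, z)
    height    : z value of the horizontal slice to measure
    tolerance : half-thickness of the slice taken around `height`
    """
    pts = np.asarray(points, dtype=float)
    band = pts[np.abs(pts[:, 2] - height) < tolerance][:, :2]   # 2-D slice
    if len(band) < 3:
        raise ValueError("not enough points in the slice")
    hull = ConvexHull(band)
    ring = band[hull.vertices]                  # ordered hull vertices
    closed = np.vstack([ring, ring[:1]])        # close the loop
    return np.sum(np.linalg.norm(np.diff(closed, axis=0), axis=1))

if __name__ == "__main__":
    # Hypothetical cylinder of radius 0.15 m: expected girth ~ 0.94 m.
    theta = np.random.uniform(0, 2 * np.pi, 5000)
    z = np.random.uniform(0, 1, 5000)
    cyl = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta), z])
    print(f"estimated girth: {cross_section_girth(cyl, height=0.5):.3f} m")
```

    A convex-hull perimeter slightly underestimates concave cross-sections, which is consistent with the kind of fixed measurement bias that the study corrects with an offset factor.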

  12. Is There Evidence That Active Videogames Increase Energy Expenditure and Exercise Intensity for People Poststroke and with Cerebral Palsy?

    PubMed

    Deutsch, Judith E; Guarrera-Bowlby, Phyllis; Myslinski, Mary Jane; Kafri, Michal

    2015-02-01

    This article asked and answered the question of whether there was evidence to support the use of videogames for promotion of wellness and fitness for people poststroke and those with cerebral palsy (CP). A literature search of PubMed, CINAHL, and PEDro using a population, intervention, and outcome (PIO) approach and the key words "stroke (or CP) AND video games (and synonyms) AND energy expenditure (EE) (and synonyms)" was conducted. It yielded two relevant references for people poststroke and five references for people with CP. The literature extraction and synthesis by the categories of the PIO indicated that most studies used only the population of interest, except two that compared the EE with that of healthy controls. The main finding is that both people poststroke (moderate severity) and people with CP (mild severity) can achieve moderate EE playing Wii™ (Nintendo, Kyoto, Japan), PlayStation® (Sony, Tokyo, Japan), and Kinect™ (Microsoft, Redmond, WA) games. Adults with CP of mild severity played the videogames at vigorous levels, whereas those with severe CP played them at low levels. There appears to be an interaction between development and severity that influences the exercise intensity measured by EE. The findings suggest that videogames are a gateway for wellness promotion.

  13. Intertidal Sandbar Welding as a Primary Source of Sediment for Dune Growth: Evidence from a Large Scale Field Experiment

    NASA Astrophysics Data System (ADS)

    Cohn, N.; Ruggiero, P.; de Vries, S.

    2016-12-01

    Dunes provide the first line of defense from elevated water levels in low-lying coastal systems, limiting potentially major flooding, economic damages, and loss of livelihood. Despite the well documented importance of healthy dunes, our predictive ability of dune growth, particularly following erosive storm events, remains poor - resulting in part from traditionally studying the wet and dry beach as separate entities. In fact, however, dune recovery and growth is closely tied to the subtidal morphology and the nearshore hydrodynamic conditions, necessitating treating the entire coastal zone from the shoreface to the backshore as an integrated system. In this context, to further improve our understanding of the physical processes allowing for beach and dune growth during fair weather conditions, a large field experiment, the Sandbar-aEolian Dune EXchange EXperiment, was performed in summer 2016 in southwestern Washington, USA. Measurements of nearshore and atmospheric hydrodynamics, in-situ sediment transport, and morphology change provide insight into the time and space scales of nearshore-beach-dune exchanges along a rapidly prograding stretch of coast over a 6 week period. As part of this experiment, the hypothesis that dune growth is limited by the welding of intertidal sandbars to the shoreline (Houser, 2009) was tested. Using laser particle counters, bed elevation sensors (sonar altimeters and Microsoft Kinect), continuously logging sediment traps, RGB and IR cameras, and repeat morphology surveys (terrestrial lidar, kite based structure from motion, and RTK GPS), spatial and temporal trends in aeolian sediment transport were assessed in relation to the synoptic onshore migration and welding of intertidal sandbars. Observations from this experiment demonstrate that (1) the intertidal zone is the primary source of sediment to the dunes during non-storm conditions, (2) rates of saltation increase during later stages of bar welding under equivalent wind conditions, and (3) alongshore variability in rates of backshore fluxes appears to be related to alongshore variability in intertidal morphology. These observations quantitatively support the Houser (2009) bar welding hypothesis and provide valuable new insights on nearshore-beach-dune sediment exchanges.

  14. A Kinect-Based Assessment System for Smart Classroom

    ERIC Educational Resources Information Center

    Kumara, W. G. C. W.; Wattanachote, Kanoksak; Battulga, Batbaatar; Shih, Timothy K.; Hwang, Wu-Yuin

    2015-01-01

    With the advancements in the field of human-computer interaction, it is now possible for users to use their body motions, such as swiping, pushing and moving, to interact with the content of computers or smart phones without traditional input devices like a mouse and keyboard. With the introduction of the gesture-based interface Kinect from…

  15. Learning Recycling from Playing a Kinect Game

    ERIC Educational Resources Information Center

    González Ibánez, José de Jesús Luis; Wang, Alf Inge

    2015-01-01

    The emergence of gesture-based computing and inexpensive gesture recognition technology such as the Kinect has opened doors for a new generation of educational games. Gesture-based interfaces make it possible to provide user interfaces that are more natural and closer to the tasks being carried out, helping students that learn best…

  16. Environmental Health Monitor: Advanced Development of Temperature Sensor Suite.

    DTIC Science & Technology

    1995-07-30

    systems was implemented using program code existing at Veritay. The software, written in Microsoft® QuickBASIC, facilitated program changes for... currently unforeseen reason re-calibration is needed, this can be readily accommodated by a straightforward change in the software program, without... unit. A linear relationship between these differences was obtained using curve fitting software. The ½-inch globe to 6-inch globe correlation was...

  17. The Effects of Using the Kinect Motion-Sensing Interactive System to Enhance English Learning for Elementary Students

    ERIC Educational Resources Information Center

    Pan, Wen Fu

    2017-01-01

    The objective of this study was to test whether the Kinect motion-sensing interactive system (KMIS) enhanced students' English vocabulary learning, while also comparing the system's effectiveness against a traditional computer-mouse interface. Both interfaces utilized an interactive game with a questioning strategy. One-hundred and twenty…

  18. Assessment of laboratory and daily energy expenditure estimates from consumer multi-sensor physical activity monitors.

    PubMed

    Chowdhury, Enhad A; Western, Max J; Nightingale, Thomas E; Peacock, Oliver J; Thompson, Dylan

    2017-01-01

    Wearable physical activity monitors are growing in popularity and provide the opportunity for large numbers of the public to self-monitor physical activity behaviours. The latest generation of these devices feature multiple sensors, ostensibly similar or even superior to advanced research instruments. However, little is known about the accuracy of their energy expenditure estimates. Here, we assessed their performance against criterion measurements in both controlled laboratory conditions (simulated activities of daily living and structured exercise) and over a 24 hour period in free-living conditions. Thirty men (n = 15) and women (n = 15) wore three multi-sensor consumer monitors (Microsoft Band, Apple Watch and Fitbit Charge HR), an accelerometry-only device as a comparison (Jawbone UP24) and validated research-grade multi-sensor devices (BodyMedia Core and individually calibrated Actiheart™). During discrete laboratory activities when compared against indirect calorimetry, the Apple Watch performed similarly to criterion measures. The Fitbit Charge HR was less consistent at measurement of discrete activities, but produced similar free-living estimates to the Apple Watch. Both these devices underestimated free-living energy expenditure (-394 kcal/d and -405 kcal/d, respectively; P<0.01). The multi-sensor Microsoft Band and accelerometry-only Jawbone UP24 devices underestimated most laboratory activities and substantially underestimated free-living expenditure (-1128 kcal/d and -998 kcal/d, respectively; P<0.01). None of the consumer devices were deemed equivalent to the reference method for daily energy expenditure. For all devices, there was a tendency for negative bias with greater daily energy expenditure. No consumer monitors performed as well as the research-grade devices although in some (but not all) cases, estimates were close to criterion measurements. Thus, whilst industry-led innovation has improved the accuracy of consumer monitors, these devices are not yet equivalent to the best research-grade devices or indeed equivalent to each other. We propose independent quality standards and/or accuracy ratings for consumer devices are required.

  19. Assessment of laboratory and daily energy expenditure estimates from consumer multi-sensor physical activity monitors

    PubMed Central

    Chowdhury, Enhad A.; Western, Max J.; Nightingale, Thomas E.; Peacock, Oliver J.; Thompson, Dylan

    2017-01-01

    Wearable physical activity monitors are growing in popularity and provide the opportunity for large numbers of the public to self-monitor physical activity behaviours. The latest generation of these devices feature multiple sensors, ostensibly similar or even superior to advanced research instruments. However, little is known about the accuracy of their energy expenditure estimates. Here, we assessed their performance against criterion measurements in both controlled laboratory conditions (simulated activities of daily living and structured exercise) and over a 24 hour period in free-living conditions. Thirty men (n = 15) and women (n = 15) wore three multi-sensor consumer monitors (Microsoft Band, Apple Watch and Fitbit Charge HR), an accelerometry-only device as a comparison (Jawbone UP24) and validated research-grade multi-sensor devices (BodyMedia Core and individually calibrated Actiheart™). During discrete laboratory activities when compared against indirect calorimetry, the Apple Watch performed similarly to criterion measures. The Fitbit Charge HR was less consistent at measurement of discrete activities, but produced similar free-living estimates to the Apple Watch. Both these devices underestimated free-living energy expenditure (-394 kcal/d and -405 kcal/d, respectively; P<0.01). The multi-sensor Microsoft Band and accelerometry-only Jawbone UP24 devices underestimated most laboratory activities and substantially underestimated free-living expenditure (-1128 kcal/d and -998 kcal/d, respectively; P<0.01). None of the consumer devices were deemed equivalent to the reference method for daily energy expenditure. For all devices, there was a tendency for negative bias with greater daily energy expenditure. No consumer monitors performed as well as the research-grade devices although in some (but not all) cases, estimates were close to criterion measurements. Thus, whilst industry-led innovation has improved the accuracy of consumer monitors, these devices are not yet equivalent to the best research-grade devices or indeed equivalent to each other. We propose independent quality standards and/or accuracy ratings for consumer devices are required. PMID:28234979

  20. Technical Note: Kinect V2 surface filtering during gantry motion for radiotherapy applications.

    PubMed

    Nazir, Souha; Rihana, Sandy; Visvikis, Dimitris; Fayad, Hadi

    2018-04-01

    In radiotherapy, the Kinect V2 camera has recently received a lot of attention for many clinical applications, including patient positioning, respiratory motion tracking, and collision detection during the radiotherapy delivery phase. However, issues associated with such applications are related to material and surface reflections generating an offset in depth measurements, especially during gantry motion. This phenomenon appears in particular when the collimator surface is observed by the camera, resulting in erroneous depth measurements, not only in the Kinect surfaces themselves, but also as a large peak when extracting a 1D respiratory signal from these data. In this paper, we proposed filtering techniques to reduce the noise effect in the Kinect-based 1D respiratory signal, using a trend removal filter, and in associated 2D surfaces, using a temporal median filter. The filtering process was validated using a phantom, in order to simulate a patient undergoing radiotherapy treatment while having the ground truth. Our results indicate a better correlation between the reference respiratory signal and its corresponding filtered signal (correlation coefficient of 0.76) than that of the nonfiltered signal (correlation coefficient of 0.13). Furthermore, surface filtering results show a decrease in the mean square distance error (85%) between the reference and the measured point clouds. This work shows a significant noise compensation and surface restitution after surface filtering and therefore a potential use of the Kinect V2 camera for different radiotherapy-based applications, such as respiratory tracking and collision detection. © 2018 American Association of Physicists in Medicine.
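    The sketch below illustrates the temporal median filtering idea in Python: a per-pixel median over a short window of depth frames suppresses transient spikes such as reflection artefacts. The window length, frame size, and spike model are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np
from collections import deque

class TemporalMedianFilter:
    """Per-pixel temporal median over the last `window` depth frames,
    which suppresses transient spikes such as reflection artefacts."""
    def __init__(self, window=5):
        self.frames = deque(maxlen=window)

    def apply(self, depth_frame):
        self.frames.append(np.asarray(depth_frame, dtype=np.float32))
        return np.median(np.stack(self.frames, axis=0), axis=0)

if __name__ == "__main__":
    filt = TemporalMedianFilter(window=5)
    rng = np.random.default_rng(0)
    smoothed = None
    for t in range(10):
        frame = np.full((424, 512), 1000.0) + rng.normal(0, 2, (424, 512))
        if t == 7:                      # simulate a reflection spike on one frame
            frame[200:220, 250:270] += 400.0
        smoothed = filt.apply(frame)
    print(f"max deviation after filtering: {np.abs(smoothed - 1000).max():.1f} mm")
```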

  1. Exploring the feasibility and acceptability of sensor monitoring of gait and falls in the homes of persons with multiple sclerosis.

    PubMed

    Newland, Pamela; Wagner, Joanne M; Salter, Amber; Thomas, Florian P; Skubic, Marjorie; Rantz, Marilyn

    2016-09-01

    Variability in gait parameters and falls are problems for persons with MS and have not been adequately captured in the home. Our goal was to explore the feasibility and acceptability of monitoring gait and falls in the homes of persons with MS over a period of 30 days. To test the feasibility of measuring gait and falls for 30 days in the homes of persons with MS, the spatiotemporal gait parameters of stride length, stride time, and gait speed were compared. A 3D infrared depth imaging system has been developed to objectively measure gait and falls in the home environment. Participants also completed a 16-foot GaitRite electronic pathway walk to validate spatiotemporal parameters of gait (gait speed (cm/s), stride length (cm), and gait cycle time (s)) during the timed 25-foot walking test (T25FWT). We also documented barriers to the feasibility of installing the in-home sensors for these participants. The results of the study suggest that the Kinect sensor may be used as an alternative device to measure gait for persons with MS, depending on the desired accuracy level. Ultimately, using in-home sensors to analyze gait parameters in real time is feasible and could lead to better analysis of gait in persons with MS. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. The Effectiveness of the Game-Based Learning System for the Improvement of American Sign Language Using Kinect

    ERIC Educational Resources Information Center

    Kamnardsiri, Teerawat; Hongsit, Ler-on; Khuwuthyakorn, Pattaraporn; Wongta, Noppon

    2017-01-01

    This paper investigated students' achievement for learning American Sign Language (ASL), using two different methods. There were two groups of samples. The first experimental group (Group A) was the game-based learning for ASL, using Kinect. The second control learning group (Group B) was the traditional face-to-face learning method, generally…

  3. Visualizing vascular structures in virtual environments

    NASA Astrophysics Data System (ADS)

    Wischgoll, Thomas

    2013-01-01

    In order to learn more about the cause of coronary heart diseases and develop diagnostic tools, the extraction and visualization of vascular structures from volumetric scans for further analysis is an important step. By determining a geometric representation of the vasculature, the geometry can be inspected and additional quantitative data calculated and incorporated into the visualization of the vasculature. To provide a more user-friendly visualization tool, virtual environment paradigms can be utilized. This paper describes techniques for interactive rendering of large-scale vascular structures within virtual environments. This can be applied to almost any virtual environment configuration, such as CAVE-type displays. Specifically, the tools presented in this paper were tested on a Barco I-Space and a large 62x108 inch passive projection screen with a Kinect sensor for user tracking.

  4. Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.

    PubMed

    Kim, Soohwan; Kim, Jonghyuk

    2013-10-01

    Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most of the applications aim at visualization. In this paper, we propose a novel method of building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. Particularly, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from a high computational complexity of O(n³) + O(n²m), where n and m are the numbers of training and test data, respectively, limiting its use for large-scale mapping with huge training data, which is common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local cluster. In addition, we consider Gaussian processes as implicit functions, and thus extract iso-surfaces from the scalar fields (the continuous occupancy maps) using marching cubes. By doing that, we are able to build two types of map representations within a single framework of Gaussian processes. Experimental results with 2-D simulated data show that the accuracy of our approximate method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate our method with 3-D real data to show its feasibility in large-scale environments.
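    A compact Python sketch of the local Gaussian process idea follows: training points are partitioned with k-means and a separate GP classifier is fitted per cluster, so each query is answered by a small local model instead of one global GP. The clustering method, kernel, and toy disc-shaped environment are illustrative assumptions, not the paper's coarse-to-fine scheme.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def local_gp_occupancy(train_xy, train_occ, query_xy, n_clusters=4):
    """Continuous occupancy mapping with local Gaussian processes: training
    points are partitioned by k-means and a GP classifier is fitted per
    cluster; each query is answered by the GP of its nearest cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(train_xy)
    models = []
    for c in range(n_clusters):
        mask = km.labels_ == c
        labels = train_occ[mask]
        if labels.min() == labels.max():          # degenerate single-class cluster
            models.append(float(labels[0]))
        else:
            gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=0.3))
            models.append(gp.fit(train_xy[mask], labels))
    probs = np.empty(len(query_xy))
    assignment = km.predict(query_xy)
    for c, model in enumerate(models):
        mask = assignment == c
        if not mask.any():
            continue
        if isinstance(model, float):
            probs[mask] = model
        else:
            probs[mask] = model.predict_proba(query_xy[mask])[:, 1]
    return probs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, size=(400, 2))                      # simulated 2-D scan points
    y = (np.linalg.norm(X - 0.5, axis=1) < 0.3).astype(int)   # occupied inside a disc
    queries = np.array([[0.5, 0.5], [0.05, 0.05]])
    print(local_gp_occupancy(X, y, queries))                  # high prob, then low prob
```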

  5. Research into Kinect/Inertial Measurement Units Based on Indoor Robots.

    PubMed

    Li, Huixia; Wen, Xi; Guo, Hang; Yu, Min

    2018-03-12

    As indoor mobile navigation suffers from low positioning accuracy and accumulation error, we carried out research into an integrated location system for a robot based on Kinect and an Inertial Measurement Unit (IMU). In this paper, the close-range stereo images are used to calculate the attitude information and the translation amount of the adjacent positions of the robot by means of the absolute orientation algorithm, for improving the calculation accuracy of the robot's movement. Relying on the Kinect visual measurement and the strap-down IMU devices, we also use Kalman filtering to obtain the errors of the position and attitude outputs, in order to seek the optimal estimation and correct the errors. Experimental results show that the proposed method is able to improve the positioning accuracy and stability of the indoor mobile robot.

  6. Fusion of Kinect depth data with trifocal disparity estimation for near real-time high quality depth maps generation

    NASA Astrophysics Data System (ADS)

    Boisson, Guillaume; Kerbiriou, Paul; Drazic, Valter; Bureller, Olivier; Sabater, Neus; Schubert, Arno

    2014-03-01

    Generating depth maps along with video streams is valuable for Cinema and Television production. Thanks to the improvements of depth acquisition systems, the challenge of fusion between depth sensing and disparity estimation is widely investigated in computer vision. This paper presents a new framework for generating depth maps from a rig made of a professional camera with two satellite cameras and a Kinect device. A new disparity-based calibration method is proposed so that registered Kinect depth samples become perfectly consistent with disparities estimated between rectified views. Also, a new hierarchical fusion approach is proposed for combining, on the fly, depth sensing and disparity estimation in order to circumvent their respective weaknesses. Depth is determined by minimizing a global energy criterion that takes into account the matching reliability and the consistency with the Kinect input. The depth maps thus generated are relevant both in uniform and textured areas, without holes due to occlusions or structured light shadows. Our GPU implementation reaches 20 fps for generating quarter-pel accurate HD720p depth maps along with the main view, which is close to real-time performance for video applications. The estimated depth is high quality and suitable for 3D reconstruction or virtual view synthesis.

  7. High-Resolution Surface Reconstruction from Imagery for Close Range Cultural Heritage Applications

    NASA Astrophysics Data System (ADS)

    Wenzel, K.; Abdel-Wahab, M.; Cefalu, A.; Fritsch, D.

    2012-07-01

    The recording of high resolution point clouds with sub-mm resolution is a demanding and cost-intensive task, especially with current equipment like handheld laser scanners. We present an image-based approach, in which techniques of image matching and dense surface reconstruction are combined with a compact and affordable rig of off-the-shelf industry cameras. Such cameras provide high spatial resolution with low radiometric noise, which enables a one-shot solution and thus an efficient data acquisition while satisfying high accuracy requirements. However, the largest drawback of image-based solutions is often the acquisition of surfaces with low texture, where the image matching process might fail. Thus, an additional structured light projector is employed, represented here by the pseudo-random pattern projector of the Microsoft Kinect. Its strong infrared laser projects speckles of different sizes. By using dense image matching techniques on the acquired images, a 3D point can be derived for almost every pixel. The use of multiple cameras enables the acquisition of a high resolution point cloud with high accuracy for each shot. For the proposed system, up to 3.5 million 3D points with sub-mm accuracy can be derived per shot. The registration of multiple shots is performed by Structure and Motion reconstruction techniques, where feature points are used to derive the camera positions and rotations automatically without initial information.

  8. Using a depth-sensing infrared camera system to access and manipulate medical imaging from within the sterile operating field.

    PubMed

    Strickland, Matt; Tremaine, Jamie; Brigley, Greg; Law, Calvin

    2013-06-01

    As surgical procedures become increasingly dependent on equipment and imaging, the need for sterile members of the surgical team to have unimpeded access to the nonsterile technology in their operating room (OR) is of growing importance. To our knowledge, our team is the first to use an inexpensive infrared depth-sensing camera (a component of the Microsoft Kinect) and software developed in-house to give surgeons a touchless, gestural interface with which to navigate their picture archiving and communication systems intraoperatively. The system was designed and developed with feedback from surgeons and OR personnel and with consideration of the principles of aseptic technique and gestural controls in mind. Simulation was used for basic validation before trialing in a pilot series of 6 hepatobiliary-pancreatic surgeries. The interface was used extensively in 2 laparoscopic and 4 open procedures. Surgeons primarily used the system for anatomic correlation, real-time comparison of intraoperative ultrasound with preoperative computed tomography and magnetic resonance imaging scans, and for teaching residents and fellows. The system worked well in a wide range of lighting conditions and procedures. It led to a perceived increase in the use of intraoperative image consultation. Further research should be focused on investigating the usefulness of touchless gestural interfaces in different types of surgical procedures and their effects on operative time.

  9. Free-viewpoint video of human actors using multiple handheld Kinects.

    PubMed

    Ye, Genzhi; Liu, Yebin; Deng, Yue; Hasler, Nils; Ji, Xiangyang; Dai, Qionghai; Theobalt, Christian

    2013-10-01

    We present an algorithm for creating free-viewpoint video of interacting humans using three handheld Kinect cameras. Our method reconstructs deforming surface geometry and temporal varying texture of humans through estimation of human poses and camera poses for every time step of the RGBZ video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem, which optimizes the alignment of RGBZ data from all cameras, as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Finally, texture recovery is achieved through jointly optimization on spatio-temporal RGB data using matrix completion. As opposed to previous methods, our algorithm succeeds on free-viewpoint video of human actors under general uncontrolled indoor scenes with potentially dynamic background, and it succeeds even if the cameras are moving.

  10. Using Xbox kinect motion capture technology to improve clinical rehabilitation outcomes for balance and cardiovascular health in an individual with chronic TBI.

    PubMed

    Chanpimol, Shane; Seamon, Bryant; Hernandez, Haniel; Harris-Love, Michael; Blackman, Marc R

    2017-01-01

    Motion capture virtual reality-based rehabilitation has become more common. However, therapists face challenges to the implementation of virtual reality (VR) in clinical settings. Use of motion capture technology such as the Xbox Kinect may provide a useful rehabilitation tool for the treatment of postural instability and cardiovascular deconditioning in individuals with chronic severe traumatic brain injury (TBI). The primary purpose of this study was to evaluate the effects of a Kinect-based VR intervention using commercially available motion capture games on balance outcomes for an individual with chronic TBI. The secondary purpose was to assess the feasibility of this intervention for eliciting cardiovascular adaptations. A single system experimental design (n = 1) was utilized, which included baseline, intervention, and retention phases. Repeated measures were used to evaluate the effects of an 8-week supervised exercise intervention using two Xbox One Kinect games. Balance was characterized using the dynamic gait index (DGI), functional reach test (FRT), and Limits of Stability (LOS) test on the NeuroCom Balance Master. The LOS assesses end-point excursion (EPE), maximal excursion (MXE), and directional control (DCL) during weight-shifting tasks. Cardiovascular and activity measures were characterized by heart rate at the end of exercise (HRe), total gameplay time (TAT), and time spent in a therapeutic heart rate (TTR) during the Kinect intervention. Chi-square and ANOVA testing were used to analyze the data. Dynamic balance, characterized by the DGI, increased during the intervention phase, χ²(1, N = 12) = 12, p = .001. Static balance, characterized by the FRT, showed no significant changes. The EPE increased during the intervention phase in the backward direction, χ²(1, N = 12) = 5.6, p = .02, and notable improvements of DCL were demonstrated in all directions. HRe (F(2,174) = 29.65, p < .001) and time in a TTR (F(2,12) = 4.19, p = .04) decreased over the course of the intervention phase. Use of a supervised Kinect-based program that incorporated commercial games improved dynamic balance for an individual post severe TBI. Additionally, moderate cardiovascular activity was achieved through motion capture gaming. Further studies appear warranted to determine the potential therapeutic utility of commercial VR games in this patient population. Clinicaltrial.gov ID - NCT02889289.

  11. IoT-based flood embankments monitoring system

    NASA Astrophysics Data System (ADS)

    Michta, E.; Szulim, R.; Sojka-Piotrowska, A.; Piotrowski, K.

    2017-08-01

    In the paper a concept of a flood embankment monitoring system based on the Internet of Things approach and Cloud Computing technologies will be presented. The proposed system consists of sensors, IoT nodes, Gateways and Cloud-based services. The nodes communicate with the sensors measuring certain physical parameters describing the state of the embankments and communicate with the Gateways. Gateways are specialized active devices responsible for direct communication with the nodes, collecting sensor data, preprocessing the data, applying local rules and communicating with the Cloud services using communication APIs delivered by cloud service providers. The architecture of all the system components will be proposed, consisting of a description of the IoT devices' functionalities, their communication model, and software modules and services based on a public cloud computing platform such as Microsoft Azure. The most important aspects of maintaining the communication in a secure way will be shown.

  12. Real-time detecting and tracking ball with OpenCV and Kinect

    NASA Astrophysics Data System (ADS)

    Osiecki, Tomasz; Jankowski, Stanislaw

    2016-09-01

    This paper presents a way to detect and track a ball using OpenCV and the Kinect. Object and people recognition and tracking are increasingly popular topics nowadays. The described solution makes it possible to detect the ball based on a range set by the user and to capture information about the ball's position in three dimensions. This information can be stored on the computer and used, for example, to display the trajectory of the ball.
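    A minimal OpenCV sketch in the spirit of the described solution is shown below: pixels within a user-defined HSV colour range are masked and the largest contour is taken as the ball. The colour thresholds and the synthetic test frame are assumptions for illustration. In a Kinect setup, the detected (x, y) pixel would typically be combined with the depth value at that pixel to obtain the ball's position in three dimensions.

```python
import cv2
import numpy as np

def detect_ball(bgr_frame, lower_hsv, upper_hsv, min_radius=5):
    """Detect a ball within a user-defined colour range in a BGR frame and
    return its image position and radius, or None if nothing is found."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    mask = cv2.erode(mask, None, iterations=2)      # remove speckle noise
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)
    return (int(x), int(y), int(radius)) if radius >= min_radius else None

if __name__ == "__main__":
    # Synthetic test frame: a green disc on a dark background.
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    cv2.circle(frame, (320, 240), 30, (0, 200, 0), thickness=-1)
    hit = detect_ball(frame, lower_hsv=np.array([40, 80, 80]),
                      upper_hsv=np.array([80, 255, 255]))
    print("ball found at:", hit)
```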

  13. Effects of Training Using Video Games on the Muscle Strength, Muscle Tone, and Activities of Daily Living of Chronic Stroke Patients

    PubMed Central

    Lee, GyuChang

    2013-01-01

    [Purpose] The purpose of this study was to investigate the effects of training using video games played on the Xbox Kinect on the muscle strength, muscle tone, and activities of daily living of post-stroke patients. [Subjects] Fourteen stroke patients were recruited. They were randomly allocated into two groups; the experimental group (n=7) and the control group (n=7). [Methods] The experimental group performed training using video games played on the Xbox Kinect together with conventional occupational therapy for 6 weeks (1 hour/day, 3 days/week), and the control group received conventional occupational therapy only for 6 weeks (30 min/day, 3 days/week). Before and after the intervention, the participants were measured for muscle strength, muscle tone, and performance of activities of daily living. [Results] There were significant differences pre- and post-test in muscle strength of the upper extremities, except the wrist, and performance of activities of daily living in the experimental group. There were no significant differences between the two groups at post-test. [Conclusion] The training using video games played on the Xbox Kinect had a positive effect on the motor function and performance of activities of daily living. This study showed that training using video games played on the Xbox Kinect may be an effective intervention for the rehabilitation of stroke patients. PMID:24259810

  14. An approach of point cloud denoising based on improved bilateral filtering

    NASA Astrophysics Data System (ADS)

    Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin

    2018-04-01

    An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm, which is employed to process the depth images. First, the mobile platform can move flexibly and its control interface is convenient. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process the depth images obtained by the Kinect sensor. The results show that the noise removal effect is improved compared with bilateral filtering. In offline conditions, the color images and processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method reduces the processing time of the depth images and improves the quality of the resulting point clouds.
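    As a point of reference, the sketch below applies OpenCV's stock bilateral filter to a synthetic Kinect-like depth frame; the paper's local bilateral filtering (LBF) variant restricts the computation to local windows for speed, which is not reproduced here. All parameter values are illustrative.

```python
import cv2
import numpy as np

def denoise_depth(depth_mm, d=5, sigma_color=30.0, sigma_space=5.0):
    """Edge-preserving smoothing of a Kinect depth frame with OpenCV's
    standard bilateral filter (a stand-in for the paper's LBF variant)."""
    depth = depth_mm.astype(np.float32)
    invalid = depth <= 0                        # Kinect reports 0 for no-return pixels
    if invalid.any():
        depth[invalid] = np.median(depth[~invalid])
    smoothed = cv2.bilateralFilter(depth, d, sigma_color, sigma_space)
    smoothed[invalid] = 0                       # keep invalid pixels marked as invalid
    return smoothed

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    depth = np.full((424, 512), 1500.0, dtype=np.float32)
    depth[:, 256:] = 2000.0                     # a depth edge (e.g. wall behind a table)
    depth += rng.normal(0, 8, depth.shape)      # sensor noise
    out = denoise_depth(depth)
    print(f"noise std before: {depth[:, :250].std():.1f} mm, "
          f"after: {out[:, :250].std():.1f} mm")
```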

  15. Live animal assessments of rump fat and muscle score in Angus cows and steers using 3-dimensional imaging.

    PubMed

    McPhee, M J; Walmsley, B J; Skinner, B; Littler, B; Siddell, J P; Cafe, L M; Wilkins, J F; Oddy, V H; Alempijevic, A

    2017-04-01

    The objective of this study was to develop a proof of concept for using off-the-shelf Red Green Blue-Depth (RGB-D) Microsoft Kinect cameras to objectively assess P8 rump fat (P8 fat; mm) and muscle score (MS) traits in Angus cows and steers. Data from low and high muscled cattle (156 cows and 79 steers) were collected at multiple locations and time points. The following steps were required for the 3-dimensional (3D) image data and subsequent machine learning techniques to learn the traits: 1) reduce the high dimensionality of the point cloud data by extracting features from the input signals to produce a compact and representative feature vector, 2) perform global optimization of the signatures using machine learning algorithms and a parallel genetic algorithm, and 3) train a sensor model using regression-supervised learning techniques on the ultrasound P8 fat and the classified learning techniques for the assessed MS for each animal in the data set. The correlation of estimating hip height (cm) between visually measured and assessed 3D data from RGB-D cameras on cows and steers was 0.75 and 0.90, respectively. The supervised machine learning and global optimization approach correctly classified MS (mean [SD]) 80% [4.7%] and 83% [6.6%] for cows and steers, respectively. Kappa tests of MS were 0.74 and 0.79 in cows and steers, respectively, indicating substantial agreement between visual assessment and the learning approaches of RGB-D camera images. A stratified 10-fold cross-validation for P8 fat did not find any differences in the mean bias ( = 0.62 and = 0.42 for cows and steers, respectively). The root mean square error of P8 fat was 1.54 and 1.00 mm for cows and steers, respectively. Additional data is required to strengthen the capacity of machine learning to estimate measured P8 fat and assessed MS. Data sets for and continental cattle are also required to broaden the use of 3D cameras to assess cattle. The results demonstrate the importance of capturing curvature as a form of representing body shape. A data-driven model from shape to trait has established a proof of concept using optimized machine learning techniques to assess P8 fat and MS in Angus cows and steers.

  16. P-Soccer: Soccer Games Application using Kinect

    NASA Astrophysics Data System (ADS)

    Nasir, Mohamad Fahim Mohamed; Suparjoh, Suriawati; Razali, Nazim; Mustapha, Aida

    2018-05-01

    This paper presents a soccer game application called P-Soccer that uses Kinect as the interaction medium between users and the game characters. P-Soccer focuses on training penalty kicks with one character who is taking the kick. This game is developed based on the Game Development Life Cycle (GDLC) methodology. Results for alpha and beta testing showed that the target users are satisfied with overall game design and theme as well as the interactivity with the main character in the game.

  17. Development and validation of a sensor- and expert model-based training system for laparoscopic surgery: the iSurgeon.

    PubMed

    Kowalewski, Karl-Friedrich; Hendrie, Jonathan D; Schmidt, Mona W; Garrow, Carly R; Bruckner, Thomas; Proctor, Tanja; Paul, Sai; Adigüzel, Davud; Bodenstedt, Sebastian; Erben, Andreas; Kenngott, Hannes; Erben, Young; Speidel, Stefanie; Müller-Stich, Beat P; Nickel, Felix

    2017-05-01

    Training and assessment outside of the operating room is crucial for minimally invasive surgery due to steep learning curves. Thus, we have developed and validated the sensor- and expert model-based laparoscopic training system, the iSurgeon. Participants of different experience levels (novice, intermediate, expert) performed four standardized laparoscopic knots. Instruments and surgeons' joint motions were tracked with an NDI Polaris camera and Microsoft Kinect v1. With frame-by-frame image analysis, the key steps of suturing and knot tying were identified and registered with motion data. Construct validity, concurrent validity, and test-retest reliability were analyzed. The Objective Structured Assessment of Technical Skills (OSATS) was used as the gold standard for concurrent validity. The system showed construct validity by discrimination between experience levels by parameters such as time (novice = 442.9 ± 238.5 s; intermediate = 190.1 ± 50.3 s; expert = 115.1 ± 29.1 s; p < 0.001), total path length (novice = 18,817 ± 10318 mm; intermediate = 9995 ± 3286 mm; expert = 7265 ± 2232 mm; p < 0.001), average speed (novice = 42.9 ± 8.3 mm/s; intermediate = 52.7 ± 11.2 mm/s; expert = 63.6 ± 12.9 mm/s; p < 0.001), angular path (novice = 20,573 ± 12,611°; intermediate = 8652 ± 2692°; expert = 5654 ± 1746°; p < 0.001), number of movements (novice = 2197 ± 1405; intermediate = 987 ± 367; expert = 743 ± 238; p < 0.001), number of movements per second (novice = 5.0 ± 1.4; intermediate = 5.2 ± 1.5; expert = 6.6 ± 1.6; p = 0.025), and joint angle range (for different axes and joints all p < 0.001). Concurrent validity of OSATS and iSurgeon parameters was established. Test-retest reliability was given for 7 out of 8 parameters. The key steps "wrapping the thread around the instrument" and "needle positioning" were most difficult to learn. Validity and reliability of the self-developed sensor-and expert model-based laparoscopic training system "iSurgeon" were established. Using multiple parameters proved more reliable than single metric parameters. Wrapping of the needle around the thread and needle positioning were identified as difficult key steps for laparoscopic suturing and knot tying. The iSurgeon could generate automated real-time feedback based on expert models which may result in shorter learning curves for laparoscopic tasks. Our next steps will be the implementation and evaluation of full procedural training in an experimental model.

  18. Visualization of Concrete Slump Flow Using the Kinect Sensor

    PubMed Central

    Park, Minbeom

    2018-01-01

    Workability is regarded as one of the important parameters of high-performance concrete and monitoring it is essential in concrete quality management at construction sites. The conventional workability test methods are basically based on length and time measured by a ruler and a stopwatch and, as such, inevitably involve human error. In this paper, we propose a 4D slump test method based on digital measurement and data processing as a novel concrete workability test. After acquiring the dynamically changing 3D surface of fresh concrete using a 3D depth sensor during the slump flow test, the stream images are processed with the proposed 4D slump processing algorithm and the results are compressed into a single 4D slump image. This image basically represents the dynamically spreading cross-section of fresh concrete along the time axis. From the 4D slump image, it is possible to determine the slump flow diameter, slump flow time, and slump height at any location simultaneously. The proposed 4D slump test is expected to stimulate research related to concrete flow simulation and concrete rheology by providing spatiotemporal measurement data of concrete flow. PMID:29510510

  19. Visualization of Concrete Slump Flow Using the Kinect Sensor.

    PubMed

    Kim, Jung-Hoon; Park, Minbeom

    2018-03-03

    Workability is regarded as one of the important parameters of high-performance concrete and monitoring it is essential in concrete quality management at construction sites. The conventional workability test methods are basically based on length and time measured by a ruler and a stopwatch and, as such, inevitably involve human error. In this paper, we propose a 4D slump test method based on digital measurement and data processing as a novel concrete workability test. After acquiring the dynamically changing 3D surface of fresh concrete using a 3D depth sensor during the slump flow test, the stream images are processed with the proposed 4D slump processing algorithm and the results are compressed into a single 4D slump image. This image basically represents the dynamically spreading cross-section of fresh concrete along the time axis. From the 4D slump image, it is possible to determine the slump flow diameter, slump flow time, and slump height at any location simultaneously. The proposed 4D slump test is expected to stimulate research related to concrete flow simulation and concrete rheology by providing spatiotemporal measurement data of concrete flow.
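
    The central idea of the 4D slump image - collapsing a stream of depth frames into a single time-versus-radius picture of the spreading concrete - can be sketched as below. This is a hedged reconstruction from the abstract, not the authors' algorithm; the cone-centre pixel and the binning parameters are assumptions.

    ```python
    import numpy as np

    def slump_image(depth_frames, center, n_radii=200, max_radius_px=300):
        """Collapse a stream of depth frames into one time-vs-radius image.

        depth_frames : iterable of (H, W) depth maps of the spreading concrete.
        center       : (row, col) pixel of the slump-cone axis (assumed known).
        Each row of the result is the azimuthally averaged surface profile of one
        frame, so the stacked image shows how the cross-section evolves over time.
        """
        cy, cx = center
        rows = []
        for frame in depth_frames:
            h, w = frame.shape
            yy, xx = np.mgrid[0:h, 0:w]
            r = np.hypot(yy - cy, xx - cx)              # radial distance of each pixel
            bins = np.linspace(0, max_radius_px, n_radii + 1)
            idx = np.digitize(r.ravel(), bins) - 1
            vals = frame.ravel().astype(float)
            profile = np.array([
                vals[idx == k].mean() if np.any(idx == k) else np.nan
                for k in range(n_radii)
            ])
            rows.append(profile)
        return np.vstack(rows)   # shape: (n_frames, n_radii)
    ```

    From such an image, the slump flow diameter and flow time correspond to the extent and rate of the spreading front along the radius and time axes.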

  20. Automatic Recognition of Indoor Navigation Elements from Kinect Point Clouds

    NASA Astrophysics Data System (ADS)

    Zeng, L.; Kang, Z.

    2017-09-01

    This paper automatically recognizes the navigation elements defined by the IndoorGML data standard: doors, stairways and walls. The data used are indoor 3D point clouds collected by a Kinect v2, launched in 2014, by means of ORB-SLAM. Compared with lidar, the Kinect is cheaper and more convenient, but the point clouds also suffer from noise, registration errors and large data volume. Hence, we adopt a shape descriptor - the histogram of distances between two randomly chosen points proposed by Osada - merged with other descriptors, in conjunction with a random forest classifier, to recognize the navigation elements (door, stairway and wall) from Kinect point clouds. This research acquires navigation elements and their 3D location information from each single data frame through segmentation of point clouds, boundary extraction, feature calculation and classification. Finally, this paper utilizes the acquired navigation elements and their information to generate the state data of the indoor navigation module automatically. The experimental results demonstrate a high recognition accuracy of the proposed method.
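
    The descriptor-plus-classifier pipeline described above can be sketched in a few lines. The code computes an Osada-style D2 histogram (distances between randomly sampled point pairs) for each segmented cloud and feeds it to a random forest; the bin count, pair count and training-data layout are illustrative assumptions, not the authors' settings.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def d2_descriptor(points, n_pairs=10000, n_bins=64, max_dist=3.0):
        """Osada-style D2 descriptor: histogram of distances (metres) between
        randomly chosen pairs of points in a segmented cloud of shape (N, 3)."""
        rng = np.random.default_rng(0)
        i = rng.integers(0, len(points), n_pairs)
        j = rng.integers(0, len(points), n_pairs)
        d = np.linalg.norm(points[i] - points[j], axis=1)
        hist, _ = np.histogram(d, bins=n_bins, range=(0, max_dist), density=True)
        return hist

    def train_classifier(segments):
        """segments: list of (points_array, label) pairs, e.g. (cloud, "door").
        This hypothetical layout stands in for the segmented training data."""
        X = np.array([d2_descriptor(p) for p, _ in segments])
        y = [label for _, label in segments]
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X, y)
        return clf
    ```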

  1. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling

    PubMed Central

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-01-01

    RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and depth measurement errors that increase with distance from the sensor. In this paper, we present a novel approach to geometrically integrate the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from the depth images. First, a precise calibration for RGB-D sensors is introduced. In addition to the calibration of the internal and external parameters of both the IR camera and the RGB camera, the relative pose between the RGB camera and the IR camera is also calibrated. Second, to ensure the accuracy of the RGB image poses, a refined false feature match rejection method is introduced by combining the depth information and the initial camera poses between frames of the RGB-D sensor. Then, a global optimization model is used to improve the accuracy of the camera poses, decreasing the inconsistencies between the depth frames in advance. In order to eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity problem encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. Then, the proposed method is examined by tests with two sets of datasets collected in both outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method. PMID:27690028

  2. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling.

    PubMed

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-09-27

    RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and depth measurement errors that increase with distance from the sensor. In this paper, we present a novel approach to geometrically integrate the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from the depth images. First, a precise calibration for RGB-D sensors is introduced. In addition to the calibration of the internal and external parameters of both the IR camera and the RGB camera, the relative pose between the RGB camera and the IR camera is also calibrated. Second, to ensure the accuracy of the RGB image poses, a refined false feature match rejection method is introduced by combining the depth information and the initial camera poses between frames of the RGB-D sensor. Then, a global optimization model is used to improve the accuracy of the camera poses, decreasing the inconsistencies between the depth frames in advance. In order to eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity problem encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. Then, the proposed method is examined by tests with two sets of datasets collected in both outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method.
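
    The registration of the RGB scene to the depth scene, including the scale ambiguity, can be illustrated with a standard least-squares similarity fit over matched 3D points (Umeyama's method). This is a generic sketch of that building block under the assumption of known correspondences, not the authors' implementation.

    ```python
    import numpy as np

    def similarity_transform(src, dst):
        """Least-squares similarity transform (scale s, rotation R, translation t)
        such that dst ≈ s * R @ src_i + t for matched 3D points.
        src, dst: (N, 3) arrays of corresponding points."""
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        mu_s, mu_d = src.mean(0), dst.mean(0)
        sc, dc = src - mu_s, dst - mu_d                  # centre both clouds
        cov = dc.T @ sc / len(src)                       # cross-covariance
        U, D, Vt = np.linalg.svd(cov)
        S = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:     # avoid reflections
            S[2, 2] = -1.0
        R = U @ S @ Vt
        s = np.trace(np.diag(D) @ S) / sc.var(axis=0).sum()
        t = mu_d - s * R @ mu_s
        return s, R, t
    ```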

  3. A study on a robot chasing a human using Kinect while identifying walking parameters using the back view

    NASA Astrophysics Data System (ADS)

    Konno, S.; Mita, A.

    2014-03-01

    Recently, the demand for building spaces that respond to the increase of single aged households and the diversification of lifestyles is growing. The smart house is one example, but such spaces are difficult to change and renovate. Therefore, we propose the Biofied Building. In a Biofied Building, a mobile robot gathers conscious and unconscious information about residents, aiming to make the building space more secure and comfortable by realizing interaction between residents and the building. Walking parameters are among the most important pieces of unconscious information about residents. They are an indicator of the autonomy of the elderly, and changes in stride length and walking speed may be predictive of a future fall or cognitive impairment. By observing residents' walking and informing them of their walking state, such dangers can be forestalled, helping them to live more securely and autonomously. Many methods to estimate walking parameters have been studied; well-known ones use accelerometers or a motion capture system. The walking parameters estimated by these methods are highly precise, but the sensors must be attached to the human body, which can alter the original gait, and some elderly people find them intrusive. In this work, a Kinect, which can acquire information about a person without contact, was mounted on the mobile robot. Stride time, stride length, and walking speed were estimated from the back view of a person by following him or her. Evaluation was done for walks of 10 m, 5 m, 4 m, and 3 m. The results show that the proposed system can estimate walking parameters for walks longer than 3 m.
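
    A minimal sketch of how stride time, stride length, and walking speed might be estimated from Kinect skeleton ankle trajectories is given below. Detecting steps from peaks of the inter-ankle distance is a common simplification and an assumption here, not the authors' exact procedure.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def walking_parameters(left_ankle, right_ankle, fs=30.0):
        """Estimate gait parameters from Kinect ankle trajectories.

        left_ankle, right_ankle : (N, 3) arrays in metres, sampled at fs Hz.
        Heel strikes are approximated by peaks of the inter-ankle distance."""
        sep = np.linalg.norm(left_ankle - right_ankle, axis=1)
        peaks, _ = find_peaks(sep, distance=int(0.4 * fs))   # successive steps
        if len(peaks) < 3:
            raise ValueError("not enough steps detected")
        step_times = np.diff(peaks) / fs
        stride_time = 2 * step_times.mean()                  # two steps per stride
        # Walking speed from mid-ankle horizontal displacement over the walk.
        mid = 0.5 * (left_ankle + right_ankle)
        dist = np.linalg.norm(mid[peaks[-1], [0, 2]] - mid[peaks[0], [0, 2]])
        speed = dist / ((peaks[-1] - peaks[0]) / fs)
        stride_length = speed * stride_time
        return stride_time, stride_length, speed
    ```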

  4. Digitization and Visualization of Greenhouse Tomato Plants in Indoor Environments

    PubMed Central

    Li, Dawei; Xu, Lihong; Tan, Chengxiang; Goodman, Erik D.; Fu, Daichang; Xin, Longjiao

    2015-01-01

    This paper is concerned with the digitization and visualization of potted greenhouse tomato plants in indoor environments. For the digitization, an inexpensive and efficient commercial stereo sensor - a Microsoft Kinect - is used to separate visual information about tomato plants from the background. Based on the Kinect, a 4-step approach that can automatically detect and segment stems of tomato plants is proposed, including acquisition and preprocessing of image data, detection of stem segments, removal of false detections, and automatic segmentation of stem segments. Correctly segmented texture samples including stems and leaves are then stored in a texture database for further usage. Two types of tomato plants - the cherry tomato variety and the ordinary variety - are studied in this paper. The stem detection accuracy (under a simulated greenhouse environment) for the cherry tomato variety is 98.4% at a true positive rate of 78.0%, whereas the detection accuracy for the ordinary variety is 94.5% at a true positive rate of 72.5%. In visualization, we combine L-system theory and digitized tomato organ texture data to build realistic 3D virtual tomato plant models that are capable of exhibiting various structures and poses in real time. In particular, we also simulate the growth process on virtual tomato plants by exerting controls on two L-systems via parameters concerning the age and the form of lateral branches. This research may provide useful visual cues for improving intelligent greenhouse control systems and meanwhile may facilitate research on artificial organisms. PMID:25675284

  5. Point-of-care-testing of standing posture with Wii balance board and Microsoft Kinect during transcranial direct current stimulation: a feasibility study.

    PubMed

    Dutta, Arindam; Chugh, Sanjay; Banerjee, Alakananda; Dutta, Anirban

    2014-01-01

    Non-invasive brain stimulation (NIBS) is a promising tool for facilitating motor function. NIBS therapy in conjunction with training using postural feedback may facilitate physical rehabilitation following posture disorders (e.g., Pusher Syndrome). The objectives of this study were: 1) to develop a low-cost point-of-care-testing (POCT) system for standing posture, and 2) to investigate the effects of anodal tDCS on functional reach tasks using the POCT system. Ten community-dwelling elderly (age >50 years) subjects evaluated the POCT system for standing posture during functional reach tasks, where their balance score on the Berg Balance Scale (BBS) was compared with that from Center-of-Mass (CoM) - Center-of-Pressure (CoP) posturography. Then, in a single-blind, sham-controlled study, five healthy right-leg dominant subjects (age: 26.4 ± 5.3 yrs) were evaluated using the POCT system under two conditions - with anodal tDCS of primary motor representations of the right tibialis anterior muscle and with sham tDCS. The maximum CoP-CoM lean-angle was found to be well correlated with the BBS score in the elderly subjects. The anodal tDCS strongly (p = 0.0000) affected the maximum CoP excursions but not the return reaction time in healthy subjects. It was concluded that the CoM-CoP lean-line could be used for posture feedback and monitoring during tDCS therapy in conjunction with balance training exercises.

  6. Technological evaluation of gesture and speech interfaces for enabling dismounted soldier-robot dialogue

    NASA Astrophysics Data System (ADS)

    Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan

    2016-05-01

    With increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Gesture and speech semantically based spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. Performance and reliability of gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of experimental results demonstrated the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue based on the high classification accuracy and minimal training required to perform gesture commands.

  7. Comparing the Microsoft Kinect to a traditional mouse for adjusting the viewed tissue densities of three-dimensional anatomical structures

    NASA Astrophysics Data System (ADS)

    Juhnke, Bethany; Berron, Monica; Philip, Adriana; Williams, Jordan; Holub, Joseph; Winer, Eliot

    2013-03-01

    Advancements in medical image visualization in recent years have enabled three-dimensional (3D) medical images to be volume-rendered from magnetic resonance imaging (MRI) and computed tomography (CT) scans. Medical data is crucial for patient diagnosis and medical education, and analyzing these three-dimensional models rather than two-dimensional (2D) slices would enable more efficient analysis by surgeons and physicians, especially non-radiologists. An interaction device that is intuitive, robust, and easily learned is necessary to integrate 3D modeling software into the medical community. The keyboard and mouse configuration does not readily manipulate 3D models because these traditional interface devices function within two degrees of freedom, not the six degrees of freedom presented in three dimensions. Using a familiar, commercial-off-the-shelf (COTS) device for interaction would minimize training time and enable maximum usability with 3D medical images. Multiple techniques are available to manipulate 3D medical images and provide doctors more innovative ways of visualizing patient data. One such example is windowing. Windowing is used to adjust the viewed tissue density of digital medical data. A software platform available at the Virtual Reality Applications Center (VRAC), named Isis, was used to visualize and interact with the 3D representations of medical data. In this paper, we present the methodology and results of a user study that examined the usability of windowing 3D medical imaging using a Kinect™ device compared to a traditional mouse.
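
    Windowing itself is a simple intensity mapping. The sketch below shows the standard window centre/width transform applied to CT values in Hounsfield units; it illustrates the operation the compared interfaces adjust, and is not code from the Isis platform.

    ```python
    import numpy as np

    def apply_window(hu_image, center, width):
        """Map CT values (Hounsfield units) to 8-bit display grey levels using a
        window centre/width, e.g. center=40, width=400 for soft tissue."""
        lo, hi = center - width / 2.0, center + width / 2.0
        img = np.clip(hu_image, lo, hi)                  # saturate outside the window
        return ((img - lo) / (hi - lo) * 255.0).astype(np.uint8)
    ```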

  8. Humanoid assessing rehabilitative exercises.

    PubMed

    Simonov, M; Delconte, G

    2015-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "New Methodologies for Patients Rehabilitation". The article presents an approach in which the rehabilitative exercise prepared by a healthcare professional is encoded as formal knowledge and used by a humanoid robot to assist patients without involving other care actors. The main objective is the use of humanoids in rehabilitative care; an example is pulmonary rehabilitation in COPD patients. Another goal is an automated judgment functionality that determines how well the rehabilitation exercise matches the pre-programmed correct sequence. We use Aldebaran Robotics' NAO humanoid to set up the artificial cognitive application. The pre-programmed NAO prompts the elderly patient to undertake the humanoid-driven rehabilitation exercise, but it needs to evaluate the human actions against the correct template. The patient is observed through the NAO's cameras. We use the Microsoft Kinect SDK to extract the motion path from the humanoid's recorded video. We compare human- and humanoid-operated process sequences using Dynamic Time Warping (DTW) and test the prototype. This artificial cognitive software showcases the use of the DTW algorithm to enable humanoids to judge in near real-time the correctness of rehabilitative exercises performed by patients following the robot's indications. Better, sustainable rehabilitative care services in remote residential settings could be enabled by combining intelligent applications piloting humanoids with the DTW pattern matching algorithm applied at run time to compare humanoid- and human-operated process sequences. In turn, this would lower the need for human care.
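
    The DTW comparison between the humanoid-demonstrated template and the patient's Kinect-extracted motion can be sketched as follows, using per-frame feature vectors (e.g., joint angles). This is a textbook DTW implementation, not the authors' code.

    ```python
    import numpy as np

    def dtw_distance(seq_a, seq_b):
        """Dynamic Time Warping distance between two motion sequences of shapes
        (Na, D) and (Nb, D), e.g. joint-angle vectors per frame for the robot's
        template and the patient's performance."""
        na, nb = len(seq_a), len(seq_b)
        cost = np.full((na + 1, nb + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, na + 1):
            for j in range(1, nb + 1):
                d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])   # local distance
                cost[i, j] = d + min(cost[i - 1, j],              # insertion
                                     cost[i, j - 1],              # deletion
                                     cost[i - 1, j - 1])          # match
        return cost[na, nb]
    ```

    A small DTW distance between the two sequences would indicate that the patient's movement closely follows the pre-programmed correct sequence.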

  9. Design and Real-World Evaluation of Eyes-Free Yoga: An Exergame for Blind and Low-Vision Exercise

    PubMed Central

    Rector, Kyle; Vilardaga, Roger; Lansky, Leo; Lu, Kellie; Bennett, Cynthia L.; Ladner, Richard E.; Kientz, Julie A.

    2017-01-01

    People who are blind or low vision may have a harder time participating in exercise due to inaccessibility or lack of encouragement. To address this, we developed Eyes-Free Yoga using the Microsoft Kinect that acts as a yoga instructor and has personalized auditory feedback based on skeletal tracking. We conducted two different studies on two different versions of Eyes-Free Yoga: (1) a controlled study with 16 people who are blind or low vision to evaluate the feasibility of a proof-of-concept and (2) an 8-week in-home deployment study with 4 people who are blind or low vision, with a fully functioning exergame containing four full workouts and motivational techniques. We found that participants preferred the personalized feedback for yoga postures during the laboratory study. Therefore, the personalized feedback was used as a means to build the core components of the system used in the deployment study and was included in both study conditions. From the deployment study, we found that the participants practiced yoga consistently throughout the 8-week period (average hours = 17; average days of practice = 24), almost reaching the American Heart Association recommended exercise guidelines. On average, the motivational techniques increased participants' user experience and their exercise frequency and time. The findings of this work have implications for eyes-free exergame design, including engaging domain experts, piloting with inexperienced users, using musical metaphors, and designing for in-home use cases. PMID:29104712

  10. A Smart Wirelessly Powered Homecage for Long-Term High-Throughput Behavioral Experiments

    PubMed Central

    Lee, Byunghun; Kiani, Mehdi

    2015-01-01

    A wirelessly powered homecage system, called the EnerCage-HC, that is equipped with multicoil wireless power transfer, closed-loop power control, optical behavioral tracking, and a graphic user interface is presented for longitudinal electrophysiology and behavioral neuroscience experiments. The EnerCage-HC system can wirelessly power a mobile unit attached to a small animal subject and also track its behavior in real-time as it is housed inside a standard homecage. The EnerCage-HC system is equipped with one central and four overlapping slanted wire-wound coils with optimal geometries to form three- and four-coil power transmission links while operating at 13.56 MHz. Utilizing multicoil links increases the power transfer efficiency (PTE) compared with conventional two-coil links and also reduces the number of power amplifiers to only one, which significantly reduces the system complexity, cost, and heat dissipation. A Microsoft Kinect installed 90 cm above the homecage localizes the animal position and orientation with 1.6-cm accuracy. Moreover, a power management ASIC, including a high efficiency active rectifier and automatic coil resonance tuning, was fabricated in a 0.35-μm 4M2P standard CMOS process for the mobile unit. The EnerCage-HC achieves a max/min PTE of 36.3%/16.1% at the nominal height of 7 cm. In vivo experiments were conducted on freely behaving rats by continuously delivering 24 mW to the mobile unit for >7 h inside a standard homecage. PMID:26257586

  11. Scaleable wireless web-enabled sensor networks

    NASA Astrophysics Data System (ADS)

    Townsend, Christopher P.; Hamel, Michael J.; Sonntag, Peter A.; Trutor, B.; Arms, Steven W.

    2002-06-01

    Our goal was to develop a long-life, low-cost, scalable wireless sensing network which collects and distributes data from a wide variety of sensors over the internet. Time division multiple access was employed with RF transmitter nodes (each with a unique 16-bit address) to communicate digital data to a single receiver (range 1/3 mile). One thousand five-channel nodes can communicate with one receiver (30-minute update interval). Current draw (sleep) is 20 microamps, allowing a 5-year battery life with one 3.6-volt Li-ion AA-size battery. The network nodes include sensor excitation (AC or DC), a multiplexer, an instrumentation amplifier, a 16-bit A/D converter, a microprocessor, and an RF link. They are compatible with thermocouples, strain gauges, load/torque transducers, and inductive/capacitive sensors. The receiver (418 MHz) includes a single-board computer (SBC) with Ethernet capability, internet file transfer protocols (XML/HTML), and data storage. The receiver detects data from specific nodes, performs error checking, and records the data. The web server interrogates the SBC (from Microsoft's Internet Explorer or Netscape's Navigator) to distribute data. This system can collect data from thousands of remote sensors on a smart structure and be shared by an unlimited number of users.

  12. Predicting human activities in sequences of actions in RGB-D videos

    NASA Astrophysics Data System (ADS)

    Jardim, David; Nunes, Luís.; Dias, Miguel

    2017-03-01

    In our daily activities we perform prediction or anticipation when interacting with other humans or with objects. Prediction of human activity made by computers has several potential applications: surveillance systems, human-computer interfaces, sports video analysis, human-robot collaboration, games and health-care. We propose a system capable of recognizing and predicting human actions using supervised classifiers trained with automatically labeled data, evaluated on our human activity RGB-D dataset (recorded with a Kinect sensor) and using only the positions of the main skeleton joints to extract features. Conditional random fields (CRFs) have been used before to model the sequential nature of actions in a sequence, but where other approaches try to predict an outcome or anticipate ahead in time (seconds), we try to predict what the next action of a subject will be. Our results show an activity prediction accuracy of 89.9% using an automatically labeled dataset.
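
    As a deliberately simplified stand-in for the CRF-based predictor (not the authors' model), a first-order transition model over action labels already captures the idea of predicting the subject's next action from the one currently observed:

    ```python
    from collections import Counter, defaultdict

    def train_transition_model(action_sequences):
        """Count action-to-action transitions from labelled sequences, e.g.
        [["reach", "pour", "drink"], ["reach", "place", ...], ...] (hypothetical labels)."""
        counts = defaultdict(Counter)
        for seq in action_sequences:
            for prev, nxt in zip(seq[:-1], seq[1:]):
                counts[prev][nxt] += 1
        return counts

    def predict_next(counts, current_action):
        """Return the most frequently observed successor of the current action."""
        if current_action not in counts or not counts[current_action]:
            return None
        return counts[current_action].most_common(1)[0][0]
    ```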

  13. Molecular Rift: Virtual Reality for Drug Designers.

    PubMed

    Norrby, Magnus; Grebner, Christoph; Eriksson, Joakim; Boström, Jonas

    2015-11-23

    Recent advances in interaction design have created new ways to use computers. One example is the ability to create enhanced 3D environments that simulate physical presence in the real world - a virtual reality. This is relevant to drug discovery since molecular models are frequently used to obtain deeper understandings of, say, ligand-protein complexes. We have developed a tool (Molecular Rift), which creates a virtual reality environment steered with hand movements. Oculus Rift, a head-mounted display, is used to create the virtual settings. The program is controlled by gesture-recognition, using the gaming sensor MS Kinect v2, eliminating the need for standard input devices. The Open Babel toolkit was integrated to provide access to powerful cheminformatics functions. Molecular Rift was developed with a focus on usability, including iterative test-group evaluations. We conclude with reflections on virtual reality's future capabilities in chemistry and education. Molecular Rift is open source and can be downloaded from GitHub.

  14. Development of the bedridden person support system using hand gesture.

    PubMed

    Ichimura, Kouhei; Magatani, Kazushige

    2015-08-01

    The purpose of this study is to support bedridden and physically handicapped persons who live independently. In this study, we developed an electric appliance control system that can be used on the bed. The subject can control electric appliances using hand motion, and the infrared sensors of a Kinect are used for hand motion detection. The developed system was tested with several normal subjects and the results of the experiment were evaluated. In this experiment, all subjects lay on the bed and tried to control our system. As a result, most of the subjects were able to control the developed system perfectly. However, the motion tracking of some subjects' hands was reset forcibly, because it was difficult for these subjects to make the system recognize their opened hand. From these results, we believe that if this problem is resolved, our support system will be useful for bedridden and physically handicapped persons.

  15. Mapping of unknown industrial plant using ROS-based navigation mobile robot

    NASA Astrophysics Data System (ADS)

    Priyandoko, G.; Ming, T. Y.; Achmad, M. S. H.

    2017-10-01

    This research examines how humans work with a teleoperated unmanned mobile robot for inspection in an industrial plant area, resulting in a 2D/3D map for further critical evaluation. The experiment focuses on two parts: the way the human and the robot perform remote interactions using a robust method, and the way the robot perceives the surrounding environment as a 2D/3D map. ROS (Robot Operating System) was utilized as a tool during development and implementation, providing a robust data communication method in the form of messages and topics. RGB-D SLAM performs the visual mapping function to construct the 2D/3D map using the Kinect sensor. The results showed that the teleoperated mobile robot system successfully extends human perspective in terms of remote surveillance of a large industrial plant area. It was concluded that the proposed work is a robust solution for mapping large, unknown building interiors.
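
    In a ROS setup like the one described, the Kinect point cloud is published on a topic that the mapping nodes (and any custom node) can subscribe to. The minimal rospy sketch below only logs the cloud size; the topic name depends on the Kinect driver used and is an assumption.

    ```python
    #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import PointCloud2

    def cloud_callback(msg):
        # The SLAM node normally consumes this topic directly; here we just log it.
        rospy.loginfo("received cloud: %d x %d points", msg.height, msg.width)

    if __name__ == "__main__":
        rospy.init_node("kinect_cloud_listener")
        # Topic name is an assumption; it varies with the driver (e.g. freenect_launch).
        rospy.Subscriber("/camera/depth_registered/points", PointCloud2, cloud_callback)
        rospy.spin()
    ```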

  16. Depth-Based Detection of Standing-Pigs in Moving Noise Environments.

    PubMed

    Kim, Jinseong; Chung, Yeonwoo; Choi, Younchang; Sa, Jaewon; Kim, Heegon; Chung, Yongwha; Park, Daihee; Kim, Hakjae

    2017-11-29

    In a surveillance camera environment, the detection of standing-pigs in real-time is an important issue towards the final goal of 24-h tracking of individual pigs. In this study, we focus on depth-based detection of standing-pigs with "moving noises", which appear every night in a commercial pig farm, but have not been reported yet. We first apply a spatiotemporal interpolation technique to remove the moving noises occurring in the depth images. Then, we detect the standing-pigs by utilizing the undefined depth values around them. Our experimental results show that this method is effective for detecting standing-pigs at night, in terms of both cost-effectiveness (using a low-cost Kinect depth sensor) and accuracy (i.e., 94.47%), even with severe moving noises occluding up to half of an input depth image. Furthermore, without any time-consuming technique, the proposed method can be executed in real-time.
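
    As a simplified stand-in for the paper's spatiotemporal interpolation (not the authors' method), a temporal median over a short window of depth frames already suppresses transient "moving noise" while leaving static structure such as standing pigs intact:

    ```python
    import numpy as np

    def temporal_interpolate(depth_stack, window=5):
        """Suppress transient noise in a (T, H, W) stack of depth frames by
        replacing each pixel with the temporal median of a short window.
        Zero depth values are treated as undefined Kinect measurements."""
        stack = depth_stack.astype(float)
        stack[stack == 0] = np.nan                       # mark undefined depth
        half = window // 2
        out = np.empty_like(stack)
        for t in range(stack.shape[0]):
            lo, hi = max(0, t - half), min(stack.shape[0], t + half + 1)
            out[t] = np.nanmedian(stack[lo:hi], axis=0)  # ignore undefined samples
        return out
    ```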

  17. Comparative Geometrical Investigations of Hand-Held Scanning Systems

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Przybilla, H.-J.; Lindstaedt, M.; Tschirschwitz, F.; Misgaiski-Hass, M.

    2016-06-01

    An increasing number of hand-held scanning systems by different manufacturers are becoming available on the market. However, their geometrical performance is little-known to many users. Therefore the Laboratory for Photogrammetry & Laser Scanning of the HafenCity University Hamburg has carried out geometrical accuracy tests with the following systems in co-operation with the Bochum University of Applied Sciences (Laboratory for Photogrammetry) as well as the Humboldt University in Berlin (Institute for Computer Science): DOTProduct DPI-7, Artec Spider, Mantis Vision F5 SR, Kinect v1 + v2, Structure Sensor and Google's Project Tango. In the framework of these comparative investigations geometrically stable reference bodies were used. The appropriate reference data were acquired by measurement with two structured light projection systems (AICON smartSCAN and GOM ATOS I 2M). The comprehensive test results of the different test scenarios are presented and critically discussed in this contribution.

  18. Multiple sclerosis patients' experiences in relation to the impact of the kinect virtual home-exercise programme: a qualitative study.

    PubMed

    Palacios-Ceña, Domingo; Ortiz-Gutierrez, Rosa M; Buesa-Estellez, Almudena; Galán-Del-Río, Fernando; Cachon Perez, José M; Martínez-Piedrola, Rosa; Velarde-Garcia, Juan F; Cano-DE-LA-Cuerda, Roberto

    2016-06-01

    Neurorehabilitation programs are among the most popular therapies aimed at reducing the disabilities that result from multiple sclerosis. Video games have recently gained importance in the rehabilitation of patients with motor neurological dysfunctions. Currently, studies describing the perspective of patients with multiple sclerosis who have participated in rehabilitation programmes via home-based video games are almost nonexistent. The aim of this paper was to explore the experiences of multiple sclerosis patients who performed a virtual home-exercise programme using Kinect. A qualitative research enquiry was conducted as part of a study that examined postural control and balance after a 10-week Kinect home-exercise programme in adults with multiple sclerosis. Patients were recruited from a Neurology Unit of a University Hospital. The inclusion criteria were: subjects aged between 20 and 60 years, diagnosed with multiple sclerosis for over 2 years based on the McDonald Criteria, with an EDSS score ranging from 3 to 5. A purposeful sampling method was implemented. The data collection consisted of unstructured interviews using open questions, and thematic analysis was conducted. Guidelines for conducting qualitative studies established by the Consolidated Criteria for Reporting Qualitative Research were followed. Twenty-four patients with a mean age of 36.69 years were included. Four main themes emerged from the data: 1) regaining previous capacity and abilities - the patients described how, after the treatment with Kinect, they felt more independent; 2) sharing the disease - the patients shared the experience of living with MS with their families thanks to the use of Kinect; 3) adapting to the new treatment - the use of the videogame console introduced novelties into their rehabilitation programme; and 4) comparing oneself - factors emerged that motivated the patients during the Kinect virtual home-exercise programme (KVHEP). The patients' experiences gathered in this study highlight perceptions of unexpected improvement, an eagerness to improve, and the positive opportunity of sharing treatment with their social entourage thanks to the games. These results can be applied to future research using video consoles, by individualizing and adapting the games to the patient's abilities, and by developing a new field in rehabilitation.

  19. Examining Energy Expenditure in Youth Using XBOX Kinect: Differences by Player Mode.

    PubMed

    Barkman, Jourdin; Pfeiffer, Karin; Diltz, Allie; Peng, Wei

    2016-06-01

    Replacing sedentary time with physical activity through new generation exergames (eg, XBOX Kinect) is a potential intervention strategy. The study's purpose was to compare youth energy expenditure while playing different exergames in single- vs. multiplayer mode. Participants (26 male, 14 female) were 10 to 13 years old. They wore a portable metabolic analyzer while playing 4 XBOX Kinect games for 15 minutes each (2 single-, 2 multiplayer). Repeated-measures ANOVA (with Bonferroni correction) was used to examine player mode differences, controlling for age group, sex, weight status, and game. There was a significant difference in energy expenditure between single player (mean = 15.4 ml/kg/min, SD = 4.5) and multiplayer mode (mean = 16.8 ml/kg/min, SD = 4.7). Overweight and obese participants (mean = 13.7 ml/kg/min, SD = 4.2) expended less energy than normal weight (mean = 17.8 ml/kg/min, SD = 4.5) during multiplayer mode (d = 0.93). Player mode, along with personal factors such as weight status, may be important to consider in energy expenditure during exergames.

  20. Kinect based real-time position calibration for nasal endoscopic surgical navigation system

    NASA Astrophysics Data System (ADS)

    Fan, Jingfan; Yang, Jian; Chu, Yakui; Ma, Shaodong; Wang, Yongtian

    2016-03-01

    Unanticipated, reactive motion of the patient during skull-base tumor resection surgery forces the nasal endoscopic tracking system to be recalibrated. To accommodate the calibration process to the patient's movement, this paper presents a Kinect-based real-time positional calibration method for a nasal endoscopic surgical navigation system. In this method, a Kinect scanner is employed to acquire a volumetric point-cloud reconstruction of the patient's head during surgery. Then, a convex-hull-based registration algorithm aligns the real-time image of the patient's head with a model built from the CT scans acquired during preoperative preparation, dynamically calibrating the tracking system whenever a movement is detected. Experimental results confirmed the robustness of the proposed method, showing a total tracking error within 1 mm even under relatively violent motions. These results indicate that the tracking accuracy can be retained stably and that the method has the potential to expedite recalibration of the tracking system under strong interfering conditions, demonstrating suitability for a wide range of surgical applications.

  1. 3D Laser Scanner for Underwater Manipulation.

    PubMed

    Palomer, Albert; Ridao, Pere; Youakim, Dina; Ribas, David; Forest, Josep; Petillot, Yvan

    2018-04-04

    Nowadays, research in autonomous underwater manipulation has demonstrated simple applications like picking an object from the sea floor, turning a valve or plugging and unplugging a connector. These are fairly simple tasks compared with those already demonstrated by the mobile robotics community, which include, among others, safe arm motion within areas populated with a priori unknown obstacles or the recognition and location of objects based on their 3D model to grasp them. Kinect-like 3D sensors have contributed significantly to the advance of mobile manipulation providing 3D sensing capabilities in real-time at low cost. Unfortunately, the underwater robotics community is lacking a 3D sensor with similar capabilities to provide rich 3D information of the work space. In this paper, we present a new underwater 3D laser scanner and demonstrate its capabilities for underwater manipulation. In order to use this sensor in conjunction with manipulators, a calibration method to find the relative position between the manipulator and the 3D laser scanner is presented. Then, two different advanced underwater manipulation tasks beyond the state of the art are demonstrated using two different manipulation systems. First, an eight Degrees of Freedom (DoF) fixed-base manipulator system is used to demonstrate arm motion within a work space populated with a priori unknown fixed obstacles. Next, an eight DoF free floating Underwater Vehicle-Manipulator System (UVMS) is used to autonomously grasp an object from the bottom of a water tank.

  2. Microsoft Repository Version 2 and the Open Information Model.

    ERIC Educational Resources Information Center

    Bernstein, Philip A.; Bergstraesser, Thomas; Carlson, Jason; Pal, Shankar; Sanders, Paul; Shutt, David

    1999-01-01

    Describes the programming interface and implementation of the repository engine and the Open Information Model for Microsoft Repository, an object-oriented meta-data management facility that ships in Microsoft Visual Studio and Microsoft SQL Server. Discusses Microsoft's component object model, object manipulation, queries, and information…

  3. Natural user interface as a supplement of the holographic Raman tweezers

    NASA Astrophysics Data System (ADS)

    Tomori, Zoltan; Kanka, Jan; Kesa, Peter; Jakl, Petr; Sery, Mojmir; Bernatova, Silvie; Antalik, Marian; Zemánek, Pavel

    2014-09-01

    Holographic Raman tweezers (HRT) manipulate microobjects by controlling the positions of multiple optical traps via a mouse or joystick. Several attempts have appeared recently to exploit touch tablets, 2D cameras or the Kinect game console instead. We propose a multimodal "Natural User Interface" (NUI) approach integrating hand tracking, gesture recognition, eye tracking and speech recognition. For this purpose we exploited the low-cost "Leap Motion" and "MyGaze" sensors and a simple speech recognition program, "Tazti". We developed our own NUI software which processes the signals from the sensors and sends control commands to the HRT, which subsequently controls the positions of the trapping beams, the micropositioning stage and the acquisition system for Raman spectra. The system allows various modes of operation suited to specific tasks. Virtual tools (called "pin" and "tweezers") serving for the manipulation of particles are displayed in a transparent "overlay" window above the live camera image. The eye tracker identifies the position of the observed particle and uses it for autofocus. Laser trap manipulation navigated by the dominant hand can be combined with gesture recognition of the secondary hand. Speech command recognition is useful if both hands are busy. The proposed methods make manual control of HRT more efficient, and they are also a good platform for its future semi-automated and fully automated operation.

  4. Miniaturized Water Flow and Level Monitoring System for Flood Disaster Early Warning

    NASA Astrophysics Data System (ADS)

    Ifedapo Abdullahi, Salami; Hadi Habaebi, Mohamed; Surya Gunawan, Teddy; Rafiqul Islam, MD

    2017-11-01

    This study presents the performance of a prototype miniaturised water flow and water level monitoring sensor designed to support flood disaster early warning systems. The design involved selection of sensors, coding to control the system mechanism, and automatic data logging and storage. During the design phase, the apparatus was constructed and all the components were assembled using locally sourced items. Subsequently, under a controlled laboratory environment, the system was tested by running water through the inlet, during which the flow rate and rising water levels were automatically recorded and stored in a database via Microsoft Excel using CoolTerm software. The system is simulated such that water level readings measured in centimeters are output in meters using a multiplicative factor of 10. A total of 80 readings were analyzed to evaluate the performance of the system. The results show that the system is sensitive to water level rise and yielded accurate measurements of water level. However, the flow rate fluctuated due to the manual water supply, which produced an inconsistent flow. It was also observed that the flow sensor has a duty cycle of 50% of operating time under normal conditions, which implies that the performance of the flow sensor is optimal.

  5. Improvement in the physiological function and standing stability based on kinect multimedia for older people

    PubMed Central

    Chen, Chih-Chen

    2016-01-01

    [Purpose] The increase in the Taiwanese older population is associated with age-related inconveniences. Finding adequate and simple physical activities to help older people maintain their physiological function and prevent falls has become an urgent social issue. [Subjects and Methods] This study aimed to design a virtual exercise training game suitable for Taiwanese older people. The system allows for the maintenance of physiological function and standing stability through physical exercise while using a virtual reality game, so that participants can easily exercise in a carefree, interactive environment. The study used Kinect for Windows for physical movement detection and Unity software for virtual world development. [Results] Group A and B subjects took part in the Kinect interactive multimedia exercise training method for 12 weeks. The results showed that the functional reach test and the unipedal stance test improved significantly. [Conclusion] The physiological function and standing stability of the group A subjects were examined at six weeks post training. The results showed that these parameters remained constant. This proved that the proposed system provides substantial support toward the preservation of Taiwanese older people's physiological function and standing stability. PMID:27190480

  6. Improvement in the physiological function and standing stability based on kinect multimedia for older people.

    PubMed

    Chen, Chih-Chen

    2016-04-01

    [Purpose] The increase in the Taiwanese older population is associated with age-related inconveniences. Finding adequate and simple physical activities to help older people maintain their physiological function and prevent falls has become an urgent social issue. [Subjects and Methods] This study aimed to design a virtual exercise training game suitable for Taiwanese older people. The system allows for the maintenance of physiological function and standing stability through physical exercise while using a virtual reality game, so that participants can easily exercise in a carefree, interactive environment. The study used Kinect for Windows for physical movement detection and Unity software for virtual world development. [Results] Group A and B subjects took part in the Kinect interactive multimedia exercise training method for 12 weeks. The results showed that the functional reach test and the unipedal stance test improved significantly. [Conclusion] The physiological function and standing stability of the group A subjects were examined at six weeks post training. The results showed that these parameters remained constant. This proved that the proposed system provides substantial support toward the preservation of Taiwanese older people's physiological function and standing stability.

  7. Cardiovascular effects of Zumba® performed in a virtual environment using XBOX Kinect

    PubMed Central

    Neves, Luceli Eunice Da Silva; Cerávolo, Mariza Paver Da Silva; Silva, Elisangela; De Freitas, Wagner Zeferino; Da Silva, Fabiano Fernandes; Higino, Wonder Passoni; Carvalho, Wellington Roberto Gomes; De Souza, Renato Aparecido

    2015-01-01

    [Purpose] This study evaluated the acute cardiovascular responses during a session of Zumba® Fitness in a virtual reality environment. [Subjects] Eighteen healthy volunteers were recruited. [Methods] The following cardiovascular variables: heart rate, systolic blood pressure, diastolic blood pressure, and double product were assessed before and after the practice of virtual Zumba®, which was performed as a continuous sequence of five choreographed movements lasting for 22 min. The game Zumba Fitness Core®, with the Kinect-based virtual reality system for the XBOX 360, was used to create the virtual environment. Comparisons were made among mean delta values (delta=post-Zumba® minus pre-Zumba® values) for systolic and diastolic blood pressure, heart rate, and double product using Student’s t-test for paired samples. [Results] After a single session, a significant increase was noted in all the analyzed parameters (Systolic blood pressure=18%; Diastolic blood pressure=13%; Heart rate=67%; and Double product=97%). [Conclusion] The results support the feasibility of the use of Zumba Fitness Core® with the Kinect-based virtual reality system for the XBOX 360 in physical activity programs and further favor its indication for this purpose. PMID:26504312

  8. You can't touch this: touch-free navigation through radiological images.

    PubMed

    Ebert, Lars C; Hatch, Gary; Ampanozi, Garyfalia; Thali, Michael J; Ross, Steffen

    2012-09-01

    Keyboards, mice, and touch screens are a potential source of infection or contamination in operating rooms, intensive care units, and autopsy suites. The authors present a low-cost prototype of a system which allows for touch-free control of a medical image viewer. This touch-free navigation system consists of a computer system (iMac, OS X 10.6, Apple, USA) with a medical image viewer (OsiriX, OsiriX Foundation, Switzerland) and a depth camera (Kinect, Microsoft, USA). They implemented software that translates the data delivered by the camera, together with voice recognition software, into keyboard and mouse commands, which are then passed to OsiriX. In this feasibility study, the authors introduced 10 medical professionals to the system and asked them to re-create 12 images from a CT data set. They evaluated response times and usability of the system compared with standard mouse/keyboard control. Users felt comfortable with the system after approximately 10 minutes. Response time was 120 ms. Users required 1.4 times more time to re-create an image with gesture control. Users with OsiriX experience were significantly faster using the mouse/keyboard and faster than users without prior experience. They rated the system 3.4 out of 5 for ease of use in comparison to the mouse/keyboard. The touch-free, gesture-controlled system performs favorably and removes a potential vector for infection, protecting both patients and staff. Because the camera can be quickly and easily integrated into existing systems, requires no calibration, and is low cost, the barriers to using this technology are low.

  9. Use of Assisted Photogrammetry for Indoor and Outdoor Navigation Purposes

    NASA Astrophysics Data System (ADS)

    Pagliari, D.; Cazzaniga, N. E.; Pinto, L.

    2015-05-01

    Nowadays, devices and applications that require navigation solutions are continuously growing. For instance, consider the increasing demand for mapping information or the development of applications based on users' location. In some cases an approximate solution (e.g., at room level) may be sufficient, but in the vast majority of cases a better solution is required. The navigation problem has long been solved using Global Navigation Satellite Systems (GNSS). However, GNSS can be useless in obstructed areas, such as urban areas or inside buildings. An interesting low-cost solution is photogrammetry, assisted by additional information to scale the photogrammetric problem and to recover a solution also in situations that are critical for image-based methods (e.g., poorly textured surfaces). In this paper, the use of assisted photogrammetry has been tested for both outdoor and indoor scenarios. The outdoor navigation problem has been addressed by developing a positioning system with ground control points extracted from urban maps as constraints and tie points automatically extracted from the images acquired during the survey. The proposed approach has been tested under different scenarios, recovering the followed trajectory with an accuracy of 0.20 m. For indoor navigation, a solution has been devised to integrate the data delivered by the Microsoft Kinect, by identifying interesting features on the RGB images and re-projecting them onto the point clouds generated from the delivered depth maps. These points have then been used to estimate the rotation matrix between subsequent point clouds and, consequently, to recover the trajectory with an error of a few centimeters.
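
    The step of re-projecting RGB image features onto the Kinect point cloud reduces to back-projecting a pixel through the pinhole camera model with its measured depth. The sketch below illustrates this; the intrinsic parameters are placeholders that must come from a calibration of the actual sensor, and the function is not the authors' code.

    ```python
    import numpy as np

    def backproject(u, v, depth_mm, fx, fy, cx, cy):
        """Back-project an image feature at pixel (u, v) to a 3D point using the
        depth value and the depth-camera intrinsics (fx, fy, cx, cy are
        placeholder calibration values)."""
        z = depth_mm / 1000.0            # metres
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.array([x, y, z])
    ```

    Once such 3D points are matched between consecutive frames, the rotation and translation between point clouds can be estimated with a standard least-squares rigid fit, as in the transformation sketch shown earlier in this list.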

  10. The AR Sandbox: Augmented Reality in Geoscience Education

    NASA Astrophysics Data System (ADS)

    Kreylos, O.; Kellogg, L. H.; Reed, S.; Hsi, S.; Yikilmaz, M. B.; Schladow, G.; Segale, H.; Chan, L.

    2016-12-01

    The AR Sandbox is a combination of a physical box full of sand, a 3D (depth) camera such as a Microsoft Kinect, a data projector, and a computer running open-source software, creating a responsive and interactive system to teach geoscience concepts in formal or informal contexts. As one or more users shape the sand surface to create planes, hills, or valleys, the 3D camera scans the surface in real-time, the software creates a dynamic topographic map including elevation color maps and contour lines, and the projector projects that map back onto the sand surface such that real and projected features match exactly. In addition, users can add virtual water to the sandbox, which realistically flows over the real surface driven by a real-time fluid flow simulation. The AR Sandbox can teach basic geographic and hydrologic skills and concepts such as reading topographic maps, interpreting contour lines, formation of watersheds, flooding, or surface wave propagation in a hands-on and explorative manner. AR Sandbox installations in more than 150 institutions have shown high audience engagement and long dwell times of often 20 minutes and more. In a more formal context, the AR Sandbox can be used in field trip preparation, and can teach advanced geoscience skills such as extrapolating 3D sub-surface shapes from surface expression, via advanced software features such as the ability to load digital models of real landscapes and guiding users towards recreating them in the sandbox. Blueprints, installation instructions, and the open-source AR Sandbox software package are available at http://arsandbox.org .
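
    The core rendering idea - colour-mapping an elevation grid and overlaying contour lines - can be mimicked in a few lines. The real AR Sandbox uses its own open-source C++/OpenGL pipeline, so the matplotlib sketch below is only an illustration of the concept, not the project's code.

    ```python
    import matplotlib.pyplot as plt

    def draw_topography(elevation, n_levels=15):
        """Render an elevation grid (2D array from the depth camera) with an
        elevation colour map and labelled contour lines."""
        fig, ax = plt.subplots(figsize=(6, 4))
        ax.imshow(elevation, cmap="terrain", origin="lower")
        cs = ax.contour(elevation, levels=n_levels, colors="k", linewidths=0.5)
        ax.clabel(cs, inline=True, fontsize=6)           # label contour elevations
        ax.set_axis_off()
        return fig
    ```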

  11. Lyme Disease Data

    MedlinePlus

    County-level Lyme disease data from 2000-2016 (Microsoft Excel/CSV file, 209 KB).

  12. Long-Term Safety and Tolerability of Valbenazine (NBI-98854) in Subjects with Tardive Dyskinesia and a Diagnosis of Schizophrenia or Mood Disorder

    PubMed Central

    Josiassen, Richard C.; Kane, John M.; Liang, Grace S.; Burke, Joshua; O’Brien, Christopher F.

    2017-01-01

    Background The short-term safety profile of once-daily valbenazine (NBI-98854) has been evaluated in several double-blind, placebo-controlled (DBPC) trials in adults with tardive dyskinesia (TD) who had a diagnosis of schizophrenia/schizoaffective (SCHZ) disorder or mood disorder. Studies with longer treatment duration (up to 48 weeks) were conducted to evaluate the long-term safety of this novel drug in subjects with TD. Methods The pooled long-term exposure (LTE) population included valbenazine-treated subjects from 3 studies: KINECT (NCT01688037: 6-week DBPC, 6-week open-label); KINECT 3 (NCT02274558: 6-week DBPC, 42-week blinded extension, 4-week drug-free follow-up); KINECT 4 (NCT02405091: 48-week open-label, 4-week drug-free follow-up). Safety assessments included adverse events (AEs), laboratory tests, vital signs, electrocardiograms (ECGs), and extrapyramidal symptom (EPS) scales. Psychiatric stability was monitored using the Positive and Negative Syndrome Scale (PANSS) and Calgary Depression Scale for Schizophrenia (CDSS) (SCHZ subgroup), as well as the Montgomery-Åsberg Depression Rating Scale (MADRS) and Young Mania Rating Scale (YMRS) (mood subgroup). All data were analyzed descriptively. Results The LTE population included 430 subjects (KINECT, n = 46; KINECT 3, n = 220; KINECT 4, n = 164), 71.7% with SCHZ and 28.3% with a mood disorder; 85.5% were taking an antipsychotic (atypical only, 69.8%; typical only or typical + atypical, 15.7%). In the LTE population, treatment-emergent AEs (TEAEs) and discontinuations due to AEs were reported in 66.5% and 14.7% of subjects, respectively. The TEAE incidence was lower in the SCHZ subgroup (64.4%) than in the mood subgroup (71.9%). The 3 most common TEAEs in the SCHZ subgroup were urinary tract infection (UTI, 6.1%), headache (5.8%), and somnolence (5.2%). The 3 most common TEAEs in the mood subgroup were headache (12.4%), UTI (10.7%), and somnolence (9.1%). Mean score changes from baseline to end of treatment (Week 48) indicated that psychiatric stability was maintained in the SCHZ subgroup (PANSS Total, -3.4; PANSS Positive, -1.1; PANSS Negative, -0.1; PANSS General Psychopathology, -2.2; CDSS total, -0.4) and the mood subgroup (MADRS Total, 0.0; YMRS Total, -1.2). These scores remained generally stable during the 4-week drug-free follow-up periods. In the LTE population, mean changes in laboratory parameters, vital signs, ECG, and EPS scales were generally minimal and not clinically significant. Conclusion Valbenazine appeared to be well tolerated in adults with TD who received up to 48 weeks of treatment. In addition to long-term efficacy results (presented separately), these results suggest that valbenazine may be appropriate for the long-term management of TD regardless of underlying psychiatric diagnosis (SCHZ disorder or mood disorder). PMID:28839341

  13. Long-Term Safety and Tolerability of Valbenazine (NBI-98854) in Subjects with Tardive Dyskinesia and a Diagnosis of Schizophrenia or Mood Disorder.

    PubMed

    Josiassen, Richard C; Kane, John M; Liang, Grace S; Burke, Joshua; O'Brien, Christopher F

    2017-08-01

    The short-term safety profile of once-daily valbenazine (NBI-98854) has been evaluated in several double-blind, placebo-controlled (DBPC) trials in adults with tardive dyskinesia (TD) who had a diagnosis of schizophrenia/schizoaffective (SCHZ) disorder or mood disorder. Studies with longer treatment duration (up to 48 weeks) were conducted to evaluate the long-term safety of this novel drug in subjects with TD. The pooled long-term exposure (LTE) population included valbenazine-treated subjects from 3 studies: KINECT (NCT01688037: 6-week DBPC, 6-week open-label); KINECT 3 (NCT02274558: 6-week DBPC, 42-week blinded extension, 4-week drug-free follow-up); KINECT 4 (NCT02405091: 48-week open-label, 4-week drug-free follow-up). Safety assessments included adverse events (AEs), laboratory tests, vital signs, electrocardiograms (ECGs), and extrapyramidal symptom (EPS) scales. Psychiatric stability was monitored using the Positive and Negative Syndrome Scale (PANSS) and Calgary Depression Scale for Schizophrenia (CDSS) (SCHZ subgroup), as well as the Montgomery-Åsberg Depression Rating Scale (MADRS) and Young Mania Rating Scale (YMRS) (mood subgroup). All data were analyzed descriptively. The LTE population included 430 subjects (KINECT, n = 46; KINECT 3, n = 220; KINECT 4, n = 164), 71.7% with SCHZ and 28.3% with a mood disorder; 85.5% were taking an antipsychotic (atypical only, 69.8%; typical only or typical + atypical, 15.7%). In the LTE population, treatment-emergent AEs (TEAEs) and discontinuations due to AEs were reported in 66.5% and 14.7% of subjects, respectively. The TEAE incidence was lower in the SCHZ subgroup (64.4%) than in the mood subgroup (71.9%). The 3 most common TEAEs in the SCHZ subgroup were urinary tract infection (UTI, 6.1%), headache (5.8%), and somnolence (5.2%). The 3 most common TEAEs in the mood subgroup were headache (12.4%), UTI (10.7%), and somnolence (9.1%). Mean score changes from baseline to end of treatment (Week 48) indicated that psychiatric stability was maintained in the SCHZ subgroup (PANSS Total, -3.4; PANSS Positive, -1.1; PANSS Negative, -0.1; PANSS General Psychopathology, -2.2; CDSS total, -0.4) and the mood subgroup (MADRS Total, 0.0; YMRS Total, -1.2). These scores remained generally stable during the 4-week drug-free follow-up periods. In the LTE population, mean changes in laboratory parameters, vital signs, ECG, and EPS scales were generally minimal and not clinically significant. Valbenazine appeared to be well tolerated in adults with TD who received up to 48 weeks of treatment. In addition to long-term efficacy results (presented separately), these results suggest that valbenazine may be appropriate for the long-term management of TD regardless of underlying psychiatric diagnosis (SCHZ disorder or mood disorder).

  14. A low cost PSD-based monocular motion capture system

    NASA Astrophysics Data System (ADS)

    Ryu, Young Kee; Oh, Choonsuk

    2007-10-01

    This paper describes a monocular PSD-based motion capture sensor intended for use with commercial video game systems such as Microsoft's XBOX and Sony's Playstation II. The system is compact, low-cost, and requires only a one-time calibration at the factory. The system includes a PSD (Position Sensitive Detector) and active infrared (IR) LED markers that are placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. The micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments was performed to evaluate the performance of our prototype system. The experimental results show that the proposed system is compact, low-cost, and easy to install, and that its frame rates are high enough for high-speed motion tracking in games.
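
    The abstract states that the micro-controller recovers each marker's 3D position from its 2D position on the PSD together with its measured IR intensity, but it does not spell out the model. A common way to realize this is to estimate range from the inverse-square falloff of the intensity relative to a factory calibration and then back-project the 2D point along its viewing ray; the sketch below illustrates that idea under those assumptions (pinhole lens model, known reference intensity at a reference distance) and is not the authors' implementation.

```python
import numpy as np

def marker_3d_from_psd(u, v, intensity, f=4.0, i_ref=1.0, d_ref=1.0):
    """Estimate a marker's 3D position from a monocular PSD reading.

    u, v      : marker position on the PSD surface (same units as f)
    intensity : measured IR intensity of the marker
    f         : lens focal length (pinhole model assumption)
    i_ref     : intensity measured at the reference distance d_ref
                (stand-in for a one-time factory calibration)
    """
    # Inverse-square law: intensity ~ i_ref * (d_ref / d)^2  =>  d = d_ref * sqrt(i_ref / intensity)
    distance = d_ref * np.sqrt(i_ref / intensity)
    # Back-project the PSD point along its viewing ray and scale to that distance.
    ray = np.array([u, v, f], dtype=float)
    ray /= np.linalg.norm(ray)
    return distance * ray

# Example: a marker imaged at (0.5, -0.2) with a quarter of the reference
# intensity lies at twice the reference distance along its ray.
print(marker_3d_from_psd(0.5, -0.2, 0.25))
```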

  15. Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.

    PubMed

    Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen

    2017-06-01

    The article proposes a set of metrics for the evaluation of patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, according to whether the evaluation employs the raw measurements of patient-performed motions or is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, the Fugl-Meyer Assessment, and similar measures. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity of human-performed therapy assessment, increase adherence to prescribed therapy plans, and reduce healthcare costs.
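
    Two of the model-less metrics named above, the root-mean-square distance and the Kullback-Leibler divergence, are easy to reproduce from captured joint trajectories. The following sketch is a generic illustration (not the authors' code), assuming two equal-length motion sequences stored as NumPy arrays of per-frame joint coordinates and a simple shared-histogram estimate for the KL divergence.

```python
import numpy as np

def rms_distance(reference, performed):
    """Root-mean-square distance between two motion sequences of shape (frames, coords)."""
    diff = reference - performed
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

def kl_divergence(p, q, bins=20):
    """KL divergence between the value distributions of two 1-D signals,
    estimated with a shared histogram (a small epsilon avoids log(0))."""
    lo, hi = min(p.min(), q.min()), max(p.max(), q.max())
    p_hist, _ = np.histogram(p, bins=bins, range=(lo, hi), density=True)
    q_hist, _ = np.histogram(q, bins=bins, range=(lo, hi), density=True)
    eps = 1e-10
    p_hist, q_hist = p_hist + eps, q_hist + eps
    p_hist, q_hist = p_hist / p_hist.sum(), q_hist / q_hist.sum()
    return float(np.sum(p_hist * np.log(p_hist / q_hist)))

# Toy example: a "patient" trajectory that is a noisy copy of the reference.
rng = np.random.default_rng(0)
reference = np.sin(np.linspace(0, 2 * np.pi, 100))[:, None]
performed = reference + rng.normal(0, 0.05, reference.shape)
print(rms_distance(reference, performed), kl_divergence(reference.ravel(), performed.ravel()))
```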

  16. Tracking and Counting Motion for Monitoring Food Intake Based-On Depth Sensor and UDOO Board: A Comprehensive Review

    NASA Astrophysics Data System (ADS)

    Kassim, Muhammad Fuad bin; Norzali Haji Mohd, Mohd

    2017-08-01

    Technology is ultimately about helping people, and it has created new opportunities for people to take serious action in managing their health care. Obesity continues to be a serious and rising public health concern in Malaysia, where nearly half the population is overweight. Most dietary approaches do not track and detect the calorie intake needed for weight loss, and currently used tools such as food diaries require users to manually record and track food calories, making them difficult to use daily. We are developing a new tool that counts food intake bite by bite, by monitoring hand gestures and jaw motion during eating. The bite-count method has shown good significance, in that it can lead to successful weight loss simply by monitoring the bites taken during eating. The device used was the Kinect for Xbox One, whose depth camera detects the motion of a person's hand and face during food intake. Previous studies show that most bite-counting devices are of the worn type; the recent trend is toward non-wearable devices, because worn devices are inconvenient and have a high false-alarm ratio. The proposed system acquires data from the Kinect, which monitors the user's hand and face gestures while eating. The hand and face gesture data are then sent to a microcontroller board, which recognizes the gestures and counts the bites taken by the user. The system recognizes bite patterns by following an algorithm for the basic eating style, whether by hand or with chopsticks. This system can help people who are trying to reduce overweight or manage eating disorders by monitoring their meal intake and controlling their eating rate.
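
    The core idea, counting a bite whenever the hand approaches the face during eating, can be expressed very compactly from Kinect skeleton joints. The sketch below is a hypothetical illustration of that rule; the distance threshold and the rising-edge counting are assumptions, not the paper's algorithm.

```python
import numpy as np

def count_bites(hand_xyz, head_xyz, threshold=0.20):
    """Count bites from per-frame 3-D positions (metres) of the hand and head joints.
    A bite is one contiguous run of frames in which the hand is within `threshold`
    metres of the head (hypothetical rule, not the paper's algorithm)."""
    dist = np.linalg.norm(hand_xyz - head_xyz, axis=1)
    near = dist < threshold
    # Count rising edges: frames where `near` switches from False to True.
    return int(np.sum(near[1:] & ~near[:-1]) + near[0])

# Toy example: the hand dips toward the mouth twice over 300 frames.
t = np.linspace(0, 10, 300)
head = np.tile([0.0, 0.6, 2.0], (300, 1))
hand = head + np.column_stack([np.zeros_like(t),
                               -0.4 + 0.35 * (np.sin(t) > 0.95),
                               np.zeros_like(t)])
print(count_bites(hand, head))  # -> 2 with this synthetic motion
```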

  17. Detection of collaborative activity with Kinect depth cameras.

    PubMed

    Sevrin, Loic; Noury, Norbert; Abouchi, Nacer; Jumel, Fabrice; Massot, Bertrand; Saraydaryan, Jacques

    2016-08-01

    The health status of elderly subjects is highly correlated with their activities and their social interactions. Thus, long-term in-home monitoring of their health status should also address the analysis of collaborative activities. This paper proposes a preliminary approach to such a system, which can detect the simultaneous presence of several subjects in a common area using Kinect depth cameras. Since most areas in the home are dedicated to specific tasks, localization enables the classification of tasks as collaborative or not. A scenario of a 24-hour day compressed into 24 minutes was used to validate our approach. It pointed out the need for artifact removal to reach high specificity and good sensitivity.
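
    The underlying logic, locating each detected person and flagging task areas occupied by two or more people at once, can be sketched with a simple zone lookup. The zone names, centres and radii below are made up for illustration; this is not the authors' software.

```python
import numpy as np

# Hypothetical home zones: name -> ((x, y) centre in metres, radius in metres).
ZONES = {"kitchen_table": ((1.0, 2.0), 1.0),
         "sofa":          ((4.0, 1.0), 1.0),
         "desk":          ((2.5, 4.0), 0.8)}

def zone_of(position):
    """Return the zone containing a 2-D floor position, or None."""
    for name, (centre, radius) in ZONES.items():
        if np.hypot(position[0] - centre[0], position[1] - centre[1]) <= radius:
            return name
    return None

def collaborative_zones(positions):
    """Given the floor positions of all persons detected in one frame, return the
    zones occupied by at least two people (candidate collaborative activity)."""
    counts = {}
    for p in positions:
        z = zone_of(p)
        if z is not None:
            counts[z] = counts.get(z, 0) + 1
    return [z for z, n in counts.items() if n >= 2]

# Two people at the kitchen table, one on the sofa.
print(collaborative_zones([(1.2, 2.1), (0.8, 1.8), (4.1, 1.0)]))  # -> ['kitchen_table']
```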

  18. Application of ZigBee sensor network to data acquisition and monitoring

    NASA Astrophysics Data System (ADS)

    Terada, Mitsugu

    2009-01-01

    A ZigBee sensor network for data acquisition and monitoring is presented in this paper. It is configured using a commercially available ZigBee solution. A ZigBee module is connected via a USB interface to a Microsoft Windows PC, which works as a base station in the sensor network. Data collected by remote devices are sent to the base station PC, which is set as a data sink. Each remote device is built of a commercially available ZigBee module product and a sensor. The sensor is a thermocouple connected to a cold junction compensator amplifier. The signal from the amplifier is input to an AD converter port on the ZigBee module. Temperature data are transmitted according to the ZigBee protocol from the remote device to the data sink PC. The data sampling rate is one sample per second; the highest possible rate is four samples per second. The data are recorded in the hexadecimal number format by device control software, and the data file is stored in text format on the data sink PC. Time-dependent data changes can be monitored using the macro function of spreadsheet software. The system is considered a useful tool in the field of education, based on the results of trial use for measurement in an undergraduate laboratory class at a university.
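
    Since the samples are logged as hexadecimal ADC readings and later converted for plotting, the same conversion is easy to script. The sketch below assumes a hypothetical 10-bit ADC, a 3.3 V reference and a 10 mV/°C amplifier output; the real scaling depends on the ZigBee module and thermocouple amplifier actually used.

```python
def hex_sample_to_celsius(hex_word, adc_bits=10, v_ref=3.3, volts_per_degc=0.010):
    """Convert one hexadecimal ADC sample to a temperature in degrees Celsius.

    adc_bits, v_ref and volts_per_degc are placeholder values; the real scaling
    depends on the ZigBee module's ADC and the thermocouple amplifier used."""
    raw = int(hex_word, 16)                      # e.g. "01A3" -> 419 counts
    volts = raw / (2 ** adc_bits - 1) * v_ref    # ADC counts -> volts
    return volts / volts_per_degc                # volts -> degrees Celsius

# A line of the text log might hold a timestamp plus one hex sample per record.
for word in ["00F0", "0100", "0110"]:
    print(word, round(hex_sample_to_celsius(word), 1), "degC")
```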

  19. What's New with MS Office Suites

    ERIC Educational Resources Information Center

    Goldsborough, Reid

    2012-01-01

    If one buys a new PC, laptop, or netbook computer today, it probably comes preloaded with Microsoft Office 2010 Starter Edition. This is a significantly limited, advertising-laden version of Microsoft's suite of productivity programs, Microsoft Office. This continues the trend of PC makers providing ever more crippled versions of Microsoft's…

  20. Utilizing Microsoft Mathematics in Teaching and Learning Calculus

    ERIC Educational Resources Information Center

    Oktaviyanthi, Rina; Supriani, Yani

    2015-01-01

    The experimental design was conducted to investigate the use of Microsoft Mathematics, free software made by Microsoft Corporation, in teaching and learning Calculus. This paper reports results from experimental study details on implementation of Microsoft Mathematics in Calculus, students' achievement and the effects of the use of Microsoft…

  1. Experimental Design: Utilizing Microsoft Mathematics in Teaching and Learning Calculus

    ERIC Educational Resources Information Center

    Oktaviyanthi, Rina; Supriani, Yani

    2015-01-01

    The experimental design was conducted to investigate the use of Microsoft Mathematics, free software made by Microsoft Corporation, in teaching and learning Calculus. This paper reports results from experimental study details on implementation of Microsoft Mathematics in Calculus, students' achievement and the effects of the use of Microsoft…

  2. A comparison of manual anthropometric measurements with Kinect-based scanned measurements in terms of precision and reliability.

    PubMed

    Bragança, Sara; Arezes, Pedro; Carvalho, Miguel; Ashdown, Susan P; Castellucci, Ignacio; Leão, Celina

    2018-01-01

    Collecting anthropometric data for real-life applications demands a high degree of precision and reliability. It is important to test new equipment that will be used for data collection. OBJECTIVE: Compare two anthropometric data gathering techniques - manual methods and a Kinect-based 3D body scanner - to understand which of them gives more precise and reliable results. The data was collected using a measuring tape and a Kinect-based 3D body scanner. It was evaluated in terms of precision by considering the regular and relative Technical Error of Measurement, and in terms of reliability by using the Intraclass Correlation Coefficient, Reliability Coefficient, Standard Error of Measurement and Coefficient of Variation. The results obtained showed that both methods presented better results for reliability than for precision. Both methods showed relatively good results for these two variables; however, manual methods had better results for some body measurements. Despite being considered sufficiently precise and reliable for certain applications (e.g. the apparel industry), the 3D scanner tested showed, for almost every anthropometric measurement, a different result than the manual technique. Many companies design their products based on data obtained from 3D scanners, hence understanding the precision and reliability of the equipment used is essential to obtain feasible results.
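
    Precision in this study is expressed as the Technical Error of Measurement (TEM) and the relative TEM. For duplicate measurements, TEM is the square root of the sum of squared inter-trial differences divided by 2n, and the relative TEM expresses it as a percentage of the grand mean. The sketch below uses made-up waist-circumference readings and is only a generic illustration of those formulas.

```python
import numpy as np

def tem(first, second):
    """Technical Error of Measurement for duplicate measurements (same units as the input)."""
    d = np.asarray(first, float) - np.asarray(second, float)
    return float(np.sqrt(np.sum(d ** 2) / (2 * len(d))))

def relative_tem(first, second):
    """Relative TEM (%): TEM expressed as a percentage of the grand mean."""
    grand_mean = np.mean(np.concatenate([first, second]))
    return 100.0 * tem(first, second) / grand_mean

# Hypothetical waist-circumference readings (cm): first and repeat trial per subject.
trial1 = np.array([81.2, 92.5, 77.8, 101.3, 88.0])
trial2 = np.array([80.9, 93.1, 78.2, 100.6, 88.4])
print(round(tem(trial1, trial2), 2), "cm;", round(relative_tem(trial1, trial2), 2), "%")
```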

  3. ScreenRecorder: A Utility for Creating Screenshot Video Using Only Original Equipment Manufacturer (OEM) Software on Microsoft Windows Systems

    DTIC Science & Technology

    2015-01-01

    class within Microsoft Visual Studio. It has been tested on and is compatible with Microsoft Vista, 7, and 8 and Visual Studio Express 2008...the ScreenRecorder utility assumes a basic understanding of compiling and running C++ code within Microsoft Visual Studio. This report does not...of Microsoft Visual Studio, the ScreenRecorder utility was developed as a C++ class that can be compiled as a library (static or dynamic) to be

  4. Software Re-Engineering of the Human Factors Analysis and Classification System - (Maintenance Extension) Using Object Oriented Methods in a Microsoft Environment

    DTIC Science & Technology

    2001-09-01

    replication) -- all from Visual Basic and VBA. In fact, we found that the SQL Server engine actually had a plethora of options, most formidable of...2002, the new SQL Server 2000 database engine, and Microsoft Visual Basic.NET. This thesis describes our use of the Spiral Development Model to...versions of Microsoft products? Specifically, the pending release of Microsoft Office 2002, the new SQL Server 2000 database engine, and Microsoft

  5. Towards a gestural 3D interaction for tangible and three-dimensional GIS visualizations

    NASA Astrophysics Data System (ADS)

    Partsinevelos, Panagiotis; Agadakos, Ioannis; Pattakos, Nikolas; Maragakis, Michail

    2014-05-01

    The last decade has been characterized by a significant increase in spatially dependent applications that require storage, visualization, analysis and exploration of geographic information. GIS analysis of spatiotemporal geographic data is carried out by highly trained personnel using an abundance of software and tools that lack interoperability and friendly user interaction. Towards this end, new forms of querying and interaction are emerging, including gestural interfaces. Three-dimensional GIS representations refer to either tangible surfaces or projected representations. Making a 3D tangible geographic representation touch-sensitive may be a convenient solution, but such an approach raises the cost significantly and complicates the hardware and processing required to combine touch-sensitive material (for pinpointing points) with deformable material (for displaying elevations). In this study, a novel interaction scheme for a three-dimensional visualization of GIS data is proposed. While gesture user interfaces are not yet fully acceptable due to inconsistencies and complexity, a non-tangible GIS system where 3D visualizations are projected calls for interactions that are based on three-dimensional, non-contact and gestural procedures. Towards these objectives, we use the Microsoft Kinect II system, which includes a time-of-flight camera, allowing for robust, real-time depth map generation along with the capturing and translation of a variety of predefined gestures from different simultaneous users. By incorporating these features into our system architecture, we attempt to create a natural way for users to operate on GIS data. Apart from the conventional pan and zoom features, the key functions addressed for the 3-D user interface are the ability to pinpoint particular points, lines and areas of interest, such as destinations, waypoints, landmarks, closed areas, etc. The first results shown concern a projected GIS representation where the user selects points and regions of interest while the GIS component responds accordingly by changing the scenario in a natural disaster application. Creating a 3D model representation of geospatial data provides a natural way for users to perceive and interact with space. To the best of our knowledge, this is the first attempt to use Kinect II for GIS applications and, more generally, for virtual environments using novel Human Computer Interaction methods. Under a robust decision support system, the users are able to interact, combine and computationally analyze information in three dimensions using gestures. This study promotes geographic awareness and education and will prove beneficial for a wide range of geoscience applications including natural disaster and emergency management. Acknowledgements: This work is partially supported under the framework of the "Cooperation 2011" project ATLANTAS (11_SYN_6_1937) funded from the Operational Program "Competitiveness and Entrepreneurship" (co-funded by the European Regional Development Fund (ERDF)) and managed by the Greek General Secretariat for Research and Technology.
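
    Beyond pan and zoom, the functions emphasized above are pinpointing points, lines and areas of interest on the projected model. A minimal dispatcher that turns recognised (gesture, 3D hand position) pairs into GIS actions is sketched below; the gesture names, the terrain-plane projection and the actions are placeholders, not the project's actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class GisSession:
    """Toy GIS interaction state driven by recognised gestures."""
    zoom: float = 1.0
    selection: list = field(default_factory=list)

    def handle(self, gesture, hand_xyz):
        # Gesture names are placeholders for whatever the sensor SDK reports.
        if gesture == "pinch_out":
            self.zoom *= 1.2
        elif gesture == "pinch_in":
            self.zoom /= 1.2
        elif gesture == "point":
            # Project the 3-D hand position onto the terrain plane (z = 0 here)
            # to pinpoint a location of interest on the model.
            self.selection.append((hand_xyz[0], hand_xyz[1]))
        elif gesture == "close_fist":
            self.selection.clear()
        return self

session = GisSession()
for g, pos in [("point", (1.5, 2.0, 0.8)), ("pinch_out", (0, 0, 0)), ("point", (3.0, 1.2, 0.7))]:
    session.handle(g, pos)
print(session.zoom, session.selection)
```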

  6. Project KEWL: Kinect Engineering With Learning

    NASA Technical Reports Server (NTRS)

    Norris, Jeff; Goza, Sharon; Shores, David

    2011-01-01

    Project KEWL is a joint project between NASA/JPL and NASA/JSC to stimulate interest of children in Science, Technology, Engineering and Math (STEM) and bring the NASA space exploration experience to the classroom, museum and ultimately the living room. Using the Kinect game controller KEWL allows children to engage in NASA's missions in a fundamentally new way. KEWL allows children to experiment with gravity on Mars and the Moon; navigate through the International Space Station; fix a torn solar array on the ISS; drive a robot on Mars; visit an asteroid; learn about the differences in gravity on different planets and control Robonaut 2 using their body as the input device. Project KEWL complements NASA's outreach investments in television, mobile platforms and the web by engaging the public through the rapidly expanding medium of console gaming. In 2008, 97% of teenagers played video games and 86% played on a home gaming console. (source: http://pewresearch.org/pubs/953/) As of March 2011, there have been more than 10 million Kinects sold. (source: http://www.itproportal.com/2011/03/10/kinect-record-breaking-sales-figures-top-10-million/) Project KEWL interacts with children on a platform on which they spend much of their time and teaches them information about NASA while they are having fun. Project KEWL progressed from completely custom C++ code written in house to using a commercial game engine. The art work and 3D geometry models come from existing engineering work or are created by the KEWL development team. Six different KEWL applications have been demonstrated at nine different venues including schools, museums, conferences, and NASA outreach events. These demonstrations have allowed the developers the chance to interact with players and observe the gameplay mechanics in action. The lessons learned were then incorporated into the subsequent versions of the applications.

  7. Cardiovascular Profile of Valbenazine: Analysis of Pooled Data from Three Randomized, Double-Blind, Placebo-Controlled Trials.

    PubMed

    Thai-Cuarto, Dao; O'Brien, Christopher F; Jimenez, Roland; Liang, Grace S; Burke, Joshua

    2018-04-01

    Valbenazine is a novel vesicular monoamine transporter 2 inhibitor approved for the treatment of tardive dyskinesia in adults. Using data from double-blind, placebo-controlled trials, analyses were conducted to evaluate the cardiovascular effects of once-daily valbenazine in patients with a psychiatric disorder who developed tardive dyskinesia after exposure to a dopamine-blocking medication. Data were pooled from three 6-week, double-blind, placebo-controlled trials: KINECT (NCT01688037), KINECT 2 (NCT01733121), and KINECT 3 (NCT02274558). Data from the 42-week valbenazine extension period of KINECT 3 were also analyzed. Outcomes of interest included cardiovascular-related treatment-emergent adverse events, vital sign measurements, and electrocardiogram parameters. The pooled safety population included 400 participants (placebo, n = 178; valbenazine 40 mg/day, n = 110; valbenazine 80 mg/day, n = 112). A history of cardiac disorders was present in 11.8% of participants, and 74.3% were taking a concomitant medication with known potential for QT prolongation. Mean changes from baseline to week 6 in supine vital signs and QTcF (Fridericia correction) were as follows for placebo, valbenazine 40 mg/day, and valbenazine 80 mg/day, respectively: systolic blood pressure (0.2, -2.1, -1.8 mmHg), diastolic blood pressure (-0.1, -1.6, -1.2 mmHg), heart rate (-1.7, -2.2, -1.7 bpm), QTcF interval (1.2, 1.1, 2.1 ms); all p > 0.05 for valbenazine vs. placebo. No statistically significant differences were observed between placebo and valbenazine in cardiovascular-related, treatment-emergent adverse events. No notable additional effects on cardiovascular outcomes were found with up to 48 weeks of valbenazine treatment. Results from double-blind, placebo-controlled trials showed no apparent difference between valbenazine and placebo on cardiovascular outcomes. No additional cardiovascular risk was detected during a longer extension study with valbenazine.

  8. Microsoft Biology Initiative: .NET Bioinformatics Platform and Tools

    PubMed Central

    Diaz Acosta, B.

    2011-01-01

    The Microsoft Biology Initiative (MBI) is an effort in Microsoft Research to bring new technology and tools to the area of bioinformatics and biology. This initiative is comprised of two primary components, the Microsoft Biology Foundation (MBF) and the Microsoft Biology Tools (MBT). MBF is a language-neutral bioinformatics toolkit built as an extension to the Microsoft .NET Framework—initially aimed at the area of Genomics research. Currently, it implements a range of parsers for common bioinformatics file formats; a range of algorithms for manipulating DNA, RNA, and protein sequences; and a set of connectors to biological web services such as NCBI BLAST. MBF is available under an open source license, and executables, source code, demo applications, documentation and training materials are freely downloadable from http://research.microsoft.com/bio. MBT is a collection of tools that enable biology and bioinformatics researchers to be more productive in making scientific discoveries.

  9. Evaluation of an innovative sensor for measuring global and diffuse irradiance, and sunshine duration

    NASA Astrophysics Data System (ADS)

    Muneer, Tariq; Zhang, Xiaodong; Wood, John

    2002-03-01

    Delta-T Device Limited of Cambridge, UK have developed an integrated device which enables simultaneous measurement of horizontal global and diffuse irradiance as well as sunshine status at any given instant in time. To evaluate the performance of this new device, horizontal global and diffuse irradiance data were simultaneously collected from the Delta-T device and Napier University's CIE First Class daylight monitoring station. To enable a cross-check, a Kipp & Zonen CM11 global irradiance sensor has also been installed in Currie, south-west Edinburgh. Sunshine duration data have been recorded at the Royal Botanical Garden, Edinburgh using their Campbell-Stokes recorder. Hourly data sets were analysed and plotted within the Microsoft Excel environment. Using the common statistical measures Root Mean Square Difference (RMSD) and Mean Bias Difference (MBD), the accuracy of the Delta-T sensor's measurements of horizontal global and diffuse irradiance and sunshine duration was investigated. The results show a good performance on the part of the Delta-T device for the measurement of global and diffuse irradiance. The sunshine measurements were found to have a lack of consistency and accuracy. It is argued herein that the distance between the respective sensors and the poor accuracy of the Campbell-Stokes recorder may be contributing factors to this phenomenon.
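
    The agreement statistics used here, Root Mean Square Difference (RMSD) and Mean Bias Difference (MBD), are straightforward to reproduce for paired hourly readings. The sketch below uses synthetic irradiance values and also reports both statistics as a percentage of the mean reference value, a common convention; it is a generic illustration, not the authors' analysis.

```python
import numpy as np

def rmsd(test, reference):
    """Root Mean Square Difference between paired measurements."""
    d = np.asarray(test, float) - np.asarray(reference, float)
    return float(np.sqrt(np.mean(d ** 2)))

def mbd(test, reference):
    """Mean Bias Difference; positive means the test sensor reads high on average."""
    return float(np.mean(np.asarray(test, float) - np.asarray(reference, float)))

# Synthetic hourly global irradiance (W/m2): test sensor vs. reference pyranometer.
reference = np.array([120.0, 340.0, 560.0, 610.0, 480.0, 220.0])
test = reference * 1.02 + np.array([-5.0, 8.0, -12.0, 6.0, 4.0, -3.0])
mean_ref = reference.mean()
print(f"RMSD = {rmsd(test, reference):.1f} W/m2 ({100 * rmsd(test, reference) / mean_ref:.1f}%)")
print(f"MBD  = {mbd(test, reference):.1f} W/m2 ({100 * mbd(test, reference) / mean_ref:.1f}%)")
```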

  10. Microsoft in Southeast Europe: A Conversation with Goran Radman

    ERIC Educational Resources Information Center

    Pendergast, William; Frayne, Colette; Kelley, Patricia

    2009-01-01

    Goran Radman (GR) joined Microsoft in 1996 and served until Fall 2008 as Microsoft Chairman, Southeast Europe (SEE) and Chairman, East and Central Europe (ECEE). Based in Croatia, where he enjoys sailing the Adriatic coast and islands, he spoke with the authors during 2008 and 2009 about his experience launching Microsoft's commercial presence in…

  11. Microsoft's Tom Corddry on Multimedia, the Information Superhighway and the Future of Online.

    ERIC Educational Resources Information Center

    Herther, Nancy K.

    1994-01-01

    Tom Corddry, Microsoft Corporation's Creative Director for the Consumer Division, is interviewed about the Microsoft Home line of products and the development of related CD-ROM and multimedia products. Reasons for Microsoft's entry into the content market and its challenges, the market's future, and the company's interest in developing online…

  12. New Approaches to Exciting Exergame-Experiences for People with Motor Function Impairments.

    PubMed

    Eckert, Martina; Gómez-Martinho, Ignacio; Meneses, Juan; Martínez, José-Fernán

    2017-02-12

    The work presented here suggests new ways to tackle exergames for physical rehabilitation and to improve the players' immersion and involvement. The primary (but not exclusive) purpose is to increase the motivation of children and adolescents with severe physical impairments, for doing their required exercises while playing. The proposed gaming environment is based on the Kinect sensor and the Blender Game Engine. A middleware has been implemented that efficiently transmits the data from the sensor to the game. Inside the game, different newly proposed mechanisms have been developed to distinguish pure exercise-gestures from other movements used to control the game (e.g., opening a menu). The main contribution is the amplification of weak movements, which allows the physically impaired to have similar gaming experiences as the average population. To test the feasibility of the proposed methods, four mini-games were implemented and tested by a group of 11 volunteers with different disabilities, most of them bound to a wheelchair. Their performance has also been compared to that of a healthy control group. Results are generally positive and motivating, although there is much to do to improve the functionalities. There is a major demand for applications that help to include disabled people in society and to improve their life conditions. This work will contribute towards providing them with more fun during exercise.

  13. New Approaches to Exciting Exergame-Experiences for People with Motor Function Impairments

    PubMed Central

    Eckert, Martina; Gómez-Martinho, Ignacio; Meneses, Juan; Martínez, José-Fernán

    2017-01-01

    The work presented here suggests new ways to tackle exergames for physical rehabilitation and to improve the players’ immersion and involvement. The primary (but not exclusive) purpose is to increase the motivation of children and adolescents with severe physical impairments, for doing their required exercises while playing. The proposed gaming environment is based on the Kinect sensor and the Blender Game Engine. A middleware has been implemented that efficiently transmits the data from the sensor to the game. Inside the game, different newly proposed mechanisms have been developed to distinguish pure exercise-gestures from other movements used to control the game (e.g., opening a menu). The main contribution is the amplification of weak movements, which allows the physically impaired to have similar gaming experiences as the average population. To test the feasibility of the proposed methods, four mini-games were implemented and tested by a group of 11 volunteers with different disabilities, most of them bound to a wheelchair. Their performance has also been compared to that of a healthy control group. Results are generally positive and motivating, although there is much to do to improve the functionalities. There is a major demand for applications that help to include disabled people in society and to improve their life conditions. This work will contribute towards providing them with more fun during exercise. PMID:28208682
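
    The main contribution highlighted in both versions of this abstract is the amplification of weak movements so that players with a limited range of motion can reach in-game targets. One simple way to express that idea, sketched below, is to scale each joint's displacement from a calibrated rest pose by a per-player gain and clamp the result; the gain, the clamp radius and the function name are illustrative assumptions, not the authors' middleware.

```python
import numpy as np

def amplify_pose(joints, rest_pose, gain=2.5, max_reach=0.8):
    """Scale joint displacements from the player's rest pose by `gain`, clamping
    the result so the amplified motion stays within a plausible reach.

    joints, rest_pose : arrays of shape (n_joints, 3) in metres (e.g. from Kinect)
    gain              : per-player amplification factor (assumed found during calibration)
    """
    displacement = joints - rest_pose
    amplified = rest_pose + gain * displacement
    # Clamp each joint to within max_reach metres of its rest position.
    offset = amplified - rest_pose
    norms = np.linalg.norm(offset, axis=1, keepdims=True)
    scale = np.minimum(1.0, max_reach / np.maximum(norms, 1e-9))
    return rest_pose + offset * scale

rest = np.zeros((2, 3))
weak_move = np.array([[0.05, 0.10, 0.0], [0.02, 0.0, 0.03]])  # small real movement
print(amplify_pose(rest + weak_move, rest, gain=3.0))          # movement seen by the game
```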

  14. Seamonster: A Smart Sensor Web in Southeast Alaska

    NASA Astrophysics Data System (ADS)

    Fatland, D. R.; Heavner, M. J.; Hood, E.; Connor, C.; Nagorski, S.

    2006-12-01

    The NASA Research Opportunities in Space and Earth Science (ROSES) program is supporting a wireless sensor network project as part of its Advanced Information Systems Technology "Smart Sensor Web" initiative. The project, entitled Seamonster (for SouthEast Alaska MONitoring Network for Science, Telecomm, and Education Research), is led by the University of Alaska Southeast (Juneau) in collaboration with Microsoft Vexcel in Boulder, Colorado. This paper describes both the data acquisition components and science research objectives of Seamonster. The underlying data acquisition concept is to facilitate geophysics data acquisition by providing a wireless backbone for data recovery. Other researchers would be encouraged to emplace their own sensors together with short-range wireless (ZigBee, Bluetooth, etc.). Through a common protocol, the backbone will receive data from these sensors and relay them to a wired server. This means that the investigator can receive their data via email on a daily basis, thereby cutting costs and allowing sensor health to be monitored. With environmental hardening and fairly high bandwidth and long range (100 kbps/50 km to 5 Mbps/15 km per hop), the network is intended to cover large areas and operate in harsh environments. Low-power sensors and intelligent power management within the backbone are the dual ideas used to contend with typical power/cost/data dilemmas. Seamonster science will focus over the next three years on hydrology and glaciology in a succession of valleys near Juneau in various stages of deglaciation, in effect providing a synopsis of a millennium-timescale process in a single moment. The instrumentation will include GPS, geophones, digital photography, met stations, and a suite of stream state and water quality sensors. Initial focus is on the Lemon Creek watershed with expansion to follow in subsequent years. The project will ideally expand to include marine and biological monitoring components.

  15. 75 FR 14401 - Amendment of Certain of the Commission's Rules of Practice and Procedure and Rules of Commission...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-25

    ... were created, such as Microsoft Excel, Microsoft Word, or Microsoft PowerPoint ("native format")? We... (condensed) or expanded (detailed) format Export search results to Excel or PDF As noted above, system is..., Microsoft Word ".doc" format or non-copy protected text-searchable ".pdf" format)? Should submissions...

  16. Using Microsoft Access: A How-To-Do-It Manual for Librarians. How-To-Do-It Manuals for Librarians, Number 76.

    ERIC Educational Resources Information Center

    Butler, E. Sonny

    Much of what librarians do today requires adeptness in creating and manipulating databases. Many new computers bought by libraries every year come packaged with Microsoft Office and include Microsoft Access. This database program features a seamless interface between Microsoft Office's other programs like Word, Excel, and PowerPoint. This book…

  17. A reduced-dimensionality approach to uncovering dyadic modes of body motion in conversations.

    PubMed

    Gaziv, Guy; Noy, Lior; Liron, Yuvalal; Alon, Uri

    2017-01-01

    Face-to-face conversations are central to human communication and a fascinating example of joint action. Beyond verbal content, one of the primary ways in which information is conveyed in conversations is body language. Body motion in natural conversations has been difficult to study precisely due to the large number of coordinates at play. There is a need for fresh approaches to analyze and understand the data, in order to ask whether dyads show basic building blocks of coupled motion. Here we present a method for analyzing body motion during joint action using depth-sensing cameras, and use it to analyze a sample of scientific conversations. Our method consists of three steps: defining modes of body motion of individual participants, defining dyadic modes made of combinations of these individual modes, and lastly defining motion motifs as dyadic modes that occur significantly more often than expected given the single-person motion statistics. As a proof-of-concept, we analyze the motion of 12 dyads of scientists measured using two Microsoft Kinect cameras. In our sample, we find that out of many possible modes, only two were motion motifs: synchronized parallel torso motion in which the participants swayed from side to side in sync, and still segments where neither person moved. We find evidence of dyad individuality in the use of motion modes. For a randomly selected subset of 5 dyads, this individuality was maintained for at least 6 months. The present approach to simplify complex motion data and to define motion motifs may be used to understand other joint tasks and interactions. The analysis tools developed here and the motion dataset are publicly available.
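
    The three analysis steps described above (individual modes, dyadic modes as combinations of them, and motifs that occur more often than expected from the single-person statistics) can be illustrated compactly once each person's motion has been discretized into mode labels. The sketch below works on synthetic labels and an independence baseline; it is a generic reconstruction, not the authors' pipeline.

```python
import numpy as np

def motion_motifs(modes_a, modes_b, n_modes, factor=1.5):
    """Find dyadic modes that occur more often than expected from the single-person
    mode frequencies (independence assumption).

    modes_a, modes_b : per-frame mode labels (ints in [0, n_modes)) for the two partners
    factor           : how many times above the expected rate counts as a motif
    """
    n = len(modes_a)
    joint = np.zeros((n_modes, n_modes))
    for a, b in zip(modes_a, modes_b):
        joint[a, b] += 1
    joint /= n
    p_a = np.bincount(modes_a, minlength=n_modes) / n
    p_b = np.bincount(modes_b, minlength=n_modes) / n
    expected = np.outer(p_a, p_b)
    return [(a, b) for a in range(n_modes) for b in range(n_modes)
            if expected[a, b] > 0 and joint[a, b] > factor * expected[a, b]]

# Synthetic dyad: mode 0 = still, 1 = sway left, 2 = sway right; partners often move in sync.
rng = np.random.default_rng(1)
a = rng.integers(0, 3, 2000)
b = np.where(rng.random(2000) < 0.6, a, rng.integers(0, 3, 2000))  # 60% imitation
print(motion_motifs(a, b, n_modes=3))  # the synchronized pairs (0,0), (1,1), (2,2) emerge
```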

  18. Feedback control of arm movements using Neuro-Muscular Electrical Stimulation (NMES) combined with a lockable, passive exoskeleton for gravity compensation

    PubMed Central

    Klauer, Christian; Schauer, Thomas; Reichenfelser, Werner; Karner, Jakob; Zwicker, Sven; Gandolla, Marta; Ambrosini, Emilia; Ferrante, Simona; Hack, Marco; Jedlitschka, Andreas; Duschau-Wicke, Alexander; Gföhler, Margit; Pedrocchi, Alessandra

    2014-01-01

    Within the European project MUNDUS, an assistive framework was developed for the support of arm and hand functions during daily life activities in severely impaired people. This contribution aims at designing a feedback control system for Neuro-Muscular Electrical Stimulation (NMES) to enable reaching functions in people with no residual voluntary control of the arm and shoulder due to high level spinal cord injury. NMES is applied to the deltoids and the biceps muscles and integrated with a three degrees of freedom (DoFs) passive exoskeleton, which partially compensates gravitational forces and allows each DoF to be locked. The user is able to choose the target hand position and to trigger actions using an eyetracker system. The target position is selected by using the eyetracker and determined by a marker-based tracking system using Microsoft Kinect. A central controller, i.e., a finite state machine, issues a sequence of basic movement commands to the real-time arm controller. The NMES control algorithm sequentially controls each joint angle while locking the other DoFs. Daily activities, such as drinking, brushing hair, pushing an alarm button, etc., can be supported by the system. The robust and easily tunable control approach was evaluated with five healthy subjects during a drinking task. Subjects were asked to remain passive and to allow NMES to induce the movements. In all of them, the controller was able to perform the task, and a mean hand positioning error of less than five centimeters was achieved. The average total time duration for moving the hand from a rest position to a drinking cup, for moving the cup to the mouth and back, and for finally returning the arm to the rest position was 71 s. PMID:25228853

  19. Feedback control of arm movements using Neuro-Muscular Electrical Stimulation (NMES) combined with a lockable, passive exoskeleton for gravity compensation.

    PubMed

    Klauer, Christian; Schauer, Thomas; Reichenfelser, Werner; Karner, Jakob; Zwicker, Sven; Gandolla, Marta; Ambrosini, Emilia; Ferrante, Simona; Hack, Marco; Jedlitschka, Andreas; Duschau-Wicke, Alexander; Gföhler, Margit; Pedrocchi, Alessandra

    2014-01-01

    Within the European project MUNDUS, an assistive framework was developed for the support of arm and hand functions during daily life activities in severely impaired people. This contribution aims at designing a feedback control system for Neuro-Muscular Electrical Stimulation (NMES) to enable reaching functions in people with no residual voluntary control of the arm and shoulder due to high level spinal cord injury. NMES is applied to the deltoids and the biceps muscles and integrated with a three degrees of freedom (DoFs) passive exoskeleton, which partially compensates gravitational forces and allows each DoF to be locked. The user is able to choose the target hand position and to trigger actions using an eyetracker system. The target position is selected by using the eyetracker and determined by a marker-based tracking system using Microsoft Kinect. A central controller, i.e., a finite state machine, issues a sequence of basic movement commands to the real-time arm controller. The NMES control algorithm sequentially controls each joint angle while locking the other DoFs. Daily activities, such as drinking, brushing hair, pushing an alarm button, etc., can be supported by the system. The robust and easily tunable control approach was evaluated with five healthy subjects during a drinking task. Subjects were asked to remain passive and to allow NMES to induce the movements. In all of them, the controller was able to perform the task, and a mean hand positioning error of less than five centimeters was achieved. The average total time duration for moving the hand from a rest position to a drinking cup, for moving the cup to the mouth and back, and for finally returning the arm to the rest position was 71 s.
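
    The central controller is described as a finite state machine that issues basic movement commands while the exoskeleton locks the joints not currently being stimulated. A stripped-down sketch of that supervisory logic is given below; the state names, lock/stimulate commands and completion test are illustrative placeholders, not the MUNDUS implementation.

```python
# Minimal finite-state-machine sketch of the supervisory "drinking task" logic
# described in the abstract. States, commands and the completion test are
# placeholders, not the MUNDUS implementation.

TASK_SEQUENCE = [
    ("reach_cup",    {"lock": ["elbow", "wrist"], "stimulate": "shoulder"}),
    ("lift_cup",     {"lock": ["shoulder", "wrist"], "stimulate": "elbow"}),
    ("cup_to_mouth", {"lock": ["shoulder", "wrist"], "stimulate": "elbow"}),
    ("return_cup",   {"lock": ["shoulder", "wrist"], "stimulate": "elbow"}),
    ("rest",         {"lock": ["shoulder", "elbow", "wrist"], "stimulate": None}),
]

class ArmTaskFSM:
    def __init__(self, sequence):
        self.sequence = sequence
        self.index = 0

    @property
    def state(self):
        return self.sequence[self.index][0]

    def command(self):
        """Command sent to the real-time arm controller for the current state."""
        return self.sequence[self.index][1]

    def step(self, target_reached):
        """Advance to the next state once the low-level controller reports that
        the current joint-angle target has been reached."""
        if target_reached and self.index < len(self.sequence) - 1:
            self.index += 1
        return self.state

fsm = ArmTaskFSM(TASK_SEQUENCE)
print(fsm.state, fsm.command())
fsm.step(target_reached=True)
print(fsm.state, fsm.command())
```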

  20. Motion detection technology as a tool for cardiopulmonary resuscitation (CPR) quality training: a randomised crossover mannequin pilot study.

    PubMed

    Semeraro, Federico; Frisoli, Antonio; Loconsole, Claudio; Bannò, Filippo; Tammaro, Gaetano; Imbriaco, Guglielmo; Marchetti, Luca; Cerchiari, Erga L

    2013-04-01

    Outcome after cardiac arrest is dependent on the quality of chest compressions (CC). A great number of devices have been developed to provide guidance during CPR. The present study evaluates a new CPR feedback system (Mini-VREM: Mini-Virtual Reality Enhanced Mannequin) designed to improve CC during training. The Mini-VREM system consists of a Kinect(®) (Microsoft, Redmond, WA, USA) motion sensing device and specifically developed software to provide audio-visual feedback. Mini-VREM was connected to a commercially available mannequin (Laerdal Medical, Stavanger, Norway). Eighty trainees (healthcare professionals and lay people) volunteered in this randomised crossover pilot study. All subjects performed a 2-min CC trial, a 1-h pause, and a second 2-min CC trial. The first group (FB/NFB, n=40) performed CC with Mini-VREM feedback (FB) followed by CC without feedback (NFB). The second group (NFB/FB, n=40) performed the trials in the reverse order. Primary endpoints: adequate compression (compression rate between 100 and 120 min(-1) and compression depth between 50 and 60 mm); compression rate within 100-120 min(-1); compression depth within 50-60 mm. When compared to the performance without feedback, with Mini-VREM feedback compressions were more adequate (FB 35.78% vs. NFB 7.27%, p<0.001) and more compressions achieved target rate (FB 72.04% vs. 31.42%, p<0.001) and target depth (FB 47.34% vs. 24.87%, p=0.002). The participants perceived the system to be easy to use with effective feedback. The Mini-VREM system was able to significantly improve CC performance by healthcare professionals and by lay people in a simulated CA scenario, in terms of compression rate and depth. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
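
    The feedback targets are explicit in this abstract: a compression rate between 100 and 120 min(-1) and a depth between 50 and 60 mm. Both quantities can be estimated per compression cycle from a chest-point displacement signal such as a depth sensor provides; the sketch below, with its naive peak detector and synthetic signal, is a generic illustration rather than the Mini-VREM software.

```python
import numpy as np

def cc_quality(displacement_mm, fs=30.0, min_depth_mm=20.0):
    """Estimate chest-compression rate and depth from a displacement signal.

    displacement_mm : downward chest displacement per frame (mm), 0 = fully released
    fs              : sampling rate of the sensor (frames per second)
    Returns (rate_per_min, mean_depth_mm, within_guidelines).
    """
    x = np.asarray(displacement_mm, float)
    # A compression peak: a local maximum that exceeds a minimal depth.
    peaks = [i for i in range(1, len(x) - 1)
             if x[i] >= x[i - 1] and x[i] > x[i + 1] and x[i] > min_depth_mm]
    if len(peaks) < 2:
        return 0.0, 0.0, False
    rate = 60.0 * fs / np.mean(np.diff(peaks))   # compressions per minute
    depth = float(np.mean(x[peaks]))             # mean peak depth in mm
    ok = 100.0 <= rate <= 120.0 and 50.0 <= depth <= 60.0
    return float(rate), depth, ok

# Synthetic signal: roughly 110 compressions/min at about 55 mm depth, sampled at 30 Hz.
t = np.arange(0, 10, 1 / 30.0)
signal = 27.5 * (1 - np.cos(2 * np.pi * (110 / 60.0) * t))
print(cc_quality(signal))
```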
