Sample records for user-based motion sensing

  1. Mobile user identity sensing using the motion sensor

    NASA Astrophysics Data System (ADS)

    Zhao, Xi; Feng, Tao; Xu, Lei; Shi, Weidong

    2014-05-01

    Employing mobile sensor data to recognize user behavioral activities has been well studied in recent years. However, adopting the data as a biometric modality has rarely been explored. Existing methods either used the data to recognize gait, which is considered a distinguishing identity feature, or segmented a specific kind of motion for user recognition, such as the phone pick-up motion. Since identity and motion gesture jointly affect motion data, fixing the gesture (walking or phone pick-up) definitively simplifies the identity sensing problem. However, it also introduces the complexity of gesture detection or requires a higher sampling rate from the motion sensor, which may drain the battery quickly and affect the usability of the phone. In general, it is still under investigation whether large-scale motion-based user authentication satisfies the accuracy requirement as a stand-alone biometric modality. In this paper, we propose a novel approach that uses motion sensor readings for user identity sensing. Instead of decoupling the user identity from a gesture, we reasonably assume users have their own distinguishing phone usage habits and extract the identity from fuzzy activity patterns, represented by a combination of body movements, whose signal chains span a relatively low frequency spectrum, and hand movements, whose signals span a relatively high frequency spectrum. Bayesian rules are then applied to analyze the dependency of different frequency components in the signals. During testing, a posterior probability of user identity given the observed chains can be computed for authentication. Tested on an accelerometer dataset with 347 users, our approach has demonstrated promising results.
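
    The posterior computation sketched below is a minimal, hedged illustration of the kind of Bayesian scoring this abstract describes: a naive-Bayes-style posterior over enrolled users given a chain of discretized motion features. The feature symbols, smoothing constant, and priors are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def posterior_user_given_chain(chain, likelihoods, priors):
        """chain: sequence of discrete feature symbols observed at test time.
        likelihoods[u][s]: P(symbol s | user u), learned from enrollment data.
        priors[u]: prior probability of user u."""
        log_post = {}
        for u, prior in priors.items():
            log_p = np.log(prior)
            for s in chain:
                log_p += np.log(likelihoods[u].get(s, 1e-9))  # smooth unseen symbols
            log_post[u] = log_p
        m = max(log_post.values())                   # normalize in log space for stability
        z = sum(np.exp(v - m) for v in log_post.values())
        return {u: np.exp(v - m) / z for u, v in log_post.items()}
    ```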

  2. A motion sensing-based framework for robotic manipulation.

    PubMed

    Deng, Hao; Xia, Zeyang; Weng, Shaokui; Gan, Yangzhou; Fang, Peng; Xiong, Jing

    2016-01-01

    To date, outside of controlled environments, robots normally perform manipulation tasks in cooperation with humans. This pattern requires robot operators to have extensive technical training on varied teach-pendant operating systems. Motion sensing technology, which enables human-machine interaction through a novel and natural gesture interface, inspired us to adopt this user-friendly and straightforward operation mode for robotic manipulation. Thus, in this paper, we present a motion sensing-based framework for robotic manipulation, which recognizes gesture commands captured from a motion sensing input device and drives the actions of robots. For compatibility, a general hardware interface layer was also developed in the framework. Simulation and physical experiments have been conducted for preliminary validation. The results show that the proposed framework is an effective approach for general robotic manipulation with motion sensing control.

  3. Method and System for Physiologically Modulating Videogames and Simulations which Use Gesture and Body Image Sensing Control Input Devices

    NASA Technical Reports Server (NTRS)

    Pope, Alan T. (Inventor); Stephens, Chad L. (Inventor); Habowski, Tyler (Inventor)

    2017-01-01

    Method for physiologically modulating videogames and simulations includes utilizing input from a motion-sensing video game system and input from a physiological signal acquisition device. The inputs from the physiological signal sensors are utilized to change the response of a user's avatar to inputs from the motion-sensing system. The motion-sensing system comprises a 3D sensor system providing full-body 3D motion capture of the user's body. This arrangement encourages health-enhancing physiological self-regulation skills or therapeutic amplification of healthful physiological characteristics. The system provides increased motivation for users to utilize biofeedback as may be desired for treatment of various conditions.
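
    As a rough sketch of the modulation idea in this patent family (not the patented method itself), the avatar's response to motion-capture input can be scaled by a normalized physiological index, so that better self-regulation yields a more responsive avatar. The gain law and index range below are illustrative assumptions.

    ```python
    def modulated_avatar_input(motion_input, physio_index, min_gain=0.2):
        """motion_input: raw displacement from the 3D motion-capture system.
        physio_index: 0.0 (far from target physiological state) .. 1.0 (at target).
        Returns the displacement actually applied to the avatar."""
        physio_index = max(0.0, min(1.0, physio_index))    # clamp to [0, 1]
        gain = min_gain + (1.0 - min_gain) * physio_index  # linear gain schedule (assumption)
        return gain * motion_input
    ```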

  4. The Sense-It App: A Smartphone Sensor Toolkit for Citizen Inquiry Learning

    ERIC Educational Resources Information Center

    Sharples, Mike; Aristeidou, Maria; Villasclaras-Fernández, Eloy; Herodotou, Christothea; Scanlon, Eileen

    2017-01-01

    The authors describe the design and formative evaluation of a sensor toolkit for Android smartphones and tablets that supports inquiry-based science learning. The Sense-it app enables a user to access all the motion, environmental and position sensors available on a device, linking these to a website for shared crowd-sourced investigations. The…

  5. Physiologically Modulating Videogames or Simulations which use Motion-Sensing Input Devices

    NASA Technical Reports Server (NTRS)

    Pope, Alan T. (Inventor); Stephens, Chad L. (Inventor); Blanson, Nina Marie (Inventor)

    2014-01-01

    New types of controllers allow players to make inputs to a video game or simulation by moving the entire controller itself. This capability is typically accomplished using a wireless input device having accelerometers, gyroscopes, and an infrared LED tracking camera. The present invention exploits these wireless motion-sensing technologies to modulate the player's movement inputs to the videogame based upon physiological signals. Such biofeedback-modulated video games train valuable mental skills beyond eye-hand coordination. These psychophysiological training technologies enhance the personal improvement, not just the diversion, of the user.

  6. Compact Hip-Force Sensor for a Gait-Assistance Exoskeleton System.

    PubMed

    Choi, Hyundo; Seo, Keehong; Hyung, Seungyong; Shim, Youngbo; Lim, Soo-Chul

    2018-02-13

    In this paper, we propose a compact force sensor system for a hip-mounted exoskeleton for seniors with difficulty walking due to muscle weakness. It senses and monitors the force and power delivered by the exoskeleton for motion control and for taking urgent safety action. Two FSR (force-sensitive resistor) sensors are used to measure the assistance force while the user is walking. The sensor system directly measures the interaction force between the exoskeleton and the lower limb of the user, instead of using a previously reported force-sensing method that estimated the hip assistance force from the motor current and lookup tables. Furthermore, the sensor system has the advantage of generating torque in the walking-assist actuator based on the directly measured hip-assistance force. Thus, the gait-assistance exoskeleton system can control the power and torque delivered to the user. The force-sensing structure is designed to decouple the force caused by hip motion from forces in other directions, so that only that force is measured. We confirmed through an experiment with a real system that the hip-assistance force could be measured with the proposed prototype compact force sensor attached to a thigh frame.

  7. Compact Hip-Force Sensor for a Gait-Assistance Exoskeleton System

    PubMed Central

    Choi, Hyundo; Seo, Keehong; Hyung, Seungyong; Shim, Youngbo; Lim, Soo-Chul

    2018-01-01

    In this paper, we propose a compact force sensor system for a hip-mounted exoskeleton for seniors with difficulty walking due to muscle weakness. It senses and monitors the force and power delivered by the exoskeleton for motion control and for taking urgent safety action. Two FSR (force-sensitive resistor) sensors are used to measure the assistance force while the user is walking. The sensor system directly measures the interaction force between the exoskeleton and the lower limb of the user, instead of using a previously reported force-sensing method that estimated the hip assistance force from the motor current and lookup tables. Furthermore, the sensor system has the advantage of generating torque in the walking-assist actuator based on the directly measured hip-assistance force. Thus, the gait-assistance exoskeleton system can control the power and torque delivered to the user. The force-sensing structure is designed to decouple the force caused by hip motion from forces in other directions, so that only that force is measured. We confirmed through an experiment with a real system that the hip-assistance force could be measured with the proposed prototype compact force sensor attached to a thigh frame. PMID:29438300

  8. Physiologically Modulating Videogames or Simulations which Use Motion-Sensing Input Devices

    NASA Technical Reports Server (NTRS)

    Blanson, Nina Marie (Inventor); Stephens, Chad L. (Inventor); Pope, Alan T. (Inventor)

    2017-01-01

    New types of controllers allow a player to make inputs to a video game or simulation by moving the entire controller itself, by gesturing, or by moving the player's body in whole or in part. This capability is typically accomplished using a wireless input device having accelerometers, gyroscopes, and a camera. The present invention exploits these wireless motion-sensing technologies to modulate the player's movement inputs to the videogame based upon physiological signals. Such biofeedback-modulated video games train valuable mental skills beyond eye-hand coordination. These psychophysiological training technologies enhance the personal improvement, not just the diversion, of the user.

  9. Freestanding Triboelectric Nanogenerator Enables Noncontact Motion-Tracking and Positioning.

    PubMed

    Guo, Huijuan; Jia, Xueting; Liu, Lue; Cao, Xia; Wang, Ning; Wang, Zhong Lin

    2018-04-24

    Recent development of interactive motion-tracking and positioning technologies is attracting increasing interest in many areas, such as wearable electronics, intelligent electronics, and the internet of things. For example, so-called somatosensory technology can afford users a strong sense of immersion and realism through their consistent interaction with the game. Here, we report a noncontact self-powered positioning and motion-tracking system based on a freestanding triboelectric nanogenerator (TENG). The TENG was fabricated with a nanoengineered surface operating in the contact-separation mode, using a freely moving human body part (hands or feet) as the trigger. The poly(tetrafluoroethylene) (PTFE) array-based interactive interface can give an output of 222 V from casual human motions. Different from previous works, this device also responds to small actions at heights of 0.01-0.11 m above the device, with a sensitivity of about 315 V·m⁻¹, so that noncontact mechanical sensing is possible. Such a distinctive noncontact sensing feature promotes a wide range of potential applications in smart interaction systems.

  10. Sensing human physiological response using wearable carbon nanotube-based fabrics

    NASA Astrophysics Data System (ADS)

    Wang, Long; Loh, Kenneth J.; Koo, Helen S.

    2016-04-01

    Flexible and wearable sensors for human monitoring have received increased attention. Besides detecting motion and physical activity, measuring human vital signals (e.g., respiration rate and body temperature) provides rich data for assessing subjects' physiological or psychological condition. Instead of using conventional, bulky sensing transducers, the objective of this study was to design and test a wearable, fabric-like sensing system. In particular, multi-walled carbon nanotube (MWCNT)-latex thin films of different MWCNT concentrations were first fabricated using spray coating. Freestanding MWCNT-latex films were then sandwiched between two layers of flexible fabric using iron-on adhesive to form the wearable sensor. Second, to characterize its strain sensing properties, the fabric sensors were subjected to uniaxial and cyclic tensile load tests, and they exhibited relatively stable electromechanical responses. Finally, the wearable sensors were placed on a human subject for monitoring simple motions and for validating their practical strain sensing performance. Overall, the wearable fabric sensor design exhibited advances such as flexibility, ease of fabrication, light weight, low cost, noninvasiveness, and user comfort.

  11. Intelligent Motion and Interaction Within Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R. (Editor); Slater, Mel (Editor); Alexander, Thomas (Editor)

    2007-01-01

    What makes virtual actors and objects in virtual environments seem real? How can the illusion of their reality be supported? What sorts of training or user-interface applications benefit from realistic user-environment interactions? These are some of the central questions that designers of virtual environments face. To be sure, simulation realism is not necessarily the major, or even a required, goal of a virtual environment intended to communicate specific information. But for some applications in entertainment, marketing, or aspects of vehicle simulation training, realism is essential. The following chapters will examine how a sense of truly interacting with dynamic, intelligent agents may arise in users of virtual environments. These chapters are based on presentations at the conference on Intelligent Motion and Interaction within Virtual Environments, which was held at University College London, U.K., 15-17 September 2003.

  12. An ice-motion tracking system at the Alaska SAR facility

    NASA Technical Reports Server (NTRS)

    Kwok, Ronald; Curlander, John C.; Pang, Shirley S.; Mcconnell, Ross

    1990-01-01

    An operational system for extracting ice-motion information from synthetic aperture radar (SAR) imagery is being developed as part of the Alaska SAR Facility. This geophysical processing system (GPS) will derive ice-motion information by automated analysis of image sequences acquired by radars on the European ERS-1, Japanese ERS-1, and Canadian RADARSAT remote sensing satellites. The algorithm consists of a novel combination of feature-based and area-based techniques for the tracking of ice floes that undergo translation and rotation between imaging passes. The system performs automatic selection of the image pairs for input to the matching routines using an ice-motion estimator. It is designed to have a daily throughput of ten image pairs. A description is given of the GPS system, including an overview of the ice-motion-tracking algorithm, the system architecture, and the ice-motion products that will be available for distribution to geophysical data users.
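
    The area-based half of such ice-floe tracking is commonly implemented as normalized cross-correlation of an image patch from one imaging pass against a search window in the next pass. The brute-force sketch below is a generic illustration of that step, not the GPS system's actual matcher; patch size and search extent are assumptions.

    ```python
    import numpy as np

    def ncc_displacement(patch, search):
        """patch: small template from the first image (2D array).
        search: larger window from the second image. Returns (dy, dx) of the best match."""
        ph, pw = patch.shape
        p = (patch - patch.mean()) / (patch.std() + 1e-12)
        best, best_pos = -np.inf, (0, 0)
        for y in range(search.shape[0] - ph + 1):
            for x in range(search.shape[1] - pw + 1):
                w = search[y:y + ph, x:x + pw]
                w = (w - w.mean()) / (w.std() + 1e-12)
                score = float((p * w).mean())        # normalized cross-correlation
                if score > best:
                    best, best_pos = score, (y, x)
        return best_pos
    ```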

  13. Wearable carbon nanotube-based fabric sensors for monitoring human physiological performance

    NASA Astrophysics Data System (ADS)

    Wang, Long; Loh, Kenneth J.

    2017-05-01

    A target application of wearable sensors is to detect human motion and to monitor physical activity for improving athletic performance and for delivering better physical therapy. In addition, measuring human vital signals (e.g., respiration rate and body temperature) provides rich information that can be used to assess a subject's physiological or psychological condition. This study aims to design a multifunctional, wearable, fabric-based sensing system. First, carbon nanotube (CNT)-based thin films were fabricated by spraying. Second, the thin films were integrated with stretchable fabrics to form the fabric sensors. Third, the strain and temperature sensing properties of sensors fabricated using different CNT concentrations were characterized. Furthermore, the sensors were demonstrated to detect human finger bending motions, so as to validate their practical strain sensing performance. Finally, to monitor human respiration, the fabric sensors were integrated with a chest band, which was worn directly by a human subject. Quantification of respiration rates was successfully achieved. Overall, the fabric sensors were characterized by advantages such as flexibility, ease of fabrication, light weight, low cost, noninvasiveness, and user comfort.

  14. Use of natural user interfaces in water simulations

    NASA Astrophysics Data System (ADS)

    Donchyts, G.; Baart, F.; van Dam, A.; Jagers, B.

    2013-12-01

    Conventional graphical user interfaces, used to edit input and present results of earth science models, have seen little innovation over the past two decades. In most cases model data are presented and edited using 2D projections, even when working with 3D data. The emergence of 3D motion sensing technologies, such as the Microsoft Kinect and LEAP Motion, opens new possibilities for user interaction by adding more degrees of freedom compared to the classical mouse and keyboard. Here we investigate how interaction with hydrodynamic numerical models can be improved using these new technologies. Our research hypothesis (H1) states that a properly designed 3D graphical user interface paired with a 3D motion sensor can significantly reduce the time required to set up and use numerical models. In this work we have used a LEAP Motion controller combined with the shallow water flow model engine D-Flow Flexible Mesh. Interacting with numerical model using hands

  15. Multimodal Excitatory Interfaces with Automatic Content Classification

    NASA Astrophysics Data System (ADS)

    Williamson, John; Murray-Smith, Roderick

    We describe a non-visual interface for displaying data on mobile devices, based around active exploration: devices are shaken, revealing the contents rattling around inside. This combines sample-based contact sonification with event playback vibrotactile feedback for a rich and compelling display which produces an illusion much like balls rattling inside a box. Motion is sensed from accelerometers, directly linking the motions of the user to the feedback they receive in a tightly closed loop. The resulting interface requires no visual attention and can be operated blindly with a single hand: it is reactive rather than disruptive. This interaction style is applied to the display of an SMS inbox. We use language models to extract salient features from text messages automatically. The output of this classification process controls the timbre and physical dynamics of the simulated objects. The interface gives a rapid semantic overview of the contents of an inbox, without compromising privacy or interrupting the user.

  16. Expansion of Smartwatch Touch Interface from Touchscreen to Around Device Interface Using Infrared Line Image Sensors.

    PubMed

    Lim, Soo-Chul; Shin, Jungsoon; Kim, Seung-Chan; Park, Joonah

    2015-07-09

    Touchscreen interaction has become a fundamental means of controlling mobile phones and smartwatches. However, the small form factor of a smartwatch limits the available interactive surface area. To overcome this limitation, we propose the expansion of the touch region of the screen to the back of the user's hand. We developed a touch module for sensing the touched finger position on the back of the hand using infrared (IR) line image sensors, based on the calibrated IR intensity and the maximum intensity region of an IR array. For a complete touch-sensing solution, a gyroscope installed in the smartwatch is used to read wrist gestures. The gyroscope incorporates a dynamic time warping gesture recognition algorithm for eliminating unintended touch inputs during the free motion of the wrist while wearing the smartwatch. The prototype of the developed sensing module was implemented in a commercial smartwatch, and it was confirmed that the sensed positional information of the finger touching the back of the hand could be used to control the smartwatch graphical user interface. Our system not only affords a novel experience for smartwatch users, but also provides a basis for developing other useful interfaces.
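
    A minimal dynamic-time-warping (DTW) comparison of the kind this abstract mentions for rejecting unintended wrist motions might look like the sketch below; the distance normalization, feature choice (a single angular-rate channel), and acceptance threshold are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """a, b: 1-D arrays of gyroscope angular-rate samples."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m] / (n + m)   # length-normalized warping distance

    def is_intentional(sample, template, threshold=0.5):
        """Accept a touch only if the wrist motion matches a stored gesture template."""
        return dtw_distance(sample, template) < threshold
    ```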

  17. A Motion Planning Approach to Automatic Obstacle Avoidance during Concentric Tube Robot Teleoperation

    PubMed Central

    Torres, Luis G.; Kuntz, Alan; Gilbert, Hunter B.; Swaney, Philip J.; Hendrick, Richard J.; Webster, Robert J.; Alterovitz, Ron

    2015-01-01

    Concentric tube robots are thin, tentacle-like devices that can move along curved paths and can potentially enable new, less invasive surgical procedures. Safe and effective operation of this type of robot requires that the robot’s shaft avoid sensitive anatomical structures (e.g., critical vessels and organs) while the surgeon teleoperates the robot’s tip. However, the robot’s unintuitive kinematics makes it difficult for a human user to manually ensure obstacle avoidance along the entire tentacle-like shape of the robot’s shaft. We present a motion planning approach for concentric tube robot teleoperation that enables the robot to interactively maneuver its tip to points selected by a user while automatically avoiding obstacles along its shaft. We achieve automatic collision avoidance by precomputing a roadmap of collision-free robot configurations based on a description of the anatomical obstacles, which are attainable via volumetric medical imaging. We also mitigate the effects of kinematic modeling error in reaching the goal positions by adjusting motions based on robot tip position sensing. We evaluate our motion planner on a teleoperated concentric tube robot and demonstrate its obstacle avoidance and accuracy in environments with tubular obstacles. PMID:26413381

  18. A Motion Planning Approach to Automatic Obstacle Avoidance during Concentric Tube Robot Teleoperation.

    PubMed

    Torres, Luis G; Kuntz, Alan; Gilbert, Hunter B; Swaney, Philip J; Hendrick, Richard J; Webster, Robert J; Alterovitz, Ron

    2015-05-01

    Concentric tube robots are thin, tentacle-like devices that can move along curved paths and can potentially enable new, less invasive surgical procedures. Safe and effective operation of this type of robot requires that the robot's shaft avoid sensitive anatomical structures (e.g., critical vessels and organs) while the surgeon teleoperates the robot's tip. However, the robot's unintuitive kinematics makes it difficult for a human user to manually ensure obstacle avoidance along the entire tentacle-like shape of the robot's shaft. We present a motion planning approach for concentric tube robot teleoperation that enables the robot to interactively maneuver its tip to points selected by a user while automatically avoiding obstacles along its shaft. We achieve automatic collision avoidance by precomputing a roadmap of collision-free robot configurations based on a description of the anatomical obstacles, which are attainable via volumetric medical imaging. We also mitigate the effects of kinematic modeling error in reaching the goal positions by adjusting motions based on robot tip position sensing. We evaluate our motion planner on a teleoperated concentric tube robot and demonstrate its obstacle avoidance and accuracy in environments with tubular obstacles.
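
    A hedged sketch of the precomputed-roadmap idea described in these two records follows: sample robot configurations, keep the collision-free ones, and connect nearby configurations whose connecting motion is also collision-free. The helpers `sample_config`, `fk` (forward kinematics), and `in_collision` are hypothetical stand-ins for the robot model and the imaging-derived obstacle description, not functions from the authors' system.

    ```python
    import numpy as np

    def edge_is_free(q1, q2, fk, in_collision, steps=10):
        """Check intermediate configurations along a straight-line interpolation."""
        for t in np.linspace(0.0, 1.0, steps):
            if in_collision(fk((1 - t) * q1 + t * q2)):
                return False
        return True

    def build_roadmap(sample_config, fk, in_collision, n_samples=500, k=8):
        """Precompute a roadmap of collision-free configurations and edges."""
        nodes = []
        while len(nodes) < n_samples:
            q = sample_config()
            if not in_collision(fk(q)):
                nodes.append(q)
        nodes = np.array(nodes)
        edges = {i: [] for i in range(len(nodes))}
        for i, q in enumerate(nodes):
            order = np.argsort(np.linalg.norm(nodes - q, axis=1))   # nearest neighbors
            for j in order[1:k + 1]:
                if edge_is_free(q, nodes[j], fk, in_collision):
                    edges[i].append(int(j))
        return nodes, edges
    ```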

  19. Force Rendering and its Evaluation of a Friction-Based Walking Sensation Display for a Seated User.

    PubMed

    Kato, Ginga; Kuroda, Yoshihiro; Kiyokawa, Kiyoshi; Takemura, Haruo

    2018-04-01

    Most existing locomotion devices that represent the sensation of walking target a user who is actually performing a walking motion. Here, we attempted to represent the walking sensation, especially a kinesthetic sensation and advancing feeling (the sense of moving forward) while the user remains seated. To represent the walking sensation using a relatively simple device, we focused on the force rendering and its evaluation of the longitudinal friction force applied on the sole during walking. Based on the measurement of the friction force applied on the sole during actual walking, we developed a novel friction force display that can present the friction force without the influence of body weight. Using performance evaluation testing, we found that the proposed method can stably and rapidly display friction force. Also, we developed a virtual reality (VR) walk-through system that is able to present the friction force through the proposed device according to the avatar's walking motion in a virtual world. By evaluating the realism, we found that the proposed device can represent a more realistic advancing feeling than vibration feedback.

  20. Hand-movement-based in-vehicle driver/front-seat passenger discrimination for centre console controls

    NASA Astrophysics Data System (ADS)

    Herrmann, Enrico; Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian

    2010-01-01

    Successful user discrimination in a vehicle environment may yield a reduction in the number of switches, thus significantly reducing costs while increasing user convenience. The personalization of individual controls permits conditional passenger-enable/driver-disable options, and vice versa, which may yield safety improvements. The authors propose a prototypic optical sensing system based on hand movement segmentation in near-infrared image sequences, implemented in an Audi A6 Avant. By analyzing the number of movements in special regions, the system recognizes the direction of the forearm and hand motion and decides whether the driver or the front-seat passenger touches a control. The experimental evaluation is performed independently for uniformly and non-uniformly illuminated video data as well as for the complete video data set, which includes both subsets. The general test results in error rates of up to 14.41% FPR / 16.82% FNR and 17.61% FPR / 14.77% FNR for driver and passenger, respectively. Finally, the authors discuss the causes of the most frequently occurring errors as well as the prospects and limitations of optical sensing for user discrimination in passenger compartments.

  1. Estimation of heart rate variability using a compact radiofrequency motion sensor.

    PubMed

    Sugita, Norihiro; Matsuoka, Narumi; Yoshizawa, Makoto; Abe, Makoto; Homma, Noriyasu; Otake, Hideharu; Kim, Junghyun; Ohtaki, Yukio

    2015-12-01

    Physiological indices that reflect autonomic nervous activity are considered useful for monitoring people's health on a daily basis. A number of such indices are derived from heart rate variability, which can be obtained by a radiofrequency (RF) motion sensor without physical contact with the user's body. However, the bulkiness of the RF motion sensors used in previous studies makes them unsuitable for home use. In this study, a new method to measure heart rate variability using a compact RF motion sensor that is small enough to fit in a user's shirt pocket is proposed. To extract a heart-rate-related component from the sensor signal, an algorithm that optimizes a digital filter based on the power spectral density of the signal is proposed. The signals of the RF motion sensor were measured for 29 subjects in the resting state, and their heart rate variability was estimated from the measured signals using the proposed method and a conventional method. The correlation coefficient between the true heart rate and the heart rate estimated by the proposed method was 0.69. Furthermore, the experimental results showed the viability of the RF sensor for monitoring autonomic nervous activity, although some improvements, such as controlling the direction of sensing, were necessary for stable measurement. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
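
    A hedged sketch of the general approach the abstract describes: estimate the power spectral density of the RF motion-sensor signal, locate the dominant peak inside a plausible heart-rate band, and band-pass filter around it before beat detection. The filter order, band limits, and half-width below are illustrative choices, not the paper's tuned values.

    ```python
    import numpy as np
    from scipy.signal import welch, butter, filtfilt

    def heart_component(signal, fs, band=(0.8, 2.0), half_width=0.2):
        """Return the band-passed heart-related component and the band-center frequency (Hz)."""
        f, pxx = welch(signal, fs=fs, nperseg=min(len(signal), 4 * int(fs)))
        mask = (f >= band[0]) & (f <= band[1])
        f0 = f[mask][np.argmax(pxx[mask])]            # dominant frequency in the heart band
        lo, hi = max(0.1, f0 - half_width), f0 + half_width
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, signal), f0
    ```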

  2. Master of Puppets: An Animation-by-Demonstration Computer Puppetry Authoring Framework

    NASA Astrophysics Data System (ADS)

    Cui, Yaoyuan; Mousas, Christos

    2018-03-01

    This paper presents Master of Puppets (MOP), an animation-by-demonstration framework that allows users to control the motion of virtual characters (puppets) in real time. In the first step, the user is asked to perform the actions that correspond to the character's motions. The user's actions are recorded, and a hidden Markov model is used to learn the temporal profile of the actions. During the runtime of the framework, the user controls the motions of the virtual character based on the specified activities. The advantage of the MOP framework is that it recognizes and follows the progress of the user's actions in real time. Based on the forward algorithm, the method predicts the evolution of the user's actions, which corresponds to the evolution of the character's motion. This method treats characters as puppets that can perform only one motion at a time; combinations of motion segments (motion synthesis), as well as the interpolation of individual motion sequences, are not provided as functionalities. By implementing the framework and presenting several computer puppetry scenarios, its efficiency and flexibility in animating virtual characters are demonstrated.
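
    The forward-algorithm step itself is standard and small enough to sketch; the recursion below is an assumption-laden illustration of how state probabilities could be updated as frames arrive, not the MOP implementation.

    ```python
    import numpy as np

    def forward_step(alpha, A, b_t):
        """One forward-algorithm update.
        alpha: current state probabilities (N,), A: transition matrix (N, N),
        b_t: per-state likelihood of the newest observation (N,)."""
        alpha = (alpha @ A) * b_t
        s = alpha.sum()
        return alpha / s if s > 0 else alpha

    # Usage sketch: start from the prior, call forward_step for each incoming frame,
    # and take argmax(alpha) as the current phase of the demonstrated action.
    ```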

  3. Development of excavator training simulator using leap motion controller

    NASA Astrophysics Data System (ADS)

    Fahmi, F.; Nainggolan, F.; Andayani, U.; Siregar, B.

    2018-03-01

    An excavator is a heavy machine used for many industrial purposes. Controlling an excavator is not easy: its operator has to be well trained in many skills to ensure that the machine is used safely, effectively, and efficiently. In this research, we propose a virtual reality excavator training simulator supported by a device called the Leap Motion Controller, which accepts finger and hand motions as input. The prototype is then developed in a virtual reality environment to give the user a more realistic sense of immersion.

  4. Expansion of Smartwatch Touch Interface from Touchscreen to Around Device Interface Using Infrared Line Image Sensors

    PubMed Central

    Lim, Soo-Chul; Shin, Jungsoon; Kim, Seung-Chan; Park, Joonah

    2015-01-01

    Touchscreen interaction has become a fundamental means of controlling mobile phones and smartwatches. However, the small form factor of a smartwatch limits the available interactive surface area. To overcome this limitation, we propose the expansion of the touch region of the screen to the back of the user's hand. We developed a touch module for sensing the touched finger position on the back of the hand using infrared (IR) line image sensors, based on the calibrated IR intensity and the maximum intensity region of an IR array. For a complete touch-sensing solution, a gyroscope installed in the smartwatch is used to read wrist gestures. The gyroscope incorporates a dynamic time warping gesture recognition algorithm for eliminating unintended touch inputs during the free motion of the wrist while wearing the smartwatch. The prototype of the developed sensing module was implemented in a commercial smartwatch, and it was confirmed that the sensed positional information of the finger touching the back of the hand could be used to control the smartwatch graphical user interface. Our system not only affords a novel experience for smartwatch users, but also provides a basis for developing other useful interfaces. PMID:26184202

  5. User-Independent Motion State Recognition Using Smartphone Sensors.

    PubMed

    Gu, Fuqiang; Kealy, Allison; Khoshelham, Kourosh; Shang, Jianga

    2015-12-04

    The recognition of locomotion activities (e.g., walking, running, still) is important for a wide range of applications such as indoor positioning, navigation, location-based services, and health monitoring. Recently, there has been growing interest in activity recognition using accelerometer data. However, when utilizing only acceleration-based features, it is difficult to differentiate varying vertical motion states from horizontal motion states, especially when conducting user-independent classification. In this paper, we also make use of the barometer newly built into modern smartphones, and propose a novel feature, the pressure derivative, computed from the barometer readings for user motion state recognition; it proves effective for distinguishing vertical motion states and does not depend on specific users' data. Seven types of motion states are defined and six commonly used classifiers are compared. In addition, we utilize the motion state history and the characteristics of people's motion to improve the classification accuracies of those classifiers. Experimental results show that by using the historical information and human motion characteristics, we can achieve user-independent motion state classification with an accuracy of up to 90.7%. In addition, we analyze the influence of the window size and smartphone pose on the accuracy.
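
    The pressure-derivative feature the abstract introduces is essentially the slope of the barometer reading over a short window, which separates vertical motion (stairs, elevators) from horizontal motion. The sketch below is an illustrative rendering of such a feature; the window handling and least-squares slope are assumptions, not the paper's exact definition.

    ```python
    import numpy as np

    def pressure_derivative(pressure_hpa, timestamps_s):
        """Least-squares slope of barometric pressure (hPa) over time (s) for one window.
        Roughly negative while ascending and positive while descending."""
        t = np.asarray(timestamps_s, dtype=float)
        t -= t[0]
        p = np.asarray(pressure_hpa, dtype=float)
        return float(np.polyfit(t, p, 1)[0])   # hPa per second
    ```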

  6. Precise tracking of remote sensing satellites with the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Yunck, Thomas P.; Wu, Sien-Chong; Wu, Jiun-Tsong; Thornton, Catherine L.

    1990-01-01

    The Global Positioning System (GPS) can be applied in a number of ways to track remote sensing satellites at altitudes below 3000 km with accuracies of better than 10 cm. All techniques use a precise global network of GPS ground receivers operating in concert with a receiver aboard the user satellite, and all estimate the user orbit, GPS orbits, and selected ground locations simultaneously. The GPS orbit solutions are always dynamic, relying on the laws of motion, while the user orbit solution can range from purely dynamic to purely kinematic (geometric). Two variations show considerable promise. The first one features an optimal synthesis of dynamics and kinematics in the user solution, while the second introduces a novel gravity model adjustment technique to exploit data from repeat ground tracks. These techniques, to be demonstrated on the Topex/Poseidon mission in 1992, will offer subdecimeter tracking accuracy for dynamically unpredictable satellites down to the lowest orbital altitudes.

  7. A Drone Remote Sensing for Virtual Reality Simulation System for Forest Fires: Semantic Neural Network Approach

    NASA Astrophysics Data System (ADS)

    Narasimha Rao, Gudikandhula; Jagadeeswara Rao, Peddada; Duvvuru, Rajesh

    2016-09-01

    Wildfires have a significant impact on the atmosphere and on lives. Predicting the exact fire area in a forest can help the fire management team, using a drone as a robot. Drone-based platforms are flexible, inexpensive, elevated-motion remote sensing systems that are important for filling substantial data gaps and for supplementing the capabilities of manned aircraft and satellite remote sensing systems. In addition, powerful computational tools are essential for predicting the burned area during a forest fire. The purpose of this study is to build a smart system based on semantic neural networking for the forecast of burned areas. A virtual reality simulator is used to support the training of firefighters and other users in saving the surrounding wildlife, using a naive method, the Semantic Neural Network System (SNNS). Semantics are valuable initially for an enhanced representation of the burned-area prediction and better adaptation of the simulation scenario to the users. In particular, results obtained with geometric semantic neural networking are considerably superior to those of other methods. This study suggests that deeper investigation of neural networking in the field of forest fire prediction could be productive.

  8. Cognitive radio based optimal channel sensing and resources allocation

    NASA Astrophysics Data System (ADS)

    Vijayasarveswari, V.; Khatun, S.; Fakir, M. M.; Nayeem, M. N.; Kamarudin, L. M.; Jakaria, A.

    2017-03-01

    Cognitive radio (CR) is the latest type of wireless technology proposed to mitigate the spectrum saturation problem. In cognitive radio, a secondary user uses the primary user's spectrum during the primary user's absence without interrupting the primary user's transmission. This paper focuses on a practical cognitive radio network development process using an Android-based smartphone for data transmission. An energy-detector-based sensing method was proposed and used here because it does not require the primary user's information. Bluetooth and Wi-Fi are the two available types of spectrum that were sensed for CR detection. Simulation showed that a cognitive radio network can be developed using Android-based smartphones. A complete application was then developed using the Java-based Android Eclipse environment. Finally, the application was uploaded and run on an Android-based smartphone to form and verify the CR network for channel sensing and resource allocation. The observed efficiency of the application was around 81%.

  9. On event-based optical flow detection

    PubMed Central

    Brosch, Tobias; Tschechne, Stephan; Neumann, Heiko

    2015-01-01

    Event-based sensing, i.e., the asynchronous detection of luminance changes, promises low-energy, high-dynamic-range, and sparse sensing. This stands in contrast to whole-frame image acquisition by standard cameras. Here, we systematically investigate the implications of event-based sensing in the context of visual motion, or flow, estimation. Starting from a common theoretical foundation, we discuss different principal approaches for optical flow detection, ranging from gradient-based methods through plane fitting to filter-based methods, and identify strengths and weaknesses of each class. Gradient-based methods for local motion integration are shown to suffer from the sparse encoding in address-event representations (AER). Approaches exploiting the local plane-like structure of the event cloud, on the other hand, are shown to be well suited. Within this class, filter-based approaches are shown to define a proper detection scheme which can also deal with the problem of representing multiple motions at a single location (motion transparency). A novel biologically inspired efficient motion detector is proposed, analyzed and experimentally validated. Furthermore, a stage of surround normalization is incorporated. Together with the filtering, this defines a canonical circuit for motion feature detection. The theoretical analysis shows that such an integrated circuit reduces motion ambiguity in addition to decorrelating the representation of motion-related activations. PMID:25941470
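
    The plane-fitting class the review discusses can be illustrated with a small sketch: locally fit t = a·x + b·y + c to the (x, y, t) cloud of events, and recover the local image velocity from the fitted slopes. This is a generic illustration under simplifying assumptions, not the detectors evaluated in the paper.

    ```python
    import numpy as np

    def local_flow_from_events(xs, ys, ts):
        """xs, ys: pixel coordinates of events in a small neighbourhood; ts: timestamps.
        Fits a plane t = a*x + b*y + c and returns (vx, vy) in pixels per unit time."""
        A = np.column_stack([xs, ys, np.ones(len(xs))])
        (a, b, _), *_ = np.linalg.lstsq(A, np.asarray(ts, dtype=float), rcond=None)
        n2 = a * a + b * b
        if n2 < 1e-12:
            return 0.0, 0.0              # no apparent motion in this neighbourhood
        return a / n2, b / n2            # velocity is the inverse gradient of the time surface
    ```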

  10. User-Independent Motion State Recognition Using Smartphone Sensors

    PubMed Central

    Gu, Fuqiang; Kealy, Allison; Khoshelham, Kourosh; Shang, Jianga

    2015-01-01

    The recognition of locomotion activities (e.g., walking, running, still) is important for a wide range of applications such as indoor positioning, navigation, location-based services, and health monitoring. Recently, there has been growing interest in activity recognition using accelerometer data. However, when utilizing only acceleration-based features, it is difficult to differentiate varying vertical motion states from horizontal motion states, especially when conducting user-independent classification. In this paper, we also make use of the barometer newly built into modern smartphones, and propose a novel feature, the pressure derivative, computed from the barometer readings for user motion state recognition; it proves effective for distinguishing vertical motion states and does not depend on specific users' data. Seven types of motion states are defined and six commonly used classifiers are compared. In addition, we utilize the motion state history and the characteristics of people's motion to improve the classification accuracies of those classifiers. Experimental results show that by using the historical information and human motion characteristics, we can achieve user-independent motion state classification with an accuracy of up to 90.7%. In addition, we analyze the influence of the window size and smartphone pose on the accuracy. PMID:26690163

  11. Microwave and millimeter-wave Doppler radar heart sensing

    NASA Astrophysics Data System (ADS)

    Boric-Lubecke, Olga; Lin, Jenshan; Lubecke, Victor M.; Host-Madsen, Anders; Sizer, Tod

    2007-04-01

    Technology that can be used to unobtrusively detect and monitor the presence of human subjects from a distance and through barriers can be a powerful tool for meeting new security challenges, including asymmetric battlefield threats abroad and defense infrastructure needs at home. Our team is developing mobile remote sensing technology for battle-space awareness and warfighter protection, based on microwave and millimeter-wave Doppler radar motion sensing devices that detect human presence. This technology will help overcome a shortfall of current see-through-the-wall (STTW) systems, namely the poor detection of stationary personnel. By detecting the minute Doppler shifts induced by a subject's cardiopulmonary-related chest motion, the technology will allow users to detect completely stationary personnel more effectively. This personnel detection technique can also have an extremely low probability of intercept, since the signals used can be those from everyday communications. The software and hardware developments and challenges for personnel detection and counting at a distance will be discussed, including a 2.4 GHz quadrature radar single-chip silicon CMOS implementation, a low-power double-sideband Ka-band transmission radar, and phase demodulation and heart rate extraction algorithms. In addition, the application of MIMO techniques for determining the number of subjects will be discussed.

  12. Human motion retrieval from hand-drawn sketch.

    PubMed

    Chao, Min-Wen; Lin, Chao-Hung; Assa, Jackie; Lee, Tong-Yee

    2012-05-01

    The rapid growth of motion capture data increases the importance of motion retrieval. The majority of existing motion retrieval approaches rely on a labor-intensive step in which the user browses and selects a desired query motion clip from a large motion clip database. In this work, a novel sketching interface for defining the query is presented. This simple approach allows users to define the required motion by sketching several motion strokes over a drawn character, which requires less effort and extends the users' expressiveness. To support the real-time interface, a specialized encoding of the motions and the hand-drawn query is required. Here, we introduce a novel hierarchical encoding scheme based on a set of orthonormal spherical harmonic (SH) basis functions, which provides a compact representation and avoids the CPU-intensive stage of temporal alignment used by previous solutions. Experimental results show that the proposed approach retrieves motions well and is capable of retrieving logically and numerically similar motions, outperforming previous approaches. The user study shows that the proposed system can be a useful tool for inputting motion queries once users are familiar with it. Finally, an application that generates a 3D animation from a hand-drawn comic strip is demonstrated.

  13. Hierarchical Shared Control of Cane-Type Walking-Aid Robot

    PubMed Central

    Tao, Chunjing

    2017-01-01

    A hierarchical shared-control method for a cane-type walking-aid robot, combining human motion intention recognition with an obstacle emergency-avoidance method based on an artificial potential field (APF), is proposed in this paper. The human motion intention is obtained from the interaction force measurements of a sensory system composed of four force-sensing resistors (FSRs) and a torque sensor. Meanwhile, a forward-facing laser range finder (LRF) is applied to detect obstacles and to guide the operator based on the repulsion force calculated from the artificial potential field. An obstacle emergency-avoidance method comprising different control strategies is also adopted according to the different states of the obstacles or emergency cases. To ensure the user's safety, the hierarchical shared-control method combines the intention recognition method with the obstacle emergency-avoidance method based on the distance between the walking-aid robot and the obstacles. Finally, experiments validate the effectiveness of the proposed hierarchical shared-control method. PMID:29093805
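
    For illustration, the repulsion term of a standard artificial potential field (the form commonly used for obstacle avoidance) is sketched below; the gain and influence distance are assumptions, not the paper's tuned values, and the paper's emergency strategies are not reproduced here.

    ```python
    import numpy as np

    def repulsive_force(robot_xy, obstacle_xy, d0=1.0, eta=0.5):
        """Standard APF repulsion: push away from an obstacle closer than d0 (m).
        robot_xy, obstacle_xy: 2-D positions; eta: repulsion gain."""
        diff = np.asarray(robot_xy, dtype=float) - np.asarray(obstacle_xy, dtype=float)
        d = np.linalg.norm(diff)
        if d >= d0 or d < 1e-9:
            return np.zeros(2)                        # outside the influence region
        return eta * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    ```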

  14. Sensing and Force-Feedback Exoskeleton (SAFE) Robotic Glove.

    PubMed

    Ben-Tzvi, Pinhas; Ma, Zhou

    2015-11-01

    This paper presents the design, implementation and experimental validation of a novel robotic haptic exoskeleton device to measure the user's hand motion and assist hand motion while remaining portable and lightweight. The device consists of a five-finger mechanism actuated with miniature DC motors through antagonistically routed cables at each finger, which act as both active and passive force actuators. The SAFE Glove is a wireless and self-contained mechatronic system that mounts over the dorsum of a bare hand and provides haptic force feedback to each finger. The glove is adaptable to a wide variety of finger sizes without constraining the range of motion. This makes it possible to accurately and comfortably track the complex motion of the finger and thumb joints associated with common movements of hand functions, including grip and release patterns. The glove can be wirelessly linked to a computer for displaying and recording the hand status through 3D Graphical User Interface (GUI) in real-time. The experimental results demonstrate that the SAFE Glove is capable of reliably modeling hand kinematics, measuring finger motion and assisting hand grasping motion. Simulation and experimental results show the potential of the proposed system in rehabilitation therapy and virtual reality applications.

  15. Vision-Based Finger Detection, Tracking, and Event Identification Techniques for Multi-Touch Sensing and Display Systems

    PubMed Central

    Chen, Yen-Lin; Liang, Wen-Yew; Chiang, Chuan-Yen; Hsieh, Tung-Ju; Lee, Da-Cheng; Yuan, Shyan-Ming; Chang, Yang-Lang

    2011-01-01

    This study presents efficient vision-based finger detection, tracking, and event identification techniques and a low-cost hardware framework for multi-touch sensing and display applications. The proposed approach uses a fast bright-blob segmentation process based on automatic multilevel histogram thresholding to extract the pixels of touch blobs obtained from scattered infrared lights captured by a video camera. The advantage of this automatic multilevel thresholding approach is its robustness and adaptability when dealing with various ambient lighting conditions and spurious infrared noises. To extract the connected components of these touch blobs, a connected-component analysis procedure is applied to the bright pixels acquired by the previous stage. After extracting the touch blobs from each of the captured image frames, a blob tracking and event recognition process analyzes the spatial and temporal information of these touch blobs from consecutive frames to determine the possible touch events and actions performed by users. This process also refines the detection results and corrects for errors and occlusions caused by noise and errors during the blob extraction process. The proposed blob tracking and touch event recognition process includes two phases. First, the phase of blob tracking associates the motion correspondence of blobs in succeeding frames by analyzing their spatial and temporal features. The touch event recognition process can identify meaningful touch events based on the motion information of touch blobs, such as finger moving, rotating, pressing, hovering, and clicking actions. Experimental results demonstrate that the proposed vision-based finger detection, tracking, and event identification system is feasible and effective for multi-touch sensing applications in various operational environments and conditions. PMID:22163990

  16. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.

  17. A Situated Cultural Festival Learning System Based on Motion Sensing

    ERIC Educational Resources Information Center

    Chang, Yi-Hsing; Lin, Yu-Kai; Fang, Rong-Jyue; Lu, You-Te

    2017-01-01

    A situated Chinese cultural festival learning system based on motion sensing is developed in this study. The primary design principle is to create a highly interactive learning environment, allowing learners to interact with Kinect through natural gestures in the designed learning situation to achieve efficient learning. The system has the…

  18. A research on motion design for APP's loading pages based on time perception

    NASA Astrophysics Data System (ADS)

    Cao, Huai; Hu, Xiaoyun

    2018-04-01

    Due to objective constraints such as network bandwidth and hardware performance, waiting is still an inevitable part of using mobile-terminal products. Relevant research shows that users' feelings in a waiting scenario can affect their evaluation of the whole product and the services it provides. With the development of the user experience and interface design disciplines, the role of motion effects in interface design has attracted more and more scholars' attention. In current studies, the theory of motion design for waiting scenarios is still incomplete. This article uses the basic theory and experimental research methods of cognitive psychology to explore the impact of motion design on users' time perception while app pages are loading. First, the article analyzes the factors that affect the waiting experience of loading app pages based on the theory of time perception, and then discusses the impact of motion design on perceived time during loading and the corresponding design strategy. Moreover, through an analysis of existing loading motion designs, the article classifies existing loading motions and designs an experiment to verify the impact of different types of motion on users' time perception. The results show that perceived waiting time in mobile-terminal apps is related to the type of loading motion, and that the combination type of loading motion can effectively shorten the perceived waiting time, scoring the best mean value on the time-perception measure.

  19. Real and Fictive Motion Processing in Polish L2 Users of English and Monolinguals: Evidence for Different Conceptual Representations

    ERIC Educational Resources Information Center

    Tomczak, Ewa; Ewert, Anna

    2015-01-01

    We examine cross-linguistic influence in the processing of motion sentences by L2 users from an embodied cognition perspective. The experiment employs a priming paradigm to test two hypotheses based on previous action and motion research in cognitive psychology. The first hypothesis maintains that conceptual representations of motion are embodied…

  20. Accurate estimation of motion blur parameters in noisy remote sensing image

    NASA Astrophysics Data System (ADS)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between the remote sensing satellite sensor and objects is one of the most common causes of remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, and accurately identifying the motion blur direction and length is crucial for the PSF and for restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain the parameters using the Radon transform. However, the serious noise present in actual remote sensing images often makes the stripes indistinct, so the parameters become difficult to calculate and the resulting error is relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the bright central region in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. To reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters.
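
    A hedged sketch of the classical spectrum-plus-Radon step the abstract builds on: the log-magnitude spectrum of a motion-blurred image shows parallel stripes, and the projection angle that maximizes the variance of the Radon transform indicates their orientation (related to the blur direction up to the usual 90° convention). The preprocessing here (log and mean removal only) is deliberately simpler than the paper's GrabCut-based segmentation.

    ```python
    import numpy as np
    from skimage.transform import radon

    def blur_direction_deg(image):
        """Estimate the stripe orientation of the blurred image's log spectrum, in degrees."""
        spec = np.abs(np.fft.fftshift(np.fft.fft2(image)))
        spec = np.log1p(spec)
        spec -= spec.mean()                           # remove DC bias before projecting
        angles = np.arange(0.0, 180.0, 1.0)
        sinogram = radon(spec, theta=angles, circle=False)
        return float(angles[np.argmax(sinogram.var(axis=0))])
    ```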

  1. Virtual Reality: You Are There

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Telepresence, or "virtual reality," allows a person, with assistance from advanced technology devices, to figuratively project himself into another environment. This technology is marketed by several companies, among them Fakespace, Inc., a former Ames Research Center contractor. Fakespace developed a teleoperational motion platform for transmitting sounds and images from remote locations. The "Molly" matches the user's head motion and, when coupled with a stereo viewing device and appropriate software, creates the telepresence experience. Its companion piece is the BOOM, the user's viewing device, which provides the sense of involvement in the virtual environment. Either system may be used alone. Because suits, gloves, headphones, etc. are not needed, a whole range of commercial applications is possible, including computer-aided design techniques and virtual reality visualizations. Customers include Sandia National Laboratories, Stanford Research Institute and Mattel Toys.

  2. Multi-Sensor Methods for Mobile Radar Motion Capture and Compensation

    NASA Astrophysics Data System (ADS)

    Nakata, Robert

    Remote sensing has many applications, including surveying and mapping, geophysics exploration, military surveillance, search and rescue, and counter-terrorism operations. Remote sensor systems typically use visible-image, infrared, or radar sensors. Camera-based image sensors can provide high spatial resolution but are limited to line-of-sight capture during daylight. Infrared sensors have lower resolution but can operate during darkness. Radar sensors can provide high-resolution motion measurements, even when obscured by weather, clouds, and smoke, and can penetrate walls and collapsed structures constructed of non-metallic materials up to 1 m to 2 m in depth, depending on the wavelength and transmitter power level. However, any platform motion will degrade the target signal of interest. In this dissertation, we investigate alternative methodologies to capture platform motion, including a Body Area Network (BAN) that doesn't require external fixed-location sensors, allowing full mobility of the user. We also investigated platform stabilization and motion compensation techniques to reduce and remove the signal distortion introduced by platform motion. We evaluated secondary ultrasonic and radar sensors to stabilize the platform, resulting in an average Signal-to-Interference Ratio (SIR) improvement of 5 dB. We also implemented a Digital Signal Processing (DSP) motion compensation algorithm that improved the SIR by 18 dB on average. These techniques could be deployed on a quadcopter platform and enable the detection of respiratory motion using an onboard radar sensor.

  3. Estimation of Finger Joint Angles Based on Electromechanical Sensing of Wrist Shape.

    PubMed

    Kawaguchi, Junki; Yoshimoto, Shunsuke; Kuroda, Yoshihiro; Oshiro, Osamu

    2017-09-01

    An approach to finger motion capture that places fewer restrictions on the usage environment and actions of the user is an important research topic in biomechanics and human-computer interaction. We proposed a system that electrically detects finger motion from the associated deformation of the wrist and estimates the finger joint angles using multiple regression models. A wrist-mounted sensing device with 16 electrodes detects deformation of the wrist from changes in electrical contact resistance at the skin. In this study, we experimentally investigated the accuracy of finger joint angle estimation, the adequacy of two multiple regression models, and the resolution of the estimation of total finger joint angles. In experiments, both the finger joint angles and the system output voltage were recorded as subjects performed flexion/extension of the fingers. These data were used for calibration using the least-squares method. The system was found to be capable of estimating the total finger joint angle with a root-mean-square error of 29-34 degrees. A multiple regression model with a second-order polynomial basis function was shown to be suitable for the estimation of all total finger joint angles, but not those of the thumb.
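
    The calibration step amounts to an ordinary least-squares fit of a multiple regression model with a second-order polynomial basis. A small NumPy sketch under that assumption follows; the 16-channel layout and toy data are illustrative.

      import numpy as np

      def quadratic_features(V):
          """Expand raw voltages into [1, v_i, v_i*v_j] second-order terms."""
          n, d = V.shape
          cols = [np.ones(n)] + [V[:, i] for i in range(d)]
          cols += [V[:, i] * V[:, j] for i in range(d) for j in range(i, d)]
          return np.column_stack(cols)

      def calibrate(V_train, angle_train):
          """Return regression weights fitted by ordinary least squares."""
          X = quadratic_features(V_train)
          w, *_ = np.linalg.lstsq(X, angle_train, rcond=None)
          return w

      def estimate_angle(V, w):
          return quadratic_features(V) @ w

      # Toy usage with random data in place of recorded flexion/extension trials.
      rng = np.random.default_rng(0)
      V = rng.normal(size=(200, 16))          # 16-electrode voltages
      theta = rng.normal(size=200)            # measured joint angles (degrees)
      w = calibrate(V, theta)
      rmse = np.sqrt(np.mean((estimate_angle(V, w) - theta) ** 2))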

  4. Optimal Periodic Cooperative Spectrum Sensing Based on Weight Fusion in Cognitive Radio Networks

    PubMed Central

    Liu, Xin; Jia, Min; Gu, Xuemai; Tan, Xuezhi

    2013-01-01

    The performance of cooperative spectrum sensing in cognitive radio (CR) networks depends on the sensing mode, the sensing time and the number of cooperative users. In order to improve the sensing performance and reduce the interference to the primary user (PU), a periodic cooperative spectrum sensing model based on weight fusion is proposed in this paper. Moreover, the sensing period, the sensing time and the searching time are optimized, respectively. First, the sensing period is optimized to improve the spectrum utilization and reduce the interference; then a joint optimization algorithm for the local sensing time and the number of cooperative users is proposed to obtain the optimal sensing time that maximizes the throughput of the cognitive radio user (CRU) during each period; and finally the water-filling principle is applied to optimize the searching time so that the CRU can find an idle channel within the shortest time. The simulation results show that, compared with previous algorithms, the optimal sensing period can improve the spectrum utilization of the CRU and decrease the interference to the PU significantly, the optimal sensing time can make the CRU achieve the largest throughput, and the optimal searching time can make the CRU find an idle channel in the least time. PMID:23604027
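
    A toy NumPy sketch of weight-fusion energy detection, the decision rule that such a scheme optimizes; the weights and threshold below are placeholders rather than the optimized values derived in the paper.

      import numpy as np

      def local_energy(samples):
          """Energy statistic of one user's sensing window."""
          return np.mean(np.abs(samples) ** 2)

      def fused_decision(user_samples, weights, threshold):
          """Declare the PU present if the weighted energy sum exceeds the threshold."""
          energies = np.array([local_energy(s) for s in user_samples])
          w = np.asarray(weights, dtype=float)
          w = w / w.sum()
          return float(w @ energies) > threshold

      rng = np.random.default_rng(1)
      pu_signal = 0.5 * np.sin(2 * np.pi * 0.05 * np.arange(1000))    # primary user waveform
      users = [pu_signal + rng.normal(size=1000) for _ in range(5)]   # noisy local observations
      print(fused_decision(users, weights=np.ones(5), threshold=1.1))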

  5. Investigation on sense of control parameters for joystick interface in remote operated container crane application

    NASA Astrophysics Data System (ADS)

    Abdullah, U. N. N.; Handroos, H.

    2017-09-01

    Introduction: This paper presents the study of sense of control parameters to improve the lack of direct motion feeling through remote operated container crane station (ROCCS) joystick interface. The investigations of the parameters in this study are important to develop the engineering parameters related to the sense of control goal in the next design process. Methodology: Structured interviews and observations were conducted to obtain the user experience data from thirteen remote container crane operators from two international terminals. Then, interview analysis, task analysis, activity analysis and time line analysis were conducted to compare and contrast the results from interviews and observations. Results: Four experience parameters were identified to support the sense of control goal in the later design improvement of the ROCC joystick interface. The significance of difficulties to control, unsynchronized movements, facilitate in control and decision making in unexpected situation as parameters to the sense of control goal were validated by' feedbacks from operators as well as analysis. Contribution: This study provides feedback directly from end users towards developing a sustainable control interface for ROCCS in specific and remote operated off-road vehicles in general.

  6. Image-based fall detection and classification of a user with a walking support system

    NASA Astrophysics Data System (ADS)

    Taghvaei, Sajjad; Kosuge, Kazuhiro

    2017-10-01

    The classification of visual human action is important in the development of systems that interact with humans. This study investigates an image-based classification of the human state while using a walking support system to improve the safety and dependability of these systems. We categorize the possible human behavior while utilizing a walker robot into eight states (i.e., sitting, standing, walking, and five falling types), and propose two different methods, namely, normal distribution and hidden Markov models (HMMs), to detect and recognize these states. The visual feature for the state classification is the centroid position of the upper body, which is extracted from the user's depth images. The first method shows that the centroid position follows a normal distribution while walking, which can be adopted to detect any non-walking state. The second method implements HMMs to detect and recognize these states. We then measure and compare the performance of both methods. The classification results are employed to control the motion of a passive-type walker (called "RT Walker") by activating its brakes in non-walking states. Thus, the system can be used for sit/stand support and fall prevention. The experiments are performed with four subjects, including an experienced physiotherapist. Results show that the algorithm can be adapted to the new user's motion pattern within 40 s, with a fall detection rate of 96.25% and a state classification rate of 81.0%. The proposed method can be applied to other abnormality detection/classification applications that employ depth image-sensing devices.
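
    The first (normal distribution) method can be sketched as fitting a Gaussian to the upper-body centroid during walking and flagging frames with a large Mahalanobis distance as non-walking; the threshold and toy data below are assumptions for illustration.

      import numpy as np

      def fit_walking_model(centroids):
          """centroids: (N, 3) upper-body centroid positions recorded while walking."""
          mu = centroids.mean(axis=0)
          cov = np.cov(centroids, rowvar=False)
          return mu, np.linalg.inv(cov)

      def is_non_walking(centroid, mu, cov_inv, threshold=3.0):
          d = centroid - mu
          mahalanobis = np.sqrt(d @ cov_inv @ d)
          return mahalanobis > threshold

      rng = np.random.default_rng(2)
      walking = rng.normal([0.0, 0.9, 1.2], [0.02, 0.03, 0.05], size=(500, 3))
      mu, cov_inv = fit_walking_model(walking)
      alarm = is_non_walking(np.array([0.0, 0.4, 1.2]), mu, cov_inv)   # sudden drop in height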

  7. Interface Prostheses With Classifier-Feedback-Based User Training.

    PubMed

    Fang, Yinfeng; Zhou, Dalin; Li, Kairu; Liu, Honghai

    2017-11-01

    It is evident that user training significantly affects the performance of pattern-recognition-based myoelectric prosthetic device control. Despite plausible classification accuracy on offline datasets, online accuracy usually suffers from changes in physiological conditions and electrode displacement. The user's ability to generate consistent electromyographic (EMG) patterns can be enhanced via proper user training strategies in order to improve online performance. This study proposes a clustering-feedback strategy that provides real-time feedback to users by means of a visualized online EMG signal input as well as the centroids of the training samples, whose dimensionality is reduced to a minimal number by dimension reduction. Clustering feedback provides a criterion that guides users to adjust motion gestures and muscle contraction forces intentionally. The experimental results demonstrate that hand motion recognition accuracy increases steadily along the progress of the clustering-feedback-based user training, while conventional classifier-feedback methods, i.e., label feedback, hardly achieve any improvement. The results suggest that the use of proper classifier feedback can accelerate the process of user training, and imply a promising future for amputees with limited or no experience in pattern-recognition-based prosthetic device manipulation.

  8. Reputation and Reward: Two Sides of the Same Bitcoin.

    PubMed

    Delgado-Segura, Sergi; Tanas, Cristian; Herrera-Joancomartí, Jordi

    2016-05-27

    In Mobile Crowd Sensing (MCS), the power of the crowd, jointly with the sensing capabilities of the smartphones they wear, provides a new paradigm for data sensing. Scenarios involving user behavior or those that rely on user mobility are examples where standard sensor networks may not be suitable, and MCS provides an interesting solution. However, including human participation in sensing tasks presents numerous and unique research challenges. In this paper, we analyze three of the most important: user participation, data sensing quality and user anonymity. We tackle the three as a whole, since all of them are strongly correlated. As a result, we present PaySense, a general framework that incentivizes user participation and provides a mechanism to validate the quality of collected data based on the users' reputation. All such features are performed in a privacy-preserving way by using the Bitcoin cryptocurrency. Rather than a theoretical one, our framework has been implemented, and it is ready to be deployed and complement any existing MCS system.

  9. Multiresolution motion planning for autonomous agents via wavelet-based cell decompositions.

    PubMed

    Cowlagi, Raghvendra V; Tsiotras, Panagiotis

    2012-10-01

    We present a path- and motion-planning scheme that is "multiresolution" both in the sense of representing the environment with high accuracy only locally and in the sense of addressing the vehicle kinematic and dynamic constraints only locally. The proposed scheme uses rectangular multiresolution cell decompositions, efficiently generated using the wavelet transform. The wavelet transform is widely used in signal and image processing, with emerging applications in autonomous sensing and perception systems. The proposed motion planner enables the simultaneous use of the wavelet transform in both the perception and in the motion-planning layers of vehicle autonomy, thus potentially reducing online computations. We rigorously prove the completeness of the proposed path-planning scheme, and we provide numerical simulation results to illustrate its efficacy.
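
    A minimal sketch of a wavelet-based multiresolution representation using PyWavelets: the approximation coefficients give coarse occupancy cells, while a full-resolution window is kept only near the vehicle. The window size, wavelet and level count are illustrative choices, not those of the paper.

      import numpy as np
      import pywt

      def multiresolution_cells(occupancy, vehicle_rc, fine_window=16, levels=3):
          """Return a coarse occupancy map plus a full-resolution local window.

          occupancy  : 2D array, 1.0 = obstacle, 0.0 = free
          vehicle_rc : (row, col) of the vehicle in the map
          """
          coeffs = pywt.wavedec2(occupancy, wavelet='haar', level=levels)
          coarse = coeffs[0] / (2 ** levels)     # Haar approximation ~ averaged cells
          r, c = vehicle_rc
          half = fine_window // 2
          local = occupancy[max(r - half, 0):r + half, max(c - half, 0):c + half]
          return coarse, local

      grid = np.zeros((128, 128))
      grid[40:60, 70:90] = 1.0                    # a rectangular obstacle
      coarse_map, local_map = multiresolution_cells(grid, vehicle_rc=(20, 20))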

  10. Gait Recognition Using Wearable Motion Recording Sensors

    NASA Astrophysics Data System (ADS)

    Gafurov, Davrondzhon; Snekkenes, Einar

    2009-12-01

    This paper presents an alternative approach, in which gait is collected by sensors attached to the person's body. Such wearable sensors record the motion (e.g. acceleration) of body parts during walking. The recorded motion signals are then investigated for person recognition purposes. We analyzed acceleration signals from the foot, hip, pocket and arm. Applying various methods, the best EERs obtained for foot-, pocket-, arm- and hip-based user authentication were 5%, 7%, 10% and 13%, respectively. Furthermore, we present the results of our analysis of the security of gait. Studying gait-based user authentication (in the case of hip motion) under three attack scenarios, we revealed that minimal-effort mimicking does not help to improve the acceptance chances of impostors. However, impostors who know their closest person in the database or the genders of the users can be a threat to gait-based authentication. We also provide some new insights toward the uniqueness of gait in the case of foot motion. In particular, we revealed the following: a sideway motion of the foot provides the most discrimination, compared to the up-down or forward-backward directions; and different segments of the gait cycle provide different levels of discrimination.

  11. An EMG-Based Control for an Upper-Limb Power-Assist Exoskeleton Robot.

    PubMed

    Kiguchi, K; Hayashi, Y

    2012-08-01

    Many kinds of power-assist robots have been developed to assist the self-rehabilitation and/or daily-life motions of physically weak persons. Several kinds of control methods have been proposed to control power-assist robots according to the user's motion intention. In this paper, an electromyogram (EMG)-based impedance control method for an upper-limb power-assist exoskeleton robot is proposed to control the robot in accordance with the user's motion intention. The proposed method is simple, easy to design, humanlike, and adaptable to any user. A neurofuzzy matrix modifier is applied to make the controller adaptable to any user. Not only the characteristics of the EMG signals but also the characteristics of the human body are taken into account in the proposed method. The effectiveness of the proposed method was evaluated by experiments.

  12. The use of virtual reality technology in the treatment of anxiety and other psychiatric disorders

    PubMed Central

    Maples-Keller, Jessica L.; Bunnell, Brian E.; Kim, Sae-Jin; Rothbaum, Barbara O.

    2016-01-01

    Virtual reality, or VR, allows users to experience a sense of presence in a computer-generated three-dimensional environment. Sensory information is delivered through a head mounted display and specialized interface devices. These devices track head movements so that the movements and images change in a natural way with head motion, allowing for a sense of immersion. VR allows for controlled delivery of sensory stimulation via the therapist and is a convenient and cost-effective treatment. The primary focus of this article is to review the available literature regarding the effectiveness of incorporating VR within the psychiatric treatment of a wide range of psychiatric disorders, with a specific focus on exposure-based intervention for anxiety disorders. A systematic literature search was conducted in order to identify studies implementing VR based treatment for anxiety or other psychiatric disorders. This review will provide an overview of the history of the development of VR based technology and its use within psychiatric treatment, an overview of the empirical evidence for VR based treatment, the benefits for using VR for psychiatric research and treatment, recommendations for how to incorporate VR into psychiatric care, and future directions for VR based treatment and clinical research. PMID:28475502

  13. Measuring sense of presence and user characteristics to predict effective training in an online simulated virtual environment.

    PubMed

    De Leo, Gianluca; Diggs, Leigh A; Radici, Elena; Mastaglio, Thomas W

    2014-02-01

    Virtual-reality solutions have successfully been used to train distributed teams. This study aimed to investigate the correlation between user characteristics and sense of presence in an online virtual-reality environment where distributed teams are trained. A greater sense of presence has the potential to make training in the virtual environment more effective, leading to the formation of teams that perform better in a real environment. Being able to identify, before starting online training, those user characteristics that are predictors of a greater sense of presence can lead to the selection of trainees who would benefit most from the online simulated training. This is an observational study with a retrospective postsurvey of participants' user characteristics and degree of sense of presence. Twenty-nine members from 3 Air Force National Guard Medical Service expeditionary medical support teams participated in an online virtual environment training exercise and completed the Independent Television Commission-Sense of Presence Inventory survey, which measures sense of presence and user characteristics. Nonparametric statistics were applied to determine the statistical significance of user characteristics to sense of presence. Comparing user characteristics to the 4 scales of the Independent Television Commission-Sense of Presence Inventory using Kendall τ test gave the following results: the user characteristics "how often you play video games" (τ(26)=-0.458, P<0.01) and "television/film production knowledge" (τ(27)=-0.516, P<0.01) were significantly related to negative effects. Negative effects refer to adverse physiologic reactions owing to the virtual environment experience such as dizziness, nausea, headache, and eyestrain. The user characteristic "knowledge of virtual reality" was significantly related to engagement (τ(26)=0.463, P<0.01) and negative effects (τ(26)=-0.404, P<0.05). Individuals who have knowledge about virtual environments and experience with gaming environments report a higher sense of presence that indicates that they will likely benefit more from online virtual training. Future research studies could include a larger population of expeditionary medical support, and the results obtained could be used to create a model that predicts the level of presence based on the user characteristics. To maximize results and minimize costs, only those individuals who, based on their characteristics, are supposed to have a higher sense of presence and less negative effects could be selected for online simulated virtual environment training.
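
    The reported correlations are Kendall rank correlations; for reference, one can be computed with SciPy as follows (the survey values below are made up):

      from scipy.stats import kendalltau

      gaming_frequency = [5, 3, 4, 1, 2, 5, 4, 3, 1, 2]     # survey ratings
      negative_effects = [1, 3, 2, 5, 4, 1, 2, 3, 5, 4]     # ITC-SOPI scale scores
      tau, p_value = kendalltau(gaming_frequency, negative_effects)
      print(f"Kendall tau = {tau:.3f}, p = {p_value:.4f}")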

  14. Secure distribution for high resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Liu, Jin; Sun, Jing; Xu, Zheng Q.

    2010-09-01

    The use of remote sensing images collected by space platforms is becoming more and more widespread. The increasing value of space data and its use in critical scenarios call for the adoption of proper security measures to protect these data against unauthorized access and fraudulent use. In this paper, based on the characteristics of remote sensing image data and the application requirements for secure distribution, a secure distribution method is proposed, comprising user and region classification, hierarchical control and key generation, and multi-level region-based encryption. The combination of the three parts allows the same multi-level-encrypted remote sensing image to be distributed to users with different permissions through multicast, while each user obtains only the degree of information that their own decryption keys permit. This meets user access control and security needs in the distribution of high resolution remote sensing images. The experimental results prove the effectiveness of the proposed method, which is suitable for practical use in the secure transmission of remote sensing images containing confidential information over the internet.

  15. Motion analysis report

    NASA Technical Reports Server (NTRS)

    Badler, N. I.

    1985-01-01

    Human motion analysis is the task of converting actual human movements into computer-readable data. Such movement information may be obtained through active or passive sensing methods. Active methods include physical measuring devices such as goniometers on joints of the body, force plates, and manually operated sensors such as a Cybex dynamometer. Passive sensing decouples the position-measuring device from actual human contact. Passive sensors include Selspot scanning systems (since there is no mechanical connection between the subject's attached LEDs and the infrared sensing cameras), sonic (spark-based) three-dimensional digitizers, Polhemus six-dimensional tracking systems, and image processing systems based on multiple views and photogrammetric calculations.

  16. Optimization of real-time rigid registration motion compensation for prostate biopsies using 2D/3D ultrasound

    NASA Astrophysics Data System (ADS)

    Gillies, Derek J.; Gardi, Lori; Zhao, Ren; Fenster, Aaron

    2017-03-01

    During image-guided prostate biopsy, needles are targeted at suspicious tissues to obtain specimens that are later examined histologically for cancer. Patient motion causes inaccuracies in the MR-transrectal ultrasound (TRUS) image fusion approaches used to augment the conventional biopsy procedure. Motion compensation using a single, user-initiated correction can be performed to temporarily compensate for prostate motion, but a real-time continuous registration offers an improvement to clinical workflow by reducing user interaction and procedure time. An automatic motion compensation method, approaching the frame rate of a TRUS-guided system, has been developed for use during fusion-based prostate biopsy to improve image guidance. 2D and 3D TRUS images of a prostate phantom were registered using an intensity-based algorithm utilizing normalized cross-correlation and Powell's method for optimization, with user-initiated and continuous registration techniques. The user-initiated correction was performed with observed computation times of 78 ± 35 ms, 74 ± 28 ms, and 113 ± 49 ms for in-plane, out-of-plane, and roll motions, respectively, corresponding to errors of 0.5 ± 0.5 mm, 1.5 ± 1.4 mm, and 1.5 ± 1.6°. The continuous correction performed significantly faster (p < 0.05) than the user-initiated method, with observed computation times of 31 ± 4 ms, 32 ± 4 ms, and 31 ± 6 ms for in-plane, out-of-plane, and roll motions, respectively, corresponding to errors of 0.2 ± 0.2 mm, 0.6 ± 0.5 mm, and 0.8 ± 0.4°.
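
    The intensity-based correction can be sketched as maximizing normalized cross-correlation over a rigid transformation with Powell's method; the SciPy sketch below handles only a 2D in-plane translation and uses synthetic data, so it is a simplification of the 2D/3D registration described.

      import numpy as np
      from scipy.ndimage import shift as nd_shift
      from scipy.optimize import minimize

      def ncc(a, b):
          """Normalized cross-correlation of two images."""
          a = (a - a.mean()) / (a.std() + 1e-12)
          b = (b - b.mean()) / (b.std() + 1e-12)
          return np.mean(a * b)

      def register_translation(fixed, moving, x0=(0.0, 0.0)):
          """Return the (dy, dx) shift that best aligns `moving` to `fixed`."""
          cost = lambda p: -ncc(fixed, nd_shift(moving, p, order=1))
          result = minimize(cost, x0=np.asarray(x0), method='Powell')
          return result.x

      rng = np.random.default_rng(3)
      fixed = rng.random((64, 64))
      moving = nd_shift(fixed, (2.5, -1.0), order=1)          # simulated prostate motion
      estimated_shift = register_translation(fixed, moving)   # approximately (-2.5, 1.0)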

  17. Reputation and Reward: Two Sides of the Same Bitcoin

    PubMed Central

    Delgado-Segura, Sergi; Tanas, Cristian; Herrera-Joancomartí, Jordi

    2016-01-01

    In Mobile Crowd Sensing (MCS), the power of the crowd, jointly with the sensing capabilities of the smartphones they wear, provides a new paradigm for data sensing. Scenarios involving user behavior or those that rely on user mobility are examples where standard sensor networks may not be suitable, and MCS provides an interesting solution. However, including human participation in sensing tasks presents numerous and unique research challenges. In this paper, we analyze three of the most important: user participation, data sensing quality and user anonymity. We tackle the three as a whole, since all of them are strongly correlated. As a result, we present PaySense, a general framework that incentivizes user participation and provides a mechanism to validate the quality of collected data based on the users’ reputation. All such features are performed in a privacy-preserving way by using the Bitcoin cryptocurrency. Rather than a theoretical one, our framework has been implemented, and it is ready to be deployed and complement any existing MCS system. PMID:27240373

  18. In-motion optical sensing for assessment of animal well-being

    NASA Astrophysics Data System (ADS)

    Atkins, Colton A.; Pond, Kevin R.; Madsen, Christi K.

    2017-05-01

    The application of in-motion optical sensor measurements was investigated for inspecting livestock soundness as a means of animal well-being. An optical sensor-based platform was used to collect in-motion, weight-related information. Eight steers, weighing between 680 and 1134 kg, were evaluated twice. Six of the 8 steers were used for further evaluation and analysis. Hoof impacts caused plate flexion that was optically sensed. Observed kinetic differences between animals' strides at a walking or running/trotting gait with significant force distributions of animals' hoof impacts allowed for observation of real-time, biometric patterns. Overall, optical sensor-based measurements identified hoof differences between and within animals in motion that may allow for diagnosis of musculoskeletal unsoundness without visual evaluation.

  19. Performance-Driven Hybrid Full-Body Character Control for Navigation and Interaction in Virtual Environments

    NASA Astrophysics Data System (ADS)

    Mousas, Christos; Anagnostopoulos, Christos-Nikolaos

    2017-06-01

    This paper presents a hybrid character control interface that provides the ability to synthesize, in real time, a variety of actions based on the user's performance capture. The proposed methodology enables three different performance interaction modules: the performance animation control that enables the direct mapping of the user's pose to the character, the motion controller that synthesizes the desired motion of the character based on an activity recognition methodology, and the hybrid control that lies between the performance animation and the motion controller. With the methodology presented, the user will have the freedom to interact within the virtual environment, as well as the ability to manipulate the character and to synthesize a variety of actions that cannot be performed directly by him/her, but which the system synthesizes. Therefore, the user is able to interact with the virtual environment in a more sophisticated fashion. This paper presents examples of different scenarios based on the three different full-body character control methodologies.

  20. Oscillatory motion based measurement method and sensor for measuring wall shear stress due to fluid flow

    DOEpatents

    Armstrong, William D [Laramie, WY; Naughton, Jonathan [Laramie, WY; Lindberg, William R [Laramie, WY

    2008-09-02

    A shear stress sensor for measuring fluid wall shear stress on a test surface is provided. The wall shear stress sensor is comprised of an active sensing surface and a sensor body. An elastic mechanism mounted between the active sensing surface and the sensor body allows movement between the active sensing surface and the sensor body. A driving mechanism forces the shear stress sensor to oscillate. A measuring mechanism measures displacement of the active sensing surface relative to the sensor body. The sensor may be operated under periodic excitation where changes in the nature of the fluid properties or the fluid flow over the sensor measurably changes the amplitude or phase of the motion of the active sensing surface, or changes the force and power required from a control system in order to maintain constant motion. The device may be operated under non-periodic excitation where changes in the nature of the fluid properties or the fluid flow over the sensor change the transient motion of the active sensor surface or change the force and power required from a control system to maintain a specified transient motion of the active sensor surface.

  1. Kinematics effectively delineate accomplished users of endovascular robotics with a physical training model.

    PubMed

    Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Lumsden, Alan B; Bismuth, Jean

    2015-02-01

    Endovascular robotics systems, now approved for clinical use in the United States and Europe, are seeing rapid growth in interest. Determining who has sufficient expertise for safe and effective clinical use remains elusive. Our aim was to analyze performance on a robotic platform to determine what defines an expert user. During three sessions, 21 subjects with a range of endovascular expertise and endovascular robotic experience (novices <2 hours to moderate-extensive experience with >20 hours) performed four tasks on a training model. All participants completed a 2-hour training session on the robot by a certified instructor. Completion times, global rating scores, and motion metrics were collected to assess performance. Electromagnetic tracking was used to capture and to analyze catheter tip motion. Motion analysis was based on derivations of speed and position including spectral arc length and total number of submovements (inversely proportional to proficiency of motion) and duration of submovements (directly proportional to proficiency). Ninety-eight percent of competent subjects successfully completed the tasks within the given time, whereas 91% of noncompetent subjects were successful. There was no significant difference in completion times between competent and noncompetent users except for the posterior branch (151 s:105 s; P = .01). The competent users had more efficient motion as evidenced by statistically significant differences in the metrics of motion analysis. Users with >20 hours of experience performed significantly better than those newer to the system, independent of prior endovascular experience. This study demonstrates that motion-based metrics can differentiate novice from trained users of flexible robotics systems for basic endovascular tasks. Efficiency of catheter movement, consistency of performance, and learning curves may help identify users who are sufficiently trained for safe clinical use of the system. This work will help identify the learning curve and specific movements that translate to expert robotic navigation. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
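
    One of the motion metrics mentioned, spectral arc length, can be computed from a catheter-tip speed profile roughly as follows; the cutoff frequency and zero padding are common choices rather than values taken from this study.

      import numpy as np

      def spectral_arc_length(speed, fs, cutoff_hz=10.0, zero_pad=4):
          """Smoothness metric: more negative values indicate less smooth motion."""
          n = int(2 ** np.ceil(np.log2(len(speed) * zero_pad)))
          spectrum = np.abs(np.fft.rfft(speed, n=n))
          freq = np.fft.rfftfreq(n, d=1.0 / fs)
          keep = freq <= cutoff_hz
          mag = spectrum[keep] / spectrum[keep].max()    # normalized magnitude spectrum
          df = np.diff(freq[keep]) / cutoff_hz           # normalized frequency steps
          return -np.sum(np.sqrt(df ** 2 + np.diff(mag) ** 2))

      fs = 100.0
      t = np.arange(0, 3, 1 / fs)
      smooth_speed = np.sin(np.pi * t / 3) ** 2                      # bell-shaped profile
      jerky_speed = smooth_speed + 0.1 * np.sin(2 * np.pi * 8 * t)   # added submovements
      print(spectral_arc_length(smooth_speed, fs), spectral_arc_length(jerky_speed, fs))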

  2. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
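
    Dynamic-time-warping gesture matching of the kind described can be written compactly in NumPy; the feature vectors and templates below are illustrative, and the system described above additionally performs automatic segmentation and sensor fusion.

      import numpy as np

      def dtw_distance(a, b):
          """a, b: (N, D) and (M, D) sequences of motion feature vectors."""
          n, m = len(a), len(b)
          cost = np.full((n + 1, m + 1), np.inf)
          cost[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  d = np.linalg.norm(a[i - 1] - b[j - 1])
                  cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
          return cost[n, m]

      def classify(gesture, templates):
          """templates: dict mapping gesture label -> example sequence."""
          return min(templates, key=lambda label: dtw_distance(gesture, templates[label]))

      rng = np.random.default_rng(4)
      templates = {"swipe": rng.random((40, 3)), "shake": rng.random((25, 3))}
      observed = templates["shake"] + 0.05 * rng.normal(size=(25, 3))
      print(classify(observed, templates))      # expected: "shake"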

  3. Golfing with protons: using research grade simulation algorithms for online games

    NASA Astrophysics Data System (ADS)

    Harold, J.

    2004-12-01

    Scientists have long known the power of simulations. By modeling a system in a computer, researchers can experiment at will, developing an intuitive sense of how a system behaves. The rapid increase in the power of personal computers, combined with technologies such as Flash, Shockwave and Java, allow us to bring research simulations into the education world by creating exploratory environments for the public. This approach is illustrated by a project funded by a small grant from NSF's Informal Science Education program, through an opportunity that provides education supplements to existing research awards. Using techniques adapted from a magnetospheric research program, several Flash based interactives have been developed that allow web site visitors to explore the motion of particles in the Earth's magnetosphere. These pieces were folded into a larger Space Weather Center web project at the Space Science Institute (www.spaceweathercenter.org). Rather than presenting these interactives as plasma simulations per se, the research algorithms were used to create games such as "Magneto Mini Golf", where the balls are protons moving in combined electric and magnetic fields. The "holes" increase in complexity, beginning with no fields and progressing towards a simple model of Earth's magnetosphere. The emphasis of the activity is gameplay, but because it is at its core a plasma simulation, the user develops an intuitive sense of charged particle motion as they progress. Meanwhile, the pieces contain embedded assessments that are measurable through a database driven tracking system. Mining that database not only provides helpful usability information, but allows us to examine whether users are meeting the learning goals of the activities. We will discuss the development and evaluation results of the project, as well as the potential for these types of activities to shift the expectations of what a web site can and should provide educationally.
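
    The underlying physics of such a game is charged-particle motion in electric and magnetic fields; a standard way to integrate it is the Boris scheme, sketched below with illustrative field values and step size (not those used on the site).

      import numpy as np

      Q_PROTON = 1.602e-19      # C
      M_PROTON = 1.673e-27      # kg

      def boris_push(x, v, E, B, dt, q=Q_PROTON, m=M_PROTON):
          """Advance position x and velocity v of a charged particle by one step dt."""
          qmdt2 = q * dt / (2.0 * m)
          v_minus = v + qmdt2 * E                  # half electric kick
          t = qmdt2 * B                            # magnetic rotation vector
          s = 2.0 * t / (1.0 + np.dot(t, t))
          v_prime = v_minus + np.cross(v_minus, t)
          v_plus = v_minus + np.cross(v_prime, s)  # full magnetic rotation
          v_new = v_plus + qmdt2 * E               # second half electric kick
          return x + v_new * dt, v_new

      # A proton gyrating in a uniform magnetic field (no electric field).
      x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])       # m, m/s
      E, B = np.zeros(3), np.array([0.0, 0.0, 1.0e-7])      # V/m, T
      dt = 1.0e-4
      trajectory = []
      for _ in range(1000):
          x, v = boris_push(x, v, E, B, dt)
          trajectory.append(x.copy())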

  4. A novel sensor for two-degree-of-freedom motion measurement of linear nanopositioning stage using knife edge displacement sensing technique

    NASA Astrophysics Data System (ADS)

    Zolfaghari, Abolfazl; Jeon, Seongkyul; Stepanick, Christopher K.; Lee, ChaBum

    2017-06-01

    This paper presents a novel method for measuring two-degree-of-freedom (DOF) motion of flexure-based nanopositioning systems based on optical knife-edge sensing (OKES) technology, which utilizes the interference of two superimposed waves: a geometrical wave from the primary source of light and a boundary diffraction wave from the secondary source. This technique allows for two-DOF motion measurement of the linear and pitch motions of nanopositioning systems. Two capacitive sensors (CSs) are used for a baseline comparison with the proposed sensor by simultaneously measuring the motions of the nanopositioning system. The experimental results show that the proposed sensor closely agrees with the fundamental linear motion of the CS. However, the two-DOF OKES technology was shown to be approximately three times more sensitive to the pitch motion than the CS. The discrepancy in the two sensor outputs is discussed in terms of measuring principle, linearity, bandwidth, control effectiveness, and resolution.

  5. MotionExplorer: exploratory search in human motion capture data based on hierarchical aggregation.

    PubMed

    Bernard, Jürgen; Wilhelm, Nils; Krüger, Björn; May, Thorsten; Schreck, Tobias; Kohlhammer, Jörn

    2013-12-01

    We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields including medicine, sports and animation. Key tasks in working with motion data include analysis of motion states and transitions, and synthesis of motion vectors by interpolation and combination. In the practice of research and application of human motion data, challenges exist in providing visual summaries and drill-down functionality for handling large motion data collections. We find that this domain can benefit from appropriate visual retrieval and analysis support to handle these tasks in presence of large motion data. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on interactive aggregation and visualization of motion states as a basis for data navigation, exploration, and search. Based on an overview-first type visualization, users are able to search for interesting sub-sequences of motion based on a query-by-example metaphor, and explore search results by details on demand. We developed MotionExplorer in close collaboration with the targeted users who are researchers working on human motion synthesis and analysis, including a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable and robust design. MotionExplorer enables the search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.

  6. XD-GRASP: Golden-angle radial MRI with reconstruction of extra motion-state dimensions using compressed sensing.

    PubMed

    Feng, Li; Axel, Leon; Chandarana, Hersh; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo

    2016-02-01

    To develop a novel framework for free-breathing MRI called XD-GRASP, which sorts dynamic data into extra motion-state dimensions using the self-navigation properties of radial imaging and reconstructs the multidimensional dataset using compressed sensing. Radial k-space data are continuously acquired using the golden-angle sampling scheme and sorted into multiple motion-states based on respiratory and/or cardiac motion signals derived directly from the data. The resulting undersampled multidimensional dataset is reconstructed using a compressed sensing approach that exploits sparsity along the new dynamic dimensions. The performance of XD-GRASP is demonstrated for free-breathing three-dimensional (3D) abdominal imaging, two-dimensional (2D) cardiac cine imaging and 3D dynamic contrast-enhanced (DCE) MRI of the liver, comparing against reconstructions without motion sorting in both healthy volunteers and patients. XD-GRASP separates respiratory motion from cardiac motion in cardiac imaging, and respiratory motion from contrast enhancement in liver DCE-MRI, which improves image quality and reduces motion-blurring artifacts. XD-GRASP represents a new use of sparsity for motion compensation and a novel way to handle motions in the context of a continuous acquisition paradigm. Instead of removing or correcting motion, extra motion-state dimensions are reconstructed, which improves image quality and also offers new physiological information of potential clinical value. © 2015 Wiley Periodicals, Inc.

  7. XD-GRASP: Golden-Angle Radial MRI with Reconstruction of Extra Motion-State Dimensions Using Compressed Sensing

    PubMed Central

    Feng, Li; Axel, Leon; Chandarana, Hersh; Block, Kai Tobias; Sodickson, Daniel K.; Otazo, Ricardo

    2015-01-01

    Purpose To develop a novel framework for free-breathing MRI called XD-GRASP, which sorts dynamic data into extra motion-state dimensions using the self-navigation properties of radial imaging and reconstructs the multidimensional dataset using compressed sensing. Methods Radial k-space data are continuously acquired using the golden-angle sampling scheme and sorted into multiple motion-states based on respiratory and/or cardiac motion signals derived directly from the data. The resulting under-sampled multidimensional dataset is reconstructed using a compressed sensing approach that exploits sparsity along the new dynamic dimensions. The performance of XD-GRASP is demonstrated for free-breathing three-dimensional (3D) abdominal imaging, two-dimensional (2D) cardiac cine imaging and 3D dynamic contrast-enhanced (DCE) MRI of the liver, comparing against reconstructions without motion sorting in both healthy volunteers and patients. Results XD-GRASP separates respiratory motion from cardiac motion in cardiac imaging, and respiratory motion from contrast enhancement in liver DCE-MRI, which improves image quality and reduces motion-blurring artifacts. Conclusion XD-GRASP represents a new use of sparsity for motion compensation and a novel way to handle motions in the context of a continuous acquisition paradigm. Instead of removing or correcting motion, extra motion-state dimensions are reconstructed, which improves image quality and also offers new physiological information of potential clinical value. PMID:25809847

  8. Imagery atlas: a structure of expert software designed to improve the accessibility of remote-sensed satellite imagery

    NASA Astrophysics Data System (ADS)

    Genet, Richard P.

    1995-11-01

    Policy changes in the United States and Europe will bring a number of firms into the remote sensing market. More importantly, there will be a vast increase in the amount of data and, potentially, the amount of information that is available for academic, commercial and a variety of public uses. Presently, many users of remote sensing data have some understanding of photogrammetric and remote sensing technologies. This is especially true of environmentalist users and academics. As the amount of remote sensing data increases, in order to broaden the user base, it will become increasingly important that the information user not be required to have a background in photogrammetry, remote sensing, or even in the basics of geographic information systems. The user must be able to articulate his requirements in view of the existence of new sources of information. This paper provides the framework for expert systems to accomplish this interface. Specific examples of the capabilities which must be developed in order to maximize the utility of specific images and image archives are presented and discussed.

  9. Securing Collaborative Spectrum Sensing against Untrustworthy Secondary Users in Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    Wang, Wenkai; Li, Husheng; Sun, Yan(Lindsay); Han, Zhu

    2009-12-01

    Cognitive radio is a revolutionary paradigm to mitigate the spectrum scarcity problem in wireless networks. In cognitive radio networks, collaborative spectrum sensing is considered an effective method to improve the performance of primary user detection. In current collaborative spectrum sensing schemes, secondary users are usually assumed to report their sensing information honestly. However, compromised nodes can send false sensing information to mislead the system. In this paper, we study the detection of untrustworthy secondary users in cognitive radio networks. We first analyze the case when there is only one compromised node in a collaborative spectrum sensing scheme. Then we investigate the scenario in which there are multiple compromised nodes. Defense schemes are proposed to detect malicious nodes according to their reporting histories. We calculate the suspicious level of all nodes based on their reports. The reports from nodes with high suspicious levels are excluded from decision-making. Compared with existing defense methods, the proposed scheme can effectively differentiate malicious nodes from honest nodes. As a result, it can significantly improve the performance of collaborative sensing. For example, when there are 10 secondary users, with the primary user detection rate equal to 0.99, one malicious user can make the false alarm rate increase to 72%. The proposed scheme can reduce it to 5%. Two malicious users can make the false alarm rate increase to 85%, and the proposed scheme reduces it to 8%.
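
    The report-history idea can be illustrated with a toy NumPy sketch in which each node's suspicious level is a smoothed rate of disagreement with the fused decision, and overly suspicious nodes are excluded from fusion; the cutoff and data are illustrative, not the paper's exact scheme.

      import numpy as np

      def suspicious_levels(reports, decisions):
          """reports: (rounds, nodes) 0/1 local decisions; decisions: (rounds,) fused results."""
          mismatches = (reports != decisions[:, None]).sum(axis=0)
          rounds = reports.shape[0]
          return (mismatches + 1.0) / (rounds + 2.0)      # smoothed disagreement rate

      def fuse(reports, decisions_so_far, cutoff=0.4):
          """Majority fusion over nodes whose suspicious level is below the cutoff."""
          trusted = suspicious_levels(reports, decisions_so_far) < cutoff
          latest = reports[-1, trusted]
          return int(latest.mean() >= 0.5)

      rng = np.random.default_rng(5)
      truth = rng.integers(0, 2, size=50)                       # true channel state per round
      honest = np.tile(truth[:, None], (1, 9))                  # nine honest nodes
      honest ^= (rng.random(honest.shape) < 0.05).astype(int)   # occasional sensing errors
      malicious = 1 - truth[:, None]                            # one node always lies
      reports = np.hstack([honest, malicious])
      print(suspicious_levels(reports, truth))                  # last entry is highest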

  10. Example-Based Automatic Music-Driven Conventional Dance Motion Synthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Songhua; Fan, Rukun; Geng, Weidong

    We introduce a novel method for synthesizing dance motions that follow the emotions and contents of a piece of music. Our method employs a learning-based approach to model the music to motion mapping relationship embodied in example dance motions along with those motions' accompanying background music. A key step in our method is to train a music to motion matching quality rating function through learning the music to motion mapping relationship exhibited in synchronized music and dance motion data, which were captured from professional human dance performance. To generate an optimal sequence of dance motion segments to match with a piece of music, we introduce a constraint-based dynamic programming procedure. This procedure considers both music to motion matching quality and visual smoothness of a resultant dance motion sequence. We also introduce a two-way evaluation strategy, coupled with a GPU-based implementation, through which we can execute the dynamic programming process in parallel, resulting in significant speedup. To evaluate the effectiveness of our method, we quantitatively compare the dance motions synthesized by our method with motion synthesis results by several peer methods using the motions captured from professional human dancers' performance as the gold standard. We also conducted several medium-scale user studies to explore how perceptually our dance motion synthesis method can outperform existing methods in synthesizing dance motions to match with a piece of music. These user studies produced very positive results on our music-driven dance motion synthesis experiments for several Asian dance genres, confirming the advantages of our method.

  11. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    NASA Astrophysics Data System (ADS)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a camera stereo pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
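
    The contour-analysis step, fitting a B-spline to the silhouette and locating high-curvature points, can be sketched with SciPy as follows; the toy elliptical contour and smoothing value are illustrative.

      import numpy as np
      from scipy.interpolate import splprep, splev

      def bspline_curvature(contour_xy, smoothing=5.0, n_samples=400):
          """contour_xy: (N, 2) closed contour points. Returns sample points and curvature."""
          tck, _ = splprep([contour_xy[:, 0], contour_xy[:, 1]], s=smoothing, per=True)
          u = np.linspace(0.0, 1.0, n_samples)
          x, y = splev(u, tck)
          dx, dy = splev(u, tck, der=1)
          ddx, ddy = splev(u, tck, der=2)
          kappa = (dx * ddy - dy * ddx) / np.power(dx ** 2 + dy ** 2, 1.5)
          return np.column_stack([x, y]), kappa

      theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
      ellipse = np.column_stack([2.0 * np.cos(theta), np.sin(theta)])   # toy silhouette
      points, kappa = bspline_curvature(ellipse)
      candidates = points[np.argsort(-np.abs(kappa))[:5]]               # highest-curvature points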

  12. An Interactive Web-Based Analysis Framework for Remote Sensing Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wang, X. Z.; Zhang, H. M.; Zhao, J. H.; Lin, Q. H.; Zhou, Y. C.; Li, J. H.

    2015-07-01

    Spatiotemporal data, especially remote sensing data, are widely used in ecological, geographical, agricultural, and military research and applications. With the development of remote sensing technology, more and more remote sensing data are accumulated and stored in the cloud. Providing an effective way for cloud users to access and analyse these massive spatiotemporal data from web clients has become an urgent issue. In this paper, we propose a new scalable, interactive and web-based cloud computing solution for massive remote sensing data analysis. We build a spatiotemporal analysis platform to provide the end user with a safe and convenient way to access massive remote sensing data stored in the cloud. The lightweight cloud storage system used to store public data and users' private data is constructed on an open-source distributed file system; massive remote sensing data are stored as public data, while intermediate and input data are stored as private data. The elastic, scalable, and flexible cloud computing environment is built using Docker, an open-source lightweight container technology for the Linux operating system. In the Docker container, open-source software such as IPython, NumPy, GDAL, and GRASS GIS is deployed. Users write scripts in the IPython Notebook web page through the web browser to process data, and the scripts are submitted to the IPython kernel for execution. By comparing the performance of remote sensing data analysis tasks executed in Docker containers, KVM virtual machines, and physical machines, we conclude that the cloud computing environment built with Docker makes the greatest use of host system resources and can handle more concurrent spatiotemporal computing tasks. Docker provides resource isolation for I/O, CPU, and memory, which offers a security guarantee when processing remote sensing data in the IPython Notebook. Users can write complex data processing code on the web directly, so they can design their own data processing algorithms.

  13. Sampling the isothermal-isobaric ensemble by Langevin dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Xingyu

    2016-03-28

    We present a new method of conducting fully flexible-cell molecular dynamics simulations in the isothermal-isobaric ensemble based on Langevin equations of motion. The stochastic coupling to all particle and cell degrees of freedom is introduced in a correct way, in the sense that the stationary configurational distribution is proved to be consistent with that of the isothermal-isobaric ensemble. In order to apply the proposed method in computer simulations, a second-order symmetric numerical integration scheme is developed by Trotter's splitting of the single-step propagator. Moreover, a practical guide for choosing working parameters is suggested for user-specified thermo- and baro-coupling time scales. The method and software implementation are carefully validated by a numerical example.
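
    For reference, a minimal Langevin (constant-temperature) integration step in the BAOAB splitting is sketched below for a single degree of freedom; the fully flexible-cell isothermal-isobaric scheme of the paper additionally couples the cell degrees of freedom, which this sketch omits.

      import numpy as np

      def baoab_step(x, v, force, dt, mass, gamma, kT, rng):
          """One BAOAB Langevin step for a particle in a potential with force(x)."""
          v += 0.5 * dt * force(x) / mass                   # B: half kick
          x += 0.5 * dt * v                                 # A: half drift
          c = np.exp(-gamma * dt)                           # O: Ornstein-Uhlenbeck update
          v = c * v + np.sqrt((1 - c ** 2) * kT / mass) * rng.normal()
          x += 0.5 * dt * v                                 # A: half drift
          v += 0.5 * dt * force(x) / mass                   # B: half kick
          return x, v

      # Harmonic oscillator test: the position variance should approach kT/k.
      rng = np.random.default_rng(6)
      k, mass, kT, gamma, dt = 1.0, 1.0, 1.0, 1.0, 0.01
      force = lambda x: -k * x
      x, v, samples = 0.0, 0.0, []
      for step in range(100000):
          x, v = baoab_step(x, v, force, dt, mass, gamma, kT, rng)
          if step > 10000:
              samples.append(x)
      print(np.var(samples))    # close to kT/k = 1.0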

  14. Optical Indoor Positioning System Based on TFT Technology.

    PubMed

    Gőzse, István

    2015-12-24

    A novel indoor positioning system is presented in the paper. Similarly to the camera-based solutions, it is based on visual detection, but it conceptually differs from the classical approaches. First, the objects are marked by LEDs, and second, a special sensing unit is applied, instead of a camera, to track the motion of the markers. This sensing unit realizes a modified pinhole camera model, where the light-sensing area is fixed and consists of a small number of sensing elements (photodiodes), and it is the hole that can be moved. The markers are tracked by controlling the motion of the hole, such that the light of the LEDs always hits the photodiodes. The proposed concept has several advantages: Apart from its low computational demands, it is insensitive to the disturbing ambient light. Moreover, as every component of the system can be realized by simple and inexpensive elements, the overall cost of the system can be kept low.

  15. Body Motion and Graphing.

    ERIC Educational Resources Information Center

    Nemirovsky, Ricardo; Tierney, Cornelia; Wright, Tracy

    1998-01-01

    Analyzed two children's use of a computer-based motion detector to make sense of symbolic expressions (Cartesian graphs). Found three themes: (1) tool perspectives, efforts to understand graphical responses to body motion; (2) fusion, emergent ways of talking and behaving that merge symbols and referents; and (3) graphical spaces, when changing…

  16. A Location-Based Interactive Model of Internet of Things and Cloud (IoT-Cloud) for Mobile Cloud Computing Applications.

    PubMed

    Dinh, Thanh; Kim, Younghan; Lee, Hyukjoon

    2017-03-01

    This paper presents a location-based interactive model of Internet of Things (IoT) and cloud integration (IoT-cloud) for mobile cloud computing applications, in comparison with the periodic sensing model. In the latter, sensing collections are performed without awareness of sensing demands. Sensors are required to report their sensing data periodically regardless of whether or not there are demands for their sensing services. This leads to unnecessary energy loss due to redundant transmission. In the proposed model, IoT-cloud provides sensing services on demand based on interest and location of mobile users. By taking advantages of the cloud as a coordinator, sensing scheduling of sensors is controlled by the cloud, which knows when and where mobile users request for sensing services. Therefore, when there is no demand, sensors are put into an inactive mode to save energy. Through extensive analysis and experimental results, we show that the location-based model achieves a significant improvement in terms of network lifetime compared to the periodic model.

  17. A Location-Based Interactive Model of Internet of Things and Cloud (IoT-Cloud) for Mobile Cloud Computing Applications †

    PubMed Central

    Dinh, Thanh; Kim, Younghan; Lee, Hyukjoon

    2017-01-01

    This paper presents a location-based interactive model of Internet of Things (IoT) and cloud integration (IoT-cloud) for mobile cloud computing applications, in comparison with the periodic sensing model. In the latter, sensing collections are performed without awareness of sensing demands. Sensors are required to report their sensing data periodically regardless of whether or not there are demands for their sensing services. This leads to unnecessary energy loss due to redundant transmission. In the proposed model, IoT-cloud provides sensing services on demand based on interest and location of mobile users. By taking advantages of the cloud as a coordinator, sensing scheduling of sensors is controlled by the cloud, which knows when and where mobile users request for sensing services. Therefore, when there is no demand, sensors are put into an inactive mode to save energy. Through extensive analysis and experimental results, we show that the location-based model achieves a significant improvement in terms of network lifetime compared to the periodic model. PMID:28257067

  18. VisitSense: Sensing Place Visit Patterns from Ambient Radio on Smartphones for Targeted Mobile Ads in Shopping Malls.

    PubMed

    Kim, Byoungjip; Kang, Seungwoo; Ha, Jin-Young; Song, Junehwa

    2015-07-16

    In this paper, we introduce a novel smartphone framework called VisitSense that automatically detects and predicts a smartphone user's place visits from ambient radio to enable behavioral targeting for mobile ads in large shopping malls. VisitSense enables mobile app developers to adopt visit-pattern-aware mobile advertising for shopping mall visitors in their apps. It also benefits mobile users by allowing them to receive highly relevant mobile ads that are aware of their place visit patterns in shopping malls. To achieve the goal, VisitSense employs accurate visit detection and prediction methods. For accurate visit detection, we develop a change-based detection method to take into consideration the stability change of ambient radio and the mobility change of users. It performs well in large shopping malls where ambient radio is quite noisy and causes existing algorithms to easily fail. In addition, we proposed a causality-based visit prediction model to capture the causality in the sequential visit patterns for effective prediction. We have developed a VisitSense prototype system, and a visit-pattern-aware mobile advertising application that is based on it. Furthermore, we deploy the system in the COEX Mall, one of the largest shopping malls in Korea, and conduct diverse experiments to show the effectiveness of VisitSense.

  19. Ambient and smartphone sensor assisted ADL recognition in multi-inhabitant smart environments.

    PubMed

    Roy, Nirmalya; Misra, Archan; Cook, Diane

    2016-02-01

    Activity recognition in smart environments is an evolving research problem due to the advancement and proliferation of sensing, monitoring and actuation technologies that make large-scale, real-world deployment possible. While activities in a smart home are interleaved, complex and volatile, the number of inhabitants in the environment is also dynamic. A key challenge in designing robust smart home activity recognition approaches is to exploit the users' spatiotemporal behavior and location, focus on the availability of a multitude of devices capable of providing different dimensions of information, and fulfill the underpinning needs for scaling the system beyond a single user or a single home environment. In this paper, we propose a hybrid approach for recognizing complex activities of daily living (ADL) that lie between the two extremes of intensive use of body-worn sensors and the use of ambient sensors. Our approach harnesses the power of simple ambient sensors (e.g., motion sensors) to provide additional 'hidden' context (e.g., room-level location) of an individual, and then combines this context with smartphone-based sensing of micro-level postural/locomotive states. The major novelty is our focus on multi-inhabitant environments, where we show how the use of spatiotemporal constraints along with a multitude of data sources can significantly improve the accuracy and computational overhead of traditional activity recognition approaches such as coupled hidden Markov models. Experimental results on two separate smart home datasets demonstrate that this approach improves the accuracy of complex ADL classification by over 30%, compared to pure smartphone-based solutions.

  20. Ambient and smartphone sensor assisted ADL recognition in multi-inhabitant smart environments

    PubMed Central

    Misra, Archan; Cook, Diane

    2016-01-01

    Activity recognition in smart environments is an evolving research problem, driven by the advancement and proliferation of sensing, monitoring and actuation technologies that make large-scale, real-world deployment possible. Activities in a smart home are interleaved, complex and volatile, and the number of inhabitants in the environment is also dynamic. A key challenge in designing robust smart home activity recognition approaches is to exploit the users' spatiotemporal behavior and location, leverage the availability of a multitude of devices capable of providing different dimensions of information, and fulfill the underpinning needs for scaling the system beyond a single user or a single home environment. In this paper, we propose a hybrid approach for recognizing complex activities of daily living (ADL) that lies between the two extremes of intensive use of body-worn sensors and the use of ambient sensors. Our approach harnesses the power of simple ambient sensors (e.g., motion sensors) to provide additional 'hidden' context (e.g., room-level location) of an individual, and then combines this context with smartphone-based sensing of micro-level postural/locomotive states. The major novelty is our focus on multi-inhabitant environments, where we show how spatiotemporal constraints, together with multiple data sources, can be used to significantly improve the accuracy and reduce the computational overhead of traditional activity recognition approaches such as coupled hidden Markov models. Experimental results on two separate smart home datasets demonstrate that this approach improves the accuracy of complex ADL classification by over 30%, compared to pure smartphone-based solutions. PMID:27042240

  1. Survey of Motion Tracking Methods Based on Inertial Sensors: A Focus on Upper Limb Human Motion

    PubMed Central

    Filippeschi, Alessandro; Schmitz, Norbert; Miezal, Markus; Bleser, Gabriele; Ruffaldi, Emanuele; Stricker, Didier

    2017-01-01

    Motion tracking based on commercial inertial measurement units (IMUs) has been widely studied in recent years, as it is a cost-effective enabling technology for applications in which motion tracking based on optical technologies is unsuitable. This measurement method has a high impact in human performance assessment and human-robot interaction. IMU motion tracking systems are self-contained and wearable, allowing for long-lasting tracking of user motion in situated environments. After a survey of IMU-based human tracking, five techniques for motion reconstruction were selected and compared on the reconstruction of a human arm motion. IMU-based estimation was matched against the Vicon marker-based motion tracking system, taken as ground truth. Results show that all but one of the selected models perform similarly (about 35 mm average position estimation error). PMID:28587178

  2. Ontology-based classification of remote sensing images using spectral rules

    NASA Astrophysics Data System (ADS)

    Andrés, Samuel; Arvor, Damien; Mougenot, Isabelle; Libourel, Thérèse; Durieux, Laurent

    2017-05-01

    Earth Observation data is of great interest for a wide spectrum of scientific domain applications. Enhanced access to remote sensing images for "domain" experts thus represents a great advance, since it allows users to interpret remote sensing images based on their domain expert knowledge. However, such an advantage can also turn into a major limitation if this knowledge is not formalized and is thus difficult to share with, and be understood by, other users. In this context, knowledge representation techniques such as ontologies should play a major role in the future of remote sensing applications. We implemented an ontology-based prototype to automatically classify Landsat images based on explicit spectral rules. The ontology is designed in a very modular way in order to achieve a generic and versatile representation of concepts we consider of utmost importance in remote sensing. The prototype was tested on four subsets of Landsat images, and the results confirmed the potential of ontologies to formalize expert knowledge and classify remote sensing images.
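
    An illustrative spectral rule of the kind such an ontology could encode: a pixel with high NDVI and low shortwave reflectance is labelled vegetation. Band names, thresholds and class labels are invented for the example and are not taken from the prototype described above.

    ```python
    # Hypothetical spectral rules applied to Landsat-style surface reflectances (0..1).
    def classify_pixel(red, nir, swir):
        ndvi = (nir - red) / (nir + red + 1e-9)   # normalized difference vegetation index
        if ndvi > 0.4 and swir < 0.2:
            return "vegetation"
        if ndvi < 0.0:
            return "water"
        return "other"

    print(classify_pixel(red=0.05, nir=0.40, swir=0.10))   # -> vegetation
    print(classify_pixel(red=0.08, nir=0.05, swir=0.02))   # -> water
    ```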

  3. Context Sensing System Analysis for Privacy Preservation Based on Game Theory.

    PubMed

    Wang, Shengling; Li, Luyun; Sun, Weiman; Guo, Junqi; Bie, Rongfang; Lin, Kai

    2017-02-10

    In a context sensing system in which a sensor-equipped mobile phone runs an unreliable context-aware application, the application can infer the user's contexts, based on which it provides personalized services. However, the application may sell the user's contexts to malicious adversaries to earn extra profits, which will hinder its widespread use. In the real world, the actions of the user, the application and the adversary in the context sensing system affect each other, so their payoffs are mutually constrained. To figure out under which conditions they behave well (the user releases the context, the application does not leak it and the adversary does not retrieve it), we take advantage of game theory to analyze the context sensing system. We use the extensive-form game and the repeated game, respectively, to analyze two typical scenarios, single interaction and multiple interactions among the three players, from which Nash equilibria and cooperation conditions are obtained. Our results show that a reputation mechanism for the context sensing system is crucial to privacy preservation in the former scenario, as is the extent to which the participants are concerned about future payoffs in the latter one.

  4. A Study on Immersion and Presence of a Portable Hand Haptic System for Immersive Virtual Reality

    PubMed Central

    Kim, Mingyu; Jeon, Changyu; Kim, Jinmo

    2017-01-01

    This paper proposes a portable hand haptic system using Leap Motion as a haptic interface that can be used in various virtual reality (VR) applications. The proposed hand haptic system was designed as an Arduino-based sensor architecture to enable a variety of tactile sensations at low cost, and is also equipped with a portable wristband. As a haptic system designed for tactile feedback, the proposed system first identifies the left and right hands and then delivers tactile sensations (vibration and heat) to each fingertip (thumb and index finger). It is incorporated into a wearable band-type system, making its use easy and convenient. Next, hand motion is accurately captured using the sensor of the hand tracking system and is used for virtual object control, thus achieving interaction that enhances immersion. A VR application was designed to test the immersion and presence aspects of the proposed system. Lastly, technical and statistical tests were carried out to assess whether the proposed haptic system can provide a new immersive presence to users. According to the results of the presence questionnaire and the simulator sickness questionnaire, we confirmed that the proposed hand haptic system, in comparison with the existing interaction that uses only the hand tracking system, provided greater presence and a more immersive environment in virtual reality. PMID:28513545

  5. A Study on Immersion and Presence of a Portable Hand Haptic System for Immersive Virtual Reality.

    PubMed

    Kim, Mingyu; Jeon, Changyu; Kim, Jinmo

    2017-05-17

    This paper proposes a portable hand haptic system using Leap Motion as a haptic interface that can be used in various virtual reality (VR) applications. The proposed hand haptic system was designed as an Arduino-based sensor architecture to enable a variety of tactile sensations at low cost, and is also equipped with a portable wristband. As a haptic system designed for tactile feedback, the proposed system first identifies the left and right hands and then delivers tactile sensations (vibration and heat) to each fingertip (thumb and index finger). It is incorporated into a wearable band-type system, making its use easy and convenient. Next, hand motion is accurately captured using the sensor of the hand tracking system and is used for virtual object control, thus achieving interaction that enhances immersion. A VR application was designed to test the immersion and presence aspects of the proposed system. Lastly, technical and statistical tests were carried out to assess whether the proposed haptic system can provide a new immersive presence to users. According to the results of the presence questionnaire and the simulator sickness questionnaire, we confirmed that the proposed hand haptic system, in comparison with the existing interaction that uses only the hand tracking system, provided greater presence and a more immersive environment in virtual reality.

  6. Location and Modality Effects in Online Dating: Rich Modality Profile and Location-Based Information Cues Increase Social Presence, While Moderating the Impact of Uncertainty Reduction Strategy.

    PubMed

    Jung, Soyoung; Roh, Soojin; Yang, Hyun; Biocca, Frank

    2017-09-01

    This study investigates how different interface modality features of online dating sites, such as location awareness cues and modality of profiles, affect the sense of social presence of a prospective date. We also examined how various user behaviors aimed at reducing uncertainty about online interactions affect social presence perceptions and are affected by the user interface features. Male users felt a greater sense of social presence when exposed to both location and accessibility cues (geographical proximity) and a richer medium (video profiles). Viewing a richer medium significantly increased the sense of social presence among female participants whereas location-based information sharing features did not directly affect their social presence perception. Augmented social presence, as a mediator, contributed to users' greater intention to meet potential dating partners in a face-to-face setting and to buy paid memberships on online dating sites.

  7. Synergy-Based Bilateral Port: A Universal Control Module for Tele-Manipulation Frameworks Using Asymmetric Master-Slave Systems.

    PubMed

    Brygo, Anais; Sarakoglou, Ioannis; Grioli, Giorgio; Tsagarakis, Nikos

    2017-01-01

    Endowing tele-manipulation frameworks with the capability to accommodate a variety of robotic hands is key to achieving high performance, since it permits the end-effector to be flexibly interchanged according to the task considered. This requires the development of control policies that not only cope with asymmetric master-slave systems but whose high-level components are also designed in a unified space, in abstraction from the specifics of the devices. To address this dual challenge, a novel synergy port is developed that resolves the kinematic, sensing, and actuation asymmetries of the considered system by generating motion and force feedback references in the hardware-independent hand postural synergy space. It builds upon the concept of the Cartesian-based synergy matrix, which is introduced as a tool mapping the fingertips' Cartesian space to the directions oriented along the grasp principal components. To assess the effectiveness of the proposed approach, the synergy port has been integrated into the control system of a highly asymmetric tele-manipulation framework, in which the 3-finger hand exoskeleton HEXOTRAC is used as a master device to control the SoftHand, a robotic hand whose transmission system relies on a single motor to drive all joints along a soft synergistic path. The platform is further enriched with the vision-based motion capture system Optitrack to monitor the 6D trajectory of the user's wrist, which is used to control the robotic arm on which the SoftHand is mounted. Experiments have been conducted with the humanoid robot COMAN and the KUKA LWR robotic manipulator. Results indicate that this bilateral interface is highly intuitive and allows users with no prior experience to reach, grasp, and transport a variety of objects exhibiting very different shapes and impedances. In addition, the hardware and control solutions proved capable of accommodating users with different hand kinematics. Finally, the proposed control framework offers a universal, flexible, and intuitive interface for performing effective tele-manipulation.

  8. Self-sensing paper-based actuators employing ferromagnetic nanoparticles and graphite

    NASA Astrophysics Data System (ADS)

    Phan, Hoang-Phuong; Dinh, Toan; Nguyen, Tuan-Khoa; Vatani, Ashkan; Md Foisal, Abu Riduan; Qamar, Afzaal; Kermany, Atieh Ranjbar; Dao, Dzung Viet; Nguyen, Nam-Trung

    2017-04-01

    Paper-based microfluidics and sensors have attracted great attention. Although a large number of paper-based devices have been developed, surprisingly there are only a few studies investigating paper actuators. To fulfill the requirements for the integration of both sensors and actuators into paper, this work presents an unprecedented platform which utilizes ferromagnetic particles for actuation and graphite for motion monitoring. The use of the integrated mechanical sensing element eliminates the reliance on image processing for motion detection and also allows real-time measurements of the dynamic response in paper-based actuators. The proposed platform can also be quickly fabricated using a simple process, indicating its potential for controllable paper-based lab on chip.

  9. HERMA-Heartbeat Microwave Authentication

    NASA Technical Reports Server (NTRS)

    Haque, Salman-ul Mohammed (Inventor); Chow, Edward (Inventor); McKee, Michael Ray (Inventor); Tkacenko, Andre (Inventor); Lux, James Paul (Inventor)

    2018-01-01

    Systems and methods for identifying and/or authenticating individuals utilizing microwave sensing modules are disclosed. A HEaRtbeat Microwave Authentication (HERMA) system can enable the active identification and/or authentication of a user by analyzing reflected RF signals that contain a person's unique characteristics related to their heartbeats. An illumination signal is transmitted towards a person where a reflected signal captures the motion of the skin and tissue (i.e. displacement) due to the person's heartbeats. The HERMA system can utilize existing transmitters in a mobile device (e.g. Wi-Fi, Bluetooth, Cellphone signals) as the illumination source with at least one external receive antenna. The received reflected signals can be pre-processed and analyzed to identify and/or authenticate a user.

  10. Double Threshold Energy Detection Based Cooperative Spectrum Sensing for Cognitive Radio Networks with QoS Guarantee

    NASA Astrophysics Data System (ADS)

    Hu, Hang; Yu, Hong; Zhang, Yongzhi

    2013-03-01

    Cooperative spectrum sensing, which can greatly improve the ability to discover spectrum opportunities, is regarded as an enabling mechanism for cognitive radio (CR) networks. In this paper, we employ a double-threshold detection method in the energy detector to perform spectrum sensing; only CR users with reliable sensing information are allowed to transmit a one-bit local decision to the fusion center. Simulation results show that the proposed double-threshold detection method not only improves the sensing performance but also saves bandwidth on the reporting channel compared with the conventional single-threshold detection method. By weighting the sensing performance and the consumption of system resources in a utility function that is maximized with respect to the number of CR users, we show that the optimal number of CR users is related to the price of these Quality-of-Service (QoS) requirements.
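
    A hedged sketch of the double-threshold idea: a CR user forwards a one-bit decision only when its energy statistic is clearly above the upper threshold or clearly below the lower one, and stays silent otherwise to save reporting bandwidth. The thresholds and the signal model are illustrative, not those of the paper.

    ```python
    # Double-threshold energy detection (illustrative thresholds and signals).
    import numpy as np

    def local_decision(samples, thr_low, thr_high):
        energy = np.mean(np.abs(samples) ** 2)   # energy test statistic
        if energy >= thr_high:
            return 1          # report "primary user present"
        if energy <= thr_low:
            return 0          # report "spectrum free"
        return None           # unreliable: withhold the report

    rng = np.random.default_rng(1)
    noise_only = rng.normal(0.0, 1.0, 1000)
    signal_plus_noise = rng.normal(0.0, 1.0, 1000) + rng.normal(0.0, 1.5, 1000)

    print(local_decision(noise_only, thr_low=1.3, thr_high=2.0))         # likely 0
    print(local_decision(signal_plus_noise, thr_low=1.3, thr_high=2.0))  # likely 1
    ```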

  11. Noise and range considerations for close-range radar sensing of life signs underwater.

    PubMed

    Hafner, Noah; Lubecke, Victor

    2011-01-01

    Close-range underwater sensing of motion-based life signs can be performed with low-power Doppler radar and ultrasound techniques. The corresponding noise and range performance trade-offs are examined here with regard to choice of frequency and technology. The frequency range examined includes part of the UHF and microwave spectrum. Underwater detection of motion by radar in freshwater and saltwater is demonstrated. Radar measurements exhibited reduced susceptibility to noise compared to ultrasound. While higher-frequency radar exhibited a better signal-to-noise ratio, propagation was superior at lower frequencies. Radar detection of motion through saltwater was also demonstrated at restricted ranges (1-2 cm) with low-power transmission (10 dBm). The results facilitate the establishment of guidelines for the optimal choice of technology for underwater measurement of motion-based life signs, with respect to trade-offs involving range and noise.

  12. A novel secret sharing with two users based on joint transform correlator and compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhao, Tieyu; Chi, Yingying

    2018-05-01

    Recently, the joint transform correlator (JTC) has been widely applied to image encryption and authentication. This paper presents a novel secret sharing scheme with two users based on JTC. Both users must be present during decryption, so the system has high security and reliability. In the scheme, the two users use their fingerprints to encrypt the plaintext, and they can decrypt only if both of them provide fingerprints that are successfully authenticated. The linear relationship between the plaintext and ciphertext is broken using compressive sensing, which can resist existing attacks on JTC. The results of the theoretical analysis and numerical simulation confirm the validity of the system.

  13. A Novel Device for Total Acoustic Output Measurement of High Power Transducers

    NASA Astrophysics Data System (ADS)

    Howard, S.; Twomey, R.; Morris, H.; Zanelli, C. I.

    2010-03-01

    The objective of this work was to develop a device for ultrasound power measurement applicable over a broad range of medical transducer types, orientations and powers, and which supports automatic measurements to simplify use and minimize errors. Considering the recommendations from standards such as IEC 61161, an accurate electromagnetic null-balance has been designed for ultrasound power measurements. The sensing element is placed in the water to eliminate errors due to surface tension and water evaporation, and the motion and the detection of force are constrained to one axis to increase immunity to vibration from the floor, water sloshing and water surface waves. A transparent tank was designed so it could easily be submerged in a larger tank to accommodate large transducers or side-firing geometries, and it can also be turned upside-down for upward-firing transducers. A vacuum lid allows degassing of the water and target in situ. An external control module was designed to operate the sensing/driving loop and to communicate with a local computer for data logging. The sensing algorithm, which incorporates temperature compensation, compares the feedback force needed to cancel the motion for sources in the "on" and "off" states. These two states can be controlled by the control unit or manually by the user, under guidance from a graphical user interface (the system presents measured power live during collection). Software allows calibration against standard weights or against independently calibrated acoustic sources. The design accommodates a variety of targets, including cone, rubber and brush targets and an oil-filled target for power measurement via buoyancy changes. Measurement examples are presented, including HIFU sources operating at powers from 1 to 100.
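
    A short illustration of how a radiation-force balance reading of this kind maps to acoustic power: for a perfectly absorbing target at normal incidence, the power is the net force (source on minus source off) times the speed of sound in water. The numbers below are made up for the example and are not measurements from the paper.

    ```python
    # Radiation-force to acoustic-power conversion (absorbing target, normal incidence).
    C_WATER = 1482.0            # approximate speed of sound in water, m/s (~20 degC)

    def acoustic_power(force_on_N, force_off_N, c=C_WATER):
        """Acoustic power in watts from the net radiation force P = (F_on - F_off) * c."""
        return (force_on_N - force_off_N) * c

    # e.g. a 6.8 mN net force corresponds to roughly 10 W of acoustic power
    print(acoustic_power(force_on_N=6.8e-3, force_off_N=0.0))
    ```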

  14. Smart Sensor-Based Motion Detection System for Hand Movement Training in Open Surgery.

    PubMed

    Sun, Xinyao; Byrns, Simon; Cheng, Irene; Zheng, Bin; Basu, Anup

    2017-02-01

    We introduce a smart sensor-based motion detection technique for objective measurement and assessment of surgical dexterity among users at different experience levels. The goal is to allow trainees to evaluate their performance against a reference model shared through communication technology, e.g., the Internet, without the physical presence of an evaluating surgeon. While in the current implementation we used a Leap Motion Controller to obtain motion data for analysis, our technique can be applied to motion data captured by other smart sensors, e.g., OptiTrack. To differentiate motions captured from different participants, measurement and assessment in our approach are achieved using two strategies: (1) low-level descriptive statistical analysis, and (2) Hidden Markov Model (HMM) classification. Based on our surgical knot-tying task experiment, we conclude that finger motions generated by users with different surgical dexterity, e.g., expert and novice performers, display differences in path length, number of movements and task completion time. In order to validate the discriminatory ability of HMM for classifying different movement patterns, a non-surgical task was included in our analysis. Experimental results demonstrate that our approach had 100% accuracy in discriminating between expert and novice performances. Our proposed motion analysis technique applied to open surgical procedures is a promising step towards the development of objective computer-assisted assessment and training systems.
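
    A minimal sketch of the low-level descriptive metrics named above (path length, number of movements, completion time) computed from a stream of 3-D fingertip positions. The sampling rate and speed threshold are assumptions for the example, not the paper's settings.

    ```python
    # Descriptive motion metrics from tracked fingertip positions (assumed parameters).
    import numpy as np

    def motion_metrics(positions, fs=100.0, speed_thresh=0.05):
        """positions: (N, 3) array of fingertip coordinates in metres."""
        disp = np.diff(positions, axis=0)                 # per-sample displacement
        step = np.linalg.norm(disp, axis=1)
        path_length = step.sum()                          # total path length (m)
        speed = step * fs                                 # instantaneous speed (m/s)
        moving = speed > speed_thresh
        # count a "movement" as each contiguous run of above-threshold speed
        n_movements = int(np.sum(np.diff(moving.astype(int)) == 1) + moving[0])
        completion_time = len(positions) / fs             # seconds
        return path_length, n_movements, completion_time

    traj = np.cumsum(np.random.default_rng(0).normal(0, 1e-3, (500, 3)), axis=0)
    print(motion_metrics(traj))
    ```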

  15. Human-computer interface glove using flexible piezoelectric sensors

    NASA Astrophysics Data System (ADS)

    Cha, Youngsu; Seo, Jeonggyu; Kim, Jun-Sik; Park, Jung-Min

    2017-05-01

    In this note, we propose a human-computer interface glove based on flexible piezoelectric sensors. We select polyvinylidene fluoride as the piezoelectric material for the sensors because of advantages such as a steady piezoelectric characteristic and good flexibility. The sensors are installed in a fabric glove by means of pockets and Velcro bands. We detect changes in the angles of the finger joints from the outputs of the sensors, and use them to control a virtual hand that is utilized in virtual object manipulation. To assess the sensing ability of the piezoelectric sensors, we compare the processed angles from the sensor outputs with the real angles from a camera recording. With good agreement between the processed and real angles, we successfully demonstrate the user interaction system with the virtual hand and interface glove based on the flexible piezoelectric sensors, for four hand motions: fist clenching, pinching, touching, and grasping.

  16. User-centric incentive design for participatory mobile phone sensing

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Lu, Haoyang

    2014-05-01

    Mobile phone sensing is a critical underpinning of pervasive mobile computing, and is one of the key factors for improving people's quality of life in modern society via collective utilization of the on-board sensing capabilities of people's smartphones. The increasing demands for sensing services and ambient awareness in mobile environments highlight the necessity of active participation by individual mobile users in sensing tasks. User incentives for such participation have been continually offered from an application-centric perspective, i.e., as payments from the sensing server to compensate users' sensing costs. These payments, however, are manipulated to maximize the benefits of the sensing server, ignoring the runtime flexibility and benefits of participating users. This paper presents a novel framework of user-centric incentive design and develops a universal sensing platform which translates heterogeneous sensing tasks into a generic sensing plan specifying the task-independent requirements of sensing performance. We use this sensing plan as input to reduce three categories of sensing costs, which together cover the possible sources hindering users' participation in sensing.

  17. Optical Indoor Positioning System Based on TFT Technology

    PubMed Central

    Gőzse, István

    2015-01-01

    A novel indoor positioning system is presented in the paper. Similarly to the camera-based solutions, it is based on visual detection, but it conceptually differs from the classical approaches. First, the objects are marked by LEDs, and second, a special sensing unit is applied, instead of a camera, to track the motion of the markers. This sensing unit realizes a modified pinhole camera model, where the light-sensing area is fixed and consists of a small number of sensing elements (photodiodes), and it is the hole that can be moved. The markers are tracked by controlling the motion of the hole, such that the light of the LEDs always hits the photodiodes. The proposed concept has several advantages: Apart from its low computational demands, it is insensitive to the disturbing ambient light. Moreover, as every component of the system can be realized by simple and inexpensive elements, the overall cost of the system can be kept low. PMID:26712753

  18. Effects of Mental Load and Fatigue on Steady-State Evoked Potential Based Brain Computer Interface Tasks: A Comparison of Periodic Flickering and Motion-Reversal Based Visual Attention.

    PubMed

    Xie, Jun; Xu, Guanghua; Wang, Jing; Li, Min; Han, Chengcheng; Jia, Yaguang

    The steady-state visual evoked potential (SSVEP) based paradigm is a conventional BCI method with the advantages of a high information transfer rate, high tolerance to artifacts and robust performance across users. However, the mental load and fatigue that occur when users stare at flickering stimuli are a critical problem in the implementation of SSVEP-based BCIs. Based on the electroencephalography (EEG) power indices α, θ, θ + α, the ratio index θ/α, and the response properties of amplitude and SNR, this study quantitatively evaluated the mental load and fatigue in both the conventional flickering and the novel motion-reversal visual attention tasks. Results over nine subjects revealed significantly lower mental load in the motion-reversal task than in the flickering task. The interaction between the factors "stimulation type" and "fatigue level" also identified the motion-reversal stimulation as a superior anti-fatigue solution for long-term BCI operation. Taken together, our work provides an objective method favorable for the design of more practically applicable steady-state evoked potential based BCIs.
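
    A hedged sketch of the EEG power indices used for this kind of fatigue assessment: theta and alpha band power from a Welch power spectral density, and the theta/alpha ratio. The sampling rate, band edges and the synthetic signal are common conventions assumed for the example, not values from the paper.

    ```python
    # Theta/alpha band-power ratio from a Welch PSD (illustrative parameters).
    import numpy as np
    from scipy.signal import welch

    def band_power(f, pxx, lo, hi):
        mask = (f >= lo) & (f < hi)
        return np.trapz(pxx[mask], f[mask])

    fs = 250.0                                               # assumed EEG sampling rate (Hz)
    eeg = np.random.default_rng(2).normal(size=int(60 * fs)) # 1 min of placeholder data

    f, pxx = welch(eeg, fs=fs, nperseg=int(4 * fs))
    theta = band_power(f, pxx, 4.0, 8.0)
    alpha = band_power(f, pxx, 8.0, 13.0)
    print("theta/alpha index:", theta / alpha)
    ```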

  19. Optimal Configuration of Human Motion Tracking Systems: A Systems Engineering Approach

    NASA Technical Reports Server (NTRS)

    Henderson, Steve

    2005-01-01

    Human motion tracking systems represent a crucial technology in the area of modeling and simulation. These systems, which allow engineers to capture human motion for study or replication in virtual environments, have broad applications in several research disciplines including human engineering, robotics, and psychology. These systems are based on several sensing paradigms, including electro-magnetic, infrared, and visual recognition. Each of these paradigms requires specialized environments and hardware configurations to optimize performance of the human motion tracking system. Ideally, these systems are used in a laboratory or other facility that was designed to accommodate the particular sensing technology. For example, electromagnetic systems are highly vulnerable to interference from metallic objects, and should be used in a specialized lab free of metal components.

  20. The Use of Virtual Reality Technology in the Treatment of Anxiety and Other Psychiatric Disorders.

    PubMed

    Maples-Keller, Jessica L; Bunnell, Brian E; Kim, Sae-Jin; Rothbaum, Barbara O

    After participating in this activity, learners should be better able to: • Evaluate the literature regarding the effectiveness of incorporating virtual reality (VR) in the treatment of psychiatric disorders • Assess the use of exposure-based intervention for anxiety disorders. ABSTRACT: Virtual reality (VR) allows users to experience a sense of presence in a computer-generated, three-dimensional environment. Sensory information is delivered through a head-mounted display and specialized interface devices. These devices track head movements so that the movements and images change in a natural way with head motion, allowing for a sense of immersion. VR, which allows for controlled delivery of sensory stimulation via the therapist, is a convenient and cost-effective treatment. This review focuses on the available literature regarding the effectiveness of incorporating VR within the treatment of various psychiatric disorders, with particular attention to exposure-based intervention for anxiety disorders. A systematic literature search was conducted in order to identify studies implementing VR-based treatment for anxiety or other psychiatric disorders. This article reviews the history of the development of VR-based technology and its use within psychiatric treatment, the empirical evidence for VR-based treatment, and the benefits of using VR for psychiatric research and treatment. It also presents recommendations for how to incorporate VR into psychiatric care and discusses future directions for VR-based treatment and clinical research.

  1. Satellites for What? Creating User Communities for Space-based Data in France: The Case from LERTS to CESBIO.

    PubMed

    Cirac-Claveras, Gemma

    2018-01-01

    This article uses a French case to explore the who, how, and why of satellite remote-sensing development and its transition towards routine utilization in the domain of ecosystems ecology. It discusses the evolution of a community of technology developers promoting remote-sensing capabilities (mostly sponsored by the French space agency). They attempted to legitimate quality scientific practices, establish the authority of satellite remote-sensing data within academic institutions, and build a community of technology users. This article, hence, is intended to contribute to historical interest in how a community of users is constructed for a technological system.

  2. AMUC: Associated Motion capture User Categories.

    PubMed

    Norman, Sally Jane; Lawson, Sian E M; Olivier, Patrick; Watson, Paul; Chan, Anita M-A; Dade-Robertson, Martyn; Dunphy, Paul; Green, Dave; Hiden, Hugo; Hook, Jonathan; Jackson, Daniel G

    2009-07-13

    The AMUC (Associated Motion capture User Categories) project consisted of building a prototype sketch retrieval client for exploring motion capture archives. High-dimensional datasets reflect the dynamic process of motion capture and comprise high-rate sampled data of a performer's joint angles; in response to multiple query criteria, these data can potentially yield different kinds of information. The AMUC prototype harnesses graphic input via an electronic tablet as a query mechanism, time and position signals obtained from the sketch being mapped to the properties of data streams stored in the motion capture repository. As well as proposing a pragmatic solution for exploring motion capture datasets, the project demonstrates the conceptual value of iterative prototyping in innovative interdisciplinary design. The AMUC team was composed of live performance practitioners and theorists conversant with a variety of movement techniques, bioengineers who recorded and processed motion data for integration into the retrieval tool, and computer scientists who designed and implemented the retrieval system and server architecture, scoped for Grid-based applications. Creative input on information system design and navigation, and digital image processing, underpinned implementation of the prototype, which has undergone preliminary trials with diverse users, allowing identification of rich potential development areas.

  3. Privacy-Preserving Location-Based Service Scheme for Mobile Sensing Data.

    PubMed

    Xie, Qingqing; Wang, Liangmin

    2016-11-25

    With the wide use of mobile sensing applications, more and more location-embedded data are collected and stored in mobile clouds, such as iCloud, Samsung cloud, etc. Using these data, the cloud service provider (CSP) can provide location-based services (LBS) for users. However, the mobile cloud is untrustworthy, and privacy concerns force sensitive locations to be stored on the mobile cloud in an encrypted form. This, in turn, makes it challenging to utilize these data to provide efficient LBS. To solve this problem, we propose a privacy-preserving LBS scheme for mobile sensing data, based on the RSA (Rivest, Shamir and Adleman) algorithm and a ciphertext-policy attribute-based encryption (CP-ABE) scheme. The mobile cloud can perform location distance computation and comparison efficiently for authorized users, without leaking location privacy. Finally, theoretical security analysis and experimental evaluation demonstrate that our scheme is secure against chosen-plaintext attacks (CPA) and efficient enough for practical applications in terms of user-side computation overhead.

  4. Privacy-Preserving Location-Based Service Scheme for Mobile Sensing Data †

    PubMed Central

    Xie, Qingqing; Wang, Liangmin

    2016-01-01

    With the wide use of mobile sensing applications, more and more location-embedded data are collected and stored in mobile clouds, such as iCloud, Samsung cloud, etc. Using these data, the cloud service provider (CSP) can provide location-based services (LBS) for users. However, the mobile cloud is untrustworthy, and privacy concerns force sensitive locations to be stored on the mobile cloud in an encrypted form. This, in turn, makes it challenging to utilize these data to provide efficient LBS. To solve this problem, we propose a privacy-preserving LBS scheme for mobile sensing data, based on the RSA (Rivest, Shamir and Adleman) algorithm and a ciphertext-policy attribute-based encryption (CP-ABE) scheme. The mobile cloud can perform location distance computation and comparison efficiently for authorized users, without leaking location privacy. Finally, theoretical security analysis and experimental evaluation demonstrate that our scheme is secure against chosen-plaintext attacks (CPA) and efficient enough for practical applications in terms of user-side computation overhead. PMID:27897984

  5. Molecular sensing with magnetic nanoparticles using magnetic spectroscopy of nanoparticle Brownian motion.

    PubMed

    Zhang, Xiaojuan; Reeves, Daniel B; Perreard, Irina M; Kett, Warren C; Griswold, Karl E; Gimi, Barjor; Weaver, John B

    2013-12-15

    Functionalized magnetic nanoparticles (mNPs) have shown promise in biosensing and other biomedical applications. Here we use functionalized mNPs to develop a highly sensitive, versatile sensing strategy required in practical biological assays and potentially in in vivo analysis. We demonstrate a new sensing scheme based on magnetic spectroscopy of nanoparticle Brownian motion (MSB) to quantitatively detect molecular targets. MSB uses the harmonics of oscillating mNPs as a metric for the freedom of rotational motion, thus reflecting the bound state of the mNP. The harmonics can be detected in vivo from nanogram quantities of iron within 5 s. Using a streptavidin-biotin binding system, we show that the detection limit of the current MSB technique is lower than 150 pM (0.075 pmol), which is much more sensitive than previously reported techniques based on mNP detection. Using mNPs conjugated with two anti-thrombin DNA aptamers, we show that thrombin can be detected with high sensitivity (4 nM or 2 pmol). A DNA-DNA interaction was also investigated; the results demonstrated that sequence-selective DNA detection can be achieved with 100 pM (0.05 pmol) sensitivity. The results of using MSB to sense these interactions show that the MSB-based sensing technique can achieve rapid measurement (within 10 s) and is suitable for detecting and quantifying a wide range of biomarkers or analytes. It has the potential to be applied in a variety of biomedical applications or diagnostic analyses.
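
    An illustrative extraction of the harmonic amplitudes that serve as the MSB metric: a sinusoidal drive field passed through a saturating (Langevin-like) magnetisation response produces odd harmonics whose ratios change as particle rotation is hindered. The drive frequency, sampling rate and response model below are assumptions for the sketch, not the authors' instrumentation.

    ```python
    # Harmonic extraction from a simulated nanoparticle magnetisation signal.
    import numpy as np

    fs, f0, T = 100_000.0, 1_000.0, 0.05          # sample rate, drive frequency, duration
    t = np.arange(0, T, 1 / fs)
    drive = np.sin(2 * np.pi * f0 * t)
    response = np.tanh(3.0 * drive)               # saturating response -> odd harmonics

    spectrum = np.abs(np.fft.rfft(response)) / len(t)
    freqs = np.fft.rfftfreq(len(t), 1 / fs)

    def harmonic(n):
        return spectrum[np.argmin(np.abs(freqs - n * f0))]

    # the 5th/3rd harmonic ratio is a commonly used bound-state metric
    print("3rd:", harmonic(3), "5th:", harmonic(5), "5th/3rd:", harmonic(5) / harmonic(3))
    ```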

  6. A Web Service-based framework model for people-centric sensing applications applied to social networking.

    PubMed

    Nunes, David; Tran, Thanh-Dien; Raposo, Duarte; Pinto, André; Gomes, André; Silva, Jorge Sá

    2012-01-01

    As the Internet evolved, social networks (such as Facebook) have bloomed and brought together an astonishing number of users. Mashing up mobile phones and sensors with these social environments enables the creation of people-centric sensing systems which have great potential for expanding our current social networking usage. However, such systems also have many associated technical challenges, such as privacy concerns, activity detection mechanisms or intermittent connectivity, as well as limitations due to the heterogeneity of sensor nodes and networks. Considering the openness of the Web 2.0, good technical solutions for these cases consist of frameworks that expose sensing data and functionalities as common Web-Services. This paper presents our RESTful Web Service-based model for people-centric sensing frameworks, which uses sensors and mobile phones to detect users' activities and locations, sharing this information amongst the user's friends within a social networking site. We also present some screenshot results of our experimental prototype.
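
    A minimal sketch of exposing sensed activity and location as a RESTful Web Service, in the spirit of the framework described above. The endpoint paths, JSON fields and in-memory store are invented for illustration; the paper's actual API is not specified here.

    ```python
    # Hypothetical REST endpoints for people-centric sensing data (Flask).
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    readings = []   # in-memory store standing in for the sensing back end

    @app.route("/users/<user_id>/activity", methods=["POST"])
    def post_activity(user_id):
        # a phone or sensor node posts e.g. {"activity": "walking", "lat": ..., "lon": ...}
        readings.append({"user": user_id, **request.get_json()})
        return jsonify({"status": "stored"}), 201

    @app.route("/users/<user_id>/activity", methods=["GET"])
    def get_activity(user_id):
        # friends' clients poll the latest shared activity/location for a user
        mine = [r for r in readings if r["user"] == user_id]
        return jsonify(mine[-1] if mine else {})

    if __name__ == "__main__":
        app.run(port=5000)
    ```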

  7. A Network Coverage Information-Based Sensor Registry System for IoT Environments.

    PubMed

    Jung, Hyunjun; Jeong, Dongwon; Lee, Sukhoon; On, Byung-Won; Baik, Doo-Kwon

    2016-07-25

    The Internet of Things (IoT) is expected to provide better services through the interaction of physical objects via the Internet. However, its limitations cause an interoperability problem when sensed data are exchanged between sensor nodes in wireless sensor networks (WSNs), which constitute the core infrastructure of the IoT. To address this problem, a Sensor Registry System (SRS) is used. By using an SRS, the meaning of the heterogeneous sensed data is preserved. If users move along a road, their mobile devices predict their next positions and obtain the sensed data for those positions from the SRS. If the WSNs in the area through which the users move are unstable, the sensed data will be lost. Consider a situation where a user passes through a dangerous area: if the user's mobile device cannot receive information, the user cannot be warned about the dangerous situation. To avoid this, two novel SRSs that use network coverage information have been proposed: one uses OpenSignal and the other uses the probabilistic distribution of the users accessing the SRS. An empirical study showed that the proposed method can seamlessly provide services related to sensed data under abnormal circumstances.

  8. Investigation of visually induced motion sickness in dynamic 3D contents based on subjective judgment, heart rate variability, and depth gaze behavior.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2014-01-01

    Visually induced motion sickness (VIMS) is an important safety issue in stereoscopic 3D technology. Accompanying subjective judgment of VIMS with objective measurement is useful to identify not only the biomedical effects of dynamic 3D contents, but also the provoking scenes that induce VIMS, the duration of VIMS, and user behavior during VIMS. Heart rate variability and depth gaze behavior are appropriate physiological indicators for such objective observation. However, there is no information about the relationship between subjective judgment of VIMS, heart rate variability, and depth gaze behavior. In this paper, we present a novel investigation of VIMS based on the simulator sickness questionnaire (SSQ), electrocardiography (ECG), and 3D gaze tracking. Statistical analysis of the SSQ data shows that nausea and disorientation symptoms increase as the amount of dynamic motion increases (nausea: p < 0.005; disorientation: p < 0.05). To reduce VIMS, the SSQ and ECG data suggest that users should perform voluntary gaze fixation at one point when experiencing vertical motion (up or down) and horizontal motion (turning left and right) in dynamic 3D contents. Observation of the 3D gaze tracking data reveals that users who experienced VIMS tended to have less stable depth gaze than those who did not experience VIMS.

  9. Position Tracking During Human Walking Using an Integrated Wearable Sensing System.

    PubMed

    Zizzo, Giulio; Ren, Lei

    2017-12-10

    Progress has been made enabling expensive, high-end inertial measurement units (IMUs) to be used as tracking sensors. However, the cost of these IMUs is prohibitive to their widespread use, and hence the potential of low-cost IMUs is investigated in this study. A wearable low-cost sensing system consisting of IMUs and ultrasound sensors was developed. Core to this system is an extended Kalman filter (EKF), which provides both zero-velocity updates (ZUPTs) and Heuristic Drift Reduction (HDR). The IMU data was combined with ultrasound range measurements to improve accuracy. When a map of the environment was available, a particle filter was used to impose constraints on the possible user motions. The system was therefore composed of three subsystems: IMUs, ultrasound sensors, and a particle filter. A Vicon motion capture system was used to provide ground truth information, enabling validation of the sensing system. Using only the IMU, the system showed loop misclosure errors of 1% with a maximum error of 4-5% during walking. The addition of the ultrasound sensors resulted in a 15% reduction in the total accumulated error. Lastly, the particle filter was capable of providing noticeable corrections, which could keep the tracking error below 2% after the first few steps.
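
    A hedged sketch of one common zero-velocity update (ZUPT) trigger used in IMU-based pedestrian tracking: a stance phase is declared when the accelerometer magnitude stays close to gravity with low variance over a short window. The window length and thresholds are typical values chosen for the example, not the detector or parameters reported in the paper.

    ```python
    # Stance-phase (zero-velocity) detection from accelerometer samples.
    import numpy as np

    G = 9.81

    def zupt_mask(accel, fs=100.0, win_s=0.1, mag_tol=0.4, var_tol=0.05):
        """accel: (N, 3) accelerometer samples in m/s^2 -> boolean stance mask."""
        mag = np.linalg.norm(accel, axis=1)
        win = max(1, int(win_s * fs))
        stance = np.zeros(len(mag), dtype=bool)
        for i in range(len(mag) - win):
            w = mag[i:i + win]
            stance[i] = abs(w.mean() - G) < mag_tol and w.var() < var_tol
        return stance

    # synthetic example: still for 1 s, then shaken for 1 s
    fs = 100.0
    still = np.tile([0.0, 0.0, G], (int(fs), 1))
    moving = still + np.random.default_rng(3).normal(0, 1.0, still.shape)
    print(zupt_mask(np.vstack([still, moving]), fs=fs).mean())   # roughly 0.5
    ```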

  10. NASA programs in technology transfer and their relation to remote sensing education

    NASA Technical Reports Server (NTRS)

    Weinstein, R. H.

    1980-01-01

    Technology transfer to users is a central feature of NASA programs. In each major area of responsibility, a variety of mechanisms was established to provide for this transfer of operational capability to the proper end user, be it a Federal agency, industry, or other public sector users. In addition, the Technology Utilization program was established to cut across all program areas and to make available a wealth of 'spinoff' technology (i.e., secondary applications of space technology to ground-based use). The transfer of remote sensing technology, particularly to state and local users, presents some real challenges in application and education for NASA and the university community. The agency's approach to the transfer of remote sensing technology and the current and potential role of universities in the process are considered.

  11. Ferroelectric Zinc Oxide Nanowire Embedded Flexible Sensor for Motion and Temperature Sensing.

    PubMed

    Shin, Sung-Ho; Park, Dae Hoon; Jung, Joo-Yun; Lee, Min Hyung; Nah, Junghyo

    2017-03-22

    We report a simple method to realize a multifunctional flexible motion sensor using ferroelectric lithium-doped ZnO-PDMS. The ferroelectric layer enables piezoelectric dynamic sensing and provides additional motion information to more precisely discriminate between different motions. The PEDOT:PSS-functionalized AgNWs, working as electrode layers for the piezoelectric sensing layer, resistively detect changes in both movement and temperature. Thus, through the optimal integration of both elements, the sensing limit, accuracy, and functionality can be further expanded. The method introduced here is a simple and effective route to realizing a high-performance flexible motion sensor with integrated multifunctionality.

  12. A Multimodal Adaptive Wireless Control Interface for People With Upper-Body Disabilities.

    PubMed

    Fall, Cheikh Latyr; Quevillon, Francis; Blouin, Martine; Latour, Simon; Campeau-Lecours, Alexandre; Gosselin, Clement; Gosselin, Benoit

    2018-06-01

    This paper describes a multimodal body-machine interface (BoMI) to help individuals with upper-limb disabilities use advanced assistive technologies, such as robotic arms. The proposed system uses a wearable and wireless body sensor network (WBSN) supporting up to six sensor nodes to measure the natural upper-body gestures of the users and translate them into control commands. Natural gestures of the head and upper-body parts, as well as muscular activity, are measured using inertial measurement units (IMUs) and surface electromyography (sEMG) with custom-designed multimodal wireless sensor nodes. An IMU sensing node is attached to a headset worn by the user; it has a size of 2.9 cm × 2.9 cm, a maximum power consumption of 31 mW, and provides an angular precision of 1°. Multimodal patch sensor nodes, including both IMU and sEMG sensing modalities, are placed over the user's able body parts to measure motion and muscular activity. These nodes have a size of 2.5 cm × 4.0 cm and a maximum power consumption of 11 mW. The proposed BoMI runs on a Raspberry Pi. It can adapt to several types of users through different control scenarios using head and shoulder motion, as well as muscular activity, and provides a power autonomy of up to 24 h. JACO, a 6-DoF assistive robotic arm, is used as a testbed to evaluate the performance of the proposed BoMI. Ten able-bodied subjects performed ADLs while operating the AT device, using the Test d'Évaluation des Membres Supérieurs de Personnes Âgées to evaluate and compare the proposed BoMI with the conventional joystick controller. It is shown that users can perform all tasks with the proposed BoMI almost as fast as with the joystick controller, with only 30% time overhead on average, while the BoMI is potentially more accessible to upper-body disabled users who cannot use the conventional joystick controller. Tests show that control performance with the proposed BoMI improved by up to 17% on average after three trials.
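
    A toy sketch of one plausible control scenario for such a body-machine interface: head-mounted IMU angles, after a dead-zone, are mapped proportionally to robot-arm velocity commands. The gains, dead-zone and axis assignments are assumptions for illustration, not the authors' control law.

    ```python
    # Hypothetical mapping from head orientation to end-effector velocity commands.
    import numpy as np

    DEADZONE_DEG = 5.0      # ignore small involuntary head motion
    GAIN = 0.01             # m/s of end-effector motion per degree of head tilt

    def head_to_velocity(pitch_deg, roll_deg, yaw_deg):
        angles = np.array([pitch_deg, yaw_deg, roll_deg], dtype=float)
        angles[np.abs(angles) < DEADZONE_DEG] = 0.0
        return GAIN * angles          # [vx, vy, vz] command for the arm

    print(head_to_velocity(pitch_deg=12.0, roll_deg=2.0, yaw_deg=-8.0))  # [0.12, -0.08, 0.]
    ```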

  13. Motion Sickness: Significance in Aerospace Operations and Prophylaxis (Le Mal des Transports: Son Importance pour les Operations Aerospatiales et Prophylaxies)

    DTIC Science & Technology

    1991-09-01

    description of motion sickness will be based on the assumption that only one peculiar thing happens: a poison response is provoked by motion. Common sense...available for study, because it can be produced for study without the complicating presence of a poison. It is produced by a motion stimulus that..."nausea occurred only during gastric relaxation and hypomotility" (26). The electrical activity of the gut has also been studied during motion

  14. Realtime motion planning for a mobile robot in an unknown environment using a neurofuzzy based approach

    NASA Astrophysics Data System (ADS)

    Zheng, Taixiong

    2005-12-01

    A neuro-fuzzy network based approach for robot motion in an unknown environment is proposed. To control the robot's motion in an unknown environment, the robot's behavior is classified into moving to the goal and avoiding obstacles. Then, according to the dynamics of the robot and its behavioral characteristics in an unknown environment, fuzzy control rules are introduced to control the robot's motion. Finally, a 6-layer neuro-fuzzy network is designed to map what the robot senses to motion control commands. After being trained, the network can be used for robot motion control. Simulation results show that the proposed approach is effective for robot motion control in an unknown environment.
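
    A toy sketch of the behavior-blending idea: "move to goal" and "avoid obstacle" steering commands are mixed by a fuzzy-style weight derived from obstacle distance. The membership shape and gains are invented; the paper learns this mapping with a trained 6-layer neuro-fuzzy network instead of the hand-written rule below.

    ```python
    # Fuzzy-style blending of goal-seeking and obstacle-avoidance steering (illustrative).
    import math

    def avoid_weight(obstacle_dist, near=0.3, far=1.5):
        """1 when an obstacle is very near, 0 when far, linear in between (metres)."""
        if obstacle_dist <= near:
            return 1.0
        if obstacle_dist >= far:
            return 0.0
        return (far - obstacle_dist) / (far - near)

    def steer(goal_bearing, obstacle_bearing, obstacle_dist):
        w = avoid_weight(obstacle_dist)
        # turn towards the goal, but away from the obstacle as it gets closer
        return (1 - w) * goal_bearing + w * (-obstacle_bearing)

    print(steer(goal_bearing=math.radians(20),
                obstacle_bearing=math.radians(10), obstacle_dist=0.5))
    ```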

  15. Categorization of compensatory motions in transradial myoelectric prosthesis users.

    PubMed

    Hussaini, Ali; Zinck, Arthur; Kyberd, Peter

    2017-06-01

    Prosthesis users perform various compensatory motions to compensate for the loss of the hand and wrist as well as the reduced functionality of a prosthetic hand. The objective was to investigate the different compensation strategies performed by prosthesis users, using a comparative analysis. A total of 20 able-bodied subjects and 4 prosthesis users performed a set of bimanual activities. Movements of the trunk and head were recorded using a motion capture system and a digital video recorder. Clinical motion angles were calculated to assess the compensatory motions made by the prosthesis users; the video recording also assisted in visually identifying the compensations. Compensatory motions by the prosthesis users were evident in the tasks performed (slicing and stirring activities) when compared with the benchmark of able-bodied subjects. Compensations took the form of a measured increase in range of motion, an observed adoption of a new posture during task execution, and prepositioning of items in the workspace prior to initiating a given task. Compensatory motions were performed by prosthesis users during the selected tasks and can be categorized into three different types of compensations. Clinical relevance: proper identification and classification of compensatory motions performed by prosthesis users into three distinct forms allows clinicians and researchers to accurately identify and quantify movement. It will assist in evaluating new prosthetic interventions by providing distinct terminology that is easily understood and can be shared between research institutions.

  16. Discovery Learning, Representation, and Explanation within a Computer-Based Simulation: Finding the Right Mix

    ERIC Educational Resources Information Center

    Rieber, Lloyd P.; Tzeng, Shyh-Chii; Tribble, Kelly

    2004-01-01

    The purpose of this research was to explore how adult users interact and learn during an interactive computer-based simulation supplemented with brief multimedia explanations of the content. A total of 52 college students interacted with a computer-based simulation of Newton's laws of motion in which they had control over the motion of a simple…

  17. The promise of remote sensing in the atmospheric sciences

    NASA Technical Reports Server (NTRS)

    Atlas, D.

    1981-01-01

    The applications and advances in remote sensing technology for weather prediction, mesoscale meteorology, severe storms, and climate studies are discussed. Doppler radar permits tracking of the three-dimensional field of motion within storms, thereby increasing the accuracy of convective storm modeling. Single Doppler units are also employed for detecting mesoscale storm vortices and tornado vortex signatures with lead times of 30 min. Clear-air radar in pulsed and high-resolution FM-CW forms reveals boundary layer convection, Kelvin-Helmholtz waves, shear layer turbulence, and wave motions. Lidar is successfully employed for stratospheric aerosol measurements, while Doppler lidar provides data on winds from the ground and can be based in space. Sodar is useful for determining the structure of the PBL. Details and techniques of satellite-based remote sensing are presented, and results from the GWE and FGGE experiments are discussed.

  18. Energy Remote Sensing Applications Projects at the NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Norman, S. D.; Likens, W. C.; Mouat, D. A.

    1982-01-01

    The NASA Ames Research Center is active in energy projects primarily in the role of providing assistance to users in the solution of a number of problems related to energy. Data bases were produced which can be used, in combination with other sources of information, to solve spatially related energy problems. Six project activities at Ames are described which relate to energy and remote sensing. Two projects involve power demand forecasting and estimations using remote sensing and geographic information systems; two others involve transmission line routing and corridor analysis; one involves a synfuel user needs assessment through remote sensing; and the sixth involves the siting of energy facilities.

  19. Vision sensor and dual MEMS gyroscope integrated system for attitude determination on moving base

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoting; Sun, Changku; Wang, Peng; Huang, Lu

    2018-01-01

    To determine the relative attitude between objects on a moving base and the base reference frame with a MEMS (Micro-Electro-Mechanical Systems) gyroscope, the motion of the base is superfluous information that must be removed from the gyroscope measurements. Our strategy is to add an auxiliary gyroscope attached to the reference frame: the master gyroscope senses the total motion, and the auxiliary gyroscope senses the motion of the moving base. Using a generalized difference method, the relative attitude in a non-inertial frame can be determined from the dual gyroscopes. With the vision sensor suppressing the accumulated drift of the MEMS gyroscope, a vision and dual MEMS gyroscope integrated system is formed. Coordinate system definitions and spatial transforms are established in order to fuse inertial and visual data from different coordinate systems. A nonlinear filter algorithm, the cubature Kalman filter, is used to fuse the slow visual data and the fast inertial data. A practical experimental setup was built and used to validate the feasibility and effectiveness of our proposed attitude determination system in a non-inertial frame on a moving base.
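
    A simplified sketch of the "generalized difference" idea: the auxiliary gyroscope senses the base motion, which is rotated into the master sensor's frame and subtracted to leave only the relative motion. The rotation handling is reduced here to a fixed, assumed alignment matrix, and the attitude increment is a crude small-angle sum rather than the paper's cubature Kalman filter.

    ```python
    # Relative angular rate from master and auxiliary (base) gyroscopes (illustrative).
    import numpy as np

    R_base_to_master = np.eye(3)     # assumed known mounting alignment

    def relative_rate(omega_master, omega_base):
        """Angular rate of the object w.r.t. the moving base (rad/s)."""
        return omega_master - R_base_to_master @ omega_base

    def integrate_angle(rates, dt):
        """Crude attitude increment by summing relative rates (small angles only)."""
        return np.sum(rates, axis=0) * dt

    omega_master = np.array([[0.12, 0.0, 0.02]] * 100)   # total motion sensed on the object
    omega_base = np.array([[0.10, 0.0, 0.0]] * 100)      # motion of the moving base
    rel = np.array([relative_rate(m, b) for m, b in zip(omega_master, omega_base)])
    print(integrate_angle(rel, dt=0.01))                  # ~[0.02, 0.0, 0.02] rad
    ```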

  20. Piezoresistive Carbon-based Hybrid Sensor for Body-Mounted Biomedical Applications

    NASA Astrophysics Data System (ADS)

    Melnykowycz, M.; Tschudin, M.; Clemens, F.

    2017-02-01

    For body-mounted sensor applications, the evolution of soft condensed matter sensor (SCMS) materials offers conformability and enables mechanical compliance between the body surface and the sensing mechanism. A piezoresistive hybrid sensor with a compliant meta-material sub-structure provides a way to engineer sensor physical designs through modification of the mechanical properties of the compliant design. A piezoresistive fiber sensor was produced by combining a thermoplastic elastomer (TPE) matrix with Carbon Black (CB) particles in a 1:1 mass ratio. The feedstock was extruded in monofilament fiber form (diameter of 300 microns), resulting in a highly stretchable sensor (strain sensing range up to 100%) with a linear resistance response. The soft condensed matter sensor was integrated into a hybrid design including a 3D-printed metamaterial structure combined with a soft silicone. An auxetic unit cell (with negative Poisson's ratio) was chosen for the design in order to combine with the soft silicone, which exhibits a high Poisson's ratio. The hybrid sensor design was subjected to mechanical tensile testing up to 50% strain (with gauge factor calculation for sensor performance), and then utilized for strain-based sensing applications on the body, including gesture recognition and vital-function monitoring such as blood pulse-wave and breath monitoring. A 10-gesture Natural User Interface (NUI) test protocol was utilized to show the effectiveness of a single wrist-mounted sensor in identifying discrete gestures, including finger and hand motions chosen specifically for Human Computer Interaction (HCI) applications. The blood pulse-wave signal was monitored with the hand at rest in a wrist-mounted configuration. In addition, different breathing patterns were investigated, including normal breathing and coughing, using a belt and chest-mounted configuration.
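
    A short illustration of the gauge factor calculation named above: the gauge factor is the relative resistance change divided by the applied strain. The resistance and strain values below are illustrative, not measurements from the paper.

    ```python
    # Gauge factor GF = (dR / R0) / strain for a piezoresistive strain sensor.
    def gauge_factor(r0_ohm, r_ohm, strain):
        """strain as a fraction (0.5 == 50% elongation)."""
        return ((r_ohm - r0_ohm) / r0_ohm) / strain

    # e.g. resistance rising from 10 kOhm to 35 kOhm at 50% strain -> GF = 5
    print(gauge_factor(r0_ohm=10_000, r_ohm=35_000, strain=0.50))
    ```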

  1. Robust human machine interface based on head movements applied to assistive robotics.

    PubMed

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines their results through a fusion process to obtain a minimum-variance estimate of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for the assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate its performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.

  2. Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics

    PubMed Central

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for the assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed to objectively evaluate the performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair. PMID:24453877

  3. Principal components of wrist circumduction from electromagnetic surgical tracking.

    PubMed

    Rasquinha, Brian J; Rainbow, Michael J; Zec, Michelle L; Pichora, David R; Ellis, Randy E

    2017-02-01

    An electromagnetic (EM) surgical tracking system was used for a functionally calibrated kinematic analysis of wrist motion. Circumduction motions were tested for differences in subject gender and for differences in the sense of the circumduction as clockwise or counter-clockwise motion. Twenty subjects were instrumented for EM tracking. Flexion-extension motion was used to identify the functional axis. Subjects performed unconstrained wrist circumduction in a clockwise and counter-clockwise sense. Data were decomposed into orthogonal flexion-extension motions and radial-ulnar deviation motions. PCA was used to concisely represent motions. Nonparametric Wilcoxon tests were used to distinguish the groups. Flexion-extension motions were projected onto a direction axis with a root-mean-square error of [Formula: see text]. Using the first three principal components, there was no statistically significant difference in gender (all [Formula: see text]). For motion sense, radial-ulnar deviation distinguished the sense of circumduction in the first principal component ([Formula: see text]) and in the third principal component ([Formula: see text]); flexion-extension distinguished the sense in the second principal component ([Formula: see text]). The clockwise sense of circumduction could be distinguished by a multifactorial combination of components; there were no gender differences in this small population. These data constitute a baseline for normal wrist circumduction. The multifactorial PCA findings suggest that a higher-dimensional method, such as manifold analysis, may be a more concise way of representing circumduction in human joints.
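
    As an illustration of the principal component analysis step, a minimal sketch follows; the array layout (samples by decomposed motion channels) and the function name are assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def principal_components(motion, n_components=3):
        """PCA via SVD of mean-centred motion traces.

        motion : (n_samples, n_features) array, e.g. columns holding the decomposed
                 flexion-extension and radial-ulnar deviation angles over a trial.
        """
        centred = motion - motion.mean(axis=0)
        u, s, vt = np.linalg.svd(centred, full_matrices=False)
        scores = centred @ vt[:n_components].T                 # projections onto the PCs
        explained = (s[:n_components] ** 2) / np.sum(s ** 2)   # explained variance ratios
        return vt[:n_components], scores, explained
    ```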

  4. Ubiquitous Wireless Smart Sensing and Control

    NASA Technical Reports Server (NTRS)

    Wagner, Raymond

    2013-01-01

    New technologies are needed to reliably and safely have humans interact within sensor-equipped environments (integrated user interfaces, physical and cognitive augmentation, training, and human-systems integration tools). Areas of focus include: radio frequency identification (RFID), motion tracking, wireless communication, wearable computing, adaptive training and decision support systems, and tele-operations. The challenge is developing effective, low cost/mass/volume/power integrated monitoring systems to assess and control system, environmental, and operator health, and to accurately determine and control the physical, chemical, and biological environments of the areas and associated environmental control systems.

  5. Ubiquitous Wireless Smart Sensing and Control. Pumps and Pipes JSC: Uniquely Houston

    NASA Technical Reports Server (NTRS)

    Wagner, Raymond

    2013-01-01

    New technologies are needed to reliably and safely have humans interact within sensor-equipped environments (integrated user interfaces, physical and cognitive augmentation, training, and human-systems integration tools). Areas of focus include: radio frequency identification (RFID), motion tracking, wireless communication, wearable computing, adaptive training and decision support systems, and tele-operations. The challenge is developing effective, low cost/mass/volume/power integrated monitoring systems to assess and control system, environmental, and operator health, and to accurately determine and control the physical, chemical, and biological environments of the areas and associated environmental control systems.

  6. Alterations to global but not local motion processing in long-term ecstasy (MDMA) users.

    PubMed

    White, Claire; Brown, John; Edwards, Mark

    2014-07-01

    Growing evidence indicates that the main psychoactive ingredient in the illegal drug "ecstasy" (methylenedioxymethamphetamine) causes reduced activity in the serotonin and gamma-aminobutyric acid (GABA) systems in humans. On the basis of substantial serotonin input to the occipital lobe, recent research investigated visual processing in long-term users and found a larger magnitude of the tilt aftereffect, interpreted to reflect broadened orientation tuning bandwidths. Further research found higher orientation discrimination thresholds and reduced long-range interactions in the primary visual area of ecstasy users. The aim of the present research was to investigate whether serotonin-mediated V1 visual processing deficits in ecstasy users extend to motion processing mechanisms. Forty-five participants (21 controls, 24 drug users) completed two psychophysical studies: a direction discrimination study directly measured local motion processing in V1, while a motion coherence task tested global motion processing in area V5/MT. "Primary" ecstasy users (n = 18), those without substantial polydrug use, had significantly lower global motion thresholds than controls [p = 0.027, Cohen's d = 0.78 (large)], indicating increased sensitivity to global motion stimuli, but no difference in local motion processing (p = 0.365). These results extend previous research investigating the long-term effects of illicit drugs on visual processing. Two possible explanations are explored: diffuse attentional processes may be facilitating spatial pooling of motion signals in users. Alternatively, it may be that a GABA-mediated disruption to V5/MT processing is reducing spatial suppression and therefore improving global motion perception in ecstasy users.

  7. Usability Evaluation Methods for Gesture-Based Games: A Systematic Review.

    PubMed

    Simor, Fernando Winckler; Brum, Manoela Rogofski; Schmidt, Jaison Dairon Ebertz; Rieder, Rafael; De Marchi, Ana Carolina Bertoletti

    2016-10-04

    Gestural interaction systems are increasingly being used, mainly in games, expanding the idea of entertainment and providing experiences with the purpose of promoting better physical and/or mental health. Therefore, it is necessary to establish mechanisms for evaluating the usability of these interfaces, which make gestures the basis of interaction, to achieve a balance between functionality and ease of use. This study aims to present the results of a systematic review focused on usability evaluation methods for gesture-based games, considering devices with motion-sensing capability. We considered the usability methods used, the common interface issues, and the strategies adopted to build good gesture-based games. The research was centered on four electronic databases: IEEE, Association for Computing Machinery (ACM), Springer, and Science Direct from September 4 to 21, 2015. Of the 1427 studies evaluated, 10 matched the eligibility criteria. As a requirement, we considered studies about gesture-based games, Kinect and/or Wii as devices, and the use of a usability method to evaluate the user interface. In the 10 studies found, there was no standardization in the methods because they considered diverse analysis variables. Heterogeneously, authors used different instruments to evaluate gesture-based interfaces and no standard approach was proposed. Questionnaires were the most used instruments (70%, 7/10), followed by interviews (30%, 3/10), and observation and video recording (20%, 2/10). Moreover, 60% (6/10) of the studies used gesture-based serious games to evaluate the performance of elderly participants in rehabilitation tasks. This highlights the need for creating an evaluation protocol for older adults to provide a user-friendly interface according to the user's age and limitations. Through this study, we conclude that this field is in need of a usability evaluation method for serious games, especially games for older adults, and that the definition of a methodology and a test protocol may offer the user more comfort, welfare, and confidence.

  8. Bilinear modeling of EMG signals to extract user-independent features for multiuser myoelectric interface.

    PubMed

    Matsubara, Takamitsu; Morimoto, Jun

    2013-08-01

    In this study, we propose a multiuser myoelectric interface that can easily adapt to novel users. When a user performs different motions (e.g., grasping and pinching), different electromyography (EMG) signals are measured. When different users perform the same motion (e.g., grasping), different EMG signals are also measured. Therefore, designing a myoelectric interface that can be used by multiple users to perform multiple motions is difficult. To cope with this problem, we propose a bilinear model for EMG signals that is composed of two linear factors: 1) user-dependent and 2) motion-dependent. By decomposing the EMG signals into these two factors, the extracted motion-dependent factors can be used as user-independent features. We can construct a motion classifier on the extracted feature space to develop the multiuser interface. For novel users, the proposed adaptation method estimates the user-dependent factor through only a few interactions. The bilinear EMG model with the estimated user-dependent factor can extract the user-independent features from the novel user data. We applied our proposed method to a recognition task of five hand gestures for robotic hand control using four-channel EMG signals measured from subject forearms. Our method resulted in 73% accuracy, which was statistically significantly different from the accuracy of standard nonmultiuser interfaces, as the result of a two-sample t-test at a significance level of 1%.
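
    The exact model and adaptation procedure are not spelled out in the abstract; below is a minimal, assumed sketch of a bilinear factorization fitted by alternating least squares on synthetic data, in which the per-motion vectors play the role of user-independent features and the per-user matrices the role of user-dependent factors. It illustrates the idea only and is not the authors' algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    U, M, D, K = 5, 4, 8, 3                    # users, motions, EMG feature dim, latent dim

    W_true = rng.normal(size=(U, D, K))        # user-dependent factors
    X_true = rng.normal(size=(M, K))           # motion-dependent (user-independent) factors
    Y = np.einsum('udk,mk->umd', W_true, X_true) + 0.01 * rng.normal(size=(U, M, D))

    # alternating least squares for the bilinear model y[u, m] = W[u] @ x[m]
    W = rng.normal(size=(U, D, K))
    X = rng.normal(size=(M, K))
    for _ in range(100):
        for u in range(U):                     # fix X, solve each user factor W[u]
            W[u] = np.linalg.lstsq(X, Y[u], rcond=None)[0].T
        for m in range(M):                     # fix W, solve each motion factor x[m]
            A = W.reshape(U * D, K)
            b = Y[:, m, :].reshape(U * D)
            X[m] = np.linalg.lstsq(A, b, rcond=None)[0]

    residual = Y - np.einsum('udk,mk->umd', W, X)
    print("relative fit error:", np.linalg.norm(residual) / np.linalg.norm(Y))
    ```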

  9. An improved robust blind motion de-blurring algorithm for remote sensing images

    NASA Astrophysics Data System (ADS)

    He, Yulong; Liu, Jin; Liang, Yonghui

    2016-10-01

    Shift-invariant motion blur can be modeled as a convolution of the true latent image with the blur kernel plus additive noise. Blind motion de-blurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. This paper proposes an improved edge-specific motion de-blurring algorithm that is well suited to processing remote sensing images. We find that an inaccurate blur kernel is the main cause of low-quality restored images. To improve image quality, we make the following contributions. For robust kernel estimation, first, we adopt a multi-scale scheme to ensure that the edge map is constructed accurately; second, an effective salient edge selection method based on RTV (Relative Total Variation) is used to extract salient structure from texture; third, an alternating iterative method is introduced to perform kernel optimization, in which we adopt l1 and l0 norms as priors to remove noise and ensure the continuity of the blur kernel. For the final latent image reconstruction, an improved adaptive deconvolution algorithm based on a TV-l2 model is used to recover the latent image; we control the regularization weight adaptively in different regions according to local image characteristics in order to preserve fine details and eliminate noise and ringing artifacts. Synthetic remote sensing images are used to test the proposed algorithm, and the results demonstrate that it obtains an accurate blur kernel and achieves better de-blurring results.
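
    To make the degradation model in the first sentence concrete, here is a minimal sketch of the shift-invariant blur model B = K * I + n; the kernel shape and noise level are illustrative assumptions, and the de-blurring algorithm itself is not reproduced.

    ```python
    import numpy as np
    from scipy.signal import convolve2d

    def blur(image, kernel, noise_sigma=0.01, seed=0):
        """Shift-invariant motion blur: B = K * I + n (2-D convolution plus noise)."""
        rng = np.random.default_rng(seed)
        kernel = kernel / kernel.sum()                        # normalised blur kernel
        blurred = convolve2d(image, kernel, mode='same', boundary='symm')
        return blurred + noise_sigma * rng.standard_normal(image.shape)

    # a simple horizontal motion-blur kernel of length 9 (illustrative)
    kernel = np.zeros((9, 9))
    kernel[4, :] = 1.0
    ```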

  10. Motion-adapted catheter navigation with real-time instantiation and improved visualisation

    PubMed Central

    Kwok, Ka-Wai; Wang, Lichao; Riga, Celia; Bicknell, Colin; Cheshire, Nicholas; Yang, Guang-Zhong

    2014-01-01

    The improvements to catheter manipulation brought by robot-assisted catheter navigation for endovascular procedures include increased precision, stability of motion and operator comfort. However, navigation through the vasculature under fluoroscopic guidance is still challenging, mostly due to physiological motion and tortuous vessels. In this paper, we propose a motion-adaptive catheter navigation scheme based on shape modelling to compensate for these dynamic effects, permitting predictive and dynamic navigation. This allows for timed manipulations synchronised with the vascular motion. The technical contribution of the paper includes the following two aspects. Firstly, a dynamic shape modelling and real-time instantiation scheme based on sparse data obtained intra-operatively is proposed for improved visualisation of the 3D vasculature during endovascular intervention. Secondly, a reconstructed frontal view from the catheter tip using the derived dynamic model is used as an interventional aid to user guidance. To demonstrate the practical value of the proposed framework, a simulated aortic branch cannulation procedure is used with detailed user validation to demonstrate the improvement in navigation quality and efficiency. PMID:24744817

  11. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
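
    A minimal sketch of exhaustive-search block matching with a sum-of-absolute-differences (SAD) criterion is shown below for illustration; the block size, search range, and function name are assumptions rather than the parameters used in the paper, and the frames are assumed to be float-valued grayscale arrays.

    ```python
    import numpy as np

    def block_match(prev, curr, block=8, search=4):
        """For each block of `curr`, find the displacement within +/- `search`
        pixels of `prev` that minimises the SAD; returns a (dy, dx) field."""
        h, w = curr.shape
        motion = np.zeros((h // block, w // block, 2), dtype=int)
        for by in range(0, h - block + 1, block):
            for bx in range(0, w - block + 1, block):
                ref = curr[by:by + block, bx:bx + block]
                best, best_dv = np.inf, (0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y, x = by + dy, bx + dx
                        if y < 0 or x < 0 or y + block > h or x + block > w:
                            continue          # candidate block falls outside the frame
                        cand = prev[y:y + block, x:x + block]
                        sad = np.abs(ref - cand).sum()
                        if sad < best:
                            best, best_dv = sad, (dy, dx)
                motion[by // block, bx // block] = best_dv
        return motion
    ```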

  12. Earthquake Intensity and Strong Motion Analysis Within SEISCOMP3

    NASA Astrophysics Data System (ADS)

    Becker, J.; Weber, B.; Ghasemi, H.; Cummins, P. R.; Murjaya, J.; Rudyanto, A.; Rößler, D.

    2017-12-01

    Measuring and predicting ground motion parameters, including seismic intensities for earthquakes, is crucial and the subject of recent research in engineering seismology. gempa has developed the new SIGMA module for Seismic Intensity and Ground Motion Analysis. The module is based on the SeisComP3 framework, extending it in the field of seismic hazard assessment and engineering seismology. SIGMA may work with or independently of SeisComP3 by supporting FDSN Web services for importing earthquake or station information and waveforms. It provides a user-friendly and modern graphical interface for semi-automatic and interactive strong motion data processing. SIGMA provides intensity and (P)SA maps based on GMPEs or recorded data. It calculates the most common strong motion parameters, e.g. PGA/PGV/PGD, Arias intensity and duration, Tp, Tm, CAV, SED and Fourier, power and response spectra. GMPEs are configurable. Supporting C++ and Python plug-ins, standard and customized GMPEs including the OpenQuake Hazard Library can be easily integrated and compared. Originally tailored to specifications by Geoscience Australia and BMKG (Indonesia), SIGMA has become a popular tool among SeisComP3 users concerned with seismic hazard and strong motion seismology.

  13. The effects of SENSE on PROPELLER imaging.

    PubMed

    Chang, Yuchou; Pipe, James G; Karis, John P; Gibbs, Wende N; Zwart, Nicholas R; Schär, Michael

    2015-12-01

    To study how sensitivity encoding (SENSE) impacts periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) image quality, including signal-to-noise ratio (SNR), robustness to motion, precision of motion estimation, and image quality. Five volunteers were imaged by three sets of scans. A rapid method for generating the g-factor map was proposed and validated via Monte Carlo simulations. Sensitivity maps were extrapolated to increase the area over which SENSE can be performed and therefore enhance the robustness to head motion. The precision of motion estimation of PROPELLER blades that are unfolded with these sensitivity maps was investigated. An interleaved R-factor PROPELLER sequence was used to acquire data with similar amounts of motion with and without SENSE acceleration. Two neuroradiologists independently and blindly compared 214 image pairs. The proposed method of g-factor calculation was similar to that provided by the Monte Carlo methods. Extrapolation and rotation of the sensitivity maps allowed for continued robustness of SENSE unfolding in the presence of motion. SENSE-widened blades improved the precision of rotation and translation estimation. PROPELLER images with a SENSE factor of 3 outperformed the traditional PROPELLER images when reconstructing the same number of blades. SENSE not only accelerates PROPELLER but can also improve robustness and precision of head motion correction, which improves overall image quality even when SNR is lost due to acceleration. The reduction of SNR, as a penalty of acceleration, is characterized by the proposed g-factor method. © 2014 Wiley Periodicals, Inc.
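
    For context, the trade-off the abstract refers to as the SNR penalty of acceleration is conventionally written with the coil geometry factor; this is the standard parallel-imaging relation, quoted here as background rather than taken from the paper:

    ```latex
    \mathrm{SNR}_{\mathrm{SENSE}} \;=\; \frac{\mathrm{SNR}_{\mathrm{full}}}{g\,\sqrt{R}},
    \qquad R = \text{acceleration (reduction) factor}, \quad g \ge 1 \text{ (coil geometry factor)}
    ```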

  14. An Open Source Software and Web-GIS Based Platform for Airborne SAR Remote Sensing Data Management, Distribution and Sharing

    NASA Astrophysics Data System (ADS)

    Changyong, Dou; Huadong, Guo; Chunming, Han; Ming, Liu

    2014-03-01

    With more and more Earth observation data available to the community, how to manage and share these valuable remote sensing datasets has become an urgent issue. Web-based Geographical Information System (GIS) technology provides a convenient way for users in different locations to share and make use of the same dataset. In order to efficiently use the airborne Synthetic Aperture Radar (SAR) remote sensing data acquired by the Airborne Remote Sensing Center of the Institute of Remote Sensing and Digital Earth (RADI), Chinese Academy of Sciences (CAS), a Web-GIS based platform for airborne SAR data management, distribution and sharing was designed and developed. The major features of the system include a map-based navigation search interface, full-resolution imagery displayed overlaid on the map, and the exclusive use of Open Source Software (OSS) throughout the platform. The functions of the platform include browsing imagery on the map-based navigation interface, ordering and downloading data online, and image dataset and user management. At present, the system is being tested at RADI and will soon enter regular operation.

  15. A Network Coverage Information-Based Sensor Registry System for IoT Environments

    PubMed Central

    Jung, Hyunjun; Jeong, Dongwon; Lee, Sukhoon; On, Byung-Won; Baik, Doo-Kwon

    2016-01-01

    The Internet of Things (IoT) is expected to provide better services through the interaction of physical objects via the Internet. However, its limitations cause an interoperability problem when the sensed data are exchanged between the sensor nodes in wireless sensor networks (WSNs), which constitute the core infrastructure of the IoT. To address this problem, a Sensor Registry System (SRS) is used. By using an SRS, the information in the heterogeneous sensed data remains intact. As users move along a road, their mobile devices predict their next positions and obtain the sensed data for those positions from the SRS. If the WSNs in the locations through which the users move are unstable, the sensed data will be lost. Consider a situation where the user passes through dangerous areas: if the user's mobile device cannot receive information, they cannot be warned about the dangerous situation. To avoid this, two novel SRSs that use network coverage information have been proposed: one uses OpenSignal and the other uses the probabilistic distribution of the users accessing the SRS. The empirical study showed that the proposed method can seamlessly provide services related to sensing data under abnormal circumstances. PMID:27463717

  16. Geo Issue Tracking System

    NASA Astrophysics Data System (ADS)

    Khakpour, Mohammad; Paulik, Christoph; Hahn, Sebastian

    2016-04-01

    Communication about remote sensing data quality between data providers and users, as well as among users, is often difficult. Users have a hard time figuring out whether a product has known problems over their region of interest, and data providers have to spend a lot of effort to make this information available, if it exists. Scientific publications are one tool for communicating with the user base, but they are static and mostly one-way. For a data provider, it is also often difficult to make feedback received from users available to the complete user base. The Geo Issue Tracking System (GeoITS) is an Open Source Web Application which has been developed to mitigate these problems. GeoITS combines a mapping interface (Google Maps) with a simple wiki platform. It allows users to give region-specific feedback on a remote sensing product by drawing a polygon on the map and describing the problems they had using the remote sensing product in this area. These geolocated wiki entries are then viewable by other users as well as the data providers, who can modify and extend the entries. In this way, the conversations between the users and the data provider are no longer hidden in, e.g., emails, but open to all users of the dataset. This new kind of communication platform can enable better cooperation between users and data providers. It will also give data providers the ability to track problems their dataset might have in certain areas and resolve them with new product releases. The source code is available via http://github.com/TUW-GEO/geoits_dev and a running instance can be tried at https://geoits.herokuapp.com/

  17. [The P300-based brain-computer interface: presentation of the complex "flash + movement" stimuli].

    PubMed

    Ganin, I P; Kaplan, A Ia

    2014-01-01

    The P300-based brain-computer interface requires the detection of the P300 wave of brain event-related potentials. Most of its users learn BCI control in several minutes, and after short classifier training they can type text on the computer screen or assemble an image from separate fragments in simple BCI-based video games. Nevertheless, insufficient attractiveness for users and the conservative stimulus organization of this BCI may restrict its integration into the control of real information processes. At the same time, the initial movement of an object (motion-onset stimulus) can itself induce the P300 wave. In the current work, we tested the hypothesis that complex "flash + movement" stimuli, together with a striking and compact stimulus organization on the computer screen, may be much more attractive for the user while operating the P300 BCI. In a study with 20 subjects we showed the effectiveness of our interface. Both accuracy and P300 amplitude were higher for flashing stimuli and complex "flash + movement" stimuli compared to motion-onset stimuli. N200 amplitude was maximal for flashing stimuli, while for "flash + movement" stimuli and motion-onset stimuli it was only about half of that. A similar BCI with complex stimuli could be embedded into compact control systems that require a high level of user attention despite negative external effects obstructing BCI control.

  18. A prototype percutaneous transhepatic cholangiography training simulator with real-time breathing motion.

    PubMed

    Villard, P F; Vidal, F P; Hunt, C; Bello, F; John, N W; Johnson, S; Gould, D A

    2009-11-01

    We present here a simulator for interventional radiology focusing on percutaneous transhepatic cholangiography (PTC). This procedure consists of inserting a needle into the biliary tree using fluoroscopy for guidance. The requirements of the simulator have been driven by a task analysis. Three main components have been identified: respiration, real-time X-ray display (fluoroscopy), and haptic rendering (sense of touch). The framework for modelling the respiratory motion is based on kinematic laws and on the Chainmail algorithm. The fluoroscopic simulation is performed on the graphics card and makes use of the Beer-Lambert law to compute the X-ray attenuation. Finally, the haptic rendering is integrated into the virtual environment and accounts for soft-tissue reaction force feedback and maintenance of the initial needle direction during insertion. Five training scenarios have been created using patient-specific data. Each of these provides the user with variable breathing behaviour, a fluoroscopic display tuneable to any device parameters, and needle force feedback. A detailed task analysis has been used to design and build the PTC simulator described in this paper. The simulator includes real-time respiratory motion with two independent parameters (rib kinematics and diaphragm action), on-line fluoroscopy implemented on the Graphics Processing Unit, and haptic feedback to feel the soft-tissue behaviour of the organs during the needle insertion.
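
    As an illustration of the Beer-Lambert attenuation used in the fluoroscopic simulation, a minimal sketch follows; the attenuation coefficients and path lengths are illustrative values, and the GPU ray-marching implementation of the simulator is not reproduced.

    ```python
    import numpy as np

    def xray_intensity(i0, mu, thickness):
        """Beer-Lambert attenuation along one ray crossing several materials:
        I = I0 * exp(-sum_i mu_i * d_i), with mu in 1/cm and d in cm."""
        return i0 * np.exp(-np.dot(mu, thickness))

    # e.g. a ray crossing 3 cm of soft tissue and 1 cm of bone (illustrative coefficients)
    print(xray_intensity(1.0, mu=[0.2, 0.5], thickness=[3.0, 1.0]))
    ```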

  19. Accelerated acquisition of tagged MRI for cardiac motion correction in simultaneous PET-MR: Phantom and patient studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Chuan, E-mail: chuan.huang@stonybrookmedicine.edu; Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115; Departments of Radiology, Psychiatry, Stony Brook Medicine, Stony Brook, New York 11794

    2015-02-15

    Purpose: Degradation of image quality caused by cardiac and respiratory motions hampers the diagnostic quality of cardiac PET. It has been shown that improved diagnostic accuracy of myocardial defect can be achieved by tagged MR (tMR) based PET motion correction using simultaneous PET-MR. However, one major hurdle for the adoption of tMR-based PET motion correction in the PET-MR routine is the long acquisition time needed for the collection of fully sampled tMR data. In this work, the authors propose an accelerated tMR acquisition strategy using parallel imaging and/or compressed sensing and assess the impact on the tMR-based motion corrected PET using phantom and patient data. Methods: Fully sampled tMR data were acquired simultaneously with PET list-mode data on two simultaneous PET-MR scanners for a cardiac phantom and a patient. Parallel imaging and compressed sensing were retrospectively performed by GRAPPA and kt-FOCUSS algorithms with various acceleration factors. Motion fields were estimated using nonrigid B-spline image registration from both the accelerated and fully sampled tMR images. The motion fields were incorporated into a motion corrected ordered subset expectation maximization reconstruction algorithm with motion-dependent attenuation correction. Results: Although tMR acceleration introduced image artifacts into the tMR images for both phantom and patient data, motion corrected PET images yielded similar image quality as those obtained using the fully sampled tMR images for low to moderate acceleration factors (<4). Quantitative analysis of myocardial defect contrast over ten independent noise realizations showed similar results. It was further observed that although the image quality of the motion corrected PET images deteriorates for high acceleration factors, the images were still superior to the images reconstructed without motion correction. Conclusions: Accelerated tMR images obtained with more than 4 times acceleration can still provide relatively accurate motion fields and yield tMR-based motion corrected PET images with similar image quality as those reconstructed using fully sampled tMR data. The reduction of tMR acquisition time makes it more compatible with routine clinical cardiac PET-MR studies.

  20. Online Remote Sensing Interface

    NASA Technical Reports Server (NTRS)

    Lawhead, Joel

    2007-01-01

    BasinTools Module 1 processes remotely sensed raster data, including multi- and hyper-spectral data products, via a Web site with no downloads and no plug-ins required. The interface provides standardized algorithms designed so that a user with little or no remote-sensing experience can use the site. This Web-based approach reduces the amount of software, hardware, and computing power necessary to perform the specified analyses. Access to imagery and derived products is enterprise-level and controlled. Because the user never takes possession of the imagery, the licensing of the data is greatly simplified. BasinTools takes the "just-in-time" inventory control model from commercial manufacturing and applies it to remotely-sensed data. Products are created and delivered on-the-fly with no human intervention, even for casual users. Well-defined procedures can be combined in different ways to extend verified and validated methods in order to derive new remote-sensing products, which improves efficiency in any well-defined geospatial domain. Remote-sensing products produced in BasinTools are self-documenting, allowing procedures to be independently verified or peer-reviewed. The software can be used enterprise-wide to conduct low-level remote sensing, viewing, sharing, and manipulating of image data without the need for desktop applications.

  1. A Programmable System for Motion Control

    NASA Technical Reports Server (NTRS)

    Nowlin, Brent C.

    2003-01-01

    The need for improved flow measurements in the flow path of aeronautics testing facilities has led the NASA Glenn Research Center to develop a new motion control system. The new system is programmable, offering a flexibility unheard of in previous systems. The motion control system is PLC-based, which leads to highly accurate positioning ability, as well as reliability. The user interface is a software-based HMI package, which also adds flexibility to the overall system. The system also has the ability to create and execute motion profiles. This paper discusses the system's operation, control implementation, and experiences.

  2. Grid workflow validation using ontology-based tacit knowledge: A case study for quantitative remote sensing applications

    NASA Astrophysics Data System (ADS)

    Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi

    2017-01-01

    The workflow for remote sensing quantitative retrieval is the "bridge" between Grid services and Grid-enabled applications of remote sensing quantitative retrieval. The workflow hides low-level implementation details of the Grid and hence enables users to focus on higher levels of the application. It plays an important role in remote sensing Grid and Cloud computing services, which can support the modelling, construction and implementation of large-scale, complicated applications of remote sensing science. Workflow validation is important in order to support large-scale, sophisticated scientific computation processes with enhanced performance and to minimize potential waste of time and resources. To verify the semantic correctness of user-defined workflows, in this paper we propose a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing the knowledge with an ontology. Additionally, we construct the domain ontology with Protégé. Through our experimental study, we verify the validity of this method in two ways, namely data source consistency error validation and parameter matching error validation.

  3. Virtual Character Animation Based on Affordable Motion Capture and Reconfigurable Tangible Interfaces.

    PubMed

    Lamberti, Fabrizio; Paravati, Gianluca; Gatteschi, Valentina; Cannavo, Alberto; Montuschi, Paolo

    2018-05-01

    Software for computer animation is generally characterized by a steep learning curve, due to the entanglement of both sophisticated techniques and interaction methods required to control 3D geometries. This paper proposes a tool designed to support computer animation production processes by leveraging the affordances offered by articulated tangible user interfaces and motion capture retargeting solutions. To this aim, orientations of an instrumented prop are recorded together with animator's motion in the 3D space and used to quickly pose characters in the virtual environment. High-level functionalities of the animation software are made accessible via a speech interface, thus letting the user control the animation pipeline via voice commands while focusing on his or her hands and body motion. The proposed solution exploits both off-the-shelf hardware components (like the Lego Mindstorms EV3 bricks and the Microsoft Kinect, used for building the tangible device and tracking animator's skeleton) and free open-source software (like the Blender animation tool), thus representing an interesting solution also for beginners approaching the world of digital animation for the first time. Experimental results in different usage scenarios show the benefits offered by the designed interaction strategy with respect to a mouse & keyboard-based interface both for expert and non-expert users.

  4. Development and evaluation of low cost game-based balance rehabilitation tool using the Microsoft Kinect sensor.

    PubMed

    Lange, Belinda; Chang, Chien-Yen; Suma, Evan; Newman, Bradley; Rizzo, Albert Skip; Bolas, Mark

    2011-01-01

    The use of the commercial video games as rehabilitation tools, such as the Nintendo WiiFit, has recently gained much interest in the physical therapy arena. Motion tracking controllers such as the Nintendo Wiimote are not sensitive enough to accurately measure performance in all components of balance. Additionally, users can figure out how to "cheat" inaccurate trackers by performing minimal movement (e.g. wrist twisting a Wiimote instead of a full arm swing). Physical rehabilitation requires accurate and appropriate tracking and feedback of performance. To this end, we are developing applications that leverage recent advances in commercial video game technology to provide full-body control of animated virtual characters. A key component of our approach is the use of newly available low cost depth sensing camera technology that provides markerless full-body tracking on a conventional PC. The aim of this research was to develop and assess an interactive game-based rehabilitation tool for balance training of adults with neurological injury.

  5. A Truthful Incentive Mechanism for Online Recruitment in Mobile Crowd Sensing System.

    PubMed

    Chen, Xiao; Liu, Min; Zhou, Yaqin; Li, Zhongcheng; Chen, Shuang; He, Xiangnan

    2017-01-01

    We investigate emerging mobile crowd sensing (MCS) systems, in which new cloud-based platforms sequentially allocate homogenous sensing jobs to dynamically-arriving users with uncertain service qualities. Given that human beings are selfish in nature, it is crucial yet challenging to design an efficient and truthful incentive mechanism to encourage users to participate. To address the challenge, we propose a novel truthful online auction mechanism that can efficiently learn to make irreversible online decisions on winner selections for new MCS systems without requiring previous knowledge of users. Moreover, we theoretically prove that our incentive mechanism possesses truthfulness, individual rationality and computational efficiency. Extensive simulation results under both real and synthetic traces demonstrate that our incentive mechanism can reduce the payment of the platform and increase both the utility of the platform and the social welfare.

  6. Parametric amplification in a resonant sensing array

    NASA Astrophysics Data System (ADS)

    Yie, Zi; Miller, Nicholas J.; Shaw, Steven W.; Turner, Kimberly L.

    2012-03-01

    We demonstrate parametric amplification of a multi-degree-of-freedom resonant mass sensing array via an applied base motion containing the appropriate frequency content and phases. Applying parametric forcing in this manner is simple and aligns naturally with the vibrational properties of the sensing structure. Using this technique, we observe an increase in the quality factors of the coupled array resonances, which provides an effective means of improving device sensitivity.

  7. Immersive viewing engine

    NASA Astrophysics Data System (ADS)

    Schonlau, William J.

    2006-05-01

    An immersive viewing engine providing basic telepresence functionality for a variety of application types is presented. Augmented reality, teleoperation and virtual reality applications all benefit from the use of head mounted display devices that present imagery appropriate to the user's head orientation at full frame rates. Our primary application is the viewing of remote environments, as with a camera equipped teleoperated vehicle. The conventional approach where imagery from a narrow field camera onboard the vehicle is presented to the user on a small rectangular screen is contrasted with an immersive viewing system where a cylindrical or spherical format image is received from a panoramic camera on the vehicle, resampled in response to sensed user head orientation and presented via wide field eyewear display, approaching 180 degrees of horizontal field. Of primary interest is the user's enhanced ability to perceive and understand image content, even when image resolution parameters are poor, due to the innate visual integration and 3-D model generation capabilities of the human visual system. A mathematical model for tracking user head position and resampling the panoramic image to attain distortion free viewing of the region appropriate to the user's current head pose is presented and consideration is given to providing the user with stereo viewing generated from depth map information derived using stereo from motion algorithms.

  8. A systems concept of the vestibular organs

    NASA Technical Reports Server (NTRS)

    Mayne, R.

    1974-01-01

    A comprehensive model of vestibular organ function is presented. The model is based on an analogy with the inertial guidance systems used in navigation. Three distinct operations are investigated: angular motion sensing, linear motion sensing, and computation. These operations correspond to the semicircular canals, the otoliths, and central processing respectively. It is especially important for both an inertial guidance system and the vestibular organs to distinguish between attitude with respect to the vertical on the one hand, and linear velocity and displacement on the other. The model is applied to various experimental situations and found to be corroborated by them.

  9. Combining multiple earthquake models in real time for earthquake early warning

    USGS Publications Warehouse

    Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.

    2017-01-01

    The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.

  10. FBG in PVC foils for monitoring the knee joint movement during the rehabilitation process.

    PubMed

    Rocha, R P; Silva, A F; Carmo, J P; Correia, J H

    2011-01-01

    This paper presents an electronics-free wearable sensing solution for monitoring body kinematics. Measurement of knee flexion and extension, with the corresponding joint acting as the rotation axis, is shown as the working principle. The proposed sensing system is based on a single optical Fiber Bragg Grating (FBG) with a resonance wavelength of 1547.76 nm. The optical fiber with the FBG is placed inside a new polymeric foil composed of three flexible layers, which facilitates its placement on the anatomic parts under investigation while maintaining full sensing capabilities. The way the device is placed on the specific body part to be measured enables clear detection of the movements with respect to the joint. The proposed solution was tested using a prototype built to evaluate the device under different test conditions and to assess the system's consistency. The designed and fabricated system demonstrates clear advantages in medical fields such as physical therapy, as the optical fiber is not affected by electromagnetic interference, nor does the system need complex and expensive electronic systems and mechanical parts. Another advantage is the possibility to measure, record and evaluate specific mechanical parameters of the limbs' motion. Patients with bone-, muscle- and joint-related health conditions, as well as athletes, are among the most important end-user applications.
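
    As background on how an FBG converts joint motion into a measurable quantity, a minimal sketch of strain recovery from the Bragg-wavelength shift is given below; the strain-optic coefficient p_e of about 0.22 is a typical value for silica fibre, temperature effects are ignored, and the function name is an assumption rather than part of the paper.

    ```python
    def fbg_strain(lambda_measured_nm, lambda_bragg_nm=1547.76, p_e=0.22):
        """Strain from the Bragg-wavelength shift of an FBG (temperature ignored):
        delta_lambda / lambda_B = (1 - p_e) * strain."""
        return (lambda_measured_nm - lambda_bragg_nm) / (lambda_bragg_nm * (1.0 - p_e))

    # a 1.2 nm shift of the 1547.76 nm grating corresponds to roughly 0.1 % strain
    print(f"{fbg_strain(1548.96):.4%}")
    ```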

  11. Software Tools for Developing and Simulating the NASA LaRC CMF Motion Base

    NASA Technical Reports Server (NTRS)

    Bryant, Richard B., Jr.; Carrelli, David J.

    2006-01-01

    The NASA Langley Research Center (LaRC) Cockpit Motion Facility (CMF) motion base has provided many design and analysis challenges. In the process of addressing these challenges, a comprehensive suite of software tools was developed. Development of the software tools began with a detailed MATLAB/Simulink model of the motion base which was used primarily for safety loads prediction, design of the closed loop compensator and development of the motion base safety systems. A Simulink model of the digital control law, from which a portion of the embedded code is directly generated, was later added to this model to form a closed loop system model. Concurrently, software that runs on a PC was created to display and record motion base parameters. It includes a user interface for controlling time history displays, strip chart displays, data storage, and initialization of the function generators used during motion base testing. Finally, a software tool was developed for kinematic analysis and prediction of mechanical clearances for the motion system. These tools work together in an integrated package to support normal operations of the motion base, simulate the end-to-end operation of the motion base system providing facilities for software-in-the-loop testing, mechanical geometry and sensor data visualizations, and function generator setup and evaluation.

  12. A Study of a Handrim-Activated Power-Assist Wheelchair Based on a Non-Contact Torque Sensor

    PubMed Central

    Nam, Ki-Tae; Jang, Dae-Jin; Kim, Yong Chol; Heo, Yoon; Hong, Eung-Pyo

    2016-01-01

    Demand for wheelchairs is increasing with the growing numbers of aged and disabled persons. Manual wheelchairs are the most commonly used assistive device for mobility because they are convenient to transport. They have several advantages but are not easy to use for the elderly or those who lack muscular strength. Therefore, handrim-activated power-assist wheelchairs (HAPAW), which supplement the driving power with a motor by detecting user driving intentions through the handrim, are being researched. This research focuses on a HAPAW that judges user driving intentions using non-contact torque sensors. To deliver the desired motion, which is sensed from handrim rotation relative to a fixed controller, a new driving wheel mechanism is designed by applying a non-contact torque sensor, and the corresponding torques are simulated. Torques are measured on a driving wheel prototype and compared with simulation results. The HAPAW prototype was developed using these wheels, and a driving control algorithm that uses the left and right input torques and their time differences was applied to check whether the non-contact torque sensor can distinguish users' driving intentions. Through this procedure, it was confirmed that the proposed sensor can be used effectively in a HAPAW. PMID:27509508

  13. Ultra-wideband radar motion sensor

    DOEpatents

    McEwan, Thomas E.

    1994-01-01

    A motion sensor is based on ultra-wideband (UWB) radar. UWB radar range is determined by a pulse-echo interval. For motion detection, the sensors operate by staring at a fixed range and then sensing any change in the averaged radar reflectivity at that range. A sampling gate is opened at a fixed delay after the emission of a transmit pulse. The resultant sampling gate output is averaged over repeated pulses. Changes in the averaged sampling gate output represent changes in the radar reflectivity at a particular range, and thus motion.
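
    The detection principle described above amounts to comparing the averaged sampling-gate output against a slowly adapting baseline; a minimal sketch under that reading is given below. The exponential-averaging constant, the threshold, and the function name are assumptions rather than the patent's circuit.

    ```python
    import numpy as np

    def detect_motion(gate_samples, alpha=0.01, threshold=3.0):
        """Flag motion when the averaged sampling-gate output deviates from its
        slowly adapting baseline by more than `threshold` standard deviations."""
        baseline = gate_samples[0]
        var = 1e-6                                   # running variance estimate
        flags = []
        for x in gate_samples:
            dev = x - baseline
            flags.append(abs(dev) > threshold * np.sqrt(var))
            baseline += alpha * dev                  # slow average of radar reflectivity
            var += alpha * (dev * dev - var)
        return np.array(flags)
    ```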

  14. Ultra-wideband radar motion sensor

    DOEpatents

    McEwan, T.E.

    1994-11-01

    A motion sensor is based on ultra-wideband (UWB) radar. UWB radar range is determined by a pulse-echo interval. For motion detection, the sensors operate by staring at a fixed range and then sensing any change in the averaged radar reflectivity at that range. A sampling gate is opened at a fixed delay after the emission of a transmit pulse. The resultant sampling gate output is averaged over repeated pulses. Changes in the averaged sampling gate output represent changes in the radar reflectivity at a particular range, and thus motion. 15 figs.

  15. Motion-Based pH Sensing Based on the Cartridge-Case-like Micromotor.

    PubMed

    Su, Yajun; Ge, Ya; Liu, Limei; Zhang, Lina; Liu, Mei; Sun, Yunyu; Zhang, Hui; Dong, Bin

    2016-02-17

    In this paper, we report a novel cartridge-case-like micromotor. The micromotor, which is fabricated by the template synthesis method, consists of a gelatin shell with platinum nanoparticles decorating its inner surface. Intriguingly, the resulting cartridge-case-like structure exhibits a pH-dependent "open and close" feature, which originates from the pH responsiveness of the gelatin material. On the basis of the catalytic activity of the platinum nanoparticles inside the gelatin shell, the resulting cartridge-case-like structure is capable of moving autonomously in an aqueous solution containing the hydrogen peroxide fuel. More interestingly, we find that the micromotor can be utilized as a motion-based pH sensor over the whole pH range. The moving velocity of the micromotor increases monotonically with increasing pH of the analyte solution. Three different factors are considered to be responsible for the proportional relation between the motion speed and the pH of the analyte solution: the peroxidase-like and oxidase-like catalytic behavior of the platinum nanoparticles at low and high pH, the volumetric decomposition of the hydrogen peroxide under basic conditions, and the pH-dependent catalytic activity of the platinum nanoparticles caused by the swelling/deswelling behavior of the gelatin material. The current work highlights the impact of the material properties on the motion behavior of a micromotor, thus paving the way toward its application in the motion-based sensing field.

  16. Microcomputer based software for biodynamic simulation

    NASA Technical Reports Server (NTRS)

    Rangarajan, N.; Shams, T.

    1993-01-01

    This paper presents a description of a microcomputer-based software package, called DYNAMAN, which has been developed to allow an analyst to simulate the dynamics of a system consisting of a number of mass segments linked by joints. One primary application is in predicting the motion of a human occupant in a vehicle under the influence of a variety of external forces, especially those generated during a crash event. Extensive use of a graphical user interface has been made to aid the user in setting up the input data for the simulation and in viewing the results from the simulation. Among its many applications, it has been successfully used in the prototype design of a moving seat that aids in occupant protection during a crash, by aircraft designers in evaluating occupant injury in airplane crashes, and by users in accident reconstruction for reconstructing the motion of the occupant and correlating the impacts with observed injuries.

  17. Synergy-Based Bilateral Port: A Universal Control Module for Tele-Manipulation Frameworks Using Asymmetric Master–Slave Systems

    PubMed Central

    Brygo, Anais; Sarakoglou, Ioannis; Grioli, Giorgio; Tsagarakis, Nikos

    2017-01-01

    Endowing tele-manipulation frameworks with the capability to accommodate a variety of robotic hands is key to achieving high performance by permitting the end-effector to be flexibly interchanged according to the task considered. This requires the development of control policies that not only cope with asymmetric master–slave systems but also whose high-level components are designed in a unified space abstracted from the device specifics. To address this dual challenge, a novel synergy port is developed that resolves the kinematic, sensing, and actuation asymmetries of the considered system by generating motion and force feedback references in the hardware-independent hand postural synergy space. It builds upon the concept of the Cartesian-based synergy matrix, which is introduced as a tool mapping the fingertips' Cartesian space to the directions oriented along the grasp principal components. To assess the effectiveness of the proposed approach, the synergy port has been integrated into the control system of a highly asymmetric tele-manipulation framework, in which the 3-finger hand exoskeleton HEXOTRAC is used as a master device to control the SoftHand, a robotic hand whose transmission system relies on a single motor to drive all joints along a soft synergistic path. The platform is further enriched with the vision-based motion capture system Optitrack to monitor the 6D trajectory of the user’s wrist, which is used to control the robotic arm on which the SoftHand is mounted. Experiments have been conducted with the humanoid robot COMAN and the KUKA LWR robotic manipulator. Results indicate that this bilateral interface is highly intuitive and allows users with no prior experience to reach, grasp, and transport a variety of objects exhibiting very different shapes and impedances. In addition, the hardware and control solutions proved capable of accommodating users with different hand kinematics. Finally, the proposed control framework offers a universal, flexible, and intuitive interface allowing for the performance of effective tele-manipulations. PMID:28421179

  18. Using Multi-modal Sensing for Human Activity Modeling in the Real World

    NASA Astrophysics Data System (ADS)

    Harrison, Beverly L.; Consolvo, Sunny; Choudhury, Tanzeem

    Traditionally, smart environments have been understood to represent those (often physical) spaces where computation is embedded into the users' surrounding infrastructure, buildings, homes, and workplaces. Users of this "smartness" move in and out of these spaces. Ambient intelligence assumes that users are automatically and seamlessly provided with context-aware, adaptive information, applications and even sensing - though this remains a significant challenge even when limited to these specialized, instrumented locales. Since not all environments are "smart", the experience is not a pervasive one; rather, users move between these intelligent islands of computationally enhanced space while we still aspire to achieve a more ideal anytime, anywhere experience. Two key technological trends are helping to bridge the gap between these smart environments and make the associated experience more persistent and pervasive. Smaller and more computationally sophisticated mobile devices allow sensing, communication, and services to be more directly and continuously experienced by the user. Improved infrastructure and the availability of uninterrupted data streams, for instance location-based data, enable new services and applications to persist across environments.

  19. Statistical modeling for visualization evaluation through data fusion.

    PubMed

    Chen, Xiaoyu; Jin, Ran

    2017-11-01

    There is high demand for data visualization that provides insights to users in various applications. However, a consistent, online visualization evaluation method to quantify mental workload or user preference is lacking, which leads to an inefficient visualization and user interface design process. Recently, advances in interactive and sensing technologies have made electroencephalogram (EEG) signals, eye movements and visualization logs available for user-centered evaluation. This paper proposes a data fusion model and the application procedure for quantitative and online visualization evaluation. 15 participants joined the study based on three different visualization designs. The results provide a regularized regression model which can accurately predict the user's evaluation of task complexity, and indicate the significance of all three types of sensing data sets for visualization evaluation. This model can be widely applied to data visualization evaluation, and to other user-centered design evaluations and data analysis in human factors and ergonomics. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Accuracy Dimensions in Remote Sensing

    NASA Astrophysics Data System (ADS)

    Barsi, Á.; Kugler, Zs.; László, I.; Szabó, Gy.; Abdulmutalib, H. M.

    2018-04-01

    Technological developments in remote sensing (RS) during the past decade have contributed to a significant increase in the size of the data user community. For this reason, data quality issues in remote sensing have grown in importance, particularly in the era of Big Earth data. Dozens of available sensors, hundreds of sophisticated data processing techniques and countless software tools assist the processing of RS data and contribute to a major increase in applications and users. In past decades, the scientific and technological community of the spatial data environment focused on the evaluation of data quality elements computed for the point, line and area geometry of vector and raster data. Stakeholders of data production commonly use standardised parameters to characterise the quality of their datasets. Yet their efforts to estimate quality have not reached the general end-user community running heterogeneous applications, who assume that their spatial data are error-free and best fitted to the specification standards. The non-specialist, general user group has very limited knowledge of how spatial data meet their needs. These parameters, forming the external quality dimensions, imply that the same data system can be of different quality to different users, and the large collection of observed information carries uncertainty at a level that can degrade the reliability of applications. Based on a prior paper by the authors (in cooperation within the Remote Sensing Data Quality working group of ISPRS), which established a taxonomy of the dimensions of data quality in the GIS and remote sensing domains, this paper focuses on measures of uncertainty in the remote sensing data lifecycle, concentrating on land cover mapping issues. We introduce how the quality of various combinations of data and procedures can be summarized and how services fit the users' needs. The paper gives a theoretical overview of the issue, evaluates selected practice-oriented approaches, and discusses widely used dimension metrics such as the Root Mean Squared Error (RMSE) and the confusion matrix. The authors present data quality features of well-defined and poorly defined objects. The central part of the study is land cover mapping, describing its accuracy management model and presenting relevance and uncertainty measures of the quality dimensions that influence it. The theory is supported by a case study in which remote sensing technology is used to support the area-based agricultural subsidies of the European Union in the Hungarian administration.
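
    As a minimal illustration of the two dimension metrics named above, the sketch below computes a confusion matrix with overall accuracy for a land-cover classification and an RMSE for a continuous quantity; the class encoding, the toy data, and the function names are assumptions, not part of the paper.

    ```python
    import numpy as np

    def confusion_matrix(reference, predicted, n_classes):
        """Confusion matrix and overall accuracy for integer-coded land-cover classes."""
        cm = np.zeros((n_classes, n_classes), dtype=int)
        for r, p in zip(reference, predicted):
            cm[r, p] += 1
        return cm, np.trace(cm) / cm.sum()

    def rmse(reference, estimated):
        """Root Mean Squared Error of a continuous quantity (e.g. a positional error)."""
        reference, estimated = np.asarray(reference, float), np.asarray(estimated, float)
        return np.sqrt(np.mean((reference - estimated) ** 2))

    cm, oa = confusion_matrix([0, 0, 1, 2, 2, 1], [0, 1, 1, 2, 2, 1], n_classes=3)
    print(cm, f"overall accuracy = {oa:.2f}", f"RMSE = {rmse([1.0, 2.0], [1.1, 1.8]):.3f}")
    ```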

  1. Using a remote sensing-based, percent tree cover map to enhance forest inventory estimation

    Treesearch

    Ronald E. McRoberts; Greg C. Liknes; Grant M. Domke

    2014-01-01

    For most national forest inventories, the variables of primary interest to users are forest area and growing stock volume. The precision of estimates of parameters related to these variables can be increased using remotely sensed auxiliary variables, often in combination with stratified estimators. However, acquisition and processing of large amounts of remotely sensed...

  2. A Truthful Incentive Mechanism for Online Recruitment in Mobile Crowd Sensing System

    PubMed Central

    Chen, Xiao; Liu, Min; Zhou, Yaqin; Li, Zhongcheng; Chen, Shuang; He, Xiangnan

    2017-01-01

    We investigate emerging mobile crowd sensing (MCS) systems, in which new cloud-based platforms sequentially allocate homogeneous sensing jobs to dynamically arriving users with uncertain service qualities. Given that human beings are selfish by nature, it is crucial yet challenging to design an efficient and truthful incentive mechanism to encourage users to participate. To address the challenge, we propose a novel truthful online auction mechanism that can efficiently learn to make irreversible online decisions on winner selection for new MCS systems without requiring prior knowledge of users. Moreover, we theoretically prove that our incentive mechanism possesses truthfulness, individual rationality, and computational efficiency. Extensive simulation results under both real and synthetic traces demonstrate that our incentive mechanism can reduce the platform's payments while increasing the platform's utility and social welfare. PMID:28045441

  3. Hard Fusion Based Spectrum Sensing over Mobile Fading Channels in Cognitive Vehicular Networks

    PubMed Central

    Hao, Li; Ni, Dadong; Tran, Quang Thanh

    2018-01-01

    An explosive growth in vehicular wireless applications gives rise to spectrum resource starvation. Cognitive radio has been used in vehicular networks to mitigate the impending spectrum starvation problem by allowing vehicles to fully exploit spectrum opportunities unoccupied by licensed users. Efficient and effective detection of licensed users is a critical issue in realizing cognitive radio applications. However, spectrum sensing in vehicular environments is a very challenging task due to vehicle mobility. For instance, vehicle mobility has a large effect on the wireless channel, thereby impacting the detection performance of spectrum sensing. Considerable effort has therefore been devoted to analyzing the fading properties of the mobile radio channel in vehicular environments, and numerous studies have demonstrated that the wireless channel in vehicular environments can be characterized by temporally correlated Rayleigh fading. In this paper, we focus on energy detection for spectrum sensing and a counting rule for cooperative sensing based on the Neyman-Pearson criterion. We further examine the effect of sensing- and reporting-channel conditions on sensing performance under the temporally correlated Rayleigh channel. For local and cooperative sensing, we derive alternative expressions for the average probability of misdetection. Numerical and simulation results are provided to validate our theoretical analyses under a variety of scenarios. PMID:29415452
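
    As a hedged illustration of the energy-detection step mentioned above (and not the paper's correlated-Rayleigh analysis), the sketch below sets the detection threshold from a target false-alarm probability using the usual Gaussian approximation; signal and noise parameters are invented.

      # Toy energy detector for spectrum sensing (illustrative parameters).
      import numpy as np
      from scipy.stats import norm

      def energy_detect(samples, noise_var, p_fa=0.01):
          """Return (decision, statistic, threshold) for one sensing window."""
          n = len(samples)
          statistic = np.sum(np.abs(samples) ** 2)
          # Under noise only the statistic is chi-square; for large n use a Gaussian approximation.
          threshold = noise_var * (n + norm.isf(p_fa) * np.sqrt(2 * n))
          return statistic > threshold, statistic, threshold

      rng = np.random.default_rng(1)
      n, noise_var = 1000, 1.0
      noise_only = rng.normal(scale=np.sqrt(noise_var), size=n)
      with_pu = noise_only + 0.7 * np.sin(2 * np.pi * 0.1 * np.arange(n))  # toy primary-user signal

      print(energy_detect(noise_only, noise_var)[0])   # False with probability ~0.99
      print(energy_detect(with_pu, noise_var)[0])      # True at this toy SNR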

  4. Simple motion correction strategy reduces respiratory-induced motion artifacts for k-t accelerated and compressed-sensing cardiovascular magnetic resonance perfusion imaging.

    PubMed

    Zhou, Ruixi; Huang, Wei; Yang, Yang; Chen, Xiao; Weller, Daniel S; Kramer, Christopher M; Kozerke, Sebastian; Salerno, Michael

    2018-02-01

    Cardiovascular magnetic resonance (CMR) stress perfusion imaging provides important diagnostic and prognostic information in coronary artery disease (CAD). Current clinical sequences have limited temporal and/or spatial resolution and incomplete heart coverage. Techniques such as k-t principal component analysis (PCA) or k-t sparsity and low-rank structure (SLR), which rely on the high degree of spatiotemporal correlation in first-pass perfusion data, can significantly accelerate image acquisition, mitigating these problems. However, in the presence of respiratory motion, these techniques can suffer from significant degradation of image quality. A number of techniques based on non-rigid registration have been developed; however, to first approximation, breathing motion predominantly results in rigid motion of the heart. To this end, a simple, robust motion correction strategy is proposed for k-t accelerated and compressed sensing (CS) perfusion imaging. A simple respiratory motion compensation (MC) strategy for k-t accelerated and compressed-sensing CMR perfusion imaging, which selectively corrects respiratory motion of the heart, was implemented based on linear k-space phase shifts derived from rigid motion registration of a region of interest (ROI) encompassing the heart. A variable-density Poisson disk acquisition strategy was used to minimize coherent aliasing in the presence of respiratory motion, and images were reconstructed using k-t PCA and k-t SLR with or without motion correction. The strategy was evaluated in a CMR-extended cardiac torso digital (XCAT) phantom and in prospectively acquired first-pass perfusion studies in 12 subjects undergoing clinically ordered CMR studies. Phantom studies were assessed using the Structural Similarity Index (SSIM) and Root Mean Square Error (RMSE). In patient studies, image quality was scored in a blinded fashion by two experienced cardiologists. In the phantom experiments, images reconstructed with the MC strategy had higher SSIM (p < 0.01) and lower RMSE (p < 0.01) in the presence of respiratory motion. For patient studies, the MC strategy improved k-t PCA and k-t SLR reconstruction image quality (p < 0.01). k-t SLR without motion correction demonstrated improved image quality compared with k-t PCA in the setting of respiratory motion (p < 0.01), while with motion correction there was a trend toward better performance for k-t SLR compared with motion-corrected k-t PCA. Our simple and robust rigid motion compensation strategy greatly reduces motion artifacts and improves image quality for standard k-t PCA and k-t SLR techniques in the setting of respiratory motion due to imperfect breath-holding.
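
    The core mechanism named above, correcting a rigid in-plane translation by a linear phase applied in k-space (the Fourier shift theorem), can be sketched in a toy 2D example as follows; this is not the paper's full reconstruction pipeline.

      # Rigid translation correction via a linear k-space phase ramp (toy 2D example).
      import numpy as np

      def kspace_translate(kspace, dx, dy):
          """Multiply k-space by the linear phase equivalent to shifting the image by (dx, dy) pixels."""
          ny, nx = kspace.shape
          ky = np.fft.fftfreq(ny)[:, None]          # cycles per pixel
          kx = np.fft.fftfreq(nx)[None, :]
          return kspace * np.exp(-2j * np.pi * (kx * dx + ky * dy))

      img = np.zeros((64, 64))
      img[24:40, 24:40] = 1.0                       # toy "heart" region of interest
      k = np.fft.fft2(img)

      # Simulate a 3-pixel respiratory shift, then undo it with the opposite phase ramp.
      k_moved = kspace_translate(k, dx=3, dy=0)
      k_fixed = kspace_translate(k_moved, dx=-3, dy=0)
      print(np.allclose(np.fft.ifft2(k_fixed).real, img))   # True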

  5. Assessment of Haptic Interaction for Home-Based Physical Tele-Therapy using Wearable Devices and Depth Sensors.

    PubMed

    Barmpoutis, Angelos; Alzate, Jose; Beekhuizen, Samantha; Delgado, Horacio; Donaldson, Preston; Hall, Andrew; Lago, Charlie; Vidal, Kevin; Fox, Emily J

    2016-01-01

    In this paper a prototype system is presented for home-based physical tele-therapy using a wearable device for haptic feedback. The haptic feedback is generated as a sequence of vibratory cues from 8 vibrator motors equally spaced along an elastic wearable band. The motors guide the patients' movement as they perform a prescribed exercise routine in a way that replaces the physical therapists' haptic guidance in an unsupervised or remotely supervised home-based therapy session. A pilot study of 25 human subjects was performed that focused on: a) testing the capability of the system to guide the users in arbitrary motion paths in the space and b) comparing the motion of the users during typical physical therapy exercises with and without haptic-based guidance. The results demonstrate the efficacy of the proposed system.

  6. Hybrid motion sensing and experimental modal analysis using collocated smartphone camera and accelerometers

    NASA Astrophysics Data System (ADS)

    Ozer, Ekin; Feng, Dongming; Feng, Maria Q.

    2017-10-01

    State-of-the-art multisensory technologies and heterogeneous sensor networks offer a wide range of response measurement opportunities for structural health monitoring (SHM). Measuring and fusing different physical quantities related to structural vibration can provide alternative acquisition methods and improve the quality of modal testing results. This study builds on a recently introduced SHM concept, SHM with smartphones, and utilizes multisensory smartphone features in a hybrid structural vibration response measurement framework. Based on vibration testing of a small-scale multistory laboratory model, displacement and acceleration responses are monitored using two different smartphone sensors, an embedded camera and an accelerometer, respectively. Double integration or differentiation is performed to convert between measurement types so that multisensory measurements can be compared and combined. In addition, distributed sensor signals from collocated devices are processed for modal identification, and the performance of smartphone-based sensing platforms is tested under different configuration scenarios and heterogeneity levels. The results of these tests show a novel and successful implementation of a hybrid motion sensing platform through the integration of multiple sensor types and devices. Despite the heterogeneity of motion data obtained from different smartphone devices and technologies, it is shown that multisensory response measurements can be blended for experimental modal analysis. Benefiting from the accessibility of smartphone technology, similar smartphone-based dynamic testing methodologies can provide innovative SHM solutions with mobile, programmable, and cost-free interfaces.

  7. Geocenter Motion Derived from the JTRF2014 Combination

    NASA Astrophysics Data System (ADS)

    Abbondanza, C.; Chin, T. M.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; van Dam, T. M.; Wu, X.

    2016-12-01

    JTRF2014 is the JPL Terrestrial Reference Frame (TRF) recently obtained from the combination of the reprocessed space-geodetic inputs to the ITRF2014. Based on a Kalman filter and smoother approach, JTRF2014 assimilates station positions and Earth Orientation Parameters (EOPs) from GNSS, VLBI, SLR and DORIS and combines them through local tie measurements. JTRF2014 is in essence a time-series-based TRF. In JTRF2014 the dynamical evolution of the station positions is formulated by introducing linear and seasonal terms (annual and semi-annual periodic modes). Non-secular and non-seasonal motions of the geodetic sites are included in the smoothed time series by properly defining the station position process noise, whose variance is characterized by analyzing station displacements induced by temporal changes of planetary fluid masses (atmosphere, oceans and continental surface water). With its station position time series output at weekly resolution, JTRF2014 materializes a sub-secular frame whose origin is at the quasi-instantaneous Center of Mass (CM) as sensed by SLR. Both SLR and VLBI contribute to the scale of the combined frame. The sub-secular nature of the frame allows users to directly access the quasi-instantaneous geocenter and scale information. Unlike standard combined TRF products, which only give access to the secular component of the CM-CN motion, JTRF2014 preserves, in addition to the long-term component, the seasonal, non-seasonal and non-secular components of the geocenter motion. In the JTRF2014 assimilation scheme, local tie measurements are used to transfer the geocenter information from SLR to the space-geodetic techniques which are either insensitive to CM (VLBI) or whose geocenter motion is poorly determined (GNSS and DORIS). Properly tied to the CM frame through local ties and co-motion constraints, GNSS, VLBI and DORIS contribute to improving the SLR network geometry. In this paper, the determination of the weekly (CM-CN) time series inferred from the JTRF2014 combination will be presented. Comparisons with geocenter time series derived from global inversions of GPS, GRACE and ocean bottom pressure models show that the JTRF2014-derived geocenter compares favourably with the inversion results.

  8. Doppler ultrasound-based measurement of tendon velocity and displacement for application toward detecting user-intended motion.

    PubMed

    Stegman, Kelly J; Park, Edward J; Dechev, Nikolai

    2012-07-01

    The motivation of this research is to non-invasively monitor the displacement and velocity of the wrist tendons for the purpose of controlling a prosthetic device. This feasibility study aims to determine whether the proposed Doppler ultrasound technique can accurately estimate the tendon's instantaneous velocity and displacement. The study is conducted with a tendon-mimicking experiment involving two different materials, together with a commercial ultrasound scanner and a reference linear motion stage set-up. Audio-based output signals are acquired from the ultrasound scanner and processed with our proposed Fourier technique to obtain velocity and displacement estimates for the tendon. We then compare our estimates to the external reference system, and also to the ultrasound scanner's own estimates based on its proprietary software. The proposed tendon motion estimation method has been shown to be repeatable, effective and accurate in comparison to the external reference system, and is generally more accurate than the scanner's own estimates. Following this feasibility study, future testing will include cadaver-based studies to test the technique on human arm tendon anatomy, and later studies on live human subjects, in order to further refine the proposed method for the novel purpose of detecting user-intended tendon motion for controlling wearable prosthetic devices.
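
    A back-of-the-envelope sketch of the Doppler relation that underlies such velocity estimates is given below; the transmit frequency, beam angle, and shift values are illustrative, not those of the scanner or phantom used in the study.

      # Doppler-shift to velocity conversion and crude displacement integration (illustrative values).
      import numpy as np

      C_TISSUE = 1540.0        # assumed speed of sound in soft tissue, m/s

      def doppler_velocity(f_shift_hz, f_tx_hz, angle_deg):
          """Target velocity (m/s) from a measured Doppler shift, corrected for beam angle."""
          return f_shift_hz * C_TISSUE / (2.0 * f_tx_hz * np.cos(np.radians(angle_deg)))

      f_tx, angle, fs = 5e6, 60.0, 1000.0                      # 5 MHz transmit, 60 deg angle, 1 kHz sampling
      shifts = np.array([120.0, 150.0, 180.0, 160.0, 130.0])   # measured Doppler shifts in Hz (hypothetical)

      velocity = doppler_velocity(shifts, f_tx, angle)         # m/s
      displacement = np.cumsum(velocity) / fs                  # m, rectangular-rule integration
      print(velocity * 1e3, "mm/s")
      print(displacement * 1e3, "mm")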

  9. Sensor-Based Human Activity Recognition in a Multi-user Scenario

    NASA Astrophysics Data System (ADS)

    Wang, Liang; Gu, Tao; Tao, Xianping; Lu, Jian

    Existing work on sensor-based activity recognition focuses mainly on single-user activities. However, in real life, activities are often performed by multiple users and involve interactions between them. In this paper, we propose Coupled Hidden Markov Models (CHMMs) to recognize multi-user activities from sensor readings in a smart home environment. We develop a multimodal sensing platform and present a theoretical framework to recognize both single-user and multi-user activities. We conducted trace collection in a smart home and evaluated our framework through experimental studies. Our experimental results show an average accuracy of 85.46% with CHMMs.

  10. Analyzing locomotion synthesis with feature-based motion graphs.

    PubMed

    Mahmudi, Mentar; Kallmann, Marcelo

    2013-05-01

    We propose feature-based motion graphs for realistic locomotion synthesis among obstacles. Among several advantages, feature-based motion graphs achieve improved results in search queries, eliminate the need of postprocessing for foot skating removal, and reduce the computational requirements in comparison to traditional motion graphs. Our contributions are threefold. First, we show that choosing transitions based on relevant features significantly reduces graph construction time and leads to improved search performances. Second, we employ a fast channel search method that confines the motion graph search to a free channel with guaranteed clearance among obstacles, achieving faster and improved results that avoid expensive collision checking. Lastly, we present a motion deformation model based on Inverse Kinematics applied over the transitions of a solution branch. Each transition is assigned a continuous deformation range that does not exceed the original transition cost threshold specified by the user for the graph construction. The obtained deformation improves the reachability of the feature-based motion graph and in turn also reduces the time spent during search. The results obtained by the proposed methods are evaluated and quantified, and they demonstrate significant improvements in comparison to traditional motion graph techniques.

  11. High Sensitivity, Wearable, Piezoresistive Pressure Sensors Based on Irregular Microhump Structures and Its Applications in Body Motion Sensing.

    PubMed

    Wang, Zongrong; Wang, Shan; Zeng, Jifang; Ren, Xiaochen; Chee, Adrian J Y; Yiu, Billy Y S; Chung, Wai Choi; Yang, Yong; Yu, Alfred C H; Roberts, Robert C; Tsang, Anderson C O; Chow, Kwok Wing; Chan, Paddy K L

    2016-07-01

    A pressure sensor based on irregular microhump patterns has been proposed and developed. The devices show high sensitivity and a broad operating pressure range compared with regular-micropattern devices. Finite element analysis (FEA) is used to confirm the sensing mechanism and predict the performance of the pressure sensor based on the microhump structures. Silicon carbide sandpaper is employed as the mold to develop polydimethylsiloxane (PDMS) microhump patterns of various sizes. The active layer of the piezoresistive pressure sensor is developed by spin coating PSS on top of the patterned PDMS. The devices show an average sensitivity as high as 851 kPa⁻¹, a broad operating pressure range (20 kPa), low operating power (100 nW), and a fast response speed (6.7 kHz). Owing to their flexibility, the devices are applied to human body motion sensing and radial artery pulse monitoring. These flexible, high-sensitivity devices show great potential for the next generation of smart sensors for robotics, real-time health monitoring, and biomedical applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. VisitSense: Sensing Place Visit Patterns from Ambient Radio on Smartphones for Targeted Mobile Ads in Shopping Malls

    PubMed Central

    Kim, Byoungjip; Kang, Seungwoo; Ha, Jin-Young; Song, Junehwa

    2015-01-01

    In this paper, we introduce a novel smartphone framework called VisitSense that automatically detects and predicts a smartphone user's place visits from ambient radio to enable behavioral targeting for mobile ads in large shopping malls. VisitSense enables mobile app developers to adopt visit-pattern-aware mobile advertising for shopping mall visitors in their apps. It also benefits mobile users by allowing them to receive highly relevant mobile ads that reflect their place visit patterns in shopping malls. To achieve this, VisitSense employs accurate visit detection and prediction methods. For accurate visit detection, we develop a change-based detection method that takes into consideration both the stability change of ambient radio and the mobility change of users. It performs well in large shopping malls, where ambient radio is quite noisy and causes existing algorithms to fail easily. In addition, we propose a causality-based visit prediction model to capture the causality in sequential visit patterns for effective prediction. We have developed a VisitSense prototype system and a visit-pattern-aware mobile advertising application based on it. Furthermore, we deployed the system in the COEX Mall, one of the largest shopping malls in Korea, and conducted diverse experiments to show the effectiveness of VisitSense. PMID:26193275

  13. Heading Toward Launch with the Integrated Multi-Satellite Retrievals for GPM (IMERG)

    NASA Technical Reports Server (NTRS)

    Huffman, George J.; Bolvin, David T.; Nelkin, Eric J.; Adler, Robert F.

    2012-01-01

    The Day-1 algorithm for computing combined precipitation estimates in GPM is the Integrated Multi-satellitE Retrievals for GPM (IMERG). We plan for the period of record to encompass both the TRMM and GPM eras, and the coverage to extend to fully global as experience is gained in the difficult high-latitude environment. IMERG is being developed as a unified U.S. algorithm that takes advantage of strengths in the three groups that are contributing expertise: 1) the TRMM Multi-satellite Precipitation Analysis (TMPA), which addresses inter-satellite calibration of precipitation estimates and monthly scale combination of satellite and gauge analyses; 2) the CPC Morphing algorithm with Kalman Filtering (KF-CMORPH), which provides quality-weighted time interpolation of precipitation patterns following cloud motion; and 3) the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks using a Cloud Classification System (PERSIANN-CCS), which provides a neural-network-based scheme for generating microwave-calibrated precipitation estimates from geosynchronous infrared brightness temperatures. In this talk we summarize the major building blocks and important design issues driven by user needs and practical data issues. One concept being pioneered by the IMERG team is that the code system should produce estimates for the same time period but at different latencies to support the requirements of different groups of users. Another user requirement is that all these runs must be reprocessed as new IMERG versions are introduced. IMERG's status at meeting time will be summarized, and the processing scenario in the transition from TRMM to GPM will be laid out. Initially, IMERG will be run with TRMM-based calibration, and then a conversion to a GPM-based calibration will be employed after the GPM sensor products are validated. A complete reprocessing will be computed, which will complete the transition from TMPA.

  14. Optimisation of sensing time and transmission time in cognitive radio-based smart grid networks

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Fu, Yuli; Yang, Junjie

    2016-07-01

    Cognitive radio (CR)-based smart grid (SG) networks have been widely recognised as an emerging communication paradigm in power grids. However, sufficient spectrum resources and reliability are two major challenges for real-time applications in CR-based SG networks. In this article, we study the traffic data collection problem. Based on a two-stage power pricing model, the power price is associated with the traffic data effectively received by the meter data management system (MDMS). In order to minimise the system power price, a wideband hybrid access strategy is proposed and analysed to share the spectrum between SG nodes and CR networks. The sensing time and transmission time are jointly optimised, while both the interference to primary users and the spectrum opportunity loss of secondary users are considered. Two algorithms are proposed to solve the joint optimisation problem. Simulation results show that the proposed joint optimisation algorithms outperform algorithms with fixed parameters (sensing time and transmission time) and reduce the power cost effectively.
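
    The paper's optimisation is tied to its two-stage power-pricing model; as a generic stand-in, the sketch below sweeps the classical sensing-throughput trade-off on which such formulations build (longer sensing lowers the false-alarm probability but leaves less time for transmission). All parameter values are assumptions.

      # Generic sensing-throughput trade-off sweep (illustrative parameters).
      import numpy as np
      from scipy.stats import norm

      T, fs, snr = 0.1, 6e6, 0.05        # frame length (s), sampling rate (Hz), primary-user SNR (assumed)
      p_d = 0.9                          # required detection probability
      c0 = 6.6                           # rate when the channel is correctly judged free (bit/s/Hz, assumed)

      def throughput(tau):
          n = tau * fs
          # False-alarm probability of an energy detector when the threshold is fixed by p_d
          # (Gaussian approximation of the test statistic).
          p_fa = norm.sf(np.sqrt(2 * snr + 1) * norm.isf(p_d) + np.sqrt(n) * snr)
          return (T - tau) / T * c0 * (1 - p_fa)

      taus = np.linspace(1e-4, 0.05, 500)
      best = taus[np.argmax([throughput(t) for t in taus])]
      print(f"throughput-optimal sensing time ~ {best * 1e3:.2f} ms")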

  15. When Simple Harmonic Motion Is Not that Simple: Managing Epistemological Complexity by Using Computer-Based Representations

    ERIC Educational Resources Information Center

    Parnafes, Orit

    2010-01-01

    Many real-world phenomena, even "simple" physical phenomena such as natural harmonic motion, are complex in the sense that they require coordinating multiple subtle foci of attention to get the required information when experiencing them. Moreover, for students to develop sound understanding of a concept or a phenomenon, they need to learn to get…

  16. The Sophia-Antipolis Conference: General presentation and basic documents. [remote sensing for agriculture, forestry, water resources, and environment management in France

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The procedures and techniques used in NASA's aerospace technology transfer program are reviewed for consideration in establishing priorities and bases for joint action by technicians and users of remotely sensed data in France. Particular emphasis is given to remote sensing in agriculture, forestry, water resources, environment management, and urban research.

  17. Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices

    NASA Astrophysics Data System (ADS)

    Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun

    2014-05-01

    With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on the accelerometer; to achieve competitive accuracy, users are required to hold the device in a predefined manner during operation. In this paper, a high-accuracy human gesture recognition system is proposed based on the fusion of multiple motion sensors. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with a pure software implementation, a speed-up of approximately 45 times is achieved while operating at 20 MHz. The experiments show that the average accuracy over 10 gestures reaches 93.98% for the user-independent case and 96.14% for the user-dependent case when subjects hold the device arbitrarily while completing the specified gestures. Although a few percent lower than the best conventional results, this still provides competitive accuracy acceptable for practical usage. Most importantly, the proposed system allows users to hold the device arbitrarily while performing the predefined gestures, which substantially enhances the user experience.

  18. Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.

    2018-04-01

    The data center is a data processing and application concept proposed in recent years. It is a new processing approach based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes the computing nodes of cluster resources and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it invokes many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application, and addressing the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability and advantages of the system design were verified by building an actual Hadoop service system, testing the storage efficiency of different image data with multiple users, and analyzing how the distributed storage architecture improves the application efficiency of remote sensing images.

  19. Enabling Smart Workflows over Heterogeneous ID-Sensing Technologies

    PubMed Central

    Giner, Pau; Cetina, Carlos; Lacuesta, Raquel; Palacios, Guillermo

    2012-01-01

    Sensing technologies in mobile devices play a key role in reducing the gap between the physical and the digital world. The use of automatic identification capabilities can improve user participation in business processes where physical elements are involved (Smart Workflows). However, identifying all objects in the user surroundings does not automatically translate into meaningful services to the user. This work introduces Parkour, an architecture that allows the development of services that match the goals of each of the participants in a smart workflow. Parkour is based on a pluggable architecture that can be extended to provide support for new tasks and technologies. In order to facilitate the development of these plug-ins, tools that automate the development process are also provided. Several Parkour-based systems have been developed in order to validate the applicability of the proposal. PMID:23202193

  20. An EMG-based robot control scheme robust to time-varying EMG signal features.

    PubMed

    Artemiadis, Panagiotis K; Kyriakopoulos, Kostas J

    2010-05-01

    Human-robot control interfaces have received increased attention during the past decades. With the introduction of robots into everyday life, especially in providing services to people with special needs (e.g., the elderly, people with impairments, or people with disabilities), there is a strong necessity for simple and natural control interfaces. In this paper, electromyographic (EMG) signals from muscles of the human upper limb are used as the control interface between the user and a robot arm. EMG signals are recorded using surface EMG electrodes placed on the user's skin, leaving the user's upper limb free of the bulky interface sensors or machinery usually found in conventional human-controlled systems. The proposed interface allows the user to control an anthropomorphic robot arm in 3-D space in real time, using upper limb motion estimates based only on EMG recordings. Moreover, the proposed interface is robust to EMG changes over time, mainly caused by muscle fatigue or adjustments of contraction level. The efficiency of the method is assessed through real-time experiments, including random arm motions in 3-D space with variable hand speed profiles.

  1. A multi-criteria approach to camera motion design for volume data animation.

    PubMed

    Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.

  2. Sensor fusion IV: Control paradigms and data structures; Proceedings of the Meeting, Boston, MA, Nov. 12-15, 1991

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Various papers on control paradigms and data structures in sensor fusion are presented. The general topics addressed include: decision models and computational methods, sensor modeling and data representation, active sensing strategies, geometric planning and visualization, task-driven sensing, motion analysis, models motivated by biology and psychology, decentralized detection and distributed decision making, data fusion architectures, robust estimation of shapes and features, and application and implementation. Some of the individual subjects considered are: the Firefly experiment on neural networks for distributed sensor data fusion, manifold traversing as a model for learning control of autonomous robots, the choice of coordinate systems for multiple sensor fusion, continuous motion using task-directed stereo vision, interactive and cooperative sensing and control for advanced teleoperation, knowledge-based imaging for terrain analysis, and physical and digital simulations for IVA robotics.

  3. Flexible one-structure arched triboelectric nanogenerator based on common electrode for high efficiency energy harvesting and self-powered motion sensing

    NASA Astrophysics Data System (ADS)

    Chen, Xi; He, Jian; Song, Linlin; Zhang, Zengxing; Tian, Zhumei; Wen, Tao; Zhai, Cong; Chen, Yi; Cho, Jundong; Chou, Xiujian; Xue, Chenyang

    2018-04-01

    Triboelectric nanogenerators are widely used because of their low cost, simple manufacturing process and high output performance. In this work, a flexible one-structure arched triboelectric nanogenerator (FOAT), based on a common electrode combining the single-electrode and contact-separation modes, was designed using silicone rubber, epoxy resin and a flexible electrode. A peak-to-peak short-circuit current of 18 μA and a peak-to-peak open-circuit voltage of 570 V can be obtained from a FOAT of size 5 × 7 cm² under a frequency of 3 Hz and a pressure of 300 N. The peak-to-peak short-circuit current of the FOAT is increased by 29% and 80%, and the peak-to-peak open-circuit voltage is increased by 33% and 54%, compared with the single-electrode and contact-separation modes, respectively. The FOAT realizes the combination of the two generation modes, which improves the output performance of the triboelectric nanogenerator (TENG). 62 light-emitting diodes (LEDs) can be completely lit up, and a 2.2 μF capacitor can be easily charged to 1.2 V in 9 s. When the FOAT is placed at different parts of the human body, human motion energy can be harvested and serve as the sensing signal for a motion monitoring sensor. Based on these characteristics, the FOAT exhibits great potential in illumination, in power supplies for wearable electronic devices, and as a self-powered motion monitoring sensor that harvests the energy of human motion.

  4. An Approach to Data Center-Based KDD of Remote Sensing Datasets

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Mack, Robert; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    The data explosion in remote sensing is straining the ability of data centers to deliver the data to the user community, yet many large-volume users actually seek a relatively small information component within the data, which they extract at their sites using Knowledge Discovery in Databases (KDD) techniques. To improve the efficiency of this process, the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC) has implemented a KDD subsystem that supports execution of the user's KDD algorithm at the data center, dramatically reducing the volume that is sent to the user. The data are extracted from the archive in a planned, organized "campaign"; the algorithms are executed, and the output products sent to the users over the network. The first campaign, now complete, has resulted in overall reductions in shipped volume from 3.3 TB to 0.4 TB.

  5. 3-d brownian motion simulator for high-sensitivity nanobiotechnological applications.

    PubMed

    Toth, Arpád; Banky, Dániel; Grolmusz, Vince

    2011-12-01

    A wide variety of nanobiotechnological applications are being developed for nanoparticle-based in vitro diagnostic and imaging systems. Some of these systems make possible highly sensitive detection of molecular biomarkers. Frequently, the very low concentration of the biomarkers makes classical, partial-differential-equation-based mathematical simulation of the motion of the nanoparticles involved impossible. We present a three-dimensional Brownian motion simulation tool for predicting the movement of nanoparticles under various thermal, viscosity, and geometric settings in a rectangular cuvette. For nonprofit users the server is freely available at http://brownian.pitgroup.org.
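
    A minimal sketch of the kind of simulation such a tool performs is shown below: free 3D Brownian motion of a single nanoparticle, with the diffusion coefficient taken from the Stokes-Einstein relation. Particle size, temperature, and viscosity are illustrative, not values from the paper.

      # Free 3D Brownian motion of a nanoparticle (illustrative parameters).
      import numpy as np

      K_B = 1.380649e-23      # Boltzmann constant, J/K

      def brownian_3d(radius_m, temp_k, viscosity_pa_s, dt, n_steps, rng=None):
          """Return an (n_steps + 1, 3) array of particle positions in metres."""
          if rng is None:
              rng = np.random.default_rng()
          d = K_B * temp_k / (6 * np.pi * viscosity_pa_s * radius_m)     # Stokes-Einstein diffusion coefficient
          steps = rng.normal(scale=np.sqrt(2 * d * dt), size=(n_steps, 3))
          return np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])

      # 50 nm radius particle in water at 300 K, 1 microsecond time step, 10 ms total.
      path = brownian_3d(radius_m=50e-9, temp_k=300.0, viscosity_pa_s=1e-3,
                         dt=1e-6, n_steps=10_000, rng=np.random.default_rng(0))
      print("net displacement after 10 ms:", np.linalg.norm(path[-1]), "m")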

  6. Real-Time Classification of Hand Motions Using Ultrasound Imaging of Forearm Muscles.

    PubMed

    Akhlaghi, Nima; Baker, Clayton A; Lahlou, Mohamed; Zafar, Hozaifah; Murthy, Karthik G; Rangwala, Huzefa S; Kosecka, Jana; Joiner, Wilsaan M; Pancrazio, Joseph J; Sikdar, Siddhartha

    2016-08-01

    Surface electromyography (sEMG) has been the predominant method for sensing electrical activity for a number of applications involving muscle-computer interfaces, including myoelectric control of prostheses and rehabilitation robots. Ultrasound imaging for sensing mechanical deformation of functional muscle compartments can overcome several limitations of sEMG, including the inability to differentiate between deep contiguous muscle compartments, low signal-to-noise ratio, and lack of a robust graded signal. The objective of this study was to evaluate the feasibility of real-time graded control using a computationally efficient method to differentiate between complex hand motions based on ultrasound imaging of forearm muscles. Dynamic ultrasound images of the forearm muscles were obtained from six able-bodied volunteers and analyzed to map muscle activity based on the deformation of the contracting muscles during different hand motions. Each participant performed 15 different hand motions, including digit flexion, different grips (i.e., power grasp and pinch grip), and grips in combination with wrist pronation. During the training phase, we generated a database of activity patterns corresponding to different hand motions for each participant. During the testing phase, novel activity patterns were classified using a nearest neighbor classification algorithm based on that database. The average classification accuracy was 91%. Real-time image-based control of a virtual hand showed an average classification accuracy of 92%. Our results demonstrate the feasibility of using ultrasound imaging as a robust muscle-computer interface. Potential clinical applications include control of multiarticulated prosthetic hands, stroke rehabilitation, and fundamental investigations of motor control and biomechanics.
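
    The training/testing loop described above reduces to template matching by nearest neighbour; the sketch below uses random placeholder vectors in place of ultrasound-derived activity patterns, and the motion names and dimensionality are invented.

      # Nearest-neighbour matching of activity-pattern vectors (placeholder data).
      import numpy as np

      rng = np.random.default_rng(0)
      motions = ["power_grasp", "pinch_grip", "index_flexion"]

      # Training phase: store one averaged activity pattern per motion (assumed 64-dimensional).
      database = {m: rng.normal(loc=i, size=64) for i, m in enumerate(motions)}

      def classify(pattern, database):
          """Return the motion whose stored pattern is closest in Euclidean distance."""
          return min(database, key=lambda m: np.linalg.norm(pattern - database[m]))

      # Testing phase: a novel pattern near the pinch-grip template should map back to it.
      novel = database["pinch_grip"] + rng.normal(scale=0.1, size=64)
      print(classify(novel, database))      # expected: pinch_grip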

  7. MotorSense: Using Motion Tracking Technology to Support the Identification and Treatment of Gross-Motor Dysfunction.

    PubMed

    Arnedillo-Sánchez, Inmaculada; Boyle, Bryan; Bossavit, Benoît

    2017-01-01

    MotorSense is a motion detection and tracking technology that can be implemented across a range of environments to assist in detecting delays in gross-motor skills development. The system utilises the motion tracking functionality of Microsoft's Kinect™. It features games that require children to perform graded gross-motor tasks matched with their chronological and developmental ages. This paper describes the rationale for MotorSense, provides an overview of the functionality of the system and illustrates sample activities.

  8. Usability Evaluation Methods for Gesture-Based Games: A Systematic Review

    PubMed Central

    Simor, Fernando Winckler; Brum, Manoela Rogofski; Schmidt, Jaison Dairon Ebertz; De Marchi, Ana Carolina Bertoletti

    2016-01-01

    Background: Gestural interaction systems are increasingly being used, mainly in games, expanding the idea of entertainment and providing experiences with the purpose of promoting better physical and/or mental health. Therefore, it is necessary to establish mechanisms for evaluating the usability of these interfaces, which make gestures the basis of interaction, to achieve a balance between functionality and ease of use. Objective: This study aims to present the results of a systematic review focused on usability evaluation methods for gesture-based games, considering devices with motion-sensing capability. We considered the usability methods used, the common interface issues, and the strategies adopted to build good gesture-based games. Methods: The research was centered on four electronic databases: IEEE, Association for Computing Machinery (ACM), Springer, and Science Direct, from September 4 to 21, 2015. Of the 1427 studies evaluated, 10 matched the eligibility criteria. As requirements, we considered studies about gesture-based games, Kinect and/or Wii as devices, and the use of a usability method to evaluate the user interface. Results: In the 10 studies found, there was no standardization in the methods because they considered diverse analysis variables. Authors heterogeneously used different instruments to evaluate gesture-based interfaces, and no default approach was proposed. Questionnaires were the most used instruments (70%, 7/10), followed by interviews (30%, 3/10), and observation and video recording (20%, 2/10). Moreover, 60% (6/10) of the studies used gesture-based serious games to evaluate the performance of elderly participants in rehabilitation tasks. This highlights the need to create an evaluation protocol for older adults to provide a user-friendly interface according to the user's age and limitations. Conclusions: Through this study, we conclude that this field is in need of a usability evaluation method for serious games, especially games for older adults, and that the definition of a methodology and a test protocol may offer the user more comfort, welfare, and confidence. PMID:27702737

  9. Impaired limb proprioception in adults with spasmodic dysphonia

    PubMed Central

    Konczak, Jürgen; Aman, Joshua E.; Chen, Yu-Wen; Li, Kuan-yi; Watson, Peter J.

    2015-01-01

    Objectives: Focal dystonias of the head and neck are associated with a loss of kinaesthetic acuity at muscles distant from the dystonic sites. That is, while the motor deficits in focal dystonia are confined, the associated somatosensory deficits are generalized. This is the first systematic study to examine whether patients diagnosed with spasmodic dysphonia (SD) show somatosensory impairments similar in scope to other forms of focal dystonia. Methods: Proprioceptive acuity (the ability to discriminate between two stimuli) for forearm position and motion sense was assessed in 14 spasmodic dysphonia subjects and 28 age-matched controls using a passive motion apparatus. Psychophysical thresholds, uncertainty areas, and a proprioceptive acuity index were computed based on the subjects' verbal responses. Results: First, the SD group showed significantly elevated thresholds and uncertainty areas for forearm position sense when compared to the control group. Second, 9 of the 14 SD subjects (64%) exhibited an acuity index for position sense above the control group maximum, and three SD subjects had a motion sense acuity index above the control group maximum. Conclusion: The results indicate that impaired limb proprioception is a common feature of SD. Like other forms of focal dystonia, spasmodic dysphonia affects the somatosensation of non-dystonic muscle systems. That is, SD is associated with a generalized somatosensory deficit. PMID:25737471

  10. Remarks on the derivation of the governing equations for the dynamics of a nonlinear beam to a non ideal shaft coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fenili, André; Lopes Rebello da Fonseca Brasil, Reyolando Manoel; Balthazar, José M., E-mail: jmbaltha@gmail.com

    We derive nonlinear governing equations without assuming that the beam is inextensible. The derivation couples the equations that govern a weak electric motor, which is used to rotate the base of the beam, to those that govern the motion of the beam. The system is considered non-ideal in the sense that the response of the motor to an applied voltage and the motion of the beam must be obtained interactively. The moment that the motor exerts on the base of the beam cannot be determined without solving for the motion of the beam.

  11. Brain-machine interfacing control of whole-body humanoid motion

    PubMed Central

    Bouyarmane, Karim; Vaillant, Joris; Sugimoto, Norikazu; Keith, François; Furukawa, Jun-ichiro; Morimoto, Jun

    2014-01-01

    We propose to tackle in this paper the problem of controlling whole-body humanoid robot behavior through non-invasive brain-machine interfacing (BMI), motivated by the prospect of mapping human motor control strategies onto a human-like mechanical avatar. Our solution is based on an adequate reduction of the controllable dimensionality of high-DOF humanoid motion, in line with the state-of-the-art capabilities of non-invasive BMI technologies, leaving the complementary subspace of the motion to be planned and executed by an autonomous humanoid whole-body motion planning and control framework. The results are shown in a full physics-based simulation of a 36-degree-of-freedom humanoid motion controlled by a user through EEG-extracted brain signals generated with a motor imagery task. PMID:25140134

  12. The effects of platform motion and target orientation on the performance of trackball manipulation.

    PubMed

    Yau, Yi-Jan; Chao, Chin-Jung; Feng, Wen-Yang; Hwang, Sheue-Ling

    2011-08-01

    The trackball has been widely employed as a control/command input device on moving vehicles, but few studies have explored the effects of platform motion on its manipulation. Fewer still have considered this issue in designing the user interface and the arrangement of console location and orientation simultaneously. This work describes an experiment carried out to investigate the performance of trackball users on a simple point-and-click task in a motion simulator. By varying the orientation of onscreen targets, the effect of cursor movement direction on performance is investigated. The results indicate that the platform motion and target orientation both significantly affect the time required to point and click, but not the accuracy of target selection. The movement times were considerably longer under rolling and pitching motions and for targets located along the diagonal axes of the interface. Subjective evaluations carried out by the participants agree with these objective results. These findings could be used to optimise console and graphical menu design for use on maritime vessels. STATEMENT OF RELEVANCE: In military situations, matters of life or death may be decided in milliseconds. Any delay or error in classification and identification will thus affect the safety of the ship and its crew. This study demonstrates that performance of manipulating a trackball is affected by the platform motion and target orientation. The results of the present study can guide the arrangement of consoles and the design of trackball-based graphical user interfaces on maritime vessels.

  13. Using Public Network Infrastructures for UAV Remote Sensing in Civilian Security Operations

    DTIC Science & Technology

    2011-03-01

    ...leveraging public wireless communication networks for UAV-based sensor networks with respect to existing constraints and user requirements... Detection with an Autonomous Micro UAV Mesh Network. In the near future police departments, fire brigades and other homeland security ...

  14. A review of e-textiles in neurological rehabilitation: How close are we?

    PubMed

    McLaren, Ruth; Joseph, Frances; Baguley, Craig; Taylor, Denise

    2016-06-21

    Textiles able to perform electronic functions are known as e-textiles, and are poised to revolutionise the manner in which rehabilitation and assistive technology is provided. With numerous reports in mainstream media of the possibilities and promise of e-textiles, it is timely to review research work in this area related to neurological rehabilitation. This paper provides a review based on a systematic search conducted using the EBSCO-Health, Scopus, AMED, PEDro and ProQuest databases, complemented by articles sourced from reference lists. Articles were included if the e-textile technology described had the potential for use in neurological rehabilitation and had been trialled on human participants. A total of 108 records were identified and screened, with 20 meeting the broad review inclusion criteria. Nineteen user trials with healthy people and one pilot study with stroke participants have been reported. The review identifies two areas of research focus: motion sensing, and the measurement of, or stimulation of, muscle activity. In terms of motion sensing, e-textiles appear able to reliably measure gross movement and whether an individual has achieved a predetermined movement pattern. However, the technology still remains somewhat cumbersome and lacking in resolution at present. The measurement of muscle activity and the provision of functional electrical stimulation via e-textiles are in the initial stages of development but show potential for e-textile expansion into assistive technologies. The review identified a lack of high-quality clinical evidence and, in some cases, a lack of practicality for clinical application. These issues may be overcome by engaging clinicians in e-textile research and using their expertise to develop products that augment and enhance neurological rehabilitation practice.

  15. Validation of XMALab software for marker-based XROMM.

    PubMed

    Knörlein, Benjamin J; Baier, David B; Gatesy, Stephen M; Laurence-Chasen, J D; Brainerd, Elizabeth L

    2016-12-01

    Marker-based XROMM requires software tools for: (1) correcting fluoroscope distortion; (2) calibrating X-ray cameras; (3) tracking radio-opaque markers; and (4) calculating rigid body motion. In this paper we describe and validate XMALab, a new open-source software package for marker-based XROMM (C++ source and compiled versions on Bitbucket). Most marker-based XROMM studies to date have used XrayProject in MATLAB. XrayProject can produce results with excellent accuracy and precision, but it is somewhat cumbersome to use and requires a MATLAB license. We have designed XMALab to accelerate the XROMM process and to make it more accessible to new users. Features include the four XROMM steps (listed above) in one cohesive user interface, real-time plot windows for detecting errors, and integration with an online data management system, XMAPortal. Accuracy and precision of XMALab when tracking markers in a machined object are ±0.010 and ±0.043 mm, respectively. Mean precision for nine users tracking markers in a tutorial dataset of minipig feeding was ±0.062 mm in XMALab and ±0.14 mm in XrayProject. Reproducibility of 3D point locations across nine users was 10-fold greater in XMALab than in XrayProject, and six degree-of-freedom bone motions calculated with a joint coordinate system were 3- to 6-fold more reproducible in XMALab. XMALab is also suitable for tracking white or black markers in standard light videos with optional checkerboard calibration. We expect XMALab to increase both the quality and quantity of animal motion data available for comparative biomechanics research. © 2016. Published by The Company of Biologists Ltd.

  16. Collaborative damage mapping for emergency response: the role of Cognitive Systems Engineering

    NASA Astrophysics Data System (ADS)

    Kerle, N.; Hoffman, R. R.

    2013-01-01

    Remote sensing is increasingly used to assess disaster damage, traditionally by professional image analysts. A recent alternative is crowdsourcing by volunteers experienced in remote sensing, using internet-based mapping portals. We identify a range of problems in current approaches, including how volunteers can best be instructed for the task, ensuring that instructions are accurately understood and translate into valid results, or how the mapping scheme must be adapted for different map user needs. The volunteers, the mapping organizers, and the map users all perform complex cognitive tasks, yet little is known about the actual information needs of the users. We also identify problematic assumptions about the capabilities of the volunteers, principally related to the ability to perform the mapping, and to understand mapping instructions unambiguously. We propose that any robust scheme for collaborative damage mapping must rely on Cognitive Systems Engineering and its principal method, Cognitive Task Analysis (CTA), to understand the information and decision requirements of the map and image users, and how the volunteers can be optimally instructed and their mapping contributions merged into suitable map products. We recommend an iterative approach involving map users, remote sensing specialists, cognitive systems engineers and instructional designers, as well as experimental psychologists.

  17. Smart Braid Feedback for the Closed-Loop Control of Soft Robotic Systems.

    PubMed

    Felt, Wyatt; Chin, Khai Yi; Remy, C David

    2017-09-01

    This article experimentally investigates the potential of using flexible, inductance-based contraction sensors in the closed-loop motion control of soft robots. Accurate motion control remains a highly challenging task for soft robotic systems. Precise models of the actuation dynamics and environmental interactions are often unavailable. This renders open-loop control impossible, while closed-loop control suffers from a lack of suitable feedback. Conventional motion sensors, such as linear or rotary encoders, are difficult to adapt to robots that lack discrete mechanical joints. The rigid nature of these sensors runs contrary to the aspirational benefits of soft systems. As truly soft sensor solutions are still in their infancy, motion control of soft robots has so far relied on laboratory-based sensing systems such as motion capture, electromagnetic (EM) tracking, or Fiber Bragg Gratings. In this article, we used embedded flexible sensors known as Smart Braids to sense the contraction of McKibben muscles through changes in inductance. We evaluated closed-loop control on two systems: a revolute joint and a planar, one degree of freedom continuum manipulator. In the revolute joint, our proposed controller compensated for elasticity in the actuator connections. The Smart Braid feedback allowed motion control with a steady-state root-mean-square (RMS) error of 1.5°. In the continuum manipulator, Smart Braid feedback enabled tracking of the desired tip angle with a steady-state RMS error of 1.25°. This work demonstrates that Smart Braid sensors can provide accurate position feedback in closed-loop motion control suitable for field applications of soft robotic systems.
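
    As a generic illustration (not the controller evaluated in the article), the sketch below converts an inductance reading to a contraction estimate through an assumed linear calibration and feeds the error into a PI loop that commands muscle pressure; all constants are invented.

      # Inductance-based contraction feedback driving a PI pressure command (illustrative constants).
      class PIController:
          def __init__(self, kp, ki, dt):
              self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0

          def update(self, error):
              self.integral += error * self.dt
              return self.kp * error + self.ki * self.integral

      def contraction_from_inductance(l_henry, l_rest=10e-6, dl_dcontraction=40e-6):
          """Assumed linear calibration; the real slope and sign depend on the braid geometry."""
          return (l_henry - l_rest) / dl_dcontraction

      pi = PIController(kp=800.0, ki=200.0, dt=0.01)     # gains are placeholders
      target_contraction = 0.15                          # desired fractional contraction
      measured_inductance = 14e-6                        # H, hypothetical sensor reading

      contraction = contraction_from_inductance(measured_inductance)
      pressure_command_kpa = pi.update(target_contraction - contraction)
      print(contraction, pressure_command_kpa)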

  18. Satellite-Tracking Millimeter-Wave Reflector Antenna System For Mobile Satellite-Tracking

    NASA Technical Reports Server (NTRS)

    Densmore, Arthur C. (Inventor); Jamnejad, Vahraz (Inventor); Woo, Kenneth E. (Inventor)

    2001-01-01

    A miniature dual-band two-way mobile satellite-tracking antenna system mounted on a movable vehicle includes a miniature parabolic reflector dish having an elliptical aperture with major and minor elliptical axes aligned horizontally and vertically, respectively, to maximize azimuthal directionality and minimize elevational directionality to an extent corresponding to expected pitch excursions of the movable ground vehicle. A feed-horn has a back end and an open front end facing the reflector dish and has vertical side walls opening out from the back end to the front end at a lesser horn angle and horizontal top and bottom walls opening out from the back end to the front end at a greater horn angle. An RF circuit couples two different signal bands between the feed-horn and the user. An antenna attitude controller maintains an antenna azimuth direction relative to the satellite by rotating it in azimuth in response to sensed yaw motions of the movable ground vehicle so as to compensate for the yaw motions to within a pointing error angle. The controller sinusoidally dithers the antenna through a small azimuth dither angle greater than the pointing error angle while sensing a signal from the satellite received at the reflector dish, and deduces the pointing angle error from dither-induced fluctuations in the received signal.
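
    A hedged sketch of the dither-tracking principle described in this patent is shown below: the commanded azimuth is dithered sinusoidally and the component of the received-signal fluctuation that is in phase with the dither is extracted, lock-in style, as an error signal. The antenna pattern, angles, and constants are invented.

      # Deriving a pointing-error signal from dither-induced signal fluctuations (toy model).
      import numpy as np

      def demodulate_error(received_power, dither_phase):
          """Correlate power fluctuations with the dither waveform (lock-in style)."""
          fluctuation = received_power - received_power.mean()
          return 2.0 * np.mean(fluctuation * np.sin(dither_phase))

      t = np.linspace(0.0, 1.0, 200, endpoint=False)
      dither_phase = 2 * np.pi * t
      true_error_deg = 0.4                                # actual pointing error, unknown to the controller
      dither_deg = 0.2 * np.sin(dither_phase)             # small commanded azimuth dither
      offset = true_error_deg + dither_deg
      received_power = 1.0 - 0.5 * offset ** 2            # toy quadratic antenna pattern

      error_signal = demodulate_error(received_power, dither_phase)
      print(error_signal)   # about -0.08 here: proportional to the true error, sign set by the pattern slope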

  19. A satellite-tracking millimeter-wave reflector antenna system for mobile satellite-tracking

    NASA Technical Reports Server (NTRS)

    Densmore, Arthur C. (Inventor); Jamnejad, Vahraz (Inventor); Woo, Kenneth E. (Inventor)

    1995-01-01

    A miniature dual-band two-way mobile satellite tracking antenna system mounted on a movable ground vehicle includes a miniature parabolic reflector dish having an elliptical aperture with major and minor elliptical axes aligned horizontally and vertically, respectively, to maximize azimuthal directionality and minimize elevational directionality to an extent corresponding to expected pitch excursions of the movable ground vehicle. A feed-horn has a back end and an open front end facing the reflector dish and has vertical side walls opening out from the back end to the front end at a lesser horn angle and horizontal top and bottom walls opening out from the back end to the front end at a greater horn angle. An RF circuit couples two different signal bands between the feed-horn and the user. An antenna attitude controller maintains an antenna azimuth direction relative to the satellite by rotating it in azimuth in response to sensed yaw motions of the movable ground vehicle so as to compensate for the yaw motions to within a pointing error angle. The controller sinusoidally dithers the antenna through a small azimuth dither angle greater than the pointing error angle while sensing a signal from the satellite received at the reflector dish, and deduces the pointing angle error from dither-induced fluctuations in the received signal.

  20. MPEG-4 ASP SoC receiver with novel image enhancement techniques for DAB networks

    NASA Astrophysics Data System (ADS)

    Barreto, D.; Quintana, A.; García, L.; Callicó, G. M.; Núñez, A.

    2007-05-01

    This paper presents a system for real-time video reception in low-power mobile devices using Digital Audio Broadcast (DAB) technology for transmission. A demo receiver terminal is implemented on an FPGA platform using the Advanced Simple Profile (ASP) MPEG-4 standard for video decoding. To meet the demanding DAB bandwidth requirements, the bandwidth of the encoded sequence must be drastically reduced. To this end, prior to the MPEG-4 coding stage, a pre-processing stage is performed. It consists first of a segmentation phase based on motion and texture, derived from Principal Component Analysis (PCA) of the input video sequence, and second of a down-sampling phase that depends on the segmentation results. As a result of the segmentation task, a set of texture and motion maps is obtained. These motion and texture maps are also included in the bit-stream as user data side-information and are therefore known to the receiver. For all bit-rates, the whole encoder/decoder system proposed in this paper exhibits higher image visual quality than the alternative encoding/decoding method, assuming equal image sizes. A complete analysis of both techniques has also been performed to provide the optimum motion and texture maps for the global system, which has been finally validated for a variety of video sequences. Additionally, an optimal HW/SW partition for the MPEG-4 decoder has been studied and implemented over a Programmable Logic Device with an embedded ARM9 processor. Simulation results show that a throughput of 15 QCIF frames per second can be achieved with a low-area and low-power implementation.

  1. Motion Rehab AVE 3D: A VR-based exergame for post-stroke rehabilitation.

    PubMed

    Trombetta, Mateus; Bazzanello Henrique, Patrícia Paula; Brum, Manoela Rogofski; Colussi, Eliane Lucia; De Marchi, Ana Carolina Bertoletti; Rieder, Rafael

    2017-11-01

    Research on games for post-stroke rehabilitation has been increasing, focusing on upper-limb, lower-limb, and balance scenarios and showing good experiences and results. With this in mind, this paper presents Motion Rehab AVE 3D, a serious game for post-stroke rehabilitation of patients with mild stroke. The aim is to offer a new technology to assist traditional therapy and motivate the patient to execute his/her rehabilitation program, under health professional supervision. The game was developed with the Unity game engine, supporting the Kinect motion-sensing input device and display devices like Smart TV 3D and Oculus Rift. It comprises six activities with exercises in three-dimensional space: flexion, abduction, shoulder adduction, horizontal shoulder adduction and abduction, elbow extension, wrist extension, knee flexion, and hip flexion and abduction. Motion Rehab AVE 3D also reports hits and errors so that the physiotherapist can evaluate the patient's progress. A pilot study with 10 healthy participants (61-75 years old) tested one of the game levels. They experienced the 3D user interface in third-person. Our initial goal was to identify a basic and comfortable equipment setup for later adoption. All participants (100%) classified the interaction as interesting and engaging for their age group, indicating good acceptance. Our evaluation showed that the game could be used as a useful tool to motivate the patients during rehabilitation sessions. The next step is to evaluate its effectiveness for stroke patients, in order to verify whether the interface and game exercises contribute to progress in motor rehabilitation treatment. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Experiments in sensing transient rotational acceleration cues on a flight simulator

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.

    1979-01-01

    Results are presented for two transient motion sensing experiments that were motivated by the identification of an anomalous roll cue (a 'jerk' attributed to an acceleration spike) in a prior investigation of realistic fighter motion simulation. The experimental results suggest the consideration of several issues for motion washout and challenge current sensory system modeling efforts. Although no sensory modeling effort is made, it is argued that such models must incorporate the ability to handle transient inputs of short duration (some of which are less than the accepted latency times for sensing), and must represent separate channels for rotational acceleration and velocity sensing.

  3. Emerging Use of Dual Channel Infrared for Remote Sensing of Sea Ice

    NASA Astrophysics Data System (ADS)

    Lewis, N. S.; Serreze, M. C.; Gallaher, D. W.; Koenig, L.; Schaefer, K. M.; Campbell, G. G.; Thompson, J. A.; Grant, G.; Fetterer, F. M.

    2017-12-01

    Using GOES-16 data as a proxy for overhead persistent infrared, we examine the feasibility of using a dual channel shortwave / midwave infrared (SWIR/MWIR) approach to detect and chart sea ice in Hudson Bay through a series of images with a temporal scale of less than fifteen minutes. While not traditionally exploited for sea ice remote sensing, the availability of near continuous shortwave and midwave infrared data streams over the Arctic from overhead persistent infrared (OPIR) satellites could provide an invaluable source of information regarding the changing Arctic climate. Traditionally used for the purpose of missile warning and strategic defense, characteristics of OPIR make it an attractive source for Arctic remote sensing as the temporal resolution can provide insight into ice edge melt and motion processes. Fundamentally, the time series based algorithm will discern water/ice/clouds using a SWIR/MWIR normalized difference index. Cloud filtering is accomplished through removing pixels categorized as clouds while retaining a cache of previous ice/water pixels to replace any cloud obscured (and therefore omitted) pixels. Demonstration of the sensitivity of GOES-16 SWIR/MWIR to detect and discern water/ice/clouds provides a justification for exploring the utility of military OPIR sensors for civil and commercial applications. Potential users include the scientific community as well as emergency responders, the fishing industry, oil and gas industries, and transportation industries that are seeking to exploit changing conditions in the Arctic but require more accurate and timely ice charting products.
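
    The abstract describes discriminating water, ice, and cloud with a SWIR/MWIR normalized difference index and replacing cloud-obscured pixels from a cache of previous clear-sky classifications. A toy sketch of that logic follows; the thresholds, the sign conventions, and the single-index cloud test are placeholder assumptions, not the algorithm's actual parameters.

        import numpy as np

        def classify_ice_water(swir, mwir, cached, ndi_ice_thresh=0.1, cloud_thresh=0.4):
            """Dual-channel time-series classifier sketch: compute a SWIR/MWIR
            normalized-difference index, flag assumed cloud pixels, and fall back
            to the cached ice/water state where clouds obscure the scene."""
            ndi = (swir - mwir) / (swir + mwir + 1e-9)
            cloud = ndi > cloud_thresh                 # assumed cloud signature
            ice = ndi > ndi_ice_thresh                 # assumed ice signature
            state = np.where(ice, 1, 0)                # 1 = ice, 0 = open water
            state = np.where(cloud, cached, state)     # keep last clear-sky state
            return state, ~cloud                       # state map and clear-sky mask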

  4. Consensus-Based Cooperative Spectrum Sensing with Improved Robustness Against SSDF Attacks

    NASA Astrophysics Data System (ADS)

    Liu, Quan; Gao, Jun; Guo, Yunwei; Liu, Siyang

    2011-05-01

    Based on the consensus algorithm, an attack-proof cooperative spectrum sensing (CSS) scheme is presented for decentralized cognitive radio networks (CRNs), where a common fusion center is not available and some malicious users may launch attacks with spectrum sensing data falsification (SSDF). Local energy detection is firstly performed by each secondary user (SU), and then, utilizing the consensus notions, each SU can make its own decision individually only by local information exchange with its neighbors rather than any centralized fusion used in most existing schemes. With the help of some anti-attack tricks, each authentic SU can generally identify and exclude those malicious reports during the interactions within the neighborhood. Compared with the existing solutions, the proposed scheme is proved to have much better robustness against three categories of SSDF attack, without requiring any a priori knowledge of the whole network.
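
    As a rough illustration of the consensus idea described above, the sketch below has each secondary user iteratively average its local energy estimate with neighbor reports while discarding reports that deviate beyond a rejection threshold, a crude stand-in for the paper's anti-attack mechanisms. All parameters and the rejection rule are placeholder assumptions.

        import numpy as np

        def consensus_sensing(energies, adjacency, n_iter=50, step=0.2, reject=2.0):
            """Minimal consensus iteration for decentralized cooperative sensing:
            each user moves toward trusted neighbor reports until the authentic
            users' states converge toward a common average."""
            x = np.asarray(energies, dtype=float).copy()
            A = np.asarray(adjacency, dtype=bool)
            for _ in range(n_iter):
                x_new = x.copy()
                for i in range(len(x)):
                    trusted = [j for j in np.where(A[i])[0] if abs(x[j] - x[i]) < reject]
                    if trusted:
                        x_new[i] = x[i] + step * np.mean([x[j] - x[i] for j in trusted])
                x = x_new
            return x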

  5. Pilot study on real-time motion detection in UAS video data by human observer and image exploitation algorithm

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2017-05-01

    Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety and security critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation on interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface - again, we compare gaze-based and traditional mouse-based interaction - we focus on the benefit an IDM algorithm might provide for a UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once performing with automatic support, once performing without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections up to 62% and reducing workload at the same time.

  6. Flexible Piezoelectric Sensor-Based Gait Recognition.

    PubMed

    Cha, Youngsu; Kim, Hojoon; Kim, Doik

    2018-02-05

    Most motion recognition research has required tight-fitting suits for precise sensing. However, tight-suit systems have difficulty adapting to real applications, because people normally wear loose clothes. In this paper, we propose a gait recognition system with flexible piezoelectric sensors in loose clothing. The gait recognition system does not directly sense lower-body angles. It does, however, detect the transition between standing and walking. Specifically, we use the signals from the flexible sensors attached to the knee and hip parts on loose pants. We detect the periodic motion component using the discrete time Fourier series from the signal during walking. We adapt the gait detection method to a real-time patient motion and posture monitoring system. In the monitoring system, the gait recognition operates well. Finally, we test the gait recognition system with 10 subjects, for which the proposed system successfully detects walking with a success rate over 93 %.
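
    The paper's walking detector is based on the periodic motion component of the bend-sensor signal. A minimal sketch of that idea is to check whether spectral energy in a typical stride-frequency band dominates the window; the band limits and threshold below are illustrative assumptions, not the paper's parameters.

        import numpy as np

        def is_walking(signal, fs, band=(0.6, 2.5), ratio_thresh=0.3):
            """Classify a window of a piezoelectric bend-sensor signal as walking
            if the periodic (gait-band) spectral energy dominates."""
            x = np.asarray(signal, dtype=float)
            x = x - x.mean()
            spec = np.abs(np.fft.rfft(x)) ** 2
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
            in_band = (freqs >= band[0]) & (freqs <= band[1])
            ratio = spec[in_band].sum() / (spec[1:].sum() + 1e-12)  # skip DC bin
            return ratio > ratio_thresh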

  7. sEMG-based joint force control for an upper-limb power-assist exoskeleton robot.

    PubMed

    Li, Zhijun; Wang, Baocheng; Sun, Fuchun; Yang, Chenguang; Xie, Qing; Zhang, Weidong

    2014-05-01

    This paper investigates two surface electromyogram (sEMG)-based control strategies developed for a power-assist exoskeleton arm. Different from most of the existing position control approaches, this paper develops force control methods to make the exoskeleton robot behave like humans in order to provide better assistance. The exoskeleton robot is directly attached to a user's body and activated by the sEMG signals of the user's muscles, which reflect the user's motion intention. In the first proposed control method, the forces of the agonist and antagonist muscle pair are estimated, and their difference is used to produce the torque of the corresponding joints. In the second method, linear discriminant analysis-based classifiers are introduced as the indicator of the motion type of the joints. Then, the classifier's outputs, together with the estimated force of the corresponding active muscle, determine the torque control signals. Different from the conventional approaches, one classifier is assigned to each joint, which decreases the training time and largely simplifies the recognition process. Finally, extensive experiments are conducted to illustrate the effectiveness of the proposed approaches.
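
    In rough outline, the first control strategy maps the estimated agonist and antagonist muscle forces to a joint torque through their difference. The sketch below assumes a linear EMG-to-force model and a fixed moment arm; both are illustrative simplifications, not the paper's model.

        def joint_torque_from_semg(emg_agonist, emg_antagonist, k_force=1.0, moment_arm=0.03):
            """Map rectified/smoothed sEMG envelopes of an agonist-antagonist pair
            to force estimates and use their difference as a joint torque command."""
            f_agonist = k_force * emg_agonist        # assumed linear EMG-to-force model
            f_antagonist = k_force * emg_antagonist
            return moment_arm * (f_agonist - f_antagonist)   # net joint torque [N*m]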

  8. Real-time image mosaicing for medical applications.

    PubMed

    Loewke, Kevin E; Camarillo, David B; Jobst, Christopher A; Salisbury, J Kenneth

    2007-01-01

    In this paper we describe the development of a robotically-assisted image mosaicing system for medical applications. The processing occurs in real-time due to a fast initial image alignment provided by robotic position sensing. Near-field imaging, defined by relatively large camera motion, requires translations as well as pan and tilt orientations to be measured. To capture these measurements we use 5-d.o.f. sensing along with a hand-eye calibration to account for sensor offset. This sensor-based approach speeds up the mosaicing, eliminates cumulative errors, and readily handles arbitrary camera motions. Our results have produced visually satisfactory mosaics on a dental model but can be extended to other medical images.
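
    The key idea above is that robotic position sensing provides a fast initial alignment that image-based registration then refines. The sketch below illustrates this with a hypothetical pose-derived coarse offset refined by OpenCV phase correlation; the function names, the scale factor, and the sign convention are assumptions, and both frames are assumed to be same-size single-channel images.

        import cv2
        import numpy as np

        def frame_offset(prev_gray, curr_gray, pose_delta_mm, px_per_mm):
            """Sensor-assisted alignment sketch: coarse pixel offset from the
            sensed camera translation, refined by phase correlation on the
            overlapping imagery."""
            coarse = np.asarray(pose_delta_mm, dtype=float) * px_per_mm   # coarse (dx, dy) [px]
            (dx, dy), _response = cv2.phaseCorrelate(np.float32(prev_gray),
                                                     np.float32(curr_gray))
            return coarse + np.array([dx, dy])     # refined offset used to place the frame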

  9. Real-time animation software for customized training to use motor prosthetic systems.

    PubMed

    Davoodi, Rahman; Loeb, Gerald E

    2012-03-01

    Research on control of human movement and development of tools for restoration and rehabilitation of movement after spinal cord injury and amputation can benefit greatly from software tools for creating precisely timed animation sequences of human movement. Despite its ability to create sophisticated animation and high-quality rendering, existing animation software is not adapted for application to neural prostheses and rehabilitation of human movement. We have developed a software tool known as MSMS (MusculoSkeletal Modeling Software) that can be used to develop models of human or prosthetic limbs and the objects with which they interact and to animate their movement using motion data from a variety of offline and online sources. The motion data can be read from a motion file containing synthesized motion data or recordings from a motion capture system. Alternatively, motion data can be streamed online from a real-time motion capture system, a physics-based simulation program, or any program that can produce real-time motion data. Further, animation sequences of daily life activities can be constructed using the intuitive user interface of Microsoft's PowerPoint software. The latter allows expert and nonexpert users alike to assemble primitive movements into a complex motion sequence with precise timing by simply arranging the order of the slides and editing their properties in PowerPoint. The resulting motion sequence can be played back in an open-loop manner for demonstration and training or in closed-loop virtual reality environments where the timing and speed of animation depend on user inputs. These versatile animation utilities can be used in any application that requires precisely timed animations but they are particularly suited for research and rehabilitation of movement disorders. MSMS's modeling and animation tools are routinely used in a number of research laboratories around the country to study the control of movement and to develop and test neural prostheses for patients with paralysis or amputations.

  10. Modeling moving systems with RELAP5-3D

    DOE PAGES

    Mesina, G. L.; Aumiller, David L.; Buschman, Francis X.; ...

    2015-12-04

    RELAP5-3D is typically used to model stationary, land-based reactors. However, it can also model reactors in other inertial and accelerating frames of reference. By changing the magnitude of the gravitational vector through user input, RELAP5-3D can model reactors on a space station or the moon. The field equations have also been modified to model reactors in a non-inertial frame, such as occur in land-based reactors during earthquakes or onboard spacecraft. Transient body forces affect fluid flow in thermal-fluid machinery aboard accelerating crafts during rotational and translational accelerations. It is useful to express the equations of fluid motion in the accelerating frame of reference attached to the moving craft. However, careful treatment of the rotational and translational kinematics is required to accurately capture the physics of the fluid motion. Correlations for flow at angles between horizontal and vertical are generated via interpolation where no experimental studies or data exist. The equations for three-dimensional fluid motion in a non-inertial frame of reference are developed. As a result, two different systems for describing rotational motion are presented, user input is discussed, and an example is given.
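
    The abstract does not reproduce the modified field equations. For orientation, the standard textbook form of the fictitious body acceleration that must be added to the fluid momentum equation in a frame translating with acceleration A and rotating with angular velocity Omega (not a quotation from the RELAP5-3D documentation) is:

        \mathbf{a}_{\text{noninertial}} =
        -\Big[\,\mathbf{A}
        + \dot{\boldsymbol{\Omega}} \times \mathbf{r}
        + 2\,\boldsymbol{\Omega} \times \mathbf{v}_{\text{rel}}
        + \boldsymbol{\Omega} \times (\boldsymbol{\Omega} \times \mathbf{r})\,\Big]

    where r and v_rel are the position and velocity measured in the accelerating frame; the four terms are the translational, Euler, Coriolis, and centrifugal contributions.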

  11. Technical skills measurement based on a cyber-physical system for endovascular surgery simulation.

    PubMed

    Tercero, Carlos; Kodama, Hirokatsu; Shi, Chaoyang; Ooe, Katsutoshi; Ikeda, Seiichi; Fukuda, Toshio; Arai, Fumihito; Negoro, Makoto; Kwon, Guiryong; Najdovski, Zoran

    2013-09-01

    Quantification of medical skills is a challenge, particularly in simulator-based training. In the case of endovascular intervention, it is desirable that a simulator accurately recreates the morphology and mechanical characteristics of the vasculature while enabling scoring. For this purpose, we propose a cyber-physical system composed of optical sensors for a catheter's body motion encoding, a magnetic tracker for motion capture of an operator's hands, and opto-mechatronic sensors for measuring the interaction of the catheter tip with the vasculature model wall. Two pilot studies were conducted for measuring technical skills, one for distinguishing novices from experts and the other for measuring unnecessary motion. The proficiency levels were measurable between expert and novice and also between individual novice users. The results enabled scoring of the user's proficiency level, using sensitivity, reaction time, time to complete a task and respect for tissue integrity as evaluation criteria. Additionally, unnecessary motion was also measurable. The development of cyber-physical simulators for other domains of medicine depends on the study of photoelastic materials for human tissue modelling and enables quantitative evaluation of skills using surgical instruments and a realistic representation of human tissue. Copyright © 2012 John Wiley & Sons, Ltd.

  12. Motion Analysis System for Instruction of Nihon Buyo using Motion Capture

    NASA Astrophysics Data System (ADS)

    Shinoda, Yukitaka; Murakami, Shingo; Watanabe, Yuta; Mito, Yuki; Watanuma, Reishi; Marumo, Mieko

    The passing on and preserving of advanced technical skills has become an important issue in a variety of fields, and motion analysis using motion capture has recently become popular in the research of advanced physical skills. This research aims to construct a system having a high on-site instructional effect on dancers learning Nihon Buyo, a traditional dance in Japan, and to classify Nihon Buyo dancing according to style, school, and dancer's proficiency by motion analysis. We have been able to study motion analysis systems for teaching Nihon Buyo now that body-motion data can be digitized and stored by motion capture systems using high-performance computers. Thus, with the aim of developing a user-friendly instruction-support system, we have constructed a motion analysis system that displays a dancer's time series of body motions and center of gravity for instructional purposes. In this paper, we outline this instructional motion analysis system based on three-dimensional position data obtained by motion capture. We also describe motion analysis that we performed based on center-of-gravity data obtained by this system and motion analysis focusing on school and age group using this system.

  13. Authentication of Smartphone Users Based on Activity Recognition and Mobile Sensing.

    PubMed

    Ehatisham-Ul-Haq, Muhammad; Azam, Muhammad Awais; Loo, Jonathan; Shuang, Kai; Islam, Syed; Naeem, Usman; Amin, Yasar

    2017-09-06

    Smartphones are context-aware devices that provide a compelling platform for ubiquitous computing and assist users in accomplishing many of their routine tasks anytime and anywhere, such as sending and receiving emails. The nature of tasks conducted with these devices has evolved with the exponential increase in the sensing and computing capabilities of a smartphone. Due to the ease of use and convenience, many users tend to store their private data, such as personal identifiers and bank account details, on their smartphone. However, this sensitive data can be vulnerable if the device gets stolen or lost. A traditional approach for protecting this type of data on mobile devices is to authenticate users with mechanisms such as PINs, passwords, and fingerprint recognition. However, these techniques are vulnerable to user compliance and a plethora of attacks, such as smudge attacks. The work in this paper addresses these challenges by proposing a novel authentication framework, which is based on recognizing the behavioral traits of smartphone users using the embedded sensors of smartphone, such as Accelerometer, Gyroscope and Magnetometer. The proposed framework also provides a platform for carrying out multi-class smart user authentication, which provides different levels of access to a wide range of smartphone users. This work has been validated with a series of experiments, which demonstrate the effectiveness of the proposed framework.
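
    The framework above recognizes behavioral traits from accelerometer, gyroscope, and magnetometer windows and then authenticates by classifying the user. The sketch below illustrates one plausible pipeline with simple per-window statistics and a generic classifier; the feature set, window handling, and classifier choice are illustrative assumptions, not the paper's configuration.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def window_features(accel, gyro, mag):
            """Illustrative feature vector for one sensing window: per-axis mean,
            standard deviation, and a signal-magnitude-area term for each of the
            three motion sensors (each input is an (n_samples, 3) array)."""
            feats = []
            for sig in (accel, gyro, mag):
                sig = np.asarray(sig, dtype=float)
                feats.extend(sig.mean(axis=0))
                feats.extend(sig.std(axis=0))
                feats.append(np.abs(sig).sum(axis=1).mean())
            return np.array(feats)

        # Usage sketch: X holds window features, y the user labels; authentication
        # then amounts to predicting the identity of a new window.
        # clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
        # predicted_user = clf.predict([window_features(a, g, m)])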

  14. Authentication of Smartphone Users Based on Activity Recognition and Mobile Sensing

    PubMed Central

    Ehatisham-ul-Haq, Muhammad; Azam, Muhammad Awais; Loo, Jonathan; Shuang, Kai; Islam, Syed; Naeem, Usman; Amin, Yasar

    2017-01-01

    Smartphones are context-aware devices that provide a compelling platform for ubiquitous computing and assist users in accomplishing many of their routine tasks anytime and anywhere, such as sending and receiving emails. The nature of tasks conducted with these devices has evolved with the exponential increase in the sensing and computing capabilities of a smartphone. Due to the ease of use and convenience, many users tend to store their private data, such as personal identifiers and bank account details, on their smartphone. However, this sensitive data can be vulnerable if the device gets stolen or lost. A traditional approach for protecting this type of data on mobile devices is to authenticate users with mechanisms such as PINs, passwords, and fingerprint recognition. However, these techniques are vulnerable to user compliance and a plethora of attacks, such as smudge attacks. The work in this paper addresses these challenges by proposing a novel authentication framework, which is based on recognizing the behavioral traits of smartphone users using the embedded sensors of smartphone, such as Accelerometer, Gyroscope and Magnetometer. The proposed framework also provides a platform for carrying out multi-class smart user authentication, which provides different levels of access to a wide range of smartphone users. This work has been validated with a series of experiments, which demonstrate the effectiveness of the proposed framework. PMID:28878177

  15. Seismic switch for strong motion measurement

    DOEpatents

    Harben, Philip E.; Rodgers, Peter W.; Ewert, Daniel W.

    1995-01-01

    A seismic switching device that has an input signal from an existing microseismic station seismometer and a signal from a strong motion measuring instrument. The seismic switch monitors the signal level of the strong motion instrument and passes the seismometer signal to the station data telemetry and recording systems. When the strong motion instrument signal level exceeds a user set threshold level, the seismometer signal is switched out and the strong motion signal is passed to the telemetry system. The amount of time the strong motion signal is passed before switching back to the seismometer signal is user controlled between 1 and 15 seconds. If the threshold level is exceeded during a switch time period, the length of time is extended from that instant by one user set time period.
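
    The switching logic described in the patent abstract can be summarized as a small state machine: pass the seismometer sample unless the strong-motion sample exceeds the user-set threshold, in which case pass the strong-motion signal for a user-set hold time (1 to 15 seconds), re-extending the hold whenever the threshold is exceeded again. The sketch below is a minimal software analogue; the streaming interface is an assumption.

        def seismic_switch(seis_stream, strong_stream, threshold, hold_s=5.0, dt=0.01):
            """Yield the seismometer sample by default, or the strong-motion
            sample during a hold window triggered by a threshold exceedance."""
            remaining = 0.0
            for s, m in zip(seis_stream, strong_stream):
                if abs(m) > threshold:
                    remaining = hold_s           # (re)start the hold window
                if remaining > 0.0:
                    remaining -= dt
                    yield m                      # strong-motion channel to telemetry
                else:
                    yield s                      # normal microseismic channel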

  16. Seismic switch for strong motion measurement

    DOEpatents

    Harben, P.E.; Rodgers, P.W.; Ewert, D.W.

    1995-05-30

    A seismic switching device is described that has an input signal from an existing microseismic station seismometer and a signal from a strong motion measuring instrument. The seismic switch monitors the signal level of the strong motion instrument and passes the seismometer signal to the station data telemetry and recording systems. When the strong motion instrument signal level exceeds a user set threshold level, the seismometer signal is switched out and the strong motion signal is passed to the telemetry system. The amount of time the strong motion signal is passed before switching back to the seismometer signal is user controlled between 1 and 15 seconds. If the threshold level is exceeded during a switch time period, the length of time is extended from that instant by one user set time period. 11 figs.

  17. Human-directed local autonomy for motion guidance and coordination in an intelligent manufacturing system

    NASA Astrophysics Data System (ADS)

    Alford, W. A.; Kawamura, Kazuhiko; Wilkes, Don M.

    1997-12-01

    This paper discusses the problem of integrating human intelligence and skills into an intelligent manufacturing system. Our center has joined the Holonic Manufacturing Systems (HMS) Project, an international consortium dedicated to developing holonic systems technologies. One of our contributions to this effort is in Work Package 6: flexible human integration. This paper focuses on one activity, namely, human integration into motion guidance and coordination. Much research on intelligent systems focuses on creating totally autonomous agents. At the Center for Intelligent Systems (CIS), we design robots that interact directly with a human user. We focus on using the natural intelligence of the user to simplify the design of a robotic system. The problem is finding ways for the user to interact with the robot that are efficient and comfortable for the user. Manufacturing applications impose the additional constraint that the manufacturing process should not be disturbed; that is, frequent interaction with the user could degrade real-time performance. Our research in human-robot interaction is based on a concept called human directed local autonomy (HuDL). Under this paradigm, the intelligent agent selects and executes a behavior or skill, based upon directions from a human user. The user interacts with the robot via speech, gestures, or other media. Our control software is based on the intelligent machine architecture (IMA), an object-oriented architecture which facilitates cooperation and communication among intelligent agents. In this paper we describe our research testbed, a dual-arm humanoid robot and human user, and the use of this testbed for a human directed sorting task. We also discuss some proposed experiments for evaluating the integration of the human into the robot system. At the time of this writing, the experiments have not been completed.

  18. Video stereolization: combining motion analysis with user interaction.

    PubMed

    Liao, Miao; Gao, Jizhou; Yang, Ruigang; Gong, Minglun

    2012-07-01

    We present a semiautomatic system that converts conventional videos into stereoscopic videos by combining motion analysis with user interaction, aiming to transfer as much labeling work as possible from the user to the computer. In addition to the widely used structure from motion (SFM) techniques, we develop two new methods that analyze the optical flow to provide additional qualitative depth constraints. They remove the camera movement restriction imposed by SFM so that general motions can be used in scene depth estimation, the central problem in mono-to-stereo conversion. With these algorithms, the user's labeling task is significantly simplified. We further developed a quadratic programming approach to incorporate both quantitative depth and qualitative depth (such as those from user scribbling) to recover dense depth maps for all frames, from which stereoscopic views can be synthesized. In addition to visual results, we present user study results showing that our approach is more intuitive and less labor intensive, while producing 3D effect comparable to that from current state-of-the-art interactive algorithms.

  19. Toward seamless wearable sensing: Automatic on-body sensor localization for physical activity monitoring.

    PubMed

    Saeedi, Ramyar; Purath, Janet; Venkatasubramanian, Krishna; Ghasemzadeh, Hassan

    2014-01-01

    Mobile wearable sensors have demonstrated great potential in a broad range of applications in healthcare and wellness. These technologies are known for their potential to revolutionize the way next generation medical services are supplied and consumed by providing more effective interventions, improving health outcomes, and substantially reducing healthcare costs. Despite these potentials, utilization of these sensor devices is currently limited to lab settings and in highly controlled clinical trials. A major obstacle in widespread utilization of these systems is that the sensors need to be used in predefined locations on the body in order to provide accurate outcomes such as type of physical activity performed by the user. This has reduced users' willingness to utilize such technologies. In this paper, we propose a novel signal processing approach that leverages feature selection algorithms for accurate and automatic localization of wearable sensors. Our results based on real data collected using wearable motion sensors demonstrate that the proposed approach can perform sensor localization with 98.4% accuracy which is 30.7% more accurate than an approach without a feature selection mechanism. Furthermore, utilizing our node localization algorithm aids the activity recognition algorithm to achieve 98.8% accuracy (an increase from 33.6% for the system without node localization).

  20. Motion Trajectories for Wide-area Surveying with a Rover-based Distributed Spectrometer

    NASA Technical Reports Server (NTRS)

    Tunstel, Edward; Anderson, Gary; Wilson, Edmond

    2006-01-01

    A mobile ground survey application that employs remote sensing as a primary means of area coverage is highlighted. It is distinguished from mobile robotic area coverage problems that employ contact or proximity-based sensing. The focus is on a specific concept for performing mobile surveys in search of biogenic gases on planetary surfaces using a distributed spectrometer -- a rover-based instrument designed for wide measurement coverage of promising search areas. Navigation algorithms for executing circular and spiral survey trajectories are presented for wide-area distributed spectroscopy and evaluated based on area covered and distance traveled.
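
    The abstract does not give the trajectory parameterization; as a simple illustration of a spiral survey pattern of the kind discussed, the sketch below generates waypoints along an Archimedean spiral with a chosen ring-to-ring spacing. All parameters are placeholders.

        import math

        def spiral_waypoints(x0, y0, spacing, n_turns, pts_per_turn=36):
            """Generate (x, y) waypoints for an Archimedean-spiral survey centered
            at (x0, y0); `spacing` is the radial distance between successive rings."""
            pts = []
            for k in range(n_turns * pts_per_turn + 1):
                theta = 2.0 * math.pi * k / pts_per_turn
                r = spacing * theta / (2.0 * math.pi)
                pts.append((x0 + r * math.cos(theta), y0 + r * math.sin(theta)))
            return pts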

  1. The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing.

    PubMed

    Ma, Teng; Li, Hui; Yang, Hao; Lv, Xulin; Li, Peiyang; Liu, Tiejun; Yao, Dezhong; Xu, Peng

    2017-01-01

    Motion-onset visual evoked potentials (mVEP) can provide a softer stimulus with reduced fatigue, and have potential applications for brain computer interface (BCI) systems. However, the mVEP waveform is seriously masked by strong background EEG activity, and an effective approach is needed to extract the corresponding mVEP features to perform task recognition for BCI control. In the current study, we combine deep learning with compressed sensing to mine discriminative mVEP information to improve the mVEP BCI performance. The deep learning and compressed sensing approach can generate multi-modality features that can effectively improve the BCI performance, with an approximately 3.5% accuracy improvement over all 11 subjects, and is more effective for those subjects with relatively poor performance when using the conventional features. Compared with the conventional amplitude-based mVEP feature extraction approach, the deep learning and compressed sensing approach has a higher classification accuracy and is more effective for subjects with relatively poor performance. According to the results, the deep learning and compressed sensing approach is more effective for extracting the mVEP feature to construct the corresponding BCI system, and the proposed feature extraction framework is easy to extend to other types of BCIs, such as motor imagery (MI), steady-state visual evoked potential (SSVEP) and P300. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Generating Concise Rules for Human Motion Retrieval

    NASA Astrophysics Data System (ADS)

    Mukai, Tomohiko; Wakisaka, Ken-Ichi; Kuriyama, Shigeru

    This paper proposes a method for retrieving human motion data with concise retrieval rules based on the spatio-temporal features of motion appearance. Our method first converts a motion clip into a clausal language that represents geometrical relations between body parts and their temporal relationship. A retrieval rule is then learned from a set of manually classified examples using inductive logic programming (ILP). ILP automatically discovers the essential rule in the same clausal form with a user-defined hypothesis-testing procedure. All motions are indexed using this clausal language, and the desired clips are retrieved by subsequence matching using the rule. Such rule-based retrieval offers reasonable performance and the rule can be intuitively edited in the same language form. Consequently, our method enables efficient and flexible search from a large dataset with a simple query language.

  3. A Python-based interface to examine motions in time series of solar images

    NASA Astrophysics Data System (ADS)

    Campos-Rozo, J. I.; Vargas Domínguez, S.

    2017-10-01

    Python is considered a mature programming language, and is widely accepted as an engaging option for scientific analysis in multiple areas, as presented in this work for the particular case of solar physics research. SunPy is an open-source library based on Python that has been recently developed to furnish software tools for solar data analysis and visualization. In this work we present a graphical user interface (GUI) based on Python and Qt to effectively compute proper motions for the analysis of time series of solar data. This user-friendly computing interface, which is intended to be incorporated into the SunPy library, uses a local correlation tracking technique and some extra tools that allow the selection of different parameters to calculate, visualize, and analyze vector velocity fields of solar data, i.e., time series of solar filtergrams and magnetograms.
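
    The core of local correlation tracking is to cross-correlate the same subimage from two consecutive frames and take the correlation peak as the local displacement. The sketch below shows one such tile; the real interface also applies apodization windows and subpixel peak fitting, which are omitted here, and the sign convention is an assumption.

        import numpy as np
        from scipy.signal import correlate2d

        def lct_shift(patch_t0, patch_t1):
            """Return the integer-pixel shift between two co-located patches of
            consecutive filtergrams, estimated from the cross-correlation peak."""
            a = patch_t0 - patch_t0.mean()
            b = patch_t1 - patch_t1.mean()
            cc = correlate2d(b, a, mode="same")
            iy, ix = np.unravel_index(np.argmax(cc), cc.shape)
            cy, cx = np.array(cc.shape) // 2
            return ix - cx, iy - cy      # (dx, dy) proper-motion estimate in pixels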

  4. Off-line programming motion and process commands for robotic welding of Space Shuttle main engines

    NASA Technical Reports Server (NTRS)

    Ruokangas, C. C.; Guthmiller, W. A.; Pierson, B. L.; Sliwinski, K. E.; Lee, J. M. F.

    1987-01-01

    The off-line-programming software and hardware being developed for robotic welding of the Space Shuttle main engine are described and illustrated with diagrams, drawings, graphs, and photographs. The menu-driven workstation-based interactive programming system is designed to permit generation of both motion and process commands for the robotic workcell by weld engineers (with only limited knowledge of programming or CAD systems) on the production floor. Consideration is given to the user interface, geometric-sources interfaces, overall menu structure, weld-parameter data base, and displays of run time and archived data. Ongoing efforts to address limitations related to automatic-downhand-configuration coordinated motion, a lack of source codes for the motion-control software, CAD data incompatibility, interfacing with the robotic workcell, and definition of the welding data base are discussed.

  5. Highly Sensitive Flexible Human Motion Sensor Based on ZnSnO3/PVDF Composite

    NASA Astrophysics Data System (ADS)

    Yang, Young Jin; Aziz, Shahid; Mehdi, Syed Murtuza; Sajid, Memoon; Jagadeesan, Srikanth; Choi, Kyung Hyun

    2017-07-01

    A highly sensitive body motion sensor has been fabricated based on a composite active layer of zinc stannate (ZnSnO3) nano-cubes and poly(vinylidene fluoride) (PVDF) polymer. The thin film-based active layer was deposited on a flexible polyethylene terephthalate substrate through a D-bar coating technique. Electrical and morphological characterizations of the films and sensors were carried out to discover the physical characteristics and the output response of the devices. The synergistic effect between piezoelectric ZnSnO3 nanocubes and β phase PVDF provides the composite with a desirable electrical conductivity, remarkable bend sensitivity, and excellent stability, ideal for the fabrication of a motion sensor. The recorded resistance of the sensor at bending angles of -150°, 0°, and 150° changed from 20 MΩ to 55 MΩ to 100 MΩ, respectively, showing the composite to be a very good candidate for motion-sensing applications.
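
    Using the three calibration points reported in the abstract, a bending angle can be estimated from a resistance reading by simple interpolation, as in the sketch below. A real calibration would use a denser sweep and account for hysteresis; the linear interpolation is only illustrative.

        import numpy as np

        # Calibration points from the abstract: resistance at -150°, 0°, and +150°.
        ANGLES_DEG = np.array([-150.0, 0.0, 150.0])
        RESISTANCE_MOHM = np.array([20.0, 55.0, 100.0])

        def angle_from_resistance(r_mohm):
            """Invert the bend response by piecewise-linear interpolation between
            the reported calibration points."""
            return np.interp(r_mohm, RESISTANCE_MOHM, ANGLES_DEG)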

  6. The limits of earthquake early warning: Timeliness of ground motion estimates

    USGS Publications Warehouse

    Minson, Sarah E.; Meier, Men-Andrin; Baltay, Annemarie S.; Hanks, Thomas C.; Cochran, Elizabeth S.

    2018-01-01

    The basic physics of earthquakes is such that strong ground motion cannot be expected from an earthquake unless the earthquake itself is very close or has grown to be very large. We use simple seismological relationships to calculate the minimum time that must elapse before such ground motion can be expected at a distance from the earthquake, assuming that the earthquake magnitude is not predictable. Earthquake early warning (EEW) systems are in operation or development for many regions around the world, with the goal of providing enough warning of incoming ground shaking to allow people and automated systems to take protective actions to mitigate losses. However, the question of how much warning time is physically possible for specified levels of ground motion has not been addressed. We consider a zero-latency EEW system to determine possible warning times a user could receive in an ideal case. In this case, the only limitation on warning time is the time required for the earthquake to evolve and the time for strong ground motion to arrive at a user’s location. We find that users who wish to be alerted at lower ground motion thresholds will receive more robust warnings with longer average warning times than users who receive warnings for higher ground motion thresholds. EEW systems have the greatest potential benefit for users willing to take action at relatively low ground motion thresholds, whereas users who set relatively high thresholds for taking action are less likely to receive timely and actionable information.
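
    A back-of-the-envelope version of the timing argument above: strong shaking (carried mainly by S waves) reaches a user only after the travel time from the source, while no alert can be issued before the rupture has grown large enough to produce that level of shaking. The sketch below encodes that bound for an idealized zero-latency system; the velocity, single-station geometry, and input quantities are illustrative assumptions, not the paper's relations.

        def warning_time_s(epicentral_dist_km, rupture_growth_time_s,
                           detection_latency_s=0.0, vs_km_s=3.5):
            """Idealized warning time: S-wave travel time to the user minus the
            time needed for the rupture to grow (plus any alerting latency)."""
            s_arrival = epicentral_dist_km / vs_km_s
            alert_time = rupture_growth_time_s + detection_latency_s
            return max(0.0, s_arrival - alert_time)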

  7. The limits of earthquake early warning: Timeliness of ground motion estimates

    PubMed Central

    Hanks, Thomas C.

    2018-01-01

    The basic physics of earthquakes is such that strong ground motion cannot be expected from an earthquake unless the earthquake itself is very close or has grown to be very large. We use simple seismological relationships to calculate the minimum time that must elapse before such ground motion can be expected at a distance from the earthquake, assuming that the earthquake magnitude is not predictable. Earthquake early warning (EEW) systems are in operation or development for many regions around the world, with the goal of providing enough warning of incoming ground shaking to allow people and automated systems to take protective actions to mitigate losses. However, the question of how much warning time is physically possible for specified levels of ground motion has not been addressed. We consider a zero-latency EEW system to determine possible warning times a user could receive in an ideal case. In this case, the only limitation on warning time is the time required for the earthquake to evolve and the time for strong ground motion to arrive at a user’s location. We find that users who wish to be alerted at lower ground motion thresholds will receive more robust warnings with longer average warning times than users who receive warnings for higher ground motion thresholds. EEW systems have the greatest potential benefit for users willing to take action at relatively low ground motion thresholds, whereas users who set relatively high thresholds for taking action are less likely to receive timely and actionable information. PMID:29750190

  8. Simulation and animation of sensor-driven robots.

    PubMed

    Chen, C; Trivedi, M M; Bidlack, C R

    1994-10-01

    Most simulation and animation systems utilized in robotics are concerned with simulation of the robot and its environment without simulation of sensors. These systems have difficulty in handling robots that utilize sensory feedback in their operation. In this paper, a new design of an environment for simulation, animation, and visualization of sensor-driven robots is presented. As sensor technology advances, increasing numbers of robots are equipped with various types of sophisticated sensors. The main goal of creating the visualization environment is to aid the automatic robot programming and off-line programming capabilities of sensor-driven robots. The software system will help users visualize the motion and reaction of the sensor-driven robot under their control program. Therefore, the efficiency of the software development is increased, the reliability of the software and the operation safety of the robot are ensured, and the cost of new software development is reduced. Conventional computer-graphics-based robot simulation and animation software packages lack capabilities for robot sensing simulation. This paper describes a system designed to overcome this deficiency.

  9. eFisioTrack: a telerehabilitation environment based on motion recognition using accelerometry.

    PubMed

    Ruiz-Fernandez, Daniel; Marín-Alonso, Oscar; Soriano-Paya, Antonio; García-Pérez, Joaquin D

    2014-01-01

    The growing demand for physical rehabilitation processes can result in rising costs and growing waiting lists, threatening the sustainability of healthcare services. Telerehabilitation solutions can help with this issue by discharging patients from points of care while improving their adherence to treatment. Sensing devices are used to collect data so that the physiotherapists can monitor and evaluate the patients' activity in the scheduled sessions. This paper presents a software platform that aims to meet the needs of rehabilitation experts and patients throughout a physical rehabilitation plan, allowing its use in outpatient scenarios. It is meant to be low-cost and easy-to-use, improving the experience of both patients and experts. We show the satisfactory results already obtained from its use, in terms of the accuracy of exercise evaluation and the degree of user acceptance. We conclude that this platform is suitable and technically feasible to carry out rehabilitation plans outside the point of care.

  10. Identification of dust source regions and dust emission trends across North Africa and the Middle East using MISR satellite observations

    NASA Astrophysics Data System (ADS)

    Yu, Y.; Kalashnikova, O. V.; Garay, M. J.; Notaro, M.

    2017-12-01

    Global arid and semi-arid regions supply 1100 to 5000 Tg of Aeolian dust to the atmosphere each year, primarily from North Africa and secondarily from the Middle East. Previous dust source identification methods, based on either remotely-sensed aerosol optical depth (AOD) or dust activity, yield distinct dust source maps, largely due to the limitations in each method and remote-sensing product. Here we apply a novel motion-based method for dust source identification. Dust plume thickness and motion vectors from the Multi-angle Imaging SpectroRadiometer (MISR) Cloud Motion Vector Product (CMVP) are examined to identify the regions with a high frequency of fast-moving dust plumes, by season. According to MISR CMVP, the Bodélé Depression is the most important dust source across North Africa, consistent with previous studies. Seasonal variability of dust emission across North Africa is largely driven by the climatology of wind and precipitation, featuring the influence of the Sharav Cyclone and the West African monsoon. In the Middle East, Iraq, Kuwait, and eastern Saudi Arabia are identified as dust source regions, especially during summer months, when the Middle Eastern Shamal wind is active. Furthermore, dust emission trends at each dust source are diagnosed from the motion-based dust source dataset. Increases in dust emission from the Fertile Crescent, Sahel, and eastern African dust sources are identified from MISR CMVP, implying a potential contribution from these dust sources to the upward trend in AOD and dust AOD over the Middle East in the 21st century. By comparing with various dust source identification studies, we conclude that the motion-based identification of dust sources is an encouraging alternative and complement to the AOD-only source identification method.

  11. Data Quality Screening Service

    NASA Technical Reports Server (NTRS)

    Strub, Richard; Lynnes, Christopher; Hearty, Thomas; Won, Young-In; Fox, Peter; Zednik, Stephan

    2013-01-01

    A report describes the Data Quality Screening Service (DQSS), which is designed to help automate the filtering of remote sensing data on behalf of science users. Whereas this process often involves much research through quality documents followed by laborious coding, the DQSS is a Web Service that provides data users with data pre-filtered to their particular criteria, while at the same time guiding the user with filtering recommendations of the cognizant data experts. The DQSS design is based on a formal semantic Web ontology that describes data fields and the quality fields for applying quality control within a data product. The accompanying code base handles several remote sensing datasets and quality control schemes for data products stored in Hierarchical Data Format (HDF), a common format for NASA remote sensing data. Together, the ontology and code support a variety of quality control schemes through the implementation of the Boolean expression with simple, reusable conditional expressions as operands. Additional datasets are added to the DQSS simply by registering instances in the ontology if they follow a quality scheme that is already modeled in the ontology. New quality schemes are added by extending the ontology and adding code for each new scheme.
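
    The filtering idea described above amounts to evaluating a Boolean combination of simple, reusable conditional expressions on a product's quality field and screening the data with the resulting mask. The sketch below is a minimal stand-in for that mechanism; the operators, condition format, and usage are assumptions and do not reproduce the DQSS ontology.

        import numpy as np

        def quality_mask(quality_field, conditions):
            """Build a keep/reject mask by AND-combining simple conditional
            expressions, e.g. [("<=", 1)] meaning 'quality flag at most 1'."""
            ops = {"<=": np.less_equal, "<": np.less, "==": np.equal,
                   ">=": np.greater_equal, ">": np.greater}
            mask = np.ones_like(quality_field, dtype=bool)
            for op, value in conditions:
                mask &= ops[op](quality_field, value)
            return mask

        # Usage sketch: keep pixels with flag 0 or 1, fill the rest before analysis.
        # good = quality_mask(qc_flags, [("<=", 1)])
        # data_screened = np.where(good, data, np.nan)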

  12. Secure Nearest Neighbor Query on Crowd-Sensing Data

    PubMed Central

    Cheng, Ke; Wang, Liangmin; Zhong, Hong

    2016-01-01

    Nearest neighbor queries are fundamental in location-based services, and secure nearest neighbor queries mainly focus on how to securely and quickly retrieve the nearest neighbor in the outsourced cloud server. However, the previous big data system structure has changed because of the crowd-sensing data. On the one hand, sensing data terminals as the data owner are numerous and mistrustful, while, on the other hand, in most cases, the terminals find it difficult to finish many safety operations due to computation and storage capability constraints. In light of the Multi Owners and Multi Users (MOMU) situation in the crowd-sensing data cloud environment, this paper presents a secure nearest neighbor query scheme based on the proxy server architecture, which is constructed from secure two-party computation protocols and a secure Voronoi diagram algorithm. It not only preserves the data confidentiality and query privacy but also effectively resists the collusion between the cloud server and the data owners or users. Finally, extensive theoretical and experimental evaluations are presented to show that our proposed scheme achieves a superior balance between the security and query performance compared to other schemes. PMID:27669253

  13. Secure Nearest Neighbor Query on Crowd-Sensing Data.

    PubMed

    Cheng, Ke; Wang, Liangmin; Zhong, Hong

    2016-09-22

    Nearest neighbor queries are fundamental in location-based services, and secure nearest neighbor queries mainly focus on how to securely and quickly retrieve the nearest neighbor in the outsourced cloud server. However, the previous big data system structure has changed because of the crowd-sensing data. On the one hand, sensing data terminals as the data owner are numerous and mistrustful, while, on the other hand, in most cases, the terminals find it difficult to finish many safety operations due to computation and storage capability constraints. In light of the Multi Owners and Multi Users (MOMU) situation in the crowd-sensing data cloud environment, this paper presents a secure nearest neighbor query scheme based on the proxy server architecture, which is constructed from secure two-party computation protocols and a secure Voronoi diagram algorithm. It not only preserves the data confidentiality and query privacy but also effectively resists the collusion between the cloud server and the data owners or users. Finally, extensive theoretical and experimental evaluations are presented to show that our proposed scheme achieves a superior balance between the security and query performance compared to other schemes.

  14. Applications of Sentinel-2 data for agriculture and forest monitoring using the absolute difference (ZABUD) index derived from the AgroEye software (ESA)

    NASA Astrophysics Data System (ADS)

    de Kok, R.; WeŻyk, P.; PapieŻ, M.; Migo, L.

    2017-10-01

    To convince new users of the advantages of the Sentinel_2 sensor, a simplification of classic remote sensing tools allows the creation of a platform of communication among domain specialists in agricultural analysis, visual image interpreters, and remote sensing programmers. An index value, known in the remote sensing user domain as "Zabud", was selected to represent, in color, the essentials of a time series analysis. The color index used in a color atlas offers a working platform for agricultural field control. This creates a database of test and training areas that enables rapid anomaly detection in the agricultural domain. The use cases and simplifications now function as an introduction to Sentinel_2 based remote sensing, in an area that previously relied on VHR imagery and aerial data, serving mainly visual interpretation. The database extension with detected anomalies allows developers of open source software to design solutions for further agricultural control with remote sensing.

  15. Commercial potential of remote sensing data from the Earth observing system

    NASA Technical Reports Server (NTRS)

    Merry, Carolyn J.; Tomlin, Sandra M.

    1992-01-01

    The purpose was to assess the market potential of remote sensing value-added products from the Earth Observing System (EOS) platform. Sensors on the EOS platform were evaluated to determine which qualities and capabilities could be useful to the commercial user. The approach was to investigate past and future satellite data distribution programs. A questionnaire was developed for use in a telephone survey. Based on the results of the survey of companies that add value to remotely sensed data, conversations with the principal investigators in charge of each EOS sensor, a study of past commercial satellite data ventures, and reading from the commercial remote sensing industry literature, three recommendations were developed: develop a strategic plan for commercialization of EOS data, define a procedure for commercial users within the EOS data stream, and develop an Earth Observations Commercial Applications Program-like demonstration program within NASA using EOS simulated data.

  16. Remote sensing and geographically based information systems

    NASA Technical Reports Server (NTRS)

    Cicone, R. C.

    1977-01-01

    A structure is proposed for a geographically-oriented computer-based information system applicable to the analysis of remote sensing digital data. The structure, intended to answer a wide variety of user needs, would permit multiple views of the data, provide independent management of data security, quality and integrity, and rely on automatic data filing. Problems in geographically-oriented data systems, including those related to line encoding and cell encoding, are considered.

  17. High-Frequency Replanning Under Uncertainty Using Parallel Sampling-Based Motion Planning

    PubMed Central

    Sun, Wen; Patil, Sachin; Alterovitz, Ron

    2015-01-01

    As sampling-based motion planners become faster, they can be re-executed more frequently by a robot during task execution to react to uncertainty in robot motion, obstacle motion, sensing noise, and uncertainty in the robot’s kinematic model. We investigate and analyze high-frequency replanning (HFR), where, during each period, fast sampling-based motion planners are executed in parallel as the robot simultaneously executes the first action of the best motion plan from the previous period. We consider discrete-time systems with stochastic nonlinear (but linearizable) dynamics and observation models with noise drawn from zero mean Gaussian distributions. The objective is to maximize the probability of success (i.e., avoid collision with obstacles and reach the goal) or to minimize path length subject to a lower bound on the probability of success. We show that, as parallel computation power increases, HFR offers asymptotic optimality for these objectives during each period for goal-oriented problems. We then demonstrate the effectiveness of HFR for holonomic and nonholonomic robots including car-like vehicles and steerable medical needles. PMID:26279645
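
    One HFR period, in rough outline, runs several sampling-based planners for a fixed time budget, selects the best plan found, and executes only its first action before replanning. The sketch below shows that loop structure; the planner objects, their plan/cost interfaces, and the execution callback are placeholders rather than the paper's implementation.

        def hfr_step(state, goal, planners, execute_first_action, period_s=0.1):
            """One period of high-frequency replanning: plan in parallel
            (conceptually), pick the lowest-cost feasible plan, and execute
            only its first action."""
            candidate_plans = [p.plan(state, goal, time_budget=period_s) for p in planners]
            candidate_plans = [pl for pl in candidate_plans if pl is not None]
            if not candidate_plans:
                return state                      # no feasible plan this period; hold
            best = min(candidate_plans, key=lambda pl: pl.cost)
            return execute_first_action(state, best.actions[0])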

  18. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-02-01

    Experimental or operational modal analysis traditionally requires physically-attached wired or wireless sensors for vibration measurement of structures. This instrumentation can result in mass-loading on lightweight structures, and is costly and time-consuming to install and maintain on large civil structures, especially for long-term applications (e.g., structural health monitoring) that require significant maintenance for cabling (wired sensors) or periodic replacement of the energy supply (wireless sensors). Moreover, these sensors are typically placed at a limited number of discrete locations, providing low spatial sensing resolution that is hardly sufficient for modal-based damage localization, or model correlation and updating for larger-scale structures. Non-contact measurement methods such as scanning laser vibrometers provide high-resolution sensing capacity without the mass-loading effect; however, they make sequential measurements that require considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation, optical flow), video camera-based measurements have been successfully used for vibration measurements and subsequent modal analysis, based on techniques such as digital image correlation (DIC) and point tracking. However, they typically require a speckle pattern or high-contrast markers to be placed on the surface of structures, which poses challenges when the measurement area is large or inaccessible. This work explores advanced computer vision and video processing algorithms to develop a novel video measurement and vision-based operational (output-only) modal analysis method that alleviates the need for structural surface preparation associated with existing vision-based methods and can be implemented in a relatively efficient and autonomous manner with little user supervision and calibration. First, a multi-scale image processing method is applied to the frames of the video of a vibrating structure to extract the local pixel phases that encode local structural vibration, establishing a full-field spatiotemporal motion matrix. Then a high-spatial dimensional, yet low-modal-dimensional, over-complete model is used to represent the extracted full-field motion matrix using modal superposition, which is physically connected and manipulated by a family of unsupervised learning models and techniques, respectively. Thus, the proposed method is able to blindly extract modal frequencies, damping ratios, and full-field (as many points as the pixel number of the video frame) mode shapes from line-of-sight video measurements of the structure. The method is validated by laboratory experiments on a bench-scale building structure and a cantilever beam. Its ability for output (video measurements)-only identification and visualization of the weakly-excited mode is demonstrated and several issues with its implementation are discussed.

  19. Control of a Quadcopter Aerial Robot Using Optic Flow Sensing

    NASA Astrophysics Data System (ADS)

    Hurd, Michael Brandon

    This thesis focuses on the motion control of a custom-built quadcopter aerial robot using optic flow sensing. Optic flow sensing is a vision-based approach that can provide a robot the ability to fly in global positioning system (GPS) denied environments, such as indoor environments. In this work, optic flow sensors are used to stabilize the motion of the quadcopter robot, where an optic flow algorithm is applied to provide odometry measurements to the quadcopter's central processing unit to monitor the flight heading. The optic-flow sensor and algorithm are capable of gathering and processing the images at 250 frames/sec, and the sensor package weighs 2.5 g and has a footprint of 6 cm2. The odometry value from the optic flow sensor is then used as feedback information in a simple proportional-integral-derivative (PID) controller on the quadcopter. Experimental results are presented to demonstrate the effectiveness of using optic flow for controlling the motion of the quadcopter aerial robot. The technique presented herein can be applied to different types of aerial robotic systems or unmanned aerial vehicles (UAVs), as well as unmanned ground vehicles (UGVs).
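
    The control idea, integrated optic-flow odometry fed back through a PID loop, can be sketched as follows; the gains, the flow_sensor/motors interfaces, and the hover-in-place task are illustrative assumptions rather than details from the thesis.

    ```python
    # Illustrative sketch of optic-flow odometry as feedback in a PID position loop.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, error):
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    def hold_position(flow_sensor, motors, dt=1 / 250.0):
        """Keep the quadcopter over a fixed point using integrated optic flow."""
        pid_x, pid_y = PID(0.8, 0.05, 0.2, dt), PID(0.8, 0.05, 0.2, dt)
        x = y = 0.0                                   # odometry: integrated flow (m)
        while True:
            vx, vy = flow_sensor.read()               # lateral velocities from flow
            x += vx * dt
            y += vy * dt
            # Drive the position error back to zero by commanding roll/pitch.
            motors.command(roll=pid_y.update(-y), pitch=pid_x.update(-x))
    ```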

  20. Cardiac-induced localized thoracic motion detected by a fiber optic sensing scheme

    NASA Astrophysics Data System (ADS)

    Allsop, Thomas; Lloyd, Glynn; Bhamber, Ranjeet S.; Hadzievski, Ljupco; Halliday, Michael; Webb, David J.; Bennion, Ian

    2014-11-01

    The cardiovascular health of the human population is a major concern for medical clinicians, with cardiovascular diseases responsible for 48% of all deaths worldwide, according to the World Health Organization. The development of new diagnostic tools that are practicable and economical to scrutinize the cardiovascular health of humans is a major driver for clinicians. We offer a new technique to obtain seismocardiographic signals up to 54 Hz covering both ballistocardiography (below 20 Hz) and audible heart sounds (20 Hz upward), using a system based on curvature sensors formed from fiber optic long period gratings. This system can visualize the real-time three-dimensional (3-D) mechanical motion of the heart by using the data from the sensing array in conjunction with a bespoke 3-D shape reconstruction algorithm. Visualization is demonstrated by adhering three to four sensors on the outside of the thorax and in close proximity to the apex of the heart; the sensing scheme revealed a complex motion of the heart wall next to the apex region of the heart. The detection scheme is low-cost, portable, easily operated and has the potential for ambulatory applications.

  1. Autocalibrating motion-corrected wave-encoding for highly accelerated free-breathing abdominal MRI.

    PubMed

    Chen, Feiyu; Zhang, Tao; Cheng, Joseph Y; Shi, Xinwei; Pauly, John M; Vasanawala, Shreyas S

    2017-11-01

    To develop a motion-robust wave-encoding technique for highly accelerated free-breathing abdominal MRI. A comprehensive 3D wave-encoding-based method was developed to enable fast free-breathing abdominal imaging: (a) auto-calibration for wave-encoding was designed to avoid an extra scan for coil sensitivity measurement; (b) intrinsic butterfly navigators were used to track respiratory motion; (c) variable-density sampling was included to enable compressed sensing; (d) golden-angle radial-Cartesian hybrid view-ordering was incorporated to improve motion robustness; and (e) localized rigid motion correction was combined with parallel imaging compressed sensing reconstruction to reconstruct the highly accelerated wave-encoded datasets. The proposed method was tested on six subjects, and image quality was compared with standard accelerated Cartesian acquisition both with and without respiratory triggering. Inverse gradient entropy and normalized gradient squared metrics were calculated, testing whether image quality was improved using paired t-tests. For respiratory-triggered scans, wave-encoding significantly reduced residual aliasing and blurring compared with standard Cartesian acquisition (metrics suggesting P < 0.05). For non-respiratory-triggered scans, the proposed method yielded significantly better motion correction compared with standard motion-corrected Cartesian acquisition (metrics suggesting P < 0.01). The proposed methods can reduce motion artifacts and improve overall image quality of highly accelerated free-breathing abdominal MRI. Magn Reson Med 78:1757-1766, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  2. Towards Wearable A-Mode Ultrasound Sensing for Real-Time Finger Motion Recognition.

    PubMed

    Yang, Xingchen; Sun, Xueli; Zhou, Dalin; Li, Yuefeng; Liu, Honghai

    2018-06-01

    It is evident that surface electromyography (sEMG) based human-machine interfaces (HMI) have inherent difficulty in predicting dexterous musculoskeletal movements such as finger motions. This paper investigates a plausible alternative to sEMG, an ultrasound-driven HMI, for dexterous motion recognition, exploiting its ability to detect morphological changes of deep muscles and tendons. A lightweight multi-channel A-mode ultrasound device is adopted to evaluate the performance of finger motion recognition; an experiment with eight able-bodied subjects is designed to evaluate both widely accepted offline and online algorithms. The results show that the offline recognition accuracy reaches 98.83% ± 0.79%. The real-time motion completion rate is 95.4% ± 8.7% and the online motion selection time is 0.243 ± 0.127 s. The outcomes confirm the feasibility of A-mode ultrasound based wearable HMI and its promising applications in prosthetic devices, virtual reality, and remote manipulation.

  3. Activity recognition of assembly tasks using body-worn microphones and accelerometers.

    PubMed

    Ward, Jamie A; Lukowicz, Paul; Tröster, Gerhard; Starner, Thad E

    2006-10-01

    In order to provide relevant information to mobile users, such as workers engaging in the manual tasks of maintenance and assembly, a wearable computer requires information about the user's specific activities. This work focuses on the recognition of activities that are characterized by a hand motion and an accompanying sound. Suitable activities can be found in assembly and maintenance work. Here, we provide an initial exploration into the problem domain of continuous activity recognition using on-body sensing. We use a mock "wood workshop" assembly task to ground our investigation. We describe a method for the continuous recognition of activities (sawing, hammering, filing, drilling, grinding, sanding, opening a drawer, tightening a vise, and turning a screwdriver) using microphones and three-axis accelerometers mounted at two positions on the user's arms. Potentially "interesting" activities are segmented from continuous streams of data using an analysis of the sound intensity detected at the two different locations. Activity classification is then performed on these detected segments using linear discriminant analysis (LDA) on the sound channel and hidden Markov models (HMMs) on the acceleration data. Four different methods of classifier fusion are compared for improving these classifications. Using user-dependent training, we obtain continuous average recall and precision rates (for positive activities) of 78 percent and 74 percent, respectively. Using user-independent training (leave-one-out across five users), we obtain recall rates of 66 percent and precision rates of 63 percent. In isolation, these activities were recognized with accuracies of 98 percent, 87 percent, and 95 percent for the user-dependent, user-independent, and user-adapted cases, respectively.
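
    A minimal sketch of the two-channel idea, an LDA classifier on sound features and one HMM per activity on acceleration features combined by a simple product rule, is given below; feature extraction is omitted and the product rule is only one illustrative fusion choice, not a reproduction of the paper's four methods.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from hmmlearn.hmm import GaussianHMM

    def train(sound_X, sound_y, accel_seqs, accel_y, n_classes):
        """sound_X: per-segment sound feature vectors; accel_seqs: per-segment
        acceleration feature sequences (each an array of shape (T_i, d))."""
        lda = LinearDiscriminantAnalysis().fit(sound_X, sound_y)
        hmms = []
        for c in range(n_classes):
            seqs = [s for s, y in zip(accel_seqs, accel_y) if y == c]
            lengths = [len(s) for s in seqs]
            hmm = GaussianHMM(n_components=4, covariance_type="diag", n_iter=20)
            hmm.fit(np.vstack(seqs), lengths)
            hmms.append(hmm)
        return lda, hmms

    def classify(lda, hmms, sound_x, accel_seq):
        p_sound = lda.predict_proba(sound_x.reshape(1, -1))[0]
        loglik = np.array([h.score(accel_seq) for h in hmms])
        p_accel = np.exp(loglik - loglik.max())
        p_accel /= p_accel.sum()
        return int(np.argmax(p_sound * p_accel))      # product-rule fusion
    ```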

  4. How cells jump: Ultrafast motions in the single-celled micro-organism Halteria grandinella

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, Deepak; Cockenpot, Fabien; Prakash, Manu

    Here we describe a novel behavior of ''jumping'' in micro-organisms, observed in the common freshwater ciliate Halteria grandinella. This organism's swimming motion is characterized by periods of forward swimming at around 10 body lengths/s, punctuated by extremely rapid backward ''jumps'' where the organism reaches speeds of more than 150 body lengths/s. We show, using detailed measurements of the swimming motion through high-speed video microscopy, that the extreme swimming speeds are achieved by the motile cilia transitioning to a beating mode characterized by a significantly larger beat amplitude and an associated reversal in the direction of thrust production. We further show that H. grandinella cells can sense a fluid shear stress signal and ''jump'' in response: a possible predator avoidance mechanism. We investigate this mechanism of shear sensing and study the role of the long, slender structures known as ''cirri'' as microscale sensors of shear stress. The jumping of H. grandinella is at the limits of the metabolic rate of the organism and thus offers insights into the limiting factors governing energy storage and mechanical power release at the microscale. Concurrently their sensing apparatus allows an understanding of the physical limits of microscale mechanical sensing. This material is based on work supported by, or in part by, the US Army Research Laboratory and the US Army Research Office under contract/Grant Number W911NF-15-1-0358.

  5. Impaired Limb Proprioception in Adults With Spasmodic Dysphonia.

    PubMed

    Konczak, Jürgen; Aman, Joshua E; Chen, Yu-Wen; Li, Kuan-yi; Watson, Peter J

    2015-11-01

    Focal dystonias of the head and neck are associated with a loss of kinesthetic acuity at muscles distant from the dystonic sites. That is, while the motor deficits in focal dystonia are confined, the associated somatosensory deficits are generalized. This is the first systematic study to examine whether patients diagnosed with spasmodic dysphonia (SD) show somatosensory impairments similar in scope to other forms of focal dystonia. Proprioceptive acuity (ability to discriminate between two stimuli) for forearm position and motion sense was assessed in 14 spasmodic dysphonia subjects and 28 age-matched controls using a passive motion apparatus. Psychophysical thresholds, uncertainty area (UA), and a proprioceptive acuity index (AI) were computed based on the subjects' verbal responses. The main findings are as follows: first, the SD group showed significantly elevated thresholds and UAs for forearm position sense compared with the control group. Second, 9 of 14 SD subjects (64%) exhibited an AI for position sense above the control group maximum. Three SD subjects had a motion sense AI above the control group maximum. The results indicate that impaired limb proprioception is a common feature of SD. Like other forms of focal dystonia, spasmodic dysphonia affects the somatosensation of nondystonic muscle systems. That is, SD is associated with a generalized somatosensory deficit. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  6. Bushland Evapotranspiration and Agricultural Remote Sensing System (BEARS) software

    NASA Astrophysics Data System (ADS)

    Gowda, P. H.; Moorhead, J.; Brauer, D. K.

    2017-12-01

    Evapotranspiration (ET) is a major component of the hydrologic cycle. ET data are used for a variety of water management and research purposes such as irrigation scheduling, water and crop modeling, streamflow, water availability, and many more. Remote sensing products have been widely used to create spatially representative ET data sets which provide important information from field to regional scales. As UAV capabilities increase, remote sensing use is likely to also increase. For that purpose, scientists at the USDA-ARS research laboratory in Bushland, TX developed the Bushland Evapotranspiration and Agricultural Remote Sensing System (BEARS) software. The BEARS software is a Java-based application that allows users to process remote sensing data to generate ET outputs using predefined models, or to enter custom equations and models. The capability to define new equations and build new models expands the applicability of the BEARS software beyond ET mapping to any remote sensing application. The software also includes an image viewing tool that allows users to visualize outputs, as well as draw an area of interest using various shapes. This software is freely available from the USDA-ARS Conservation and Production Research Laboratory website.

  7. User manual for the NTS ground motion data base retrieval program: ntsgm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    App, F.N.; Tunnell, T.W.

    1994-05-01

    The NTS (Nevada Test Site) Ground Motion Data Base is composed of strong motion data recorded during the normal execution of the US underground test program. It contains surface, subsurface, and structure motion data as digitized waveforms. Currently the data base contains information from 148 underground explosions. This represents about 4,200 measurements and nearly 12,000 individual digitized waveforms. Most of the data was acquired by Los Alamos National Laboratory (LANL) in connection with LANL sponsored underground tests. Some was acquired by Los Alamos on tests conducted by the Defense Nuclear Agency (DNA) and Lawrence Livermore National Laboratory (LLNL), and there are some measurements that were acquired by the other test sponsors on their events and provided for inclusion in this data base. Data acquisition, creation of the data base, and development of the data base retrieval program (ntsgm) are the result of work in support of the Los Alamos Field Test Office and the Office of Nonproliferation and Arms Control.

  8. Scalable sensing electronics towards a motion capture suit

    NASA Astrophysics Data System (ADS)

    Xu, Daniel; Gisby, Todd A.; Xie, Shane; Anderson, Iain A.

    2013-04-01

    Being able to accurately record body motion allows complex movements to be characterised and studied. This is especially important in the film or sport coaching industry. Unfortunately, the human body has over 600 skeletal muscles, giving rise to multiple degrees of freedom. In order to accurately capture motion such as hand gestures, elbow or knee flexion and extension, vast numbers of sensors are required. Dielectric elastomer (DE) sensors are an emerging class of electroactive polymer (EAP) that is soft, lightweight and compliant. These characteristics are ideal for a motion capture suit. One challenge is to design sensing electronics that can simultaneously measure multiple sensors. This paper describes a scalable capacitive sensing device that can measure up to 8 different sensors with an update rate of 20 Hz.

  9. Navigation, behaviors, and control modes in an autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Byler, Eric A.

    1995-01-01

    An Intelligent Mobile Sensing System (IMSS) has been developed for the automated inspection of radioactive and hazardous waste storage containers in warehouse facilities at Department of Energy sites. A 2D space of control modes was used that provides a combined view of reactive and planning approaches wherein a 2D situation space is defined by dimensions representing the predictability of the agent's task environment and the constraint imposed by its goals. In this sense selection of appropriate systems for planning, navigation, and control depends on the problem at hand. The IMSS vehicle navigation system is based on a combination of feature based motion, landmark sightings, and an a priori logical map of the mockup storage facility. Motions for the inspection activities are composed of different interactions of several available control modes, several obstacle avoidance modes, and several feature identification modes. Features used to drive these behaviors are both visual and acoustic.

  10. Research on dynamic performance design of mobile phone application based on context awareness

    NASA Astrophysics Data System (ADS)

    Bo, Zhang

    2018-05-01

    This study explores the dynamic performance of different mobile phone applications and users' cognitive differences, with the aim of reducing cognitive burden and enhancing the sense of experience. It analyzes dynamic design performance in four different interactive contexts, constructs a framework for the information service process under interactive context awareness, and proposes two perception principles for cognitive consensus between designer and user, together with the two kinds of knowledge that follow from these principles. Analyzing the context helps users perceive the dynamic performance more intuitively, so that the details of interaction are rendered more vividly and smoothly, thereby enhancing the user's experience of the interactive process. A shared perceptual experience enables designers and users to achieve emotional resonance in different interactive contexts, helps them rapidly understand the interactive content, and lets them perceive the logic and hierarchy of the content and its structure, thereby improving the effectiveness of mobile applications.

  11. Classification of motor intent in transradial amputees using sonomyography and spatio-temporal image analysis

    NASA Astrophysics Data System (ADS)

    Hariharan, Harishwaran; Aklaghi, Nima; Baker, Clayton A.; Rangwala, Huzefa; Kosecka, Jana; Sikdar, Siddhartha

    2016-04-01

    In spite of major advances in biomechanical design of upper extremity prosthetics, these devices continue to lack intuitive control. Conventional myoelectric control strategies typically utilize electromyography (EMG) signal amplitude sensed from forearm muscles. EMG has limited specificity in resolving deep muscle activity and poor signal-to-noise ratio. We have been investigating alternative control strategies that rely on real-time ultrasound imaging that can overcome many of the limitations of EMG. In this work, we present an ultrasound image sequence classification method that utilizes spatiotemporal features to describe muscle activity and classify motor intent. Ultrasound images of the forearm muscles were obtained from able-bodied subjects and a trans-radial amputee while they attempted different hand movements. A grid-based approach is used to test the feasibility of using spatio-temporal features by classifying hand motions performed by the subjects. Using leave-one-out cross validation on image sequences acquired from able-bodied subjects, we observe that the grid-based approach is able to discern four hand motions with 95.31% accuracy. In the case of the trans-radial amputee, we are able to discern three hand motions with 80% accuracy. In a second set of experiments, we study classification accuracy by extracting spatio-temporal sub-sequences that depict activity due to the motion of local anatomical interfaces. Short, time- and space-limited cuboidal sequences are initially extracted and assigned an optical flow behavior label, based on a response function. The image space is clustered based on the location of the cuboids, and features are calculated from the cuboids in each cluster. Using sequences of known motions, we extract feature vectors that describe said motion. A K-nearest neighbor classifier is designed for classification experiments. Using leave-one-out cross validation on image sequences for an amputee subject, we demonstrate that the classifier is able to discern three important hand motions with 93.33% accuracy, 91-100% precision and an 80-100% recall rate. We anticipate that ultrasound imaging based methods will address some limitations of conventional myoelectric sensing, while adding advantages inherent to ultrasound imaging.
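
    The grid-based classification step can be sketched roughly as follows: a crude per-cell activity feature (a placeholder, not the paper's descriptor) is computed for each ultrasound sequence and fed to a K-nearest-neighbor classifier evaluated with leave-one-out cross-validation.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    def grid_features(sequence, grid=(8, 8)):
        """Placeholder feature: mean absolute frame-to-frame intensity change
        per grid cell; sequence has shape (T, H, W) with H, W divisible by grid."""
        T, H, W = sequence.shape
        diffs = np.abs(np.diff(sequence, axis=0)).mean(axis=0)
        cells = diffs.reshape(grid[0], H // grid[0], grid[1], W // grid[1])
        return cells.mean(axis=(1, 3)).ravel()

    def evaluate(sequences, labels, k=3):
        X = np.stack([grid_features(s) for s in sequences])
        clf = KNeighborsClassifier(n_neighbors=k)
        scores = cross_val_score(clf, X, labels, cv=LeaveOneOut())
        return scores.mean()                           # leave-one-out accuracy
    ```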

  12. Sensor fusion and computer vision for context-aware control of a multi degree-of-freedom prosthesis

    NASA Astrophysics Data System (ADS)

    Markovic, Marko; Dosen, Strahinja; Popovic, Dejan; Graimann, Bernhard; Farina, Dario

    2015-12-01

    Objective. Myoelectric activity volitionally generated by the user is often used for controlling hand prostheses in order to replicate the synergistic actions of muscles in healthy humans during grasping. Muscle synergies in healthy humans are based on the integration of visual perception, heuristics and proprioception. Here, we demonstrate how sensor fusion that combines artificial vision and proprioceptive information with the high-level processing characteristics of biological systems can be effectively used in transradial prosthesis control. Approach. We developed a novel context- and user-aware prosthesis (CASP) controller integrating computer vision and inertial sensing with myoelectric activity in order to achieve semi-autonomous and reactive control of a prosthetic hand. The presented method semi-automatically provides simultaneous and proportional control of multiple degrees-of-freedom (DOFs), thus decreasing overall physical effort while retaining full user control. The system was compared against the major commercial state-of-the-art myoelectric control system in ten able-bodied subjects and one amputee subject. All subjects used a transradial prosthesis with an active wrist to grasp objects typically associated with activities of daily living. Main results. The CASP significantly outperformed the myoelectric interface when controlling all of the prosthesis DOFs. However, when tested with a less complex prosthetic system (a smaller number of DOFs), the CASP was slower but produced reaching motions that contained fewer compensatory movements. Another important finding is that the CASP system required minimal user adaptation and training. Significance. The CASP constitutes a substantial improvement for the control of multi-DOF prostheses. The application of the CASP will have a significant impact when translated to real-life scenarios, particularly with respect to improving the usability and acceptance of highly complex systems (e.g., full prosthetic arms) by amputees.

  13. Sensor fusion and computer vision for context-aware control of a multi degree-of-freedom prosthesis.

    PubMed

    Markovic, Marko; Dosen, Strahinja; Popovic, Dejan; Graimann, Bernhard; Farina, Dario

    2015-12-01

    Myoelectric activity volitionally generated by the user is often used for controlling hand prostheses in order to replicate the synergistic actions of muscles in healthy humans during grasping. Muscle synergies in healthy humans are based on the integration of visual perception, heuristics and proprioception. Here, we demonstrate how sensor fusion that combines artificial vision and proprioceptive information with the high-level processing characteristics of biological systems can be effectively used in transradial prosthesis control. We developed a novel context- and user-aware prosthesis (CASP) controller integrating computer vision and inertial sensing with myoelectric activity in order to achieve semi-autonomous and reactive control of a prosthetic hand. The presented method semi-automatically provides simultaneous and proportional control of multiple degrees-of-freedom (DOFs), thus decreasing overall physical effort while retaining full user control. The system was compared against the major commercial state-of-the-art myoelectric control system in ten able-bodied subjects and one amputee subject. All subjects used a transradial prosthesis with an active wrist to grasp objects typically associated with activities of daily living. The CASP significantly outperformed the myoelectric interface when controlling all of the prosthesis DOFs. However, when tested with a less complex prosthetic system (a smaller number of DOFs), the CASP was slower but produced reaching motions that contained fewer compensatory movements. Another important finding is that the CASP system required minimal user adaptation and training. The CASP constitutes a substantial improvement for the control of multi-DOF prostheses. The application of the CASP will have a significant impact when translated to real-life scenarios, particularly with respect to improving the usability and acceptance of highly complex systems (e.g., full prosthetic arms) by amputees.

  14. Evaluation of the leap motion controller as a new contact-free pointing device.

    PubMed

    Bachmann, Daniel; Weichert, Frank; Rinkenauer, Gerhard

    2014-12-24

    This paper presents a Fitts' law-based analysis of the user's performance in selection tasks with the Leap Motion Controller compared with a standard mouse device. The Leap Motion Controller (LMC) is a new contact-free input system for gesture-based human-computer interaction with declared sub-millimeter accuracy. Up to this point, there has hardly been any systematic evaluation of this new system available. With an error rate of 7.8% for the LMC and 2.8% for the mouse device, movement times twice as large as for a mouse device and high overall effort ratings, the Leap Motion Controller's performance as an input device for everyday generic computer pointing tasks is rather limited, at least with regard to the selection recognition provided by the LMC.
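
    For reference, a Fitts'-law-style analysis of pointing data generally reduces to computing an index of difficulty per target and regressing movement time against it; the sketch below shows this generic computation with made-up variable names, not the specific procedure of the paper.

    ```python
    import numpy as np

    def index_of_difficulty(distance, width):
        """Shannon formulation: ID = log2(D / W + 1), in bits."""
        return np.log2(distance / width + 1.0)

    def fitts_fit(distances, widths, movement_times):
        ids = index_of_difficulty(np.asarray(distances), np.asarray(widths))
        mt = np.asarray(movement_times)
        slope, intercept = np.polyfit(ids, mt, 1)      # MT = a + b * ID
        throughput = np.mean(ids / mt)                 # bits/s (one common convention)
        return intercept, slope, throughput
    ```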

  15. Evaluation of the Leap Motion Controller as a New Contact-Free Pointing Device

    PubMed Central

    Bachmann, Daniel; Weichert, Frank; Rinkenauer, Gerhard

    2015-01-01

    This paper presents a Fitts' law-based analysis of the user's performance in selection tasks with the Leap Motion Controller compared with a standard mouse device. The Leap Motion Controller (LMC) is a new contact-free input system for gesture-based human-computer interaction with declared sub-millimeter accuracy. Up to this point, there has hardly been any systematic evaluation of this new system available. With an error rate of 7.8% for the LMC and 2.8% for the mouse device, movement times twice as large as for a mouse device and high overall effort ratings, the Leap Motion Controller's performance as an input device for everyday generic computer pointing tasks is rather limited, at least with regard to the selection recognition provided by the LMC. PMID:25609043

  16. Model-Based Reinforcement of Kinect Depth Data for Human Motion Capture Applications

    PubMed Central

    Calderita, Luis Vicente; Bandera, Juan Pedro; Bustos, Pablo; Skiadopoulos, Andreas

    2013-01-01

    Motion capture systems have recently experienced a strong evolution. New cheap depth sensors and open source frameworks, such as OpenNI, allow for perceiving human motion on-line without using invasive systems. However, these proposals do not evaluate the validity of the obtained poses. This paper addresses this issue using a model-based pose generator to complement the OpenNI human tracker. The proposed system enforces kinematic constraints, eliminates odd poses and filters sensor noise, while learning the real dimensions of the performer's body. The system is composed of a PrimeSense sensor, an OpenNI tracker and a kinematics-based filter, and has been extensively tested. Experiments show that the proposed system improves pure OpenNI results at a very low computational cost. PMID:23845933

  17. Vertical Jump Height Estimation Algorithm Based on Takeoff and Landing Identification Via Foot-Worn Inertial Sensing.

    PubMed

    Wang, Jianren; Xu, Junkai; Shull, Peter B

    2018-03-01

    Vertical jump height is widely used for assessing motor development, functional ability, and motor capacity. Traditional methods for estimating vertical jump height rely on force plates or optical marker-based motion capture systems, limiting assessment to people with access to specialized laboratories. Current wearable designs need to be attached to the skin or strapped to an appendage, which can potentially be uncomfortable and inconvenient to use. This paper presents a novel algorithm for estimating vertical jump height based on foot-worn inertial sensors. Twenty healthy subjects performed countermovement jumping trials, and maximum jump height was determined via inertial sensors located above the toe and under the heel and was compared with the gold-standard maximum jump height estimation via optical marker-based motion capture. Average vertical jump height estimation errors from inertial sensing at the toe and heel were -2.2±2.1 cm and -0.4±3.8 cm, respectively. Vertical jump height estimation with the presented algorithm via inertial sensing showed excellent reliability at the toe (ICC(2,1) = 0.98) and heel (ICC(2,1) = 0.97). There was no significant bias in the inertial sensing at the toe, but proportional bias (b = 1.22) and fixed bias (a = -10.23 cm) were detected in inertial sensing at the heel. These results indicate that the presented algorithm could be applied to foot-worn inertial sensors to estimate maximum jump height, enabling assessment outside of traditional laboratory settings; to avoid bias errors, the toe may be a more suitable location for inertial sensor placement than the heel.
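
    Once takeoff and landing instants are identified, one common way to turn them into a jump height is the flight-time relation h = g * t_flight^2 / 8; the sketch below uses a naive low-acceleration threshold for the flight phase and is not the paper's identification algorithm.

    ```python
    import numpy as np

    G = 9.81  # m/s^2

    def detect_flight(accel_norm, threshold=2.0):
        """Crude flight detection: during flight the accelerometer norm is near
        zero (free fall); return (takeoff_idx, landing_idx)."""
        idx = np.flatnonzero(np.asarray(accel_norm) < threshold)
        return idx[0], idx[-1]

    def jump_height(accel_norm, fs):
        takeoff, landing = detect_flight(accel_norm)
        t_flight = (landing - takeoff) / fs            # flight time in seconds
        return G * t_flight**2 / 8.0                   # jump height in metres
    ```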

  18. Restoration of Wavelet-Compressed Images and Motion Imagery

    DTIC Science & Technology

    2004-01-01


  19. Engineering uses of physics-based ground motion simulations

    USGS Publications Warehouse

    Baker, Jack W.; Luco, Nicolas; Abrahamson, Norman A.; Graves, Robert W.; Maechling, Phillip J.; Olsen, Kim B.

    2014-01-01

    This paper summarizes validation methodologies focused on enabling ground motion simulations to be used with confidence in engineering applications such as seismic hazard analysis and dynamic analysis of structural and geotechnical systems. Numerical simulation of ground motion from large earthquakes, utilizing physics-based models of earthquake rupture and wave propagation, is an area of active research in the earth science community. Refinement and validation of these models require collaboration between earthquake scientists and engineering users, and testing/rating methodologies for simulated ground motions to be used with confidence in engineering applications. This paper provides an introduction to this field and an overview of current research activities being coordinated by the Southern California Earthquake Center (SCEC). These activities are related both to advancing the science and computational infrastructure needed to produce ground motion simulations, as well as to engineering validation procedures. Current research areas and anticipated future achievements are also discussed.

  20. An adaptive field detection method for bridge scour monitoring using motion-sensing radio transponders (RFIDs).

    DOT National Transportation Integrated Search

    2014-01-01

    A comprehensive field detection method is proposed that is aimed at developing advanced capability for reliable monitoring, inspection and life estimation of bridge infrastructure. The goal is to utilize Motion-Sensing Radio Transponders (RFIDs) on...

  1. CardioGuard: A Brassiere-Based Reliable ECG Monitoring Sensor System for Supporting Daily Smartphone Healthcare Applications

    PubMed Central

    Kwon, Sungjun; Kim, Jeehoon; Kang, Seungwoo; Lee, Youngki; Baek, Hyunjae

    2014-01-01

    We propose CardioGuard, a brassiere-based reliable electrocardiogram (ECG) monitoring sensor system, for supporting daily smartphone healthcare applications. It is designed to satisfy two key requirements for user-unobtrusive daily ECG monitoring: reliability of ECG sensing and usability of the sensor. The system is validated through extensive evaluations. The evaluation results showed that the CardioGuard sensor reliably measures the ECG during 12 representative daily activities including diverse movement levels; 89.53% of QRS peaks were detected on average. The questionnaire-based user study with 15 participants showed that the CardioGuard sensor was comfortable and unobtrusive. Additionally, the signal-to-noise ratio test and the washing durability test were conducted to show the high-quality sensing of the proposed sensor and its physical durability in practical use, respectively. PMID:25405527

  2. Motion perception: behavior and neural substrate.

    PubMed

    Mather, George

    2011-05-01

    Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. WIREs Cogn Sci 2011, 2, 305-314. DOI: 10.1002/wcs.110. Additional supporting information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html. Copyright © 2010 John Wiley & Sons, Ltd.
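
    The notion of an elementary motion sensor referred to above can be illustrated with a toy spatiotemporal-correlation (Reichardt-type) detector; this is a generic textbook construction with assumed parameters, not the specific cortical sensor model cited.

    ```python
    import numpy as np

    def reichardt_response(signal_a, signal_b, fs, tau=0.02):
        """signal_a/b: luminance over time at two nearby points; returns an
        opponent output whose sign indicates motion direction."""
        # First-order low-pass filter acts as the delay stage.
        alpha = 1.0 / (1.0 + tau * fs)

        def lowpass(x):
            y = np.zeros_like(x, dtype=float)
            for i in range(1, len(x)):
                y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
            return y

        da, db = lowpass(signal_a), lowpass(signal_b)
        # Correlate the delayed signal from one point with the direct signal
        # from the other; the opponent difference is direction selective.
        return np.mean(da * signal_b - db * signal_a)
    ```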

  3. Initial Experiments with the Leap Motion as a User Interface in Robotic Endonasal Surgery.

    PubMed

    Travaglini, T A; Swaney, P J; Weaver, Kyle D; Webster, R J

    The Leap Motion controller is a low-cost, optically-based hand tracking system that has recently been introduced on the consumer market. Prior studies have investigated its precision and accuracy, toward evaluating its usefulness as a surgical robot master interface. Yet due to the diversity of potential slave robots and surgical procedures, as well as the dynamic nature of surgery, it is challenging to make general conclusions from published accuracy and precision data. Thus, our goal in this paper is to explore the use of the Leap in the specific scenario of endonasal pituitary surgery. We use it to control a concentric tube continuum robot in a phantom study, and compare user performance using the Leap to previously published results using the Phantom Omni. We find that the users were able to achieve nearly identical average resection percentage and overall surgical duration with the Leap.

  4. Initial Experiments with the Leap Motion as a User Interface in Robotic Endonasal Surgery

    PubMed Central

    Travaglini, T. A.; Swaney, P. J.; Weaver, Kyle D.; Webster, R. J.

    2016-01-01

    The Leap Motion controller is a low-cost, optically-based hand tracking system that has recently been introduced on the consumer market. Prior studies have investigated its precision and accuracy, toward evaluating its usefulness as a surgical robot master interface. Yet due to the diversity of potential slave robots and surgical procedures, as well as the dynamic nature of surgery, it is challenging to make general conclusions from published accuracy and precision data. Thus, our goal in this paper is to explore the use of the Leap in the specific scenario of endonasal pituitary surgery. We use it to control a concentric tube continuum robot in a phantom study, and compare user performance using the Leap to previously published results using the Phantom Omni. We find that the users were able to achieve nearly identical average resection percentage and overall surgical duration with the Leap. PMID:26752501

  5. Phase retrieval based wavefront sensing experimental implementation and wavefront sensing accuracy calibration

    NASA Astrophysics Data System (ADS)

    Mao, Heng; Wang, Xiao; Zhao, Dazun

    2009-05-01

    As a wavefront sensing (WFS) tool, the Baseline algorithm, classified as an iterative-transform phase retrieval algorithm, estimates the phase distribution at the pupil from known PSFs at defocus planes. By using multiple phase diversities and appropriate phase unwrapping methods, this algorithm can achieve a reliable unique solution and high-dynamic-range phase measurement. In this paper, a Baseline-algorithm-based wavefront sensing experiment with a modified phase unwrapping step has been implemented, and corresponding graphical user interface (GUI) software has also been developed. The adaptability and repeatability of the Baseline algorithm have been validated in experiments. Moreover, the WFS accuracy of this algorithm has been calibrated against ZYGO interferometric results.
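
    The iterative-transform idea behind such phase retrieval can be sketched with a simplified single-plane loop that alternately enforces the known pupil amplitude and the measured PSF modulus; defocus diversity, phase unwrapping and the GUI are omitted, so this is only a schematic, not the Baseline algorithm itself.

    ```python
    import numpy as np

    def retrieve_phase(pupil_amp, measured_psf, n_iter=200):
        """pupil_amp: known pupil amplitude; measured_psf: intensity at focus."""
        target_amp = np.sqrt(measured_psf)
        field = pupil_amp.astype(complex)                            # start from zero phase
        for _ in range(n_iter):
            focal = np.fft.fftshift(np.fft.fft2(field))
            focal = target_amp * np.exp(1j * np.angle(focal))        # impose PSF modulus
            back = np.fft.ifft2(np.fft.ifftshift(focal))
            field = pupil_amp * np.exp(1j * np.angle(back))          # impose pupil modulus
        return np.angle(field)                                       # estimated pupil phase
    ```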

  6. A stretchable strain sensor based on a metal nanoparticle thin film for human motion detection

    NASA Astrophysics Data System (ADS)

    Lee, Jaehwan; Kim, Sanghyeok; Lee, Jinjae; Yang, Daejong; Park, Byong Chon; Ryu, Seunghwa; Park, Inkyu

    2014-09-01

    Wearable strain sensors for human motion detection are being highlighted in various fields such as the medical, entertainment and sports industries. In this paper, we propose a new type of stretchable strain sensor that can detect both tensile and compressive strains and can be fabricated by a very simple process. A silver nanoparticle (Ag NP) thin film patterned on the polydimethylsiloxane (PDMS) stamp by a single-step direct transfer process is used as the strain sensing material. The working principle is the change in the electrical resistance caused by the opening/closure of micro-cracks under mechanical deformation. The fabricated stretchable strain sensor shows highly sensitive and durable sensing performance in various tensile/compressive strains, long-term cyclic loading and relaxation tests. We demonstrate applications of our stretchable strain sensors such as flexible pressure sensors and wearable human motion detection devices with high sensitivity, response speed and mechanical robustness.
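
    For context, the sensitivity of such a resistive strain sensor is usually summarized by the gauge factor GF = (ΔR/R0)/ε; the tiny example below uses made-up numbers purely for illustration.

    ```python
    def gauge_factor(r0, r, strain):
        """r0: unstrained resistance, r: resistance under strain, strain: ΔL/L0."""
        return ((r - r0) / r0) / strain

    # Illustrative numbers only: ΔR/R0 = 0.5 at 20% strain gives GF = 2.5.
    print(gauge_factor(r0=1.0e3, r=1.5e3, strain=0.2))
    ```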

  7. Unreal Interactive Puppet Game Development Using Leap Motion

    NASA Astrophysics Data System (ADS)

    Huang, An-Pin; Huang, Fay; Jhu, Jing-Siang

    2018-04-01

    This paper proposed a novel puppet play method utilizing recent technology. An interactive puppet game has been developed based on the theme of a famous Chinese classical novel. This project was implemented using Unreal Engine, which is a leading suite of integrated tools for developers to design and build games. The Leap Motion Controller, on the other hand, is a sensor device for recognizing hand movements and gestures. It is commonly used in systems which require close-range finger-based user interaction. In order to manipulate the puppets' movements, the developed program employs the Leap Motion SDK, which provides a friendly way to add motion-controlled 3D hands to an Unreal game. The novelty of our project is to replace the 3D model of rigged hands with two rigged 3D humanoid characters. The challenges of this task are twofold. First, the skeleton structures of a human hand and a humanoid character (i.e., a puppet) are totally different. Making the puppets follow the hand poses of the user while ensuring reasonable puppet movements has not been discussed in the literature or in developer forums. Second, there are only a limited number of built-in recognizable hand gestures. More recognizable hand gestures need to be created for the interactive game. This paper reports the proposed solutions to these challenges.

  8. DNA Encoding Training Using 3D Gesture Interaction.

    PubMed

    Nicola, Stelian; Handrea, Flavia-Laura; Crişan-Vida, Mihaela; Stoicu-Tivadar, Lăcrămioara

    2017-01-01

    The work described in this paper summarizes the development process and presents the results of a human genetics training application for studying the 20 amino acids encoded by combinations of three DNA nucleotides, targeting mainly medical and bioinformatics students. Existing applications in this domain that use hand gestures recognized by the Leap Motion sensor are used for controlling molecules and learning the Mendeleev table, or for visualizing animated reactions of specific molecules with water. The novelty of the current application consists in creating new Leap Motion gestures for application control and in a tag-based algorithm corresponding to each amino acid, depending on the position in 3D virtual space of the four DNA nucleotides and their type. The team proposes a 3D application based on the Unity editor and the Leap Motion sensor in which the user is free to form different combinations of the 20 amino acids. The results confirm that this new type of study of medicine/biochemistry, using the Leap Motion sensor for handling amino acids, is suitable for students. The application is original and interactive, and users can create their own amino acid structures in a 3D environment, which they could not do with traditional pen and paper.

  9. Toward an affordable and user-friendly visual motion capture system.

    PubMed

    Bonnet, V; Sylla, N; Cherubini, A; Gonzáles, A; Azevedo Coste, C; Fraisse, P; Venture, G

    2014-01-01

    The present study aims at designing and evaluating a low-cost, simple and portable system for arm joint angle estimation during grasping-like motions. The system is based on a single RGB-D camera and three customized markers. The automatically detected and tracked marker positions were used as inputs to an offline inverse kinematic process based on bio-mechanical constraints to reduce noise effects and handle marker occlusion. The method was validated on 4 subjects performing different motions. The joint angles were estimated both with the proposed low-cost system and with a stereophotogrammetric system. Comparative analysis shows good accuracy, with a high correlation coefficient (r = 0.92) and low average RMS error (3.8 deg).

  10. User Interface Preferences in the Design of a Camera-Based Navigation and Wayfinding Aid

    ERIC Educational Resources Information Center

    Arditi, Aries; Tian, YingLi

    2013-01-01

    Introduction: Development of a sensing device that can provide a sufficient perceptual substrate for persons with visual impairments to orient themselves and travel confidently has been a persistent rehabilitation technology goal, with the user interface posing a significant challenge. In the study presented here, we enlist the advice and ideas of…

  11. Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR

    PubMed Central

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868
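
    The per-grid state estimation idea can be pictured with a small constant-velocity Kalman filter per occupied grid cell, as sketched below; the noise settings are arbitrary, and the data association and spatial smoothing that the paper emphasizes are left out.

    ```python
    import numpy as np

    class GridKalman:
        """One constant-velocity Kalman filter per occupied polar grid cell."""
        def __init__(self, x0, y0, dt=0.1, q=0.5, r=0.2):
            self.x = np.array([x0, y0, 0.0, 0.0])              # state [x, y, vx, vy]
            self.P = np.eye(4)
            self.F = np.array([[1, 0, dt, 0],
                               [0, 1, 0, dt],
                               [0, 0, 1, 0],
                               [0, 0, 0, 1]], dtype=float)
            self.H = np.array([[1, 0, 0, 0],
                               [0, 1, 0, 0]], dtype=float)
            self.Q = q * np.eye(4)
            self.R = r * np.eye(2)

        def step(self, z):
            # Predict with the constant-velocity model.
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            # Update with the associated grid position measurement z = (x, y).
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x[2:]                                   # estimated cell velocity
    ```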

  12. Motion field estimation for a dynamic scene using a 3D LiDAR.

    PubMed

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-09-09

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively.

  13. Capacity-Delay Trade-Off in Collaborative Hybrid Ad-Hoc Networks with Coverage Sensing.

    PubMed

    Chen, Lingyu; Luo, Wenbin; Liu, Chen; Hong, Xuemin; Shi, Jianghong

    2017-01-26

    The integration of ad hoc device-to-device (D2D) communications and open-access small cells can result in a networking paradigm called the hybrid ad hoc network, which is particularly promising in delivering delay-tolerant data. The capacity-delay performance of hybrid ad hoc networks has been studied extensively under a popular framework called scaling law analysis. These studies, however, do not take into account aspects of interference accumulation and queueing delay and, therefore, may lead to over-optimistic results. Moreover, focusing on the average measures, existing works fail to give finer-grained insights into the distribution of delays. This paper proposes an alternative analytical framework based on queueing theoretic models and physical interference models. We apply this framework to study the capacity-delay performance of a collaborative cellular D2D network with coverage sensing and two-hop relay. The new framework allows us to fully characterize the delay distribution in the transform domain and pinpoint the impacts of coverage sensing, user and base station densities, transmit power, user mobility and packet size on the capacity-delay trade-off. We show that under the condition of queueing equilibrium, the maximum throughput capacity per device saturates to an upper bound of 0.7239 λb/λu bits/s/Hz, where λb and λu are the densities of base stations and mobile users, respectively.

  14. Capacity-Delay Trade-Off in Collaborative Hybrid Ad-Hoc Networks with Coverage Sensing

    PubMed Central

    Chen, Lingyu; Luo, Wenbin; Liu, Chen; Hong, Xuemin; Shi, Jianghong

    2017-01-01

    The integration of ad hoc device-to-device (D2D) communications and open-access small cells can result in a networking paradigm called the hybrid ad hoc network, which is particularly promising in delivering delay-tolerant data. The capacity-delay performance of hybrid ad hoc networks has been studied extensively under a popular framework called scaling law analysis. These studies, however, do not take into account aspects of interference accumulation and queueing delay and, therefore, may lead to over-optimistic results. Moreover, focusing on the average measures, existing works fail to give finer-grained insights into the distribution of delays. This paper proposes an alternative analytical framework based on queueing theoretic models and physical interference models. We apply this framework to study the capacity-delay performance of a collaborative cellular D2D network with coverage sensing and two-hop relay. The new framework allows us to fully characterize the delay distribution in the transform domain and pinpoint the impacts of coverage sensing, user and base station densities, transmit power, user mobility and packet size on the capacity-delay trade-off. We show that under the condition of queueing equilibrium, the maximum throughput capacity per device saturates to an upper bound of 0.7239 λb/λu bits/s/Hz, where λb and λu are the densities of base stations and mobile users, respectively. PMID:28134769

  15. Towards Gesture-Based Multi-User Interactions in Collaborative Virtual Environments

    NASA Astrophysics Data System (ADS)

    Pretto, N.; Poiesi, F.

    2017-11-01

    We present a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users that is composed of a server and a set of clients. The server manages the communication amongst clients and is created by one of the users. Each user's VR setup consists of a Head Mounted Display (HMD) for immersive visualisation, a hand tracking system to interact with virtual objects and a single-hand joypad to move in the virtual environment. We use Google Cardboard as a HMD for the VR experience and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup through a forensics use case, where real-world objects pertaining to a simulated crime scene are included in a VR environment, acquired using a smartphone-based 3D reconstruction pipeline. Users can interact using virtual gesture-based tools such as pointers and rulers.

  16. Samba: a real-time motion capture system using wireless camera sensor networks.

    PubMed

    Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai

    2014-03-20

    There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments.

  17. Samba: A Real-Time Motion Capture System Using Wireless Camera Sensor Networks

    PubMed Central

    Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai

    2014-01-01

    There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments. PMID:24658618

  18. A Prototype Hydrologic Observatory for the Neuse River Basin Using Remote Sensing Data as a Part of the CUAHSI-HIS Effort

    NASA Astrophysics Data System (ADS)

    Kanwar, R.; Narayan, U.; Lakshmi, V.

    2005-12-01

    Remote sensing has the potential to immensely advance the science and application of hydrology as it provides multi-scale and multi-temporal measurements of several hydrologic parameters. There is a wide variety of remote sensing data sources available to a hydrologist, with a myriad of data formats, access techniques, data quality issues, and temporal and spatial extents. It is very important to make data availability and its usage as convenient as possible for potential users. The CUAHSI Hydrologic Information System (HIS) initiative addresses this issue of better data access and management for hydrologists with a focus on in-situ data, that is, point measurements of water and energy fluxes which make up the 'more conventional' sources of hydrologic data. This paper explores various sources of remotely sensed hydrologic data available, their data formats and volumes, current modes of data acquisition by end users, metadata associated with the data itself, and requirements from potential data models that would allow a seamless integration of remotely sensed hydrologic observations into the Hydrologic Information System. Further, a prototype hydrologic observatory (HO) for the Neuse River Basin is developed using surface temperature, vegetation indices and soil moisture estimates available from remote sensing. The prototype HO uses the CUAHSI digital library system (DLS) on the back (server) end. On the front (client) end, a rich visual environment has been developed to provide better decision-making tools for making an optimal choice in the selection of remote sensing data for a particular application. An easy point-and-click interface to the remote sensing data is also implemented for common users who are just interested in location-based queries of hydrologic variable values.

  19. Tasking and sharing sensing assets using controlled natural language

    NASA Astrophysics Data System (ADS)

    Preece, Alun; Pizzocaro, Diego; Braines, David; Mott, David

    2012-06-01

    We introduce an approach to representing intelligence, surveillance, and reconnaissance (ISR) tasks at a relatively high level in controlled natural language. We demonstrate that this facilitates both human interpretation and machine processing of tasks. More specifically, it allows the automatic assignment of sensing assets to tasks, and the informed sharing of tasks between collaborating users in a coalition environment. To enable automatic matching of sensor types to tasks, we created a machine-processable knowledge representation based on the Military Missions and Means Framework (MMF), and implemented a semantic reasoner to match task types to sensor types. We combined this mechanism with a sensor-task assignment procedure based on a well-known distributed protocol for resource allocation. In this paper, we re-formulate the MMF ontology in Controlled English (CE), a type of controlled natural language designed to be readable by a native English speaker whilst representing information in a structured, unambiguous form to facilitate machine processing. We show how CE can be used to describe both ISR tasks (for example, detection, localization, or identification of particular kinds of object) and sensing assets (for example, acoustic, visual, or seismic sensors, mounted on motes or unmanned vehicles). We show how these representations enable an automatic sensor-task assignment process. Where a group of users are cooperating in a coalition, we show how CE task summaries give users in the field a high-level picture of ISR coverage of an area of interest. This allows them to make efficient use of sensing resources by sharing tasks.
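
    As an illustration only, the capability-based matching of sensor types to ISR task types that such a reasoner performs can be mimicked with a toy lookup, sketched below in Python; the task and sensor vocabulary here is invented for the example and is not the MMF/Controlled English ontology used in the paper.

```python
# Toy illustration of capability-based sensor-task matching; the vocabulary of
# tasks and sensor types below is invented and is not the MMF/CE ontology.
SENSOR_CAPABILITIES = {
    "acoustic_array": {"detect_vehicle", "localize_vehicle"},
    "day_camera":     {"detect_person", "identify_person", "detect_vehicle"},
    "seismic_mote":   {"detect_vehicle", "detect_person"},
}

def assets_for_task(required_capability):
    """Return the sensor types whose declared capabilities cover the requested task."""
    return [sensor for sensor, caps in SENSOR_CAPABILITIES.items()
            if required_capability in caps]

print(assets_for_task("detect_vehicle"))    # all three sensor types qualify
print(assets_for_task("identify_person"))   # only the day camera qualifies
```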

  20. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the acquired data, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed and the potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capture process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four technical vision cameras for capturing video sequences of object motion. Original camera calibration and exterior orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The results of the algorithms' evaluation show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.
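
    A hedged sketch of the photogrammetric core of such a system: reconstructing a 3D marker position from two calibrated, synchronized views with standard linear (DLT) triangulation. The camera matrices and point coordinates below are illustrative placeholders, not parameters of the "Mosca" system.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) pixel coordinates of the same marker in each view.
    Returns the 3D point in the world frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: right singular vector of A.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Toy example: two cameras with a 0.5 m baseline observing a point at (0.1, 0.2, 3.0).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.1, 0.2, 3.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_dlt(P1, P2, x1, x2))   # ~ [0.1, 0.2, 3.0]
```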

  1. Validity of an ankle joint motion and position sense measurement system and its application in healthy subjects and patients with ankle sprain.

    PubMed

    Lin, Chueh-Ho; Chiang, Shang-Lin; Lu, Liang-Hsuan; Wei, Shun-Hwa; Sung, Wen-Hsu

    2016-07-01

    Ankle motion and proprioception in multiple axis movements are crucial for daily activities. However, few studies have developed and used a multiple axis system for measuring ankle motion and proprioception. This study was designed to validate a novel ankle haptic interface system that measures the ankle range of motion (ROM) and joint position sense in multiple plane movements, investigating the proprioception deficits during joint position sense tasks for patients with ankle instability. Eleven healthy adults (mean ± standard deviation; age, 24.7 ± 1.9 years) and thirteen patients with ankle instability were recruited in this study. All subjects were asked to perform tests to evaluate the validity of the ankle ROM measurements and underwent tests for validating the joint position sense measurements conducted during multiple axis movements of the ankle joint. Pearson correlation was used for validating the angular position measurements obtained using the developed system; the independent t test was used to investigate the differences in joint position sense task performance for people with or without ankle instability. The ROM measurements of the device were linearly correlated with the criterion standards (r = 0.99). The ankle instability and healthy groups were significantly different in direction, absolute, and variable errors of plantar flexion, dorsiflexion, inversion, and eversion (p < 0.05). The results demonstrate that the novel ankle joint motion and position sense measurement system is valid and can be used for measuring the ankle ROM and joint position sense in multiple planes and indicate proprioception deficits for people with ankle instability. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fogg, P; Aland, T; West, M

    Purpose: To investigate the effects of external surrogate and tumour motion by observing the reconstructed phases and AveCT in amplitude- and time-based 4DCT. Methods: Based on patient motion studies, cos6 and sinusoidal motions were simulated as external surrogate and tumour motions in a motion phantom. The diaphragm and tumour motions may or may not display the same waveform; therefore, both the same and different waveforms were programmed into the phantom, scanned, and reconstructed based on amplitude and time. The AveCT and phases were investigated for these different scenarios. The AveCT phantom images were also compared with CBCT phantom images programmed with the same motions. Results: For the same sinusoidal surrogate and tumour motions, the phases (amplitude and time) and AveCT indicated similar motions, based on the position of the BB at the slice and the displayed contrast values respectively. For cos6 motions, due to the varied time the tumour spends at each position, the amplitude- and time-based phases differed. The AveCT images represented the actual tumour motions, whereas the time- and amplitude-based phases reflected the surrogate motion with its varied timing. Conclusion: Different external surrogate and tumour motions may result in different displayed image motions when observing the AveCT and reconstructed phases. During the 4DCT, the surrogate motion is readily available for observation of the amplitude and time of the diaphragm position. Following image reconstruction, the user may need to observe the AveCT in addition to the reconstructed phases to comprehend the time weightings of the tumour motion during the scan. This may also apply to 3D CBCT images, where the displayed tumour position is influenced by the long duration of the CBCT. Knowledge of the tumour motion represented by the greyscale of the AveCT may also assist in CBCT treatment beam verification matching.
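
    A small illustration of the time-weighting issue discussed above: with a cos6-type trace the target dwells near end-exhale most of the time, so equal-width amplitude bins receive very unequal numbers of samples compared with a sinusoidal trace. The trace definitions below are illustrative, not the phantom programming used in the study.

```python
import numpy as np

# With a cos^6-type breathing trace the target spends most of the time near
# exhale, so equal-width amplitude bins receive very unequal sample counts;
# this is one reason amplitude- and time-based 4DCT phases can differ.
t = np.linspace(0, 10, 10000, endpoint=False)
cos6_trace = np.cos(np.pi * t / 5) ** 6            # illustrative motion trace
sin_trace = (1 + np.sin(2 * np.pi * t / 5)) / 2    # sinusoidal comparison

def bin_occupancy(sig, n_bins=5):
    """Fraction of samples falling into each equal-width amplitude bin."""
    edges = np.linspace(sig.min(), sig.max(), n_bins + 1)
    counts, _ = np.histogram(sig, bins=edges)
    return counts / sig.size

print("cos^6 occupancy per amplitude bin:", np.round(bin_occupancy(cos6_trace), 2))
print("sin   occupancy per amplitude bin:", np.round(bin_occupancy(sin_trace), 2))
```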

  3. Controlling motion sickness and spatial disorientation and enhancing vestibular rehabilitation with a user-worn see-through display.

    PubMed

    Krueger, Wesley W O

    2011-01-01

    An eyewear mounted visual display ("User-worn see-through display") projecting an artificial horizon aligned with the user's head and body position in space can prevent or lessen motion sickness in susceptible individuals when in a motion provocative environment as well as aid patients undergoing vestibular rehabilitation. In this project, a wearable display device, including software technology and hardware, was developed and a phase I feasibility study and phase II clinical trial for safety and efficacy were performed. Both phase I and phase II were prospective studies funded by the NIH. The phase II study used repeated measures for motion intolerant subjects and a randomized control group (display device/no display device) pre-posttest design for patients in vestibular rehabilitation. Following technology and display device development, 75 patients were evaluated by test and rating scales in the phase II study; 25 subjects with motion intolerance used the technology in the display device in provocative environments and completed subjective rating scales, whereas 50 patients were evaluated before and after vestibular rehabilitation (25 using the display device and 25 in a control group) using established test measures. All patients with motion intolerance rated the technology as helpful for nine symptoms assessed, and 96% rated the display device as simple and easy to use. Duration of symptoms significantly decreased with use of the technology displayed. In patients undergoing vestibular rehabilitation, there were no significant differences in amount of change from pre- to posttherapy on objective balance tests between display device users and controls. However, those using the technology required significantly fewer rehabilitation sessions to achieve those outcomes than the control group. A user-worn see-through display, utilizing a visual fixation target coupled with a stable artificial horizon and aligned with user movement, has demonstrated substantial benefit for individuals susceptible to motion intolerance and spatial disorientation and those undergoing vestibular rehabilitation. The technology developed has applications in any environment where motion sensitivity affects human performance.

  4. The application of mean field theory to image motion estimation.

    PubMed

    Zhang, J; Hanauer, G G

    1995-01-01

    Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterative-conditional mode (ICM). Although the SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. The ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied the mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how the mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.
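
    For intuition about the MRF energy that both ICM and the mean field method operate on, the sketch below estimates a block-wise motion field by minimizing the sum of a data (block-matching) term and a smoothness prior with plain ICM. It is a deliberately simplified baseline written for this summary, not the authors' mean field algorithm, and the block size, search range, and weight lam are arbitrary illustrative choices.

```python
import numpy as np

def icm_motion(prev, curr, block=8, search=4, lam=0.5, iters=5):
    """Estimate a block-wise motion field by ICM over a simple MRF energy:
    data term = SSD of the matched block in the previous frame,
    prior term = squared deviation from the mean of neighbouring vectors."""
    H, W = prev.shape
    by, bx = H // block, W // block
    flow = np.zeros((by, bx, 2), dtype=int)
    candidates = [(dy, dx) for dy in range(-search, search + 1)
                           for dx in range(-search, search + 1)]
    for _ in range(iters):
        for i in range(by):
            for j in range(bx):
                # Smoothness reference: mean motion of the 4-connected neighbours.
                nbrs = [flow[i + di, j + dj] for di, dj in
                        [(-1, 0), (1, 0), (0, -1), (0, 1)]
                        if 0 <= i + di < by and 0 <= j + dj < bx]
                mean_nbr = np.mean(nbrs, axis=0)
                ref = curr[i * block:(i + 1) * block, j * block:(j + 1) * block]
                best, best_e = (0, 0), np.inf
                for dy, dx in candidates:
                    y0, x0 = i * block + dy, j * block + dx
                    if y0 < 0 or x0 < 0 or y0 + block > H or x0 + block > W:
                        continue
                    cand = prev[y0:y0 + block, x0:x0 + block]
                    energy = np.sum((ref - cand) ** 2) \
                        + lam * np.sum((np.array([dy, dx]) - mean_nbr) ** 2)
                    if energy < best_e:
                        best_e, best = energy, (dy, dx)
                flow[i, j] = best
    return flow

# Toy check: the current frame is the previous one rolled by (2, 3) pixels, so
# interior blocks should converge to the offset (-2, -3) back into `prev`.
rng = np.random.default_rng(0)
prev = rng.random((64, 64))
curr = np.roll(prev, shift=(2, 3), axis=(0, 1))
print(icm_motion(prev, curr)[2:-2, 2:-2].mean(axis=(0, 1)))
```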

  5. Mounted Smartphones as Measurement and Control Platforms for Motor-Based Laboratory Test-Beds †

    PubMed Central

    Frank, Jared A.; Brill, Anthony; Kapila, Vikram

    2016-01-01

    Laboratory education in science and engineering often entails the use of test-beds equipped with costly peripherals for sensing, acquisition, storage, processing, and control of physical behavior. However, costly peripherals are no longer necessary to obtain precise measurements and achieve stable feedback control of test-beds. With smartphones performing diverse sensing and processing tasks, this study examines the feasibility of mounting smartphones directly to test-beds to exploit their embedded hardware and software in the measurement and control of the test-beds. This approach is a first step towards replacing laboratory-grade peripherals with more compact and affordable smartphone-based platforms, whose interactive user interfaces can engender wider participation and engagement from learners. Demonstrative cases are presented in which the sensing, computation, control, and user interaction with three motor-based test-beds are handled by a mounted smartphone. Results of experiments and simulations are used to validate the feasibility of mounted smartphones as measurement and feedback control platforms for motor-based laboratory test-beds, report the measurement precision and closed-loop performance achieved with such platforms, and address challenges in the development of platforms to maintain system stability. PMID:27556464

  6. Mounted Smartphones as Measurement and Control Platforms for Motor-Based Laboratory Test-Beds.

    PubMed

    Frank, Jared A; Brill, Anthony; Kapila, Vikram

    2016-08-20

    Laboratory education in science and engineering often entails the use of test-beds equipped with costly peripherals for sensing, acquisition, storage, processing, and control of physical behavior. However, costly peripherals are no longer necessary to obtain precise measurements and achieve stable feedback control of test-beds. With smartphones performing diverse sensing and processing tasks, this study examines the feasibility of mounting smartphones directly to test-beds to exploit their embedded hardware and software in the measurement and control of the test-beds. This approach is a first step towards replacing laboratory-grade peripherals with more compact and affordable smartphone-based platforms, whose interactive user interfaces can engender wider participation and engagement from learners. Demonstrative cases are presented in which the sensing, computation, control, and user interaction with three motor-based test-beds are handled by a mounted smartphone. Results of experiments and simulations are used to validate the feasibility of mounted smartphones as measurement and feedback control platforms for motor-based laboratory test-beds, report the measurement precision and closed-loop performance achieved with such platforms, and address challenges in the development of platforms to maintain system stability.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Y; Rahimi, A; Sawant, A

    Purpose: Active breathing control (ABC) has been used to reduce treatment margin due to respiratory organ motion by enforcing temporary breath-holds. However, in practice, even if the ABC device indicates constant lung volume during breath-hold, the patient may still exhibit minor chest motion. Consequently, therapists are given a false sense of security that the patient is immobilized. This study aims at quantifying such motion during ABC breath-holds by monitoring the patient chest motion using a surface photogrammetry system, VisionRT. Methods: A female patient with breast cancer was selected to evaluate chest motion during ABC breath-holds. During the entire course of treatment, the patient’s chest surface was monitored by a surface photogrammetry system, VisionRT. Specifically, a user-defined region-of-interest (ROI) on the chest surface was selected for the system to track at a rate of ∼3 Hz. The surface motion was estimated by rigid image registration between the current ROI image captured and a reference image. The translational and rotational displacements computed were saved in a log file. Results: A total of 20 fractions of radiation treatment were monitored by VisionRT. After removing noisy data, we obtained chest motion of 79 breath-hold sessions. Mean chest motion in the AP direction during breath-holds is 1.31 mm with 0.62 mm standard deviation. Of the 79 sessions, the patient exhibited motion ranging from 0–1 mm (30 sessions), 1–2 mm (37 sessions), 2–3 mm (11 sessions) and >3 mm (1 session). Conclusion: Contrary to popular assumptions, the patient is not completely still during ABC breath-hold sessions. In this particular case studied, the patient exhibited chest motion over 2 mm in 14 out of 79 breath-holds. Underestimating treatment margin for radiation therapy with ABC could reduce treatment effectiveness due to geometric miss or overdose of critical organs. The senior author receives research funding from NIH, VisionRT, Varian Medical Systems and Elekta.

  8. Stretch sensors for human body motion

    NASA Astrophysics Data System (ADS)

    O'Brien, Ben; Gisby, Todd; Anderson, Iain A.

    2014-03-01

    Sensing motion of the human body is a difficult task. From an engineer's perspective, people are soft, highly mobile objects that move in and out of complex environments. Beyond the technical challenge of sensing, concepts such as comfort, social intrusion, usability, and aesthetics are paramount in determining whether someone will adopt a sensing solution or not. At the same time, the demand for human body motion sensing is growing fast. Athletes want feedback on posture and technique, consumers need new ways to interact with augmented reality devices, and healthcare providers wish to track the recovery of a patient. Dielectric elastomer stretch sensors are ideal for bridging this gap. They are soft, flexible, and precise. They are low power, lightweight, and can be easily mounted on the body or embedded into clothing. From a commercialisation point of view, stretch sensing is easier than actuation or generation - such sensors can be low voltage and integrated with conventional microelectronics. This paper takes a bird's-eye view of the use of these sensors to measure human body motion. A holistic description of sensor operation and guidelines for sensor design are presented to help technologists and developers in the space.

  9. Performance analysis of visual tracking algorithms for motion-based user interfaces on mobile devices

    NASA Astrophysics Data System (ADS)

    Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing

    2008-02-01

    Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with built-in cameras. In this paper, we compare the performance of three feature- or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted over two stages: the first stage of testing uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare accuracy in tracking, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame-skipping.

  10. Evaluating the effect of remote sensing image spatial resolution on soil exchangeable potassium prediction models in smallholder farm settings.

    PubMed

    Xu, Yiming; Smith, Scot E; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P

    2017-09-15

    Major end users of Digital Soil Mapping (DSM), such as policy makers and agricultural extension workers, are faced with choosing the appropriate remote sensing data. The objective of this research is to analyze the spatial resolution effects of different remote sensing images on soil prediction models in two smallholder farms in Southern India, Kothapally (Telangana State) and Masuti (Karnataka State), and to provide empirical guidelines for choosing the appropriate remote sensing images in DSM. Bayesian kriging (BK) was utilized to characterize the spatial pattern of exchangeable potassium (Kex) in the topsoil (0-15 cm) at different spatial resolutions by incorporating spectral indices from Landsat 8 (30 m), RapidEye (5 m), and WorldView-2/GeoEye-1/Pleiades-1A images (2 m). Some spectral indices, such as band reflectances, band ratios, the Crust Index and the Atmospherically Resistant Vegetation Index from multiple images, showed relatively strong correlations with soil Kex in the two study areas. The research also suggested that fine spatial resolution WorldView-2/GeoEye-1/Pleiades-1A-based and RapidEye-based soil prediction models would not necessarily have higher prediction performance than coarse spatial resolution Landsat 8-based soil prediction models. The end users of DSM in smallholder farm settings need to select the appropriate spectral indices and consider different factors such as the spatial resolution, band width, spectral resolution, temporal frequency, cost, and processing time of different remote sensing images. Overall, remote sensing-based Digital Soil Mapping has the potential to be promoted to smallholder farm settings all over the world and to help smallholder farmers implement sustainable and field-specific soil nutrient management schemes. Copyright © 2017 Elsevier Ltd. All rights reserved.
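
    For reference, the band ratios and the Atmospherically Resistant Vegetation Index (ARVI) mentioned above can be computed directly from band reflectances. The sketch below uses the common ARVI formulation with gamma = 1; the reflectance values are made-up placeholders, not data from the study.

```python
import numpy as np

def arvi(nir, red, blue, gamma=1.0):
    """Atmospherically Resistant Vegetation Index.
    rb = red - gamma * (blue - red) corrects the red band for aerosol effects."""
    rb = red - gamma * (blue - red)
    return (nir - rb) / (nir + rb)

# Placeholder surface reflectances for a handful of pixels (not real data).
blue = np.array([0.04, 0.06, 0.05])
red  = np.array([0.05, 0.09, 0.07])
nir  = np.array([0.40, 0.25, 0.33])

print("NIR/Red band ratio:", nir / red)
print("ARVI:", arvi(nir, red, blue))
```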

  11. Long-range strategy for remote sensing: an integrated supersystem

    NASA Astrophysics Data System (ADS)

    Glackin, David L.; Dodd, Joseph K.

    1995-12-01

    Present large space-based remote sensing systems, and those planned for the next two decades, remain dichotomous and custom-built. An integrated architecture might reduce total cost without limiting system performance. An example of such an architecture, developed at The Aerospace Corporation, explores the feasibility of reducing overall space systems costs by forming a 'super-system' which will provide environmental, earth resources and theater surveillance information to a variety of users. The concept involves integration of programs, sharing of common spacecraft bus designs and launch vehicles, use of modular components and subsystems, integration of command and control and data capture functions, and establishment of an integrated program office. Smart functional modules that are easily tested and replaced are used wherever possible in the space segment. Data is disseminated to systems such as NASA's EOSDIS, and data processing is performed at established centers of expertise. This concept is advanced for potential application as a follow-on to currently budgeted and planned space-based remote sensing systems. We hope that this work will serve to engender discussion that may be of assistance in leading to multinational remote sensing systems with greater cost effectiveness at no loss of utility to the end user.

  12. Advancing Adventure Education Using Digital Motion-Sensing Games

    ERIC Educational Resources Information Center

    Shih, Ju-Ling; Hsu, Yu-Jen

    2016-01-01

    This study used the Xbox Kinect and Unity 3D game engine to develop two motion-sensing games in which the participants, in simulated scenarios, could experience activities that are unattainable in real life, become immersed in collaborative activities, and explore the value of adventure education. Adventure Education involves courses that…

  13. Tuning self-motion perception in virtual reality with visual illusions.

    PubMed

    Bruder, Gerd; Steinicke, Frank; Wieland, Phil; Lappe, Markus

    2012-07-01

    Motion perception in immersive virtual environments significantly differs from the real world. For example, previous work has shown that users tend to underestimate travel distances in virtual environments (VEs). As a solution to this problem, researchers proposed to scale the mapped virtual camera motion relative to the tracked real-world movement of a user until real and virtual motion are perceived as equal, i.e., real-world movements could be mapped with a larger gain to the VE in order to compensate for the underestimation. However, introducing discrepancies between real and virtual motion can become a problem, in particular, due to misalignments of both worlds and distorted space cognition. In this paper, we describe a different approach that introduces apparent self-motion illusions by manipulating optic flow fields during movements in VEs. These manipulations can affect self-motion perception in VEs, but omit a quantitative discrepancy between real and virtual motions. In particular, we consider to which regions of the virtual view these apparent self-motion illusions can be applied, i.e., the ground plane or peripheral vision. Therefore, we introduce four illusions and show in experiments that optic flow manipulation can significantly affect users' self-motion judgments. Furthermore, we show that with such manipulations of optic flow fields the underestimation of travel distances can be compensated.
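
    A minimal sketch of the gain-based mapping that this work contrasts its optic-flow illusions with: the virtual camera displacement is the tracked real displacement scaled by a translation gain. The gain value below is purely illustrative.

```python
import numpy as np

def map_real_to_virtual(real_delta, gain=1.4):
    """Translation gain mapping: the virtual camera moves `gain` times the
    tracked real-world displacement to compensate for distance underestimation."""
    return gain * np.asarray(real_delta)

one_real_step = [0.0, 0.0, 0.7]             # metres, tracked in the lab
print(map_real_to_virtual(one_real_step))   # virtual displacement ~0.98 m forward
```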

  14. Motion Sickness Treatment Apparatus and Method

    NASA Technical Reports Server (NTRS)

    Reschke, Millard F. (Inventor); Somers, Jeffrey T. (Inventor); Ford, George A. (Inventor)

    2005-01-01

    Methods and apparatus are disclosed for treating motion sickness. In a preferred embodiment a method of the invention comprises operating eyewear having shutter lenses to open said shutter lenses at a selected operating frequency ranging from within about 3 Hz to about 50 Hz. The shutter lenses are opened for a short duration at the selected operating frequency wherein the duration is selected to prevent retinal slip. The shutter lenses may be operated at a relatively slow frequency of about 4 Hz when the user is in passive activity such as riding in a boat or car or in limited motion situations in a spacecraft. The shutter lenses may be operated at faster frequencies related to motion of the user's head when the user is active.

  15. Usability research study of a specially engineered sonic powered toothbrush with unique sensing and control technologies.

    PubMed

    Hunter, Gail; Burns, Laurie; Bone, Brian; Mintel, Thomas; Jimenez, Eduardo

    2012-01-01

    This paper summarizes the results of a longitudinal usability research study of a specially engineered sonic powered toothbrush with unique sensing and control technologies. The usability test was conducted with fourteen (14) consumers from the St. Louis, MO, USA area who use manual toothbrushes. The study consisted of consumers using the specially engineered sonic powered toothbrush with unique sensing and control technologies for three weeks. During the study, users participated in four toothbrush trials during weekly visits to the research facility. These trials were videotaped and were analyzed regarding brushing time, behavior, and technique. In addition, the users were required to use the toothbrush twice a day for their at-home brushing. The toothbrush had a positive impact on consumers' tooth brushing behavior. Users spent more time brushing their teeth with this toothbrush as compared to their manual toothbrush. In addition, users spent more time keeping the sonic toothbrush in the recommended angle during use. Finally, users perceived their teeth to be cleaner when using the specially engineered sonic powered toothbrush with unique sensing and control technologies. The specially engineered sonic powered toothbrush with unique sensing and control technologies left a positive impression on the users. The users perceived the toothbrush to clean their teeth better than a manual toothbrush.

  16. Improved Hip-Based Individual Recognition Using Wearable Motion Recording Sensor

    NASA Astrophysics Data System (ADS)

    Gafurov, Davrondzhon; Bours, Patrick

    In today's society the demand for reliable verification of a user's identity is increasing. Although biometric technologies based on the fingerprint or iris can provide accurate and reliable recognition performance, they are inconvenient for periodic or frequent re-verification. In this paper we propose a hip-based user recognition method which can be suitable for implicit and periodic re-verification of identity. In our approach we use a wearable accelerometer sensor attached to the hip of the person, and the measured hip motion signal is then analysed for identity verification purposes. The main analysis steps consist of detecting gait cycles in the signal and matching two sets of detected gait cycles. Evaluating the approach on a hip data set consisting of 400 gait sequences (samples) from 100 subjects, we obtained an equal error rate (EER) of 7.5%, and the identification rate at rank 1 was 81.4%. These numbers are improvements of 37.5% and 11.2%, respectively, over a previous study using the same data set.
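
    The two analysis steps named above, detecting gait cycles in the hip acceleration signal and matching two sets of cycles, might look roughly like the sketch below; the peak-based cycle detection, the fixed resampling length, and the template-distance score are illustrative simplifications, not the method or parameters of the study.

```python
import numpy as np
from scipy.signal import find_peaks

def extract_cycles(acc_mag, fs=100, resample_len=100):
    """Split an acceleration-magnitude signal into gait cycles (peak to peak)
    and resample each cycle to a fixed length for point-wise comparison."""
    peaks, _ = find_peaks(acc_mag, distance=int(0.6 * fs), prominence=0.5)
    cycles = []
    for a, b in zip(peaks[:-1], peaks[1:]):
        cyc = acc_mag[a:b]
        cycles.append(np.interp(np.linspace(0, 1, resample_len),
                                np.linspace(0, 1, len(cyc)), cyc))
    return np.array(cycles)

def match_score(cycles_a, cycles_b):
    """Distance between two recordings: Euclidean distance between their
    average (template) cycles; lower means more likely the same person."""
    return np.linalg.norm(cycles_a.mean(axis=0) - cycles_b.mean(axis=0))

# Synthetic hip-acceleration traces: two from the same 'walker', one different.
fs = 100
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
walk_a1 = 1.0 + np.sin(2 * np.pi * 1.0 * t) + 0.05 * rng.standard_normal(t.size)
walk_a2 = 1.0 + np.sin(2 * np.pi * 1.0 * t) + 0.05 * rng.standard_normal(t.size)
walk_b = (1.0 + 0.6 * np.sin(2 * np.pi * 1.1 * t)
          + 0.4 * np.sin(4 * np.pi * 1.1 * t) + 0.05 * rng.standard_normal(t.size))

genuine = match_score(extract_cycles(walk_a1, fs), extract_cycles(walk_a2, fs))
impostor = match_score(extract_cycles(walk_a1, fs), extract_cycles(walk_b, fs))
print(f"genuine score {genuine:.2f}  vs  impostor score {impostor:.2f}")
```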

  17. Validation plays the role of a "bridge" in connecting remote sensing research and applications

    NASA Astrophysics Data System (ADS)

    Wang, Zhiqiang; Deng, Ying; Fan, Yida

    2018-07-01

    Remote sensing products contribute to improving earth observations over space and time. Uncertainties exist in products of different levels; thus, validation of these products before and during their applications is critical. This study discusses the meaning of validation in depth and proposes a new definition of reliability for use with such products. In this context, validation should include three aspects: a description of the relevant uncertainties, quantitative measurement results and a qualitative judgment that considers the needs of users. A literature overview is then presented evidencing improvements in the concepts associated with validation. It shows that the root mean squared error (RMSE) is widely used to express accuracy; increasing numbers of remote sensing products have been validated; research institutes contribute most validation efforts; and sufficient validation studies encourage the application of remote sensing products. Validation plays a connecting role in the distribution and application of remote sensing products. Validation connects simple remote sensing subjects with other disciplines, and it connects primary research with practical applications. Based on the above findings, it is suggested that validation efforts that include wider cooperation among research institutes and full consideration of the needs of users should be promoted.
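
    Since RMSE is singled out above as the most widely used accuracy measure, a minimal computation of RMSE and bias between a remote sensing product and reference (e.g., in-situ) values is shown below with made-up numbers.

```python
import numpy as np

def rmse(retrieved, reference):
    """Root mean squared error between a remote sensing product and reference data."""
    retrieved, reference = np.asarray(retrieved), np.asarray(reference)
    return np.sqrt(np.mean((retrieved - reference) ** 2))

# Placeholder soil-moisture values (m^3/m^3): product retrievals vs. ground truth.
product = [0.21, 0.18, 0.30, 0.25]
ground  = [0.19, 0.20, 0.27, 0.26]
print(f"RMSE = {rmse(product, ground):.3f}")   # ~0.021
print(f"bias = {np.mean(np.array(product) - np.array(ground)):.3f}")
```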

  18. Design Foundations for Content-Rich Acoustic Interfaces: Investigating Audemes as Referential Non-Speech Audio Cues

    ERIC Educational Resources Information Center

    Ferati, Mexhid Adem

    2012-01-01

    To access interactive systems, blind and visually impaired users can leverage their auditory senses by using non-speech sounds. The current structure of non-speech sounds, however, is geared toward conveying user interface operations (e.g., opening a file) rather than large theme-based information (e.g., a history passage) and, thus, is ill-suited…

  19. Color and luminance in the perception of 1- and 2-dimensional motion.

    PubMed

    Farell, B

    1999-08-01

    An isoluminant color grating usually appears to move more slowly than a luminance grating that has the same physical speed. Yet a grating defined by both color and luminance is seen as perceptually unified and moving at a single intermediate speed. In experiments measuring perceived speed and direction, it was found that color- and luminance-based motion signals are combined differently in the perception of 1-D motion than they are in the perception of 2-D motion. Adding color to a moving 1-D luminance pattern, a grating, slows its perceived speed. Adding color to a moving 2-D luminance pattern, a plaid made of orthogonal gratings, leaves its perceived speed unchanged. Analogous results occur for the perception of the direction of 2-D motion. The visual system appears to discount color when analyzing the motion of luminance-bearing 2-D patterns. This strategy has adaptive advantages, making the sensing of object motion more veridical without sacrificing the ability to see motion at isoluminance.

  20. Simulator certification methods and the vertical motion simulator

    NASA Technical Reports Server (NTRS)

    Showalter, T. W.

    1981-01-01

    The vertical motion simulator (VMS) is designed to simulate a variety of experimental helicopter and STOL/VTOL aircraft as well as other kinds of aircraft with special pitch and Z axis characteristics. The VMS includes a large motion base with extensive vertical and lateral travel capabilities, a computer generated image visual system, and a high speed CDC 7600 computer system, which performs aero model calculations. Guidelines on how to measure and evaluate VMS performance were developed. A survey of simulation users was conducted to ascertain how they evaluated and certified simulators for use. The results are presented.

  1. Upper limb joint motion of two different user groups during manual wheelchair propulsion

    NASA Astrophysics Data System (ADS)

    Hwang, Seonhong; Kim, Seunghyeon; Son, Jongsang; Lee, Jinbok; Kim, Youngho

    2013-02-01

    Manual wheelchair users have a high risk of injury to the upper extremities. Recent studies have focused on kinematic and kinetic analyses of manual wheelchair propulsion in order to understand the physical demands on wheelchair users. The purpose of this study was to investigate upper limb joint motion by using a motion capture system and a dynamometer with two different groups of wheelchair users propelling their wheelchairs at different speeds under different load conditions. The variations in the contact time, release time, and linear velocity of the experienced group were all larger than they were in the novice group. The propulsion angles of the experienced users were larger than those of the novices under all conditions. The variances in the propulsion force (both radial and tangential) of the experienced users were larger than those of the novices. The shoulder joint moment had the largest variance with the conditions, followed by the wrist joint moment and the elbow joint moment. The variance of the maximum shoulder joint moment was over four times the variance of the maximum wrist joint moment and eight times the maximum elbow joint moment. The maximum joint moments increased significantly as the speed and load increased in both groups. Quick and significant manipulation ability based on environmental changes is considered an important factor in efficient propulsion. This efficiency was confirmed from the propulsion power results. Sophisticated strategies for efficient manual wheelchair propulsion could be understood by observation of the physical responses of each upper limb joint to changes in load and speed. We expect that the findings of this study will be utilized for designing a rehabilitation program to reduce injuries.

  2. Coordinating robot motion, sensing, and control in plans. LDRD project final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xavier, P.G.; Brown, R.G.; Watterberg, P.A.

    1997-08-01

    The goal of this project was to develop a framework for robotic planning and execution that provides a continuum of adaptability with respect to model incompleteness, model error, and sensing error. For example, dividing robot motion into gross-motion planning, fine-motion planning, and sensor-augmented control had yielded productive research and solutions to individual problems. Unfortunately, these techniques could only be combined by hand with ad hoc methods and were restricted to systems where all kinematics are completely modeled in planning. The original intent was to develop methods for understanding and autonomously synthesizing plans that coordinate motion, sensing, and control. The project considered this problem from several perspectives. Results included (1) theoretical methods to combine and extend gross-motion and fine-motion planning; (2) preliminary work in flexible-object manipulation and an implementable algorithm for planning shortest paths through obstacles for the free end of an anchored cable; (3) development and implementation of a fast swept-body distance algorithm; and (4) integration of Sandia's C-Space Toolkit geometry engine and SANDROS motion planner and improvements, which yielded a system practical for everyday motion planning, with path-segment planning at interactive speeds. Results (3) and (4) have either led to follow-on work or are being used in current projects, and the authors believe that (2) eventually will be as well.

  3. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interaction. In particular, many studies have been done to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to present 3D objects. Emotional pictures were used as visual stimuli in the control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Selective attention involuntarily motivated by affective mechanisms can enhance steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as black-and-white oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons: it shows high information transfer rates, users need only a few minutes to learn to control the BCI system, and few electrodes are required to obtain brainwave signals reliable enough to capture users' intentions. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.
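
    A stripped-down illustration of the SSVEP detection underlying such a control panel: the attended flicker frequency is chosen by comparing EEG spectral power at each candidate frequency and its harmonics. The sampling rate, stimulus frequencies, and synthetic signal below are arbitrary illustrative values, not the setup described in the paper.

```python
import numpy as np

def ssvep_classify(eeg, fs, stim_freqs, harmonics=2):
    """Pick the attended stimulus frequency by comparing spectral power of the
    EEG at each candidate frequency (and its harmonics)."""
    freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    scores = []
    for f0 in stim_freqs:
        s = 0.0
        for h in range(1, harmonics + 1):
            s += power[np.argmin(np.abs(freqs - h * f0))]
        scores.append(s)
    return stim_freqs[int(np.argmax(scores))], scores

# Synthetic 4-second EEG segment while attending a 12 Hz flicker (plus noise).
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
eeg = 2.0 * np.sin(2 * np.pi * 12 * t) + rng.normal(0, 1.5, t.size)
choice, _ = ssvep_classify(eeg, fs, stim_freqs=[8.0, 10.0, 12.0, 15.0])
print("detected stimulus frequency:", choice)   # expected 12.0
```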

  4. Stereoscopic Height and Wind Retrievals for Aerosol Plumes with the MISR INteractive eXplorer (MINX)

    NASA Technical Reports Server (NTRS)

    Nelson, D.L.; Garay, M.J.; Kahn, Ralph A.; Dunst, Ben A.

    2013-01-01

    The Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard the Terra satellite acquires imagery at 275-m resolution at nine angles ranging from 0° (nadir) to 70° off-nadir. This multi-angle capability facilitates the stereoscopic retrieval of heights and motion vectors for clouds and aerosol plumes. MISR's operational stereo product uses this capability to retrieve cloud heights and winds for every satellite orbit, yielding global coverage every nine days. The MISR INteractive eXplorer (MINX) visualization and analysis tool complements the operational stereo product by providing users the ability to retrieve heights and winds locally for detailed studies of smoke, dust and volcanic ash plumes, as well as clouds, at higher spatial resolution and with greater precision than is possible with the operational product or with other space-based, passive, remote sensing instruments. This ability to investigate plume geometry and dynamics is becoming increasingly important as climate and air quality studies require greater knowledge about the injection of aerosols and the location of clouds within the atmosphere. MINX incorporates features that allow users to customize their stereo retrievals for optimum results under varying aerosol and underlying surface conditions. This paper discusses the stereo retrieval algorithms and retrieval options in MINX, and provides appropriate examples to explain how the program can be used to achieve the best results.

  5. 3-D-Gaze-Based Robotic Grasping Through Mimicking Human Visuomotor Function for People With Motion Impairments.

    PubMed

    Li, Songpo; Zhang, Xiaoli; Webb, Jeremy D

    2017-12-01

    The goal of this paper is to achieve a novel 3-D-gaze-based human-robot-interaction modality, with which a user with motion impairment can intuitively express what tasks he/she wants the robot to do by directly looking at the object of interest in the real world. Toward this goal, we investigate 1) the technology to accurately sense where a person is looking in real environments and 2) the method to interpret the human gaze and convert it into an effective interaction modality. Looking at a specific object reflects what a person is thinking related to that object, and the gaze location contains essential information for object manipulation. A novel gaze vector method is developed to accurately estimate the 3-D coordinates of the object being looked at in real environments, and a novel interpretation framework that mimics human visuomotor functions is designed to increase the control capability of gaze in object grasping tasks. High tracking accuracy was achieved using the gaze vector method. Participants successfully controlled a robotic arm for object grasping by directly looking at the target object. Human 3-D gaze can be effectively employed as an intuitive interaction modality for robotic object manipulation. It is the first time that 3-D gaze is utilized in a real environment to command a robot for a practical application. Three-dimensional gaze tracking is promising as an intuitive alternative for human-robot interaction especially for disabled and elderly people who cannot handle the conventional interaction modalities.
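
    One simple geometric way to obtain a 3-D point of regard in the spirit of the gaze vector idea, offered here only as an illustration: take the midpoint of the shortest segment between the two eye gaze rays. The eye positions and target below are invented example values, not the paper's calibration.

```python
import numpy as np

def gaze_intersection(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two rays p + t*d, used as the
    estimated 3-D point of regard for the two eyes."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for the parameters t1, t2 minimizing |(p1 + t1*d1) - (p2 + t2*d2)|.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2.0

# Two eyes 6 cm apart, both looking at an object roughly 0.5 m ahead.
left_eye, right_eye = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])
target = np.array([0.10, 0.05, 0.50])
print(gaze_intersection(left_eye, target - left_eye, right_eye, target - right_eye))
```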

  6. Architectures and algorithms for digital image processing; Proceedings of the Meeting, Cannes, France, December 5, 6, 1985

    NASA Technical Reports Server (NTRS)

    Duff, Michael J. B. (Editor); Siegel, Howard J. (Editor); Corbett, Francis J. (Editor)

    1986-01-01

    The conference presents papers on the architectures, algorithms, and applications of image processing. Particular attention is given to a very large scale integration system for image reconstruction from projections, a prebuffer algorithm for instant display of volume data, and an adaptive image sequence filtering scheme based on motion detection. Papers are also presented on a simple, direct practical method of sensing local motion and analyzing local optical flow, image matching techniques, and an automated biological dosimetry system.

  7. 3D moviemap and a 3D panorama

    NASA Astrophysics Data System (ADS)

    Naimark, Michael

    1997-05-01

    Two immersive virtual environments produced as art installations investigate 'sense of place' in different but complementary ways. One is a stereoscopic moviemap, the other a stereoscopic panorama. Moviemaps are interactive systems which allow 'travel' along pre-recorded routes with some control over speed and direction. Panoramas are 360-degree visual representations dating back to the late 18th century but which have recently experienced renewed interest due to 'virtual reality' systems. Moviemaps allow 'moving around' while panoramas allow 'looking around,' but to date there has been little or no attempt to produce either in stereo from camera-based material. 'See Banff' is a stereoscopic moviemap about landscape, tourism, and growth in the Canadian Rocky Mountains. It was filmed with twin 16 mm cameras and displayed as a single-user experience housed in a cabinet resembling a century-old kinetoscope, with a crank on the side for 'moving through' the material. 'Be Now Here (Welcome to the Neighborhood)' (1995-6) is a stereoscopic panorama filmed in public gathering places around the world, based upon the UNESCO World Heritage 'In Danger' list. It was filmed with twin 35 mm motion picture cameras on a rotating tripod and displayed using a synchronized rotating floor.

  8. Prototype of web-based database of surface wave investigation results for site classification

    NASA Astrophysics Data System (ADS)

    Hayashi, K.; Cakir, R.; Martin, A. J.; Craig, M. S.; Lorenzo, J. M.

    2016-12-01

    As active and passive surface wave methods are becoming popular for evaluating the site response of earthquake ground motion, demand for a database of investigation results is also increasing. Seismic ground motion depends not only on the 1D velocity structure but also on 2D and 3D structures, so spatial information on S-wave velocity must be considered in ground motion prediction. The database can support the construction of 2D and 3D underground models. Inversion in surface wave processing is essentially non-unique, so other information must be incorporated into the processing. A database of existing geophysical, geological and geotechnical investigation results can provide indispensable information to improve the accuracy and reliability of investigations. Most investigations, however, are carried out by individual organizations, and investigation results are rarely stored in a unified and organized database. To study and discuss an appropriate database and digital standard format for surface wave investigations, we developed a prototype web-based database to store the observed data and processing results of surface wave investigations that we have performed at more than 400 sites in the U.S. and Japan. The database was constructed on a web server using MySQL and PHP so that users can access it through the internet from anywhere with any device. All data are registered in the database with location, and users can search for geophysical data through Google Map. The database stores dispersion curves, horizontal-to-vertical spectral ratios and S-wave velocity profiles for each site, saved in XML files as digital data so that users can review and reuse them. The database also stores a published 3D deep basin and crustal structure, which users can refer to during the processing of surface wave data.
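
    As a rough, hypothetical illustration of the kind of structure such a database needs (sites with coordinates for map-based search, plus per-site results stored as XML), the sketch below uses SQLite via Python rather than the MySQL/PHP stack described above, and all table and column names are invented for the example.

```python
import sqlite3

# Illustrative, simplified schema: one row per investigated site, and one row
# per stored result (dispersion curve, H/V spectral ratio, or Vs profile as XML).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE site (
    site_id   INTEGER PRIMARY KEY,
    name      TEXT,
    latitude  REAL,          -- for map-based search
    longitude REAL
);
CREATE TABLE result (
    result_id INTEGER PRIMARY KEY,
    site_id   INTEGER REFERENCES site(site_id),
    kind      TEXT CHECK (kind IN ('dispersion_curve', 'hv_ratio', 'vs_profile')),
    xml_data  TEXT           -- digital result stored as XML for reuse
);
""")
conn.execute("INSERT INTO site VALUES (1, 'Example site', 47.6, -122.3)")
conn.execute("INSERT INTO result VALUES (1, 1, 'vs_profile', '<profile>...</profile>')")

# Location-based query: all sites inside a bounding box.
rows = conn.execute(
    "SELECT name FROM site WHERE latitude BETWEEN 47 AND 48 "
    "AND longitude BETWEEN -123 AND -122").fetchall()
print(rows)
```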

  9. Simultaneous Ionic Current and Potential Detection of Nanoparticles by a Multifunctional Nanopipette.

    PubMed

    Panday, Namuna; Qian, Gongming; Wang, Xuewen; Chang, Shuai; Pandey, Popular; He, Jin

    2016-12-27

    Nanopore sensing-based technologies have made significant progress for single molecule and single nanoparticle detection and analysis. In recent years, multimode sensing by multifunctional nanopores has shown the potential to greatly improve the sensitivity and selectivity of traditional resistive-pulse sensing methods. In this paper, we showed that two label-free electric sensing modes could work cooperatively to detect the motion of 40 nm diameter spherical gold nanoparticles (GNPs) in solution by a multifunctional nanopipette. The multifunctional nanopipettes, containing both a nanopore and a pyrolytic carbon nanoelectrode (CNE) at the tip, were fabricated quickly and cheaply. We demonstrated that the ionic current and local electrical potential changes could be detected simultaneously during the translocation of individual GNPs. We also showed that the nanopore/CNE tip geometry enabled the CNE not only to detect the translocation of a single GNP but also to collectively detect several GNPs outside the nanopore entrance. The dynamic accumulation of GNPs near the nanopore entrance resulted in no detectable current changes, but was detected by the potential changes at the CNE. We revealed the motions of GNPs both outside and inside the nanopore, individually and collectively, with the combination of ionic current and potential measurements.

  10. Live Speech Driven Head-and-Eye Motion Generators.

    PubMed

    Le, Binh H; Ma, Xiaohan; Deng, Zhigang

    2012-11-01

    This paper describes a fully automated framework to generate realistic head motion, eye gaze, and eyelid motion simultaneously based on live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each component (head motion, gaze, or eyelid motion) from a prerecorded facial motion data set: 1) Gaussian Mixture Models and a gradient descent optimization algorithm are employed to generate head motion from speech features; 2) a Nonlinear Dynamic Canonical Correlation Analysis model is used to synthesize eye gaze from head motion and speech features; and 3) nonnegative linear regression is used to model voluntary eyelid motion, and a log-normal distribution is used to describe involuntary eye blinks. Several user studies are conducted to evaluate the effectiveness of the proposed speech-driven head and eye motion generator using the well-established paired comparison methodology. Our evaluation results clearly show that this approach can significantly outperform the state-of-the-art head and eye motion generation algorithms. In addition, a novel mocap+video hybrid data acquisition technique is introduced to record high-fidelity head movement, eye gaze, and eyelid motion simultaneously.

  11. Communication training improves sense of performance expectancy of public health nurses engaged in long-term elderly prevention care program.

    PubMed

    Tanabe, Motoko; Suzukamo, Yoshimi; Tsuji, Ichiro; Izumi, Sin-Ichi

    2012-01-01

    This study examines the effectiveness of communication skills training based on coaching theory for public health nurses (PHNs) who are engaged in Japan's long-term care prevention program. The participants in this study included 112 PHNs and 266 service users who met with these PHNs to create a customized care plan within one month after the PHNs' training. The participants were divided into three groups: a supervised group, in which the PHNs attended the one-day training seminar and follow-up supervision; a seminar group, which attended only the one-day training seminar; and a control group. The PHNs' sense of performance expectancy, the users' satisfaction, and the users' spontaneous behavior were evaluated at baseline (T1), at one month (T2), and at three months (T3) after the PHNs' training. At T3, the PHNs performed a recalled evaluation (RE) of their communication skills before the training. The PHNs' sense of performance expectancy increased significantly over time in the supervised group and the control group (F = 11.28, P < 0.001; F = 4.03, P < 0.05, respectively). The difference score between T3 and RE was significantly higher in the supervised group than in the control group (P < 0.01). No significant differences in the users' outcomes were found.

  12. Detecting Motion from a Moving Platform; Phase 3: Unification of Control and Sensing for More Advanced Situational Awareness

    DTIC Science & Technology

    2011-11-01

    RX-TY-TR-2011-0096-01 summarizes the development of a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica.

  13. Open architecture CMM motion controller

    NASA Astrophysics Data System (ADS)

    Chang, David; Spence, Allan D.; Bigg, Steve; Heslip, Joe; Peterson, John

    2001-12-01

    Although initially the only Coordinate Measuring Machine (CMM) sensor available was a touch trigger probe, technological advances in sensors and computing have greatly increased the variety of available inspection sensors. Non-contact laser digitizers and analog scanning touch probes require very well tuned CMM motion control, as well as an extensible, open architecture interface. This paper describes the implementation of a retrofit CMM motion controller designed for open architecture interface to a variety of sensors. The controller is based on an Intel Pentium microcomputer and a Servo To Go motion interface electronics card. Motor amplifiers, safety, and additional interface electronics are housed in a separate enclosure. Host Signal Processing (HSP) is used for the motion control algorithm. Compared to the usual host plus DSP architecture, single CPU HSP simplifies integration with the various sensors, and implementation of software geometric error compensation. Motion control tuning is accomplished using a remote computer via 100BaseTX Ethernet. A Graphical User Interface (GUI) is used to enter geometric error compensation data, and to optimize the motion control tuning parameters. It is shown that this architecture achieves the required real time motion control response, yet is much easier to extend to additional sensors.
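
    For intuition, the kind of per-sample servo computation that host-based motion control runs can be sketched as a discrete PID loop driving a toy axis model; the gains, sample period, and plant response below are arbitrary illustrative values, not the retrofit controller's actual tuning.

```python
def pid_step(error, state, kp=2.0, ki=0.5, kd=0.01, dt=0.001):
    """One update of a discrete PID controller; `state` carries (integral, previous error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    command = kp * error + ki * integral + kd * derivative
    return command, (integral, error)

# Toy axis model: the commanded velocity is followed with a small lag,
# and position is the integral of velocity.
pos, vel, state, dt = 0.0, 0.0, (0.0, 0.0), 0.001
target = 10.0                              # mm
for _ in range(2000):                      # 2 s of simulated motion at 1 kHz
    u, state = pid_step(target - pos, state, dt=dt)
    vel += (u - vel) * 0.1                 # first-order lag toward the command
    pos += vel * dt
print(f"position after 2 s: {pos:.2f} mm (target {target} mm)")
```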

  14. A New Joint-Blade SENSE Reconstruction for Accelerated PROPELLER MRI

    PubMed Central

    Lyu, Mengye; Liu, Yilong; Xie, Victor B.; Feng, Yanqiu; Guo, Hua; Wu, Ed X.

    2017-01-01

    PROPELLER technique is widely used in MRI examinations for being motion insensitive, but it prolongs scan time and is restricted mainly to T2 contrast. Parallel imaging can accelerate PROPELLER and enable more flexible contrasts. Here, we propose a multi-step joint-blade (MJB) SENSE reconstruction to reduce the noise amplification in parallel imaging accelerated PROPELLER. MJB SENSE utilizes the fact that PROPELLER blades contain sharable information and blade-combined images can serve as regularization references. It consists of three steps. First, conventional blade-combined images are obtained using the conventional simple single-blade (SSB) SENSE, which reconstructs each blade separately. Second, the blade-combined images are employed as regularization for blade-wise noise reduction. Last, with virtual high-frequency data resampled from the previous step, all blades are jointly reconstructed to form the final images. Simulations were performed to evaluate the proposed MJB SENSE for noise reduction and motion correction. MJB SENSE was also applied to both T2-weighted and T1-weighted in vivo brain data. Compared to SSB SENSE, MJB SENSE greatly reduced the noise amplification at various acceleration factors, leading to increased image SNR in all simulation and in vivo experiments, including T1-weighted imaging with short echo trains. Furthermore, it preserved motion correction capability and was computationally efficient. PMID:28205602

  15. A New Joint-Blade SENSE Reconstruction for Accelerated PROPELLER MRI.

    PubMed

    Lyu, Mengye; Liu, Yilong; Xie, Victor B; Feng, Yanqiu; Guo, Hua; Wu, Ed X

    2017-02-16

    PROPELLER technique is widely used in MRI examinations for being motion insensitive, but it prolongs scan time and is restricted mainly to T2 contrast. Parallel imaging can accelerate PROPELLER and enable more flexible contrasts. Here, we propose a multi-step joint-blade (MJB) SENSE reconstruction to reduce the noise amplification in parallel imaging accelerated PROPELLER. MJB SENSE utilizes the fact that PROPELLER blades contain sharable information and blade-combined images can serve as regularization references. It consists of three steps. First, conventional blade-combined images are obtained using the conventional simple single-blade (SSB) SENSE, which reconstructs each blade separately. Second, the blade-combined images are employed as regularization for blade-wise noise reduction. Last, with virtual high-frequency data resampled from the previous step, all blades are jointly reconstructed to form the final images. Simulations were performed to evaluate the proposed MJB SENSE for noise reduction and motion correction. MJB SENSE was also applied to both T2-weighted and T1-weighted in vivo brain data. Compared to SSB SENSE, MJB SENSE greatly reduced the noise amplification at various acceleration factors, leading to increased image SNR in all simulation and in vivo experiments, including T1-weighted imaging with short echo trains. Furthermore, it preserved motion correction capability and was computationally efficient.

  16. A View From the Sidewalk: "Flowers for Algernon," "Requiem for a Heavyweight," "Hurricane's Corner."

    ERIC Educational Resources Information Center

    Hochberg, Frances

    1968-01-01

    Those high school students who are unmotivated slow-learners living in a "sense-oriented" world respond to instructional units centered around a sense-oriented medium--the motion picture. A unit incorporating "Requiem for a Heavyweight" (a motion picture), "Hurricane's Corner" (an editorial about a fighter), and…

  17. Micro-patterned graphene-based sensing skins for human physiological monitoring

    NASA Astrophysics Data System (ADS)

    Wang, Long; Loh, Kenneth J.; Chiang, Wei-Hung; Manna, Kausik

    2018-03-01

    Ultrathin, flexible, conformal, and skin-like electronic transducers are emerging as promising candidates for noninvasive and nonintrusive human health monitoring. In this work, a wearable sensing membrane is developed by patterning a graphene-based solution onto ultrathin medical tape, which can then be attached to the skin for monitoring human physiological parameters and physical activity. Here, the sensor is validated for monitoring finger bending/movements and for recognizing hand motion patterns, thereby demonstrating its future potential for evaluating athletic performance, physical therapy, and designing next-generation human-machine interfaces. Furthermore, this study also quantifies the sensor’s ability to monitor eye blinking and radial pulse in real-time, which can find broader applications for the healthcare sector. Overall, the printed graphene-based sensing skin is highly conformable, flexible, lightweight, nonintrusive, mechanically robust, and is characterized by high strain sensitivity.

  18. Mode extraction on wind turbine blades via phase-based video motion estimation

    NASA Astrophysics Data System (ADS)

    Sarrafi, Aral; Poozesh, Peyman; Niezrecki, Christopher; Mao, Zhu

    2017-04-01

    In recent years, image processing techniques have been applied more often for structural dynamics identification, characterization, and structural health monitoring. As a non-contact, full-field measurement method, image processing still has a long way to go to outperform conventional sensing instruments (e.g., accelerometers, strain gauges, and laser vibrometers). However, the technologies associated with image processing are developing rapidly and gaining more attention in a variety of engineering applications, including structural dynamics identification and modal analysis. Among numerous motion estimation and image-processing methods, phase-based video motion estimation is considered one of the most efficient in terms of computational cost and noise robustness. In this paper, phase-based video motion estimation is adopted for structural dynamics characterization of a 2.3-meter-long Skystream wind turbine blade, and the modal parameters (natural frequencies, operating deflection shapes) are extracted. The phase-based video processing adopted in this paper provides reliable full-field 2-D motion information, which is beneficial for manufacturing certification and model updating at the design stage. The approach is demonstrated by processing data on a full-scale commercial structure (i.e., a wind turbine blade) with complex geometry and properties, and the results obtained correlate well with the modal parameters extracted from accelerometer measurements, especially for the first four bending modes, which have significant importance in blade characterization.
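
    To make the core idea concrete, the following minimal one-dimensional sketch recovers a sub-pixel shift from the phase of a complex Gabor response; it only illustrates the principle behind phase-based motion estimation (the paper operates on 2-D video via complex steerable pyramids), and all signal and filter parameters are hypothetical.

```python
import numpy as np

# Minimal 1-D illustration of phase-based motion estimation: a small shift of
# the signal appears as a phase change of the complex (Gabor-filtered)
# response, so displacement ~ phase difference / carrier frequency.
x = np.arange(512)
shift = 0.4                                   # sub-pixel shift to recover
signal_t0 = np.exp(-((x - 256) / 80) ** 2) * np.cos(0.3 * x)
signal_t1 = np.exp(-((x - 256 - shift) / 80) ** 2) * np.cos(0.3 * (x - shift))

# Complex Gabor filter tuned to the signal's spatial frequency (0.3 rad/sample)
k = np.arange(-64, 65)
gabor = np.exp(-(k / 20.0) ** 2) * np.exp(1j * 0.3 * k)

r0 = np.convolve(signal_t0, gabor, mode="same")
r1 = np.convolve(signal_t1, gabor, mode="same")

# Phase difference at the signal's center, converted to displacement
dphi = np.angle(r1[256] * np.conj(r0[256]))
print("estimated shift:", -dphi / 0.3)        # close to 0.4
```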

  19. Pneumatic Muscle Actuated Equipment for Continuous Passive Motion

    NASA Astrophysics Data System (ADS)

    Deaconescu, Tudor T.; Deaconescu, Andrea I.

    2009-10-01

    Applying continuous passive rehabilitation movements as part of the recovery programme of patients with post-traumatic disabilities of the weight-bearing joints of the lower limbs requires the development of new high-performance equipment. This chapter discusses a study of the kinematics and performance of such a new continuous-passive-motion rehabilitation system actuated by pneumatic muscles. The energy source is compressed air, which ensures complete absorption of end-of-stroke shocks and thus minimizes user discomfort.

  20. Model of human visual-motion sensing

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Ahumada, A. J., Jr.

    1985-01-01

    A model of how humans sense the velocity of moving images is proposed. The model exploits constraints provided by human psychophysics, notably that motion-sensing elements appear tuned for two-dimensional spatial frequency, and by the frequency spectrum of a moving image, namely, that its support lies in the plane in which the temporal frequency equals the dot product of the spatial frequency and the image velocity. The first stage of the model is a set of spatial-frequency-tuned, direction-selective linear sensors. The temporal frequency of the response of each sensor is shown to encode the component of the image velocity in the sensor direction. At the second stage, these components are resolved in order to measure the velocity of image motion at each of a number of spatial locations and spatial frequencies. The model has been applied to several illustrative examples, including apparent motion, coherent gratings, and natural image sequences. The model agrees qualitatively with human perception.
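
    The spectral constraint invoked by the model can be stated compactly: for an image translating with velocity $(v_x, v_y)$, the spatiotemporal spectrum is supported on the plane

    $$ f_t = -\,(f_x v_x + f_y v_y) = -\,\vec{f}\cdot\vec{v} $$

    (the sign depends on the Fourier convention), so a linear sensor tuned to spatial frequency $\vec{f}$ responds at a temporal frequency that directly encodes the velocity component along its preferred direction.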

  1. The Envoy® Totally Implantable Hearing System, St. Croix Medical

    PubMed Central

    Kroll, Kai; Grant, Iain L.; Javel, Eric

    2002-01-01

    The Totally Implantable Envoy® System is currently undergoing clinical trials in both the United States and Europe. The fully implantable hearing device is intended for use in patients with sensorineural hearing loss. The device employs piezoelectric transducers to sense ossicle motion and drive the stapes. Programmable signal processing parameters include amplification, compression, and variable frequency response. The fully implantable attribute allows users to take advantage of normal external ear resonances and head-related transfer functions, while avoiding undesirable earmold effects. The high sensitivity, low power consumption, and high fidelity attributes of piezoelectric transducers minimize acoustic feedback and maximize battery life (Gyo, 1996; Yanagihara, 1987, 2001). The surgical procedure to install the device has been accurately defined and implantation is reversible. PMID:25425915

  2. Efficient Wideband Spectrum Sensing with Maximal Spectral Efficiency for LEO Mobile Satellite Systems

    PubMed Central

    Li, Feilong; Li, Zhiqiang; Li, Guangxia; Dong, Feihong; Zhang, Wei

    2017-01-01

    The usable satellite spectrum is becoming scarce due to static spectrum allocation policies. Cognitive radio approaches have already demonstrated their potential for spectral efficiency by providing more spectrum access opportunities to secondary users (SUs) with sufficient protection to licensed primary users (PUs). Hence, recent scientific literature has focused on the tradeoff between spectrum reuse and PU protection within narrowband spectrum sensing (SS) in terrestrial wireless sensing networks. However, those narrowband SS techniques investigated in the context of terrestrial CR may not be applicable for detecting wideband satellite signals. In this paper, we investigate the problem of jointly designing the sensing time and hard fusion scheme to maximize SU spectral efficiency in the scenario of low earth orbit (LEO) mobile satellite services based on wideband spectrum sensing. A compressed detection model is established to prove that there indeed exists one optimal sensing time achieving maximal spectral efficiency. Moreover, we propose a novel wideband cooperative spectrum sensing (CSS) framework in which each SU's reporting duration can be utilized for the sensing of the following SU. The sensing performance benefits from this framework because the equivalent sensing time is extended by making full use of the reporting slot. Furthermore, with respect to time-varying channels, spatiotemporal CSS (ST-CSS) is presented to attain space and time diversity gain simultaneously under a hard decision fusion rule. Computer simulations show that the joint optimization of sensing time, hard fusion rule, and scheduling strategy achieves a significant improvement in spectral efficiency. Additionally, the novel ST-CSS scheme delivers much higher spectral efficiency than the general CSS framework. PMID:28117712
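
    The sensing-time tradeoff referred to above is commonly expressed with the generic sensing-throughput relation (an illustrative formulation; the paper's exact objective may differ):

    $$ \eta(\tau) = \frac{T - \tau}{T}\,\bigl(1 - P_f(\tau)\bigr)\, C_0, $$

    where $T$ is the frame duration, $\tau$ the sensing time, $P_f(\tau)$ the detector's false-alarm probability, and $C_0$ the throughput achievable when the channel is correctly declared idle; because longer sensing lowers $P_f$ but shortens the transmission slot, $\eta(\tau)$ is maximized at a single interior sensing time.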

  3. A Kinect-Based Assessment System for Smart Classroom

    ERIC Educational Resources Information Center

    Kumara, W. G. C. W.; Wattanachote, Kanoksak; Battulga, Batbaatar; Shih, Timothy K.; Hwang, Wu-Yuin

    2015-01-01

    With the advancements of the human computer interaction field, nowadays it is possible for the users to use their body motions, such as swiping, pushing and moving, to interact with the content of computers or smart phones without traditional input devices like mouse and keyboard. With the introduction of gesture-based interface Kinect from…

  4. Investigation related to multispectral imaging systems

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F.; Erickson, J. D.

    1974-01-01

    A summary of technical progress made during a five year research program directed toward the development of operational information systems based on multispectral sensing and the use of these systems in earth-resource survey applications is presented. Efforts were undertaken during this program to: (1) improve the basic understanding of the many facets of multispectral remote sensing, (2) develop methods for improving the accuracy of information generated by remote sensing systems, (3) improve the efficiency of data processing and information extraction techniques to enhance the cost-effectiveness of remote sensing systems, (4) investigate additional problems having potential remote sensing solutions, and (5) apply the existing and developing technology for specific users and document and transfer that technology to the remote sensing community.

  5. Integration Method of Emphatic Motions and Adverbial Expressions with Scalar Parameters for Robotic Motion Coaching System

    NASA Astrophysics Data System (ADS)

    Okuno, Keisuke; Inamura, Tetsunari

    A robotic coaching system can improve humans' learning of motions through intelligent use of emphatic motions and adverbial expressions according to user reactions. In robotics, however, a method to control both the motions and the expressions, and how to bind them, had not been adequately discussed from an engineering point of view. In this paper, we propose a method for controlling and binding emphatic motions and adverbial expressions by using two scalar parameters in a phase space. In this phase space, a variety of motion patterns and verbal expressions are connected and can be expressed as static points. We show the feasibility of the proposed method through experiments on actual sport coaching tasks for beginners. From the participants' improvements in motion learning, we confirmed the feasibility of the method for controlling and binding emphatic motions and adverbial expressions, as well as the contribution of the emphatic motions and the positive correlation of adverbial expressions with participants' improvements in motion learning. Based on these results, we introduce the hypothesis that an individually optimized method for binding adverbial expressions is required.

  6. Increasing Access and Usability of Remote Sensing Data: The NASA Protected Area Archive

    NASA Technical Reports Server (NTRS)

    Geller, Gary N.

    2004-01-01

    Although remote sensing data are now widely available, much of it at low or no cost, many managers of protected conservation areas do not have the expertise or tools to view or analyze it. Thus access to it by the protected area management community is effectively blocked. The Protected Area Archive will increase access to remote sensing data by creating collections of satellite images of protected areas and packaging them with simple-to-use visualization and analytical tools. The user can easily locate the area and image of interest on a map, then display, roam, and zoom the image. A set of simple tools will be provided so the user can explore the data and employ it to assist in management and monitoring of their area. The 'Phase 1' version requires only a Windows-based computer and basic computer skills, and may be of particular help to protected area managers in developing countries.

  7. Optimizing Radiometric Fidelity to Enhance Aerial Image Change Detection Utilizing Digital Single Lens Reflex (DSLR) Cameras

    NASA Astrophysics Data System (ADS)

    Kerr, Andrew D.

    Determining optimal imaging settings and best practices related to the capture of aerial imagery using consumer-grade digital single lens reflex (DSLR) cameras should enable remote sensing scientists to generate consistent, high-quality, and low-cost image data sets. Radiometric optimization, image fidelity, and image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is, in part, a dearth of relevant contemporary literature on the utilization of consumer-grade DSLR cameras for remote sensing and the best practices associated with their use. The main radiometric control settings on a DSLR camera, EV (exposure value), WB (white balance), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. This testing was conducted from a terrestrial, rather than an airborne, collection platform, due to the large number of images per collection and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed exposure values as preferable for change detection and noise minimization. The makeup of the scene, the sensor, and the aerial platform influence the selection of the aperture and shutter speed, which, along with other variables, allow for estimation of the apparent image motion (AIM) blur in the resulting images. The importance of image edges in the application will in part dictate the lowest usable f-stop and allow the user to select a more optimal shutter speed and ISO. The single most important capture variable is exposure bias (EV), with a full dynamic range, a wide distribution of DN values, and high visual contrast and acuity occurring around -0.7 to -0.3 EV exposure bias. The ideal value for sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
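
    As a rough illustration of the apparent image motion (AIM) estimate mentioned above, a back-of-the-envelope calculation for a nadir-pointing camera with purely forward motion might look like the sketch below; every numeric value is hypothetical.

```python
# Back-of-the-envelope apparent image motion (AIM) blur estimate for a
# nadir-pointing camera on a moving platform; all values are illustrative.
ground_speed = 40.0        # platform speed over ground, m/s
altitude = 300.0           # flying height above ground, m
focal_length = 0.035       # lens focal length, m
pixel_pitch = 4.2e-6       # detector pixel size, m
shutter_time = 1 / 1000.0  # exposure time, s

gsd = pixel_pitch * altitude / focal_length   # ground sample distance, m/pixel
ground_motion = ground_speed * shutter_time   # ground distance covered during exposure, m
blur_pixels = ground_motion / gsd             # apparent image motion in pixels

print(f"GSD: {gsd * 100:.1f} cm/px, motion blur: {blur_pixels:.2f} px")
```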

  8. The 3D Human Motion Control Through Refined Video Gesture Annotation

    NASA Astrophysics Data System (ADS)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the beginning of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces such as remote controllers with motion sensing technology on the Nintendo Wii [1]. In particular, video-based human computer interaction (HCI) techniques have been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of freeing players from the intractable game controller. Moreover, for communication between humans and computers, video-based HCI is very attractive since it is intuitive, easy to acquire, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge; the level of accuracy is highly dependent on each subject's characteristics and on environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports, 'Angelina Jolie' in the Beowulf movie) and for analyzing motions for specific performance (e.g., 'golf swing' and 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip. Here, a column corresponds to a human's sub-body part and a row represents a time frame of the data capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Unlike low-level feature values of video human motion, 3D human motion-capture data matrices are not pixel values, but are closer to a human level of semantics.
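
    The column/row organization described above can be illustrated with a small NumPy sketch; the marker names and array sizes below are hypothetical stand-ins for real motion-capture output.

```python
import numpy as np

# Toy motion-capture matrix: rows are time frames, columns are channels
# (three coordinates per marker).  Marker names are hypothetical.
markers = ["hip", "knee", "ankle", "shoulder", "elbow", "wrist"]
n_frames = 120
data = np.random.rand(n_frames, 3 * len(markers))   # stand-in for captured data

def sub_body_motion(data, markers, wanted):
    """Return only the columns belonging to the requested markers."""
    cols = []
    for name in wanted:
        i = markers.index(name)
        cols.extend(range(3 * i, 3 * i + 3))         # x, y, z columns of this marker
    return data[:, cols]

arm = sub_body_motion(data, markers, ["shoulder", "elbow", "wrist"])
print(arm.shape)   # (120, 9): the arm's motion only
```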

  9. Applications of Phase-Based Motion Processing

    NASA Technical Reports Server (NTRS)

    Branch, Nicholas A.; Stewart, Eric C.

    2018-01-01

    Image pyramids provide useful information for determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique, and the necessary software, to display the phase changes of high-frequency signals within video. The present technique quickly identifies the regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but it requires use of the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still produces large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented for quickly identifying structural response through optical flow and phase visualization in both Python and MATLAB.

  10. An unsupervised method for summarizing egocentric sport videos

    NASA Astrophysics Data System (ADS)

    Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec

    2015-12-01

    People are becoming more interested in recording their sport activities using head-worn or hand-held cameras. This type of video, called egocentric sport video, has different motion and appearance patterns compared with life-logging video. While a life-logging video can be defined in terms of well-defined human-object interactions, it is not trivial to describe egocentric sport videos using well-defined activities. For this reason, summarizing egocentric sport videos based on human-object interaction might fail to produce meaningful results. In this paper, we propose an unsupervised method for summarizing egocentric videos by identifying the key-frames of the video. Our method utilizes both appearance and motion information, and it automatically finds the number of key-frames. Our blind user study on a new dataset collected from YouTube shows that in 93.5% of cases, the users choose the proposed method as their first video summary choice. In addition, our method is within the top 2 choices of the users in 99% of studies.

  11. Controlling Motion Sickness and Spatial Disorientation and Enhancing Vestibular Rehabilitation with a User-Worn See-Through Display

    PubMed Central

    Krueger, Wesley W.O.

    2010-01-01

    Objectives/Hypotheses An eyewear mounted visual display (“User-worn see-through display”) projecting an artificial horizon aligned with the user's head and body position in space can prevent or lessen motion sickness in susceptible individuals when in a motion provocative environment as well as aid patients undergoing vestibular rehabilitation. In this project, a wearable display device, including software technology and hardware, was developed and a phase I feasibility study and phase II clinical trial for safety and efficacy were performed. Study Design Both phase I and phase II were prospective studies funded by the NIH. The phase II study used repeated measures for motion intolerant subjects and a randomized control group (display device/no display device) pre-post test design for patients in vestibular rehabilitation. Methods Following technology and display device development, 75 patients were evaluated by test and rating scales in the phase II study; 25 subjects with motion intolerance used the technology in the display device in provocative environments and completed subjective rating scales while 50 patients were evaluated before and after vestibular rehabilitation (25 using the display device and 25 in a control group) using established test measures. Results All patients with motion intolerance rated the technology as helpful for nine symptoms assessed, and 96% rated the display device as simple and easy to use. Duration of symptoms significantly decreased with use of the technology displayed. In patients undergoing vestibular rehabilitation, there were no significant differences in amount of change from pre- to post-therapy on objective balance tests between display device users and controls. However, those using the technology required significantly fewer rehabilitation sessions to achieve those outcomes than the control group. Conclusions A user-worn see-through display, utilizing a visual fixation target coupled with a stable artificial horizon and aligned with user movement, has demonstrated substantial benefit for individuals susceptible to motion intolerance and spatial disorientation and those undergoing vestibular rehabilitation. The technology developed has applications in any environment where motion sensitivity affects human performance. PMID:21181963

  12. Image deblurring by motion estimation for remote sensing

    NASA Astrophysics Data System (ADS)

    Chen, Yueting; Wu, Jiagu; Xu, Zhihai; Li, Qi; Feng, Huajun

    2010-08-01

    The image resolution of remote sensing imaging systems is often limited by image degradation resulting from unwanted motion disturbances of the platform during image exposure. Since the form of the platform vibration can be arbitrary, the lack of a priori knowledge about the motion function (the PSF) suggests blind restoration approaches. A deblurring method that combines motion estimation and image deconvolution, for both area-array and TDI remote sensing, is proposed in this paper. The image motion estimation is accomplished by an auxiliary high-speed detector and a sub-pixel correlation algorithm. The PSF is then reconstructed from the estimated image motion vectors. Eventually, the clear image can be recovered by the Richardson-Lucy (RL) iterative deconvolution algorithm from the blurred image of the prime camera with the constructed PSF. The image deconvolution for the area-array detector is direct, while for the TDICCD detector an integral distortion compensation step and a row-by-row deconvolution scheme are applied. Theoretical analyses and experimental results show that the performance of the proposed concept is convincing: blurred and distorted images can be properly recovered not only for visual observation but also with a significant increase in objective evaluation metrics.
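
    A minimal sketch of the final deconvolution stage, assuming a simple horizontal linear-motion PSF and the off-the-shelf Richardson-Lucy routine from scikit-image (the paper reconstructs the PSF from measured motion vectors, which is not reproduced here):

```python
import numpy as np
from scipy.ndimage import convolve
from skimage import data, restoration

# Stand-in PSF: a horizontal smear over `length` pixels, playing the role of
# the PSF that would be reconstructed from the estimated image motion vectors.
length = 9
psf = np.zeros((length, length))
psf[length // 2, :] = 1.0 / length

image = data.camera().astype(float) / 255.0
blurred = convolve(image, psf, mode="reflect")            # simulate motion blur
restored = restoration.richardson_lucy(blurred, psf, 30)  # 30 RL iterations
print(restored.shape)
```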

  13. [Study on an Exoskeleton Hand Function Training Device].

    PubMed

    Hu, Xin; Zhang, Ying; Li, Jicai; Yi, Jinhua; Yu, Hongliu; He, Rongrong

    2016-02-01

    Based on the structure and motion bionic principles of normal adult fingers, the biological characteristics of human hands were analyzed, and a wearable exoskeleton hand function training device for the rehabilitation of stroke patients or patients with hand trauma was designed. This device includes the exoskeleton mechanical structure and an electromyography (EMG) control system. With an adjustable mechanism, the device can fit different finger lengths, and by capturing the EMG of the user's contralateral limb, the motion state of the exoskeleton hand is controlled. Driven by the device, the user's fingers then carry out adduction/abduction rehabilitation training. Finally, the mechanical properties and training effect of the exoskeleton hand were verified through mechanism simulation and experiments on a prototype of the wearable exoskeleton hand function training device.

  14. Feasibility of high temporal resolution breast DCE-MRI using compressed sensing theory.

    PubMed

    Wang, Haoyu; Miao, Yanwei; Zhou, Kun; Yu, Yanming; Bao, Shanglian; He, Qiang; Dai, Yongming; Xuan, Stephanie Y; Tarabishy, Bisher; Ye, Yongquan; Hu, Jiani

    2010-09-01

    To investigate the feasibility of high temporal resolution breast DCE-MRI using compressed sensing theory. Two experiments were designed to investigate the feasibility of using reference image based compressed sensing (RICS) technique in DCE-MRI of the breast. The first experiment examined the capability of RICS to faithfully reconstruct uptake curves using undersampled data sets extracted from fully sampled clinical breast DCE-MRI data. An average approach and an approach using motion estimation and motion compensation (ME/MC) were implemented to obtain reference images and to evaluate their efficacy in reducing motion related effects. The second experiment, an in vitro phantom study, tested the feasibility of RICS for improving temporal resolution without degrading the spatial resolution. For the uptake-curve reconstruction experiment, there was a high correlation between uptake curves reconstructed from fully sampled data by Fourier transform and from undersampled data by RICS, indicating high similarity between them. The mean Pearson correlation coefficients for RICS with the ME/MC approach and RICS with the average approach were 0.977 +/- 0.023 and 0.953 +/- 0.031, respectively. The comparisons of final reconstruction results between RICS with the average approach and RICS with the ME/MC approach suggested that the latter was superior to the former in reducing motion related effects. For the in vitro experiment, compared to the fully sampled method, RICS improved the temporal resolution by an acceleration factor of 10 without degrading the spatial resolution. The preliminary study demonstrates the feasibility of RICS for faithfully reconstructing uptake curves and improving temporal resolution of breast DCE-MRI without degrading the spatial resolution.
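
    A generic way to write a reference-image-based compressed sensing reconstruction of this kind (an illustrative formulation, not necessarily the exact RICS cost function) is

    $$ \hat{x} = \arg\min_{x} \; \|F_u x - y\|_2^2 + \lambda \, \|\Psi\,(x - x_{\mathrm{ref}})\|_1, $$

    where $F_u$ is the undersampled Fourier operator, $y$ the acquired k-space data, $x_{\mathrm{ref}}$ the reference image (obtained here by averaging or by ME/MC), and $\Psi$ a sparsifying transform; because the difference image $x - x_{\mathrm{ref}}$ is much sparser than $x$ itself, higher undersampling, and hence higher temporal resolution, becomes feasible.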

  15. The Effects of Using the Kinect Motion-Sensing Interactive System to Enhance English Learning for Elementary Students

    ERIC Educational Resources Information Center

    Pan, Wen Fu

    2017-01-01

    The objective of this study was to test whether the Kinect motion-sensing interactive system (KMIS) enhanced students' English vocabulary learning, while also comparing the system's effectiveness against a traditional computer-mouse interface. Both interfaces utilized an interactive game with a questioning strategy. One-hundred and twenty…

  16. Parabolic flight - Loss of sense of orientation

    NASA Technical Reports Server (NTRS)

    Lackner, J. R.; Graybiel, A.

    1979-01-01

    On the earth, or in level flight, a blindfolded subject being rotated at constant velocity about his recumbent long body axis experiences illusory orbital motion of his body in the opposite direction. By contrast, during comparable rotation in the free-fall phase of parabolic flight, no body motion is perceived and all sense of external orientation may be lost; when touch and pressure stimulation is applied to the body surface, a sense of orientation is reestablished immediately. The increased gravitoinertial force period of a parabola produces an exaggeration of the orbital motion experienced in level flight. These observations reveal an important influence of touch, pressure, and kinesthetic information on spatial orientation and provide a basis for understanding many of the postural illusions reported by astronauts in space flight.

  17. Analyzing Virtual Physics Simulations with Tracker

    NASA Astrophysics Data System (ADS)

    Claessens, Tom

    2017-12-01

    In the physics teaching community, Tracker is well known as a user-friendly open source video analysis software, authored by Douglas Brown. With this tool, the user can trace markers indicated on a video or on stroboscopic photos and perform kinematic analyses. Tracker also includes a data modeling tool that allows one to fit some theoretical equations of motion onto experimentally obtained data. In the field of particle mechanics, Tracker has been effectively used for learning and teaching about projectile motion, "toss up" and free-fall vertical motion, and to explain the principle of mechanical energy conservation. Also, Tracker has been successfully used in rigid body mechanics to interpret the results of experiments with rolling/slipping cylinders and moving rods. In this work, I propose an original method in which Tracker is used to analyze virtual computer simulations created with a physics-based motion solver, instead of analyzing video recording or stroboscopic photos. This could be an interesting approach to study kinematics and dynamics problems in physics education, in particular when there is no or limited access to physical labs. I demonstrate the working method with a typical (but quite challenging) problem in classical mechanics: a slipping/rolling cylinder on a rough surface.

  18. Pattern Activity Clustering and Evaluation (PACE)

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Banas, Christopher; Paul, Michael; Bussjager, Becky; Seetharaman, Guna

    2012-06-01

    With the vast amount of network information available on activities of people (i.e. motions, transportation routes, and site visits) there is a need to explore the salient properties of data that detect and discriminate the behavior of individuals. Recent machine learning approaches include methods of data mining, statistical analysis, clustering, and estimation that support activity-based intelligence. We seek to explore contemporary methods in activity analysis using machine learning techniques that discover and characterize behaviors that enable grouping, anomaly detection, and adversarial intent prediction. To evaluate these methods, we describe the mathematics and potential information theory metrics to characterize behavior. A scenario is presented to demonstrate the concept and metrics that could be useful for layered sensing behavior pattern learning and analysis. We leverage work on group tracking, learning and clustering approaches; as well as utilize information theoretical metrics for classification, behavioral and event pattern recognition, and activity and entity analysis. The performance evaluation of activity analysis supports high-level information fusion of user alerts, data queries and sensor management for data extraction, relations discovery, and situation analysis of existing data.

  19. Optofluidics incorporating actively controlled micro- and nano-particles

    PubMed Central

    Kayani, Aminuddin A.; Khoshmanesh, Khashayar; Ward, Stephanie A.; Mitchell, Arnan; Kalantar-zadeh, Kourosh

    2012-01-01

    The advent of optofluidic systems incorporating suspended particles has resulted in the emergence of novel applications. Such systems operate based on the fact that suspended particles can be manipulated using well-appointed active forces, and their motions, locations and local concentrations can be controlled. These forces can be exerted on both individual and clusters of particles. Having the capability to manipulate suspended particles gives users the ability for tuning the physical and, to some extent, the chemical properties of the suspension media, which addresses the needs of various advanced optofluidic systems. Additionally, the incorporation of particles results in the realization of novel optofluidic solutions used for creating optical components and sensing platforms. In this review, we present different types of active forces that are used for particle manipulations and the resulting optofluidic systems incorporating them. These systems include optical components, optofluidic detection and analysis platforms, plasmonics and Raman systems, thermal and energy related systems, and platforms specifically incorporating biological particles. We conclude the review with a discussion of future perspectives, which are expected to further advance this rapidly growing field. PMID:23864925

  20. A novel nano-sensor based on optomechanical crystal cavity

    NASA Astrophysics Data System (ADS)

    Zhang, Yeping; Ai, Jie; Ma, Jingfang

    2017-10-01

    Optical devices based on new sensing principles are widely used in biochemical and medical areas. Nowadays, mass sensing based on monitoring the frequency shifts induced by added mass in oscillators is a well-known and widely used technique. For nanoscience and nanotechnology applications there is a strong demand for very sensitive mass sensors, the target being a sensor for single-molecule detection. The desired mass resolution for very-few- or even single-molecule detection has to be below the femtogram range. Considering the strong interaction between highly co-localized optical and mechanical modes in optomechanical crystal (OMC) cavities, we investigate OMC split-nanobeam cavities in silicon operating near 1550 nm to achieve a high optomechanical coupling rate and ultra-small motional mass. Theoretical investigations of the optical and mechanical characteristics of the proposed cavity are carried out. By adjusting the structural parameters, the cavity's effective motional mass is brought below 10 fg and its mechanical frequency exceeds 10 GHz. The transmission spectrum of the cavity is sensitive to a sample located at the center of the cavity. We conducted the fabrication and characterization of this cavity sensor on a silicon-on-insulator (SOI) chip. Using vertical coupling between a tapered fiber and the SOI chip, we measured the transmission spectrum of the cavity and verified that this cavity is promising for ultimate-precision mass sensing and detection.
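
    The mass-sensing principle referenced above follows the usual resonator relation (quoted here as the generic estimate, not a result from the paper):

    $$ \Delta f \approx -\frac{f_0}{2}\,\frac{\Delta m}{m_{\mathrm{eff}}}, $$

    so with a mechanical frequency above 10 GHz and an effective motional mass below 10 fg, an added mass of one attogram ($10^{-3}$ fg) corresponds to a frequency shift of roughly $(10\,\mathrm{GHz}/2)\times 10^{-4} \approx 0.5\,\mathrm{MHz}$, which illustrates why such cavity parameters are attractive for few-molecule detection.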

  1. Feedback and Elaboration within a Computer-Based Simulation: A Dual Coding Perspective.

    ERIC Educational Resources Information Center

    Rieber, Lloyd P.; And Others

    The purpose of this study was to explore how adult users interact and learn during a computer-based simulation given visual and verbal forms of feedback coupled with embedded elaborations of the content. A total of 52 college students interacted with a computer-based simulation of Newton's laws of motion in which they had control over the motion…

  2. Kinect-based posture tracking for correcting positions during exercise.

    PubMed

    Guerrero, Cesar; Uribe-Quevedo, Alvaro

    2013-01-01

    The Kinect sensor has opened the path for developing numerous applications in several different areas. Medical and health applications are benefiting from the Kinect as it allows non-invasive body motion capture that can be used in motor rehabilitation and phobia treatment. A major advantage of the Kinect is that it allows developing solutions that can be used at home or even in the office, thus expanding the user's freedom to interact with solutions complementary to their physical activities without requiring any traveling. This paper presents Kinect-based posture tracking software for assisting the user in successfully matching the postures required in some exercises for strengthening body muscles. Unlike several available video games, this tool offers a user interface for customizing posture parameters, so it can be tuned by healthcare professionals or, under their guidance, by the user.
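
    A minimal sketch of the kind of posture check such a tool performs, assuming the 3-D joint positions have already been read from the Kinect skeleton stream; the coordinates, target angle, and tolerance below are hypothetical.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical skeleton joints in metres, as a Kinect-like sensor might report them
shoulder, elbow, wrist = (0.20, 1.40, 2.0), (0.45, 1.35, 2.0), (0.60, 1.60, 2.0)

target_angle = 90.0   # elbow angle prescribed for the exercise (example value)
tolerance = 10.0      # acceptable deviation in degrees (would be customizable)

current = joint_angle(shoulder, elbow, wrist)
print(f"elbow angle {current:.1f} deg ->",
      "posture OK" if abs(current - target_angle) <= tolerance else "adjust posture")
```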

  3. Master-slave micromanipulator apparatus

    DOEpatents

    Morimoto, A.K.; Kozlowski, D.M.; Charles, S.T.; Spalding, J.A.

    1999-08-31

    An apparatus is disclosed based on precision X-Y stages that are stacked. Attached to arms projecting from each X-Y stage are a set of two-axis gimbals. Attached to the gimbals is a rod, which provides motion along the axis of the rod and rotation around its axis. The dual-planar apparatus provides six degrees of freedom of motion, precise to within microns. Precision linear stages along with precision linear motors, encoders, and controls provide a robotics system. The motors can be positioned in a remote location by incorporating a set of bellows on the motors and can be connected through a computer controller that will allow one to be a master and the other one to be a slave. Position information from the master can be used to control the slave. Forces of interaction of the slave with its environment can be reflected back to the motor control of the master to provide a sense of the force sensed by the slave. Forces imparted onto the master by the operator can be fed back into the control of the slave to reduce the forces required to move it. 12 figs.

  4. Master-slave micromanipulator method

    DOEpatents

    Morimoto, Alan K.; Kozlowski, David M.; Charles, Steven T.; Spalding, James A.

    1999-01-01

    A method based on precision X-Y stages that are stacked. Attached to arms projecting from each X-Y stage are a set of two-axis gimbals. Attached to the gimbals is a rod, which provides motion along the axis of the rod and rotation around its axis. The dual-planar apparatus provides six degrees of freedom of motion, precise to within microns. Precision linear stages along with precision linear motors, encoders, and controls provide a robotics system. The motors can be placed remotely by incorporating a set of bellows on the motors and can be connected through a computer controller that will allow one to be a master and the other one to be a slave. Position information from the master can be used to control the slave. Forces of interaction of the slave with its environment can be reflected back to the motor control of the master to provide a sense of the force sensed by the slave. Forces imparted onto the master by the operator can be fed back into the control of the slave to reduce the forces required to move it.

  5. Master-slave micromanipulator apparatus

    DOEpatents

    Morimoto, Alan K.; Kozlowski, David M.; Charles, Steven T.; Spalding, James A.

    1999-01-01

    An apparatus based on precision X-Y stages that are stacked. Attached to arms projecting from each X-Y stage are a set of two-axis gimbals. Attached to the gimbals is a rod, which provides motion along the axis of the rod and rotation around its axis. The dual-planar apparatus provides six degrees of freedom of motion, precise to within microns. Precision linear stages along with precision linear motors, encoders, and controls provide a robotics system. The motors can be positioned in a remote location by incorporating a set of bellows on the motors and can be connected through a computer controller that will allow one to be a master and the other one to be a slave. Position information from the master can be used to control the slave. Forces of interaction of the slave with its environment can be reflected back to the motor control of the master to provide a sense of the force sensed by the slave. Forces imparted onto the master by the operator can be fed back into the control of the slave to reduce the forces required to move it.

  6. A T-Type Capacitive Sensor Capable of Measuring 5-DOF Error Motions of Precision Spindles

    PubMed Central

    Xiang, Kui; Qiu, Rongbo; Mei, Deqing; Chen, Zichen

    2017-01-01

    The precision spindle is a core component of high-precision machine tools, and the accurate measurement of its error motions is important for improving its rotation accuracy as well as the work performance of the machine. This paper presents a T-type capacitive sensor (T-type CS) with an integrated structure. The proposed sensor can measure the 5-degree-of-freedom (5-DOF) error motions of a spindle in situ and simultaneously by integrating electrode groups in the cylindrical bore of the stator and on the outer end face of its flange, respectively. Simulation analysis and experimental results show that the sensing electrode groups in a differential measurement configuration have near-linear output for the different types of rotor displacements. Moreover, the additional capacitance generated by fringe effects has been reduced by about 90% with the sensing electrode groups fabricated using flexible printed circuit board (FPCB) and related processing technologies. The improved signal processing circuit also improves the measuring performance twofold and brings the measured differential output capacitance up to 93% of the theoretical values. PMID:28846631
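
    The near-linear differential output reported above is characteristic of a gap-varying electrode pair read differentially; as a generic illustration (not the sensor's exact geometry),

    $$ \Delta C = \varepsilon A\left(\frac{1}{d-x} - \frac{1}{d+x}\right) \approx \frac{2\varepsilon A}{d^{2}}\,x \quad (x \ll d), $$

    so subtracting the two electrode capacitances cancels the common (nominal-gap) term and leaves a response that is approximately proportional to the rotor displacement $x$, while also rejecting common-mode disturbances.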

  7. Coupling reconstruction and motion estimation for dynamic MRI through optical flow constraint

    NASA Astrophysics Data System (ADS)

    Zhao, Ningning; O'Connor, Daniel; Gu, Wenbo; Ruan, Dan; Basarab, Adrian; Sheng, Ke

    2018-03-01

    This paper addresses the problem of dynamic magnetic resonance image (DMRI) reconstruction and motion estimation jointly. Because of the inherent anatomical movements in DMRI acquisition, reconstruction of DMRI using motion estimation/compensation (ME/MC) has been explored under the compressed sensing (CS) scheme. In this paper, by embedding the intensity based optical flow (OF) constraint into the traditional CS scheme, we are able to couple the DMRI reconstruction and motion vector estimation. Moreover, the OF constraint is employed in a specific coarse resolution scale in order to reduce the computational complexity. The resulting optimization problem is then solved using a primal-dual algorithm due to its efficiency when dealing with nondifferentiable problems. Experiments on highly accelerated dynamic cardiac MRI with multiple receiver coils validate the performance of the proposed algorithm.
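
    A hedged sketch of the kind of joint objective being described (the exact data-fidelity, regularization, and weighting terms used by the authors may differ) is

    $$ \min_{x,\,v} \; \tfrac{1}{2}\,\|A x - y\|_2^2 + \lambda\,\mathrm{TV}(x) + \mu \sum_t \big\| \nabla x_t \cdot v_t + \partial_t x_t \big\|_1, $$

    where $A$ is the multi-coil undersampled Fourier operator, $x$ the dynamic image series, $v$ the optical-flow motion field, and the last term enforces the brightness-constancy optical-flow constraint between frames; the nondifferentiable terms are what motivate the primal-dual solver.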

  8. Suppression of extraneous thermal noise in cavity optomechanics.

    PubMed

    Zhao, Yi; Wilson, Dalziel J; Ni, K-K; Kimble, H J

    2012-02-13

    Extraneous thermal motion can limit displacement sensitivity and radiation pressure effects, such as optical cooling, in a cavity-optomechanical system. Here we present an active noise suppression scheme and its experimental implementation. The main challenge is to selectively sense and suppress extraneous thermal noise without affecting motion of the oscillator. Our solution is to monitor two modes of the optical cavity, each with different sensitivity to the oscillator's motion but similar sensitivity to the extraneous thermal motion. This information is used to imprint "anti-noise" onto the frequency of the incident laser field. In our system, based on a nano-mechanical membrane coupled to a Fabry-Pérot cavity, simulation and experiment demonstrate that extraneous thermal noise can be selectively suppressed and that the associated limit on optical cooling can be reduced.

  9. Trained neurons-based motion detection in optical camera communications

    NASA Astrophysics Data System (ADS)

    Teli, Shivani; Cahyadi, Willy Anugrah; Chung, Yeon Ho

    2018-04-01

    A concept of trained neurons-based motion detection (TNMD) in optical camera communications (OCC) is proposed. The proposed TNMD is based on neurons present in a neural network that perform repetitive analysis in order to provide efficient and reliable motion detection in OCC. This efficient motion detection can be considered another functionality of OCC in addition to two traditional functionalities of illumination and communication. To verify the proposed TNMD, the experiments were conducted in an indoor static downlink OCC, where a mobile phone front camera is employed as the receiver and an 8 × 8 red, green, and blue (RGB) light-emitting diode array as the transmitter. The motion is detected by observing the user's finger movement in the form of centroid through the OCC link via a camera. Unlike conventional trained neurons approaches, the proposed TNMD is trained not with motion itself but with centroid data samples, thus providing more accurate detection and far less complex detection algorithm. The experiment results demonstrate that the TNMD can detect all considered motions accurately with acceptable bit error rate (BER) performances at a transmission distance of up to 175 cm. In addition, while the TNMD is performed, a maximum data rate of 3.759 kbps over the OCC link is obtained. The OCC with the proposed TNMD combined can be considered an efficient indoor OCC system that provides illumination, communication, and motion detection in a convenient smart home environment.

  10. Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors

    PubMed Central

    Latt, Win Tun; Veluvolu, Kalyana Chakravarthy; Ang, Wei Tech

    2011-01-01

    Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires other aiding sensors or prior knowledge of the motion characteristics to remove the position drift that results from integration of acceleration or velocity, so as to obtain an accurate position estimate. A method based on analytical integration has previously been developed to obtain an accurate position estimate of periodic or quasi-periodic motion from inertial sensors using prior knowledge of the motion but without aiding sensors. In this paper, a new method is proposed that employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The only prior knowledge the proposed method requires is the approximate frequency band of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate the proposed method and compare its performance with the method based on analytical integration, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method. PMID:22163935
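
    A compact sketch of a band-limited multiple Fourier linear combiner (BMFLC)-style adaptation loop of the kind mentioned above, run on a synthetic quasi-periodic signal; the frequency band, step size, and sampling rate are illustrative and do not reproduce the paper's configuration.

```python
import numpy as np

fs = 100.0                                    # sampling rate, Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 4.3 * t)          # quasi-periodic motion inside a 3-6 Hz band

freqs = np.arange(3.0, 6.0 + 1e-9, 0.1)       # band of candidate frequencies
w = np.zeros(2 * len(freqs))                  # adaptive Fourier coefficients (sin and cos)
mu = 0.01                                     # LMS adaptation gain
estimate = np.zeros_like(signal)

for k, tk in enumerate(t):
    # Reference vector of sines and cosines spanning the frequency band
    x = np.concatenate([np.sin(2 * np.pi * freqs * tk),
                        np.cos(2 * np.pi * freqs * tk)])
    estimate[k] = w @ x                       # current combiner output
    err = signal[k] - estimate[k]
    w += 2 * mu * err * x                     # LMS weight update

rms = np.sqrt(np.mean((signal[-int(fs):] - estimate[-int(fs):]) ** 2))
print(f"RMS tracking error over the last second: {rms:.3f}")
```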

  11. Bioinspired active whisker sensor for robotic vibrissal tactile sensing

    NASA Astrophysics Data System (ADS)

    Ju, Feng; Ling, Shih-Fu

    2014-12-01

    A whisker transducer (WT) inspired by rat’s vibrissal tactile perception is proposed based on a transduction matrix model characterizing the electro-mechanical transduction process in both forward and backward directions. It is capable of acting as an actuator to sweep the whisker and simultaneously as a sensor to sense the force, motion, and mechanical impedance at whisker tip. Its validity is confirmed by numerical simulation using a finite element model. A prototype is then fabricated and its transduction matrix is determined by parameter identification. The calibrated WT can accurately sense mechanical impedance which is directly related to stiffness, mass and damping. Subsequent vibrissal tactile sensing of sandpaper texture reveals that the real part of mechanical impedance sensed by WT is correlated with sandpaper roughness. Texture discrimination is successfully achieved by inputting the real part to a k-means clustering algorithm. The mechanical impedance sensing ability as well as other features of the WT such as simultaneous-actuation-and-sensing makes it a good solution to robotic tactile sensing.
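
    The transduction-matrix idea can be summarized with a generic electro-mechanical two-port; the symbols and sign conventions below are an illustration, not the authors' calibrated model:

    $$ \begin{pmatrix} V \\ I \end{pmatrix} = \begin{pmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{pmatrix} \begin{pmatrix} F \\ u \end{pmatrix}, $$

    relating the electrical variables (voltage $V$, current $I$) at the transducer terminals to the mechanical variables (force $F$, velocity $u$) at the whisker tip. Once the four frequency-dependent entries are identified by calibration, the tip mechanical impedance $Z = F/u$ can be recovered from purely electrical measurements, which is what allows the device to actuate and sense simultaneously.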

  12. Towards Robot-Assisted Retinal Vein Cannulation: A Motorized Force-Sensing Microneedle Integrated with a Handheld Micromanipulator †

    PubMed Central

    Gonenc, Berk; Chae, Jeremy; Gehlbach, Peter; Taylor, Russell H.; Iordachita, Iulian

    2017-01-01

    Retinal vein cannulation is a technically demanding surgical procedure where therapeutic agents are injected into the retinal veins to treat occlusions. The clinical feasibility of this approach has been largely limited by the technical challenges associated with performing the procedure. Among the challenges to successful vein cannulation are identifying the moment of venous puncture, achieving cannulation of the micro-vessel, and maintaining cannulation throughout drug delivery. Recent advances in medical robotics and sensing of tool-tissue interaction forces have the potential to address each of these challenges as well as to prevent tissue trauma, minimize complications, diminish surgeon effort, and ultimately promote successful retinal vein cannulation. In this paper, we develop an assistive system combining a handheld micromanipulator, called “Micron”, with a force-sensing microneedle. Using this system, we examine two distinct methods of precisely detecting the instant of venous puncture. This is based on measured tool-tissue interaction forces and also the tracked position of the needle tip. In addition to the existing tremor canceling function of Micron, a new control method is implemented to actively compensate unintended movements of the operator, and to keep the cannulation device securely inside the vein following cannulation. To demonstrate the capabilities and performance of our uniquely upgraded system, we present a multi-user artificial phantom study with subjects from three different surgical skill levels. Results show that our puncture detection algorithm, when combined with the active positive holding feature enables sustained cannulation which is most evident in smaller veins. Notable is that the active holding function significantly attenuates tool motion in the vein, thereby reduces the trauma during cannulation. PMID:28946634

  13. Graphs and Tracks Revisited

    NASA Astrophysics Data System (ADS)

    Christian, Wolfgang; Belloni, Mario

    2013-04-01

    We have recently developed a Graphs and Tracks model based on an earlier program by David Trowbridge, as shown in Fig. 1. Our model can show position, velocity, acceleration, and energy graphs and can be used for motion-to-graphs exercises. Users set the heights of the track segments, and the model displays the motion of the ball on the track together with position, velocity, and acceleration graphs. This ready-to-run model is available in the ComPADRE OSP Collection at www.compadre.org/osp/items/detail.cfm?ID=12023.

  14. MEMS sensor technologies for human centred applications in healthcare, physical activities, safety and environmental sensing: a review on research activities in Italy.

    PubMed

    Ciuti, Gastone; Ricotti, Leonardo; Menciassi, Arianna; Dario, Paolo

    2015-03-17

    Over the past few decades the increased level of public awareness concerning healthcare, physical activities, safety and environmental sensing has created an emerging need for smart sensor technologies and monitoring devices able to sense, classify, and provide feedbacks to users' health status and physical activities, as well as to evaluate environmental and safety conditions in a pervasive, accurate and reliable fashion. Monitoring and precisely quantifying users' physical activity with inertial measurement unit-based devices, for instance, has also proven to be important in health management of patients affected by chronic diseases, e.g., Parkinson's disease, many of which are becoming highly prevalent in Italy and in the Western world. This review paper will focus on MEMS sensor technologies developed in Italy in the last three years describing research achievements for healthcare and physical activity, safety and environmental sensing, in addition to smart systems integration. Innovative and smart integrated solutions for sensing devices, pursued and implemented in Italian research centres, will be highlighted, together with specific applications of such technologies. Finally, the paper will depict the future perspective of sensor technologies and corresponding exploitation opportunities, again with a specific focus on Italy.

  15. SensePath: Understanding the Sensemaking Process Through Analytic Provenance.

    PubMed

    Nguyen, Phong H; Xu, Kai; Wheat, Ashley; Wong, B L William; Attfield, Simon; Fields, Bob

    2016-01-01

    Sensemaking is described as the process of comprehension, finding meaning and gaining insight from information, producing new knowledge and informing further action. Understanding the sensemaking process allows building effective visual analytics tools to make sense of large and complex datasets. Currently, it is often a manual and time-consuming undertaking to comprehend this: researchers collect observation data, transcribe screen capture videos and think-aloud recordings, identify recurring patterns, and eventually abstract the sensemaking process into a general model. In this paper, we propose a general approach to facilitate such a qualitative analysis process, and introduce a prototype, SensePath, to demonstrate the application of this approach with a focus on browser-based online sensemaking. The approach is based on a study of a number of qualitative research sessions including observations of users performing sensemaking tasks and post hoc analyses to uncover their sensemaking processes. Based on the study results and a follow-up participatory design session with HCI researchers, we decided to focus on the transcription and coding stages of thematic analysis. SensePath automatically captures user's sensemaking actions, i.e., analytic provenance, and provides multi-linked views to support their further analysis. A number of other requirements elicited from the design session are also implemented in SensePath, such as easy integration with existing qualitative analysis workflow and non-intrusive for participants. The tool was used by an experienced HCI researcher to analyze two sensemaking sessions. The researcher found the tool intuitive and considerably reduced analysis time, allowing better understanding of the sensemaking process.

  16. Addressing and Presenting Quality of Satellite Data via Web-Based Services

    NASA Technical Reports Server (NTRS)

    Leptoukh, Gregory; Lynnes, C.; Ahmad, S.; Fox, P.; Zednik, S.; West, P.

    2011-01-01

    With the recent attention to climate change and the proliferation of remote-sensing data utilization, climate models and various environmental monitoring and protection applications have begun to rely increasingly on satellite measurements. Research users seek good-quality satellite data, with uncertainties and biases provided for each data point. However, different communities address remote sensing quality issues rather inconsistently and differently. We describe our attempt to systematically characterize, capture, and provision quality and uncertainty information as it applies to the NASA MODIS Aerosol Optical Depth data product. In particular, we note the semantic differences in quality/bias/uncertainty at the pixel, granule, product, and record levels, and we outline various factors contributing to the uncertainty or error budget. Web-based science analysis and processing tools allow users to access, analyze, and generate visualizations of data while relieving them of directly managing complex data processing operations. These tools provide value by streamlining the data analysis process, but they usually shield users from the details of the data processing steps, algorithm assumptions, caveats, etc. Correct interpretation of the final analysis requires the user to understand how the data have been generated and processed and what potential biases, anomalies, or errors may have been introduced. By providing services that leverage data lineage provenance and domain expertise, expert systems can be built to aid the user in understanding data sources and processing, and the suitability for use of products generated by the tools. We describe our experiences developing a semantic, provenance-aware, expert-knowledge advisory system applied to the NASA Giovanni web-based Earth science data analysis tool as part of the ESTO AIST-funded Multi-sensor Data Synergy Advisor project.

  17. AQUA-USERS: AQUAculture USEr Driven Operational Remote Sensing Information Services

    NASA Astrophysics Data System (ADS)

    Laanen, Marnix; Poser, Kathrin; Peters, Steef; de Reus, Nils; Ghebrehiwot, Semhar; Eleveld, Marieke; Miller, Peter; Groom, Steve; Clements, Oliver; Kurekin, Andrey; Martinez Vicente, Victor; Brotas, Vanda; Sa, Carolina; Couto, Andre; Brito, Ana; Amorim, Ana; Dale, Trine; Sorensen, Kai; Boye Hansen, Lars; Huber, Silvia; Kaas, Hanne; Andersson, Henrik; Icely, John; Fragoso, Bruno

    2015-12-01

    The FP7 project AQUA-USERS provides the aquaculture industry with user-relevant and timely information based on the most up-to-date satellite data and innovative optical in-situ measurements. Its key purpose is to develop an application that brings together satellite information on water quality and temperature with in-situ observations as well as relevant weather prediction and met-ocean data. The application and its underlying database are linked to a decision support system that includes a set of (user-determined) management options. Specific focus is on the development of indicators for aquaculture management including indicators for harmful algae bloom (HAB) events. The methods and services developed within AQUA-USERS are tested by the members of the user board, who represent different geographic areas and aquaculture production systems.

  18. Use of a gesture user interface as a touchless image navigation system in dental surgery: Case series report

    PubMed Central

    Elizondo, María L.

    2014-01-01

    Purpose The purposes of this study were to develop a workstation computer that allowed intraoperative touchless control of diagnostic and surgical images by a surgeon, and to report the preliminary experience with the use of the system in a series of cases in which dental surgery was performed. Materials and Methods A custom workstation with a new motion sensing input device (Leap Motion) was set up in order to use a natural user interface (NUI) to manipulate the imaging software by hand gestures. The system allowed intraoperative touchless control of the surgical images. Results For the first time in the literature, an NUI system was used for a pilot study during 11 dental surgery procedures including tooth extractions, dental implant placements, and guided bone regeneration. No complications were reported. The system performed very well and was very useful. Conclusion The proposed system fulfilled the objective of providing touchless access and control of the system of images and a three-dimensional surgical plan, thus allowing the maintenance of sterile conditions. The interaction between surgical staff, under sterile conditions, and computer equipment has been a key issue. A solution based on an NUI with touchless control of the images comes close to this ideal. The cost of the sensor system is quite low, which could facilitate its incorporation into the practice of routine dental surgery. This technology has enormous potential in dental surgery and other healthcare specialties. PMID:24944966

  19. A System for Discovering Bioengineered Threats by Knowledge Base Driven Mining of Toxin Data

    DTIC Science & Technology

    2004-08-01

    RMSD cut-off and select a residue substitution matrix. The user is also allowed...in the sense that after super-positioning, the RMSD between the substructures is no more than the cut-off RMSD. * Residue substitutions are allowed...during super-positioning. Default RMSD cut-off and residue substitution matrix are provided. Users can specify their own RMSD cut-offs as well as

  20. Powered wheelchair simulator development: implementing combined navigation-reaching tasks with a 3D hand motion controller.

    PubMed

    Tao, Gordon; Archambault, Philippe S

    2016-01-19

    Powered wheelchair (PW) training involving combined navigation and reaching is often limited or unfeasible. Virtual reality (VR) simulators offer a feasible alternative for rehabilitation training either at home or in a clinical setting. This study evaluated a low-cost magnetic-based hand motion controller as an interface for reaching tasks within the McGill Immersive Wheelchair (miWe) simulator. Twelve experienced PW users performed three navigation-reaching tasks in the real world (RW) and in VR: working at a desk, using an elevator, and opening a door. The sense of presence in VR was assessed using the iGroup Presence Questionnaire (IPQ). We determined the concordance of task performance in VR with that in the RW. A video task analysis was performed to analyse task behaviours. Compared to previous miWe data, IPQ scores were greater in the involvement domain (p < 0.05). Task analysis showed that most navigation and reaching behaviours had moderate to excellent agreement (Cohen's Kappa K > 0.4) between the two environments, but there was a greater (p < 0.05) risk of collisions and reaching errors in VR. VR performance showed longer (p < 0.05) task times and more discrete movements for the elevator and desk tasks but not the door task. Task performance was kinematically poorer in VR than in the RW, but strategies were similar. Therefore, the reaching component represents a promising addition to the miWe training simulator, though some limitations must be addressed in future development.

  1. A tool for NDVI time series extraction from wide-swath remotely sensed images

    NASA Astrophysics Data System (ADS)

    Li, Zhishan; Shi, Runhe; Zhou, Cong

    2015-09-01

    The Normalized Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring vegetation coverage on the land surface. The time series features of NDVI are capable of reflecting dynamic changes of various ecosystems. Calculating NDVI from Moderate Resolution Imaging Spectroradiometer (MODIS) and other wide-swath remotely sensed images provides an important way to monitor the spatial and temporal characteristics of large-scale NDVI. However, difficulties still exist for ecologists in extracting such information correctly and efficiently, because of the specialist processing required on the original remote sensing images, including radiometric calibration, geometric correction, multiple-data composition and curve smoothing. In this study, we developed an efficient and convenient online toolbox, with a friendly graphical user interface, for non-remote-sensing professionals who want to extract NDVI time series. Technically, it is based on Java Web and Web GIS. Moreover, the Struts, Spring and Hibernate frameworks (SSH) are integrated into the system for easy maintenance and expansion. Latitude, longitude and time period are the key inputs that users need to provide, and the NDVI time series are calculated automatically.
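    For readers unfamiliar with the underlying calculation: the index the toolbox automates is simply the normalized difference of near-infrared and red reflectance. The following minimal Python sketch (not part of the described Java Web system; the band values and dates are illustrative) shows the per-pixel computation applied along a time series.

        import numpy as np

        def ndvi(nir, red, eps=1e-6):
            """Normalized Difference Vegetation Index from NIR and red reflectance."""
            nir = np.asarray(nir, dtype=float)
            red = np.asarray(red, dtype=float)
            return (nir - red) / (nir + red + eps)

        # Hypothetical time series of MODIS-like surface reflectances at one site.
        dates = ["2015-01", "2015-02", "2015-03"]
        red_series = [0.08, 0.07, 0.05]
        nir_series = [0.30, 0.35, 0.42]

        series = ndvi(nir_series, red_series)
        for d, v in zip(dates, series):
            print(f"{d}: NDVI = {v:.3f}")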

  2. Location Privacy for Mobile Crowd Sensing through Population Mapping †

    PubMed Central

    Shin, Minho; Cornelius, Cory; Kapadia, Apu; Triandopoulos, Nikos; Kotz, David

    2015-01-01

    Opportunistic sensing allows applications to “task” mobile devices to measure context in a target region. For example, one could leverage sensor-equipped vehicles to measure traffic or pollution levels on a particular street or users' mobile phones to locate (Bluetooth-enabled) objects in their vicinity. In most proposed applications, context reports include the time and location of the event, putting the privacy of users at increased risk: even if identifying information has been removed from a report, the accompanying time and location can reveal sufficient information to de-anonymize the user whose device sent the report. We propose and evaluate a novel spatiotemporal blurring mechanism based on tessellation and clustering to protect users' privacy against the system while reporting context. Our technique employs a notion of probabilistic k-anonymity; it allows users to perform local blurring of reports efficiently without an online anonymization server before the data are sent to the system. The proposed scheme can control the degree of certainty in location privacy and the quality of reports through a system parameter. We outline the architecture and security properties of our approach and evaluate our tessellation and clustering algorithm against real mobility traces. PMID:26131676
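    To make the blurring idea concrete, here is a deliberately simplified Python sketch (not the paper's tessellation-and-clustering algorithm; the tile size and coordinates are illustrative) in which each GPS fix is snapped to the centre of a fixed-size grid tile before a report leaves the device, so a report cannot be localized more precisely than one tile.

        import math

        def blur_location(lat, lon, tile_deg=0.01):
            """Snap a GPS fix to the centre of a fixed-size grid tile (spatial blurring)."""
            lat_c = (math.floor(lat / tile_deg) + 0.5) * tile_deg
            lon_c = (math.floor(lon / tile_deg) + 0.5) * tile_deg
            return lat_c, lon_c

        # Two nearby fixes map to the same tile centre and become indistinguishable.
        print(blur_location(43.70412, -72.28860))
        print(blur_location(43.70955, -72.28117))

    In the actual scheme the cells are derived from population data via tessellation and clustering so that each cell covers roughly k potential reporters, which is what yields the probabilistic k-anonymity described above.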

  3. Awareness and Learning in Participatory Noise Sensing

    PubMed Central

    Becker, Martin; Caminiti, Saverio; Fiorella, Donato; Francis, Louise; Gravino, Pietro; Haklay, Mordechai (Muki); Hotho, Andreas; Loreto, Vittorio; Mueller, Juergen; Ricchiuti, Ferdinando; Servedio, Vito D. P.; Sîrbu, Alina; Tria, Francesca

    2013-01-01

    The development of ICT infrastructures has facilitated the emergence of new paradigms for looking at society and the environment over the last few years. Participatory environmental sensing, i.e. directly involving citizens in environmental monitoring, is one example, which is hoped to encourage learning and enhance awareness of environmental issues. In this paper, an analysis of the behaviour of individuals involved in noise sensing is presented. Citizens have been involved in noise measuring activities through the WideNoise smartphone application. This application has been designed to record both objective (noise samples) and subjective (opinions, feelings) data. The application has been freely available to anyone and has been widely employed worldwide. In addition, several test cases have been organised in European countries. Based on the information submitted by users, an analysis of emerging awareness and learning is performed. The data show that changes do appear in the way the environment is perceived after repeated usage of the application. Specifically, users learn how to recognise the different noise levels they are exposed to. Additionally, the subjective data collected indicate increased user involvement over time and a categorisation effect between pleasant and less pleasant environments. PMID:24349102

  4. User surveys support designing a prosthetic wrist that incorporates the Dart Thrower's Motion.

    PubMed

    Davidson, Matthew; Bodine, Cathy; Weir, Richard F Ff

    2018-03-07

    Prosthetic devices are not meeting the needs of people with upper limb amputations. Due to control site limitations, prosthetic wrists cannot yet be fully articulated. This study sought to determine which wrist motions users felt were most important for completing activities of daily living. We specifically investigated whether adding a combination of flexion and deviation known as the Dart Thrower's Motion to a prosthetic wrist would help improve functionality. Fifteen participants with a trans-radial amputation, aged 25-64 years, who use a prosthesis completed an online survey and answered interview questions to determine which types of tasks pose particular challenges. Participants were asked what kinds of improvements they would like to see in a new prosthesis. A subset of five participants were interviewed in-depth to provide further information about difficulties they face using their device. The survey showed that participants had difficulty performing activities of daily living that involve a combination of wrist flexion and deviation known as the "Dart Thrower's Motion". Interview responses confirmed that users have difficulty performing these tasks, especially those that require tools. Additionally, users said that they were more interested in having flexion and deviation than rotation in a prosthetic wrist. This research indicates that including the Dart Thrower's Motion in future designs of prosthetic wrists would improve these devices, and people with upper limb amputations would be excited to see this improvement in their devices. Implications for Rehabilitation • Over one third of people with upper limb amputations do not use a prosthesis because prosthetic devices do not meet their needs. • The number of motions possible in state-of-the-art prosthetic devices is limited by the small number of control sites available. • The Dart Thrower's Motion is a wrist motion used for many activities of daily living but unavailable in commercial prosthetics, leading many prosthesis users to have difficulty with these tasks. • Prosthetic use, and therefore quality of life, could be improved by including the Dart Thrower's Motion in a prosthesis.

  5. FUNCTIONAL ASSESSMENT OF A CAMERA PHONE-BASED WAYFINDING SYSTEM OPERATED BY BLIND AND VISUALLY IMPAIRED USERS

    PubMed Central

    COUGHLAN, JAMES; MANDUCHI, ROBERTO

    2009-01-01

    We describe a wayfinding system for blind and visually impaired persons that uses a camera phone to determine the user's location with respect to color markers, posted at locations of interest (such as offices), which are automatically detected by the phone. The color marker signs are specially designed to be detected in real time in cluttered environments using computer vision software running on the phone; a novel segmentation algorithm quickly locates the borders of the color marker in each image, which allows the system to calculate how far the marker is from the phone. We present a model of how the user's scanning strategy (i.e. how he/she pans the phone left and right to find color markers) affects the system's ability to detect color markers given the limitations imposed by motion blur, which is always a possibility whenever a camera is in motion. Finally, we describe experiments with our system tested by blind and visually impaired volunteers, demonstrating their ability to reliably use the system to find locations designated by color markers in a variety of indoor and outdoor environments, and elucidating which search strategies were most effective for users. PMID:19960101

  6. FUNCTIONAL ASSESSMENT OF A CAMERA PHONE-BASED WAYFINDING SYSTEM OPERATED BY BLIND AND VISUALLY IMPAIRED USERS.

    PubMed

    Coughlan, James; Manduchi, Roberto

    2009-06-01

    We describe a wayfinding system for blind and visually impaired persons that uses a camera phone to determine the user's location with respect to color markers, posted at locations of interest (such as offices), which are automatically detected by the phone. The color marker signs are specially designed to be detected in real time in cluttered environments using computer vision software running on the phone; a novel segmentation algorithm quickly locates the borders of the color marker in each image, which allows the system to calculate how far the marker is from the phone. We present a model of how the user's scanning strategy (i.e. how he/she pans the phone left and right to find color markers) affects the system's ability to detect color markers given the limitations imposed by motion blur, which is always a possibility whenever a camera is in motion. Finally, we describe experiments with our system tested by blind and visually impaired volunteers, demonstrating their ability to reliably use the system to find locations designated by color markers in a variety of indoor and outdoor environments, and elucidating which search strategies were most effective for users.
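    The distance estimate mentioned in both records above follows from the pinhole camera model: a marker of known physical size that appears smaller in the image must be farther away. A hedged Python sketch follows; the focal length and marker size below are illustrative assumptions, not values from the paper.

        def marker_distance(marker_width_px, marker_width_m=0.10, focal_length_px=800.0):
            """Estimate camera-to-marker distance from the marker's apparent width.

            Pinhole model: apparent_width_px = focal_length_px * real_width_m / distance_m,
            so distance_m = focal_length_px * real_width_m / apparent_width_px.
            """
            return focal_length_px * marker_width_m / marker_width_px

        # A 10 cm marker spanning 40 pixels with an ~800 px focal length is ~2 m away.
        print(f"{marker_distance(40):.2f} m")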

  7. Pyroelectric IR sensor arrays for fall detection in the older population

    NASA Astrophysics Data System (ADS)

    Sixsmith, A.; Johnson, N.; Whatmore, R.

    2005-09-01

    Uncooled pyroelectric sensor arrays have been studied over many years for their uses in thermal imaging applications. These arrays only detect changes in IR flux, so systems based upon them are very good at detecting movements of people in the scene without sensing the background, if they are used in staring mode. Relatively low element-count arrays (16 x 16) can be used for a variety of people-sensing applications, including people counting (for safety applications), queue monitoring, etc. With appropriate signal processing, such systems can also be used for the detection of particular events, such as a person falling over. There is a considerable need for automatic fall detection amongst older people, but there are important limitations to some of the current and emerging technologies available for this. Simple sensors, such as 1- or 2-element pyroelectric infra-red sensors, provide crude data that is difficult to interpret; devices worn on the person, such as wrist communicators and motion detectors, have potential, but are reliant on the person being able and willing to wear the device; video cameras may be seen as intrusive and require considerable human resources to monitor activity, while machine interpretation of camera images is complex and may be difficult in this application area. The use of a pyroelectric thermal array sensor was seen to have a number of potential benefits. The sensor is wall-mounted and does not require the user to wear a device. It enables detailed analysis of a subject's motion to be achieved locally, within the detector, using only a modest processor. This is possible due to the relative ease with which data from the sensor can be interpreted, compared to the data generated by alternative sensors such as video devices. In addition to the cost-effectiveness of this solution, it was felt that the lack of detail in the low-level data, together with the elimination of the need to transmit data outside the detector, would help to avert feelings of intrusiveness on the part of the end-user. The main benefits of this type of technology would be for older people who spend time alone in unsupervised environments. This would include people living alone in ordinary housing or in sheltered accommodation (apartment complexes for older people with a local warden) and non-communal areas in residential/nursing home environments (e.g. bedrooms and ensuite bathrooms and toilets). This paper will review the development of the array, the pyroelectric ceramic material upon which it is based, and the system capabilities. It will present results from the Framework 5 SIMBAD project, which used the system to monitor the movements of elderly people over a considerable period of time.
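    The abstract describes the sensing principle (the array responds only to changes in IR flux, and motion analysis runs locally on a modest processor) without giving the SIMBAD algorithm. The toy Python heuristic below is therefore only a sketch of how such 16 x 16 frames could be analysed: frame differencing isolates the moving person, and a sharp drop of the activity centroid over a few frames is flagged as a possible fall. The thresholds and window sizes are invented for illustration.

        import numpy as np

        def activity_centroid_row(prev_frame, frame, threshold=0.5):
            """Row (height) of the centroid of pixels whose IR flux changed.

            The pyroelectric array only responds to changes in flux, so frame
            differencing isolates the moving person from the static background.
            Returns None when nothing moves.
            """
            diff = np.abs(np.asarray(frame, float) - np.asarray(prev_frame, float))
            active = diff > threshold
            if not active.any():
                return None
            return float(np.nonzero(active)[0].mean())

        def looks_like_fall(centroid_rows, drop_rows=6, window=5):
            """Flag a fall when the activity centroid drops sharply within a few frames.

            Image row indices grow downward, so a falling person makes the
            centroid row index increase quickly.
            """
            recent = [r for r in centroid_rows[-window:] if r is not None]
            return len(recent) >= 2 and (recent[-1] - recent[0]) >= drop_rows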

  8. Effects of Vibrotactile Feedback on Human Learning of Arm Motions

    PubMed Central

    Bark, Karlin; Hyman, Emily; Tan, Frank; Cha, Elizabeth; Jax, Steven A.; Buxbaum, Laurel J.; Kuchenbecker, Katherine J.

    2015-01-01

    Tactile cues generated from lightweight, wearable actuators can help users learn new motions by providing immediate feedback on when and how to correct their movements. We present a vibrotactile motion guidance system that measures arm motions and provides vibration feedback when the user deviates from a desired trajectory. A study was conducted to test the effects of vibrotactile guidance on a subject's ability to learn arm motions. Twenty-six subjects learned motions of varying difficulty with both visual (V) and combined visual and vibrotactile (VVT) feedback over the course of four days of training. After four days of rest, subjects returned to perform the motions from memory with no feedback. We found that augmenting visual feedback with vibrotactile feedback helped subjects significantly reduce the root mean square (rms) angle error of their limb while they were learning the motions, particularly for 1-DOF motions. Analysis of the retention data showed no significant difference in rms angle errors between feedback conditions. PMID:25486644

  9. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion

    PubMed Central

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-01-01

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time. PMID:28475145

  10. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion.

    PubMed

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-05-05

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time.
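    The core idea, high-rate inertial data for low latency plus slower visual poses to bound drift, with a filter that balances jitter against latency, can be illustrated with a deliberately reduced single-axis complementary filter. This Python sketch is not the paper's adaptive filter framework; the fixed blend weight simply stands in for the quantity their method adapts to the motion situation.

        class ComplementaryFilter:
            """Minimal 1-axis visual-inertial fusion sketch."""

            def __init__(self, visual_weight=0.02):
                self.angle = 0.0
                self.visual_weight = visual_weight

            def on_gyro(self, rate_dps, dt):
                # High-frequency, low-latency update from the inertial sensor.
                self.angle += rate_dps * dt
                return self.angle

            def on_visual(self, visual_angle):
                # Slower visual pose pulls the state back to limit drift.
                # A larger weight trusts vision more (less drift, more jitter);
                # an adaptive scheme would vary it with the motion situation.
                w = self.visual_weight
                self.angle = (1.0 - w) * self.angle + w * visual_angle
                return self.angle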

  11. Real-time registration of 3D to 2D ultrasound images for image-guided prostate biopsy.

    PubMed

    Gillies, Derek J; Gardi, Lori; De Silva, Tharindu; Zhao, Shuang-Ren; Fenster, Aaron

    2017-09-01

    During image-guided prostate biopsy, needles are targeted at tissues that are suspicious of cancer to obtain specimens for histological examination. Unfortunately, patient motion causes targeting errors when using an MR-transrectal ultrasound (TRUS) fusion approach to augment the conventional biopsy procedure. This study aims to develop an automatic motion correction algorithm approaching the frame rate of an ultrasound system, to be used in fusion-based prostate biopsy systems. Two modes of operation have been investigated for the clinical implementation of the algorithm: motion compensation using a single user-initiated correction performed prior to biopsy, and real-time continuous motion compensation performed automatically as a background process. Retrospective 2D and 3D TRUS patient images acquired prior to biopsy gun firing were registered using an intensity-based algorithm utilizing normalized cross-correlation and Powell's method for optimization. 2D and 3D images were downsampled and cropped to estimate the optimal amount of image information that would perform registrations quickly and accurately. The optimal search order during optimization was also analyzed to avoid local optima in the search space. Error in the algorithm was computed using target registration errors (TREs) from manually identified homologous fiducials in a clinical patient dataset. The algorithm was evaluated for real-time performance under the two clinical implementation modes, user-initiated and continuous motion compensation, on a tissue-mimicking prostate phantom. After implementation in a TRUS-guided system with an image downsampling factor of 4, the proposed approach resulted in a mean ± std TRE and computation time of 1.6 ± 0.6 mm and 57 ± 20 ms, respectively. The user-initiated mode performed registrations for in-plane, out-of-plane, and roll motions with computation times of 108 ± 38 ms, 60 ± 23 ms, and 89 ± 27 ms, respectively, and corresponding registration errors of 0.4 ± 0.3 mm, 0.2 ± 0.4 mm, and 0.8 ± 0.5°. The continuous method performed registration significantly faster (P < 0.05) than the user-initiated method, with observed computation times of 35 ± 8 ms, 43 ± 16 ms, and 27 ± 5 ms for in-plane, out-of-plane, and roll motions, respectively, and corresponding registration errors of 0.2 ± 0.3 mm, 0.7 ± 0.4 mm, and 0.8 ± 1.0°. The presented method encourages real-time implementation of motion compensation algorithms in prostate biopsy with clinically acceptable registration errors. Continuous motion compensation demonstrated registration accuracy with submillimeter and subdegree error, while achieving computation times of < 50 ms. An image registration technique approaching the frame rate of an ultrasound system offers a key advantage: it can be smoothly integrated into the clinical workflow. In addition, this technique could be used further for a variety of image-guided interventional procedures to treat and diagnose patients by improving targeting accuracy. © 2017 American Association of Physicists in Medicine.
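    As a rough illustration of the registration machinery named in the abstract (normalized cross-correlation as the similarity metric, Powell's method as the optimizer, downsampling for speed), here is a hedged 2D-translation-only Python sketch using NumPy/SciPy. The real system optimizes in-plane, out-of-plane and roll motion on 3D TRUS volumes; the search space, downsampling handling and interpolation below are simplified assumptions.

        import numpy as np
        from scipy.ndimage import shift
        from scipy.optimize import minimize

        def ncc(a, b):
            """Normalized cross-correlation of two same-sized images."""
            a = (a - a.mean()) / (a.std() + 1e-9)
            b = (b - b.mean()) / (b.std() + 1e-9)
            return float((a * b).mean())

        def register_translation(fixed, moving, downsample=4):
            """Estimate an in-plane (dy, dx) shift by maximizing NCC with Powell's method."""
            f = fixed[::downsample, ::downsample].astype(float)
            m = moving[::downsample, ::downsample].astype(float)

            def cost(p):
                # Negative NCC so that the minimizer maximizes similarity.
                return -ncc(f, shift(m, p, order=1, mode="nearest"))

            res = minimize(cost, x0=[0.0, 0.0], method="Powell")
            return res.x * downsample  # shift in full-resolution pixels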

  12. Robonaut 2 - The First Humanoid Robot in Space

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Radford, N. A.; Mehling, J. S.; Abdallah, M. E.; Bridgwater, L. B.; Sanders, A. M.; Askew, R. S.; Linn, D. M.; Yamokoski, J. D.; Permenter, F. A.

    2010-01-01

    NASA and General Motors have developed the second-generation Robonaut, Robonaut 2 or R2, and it is scheduled to arrive on the International Space Station in late 2010 and undergo initial testing in early 2011. This state-of-the-art, dexterous, anthropomorphic robotic torso has significant technical improvements over its predecessor, making it a far more valuable tool for astronauts. Upgrades include: increased force sensing, greater range of motion, higher bandwidth and improved dexterity. R2's integrated mechatronics design results in a more compact and robust distributed control system with a fraction of the wiring of the original Robonaut. Modularity is prevalent throughout the hardware and software, along with innovative and layered approaches for sensing and control. The most important aspects of the Robonaut philosophy are clearly present in this latest model's ability to allow comfortable human interaction and in its design to perform significant work using the same hardware and interfaces used by people. The following describes the mechanisms, integrated electronics, control strategies and user interface that make R2 a promising addition to the Space Station and other environments where humanoid robots can assist people.

  13. Space motion sickness

    NASA Technical Reports Server (NTRS)

    Homick, J. L.

    1979-01-01

    Research on the etiology, prediction, treatment and prevention of space motion sickness, designed to minimize the impact on Space Shuttle crews of this syndrome, which was experienced frequently and severely by individuals on the Skylab missions, is reviewed. Theories of the cause of space motion sickness currently under investigation by NASA include sensory conflict, which argues that motion sickness symptoms result from a mismatch between the total pattern of information from the spatial senses and that stored from previous experiences, and fluid shift, based upon the redistribution of bodily fluids that occurs upon continued exposure to weightlessness. Attempts are underway to correlate space motion sickness susceptibility to different provocative environments, vestibular and nonvestibular responses, and the rate of acquisition and length of retention of sensory adaptation. Space motion sickness countermeasures under investigation include various drug combinations, of which the equal combination of promethazine and ephedrine has been found to be as effective as the scopolamine and Dexedrine combination, as well as vestibular adaptation, biofeedback training and autogenic therapy.

  14. Motion sickness: a negative reinforcement model.

    PubMed

    Bowins, Brad

    2010-01-15

    Theories pertaining to the "why" of motion sickness are in short supply relative to those detailing the "how." Considering the profoundly disturbing and dysfunctional symptoms of motion sickness, it is difficult to conceive of why this condition is so strongly biologically based in humans and most other mammalian and primate species. It is posited that motion sickness evolved as a potent negative reinforcement system designed to terminate motion involving sensory conflict or postural instability. During our evolution and that of many other species, motion of this type would have impaired evolutionary fitness via injury and/or signaling weakness and vulnerability to predators. The symptoms of motion sickness strongly motivate the individual to terminate the offending motion by early avoidance, cessation of movement, or removal of oneself from the source. The motion sickness negative reinforcement mechanism functions much like pain to strongly motivate evolutionary fitness preserving behavior. Alternative why theories focusing on the elimination of neurotoxins and the discouragement of motion programs yielding vestibular conflict suffer from several problems, foremost that neither can account for the rarity of motion sickness in infants and toddlers. The negative reinforcement model proposed here readily accounts for the absence of motion sickness in infants and toddlers, in that providing strong motivation to terminate aberrant motion does not make sense until a child is old enough to act on this motivation.

  15. A TMS study on the contribution of visual area V5 to the perception of implied motion in art and its appreciation.

    PubMed

    Cattaneo, Zaira; Schiavi, Susanna; Silvanto, Juha; Nadal, Marcos

    2017-01-01

    Over the last decade, researchers have sought to understand the brain mechanisms involved in the appreciation of art. Previous studies reported an increased activity in sensory processing regions for artworks that participants find more appealing. Here we investigated the intriguing possibility that activity in cortical area V5-a region in the occipital cortex mediating physical and implied motion detection-is related not only to the generation of a sense of motion from visual cues used in artworks, but also to the appreciation of those artworks. Art-naïve participants viewed a series of paintings and quickly judged whether or not the paintings conveyed a sense of motion, and whether or not they liked them. Triple-pulse TMS applied over V5 while viewing the paintings significantly decreased the perceived sense of motion, and also significantly reduced liking of abstract (but not representational) paintings. Our data demonstrate that V5 is involved in extracting motion information even when the objects whose motion is implied are pictorial representations (as opposed to photographs or film frames), and even in the absence of any figurative content. Moreover, our study suggests that, in the case of untrained people, V5 activity plays a causal role in the appreciation of abstract but not of representational art.

  16. Apparatus and Method for Assessing Vestibulo-Ocular Function

    NASA Technical Reports Server (NTRS)

    Shelhamer, Mark J. (Inventor)

    2015-01-01

    A system for assessing vestibulo-ocular function includes a motion sensor system adapted to be coupled to a user's head; a data processing system configured to communicate with the motion sensor system to receive the head-motion signals; a visual display system configured to communicate with the data processing system to receive image signals from the data processing system; and a gain control device arranged to be operated by the user and to communicate gain adjustment signals to the data processing system.

  17. Algorithms and architectures for robot vision

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S.

    1990-01-01

    The scope of the current work is to develop practical sensing implementations for robots operating in complex, partially unstructured environments. A focus in this work is to develop object models and estimation techniques which are specific to the requirements of robot locomotion, approach and avoidance, and grasp and manipulation. Such problems have to date received limited attention in either computer or human vision - in essence, asking not only how perception is modeled in general, but also what the functional purpose of its underlying representations is. As in the past, researchers are drawing on ideas from both the psychological and machine vision literature. Of particular interest is the development of 3-D shape and motion estimates for complex objects when given only partial and uncertain information and when such information is incrementally accrued over time. Current studies consider the use of surface motion, contour, and texture information, with the longer-range goal of developing a fused sensing strategy based on these sources and others.

  18. Motion Artifact Quantification and Sensor Fusion for Unobtrusive Health Monitoring.

    PubMed

    Hoog Antink, Christoph; Schulz, Florian; Leonhardt, Steffen; Walter, Marian

    2017-12-25

    Sensors integrated into objects of everyday life potentially allow unobtrusive health monitoring at home. However, since the coupling of sensors and subject is not as well defined as in a clinical setting, the signal quality is much more variable and can be disturbed significantly by motion artifacts. One way of tackling this challenge is the combined evaluation of multiple channels via sensor fusion. For robust and accurate sensor fusion, analyzing the influence of motion on different modalities is crucial. In this work, a multimodal sensor setup integrated into an armchair is presented that combines capacitively coupled electrocardiography, reflective photoplethysmography, two high-frequency impedance sensors and two types of ballistocardiography sensors. To quantify motion artifacts, a motion protocol performed by healthy volunteers is recorded with a motion capture system, and reference sensors perform cardiorespiratory monitoring. The shape-based signal-to-noise ratio (SNR_S) is introduced and used to quantify the effect of motion on different sensing modalities. Based on this analysis, an optimal combination of sensors and fusion methodology is developed and evaluated. Using the proposed approach, beat-to-beat heart rate is estimated with a coverage of 99.5% and a mean absolute error of 7.9 ms on 425 min of data from seven volunteers in a proof-of-concept measurement scenario.
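    The abstract does not spell out the fusion rule, so the Python sketch below only illustrates the general principle of quality-weighted fusion: each channel's beat-to-beat interval estimate is weighted by a per-channel quality score (in the spirit of a shape-based SNR computed per channel and per window), so that motion-corrupted channels barely influence the fused estimate. The channel names, values and weights are illustrative.

        import numpy as np

        def fuse_intervals(channel_intervals, channel_quality):
            """Fuse beat-to-beat interval estimates from several sensing channels."""
            intervals = np.asarray(channel_intervals, dtype=float)
            quality = np.clip(np.asarray(channel_quality, dtype=float), 0.0, None)
            if quality.sum() == 0:
                return float("nan")  # no usable channel in this window
            return float(np.average(intervals, weights=quality))

        # cECG, PPG and BCG suggest 812 ms, 840 ms and 1020 ms; BCG is motion-corrupted.
        print(fuse_intervals([812, 840, 1020], [0.9, 0.7, 0.05]))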

  19. Fidelity of the ensemble code for visual motion in primate retina.

    PubMed

    Frechette, E S; Sher, A; Grivich, M I; Petrusca, D; Litke, A M; Chichilnisky, E J

    2005-07-01

    Sensory experience typically depends on the ensemble activity of hundreds or thousands of neurons, but little is known about how populations of neurons faithfully encode behaviorally important sensory information. We examined how precisely speed of movement is encoded in the population activity of magnocellular-projecting parasol retinal ganglion cells (RGCs) in macaque monkey retina. Multi-electrode recordings were used to measure the activity of approximately 100 parasol RGCs simultaneously in isolated retinas stimulated with moving bars. To examine how faithfully the retina signals motion, stimulus speed was estimated directly from recorded RGC responses using an optimized algorithm that resembles models of motion sensing in the brain. RGC population activity encoded speed with a precision of approximately 1%. The elementary motion signal was conveyed in approximately 10 ms, comparable to the interspike interval. Temporal structure in spike trains provided more precise speed estimates than time-varying firing rates. Correlated activity between RGCs had little effect on speed estimates. The spatial dispersion of RGC receptive fields along the axis of motion influenced speed estimates more strongly than along the orthogonal direction, as predicted by a simple model based on RGC response time variability and optimal pooling. ON and OFF cells encoded speed with similar and statistically independent variability. Simulation of downstream speed estimation using populations of speed-tuned units showed that peak (winner-take-all) readout provided more precise speed estimates than centroid (vector average) readout. These findings reveal how faithfully the retinal population code conveys information about stimulus speed and the consequences for motion sensing in the brain.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, M; Kim, T; Kang, S

    Purpose: The purpose of this work is to develop a new patient set-up monitoring system using force sensing resistor (FSR) sensors that can confirm the pressure of the contact surface, and to evaluate its feasibility. Methods: In this study, we focused on developing a patient set-up monitoring system to compensate for the limitations of existing optical-based monitoring systems, so that the developed system can report patient motion during radiation therapy. The set-up monitoring system was designed consisting of sensor units (FSR sensors), signal conditioning devices (USB cable/interface electronics), a control PC, and purpose-built analysis software. The sensor unit was made by attaching an FSR sensor to a pressure-dispersing sponge to prevent errors caused by pressure concentrating on a specific point. The measured signal from the FSR sensor was sampled by an Arduino Mega 2560 microcontroller and transferred to the control PC using serial communication. The measured data went through a normalization process. The normalized data was displayed through the developed graphical user interface (GUI) software. The software was designed to display the intensity of a single sensor unit (up to 16 sensors) or a 2D pressure distribution (using 16 sensors), according to the purpose. Results: Changes in pressure values with motion were confirmed by the developed set-up monitoring system. Very small movements, such as slight physical changes in appearance, can be detected using a single unit or the 2D pressure distribution. The set-up monitoring system also operates in real time. Conclusion: In this study, we developed a new set-up monitoring system using FSR sensors. In particular, we expect the new set-up monitoring system to be suitable for monitoring motion in blind areas that are hard for existing optical systems to observe, thereby complementing existing optical-based monitoring systems. As a further study, an integrated system will be constructed through correlation with the existing optical monitoring system. This work was supported by the Industrial R&D program of MOTIE/KEIT [10048997, Development of the core technology for integrated therapy devices based on real-time MRI guided tumor tracking] and the Mid-career Researcher Program (2014R1A2A1A10050270) through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning.
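    The abstract describes the data path (FSR sensors sampled by an Arduino Mega 2560, sent to the control PC over serial, then normalized before display). A minimal host-side Python sketch of that receiving step is shown below; it assumes the pyserial package and a firmware that prints one comma-separated line of 10-bit ADC values per sample, which is an assumption for illustration rather than the format used in this work.

        import serial  # pyserial, assumed to be installed

        def read_fsr_frame(port="/dev/ttyACM0", n_sensors=16, baseline=None):
            """Read and normalize one frame of FSR readings sent as a CSV line.

            Values are scaled to 0..1; if a baseline frame (patient in the
            reference set-up position) is supplied, it is subtracted so the
            output reflects set-up deviation rather than absolute pressure.
            """
            with serial.Serial(port, 115200, timeout=1.0) as ser:
                line = ser.readline().decode("ascii", errors="ignore").strip()
            values = [int(v) / 1023.0 for v in line.split(",") if v][:n_sensors]
            if baseline is not None:
                values = [v - b for v, b in zip(values, baseline)]
            return values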

  1. Ocean Colour Products from Remote Sensing Related to In-Situ Data for Supporting Management of Offshore Aquaculture

    NASA Astrophysics Data System (ADS)

    Fragoso, Bruno Dias Duarte; Icely, John; Moore, Gerald; Laanen, Marnix; Ghbrehiwot, Semhar

    2016-08-01

    The EU funded "AQUAculture USEr driven operational Remote Sensing information services" project (AQUA-USERS, grant number 607325) is a user-driven project for the aquaculture industry that aims at providing this industry with relevant and timely information based on the most recent satellite data and innovative optical in-situ measurements. The Water Insight Spectrometer (WISP-3) is a hand-held instrument which can provide measurements of the optical parameters Chlorophyll-a (Chl-a), Total Suspended Matter (TSM), Coloured Dissolved Organic Matter (CDOM), and the Spectral Diffuse Attenuation Coefficient (Kd). Sampling campaigns were carried out between March 2014 and September 2015 to collect water samples at the same time as taking optical readings from the WISP-3 at an offshore aquaculture site off Sagres on the SW coast of Portugal, operated by Finisterra Lda, one of the "users" in the project. The estimates from the WISP-3 for Chl-a and TSM have been compared with in-situ measurements from the water samples for these two variables, with the objective of calibrating the algorithms used by the WISP-3 for estimation of Chl-a and TSM. At a later stage in the project, it is expected that WISP-3 readings can be related to remote sensing products developed from the Ocean and Land Colour Instrument (OLCI) on the Sentinel-3 satellite. The key purpose of AQUA-USERS is to develop, in collaboration with "users" from the aquaculture industry, a mobile phone application (app) that collates satellite information on optical water quality and temperature together with in-situ data of these variables to provide a decision support system for daily management of the aquaculture.

  2. Selection of head and whisker coordination strategies during goal-oriented active touch.

    PubMed

    Schroeder, Joseph B; Ritt, Jason T

    2016-04-01

    In the rodent whisker system, a key model for neural processing and behavioral choices during active sensing, whisker motion is increasingly recognized as only part of a broader motor repertoire employed by rodents during active touch. In particular, recent studies suggest whisker and head motions are tightly coordinated. However, conditions governing the selection and temporal organization of such coordinated sensing strategies remain poorly understood. We videographically reconstructed head and whisker motions of freely moving mice searching for a randomly located rewarded aperture, focusing on trials in which animals appeared to rapidly "correct" their trajectory under tactile guidance. Mice orienting after unilateral contact repositioned their whiskers similarly to previously reported head-turning asymmetry. However, whisker repositioning preceded head turn onsets and was not bilaterally symmetric. Moreover, mice selectively employed a strategy we term contact maintenance, with whisking modulated to counteract head motion and facilitate repeated contacts on subsequent whisks. Significantly, contact maintenance was not observed following initial contact with an aperture boundary, when the mouse needed to make a large corrective head motion to the front of the aperture, but only following contact by the same whisker field with the opposite aperture boundary, when the mouse needed to precisely align its head with the reward spout. Together these results suggest that mice can select from a diverse range of sensing strategies incorporating both knowledge of the task and whisk-by-whisk sensory information and, moreover, suggest the existence of high level control (not solely reflexive) of sensing motions coordinated between multiple body parts. Copyright © 2016 the American Physiological Society.

  3. Selection of head and whisker coordination strategies during goal-oriented active touch

    PubMed Central

    2016-01-01

    In the rodent whisker system, a key model for neural processing and behavioral choices during active sensing, whisker motion is increasingly recognized as only part of a broader motor repertoire employed by rodents during active touch. In particular, recent studies suggest whisker and head motions are tightly coordinated. However, conditions governing the selection and temporal organization of such coordinated sensing strategies remain poorly understood. We videographically reconstructed head and whisker motions of freely moving mice searching for a randomly located rewarded aperture, focusing on trials in which animals appeared to rapidly “correct” their trajectory under tactile guidance. Mice orienting after unilateral contact repositioned their whiskers similarly to previously reported head-turning asymmetry. However, whisker repositioning preceded head turn onsets and was not bilaterally symmetric. Moreover, mice selectively employed a strategy we term contact maintenance, with whisking modulated to counteract head motion and facilitate repeated contacts on subsequent whisks. Significantly, contact maintenance was not observed following initial contact with an aperture boundary, when the mouse needed to make a large corrective head motion to the front of the aperture, but only following contact by the same whisker field with the opposite aperture boundary, when the mouse needed to precisely align its head with the reward spout. Together these results suggest that mice can select from a diverse range of sensing strategies incorporating both knowledge of the task and whisk-by-whisk sensory information and, moreover, suggest the existence of high level control (not solely reflexive) of sensing motions coordinated between multiple body parts. PMID:26792880

  4. General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets

    NASA Technical Reports Server (NTRS)

    Marchen, Luis F.

    2011-01-01

    The Coronagraph Performance Error Budget (CPEB) tool automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. The tool uses a Code V prescription of the optical train, and uses MATLAB programs to call ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled fine-steering mirrors (FSMs). The sensitivity matrices are imported by macros into Excel 2007, where the error budget is evaluated. The user specifies the particular optics of interest, and chooses the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions, and combines that with the sensitivity matrices to generate an error budget for the system. CPEB also contains a combination of form and ActiveX controls with Visual Basic for Applications code to allow for user interaction in which the user can perform trade studies such as changing engineering requirements, and identifying and isolating stringent requirements. It contains summary tables and graphics that can be instantly used for reporting results in view graphs. The entire process to obtain a coronagraphic telescope performance error budget has been automated into three stages: conversion of optical prescription from Zemax or Code V to MACOS (in-house optical modeling and analysis tool), a linear models process, and an error budget tool process. The first process was improved by developing a MATLAB package based on the Class Constructor Method with a number of user-defined functions that allow the user to modify the MACOS optical prescription. The second process was modified by creating a MATLAB package that contains user-defined functions that automate the process. The user interfaces with the process by utilizing an initialization file where the user defines the parameters of the linear model computations. Other than this, the process is fully automated. The third process was developed based on the Terrestrial Planet Finder coronagraph Error Budget Tool, but was fully automated by using VBA code, form, and ActiveX controls.
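    The central computation the spreadsheet stage performs, combining nominal thermal and jitter motions with the ray-trace sensitivity matrices to obtain contrast contributions, can be sketched in a few lines. The Python below is only an illustration of that linear roll-up; the matrix shape, the root-sum-square combination and all numbers are assumptions, not values from CPEB.

        import numpy as np

        def contrast_budget(sensitivity, motions):
            """Roll nominal optic motions through a linear sensitivity matrix.

            sensitivity : (n_terms, n_dof) array, contrast change per unit motion
                          of each optic degree of freedom.
            motions     : (n_dof,) array of assumed thermal/jitter motions.

            Returns per-term contributions and their root-sum-square total.
            """
            sensitivity = np.asarray(sensitivity, dtype=float)
            motions = np.asarray(motions, dtype=float)
            terms = sensitivity @ motions
            return terms, float(np.sqrt(np.sum(terms ** 2)))

        # Illustrative two-term, two-degree-of-freedom example.
        S = np.array([[2e-11, 5e-12],   # e.g. beam-walk term
                      [1e-12, 8e-12]])  # e.g. aberration term
        m = np.array([0.5, 1.2])        # assumed motions
        per_term, total = contrast_budget(S, m)
        print(per_term, total)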

  5. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allow, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms as well as about the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how to best relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  6. LAnd surface remote sensing Products VAlidation System (LAPVAS) and its preliminary application

    NASA Astrophysics Data System (ADS)

    Lin, Xingwen; Wen, Jianguang; Tang, Yong; Ma, Mingguo; Dou, Baocheng; Wu, Xiaodan; Meng, Lumin

    2014-11-01

    The long-term record of remote sensing products documents land surface parameters and their spatial and temporal changes, widely supporting regional and global scientific research. Remote sensing products derived from different sensors and different algorithms need to be validated to ensure high product quality. Investigation of remote sensing product validation shows that it is a complex process, involving both requirements on the quality of in-situ data and methods for precision assessment. A comprehensive validation is therefore needed, covering long time series and multiple land surface types. A system named the LAnd surface remote sensing Products VAlidation System (LAPVAS) is designed in this paper to assess the uncertainty of remote sensing products based on a large amount of in-situ data and associated validation techniques. The designed validation system platform consists of three parts: a validation database, a precision analysis subsystem, and the internal and external interfaces of the system. These three parts are built from essential service modules, such as Data-Read, Data-Insert, Data-Associated, Precision-Analysis and Scale-Change service modules. To run the validation system platform, users can order these service modules and choreograph them interactively, and then complete the validation tasks for remote sensing products (such as LAI, albedo, VI, etc.). An SOA-based architecture is taken as the framework of the system; its benefit is that the service modules can be independent of any development environment through standards such as the Web Service Description Language (WSDL). C++ and Java are used as the primary programming languages to create the service modules. One key land surface parameter, albedo, is selected as an example of the system application. It is illustrated that LAPVAS performs well in implementing land surface remote sensing product validation.
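    As an example of the kind of computation a Precision-Analysis service module wraps, the short Python sketch below derives basic match-up statistics (bias, RMSE and correlation) between a retrieved product and collocated in-situ measurements. It is an illustration of the precision-assessment step in general, not code from LAPVAS, and the sample numbers are invented.

        import numpy as np

        def validation_stats(product, in_situ):
            """Basic precision-assessment statistics for a product/in-situ match-up set."""
            p = np.asarray(product, dtype=float)
            g = np.asarray(in_situ, dtype=float)
            bias = float(np.mean(p - g))
            rmse = float(np.sqrt(np.mean((p - g) ** 2)))
            r = float(np.corrcoef(p, g)[0, 1])
            return {"bias": bias, "rmse": rmse, "r": r}

        # Hypothetical albedo match-ups: retrieved vs. ground-measured values.
        print(validation_stats([0.21, 0.18, 0.25], [0.20, 0.19, 0.27]))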

  7. Respiratory motion resolved, self-gated 4D-MRI using Rotating Cartesian K-space (ROCK)

    PubMed Central

    Han, Fei; Zhou, Ziwu; Cao, Minsong; Yang, Yingli; Sheng, Ke; Hu, Peng

    2017-01-01

    Purpose To propose and validate a respiratory motion resolved, self-gated (SG) 4D-MRI technique to assess patient-specific breathing motion of abdominal organs for radiation treatment planning. Methods The proposed 4D-MRI technique was based on the balanced steady-state free-precession (bSSFP) technique and 3D k-space encoding. A novel ROtating Cartesian K-space (ROCK) reordering method was designed that incorporates repeatedly sampled k-space centerline as the SG motion surrogate and allows for retrospective k-space data binning into different respiratory positions based on the amplitude of the surrogate. The multiple respiratory-resolved 3D k-space data were subsequently reconstructed using a joint parallel imaging and compressed sensing method with spatial and temporal regularization. The proposed 4D-MRI technique was validated using a custom-made dynamic motion phantom and was tested in 6 healthy volunteers, in whom quantitative diaphragm and kidney motion measurements based on 4D-MRI images were compared with those based on 2D-CINE images. Results The 5-minute 4D-MRI scan offers high-quality volumetric images in 1.2×1.2×1.6mm3 and 8 respiratory positions, with good soft-tissue contrast. In phantom experiments with triangular motion waveform, the motion amplitude measurements based on 4D-MRI were 11.89% smaller than the ground truth, whereas a −12.5% difference was expected due to data binning effects. In healthy volunteers, the difference between the measurements based on 4D-MRI and the ones based on 2D-CINE were 6.2±4.5% for the diaphragm, 8.2±4.9% and 8.9±5.1% for the right and left kidney. Conclusion The proposed 4D-MRI technique could provide high resolution, high quality, respiratory motion resolved 4D images with good soft-tissue contrast and are free of the “stitching” artifacts usually seen on 4D-CT and 4D-MRI based on resorting 2D-CINE. It could be used to visualize and quantify abdominal organ motion for MRI-based radiation treatment planning. PMID:28133752
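    The retrospective binning step, assigning every acquired readout to a respiratory position according to the amplitude of the self-gating surrogate, can be sketched compactly. The Python below is a simplified illustration (quantile-based, equal-count bins are one common choice; the paper's exact binning rule may differ), not the ROCK reconstruction itself.

        import numpy as np

        def bin_by_amplitude(surrogate, n_bins=8):
            """Assign each readout to a respiratory position by surrogate amplitude.

            surrogate : 1-D array with one self-gating value (e.g. derived from the
                        repeatedly sampled k-space centerline) per acquired readout.
            Returns an integer bin index per readout, from one breathing extreme
            (bin 0) to the other (bin n_bins - 1).
            """
            s = np.asarray(surrogate, dtype=float)
            edges = np.quantile(s, np.linspace(0.0, 1.0, n_bins + 1))
            return np.searchsorted(edges[1:-1], s, side="right")

    Each bin's k-space data would then be reconstructed jointly with the parallel-imaging plus compressed-sensing step described above.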

  8. Respiratory motion-resolved, self-gated 4D-MRI using rotating cartesian k-space (ROCK).

    PubMed

    Han, Fei; Zhou, Ziwu; Cao, Minsong; Yang, Yingli; Sheng, Ke; Hu, Peng

    2017-04-01

    To propose and validate a respiratory motion resolved, self-gated (SG) 4D-MRI technique to assess patient-specific breathing motion of abdominal organs for radiation treatment planning. The proposed 4D-MRI technique was based on the balanced steady-state free-precession (bSSFP) technique and 3D k-space encoding. A novel rotating cartesian k-space (ROCK) reordering method was designed which incorporates repeatedly sampled k-space centerline as the SG motion surrogate and allows for retrospective k-space data binning into different respiratory positions based on the amplitude of the surrogate. The multiple respiratory-resolved 3D k-space data were subsequently reconstructed using a joint parallel imaging and compressed sensing method with spatial and temporal regularization. The proposed 4D-MRI technique was validated using a custom-made dynamic motion phantom and was tested in six healthy volunteers, in whom quantitative diaphragm and kidney motion measurements based on 4D-MRI images were compared with those based on 2D-CINE images. The 5-minute 4D-MRI scan offers high-quality volumetric images in 1.2 × 1.2 × 1.6 mm 3 and eight respiratory positions, with good soft-tissue contrast. In phantom experiments with triangular motion waveform, the motion amplitude measurements based on 4D-MRI were 11.89% smaller than the ground truth, whereas a -12.5% difference was expected due to data binning effects. In healthy volunteers, the difference between the measurements based on 4D-MRI and the ones based on 2D-CINE were 6.2 ± 4.5% for the diaphragm, 8.2 ± 4.9% and 8.9 ± 5.1% for the right and left kidney. The proposed 4D-MRI technique could provide high-resolution, high-quality, respiratory motion-resolved 4D images with good soft-tissue contrast and are free of the "stitching" artifacts usually seen on 4D-CT and 4D-MRI based on resorting 2D-CINE. It could be used to visualize and quantify abdominal organ motion for MRI-based radiation treatment planning. © 2017 American Association of Physicists in Medicine.

  9. TriNet "ShakeMaps": Rapid generation of peak ground motion and intensity maps for earthquakes in southern California

    USGS Publications Warehouse

    Wald, D.J.; Quitoriano, V.; Heaton, T.H.; Kanamori, H.; Scrivner, C.W.; Worden, C.B.

    1999-01-01

    Rapid (3-5 minutes) generation of maps of instrumental ground-motion and shaking intensity is accomplished through advances in real-time seismographic data acquisition combined with newly developed relationships between recorded ground-motion parameters and expected shaking intensity values. Estimation of shaking over the entire regional extent of southern California is obtained by the spatial interpolation of the measured ground motions with geologically based frequency and amplitude-dependent site corrections. Production of the maps is automatic, triggered by any significant earthquake in southern California. Maps are now made available within several minutes of the earthquake for public and scientific consumption via the World Wide Web; they will be made available with dedicated communications for emergency response agencies and critical users.
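    A toy Python sketch of the interpolation idea, recorded peak ground motions spatially interpolated to a grid point and then scaled by a geology-based site correction, is given below. It uses simple inverse-distance weighting and a single multiplicative site factor as stand-ins for the frequency- and amplitude-dependent corrections actually used; the coordinates and values are illustrative.

        import numpy as np

        def interpolate_pga(station_xy, station_pga, grid_point, site_factor=1.0, power=2.0):
            """Estimate peak ground acceleration at a map grid point.

            Inverse-distance weighting of recorded station values, followed by a
            multiplicative site correction for local geology.
            """
            xy = np.asarray(station_xy, dtype=float)
            pga = np.asarray(station_pga, dtype=float)
            d = np.linalg.norm(xy - np.asarray(grid_point, dtype=float), axis=1)
            if np.any(d < 1e-6):  # grid point coincides with a station
                return float(pga[np.argmin(d)]) * site_factor
            w = 1.0 / d ** power
            return float(np.sum(w * pga) / np.sum(w)) * site_factor

        print(interpolate_pga([(0, 0), (10, 0), (0, 10)], [0.30, 0.12, 0.18],
                              (3, 4), site_factor=1.4))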

  10. Hand VR Exergame for Occupational Health Care.

    PubMed

    Ortiz, Saskia; Uribe-Quevedo, Alvaro; Kapralos, Bill

    2016-01-01

    The widespread use and ubiquity of mobile computing technologies such as smartphones, tablets, laptops and portable gaming consoles has led to an increase in musculoskeletal disorders due to overuse, bad posture, repetitive movements, fixed postures and physical de-conditioning caused by low muscular demands while using (and over-using) these devices. In this paper we present the development of a hand-motion-based virtual reality exergame for occupational health purposes that allows the user to perform simple exercises using a cost-effective, non-invasive motion capture device to help overcome and prevent some of the musculoskeletal problems associated with the over-use of keyboards and mobile devices.

  11. Six-degrees-of-freedom sensing based on pictures taken by single camera.

    PubMed

    Zhongke, Li; Yong, Wang; Yongyuan, Qin; Peijun, Lu

    2005-02-01

    Two six-degrees-of-freedom sensing methods are presented. In the first method, three laser beams are employed to set up Descartes' frame on a rigid body and a screen is adopted to form diffuse spots. In the second method, two superimposed grid screens and two laser beams are used. A CCD camera is used to take photographs in both methods. Both approaches provide a simple and error-free method to record the positions and the attitudes of a rigid body in motion continuously.

  12. Review of Virtual Environment Interface Technology.

    DTIC Science & Technology

    1996-03-01

    [Indexing excerpt: section and figure listings covering motion-tracking products such as SpacePad, CyberTrack, Wayfinder-VR, Mouse-Sense3D, Selcom AB SELSPOT II, OPTOTRAK 3020, MacReflex, and DynaSight.] The OPTOTRAK 3020 by Northern Digital Inc. is an infra-red (IR)-based, non-contact position and motion measurement system. Small IR LEDs [...]

  13. Sustainable Cooperative Robotic Technologies for Human and Robotic Outpost Infrastructure Construction and Maintenance

    NASA Technical Reports Server (NTRS)

    Stroupe, Ashley W.; Okon, Avi; Robinson, Matthew; Huntsberger, Terry; Aghazarian, Hrand; Baumgartner, Eric

    2004-01-01

    Robotic Construction Crew (RCC) is a heterogeneous multi-robot system for autonomous acquisition, transport, and precision mating of components in construction tasks. RCC minimizes the use of resources that are constrained in a space environment, such as computation, power, communication, and sensing. A behavior-based architecture provides adaptability and robustness despite low computational requirements. RCC successfully performs several construction-related tasks in an emulated outdoor environment despite high levels of uncertainty in motions and sensing. Quantitative results are provided for formation keeping in component transport, precision instrument placement, and construction tasks.

  14. Spectrally formulated user-defined element in conventional finite element environment for wave motion analysis in 2-D composite structures

    NASA Astrophysics Data System (ADS)

    Khalili, Ashkan; Jha, Ratneshwar; Samaratunga, Dulip

    2016-11-01

    Wave propagation analysis in 2-D composite structures is performed efficiently and accurately through the formulation of a User-Defined Element (UEL) based on the wavelet spectral finite element (WSFE) method. The WSFE method is based on the first-order shear deformation theory, which yields accurate results for wave motion at high frequencies. The 2-D WSFE model is highly efficient computationally and provides a direct relationship between system input and output in the frequency domain. The UEL is formulated and implemented in Abaqus (commercial finite element software) for wave propagation analysis in 2-D composite structures with complex structural features. The frequency domain formulation of WSFE leads to complex-valued parameters, which are decoupled into real and imaginary parts and presented to Abaqus as real values. The final solution is obtained by forming a complex value using the real number solutions given by Abaqus. Five numerical examples are presented in this article, namely an undamaged plate, an impacted plate, a plate with a ply drop, a folded plate, and a plate with a stiffener. Wave motions predicted by the developed UEL correlate very well with Abaqus simulations. The results also show that the UEL largely retains the computational efficiency of the WSFE method and extends its ability to model complex features.
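
    The real/imaginary decoupling and recombination mentioned above follows the general pattern of solving a complex linear system with a real-arithmetic solver. The sketch below shows the standard block-real equivalent formulation as an illustration only; it is not the UEL's actual data exchange with Abaqus.

```python
import numpy as np

def solve_complex_as_real(A, b):
    """Solve the complex system A x = b using only real arithmetic via the
    standard block-real equivalent formulation, then recombine the two real
    solution vectors into the complex result."""
    n = A.shape[0]
    Ar, Ai = A.real, A.imag
    big = np.block([[Ar, -Ai],
                    [Ai,  Ar]])            # real-valued 2n x 2n system
    rhs = np.concatenate([b.real, b.imag])
    sol = np.linalg.solve(big, rhs)
    return sol[:n] + 1j * sol[n:]          # recombination step

# Check against NumPy's native complex solver
A = np.array([[2.0 + 1.0j, 0.5], [0.3j, 1.5 - 0.2j]])
b = np.array([1.0 + 0.0j, 2.0 - 1.0j])
assert np.allclose(solve_complex_as_real(A, b), np.linalg.solve(A, b))
```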

  15. Motional studies of one and two laser-cooled trapped ions for electric-field sensing applications

    NASA Astrophysics Data System (ADS)

    Domínguez, F.; Gutiérrez, M. J.; Arrazola, I.; Berrocal, J.; Cornejo, J. M.; Del Pozo, J. J.; Rica, R. A.; Schmidt, S.; Solano, E.; Rodríguez, D.

    2018-03-01

    We have studied the dynamics of one and two laser-cooled trapped ⁴⁰Ca⁺ ions by applying electric fields of different nature along the axial direction of the trap, namely, driving the motion with a harmonic dipolar field, or with white noise. These two types of driving induce distinct motional states of the axial modes: a coherent oscillation with the dipolar field, or an enhanced Brownian motion due to an additional contribution to the heating rate from the electric noise. In both scenarios, the sensitivity of an isolated ion and of a laser-cooled two-ion crystal has been evaluated and compared. The analysis and understanding of these dynamics is important towards the implementation of a novel Penning-trap mass-spectrometry technique based on optical detection, aiming at improving precision and sensitivity.

  16. Compressed Sensing for Body MRI

    PubMed Central

    Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh

    2016-01-01

    The introduction of compressed sensing for increasing imaging speed in MRI has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This paper presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notion of sparsity, incoherence, and non-linear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the paper discusses current challenges and future opportunities. PMID:27981664
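
    As a toy illustration of the sparsity, incoherence, and non-linear reconstruction ideas summarized above, the snippet below recovers a 1-D signal that is sparse in its own domain from randomly undersampled Fourier measurements using iterative soft-thresholding (ISTA). This is a deliberately simplified, hypothetical example; practical body-MRI reconstructions use multicoil data, wavelet or total-variation sparsity, and more sophisticated solvers.

```python
import numpy as np

def soft_threshold(x, lam):
    """Complex soft-thresholding, the proximal operator of the l1 norm."""
    return np.exp(1j * np.angle(x)) * np.maximum(np.abs(x) - lam, 0.0)

def cs_recon_1d(y, mask, lam=0.02, n_iter=200):
    """Toy compressed-sensing reconstruction (ISTA) for a 1-D signal that is
    sparse in its own domain and sampled with a random Fourier mask.

    y    : undersampled Fourier measurements (zeros where mask is False)
    mask : boolean sampling pattern in the Fourier domain
    """
    x = np.zeros_like(y)
    for _ in range(n_iter):
        # Gradient step on the data-consistency term ||mask * F(x) - y||^2
        residual = mask * np.fft.fft(x, norm="ortho") - y
        x = x - np.fft.ifft(mask * residual, norm="ortho")
        # Sparsity-promoting shrinkage (the non-linear reconstruction step)
        x = soft_threshold(x, lam)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    truth = np.zeros(256, dtype=complex)
    truth[[10, 70, 200]] = [1.0, -0.5, 0.8]        # sparse ground truth
    mask = rng.random(256) < 0.3                   # keep ~30% of Fourier samples
    y = mask * np.fft.fft(truth, norm="ortho")
    estimate = cs_recon_1d(y, mask)
    print(np.round(np.abs(estimate[[10, 70, 200]]), 2))
```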

  17. Assisted navigation based on shared-control, using discrete and sparse human-machine interfaces.

    PubMed

    Lopes, Ana C; Nunes, Urbano; Vaz, Luís

    2010-01-01

    This paper presents a shared-control approach for Assistive Mobile Robots (AMR), which depends on the user's ability to navigate a semi-autonomous powered wheelchair using a sparse and discrete human-machine interface (HMI). This system is primarily intended to help users with severe motor disabilities that prevent them from using standard human-machine interfaces. Scanning interfaces and Brain-Computer Interfaces (BCI), characterized by providing a small set of sparsely issued commands, are possible HMIs. This shared-control approach is intended to be applied in an Assisted Navigation Training Framework (ANTF) that is used to train users' ability to steer a powered wheelchair in an appropriate manner, given the restrictions imposed by their limited motor capabilities. A shared controller based on user characterization is proposed. This controller is able to share the information provided by the local motion-planning level with the commands issued sparsely by the user. Simulation results of the proposed shared-control method are presented.

  18. Free-breathing volumetric fat/water separation by combining radial sampling, compressed sensing, and parallel imaging.

    PubMed

    Benkert, Thomas; Feng, Li; Sodickson, Daniel K; Chandarana, Hersh; Block, Kai Tobias

    2017-08-01

    Conventional fat/water separation techniques require that patients hold their breath during abdominal acquisitions, which often fails and limits the achievable spatial resolution and anatomic coverage. This work presents a novel approach for free-breathing volumetric fat/water separation. Multiecho data are acquired using a motion-robust radial stack-of-stars three-dimensional GRE sequence with bipolar readout. To obtain fat/water maps, a model-based reconstruction is used that accounts for the off-resonant blurring of fat and integrates both compressed sensing and parallel imaging. The approach additionally enables generation of respiration-resolved fat/water maps by detecting motion from k-space data and reconstructing different respiration states. Furthermore, an extension is described for dynamic contrast-enhanced fat-water-separated measurements. Uniform and robust fat/water separation is demonstrated in several clinical applications, including free-breathing noncontrast abdominal examination of adults and a pediatric subject with both motion-averaged and motion-resolved reconstructions, as well as in a noncontrast breast exam. Furthermore, dynamic contrast-enhanced fat/water imaging with high temporal resolution is demonstrated in the abdomen and breast. The described framework provides a viable approach for motion-robust fat/water separation and promises particular value for clinical applications that are currently limited by the breath-holding capacity or cooperation of patients. Magn Reson Med 78:565-576, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  19. Two-point discrimination and kinesthetic sense disorders in productive age individuals with carpal tunnel syndrome.

    PubMed

    Wolny, Tomasz; Saulicz, Edward; Linek, Paweł; Myśliwiec, Andrzej

    2016-06-16

    The aim of this study was to evaluate two-point discrimination (2PD) sense and kinesthetic sense dysfunctions in carpal tunnel syndrome (CTS) patients compared with a healthy group. The 2PD sense, muscle force, and kinesthetic differentiation (KD) of strength; the range of motion in radiocarpal articulation; and KD of motion were assessed. The 2PD sense assessment showed significantly higher values in all the examined fingers in the CTS group than in those in the healthy group (p<0.01). There was a significant difference in the percentage value of error in KD of pincer and cylindrical grip (p<0.01) as well as in KD of flexion and extension movement in the radiocarpal articulation (p<0.01) between the studied groups. There are significant differences in the 2PD sense and KD of strength and movement between CTS patients compared with healthy individuals.

  20. Two-point discrimination and kinesthetic sense disorders in productive age individuals with carpal tunnel syndrome

    PubMed Central

    Wolny, Tomasz; Saulicz, Edward; Linek, Paweł; Myśliwiec, Andrzej

    2016-01-01

    Objectives: The aim of this study was to evaluate two-point discrimination (2PD) sense and kinesthetic sense dysfunctions in carpal tunnel syndrome (CTS) patients compared with a healthy group. Methods: The 2PD sense, muscle force, and kinesthetic differentiation (KD) of strength; the range of motion in radiocarpal articulation; and KD of motion were assessed. Results: The 2PD sense assessment showed significantly higher values in all the examined fingers in the CTS group than in those in the healthy group (p<0.01). There was a significant difference in the percentage value of error in KD of pincer and cylindrical grip (p<0.01) as well as in KD of flexion and extension movement in the radiocarpal articulation (p<0.01) between the studied groups. Conclusions: There are significant differences in the 2PD sense and KD of strength and movement between CTS patients compared with healthy individuals. PMID:27108640

  1. Artificial neural network EMG classifier for functional hand grasp movements prediction.

    PubMed

    Gandolla, Marta; Ferrante, Simona; Ferrigno, Giancarlo; Baldassini, Davide; Molteni, Franco; Guanziroli, Eleonora; Cotti Cottini, Michele; Seneci, Carlo; Pedrocchi, Alessandra

    2017-12-01

    Objective: To design and implement an electromyography (EMG)-based controller for a hand robotic assistive device, which is able to classify the user's motion intention before the effective kinematic movement execution. Methods: Multiple degrees-of-freedom hand grasp movements (i.e. pinching, grasp an object, grasping) were predicted by means of surface EMG signals, recorded from 10 bipolar EMG electrodes arranged in a circular configuration around the forearm 2-3 cm from the elbow. Two cascaded artificial neural networks were then exploited to detect the patient's motion intention from the EMG signal window starting from the electrical activity onset to movement onset (i.e. electromechanical delay). Results: The proposed approach was tested on eight healthy control subjects (4 females; age range 25-26 years) and it demonstrated a mean ± SD testing performance of 76% ± 14% for correctly predicting healthy users' motion intention. Two post-stroke patients tested the controller and obtained 79% and 100% of correctly classified movements under testing conditions. Conclusion: A task-selection controller was developed to estimate the intended movement from the EMG measured during the electromechanical delay.
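
    A minimal sketch of the overall idea of classifying the pre-movement EMG window is given below. The feature set, network size, and class labels are assumptions for illustration; they are not the configuration reported in the paper, which uses two cascaded networks.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def window_features(emg_window):
    """Simple time-domain features per channel for one EMG window.

    emg_window : (n_samples, n_channels) array covering the interval from
                 EMG onset to movement onset (the electromechanical delay).
    """
    mav = np.mean(np.abs(emg_window), axis=0)                        # mean absolute value
    rms = np.sqrt(np.mean(emg_window ** 2, axis=0))                  # root mean square
    zc = np.sum(np.diff(np.sign(emg_window), axis=0) != 0, axis=0)   # zero crossings
    return np.concatenate([mav, rms, zc])

def train_intention_classifier(windows, labels):
    """Train a small neural network on pre-movement EMG windows.

    windows : list of (n_samples, n_channels) arrays, one per grasp attempt
    labels  : class index per attempt (hypothetical: 0 pinch, 1 object grasp, 2 grasp)
    """
    X = np.stack([window_features(w) for w in windows])
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000))
    clf.fit(X, labels)
    return clf
```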

  2. iMODS: internal coordinates normal mode analysis server.

    PubMed

    López-Blanco, José Ramón; Aliaga, José I; Quintana-Ortí, Enrique S; Chacón, Pablo

    2014-07-01

    Normal mode analysis (NMA) in internal (dihedral) coordinates naturally reproduces the collective functional motions of biological macromolecules. iMODS facilitates the exploration of such modes and generates feasible transition pathways between two homologous structures, even with large macromolecules. The distinctive internal coordinate formulation improves the efficiency of NMA and extends its applicability while implicitly maintaining stereochemistry. Vibrational analysis, motion animations and morphing trajectories can be easily carried out at different resolution scales almost interactively. The server is versatile; non-specialists can rapidly characterize potential conformational changes, whereas advanced users can customize the model resolution with multiple coarse-grained atomic representations and elastic network potentials. iMODS supports advanced visualization capabilities for illustrating collective motions, including an improved affine-model-based arrow representation of domain dynamics. The generated all-heavy-atoms conformations can be used to introduce flexibility for more advanced modeling or sampling strategies. The server is free and open to all users with no login requirement at http://imods.chaconlab.org. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. A General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets

    NASA Technical Reports Server (NTRS)

    Marchen, Luis F.; Shaklan, Stuart B.

    2009-01-01

    This paper describes a general purpose Coronagraph Performance Error Budget (CPEB) tool that we have developed under the NASA Exoplanet Exploration Program. The CPEB automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. It operates in three steps. First, a CodeV or Zemax prescription is converted into a MACOS optical prescription. Second, a Matlab program calls ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled coarse and fine-steering mirrors. Third, the sensitivity matrices are imported by macros into Excel 2007, where the error budget is created. Once the budget is created, the user specifies the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions and combines them with the sensitivity matrices to generate an error budget for the system. The user can easily modify the motion allocations to perform trade studies.
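
    The roll-up of motion allocations through the sensitivity matrices can be illustrated with a small sketch. The combination rule (root-sum-square) and all numbers below are assumptions for illustration, not the tool's actual budget logic.

```python
import numpy as np

def contrast_rollup(sensitivity, motion_alloc):
    """Illustrative error-budget roll-up: multiply a linear sensitivity matrix
    by allocated motions and combine the resulting contrast terms as a
    root-sum-square (an assumed combination rule).

    sensitivity  : (n_terms, n_dof) contrast change per unit motion
    motion_alloc : (n_dof,) allocated RMS motions (thermal drift, jitter, ...)
    """
    terms = sensitivity @ motion_alloc          # per-term contrast contributions
    return terms, float(np.sqrt(np.sum(terms ** 2)))

# Hypothetical trade study: scale one allocation and inspect the total
sens = np.array([[1e-11, 5e-12], [2e-12, 8e-12]])   # made-up sensitivities
alloc = np.array([0.5, 1.2])                         # made-up motion allocations
terms, total = contrast_rollup(sens, alloc)
print(terms, total)
```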

  4. Western Regional Remote Sensing Conference Proceedings, 1979

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Remote sensing users from the 14 western states explained their diverse applications of LANDSAT data, discussed operational goals, and exchanged problems and solutions. In addition, conference participants stressed the need for increased cooperation among state and local governments, private industry, and universities to aid NASA's objective of transferring to user agencies the ability to operationally use remote sensing technology for resource and environmental quality management.

  5. A Novel Physical Sensing Principle for Liquid Characterization Using Paper-Based Hygro-Mechanical Systems (PB-HMS).

    PubMed

    Perez-Cruz, Angel; Stiharu, Ion; Dominguez-Gonzalez, Aurelio

    2017-07-20

    In recent years, paper-based microfluidic systems have emerged as versatile tools for developing sensors in different areas. In this work, we report a novel physical sensing principle for the characterization of liquids using a paper-based hygro-mechanical system (PB-HMS). The PB-HMS is formed by the interaction of liquid droplets and paper-based mini-structures such as cantilever beams. The proposed principle takes advantage of the hygroscopic properties of paper to produce hygro-mechanical motion. The dynamic response of the PB-HMS reveals information about the tested liquid that can be applied to characterize certain properties of liquids. A suggested method to characterize liquids by means of the proposed principle is introduced. The experimental results show the feasibility of such a method. It is expected that the proposed principle may be applied to sense properties of liquids in different applications where both disposability and portability are of extreme importance.

  6. Characterization of users of remotely-sensed data in the Alabama coastal zone. [user requirements, surveys - technology utilization

    NASA Technical Reports Server (NTRS)

    Vittor, B. A. (Editor)

    1975-01-01

    Federal, state, and local agencies, universities, and private companies were polled to determine their needs for remote sensing data. A total of 62 users were polled. Poll results are given in tables. A comprehensive research program was developed to satisfy user needs, and is examined for the disciplines of Geology, Water Resources, Archaeology, Geography, and Conservation. Silt plume discharge from Mobile Bay is also examined. Sample poll forms used in the surveys are shown.

  7. Challenges in paper-based fluorogenic optical sensing with smartphones

    NASA Astrophysics Data System (ADS)

    Ulep, Tiffany-Heather; Yoon, Jeong-Yeol

    2018-05-01

    The application of optically superior, tunable fluorescent nanotechnologies has long been demonstrated throughout many chemical and biological sensing applications. Combined with microfluidics technologies, i.e., on lab-on-a-chip platforms, such fluorescent nanotechnologies have often enabled extreme sensitivity, sometimes down to the single-molecule level. In recent years there has been a surge of interest in translating fluorescent nanotechnology onto paper-based platforms for chemical and biological sensing, as a simple, low-cost, disposable alternative to conventional silicone-based microfluidic substrates. On the other hand, smartphone integration as an optical detection system as well as user interface and data processing component has been widely attempted, serving as a gateway to on-board quantitative processing, enhanced mobility, and interconnectivity with informational networks. Smartphone sensing can be integrated with these paper-based fluorogenic assays towards demonstrating extreme sensitivity as well as ease-of-use and low cost. However, with these emerging technologies there are always technical limitations that must be addressed; for example, paper's autofluorescence, which perturbs fluorogenic sensing; the smartphone flash's limitations for fluorescent excitation; and the smartphone camera's limitations in detecting narrow-band fluorescent emission. In this review, physical optical setups, digital enhancement algorithms, and various fluorescent measurement techniques are discussed and pinpointed as areas of opportunity to further improve paper-based fluorogenic optical sensing with smartphones.

  8. Whole left ventricular functional assessment from two minutes free breathing multi-slice CINE acquisition

    NASA Astrophysics Data System (ADS)

    Usman, M.; Atkinson, D.; Heathfield, E.; Greil, G.; Schaeffter, T.; Prieto, C.

    2015-04-01

    Two major challenges in cardiovascular MRI are long scan times due to slow MR acquisition and motion artefacts due to respiratory motion. Recently, a Motion Corrected-Compressed Sensing (MC-CS) technique has been proposed for free breathing 2D dynamic cardiac MRI that addresses these challenges by simultaneously accelerating MR acquisition and correcting for any arbitrary motion in a compressed sensing reconstruction. In this work, the MC-CS framework is combined with parallel imaging for further acceleration, and is termed Motion Corrected Sparse SENSE (MC-SS). Validation of the MC-SS framework is demonstrated in eight volunteers and three patients for left ventricular functional assessment and results are compared with the breath-hold acquisitions as reference. A non-significant difference (P > 0.05) was observed in the volumetric functional measurements (end diastolic volume, end systolic volume, ejection fraction) and myocardial border sharpness values obtained with the proposed and gold standard methods. The proposed method achieves whole heart multi-slice coverage in 2 min under free breathing acquisition eliminating the time needed between breath-holds for instructions and recovery. This results in two-fold speed up of the total acquisition time in comparison to the breath-hold acquisition.

  9. Improving 3D Character Posing with a Gestural Interface.

    PubMed

    Kyto, Mikko; Dhinakaran, Krupakar; Martikainen, Aki; Hamalainen, Perttu

    2017-01-01

    The most time-consuming part of character animation is 3D character posing. Posing using a mouse is a slow and tedious task that involves sequences of selecting on-screen control handles and manipulating the handles to adjust character parameters, such as joint rotations and end effector positions. Thus, various 3D user interfaces have been proposed to make animating easier, but they typically provide less accuracy. The proposed interface combines a mouse with the Leap Motion device to provide 3D input. A usability study showed that users preferred the Leap Motion over a mouse as a 3D gestural input device. The Leap Motion drastically decreased the number of required operations and the task completion time, especially for novice users.

  10. WiFi-based person identification

    NASA Astrophysics Data System (ADS)

    Yuan, Jing

    2016-10-01

    There has been increased interest in WiFi devices equipped with multiple antennas, which enable various wireless sensing applications such as localization, gesture identification, and motion tracking. WiFi-based sensing offers benefits such as being device-free, and it has shown great potential in smart scenarios. In this paper, we present WIP, a system that can distinguish a person from a small group of people. We show that Channel State Information (CSI) can capture a person's gait. As reported in related work, different people have different gait features, so CSI-based gait features can be used to identify a person. We then apply a machine-learning model, an artificial neural network (ANN), to classify different persons. The results show that the ANN performs well in our scenario.

  11. A high speed, portable, multi-function, weigh-in-motion (WIM) sensing system and a high performance optical fiber Bragg grating (FBG) demodulator

    NASA Astrophysics Data System (ADS)

    Zhang, Hongtao; Wei, Zhanxiong; Fan, Lingling; Yang, Shangming; Wang, Pengfei; Cui, Hong-Liang

    2010-04-01

    A high-speed, portable, multi-function WIM sensing system based on Fiber Bragg Grating (FBG) technology is reported in this paper. The system is developed to measure the total weight of a vehicle in motion, its weight distribution, the distance between wheel axles, and the distance between left and right wheels. In this system, a temperature control system and a real-time compensation system are employed to eliminate the drift of the optical fiber Fabry-Pérot tunable filter. Carbon fiber laminated composites are used in the sensor heads to obtain high reliability and sensitivity. The speed of tested vehicles is up to 20 mph, the full-scale measurement range is 4000 lbs, and the static resolution of the sensor head is 20 lbs. The demodulator provides high-speed (500 Hz) data collection and high stability. The demodulator and the light source are packed into a 17'' rack-style enclosure. The prototype has been tested at the Stevens campus and at an Army base. Some experience in avoiding pitfalls while developing this system is also presented.

  12. An Earthquake Shake Map Routine with Low Cost Accelerometers: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Alcik, H. A.; Tanircan, G.; Kaya, Y.

    2015-12-01

    Vast amounts of high-quality strong-motion data are indispensable inputs for analyses in geotechnical and earthquake engineering; however, the high cost of installing strong-motion systems constitutes the biggest obstacle to their worldwide dissemination. In recent years, MEMS-based (micro-electro-mechanical systems) accelerometers have been used in seismological research-oriented studies as well as earthquake-engineering-oriented projects, basically due to the precision obtained in downsized instruments. In this research, our primary goal is to enable the use of these low-cost instruments in the creation of shake maps immediately after a strong earthquake. The second goal is to develop software that will automatically process the real-time data coming from the rapid response network and create the shake map. For those purposes, four MEMS sensors have been set up to deliver real-time data. Data transmission is done through 3G modems. A subroutine was coded in assembler language and embedded into the operating system of each instrument to create MiniSEED files with 1-second packages instead of 512-byte packages. The Matlab-based software calculates the strong motion (SM) parameters every second, and they are compared with the user-defined thresholds. A voting system embedded in the software captures the event if the total vote exceeds the threshold. The user interface of the software enables users to monitor the calculated SM parameters either in a table or in a graph (Figure 1). A small-scale and affordable rapid response network was created using four MEMS sensors, and the functionality of the software has been tested and validated using shake table tests. The entire system was tested together with a reference sensor under real strong ground motion recordings as well as series of sine waves with varying amplitude and frequency. The successful realization of this software allowed us to set up a test network at Tekirdağ Province, the closest coastal point to the moderate-size earthquake activity in the Marmara Sea, Turkey.
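
    The per-second threshold check and voting scheme described above can be sketched as follows. The threshold values, the number of required votes, and the choice of PGA as the strong-motion parameter are illustrative assumptions, not the actual configuration of the software.

```python
import numpy as np

def pga_exceeds(accel_window, threshold_g):
    """Return True if the peak ground acceleration of a 1-second window (in g)
    exceeds a user-defined threshold."""
    return np.max(np.abs(accel_window)) > threshold_g

def station_vote(windows_by_station, threshold_g, votes_needed):
    """Toy voting scheme: each MEMS station contributes one vote when its
    1-second window exceeds the threshold; an event is declared when the
    total vote count reaches votes_needed."""
    votes = sum(pga_exceeds(w, threshold_g) for w in windows_by_station)
    return votes >= votes_needed

# Usage with hypothetical data from four stations
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    windows = [0.002 * rng.standard_normal(100) for _ in range(4)]  # quiet stations
    windows[0] = windows[0] + 0.05                                  # one offset/noisy station
    print(station_vote(windows, threshold_g=0.01, votes_needed=2))  # False: only one vote
```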

  13. Motion control system of MAX IV Laboratory soft x-ray beamlines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sjöblom, Peter, E-mail: peter.sjoblom@maxlab.lu.se; Lindberg, Mirjam, E-mail: mirjam.lindberg@maxlab.lu.se; Forsberg, Johan, E-mail: johan.forsberg@maxlab.lu.se

    2016-07-27

    At the MAX IV Laboratory, five new soft x-ray beamlines are under development. The first is Species and it will be used to develop and set the standard of the control system, which will be common across the facility. All motion axes at MAX IV will be motorized using stepper motors steered by the IcePAP motion controller and a mixture of absolute and incremental encoders following a predefined coordinate system. The control system software is built in Tango and uses the Python-based Sardana framework. The user controls the entire beamline through a synoptic overview and Sardana is used to run the scans.

  14. Remote Sensing Image Analysis Without Expert Knowledge - A Web-Based Classification Tool On Top of Taverna Workflow Management System

    NASA Astrophysics Data System (ADS)

    Selsam, Peter; Schwartze, Christian

    2016-10-01

    Providing software solutions via the internet has been known for quite some time and is now an increasing trend marketed as "software as a service". A lot of business units accept the new methods and streamlined IT strategies by offering web-based infrastructures for external software usage - but geospatial applications featuring very specialized services or functionalities on demand are still rare. Originally applied in desktop environments, the ILMSimage tool for remote sensing image analysis and classification was modified in its communicating structures and enabled to run on a high-power server, benefiting from the Taverna software. On top of this, a GIS-like, web-based user interface guides the user through the different steps in ILMSimage. ILMSimage combines object-oriented image segmentation with pattern recognition features. Basic image elements form a construction set to model large image objects with diverse and complex appearance. There is no need for the user to set up detailed object definitions. Training is done by delineating one or more typical examples (templates) of the desired object using a simple vector polygon. The template can be large and does not need to be homogeneous. The template is completely independent of the segmentation. The object definition is done completely by the software.

  15. A Dual-Mode Human Computer Interface Combining Speech and Tongue Motion for People with Severe Disabilities

    PubMed Central

    Huo, Xueliang; Park, Hangue; Kim, Jeonghee; Ghovanloo, Maysam

    2015-01-01

    We are presenting a new wireless and wearable human computer interface called the dual-mode Tongue Drive System (dTDS), which is designed to allow people with severe disabilities to use computers more effectively with increased speed, flexibility, usability, and independence through their tongue motion and speech. The dTDS detects users’ tongue motion using a magnetic tracer and an array of magnetic sensors embedded in a compact and ergonomic wireless headset. It also captures the users’ voice wirelessly using a small microphone embedded in the same headset. Preliminary evaluation results based on 14 able-bodied subjects and three individuals with high level spinal cord injuries at level C3–C5 indicated that the dTDS headset, combined with a commercially available speech recognition (SR) software, can provide end users with significantly higher performance than either unimodal forms based on the tongue motion or speech alone, particularly in completing tasks that require both pointing and text entry. PMID:23475380

  16. Quantification of organ motion based on an adaptive image-based scale invariant feature method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paganelli, Chiara; Peroni, Marta; Baroni, Guido

    2013-11-15

    Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast. Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained in adaptive-SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets. Results: The phantom study confirmed the method invariance and robustness properties to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak to peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT, providing a motion description comparable to expert manual identification, as confirmed by DIR. Conclusions: The application of the method to a 4D lung CT patient dataset demonstrated adaptive-SIFT potential as an automatic tool to detect landmarks for DIR regularization and internal motion quantification. Future works should include the optimization of the computational cost and the application of the method to other anatomical sites and image modalities.
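
    The 3D residual distance used above as the accuracy measure is simply the Euclidean distance between corresponding landmark positions. The snippet below is a generic sketch with hypothetical data, e.g. warped manual landmarks versus their reference positions in the inhale phase.

```python
import numpy as np

def residual_distances(landmarks_a, landmarks_b):
    """3-D Euclidean residual distance between corresponding landmarks.

    landmarks_a, landmarks_b : (N, 3) arrays of matched point coordinates (mm),
    e.g. adaptive-SIFT matches vs. expert-identified positions, or warped
    manual landmarks vs. their reference positions in the inhale phase.
    """
    return np.linalg.norm(landmarks_a - landmarks_b, axis=1)

# Usage: summarise the matching error for hypothetical landmark sets
if __name__ == "__main__":
    ref = np.random.rand(50, 3) * 100.0
    est = ref + np.random.randn(50, 3) * 0.8      # hypothetical sub-millimetre errors
    d = residual_distances(ref, est)
    print(f"mean residual: {d.mean():.2f} mm, max: {d.max():.2f} mm")
```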

  17. A New MEMS Gyroscope Used for Single-Channel Damping

    PubMed Central

    Zhang, Zengping; Zhang, Wei; Zhang, Fuxue; Wang, Biao

    2015-01-01

    The silicon micromechanical gyroscope, which will be introduced in this paper, represents a novel MEMS gyroscope concept. It is used for the damping of a single-channel control system of rotating aircraft. It differs from common MEMS gyroscopes in that it does not have a drive structure itself, and only has a sense structure. It is installed on a rotating aircraft, and utilizes the aircraft spin to make its sensing element obtain angular momentum. When the aircraft is subjected to an angular rotation, a periodic Coriolis force is induced in the direction orthogonal to both the angular momentum and the angular velocity input axis. This novel MEMS gyroscope can thus sense angular velocity inputs. The output sensing signal is exactly an amplitude-modulation signal. Its envelope is proportional to the input angular velocity, and the carrier frequency corresponds to the spin frequency of the rotating aircraft, so the MEMS gyroscope can not only sense the transverse angular rotation of an aircraft, but also automatically change the carrier frequency as the spin frequency changes, making it very suitable for the damping of a single-channel control system of a rotating aircraft. In this paper, the motion equation of the MEMS gyroscope has been derived. Then, an analysis has been carried out to solve the motion equation and dynamic parameters. Finally, an experimental validation has been done based on a precision three-axis rate table. The correlation coefficients between the tested data and the theoretical values are 0.9969, 0.9872 and 0.9842, respectively. These results demonstrate that both the design and sensing mechanism are correct. PMID:25942638
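
    The amplitude-modulated output described above can be illustrated with a simple signal model and an envelope recovery step. The scale factor and the coherent-demodulation/low-pass scheme below are assumptions for illustration, not the signal processing used in the paper.

```python
import numpy as np

def simulated_gyro_output(omega_input, f_spin, fs, k=1.0):
    """Simulated sense-axis signal of a spin-driven gyroscope: an
    amplitude-modulated carrier whose envelope tracks the transverse
    angular-rate input and whose carrier frequency equals the spin rate.

    omega_input : (N,) angular-rate input (rad/s), sampled at fs
    f_spin      : aircraft spin frequency (Hz)
    k           : scale factor (assumed, not taken from the paper)
    """
    t = np.arange(len(omega_input)) / fs
    return k * omega_input * np.sin(2 * np.pi * f_spin * t)

def envelope_demodulate(signal, f_spin, fs):
    """Recover the envelope (proportional to the input rate) by coherent
    demodulation followed by a crude moving-average low-pass filter."""
    t = np.arange(len(signal)) / fs
    mixed = 2.0 * signal * np.sin(2 * np.pi * f_spin * t)   # 2*sin^2 = 1 - cos(2wt)
    win = int(fs / f_spin)                                  # average over one carrier period
    kernel = np.ones(win) / win
    return np.convolve(mixed, kernel, mode="same")
```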

  18. Development of a Waterproof Crack-Based Stretchable Strain Sensor Based on PDMS Shielding.

    PubMed

    Hong, Seong Kyung; Yang, Seongjin; Cho, Seong J; Jeon, Hyungkook; Lim, Geunbae

    2018-04-12

    This paper details the design of a poly(dimethylsiloxane) (PDMS)-shielded waterproof crack-based stretchable strain sensor, in which the electrical characteristics and sensing performance are not influenced by changes in humidity. This results in a higher number of potential applications for the sensor. A previously developed omni-purpose stretchable strain (OPSS) sensor was used as the basis for this work, which utilizes a metal cracking structure and provides a wide sensing range and high sensitivity. Changes in the conductivity of the OPSS sensor, based on humidity conditions, were investigated along with the potential possibility of using the design as a humidity sensor. However, to prevent conductivity variation, which can decrease the reliability and sensing ability of the OPSS sensor, PDMS was utilized as a shielding layer over the OPSS sensor. The PDMS-shielded OPSS sensor showed approximately the same electrical characteristics as previous designs, including in a high humidity environment, while maintaining its strain sensing capabilities. The developed sensor shows promise for use under high humidity conditions and in underwater applications. Therefore, considering its unique features and reliable sensing performance, the developed PDMS-shielded waterproof OPSS sensor has potential utility in a wide range of applications, such as motion monitoring, medical robotics and wearable healthcare devices.

  19. ShakeCast: Automating and Improving the Use of ShakeMap for Post-Earthquake Decision- Making and Response

    NASA Astrophysics Data System (ADS)

    Lin, K.; Wald, D. J.

    2007-12-01

    ShakeCast is a freely available, post-earthquake situational awareness application that automatically retrieves earthquake shaking data from ShakeMap, compares intensity measures against users' facilities, sends notifications of potential damage to responsible parties, and generates facility damage maps and other Web-based products for emergency managers and responders. ShakeMap, a tool used to portray the extent of potentially damaging shaking following an earthquake, provides overall information regarding the affected areas. When a potentially damaging earthquake occurs, utility and other lifeline managers, emergency responders, and other critical users have an urgent need for information about the impact on their particular facilities so they can make appropriate decisions and take quick actions to ensure safety and restore system functionality. To this end, ShakeCast estimates the potential damage to a user's widely distributed facilities by comparing the complex shaking distribution with the potentially highly variable damageability of their inventory to provide a simple, hierarchical list and maps showing structures or facilities most likely impacted. All ShakeMap and ShakeCast files and products are non-proprietary to simplify interfacing with existing users' response tools and to encourage user-made enhancements to the software. ShakeCast uses standard RSS and HTTP requests to communicate with the USGS Web servers that host ShakeMaps, which are widely distributed and heavily mirrored. The RSS approach allows ShakeCast users to initiate and receive selected ShakeMap products and information on software updates. To assess facility damage estimates, ShakeCast users can combine measured or estimated ground motion parameters with damage relationships that can be pre-computed, use one of these ground motion parameters as input, and produce a multi-state discrete output of damage likelihood. Presently, three common approaches are being used to provide users with an indication of damage: HAZUS-based, intensity-based, and customized damage functions. Intensity-based thresholds are for locations with poorly established damage relationships; custom damage levels are for advanced ShakeCast users such as Caltrans, which produces its own set of damage functions that correspond to the specific details of each California bridge or overpass in its jurisdiction. For users whose portfolio of structures consists of common, standard designs, ShakeCast offers a simplified structural damage-state estimation capability adapted from the HAZUS-MH earthquake module (NIBS and FEMA, 2003). Currently, the simplified fragility settings consist of 128 combinations of HAZUS model building types, construction materials, building heights, and building-code eras.
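
    For the intensity- or threshold-based approach, mapping a ground-motion parameter to a discrete damage-likelihood state reduces to comparing the value against an ordered list of thresholds. The snippet below is a generic sketch; the threshold values and state names are illustrative, not ShakeCast's.

```python
def damage_state(pga_g, thresholds):
    """Map a ground-motion parameter to a discrete damage-likelihood state.

    pga_g      : peak ground acceleration at the facility (g)
    thresholds : ascending list of PGA thresholds separating the states,
                 e.g. [0.05, 0.15, 0.30] (illustrative values only)
    Returns one of 'green', 'yellow', 'orange', 'red'.
    """
    states = ["green", "yellow", "orange", "red"]
    level = sum(pga_g >= t for t in thresholds)   # count of thresholds exceeded
    return states[min(level, len(states) - 1)]

# Usage: flag facilities that reach at least the 'orange' state
if __name__ == "__main__":
    for facility, pga in {"Bridge A": 0.02, "Substation B": 0.22}.items():
        print(facility, damage_state(pga, [0.05, 0.15, 0.30]))
```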

  20. Improving Vision-Based Motor Rehabilitation Interactive Systems for Users with Disabilities Using Mirror Feedback

    PubMed Central

    Martínez-Bueso, Pau; Moyà-Alcover, Biel

    2014-01-01

    Observation is recommended in motor rehabilitation. For this reason, the aim of this study was to experimentally test the feasibility and benefit of including mirror feedback in vision-based rehabilitation systems: we projected the user onto the screen. We conducted a user study by using a previously evaluated system that improved the balance and postural control of adults with cerebral palsy. We used a within-subjects design with the two defined feedback conditions (mirror and no-mirror) with two different groups of users (8 with disabilities and 32 without disabilities) using usability measures (time-to-start (Ts) and time-to-complete (Tc)). A two-tailed paired samples t-test confirmed that, in the case of disabilities, the mirror feedback facilitated the interaction in vision-based systems for rehabilitation. The measured times were significantly worse in the absence of the user's own visual feedback (Ts = 7.09 (P < 0.001) and Tc = 4.48 (P < 0.005)). In vision-based interaction systems, the input device is the user's own body; therefore, it makes sense that feedback should be related to the body of the user. In the case of disabilities, the mirror feedback mechanisms facilitated the interaction in vision-based systems for rehabilitation. These results recommend that developers and researchers use this improvement in vision-based motor rehabilitation interactive systems. PMID:25295310

  1. Highlights: US Commercial Remote Sensing Industry Analysis

    NASA Technical Reports Server (NTRS)

    Rabin, Ron

    2002-01-01

    This viewgraph presentation profiles the US remote sensing industry based on responses to a survey by 1450 industry professionals. The presentation divides the industry into three sectors: academic, commercial, and government; the survey results from each are covered in a section of the presentation. The presentation also divides survey results on user needs into the following sectors: spatial resolution, geolocation accuracy; elevation accuracy, area coverage, imagery types, and timeliness. Data, information, and software characteristics are also covered in the presentation.

  2. Analysis of relative displacement between the HX wearable robotic exoskeleton and the user's hand.

    PubMed

    Cempini, Marco; Marzegan, Alberto; Rabuffetti, Marco; Cortese, Mario; Vitiello, Nicola; Ferrarin, Maurizio

    2014-10-18

    Advances in technology are allowing for the production of several viable wearable robotic devices to assist with activities of daily living and with rehabilitation. One of the most pressing limitations to user satisfaction is the lack of consistency in motion between the user and the robotic device. The displacements of the robot and the body segment may not correspond because of differences in skin and tissue compliance, mechanical backlash, and/or incorrect fit. This report presents the results of an analysis of the relative displacement between the user's hand and a wearable exoskeleton, the HX. HX has been designed to maximize comfort, wearability and user safety, exploiting kinematic chains with multiple degrees of freedom in a modular architecture. These appealing features may introduce several uncertainties in the kinematic performance, especially when considering the anthropometry, morphology and degree of mobility of the human hand. The small relative displacements between the hand and the exoskeleton were measured with a video-based motion capture system, while the user executed several different grips in different exoskeleton modes. The analysis furnished quantitative results about the device performance, differentiated among device modules and test conditions. In general, the global relative displacement for the distal part of the device was in the range 0.5-1.5 mm, while it remained within 3 mm (worse but still acceptable) for displacements nearest to the hand dorsum. Conclusions about the HX design principles have been drawn, as well as guidelines for future developments.

  3. Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems

    PubMed Central

    Wu, Jun; Su, Zhou; Li, Jianhua

    2017-01-01

    Fog computing, shifting intelligence and resources from the remote cloud to edge networks, has the potential to provide low latency for communication from sensing data sources to users. For the objects from the Internet of Things (IoT) to the cloud, it is a new trend that the objects establish social-like relationships with each other, which efficiently brings the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, the efficient social organization can enable more flexible, secure, and collaborative networking. The aforementioned advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which the services of fog are provisioned based on “friend” relationships. To the best of our knowledge, this is the first attempt at an organized fog computing system based on a social model. Meanwhile, social networking enhances the complexity and security risks of fog computing services, creating difficulties for security service recommendation in social fog computing. To address this, we propose a novel crowd sensing-enabling security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabling security service recommendation method for social fog computing systems. PMID:28758943

  4. Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems.

    PubMed

    Wu, Jun; Su, Zhou; Wang, Shen; Li, Jianhua

    2017-07-30

    Fog computing, shifting intelligence and resources from the remote cloud to edge networks, has the potential to provide low latency for communication from sensing data sources to users. For the objects from the Internet of Things (IoT) to the cloud, it is a new trend that the objects establish social-like relationships with each other, which efficiently brings the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, the efficient social organization can enable more flexible, secure, and collaborative networking. The aforementioned advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which the services of fog are provisioned based on "friend" relationships. To the best of our knowledge, this is the first attempt at an organized fog computing system based on a social model. Meanwhile, social networking enhances the complexity and security risks of fog computing services, creating difficulties for security service recommendation in social fog computing. To address this, we propose a novel crowd sensing-enabling security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabling security service recommendation method for social fog computing systems.

  5. Vision-Aided Context-Aware Framework for Personal Navigation Services

    NASA Astrophysics Data System (ADS)

    Saeedi, S.; Moussa, A.; El-Sheimy, N., , Dr.

    2012-07-01

    The ubiquity of mobile devices (such as smartphones and tablet PCs) has encouraged the use of location-based services (LBS) that are relevant to the current location and context of a mobile user. The main challenge of LBS is to find a pervasive and accurate personal navigation system (PNS) that works in the different situations of a mobile user. In this paper, we propose a method of personal navigation for pedestrians that allows a user to move freely in outdoor environments. This system aims at detecting the context information that is useful for improving personal navigation. The context information for a PNS consists of user activity modes (e.g. walking, stationary, driving, etc.) and the mobile device orientation and placement with respect to the user. After detecting the context information, a low-cost integrated positioning algorithm is employed to estimate pedestrian navigation parameters. The method is based on the integration of the user's relative motion (changes in velocity and heading angle), estimated from video image matching, with the absolute position information provided by GPS. A Kalman filter (KF) is used to improve the navigation solution when the user is walking and the phone is in his/her hand. The experimental results demonstrate the capabilities of this method for outdoor personal navigation systems.
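
    A heavily simplified stand-in for the described fusion is sketched below: a position-only Kalman filter that dead-reckons with vision-derived displacement increments and corrects with GPS fixes when they are available. The noise values and the reduction of the vision output to per-step displacements are assumptions for illustration; the paper's filter and context handling are more elaborate.

```python
import numpy as np

def fuse_vision_gps(vision_steps, gps_fixes, q=0.5, r=5.0):
    """Minimal position-only Kalman filter fusing vision-derived displacement
    increments with GPS fixes (a simplified stand-in for the paper's filter).

    vision_steps : (N, 2) per-step displacements (m) derived from image matching
                   (speed and heading change converted to dx, dy)
    gps_fixes    : (N, 2) GPS positions in a local frame (m); rows of NaN mean
                   no fix was available at that step; the first fix is assumed valid
    q, r         : process and measurement noise variances (assumed values)
    """
    x = gps_fixes[0].copy()                 # start at the first GPS fix
    P = np.eye(2) * r
    Q, R, I = np.eye(2) * q, np.eye(2) * r, np.eye(2)
    track = [x.copy()]
    for step, z in zip(vision_steps[1:], gps_fixes[1:]):
        # Predict: dead-reckon with the relative motion estimate
        x = x + step
        P = P + Q
        # Update: correct with the absolute GPS position when available
        if not np.any(np.isnan(z)):
            K = P @ np.linalg.inv(P + R)
            x = x + K @ (z - x)
            P = (I - K) @ P
        track.append(x.copy())
    return np.array(track)
```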

  6. Building and evaluating sensor-based Citizens' Observatories for improving quality of life in cities

    NASA Astrophysics Data System (ADS)

    Castell, Nuria; Lahoz, William; Schneider, Philipp; Høiskar, Britt Ann; Grossberndt, Sonja; Naderer, Clemens; Robinson, Johanna; Kocman, David; Horvat, Milena; Bartonova, Alena

    2014-05-01

    Urban air quality and the environmental quality of public spaces and indoor areas, such as schools, are of great concern to citizens and policymakers. However, access to information addressing these areas is not always available in a user-friendly manner. In particular, the quality and quantity of this information is not consistent across these areas, and does not reflect differences in needs among users. The EU-funded CITI-SENSE project will build on the concept of the Citizens' Observatories to empower citizens to contribute to and participate in environmental governance, and enable them to support and influence decision making by policymakers. To achieve this goal, CITI-SENSE will develop, test, demonstrate and validate a community-based environmental monitoring and information system using low-cost sensors and Earth Observation applications. Key to achieving this goal is the chain "sensors-platforms-products-users" linking providers of technology to users: (i) technologies for distributed monitoring (sensors); (ii) information and communication technologies (platform); (iii) information products and services (products); and (iv) citizen involvement in both monitoring and societal decisions (users). The CITI-SENSE observatories cover three empowerment initiatives: urban air quality; public spaces; and school indoor quality. The empowerment initiatives are being performed at nine locations across Europe. Each location has adapted the generic case study to its local circumstances and has contacted the urban stakeholders needed to run the study. The empowerment initiatives are divided into two phases: a first phase (Pilot Study), and a second phase (Full Implementation). The main goal of the Pilot Study is to test and evaluate the chain "sensors-platform-products-users". To assess the results of the empowerment initiatives, key performance indicators (KPIs) are being developed; these include questionnaires for users. The KPIs will be used to design the full implementation phase of the project. First results from the Pilot Study will be presented for three participating cities: Ljubljana (Slovenia), Vienna (Austria) and Oslo (Norway), which differ in size, environmental conditions and social perception of local air quality. The Ljubljana and Oslo empowerment initiatives include urban air quality and school indoor air quality, while Vienna only includes urban air quality. For the area of urban air quality, the three cities will deploy a wireless network of five static sensor nodes and distribute five personal sensors among people to be carried while performing daily activities in the pilot study. The data will be accessible to users through mobile phones, web services and other devices. For the full implementation phase the sensor network will comprise a total of 20 to 40 static nodes, depending on the size of the city, and 20 personal nodes. For school indoor air quality, three sensors will be allocated inside the school and one outside. The data will be displayed in school classrooms, giving the students a unique and innovative approach to learning about air quality by being involved. Acknowledgements: CITI-SENSE is a Collaborative Project partly funded by the EU FP7-ENV-2012 under grant agreement no 308524. www.citi-sense.eu.

  7. Design of a haptic device with grasp and push-pull force feedback for a master-slave surgical robot.

    PubMed

    Hu, Zhenkai; Yoon, Chae-Hyun; Park, Samuel Byeongjun; Jo, Yung-Ho

    2016-07-01

    We propose a portable haptic device providing grasp (kinesthetic) and push-pull (cutaneous) sensations for optical-motion-capture master interfaces. Although optical-motion-capture master interfaces for surgical robot systems can overcome the stiffness, friction, and coupling problems of mechanical master interfaces, it is difficult to add haptic feedback to an optical-motion-capture master interface without constraining the free motion of the operator's hands. Therefore, we utilized a Bowden cable-driven mechanism to provide the grasp and push-pull sensation while retaining the free hand motion of the optical-motion capture master interface. To evaluate the haptic device, we construct a 2-DOF force sensing/force feedback system. We compare the sensed force and the reproduced force of the haptic device. Finally, a needle insertion test was done to evaluate the performance of the haptic interface in the master-slave system. The results demonstrate that both the grasp force feedback and the push-pull force feedback provided by the haptic interface closely matched with the sensed forces of the slave robot. We successfully apply our haptic interface in the optical-motion-capture master-slave system. The results of the needle insertion test showed that our haptic feedback can provide more safety than merely visual observation. We develop a suitable haptic device to produce both kinesthetic grasp force feedback and cutaneous push-pull force feedback. Our future research will include further objective performance evaluations of the optical-motion-capture master-slave robot system with our haptic interface in surgical scenarios.

  8. Assessing college-level learning difficulties and "at riskness" for learning disabilities and ADHD: development and validation of the learning difficulties assessment.

    PubMed

    Kane, Steven T; Walker, John H; Schmidt, George R

    2011-01-01

    This article describes the development and validation of the Learning Difficulties Assessment (LDA), a normed and web-based survey that assesses perceived difficulties with reading, writing, spelling, mathematics, listening, concentration, memory, organizational skills, sense of control, and anxiety in college students. The LDA is designed to (a) map individual learning strengths and weaknesses, (b) provide users with a comparative sense of their academic skills, (c) integrate research in user-interface design to assist those with reading and learning challenges, and (d) identify individuals who may be at risk for learning disabilities and attention-deficit/hyperactivity disorder (ADHD) and who should thus be further assessed. Data from a large-scale 5-year study describing the instrument's validity as a screening tool for learning disabilities and ADHD are presented. This article also describes unique characteristics of the LDA including its user-interface design, normative characteristics, and use as a no-cost screening tool for identifying college students at risk for learning disorders and ADHD.

  9. Assistive obstacle detection and navigation devices for vision-impaired users.

    PubMed

    Ong, S K; Zhang, J; Nee, A Y C

    2013-09-01

    Quality of life for the visually impaired is an urgent worldwide issue that needs to be addressed. Obstacle detection is one of the most important navigation tasks for the visually impaired. In this paper, a novel range sensor placement scheme is proposed for the development of obstacle detection devices. Based on this scheme, two prototypes have been developed targeting different user groups. This paper discusses the design issues, functional modules and the evaluation tests carried out for both prototypes. Implications for Rehabilitation: Visual impairment is a growing problem due to the worldwide ageing population. Individuals with visual impairment require assistance from assistive devices in daily navigation tasks. Traditional assistive devices that assist navigation may have certain drawbacks, such as the limited sensing range of a white cane. Obstacle detection devices applying range sensor technology can identify road conditions over a longer sensing range to notify the users of potential dangers in advance.

  10. MapSentinel: Can the Knowledge of Space Use Improve Indoor Tracking Further?

    PubMed Central

    Jia, Ruoxi; Jin, Ming; Zou, Han; Yesilata, Yigitcan; Xie, Lihua; Spanos, Costas

    2016-01-01

    Estimating an occupant’s location is arguably the most fundamental sensing task in smart buildings. The applications for fine-grained, responsive building operations require the location sensing systems to provide location estimates in real time, also known as indoor tracking. Existing indoor tracking systems require occupants to carry specialized devices or install programs on their smartphone to collect inertial sensing data. In this paper, we propose MapSentinel, which performs non-intrusive location sensing based on WiFi access points and ultrasonic sensors. MapSentinel combines the noisy sensor readings with the floormap information to estimate locations. One key observation supporting our work is that occupants exhibit distinctive motion characteristics at different locations on the floormap, e.g., constrained motion along the corridor or in the cubicle zones, and free movement in the open space. While extensive research has been performed on using a floormap as a tool to obtain correct walking trajectories without wall-crossings, there have been few attempts to incorporate the knowledge of space use available from the floormap into the location estimation. This paper argues that the knowledge of space use as an additional information source presents new opportunities for indoor tracking. The fusion of heterogeneous information is theoretically formulated within the Factor Graph framework, and the Context-Augmented Particle Filtering algorithm is developed to efficiently solve real-time walking trajectories. Our evaluation in a large office space shows that the MapSentinel can achieve accuracy improvement of 31.3% compared with the purely WiFi-based tracking system. PMID:27049387
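
    The abstract names the Context-Augmented Particle Filtering algorithm without giving implementation details, so the following is only a minimal sketch of the underlying idea: particles of a WiFi-based tracker are propagated with a motion step whose magnitude follows a floormap-derived prior on space use, then reweighted by a measurement likelihood. The zone names, speeds, noise levels and the rss_model callable are assumptions made for the example, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical space-use prior: expected walking speed (m/s) per zone type.
    ZONE_SPEED = {"corridor": 1.2, "cubicle": 0.3, "open": 0.8}

    def zone_of(pos):
        # Toy lookup; a real system would query the floormap for the zone at pos.
        if pos[1] < 2.0:
            return "corridor"
        return "cubicle" if pos[0] < 5.0 else "open"

    def step(particles, weights, rss_obs, rss_model, dt=1.0, sigma_rss=4.0):
        """One predict/update/resample cycle of a map-aware particle filter."""
        # Predict: random-walk step whose size follows the zone's speed prior.
        speeds = np.array([ZONE_SPEED[zone_of(p)] for p in particles])
        particles = particles + rng.normal(0.0, (speeds * dt)[:, None], particles.shape)

        # Update: weight by the WiFi RSS likelihood (Gaussian measurement model);
        # rss_model(particles) is a user-supplied path-loss prediction per access point.
        err = np.linalg.norm(rss_model(particles) - rss_obs, axis=1)
        weights = weights * np.exp(-0.5 * (err / sigma_rss) ** 2)
        weights = weights / weights.sum()

        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
            idx = rng.choice(len(weights), size=len(weights), p=weights)
            particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
        return particles, weights

    # Position estimate at each step: np.average(particles, weights=weights, axis=0)
    ```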

  11. MapSentinel: Can the Knowledge of Space Use Improve Indoor Tracking Further?

    PubMed

    Jia, Ruoxi; Jin, Ming; Zou, Han; Yesilata, Yigitcan; Xie, Lihua; Spanos, Costas

    2016-04-02

    Estimating an occupant's location is arguably the most fundamental sensing task in smart buildings. The applications for fine-grained, responsive building operations require the location sensing systems to provide location estimates in real time, also known as indoor tracking. Existing indoor tracking systems require occupants to carry specialized devices or install programs on their smartphone to collect inertial sensing data. In this paper, we propose MapSentinel, which performs non-intrusive location sensing based on WiFi access points and ultrasonic sensors. MapSentinel combines the noisy sensor readings with the floormap information to estimate locations. One key observation supporting our work is that occupants exhibit distinctive motion characteristics at different locations on the floormap, e.g., constrained motion along the corridor or in the cubicle zones, and free movement in the open space. While extensive research has been performed on using a floormap as a tool to obtain correct walking trajectories without wall-crossings, there have been few attempts to incorporate the knowledge of space use available from the floormap into the location estimation. This paper argues that the knowledge of space use as an additional information source presents new opportunities for indoor tracking. The fusion of heterogeneous information is theoretically formulated within the Factor Graph framework, and the Context-Augmented Particle Filtering algorithm is developed to efficiently solve real-time walking trajectories. Our evaluation in a large office space shows that the MapSentinel can achieve accuracy improvement of 31.3% compared with the purely WiFi-based tracking system.

  12. Motion-compensated optical coherence tomography using envelope-based surface detection and Kalman-based prediction

    NASA Astrophysics Data System (ADS)

    Irsch, Kristina; Lee, Soohyun; Bose, Sanjukta N.; Kang, Jin U.

    2018-02-01

    We present an optical coherence tomography (OCT) imaging system that effectively compensates unwanted axial motion with micron-scale accuracy. The OCT system is based on a swept-source (SS) engine (1060-nm center wavelength, 100-nm full-width sweeping bandwidth, and 100-kHz repetition rate), with axial and lateral resolutions of about 4.5 and 8.5 microns, respectively. The SS-OCT system incorporates a distance sensing method utilizing an envelope-based surface detection algorithm. The algorithm locates the target surface from the B-scans, taking into account not just the first or highest peak but the entire signature of sequential A-scans. Subsequently, a Kalman filter is applied as a predictor to make up for system latencies, before sending the calculated position information to control a linear motor, adjusting and maintaining a fixed system-target distance. To test system performance, the motion-correction algorithm was compared to earlier, more basic peak-based surface detection methods and to performing no motion compensation. Results demonstrate increased robustness and reproducibility with the novel technique, particularly noticeable in multilayered tissues. Implementing such motion compensation into clinical OCT systems may thus improve the reliability of objective and quantitative information that can be extracted from OCT measurements.
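
    The abstract describes the predictor only at a high level, so the sketch below is a generic stand-in rather than the authors' implementation: a constant-velocity Kalman filter that smooths the envelope-detected surface depth and extrapolates it over an assumed actuation latency before a motor command is issued. All noise parameters and units are placeholders.

    ```python
    import numpy as np

    class AxialMotionPredictor:
        """Constant-velocity Kalman filter over [depth, velocity] used to predict the
        sample surface position ahead of time and compensate for system latency
        (illustrative parameter values only)."""

        def __init__(self, dt, q=1e-3, r=4.0):
            self.x = np.zeros(2)                        # state: [depth_um, velocity_um_per_s]
            self.P = np.eye(2) * 1e3                    # state covariance
            self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
            self.Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                                   [dt**3 / 2, dt**2]]) # process noise
            self.H = np.array([[1.0, 0.0]])             # only depth is measured
            self.R = np.array([[r]])                    # measurement noise (um^2)

        def update(self, depth_um):
            # Predict, then correct with the envelope-detected surface depth.
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            y = depth_um - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + (K @ y).ravel()
            self.P = (np.eye(2) - K @ self.H) @ self.P
            return self.x

        def predict_ahead(self, latency_s):
            # Extrapolate depth over the known latency before commanding the linear motor.
            return self.x[0] + self.x[1] * latency_s
    ```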

  13. Towards Microeconomic Resource Sharing in End System Multicast Networks Based on Walrasian General Equilibrium

    NASA Astrophysics Data System (ADS)

    Rezvani, Mohammad Hossein; Analoui, Morteza

    2010-11-01

    We have designed a competitive economic mechanism for application-level multicast in which a number of independent services are provided to the end-users by a number of origin servers. Each offered service can be thought of as a commodity, and the origin servers and the users who relay the service to their downstream nodes can thus be thought of as producers of the economy. Also, the end-users can be viewed as consumers of the economy. The proposed mechanism regulates the price of each service in such a way that general equilibrium holds. So, all allocations will be Pareto optimal in the sense that the social welfare of the users is maximized.

  14. The Effects of Actual Human Size Display and Stereoscopic Presentation on Users' Sense of Being Together with and of Psychological Immersion in a Virtual Character

    PubMed Central

    Ahn, Dohyun; Seo, Youngnam; Kim, Minkyung; Kwon, Joung Huem; Jung, Younbo; Ahn, Jungsun

    2014-01-01

    Abstract This study examined the role of display size and mode in increasing users' sense of being together with and of their psychological immersion in a virtual character. Using a high-resolution three-dimensional virtual character, this study employed a 2×2 (stereoscopic mode vs. monoscopic mode×actual human size vs. small size display) factorial design in an experiment with 144 participants randomly assigned to each condition. Findings showed that stereoscopic mode had a significant effect on both users' sense of being together and psychological immersion. However, display size affected only the sense of being together. Furthermore, display size was not found to moderate the effect of stereoscopic mode. PMID:24606057

  15. 14 CFR 29.779 - Motion and effect of cockpit controls.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Motion and effect of cockpit controls. 29... Accommodations § 29.779 Motion and effect of cockpit controls. Cockpit controls must be designed so that they... collective pitch control, must operate with a sense of motion which corresponds to the effect on the...

  16. 14 CFR 27.779 - Motion and effect of cockpit controls.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Motion and effect of cockpit controls. 27... Accommodations § 27.779 Motion and effect of cockpit controls. Cockpit controls must be designed so that they... collective pitch control, must operate with a sense of motion which corresponds to the effect on the...

  17. 14 CFR 29.779 - Motion and effect of cockpit controls.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Motion and effect of cockpit controls. 29... Accommodations § 29.779 Motion and effect of cockpit controls. Cockpit controls must be designed so that they... collective pitch control, must operate with a sense of motion which corresponds to the effect on the...

  18. 14 CFR 29.779 - Motion and effect of cockpit controls.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Motion and effect of cockpit controls. 29... Accommodations § 29.779 Motion and effect of cockpit controls. Cockpit controls must be designed so that they... collective pitch control, must operate with a sense of motion which corresponds to the effect on the...

  19. 14 CFR 27.779 - Motion and effect of cockpit controls.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Motion and effect of cockpit controls. 27... Accommodations § 27.779 Motion and effect of cockpit controls. Cockpit controls must be designed so that they... collective pitch control, must operate with a sense of motion which corresponds to the effect on the...

  20. 14 CFR 27.779 - Motion and effect of cockpit controls.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Motion and effect of cockpit controls. 27... Accommodations § 27.779 Motion and effect of cockpit controls. Cockpit controls must be designed so that they... collective pitch control, must operate with a sense of motion which corresponds to the effect on the...

  1. 14 CFR 29.779 - Motion and effect of cockpit controls.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Motion and effect of cockpit controls. 29... Accommodations § 29.779 Motion and effect of cockpit controls. Cockpit controls must be designed so that they... collective pitch control, must operate with a sense of motion which corresponds to the effect on the...

  2. 14 CFR 27.779 - Motion and effect of cockpit controls.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Motion and effect of cockpit controls. 27... Accommodations § 27.779 Motion and effect of cockpit controls. Cockpit controls must be designed so that they... collective pitch control, must operate with a sense of motion which corresponds to the effect on the...

  3. 14 CFR 29.779 - Motion and effect of cockpit controls.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Motion and effect of cockpit controls. 29... Accommodations § 29.779 Motion and effect of cockpit controls. Cockpit controls must be designed so that they... collective pitch control, must operate with a sense of motion which corresponds to the effect on the...

  4. 14 CFR 27.779 - Motion and effect of cockpit controls.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Motion and effect of cockpit controls. 27... Accommodations § 27.779 Motion and effect of cockpit controls. Cockpit controls must be designed so that they... collective pitch control, must operate with a sense of motion which corresponds to the effect on the...

  5. A Review of Hybrid Fiber-Optic Distributed Simultaneous Vibration and Temperature Sensing Technology and Its Geophysical Applications

    PubMed Central

    2017-01-01

    Distributed sensing systems can transform an optical fiber cable into an array of sensors, allowing users to detect and monitor multiple physical parameters such as temperature, vibration and strain with fine spatial and temporal resolution over a long distance. Fiber-optic distributed acoustic sensing (DAS) and distributed temperature sensing (DTS) systems have been developed for various applications with varied spatial resolution, and spectral and sensing range. Rayleigh scattering-based phase optical time domain reflectometry (OTDR) for vibration and Raman/Brillouin scattering-based OTDR for temperature and strain measurements have been developed over the past two decades. The key challenge has been to find a methodology that would enable the physical parameters to be determined at any point along the sensing fiber with high sensitivity and spatial resolution, yet within acceptable frequency range for dynamic vibration, and temperature detection. There are many applications, especially in geophysical and mining engineering where simultaneous measurements of vibration and temperature are essential. In this article, recent developments of different hybrid systems for simultaneous vibration, temperature and strain measurements are analyzed based on their operation principles and performance. Then, challenges and limitations of the systems are highlighted for geophysical applications. PMID:29104259

  6. A Review of Hybrid Fiber-Optic Distributed Simultaneous Vibration and Temperature Sensing Technology and Its Geophysical Applications.

    PubMed

    Miah, Khalid; Potter, David K

    2017-11-01

    Distributed sensing systems can transform an optical fiber cable into an array of sensors, allowing users to detect and monitor multiple physical parameters such as temperature, vibration and strain with fine spatial and temporal resolution over a long distance. Fiber-optic distributed acoustic sensing (DAS) and distributed temperature sensing (DTS) systems have been developed for various applications with varied spatial resolution, and spectral and sensing range. Rayleigh scattering-based phase optical time domain reflectometry (OTDR) for vibration and Raman/Brillouin scattering-based OTDR for temperature and strain measurements have been developed over the past two decades. The key challenge has been to find a methodology that would enable the physical parameters to be determined at any point along the sensing fiber with high sensitivity and spatial resolution, yet within acceptable frequency range for dynamic vibration, and temperature detection. There are many applications, especially in geophysical and mining engineering where simultaneous measurements of vibration and temperature are essential. In this article, recent developments of different hybrid systems for simultaneous vibration, temperature and strain measurements are analyzed based on their operation principles and performance. Then, challenges and limitations of the systems are highlighted for geophysical applications.

  7. Conversational sensing

    NASA Astrophysics Data System (ADS)

    Preece, Alun; Gwilliams, Chris; Parizas, Christos; Pizzocaro, Diego; Bakdash, Jonathan Z.; Braines, Dave

    2014-05-01

    Recent developments in sensing technologies, mobile devices and context-aware user interfaces have made it possible to represent information fusion and situational awareness for Intelligence, Surveillance and Reconnaissance (ISR) activities as a conversational process among actors at or near the tactical edges of a network. Motivated by use cases in the domain of Company Intelligence Support Team (CoIST) tasks, this paper presents an approach to information collection, fusion and sense-making based on the use of natural language (NL) and controlled natural language (CNL) to support richer forms of human-machine interaction. The approach uses a conversational protocol to facilitate a flow of collaborative messages from NL to CNL and back again in support of interactions such as: turning eyewitness reports from human observers into actionable information (from both soldier and civilian sources); fusing information from humans and physical sensors (with associated quality metadata); and assisting human analysts to make the best use of available sensing assets in an area of interest (governed by management and security policies). CNL is used as a common formal knowledge representation for both machine and human agents to support reasoning, semantic information fusion and generation of rationale for inferences, in ways that remain transparent to human users. Examples are provided of various alternative styles for user feedback, including NL, CNL and graphical feedback. A pilot experiment with human subjects shows that a prototype conversational agent is able to gather usable CNL information from untrained human subjects.

  8. Kinesthetic Force Feedback and Belt Control for the Treadport Locomotion Interface.

    PubMed

    Hejrati, Babak; Crandall, Kyle L; Hollerbach, John M; Abbott, Jake J

    2015-01-01

    This paper describes an improved control system for the Treadport immersive locomotion interface, with results that generalize to any treadmill that utilizes an actuated tether to enable self-selected walking speed. A new belt controller is implemented to regulate the user's position; when combined with the user's own volition, this controller also enables the user to naturally self-select their walking speed as they would when walking over ground. A new kinesthetic-force-feedback controller is designed for the tether that applies forces to the user's torso. This new controller is derived based on maintaining the user's sense of balance during belt acceleration, rather than by rendering an inertial force as was done in our prior work. Based on the results of a human-subjects study, the improvements in both controllers significantly contribute to an improved perception of realistic walking on the Treadport. The improved control system uses intuitive dynamic-system and anatomical parameters and requires no ad hoc gain tuning. The control system simply requires three measurements to be made for a given user: the user's mass, the user's height, and the height of the tether attachment point on the user's torso.

  9. The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.

    Here, this work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.
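
    The paper defines the correction factor precisely; the sketch below only illustrates the quantity it is built on, the average alignment (cosine of the angle) between an assumed motion direction and the image gradients over a subset. The function name, window size and usage note are assumptions made for illustration.

    ```python
    import numpy as np

    def alignment_factor(image, motion_dir, y0, x0, half=15):
        """Mean |cos(angle)| between a motion direction and the image gradients over a
        square subset centered at (y0, x0); a stand-in for the alignment idea above,
        not the paper's exact correction factor."""
        patch = image[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
        gy, gx = np.gradient(patch.astype(float))
        grad = np.stack([gx.ravel(), gy.ravel()], axis=1)
        norms = np.linalg.norm(grad, axis=1)
        keep = norms > 1e-9                      # ignore flat (zero-gradient) pixels
        u = np.asarray(motion_dir, float)
        u = u / np.linalg.norm(u)
        cosines = np.abs(grad[keep] @ u) / norms[keep]
        return cosines.mean()

    # A value near 1 means the gradients align with the motion (locally well posed);
    # a value near 0 means the motion runs along iso-intensity lines (ill posed).
    ```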

  10. Robotic situational awareness of actions in human teaming

    NASA Astrophysics Data System (ADS)

    Tahmoush, Dave

    2015-06-01

    When robots can sense and interpret the activities of the people they are working with, they become more of a team member and less of just a piece of equipment. This has motivated work on recognizing human actions using existing robotic sensors like short-range ladar imagers. These produce three-dimensional point cloud movies which can be analyzed for structure and motion information. We skeletonize the human point cloud and apply a physics-based velocity correlation scheme to the resulting joint motions. The twenty actions are then recognized using a nearest-neighbors classifier that achieves good accuracy.

  11. The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation

    DOE PAGES

    Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.

    2017-11-27

    Here, this work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.

  12. Omni-Purpose Stretchable Strain Sensor Based on a Highly Dense Nanocracking Structure for Whole-Body Motion Monitoring.

    PubMed

    Jeon, Hyungkook; Hong, Seong Kyung; Kim, Min Seo; Cho, Seong J; Lim, Geunbae

    2017-12-06

    Here, we report an omni-purpose stretchable strain sensor (OPSS sensor) based on a nanocracking structure for monitoring whole-body motions including both joint-level and skin-level motions. By controlling and optimizing the nanocracking structure, inspired by the spider sensory system, the OPSS sensor is endowed with both high sensitivity (gauge factor ≈ 30) and a wide working range (strain up to 150%) under great linearity (R² = 0.9814) and fast response time (<30 ms). Furthermore, the fabrication process of the OPSS sensor has advantages of being extremely simple, patternable, integrated circuit-compatible, and reliable in terms of reproducibility. Using the OPSS sensor, we detected various human body motions including both moving of joints and subtle deforming of skin such as pulsation. As specific medical applications of the sensor, we also successfully developed a glove-type hand motion detector and a real-time Morse code communication system for patients with general paralysis. Therefore, considering the outstanding sensing performances, great advantages of the fabrication process, and successful results from a variety of practical applications, we believe that the OPSS sensor is a highly suitable strain sensor for whole-body motion monitoring and has potential for a wide range of applications, such as medical robotics and wearable healthcare devices.
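
    For readers unfamiliar with the quoted figure, the gauge factor is the relative resistance change per unit strain; the numbers in the snippet are illustrative only, not measurements from the paper, and are merely chosen to land near the gauge factor of about 30 quoted above.

    ```python
    def gauge_factor(r0_ohm, r_ohm, strain):
        """Gauge factor GF = (delta_R / R0) / strain for a resistive strain sensor."""
        return ((r_ohm - r0_ohm) / r0_ohm) / strain

    # Hypothetical reading: resistance rising from 1.0 kOhm to 4.0 kOhm at 10% strain
    # gives GF = 3.0 / 0.10 = 30.
    print(gauge_factor(1000.0, 4000.0, 0.10))   # -> 30.0
    ```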

  13. Enhancing Long-Term Motivation of Cardiac Patients by Applying Exergaming in Rehabilitation Training.

    PubMed

    Volmer, Joe; Burkert, Malte; Krumm, Heiko; Abodahab, Abdurrahman; Dinklage, Patrick; Feltmann, Marius; Kröger, Chris; Panta, Pernes; Schäfer, Felix; Scheidt, David; Sellung, Marcel; Singerhoff, Hauke; Steingrefer, Christofer; Schmidt, Thomas; Hoffmann, Jan-Dirk; Willemsen, Detlev; Reiss, Nils

    2017-01-01

    Although regular physical activity reduces mortality and increases quality of life, many cardiac patients discontinue training due to lack of motivation, lack of time, or health concerns about excessive training intensity. Therefore, we developed an exergaming-based system to enhance long-term motivation in the context of rehabilitation training. We combined different hardware components such as vital sensors, a virtual reality headset, a motion-detecting camera, a bicycle ergometer and a motion platform to create an immersive and fun experience for the training user without having to worry about any negative health impact. Our evaluation shows that the system is well accepted by the users and is capable of tackling the aforementioned reasons for an inactive lifestyle. The system is designed to be easily extensible, safe to use, and enables professionals to adjust and telemonitor the training at any time.

  14. Exercise Sensing and Pose Recovery Inference Tool (ESPRIT) - A Compact Stereo-based Motion Capture Solution For Exercise Monitoring

    NASA Technical Reports Server (NTRS)

    Lee, Mun Wai

    2015-01-01

    Crew exercise is important during long-duration space flight not only for maintaining health and fitness but also for preventing adverse health problems, such as losses in muscle strength and bone density. Monitoring crew exercise via motion capture and kinematic analysis aids understanding of the effects of microgravity on exercise and helps ensure that exercise prescriptions are effective. Intelligent Automation, Inc., has developed ESPRIT to monitor exercise activities, detect body markers, extract image features, and recover three-dimensional (3D) kinematic body poses. The system relies on prior knowledge and modeling of the human body and on advanced statistical inference techniques to achieve robust and accurate motion capture. In Phase I, the company demonstrated motion capture of several exercises, including walking, curling, and dead lifting. Phase II efforts focused on enhancing algorithms and delivering an ESPRIT prototype for testing and demonstration.

  15. Exploitation of Ubiquitous Wi-Fi Devices as Building Blocks for Improvised Motion Detection Systems.

    PubMed

    Soldovieri, Francesco; Gennarelli, Gianluca

    2016-02-27

    This article deals with a feasibility study on the detection of human movements in indoor scenarios based on radio signal strength variations. The sensing principle exploits the fact that the human body interacts with wireless signals, introducing variations of the radiowave fields due to shadowing and multipath phenomena. As a result, human motion can be inferred from fluctuations of radiowave power collected by a receiving terminal. In this paper, we investigate the potential of widely available wireless communication devices in order to develop an improvised motion detection system (IMDS). Experimental tests are performed in an indoor environment by using a smartphone as a Wi-Fi access point and a laptop with dedicated software as a receiver. Simple detection strategies tailored for real-time operation are implemented to process the received signal strength measurements. The achieved results confirm the potential of the simple system proposed here to reliably detect human motion in operational conditions.
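
    The paper does not spell out its detection strategies beyond calling them simple and real-time capable, so the snippet below is only one plausible instance of the idea: flag motion whenever the received-signal-strength variation in a sliding window exceeds a threshold. The window length, threshold and synthetic data are assumptions.

    ```python
    import numpy as np

    def detect_motion(rss_dbm, window=20, threshold_db=2.0):
        """Flag samples whose trailing window of RSS readings has a standard
        deviation above threshold_db (a crude presence/motion indicator)."""
        rss = np.asarray(rss_dbm, dtype=float)
        flags = np.zeros(len(rss), dtype=bool)
        for i in range(window, len(rss) + 1):
            if np.std(rss[i - window:i]) > threshold_db:
                flags[i - 1] = True
        return flags

    # Synthetic check: a quiet channel followed by person-induced fluctuations.
    rng = np.random.default_rng(0)
    quiet = -50.0 + rng.normal(0.0, 0.5, 200)
    moving = -50.0 + rng.normal(0.0, 4.0, 200)
    print(detect_motion(np.concatenate([quiet, moving])).sum(), "samples flagged")
    ```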

  16. Applying n-bit floating point numbers and integers, and the n-bit filter of HDF5 to reduce file sizes of remote sensing products in memory-sensitive environments

    NASA Astrophysics Data System (ADS)

    Zinke, Stephan

    2017-02-01

    Memory-sensitive applications for remote sensing data require memory-optimized data types in remote sensing products. Hierarchical Data Format version 5 (HDF5) offers user-defined floating point numbers and integers and the n-bit filter to create data types optimized for memory consumption. The European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) applies a compaction scheme to the disseminated products of the Day and Night Band (DNB) data of the Suomi National Polar-orbiting Partnership (S-NPP) satellite's instrument Visible Infrared Imager Radiometer Suite (VIIRS) through the EUMETSAT Advanced Retransmission Service, converting the original 32-bit floating point numbers to user-defined floating point numbers in combination with the n-bit filter for the radiance dataset of the product. The radiance dataset requires a floating point representation due to the high dynamic range of the DNB. A compression factor of 1.96 is reached by using an automatically determined exponent size and an 8-bit trailing significand, thus reducing the bandwidth requirements for dissemination. It is shown how the parameters needed for user-defined floating point numbers are derived or determined automatically based on the data present in a product.
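
    To make the size/precision trade-off concrete, the snippet below truncates float32 significands to 8 bits with NumPy bit masking. It only illustrates the precision loss implied by an 8-bit trailing significand; it is not the HDF5 n-bit filter itself (which additionally repacks the narrower type on disk), and the sample values are not EUMETSAT data.

    ```python
    import numpy as np

    def truncate_significand(x, keep_bits=8):
        """Zero out low-order mantissa bits of float32 values, keeping keep_bits
        of the 23-bit trailing significand."""
        bits = np.asarray(x, dtype=np.float32).view(np.uint32)
        drop = 23 - keep_bits
        mask = np.uint32((0xFFFFFFFF >> drop) << drop)
        return (bits & mask).view(np.float32)

    radiance = np.array([1.2e-4, 3.7e-2, 0.85, 215.0], dtype=np.float32)
    approx = truncate_significand(radiance)
    # Truncation keeps the relative error below 2**-8 (about 0.4%) for every value.
    print(np.max(np.abs(approx - radiance) / radiance))
    ```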

  17. Design of a compact low-power human-computer interaction equipment for hand motion

    NASA Astrophysics Data System (ADS)

    Wu, Xianwei; Jin, Wenguang

    2017-01-01

    Human-Computer Interaction (HCI) raises demands for convenience, endurance, responsiveness and naturalness. This paper describes the design of compact, wearable, low-power HCI equipment applied to gesture recognition. The system combines multi-mode sensing signals (a vision signal and a motion signal), and the equipment is equipped with a depth camera and a motion sensor. Its dimensions (40 mm × 30 mm) and structure are compact and portable after tight integration. The system is built on a modular layered framework, which supports real-time collection (60 fps), processing and transmission, combining synchronous fusion of asynchronously and concurrently collected data with wireless Bluetooth 4.0 transmission. To minimize the equipment's energy consumption, the system uses low-power components, manages peripheral states dynamically, switches into idle mode intelligently, applies pulse-width modulation (PWM) to the NIR LEDs of the depth camera, and optimizes the algorithm running on the motion sensor. To test the equipment's function and performance, a gesture recognition algorithm was applied to the system. The results show that overall energy consumption can be as low as 0.5 W.

  18. Parallel Microcracks-based Ultrasensitive and Highly Stretchable Strain Sensors.

    PubMed

    Amjadi, Morteza; Turan, Mehmet; Clementson, Cameron P; Sitti, Metin

    2016-03-02

    There is an increasing demand for flexible, skin-attachable, and wearable strain sensors due to their various potential applications. However, achieving strain sensors with both high sensitivity and high stretchability is still a grand challenge. Here, we propose highly sensitive and stretchable strain sensors based on the reversible microcrack formation in composite thin films. Controllable parallel microcracks are generated in graphite thin films coated on elastomer films. Sensors made of graphite thin films with short microcracks possess high gauge factors (maximum value of 522.6) and stretchability (ε ≥ 50%), whereas sensors with long microcracks show ultrahigh sensitivity (maximum value of 11,344) with limited stretchability (ε ≤ 50%). We demonstrate the high performance strain sensing of our sensors in both small and large strain sensing applications such as human physiological activity recognition, human body large motion capturing, vibration detection, pressure sensing, and soft robotics.

  19. Transient aging in fractional Brownian and Langevin-equation motion.

    PubMed

    Kursawe, Jochen; Schulz, Johannes; Metzler, Ralf

    2013-12-01

    Stochastic processes driven by stationary fractional Gaussian noise, that is, fractional Brownian motion and fractional Langevin-equation motion, are usually considered to be ergodic in the sense that, after an algebraic relaxation, time and ensemble averages of physical observables coincide. Recently it was demonstrated that fractional Brownian motion and fractional Langevin-equation motion under external confinement are transiently nonergodic-time and ensemble averages behave differently-from the moment when the particle starts to sense the confinement. Here we show that these processes also exhibit transient aging, that is, physical observables such as the time-averaged mean-squared displacement depend on the time lag between the initiation of the system at time t=0 and the start of the measurement at the aging time t(a). In particular, it turns out that for fractional Langevin-equation motion the aging dependence on t(a) is different between the cases of free and confined motion. We obtain explicit analytical expressions for the aged moments of the particle position as well as the time-averaged mean-squared displacement and present a numerical analysis of this transient aging phenomenon.
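
    The aging effect is expressed through the time-averaged mean-squared displacement evaluated from the aging time t_a onward. The snippet below computes that quantity for a sampled trajectory; ordinary Brownian motion (H = 1/2) is used as a stand-in test signal, for which no aging is expected, since generating proper fractional Gaussian noise is beyond the scope of this short example.

    ```python
    import numpy as np

    def time_averaged_msd(x, lag, t_a=0):
        """Time-averaged MSD of a unit-step-sampled trajectory x, with the
        measurement started at aging time t_a (in samples):
        mean over t of [x(t_a + t + lag) - x(t_a + t)]**2."""
        seg = np.asarray(x, dtype=float)[t_a:]
        return np.mean((seg[lag:] - seg[:-lag]) ** 2)

    rng = np.random.default_rng(0)
    traj = np.cumsum(rng.normal(size=100_000))   # ordinary Brownian motion
    # For Brownian motion the two values agree (no aging); for confined fractional
    # Langevin-equation motion they would differ, as described above.
    print(time_averaged_msd(traj, lag=10, t_a=0),
          time_averaged_msd(traj, lag=10, t_a=50_000))
    ```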

  20. QKD-based quantum private query without a failure probability

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Gao, Fei; Huang, Wei; Wen, QiaoYan

    2015-10-01

    In this paper, we present a quantum-key-distribution (QKD)-based quantum private query (QPQ) protocol utilizing single-photon signal of multiple optical pulses. It maintains the advantages of the QKD-based QPQ, i.e., easy to implement and loss tolerant. In addition, different from the situations in the previous QKD-based QPQ protocols, in our protocol, the number of the items an honest user will obtain is always one and the failure probability is always zero. This characteristic not only improves the stability (in the sense that, ignoring the noise and the attack, the protocol would always succeed), but also benefits the privacy of the database (since the database will no more reveal additional secrets to the honest users). Furthermore, for the user's privacy, the proposed protocol is cheat sensitive, and for security of the database, we obtain an upper bound for the leaked information of the database in theory.

  1. The SCEC Broadband Platform: Open-Source Software for Strong Ground Motion Simulation and Validation

    NASA Astrophysics Data System (ADS)

    Silva, F.; Goulet, C. A.; Maechling, P. J.; Callaghan, S.; Jordan, T. H.

    2016-12-01

    The Southern California Earthquake Center (SCEC) Broadband Platform (BBP) is a carefully integrated collection of open-source scientific software programs that can simulate broadband (0-100 Hz) ground motions for earthquakes at regional scales. The BBP can run earthquake rupture and wave propagation modeling software to simulate ground motions for well-observed historical earthquakes and to quantify how well the simulated broadband seismograms match the observed seismograms. The BBP can also run simulations for hypothetical earthquakes. In this case, users input an earthquake location and magnitude description, a list of station locations, and a 1D velocity model for the region of interest, and the BBP software then calculates ground motions for the specified stations. The BBP scientific software modules implement kinematic rupture generation, low- and high-frequency seismogram synthesis using wave propagation through 1D layered velocity structures, several ground motion intensity measure calculations, and various ground motion goodness-of-fit tools. These modules are integrated into a software system that provides user-defined, repeatable, calculation of ground-motion seismograms, using multiple alternative ground motion simulation methods, and software utilities to generate tables, plots, and maps. The BBP has been developed over the last five years in a collaborative project involving geoscientists, earthquake engineers, graduate students, and SCEC scientific software developers. The SCEC BBP software released in 2016 can be compiled and run on recent Linux and Mac OS X systems with GNU compilers. It includes five simulation methods, seven simulation regions covering California, Japan, and Eastern North America, and the ability to compare simulation results against empirical ground motion models (aka GMPEs). The latest version includes updated ground motion simulation methods, a suite of new validation metrics and a simplified command line user interface.

  2. Combined Feature Based and Shape Based Visual Tracker for Robot Navigation

    NASA Technical Reports Server (NTRS)

    Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.

    2005-01-01

    We have developed a combined feature based and shape based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape based method. The shape based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.
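
    The incremental 6-DOF motion computation is not detailed in the abstract; a standard building block for it is least-squares rigid registration of matched 3-D points (the Kabsch/Umeyama construction), sketched below with synthetic points. This is a generic routine, not the authors' tracker.

    ```python
    import numpy as np

    def rigid_transform(src, dst):
        """Least-squares rotation R and translation t with dst ~= R @ src + t,
        estimated from matched 3-D points via SVD (Kabsch)."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t

    # Synthetic check: points "triangulated" before and after a known robot move.
    rng = np.random.default_rng(1)
    before = rng.uniform(-1.0, 1.0, (30, 3))
    true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    after = before @ true_R.T + np.array([0.10, 0.00, 0.05])
    R, t = rigid_transform(before, after)
    print(np.allclose(R, true_R), np.round(t, 3))
    ```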

  3. TERSSE: Definition of the Total Earth Resources System for the Shuttle Era. Volume 8: User's Mission and System Requirements Data (appendix A of Volume 3)

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A computer printout is presented of the mission requirements for the TERSSE missions and their associated user tasks. The data included in the data base represents a broad-based attempt to define the amount, extent, and type of information needed for an earth resources management program in the era of the space shuttle. An effort was made to consider all aspects of remote sensing and resource management; because of its broad scope, it is not intended that the data be used without verification for in-depth studies of particular missions and/or users. The data base represents the quantitative structure necessary to define the TERSSE architecture and requirements, and to provide an overall integrated view of the earth resources technology requirements of the 1980's.

  4. A Motion-Sensing Game-Based Therapy to Foster the Learning of Children with Sensory Integration Dysfunction

    ERIC Educational Resources Information Center

    Chuang, Tsung-Yen; Kuo, Ming-Shiou

    2016-01-01

    Children with Sensory Integration Dysfunction (SID, also known as Sensory Processing Disorder, SPD) are also learners with disabilities with regard to responding adequately to the demands made by a learning environment. With problems of organizing and processing the sensation information coming from body modalities, children with SID (CwSID)…

  5. Presence in Video-Mediated Interactions: Case Studies at CSIRO

    NASA Astrophysics Data System (ADS)

    Alem, Leila

    Although telepresence and a sense of connectedness with others are frequently mentioned in media space studies, as far as we know, none of these studies report attempts at assessing this critical aspect of user experience. While some attempts have been made to measure presence in virtual reality or augmented reality (a comprehensive review of existing measures is available in Baren and Ijsselsteijn [2004]), very little work has been reported in measuring presence in video-mediated collaboration systems. Traditional studies of video-mediated collaboration have mostly focused their evaluation on measures of task performance and user satisfaction. Videoconferencing systems can be seen as a type of media space; they rely on technologies of audio, video, and computing put together to create an environment extending the embodied mind. This chapter reports on a set of video-mediated collaboration studies conducted at CSIRO in which different aspects of presence are being investigated. The first study reports the sense of physical presence a specialist doctor experiences when engaged in a remote consultation of a patient using the virtual critical care unit (Alem et al., 2006). The Viccu system is an “always-on” system connecting two hospitals (Li et al., 2006). The presence measure focuses on the extent to which users of videoconferencing systems feel physically present in the remote location. The second study reports the sense of social presence users experience when playing a game of charades with remote partners using a video conference link (Kougianous et al., 2006). In this study the presence measure focuses on the extent to which users feel connected with their remote partners. The third study reports the sense of copresence users experience when collaboratively building a Lego toy (Melo and Alem, 2007). The sense of copresence is the extent to which users feel present with their remote partner. In this final study the sense of copresence is investigated by looking at the words used by users when referring to the physical objects they are manipulating during their interaction as well as when referring to locations in the collaborative workspace. We believe that such efforts provide a solid stepping stone for evaluating and analyzing future media spaces.

  6. A telemedicine instrument for Internet-based home monitoring of thoracoabdominal motion in patients with respiratory diseases

    NASA Astrophysics Data System (ADS)

    da Silva Junior, Evert Pereira; Esteves, Guilherme Pompeu; Dames, Karla Kristine; Melo, Pedro Lopes de

    2011-01-01

    Changes in thoracoabdominal motion are highly prevalent in patients with chronic respiratory diseases. Home care services that use telemedicine techniques and Internet-based monitoring have the potential to improve the management of these patients. However, there is no detailed description in the literature of a system for Internet-based monitoring of patients with disturbed thoracoabdominal motion. The purpose of this work was to describe the development of a new telemedicine instrument for Internet-based home monitoring of thoracoabdominal movement. The instrument directly measures changes in the thorax and abdomen circumferences and transfers data through a transmission control protocol/Internet protocol connection. After the design details are described, the accuracy of the electronic and software processing units of the instrument is evaluated by using electronic signals simulating normal subjects and individuals with thoracoabdominal motion disorders. The results obtained during in vivo studies on normal subjects simulating thoracoabdominal motion disorders showed that this new system is able to detect a reduction in abdominal movement that is associated with abnormal thoracic breathing (p < 0.0001) and the reduction in thoracic movement during abnormal abdominal breathing (p < 0.005). Simulated asynchrony in thoracoabdominal motion was also adequately detected by the system (p < 0.0001). The experimental results obtained for patients with respiratory diseases were in close agreement with the expected values, providing evidence that this instrument can be a useful tool for the evaluation of thoracoabdominal motion. The Internet transmission tests showed that the acquisition and analysis of the thoracoabdominal motion signals can be performed remotely. The user can also receive medical recommendations. The proposed system can be used in a spectrum of telemedicine scenarios, which can reduce the costs of assistance offered to patients with respiratory diseases.

  7. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-08-12

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
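
    A minimal sketch of the processing chain described, PCA reduction of concurrent multi-sensor readings followed by an ANN regression from the reduced field features to position, using scikit-learn. The single-axis synthetic data, sensor model and network size are assumptions standing in for the real 9-sensor measurements.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline

    # Synthetic stand-in for a 9-sensor field-versus-position dataset: each row holds
    # the nine concurrent sensor readings taken at one known position along the axis.
    rng = np.random.default_rng(0)
    positions = rng.uniform(0.0, 100.0, 2000)                      # mm along the actuator
    readings = np.stack([np.exp(-((positions - 10.0 * k) / 25.0) ** 2)
                         for k in range(9)], axis=1)
    readings += rng.normal(0.0, 0.01, readings.shape)              # Gaussian sensor noise

    # PCA compresses the 9-D sensor space before the ANN learns the field-position map.
    model = make_pipeline(PCA(n_components=4),
                          MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000))
    model.fit(readings[:1500], positions[:1500])
    errors = np.abs(model.predict(readings[1500:]) - positions[1500:])
    print("mean absolute test error (mm):", errors.mean())
    ```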

  8. Relationships between scalp, brain, and skull motion estimated using magnetic resonance elastography.

    PubMed

    Badachhape, Andrew A; Okamoto, Ruth J; Johnson, Curtis L; Bayly, Philip V

    2018-05-17

    The objective of this study was to characterize the relationships between motion in the scalp, skull, and brain. In vivo estimates of motion transmission from the skull to the brain may illuminate the mechanics of traumatic brain injury. Because of challenges in directly sensing skull motion, it is useful to know how well motion of soft tissue of the head, i.e., the scalp, can approximate skull motion or predict brain tissue deformation. In this study, motion of the scalp and brain were measured using magnetic resonance elastography (MRE) and separated into components due to rigid-body displacement and dynamic deformation. Displacement estimates in the scalp were calculated using low motion-encoding gradient strength in order to reduce "phase wrapping" (an ambiguity in displacement estimates caused by the 2 π-periodicity of MRE phase contrast). MRE estimates of scalp and brain motion were compared to skull motion estimated from three tri-axial accelerometers. Comparison of the relative amplitudes and phases of harmonic motion in the scalp, skull, and brain of six human subjects indicate that data from scalp-based sensors should be used with caution to estimate skull kinematics, but that fairly consistent relationships exist between scalp, skull, and brain motion. In addition, the measured amplitude and phase relationships of scalp, skull, and brain can be used to evaluate and improve mathematical models of head biomechanics. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Evaluation of a Micro-Force Sensing Handheld Robot for Vitreoretinal Surgery.

    PubMed

    Gonenc, Berk; Balicki, Marcin A; Handa, James; Gehlbach, Peter; Riviere, Cameron N; Taylor, Russell H; Iordachita, Iulian

    2012-12-20

    Highly accurate positioning is fundamental to the performance of vitreoretinal microsurgery. Of vitreoretinal procedures, membrane peeling is among the most prone to complications since extremely delicate manipulation of retinal tissue is required. Associated tool-to-tissue interaction forces are usually below the threshold of human perception, and the surgical tools are moved very slowly, within the 0.1-0.5 mm/s range. During the procedure, unintentional tool motion and excessive forces can easily give rise to vision loss or irreversible damage to the retina. A successful surgery includes two key features: controlled tremor-free tool motion and control of applied force. In this study, we present the potential benefits of a micro-force sensing robot in vitreoretinal surgery. Our main contribution is implementing fiber Bragg grating based force sensing in an active tremor canceling handheld micromanipulator, known as Micron, to measure tool-to-tissue interaction forces in real time. Implemented auditory sensory substitution assists in reducing and limiting forces. In order to test the functionality and performance, the force sensing Micron was evaluated in peeling experiments with adhesive bandages and with the inner shell membrane from chicken eggs. Our findings show that the combination of active tremor canceling together with auditory sensory substitution is the most promising aid that keeps peeling forces below 7 mN with a significant reduction in 2-20 Hz oscillations.

  10. Dizziness

    MedlinePlus

    ... the cause. Inner ear problems that cause dizziness (vertigo) Your sense of balance depends on the combined ... help detect gravity and back-and-forth motion Vertigo is the false sense that your surroundings are ...

  11. Dynamics and Control of Tethered Satellite Formations for the Purpose of Space-Based Remote Sensing

    DTIC Science & Technology

    2006-08-01

    remote sensing mission. Energy dissipation is found to have an adverse effect on foundational rigid body (Likins-Pringle) equilibria. It is shown that a continuously earth-facing equilibrium condition for a fixed-length tethered system does not exist since the spin rate required for the proper precession would not be high enough to maintain tether tension. The range of required spin rates for steady-spin motion is numerically defined here, but none of these conditions can meet the continuously earth-facing criteria. Of particular note is the discovery that applying certain

  12. The Tetracorder user guide: version 4.4

    USGS Publications Warehouse

    Livo, Keith Eric; Clark, Roger N.

    2014-01-01

    Imaging spectroscopy mapping software assists in the identification and mapping of materials based on their chemical properties as expressed in spectral measurements of a planet including the solid or liquid surface or atmosphere. Such software can be used to analyze field, aircraft, or spacecraft data; remote sensing datasets; or laboratory spectra. Tetracorder is a set of software algorithms commanded through an expert system to identify materials based on their spectra (Clark and others, 2003). Tetracorder also can be used in traditional remote sensing analyses, because some of the algorithms are a version of a matched filter. Thus, depending on the instructions fed to the Tetracorder system, results can range from simple matched filter output, to spectral feature fitting, to full identification of surface materials (within the limits of the spectral signatures of materials over the spectral range and resolution of the imaging spectroscopy data). A basic understanding of spectroscopy by the user is required for developing an optimum mapping strategy and assessing the results.

  13. Extracting Association Patterns in Network Communications

    PubMed Central

    Portela, Javier; Villalba, Luis Javier García; Trujillo, Alejandra Guadalupe Silva; Orozco, Ana Lucila Sandoval; Kim, Tai-hoon

    2015-01-01

    In network communications, mixes provide protection against observers by hiding the appearance of messages, patterns, length and links between senders and receivers. Statistical disclosure attacks aim to reveal the identity of senders and receivers in a communication network setting when it is protected by standard techniques based on mixes. This work aims to develop a global statistical disclosure attack to detect relationships between users. The only information used by the attacker is the number of messages sent and received by each user for each round, the batch of messages grouped by the anonymity system. A new modeling framework based on contingency tables is used. The assumptions are more flexible than those used in the literature, allowing the method to be applied to multiple situations automatically, such as email data or social network data. A classification scheme based on combinatoric solutions of the space of rounds retrieved is developed. Solutions about relationships between users are provided for all pairs of users simultaneously, since the dependence of the data retrieved needs to be addressed in a global sense. PMID:25679311

  14. Extracting association patterns in network communications.

    PubMed

    Portela, Javier; Villalba, Luis Javier García; Trujillo, Alejandra Guadalupe Silva; Orozco, Ana Lucila Sandoval; Kim, Tai-hoon

    2015-02-11

    In network communications, mixes provide protection against observers by hiding the appearance of messages, patterns, length and links between senders and receivers. Statistical disclosure attacks aim to reveal the identity of senders and receivers in a communication network setting when it is protected by standard techniques based on mixes. This work aims to develop a global statistical disclosure attack to detect relationships between users. The only information used by the attacker is the number of messages sent and received by each user for each round, the batch of messages grouped by the anonymity system. A new modeling framework based on contingency tables is used. The assumptions are more flexible than those used in the literature, allowing the method to be applied to multiple situations automatically, such as email data or social network data. A classification scheme based on combinatoric solutions of the space of rounds retrieved is developed. Solutions about relationships between users are provided for all pairs of users simultaneously, since the dependence of the data retrieved needs to be addressed in a global sense.

  15. A Framework for Sharing and Integrating Remote Sensing and GIS Models Based on Web Service

    PubMed Central

    Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin

    2014-01-01

    Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a “black box” and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users. PMID:24901016

  16. A framework for sharing and integrating remote sensing and GIS models based on Web service.

    PubMed

    Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin

    2014-01-01

    Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a "black box" and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users.

  17. An Intelligent computer-aided tutoring system for diagnosing anomalies of spacecraft in operation

    NASA Technical Reports Server (NTRS)

    Rolincik, Mark; Lauriente, Michael; Koons, Harry C.; Gorney, David

    1993-01-01

    A new rule-based, expert system for diagnosing spacecraft anomalies is under development. The knowledge base consists of over two-hundred (200) rules and provides links to historical and environmental databases. Environmental causes considered are bulk charging, single event upsets (SEU), surface charging, and total radiation dose. The system's driver translates forward chaining rules into a backward chaining sequence, prompting the user for information pertinent to the causes considered. When the user selects the novice mode, the system automatically gives detailed explanations and descriptions of terms and reasoning as the session progresses, in a sense teaching the user. As such it is an effective tutoring tool. The use of heuristics frees the user from searching through large amounts of irrelevant information and allows the user to input partial information (varying degrees of confidence in an answer) or 'unknown' to any question. The system is available on-line and uses C Language Integrated Production System (CLIPS), an expert shell developed by the NASA Johnson Space Center AI Laboratory in Houston.

  18. Remote Sensing Systems Optimization for Geobase Enhancement

    DTIC Science & Technology

    2003-03-01

    through feedback from base users, as well as the researcher’s observations. 3.1 GeoBase and GIS Learning GeoBase and Geographic Information System ...Abstract The U.S. Air Force is in the process of implementing GeoBase, a geographic information system (GIS), throughout its worldwide installations...Geographic Information System (GIS). A GIS is a computer database that contains geo-spatial information . It is the principal tool used to input, view

  19. Signal Quality Improvement Algorithms for MEMS Gyroscope-Based Human Motion Analysis Systems: A Systematic Review.

    PubMed

    Du, Jiaying; Gerdtman, Christer; Lindén, Maria

    2018-04-06

    Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented.
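
    As a concrete instance of the "simple filter algorithms" category named in the review, the sketch below shows a complementary filter that fuses the integrated gyro rate with an accelerometer-derived tilt angle to limit drift. The constants and signal conventions are illustrative assumptions, not drawn from any reviewed paper.

```python
# A minimal sketch of one "simple filter" approach from the categories the
# review describes: a complementary filter that fuses a drifting MEMS gyro
# rate with an accelerometer-derived tilt angle. Constants and signal names
# are illustrative, not taken from any reviewed paper.
import math

ALPHA = 0.98          # weight on the integrated gyro signal
DT = 0.01             # sample period in seconds (100 Hz)

def complementary_filter(gyro_rates, accel_samples):
    """gyro_rates: angular rate about x [rad/s]; accel_samples: (ay, az) [m/s^2]."""
    angle = 0.0
    estimates = []
    for rate, (ay, az) in zip(gyro_rates, accel_samples):
        accel_angle = math.atan2(ay, az)              # gravity-referenced tilt
        angle = ALPHA * (angle + rate * DT) + (1 - ALPHA) * accel_angle
        estimates.append(angle)
    return estimates
```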

  20. Signal Quality Improvement Algorithms for MEMS Gyroscope-Based Human Motion Analysis Systems: A Systematic Review

    PubMed Central

    Gerdtman, Christer

    2018-01-01

    Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented. PMID:29642412

  1. University of Maryland walking robot: A design project for undergraduate students

    NASA Technical Reports Server (NTRS)

    Olsen, Bob; Bielec, Jim; Hartsig, Dave; Oliva, Mani; Grotheer, Phil; Hekmat, Morad; Russell, David; Tavakoli, Hossein; Young, Gary; Nave, Tom

    1990-01-01

    The design and construction required that the walking robot machine be capable of completing a number of tasks, including walking in a straight line, turning to change direction, and maneuvering over an obstacle such as a set of stairs. The machine consists of two sets of four telescoping legs that alternately support the entire structure. A gear-box and crank-arm assembly is connected to the leg sets to provide the power required for the translational motion of the machine. By retracting all eight legs, the robot comes to rest on a central Bigfoot support. Turning is accomplished by rotating the machine about this support. The machine can be controlled by using either a user-operated remote tether or the on-board computer for the execution of control commands. Absolute encoders are attached to all motors (leg, main drive, and Bigfoot) to provide the control computer with information regarding the status of the motors (up-down motion, forward or reverse rotation). Long and short range infrared sensors provide the computer with feedback information regarding the machine's relative position to a series of stripes and reflectors. These infrared sensors simulate how the robot might sense and gain information about the environment of Mars.

  2. Virtualized Traffic: reconstructing traffic flows from discrete spatiotemporal data.

    PubMed

    Sewall, Jason; van den Berg, Jur; Lin, Ming C; Manocha, Dinesh

    2011-01-01

    We present a novel concept, Virtualized Traffic, to reconstruct and visualize continuous traffic flows from discrete spatiotemporal data provided by traffic sensors or generated artificially to enhance a sense of immersion in a dynamic virtual world. Given the positions of each car at two recorded locations on a highway and the corresponding time instances, our approach can reconstruct the traffic flows (i.e., the dynamic motions of multiple cars over time) between the two locations along the highway for immersive visualization of virtual cities or other environments. Our algorithm is applicable to high-density traffic on highways with an arbitrary number of lanes and takes into account the geometric, kinematic, and dynamic constraints on the cars. Our method reconstructs the car motion that automatically minimizes the number of lane changes, respects safety distance to other cars, and computes the acceleration necessary to obtain a smooth traffic flow subject to the given constraints. Furthermore, our framework can process a continuous stream of input data in real time, enabling the users to view virtualized traffic events in a virtual world as they occur. We demonstrate our reconstruction technique with both synthetic and real-world input. © 2011 IEEE. Published by the IEEE Computer Society.
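
    The sketch below is not the authors' solver; it only works one kinematic constraint of the kind such a reconstruction must satisfy: given a car's position and time at two sensors and its entry speed, the constant acceleration that is consistent with both readings.

```python
# Hedged sketch, not the authors' solver: a single-car illustration of the
# kind of kinematic constraint used when reconstructing motion between two
# sensor readings. Given positions/times at two highway locations and the
# car's entry speed, solve for the constant acceleration that satisfies both.
def constant_acceleration(x0, t0, x1, t1, v0):
    """Return acceleration a such that x1 = x0 + v0*T + 0.5*a*T**2."""
    T = t1 - t0
    return 2.0 * ((x1 - x0) - v0 * T) / (T * T)

# Example: 500 m between sensors, 20 s apart, entering at 22 m/s
a = constant_acceleration(0.0, 0.0, 500.0, 20.0, 22.0)   # -> 0.3 m/s^2
v_exit = 22.0 + a * 20.0                                  # -> 28.0 m/s
```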

  3. The IBM HeadTracking Pointer: improvements in vision-based pointer control.

    PubMed

    Kjeldsen, Rick

    2008-07-01

    Vision-based head trackers have been around for some years and are even beginning to be commercialized, but problems remain with respect to usability. Users without the ability to use traditional pointing devices--the intended audience of such systems--have no alternative if the automatic bootstrapping process fails. There is room for improvement in face tracking, and the pointer movement dynamics do not support accurate and efficient pointing. This paper describes the IBM HeadTracking Pointer, a system which attempts to directly address some of these issues. Head gestures are used to provide the end user a greater level of autonomous control over the system. A novel face-tracking algorithm reduces drift under variable lighting conditions, allowing the use of absolute, rather than relative, pointer positioning. Most importantly, the pointer dynamics have been designed to take into account the constraints of head-based pointing, with a non-linear gain which allows stability in fine pointer movement, high speed on long transitions and adjustability to support users with different movement dynamics. User studies have identified some difficulties with training the system and some characteristics of the pointer motion that take time to get used to, but also good user feedback and very promising performance results.
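
    As a hedged illustration of the non-linear gain idea, the sketch below maps head angular speed to pointer speed, with low gain for fine motion and a clamp on long transitions. The curve shape and constants are assumptions, not the IBM system's actual transfer function.

```python
# A minimal sketch of the kind of non-linear gain the abstract describes
# (low gain for fine, slow head motion; higher gain for large, fast motion).
# The curve shape and constants are illustrative assumptions, not the IBM
# system's actual transfer function.
def pointer_velocity(head_velocity_deg_s, k=2.0, exponent=1.6, v_max=1200.0):
    """Map head angular speed (deg/s) to pointer speed (px/s)."""
    speed = k * abs(head_velocity_deg_s) ** exponent
    speed = min(speed, v_max)                      # clamp long transitions
    return speed if head_velocity_deg_s >= 0 else -speed
```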

  4. Environmental application of remote sensing methods to coastal zone land use and marine resource management. Appendix F: User's guide for advection, convection prototype. [southeastern Virginia]

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A user's manual is provided for the environmental computer model proposed for the Richmond-Cape Henry Environmental Laboratory (RICHEL) application project for coastal zone land use investigations and marine resources management. The model was developed around the hydrologic cycle and includes two data bases consisting of climate and land use variables. The main program is described, along with control parameters to be set and pertinent subroutines.

  5. Capturing, Harmonizing and Delivering Data and Quality Provenance

    NASA Technical Reports Server (NTRS)

    Leptoukh, Gregory; Lynnes, Christopher

    2011-01-01

    Satellite remote sensing data have proven to be vital for various scientific and applications needs. However, the usability of these data depends not only on the data values but also on the ability of data users to assess and understand the quality of these data for various applications and for comparison or inter-usage of data from different sensors and models. In this paper, we describe some aspects of capturing, harmonizing and delivering this information to users in the framework of distributed web-based data tools.

  6. Ubiquitous computing in sports: A review and analysis.

    PubMed

    Baca, Arnold; Dabnichki, Peter; Heller, Mario; Kornfeind, Philipp

    2009-10-01

    Ubiquitous (pervasive) computing is a term for the synergetic use of sensing, communication and computing. Pervasive use of computing has seen a rapid increase in the current decade. This development has propagated in applied sport science and everyday life. The work presents a survey of recent developments in sport and leisure with emphasis on technology and computational techniques. A detailed analysis of new technological developments is performed. Sensors for position and motion detection, as well as sensors for equipment and physiological monitoring, are discussed. Aspects of novel trends in communication technologies and data processing are outlined. Computational advancements have started a new trend - development of smart and intelligent systems for a wide range of applications - from model-based posture recognition to context awareness algorithms for nutrition monitoring. Examples particular to coaching and training are discussed. Selected tools for monitoring rules' compliance and automatic decision-making are outlined. Finally, applications in leisure and entertainment are presented, from systems supporting physical activity to systems providing motivation. It is concluded that the emphasis in the future will shift from technologies to intelligent systems that allow for enhanced social interaction, as efforts need to be made to improve user-friendliness and standardisation of measurement and transmission protocols.

  7. Evaluating motion parallax and stereopsis as depth cues for autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Braun, Marius; Leiner, Ulrich; Ruschin, Detlef

    2011-03-01

    The perception of space in the real world is based on multifaceted depth cues, most of them monocular, some binocular. Developing 3D displays raises the question of which of these depth cues are predominant and should be simulated by computational means in such a panel. Beyond the cues based on image content, such as shadows or patterns, stereopsis and depth from motion parallax are the most significant mechanisms supplying observers with depth information. We set up a carefully designed test situation, largely excluding other unwanted distance cues. We then conducted a user test to find out which of these two depth cues is more relevant and whether a combination of both would increase accuracy in a depth estimation task. The trials were conducted using our autostereoscopic "Free2C" displays, which can detect the user's eye position and steer the image lobes dynamically in that direction. At the same time, the eye position was used to update the virtual camera's location, thereby offering motion parallax to the observer. As far as we know, this was the first time that such a test had been conducted using an autostereoscopic display without any assistive technologies. Our results showed, in accordance with prior experiments, that both cues are effective; however, stereopsis is more relevant by an order of magnitude. Combining both cues improved the precision of distance estimation by another 30-40%.

  8. Non-destructive ion trap mass spectrometer and method

    DOEpatents

    Frankevich, Vladimir E.; Soni, Manish H.; Nappi, Mario; Santini, Robert E.; Amy, Jonathan W.; Cooks, Robert G.

    1997-01-01

    The invention relates to an ion trap mass spectrometer of the type having an ion trapping volume defined by spaced end caps and a ring electrode. The ion trap includes a small sensing electrode which senses characteristic motion of ions trapped in said trapping volume and provides an image current. Ions are excited into characteristic motion by application of an excitation pulse to the trapped ions. The invention also relates to a method of operating such an ion trap.

  9. 'Is it the crime of the century?': factors for psychiatrists and service users that influence the long-term prescription of hypnosedatives.

    PubMed

    MacDonald, Joanna; Garvie, Christopher; Gordon, Sarah; Huthwaite, Mark; Mathieson, Fiona; Wood, Amber-Jane; Romans, Sarah

    2015-07-01

    Given the longstanding controversy about hypnosedative use, we aimed to investigate the attitudes of prescribing psychiatrists and service users towards long-term use of hypnosedative medication, and their perceptions of barriers to evidence-based nonmedication alternatives. Qualitative data from focus groups in Aotearoa/NZ were analysed thematically. A novel research design involved a service user researcher contributing throughout the research design and process. Service users and psychiatrists met to discuss each other's views, initially separately, and subsequently together. Analysis of the data identified four key themes: the challenge, for both parties, of sleep disturbance among service users with mental health problems; the conceptual and ethical conflicts for service users and psychiatrists in managing this challenge; the significant barriers to service users accessing evidence-based nonmedication alternatives; and the initial sense of disempowerment, shared by both service users and psychiatrists, which was transformed during the research process. Our results raise questions about the relevance of the existing guidelines for this group of service users, highlight the resource and time pressures that discourage participants from embarking on withdrawal regimes and education programmes on alternatives, highlight the lack of knowledge about alternatives and reflect the complex interaction between sleep and mental health problems, which poses a significant dilemma for service users and psychiatrists.

  10. Biomedical sensing analyzer (BSA) for mobile-health (mHealth)-LTE.

    PubMed

    Adibi, Sasan

    2014-01-01

    The rapid expansion of mobile-based systems, the capabilities of smartphone devices, as well as the radio access and cellular network technologies are the wind beneath the wing of mobile health (mHealth). In this paper, the concept of biomedical sensing analyzer (BSA) is presented, which is a novel framework, devised for sensor-based mHealth applications. The BSA is capable of formulating the Quality of Service (QoS) measurements in an end-to-end sense, covering the entire communication path (wearable sensors, link-technology, smartphone, cell-towers, mobile-cloud, and the end-users). The characterization and formulation of BSA depend on a number of factors, including the deployment of application-specific biomedical sensors, generic link-technologies, collection, aggregation, and prioritization of mHealth data, cellular network based on the Long-Term Evolution (LTE) access technology, and extensive multidimensional delay analyses. The results are studied and analyzed in a LabView 8.5 programming environment.
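
    A minimal sketch of the end-to-end view described above: the delay seen by a sensor reading is modeled as the sum of per-segment delays along the path from wearable sensor to end user. The segment names and numbers below are hypothetical.

```python
# Rough sketch of the end-to-end view the BSA framework takes: the total
# delay experienced by a sensor reading is the sum of per-segment delays
# along the communication path. Segment names and numbers are hypothetical.
PATH_DELAYS_MS = {
    "wearable_sensor": 5.0,
    "link_technology": 15.0,     # e.g. short-range hop to the smartphone
    "smartphone_processing": 8.0,
    "lte_uplink": 45.0,
    "mobile_cloud": 20.0,
}

end_to_end_ms = sum(PATH_DELAYS_MS.values())
print(f"end-to-end delay: {end_to_end_ms:.1f} ms")
```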

  11. The Development and Preliminary Test of a Powered Alternately Walking Exoskeleton With the Wheeled Foot for Paraplegic Patients.

    PubMed

    Ma, Qingchuan; Ji, Linhong; Wang, Rencheng

    2018-02-01

    Upright walking has both physical and social meaning for paraplegic patients. The main purpose of this paper is to reduce the automatic functioning of the powered exoskeleton and enable the user to fully control the walking procedure in real time, aiming to further improve the engagement of the patient during rehabilitation training. For this prototype, a custom-made hub motor was placed at the bottom of the exoskeleton's foot, and a pair of crutches with embedded wireless controllers was used as the auxiliary device. The user could alternately press the crutch buttons to control the movement of each leg and, by repeating this procedure, complete a continuous walking motion. For safety, an automatic brake and a mechanical limitation on maximum step length were implemented. A gait analysis was performed to evaluate the exoskeleton's motion capability and the corresponding response of the user's major muscles. The kinematic results showed that this exoskeleton could assist the user to walk with a motion pattern close to normal walking, especially at the ankle joint. The electromyography results indicated that this exoskeleton could decrease the loading burden on the user's lower limbs while requiring more involvement of the upper-limb muscles to maintain balance while walking.
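
    The sketch below is only an illustration of the crutch-button control idea, with alternating legs and an enforced maximum step length; the class, names, and limits are assumptions, not the authors' controller.

```python
# Illustrative sketch only (not the authors' controller): a button press on
# a crutch commands a single step of the next leg in the alternating
# sequence, with an enforced maximum step length. Names and limits are
# hypothetical.
MAX_STEP_M = 0.45

class AlternatingStepController:
    def __init__(self):
        self.next_leg = "left"

    def on_button_press(self, requested_step_m):
        step = min(requested_step_m, MAX_STEP_M)   # step-length limitation
        leg = self.next_leg
        self.next_leg = "right" if leg == "left" else "left"
        return {"leg": leg, "step_m": step}

ctrl = AlternatingStepController()
print(ctrl.on_button_press(0.6))   # {'leg': 'left', 'step_m': 0.45}
print(ctrl.on_button_press(0.4))   # {'leg': 'right', 'step_m': 0.4}
```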

  12. Fast-responder: Rapid mobile-phone access to recent remote sensing imagery for first responders

    NASA Astrophysics Data System (ADS)

    Talbot, L. M.; Talbot, B. G.

    We introduce Fast-Responder, a novel prototype data-dissemination application and architecture concept to rapidly deliver remote sensing imagery to smartphones to enable situational awareness. The architecture implements a Fast-Earth image caching system on the phone and interacts with a Fast-Earth server. Prototype evaluation successfully demonstrated that National Guard users could select a location, download multiple remote sensing images, and flicker between images, all in less than a minute on a commercial 3G mobile link. The Fast-Responder architecture is a significant advance that is designed to meet the needs of mobile users, such as National Guard response units, to rapidly access information during a crisis, such as a natural or man-made disaster. This paper focuses on the architecture design and advanced user interface concepts for small screens for highly active mobile users. Novel Fast-Responder concepts can also enable rapid dissemination and evaluation of imagery on the desktop, opening new technology horizons for both desktop and mobile users.
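
    As a rough sketch of the phone-side caching idea (assumptions only; the Fast-Earth client's actual design is not detailed here), the snippet below keeps a small location-keyed, least-recently-used image cache so recently viewed scenes can be flickered without re-downloading.

```python
# Sketch under stated assumptions: a small location-keyed image cache of the
# kind a phone-side client might keep so recently viewed scenes can be
# flickered without re-downloading. The key scheme and size limit are
# hypothetical.
from collections import OrderedDict

class SceneCache:
    def __init__(self, max_items=32):
        self.max_items = max_items
        self._items = OrderedDict()          # (lat, lon, date) -> image bytes

    def get(self, key):
        if key in self._items:
            self._items.move_to_end(key)     # mark as most recently used
            return self._items[key]
        return None

    def put(self, key, image_bytes):
        self._items[key] = image_bytes
        self._items.move_to_end(key)
        if len(self._items) > self.max_items:
            self._items.popitem(last=False)  # evict least recently used
```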

  13. Helicopter Flight Simulation Motion Platform Requirements

    NASA Technical Reports Server (NTRS)

    Schroeder, Jeffery Allyn

    1999-01-01

    To determine motion fidelity requirements, a series of piloted simulations was performed. Several key results were found. First, lateral and vertical translational platform cues had significant effects on fidelity. Their presence improved performance and reduced pilot workload. Second, yaw and roll rotational platform cues were not as important as the translational platform cues. In particular, the yaw rotational motion platform cue did not appear at all useful in improving performance or reducing workload. Third, when the lateral translational platform cue was combined with visual yaw rotational cues, pilots believed the platform was rotating when it was not. Thus, simulator systems can be made more efficient by proper combination of platform and visual cues. Fourth, motion fidelity specifications were revised that now provide simulator users with a better prediction of motion fidelity based upon the frequency responses of their motion control laws. Fifth, vertical platform motion affected pilot estimates of steady-state altitude during altitude repositioning. Finally, the combined results led to a general method for configuring helicopter motion systems and for developing simulator tasks that more likely represent actual flight. The overall results can serve as a guide to future simulator designers and to today's operators.

  14. Marker-less multi-frame motion tracking and compensation in PET-brain imaging

    NASA Astrophysics Data System (ADS)

    Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.

    2015-03-01

    In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to have markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show not only that our method can be used with Multi-frame Acquisition (MAF) PET motion correction, but also that precise timing can be employed to determine only the frames that need correction. This can speed up reconstruction by eliminating unnecessary subdivision of frames.
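
    A hedged sketch of how precise pose timing could be used to flag only the acquisition frames that need correction: head poses are compared against translation and rotation thresholds per frame. The thresholds and pose format are assumptions, not the authors' pipeline.

```python
# Hedged sketch (not the authors' pipeline): given a 60 Hz stream of 6-DOF
# head poses, flag the multi-frame-acquisition (MAF) frames whose motion
# relative to the reference pose exceeds a threshold, so only those frames
# are sent to the correction step. Thresholds and pose format are assumed.
import math

TRANSLATION_MM = 2.0      # trigger threshold on head translation
ROTATION_DEG = 1.0        # trigger threshold on head rotation

def frames_needing_correction(poses, frame_bounds):
    """poses: list of (tx, ty, tz, rx, ry, rz) per sample, in mm and degrees,
    relative to the reference pose; frame_bounds: list of (start, end) sample
    indices for each MAF frame."""
    flagged = []
    for i, (start, end) in enumerate(frame_bounds):
        for tx, ty, tz, rx, ry, rz in poses[start:end]:
            if (math.sqrt(tx*tx + ty*ty + tz*tz) > TRANSLATION_MM
                    or max(abs(rx), abs(ry), abs(rz)) > ROTATION_DEG):
                flagged.append(i)
                break
    return flagged
```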

  15. Y0: An innovative tool for spatial data analysis

    NASA Astrophysics Data System (ADS)

    Wilson, Jeremy C.

    1993-08-01

    This paper describes an advanced analysis and visualization tool, called Y0 (pronounced "Why not?!"), that has been developed to directly support the scientific process for earth and space science research. Y0 aids the scientific research process by enabling the user to formulate algorithms and models within an integrated environment, and then interactively explore the solution space with the aid of appropriate visualizations. Y0 has been designed to provide strong support for both quantitative analysis and rich visualization. The user's algorithm or model is defined in terms of algebraic formulas in cells on worksheets, in a similar fashion to spreadsheet programs. Y0 is specifically designed to provide the data types and rich function set necessary for effective analysis and manipulation of remote sensing data. This includes various types of arrays, geometric objects, and objects for representing geographic coordinate system mappings. Visualization of results is tailored to the needs of remote sensing, with straightforward methods of composing, comparing, and animating imagery and graphical information, with reference to geographical coordinate systems. Y0 is based on advanced object-oriented technology. It is implemented in C++ for use in Unix environments, with a user interface based on the X window system. Y0 has been delivered under contract to Unidata, a group which provides data and software support to atmospheric researchers in universities affiliated with UCAR. This paper will explore the key concepts in Y0, describe its utility for remote sensing analysis and visualization, and will give a specific example of its application to the problem of measuring glacier flow rates from Landsat imagery.

  16. Stroboscopic Goggles for Reduction of Motion Sickness

    NASA Technical Reports Server (NTRS)

    Reschke, M. F.; Somers, Jeffrey T.

    2005-01-01

    A device built around a pair of electronic shutters has been demonstrated to be effective as a prototype of stroboscopic goggles or eyeglasses for preventing or reducing motion sickness. The momentary opening of the shutters helps to suppress a phenomenon that is known in the art as retinal slip and is described more fully below. While a number of different environmental factors can induce motion sickness, a common factor associated with every known motion environment is sensory confusion or sensory mismatch. Motion sickness is a product of misinformation arriving at a central point in the nervous system from the senses from which one determines one's spatial orientation. When information from the eyes, ears, joints, and pressure receptors is all in agreement as to one's orientation, there is no motion sickness. When one or more sensory input(s) to the brain is not expected, or conflicts with what is anticipated, the end product is motion sickness. Normally, an observer's eye moves, compensating for the anticipated effect of motion, in such a manner that the image of an object moving relative to an observer is held stationary on the retina. In almost every known environment that induces motion sickness, a change in the gain (in the signal-processing sense of gain) of the vestibular system causes the motion of the eye to fail to hold images stationary on the retina, and the resulting motion of the images is termed retinal slip. The present concept of stroboscopic goggles or eyeglasses (see figure) is based on the proposition that prevention of retinal slip, and hence, the prevention of sensory mismatch, can be expected to reduce the tendency toward motion sickness. A device according to this concept helps to prevent retinal slip by providing snapshots of the visual environment through electronic shutters that are brief enough that each snapshot freezes the image on each retina. The exposure time for each snapshot is less than 5 ms. In the event that a higher rate of strobing is necessary for adequate viewing of the changing scene during rapid head movements, the rate of strobing (but not the exposure time) can be controlled in response to the readings of rate-of-rotation sensors attached to the device.
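
    The last sentence suggests a simple control law; the sketch below is one hedged reading of it, with a fixed sub-5 ms exposure and a strobe rate that rises with the head angular rate reported by the sensors. The mapping and limits are assumptions, not the device's actual schedule.

```python
# Illustrative sketch of the control idea in the final sentence: exposure
# time stays fixed (under 5 ms) while the strobe rate rises with head angular
# rate reported by the rate-of-rotation sensors. The mapping and limits are
# assumptions.
EXPOSURE_MS = 4.0          # fixed shutter-open time per snapshot
BASE_RATE_HZ = 4.0         # strobe rate when the head is still
MAX_RATE_HZ = 20.0

def strobe_rate_hz(head_rate_deg_s, gain=0.1):
    """Map head angular speed (deg/s) to snapshots per second."""
    return min(BASE_RATE_HZ + gain * abs(head_rate_deg_s), MAX_RATE_HZ)
```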

  17. Monitoring Aircraft Motion at Airports by LIDAR

    NASA Astrophysics Data System (ADS)

    Toth, C.; Jozkow, G.; Koppanyi, Z.; Young, S.; Grejner-Brzezinska, D.

    2016-06-01

    Improving sensor performance, combined with better affordability, provides better object space observability, resulting in new applications. Remote sensing systems are primarily concerned with acquiring data on the static components of our environment, such as the topographic surface of the earth, transportation infrastructure, city models, etc. Observing the dynamic component of the object space is still rather rare in the geospatial application field; vehicle extraction and traffic flow monitoring are a few examples of using remote sensing to detect and model moving objects. Deploying a network of inexpensive LiDAR sensors along taxiways and runways can provide geospatial data that are both geometrically and temporally rich, so that the aircraft body can be extracted from the point cloud and motion parameters can then be estimated from consecutive point clouds. Acquiring accurate aircraft trajectory data is essential to improving aviation safety at airports. This paper reports on the initial experience obtained using a network of four Velodyne VLP-16 sensors to acquire data along a runway segment.
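
    As a minimal sketch (not the authors' estimator), the snippet below approximates per-epoch aircraft velocity from the displacement of the segmented point cloud's centroid between consecutive scans; a real trajectory estimate would fit a full rigid-body transform.

```python
# A minimal sketch, not the authors' estimator: approximate aircraft motion
# between two consecutive LiDAR point clouds (already segmented to the
# aircraft body) from the displacement of the point-cloud centroid. Real
# trajectory estimation would fit a rigid transform; this only illustrates
# the per-epoch velocity idea.
import numpy as np

def centroid_velocity(points_t0, points_t1, dt):
    """points_*: (N, 3) arrays of aircraft points [m]; dt: time between scans [s]."""
    c0 = points_t0.mean(axis=0)
    c1 = points_t1.mean(axis=0)
    displacement = c1 - c0
    return displacement / dt          # approximate velocity vector [m/s]
```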

  18. Use of Context in Video Processing

    NASA Astrophysics Data System (ADS)

    Wu, Chen; Aghajan, Hamid

    Interpreting an event or a scene based on visual data often requires additional contextual information. Contextual information may be obtained from different sources. In this chapter, we discuss two broad categories of contextual sources: environmental context and user-centric context. Environmental context refers to information derived from domain knowledge or from concurrently sensed effects in the area of operation. User-centric context refers to information obtained and accumulated from the user. Both types of context can include static or dynamic contextual elements. Examples from a smart home environment are presented to illustrate how different types of contextual data can be applied to aid the decision-making process.

  19. The use of ambient audio to increase safety and immersion in location-based games

    NASA Astrophysics Data System (ADS)

    Kurczak, John Jason

    The purpose of this thesis is to propose an alternative type of interface for mobile software being used while walking or running. Our work addresses the problem of visual user interfaces for mobile software being potentially unsafe for pedestrians, and not being very immersive when used for location-based games. In addition, location-based games and applications can be difficult to develop when directly interfacing with the sensors used to track the user's location. These problems need to be addressed because portable computing devices are becoming a popular tool for navigation, playing games, and accessing the internet while walking. This poses a safety problem for mobile users, who may be paying too much attention to their device to notice and react to hazards in their environment. The difficulty of developing location-based games and other location-aware applications may significantly hinder the prevalence of applications that explore new interaction techniques for ubiquitous computing. We created the TREC toolkit to address the issues with tracking sensors while developing location-based games and applications. We have developed functional location-based applications with TREC to demonstrate the amount of work that can be saved by using this toolkit. In order to have a safer and more immersive alternative to visual interfaces, we have developed ambient audio interfaces for use with mobile applications. Ambient audio uses continuous streams of sound over headphones to present information to mobile users without distracting them from walking safely. In order to test the effectiveness of ambient audio, we ran a study to compare ambient audio with handheld visual interfaces in a location-based game. We compared players' ability to safely navigate the environment, their sense of immersion in the game, and their performance at the in-game tasks. We found that ambient audio was able to significantly increase players' safety and sense of immersion compared to a visual interface, while players performed significantly better at the game tasks when using the visual interface. This makes ambient audio a legitimate alternative to visual interfaces for mobile users when safety and immersion are a priority.
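
    As a hedged sketch of how an ambient audio cue might encode a target's direction and distance over headphones (the TREC toolkit's API is not described here, so all names are assumptions), the snippet below derives a stereo pan from the bearing relative to the player's heading and fades the gain with distance.

```python
# Hedged sketch of one way an ambient audio cue could encode a target's
# direction and distance over headphones (not the TREC toolkit's API, which
# is not described here): pan follows the bearing to the target relative to
# the player's heading, and loudness falls off with distance.
import math

def ambient_cue(player_heading_deg, bearing_to_target_deg, distance_m,
                audible_range_m=200.0):
    """Return (pan, gain): pan in [-1, 1] (left..right), gain in [0, 1]."""
    relative = math.radians(bearing_to_target_deg - player_heading_deg)
    pan = math.sin(relative)                              # left/right placement
    gain = max(0.0, 1.0 - distance_m / audible_range_m)   # fade with distance
    return pan, gain
```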

  20. Mobile assistive technologies for the visually impaired.

    PubMed

    Hakobyan, Lilit; Lumsden, Jo; O'Sullivan, Dympna; Bartlett, Hannah

    2013-01-01

    There are around 285 million visually impaired people worldwide, and around 370,000 people are registered as blind or partially sighted in the UK. Ongoing advances in information technology (IT) are increasing the scope for IT-based mobile assistive technologies to facilitate the independence, safety, and improved quality of life of the visually impaired. Research is being directed at making mobile phones and other handheld devices accessible via our haptic (touch) and audio sensory channels. We review research and innovation within the field of mobile assistive technology for the visually impaired and, in so doing, highlight the need for successful collaboration between clinical expertise, computer science, and domain users to realize fully the potential benefits of such technologies. We initially reflect on research that has been conducted to make mobile phones more accessible to people with vision loss. We then discuss innovative assistive applications designed for the visually impaired that are either delivered via mainstream devices and can be used while in motion (e.g., mobile phones) or are embedded within an environment that may be in motion (e.g., public transport) or within which the user may be in motion (e.g., smart homes). Copyright © 2013 Elsevier Inc. All rights reserved.
