Sample records for webcam motion sensor

  1. The Effects of Applying Game-Based Learning to Webcam Motion Sensor Games for Autistic Students' Sensory Integration Training

    ERIC Educational Resources Information Center

    Li, Kun-Hsien; Lou, Shi-Jer; Tsai, Huei-Yin; Shih, Ru-Chu

    2012-01-01

    This study aims to explore the effects of applying game-based learning to webcam motion sensor games for autistic students' sensory integration training. The research participants were three autistic students aged from six to ten. A webcam, as the research tool, was connected to internet games to engage in motion sensor…

  2. Hybrid position and orientation tracking for a passive rehabilitation table-top robot.

    PubMed

    Wojewoda, K K; Culmer, P R; Gallagher, J F; Jackson, A E; Levesley, M C

    2017-07-01

    This paper presents a real time hybrid 2D position and orientation tracking system developed for an upper limb rehabilitation robot. Designed to work on a table-top, the robot is to enable home-based upper-limb rehabilitative exercise for stroke patients. Estimates of the robot's position are computed by fusing data from two tracking systems, each utilizing a different sensor type: laser optical sensors and a webcam. Two laser optical sensors are mounted on the underside of the robot and track the relative motion of the robot with respect to the surface on which it is placed. The webcam is positioned directly above the workspace, mounted on a fixed stand, and tracks the robot's position with respect to a fixed coordinate system. The optical sensors sample the position data at a higher frequency than the webcam, and a position and orientation fusion scheme is proposed to fuse the data from the two tracking systems. The proposed fusion scheme is validated through an experimental set-up whereby the rehabilitation robot is moved by a humanoid robotic arm replicating previously recorded movements of a stroke patient. The results prove that the presented hybrid position tracking system can track the position and orientation with greater accuracy than the webcam or optical sensors alone. The results also confirm that the developed system is capable of tracking recovery trends during rehabilitation therapy.
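
    A minimal sketch of the rate-mismatched fusion this record describes: high-rate relative displacements (as from the laser optical sensors) are dead-reckoned and periodically corrected by a low-rate absolute pose (as from the overhead webcam). The blending gain `alpha`, the sampling ratio `cam_every`, and all variable names are illustrative assumptions, not the authors' fusion scheme.

```python
import numpy as np

def fuse_pose(optical_deltas, cam_poses, cam_every, alpha=0.7):
    """Dead-reckon high-rate (dx, dy, dtheta) body-frame increments and blend
    in a low-rate absolute (x, y, theta) fix every `cam_every` steps.
    `alpha` weights the absolute (webcam) measurement; illustrative only."""
    pose = np.zeros(3)                               # x, y, theta, fixed frame
    fused = []
    cam_iter = iter(cam_poses)
    for k, (dx, dy, dth) in enumerate(optical_deltas):
        c, s = np.cos(pose[2]), np.sin(pose[2])
        pose += np.array([c * dx - s * dy,           # rotate body-frame delta
                          s * dx + c * dy,           # into the fixed frame
                          dth])
        if (k + 1) % cam_every == 0:                 # webcam sample available
            absolute = np.asarray(next(cam_iter), dtype=float)
            pose = (1 - alpha) * pose + alpha * absolute
        fused.append(pose.copy())
    return np.array(fused)

# Toy usage: 100 optical increments, a webcam fix every 10 steps.
rng = np.random.default_rng(0)
deltas = rng.normal([1.0, 0.0, 0.01], 0.05, size=(100, 3))
cam = [(10.0 * (i + 1), 0.0, 0.1 * (i + 1)) for i in range(10)]
print(fuse_pose(deltas, cam, cam_every=10)[-1])
```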

  3. Integration of smartphones and webcam for the measure of spatio-temporal gait parameters.

    PubMed

    Barone, V; Maranesi, E; Fioretti, S

    2014-01-01

    A very low cost prototype has been made for the spatial and temporal analysis of human movement using an integrated system of latest-generation smartphones and a high-definition webcam, controlled by a laptop. The system can be used to analyze mainly planar motions in non-structured environments. In this paper, the accelerometer signal captured by the 3D sensor embedded in one smartphone, and the positions of colored markers derived from the webcam frames, are used for the computation of spatio-temporal parameters of gait. Accuracy of the results is compared with that obtainable with gold-standard instrumentation. The system is characterized by a very low cost and a very high level of automation. It is intended to be used by non-expert users in ambulatory settings.

  4. Beat-to-beat heart rate estimation fusing multimodal video and sensor data

    PubMed Central

    Antink, Christoph Hoog; Gao, Hanno; Brüser, Christoph; Leonhardt, Steffen

    2015-01-01

    Coverage and accuracy of unobtrusively measured biosignals are generally relatively low compared to clinical modalities. This can be improved by exploiting redundancies in multiple channels with methods of sensor fusion. In this paper, we demonstrate that two modalities, skin color variation and head motion, can be extracted from the video stream recorded with a webcam. Using a Bayesian approach, these signals are fused with a ballistocardiographic signal obtained from the seat of a chair with a mean absolute beat-to-beat estimation error below 25 milliseconds and an average coverage above 90% compared to an ECG reference. PMID:26309754

  5. Beat-to-beat heart rate estimation fusing multimodal video and sensor data.

    PubMed

    Antink, Christoph Hoog; Gao, Hanno; Brüser, Christoph; Leonhardt, Steffen

    2015-08-01

    Coverage and accuracy of unobtrusively measured biosignals are generally relatively low compared to clinical modalities. This can be improved by exploiting redundancies in multiple channels with methods of sensor fusion. In this paper, we demonstrate that two modalities, skin color variation and head motion, can be extracted from the video stream recorded with a webcam. Using a Bayesian approach, these signals are fused with a ballistocardiographic signal obtained from the seat of a chair with a mean absolute beat-to-beat estimation error below 25 milliseconds and an average coverage above 90% compared to an ECG reference.
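
    The two records above describe Bayesian fusion of beat-to-beat estimates from several channels. As a hedged stand-in (not the paper's method), the sketch below fuses per-channel inter-beat-interval estimates by inverse-variance weighting; the channel variances and example values are assumptions.

```python
import numpy as np

def fuse_intervals(estimates, variances):
    """Inverse-variance weighted fusion of per-channel beat-to-beat interval
    estimates (seconds). A simple stand-in for the Bayesian fusion described
    in the paper; channel variances are assumed known (e.g. from recent
    estimator residuals)."""
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)     # precision weights
    fused = np.sum(w * est) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

# Example channels: color-based PPG, head-motion signal, and chair BCG.
ibi, var = fuse_intervals([0.82, 0.86, 0.80], [0.002, 0.010, 0.004])
print(f"fused inter-beat interval: {ibi:.3f} s (variance {var:.4f})")
```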

  6. Peripheral vasomotor activity assessment using a continuous wavelet analysis on webcam photoplethysmographic signals.

    PubMed

    Bousefsaf, F; Maaoui, C; Pruski, A

    2016-11-25

    Vasoconstriction and vasodilation phenomena reflect the relative changes in the vascular bed. They induce particular modifications in the pulse wave magnitude. Webcams are remote sensors that can be employed to measure the pulse wave in order to compute the pulse frequency. The aim of this work is to record and analyze the pulse wave signal with a low-cost webcam, to extract the amplitude information, and to assess the vasomotor activity of the participant. Photoplethysmographic signals obtained from a webcam are analyzed through a continuous wavelet transform. The performance of the proposed filtering technique was evaluated using approved contact probes on a set of 12 healthy subjects after they performed a short but intense physical exercise. During the rest period, cutaneous vasodilation is observable. High degrees of correlation between the webcam and a reference sensor were obtained. Webcams are low-cost and non-contact devices that can be used to reliably estimate both heart rate and peripheral vasomotor activity, notably during physical exertion.
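
    A minimal sketch of analyzing a webcam-derived photoplethysmographic trace with a continuous wavelet transform, in the spirit of the record above. The synthetic signal, the Morlet wavelet, the scale range, and the 0.7-2.5 Hz pulse band are assumptions for illustration; the PyWavelets package is assumed to be available.

```python
import numpy as np
import pywt  # PyWavelets

fs = 30.0                                    # typical webcam frame rate (Hz)
t = np.arange(0, 60, 1 / fs)
# Synthetic PPG-like trace: a ~1.2 Hz pulse whose amplitude decays (standing
# in for vasoconstriction recovering after exercise), plus drift and noise.
amp = 1.0 + 0.8 * np.exp(-t / 20.0)
ppg = amp * np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 0.05 * t)
ppg += 0.1 * np.random.default_rng(1).normal(size=t.size)

# Continuous wavelet transform; these scales cover roughly 0.4-3 Hz.
scales = np.arange(8, 64)
coef, freqs = pywt.cwt(ppg, scales, 'morl', sampling_period=1 / fs)

# Pulse-band amplitude over time: mean |coefficient| within 0.7-2.5 Hz.
band = (freqs > 0.7) & (freqs < 2.5)
pulse_amplitude = np.abs(coef[band]).mean(axis=0)
print(pulse_amplitude[:5], "...", pulse_amplitude[-5:])
```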

  7. The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control

    ERIC Educational Resources Information Center

    Page, A.; Moreno, R.; Candelas, P.; Belmar, F.

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…

  8. A low cost real-time motion tracking approach using webcam technology.

    PubMed

    Krishnan, Chandramouli; Washabaugh, Edward P; Seetharaman, Yogesh

    2015-02-05

    Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject's limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. A low cost real-time motion tracking approach using webcam technology

    PubMed Central

    Krishnan, Chandramouli; Washabaugh, Edward P.; Seetharaman, Yogesh

    2014-01-01

    Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject’s limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. PMID:25555306
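
    The authors implemented their tracker in LabVIEW Vision Assistant; the sketch below is an analogous (not identical) color-marker tracker in Python/OpenCV that reports the marker centroid in each webcam frame. The HSV thresholds and camera index are assumptions.

```python
import cv2
import numpy as np

# HSV range for a green marker; adjust for the actual marker color.
LOWER = np.array([45, 80, 80])
UPPER = np.array([75, 255, 255])

def track_marker(camera_index=0):
    """Print the pixel centroid of a colored marker in each webcam frame.
    Analogous to, not identical to, the LabVIEW pipeline in the paper."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, LOWER, UPPER)        # binary marker mask
            m = cv2.moments(mask)
            if m["m00"] > 0:                             # marker found
                cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
                print(f"marker at ({cx:.1f}, {cy:.1f}) px")
            cv2.imshow("marker mask", mask)
            if cv2.waitKey(1) & 0xFF == ord("q"):        # press q to quit
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    track_marker()
```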

  10. Assessing image quality of low-cost laparoscopic box trainers: options for residents training at home.

    PubMed

    Kiely, Daniel J; Stephanson, Kirk; Ross, Sue

    2011-10-01

    Low-cost laparoscopic box trainers built using home computers and webcams may provide residents with a useful tool for practice at home. This study set out to evaluate the image quality of low-cost laparoscopic box trainers compared with a commercially available model. Five low-cost laparoscopic box trainers including the components listed were compared in random order to one commercially available box trainer: A (high-definition USB 2.0 webcam, PC laptop), B (Firewire webcam, Mac laptop), C (high-definition USB 2.0 webcam, Mac laptop), D (standard USB webcam, PC desktop), E (Firewire webcam, PC desktop), and F (the TRLCD03 3-DMEd Standard Minimally Invasive Training System). Participants observed still image quality and performed a peg transfer task using each box trainer. Participants rated still image quality, image quality with motion, and whether the box trainer had sufficient image quality to be useful for training. Sixteen residents in obstetrics and gynecology took part in the study. The box trainers showing no statistically significant difference from the commercially available model were A, B, C, D, and E for still image quality; A for image quality with motion; and A and B for usefulness of the simulator based on image quality. The cost of the box trainers A-E is approximately $100 to $160 each, not including a computer or laparoscopic instruments. Laparoscopic box trainers built from a high-definition USB 2.0 webcam with a PC (box trainer A) or from a Firewire webcam with a Mac (box trainer B) provide image quality comparable with a commercial standard.

  11. Cost effective patient location monitoring system using webcams.

    PubMed

    Logeswaran, Rajasvaran

    2009-10-01

    This paper details the development of a simple webcam joystick: a wireless (or rather cableless) and contactless pointing device built from a webcam and a simple, flexible, non-electronic joystick. Such a system requires no power source on the joystick, allows for light, robust and very mobile joysticks, and can be extended into a large array of applications. This paper proposes the use of small webcam joysticks as sensors for recording movement, in the way wireless sensors are used. Specifically, it could be used as a simple navigation and monitoring system for patient movement in medical wards, where knowledge of patient location and movement could enable instant assistance and pre-emptive action, and also help prevent untoward patient mix-ups. Experiments and discussions in this paper highlight how a successful implementation is possible, and emphasize the flexibility of such an implementation in a low cost medical environment.

  12. Store-and-feedforward adaptive gaming system for hand-finger motion tracking in telerehabilitation.

    PubMed

    Lockery, Daniel; Peters, James F; Ramanna, Sheela; Shay, Barbara L; Szturm, Tony

    2011-05-01

    This paper presents a telerehabilitation system that encompasses a webcam and store-and-feedforward adaptive gaming system for tracking finger-hand movement of patients during local and remote therapy sessions. Gaming-event signals and webcam images are recorded as part of a gaming session and then forwarded to an online healthcare content management system (CMS) that separates incoming information into individual patient records. The CMS makes it possible for clinicians to log in remotely and review gathered data using online reports that are provided to help with signal and image analysis using various numerical measures and plotting functions. Signals from a 6 degree-of-freedom magnetic motion tracking (MMT) system provide a basis for video-game sprite control. The MMT provides a path for motion signals between common objects manipulated by a patient and a computer game. During a therapy session, a webcam that captures images of the hand, together with a number of performance metrics, provides insight into the quality, efficiency, and skill of a patient.

  13. Pre-Capture Privacy for Small Vision Sensors.

    PubMed

    Pittaluga, Francesco; Koppal, Sanjeev Jagannatha

    2017-11-01

    The next wave of micro and nano devices will create a world with trillions of small networked cameras. This will lead to increased concerns about privacy and security. Most privacy preserving algorithms for computer vision are applied after image/video data has been captured. We propose to use privacy preserving optics that filter or block sensitive information directly from the incident light-field before sensor measurements are made, adding a new layer of privacy. In addition to balancing the privacy and utility of the captured data, we address trade-offs unique to miniature vision sensors, such as achieving high-quality field-of-view and resolution within the constraints of mass and volume. Our privacy preserving optics enable applications such as depth sensing, full-body motion tracking, people counting, blob detection and privacy preserving face recognition. While we demonstrate applications on macro-scale devices (smartphones, webcams, etc.) our theory has impact for smaller devices.

  14. Webcams for Bird Detection and Monitoring: A Demonstration Study

    PubMed Central

    Verstraeten, Willem W.; Vermeulen, Bart; Stuckens, Jan; Lhermitte, Stefaan; Van der Zande, Dimitry; Van Ranst, Marc; Coppin, Pol

    2010-01-01

    Better insights into bird migration can be a tool for assessing the spread of avian-borne infections or ecological/climatologic issues reflected in deviating migration patterns. This paper evaluates whether low-budget permanent cameras such as webcams can offer a valuable contribution to the reporting of migratory birds. An experimental design was set up to study the detection capability using objects of different size, color and velocity. The results of the experiment revealed the minimum size, maximum velocity and contrast of the objects required for detection by a standard webcam. Furthermore, a modular processing scheme was proposed to track and follow migratory birds in webcam recordings. Techniques such as motion detection by background subtraction, stereo vision and lens distortion correction were combined to form the foundation of the bird tracking algorithm. Additional research to integrate webcam networks, however, is needed, and future research should reinforce the potential of the processing scheme by exploring and testing alternatives for each individual module or processing step. PMID:22319308

  15. Webcams for bird detection and monitoring: a demonstration study.

    PubMed

    Verstraeten, Willem W; Vermeulen, Bart; Stuckens, Jan; Lhermitte, Stefaan; Van der Zande, Dimitry; Van Ranst, Marc; Coppin, Pol

    2010-01-01

    Better insights into bird migration can be a tool for assessing the spread of avian-borne infections or ecological/climatologic issues reflected in deviating migration patterns. This paper evaluates whether low-budget permanent cameras such as webcams can offer a valuable contribution to the reporting of migratory birds. An experimental design was set up to study the detection capability using objects of different size, color and velocity. The results of the experiment revealed the minimum size, maximum velocity and contrast of the objects required for detection by a standard webcam. Furthermore, a modular processing scheme was proposed to track and follow migratory birds in webcam recordings. Techniques such as motion detection by background subtraction, stereo vision and lens distortion correction were combined to form the foundation of the bird tracking algorithm. Additional research to integrate webcam networks, however, is needed, and future research should reinforce the potential of the processing scheme by exploring and testing alternatives for each individual module or processing step.
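
    A minimal sketch of the motion-detection-by-background-subtraction module mentioned in these two records, using OpenCV's MOG2 subtractor and an area filter on the resulting contours. The history, variance threshold, minimum area, and file name are illustrative assumptions, not the authors' settings.

```python
import cv2

def detect_moving_birds(video_path, min_area=25):
    """Flag moving objects (candidate birds) per frame via background
    subtraction, as a sketch of the first module of the tracking scheme.
    `min_area` (pixels) is an illustrative size threshold."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200,
                                                    varThreshold=32,
                                                    detectShadows=False)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                 # foreground mask
        mask = cv2.medianBlur(mask, 5)                 # suppress speckle noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        hits = [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) >= min_area]
        if hits:
            print(f"frame {frame_idx}: {len(hits)} moving object(s)", hits)
        frame_idx += 1
    cap.release()

# detect_moving_birds("webcam_recording.mp4")   # hypothetical file name
```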

  16. Evaluation of tracking accuracy of the CyberKnife system using a webcam and printed calibrated grid.

    PubMed

    Sumida, Iori; Shiomi, Hiroya; Higashinaka, Naokazu; Murashima, Yoshikazu; Miyamoto, Youichi; Yamazaki, Hideya; Mabuchi, Nobuhisa; Tsuda, Eimei; Ogawa, Kazuhiko

    2016-03-08

    Tracking accuracy for the CyberKnife's Synchrony system is commonly evaluated using a film-based verification method. We have evaluated a verification system that uses a webcam and a printed calibrated grid to verify tracking accuracy over three different motion patterns. A box with an attached printed calibrated grid and four fiducial markers was attached to the motion phantom. A target marker was positioned at the grid's center. The box was set up using the other three markers. Target tracking accuracy was evaluated under three conditions: 1) stationary; 2) sinusoidal motion with different amplitudes of 5, 10, 15, and 20 mm for the same cycle of 4 s and different cycles of 2, 4, 6, and 8 s with the same amplitude of 15 mm; and 3) irregular breathing patterns in six human volunteers breathing normally. Infrared markers were placed on the volunteers' abdomens, and their trajectories were used to simulate the target motion. All tests were performed with one-dimensional motion in craniocaudal direction. The webcam captured the grid's motion and a laser beam was used to simulate the CyberKnife's beam. Tracking error was defined as the difference between the grid's center and the laser beam. With a stationary target, mean tracking error was measured at 0.4 mm. For sinusoidal motion, tracking error was less than 2 mm for any amplitude and breathing cycle. For the volunteers' breathing patterns, the mean tracking error range was 0.78-1.67 mm. Therefore, accurate lesion targeting requires individual quality assurance for each patient.

  17. Motion tracking to enable pre-surgical margin mapping in basal cell carcinoma using optical imaging modalities: initial feasibility study using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Duffy, M.; Richardson, T. J.; Craythorne, E.; Mallipeddi, R.; Coleman, A. J.

    2014-02-01

    A system has been developed to assess the feasibility of using motion tracking to enable pre-surgical margin mapping of basal cell carcinoma (BCC) in the clinic using optical coherence tomography (OCT). This system consists of a commercial OCT imaging system (the VivoSight 1500, MDL Ltd., Orpington, UK), which has been adapted to incorporate a webcam and a single-sensor electromagnetic positional tracking module (the Flock of Birds, Ascension Technology Corp, Vermont, USA). A supporting software interface has also been developed which allows positional data to be captured and projected onto a 2D dermoscopic image in real-time. Initial results using a stationary test phantom are encouraging, with maximum errors in the projected map in the order of 1-2mm. Initial clinical results were poor due to motion artefact, despite attempts to stabilise the patient. However, the authors present several suggested modifications that are expected to reduce the effects of motion artefact and improve the overall accuracy and clinical usability of the system.

  18. Evaluation of tracking accuracy of the CyberKnife system using a webcam and printed calibrated grid

    PubMed Central

    Shiomi, Hiroya; Higashinaka, Naokazu; Murashima, Yoshikazu; Miyamoto, Youichi; Yamazaki, Hideya; Mabuchi, Nobuhisa; Tsuda, Eimei; Ogawa, Kazuhiko

    2016-01-01

    Tracking accuracy for the CyberKnife's Synchrony system is commonly evaluated using a film‐based verification method. We have evaluated a verification system that uses a webcam and a printed calibrated grid to verify tracking accuracy over three different motion patterns. A box with an attached printed calibrated grid and four fiducial markers was attached to the motion phantom. A target marker was positioned at the grid's center. The box was set up using the other three markers. Target tracking accuracy was evaluated under three conditions: 1) stationary; 2) sinusoidal motion with different amplitudes of 5, 10, 15, and 20 mm for the same cycle of 4 s and different cycles of 2, 4, 6, and 8 s with the same amplitude of 15 mm; and 3) irregular breathing patterns in six human volunteers breathing normally. Infrared markers were placed on the volunteers’ abdomens, and their trajectories were used to simulate the target motion. All tests were performed with one‐dimensional motion in craniocaudal direction. The webcam captured the grid's motion and a laser beam was used to simulate the CyberKnife's beam. Tracking error was defined as the difference between the grid's center and the laser beam. With a stationary target, mean tracking error was measured at 0.4 mm. For sinusoidal motion, tracking error was less than 2 mm for any amplitude and breathing cycle. For the volunteers’ breathing patterns, the mean tracking error range was 0.78‐1.67 mm. Therefore, accurate lesion targeting requires individual quality assurance for each patient. PACS number(s): 87.55.D‐, 87.55.km, 87.55.Qr, 87.56.Fc PMID:27074474
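
    A hedged sketch of the error metric described in the CyberKnife records above: the in-plane distance between the grid centre and the laser spot, converted from pixels to millimetres via the known spacing of the printed calibrated grid. The pixel coordinates and scale factor are invented for illustration.

```python
import numpy as np

def tracking_error_mm(grid_center_px, laser_px, mm_per_px):
    """Tracking error as the in-plane distance between the grid centre and
    the laser spot, converted to mm with a scale factor obtained from the
    known spacing of the printed calibrated grid. Inputs are illustrative."""
    d_px = np.asarray(laser_px, float) - np.asarray(grid_center_px, float)
    return float(np.hypot(*d_px) * mm_per_px)

# Example: 5 mm grid squares spanning 50 px in the webcam image -> 0.1 mm/px.
mm_per_px = 5.0 / 50.0
err = tracking_error_mm(grid_center_px=(412.0, 305.0),
                        laser_px=(418.5, 309.0),
                        mm_per_px=mm_per_px)
print(f"tracking error: {err:.2f} mm")
```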

  19. Compact diffraction grating laser wavemeter with sub-picometer accuracy and picowatt sensitivity using a webcam imaging sensor.

    PubMed

    White, James D; Scholten, Robert E

    2012-11-01

    We describe a compact laser wavelength measuring instrument based on a small diffraction grating and a consumer-grade webcam. With just 1 pW of optical power, the instrument achieves absolute accuracy of 0.7 pm, sufficient to resolve individual hyperfine transitions of the rubidium absorption spectrum. Unlike interferometric wavemeters, the instrument clearly reveals multimode laser operation, making it particularly suitable for use with external cavity diode lasers and atom cooling and trapping experiments.

  20. Towards continuous monitoring of pulse rate in neonatal intensive care unit with a webcam.

    PubMed

    Mestha, Lalit K; Kyal, Survi; Xu, Beilei; Lewis, Leslie Edward; Kumar, Vijay

    2014-01-01

    We describe a novel method to monitor pulse rate (PR) on a continuous basis of patients in a neonatal intensive care unit (NICU) using videos taken from a high definition (HD) webcam. We describe algorithms that determine PR from videoplethysmographic (VPG) signals extracted from multiple regions of interest (ROI) simultaneously available within the field of view of the camera where cardiac signal is registered. We detect motion from video images and compensate for motion artifacts from each ROI. Preliminary clinical results are presented on 8 neonates each with 30 minutes of uninterrupted video. Comparisons to hospital equipment indicate that the proposed technology can meet medical industry standards and give improved patient comfort and ease of use for practitioners when instrumented with proper hardware.

  21. Illumination adaptation with rapid-response color sensors

    NASA Astrophysics Data System (ADS)

    Zhang, Xinchi; Wang, Quan; Boyer, Kim L.

    2014-09-01

    Smart lighting solutions based on imaging sensors such as webcams or time-of-flight sensors suffer from rising privacy concerns. In this work, we use low-cost non-imaging color sensors to measure the local luminous flux of different colors in an indoor space. These sensors have a much higher data acquisition rate and are much cheaper than many off-the-shelf commercial products. We have developed several applications with these sensors, including illumination feedback control and occupancy-driven lighting.

  22. Bilayer segmentation of webcam videos using tree-based classifiers.

    PubMed

    Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan

    2011-01-01

    This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.

  23. Advancements in noncontact, multiparameter physiological measurements using a webcam.

    PubMed

    Poh, Ming-Zher; McDuff, Daniel J; Picard, Rosalind W

    2011-01-01

    We present a simple, low-cost method for measuring multiple physiological parameters using a basic webcam. By applying independent component analysis on the color channels in video recordings, we extracted the blood volume pulse from the facial regions. Heart rate (HR), respiratory rate, and HR variability (HRV, an index for cardiac autonomic activity) were subsequently quantified and compared to corresponding measurements using Food and Drug Administration-approved sensors. High degrees of agreement were achieved between the measurements across all physiological parameters. This technology has significant potential for advancing personal health care and telemedicine.
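
    A minimal sketch of the ICA-based pulse extraction this record describes: independent component analysis on the mean R, G, B traces of a facial region, followed by a spectral peak search in the heart-rate band. The detrending, band limits, and synthetic test signal are assumptions; scikit-learn's FastICA stands in for whatever ICA implementation the authors used.

```python
import numpy as np
from sklearn.decomposition import FastICA

def heart_rate_from_rgb(rgb_traces, fs):
    """Estimate heart rate (bpm) from mean R, G, B traces of a facial ROI.
    rgb_traces: array of shape (n_frames, 3). A sketch of the ICA-based
    approach described above; detrending and band limits are assumed."""
    x = rgb_traces - rgb_traces.mean(axis=0)          # remove DC per channel
    sources = FastICA(n_components=3, random_state=0).fit_transform(x)
    freqs = np.fft.rfftfreq(len(sources), d=1.0 / fs)
    band = (freqs > 0.75) & (freqs < 4.0)             # 45-240 bpm
    best_bpm, best_power = 0.0, -np.inf
    for s in sources.T:                               # pick the source with
        spec = np.abs(np.fft.rfft(s)) ** 2            # the strongest HR peak
        peak = np.argmax(np.where(band, spec, -np.inf))
        if spec[peak] > best_power:
            best_power, best_bpm = spec[peak], freqs[peak] * 60.0
    return best_bpm

# Synthetic check: a 1.3 Hz (78 bpm) pulse mixed into all three channels.
fs, t = 30.0, np.arange(0, 30, 1 / 30.0)
pulse = 0.5 * np.sin(2 * np.pi * 1.3 * t)
rng = np.random.default_rng(2)
rgb = np.stack([pulse * g + rng.normal(0, 0.2, t.size)
                for g in (0.3, 1.0, 0.6)], axis=1)
print(f"estimated heart rate: {heart_rate_from_rgb(rgb, fs):.1f} bpm")
```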

  24. Sensor node for remote monitoring of waterborne disease-causing bacteria.

    PubMed

    Kim, Kyukwang; Myung, Hyun

    2015-05-05

    A sensor node for sampling water and checking for the presence of harmful bacteria such as E. coli in water sources was developed in this research. A chromogenic enzyme substrate assay method was used to easily detect coliform bacteria by monitoring the color change of the sampled water mixed with a reagent. Live webcam image streaming to the web browser of the end user with a Wi-Fi connected sensor node shows the water color changes in real time. The liquid can be manipulated on the web-based user interface, and also can be observed by webcam feeds. Image streaming and web console servers run on an embedded processor with an expansion board. The UART channel of the expansion board is connected to an external Arduino board and a motor driver to control self-priming water pumps to sample the water, mix the reagent, and remove the water sample after the test is completed. The sensor node can repeat water testing until the test reagent is depleted. The authors anticipate that the use of the sensor node developed in this research can decrease the cost and required labor for testing samples in a factory environment and checking the water quality of local water sources in developing countries.

  25. Low-Cost Alternative for Signal Generators in the Physics Laboratory

    NASA Astrophysics Data System (ADS)

    Pathare, Shirish Rajan; Raghavendra, M. K.; Huli, Saurabhee

    2017-05-01

    Recently, devices such as the optical mouse of a computer, webcams, the Wii remote, and digital cameras have been used to record and analyze different physical phenomena quantitatively. Devices like tablets and smartphones are also becoming popular. Different scientific applications available on Google Play (Android devices) or the App Store (iOS devices) make them versatile. One can find many websites that provide information regarding various scientific applications compatible with these systems. A variety of smartphones/tablets are available with different types of sensors embedded. Some of them have sensors that are capable of measuring the intensity of light, sound, and magnetic field. The camera of these devices has been used to study projectile motion, and the same device, along with a sensor, has been used to study the physical pendulum. Accelerometers have been used to study free and damped harmonic oscillations and to measure acceleration due to gravity. Using accelerometers and gyroscopes, angular velocity and centripetal acceleration have been measured. The coefficient of restitution for a ball bouncing on the floor has been measured using the application Oscilloscope on the iPhone. In this article, we present the use of an Android device as a low-cost alternative to a signal generator. We use the Signal Generator application installed on the Android device along with an amplifier circuit.

  26. Floor Covering and Surface Identification for Assistive Mobile Robotic Real-Time Room Localization Application

    PubMed Central

    Gillham, Michael; Howells, Gareth; Spurgeon, Sarah; McElroy, Ben

    2013-01-01

    Assistive robotic applications require systems capable of interaction in the human world, a workspace which is highly dynamic and not always predictable. Mobile assistive devices face the additional and complex problem of when and if intervention should occur; therefore before any trajectory assistance is given, the robotic device must know where it is in real-time, without unnecessary disruption or delay to the user requirements. In this paper, we demonstrate a novel robust method for determining room identification from floor features in a real-time computational frame for autonomous and assistive robotics in the human environment. We utilize two inexpensive sensors: an optical mouse sensor for straightforward and rapid, texture or pattern sampling, and a four color photodiode light sensor for fast color determination. We show how data relating floor texture and color obtained from typical dynamic human environments, using these two sensors, compares favorably with data obtained from a standard webcam. We show that suitable data can be extracted from these two sensors at a rate 16 times faster than a standard webcam, and that these data are in a form which can be rapidly processed using readily available classification techniques, suitable for real-time system application. We achieved a 95% correct classification accuracy identifying 133 rooms' flooring from 35 classes, suitable for fast coarse global room localization application, boundary crossing detection, and additionally some degree of surface type identification. PMID:24351647

  27. Floor covering and surface identification for assistive mobile robotic real-time room localization application.

    PubMed

    Gillham, Michael; Howells, Gareth; Spurgeon, Sarah; McElroy, Ben

    2013-12-17

    Assistive robotic applications require systems capable of interaction in the human world, a workspace which is highly dynamic and not always predictable. Mobile assistive devices face the additional and complex problem of when and if intervention should occur; therefore before any trajectory assistance is given, the robotic device must know where it is in real-time, without unnecessary disruption or delay to the user requirements. In this paper, we demonstrate a novel robust method for determining room identification from floor features in a real-time computational frame for autonomous and assistive robotics in the human environment. We utilize two inexpensive sensors: an optical mouse sensor for straightforward and rapid, texture or pattern sampling, and a four color photodiode light sensor for fast color determination. We show how data relating floor texture and color obtained from typical dynamic human environments, using these two sensors, compares favorably with data obtained from a standard webcam. We show that suitable data can be extracted from these two sensors at a rate 16 times faster than a standard webcam, and that these data are in a form which can be rapidly processed using readily available classification techniques, suitable for real-time system application. We achieved a 95% correct classification accuracy identifying 133 rooms' flooring from 35 classes, suitable for fast coarse global room localization application, boundary crossing detection, and additionally some degree of surface type identification.
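
    The two records above report classification of floor type from optical-mouse texture and photodiode colour features using readily available techniques. The sketch below is a generic stand-in: a random forest trained on synthetic seven-dimensional feature vectors, not the authors' features, data, or classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature vectors: a few optical-mouse texture statistics plus
# four photodiode colour readings per floor sample; labels are room IDs.
rng = np.random.default_rng(4)
n_rooms, samples_per_room = 5, 60
X, y = [], []
for room in range(n_rooms):
    centre = rng.uniform(0, 1, size=7)                 # 3 texture + 4 colour
    X.append(centre + rng.normal(0, 0.05, size=(samples_per_room, 7)))
    y.append(np.full(samples_per_room, room))
X, y = np.vstack(X), np.concatenate(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out room classification accuracy: {clf.score(X_te, y_te):.2f}")
```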

  28. Reliable sagittal plane kinematic gait assessments are feasible using low-cost webcam technology.

    PubMed

    Saner, Robert J; Washabaugh, Edward P; Krishnan, Chandramouli

    2017-07-01

    Three-dimensional (3-D) motion capture systems are commonly used for gait analysis because they provide reliable and accurate measurements. However, the downside of this approach is that it is expensive and requires technical expertise, thus making it less feasible in the clinic. To address this limitation, we recently developed and validated (using a high-precision walking robot) a low-cost, two-dimensional (2-D) real-time motion tracking approach using a simple webcam and LabVIEW Vision Assistant. The purpose of this study was to establish the repeatability and minimal detectable change values of hip and knee sagittal plane gait kinematics recorded using this system. Twenty-one healthy subjects underwent two kinematic assessments while walking on a treadmill at a range of gait velocities. Intraclass correlation coefficients (ICC) and minimal detectable change (MDC) values were calculated for commonly used hip and knee kinematic parameters to demonstrate the reliability of the system. Additionally, Bland-Altman plots were generated to examine the agreement between the measurements recorded on two different days. The system demonstrated good to excellent reliability (ICC>0.75) for all the gait parameters tested in this study. The MDC values were typically low (<5°) for most of the parameters. The Bland-Altman plots indicated that there was no systematic error or bias in kinematic measurements and showed good agreement between measurements obtained on two different days. These results indicate that kinematic gait assessments using webcam technology can be reliably used for clinical and research purposes. Copyright © 2017 Elsevier B.V. All rights reserved.
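
    A brief sketch of the reliability statistics named in this record, under the common conventions ICC(2,1) for a two-way random-effects, single-measure ICC and MDC95 = 1.96·√2·SEM with SEM = SD·√(1 − ICC). The knee-angle numbers are hypothetical, not the study's data.

```python
import numpy as np

def icc_2_1(data):
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1).
    data: array of shape (n_subjects, n_sessions)."""
    x = np.asarray(data, float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

def mdc95(data):
    """Minimal detectable change at the 95% level from test-retest data."""
    icc = icc_2_1(data)
    sem = np.std(data, ddof=1) * np.sqrt(1.0 - icc)   # standard error of
    return 1.96 * np.sqrt(2.0) * sem                  # measurement

# Hypothetical peak knee flexion angles (deg) on two days for 6 subjects.
angles = np.array([[62.1, 61.4], [58.7, 59.9], [65.2, 64.8],
                   [60.3, 61.0], [57.9, 57.1], [63.5, 64.2]])
print(f"ICC(2,1) = {icc_2_1(angles):.3f}, MDC95 = {mdc95(angles):.2f} deg")
```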

  29. Image stacking approach to increase sensitivity of fluorescence detection using a low cost complementary metal-oxide-semiconductor (CMOS) webcam.

    PubMed

    Balsam, Joshua; Bruck, Hugh Alan; Kostov, Yordan; Rasooly, Avraham

    2012-01-01

    Optical technologies are important for biological analysis. Current biomedical optical analyses rely on high-cost, high-sensitivity optical detectors such as photomultipliers, avalanche photodiodes or cooled CCD cameras. In contrast, webcams, mobile phones and other popular consumer electronics use lower-sensitivity, lower-cost optical components such as photodiodes or CMOS sensors. In order for consumer electronics devices, such as webcams, to be useful for biomedical analysis, they must have increased sensitivity. We combined two strategies to increase the sensitivity of a CMOS-based fluorescence detector. We captured hundreds of low-sensitivity images using a webcam in video mode, instead of the single image typically used in cooled CCD devices. We then used a computational approach consisting of an image stacking algorithm to remove the noise by combining all of the images into a single image. While video mode is widely used for dynamic scene imaging (e.g. movies or time-lapse photography), it is not normally used to produce a single static image; here, stacking the video frames removes noise and increases sensitivity by more than thirty-fold. The portable, battery-operated webcam-based fluorometer system developed here consists of five modules: (1) a low cost CMOS webcam to monitor light emission, (2) a plate to perform assays, (3) filters and a multi-wavelength LED illuminator for fluorophore excitation, (4) a portable computer to acquire and analyze images, and (5) image stacking software for image enhancement. The samples consisted of various concentrations of fluorescein, ranging from 30 μM to 1000 μM, in a 36-well miniature plate. In the single frame mode, the fluorometer's limit-of-detection (LOD) for fluorescein is ∼1000 μM, which is relatively insensitive. However, when used in video mode combined with image stacking enhancement, the LOD is dramatically reduced to 30 μM, a sensitivity similar to that of state-of-the-art ELISA plate photomultiplier-based readers. Numerous medical diagnostics assays rely on optical and fluorescence readers. Our novel combination of detection technologies, which is new to biodetection, may enable the development of new low cost optical detectors based on an inexpensive webcam (<$10). It has the potential to form the basis for high sensitivity, low cost medical diagnostics in resource-poor settings.

  30. Image stacking approach to increase sensitivity of fluorescence detection using a low cost complementary metal-oxide-semiconductor (CMOS) webcam

    PubMed Central

    Balsam, Joshua; Bruck, Hugh Alan; Kostov, Yordan; Rasooly, Avraham

    2013-01-01

    Optical technologies are important for biological analysis. Current biomedical optical analyses rely on high-cost, high-sensitivity optical detectors such as photomultipliers, avalanche photodiodes or cooled CCD cameras. In contrast, webcams, mobile phones and other popular consumer electronics use lower-sensitivity, lower-cost optical components such as photodiodes or CMOS sensors. In order for consumer electronics devices, such as webcams, to be useful for biomedical analysis, they must have increased sensitivity. We combined two strategies to increase the sensitivity of a CMOS-based fluorescence detector. We captured hundreds of low-sensitivity images using a webcam in video mode, instead of the single image typically used in cooled CCD devices. We then used a computational approach consisting of an image stacking algorithm to remove the noise by combining all of the images into a single image. While video mode is widely used for dynamic scene imaging (e.g. movies or time-lapse photography), it is not normally used to produce a single static image; here, stacking the video frames removes noise and increases sensitivity by more than thirty-fold. The portable, battery-operated webcam-based fluorometer system developed here consists of five modules: (1) a low cost CMOS webcam to monitor light emission, (2) a plate to perform assays, (3) filters and a multi-wavelength LED illuminator for fluorophore excitation, (4) a portable computer to acquire and analyze images, and (5) image stacking software for image enhancement. The samples consisted of various concentrations of fluorescein, ranging from 30 μM to 1000 μM, in a 36-well miniature plate. In the single frame mode, the fluorometer's limit-of-detection (LOD) for fluorescein is ∼1000 μM, which is relatively insensitive. However, when used in video mode combined with image stacking enhancement, the LOD is dramatically reduced to 30 μM, a sensitivity similar to that of state-of-the-art ELISA plate photomultiplier-based readers. Numerous medical diagnostics assays rely on optical and fluorescence readers. Our novel combination of detection technologies, which is new to biodetection, may enable the development of new low cost optical detectors based on an inexpensive webcam (<$10). It has the potential to form the basis for high sensitivity, low cost medical diagnostics in resource-poor settings. PMID:23990697
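
    A minimal sketch of the image-stacking step described in these two records: averaging many video frames so that roughly uncorrelated sensor noise falls as about the square root of the frame count. The frame count and file name are illustrative assumptions.

```python
import numpy as np
import cv2

def stack_frames(video_path, n_frames=300):
    """Average many low-sensitivity frames into one image. For roughly
    uncorrelated sensor noise the SNR improves by about sqrt(n_frames),
    which is the idea behind the image-stacking step described above."""
    cap = cv2.VideoCapture(video_path)
    acc, count = None, 0
    while count < n_frames:
        ok, frame = cap.read()
        if not ok:
            break
        frame = frame.astype(np.float64)
        acc = frame if acc is None else acc + frame
        count += 1
    cap.release()
    if count == 0:
        raise RuntimeError("no frames read")
    stacked = acc / count
    return np.clip(stacked, 0, 255).astype(np.uint8)

# stacked = stack_frames("fluorescence_capture.avi")   # hypothetical file
# cv2.imwrite("stacked.png", stacked)
```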

  31. On-Chip Imaging of Schistosoma haematobium Eggs in Urine for Diagnosis by Computer Vision

    PubMed Central

    Linder, Ewert; Grote, Anne; Varjo, Sami; Linder, Nina; Lebbad, Marianne; Lundin, Mikael; Diwan, Vinod; Hannuksela, Jari; Lundin, Johan

    2013-01-01

    Background: Microscopy, being relatively easy to perform at low cost, is the universal diagnostic method for detection of most globally important parasitic infections. As quality control is hard to maintain, misdiagnosis is common, which affects both estimates of parasite burdens and patient care. Novel techniques for high-resolution imaging and image transfer over data networks may offer solutions to these problems through provision of education, quality assurance and diagnostics. Imaging can be done directly on image sensor chips, a technique possible to exploit commercially for the development of inexpensive “mini-microscopes”. Images can be transferred for analysis both visually and by computer vision, both at point-of-care and at remote locations. Methods/Principal Findings: Here we describe imaging of helminth eggs using mini-microscopes constructed from webcams and mobile phone cameras. The results show that an inexpensive webcam, stripped of its optics to allow direct application of the test sample on the exposed surface of the sensor, yields images of Schistosoma haematobium eggs, which can be identified visually. Using a highly specific image pattern recognition algorithm, 4 out of 5 eggs observed visually could be identified. Conclusions/Significance: As proof of concept we show that an inexpensive imaging device, such as a webcam, may be easily modified into a microscope for the detection of helminth eggs based on on-chip imaging. Furthermore, algorithms for helminth egg detection by machine vision can be generated for automated diagnostics. The results can be exploited for constructing simple imaging devices for low-cost diagnostics of urogenital schistosomiasis and other neglected tropical infectious diseases. PMID:24340107

  32. Fiber specklegram sensors sensitivities at high temperatures

    NASA Astrophysics Data System (ADS)

    Rodriguez-Cobo, L.; Lomer, M.; Lopez-Higuera, J. M.

    2015-09-01

    In this work, the sensitivity of fiber specklegram sensors (FSS) to high temperatures (up to 800ºC) has been studied. Two multimode silica fibers were introduced into a tubular furnace while light from a HeNe laser source was launched into one fiber end, projecting speckle patterns onto a commercial webcam. A computer generated different heating and cooling sweeps while the specklegram evolution was recorded. The achieved results exhibit remarkable linearity in the FSS's sensitivity for temperatures under 800ºC, following the thermal expansion of fused silica.
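
    A hedged sketch of one common way to quantify specklegram change: a zero-mean normalised cross-correlation between the current frame and a reference specklegram. The record does not state the authors' exact processing, and the synthetic frames are for illustration only.

```python
import numpy as np

def speckle_correlation(reference, frame):
    """Zero-mean normalised cross-correlation between a reference specklegram
    and the current frame; it drops from 1.0 as the fibre is perturbed
    (e.g. by thermal expansion of the silica). Inputs are 2-D grayscale."""
    a = np.array(reference, dtype=float)
    b = np.array(frame, dtype=float)
    a -= a.mean()
    b -= b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy usage with synthetic speckle-like frames.
rng = np.random.default_rng(3)
ref = rng.random((240, 320))
perturbed = 0.9 * ref + 0.1 * rng.random((240, 320))   # slight decorrelation
print(f"correlation: {speckle_correlation(ref, perturbed):.3f}")
```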

  33. LabVIEW application for motion tracking using USB camera

    NASA Astrophysics Data System (ADS)

    Rob, R.; Tirian, G. O.; Panoiu, M.

    2017-05-01

    The technical state of the contact line, and of the additional equipment in electric rail transport, is very important for carrying out repair and maintenance of the contact line. During operation, the pantograph motion must stay within standard limits. The present paper proposes a LabVIEW application which is able to track the motion of a laboratory pantograph in real time and also to acquire the tracking images. A USB webcam connected to a computer acquires the desired images. The laboratory pantograph contains an automatic system which simulates the real motion. The tracking parameters are the horizontal motion (zigzag) and the vertical motion, which can be studied in separate diagrams. The LabVIEW application requires appropriate toolkits for vision development. Therefore, the paper describes the subroutines that are especially programmed for real-time image acquisition and also for data processing.

  34. Webcam as a new invigilation method: students' comfort and potential for cheating.

    PubMed

    Mirza, Noeman; Staples, Eric

    2010-02-01

    The purpose of this descriptive survey study was to determine the comfort of nurse practitioner (NP) students with webcam invigilation of online examinations and the effectiveness of webcam invigilation in preventing students from cheating. An online questionnaire was developed for NP students currently enrolled in Ontario's Primary Health Care Nurse Practitioner program, in which online examinations are invigilated through a webcam. All students were contacted via e-mail and invited to participate in the online questionnaire. The response rate was 77%. Data were collected and analyzed. Results demonstrated that webcam invigilation can be an uncomfortable experience and that cheating on webcam-invigilated examinations is possible. The results will contribute to the scarce literature available on webcam invigilation of online examinations, but research with a larger sample is needed if results are to be generalized to the webcam invigilation process.

  35. Therapeutic uses of the WebCam in child psychiatry.

    PubMed

    Chlebowski, Susan; Fremont, Wanda

    2011-01-01

    The authors provide examples for the use of the WebCam as a therapeutic tool in child psychiatry, discussing cases to demonstrate the application of the WebCam, which is most often used in psychiatry training programs during resident supervision and for case presentations. Six cases illustrate the use of the WebCam in individual and family therapy. The WebCam, used during individual sessions, can facilitate the development of prosocial skills. Comparing individual WebCam video sessions can help to evaluate the effectiveness of medication and progress in therapy. The WebCam has proven to be useful in psycho-education, facilitating communication, and treating children and families. The applications of this technology may include cognitive-behavioral, dialectical-behavioral, and group therapy.

  36. Radial line method for rear-view mirror distortion detection

    NASA Astrophysics Data System (ADS)

    Rahmah, Fitri; Kusumawardhani, Apriani; Setijono, Heru; Hatta, Agus M.; Irwansyah, .

    2015-01-01

    An image of an object can be distorted due to a defect in a mirror. A rear-view mirror is an important component for vehicle safety. One of the standard parameters of the rear-view mirror is a distortion factor. This paper presents a radial line method for distortion detection in rear-view mirrors. The rear-view mirror was tested for distortion using a system consisting of a webcam sensor and an image-processing unit. In the image-processing unit, the captured image from the webcam was pre-processed using smoothing and sharpening techniques, and then the radial line method was used to determine the distortion factor. It was demonstrated successfully that the radial line method could be used to determine the distortion factor. This detection system is useful for implementation in, for example, the Indonesian automotive component industry, where manual inspection is still used.

  37. Webcam-based flow cytometer using wide-field imaging for low cell number detection at high throughput.

    PubMed

    Balsam, Joshua; Bruck, Hugh Alan; Rasooly, Avraham

    2014-09-07

    Here we describe a novel low-cost flow cytometer based on a webcam capable of low cell number detection in a large volume, which may overcome the limitations of current flow cytometry. Several key elements have been combined to yield both high throughput and high sensitivity. The first element is a commercially available webcam capable of 187 frames per second video capture at a resolution of 320 × 240 pixels. The second element in this design is a 1 W 450 nm laser module for area-excitation, which combined with the webcam allows for rapid interrogation of a flow field. The final element is a 2D flow-cell which overcomes the flow limitation of hydrodynamic focusing and allows for higher sample throughput in a wider flow field. This cell allows the linear velocity of target cells to be lower than in conventional "1D" hydrodynamic focusing flow-cells typically used in cytometry at similar volumetric flow rates. It also allows cells to be imaged at the full frame rate of the webcam. Using this webcam-based flow cytometer with wide-field imaging, it was confirmed that the detection of fluorescently tagged 5 μm polystyrene beads in "1D" hydrodynamic focusing flow-cells was not practical for low cell number detection due to streaking from the motion of the beads, which did not occur with the 2D flow-cell design. The sensitivity and throughput of this webcam-based flow cytometer was then investigated using THP-1 human monocytes stained with SYTO-9 fluorescent dye in the 2D flow-cell. The flow cytometer was found to be capable of detecting fluorescently tagged cells at concentrations as low as 1 cell per mL at flow rates of 500 μL min(-1) in buffer and in blood. The effectiveness of detection was concentration dependent: at 100 cells per mL, 84% of the cells were detected (compared to microscopy); at 10 cells per mL, 79%; and at 1 cell per mL, 59%. With the blood samples spiked to 100 cells per mL, the average concentration for all samples was 91.4 cells per mL, with a 95% confidence interval of 86-97 cells per mL. These low cell concentrations and the large volume capabilities of the system may overcome the limitations of current cytometry, and are applicable to rare cell (such as circulating tumor cell) detection. The simplicity and low cost of this device suggests that it may have a potential use in developing point-of-care clinical flow cytometry for resource-poor settings associated with global health.

  38. Speckle POF sensor for detecting vital signs of patients

    NASA Astrophysics Data System (ADS)

    Lomer, M.; Rodriguez-Cobo, L.; Revilla, P.; Herrero, G.; Madruga, F.; Lopez-Higuera, J. M.

    2014-05-01

    In this work, both arterial pulse and respiratory rate have been successfully measured based on changes in speckle patterns of multimode fibers. Using two fiber-based transducers, one located on the wrist and another in the chest, both disturbances were transmitted to the fiber, varying the speckle pattern. These variations of the speckle pattern were captured using a commercial webcam and further processed using different methods. The achieved results have been presented and the simultaneous monitoring of both vital signs has been also discussed. The feasibility to use the proposed sensor system for this application is demonstrated.

  39. Optical readout of a two phase liquid argon TPC using CCD camera and THGEMs

    NASA Astrophysics Data System (ADS)

    Mavrokoridis, K.; Ball, F.; Carroll, J.; Lazos, M.; McCormick, K. J.; Smith, N. A.; Touramanis, C.; Walker, J.

    2014-02-01

    This paper presents a preliminary study into the use of CCDs to image secondary scintillation light generated by THick Gas Electron Multipliers (THGEMs) in a two-phase LAr TPC. A Sony ICX285AL CCD chip was mounted above a double THGEM in the gas phase of a 40 litre two-phase LAr TPC, with the majority of the camera electronics positioned externally via a feedthrough. An Am-241 source was mounted on a rotatable motion feedthrough allowing the positioning of the alpha source either inside or outside of the field cage. Developed for and incorporated into the TPC design was a novel high voltage feedthrough featuring LAr insulation. Furthermore, a range of webcams were tested for operation in cryogenics as an internal detector monitoring tool. Of the range of webcams tested, the Microsoft HD-3000 (model no: 1456) webcam was found to be superior in terms of noise and lowest operating temperature. In 1 ppm pure argon gas at ambient temperature and atmospheric pressure, the THGEM gain was ≈ 1000, and using a 1 ms exposure the CCD captured single alpha tracks. Successful operation of the CCD camera in two-phase cryogenic mode was also achieved. Using a 10 s exposure, a photograph of secondary scintillation light induced by the Am-241 source in LAr has been captured for the first time.

  40. Georectification and snow classification of webcam images: potential for complementing satellite-derived snow maps over Switzerland

    NASA Astrophysics Data System (ADS)

    Dizerens, Céline; Hüsler, Fabia; Wunderle, Stefan

    2016-04-01

    The spatial and temporal variability of snow cover has a significant impact on climate and environment and is of great socio-economic importance for the European Alps. Satellite remote sensing data is widely used to study snow cover variability and can provide spatially comprehensive information on snow cover extent. However, cloud cover strongly impedes the surface view and hence limits the number of useful snow observations. Outdoor webcam images not only offer unique potential for complementing satellite-derived snow retrieval under cloudy conditions but could also serve as a reference for improved validation of satellite-based approaches. Thousands of webcams are currently connected to the Internet and deliver freely available images with high temporal and spatial resolutions. To exploit the untapped potential of these webcams, a semi-automatic procedure was developed to generate snow cover maps based on webcam images. We used daily webcam images of the Swiss alpine region to apply, improve, and extend existing approaches dealing with the positioning of photographs within a terrain model, appropriate georectification, and the automatic snow classification of such photographs. In this presentation, we provide an overview of the implemented procedure and demonstrate how our registration approach automatically resolves the orientation of a webcam by using a high-resolution digital elevation model and the webcam's position. This allows snow-classified pixels of webcam images to be related to their real-world coordinates. We present several examples of resulting snow cover maps, which have the same resolution as the digital elevation model and indicate whether each grid cell is snow-covered, snow-free, or not visible from webcams' positions. The procedure is expected to work under almost any weather condition and demonstrates the feasibility of using webcams for the retrieval of high-resolution snow cover information.

  41. Therapeutic Uses of the WebCam in Child Psychiatry

    ERIC Educational Resources Information Center

    Chlebowski, Susan; Fremont, Wanda

    2011-01-01

    Objective: The authors provide examples for the use of the WebCam as a therapeutic tool in child psychiatry, discussing cases to demonstrate the application of the WebCam, which is most often used in psychiatry training programs during resident supervision and for case presentations. Method: Six cases illustrate the use of the WebCam in individual…

  42. Potential and limitations of webcam images for snow cover monitoring in the Swiss Alps

    NASA Astrophysics Data System (ADS)

    Dizerens, Céline; Hüsler, Fabia; Wunderle, Stefan

    2017-04-01

    In Switzerland, several thousand outdoor webcams are currently connected to the Internet. They deliver freely available images that can be used to analyze snow cover variability at high spatio-temporal resolution. To make use of this big data source, we have implemented a webcam-based snow cover mapping procedure, which allows snow cover maps to be derived almost automatically from such webcam images. As there is usually no information available about the webcams and their parameters, our registration approach automatically resolves these parameters (camera orientation, principal point, field of view) by using an estimate of the webcam's position, the mountain silhouette, and a high-resolution digital elevation model (DEM). Combined with an automatic snow classification and an image alignment using SIFT features, our procedure can be applied to arbitrary images to generate snow cover maps with a minimum of effort. Resulting snow cover maps have the same resolution as the digital elevation model and indicate whether each grid cell is snow-covered, snow-free, or hidden from the webcams' positions. Up to now, we have processed images from about 290 webcams in our archive and evaluated images of 20 webcams using manually selected ground control points (GCPs) to assess the mapping accuracy of our procedure. We present methodological limitations and ongoing improvements, show some applications of our snow cover maps, and demonstrate that webcams not only offer a great opportunity to complement satellite-derived snow retrieval under cloudy conditions, but also serve as a reference for improved validation of satellite-based approaches.
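
    As a minimal sketch of the SIFT-based image alignment mentioned above (registering a new webcam image to a reference view of the same scene), the following Python/OpenCV example estimates a homography from matched SIFT keypoints; it is not the authors' implementation, and the file names are placeholders.

    import cv2
    import numpy as np

    ref = cv2.imread("reference_view.jpg", cv2.IMREAD_GRAYSCALE)   # already georectified view
    img = cv2.imread("new_image.jpg", cv2.IMREAD_GRAYSCALE)        # image to align

    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref, None)
    kp_img, des_img = sift.detectAndCompute(img, None)

    # Match descriptors and keep unambiguous matches (Lowe's ratio test).
    matches = cv2.BFMatcher().knnMatch(des_img, des_ref, k=2)
    good = [m for m, n in (p for p in matches if len(p) == 2)
            if m.distance < 0.75 * n.distance]

    # Estimate a homography (needs at least 4 good matches) and warp the new
    # image onto the reference view so the same DEM grid mapping can be reused.
    src = np.float32([kp_img[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    aligned = cv2.warpPerspective(img, H, (ref.shape[1], ref.shape[0]))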

  3. Coastal dynamics on a soft coastline from serendipitous webcams: KwaZulu-Natal, South Africa

    NASA Astrophysics Data System (ADS)

    Guastella, Lisa A.; Smith, Alan M.

    2014-10-01

    Webcams have become popular means of showcasing beach conditions for a wide variety of beach users. However, webcams can also be a useful tool in assessing changes in coastal morphology and coastal processes. This information can be used by managers to assist in planning. A number of fixed-position beach webcams are freely available to the South African public via various tourism, surfing, weather and aviation websites, individual clubs and a cell-phone network provider. The advantages of these public networks are that the information is free and that, as the webcams are fixed, they afford a consistent and comparable view of the beach. The disadvantage is that you are at the mercy of the provider: resolution is generally poor, downtime and communication are out of your control, and you have no influence over the positioning of the webcam or over interruptions to the service. Notwithstanding the above, the existing webcams can still provide valuable information. From the network of beach webcams available in South Africa we analyse imagery from three beach webcams located in the province of KwaZulu-Natal, at Umhlanga, Margate beach and lagoon, and Amanzimtoti beach and lagoon, to examine the coastal dynamics. From these case studies we illustrate seasonal beach rotation and lagoon mouth dynamics, specifically why outlets migrate southwards in opposition to regional longshore drift.

  4. Use of wildlife webcams - Literature review and annotated bibliography

    USGS Publications Warehouse

    Ratz, Joan M.; Conk, Shannon J.

    2010-01-01

    The U.S. Fish and Wildlife Service National Conservation Training Center requested a literature review product that would serve as a resource to natural resource professionals interested in using webcams to connect people with nature. The literature review focused on the effects on the public of viewing wildlife through webcams and on information regarding installation and use of webcams. We searched the peer reviewed, published literature for three topics: wildlife cameras, virtual tourism, and technological nature. Very few publications directly addressed the effect of viewing wildlife webcams. The review of information on installation and use of cameras yielded information about many aspects of the use of remote photography, but not much specifically regarding webcams. Aspects of wildlife camera use covered in the literature review include: camera options, image retrieval, system maintenance and monitoring, time to assemble, power source, light source, camera mount, frequency of image recording, consequences for animals, and equipment security. Webcam technology is relatively new, and more publications regarding its use are needed. Future research should specifically study the effect that viewing wildlife through webcams has on the viewers' conservation attitudes, behaviors, and sense of connectedness to nature.

  5. A study of CR-39 plastic charged-particle detector replacement by consumer imaging sensors

    NASA Astrophysics Data System (ADS)

    Plaud-Ramos, K. O.; Freeman, M. S.; Wei, W.; Guardincerri, E.; Bacon, J. D.; Cowan, J.; Durham, J. M.; Huang, D.; Gao, J.; Hoffbauer, M. A.; Morley, D. J.; Morris, C. L.; Poulson, D. C.; Wang, Zhehui

    2016-11-01

    Consumer imaging sensors (CIS) are examined for real-time charged-particle detection and CR-39 plastic detector replacement. Removing cover glass from CIS is hard if not impossible, in particular for the latest inexpensive webcam models. We show that $10-class CIS are sensitive to MeV and higher energy protons and α-particles by using a 90Sr β-source with its cover glass in place. Indirect, real-time, high-resolution detection is also feasible when combining CIS with a ZnS:Ag phosphor screen and optics. Noise reduction in CIS is nevertheless important for the indirect approach.
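
    For the indirect path (a phosphor screen imaged by the sensor), hits appear as small bright clusters on an otherwise dark frame, so a generic sketch of real-time counting is dark-frame subtraction followed by thresholding and connected-component labelling. The threshold and dark-frame strategy below are assumptions for illustration, not the procedure used in the paper.

    import cv2
    import numpy as np

    def count_hits(frame_gray, dark_frame, n_sigma=5.0):
        """Count bright clusters (candidate particle hits) in one grayscale frame.

        dark_frame: average of frames taken with the source removed, used for
        noise reduction; n_sigma sets an illustrative detection threshold.
        """
        residual = frame_gray.astype(np.float32) - dark_frame.astype(np.float32)
        thresh = residual.mean() + n_sigma * residual.std()
        binary = (residual > thresh).astype(np.uint8)
        n_labels, _ = cv2.connectedComponents(binary)
        return n_labels - 1          # label 0 is the background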

  6. A study of CR-39 plastic charged-particle detector replacement by consumer imaging sensors

    DOE PAGES

    Plaud-Ramos, Kenie Omar; Freeman, Matthew Stouten; Wei, Wanchun; ...

    2016-08-03

    Consumer imaging sensors (CIS) are examined for real-time charged-particle detection and CR-39 plastic detector replacement. Removing cover glass from CIS is hard if not impossible, in particular for the latest inexpensive webcam models. We show that $10-class CIS are sensitive to MeV and higher energy protons and α-particles by using a 90Sr β-source with its cover glass in place. Indirect, real-time, high-resolution detection is also feasible when combining CIS with a ZnS:Ag phosphor screen and optics. Noise reduction in CIS is nevertheless important for the indirect approach.

  7. A study of CR-39 plastic charged-particle detector replacement by consumer imaging sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plaud-Ramos, K. O.; Freeman, M. S.; Wei, W.

    Consumer imaging sensors (CIS) are examined for real-time charged-particle detection and CR-39 plastic detector replacement. Removing cover glass from CIS is hard if not impossible, in particular for the latest inexpensive webcam models. We show that $10-class CIS are sensitive to MeV and higher energy protons and α-particles by using a 90Sr β-source with its cover glass in place. Indirect, real-time, high-resolution detection is also feasible when combining CIS with a ZnS:Ag phosphor screen and optics. Noise reduction in CIS is nevertheless important for the indirect approach.

  8. A study of CR-39 plastic charged-particle detector replacement by consumer imaging sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plaud-Ramos, Kenie Omar; Freeman, Matthew Stouten; Wei, Wanchun

    Consumer imaging sensors (CIS) are examined for real-time charged-particle detection and CR-39 plastic detector replacement. Removing cover glass from CIS is hard if not impossible, in particular for the latest inexpensive webcam models. We show that $10-class CIS are sensitive to MeV and higher energy protons and α-particles by using a 90Sr β-source with its cover glass in place. Indirect, real-time, high-resolution detection is also feasible when combining CIS with a ZnS:Ag phosphor screen and optics. Noise reduction in CIS is nevertheless important for the indirect approach.

  9. The Webcam system: a simple, automated, computer-based video system for quantitative measurement of movement in nonhuman primates.

    PubMed

    Togasaki, Daniel M; Hsu, Albert; Samant, Meghana; Farzan, Bijan; DeLanney, Louis E; Langston, J William; Di Monte, Donato A; Quik, Maryka

    2005-06-30

    Investigations using models of neurologic disease frequently involve quantifying animal motor activity. We developed a simple method for measuring motor activity using a computer-based video system (the Webcam system) consisting of an inexpensive video camera connected to a personal computer running customized software. Images of the animals are captured at half-second intervals and movement is quantified as the number of pixel changes between consecutive images. The Webcam system allows measurement of motor activity of the animals in their home cages, without devices affixed to their bodies. Webcam quantification of movement was validated by correlation with measures simultaneously obtained by two other methods: measurement of locomotion by interruption of infrared beams; and measurement of general motor activity using portable accelerometers. In untreated squirrel monkeys, correlations of Webcam and locomotor activity exceeded 0.79, and correlations with general activity counts exceeded 0.65. Webcam activity decreased after the monkeys were rendered parkinsonian by treatment with 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP), but the correlations with the other measures of motor activity were maintained. Webcam activity also correlated with clinical ratings of parkinsonism. These results indicate that the Webcam system is reliable under both untreated and experimental conditions and is an excellent method for quantifying motor activity in animals.
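
    The core measurement, the number of pixels that change between consecutive images, can be sketched as follows; the change threshold and the video source are assumptions, and this is not the authors' software.

    import cv2

    def activity_counts(video_path, diff_thresh=25):
        """Return one activity value (number of changed pixels) per frame pair."""
        cap = cv2.VideoCapture(video_path)
        counts, prev = [], None
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev is not None:
                diff = cv2.absdiff(gray, prev)
                counts.append(int((diff > diff_thresh).sum()))  # pixels that changed
            prev = gray
        cap.release()
        return counts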

  10. Perceptions of webcams in the neonatal intensive care unit: here's looking at you kid!

    PubMed

    Hawkes, Gavin A; Livingstone, Vicki; Ryan, C Anthony; Dempsey, Eugene Michael

    2015-02-01

    Many tertiary neonatal units employ a restricted visiting policy. Webcams have previously been implemented in the neonatal unit setting in several countries. This study aims to determine the views of parents, physicians, and nursing staff before implementation of a webcam system. A questionnaire-based study was conducted, with 101 responses. Parental computer usage was 83%. The majority of parents indicated that they would use the webcam system. Parents felt that a webcam system would reduce stress. Members of the nursing staff were most concerned about privacy risks (68%), compared with parents, who were confident in the security of these systems (92%, p-value < 0.001). Seventy-two percent of nurses felt that a webcam system would increase the stress levels of staff, as compared with less than 20% of the physicians (p-value < 0.001). The majority of parents who completed the questionnaire have positive attitudes toward implementation of a webcam system in the NICU. Education of health care staff is required before implementation.

  11. Webcam classification using simple features

    NASA Astrophysics Data System (ADS)

    Pramoun, Thitiporn; Choe, Jeehyun; Li, He; Chen, Qingshuang; Amornraksa, Thumrongrat; Lu, Yung-Hsiang; Delp, Edward J.

    2015-03-01

    Thousands of sensors are connected to the Internet and many of these sensors are cameras. The "Internet of Things" will contain many "things" that are image sensors. This vast network of distributed cameras (i.e., webcams) will continue to grow exponentially. In this paper we examine simple methods to classify an image from a webcam as "indoor/outdoor" and having "people/no people" based on simple features. We use four types of image features to classify an image as indoor/outdoor: color, edge, line, and text. To classify an image as having people/no people we use HOG and texture features. The features are weighted based on their significance and combined. A support vector machine is used for classification. Our system with feature weighting and feature combination yields 95.5% accuracy.
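
    A minimal sketch of the final classification stage (weighted feature combination followed by a support vector machine) is shown below with scikit-learn; the feature files, weights, and kernel settings are placeholders, not the values used in the paper.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: one row per image, columns are precomputed feature groups concatenated
    # (e.g. color, edge, line and text statistics); y: 1 = indoor, 0 = outdoor.
    X = np.load("features.npy")          # hypothetical precomputed features
    y = np.load("labels.npy")

    weights = np.ones(X.shape[1])        # illustrative per-feature weights
    X_weighted = X * weights

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_weighted, y)
    print("training accuracy:", clf.score(X_weighted, y))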

  12. Automated processing of webcam images for phenological classification.

    PubMed

    Bothmann, Ludwig; Menzel, Annette; Menze, Bjoern H; Schunk, Christian; Kauermann, Göran

    2017-01-01

    Along with global climate change, there is increasing interest in its effects on phenological patterns such as the start and end of the growing season. Scientific digital webcams are used for this purpose, taking one or more images per day of the same natural scene, showing for example trees or grassland sites. To derive phenological patterns from the webcam images, regions of interest are manually defined on these images by an expert, and subsequently a time series of percentage greenness is derived and analyzed with respect to structural changes. While this standard approach leads to satisfactory results and allows dates of phenological change points to be determined, it involves a considerable amount of manual work and is therefore constrained to a limited number of webcams. In particular, this precludes applying the phenological analysis to a large network of publicly accessible webcams in order to capture spatial phenological variation. To scale the analysis up to several hundred or thousand webcams, we propose and evaluate two automated alternatives for defining regions of interest, allowing for efficient analyses of webcam images. A semi-supervised approach selects pixels based on the correlation of the pixels' time series of percentage greenness with a few prototype pixels. An unsupervised approach clusters pixels based on scores of a singular value decomposition. We show for a scientific webcam that the resulting regions of interest are at least as informative as those chosen by an expert, with the advantage that no manual action is required. Additionally, we show that the methods can even be applied to publicly available webcams accessed via the internet, yielding interesting partitions of the analyzed images. Finally, we show that the methods are suitable for the intended big data applications by analyzing 13988 webcams from the AMOS database. All developed methods are implemented in the statistical software package R and publicly available in the R package phenofun. Executable example code is provided as supplementary material.
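
    The semi-supervised variant, selecting pixels whose greenness time series correlate strongly with a few hand-picked prototype pixels, can be sketched as follows. The authors provide an R implementation (package phenofun); this Python sketch with an assumed correlation threshold is only illustrative.

    import numpy as np

    def select_roi(greenness, prototypes, r_min=0.9):
        """Select region-of-interest pixels by correlation with prototype pixels.

        greenness: float array of shape (T, H, W), per-pixel percentage greenness
        over T image dates; prototypes: list of (row, col) pixels chosen by eye.
        Returns a boolean (H, W) mask of pixels correlating with the mean
        prototype series at r >= r_min (threshold is an assumption).
        """
        T, H, W = greenness.shape
        proto = np.mean([greenness[:, r, c] for r, c in prototypes], axis=0)
        series = greenness.reshape(T, -1)
        series_c = series - series.mean(axis=0)
        proto_c = proto - proto.mean()
        num = series_c.T @ proto_c                       # per-pixel covariance term
        den = np.sqrt((series_c ** 2).sum(axis=0) * (proto_c ** 2).sum())
        r = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
        return (r >= r_min).reshape(H, W)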

  13. Automated processing of webcam images for phenological classification

    PubMed Central

    Bothmann, Ludwig; Menzel, Annette; Menze, Bjoern H.; Schunk, Christian; Kauermann, Göran

    2017-01-01

    Along with global climate change, there is increasing interest in its effects on phenological patterns such as the start and end of the growing season. Scientific digital webcams are used for this purpose, taking one or more images per day of the same natural scene, showing for example trees or grassland sites. To derive phenological patterns from the webcam images, regions of interest are manually defined on these images by an expert, and subsequently a time series of percentage greenness is derived and analyzed with respect to structural changes. While this standard approach leads to satisfactory results and allows dates of phenological change points to be determined, it involves a considerable amount of manual work and is therefore constrained to a limited number of webcams. In particular, this precludes applying the phenological analysis to a large network of publicly accessible webcams in order to capture spatial phenological variation. To scale the analysis up to several hundred or thousand webcams, we propose and evaluate two automated alternatives for defining regions of interest, allowing for efficient analyses of webcam images. A semi-supervised approach selects pixels based on the correlation of the pixels' time series of percentage greenness with a few prototype pixels. An unsupervised approach clusters pixels based on scores of a singular value decomposition. We show for a scientific webcam that the resulting regions of interest are at least as informative as those chosen by an expert, with the advantage that no manual action is required. Additionally, we show that the methods can even be applied to publicly available webcams accessed via the internet, yielding interesting partitions of the analyzed images. Finally, we show that the methods are suitable for the intended big data applications by analyzing 13988 webcams from the AMOS database. All developed methods are implemented in the statistical software package R and publicly available in the R package phenofun. Executable example code is provided as supplementary material. PMID:28235092

  14. "Proximal Sensing" capabilities for snow cover monitoring

    NASA Astrophysics Data System (ADS)

    Valt, Mauro; Salvatori, Rosamaria; Plini, Paolo; Salzano, Roberto; Giusti, Marco; Montagnoli, Mauro; Sigismondi, Daniele; Cagnati, Anselmo

    2013-04-01

    The seasonal snow cover represents one of the most important land cover classes for environmental studies in mountain areas, especially considering its variation over time. Snow cover and its extent play a relevant role in studies of atmospheric dynamics and the evolution of climate. It is also important for the analysis and management of water resources and for the management of touristic activities in mountain areas. Recently, webcam images collected at daily or even hourly intervals have been used to observe snow-covered areas; properly processed, these images can be considered a very important environmental data source. Images captured by digital cameras become a useful tool at the local scale, providing images even when cloud cover makes observation by satellite sensors impossible. When suitably processed, these images can be used for scientific purposes, having a good resolution (at least 800x600 with 16 million colours) and a very good sampling frequency (hourly images taken throughout the year). Once stored in databases, these images therefore represent an important source of information for the study of recent climatic changes, for evaluating the available water resources, and for analysing the daily surface evolution of the snow cover. The Snow-noSnow software has been specifically designed to automatically detect the extent of snow cover from webcam images with very limited human intervention. The software was tested on images collected in the Alps (ARPAV webcam network) and in the Apennines at a pilot station equipped for this project by CNR-IIA. The results obtained with Snow-noSnow are comparable to those achieved by photo-interpretation and can be considered better than those obtained using the image segmentation routines implemented in commercial image processing software. Additionally, Snow-noSnow operates in a semi-automatic way and has a reduced processing time. The analysis of this kind of images can be a useful element to support the interpretation of remote sensing images, especially those provided by high spatial resolution sensors. Keywords: snow cover monitoring, digital images, software, Alps, Apennines.

  15. Webcam Delivery of the Camperdown Program for Adolescents Who Stutter: A Phase II Trial

    ERIC Educational Resources Information Center

    Carey, Brenda; O'Brian, Sue; Lowe, Robyn; Onslow, Mark

    2014-01-01

    Purpose: This Phase II clinical trial examined stuttering adolescents' responsiveness to the Webcam-delivered Camperdown Program. Method: Sixteen adolescents were treated by Webcam with no clinic attendance. Primary outcome was percentage of syllables stuttered (%SS). Secondary outcomes were number of sessions, weeks and hours to maintenance,…

  16. Webcam delivery of the Lidcombe program for early stuttering: a phase I clinical trial.

    PubMed

    O'Brian, Sue; Smith, Kylie; Onslow, Mark

    2014-06-01

    The Lidcombe Program is an operant treatment for early stuttering shown with meta-analysis to have a favorable odds ratio. However, many clients are unable to access the treatment because of distance and lifestyle factors. In this Phase I trial, we explored the potential efficacy, practicality, and viability of an Internet webcam Lidcombe Program service delivery model. Participants were 3 preschool children who stuttered and their parents, all of whom received assessment and treatment using webcam in their homes with no clinic attendance. At 6 months post-Stage 1 completion, all children were stuttering below 1.0% syllables stuttered. The webcam intervention was acceptable to the parents and appeared to be practical and viable, with only occasional audiovisual problems. At present, there is no reason to doubt that a webcam-delivered Lidcombe Program will be shown with clinical trials to have comparable efficacy with the clinic version. Webcam-delivered Lidcombe Program intervention is potentially efficacious, is practical and viable, and requires further exploration with comparative clinical trials and a qualitative study of parent and caregiver experiences.

  17. Beyond detection: nuclear physics with a webcam in an educational setting

    NASA Astrophysics Data System (ADS)

    Pallone, Arthur

    2015-03-01

    Nuclear physics affects our daily lives in fields as diverse as medicine and art. I believe three obstacles - limited time, lack of subject familiarity and thus comfort on the part of educators, and equipment expense - must be overcome to produce a nuclear-educated populace. Educators regularly use webcams to actively engage students in scientific discovery, as evidenced by a literature search for the term webcam paired with topics such as astronomy, biology, and physics. Inspired by YouTube videos that demonstrate alpha particle detection by modified webcams, I searched for examples that go beyond simple detection and found only one education-oriented result: the determination of the in-air range of alphas using a modified CCD camera. Custom-built, radiation-hardened CMOS detectors exist in high energy physics and for soft x-ray detection. Commercial CMOS cameras are used for direct imaging in electron microscopy. I demonstrate charged-particle spectrometry with a slightly modified CMOS-based webcam. When used with inexpensive sources of radiation and free software, the webcam charged-particle spectrometer presents educators with a simple, low-cost technique to include nuclear physics in science education.

  18. Developing a reading concentration monitoring system by applying an artificial bee colony algorithm to e-books in an intelligent classroom.

    PubMed

    Hsu, Chia-Cheng; Chen, Hsin-Chin; Su, Yen-Ning; Huang, Kuo-Kuang; Huang, Yueh-Min

    2012-10-22

    A growing number of educational studies apply sensors to improve student learning in real classroom settings. However, how can sensors be integrated into classrooms to help instructors find out students' reading concentration rates and thus better increase learning effectiveness? The aim of the current study was to develop a reading concentration monitoring system for use with e-books in an intelligent classroom and to help instructors find out the students' reading concentration rates. The proposed system uses three types of sensor technologies, namely a webcam, heartbeat sensor, and blood oxygen sensor to detect the learning behaviors of students by capturing various physiological signals. An artificial bee colony (ABC) optimization approach is applied to the data gathered from these sensors to help instructors understand their students' reading concentration rates in a classroom learning environment. The results show that the use of the ABC algorithm in the proposed system can effectively obtain near-optimal solutions. The system has a user-friendly graphical interface, making it easy for instructors to clearly understand the reading status of their students.
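
    The abstract gives no implementation detail, so the following is only a generic sketch of the artificial bee colony (ABC) optimizer it refers to (employed, onlooker and scout phases with greedy selection), applied to a toy objective; it is not the concentration-rate model described in the paper.

    import numpy as np

    def abc_minimize(objective, dim, bounds, n_food=20, limit=30, iters=200, seed=0):
        """Minimal artificial bee colony sketch for minimizing `objective`."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        foods = rng.uniform(lo, hi, size=(n_food, dim))      # candidate solutions
        fitness = np.array([objective(x) for x in foods])
        trials = np.zeros(n_food, dtype=int)
        best, best_f = foods[np.argmin(fitness)].copy(), fitness.min()

        def try_neighbor(i):
            nonlocal best, best_f
            k = rng.integers(n_food - 1)
            k = k if k < i else k + 1                        # partner index != i
            j = rng.integers(dim)                            # perturb one dimension
            cand = foods[i].copy()
            cand[j] = np.clip(cand[j] + rng.uniform(-1, 1) * (foods[i, j] - foods[k, j]), lo, hi)
            f = objective(cand)
            if f < fitness[i]:                               # greedy selection
                foods[i], fitness[i], trials[i] = cand, f, 0
                if f < best_f:
                    best, best_f = cand.copy(), f
            else:
                trials[i] += 1

        for _ in range(iters):
            for i in range(n_food):                          # employed bee phase
                try_neighbor(i)
            w = 1.0 / (1.0 + fitness - fitness.min())        # better sources attract more onlookers
            for i in rng.choice(n_food, size=n_food, p=w / w.sum()):   # onlooker phase
                try_neighbor(i)
            worst = np.argmax(trials)                        # scout phase: abandon a stagnant source
            if trials[worst] > limit:
                foods[worst] = rng.uniform(lo, hi, size=dim)
                fitness[worst], trials[worst] = objective(foods[worst]), 0
        return best, best_f

    # Toy usage: minimize the 3-dimensional sphere function.
    x_opt, f_opt = abc_minimize(lambda x: float(np.sum(x ** 2)), dim=3, bounds=(-5.0, 5.0))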

  19. Developing a Reading Concentration Monitoring System by Applying an Artificial Bee Colony Algorithm to E-Books in an Intelligent Classroom

    PubMed Central

    Hsu, Chia-Cheng; Chen, Hsin-Chin; Su, Yen-Ning; Huang, Kuo-Kuang; Huang, Yueh-Min

    2012-01-01

    A growing number of educational studies apply sensors to improve student learning in real classroom settings. However, how can sensors be integrated into classrooms to help instructors find out students' reading concentration rates and thus better increase learning effectiveness? The aim of the current study was to develop a reading concentration monitoring system for use with e-books in an intelligent classroom and to help instructors find out the students' reading concentration rates. The proposed system uses three types of sensor technologies, namely a webcam, heartbeat sensor, and blood oxygen sensor to detect the learning behaviors of students by capturing various physiological signals. An artificial bee colony (ABC) optimization approach is applied to the data gathered from these sensors to help instructors understand their students' reading concentration rates in a classroom learning environment. The results show that the use of the ABC algorithm in the proposed system can effectively obtain near-optimal solutions. The system has a user-friendly graphical interface, making it easy for instructors to clearly understand the reading status of their students. PMID:23202042

  20. Webcam Delivery of the Camperdown Program for Adolescents Who Stutter: A Phase I Trial

    ERIC Educational Resources Information Center

    Carey, Brenda; O'Brian, Sue; Onslow, Mark; Packman, Ann; Menzies, Ross

    2012-01-01

    Purpose: This Phase I clinical trial explored the viability of webcam Internet delivery of the Camperdown Program for adolescents who stutter. Method and Procedure: Participants were 3 adolescents ages 13, 15, and 16 years, with moderate-severe stuttering. Each was treated with the Camperdown Program delivered by webcam with no clinic attendance.…

  1. Evaluating Differences in Landscape Interpretation between Webcam and Field-Based Experiences

    ERIC Educational Resources Information Center

    Kolivras, Korine N.; Luebbering, Candice R.; Resler, Lynn M.

    2012-01-01

    Field trips have become less common due to issues including budget constraints and large class sizes. Research suggests that virtual field trips can substitute for field visits, but the role of webcams has not been evaluated. To investigate the potential for webcams to substitute for field trips, participants viewed urban and physical landscapes…

  2. Lidcombe Program Webcam Treatment for Early Stuttering: A Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Bridgman, Kate; Onslow, Mark; O'Brian, Susan; Jones, Mark; Block, Susan

    2016-01-01

    Purpose: Webcam treatment is potentially useful for health care in cases of early stuttering in which clients are isolated from specialized treatment services for geographic and other reasons. The purpose of the present trial was to compare outcomes of clinic and webcam deliveries of the Lidcombe Program treatment (Packman et al., 2015) for early…

  3. Mobile robot self-localization system using single webcam distance measurement technology in indoor environments.

    PubMed

    Li, I-Hsum; Chen, Ming-Chang; Wang, Wei-Yen; Su, Shun-Feng; Lai, To-Wen

    2014-01-27

    A single-webcam distance measurement technique for indoor robot localization is proposed in this paper. The proposed localization technique uses webcams that are available in an existing surveillance environment. The developed image-based distance measurement system (IBDMS) and parallel lines distance measurement system (PLDMS) have two merits. Firstly, only one webcam is required for estimating the distance. Secondly, the set-up of IBDMS and PLDMS is easy, in that only one rectangular pattern of known dimensions is needed, i.e., a ground tile. Some common and simple image processing techniques, i.e., background subtraction, are used to capture the robot in real time. Thus, for the purposes of indoor robot localization, the proposed method does not need expensive high-resolution webcams or complicated pattern recognition methods, but just a few simple estimation formulas. The experimental results show that the proposed robot localization method is reliable and effective in an indoor environment.
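
    A minimal sketch of the background subtraction step named above (locating the robot's image position in a webcam frame) is shown below; the distance-estimation formulas of IBDMS/PLDMS are not reproduced, and the file names and threshold are assumptions.

    import cv2

    background = cv2.imread("empty_scene.jpg", cv2.IMREAD_GRAYSCALE)  # scene without the robot
    frame = cv2.imread("current_frame.jpg", cv2.IMREAD_GRAYSCALE)

    diff = cv2.absdiff(frame, background)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    mask = cv2.medianBlur(mask, 5)                    # suppress isolated noisy pixels

    # Centroid of the largest changed region = pixel position of the robot,
    # which would then feed the distance-estimation formulas.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    robot = max(contours, key=cv2.contourArea)
    m = cv2.moments(robot)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    print("robot pixel position:", (cx, cy))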

  4. Mobile Robot Self-Localization System Using Single Webcam Distance Measurement Technology in Indoor Environments

    PubMed Central

    Li, I-Hsum; Chen, Ming-Chang; Wang, Wei-Yen; Su, Shun-Feng; Lai, To-Wen

    2014-01-01

    A single-webcam distance measurement technique for indoor robot localization is proposed in this paper. The proposed localization technique uses webcams that are available in an existing surveillance environment. The developed image-based distance measurement system (IBDMS) and parallel lines distance measurement system (PLDMS) have two merits. Firstly, only one webcam is required for estimating the distance. Secondly, the set-up of IBDMS and PLDMS is easy, in that only one rectangular pattern of known dimensions is needed, i.e., a ground tile. Some common and simple image processing techniques, i.e., background subtraction, are used to capture the robot in real time. Thus, for the purposes of indoor robot localization, the proposed method does not need expensive high-resolution webcams or complicated pattern recognition methods, but just a few simple estimation formulas. The experimental results show that the proposed robot localization method is reliable and effective in an indoor environment. PMID:24473282

  5. The Input-Interface of Webcam Applied in 3D Virtual Reality Systems

    ERIC Educational Resources Information Center

    Sun, Huey-Min; Cheng, Wen-Lin

    2009-01-01

    Our research explores a virtual reality application based on a Web camera (Webcam) input interface. The interface can replace the mouse, inferring the user's intended direction by the method of frame differencing. We divide each Webcam frame into a nine-cell grid and make use of background registration to compute the moving object. In order to…
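
    A rough sketch of the grid-based frame-difference idea (divide each frame into a 3x3 grid and report the cell with the most motion as the intended direction) follows; the change threshold is an assumption and this is not the authors' implementation.

    import cv2
    import numpy as np

    def dominant_cell(prev_gray, curr_gray, thresh=25):
        """Return (row, col) of the 3x3 grid cell containing the most motion."""
        diff = (cv2.absdiff(curr_gray, prev_gray) > thresh).astype(np.uint8)
        h, w = diff.shape
        counts = np.array([[diff[r*h//3:(r+1)*h//3, c*w//3:(c+1)*w//3].sum()
                            for c in range(3)] for r in range(3)])
        return np.unravel_index(np.argmax(counts), counts.shape)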

  6. Lidcombe Program Webcam Treatment for Early Stuttering: A Randomized Controlled Trial.

    PubMed

    Bridgman, Kate; Onslow, Mark; O'Brian, Susan; Jones, Mark; Block, Susan

    2016-10-01

    Webcam treatment is potentially useful for health care in cases of early stuttering in which clients are isolated from specialized treatment services for geographic and other reasons. The purpose of the present trial was to compare outcomes of clinic and webcam deliveries of the Lidcombe Program treatment (Packman et al., 2015) for early stuttering. The design was a parallel, open plan, noninferiority randomized controlled trial of the standard Lidcombe Program treatment and the experimental webcam Lidcombe Program treatment. Participants were 49 children aged 3 years 0 months to 5 years 11 months at the start of treatment. Primary outcomes were the percentage of syllables stuttered at 9 months postrandomization and the number of consultations to complete Stage 1 of the Lidcombe Program. There was insufficient evidence of a posttreatment difference of the percentage of syllables stuttered between the standard and webcam Lidcombe Program treatments. There was insufficient evidence of a difference between the groups for typical stuttering severity measured by parents or the reported clinical relationship with the treating speech-language pathologist. This trial confirmed the viability of the webcam Lidcombe Program intervention. It appears to be as efficacious and economically viable as the standard, clinic Lidcombe Program treatment.

  7. Low-cost laser speckle contrast imaging of blood flow using a webcam.

    PubMed

    Richards, Lisa M; Kazmi, S M Shams; Davis, Janel L; Olin, Katherine E; Dunn, Andrew K

    2013-01-01

    Laser speckle contrast imaging has become a widely used tool for dynamic imaging of blood flow, both in animal models and in the clinic. Typically, laser speckle contrast imaging is performed using scientific-grade instrumentation. However, due to recent advances in camera technology, these expensive components may not be necessary to produce accurate images. In this paper, we demonstrate that a consumer-grade webcam can be used to visualize changes in flow, both in a microfluidic flow phantom and in vivo in a mouse model. A two-camera setup was used to simultaneously image with a high performance monochrome CCD camera and the webcam for direct comparison. The webcam was also tested with inexpensive aspheric lenses and a laser pointer for a complete low-cost, compact setup ($90, 5.6 cm length, 25 g). The CCD and webcam showed excellent agreement with the two-camera setup, and the inexpensive setup was used to image dynamic blood flow changes before and after a targeted cerebral occlusion.
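
    The quantity behind laser speckle contrast imaging, the local contrast K = standard deviation divided by mean computed in a small sliding window, can be sketched as below; the 7x7 window is the conventional choice and the routine is generic rather than the authors' code.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast(raw_frame, window=7):
        """Local speckle contrast K = std/mean in a sliding window.

        Lower K means more blurring of the speckle pattern, i.e. faster flow.
        """
        img = raw_frame.astype(np.float64)
        mean = uniform_filter(img, size=window)
        mean_sq = uniform_filter(img ** 2, size=window)
        var = np.clip(mean_sq - mean ** 2, 0, None)    # guard against negative rounding
        return np.sqrt(var) / np.maximum(mean, 1e-9)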

  8. Low-cost laser speckle contrast imaging of blood flow using a webcam

    PubMed Central

    Richards, Lisa M.; Kazmi, S. M. Shams; Davis, Janel L.; Olin, Katherine E.; Dunn, Andrew K.

    2013-01-01

    Laser speckle contrast imaging has become a widely used tool for dynamic imaging of blood flow, both in animal models and in the clinic. Typically, laser speckle contrast imaging is performed using scientific-grade instrumentation. However, due to recent advances in camera technology, these expensive components may not be necessary to produce accurate images. In this paper, we demonstrate that a consumer-grade webcam can be used to visualize changes in flow, both in a microfluidic flow phantom and in vivo in a mouse model. A two-camera setup was used to simultaneously image with a high performance monochrome CCD camera and the webcam for direct comparison. The webcam was also tested with inexpensive aspheric lenses and a laser pointer for a complete low-cost, compact setup ($90, 5.6 cm length, 25 g). The CCD and webcam showed excellent agreement with the two-camera setup, and the inexpensive setup was used to image dynamic blood flow changes before and after a targeted cerebral occlusion. PMID:24156082

  9. Improving the Sensitivity and Functionality of Mobile Webcam-Based Fluorescence Detectors for Point-of-Care Diagnostics in Global Health.

    PubMed

    Rasooly, Reuven; Bruck, Hugh Alan; Balsam, Joshua; Prickril, Ben; Ossandon, Miguel; Rasooly, Avraham

    2016-05-17

    Resource-poor countries and regions require effective, low-cost diagnostic devices for accurate identification and diagnosis of health conditions. Optical detection technologies used for many types of biological and clinical analysis can play a significant role in addressing this need, but must be sufficiently affordable and portable for use in global health settings. Most current clinical optical imaging technologies are accurate and sensitive, but also expensive and difficult to adapt for use in these settings. These challenges can be mitigated by taking advantage of affordable consumer electronics mobile devices such as webcams, mobile phones, charge-coupled device (CCD) cameras, lasers, and LEDs. Low-cost, portable multi-wavelength fluorescence plate readers have been developed for many applications including detection of microbial toxins such as C. botulinum A neurotoxin, Shiga toxin, and S. aureus enterotoxin B (SEB), and flow cytometry has been used to detect very low cell concentrations. However, the relatively low sensitivities of these devices limit their clinical utility. We have developed several approaches, presented here for webcam-based fluorescence detectors, to improve their sensitivity: (1) image stacking to improve signal-to-noise ratios; (2) lasers to enable fluorescence excitation for flow cytometry; and (3) streak imaging to capture the trajectory of a single cell, enabling imaging sensors with high noise levels to detect rare cell events. These approaches can also help to overcome some of the limitations of other low-cost optical detection technologies such as CCD or phone-based detectors (like high noise levels or low sensitivities), and provide for their use in low-cost medical diagnostics in resource-poor settings.
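
    The first of the three approaches, image stacking, is easy to illustrate: averaging N registered frames of a static fluorescent target reduces uncorrelated noise roughly as the square root of N. The sketch below assumes the frames are already aligned and is not the authors' code.

    import cv2
    import numpy as np

    def stack_frames(video_path, n_frames=100):
        """Average up to n_frames webcam frames to improve the signal-to-noise ratio."""
        cap = cv2.VideoCapture(video_path)
        acc, used = None, 0
        while used < n_frames:
            ok, frame = cap.read()
            if not ok:
                break
            f = frame.astype(np.float64)
            acc = f if acc is None else acc + f
            used += 1
        cap.release()
        return acc / max(used, 1)      # stacked image; noise drops roughly as sqrt(used)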

  10. Improving the Sensitivity and Functionality of Mobile Webcam-Based Fluorescence Detectors for Point-of-Care Diagnostics in Global Health

    PubMed Central

    Rasooly, Reuven; Bruck, Hugh Alan; Balsam, Joshua; Prickril, Ben; Ossandon, Miguel; Rasooly, Avraham

    2016-01-01

    Resource-poor countries and regions require effective, low-cost diagnostic devices for accurate identification and diagnosis of health conditions. Optical detection technologies used for many types of biological and clinical analysis can play a significant role in addressing this need, but must be sufficiently affordable and portable for use in global health settings. Most current clinical optical imaging technologies are accurate and sensitive, but also expensive and difficult to adapt for use in these settings. These challenges can be mitigated by taking advantage of affordable consumer electronics mobile devices such as webcams, mobile phones, charge-coupled device (CCD) cameras, lasers, and LEDs. Low-cost, portable multi-wavelength fluorescence plate readers have been developed for many applications including detection of microbial toxins such as C. botulinum A neurotoxin, Shiga toxin, and S. aureus enterotoxin B (SEB), and flow cytometry has been used to detect very low cell concentrations. However, the relatively low sensitivities of these devices limit their clinical utility. We have developed several approaches, presented here for webcam-based fluorescence detectors, to improve their sensitivity: (1) image stacking to improve signal-to-noise ratios; (2) lasers to enable fluorescence excitation for flow cytometry; and (3) streak imaging to capture the trajectory of a single cell, enabling imaging sensors with high noise levels to detect rare cell events. These approaches can also help to overcome some of the limitations of other low-cost optical detection technologies such as CCD or phone-based detectors (like high noise levels or low sensitivities), and provide for their use in low-cost medical diagnostics in resource-poor settings. PMID:27196933

  11. Laparoscopic skills training using a webcam trainer.

    PubMed

    Chung, Steve Y; Landsittel, Douglas; Chon, Chris H; Ng, Christopher S; Fuchs, Gerhard J

    2005-01-01

    Many sophisticated and expensive trainers have been developed to assist surgeons in learning basic laparoscopic skills. We developed an inexpensive trainer and evaluated its effectiveness. The webcam laparoscopic training device is composed of a webcam, cardboard box, desk lamp and home computer. This homemade trainer was evaluated against 2 commercially available systems, namely the video Pelvitrainer (Karl Storz Endoscopy, Culver City, California) and the dual mirror Simuview (Simulab Corp., Seattle, Washington). The Pelvitrainer consists of a fiberglass box, single lens optic laparoscope, fiberoptic light source, endoscopic camera and video monitor, while the Simuview trainer uses 2 offset, facing mirrors and an uncovered plastic box. A total of 42 participants without prior laparoscopic training were enrolled in the study and asked to execute 2 tasks, that is peg transfer and pattern cutting. Participants were randomly assigned to 6 groups with each group representing a different permutation of trainers to be used. The time required for participants to complete each task was recorded and differences in performance were calculated. Paired t tests, the Wilcoxon signed rank test and ANOVA were performed to analyze the statistical difference in performance times for all conditions. Statistical analyses of the 2 tasks showed no significant difference for the video and webcam trainers. However, the mirror trainer gave significantly higher outcome values for tasks 1 and 2 compared to the video (p = 0.01 and <0.01) and webcam (p = 0.04 and <0.01, respectively) methods. ANOVA indicated no overall difference for tasks 1 and 2 across the orderings (p = 0.36 and 0.99, respectively). However, by attempt 3 the time required to complete the skill tests decreased significantly for all 3 trainers (each p <0.01). Our homemade webcam system is comparable in function to the more elaborate video trainer but superior to the dual mirror trainer. For novice laparoscopists we believe that the webcam system is an inexpensive and effective laparoscopic training device. Furthermore, the webcam system also allows instant recording and review of techniques.

  12. Use of ambient light in remote photoplethysmographic systems: comparison between a high-performance camera and a low-cost webcam.

    PubMed

    Sun, Yu; Papin, Charlotte; Azorin-Peris, Vicente; Kalawsky, Roy; Greenwald, Stephen; Hu, Sijung

    2012-03-01

    Imaging photoplethysmography (PPG) is able to capture useful physiological data remotely from a wide range of anatomical locations. Recent imaging PPG studies have concentrated on two broad research directions involving either high-performance cameras or webcam-based systems. However, little has been reported about the difference between these two techniques, particularly in terms of their performance under illumination with ambient light. We explore these two imaging PPG approaches through the simultaneous measurement of the cardiac pulse acquired from the face of 10 male subjects and the spectral characteristics of ambient light. Measurements are made before and after a period of cycling exercise. The physiological pulse waves extracted from both imaging PPG systems using the smoothed pseudo-Wigner-Ville distribution yield functional characteristics comparable to those acquired using gold standard contact PPG sensors. The influence of ambient light intensity on the physiological information is considered, where results reveal an independent relationship between the ambient light intensity and the normalized plethysmographic signals. This provides further support for imaging PPG as a means for practical noncontact physiological assessment with clear applications in several domains, including telemedicine and homecare.
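
    A bare-bones sketch of one common imaging-PPG extraction (mean green-channel intensity over a facial region of interest, detrended and band-pass filtered around heart-rate frequencies) is given below; the ROI, filter band and frame rate are assumptions, and the time-frequency analysis used in the paper (smoothed pseudo-Wigner-Ville distribution) is not reproduced.

    import cv2
    import numpy as np
    from scipy.signal import butter, filtfilt

    def ppg_signal(video_path, roi, fps=30.0, band=(0.7, 3.0)):
        """Extract a photoplethysmographic waveform from a face video.

        roi = (x, y, w, h) in pixels over the forehead or cheek (assumed);
        band is the pass band in Hz (roughly 42-180 beats per minute).
        """
        x, y, w, h = roi
        cap, trace = cv2.VideoCapture(video_path), []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            trace.append(frame[y:y+h, x:x+w, 1].mean())    # mean green channel in the ROI
        cap.release()
        trace = np.asarray(trace) - np.mean(trace)         # remove the DC level
        b, a = butter(3, [band[0] / (fps / 2), band[1] / (fps / 2)], btype="band")
        return filtfilt(b, a, trace)                       # band-limited pulse waveform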

  13. Use of ambient light in remote photoplethysmographic systems: comparison between a high-performance camera and a low-cost webcam

    NASA Astrophysics Data System (ADS)

    Sun, Yu; Papin, Charlotte; Azorin-Peris, Vicente; Kalawsky, Roy; Greenwald, Stephen; Hu, Sijung

    2012-03-01

    Imaging photoplethysmography (PPG) is able to capture useful physiological data remotely from a wide range of anatomical locations. Recent imaging PPG studies have concentrated on two broad research directions involving either high-performance cameras or webcam-based systems. However, little has been reported about the difference between these two techniques, particularly in terms of their performance under illumination with ambient light. We explore these two imaging PPG approaches through the simultaneous measurement of the cardiac pulse acquired from the face of 10 male subjects and the spectral characteristics of ambient light. Measurements are made before and after a period of cycling exercise. The physiological pulse waves extracted from both imaging PPG systems using the smoothed pseudo-Wigner-Ville distribution yield functional characteristics comparable to those acquired using gold standard contact PPG sensors. The influence of ambient light intensity on the physiological information is considered, where results reveal an independent relationship between the ambient light intensity and the normalized plethysmographic signals. This provides further support for imaging PPG as a means for practical noncontact physiological assessment with clear applications in several domains, including telemedicine and homecare.

  14. Characterizing volcanic activity: Application of freely-available webcams

    NASA Astrophysics Data System (ADS)

    Dehn, J.; Harrild, M.; Webley, P. W.

    2017-12-01

    In recent years, freely-available web-based cameras, or webcams, have become more readily available, allowing an increased level of monitoring at active volcanoes across the globe. While these cameras have been extensively used as qualitative tools, they provide a unique dataset for quantitative analyses of the changing behavior of a particular volcano within the camera's field of view. We focus on the multitude of these freely-available webcams and present a new algorithm to detect changes in volcanic activity using nighttime webcam data. Our approach uses a quick, efficient, and fully automated algorithm to identify changes in webcam data in near real-time, including techniques such as edge detection, Gaussian mixture models, and temporal/spatial statistical tests, which are applied to each target image. Because the image metadata (exposure, gain settings, aperture, focal length, etc.) are often unknown, we developed our algorithm to identify the quantity of volcanically incandescent pixels, as well as the number of specific algorithm tests needed to detect thermal activity, instead of directly correlating webcam brightness with eruption temperatures. We compared our algorithm results to a manual analysis of webcam data for several volcanoes and determined a false detection rate of less than 3% for the automated approach. In our presentation, we describe the different tests integrated into our algorithm, lessons learned, and how we applied our method to several volcanoes across the North Pacific during its development and implementation. We will finish with a discussion on the global applicability of our approach and how to build a 24/7, 365-days-a-year tool that can be used as an additional data source for real-time analysis of volcanic activity.
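
    The simplest quantitative signal mentioned above, the number of incandescent pixels in a nighttime frame, can be sketched as follows; the brightness and red-dominance thresholds are illustrative assumptions, and the full algorithm (edge detection, Gaussian mixture models, statistical tests) is not reproduced.

    import numpy as np

    def incandescent_pixel_count(frame_bgr, value_thresh=200, red_margin=30):
        """Count pixels that look like volcanic incandescence in a night image.

        Heuristic: bright pixels whose red channel clearly dominates blue.
        Thresholds are assumptions and would be tuned per webcam.
        """
        frame = frame_bgr.astype(np.int16)                 # avoid uint8 wrap-around
        b, r = frame[:, :, 0], frame[:, :, 2]              # OpenCV-style BGR ordering assumed
        glow = (r > value_thresh) & ((r - b) > red_margin)
        return int(np.count_nonzero(glow))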

  15. Two terminal micropower radar sensor

    DOEpatents

    McEwan, Thomas E.

    1995-01-01

    A simple, low power ultra-wideband radar motion sensor/switch configuration connects a power source and load to ground. The switch is connected to and controlled by the signal output of a radar motion sensor. The power input of the motion sensor is connected to the load through a diode which conducts power to the motion sensor when the switch is open. A storage capacitor or rechargeable battery is connected to the power input of the motion sensor. The storage capacitor or battery is charged when the switch is open and powers the motion sensor when the switch is closed. The motion sensor and switch are connected between the same two terminals between the source/load and ground.

  16. Two terminal micropower radar sensor

    DOEpatents

    McEwan, T.E.

    1995-11-07

    A simple, low power ultra-wideband radar motion sensor/switch configuration connects a power source and load to ground. The switch is connected to and controlled by the signal output of a radar motion sensor. The power input of the motion sensor is connected to the load through a diode which conducts power to the motion sensor when the switch is open. A storage capacitor or rechargeable battery is connected to the power input of the motion sensor. The storage capacitor or battery is charged when the switch is open and powers the motion sensor when the switch is closed. The motion sensor and switch are connected between the same two terminals between the source/load and ground. 3 figs.

  17. Unobtrusive monitoring of heart rate using a cost-effective speckle-based SI-POF remote sensor

    NASA Astrophysics Data System (ADS)

    Pinzón, P. J.; Montero, D. S.; Tapetado, A.; Vázquez, C.

    2017-03-01

    A novel speckle-based sensing technique for cost-effective heart-rate monitoring is demonstrated. This technique uses a low-cost webcam to detect periodic changes in the spatial distribution of energy in the speckle pattern at the output of a Step-Index Polymer Optical Fiber (SI-POF) lead. The scheme operates in a reflective configuration, thus allowing a centralized interrogation unit. The prototype has been integrated into a mattress and its functionality has been tested with 5 different patients lying on the mattress in different positions without direct contact with the fiber sensing lead.

  18. Young people's views on the potential use of telemedicine consultations for sexual health: results of a national survey.

    PubMed

    Garrett, Cameryn C; Hocking, Jane; Chen, Marcus Y; Fairley, Christopher K; Kirkman, Maggie

    2011-10-25

    Young people are disproportionately affected by sexually transmissible infections in Australia but face barriers to accessing sexual health services, including concerns over confidentiality and, for some, geographic remoteness. A possible innovation to increase access to services is the use of telemedicine. Young people's (aged 16-24) pre-use views on telephone and webcam consultations for sexual health were investigated through a widely-advertised national online survey in Australia. Descriptive statistics were used to describe the study sample and chi-square, Mann-Whitney U test, or t-tests were used to assess associations. Multinomial logistic regression was used to explore the association between the three-level outcome variable (first preference in person, telephone or webcam, and demographic and behavioural variables); odds ratios and 95%CI were calculated using in person as the reference category. Free text responses were analysed thematically. A total of 662 people completed the questionnaire. Overall, 85% of the sample indicated they would be willing to have an in-person consultation with a doctor, 63% a telephone consultation, and 29% a webcam consultation. Men, respondents with same-sex partners, and respondents reporting three or more partners in the previous year were more willing to have a webcam consultation. Imagining they lived 20 minutes from a doctor, 83% of respondents reported that their first preference would be an in-person consultation with a doctor; if imagining they lived two hours from a doctor, 51% preferred a telephone consultation. The main objections to webcam consultations in the free text responses were privacy and security concerns relating to the possibility of the webcam consultation being recorded, saved, and potentially searchable and retrievable online. This study is the first we are aware of that seeks the views of young people on telemedicine and access to sexual health services. Although only 29% of respondents were willing to have a webcam consultation, such a service may benefit youth who may not otherwise access a sexual health service. The acceptability of webcam consultations may be increased if medical clinics provide clear and accessible privacy policies ensuring that consultations will not be recorded or saved.

  19. Young people's views on the potential use of telemedicine consultations for sexual health: results of a national survey

    PubMed Central

    2011-01-01

    Background Young people are disproportionately affected by sexually transmissible infections in Australia but face barriers to accessing sexual health services, including concerns over confidentiality and, for some, geographic remoteness. A possible innovation to increase access to services is the use of telemedicine. Methods Young people's (aged 16-24) pre-use views on telephone and webcam consultations for sexual health were investigated through a widely-advertised national online survey in Australia. Descriptive statistics were used to describe the study sample and chi-square, Mann-Whitney U test, or t-tests were used to assess associations. Multinomial logistic regression was used to explore the association between the three-level outcome variable (first preference in person, telephone or webcam, and demographic and behavioural variables); odds ratios and 95%CI were calculated using in person as the reference category. Free text responses were analysed thematically. Results A total of 662 people completed the questionnaire. Overall, 85% of the sample indicated they would be willing to have an in-person consultation with a doctor, 63% a telephone consultation, and 29% a webcam consultation. Men, respondents with same-sex partners, and respondents reporting three or more partners in the previous year were more willing to have a webcam consultation. Imagining they lived 20 minutes from a doctor, 83% of respondents reported that their first preference would be an in-person consultation with a doctor; if imagining they lived two hours from a doctor, 51% preferred a telephone consultation. The main objections to webcam consultations in the free text responses were privacy and security concerns relating to the possibility of the webcam consultation being recorded, saved, and potentially searchable and retrievable online. Conclusions This study is the first we are aware of that seeks the views of young people on telemedicine and access to sexual health services. Although only 29% of respondents were willing to have a webcam consultation, such a service may benefit youth who may not otherwise access a sexual health service. The acceptability of webcam consultations may be increased if medical clinics provide clear and accessible privacy policies ensuring that consultations will not be recorded or saved. PMID:22026640

  20. A Simple Method Based on the Application of a CCD Camera as a Sensor to Detect Low Concentrations of Barium Sulfate in Suspension

    PubMed Central

    de Sena, Rodrigo Caciano; Soares, Matheus; Pereira, Maria Luiza Oliveira; da Silva, Rogério Cruz Domingues; do Rosário, Francisca Ferreira; da Silva, Joao Francisco Cajaiba

    2011-01-01

    The development of a simple, rapid and low cost method based on video image analysis and aimed at the detection of low concentrations of precipitated barium sulfate is described. The proposed system is basically composed of a webcam with a CCD sensor and a conventional dichroic lamp. For this purpose, software for processing and analyzing the digital images based on the RGB (Red, Green and Blue) color system was developed. The proposed method showed very good repeatability and linearity and also presented higher sensitivity than the standard turbidimetric method. The developed method is presented as a simple alternative for future applications in the study of precipitations of inorganic salts and also for detecting the crystallization of organic compounds. PMID:22346607
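
    A minimal sketch of the measurement principle (average RGB intensity of the suspension region in each webcam image, calibrated against known concentrations by a linear fit) is shown below; the ROI, the choice of the red channel and the calibration data are placeholders, not the authors' values.

    import cv2
    import numpy as np

    def mean_rgb(image_path, roi):
        """Mean (R, G, B) intensity inside roi = (x, y, w, h) of a webcam image."""
        x, y, w, h = roi
        patch = cv2.imread(image_path)[y:y+h, x:x+w]
        return patch[:, :, 2].mean(), patch[:, :, 1].mean(), patch[:, :, 0].mean()

    # Hypothetical calibration: scattered-light intensity vs. BaSO4 concentration.
    concentrations = np.array([0.0, 2.0, 4.0, 8.0, 16.0])          # mg/L, assumed
    intensity = np.array([mean_rgb(f"standard_{c:g}.jpg", (100, 100, 50, 50))[0]
                          for c in concentrations])
    slope, intercept = np.polyfit(concentrations, intensity, 1)    # linear calibration curve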

  1. C-arm rotation encoding with accelerometers.

    PubMed

    Grzeda, Victor; Fichtinger, Gabor

    2010-07-01

    Fluoroscopic C-arms are being incorporated in computer-assisted interventions in increasing numbers. For these applications to work, the relative imaging poses must be known. To find the pose, tracking methods such as optical cameras, electromagnetic trackers, and radiographic fiducials have been used, all hampered by significant shortcomings. We propose to recover the rotational pose of the C-arm using the angle-sensing ability of accelerometers, by exploiting the capability of the accelerometer to measure tilt angles. When affixed to a C-arm, the accelerometer tracks the C-arm pose during rotations of the C-arm. To demonstrate this concept, a C-arm analogue was constructed with a webcam device affixed to the C-arm model to mimic X-ray imaging. Then, by measuring the offset between the accelerometer angle readings and the webcam pose angle, an angle correction equation (ACE) was created to properly track the C-arm rotational pose. Several tests were performed on the webcam C-arm model using the ACEs to track the primary and secondary angle rotations of the model. We evaluated the capability of linear and polynomial ACEs to track the webcam C-arm pose angle for different rotational scenarios. The test results showed that the accelerometer could track the pose of the webcam C-arm model with an accuracy of less than 1.0 degree. The accelerometer was successful in sensing the C-arm's rotation with clinically adequate accuracy in the C-arm webcam model.
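
    The angle correction equation (ACE) amounts to fitting the relationship between the accelerometer reading and the reference pose angle; a minimal sketch of a linear and a polynomial fit is given below, with made-up numbers standing in for the webcam reference measurements.

    import numpy as np

    # Paired measurements (assumed): accelerometer tilt angle vs. reference pose
    # angle from the webcam tracker, both in degrees.
    accel_angle = np.array([0.0, 15.2, 29.8, 45.5, 60.1, 75.3, 90.4])
    ref_angle = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])

    ace_linear = np.polyfit(accel_angle, ref_angle, 1)   # linear ACE coefficients
    ace_poly = np.polyfit(accel_angle, ref_angle, 3)     # third-order polynomial ACE

    def corrected_pose(raw_angle, ace=ace_poly):
        """Apply a fitted ACE to a raw accelerometer reading."""
        return np.polyval(ace, raw_angle)

    print("corrected pose for a 50-degree raw reading:", corrected_pose(50.0))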

  2. Noncontact measurement of emotional and physiological changes in heart rate from a webcam.

    PubMed

    Madan, Christopher R; Harrison, Tyler; Mathewson, Kyle E

    2018-04-01

    Heart rate, measured in beats per minute, can be used as an index of an individual's physiological state. Each time the heart beats, blood is expelled and travels through the body. This blood flow can be detected in the face using a standard webcam that is able to pick up subtle changes in color that cannot be seen by the naked eye. Due to the light absorption spectrum of blood, we are able to detect differences in the amount of light absorbed by the blood traveling just below the skin (i.e., photoplethysmography). By modulating emotional and physiological stress (that is, viewing arousing images and sitting versus standing, respectively) to elicit changes in heart rate, we explored the feasibility of using a webcam as a psychophysiological measurement of autonomic activity. We found a high level of agreement between established physiological measures (electrocardiogram and blood pulse oximetry) and heart rate estimates obtained from the webcam. We thus suggest webcams can be used as a noninvasive and readily available method for measuring psychophysiological changes, easily integrated into existing stimulus presentation software and hardware setups.
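
    Given a colour trace extracted from the face video (as in the imaging-PPG sketch earlier in this section), heart rate in beats per minute can be estimated from the dominant spectral peak; the frequency band is the usual physiological range and the code is a generic sketch, not the authors' pipeline.

    import numpy as np

    def heart_rate_bpm(pulse_signal, fps=30.0, band=(0.7, 3.0)):
        """Estimate heart rate as the dominant frequency of a pulse waveform.

        pulse_signal: 1-D band-limited colour trace from the face video;
        band: plausible heart-rate range in Hz (about 42-180 beats per minute).
        """
        n = len(pulse_signal)
        spectrum = np.abs(np.fft.rfft(pulse_signal * np.hanning(n)))
        freqs = np.fft.rfftfreq(n, d=1.0 / fps)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        peak_freq = freqs[in_band][np.argmax(spectrum[in_band])]
        return 60.0 * peak_freq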

  3. A neural-based remote eye gaze tracker under natural head motion.

    PubMed

    Torricelli, Diego; Conforto, Silvia; Schmid, Maurizio; D'Alessio, Tommaso

    2008-10-01

    A novel approach to view-based eye gaze tracking for human-computer interfaces (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low-cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and to strengthen the robustness to lighting conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed; rather, a simple commercial webcam working in the visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees, comparable with the vast majority of existing remote gaze trackers.

  4. Utility of optical facial feature and arm movement tracking systems to enable text communication in critically ill patients who cannot otherwise communicate.

    PubMed

    Muthuswamy, M B; Thomas, B N; Williams, D; Dingley, J

    2014-09-01

    Patients recovering from critical illness, especially those with critical-illness-related neuropathy, myopathy, or burns to the face, arms and hands, are often unable to communicate by writing, speech (due to tracheostomy) or lip reading. This may frustrate both patient and staff. Two low-cost movement tracking systems, based around a laptop webcam and a laser/optical gaming system sensor, were utilised as control inputs for on-screen text creation software, and both were evaluated as communication tools in volunteers. Two methods were used to control an on-screen cursor to create short sentences via an on-screen keyboard: (i) webcam-based facial feature tracking, and (ii) arm movement tracking by a laser/camera gaming sensor and modified software. Sixteen volunteers with simulated tracheostomy and bandaged arms, to simulate communication via gross movements of a burned limb, communicated 3 standard messages using each system (total 48 per system) in random sequence. Ten and 13 minor typographical errors occurred with the two systems respectively; however, all messages were comprehensible. Speed of sentence formation averaged 81 s (range 58-120) with the facial feature tracking system and 104 s (range 60-160) with the arm movement tracking system (P<0.001, 2-tailed independent-sample t-test). Both devices may be potentially useful communication aids for patients in general and burns critical care units who cannot communicate by conventional means, due to the nature of their injuries. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.

  5. Relationships between canopy greenness and CO2 dynamics of a Mediterranean deciduous forest assessed with webcam imagery and MODIS vegetation indices

    NASA Astrophysics Data System (ADS)

    Balzarolo, M.; Papale, D.; Richardson, A. D.

    2009-04-01

    Phenological observations of foliar development and senescence are needed to understand the relationship between canopy properties and seasonal productivity dynamics (e.g., carbon uptake) of terrestrial ecosystems. Traditional phenological ground observations, based on visual observation of different vegetation growth phases (from first leaf opening to first flowering, full bloom and senescence), are laborious and typically limited to just a few individual subjects. In contrast, remote sensing techniques offer the potential for assessing long-term variability in primary productivity at a global scale (Field et al., 1993). Recent studies have shown that biochemical and biophysical canopy properties can be measured with a quantifiable uncertainty that can be incorporated into land-biosphere models (Ustin et al., 2004a; Ollinger et al., 2008). Canopy greenness can be quantified using vegetation indices (VIs) such as the Normalized Difference Vegetation Index (NDVI; Rouse et al., 1974; Deering, 1978), but a disadvantage of this approach is that there are uncertainties associated with these indices (due to the spatial and temporal resolution of the data), and the interpretation of a specific VI value in the context of on-the-ground phenology is not clear. Improved ground-based datasets are needed to validate and improve remotely sensed phenological indices. Continuous monitoring of vegetation canopies with digital webcams (Richardson et al., 2007) may offer a direct link between phenological changes in canopy state and what is "seen" by satellite sensors. The general objective of this study is to analyze the relationship between biosphere-atmosphere CO2 exchange (measured by eddy covariance) and phenological canopy status, or greenness, of a Mediterranean deciduous broadleaf forest in central Italy (Roccarespampani, 42°24' N, 11°55' E). Canopy greenness is quantified using two different approaches: from digital webcam images, using indices derived from red, green and blue (RGB) color channel brightness (RGBi, after Richardson et al., 2007), and with VIs (e.g., NDVI, SR, MSR, GRDI, NCI, CI and SLAVI) derived from MODIS surface reflectance data (MOD09A1). Since MOD09A1 reflectance data represent the maximum surface reflectance of each band for a consecutive 8-day period, the webcam imagery and flux data, acquired with half-hourly temporal resolution, were time-averaged over the same 8-day periods. The RGBi-VI, RGBi-CO2 flux and MODIS-CO2 flux relationships were evaluated by linear regression using the classical least squares (LS) technique. Among all calculated vegetation indices, GRDI (Green Red Difference Index; Gitelson et al., 2002) and SLAVI (Specific Leaf Area Vegetation Index; Lymburner et al., 2000) showed the best linear fit with webcam RGBi greenness. SLAVI was also one of the vegetation indices best correlated with mean daily CO2 flux (R2=0.79). Finally, the relationship between RGBi and CO2 flux had an R2 of 0.67. In conclusion, both webcam and MODIS greenness indices offer potential for assessing seasonal variation in the productivity of terrestrial ecosystems. Future work will focus on reducing the uncertainties inherent in these approaches and on integrating field observations of phenology into this study.
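
    As an illustration of the RGB-based greenness indices referred to above (not the authors' code), the sketch below computes a green chromatic coordinate and a green-red difference index from mean channel values of a canopy region; the index definitions follow the common forms in the cited literature, and the daily channel means are hypothetical.

```python
import numpy as np

def greenness_indices(r, g, b):
    """Greenness indices from mean R, G, B digital numbers of a canopy ROI."""
    r, g, b = float(r), float(g), float(b)
    gcc = g / (r + g + b)        # green chromatic coordinate (after Richardson et al. 2007)
    grdi = (g - r) / (g + r)     # green-red difference index, commonly (G - R)/(G + R)
    return gcc, grdi

# Hypothetical daily channel means across spring green-up.
daily_rgb = [(95, 102, 80), (90, 118, 78), (85, 133, 74), (82, 141, 72)]
for r, g, b in daily_rgb:
    gcc, grdi = greenness_indices(r, g, b)
    print(f"GCC={gcc:.3f}  GRDI={grdi:.3f}")
```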

  6. Comparing near-earth and satellite remote sensing based phenophase estimates: an analysis using multiple webcams and MODIS (Invited)

    NASA Astrophysics Data System (ADS)

    Hufkens, K.; Richardson, A. D.; Migliavacca, M.; Frolking, S. E.; Braswell, B. H.; Milliman, T.; Friedl, M. A.

    2010-12-01

    In recent years several studies have used digital cameras and webcams to monitor green leaf phenology. Such "near-surface" remote sensing has been shown to be a cost-effective means of accurately capturing phenology. Specifically, it allows for accurate tracking of intra- and inter-annual phenological dynamics at high temporal frequency and over broad spatial scales compared to visual observations or tower-based fAPAR and broadband NDVI measurements. Near-surface remote sensing measurements therefore show promise for bridging the gap between traditional in-situ measurements of phenology and satellite remote sensing data. For this work, we examined the relationship between phenophase estimates derived from satellite remote sensing (MODIS) and near-earth remote sensing derived from webcams for a select set of sites with high-quality webcam data. A logistic model was used to characterize phenophases for both the webcam and MODIS data. We documented model fit accuracy, phenophase estimates, and model biases for both data sources. Our results show that different vegetation indices (VIs) derived from MODIS produce significantly different phenophase estimates compared to corresponding estimates derived from webcam data. Different VIs showed markedly different radiometric properties and, as a result, influenced phenophase estimates. The study shows that phenophase estimates are not only highly dependent on the algorithm used but also depend on the VI used by the phenology retrieval algorithm. These results highlight the need for a better understanding of how near-earth and satellite remote sensing data relate to eco-physiological and canopy changes during different parts of the growing season.
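
    The logistic phenophase model mentioned above can be sketched as follows: a sigmoid is fitted to a greenness or VI time series with SciPy's curve_fit, and the inflection date is taken as the phenophase estimate. The parameterization and the synthetic 8-day series are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(doy, vmin, vmax, k, d50):
    """Sigmoid green-up curve; d50 is the inflection (phenophase) date."""
    return vmin + (vmax - vmin) / (1.0 + np.exp(-k * (doy - d50)))

# Hypothetical 8-day composites of a greenness index across spring.
doy = np.arange(65, 185, 8, dtype=float)
vi = logistic(doy, 0.35, 0.80, 0.15, 120.0) + np.random.normal(0, 0.01, doy.size)

popt, _ = curve_fit(logistic, doy, vi, p0=[0.3, 0.8, 0.1, 110.0])
print(f"estimated green-up date: day of year {popt[3]:.1f}")
```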

  7. Webcam delivery of the Camperdown Program for adolescents who stutter: a phase II trial.

    PubMed

    Carey, Brenda; O'Brian, Sue; Lowe, Robyn; Onslow, Mark

    2014-10-01

    This Phase II clinical trial examined stuttering adolescents' responsiveness to the Webcam-delivered Camperdown Program. Sixteen adolescents were treated by Webcam with no clinic attendance. Primary outcome was percentage of syllables stuttered (%SS). Secondary outcomes were number of sessions, weeks and hours to maintenance, self-reported stuttering severity, speech satisfaction, speech naturalness, self-reported anxiety, self-reported situation avoidance, self-reported impact of stuttering, and satisfaction with Webcam treatment delivery. Data were collected before treatment and up to 12 months after entry into maintenance. Fourteen participants completed the treatment. Group mean stuttering frequency was 6.1 %SS (range, 0.7-14.7) pretreatment and 2.8 %SS (range, 0-12.2) 12 months after entry into maintenance, with half the participants stuttering at 1.2 %SS or lower at this time. Treatment was completed in a mean of 25 sessions (15.5 hr). Self-reported stuttering severity ratings, self-reported stuttering impact, and speech satisfaction scores supported %SS outcomes. Minimal anxiety was evident either pre- or post-treatment. Individual responsiveness to the treatment varied, with half the participants showing little reduction in avoidance of speech situations. The Webcam service delivery model was appealing to participants, although it was efficacious and efficient for only half. Suggestions for future stuttering treatment development for adolescents are discussed.

  8. Are shy adults really bolder online? It depends on the context.

    PubMed

    Brunet, Paul M; Schmidt, Louis A

    2008-12-01

    We examined whether individual differences in shyness and context influenced the amount of computer-mediated self-disclosure and use of affective language during an unfamiliar dyadic social interaction. Unfamiliar young adults were selected for high and low self-reported shyness and paired in mixed dyads (one shy and one nonshy). Each dyad was randomly assigned to either a live webcam or no webcam condition. Participants then engaged in a 20-minute online free chat over the Internet in the laboratory. Free chat conversations were archived, and the transcripts were objectively coded for traditional communication variables, conversational style, and the use of affective language. As predicted, shy adults engaged in significantly fewer spontaneous self-disclosures than did their nonshy counterparts only in the webcam condition. Shy versus nonshy adults did not differ on spontaneous self-disclosures in the no webcam condition. However, context did not influence the use of computer-mediated affective language. Although shy adults used significantly less active and pleasant words than their nonshy counterparts, these differences were not related to webcam condition. The present findings replicate and extend earlier work on shyness, context, and computer-mediated communication to a selected sample of shy adults. Findings suggest that context may influence some, but not all, aspects of social communication in shy adults.

  9. Fast instantaneous center of rotation estimation algorithm for a skid-steered robot

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2015-05-01

    Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the unpredictable wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical-flow-based algorithm for estimation of the instantaneous center of rotation and of the angular and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back projection of the optical flow field onto the ground surface. The developed algorithm was tested using a skid-steered mobile robot. The robot is based on a mobile platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the mobile platform. A state-space model of the robot was derived using standard black-box system identification. The input (commands) and the output (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an assessment of the algorithm's quality by comparison of the trajectories estimated by the algorithm with data from the motion capture system.
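
    The abstract does not reproduce the estimation step; the sketch below shows one way to recover the instantaneous center of rotation and the angular and translational speed once the optical flow has already been back-projected onto the ground plane, by least-squares fitting of a planar rigid-motion model. The synthetic test data are hypothetical.

```python
import numpy as np

def estimate_planar_motion(points, velocities):
    """Least-squares fit of a planar rigid motion v(p) = v0 + w x p to
    ground-plane velocity vectors (e.g. back-projected optical flow).
    Returns translational velocity v0, yaw rate w, and the ICR position."""
    pts = np.asarray(points, dtype=float)
    vel = np.asarray(velocities, dtype=float)
    n = len(pts)
    # Unknowns [v0x, v0y, w]:  vx_i = v0x - w*y_i,  vy_i = v0y + w*x_i
    A = np.zeros((2 * n, 3))
    A[0::2, 0] = 1.0
    A[0::2, 2] = -pts[:, 1]
    A[1::2, 1] = 1.0
    A[1::2, 2] = pts[:, 0]
    b = vel.reshape(-1)
    (v0x, v0y, w), *_ = np.linalg.lstsq(A, b, rcond=None)
    icr = np.array([-v0y / w, v0x / w]) if abs(w) > 1e-9 else None
    return np.array([v0x, v0y]), w, icr

# Synthetic check: pure rotation at 0.5 rad/s about the point (0.0, 2.0).
pts = np.array([[0.5, 0.0], [1.0, 0.5], [-0.5, 1.0], [0.0, -1.0]])
w_true, icr_true = 0.5, np.array([0.0, 2.0])
vel = np.stack([w_true * np.array([-(p[1] - icr_true[1]), p[0] - icr_true[0]])
                for p in pts])
print(estimate_planar_motion(pts, vel))   # recovers w = 0.5 and ICR (0, 2)
```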

  10. Hand-writing motion tracking with vision-inertial sensor fusion: calibration and error correction.

    PubMed

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J

    2014-08-25

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to the low sampling rates supported by web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow updating rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of the developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model.
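
    As a small illustration of the Allan variance analysis mentioned above (not the authors' code), the following sketch computes the non-overlapping Allan variance of a synthetic gyro signal containing white noise and a slow random-walk drift; the sampling rate and noise levels are assumptions.

```python
import numpy as np

def allan_variance(rate, fs, taus):
    """Non-overlapping Allan variance of a rate signal sampled at fs Hz,
    evaluated at the given averaging times (seconds)."""
    rate = np.asarray(rate, dtype=float)
    out = []
    for tau in taus:
        m = int(round(tau * fs))             # samples per cluster
        n_clusters = len(rate) // m
        if n_clusters < 2:
            out.append(np.nan)
            continue
        means = rate[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        out.append(0.5 * np.mean(np.diff(means) ** 2))
    return np.array(out)

# Synthetic gyro signal: white noise plus a slow random-walk drift.
fs = 100.0
white = np.random.normal(0, 0.05, 200_000)
drift = np.cumsum(np.random.normal(0, 1e-4, 200_000))
avar = allan_variance(white + drift, fs, taus=[0.1, 1, 10, 100])
print(np.sqrt(avar))   # Allan deviation at each averaging time
```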

  11. Recent improvements in monitoring Hawaiian volcanoes with webcams and thermal cameras

    NASA Astrophysics Data System (ADS)

    Patrick, M. R.; Orr, T. R.; Antolik, L.; Lee, R.; Kamibayashi, K.

    2012-12-01

    Webcams have become essential tools for continuous observation of ongoing volcanic activity. The use of both visual webcams and Web-connected thermal cameras has increased dramatically at the Hawaiian Volcano Observatory over the past five years, improving our monitoring capability and understanding of both Kilauea's summit eruption, which began in 2008, and the east rift zone eruption, which began in 1983. The recent bolstering of the webcam network builds upon the three sub-megapixel webcams that were in place five years ago. First, several additional fixed visual webcam systems have been installed, using multi-megapixel low-light cameras. Second, several continuously operating thermal cameras have been deployed, providing a new view of activity, easier detection of active flows, and often "seeing" through fume that completely obscures views from visual webcams. Third, a new type of "mobile" webcam - using cellular modem telemetry and capable of rapid deployment - has allowed us to respond quickly to changes in eruptive activity. Fourth, development of automated analysis and alerting scripts provide real-time products that aid in quantitative interpretation of incoming images. Finally, improvements in the archiving and Web-based display of images allow efficient review of current and recent images by observatory staff. Examples from Kilauea's summit and lava flow field provide more detail on the improvements. A thermal camera situated at Kilauea's summit has tracked the changes in the active lava lake in Halema`uma`u Crater since late 2010. Automated measurements from these images using Matlab scripts are now providing real-time quantitative data on lava level and, in some cases, lava crust velocity. Lava level essentially follows summit tilt over short time scales, in which near-daily cycles of deflation and inflation correspond with about ten meters of lava level drop and rise, respectively. The data also show that the long-term Halema`uma`u lava level tracked by the thermal cameras also correlates with the pressure state of the summit magma reservoir over months based on deformation data. Comparing the summit lava level with that in Pu`u `O`o crater, about 20 km distant on the east rift zone, reveals a clear correlation that reaffirms the hydraulic connection from summit to rift zone. Elsewhere on Kilauea, mobile webcams deployed on the coastal plain have improved the tracking of active breakouts from the east rift zone eruption site - a critical hazard zone given that four homes, mostly in the Kalapana area, have been destroyed by lava flows in the last three years. Each morning an automated Matlab script detects incandescent areas in overnight images and, using the known image geometry, determines the azimuth to active flows. The results of this eruptive "breakout locator" are emailed to observatory staff each morning and provide a quantitative constraint on breakout locations and hazard potential that serves as a valuable addition to routine field mapping. These examples show the utility of webcams and thermal cameras for monitoring volcanic activity, and they reinforce the importance of continued development of equipment as well as real-time processing and analysis tools.

  12. Spatio-Temporal Constrained Human Trajectory Generation from the PIR Motion Detector Sensor Network Data: A Geometric Algebra Approach

    PubMed Central

    Yu, Zhaoyuan; Yuan, Linwang; Luo, Wen; Feng, Linyao; Lv, Guonian

    2015-01-01

    Passive infrared (PIR) motion detectors, which can support long-term continuous observation, are widely used for human motion analysis. Extracting all possible trajectories from PIR sensor networks is important. Because the PIR sensor does not log location and individual information, none of the existing methods can generate all possible human motion trajectories that satisfy various spatio-temporal constraints from the sensor activation log data. In this paper, a geometric algebra (GA)-based approach is developed to generate all possible human trajectories from PIR sensor network data. Firstly, the geographical network, the sensor activation response sequences and the human motion are represented as algebraic elements using GA. The human motion status at each sensor activation is labeled using GA-based trajectory tracking. Then, a matrix multiplication approach is developed to dynamically generate the human trajectories according to the sensor activation log and the spatio-temporal constraints. The method is tested with the MERL motion database. Experiments show that our method can flexibly extract the major statistical patterns of the human motion. Compared with direct statistical analysis and the tracklet graph method, our method can effectively extract all possible trajectories of the human motion, which makes it more accurate. Our method is also likely to provide a new way to filter other passive sensor log data in sensor networks. PMID:26729123

  13. Spatio-Temporal Constrained Human Trajectory Generation from the PIR Motion Detector Sensor Network Data: A Geometric Algebra Approach.

    PubMed

    Yu, Zhaoyuan; Yuan, Linwang; Luo, Wen; Feng, Linyao; Lv, Guonian

    2015-12-30

    Passive infrared (PIR) motion detectors, which can support long-term continuous observation, are widely used for human motion analysis. Extracting all possible trajectories from PIR sensor networks is important. Because the PIR sensor does not log location and individual information, none of the existing methods can generate all possible human motion trajectories that satisfy various spatio-temporal constraints from the sensor activation log data. In this paper, a geometric algebra (GA)-based approach is developed to generate all possible human trajectories from PIR sensor network data. Firstly, the geographical network, the sensor activation response sequences and the human motion are represented as algebraic elements using GA. The human motion status at each sensor activation is labeled using GA-based trajectory tracking. Then, a matrix multiplication approach is developed to dynamically generate the human trajectories according to the sensor activation log and the spatio-temporal constraints. The method is tested with the MERL motion database. Experiments show that our method can flexibly extract the major statistical patterns of the human motion. Compared with direct statistical analysis and the tracklet graph method, our method can effectively extract all possible trajectories of the human motion, which makes it more accurate. Our method is also likely to provide a new way to filter other passive sensor log data in sensor networks.
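
    The paper's geometric-algebra formulation is not reproduced in the abstract. The sketch below illustrates only the underlying matrix-multiplication idea on an ordinary adjacency matrix: powers of a sensor-connectivity matrix count the candidate trajectories between two sensors, which a recursive enumeration then lists explicitly. The connectivity graph is hypothetical.

```python
import numpy as np

# Hypothetical sensor connectivity graph: adjacency[i, j] = 1 if a person in
# the zone covered by sensor i can reach sensor j's zone in one time step.
adjacency = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
])

def count_walks(adj, start, end, steps):
    """Number of length-`steps` walks from `start` to `end`: an entry of adj**steps."""
    return int(np.linalg.matrix_power(adj, steps)[start, end])

def enumerate_walks(adj, start, end, steps):
    """Explicitly enumerate the candidate trajectories counted above."""
    if steps == 0:
        return [[start]] if start == end else []
    walks = []
    for nxt in np.nonzero(adj[start])[0]:
        for tail in enumerate_walks(adj, nxt, end, steps - 1):
            walks.append([start] + tail)
    return walks

print(count_walks(adjacency, 0, 3, 3))       # how many 3-step trajectories
print(enumerate_walks(adjacency, 0, 3, 3))   # which ones they are
```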

  14. A simple webcam spectrograph

    NASA Astrophysics Data System (ADS)

    Lorenz, Ralph D.

    2014-02-01

    A spectrometer is constructed with an optical fiber, a webcam, and an inexpensive diffraction grating. Assembly takes a matter of minutes, and the instrument is able to produce quantitative spectra of incandescent and fluorescent sources, lasers, and light-emitting diodes. Examples of data analyses, carried out with free software, are discussed.
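
    A minimal sketch of the data reduction such an instrument requires, under stated assumptions: the spectrum stripe in a grayscale webcam frame is collapsed along the spatial axis, and a two-point linear wavelength calibration is applied using two known lines. The frame, line positions and calibration wavelengths are hypothetical.

```python
import numpy as np

def spectrum_from_frame(frame, row_range, cal_pixels, cal_wavelengths):
    """Turn a webcam frame of a dispersed spectrum into intensity vs. wavelength.

    frame           : 2D grayscale array (rows x columns)
    row_range       : (top, bottom) rows containing the spectrum stripe
    cal_pixels      : column positions of two known spectral lines
    cal_wavelengths : their known wavelengths in nm (two-point linear calibration)
    """
    top, bottom = row_range
    intensity = frame[top:bottom, :].astype(float).sum(axis=0)   # collapse the stripe
    cols = np.arange(frame.shape[1])
    slope, intercept = np.polyfit(cal_pixels, cal_wavelengths, 1)
    return slope * cols + intercept, intensity

# Synthetic frame: two emission lines at columns 150 and 480, calibrated as if
# they were known fluorescent-lamp lines near 546 nm and 612 nm.
frame = np.zeros((120, 640))
frame[40:80, 148:152] = 200
frame[40:80, 478:482] = 150
wl, inten = spectrum_from_frame(frame, (40, 80), [150, 480], [546.1, 611.6])
print(round(wl[np.argmax(inten)], 1))   # brightest peak near 546 nm
```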

  15. Towards Multimodal Emotion Recognition in E-Learning Environments

    ERIC Educational Resources Information Center

    Bahreini, Kiavash; Nadolski, Rob; Westera, Wim

    2016-01-01

    This paper presents a framework (FILTWAM (Framework for Improving Learning Through Webcams And Microphones)) for real-time emotion recognition in e-learning by using webcams. FILTWAM offers timely and relevant feedback based upon learner's facial expressions and verbalizations. FILTWAM's facial expression software module has been developed and…

  16. Hand-Writing Motion Tracking with Vision-Inertial Sensor Fusion: Calibration and Error Correction

    PubMed Central

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J.

    2014-01-01

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to the low sampling rates supported by web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow updating rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of the developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model. PMID:25157546

  17. Activity recognition with wearable sensors on loose clothing

    PubMed Central

    Howard, Matthew

    2017-01-01

    Observing human motion in natural everyday environments (such as the home) has evoked a growing interest in the development of on-body wearable sensing technology. However, wearable sensors suffer from motion artefacts introduced by the non-rigid attachment of sensors to the body, and the prevailing view is that it is necessary to eliminate these artefacts. This paper presents findings that suggest that these artefacts can, in fact, be used to distinguish between similar motions, by exploiting additional information provided by the fabric motion. An experimental study is presented whereby factors of both the motion and the properties of the fabric are analysed in the context of motion similarity. It is seen that while standard rigidly attached sensors have difficulty in distinguishing between similar motions, sensors mounted onto fabric exhibit significant differences (p < 0.01). An evaluation of the physical properties of the fabric shows that the stiffness of the material plays a role in this, with a trade-off between additional information and extraneous motion. This effect is evaluated in an online motion classification task, and the use of fabric-mounted sensors demonstrates an increase in prediction accuracy over rigidly attached sensors. PMID:28976978

  18. Activity recognition with wearable sensors on loose clothing.

    PubMed

    Michael, Brendan; Howard, Matthew

    2017-01-01

    Observing human motion in natural everyday environments (such as the home) has evoked a growing interest in the development of on-body wearable sensing technology. However, wearable sensors suffer from motion artefacts introduced by the non-rigid attachment of sensors to the body, and the prevailing view is that it is necessary to eliminate these artefacts. This paper presents findings that suggest that these artefacts can, in fact, be used to distinguish between similar motions, by exploiting additional information provided by the fabric motion. An experimental study is presented whereby factors of both the motion and the properties of the fabric are analysed in the context of motion similarity. It is seen that while standard rigidly attached sensors have difficulty in distinguishing between similar motions, sensors mounted onto fabric exhibit significant differences (p < 0.01). An evaluation of the physical properties of the fabric shows that the stiffness of the material plays a role in this, with a trade-off between additional information and extraneous motion. This effect is evaluated in an online motion classification task, and the use of fabric-mounted sensors demonstrates an increase in prediction accuracy over rigidly attached sensors.

  19. Towards a Passive Low-Cost In-Home Gait Assessment System for Older Adults

    PubMed Central

    Wang, Fang; Stone, Erik; Skubic, Marjorie; Keller, James M.; Abbott, Carmen; Rantz, Marilyn

    2013-01-01

    In this paper, we propose a webcam-based system for in-home gait assessment of older adults. A methodology has been developed to extract gait parameters including walking speed, step time and step length from a three-dimensional voxel reconstruction, which is built from two calibrated webcam views. The gait parameters are validated with a GAITRite mat and a Vicon motion capture system in the lab with 13 participants and 44 tests, and again with GAITRite for 8 older adults in senior housing. An excellent agreement with intra-class correlation coefficients of 0.99 and repeatability coefficients between 0.7% and 6.6% was found for walking speed, step time and step length given the limitation of frame rate and voxel resolution. The system was further tested with 10 seniors in a scripted scenario representing everyday activities in an unstructured environment. The system results demonstrate the capability of being used as a daily gait assessment tool for fall risk assessment and other medical applications. Furthermore, we found that residents displayed different gait patterns during their clinical GAITRite tests compared to the realistic scenario, namely a mean increase of 21% in walking speed, a mean decrease of 12% in step time, and a mean increase of 6% in step length. These findings provide support for continuous gait assessment in the home for capturing habitual gait. PMID:24235111
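
    The gait parameters named above can be computed straightforwardly once foot-strike events have been extracted from the voxel reconstruction; the sketch below assumes a list of time-stamped foot-strike positions is already available (that extraction step, the hard part of the authors' system, is not shown). The example walk is hypothetical.

```python
import numpy as np

def gait_parameters(footfalls):
    """Walking speed, mean step time and mean step length from a list of
    (timestamp_s, x_m, y_m) foot-strike events, assumed already extracted
    from the tracking system."""
    events = np.asarray(footfalls, dtype=float)
    t, xy = events[:, 0], events[:, 1:]
    step_lengths = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    step_times = np.diff(t)
    return {
        "walking_speed_m_s": step_lengths.sum() / (t[-1] - t[0]),
        "step_time_s": step_times.mean(),
        "step_length_m": step_lengths.mean(),
    }

# Hypothetical walk: roughly 0.6 m steps every 0.55 s.
footfalls = [(0.0, 0.0, 0.0), (0.55, 0.61, 0.02), (1.10, 1.19, 0.0),
             (1.66, 1.80, 0.03), (2.21, 2.41, 0.01)]
print(gait_parameters(footfalls))
```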

  20. Research-grade CMOS image sensors for remote sensing applications

    NASA Astrophysics Data System (ADS)

    Saint-Pe, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Martin-Gonthier, Philippe; Corbiere, Franck; Belliot, Pierre; Estribeau, Magali

    2004-11-01

    Imaging detectors are key elements for optical instruments and sensors on board space missions dedicated to Earth observation (high-resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market was long dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA and ESA). Throughout the 90s, and thanks to their steadily improving performance, CIS began to be successfully used for increasingly demanding space applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made possible the manufacturing of research-grade CIS that are able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this paper will present the existing and foreseen ways to reach high-level electro-optical performance for CIS. The development and performance of CIS prototypes built using an imaging CMOS process will be presented in the corresponding section.

  1. Motion Artifact Quantification and Sensor Fusion for Unobtrusive Health Monitoring.

    PubMed

    Hoog Antink, Christoph; Schulz, Florian; Leonhardt, Steffen; Walter, Marian

    2017-12-25

    Sensors integrated into objects of everyday life potentially allow unobtrusive health monitoring at home. However, since the coupling of sensors and subject is not as well defined as in a clinical setting, the signal quality is much more variable and can be disturbed significantly by motion artifacts. One way of tackling this challenge is the combined evaluation of multiple channels via sensor fusion. For robust and accurate sensor fusion, analyzing the influence of motion on different modalities is crucial. In this work, a multimodal sensor setup integrated into an armchair is presented that combines capacitively coupled electrocardiography, reflective photoplethysmography, two high-frequency impedance sensors and two types of ballistocardiography sensors. To quantify motion artifacts, a motion protocol performed by healthy volunteers is recorded with a motion capture system, and reference sensors perform cardiorespiratory monitoring. The shape-based signal-to-noise ratio SNR_S is introduced and used to quantify the effect of motion on different sensing modalities. Based on this analysis, an optimal combination of sensors and fusion methodology is developed and evaluated. Using the proposed approach, beat-to-beat heart rate is estimated with a coverage of 99.5% and a mean absolute error of 7.9 ms on 425 min of data from seven volunteers in a proof-of-concept measurement scenario.
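
    The reported coverage and mean absolute error can be computed as in the sketch below, which assumes the estimated beat-to-beat intervals have already been aligned to the reference (ECG) beats, with gaps marked where no estimate was produced; the alignment itself and the example values are assumptions.

```python
import numpy as np

def interval_metrics(est_ms, ref_ms):
    """Coverage and mean absolute error of beat-to-beat interval estimates.

    est_ms : estimated intervals aligned to the reference beats, with np.nan
             where the fusion algorithm produced no usable estimate
    ref_ms : reference (ECG) beat-to-beat intervals in milliseconds
    """
    est = np.asarray(est_ms, dtype=float)
    ref = np.asarray(ref_ms, dtype=float)
    valid = ~np.isnan(est)
    coverage = 100.0 * valid.sum() / len(ref)
    mae = np.mean(np.abs(est[valid] - ref[valid]))
    return coverage, mae

est = [812, np.nan, 798, 805, 820, 790, np.nan, 801]
ref = [810, 815, 800, 802, 818, 795, 805, 800]
cov, mae = interval_metrics(est, ref)
print(f"coverage {cov:.1f}%  MAE {mae:.1f} ms")
```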

  2. Using Webcams to Show Change and Movement in the Physical Environment

    ERIC Educational Resources Information Center

    Sawyer, Carol F.; Butler, David R.; Curtis, Mary

    2010-01-01

    Environmental change is ideally taught through field observations; however, leaving the classroom is often unrealistic due to financial and logistical constraints. The Internet offers several feasible alternatives using webcams that instructors can use to illustrate a variety of geographic examples and exercises for students. This article explores…

  3. Webcam Stories

    ERIC Educational Resources Information Center

    Clidas, Jeanne

    2011-01-01

    Stories, steeped in science content and full of specific information, can be brought into schools and homes through the power of live video streaming. Video streaming refers to the process of viewing video over the internet. These videos may be live (webcam feeds) or recorded. These stories are engaging and inspiring. They offer opportunities to…

  4. A Semiotic Perspective on Webconferencing-Supported Language Teaching

    ERIC Educational Resources Information Center

    Guichon, Nicolas; Wigham, Ciara R.

    2016-01-01

    In webconferencing-supported teaching, the webcam mediates and organizes the pedagogical interaction. Previous research has provided a mixed picture of the use of the webcam: while it is seen as a useful medium to contribute to the personalization of the interlocutors' relationship, help regulate interaction and facilitate learner comprehension…

  5. Compliant finger sensor for sensorimotor studies in MEG and MR environment

    NASA Astrophysics Data System (ADS)

    Li, Y.; Yong, X.; Cheung, T. P. L.; Menon, C.

    2016-07-01

    Magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) are widely used for functional brain imaging. The correlations between the sensorimotor functions of the hand and brain activities have been investigated in MEG/fMRI studies. Currently, limited information can be drawn from these studies due to the limitations of the existing motion sensors used to detect hand movements. One major challenge in designing these motion sensors is to limit the signal interference between the motion sensors and the MEG/fMRI. In this work, a novel finger motion sensor, made of low-ferromagnetic and non-conductive materials, is introduced. The finger sensor consists of four air-filled chambers. When compressed by the finger(s), the pressure change in the chambers is detected by the electronics of the finger sensor. Our study has validated that the interference between the finger sensor and an MEG is negligible. Also, by applying a support vector machine algorithm to the data obtained from the finger sensor, at least 11 finger patterns can be discriminated. Compared to the use of traditional electromyography (EMG) in detecting finger motion, our proposed finger motion sensor is not only MEG/fMRI compatible, it is also easy to use. As the signals acquired from the sensor have a higher SNR than those of the EMG, no complex algorithms are required to detect different finger movement patterns. Future studies can utilize this motion sensor to investigate brain activations during different finger motions and correlate the activations with sensory and motor functions, respectively.

  6. Wearable Stretch Sensors for Motion Measurement of the Wrist Joint Based on Dielectric Elastomers.

    PubMed

    Huang, Bo; Li, Mingyu; Mei, Tao; McCoul, David; Qin, Shihao; Zhao, Zhanfeng; Zhao, Jianwen

    2017-11-23

    Motion capture of the human body potentially holds great significance for exoskeleton robots, human-computer interaction, sports analysis, rehabilitation research, and many other areas. Dielectric elastomer sensors (DESs) are excellent candidates for wearable human motion capture systems because of their intrinsic characteristics of softness, light weight, and compliance. In this paper, DESs were applied to measure all component motions of the wrist joint. Five sensors were mounted at different positions on the wrist, each one measuring one component motion. To find the best positions to mount the sensors, the distribution of the muscles was analyzed. Even so, the component motions and the deformations of the sensors are coupled; therefore, a decoupling method was developed. Using the decoupling algorithm, all component motions can be measured with a precision of 5°, which meets the requirements of general motion capture systems.

  7. Multi-Sensor Methods for Mobile Radar Motion Capture and Compensation

    NASA Astrophysics Data System (ADS)

    Nakata, Robert

    Remote sensing has many applications, including surveying and mapping, geophysics exploration, military surveillance, search and rescue, and counter-terrorism operations. Remote sensor systems typically use visible-image, infrared or radar sensors. Camera-based image sensors can provide high spatial resolution but are limited to line-of-sight capture during daylight. Infrared sensors have lower resolution but can operate during darkness. Radar sensors can provide high-resolution motion measurements even when obscured by weather, clouds and smoke, and can penetrate walls and collapsed structures constructed with non-metallic materials up to 1 m to 2 m in depth, depending on the wavelength and transmitter power level. However, any platform motion will degrade the target signal of interest. In this dissertation, we investigate alternative methodologies to capture platform motion, including a Body Area Network (BAN) that doesn't require external fixed-location sensors, allowing full mobility of the user. We also investigated platform stabilization and motion compensation techniques to reduce and remove the signal distortion introduced by the platform motion. We evaluated secondary ultrasonic and radar sensors to stabilize the platform, resulting in an average improvement in Signal to Interference Ratio (SIR) of 5 dB. We also implemented a Digital Signal Processing (DSP) motion compensation algorithm that improved the SIR by 18 dB on average. These techniques could be deployed on a quadcopter platform and enable the detection of respiratory motion using an onboard radar sensor.

  8. Response of Seismometer with Symmetric Triaxial Sensor Configuration to Complex Ground Motion

    NASA Astrophysics Data System (ADS)

    Graizer, V.

    2007-12-01

    Most instruments used in seismological practice to record ground motion in all directions use three sensors oriented toward North, East and upward. In this standard configuration, horizontal and vertical sensors differ in their construction because gravity acts on the vertical sensor. An alternative, symmetric sensor configuration was first introduced by Galperin (1955) for petroleum exploration. In this arrangement, three identical sensors are also positioned orthogonally to each other but are tilted at the same angle of 54.7 degrees to the vertical axis (a triaxial coordinate system balanced on its corner). Records obtained using the symmetric configuration must be rotated into an earth-referenced X, Y, Z coordinate system. A number of recent seismological instruments (e.g., the broadband seismometers Streckeisen STS-2, Trillium of Nanometrics and Cronos of Kinemetrics) use the symmetric sensor configuration. In most seismological studies it is assumed that the rotational (rocking and torsion) components of earthquake ground motion are small enough to be neglected. However, examples have recently been shown in which rotational components are significant relative to translational components of motion. The response of pendulums installed in the standard configuration (vertical and two horizontals) to complex input motion that includes rotations has been studied in a number of publications. We consider the response of pendulums in a symmetric sensor configuration to complex input motions including rotations, and the resultant triaxial system response. Possible implications of using the symmetric sensor configuration in strong motion studies are discussed. Considering the benefits of identical design of all three sensors in the symmetric configuration, and the resulting potentially lower cost of a three-component accelerograph, it may be useful for strong motion measurements not requiring high-resolution post-processing. The disadvantage of this configuration is that if one of the sensors is not working properly, or if the sensors are misaligned, all three components are degraded. The symmetric sensor configuration also requires identical processing of each channel, which places a number of limitations on further processing of strong motion records.
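
    A common way to handle the symmetric (Galperin) configuration is sketched below: the three sensor axes, each tilted 54.7 degrees from vertical and spaced 120 degrees apart in azimuth, form an orthonormal matrix, so rotating the U, V, W outputs into earth-referenced X, Y, Z amounts to applying that matrix's transpose. The azimuth convention is an assumption and differs between instruments.

```python
import numpy as np

# Unit vectors of the three identical sensors in the symmetric (Galperin)
# configuration: each tilted arccos(1/sqrt(3)) ~ 54.7 deg from vertical,
# 120 deg apart in azimuth (assumed convention).
TILT = np.arccos(1.0 / np.sqrt(3.0))
AZIMUTHS = np.radians([0.0, 120.0, 240.0])

def galperin_matrix():
    """Rows are the sensor axis directions; maps earth-frame (X, Y, Z)
    ground motion to the three sensor outputs (U, V, W)."""
    return np.array([[np.sin(TILT) * np.cos(az),
                      np.sin(TILT) * np.sin(az),
                      np.cos(TILT)] for az in AZIMUTHS])

U = galperin_matrix()

def uvw_to_xyz(uvw):
    """Rotate symmetric-configuration outputs into earth-referenced X, Y, Z.
    The matrix is orthonormal, so its transpose is its inverse."""
    return U.T @ np.asarray(uvw, dtype=float)

# Check: purely vertical ground motion appears equally on all three sensors.
print(U @ np.array([0.0, 0.0, 1.0]))        # ~[0.577, 0.577, 0.577]
print(uvw_to_xyz([0.577, 0.577, 0.577]))    # ~[0, 0, 1]
```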

  9. Oscillatory motion based measurement method and sensor for measuring wall shear stress due to fluid flow

    DOEpatents

    Armstrong, William D [Laramie, WY; Naughton, Jonathan [Laramie, WY; Lindberg, William R [Laramie, WY

    2008-09-02

    A shear stress sensor for measuring fluid wall shear stress on a test surface is provided. The wall shear stress sensor is comprised of an active sensing surface and a sensor body. An elastic mechanism mounted between the active sensing surface and the sensor body allows movement between the active sensing surface and the sensor body. A driving mechanism forces the shear stress sensor to oscillate. A measuring mechanism measures displacement of the active sensing surface relative to the sensor body. The sensor may be operated under periodic excitation where changes in the nature of the fluid properties or the fluid flow over the sensor measurably changes the amplitude or phase of the motion of the active sensing surface, or changes the force and power required from a control system in order to maintain constant motion. The device may be operated under non-periodic excitation where changes in the nature of the fluid properties or the fluid flow over the sensor change the transient motion of the active sensor surface or change the force and power required from a control system to maintain a specified transient motion of the active sensor surface.

  10. Adolescent Literacy Tutoring: Face-to-Face and Via Webcam Technology

    ERIC Educational Resources Information Center

    Houge, Timothy T.; Peyton, David; Geier, Constance; Petrie, Bruce

    2007-01-01

    The purpose of this research project was to examine the effectiveness of supervised literacy tutoring delivered by 25 secondary teacher candidates to middle and high school students via webcam technology and in person. The results stem from two semester-long studies of technology-delivered tutoring from a university to middle and high school…

  11. Investigating Diffusion with Technology

    ERIC Educational Resources Information Center

    Miller, Jon S.; Windelborn, Augden F.

    2013-01-01

    The activities described here allow students to explore the concept of diffusion with the use of common equipment such as computers, webcams and analysis software. The procedure includes taking a series of digital pictures of a container of water with a webcam as a dye slowly diffuses. At known time points, measurements of the pixel densities…

  12. Enhancement of sun-tracking with optoelectronic devices

    NASA Astrophysics Data System (ADS)

    Wu, Jiunn-Chi

    2015-09-01

    Sun-tracking is one of the most challenging tasks in implementing CPV. In order to justify the additional complexity of sun-tracking, careful assessment of the performance of CPV by monitoring the performance of sun-tracking is vital. Measurement of sun-tracking accuracy is one of the important tasks in an outdoor test. This study examines techniques based on three optoelectronic devices: a position sensitive device (PSD), a CCD and a webcam. Outdoor measurements indicated that during sunny days (global horizontal insolation (GHI) > 700 W/m2), the three devices recorded comparable tracking accuracies of 0.16° to 0.3°. The method using a PSD has the fastest sampling rate and is able to detect the sun's position without additional image processing, yet it cannot identify the sunlight effectively during low insolation. The techniques using a CCD and a webcam enhance the accuracy of the sunlight centroid via the optical lens and image processing. The image quality acquired with a webcam and a CCD is comparable, but the webcam is more affordable than the CCD because it can be assembled from consumer-grade products.
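
    As an illustration of the image-based part of such a system (not the authors' code), the sketch below computes the intensity-weighted centroid of the solar spot in a grayscale frame; the pointing error would then be the offset of this centroid from the calibrated optical center. The threshold and the synthetic frame are assumptions.

```python
import numpy as np

def sun_centroid(image, threshold_frac=0.5):
    """Intensity-weighted centroid of the solar spot in a grayscale frame.
    Pixels below threshold_frac * max are ignored to suppress background."""
    img = np.asarray(image, dtype=float)
    weights = np.where(img >= threshold_frac * img.max(), img, 0.0)
    total = weights.sum()
    rows, cols = np.indices(img.shape)
    cy = (rows * weights).sum() / total
    cx = (cols * weights).sum() / total
    return cx, cy   # pointing error = offset of (cx, cy) from the optical center

# Synthetic frame with a bright Gaussian spot slightly off-center.
frame = np.zeros((480, 640))
yy, xx = np.indices(frame.shape)
frame += 255 * np.exp(-(((xx - 330) ** 2) + ((yy - 245) ** 2)) / (2 * 15 ** 2))
print(sun_centroid(frame))   # approximately (330, 245)
```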

  13. The feasibility of automated eye tracking with the Early Childhood Vigilance Test of attention in younger HIV-exposed Ugandan children.

    PubMed

    Boivin, Michael J; Weiss, Jonathan; Chhaya, Ronak; Seffren, Victoria; Awadu, Jorem; Sikorskii, Alla; Giordani, Bruno

    2017-07-01

    Tobii eye tracking was compared with webcam-based observer scoring on an animation viewing measure of attention (Early Childhood Vigilance Test; ECVT) to evaluate the feasibility of automating measurement and scoring. Outcomes from both scoring approaches were compared with the Mullen Scales of Early Learning (MSEL), Color-Object Association Test (COAT), and Behavior Rating Inventory of Executive Function for preschool children (BRIEF-P). A total of 44 children, 44 to 65 months of age, were evaluated with the ECVT, COAT, MSEL, and BRIEF-P. Tobii X2-30 portable infrared cameras were programmed to monitor pupil direction during the ECVT 6-min animation and compared with observer-based PROCODER webcam scoring. Children watched 78% of the cartoon (Tobii) compared with 67% (webcam scoring), although the 2 measures were highly correlated (r = .90, p = .001). It is possible for 2 such measures to be highly correlated even if one is consistently higher than the other (Bergemann et al., 2012). Both ECVT Tobii and webcam ECVT measures significantly correlated with COAT immediate recall (r = .37, p = .02 vs. r = .38, p = .01, respectively) and total recall (r = .33, p = .06 vs. r = .42, p = .005) measures. However, neither the Tobii eye tracking nor PROCODER webcam ECVT measures of attention correlated with MSEL composite cognitive performance or BRIEF-P global executive composite. ECVT scoring using Tobii eye tracking is feasible with at-risk very young African children and consistent with webcam-based scoring approaches in their correspondence to one another and other neurocognitive performance-based measures. By automating measurement and scoring, eye tracking technologies can improve the efficiency and help better standardize ECVT testing of attention in younger children. This holds promise for other neurodevelopmental tests where eye movements, tracking, and gaze length can provide important behavioral markers of neuropsychological and neurodevelopmental processes associated with such tests. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Foucault Dissipation in a Rolling Cylinder: A Webcam Quantitative Study

    ERIC Educational Resources Information Center

    Bonanno, A.; Bozzo, G.; Camarca, M.; Sapia, P.

    2011-01-01

    In this paper we present an experimental strategy to measure the micro power dissipation due to Foucault "eddy" currents in a copper cylinder rolling on two parallel conductive rails in the presence of a magnetic field. Foucault power dissipation is obtained from kinematical measurements carried out by using a common PC webcam and video analysis…

  15. Live Webcam Coaching to Help Early Elementary Classroom Teachers Provide Effective Literacy Instruction for Struggling Readers: The Targeted Reading Intervention

    ERIC Educational Resources Information Center

    Vernon-Feagans, Lynne; Kainz, Kirsten; Hedrick, Amy; Ginsberg, Marnie; Amendum, Steve

    2013-01-01

    This study evaluated whether the Targeted Reading Intervention (TRI), a classroom teacher professional development program delivered through webcam technology literacy coaching, could provide rural classroom teachers with the instructional skills to help struggling readers progress rapidly in early reading. Fifteen rural schools were randomly…

  16. The Use of the Webcam for Teaching a Foreign Language in a Desktop Videoconferencing Environment

    ERIC Educational Resources Information Center

    Develotte, Christine; Guichon, Nicolas; Vincent, Caroline

    2010-01-01

    This paper explores how language teachers learn to teach with a synchronous multimodal setup ("Skype"), and it focuses on their use of the webcam during the pedagogical interaction. First, we analyze the ways that French graduate students learning to teach online use the multimodal resources available in a desktop videoconferencing (DVC)…

  17. Beyond Detection: Nuclear Physics with a Webcam in an Educational Setting

    ERIC Educational Resources Information Center

    Pallone, A.; Barnes, P.

    2016-01-01

    Basic understanding of nuclear science enhances our daily-life experience in many areas, such as the environment, medicine, electric power generation, and even politics. Yet typical school curricula do not provide for experiments that explore the topic. We present a means by which educators can use the ubiquitous webcam and inexpensive sources of…

  18. Perceptions of Webcam Use by Experienced Online Teachers and Learners: A Seeming Disconnect between Research and Practice

    ERIC Educational Resources Information Center

    Kozar, Olga

    2016-01-01

    Videoconferencing tools, like Skype, etc., are being increasingly used in language education worldwide. Despite assumed socio-affective and pedagogical benefits of using webcams in synchronous online language lessons, such as the feeling of co-presence or the possibilities of non-verbal communication, little is known about attitudes held by…

  19. Strong RFI observed in protected 21 cm band at Zurich observatory, Switzerland

    NASA Astrophysics Data System (ADS)

    Monstein, C.

    2014-03-01

    While testing a new antenna control software tool, the telescope was moved to the most western azimuth position, pointing at our own building. While decelerating the telescope, the spectrometer showed strong broadband radio frequency interference (RFI) and two single-frequency carriers around 1412 and 1425 MHz, both of which are in the internationally protected band. After lengthy analysis it was found that the webcam (an AXIS2000) was the source of both the broadband and the single-frequency interference. Switching off the webcam solved the problem immediately. For future observations of 21 cm radiation, all nearby electronics therefore have to be switched off: not only the webcam but also all unused PCs, printers, network equipment, monitors, etc.

  20. Open architecture CMM motion controller

    NASA Astrophysics Data System (ADS)

    Chang, David; Spence, Allan D.; Bigg, Steve; Heslip, Joe; Peterson, John

    2001-12-01

    Although initially the only Coordinate Measuring Machine (CMM) sensor available was a touch trigger probe, technological advances in sensors and computing have greatly increased the variety of available inspection sensors. Non-contact laser digitizers and analog scanning touch probes require very well tuned CMM motion control, as well as an extensible, open architecture interface. This paper describes the implementation of a retrofit CMM motion controller designed for open architecture interface to a variety of sensors. The controller is based on an Intel Pentium microcomputer and a Servo To Go motion interface electronics card. Motor amplifiers, safety, and additional interface electronics are housed in a separate enclosure. Host Signal Processing (HSP) is used for the motion control algorithm. Compared to the usual host plus DSP architecture, single CPU HSP simplifies integration with the various sensors, and implementation of software geometric error compensation. Motion control tuning is accomplished using a remote computer via 100BaseTX Ethernet. A Graphical User Interface (GUI) is used to enter geometric error compensation data, and to optimize the motion control tuning parameters. It is shown that this architecture achieves the required real time motion control response, yet is much easier to extend to additional sensors.

  1. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
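
    The gesture recognizer described above is based on dynamic time warping; the sketch below is a generic DTW distance between two 1-D feature sequences, not the authors' real-time segmented implementation, and the example sequences are synthetic.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D feature sequences
    (e.g. a gesture template and an incoming, automatically segmented gesture)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# A time-stretched copy of a template matches it better than a different gesture.
template = np.sin(np.linspace(0, np.pi, 30))
stretched = np.sin(np.linspace(0, np.pi, 45))
other = np.cos(np.linspace(0, 2 * np.pi, 45))
print(dtw_distance(template, stretched), dtw_distance(template, other))
```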

  2. 1 kHz 2D Visual Motion Sensor Using 20 × 20 Silicon Retina Optical Sensor and DSP Microcontroller.

    PubMed

    Liu, Shih-Chii; Yang, MinHao; Steiner, Andreas; Moeckel, Rico; Delbruck, Tobi

    2015-04-01

    Optical flow sensors have been a long running theme in neuromorphic vision sensors which include circuits that implement the local background intensity adaptation mechanism seen in biological retinas. This paper reports a bio-inspired optical motion sensor aimed towards miniature robotic and aerial platforms. It combines a 20 × 20 continuous-time CMOS silicon retina vision sensor with a DSP microcontroller. The retina sensor has pixels that have local gain control and adapt to background lighting. The system allows the user to validate various motion algorithms without building dedicated custom solutions. Measurements are presented to show that the system can compute global 2D translational motion from complex natural scenes using one particular algorithm: the image interpolation algorithm (I2A). With this algorithm, the system can compute global translational motion vectors at a sample rate of 1 kHz, for speeds up to ±1000 pixels/s, using less than 5 k instruction cycles (12 instructions per pixel) per frame. At 1 kHz sample rate the DSP is 12% occupied with motion computation. The sensor is implemented as a 6 g PCB consuming 170 mW of power.

  3. A triboelectric motion sensor in wearable body sensor network for human activity recognition.

    PubMed

    Hui Huang; Xian Li; Ye Sun

    2016-08-01

    The goal of this study is to design a novel triboelectric motion sensor in a wearable body sensor network for human activity recognition. Physical activity recognition is widely used in well-being management, medical diagnosis and rehabilitation. Rather than using traditional accelerometers, we design a novel wearable sensor system based on triboelectrification. The triboelectric motion sensor can be easily attached to the human body and collects motion signals caused by physical activities. Experiments were conducted to collect data for five common activities: sitting and standing, walking, climbing upstairs, climbing downstairs, and running. The k-Nearest Neighbor (kNN) clustering algorithm is adopted to recognize these activities and validate the feasibility of this new approach. The results show that our system can perform physical activity recognition with a success rate of over 80% for walking, sitting and standing. The triboelectric structure can also be used as an energy harvester for motion harvesting due to its high output voltage under random low-frequency motion.
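
    A minimal sketch of the kNN classification step described above, operating on hand-crafted window features; the feature choice (mean peak voltage and peak rate) and the training values are hypothetical, not taken from the paper.

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=5):
    """Minimal k-nearest-neighbour classifier on feature vectors
    (e.g. statistics of the triboelectric output over a sliding window)."""
    train_x = np.asarray(train_x, dtype=float)
    dists = np.linalg.norm(train_x - np.asarray(query, dtype=float), axis=1)
    nearest = np.asarray(train_y)[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Hypothetical two-feature windows: (mean peak voltage, peak rate in Hz).
train_x = [(0.2, 0.1), (0.3, 0.1), (1.0, 1.8), (1.1, 2.0), (2.5, 2.8), (2.6, 3.0)]
train_y = ["sit/stand", "sit/stand", "walk", "walk", "run", "run"]
print(knn_predict(train_x, train_y, (1.05, 1.9), k=3))   # -> "walk"
```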

  4. Omni-Purpose Stretchable Strain Sensor Based on a Highly Dense Nanocracking Structure for Whole-Body Motion Monitoring.

    PubMed

    Jeon, Hyungkook; Hong, Seong Kyung; Kim, Min Seo; Cho, Seong J; Lim, Geunbae

    2017-12-06

    Here, we report an omni-purpose stretchable strain sensor (OPSS sensor) based on a nanocracking structure for monitoring whole-body motions including both joint-level and skin-level motions. By controlling and optimizing the nanocracking structure, inspired by the spider sensory system, the OPSS sensor is endowed with both high sensitivity (gauge factor ≈ 30) and a wide working range (strain up to 150%) under great linearity (R2 = 0.9814) and fast response time (<30 ms). Furthermore, the fabrication process of the OPSS sensor has advantages of being extremely simple, patternable, integrated circuit-compatible, and reliable in terms of reproducibility. Using the OPSS sensor, we detected various human body motions including both moving of joints and subtle deforming of skin such as pulsation. As specific medical applications of the sensor, we also successfully developed a glove-type hand motion detector and a real-time Morse code communication system for patients with general paralysis. Therefore, considering the outstanding sensing performances, great advantages of the fabrication process, and successful results from a variety of practical applications, we believe that the OPSS sensor is a highly suitable strain sensor for whole-body motion monitoring and has potential for a wide range of applications, such as medical robotics and wearable healthcare devices.
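
    For reference, the gauge factor quoted above is the relative resistance change per unit strain; a tiny sketch with hypothetical values consistent with the reported order of magnitude:

```python
def gauge_factor(delta_r, r0, strain):
    """Gauge factor GF = (dR / R0) / strain for a resistive strain sensor."""
    return (delta_r / r0) / strain

# Hypothetical reading: resistance rises by 3x its baseline at 10% strain.
print(gauge_factor(delta_r=3.0, r0=1.0, strain=0.10))   # GF = 30
```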

  5. Use of standard Webcam and Internet equipment for telepsychiatry treatment of depression among underserved Hispanics.

    PubMed

    Moreno, Francisco A; Chong, Jenny; Dumbauld, James; Humke, Michelle; Byreddy, Seenaiah

    2012-12-01

    Depression affects nearly one in five Americans at some time in their life, causing individual suffering and financial cost. The Internet has made it possible to deliver telemedicine care economically to areas and populations with limited access to specialist or culturally and linguistically congruent care. This study compared the effectiveness for Hispanic patients of depression treatment provided by a psychiatrist through Internet videoconferencing (Webcam intervention) and treatment as usual by a primary care provider. Adults (N=167) with a diagnosis of depression were recruited from a community clinic and were randomly assigned to treatment condition. Webcam participants met remotely each month with the psychiatrist, and treatment-as-usual patients received customary care from their primary care providers, all for six months. At baseline and three and six months, analyses of variance tested differences between conditions in scores on depression rating scales and quality-of-life and functional ability measures. All participants experienced an improvement in depression symptoms. Ratings on the Montgomery-Åsberg Depression Rating Scale by clinicians blind to treatment group and self-ratings on the nine-item Patient Health Questionnaire, Quality of Life Enjoyment and Satisfaction Questionnaire, and Sheehan Disability Scale showed a significant main effect of time. On all four measures, a significant interaction of time by intervention favoring the Webcam group was noted. Results suggest that telepsychiatry delivered through the Internet utilizing commercially available domestic Webcams and standard Internet and computer equipment is effective and acceptable. Use of this technology may help close the gap in access to culturally and linguistically congruent specialists.

  6. Research-grade CMOS image sensors for demanding space applications

    NASA Astrophysics Data System (ADS)

    Saint-Pé, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Belliot, Pierre

    2004-06-01

    Imaging detectors are key elements for optical instruments and sensors on board space missions dedicated to Earth observation (high resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market has long been dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in more and more consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA, and ESA). Throughout the 90s, thanks to their steadily improving performance, CIS started to be used successfully for more and more demanding applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made possible the manufacturing of research-grade CIS that are able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this talk will present the existing and foreseen ways to reach high-level electro-optical performance for CIS. The developments of CIS prototypes built using an imaging CMOS process and of devices based on improved designs will be presented.

  7. Research-grade CMOS image sensors for demanding space applications

    NASA Astrophysics Data System (ADS)

    Saint-Pé, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Belliot, Pierre

    2017-11-01

    Imaging detectors are key elements for optical instruments and sensors on board space missions dedicated to Earth observation (high resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market has long been dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in more and more consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA, and ESA). Throughout the 90s, thanks to their steadily improving performance, CIS started to be used successfully for more and more demanding applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made possible the manufacturing of research-grade CIS that are able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this talk will present the existing and foreseen ways to reach high-level electro-optical performance for CIS. The developments of CIS prototypes built using an imaging CMOS process and of devices based on improved designs will be presented.

  8. Webcam Delivery of the Lidcombe Program for Early Stuttering: A Phase I Clinical Trial

    ERIC Educational Resources Information Center

    O'Brian, Sue; Smith, Kylie; Onslow, Mark

    2014-01-01

    Purpose: The Lidcombe Program is an operant treatment for early stuttering shown with meta-analysis to have a favorable odds ratio. However, many clients are unable to access the treatment because of distance and lifestyle factors. In this Phase I trial, we explored the potential efficacy, practicality, and viability of an Internet webcam Lidcombe…

  9. Wobbly strings: calculating the capture rate of a webcam using the rolling shutter effect in a guitar

    NASA Astrophysics Data System (ADS)

    Cunnah, David

    2014-07-01

    In this paper I propose a method of calculating the time between line captures in a standard complementary metal-oxide-semiconductor (CMOS) webcam using the rolling shutter effect when filming a guitar. The exercise links the concepts of wavelength and frequency, while outlining the basic operation of a CMOS camera through vertical line capture.
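
    The calculation itself is short. Assuming the string's fundamental frequency is known and one apparent oscillation of the "wobbly" string spans N image rows, the time between line captures is 1/(f·N). The numbers below are illustrative, not measurements from the paper.

        # Worked example of the relation described above: if a string vibrating at
        # frequency f appears in the rolling-shutter image as a wave repeating every
        # N rows, each row was captured ~1/(f*N) seconds after the previous one.
        # These values are assumptions for illustration only.

        f_string = 110.0     # open A string fundamental, Hz (assumed)
        rows_per_wave = 60   # rows spanned by one apparent oscillation (read off the image)

        t_line = 1.0 / (f_string * rows_per_wave)   # time between line captures
        t_frame_480 = t_line * 480                  # time to read out a 480-row frame

        print(f"line time = {t_line * 1e6:.1f} microseconds")
        print(f"readout of a 480-row frame = {t_frame_480 * 1e3:.1f} ms")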

  10. Behind the Webcam's Watchful Eye, Online Proctoring Takes Hold

    ERIC Educational Resources Information Center

    Kolowich, Steve

    2013-01-01

    Hailey Schnorr has spent years peering into the bedrooms, kitchens, and dorm rooms of students via Webcam. In her job proctoring online tests for universities, she has learned to focus mainly on students' eyes. Ms. Schnorr works for ProctorU, a company hired by universities to police the integrity of their online courses. ProctorU is part of a…

  11. Wobbly Strings: Calculating the Capture Rate of a Webcam Using the Rolling Shutter Effect in a Guitar

    ERIC Educational Resources Information Center

    Cunnah, David

    2014-01-01

    In this paper I propose a method of calculating the time between line captures in a standard complementary metal-oxide-semiconductor (CMOS) webcam using the rolling shutter effect when filming a guitar. The exercise links the concepts of wavelength and frequency, while outlining the basic operation of a CMOS camera through vertical line capture.

  12. NGEE Arctic Webcam Photographs, Barrow Environmental Observatory, Barrow, Alaska

    DOE Data Explorer

    Bob Busey; Larry Hinzman

    2012-04-01

    The NGEE Arctic Webcam (PTZ Camera) captures two views of seasonal transitions from its generally south-facing position on a tower located at the Barrow Environmental Observatory near Barrow, Alaska. Images are captured every 30 minutes. Historical images are available for download. The camera is operated by the U.S. DOE sponsored Next Generation Ecosystem Experiments - Arctic (NGEE Arctic) project.

  13. The Impact of the Webcam on an Online L2 Interaction

    ERIC Educational Resources Information Center

    Guichon, Nicolas; Cohen, Cathy

    2014-01-01

    It is intuitively felt that visual cues should enhance online communication, and this experimental study aims to test this prediction by exploring the value provided by a webcam in an online L2 pedagogical teacher-to-learner interaction. A total of 40 French undergraduate students with a B2 level in English were asked to describe in English four…

  14. Webcam mouse using face and eye tracking in various illumination environments.

    PubMed

    Lin, Yuan-Pin; Chao, Yi-Ping; Lin, Chung-Chih; Chen, Jyh-Horng

    2005-01-01

    Nowadays, due to improvements in computer performance and the popularity of webcam devices, it has become possible to acquire users' gestures for the human-computer interface with a PC via webcam. However, illumination variation can dramatically decrease the stability and accuracy of skin-based face tracking systems, especially on a notebook or portable platform. In this study we present an effective illumination recognition technique, combining a K-Nearest Neighbor classifier and an adaptive skin model, to realize a real-time tracking system. We have demonstrated that the accuracy of face detection based on the KNN classifier is higher than 92% in various illumination environments. In real-time implementation, the system successfully tracks the user's face and eye features at 15 fps on standard notebook platforms. Although the KNN classifier is initialized with only five environments, the system permits users to define and add their favorite environments to the KNN for computer access. Finally, based on this efficient tracking algorithm, we have developed a "Webcam Mouse" system to control the PC cursor using face and eye tracking. Preliminary studies with "point and click" style PC web games also show promising applications in consumer electronics markets in the future.

  15. Triboelectrification based motion sensor for human-machine interfacing.

    PubMed

    Yang, Weiqing; Chen, Jun; Wen, Xiaonan; Jing, Qingshen; Yang, Jin; Su, Yuanjie; Zhu, Guang; Wu, Wenzuo; Wang, Zhong Lin

    2014-05-28

    We present triboelectrification-based, flexible, reusable, and skin-friendly dry biopotential electrode arrays as motion sensors for tracking muscle motion and human-machine interfacing (HMI). The independently addressable, self-powered sensor arrays have been utilized to record the electric output signals as a mapping figure to accurately identify the degrees of freedom as well as the directions and magnitude of muscle motions. A fast Fourier transform (FFT) technique was employed to analyse the frequency spectra of the obtained electric signals and thus to determine the motion angular velocities. Moreover, the motion sensor arrays produced a short-circuit current density of up to 10.71 mA/m², and an open-circuit voltage as high as 42.6 V with a remarkable signal-to-noise ratio of up to 1000, which enables the devices to accurately record and transduce the motions of human joints, such as the elbow, knee, heel, and even fingers, and thus renders them a superior and unique approach in the field of HMI.
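
    The FFT step described above amounts to locating the dominant peak of the magnitude spectrum of the electrode signal. The sketch below does this for a synthetic signal; the sampling rate, cycle frequency, and noise level are assumptions for the demo, not values from the paper.

        import numpy as np

        def dominant_frequency(signal, fs):
            """Return the dominant frequency (Hz) of a 1-D signal from the FFT
            magnitude spectrum, ignoring the DC component."""
            spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            return freqs[np.argmax(spec)]

        # Hypothetical electrode output during cyclic elbow flexion at 1.5 Hz,
        # sampled at 200 Hz with some noise.
        fs = 200.0
        t = np.arange(0, 5, 1 / fs)
        v = 40.0 * np.sin(2 * np.pi * 1.5 * t) + np.random.default_rng(3).normal(0, 2, t.size)

        f0 = dominant_frequency(v, fs)
        print(f"cycle frequency = {f0:.2f} Hz, angular frequency = {2 * np.pi * f0:.2f} rad/s")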

  16. A novel sensor for two-degree-of-freedom motion measurement of linear nanopositioning stage using knife edge displacement sensing technique

    NASA Astrophysics Data System (ADS)

    Zolfaghari, Abolfazl; Jeon, Seongkyul; Stepanick, Christopher K.; Lee, ChaBum

    2017-06-01

    This paper presents a novel method for measuring two-degree-of-freedom (DOF) motion of flexure-based nanopositioning systems based on optical knife-edge sensing (OKES) technology, which utilizes the interference of two superimposed waves: a geometrical wave from the primary source of light and a boundary diffraction wave from the secondary source. This technique allows for two-DOF motion measurement of the linear and pitch motions of nanopositioning systems. Two capacitive sensors (CSs) are used for a baseline comparison with the proposed sensor by simultaneously measuring the motions of the nanopositioning system. The experimental results show that the proposed sensor closely agrees with the fundamental linear motion of the CS. However, the two-DOF OKES technology was shown to be approximately three times more sensitive to the pitch motion than the CS. The discrepancy in the two sensor outputs is discussed in terms of measuring principle, linearity, bandwidth, control effectiveness, and resolution.

  17. Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality

    PubMed Central

    Mehta, Dhwani; Siddiqui, Mohammad Faridul Haque

    2018-01-01

    The extensive possibilities for applications have made emotion recognition an ineluctable and challenging topic in the field of computer science. The use of non-verbal cues such as gestures, body movement, and facial expressions conveys the feeling and the feedback to the user. This discipline of Human–Computer Interaction places reliance on the algorithmic robustness and the sensitivity of the sensor to ameliorate the recognition. Sensors play a significant role in accurate detection by providing very high-quality input, hence increasing the efficiency and the reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence to machines. This paper presents a brief study of the various approaches and techniques of emotion recognition. The survey covers a succinct review of the databases that are considered as data sets for algorithms detecting emotions from facial expressions. Later, the mixed reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition, and some preliminary results of emotion recognition using MHL are presented. The paper then concludes by comparing results of emotion recognition by the MHL and a regular webcam. PMID:29389845

  18. Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality.

    PubMed

    Mehta, Dhwani; Siddiqui, Mohammad Faridul Haque; Javaid, Ahmad Y

    2018-02-01

    The extensive possibilities for applications have made emotion recognition an ineluctable and challenging topic in the field of computer science. The use of non-verbal cues such as gestures, body movement, and facial expressions conveys the feeling and the feedback to the user. This discipline of Human-Computer Interaction places reliance on the algorithmic robustness and the sensitivity of the sensor to ameliorate the recognition. Sensors play a significant role in accurate detection by providing very high-quality input, hence increasing the efficiency and the reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence to machines. This paper presents a brief study of the various approaches and techniques of emotion recognition. The survey covers a succinct review of the databases that are considered as data sets for algorithms detecting emotions from facial expressions. Later, the mixed reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition, and some preliminary results of emotion recognition using MHL are presented. The paper then concludes by comparing results of emotion recognition by the MHL and a regular webcam.

  19. Development of esMOCA Biomechanic, Motion Capture Instrumentation for Biomechanics Analysis

    NASA Astrophysics Data System (ADS)

    Arendra, A.; Akhmad, S.

    2018-01-01

    This study aims to build a motion capture instrument using inertial measurement unit sensors to assist in the analysis of biomechanics. The sensors used are accelerometers and gyroscopes. Orientation estimation is done by digital motion processing in each sensor node. Nine sensor nodes are attached to the upper limbs and connected to the PC via a wireless sensor network. The kinematic and inverse dynamic models of the upper limb are developed in Simulink SimMechanics. The kinematic model receives streaming data from the sensor nodes mounted on the limbs; its output is the pose of each limb, visualized on a display. The inverse dynamic model outputs the reaction force and reaction moment of each joint based on the limb motion input. Model validation in Simulink against a mathematical model from mechanical analysis showed results that did not differ significantly.

  20. Informed Decision Making for In-Home Use of Motion Sensor-Based Monitoring Technologies

    ERIC Educational Resources Information Center

    Bruce, Courtenay R.

    2012-01-01

    Motion sensor-based monitoring technologies are designed to maintain independence and safety of older individuals living alone. These technologies use motion sensors that are placed throughout older individuals' homes in order to derive information about eating, sleeping, and leaving/returning home habits. Deviations from normal behavioral…

  1. Commercial CMOS image sensors as X-ray imagers and particle beam monitors

    NASA Astrophysics Data System (ADS)

    Castoldi, A.; Guazzoni, C.; Maffessanti, S.; Montemurro, G. V.; Carraresi, L.

    2015-01-01

    CMOS image sensors are widely used in several applications such as mobile handsets, webcams, and digital cameras, among others. Furthermore, they are available across a wide range of resolutions with excellent spectral and chromatic responses. In order to fulfill the need for cheap systems as beam monitors and high-resolution image sensors for scientific applications, we exploited the possibility of using commercial CMOS image sensors as X-ray and proton detectors. Two different sensors have been mounted and tested. An Aptina MT9V034, featuring 752 × 480 pixels with a 6 μm × 6 μm pixel size, has been mounted and successfully tested as a bi-dimensional beam profile monitor, able to take pictures of the incoming proton bunches at the DeFEL beamline (1-6 MeV pulsed proton beam) of the LaBeC of INFN in Florence. The naked sensor is able to successfully detect the interactions of single protons. The sensor point-spread function (PSF) has been qualified with 1 MeV protons and is equal to one pixel (6 μm) r.m.s. in both directions. A second sensor, an MT9M032, featuring 1472 × 1096 pixels with a 2.2 μm × 2.2 μm pixel size, has been mounted on a dedicated board as a high-resolution imager to be used in X-ray imaging experiments with table-top generators. In order to ease and simplify the data transfer and the image acquisition, the system is controlled by a dedicated micro-processor board (DM3730 1 GHz SoC ARM Cortex-A8) on which a modified LINUX kernel has been implemented. The paper presents the architecture of the sensor systems and the results of the experimental measurements.

  2. An ordinary camera in an extraordinary location: Outreach with the Mars Webcam

    NASA Astrophysics Data System (ADS)

    Ormston, T.; Denis, M.; Scuka, D.; Griebel, H.

    2011-09-01

    The European Space Agency's Mars Express mission was launched in 2003 and was Europe's first mission to Mars. On board was a small camera designed to provide 'visual telemetry' of the separation of the Beagle-2 lander. After achieving its goal it was shut down while the primary science mission of Mars Express got underway. In 2007 this camera was reactivated by the flight control team of Mars Express for the purpose of providing public education and outreach, turning it into the 'Mars Webcam'. The camera is a small, 640×480 pixel colour CMOS camera with a wide-angle 30°×40° field of view. This makes it very similar in almost every way to the average home PC webcam. The major difference is that this webcam is not in an average location but is instead in orbit around Mars. On a strict basis of non-interference with the primary science activities, the camera is turned on to provide unique wide-angle views of the planet below. A highly automated process ensures that the observations are scheduled on the spacecraft and then uploaded to the internet as rapidly as possible. There is no intermediate stage, so that visitors to the Mars Webcam blog serve as 'citizen scientists'. Full raw datasets and processing instructions are provided along with a mechanism to allow visitors to comment on the blog. Members of the public are encouraged to use this in either a personal or an educational context and work with the images. We then take their excellent work and showcase it back on the blog. We even apply techniques developed by them to improve the data and webcam experience for others. The accessibility and simplicity of the images also makes the data ideal for educational use, especially as educational projects can then be showcased on the site as inspiration for others. The oft-neglected target audience of space enthusiasts is also important as this allows them to participate as part of an interplanetary instrument team. This paper will cover the history of the project and the technical background behind using the camera and linking the results to an accessible blog format. It will also cover the outreach successes of the project, some of the contributions from the Mars Webcam community, opportunities to use and work with the Mars Webcam and plans for future uses of the camera.

  3. Cardiopulmonary Response to Videogaming: Slaying Monsters Using Motion Sensor Versus Joystick Devices.

    PubMed

    Sherman, Jeffrey D; Sherman, Michael S; Heiman-Patterson, Terry

    2014-10-01

    Replacing physical activity with videogaming has been implicated in causing obesity. Studies have shown that using motion-sensing controllers with activity-promoting videogames expends energy comparable to aerobic exercise; however, effects of motion-sensing controllers have not been examined with traditional (non-exercise-promoting) videogames. We measured indirect calorimetry and heart rate in 14 subjects during rest and traditional videogaming using motion sensor and joystick controllers. Energy expenditure was higher while subjects were playing with the motion sensor (1.30±0.32 kcal/kg/hour) than with the joystick (1.07±0.26 kcal/kg/hour; P<0.01) or resting (0.91±0.24 kcal/kg/hour; P<0.01). Oxygen consumption during videogaming averaged 15.7 percent of predicted maximum for the motion sensor and 11.8 percent of maximum for the joystick. Minute ventilation was higher playing with the motion sensor (10.7±3.5 L/minute) than with the joystick (8.6±1.8 L/minute; P<0.02) or resting (6.7±1.4 L/minute; P<0.001), predominantly because of higher respiratory rates (15.2±4.3 versus 20.3±2.8 versus 20.4±4.2 breaths/minute for resting, the joystick, and the motion sensor, respectively; P<0.001); tidal volume did not change significantly. Peak heart rate during gaming was 16.4 percent higher than resting (78.0±12.0) for the joystick (90.1±15.0; P=0.002) and 17.4 percent higher for the motion sensor (91.6±14.1; P=0.002); mean heart rate did not differ significantly. Playing with a motion sensor burned significantly more calories than with a joystick, but the energy expended was modest. With both consoles, the increased respiratory rate without increased tidal volume and the increased peak heart rate without increased mean heart rate are consistent with psychological stimulation from videogaming rather than a result of exercise. We conclude that using a motion sensor with traditional videogames does not provide adequate energy expenditure for cardiovascular conditioning.

  4. Video Feedback in the Classroom: Development of an Easy-to-Use Learning Environment

    ERIC Educational Resources Information Center

    De Poorter, John; De Jaegher, Lut; De Cock, Mieke; Neuttiens, Tom

    2007-01-01

    Video feedback offers great potential for use in teaching but the relative complexity of the normal set-up of a video camera, a special tripod and a monitor has limited its use in teaching. The authors have developed a computer-webcam set-up which simplifies this. Anyone with an ordinary computer and webcam can learn to control the video feedback…

  5. Miniature low-power inertial sensors: promising technology for implantable motion capture systems.

    PubMed

    Lambrecht, Joris M; Kirsch, Robert F

    2014-11-01

    Inertial and magnetic sensors are valuable for untethered, self-contained human movement analysis. Very recently, complete integration of inertial sensors, magnetic sensors, and processing into single packages has resulted in miniature, low-power devices that could feasibly be employed in an implantable motion capture system. We developed a wearable sensor system based on a commercially available system-in-package inertial and magnetic sensor. We characterized the accuracy of the system in measuring 3-D orientation, with and without magnetometer-based heading compensation, relative to a research-grade optical motion capture system. The root mean square error was less than 4° in dynamic and static conditions about all axes. Using four sensors, recording from seven degrees of freedom of the upper limb (shoulder, elbow, wrist) was demonstrated in one subject during reaching motions. Very high correlation and low error were found across all joints relative to the optical motion capture system. Findings were similar to previous publications using inertial sensors, but at a fraction of the power consumption and size of the sensors. Such ultra-small, low-power sensors provide exciting new avenues for movement monitoring for various movement disorders, movement-based command interfaces for assistive devices, and implementation of kinematic feedback systems for assistive interventions like functional electrical stimulation.
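
    As a generic baseline for this kind of accelerometer-gyroscope orientation estimate (not the authors' method), a complementary filter for roll and pitch is sketched below; heading would additionally require the magnetometer. The sampling rate, blending constant, and static test data are assumptions.

        import numpy as np

        def complementary_filter(acc, gyro, fs, alpha=0.98):
            """Estimate roll and pitch (rad) from 3-axis accelerometer (m/s^2) and
            gyroscope (rad/s) samples: high-pass the integrated gyro rates and
            low-pass the accelerometer tilt, then blend them."""
            dt = 1.0 / fs
            roll, pitch = 0.0, 0.0
            out = np.zeros((len(acc), 2))
            for i, (a, w) in enumerate(zip(acc, gyro)):
                # tilt from the gravity direction measured by the accelerometer
                roll_acc = np.arctan2(a[1], a[2])
                pitch_acc = np.arctan2(-a[0], np.hypot(a[1], a[2]))
                # blend gyro integration (short term) with accelerometer (long term)
                roll = alpha * (roll + w[0] * dt) + (1 - alpha) * roll_acc
                pitch = alpha * (pitch + w[1] * dt) + (1 - alpha) * pitch_acc
                out[i] = roll, pitch
            return out

        # Hypothetical static recording at 100 Hz: gravity along +z, no rotation.
        fs = 100.0
        n = 500
        acc = np.tile([0.0, 0.0, 9.81], (n, 1))
        gyro = np.zeros((n, 3))
        angles = complementary_filter(acc, gyro, fs)
        print(np.degrees(angles[-1]))   # approximately [0, 0] degrees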

  6. The Use of Wearable Inertial Motion Sensors in Human Lower Limb Biomechanics Studies: A Systematic Review

    PubMed Central

    Fong, Daniel Tik-Pui; Chan, Yue-Yan

    2010-01-01

    Wearable motion sensors consisting of accelerometers, gyroscopes and magnetic sensors are readily available nowadays. The small size and low production costs of motion sensors make them a very good tool for human motion analysis. However, data processing and the accuracy of the collected data are important issues for research purposes. In this paper, we aim to review the literature related to the use of inertial sensors in human lower limb biomechanics studies. A systematic search was done in the following search engines: ISI Web of Knowledge, Medline, SportDiscus and IEEE Xplore. Thirty-nine full papers and conference abstracts with related topics were included in this review. The type of sensor involved, data collection methods, study design, validation methods and applications were reviewed. PMID:22163542

  7. The use of wearable inertial motion sensors in human lower limb biomechanics studies: a systematic review.

    PubMed

    Fong, Daniel Tik-Pui; Chan, Yue-Yan

    2010-01-01

    Wearable motion sensors consisting of accelerometers, gyroscopes and magnetic sensors are readily available nowadays. The small size and low production costs of motion sensors make them a very good tool for human motion analysis. However, data processing and the accuracy of the collected data are important issues for research purposes. In this paper, we aim to review the literature related to the use of inertial sensors in human lower limb biomechanics studies. A systematic search was done in the following search engines: ISI Web of Knowledge, Medline, SportDiscus and IEEE Xplore. Thirty-nine full papers and conference abstracts with related topics were included in this review. The type of sensor involved, data collection methods, study design, validation methods and applications were reviewed.

  8. The lucky image-motion prediction for simple scene observation based soft-sensor technology

    NASA Astrophysics Data System (ADS)

    Li, Yan; Su, Yun; Hu, Bin

    2015-08-01

    High resolution is important for Earth remote sensors, but vibration of the sensor platform is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of image stabilization for simple scenes, this paper proposes to utilize soft-sensor technology for image-motion prediction and focuses on algorithm optimization for image-motion prediction. Simulation results indicate that the improved lucky image-motion stabilization algorithm combining a Back Propagation neural network (BP NN) and a support vector machine (SVM) is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the training and computing speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.

  9. Motion and ranging sensor system for through-the-wall surveillance system

    NASA Astrophysics Data System (ADS)

    Black, Jeffrey D.

    2002-08-01

    A portable Through-the-Wall Surveillance System is being developed for law enforcement, counter-terrorism, and military use. The Motion and Ranging Sensor is a radar that operates in a frequency band that allows for surveillance penetration of most non-metallic walls. Changes in the sensed radar returns are analyzed to detect the human motion that would typically be present during a hostage or barricaded suspect scenario. The system consists of a Sensor Unit, a handheld Remote Display Unit, and an optional laptop computer Command Display Console. All units are battery powered and a wireless link provides command and data communication between units. The Sensor Unit is deployed close to the wall or door through which the surveillance is to occur. After deploying the sensor the operator may move freely as required by the scenario. Up to five Sensor Units may be deployed at a single location. A software upgrade to the Command Display Console is also being developed. This software upgrade will combine the motion detected by multiple Sensor Units and determine and track the location of detected motion in two dimensions.

  10. A wearable strain sensor based on a carbonized nano-sponge/silicone composite for human motion detection.

    PubMed

    Yu, Xiao-Guang; Li, Yuan-Qing; Zhu, Wei-Bin; Huang, Pei; Wang, Tong-Tong; Hu, Ning; Fu, Shao-Yun

    2017-05-25

    Melamine sponge, also known as nano-sponge, is widely used as an abrasive cleaner in our daily life. In this work, the fabrication of a wearable strain sensor for human motion detection is first demonstrated with a commercially available nano-sponge as a starting material. The key resistance sensitive material in the wearable strain sensor is obtained by the encapsulation of a carbonized nano-sponge (CNS) with silicone resin. The as-fabricated CNS/silicone sensor is highly sensitive to strain with a maximum gauge factor of 18.42. In addition, the CNS/silicone sensor exhibits a fast and reliable response to various cyclic loading within a strain range of 0-15% and a loading frequency range of 0.01-1 Hz. Finally, the CNS/silicone sensor as a wearable device for human motion detection including joint motion, eye blinking, blood pulse and breathing is demonstrated by attaching the sensor to the corresponding parts of the human body. In consideration of the simple fabrication technique, low material cost and excellent strain sensing performance, the CNS/silicone sensor is believed to have great potential in the next-generation of wearable devices for human motion detection.

  11. Study of Submicron Particle Size Distributions by Laser Doppler Measurement of Brownian Motion.

    DTIC Science & Technology

    1984-10-29

    The report includes an appendix on computer simulation of the Brownian motion sensor signals. Particle mass (size) is obtained in the scattering regime by analysis of the scattered light intensity measured with the Brownian motion sensor. Task V: by application of the Brownian motion sensor in a flat-flame burner, the contractor shall assess the application of this technique for in-situ sizing of submicron particles.

  12. On-Line Detection and Segmentation of Sports Motions Using a Wearable Sensor.

    PubMed

    Kim, Woosuk; Kim, Myunggyu

    2018-03-19

    In sports motion analysis, observation is a prerequisite for understanding the quality of motions. This paper introduces a novel approach to detect and segment sports motions using a wearable sensor for supporting systematic observation. The main goal is, for convenient analysis, to automatically provide motion data, which are temporally classified according to the phase definition. For explicit segmentation, a motion model is defined as a sequence of sub-motions with boundary states. A sequence classifier based on deep neural networks is designed to detect sports motions from continuous sensor inputs. The evaluation on two types of motions (soccer kicking and two-handed ball throwing) verifies that the proposed method is successful for the accurate detection and segmentation of sports motions. By developing a sports motion analysis system using the motion model and the sequence classifier, we show that the proposed method is useful for observation of sports motions by automatically providing relevant motion data for analysis.

  13. Visualizing individual microtubules by bright field microscopy

    NASA Astrophysics Data System (ADS)

    Gutiérrez-Medina, Braulio; Block, Steven M.

    2010-11-01

    Microtubules are slender (~25 nm diameter), filamentous polymers involved in cellular structure and organization. Individual microtubules have been visualized via fluorescence imaging of dye-labeled tubulin subunits and by video-enhanced, differential interference-contrast microscopy of unlabeled polymers using sensitive CCD cameras. We demonstrate the imaging of unstained microtubules using a microscope with conventional bright field optics in conjunction with a webcam-type camera and a light-emitting diode illuminator. The light scattered by microtubules is image-processed to remove the background, reduce noise, and enhance contrast. The setup is based on a commercial microscope with a minimal set of inexpensive components, suitable for implementation in a student laboratory. We show how this approach can be used in a demonstration motility assay, tracking the gliding motions of microtubules driven by the motor protein kinesin.
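
    A rough outline of the image-processing chain described above (background removal, noise reduction by frame averaging, contrast enhancement) can be written in a few lines of NumPy. The sketch below is a generic illustration, not the authors' exact processing; the frame stack, background length, and averaging factor are assumptions.

        import numpy as np

        def enhance_frames(frames, n_bg=50, k=4):
            """Background-subtract, temporally average, and contrast-stretch a stack
            of bright-field frames (shape: [n, h, w], float values in [0, 1])."""
            background = frames[:n_bg].mean(axis=0)   # static background estimate
            corrected = frames - background           # remove fixed-pattern background
            # reduce noise by averaging runs of k consecutive frames
            n = (len(corrected) // k) * k
            averaged = corrected[:n].reshape(-1, k, *corrected.shape[1:]).mean(axis=1)
            # linear contrast stretch of each averaged frame to [0, 1]
            lo = averaged.min(axis=(1, 2), keepdims=True)
            hi = averaged.max(axis=(1, 2), keepdims=True)
            return (averaged - lo) / (hi - lo + 1e-12)

        # Hypothetical stack of 200 webcam frames (small demo size), values in [0, 1].
        frames = np.random.default_rng(4).random((200, 48, 64))
        print(enhance_frames(frames).shape)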

  14. Vital sign monitoring for elderly at home: development of a compound sensor for pulse rate and motion.

    PubMed

    Sum, K W; Zheng, Y P; Mak, A F T

    2005-01-01

    This paper describes the development of a miniaturized wearable vital sign monitor aimed at use by the elderly at home. The development of a compound sensor for pulse rate, motion, and skin temperature is reported. A pair of infrared sensors working in reflection mode was used to detect the pulse rate from various sites over the body, including the wrist and finger. Meanwhile, a motion sensor was used to detect the motion of the body. In addition, the temperature of the skin surface was sensed by a semiconductor temperature sensor. A prototype has been built into a box with dimensions of 2 × 2.5 × 4 cm³. The device includes the sensors, microprocessor, circuits, battery, and a wireless transceiver for communicating data with a data terminal.

  15. Sensitive and Flexible Polymeric Strain Sensor for Accurate Human Motion Monitoring

    PubMed Central

    Khan, Hassan; Kottapalli, Ajay; Asadnia, Mohsen

    2018-01-01

    Flexible electronic devices offer the capability to integrate and adapt with the human body. These devices are mountable on surfaces with various shapes, which allows us to attach them to clothes or directly onto the body. This paper presents a facile fabrication strategy via electrospinning to develop a stretchable and sensitive poly(vinylidene fluoride) (PVDF) nanofibrous strain sensor for human motion monitoring. A complete characterization of a single PVDF nanofiber has been performed. The change in charge generated by the electrospun PVDF strain sensor was employed as a parameter to control the finger motion of a robotic arm. As a proof of concept, we developed a smart glove with five sensors integrated into it to detect finger motion and transfer it to a robotic hand. Our results show that the proposed strain sensors are able to detect tiny motions of the fingers and successfully drive the robotic hand. PMID:29389851

  16. Cable-driven elastic parallel humanoid head with face tracking for Autism Spectrum Disorder interventions.

    PubMed

    Su, Hao; Dickstein-Fischer, Laurie; Harrington, Kevin; Fu, Qiushi; Lu, Weina; Huang, Haibo; Cole, Gregory; Fischer, Gregory S

    2010-01-01

    This paper presents the development of a new prismatic actuation approach and its application in human-safe humanoid head design. To reduce actuator output impedance and mitigate unexpected external shocks, the prismatic actuation method uses cables to drive a piston with a preloaded spring. By leveraging the advantages of parallel manipulators and cable-driven mechanisms, the developed neck has a parallel manipulator embodiment with two cable-driven limbs embedded with preloaded springs and one passive limb. The eye mechanism is adapted for a low-cost webcam with a succinct "ball-in-socket" structure. Based on human head anatomy and biomimetics, the neck has 3-degree-of-freedom (DOF) motion: pan, tilt, and one decoupled roll, while each eye has independent pan and synchronous tilt motion (3-DOF eyes). A Kalman-filter-based face tracking algorithm is implemented to interact with the human. This neck and eye structure is translatable to other human-safe humanoid robots. The robot's appearance reflects the non-threatening image of a penguin, which can be translated into a possible therapeutic intervention for children with Autism Spectrum Disorders.

  17. Real-time weigh-in-motion measurement using fiber Bragg grating sensors

    NASA Astrophysics Data System (ADS)

    Huang, Ying; Palek, Leonard; Strommen, Robert; Worel, Ben; Chen, Genda

    2014-03-01

    Overloaded trucks have long been one of the key causes of accelerated road damage, especially in rural regions where the design loads are expected to be small and in cold regions where the wet-and-dry cycle plays a significant role. To control the design traffic loads and to further guide future road design, periodic weigh stations have been implemented to double-check truck loads. Weigh stations can miss measurements of overloaded vehicles, slow down traffic, and require additional labor. Infrastructure weigh-in-motion sensors, on the other hand, keep traffic flowing and monitor all types of vehicles on the road. However, traditional electrical weigh-in-motion sensors show high electromagnetic interference (EMI), high dependence on environmental conditions such as moisture, and relatively short life cycles, which make them unreliable for long-term weigh-in-motion measurements. Fiber Bragg grating (FBG) sensors, with the unique advantages of compactness, immunity to EMI and moisture, capability for quasi-distributed sensing, and long life cycles, are an excellent candidate for long-term weigh-in-motion measurement. However, FBG sensors also suffer from the fragile nature of the glass material, which makes a good survival rate during sensor installation difficult. In this study, FBG-based weigh-in-motion sensors were packaged in fiber-reinforced polymer (FRP) materials and validated at the MnROAD facility of the Minnesota DOT (MnDOT). The design and layout of the FRP-FBG weigh-in-motion sensors, the field test setup, data acquisition, and data analysis are presented. Upon validation, the FRP-FBG sensors can be applied to weigh-in-motion measurement to assist road management.

  18. DexterNet: An Open Platform for Heterogeneous Body Sensor Networks and Its Applications

    DTIC Science & Technology

    2008-12-19

    [Fragment of a comparison table of body sensor network platforms, including ALARM-NET and DexterNet: sensed signals (motion, ECG, pulse oximetry, temperature, light, PIR, EIP, GPS, air pollution), base stations (PC, PDA, STARGATE), radios (802.15.4, 802.11, Bluetooth), and extensibility notes (e.g., possible via SPINE; use of MICAz and SHIMMER sensor nodes with STARGATE as a relay).]

  19. Motion camera based on a custom vision sensor and an FPGA architecture

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel

    1998-09-01

    A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the event-address protocol associated with a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC which is used for high-level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing such as spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation and the FPGA architecture used in the motion camera system.
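
    The time-of-travel idea is easy to prototype off-line before committing it to hardware: each event carries a pixel address and a timestamp, and the delay between events at neighbouring pixels divided into the pixel pitch gives a local speed. The sketch below is a software illustration under that assumption, not the paper's FPGA architecture; the event format and the 30 µm pitch are made up.

        def time_of_travel_speeds(events, pitch=30e-6):
            """Return local speeds (m/s) from an event-address stream, pairing each
            event (x, y, t) with the most recent event at the horizontally adjacent
            pixel to its left. Timestamps t are in seconds; pitch is in metres."""
            last_time = {}   # (x, y) -> timestamp of the last event at that pixel
            speeds = []
            for x, y, t in events:
                left = (x - 1, y)
                if left in last_time and t > last_time[left]:
                    speeds.append(pitch / (t - last_time[left]))
                last_time[(x, y)] = t
            return speeds

        # Hypothetical stream: an edge sweeping right at 1 pixel per millisecond.
        events = [(x, 5, 0.001 * x) for x in range(10)]
        print(time_of_travel_speeds(events))   # ~0.03 m/s for a 30 µm pitch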

  20. Using Motion-Sensor Games to Encourage Physical Activity for Adults with Intellectual Disability.

    PubMed

    Taylor, Michael J; Taylor, David; Gamboa, Patricia; Vlaev, Ivo; Darzi, Ara

    2016-01-01

    Adults with Intellectual Disability (ID) are at high risk of being in poor health as a result of exercising infrequently; recent evidence indicates this is often due to a lack of opportunities to exercise. This pilot study investigated the use of motion-sensor game technology to enable and encourage exercise for this population. Five adults with ID (two female, three male; aged 34-74 [M = 55.20, SD = 16.71]) used motion-sensor games to exercise at weekly sessions at a day-centre. Session attendees reported that they enjoyed using the games and that they would like to use the games in future. Interviews were conducted with six day-centre staff (four female, two male; aged 27-51 [M = 40.20, SD = 11.28]), which indicated ways in which the motion-sensor games could be improved for use by adults with ID, and barriers to consider in relation to their possible future implementation. Findings indicate that motion-sensor games provide a useful, enjoyable and accessible way for adults with ID to exercise. Future research could investigate implementation of motion-sensor games as a method of exercise promotion for this population on a larger scale.

  1. Ferroelectric Zinc Oxide Nanowire Embedded Flexible Sensor for Motion and Temperature Sensing.

    PubMed

    Shin, Sung-Ho; Park, Dae Hoon; Jung, Joo-Yun; Lee, Min Hyung; Nah, Junghyo

    2017-03-22

    We report a simple method to realize a multifunctional flexible motion sensor using ferroelectric lithium-doped ZnO-PDMS. The ferroelectric layer enables piezoelectric dynamic sensing and provides additional motion information to more precisely discriminate different motions. The PEDOT:PSS-functionalized AgNWs, working as electrode layers for the piezoelectric sensing layer, resistively detect changes in both movement and temperature. Thus, through the optimal integration of both elements, the sensing limit, accuracy, and functionality can be further expanded. The method introduced here is a simple and effective route to realize a high-performance flexible motion sensor with integrated multifunctionality.

  2. A Soft Sensor-Based Three-Dimensional (3-D) Finger Motion Measurement System

    PubMed Central

    Park, Wookeun; Ro, Kyongkwan; Kim, Suin; Bae, Joonbum

    2017-01-01

    In this study, a soft sensor-based three-dimensional (3-D) finger motion measurement system is proposed. The sensors, made of the soft material Ecoflex, comprise embedded microchannels filled with a conductive liquid metal (EGaIn). The superior elasticity, light weight, and sensitivity of soft sensors allow them to be embedded in environments in which conventional sensors cannot be. Complicated finger joints, such as the carpometacarpal (CMC) joint of the thumb, are modeled to specify the location of the sensors. Algorithms to decouple the signals from the soft sensors are proposed to extract the pure flexion, extension, abduction, and adduction joint angles. The performance of the proposed system and algorithms is verified by comparison with a camera-based motion capture system. PMID:28241414

  3. Beyond detection: nuclear physics with a webcam in an educational setting

    NASA Astrophysics Data System (ADS)

    Pallone, A.; Barnes, P.

    2016-09-01

    Basic understanding of nuclear science enhances our daily-life experience in many areas, such as the environment, medicine, electric power generation, and even politics. Yet typical school curricula do not provide for experiments that explore the topic. We present a means by which educators can use the ubiquitous webcam and inexpensive sources of radiation to lead their students in a quantitative exploration of radioactivity, radiation, and the applications of nuclear physics.

  4. Evaluation of the traffic parameters in a metropolitan area by fusing visual perceptions and CNN processing of webcam images.

    PubMed

    Faro, Alberto; Giordano, Daniela; Spampinato, Concetto

    2008-06-01

    This paper proposes a traffic monitoring architecture based on a high-speed communication network whose nodes are equipped with fuzzy processors and cellular neural network (CNN) embedded systems. It implements a real-time mobility information system where visual human perceptions sent by people working on the territory and video-sequences of traffic taken from webcams are jointly processed to evaluate the fundamental traffic parameters for every street of a metropolitan area. This paper presents the whole methodology for data collection and analysis and compares the accuracy and the processing time of the proposed soft computing techniques with other existing algorithms. Moreover, this paper discusses when and why it is recommended to fuse the visual perceptions of the traffic with the automated measurements taken from the webcams to compute the maximum traveling time that is likely needed to reach any destination in the traffic network.

  5. Remote detection of mental workload changes using cardiac parameters assessed with a low-cost webcam.

    PubMed

    Bousefsaf, Frédéric; Maaoui, Choubeila; Pruski, Alain

    2014-10-01

    We introduce a new framework for detecting mental workload changes using video frames obtained from a low-cost webcam. Image processing in addition to a continuous wavelet transform filtering method were developed and applied to remove major artifacts and trends on raw webcam photoplethysmographic signals. The measurements are performed on human faces. To induce stress, we have employed a computerized and interactive Stroop color word test on a set composed by twelve participants. The electrodermal activity of the participants was recorded and compared to the mental workload curve assessed by merging two parameters derived from the pulse rate variability and photoplethysmographic amplitude fluctuations, which reflect peripheral vasoconstriction changes. The results exhibit strong correlation between the two measurement techniques. This study offers further support for the applicability of mental workload detection by remote and low-cost means, providing an alternative to conventional contact techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
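
    The starting point of such webcam photoplethysmography is a raw trace formed from the mean green-channel intensity over a facial region of interest. The sketch below shows that step with a simple moving-average detrend; it is not the authors' wavelet-based pipeline, and the frame format, ROI, and window length are assumptions.

        import numpy as np

        def webcam_ppg(frames, roi, win=31):
            """Extract a raw PPG trace as the mean green-channel value inside a fixed
            face region of interest, then remove the slow trend with a moving average.
            frames: [n, h, w, 3] uint8 (RGB or BGR); roi: (y0, y1, x0, x1)."""
            y0, y1, x0, x1 = roi
            green = frames[:, y0:y1, x0:x1, 1].mean(axis=(1, 2))
            kernel = np.ones(win) / win          # ~1 s window at 30 fps
            trend = np.convolve(green, kernel, mode="same")
            return green - trend

        # Hypothetical 10 s webcam clip at 30 fps with a fixed ROI over the forehead.
        frames = np.random.default_rng(5).integers(0, 256, (300, 120, 160, 3), dtype=np.uint8)
        ppg = webcam_ppg(frames, roi=(10, 50, 40, 120))
        print(ppg.shape)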

  6. A new technique for detecting colored macro plastic debris on beaches using webcam images and CIELUV.

    PubMed

    Kataoka, Tomoya; Hinata, Hirofumi; Kako, Shin'ichiro

    2012-09-01

    We have developed a technique for detecting the pixels of colored macro plastic debris (plastic pixels) using photographs taken by a webcam installed on Sodenohama beach, Tobishima Island, Japan. The technique involves generating color references using a uniform color space (CIELUV) to detect plastic pixels and removing misdetected pixels by applying a composite image method. This technique demonstrated superior performance in terms of detecting plastic pixels of various colors compared to the previous method which used the lightness values in the CIELUV color space. We also obtained a 10-month time series of the quantity of plastic debris by combining a projective transformation with this technique. By sequential monitoring of plastic debris quantity using webcams, it is possible to clean up beaches systematically, to clarify the transportation processes of plastic debris in oceans and coastal seas and to estimate accumulation rates on beaches. Copyright © 2012 Elsevier Ltd. All rights reserved.
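
    The core of the technique is a colour-distance test in CIELUV: a pixel is flagged as plastic if its (L*, u*, v*) coordinates fall close enough to one of the reference debris colours. The sketch below illustrates that test with scikit-image; the reference colours and distance threshold are assumptions, and the composite-image false-positive removal step is omitted.

        import numpy as np
        from skimage.color import rgb2luv

        def detect_plastic_pixels(rgb_image, reference_rgbs, max_dist=25.0):
            """Flag pixels whose CIELUV coordinates lie within max_dist of any
            reference debris colour. rgb_image: float array in [0, 1], shape (h, w, 3)."""
            luv = rgb2luv(rgb_image)
            refs = rgb2luv(np.asarray(reference_rgbs, dtype=float).reshape(-1, 1, 3))
            mask = np.zeros(rgb_image.shape[:2], dtype=bool)
            for ref in refs[:, 0, :]:
                dist = np.linalg.norm(luv - ref, axis=-1)   # Euclidean distance in CIELUV
                mask |= dist < max_dist
            return mask

        # Hypothetical beach image and two reference debris colours (blue and orange).
        img = np.random.default_rng(6).random((240, 320, 3))
        refs = [[0.1, 0.3, 0.8], [0.9, 0.5, 0.1]]
        print(detect_plastic_pixels(img, refs).mean(), "fraction of pixels flagged")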

  7. The Vestibular System and Human Dynamic Space Orientation

    NASA Technical Reports Server (NTRS)

    Meiry, J. L.

    1966-01-01

    The motion sensors of the vestibular system are studied to determine their role in human dynamic space orientation and manual vehicle control. The investigation yielded control models for the sensors, descriptions of the subsystems for eye stabilization, and demonstrations of the effects of motion cues on closed loop manual control. Experiments on the abilities of subjects to perceive a variety of linear motions provided data on the dynamic characteristics of the otoliths, the linear motion sensors. Angular acceleration threshold measurements supplemented knowledge of the semicircular canals, the angular motion sensors. Mathematical models are presented to describe the known control characteristics of the vestibular sensors, relating subjective perception of motion to objective motion of a vehicle. The vestibular system, the neck rotation proprioceptors and the visual system form part of the control system which maintains the eye stationary relative to a target or a reference. The contribution of each of these systems was identified through experiments involving head and body rotations about a vertical axis. Compensatory eye movements in response to neck rotation were demonstrated and their dynamic characteristics described by a lag-lead model. The eye motions attributable to neck rotations and vestibular stimulation obey superposition when both systems are active. Human operator compensatory tracking is investigated in simple vehicle orientation control system with stable and unstable controlled elements. Control of vehicle orientation to a reference is simulated in three modes: visual, motion and combined. Motion cues sensed by the vestibular system through tactile sensation enable the operator to generate more lead compensation than in fixed base simulation with only visual input. The tracking performance of the human in an unstable control system near the limits of controllability is shown to depend heavily upon the rate information provided by the vestibular sensors.

  8. Thermal Property Analysis of Axle Load Sensors for Weighing Vehicles in Weigh-in-Motion System

    PubMed Central

    Burnos, Piotr; Gajda, Janusz

    2016-01-01

    Systems which permit the weighing of vehicles in motion are called dynamic Weigh-in-Motion scales. In such systems, axle load sensors are embedded in the pavement. Among the influencing factors that negatively affect weighing accuracy is the pavement temperature. This paper presents a detailed analysis of this phenomenon and describes the properties of polymer, quartz and bending plate load sensors. The studies were conducted in two ways: at roadside Weigh-in-Motion sites and at a laboratory using a climate chamber. For accuracy assessment of roadside systems, the reference vehicle method was used. The pavement temperature influence on the weighing error was experimentally investigated as well as a non-uniform temperature distribution along and across the Weigh-in-Motion site. Tests carried out in the climatic chamber allowed the influence of temperature on the sensor intrinsic error to be determined. The results presented clearly show that all kinds of sensors are temperature sensitive. This is a new finding, as up to now the quartz and bending plate sensors were considered insensitive to this factor. PMID:27983704

  9. Self-adapted and tunable graphene strain sensors for detecting both subtle and large human motions.

    PubMed

    Tao, Lu-Qi; Wang, Dan-Yang; Tian, He; Ju, Zhen-Yi; Liu, Ying; Pang, Yu; Chen, Yuan-Quan; Yang, Yi; Ren, Tian-Ling

    2017-06-22

    Conventional strain sensors rarely have both a high gauge factor and a large strain range simultaneously, so they can only be used in specific situations where only a high sensitivity or a large strain range is required. However, for detecting human motions that include both subtle and large motions, these strain sensors can't meet the diverse demands simultaneously. Here, we come up with laser patterned graphene strain sensors with self-adapted and tunable performance for the first time. A series of strain sensors with either an ultrahigh gauge factor or a preferable strain range can be fabricated simultaneously via one-step laser patterning, and are suitable for detecting all human motions. The strain sensors have a GF of up to 457 with a strain range of 35%, or have a strain range of up to 100% with a GF of 268. Most importantly, the performance of the strain sensors can be easily tuned by adjusting the patterns of the graphene, so that the sensors can meet diverse demands in both subtle and large motion situations. The graphene strain sensors show significant potential in applications such as wearable electronics, health monitoring and intelligent robots. Furthermore, the facile, fast and low-cost fabrication method will make them possible and practical to be used for commercial applications in the future.

  10. Biomechanics of the Sensor–Tissue Interface—Effects of Motion, Pressure, and Design on Sensor Performance and Foreign Body Response—Part II: Examples and Application

    PubMed Central

    Helton, Kristen L; Ratner, Buddy D; Wisniewski, Natalie A

    2011-01-01

    This article is the second part of a two-part review in which we explore the biomechanics of the sensor–tissue interface as an important aspect of continuous glucose sensor biocompatibility. Part I, featured in this issue of Journal of Diabetes Science and Technology, describes a theoretical framework of how biomechanical factors such as motion and pressure (typically micromotion and micropressure) affect tissue physiology around a sensor and in turn, impact sensor performance. Here in Part II, a literature review is presented that summarizes examples of motion or pressure affecting sensor performance. Data are presented that show how both acute and chronic forces can impact continuous glucose monitor signals. Also presented are potential strategies for countering the ill effects of motion and pressure on glucose sensors. Improved engineering and optimized chemical biocompatibility have advanced sensor design and function, but we believe that mechanical biocompatibility, a rarely considered factor, must also be optimized in order to achieve an accurate, long-term, implantable sensor. PMID:21722579

  11. Training to acquire psychomotor skills for endoscopic endonasal surgery using a personal webcam trainer.

    PubMed

    Hirayama, Ryuichi; Fujimoto, Yasunori; Umegaki, Masao; Kagawa, Naoki; Kinoshita, Manabu; Hashimoto, Naoya; Yoshimine, Toshiki

    2013-05-01

    Existing training methods for neuroendoscopic surgery have mainly emphasized the acquisition of anatomical knowledge and procedures for operating an endoscope and instruments. For laparoscopic surgery, various training systems have been developed to teach handling of an endoscope as well as the manipulation of instruments for speedy and precise endoscopic performance using both hands. In endoscopic endonasal surgery (EES), especially using a binostril approach to the skull base and intradural lesions, the learning of more meticulous manipulation of instruments is mandatory, and it may be necessary to develop another type of training method for acquiring psychomotor skills for EES. Authors of the present study developed an inexpensive, portable personal trainer using a webcam and objectively evaluated its utility. Twenty-five neurosurgeons volunteered for this study and were divided into 2 groups, a novice group (19 neurosurgeons) and an experienced group (6 neurosurgeons). Before and after the exercises of set tasks with a webcam box trainer, the basic endoscopic skills of each participant were objectively assessed using the virtual reality simulator (LapSim) while executing 2 virtual tasks: grasping and instrument navigation. Scores for the following 11 performance variables were recorded: instrument time, instrument misses, instrument path length, and instrument angular path (all of which were measured in both hands), as well as tissue damage, max damage, and finally overall score. Instrument time was indicated as movement speed; instrument path length and instrument angular path as movement efficiency; and instrument misses, tissue damage, and max damage as movement precision. In the novice group, movement speed and efficiency were significantly improved after the training. In the experienced group, significant improvement was not shown in the majority of virtual tasks. Before the training, significantly greater movement speed and efficiency were demonstrated in the experienced group, but no difference in movement precision was shown between the 2 groups. After the training, no significant differences were shown between the 2 groups in the majority of the virtual tasks. Analysis revealed that the webcam trainer improved the basic skills of the novices, increasing movement speed and efficiency without sacrificing movement precision. Novices using this unique webcam trainer showed improvement in psychomotor skills for EES. The authors believe that training in terms of basic endoscopic skills is meaningful and that the webcam training system can play a role in daily off-the-job training for EES.

  12. MATLAB tools for improved characterization and quantification of volcanic incandescence in Webcam imagery; applications at Kilauea Volcano, Hawai'i

    USGS Publications Warehouse

    Patrick, Matthew R.; Kauahikaua, James P.; Antolik, Loren

    2010-01-01

    Webcams are now standard tools for volcano monitoring and are used at observatories in Alaska, the Cascades, Kamchatka, Hawai'i, Italy, and Japan, among other locations. Webcam images allow invaluable documentation of activity and provide a powerful comparative tool for interpreting other monitoring datastreams, such as seismicity and deformation. Automated image processing can improve the time efficiency and rigor of Webcam image interpretation, and potentially extract more information on eruptive activity. For instance, Lovick and others (2008) provided a suite of processing tools that performed such tasks as noise reduction, eliminating uninteresting images from an image collection, and detecting incandescence, with an application to dome activity at Mount St. Helens during 2007. In this paper, we present two very simple automated approaches for improved characterization and quantification of volcanic incandescence in Webcam images at Kilauea Volcano, Hawai`i. The techniques are implemented in MATLAB (version 2009b, Copyright: The Mathworks, Inc.) to take advantage of the ease of matrix operations. Incandescence is a useful indicator of the location and extent of active lava flows and also a potentially powerful proxy for activity levels at open vents. We apply our techniques to a period covering both summit and east rift zone activity at Kilauea during 2008-2009 and compare the results to complementary datasets (seismicity, tilt) to demonstrate their integrative potential. A great strength of this study is the demonstrated success of these tools in an operational setting at the Hawaiian Volcano Observatory (HVO) over the course of more than a year. Although applied only to Webcam images here, the techniques could be applied to any type of sequential images, such as time-lapse photography. We expect that these tools are applicable to many other volcano monitoring scenarios, and the two MATLAB scripts, as they are implemented at HVO, are included in the appendixes. These scripts would require minor to moderate modifications for use elsewhere, primarily to customize directory navigation. If the user has some familiarity with MATLAB, or programming in general, these modifications should be easy. Although we originally anticipated needing the Image Processing Toolbox, the scripts in the appendixes do not require it. Thus, only the base installation of MATLAB is needed. Because fairly basic MATLAB functions are used, we expect that the script can be run successfully by versions earlier than 2009b.
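
    As a rough illustration of the kind of incandescence quantification described above (a sketch, not the authors' MATLAB scripts), incandescent pixels in a night-time Webcam frame can be isolated with a simple brightness test on the red channel and counted as a proxy for activity level; the threshold, colour test, and region of interest below are illustrative assumptions.

      import numpy as np

      def incandescence_index(rgb_frame, red_threshold=200, roi=None):
          """Count bright, red-dominated (incandescent) pixels in a Webcam frame.

          rgb_frame     : HxWx3 uint8 array (R, G, B)
          red_threshold : illustrative brightness cutoff on the red channel
          roi           : optional (row_min, row_max, col_min, col_max) region of interest
          """
          if roi is not None:
              r0, r1, c0, c1 = roi
              rgb_frame = rgb_frame[r0:r1, c0:c1]
          red = rgb_frame[..., 0].astype(float)
          green = rgb_frame[..., 1].astype(float)
          # Incandescence appears at night as pixels that are bright in red
          # and noticeably redder than green.
          mask = (red >= red_threshold) & (red > 1.2 * green)
          return int(mask.sum()), mask

      # Example with a synthetic 480x640 frame containing a small glowing patch
      frame = np.zeros((480, 640, 3), dtype=np.uint8)
      frame[300:320, 400:440, 0] = 255   # red channel "glow"
      frame[300:320, 400:440, 1] = 100
      count, _ = incandescence_index(frame)
      print("incandescent pixel count:", count)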

  13. Ultra-wideband radar motion sensor

    DOEpatents

    McEwan, Thomas E.

    1994-01-01

    A motion sensor is based on ultra-wideband (UWB) radar. UWB radar range is determined by a pulse-echo interval. For motion detection, the sensors operate by staring at a fixed range and then sensing any change in the averaged radar reflectivity at that range. A sampling gate is opened at a fixed delay after the emission of a transmit pulse. The resultant sampling gate output is averaged over repeated pulses. Changes in the averaged sampling gate output represent changes in the radar reflectivity at a particular range, and thus motion.
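
    A schematic sketch of the stare-at-fixed-range principle described in the patent abstract, assuming the sampling-gate output has already been digitized once per transmit pulse; the averaging length and change threshold are illustrative, not values from the patent.

      import numpy as np

      def detect_motion(gate_samples, avg_len=64, threshold=0.05):
          """Schematic change detector for a fixed-range UWB sampling gate.

          gate_samples : 1-D array of sampling-gate outputs, one per transmit pulse
          avg_len      : number of pulses averaged to estimate reflectivity
          threshold    : illustrative change threshold (same units as the samples)

          Returns a boolean array: True wherever the averaged reflectivity at the
          staring range departs from the initial (static-scene) value, i.e. motion.
          """
          kernel = np.ones(avg_len) / avg_len
          averaged = np.convolve(gate_samples, kernel, mode="valid")
          baseline = averaged[0]                 # reflectivity of the static scene
          return np.abs(averaged - baseline) > threshold

      # Example: static clutter with a reflectivity step (something moved) halfway in
      rng = np.random.default_rng(0)
      samples = 1.0 + 0.01 * rng.standard_normal(2000)
      samples[1000:] += 0.2
      print("motion detected:", detect_motion(samples).any())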

  14. Ultra-wideband radar motion sensor

    DOEpatents

    McEwan, T.E.

    1994-11-01

    A motion sensor is based on ultra-wideband (UWB) radar. UWB radar range is determined by a pulse-echo interval. For motion detection, the sensors operate by staring at a fixed range and then sensing any change in the averaged radar reflectivity at that range. A sampling gate is opened at a fixed delay after the emission of a transmit pulse. The resultant sampling gate output is averaged over repeated pulses. Changes in the averaged sampling gate output represent changes in the radar reflectivity at a particular range, and thus motion. 15 figs.

  15. Ultra-wideband radar sensors and networks

    DOEpatents

    Leach, Jr., Richard R; Nekoogar, Faranak; Haugen, Peter C

    2013-08-06

    Ultra wideband radar motion sensors strategically placed in an area of interest communicate with a wireless ad hoc network to provide remote area surveillance. Swept range impulse radar and a heart and respiration monitor combined with the motion sensor further improve discrimination.

  16. Image processing for drawing recognition

    NASA Astrophysics Data System (ADS)

    Feyzkhanov, Rustem; Zhelavskaya, Irina

    2014-03-01

    The task of recognizing the edges of rectangular structures is well known. Still, almost all existing approaches work with static images and place no limit on processing time. We propose applying homography estimation to the video stream that can be obtained from a webcam, and we present an algorithm that can be successfully used for this kind of application. One of the main use cases of such an application is the recognition of drawings made by a person on a piece of paper in front of the webcam.

  17. An alternative cost-effective image processing based sensor for continuous turbidity monitoring

    NASA Astrophysics Data System (ADS)

    Chai, Matthew Min Enn; Ng, Sing Muk; Chua, Hong Siang

    2017-03-01

    Turbidity is the degree to which the optical clarity of water is reduced by impurities in the water. High turbidity values in rivers and lakes promote the growth of pathogens, reduce dissolved oxygen levels and reduce light penetration. The conventional ways of making on-site turbidity measurements involve the use of optical sensors similar to those used in commercial turbidimeters. However, these instruments require frequent maintenance due to biological fouling on the sensors. Thus, image processing was proposed as an alternative technique for continuous turbidity measurement to reduce the frequency of maintenance. The camera was kept out of water to avoid biofouling, while other parts of the system submerged in water can be coated with an anti-fouling surface. The setup developed consists of a webcam, a light source, a microprocessor and a motor that controls the depth of a reference object. The image processing algorithm quantifies the relationship between the number of circles detected on the reference object and the depth of the reference object. By relating the quantified data to turbidity, the setup was able to detect turbidity levels from 20 NTU to 380 NTU with a measurement error of 15.7 percent. The repeatability and sensitivity of the turbidity measurement were found to be satisfactory.
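
    A minimal sketch of the circle-counting idea, assuming OpenCV is used for the Hough circle detection; the Hough parameters and the calibration table mapping extinction depth to NTU are invented for illustration and are not the paper's values.

      import cv2
      import numpy as np

      def count_reference_circles(bgr_image):
          # Count the circular features still visible on the submerged reference object.
          gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
          gray = cv2.medianBlur(gray, 5)   # suppress webcam noise
          circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
                                     param1=100, param2=30, minRadius=4, maxRadius=40)
          return 0 if circles is None else circles.shape[1]

      # Illustrative calibration: depth (cm) at which the circles become undetectable
      # versus turbidity (NTU), fitted beforehand against a commercial turbidimeter.
      extinction_depth_cm = np.array([30.0, 20.0, 12.0, 7.0, 4.0])
      turbidity_ntu = np.array([20.0, 80.0, 180.0, 280.0, 380.0])

      def turbidity_from_extinction(depth_cm):
          # np.interp needs increasing x values, so interpolate on reversed arrays.
          return float(np.interp(depth_cm, extinction_depth_cm[::-1], turbidity_ntu[::-1]))

      # In operation the motor lowers the reference object step by step; the depth at
      # which count_reference_circles() drops to zero is passed to turbidity_from_extinction().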

  18. Lake Ice Monitoring with Webcams

    NASA Astrophysics Data System (ADS)

    Xiao, M.; Rothermel, M.; Tom, M.; Galliani, S.; Baltsavias, E.; Schindler, K.

    2018-05-01

    Continuous monitoring of climate indicators is important for understanding the dynamics and trends of the climate system. Lake ice has been identified as one such indicator, and has been included in the list of Essential Climate Variables (ECVs). Currently there are two main ways to survey lake ice cover and its change over time, in-situ measurements and satellite remote sensing. The challenge with both of them is to ensure sufficient spatial and temporal resolution. Here, we investigate the possibility to monitor lake ice with video streams acquired by publicly available webcams. Main advantages of webcams are their high temporal frequency and dense spatial sampling. By contrast, they have low spectral resolution and limited image quality. Moreover, the uncontrolled radiometry and low, oblique viewpoints result in heavily varying appearance of water, ice and snow. We present a workflow for pixel-wise semantic segmentation of images into these classes, based on state-of-the-art encoder-decoder Convolutional Neural Networks (CNNs). The proposed segmentation pipeline is evaluated on two sequences featuring different ground sampling distances. The experiment suggests that (networks of) webcams have great potential for lake ice monitoring. The overall per-pixel accuracies for both tested data sets exceed 95 %. Furthermore, per-image discrimination between ice-on and ice-off conditions, derived by accumulating per-pixel results, is 100 % correct for our test data, making it possible to precisely recover freezing and thawing dates.

  19. Non-contact and noise tolerant heart rate monitoring using microwave doppler sensor and range imagery.

    PubMed

    Matsunag, Daichi; Izumi, Shintaro; Okuno, Keisuke; Kawaguchi, Hiroshi; Yoshimoto, Masahiko

    2015-01-01

    This paper describes a non-contact and noise-tolerant heart beat monitoring system. The proposed system comprises a microwave Doppler sensor and range imagery using Microsoft Kinect™. A possible application of the proposed system is driver health monitoring. We introduce a sensor fusion approach to minimize the heart beat detection error. The proposed algorithm can subtract a body motion artifact from the Doppler sensor output using time-frequency analysis. The body motion artifact is a crucially important problem for biosignal monitoring using a microwave Doppler sensor. The body motion speed is obtainable from range imagery, which has 5-mm resolution at 30-cm distance. Measurement results show that the success rate of the heart beat detection is improved by about 75% on average when the Doppler wave is degraded by the body motion artifact.

  20. Artifact Noise Removal Techniques on Seismocardiogram Using Two Tri-Axial Accelerometers

    PubMed Central

    Luu, Loc; Dinh, Anh

    2018-01-01

    The aim of this study is the investigation of motion noise removal techniques using a two-accelerometer sensor system and various placements of the sensors during gentle movement and walking of the patients. A Wi-Fi based data acquisition system and a framework in Matlab are developed to collect and process data while the subjects are in motion. The tests include eight volunteers who have no record of heart disease. The walking and running data on the subjects are analyzed to find the minimal-noise bandwidth of the SCG signal. This bandwidth is used to design filters in the motion noise removal techniques and peak signal detection. There are two main techniques of combining signals from the two sensors to mitigate the motion artifact: analog processing and digital processing. The analog processing comprises analog circuits performing adding or subtracting functions and a bandpass filter to remove artifact noises before entering the data acquisition system. The digital processing processes all the data using combinations of total acceleration and z-axis-only acceleration. The two techniques are tested on three placements of the accelerometer sensors, including horizontal, vertical, and diagonal, during gentle motion and walking. In general, the total acceleration and z-axis acceleration are the best techniques for dealing with gentle motion on all sensor placements, improving the average systolic signal-to-noise ratio (SNR) around 2 times and the average diastolic SNR around 3 times compared to traditional methods using only one accelerometer. With walking motion, ADDER and z-axis acceleration are the best techniques on all placements of the sensors on the body, enhancing the average systolic SNR by about 7 times and the average diastolic SNR by about 11 times compared to the one-accelerometer method. Among the sensor placements, the performance of the horizontal placement of the sensors is outstanding compared with the other positions for all motions. PMID:29614821
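
    A minimal sketch of the digital combinations described above (total acceleration and z-axis-only acceleration), band-limited to a low-noise SCG band; the sampling rate and passband are illustrative assumptions, not the study's values.

      import numpy as np
      from scipy.signal import butter, filtfilt

      FS = 500.0  # sampling rate in Hz (illustrative)

      def bandpass(x, lo=5.0, hi=45.0, fs=FS, order=4):
          """Zero-phase band-pass filter restricting the SCG to a low-noise band."""
          b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
          return filtfilt(b, a, x)

      def total_acceleration(acc1, acc2):
          """Digital combination of two tri-axial accelerometers (N x 3 arrays):
          average the sensors, then take the vector magnitude."""
          mean_acc = 0.5 * (acc1 + acc2)
          return np.linalg.norm(mean_acc, axis=1)

      def z_only(acc1, acc2):
          """Alternative combination: average only the z-axis (dorso-ventral) channels."""
          return 0.5 * (acc1[:, 2] + acc2[:, 2])

      # Example with random data standing in for the two chest-mounted sensors
      rng = np.random.default_rng(1)
      a1 = rng.standard_normal((5000, 3))
      a2 = rng.standard_normal((5000, 3))
      scg = bandpass(total_acceleration(a1, a2))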

  1. Avoiding space robot collisions utilizing the NASA/GSFC tri-mode skin sensor

    NASA Technical Reports Server (NTRS)

    Prinz, F. B. S.; Mahalingam, S.

    1992-01-01

    A capacitance based proximity sensor, the 'Capaciflector' (Vranish 92), has been developed at the Goddard Space Flight Center of NASA. We had investigated the use of this sensor for avoiding and maneuvering around unexpected objects (Mahalingam 92). The approach developed there would help in executing collision-free gross motions. Another important aspect of robot motion planning is fine motion planning. Let us classify manipulator robot motion planning into two groups at the task level: gross motion planning and fine motion planning. We use the term 'gross planning' where the major degrees of freedom of the robot execute large motions, for example, the motion of a robot in a pick and place type operation. We use the term 'fine motion' to indicate motions of the robot where the large dofs do not move much, and move far less than the minor dofs, such as in inserting a peg in a hole. In this report we describe our experiments and experiences in this area.

  2. Motion perception: behavior and neural substrate.

    PubMed

    Mather, George

    2011-05-01

    Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. WIREs Cogn Sci 2011, 2, 305-314. DOI: 10.1002/wcs.110. For further resources related to this article, please visit the WIREs website. Additional Supporting Information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html. Copyright © 2010 John Wiley & Sons, Ltd.

  3. A Study on the Performance of Low Cost MEMS Sensors in Strong Motion Studies

    NASA Astrophysics Data System (ADS)

    Tanırcan, Gulum; Alçık, Hakan; Kaya, Yavuz; Beyen, Kemal

    2017-04-01

    Recent advances in sensors have helped the growth of local networks. In recent years, many Micro Electro Mechanical System (MEMS)-based accelerometers have been successfully used in seismology and earthquake engineering projects. This is basically due to the increased precision obtained in these downsized instruments. Moreover, they are cheaper alternatives to force-balance type accelerometers. In Turkey, though MEMS-based accelerometers have been used in various individual applications such as magnitude and location determination of earthquakes, structural health monitoring, and earthquake early warning systems, MEMS-based strong motion networks are not currently available in other populated areas of the country. Motivation of this study comes from the fact that, if MEMS sensors are qualified to record strong motion parameters of large earthquakes, a dense network can be formed at an affordable price in highly populated areas. The goals of this study are 1) to test the performance of the MEMS sensors available in the inventory of the Institute through shake table tests, and 2) to set up a small-scale network for observing online data transfer speed to a trusted in-house routine. In order to evaluate the suitability of the sensors for strong motion related studies, the MEMS sensors and a reference sensor are tested under excitations of sweeping waves as well as scaled earthquake recordings. Amplitude responses and correlation coefficients versus frequency are compared. As for earthquake recordings, comparisons are carried out in terms of strong motion (SM) parameters (PGA, PGV, AI, CAV) and the elastic response of structures (Sa). Furthermore, this paper also focuses on sensitivity and selectivity of sensor performance in the time-frequency domain to compare different sensing characteristics, and analyzes the basic strong motion parameters that influence the design majors. Results show that the cheapest MEMS sensors under investigation are able to record the mid-frequency dominant SM parameters PGV and CAV with high correlation. PGA and AI, the high-frequency components of the ground motion, are underestimated. Such a difference, on the other hand, does not manifest itself in intensity estimations. PGV and CAV values from the reference and MEMS sensors converge to the same seismic intensity level. Hence a strong motion network with MEMS sensors could be a modest option to produce PGV-based damage impact estimates of an urban area under large-magnitude earthquake threats in the immediate vicinity.

  4. Electro-Optic Segment-Segment Sensors for Radio and Optical Telescopes

    NASA Technical Reports Server (NTRS)

    Abramovici, Alex

    2012-01-01

    A document discusses an electro-optic sensor that consists of a collimator, attached to one segment, and a quad diode, attached to an adjacent segment. Relative segment-segment motion causes the beam from the collimator to move across the quad diode, thus generating a measureable electric signal. This sensor type, which is relatively inexpensive, can be configured as an edge sensor, or as a remote segment-segment motion sensor.

  5. Virtual Sensors for Advanced Controllers in Rehabilitation Robotics.

    PubMed

    Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Portillo, Eva; Jung, Je Hyung

    2018-03-05

    In order to properly control rehabilitation robotic devices, the measurement of the interaction force and motion between patient and robot is essential. Usually, however, this is a complex task that requires the use of accurate sensors which increase the cost and the complexity of the robotic device. In this work, we address the development of virtual sensors that can be used as an alternative to actual force and motion sensors for the Universal Haptic Pantograph (UHP) rehabilitation robot for upper limb training. These virtual sensors estimate the force and motion at the contact point where the patient interacts with the robot, using the mathematical model of the robotic device and measurements from low-cost position sensors. To demonstrate the performance of the proposed virtual sensors, they have been implemented in an advanced position/force controller of the UHP rehabilitation robot and experimentally evaluated. The experimental results reveal that the controller based on the virtual sensors has performance similar to the one using direct measurement (less than 0.005 m and 1.5 N difference in mean error). Hence, the developed virtual sensors for estimating interaction force and motion can be adopted to replace actual precise but normally high-priced sensors, which are fundamental components for the advanced control of rehabilitation robotic devices.

  6. 3D Measurement of Forearm and Upper Arm during Throwing Motion using Body Mounted Sensor

    NASA Astrophysics Data System (ADS)

    Koda, Hideharu; Sagawa, Koichi; Kuroshima, Kouta; Tsukamoto, Toshiaki; Urita, Kazutaka; Ishibashi, Yasuyuki

    The aim of this study is to propose a measurement method for the three-dimensional (3D) movement of the forearm and upper arm during the pitching motion of baseball using inertial sensors, without requiring elaborate sensor installation. Although high-accuracy measurement of sports motion is currently achieved using optical motion capture systems, these have disadvantages such as camera calibration and restrictions on the measurement location. In contrast, the proposed method for 3D measurement of pitching motion using body-mounted sensors provides the trajectory and orientation of the upper arm by integrating the acceleration and angular velocity measured on the upper limb. The trajectory of the forearm is derived so that the elbow joint axis of the forearm corresponds to that of the upper arm. The spatial relation between the upper limb and the sensor system is obtained by performing predetermined movements of the upper limb and utilizing the angular velocity and gravitational acceleration. The integration error is corrected so that the estimated final position, velocity and posture of the upper limb agree with the actual ones. The experimental results of the measurement of pitching motion show that the trajectories of the shoulder, elbow and wrist estimated by the proposed method are highly correlated with those from the motion capture system, within an estimation error of about 10%.

  7. Movement Behaviour of Traditionally Managed Cattle in the Eastern Province of Zambia Captured Using Two-Dimensional Motion Sensors.

    PubMed

    Lubaba, Caesar H; Hidano, Arata; Welburn, Susan C; Revie, Crawford W; Eisler, Mark C

    2015-01-01

    Two-dimensional motion sensors use electronic accelerometers to record the lying, standing and walking activity of cattle. Movement behaviour data collected automatically using these sensors over prolonged periods of time could be of use to stakeholders making management and disease control decisions in rural sub-Saharan Africa leading to potential improvements in animal health and production. Motion sensors were used in this study with the aim of monitoring and quantifying the movement behaviour of traditionally managed Angoni cattle in Petauke District in the Eastern Province of Zambia. This study was designed to assess whether motion sensors were suitable for use on traditionally managed cattle in two veterinary camps in Petauke District in the Eastern Province of Zambia. In each veterinary camp, twenty cattle were selected for study. Each animal had a motion sensor placed on its hind leg to continuously measure and record its movement behaviour over a two week period. Analysing the sensor data using principal components analysis (PCA) revealed that the majority of variability in behaviour among studied cattle could be attributed to their behaviour at night and in the morning. The behaviour at night was markedly different between veterinary camps; while differences in the morning appeared to reflect varying behaviour across all animals. The study results validate the use of such motion sensors in the chosen setting and highlight the importance of appropriate data summarisation techniques to adequately describe and compare animal movement behaviours if association to other factors, such as location, breed or health status are to be assessed.

  8. Use of digital webcam images to track spring green-up in a deciduous broadleaf forest.

    PubMed

    Richardson, Andrew D; Jenkins, Julian P; Braswell, Bobby H; Hollinger, David Y; Ollinger, Scott V; Smith, Marie-Louise

    2007-05-01

    Understanding relationships between canopy structure and the seasonal dynamics of photosynthetic uptake of CO2 by forest canopies requires improved knowledge of canopy phenology at eddy covariance flux tower sites. We investigated whether digital webcam images could be used to monitor the trajectory of spring green-up in a deciduous northern hardwood forest. A standard, commercially available webcam was mounted at the top of the eddy covariance tower at the Bartlett AmeriFlux site. Images were collected each day around midday. Red, green, and blue color channel brightness data for a 640 x 100-pixel region-of-interest were extracted from each image. We evaluated the green-up signal extracted from webcam images against changes in the fraction of incident photosynthetically active radiation that is absorbed by the canopy (fAPAR), a broadband normalized difference vegetation index (NDVI), and the light-saturated rate of canopy photosynthesis (Amax), inferred from eddy flux measurements. The relative brightness of the green channel (green %) was relatively stable through the winter months. A steady rising trend in green % began around day 120 and continued through day 160, at which point a stable plateau was reached. The relative brightness of the blue channel (blue %) also responded to spring green-up, although there was more day-to-day variation in the signal because blue % was more sensitive to changes in the quality (spectral distribution) of incident radiation. Seasonal changes in blue % were most similar to those in fAPAR and broadband NDVI, whereas changes in green % proceeded more slowly, and were drawn out over a longer period of time. Changes in Amax lagged green-up by at least a week. We conclude that webcams offer an inexpensive means by which phenological changes in the canopy state can be quantified. A network of cameras could offer a novel opportunity to implement a regional or national phenology monitoring program.
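
    A minimal sketch of the colour-channel extraction described above: relative red, green and blue brightness (green % being the green fraction) computed over a fixed region of interest of each midday image; the ROI coordinates are an assumption.

      import numpy as np

      def channel_percentages(rgb_frame, roi=(0, 100, 0, 640)):
          """Relative channel brightness (red %, green %, blue %) for a webcam ROI.

          rgb_frame : HxWx3 uint8 image
          roi       : (row_min, row_max, col_min, col_max), e.g. a 640 x 100 strip
          """
          r0, r1, c0, c1 = roi
          patch = rgb_frame[r0:r1, c0:c1].astype(float)
          totals = patch.reshape(-1, 3).sum(axis=0)   # summed R, G, B brightness
          return totals / totals.sum()                # fractions summing to 1

      # Applied to one image per day around midday, the green fraction traces the
      # spring green-up trajectory, e.g.:
      # daily_green = [channel_percentages(img)[1] for img in midday_images]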

  9. Estimation of heart rate variability using a compact radiofrequency motion sensor.

    PubMed

    Sugita, Norihiro; Matsuoka, Narumi; Yoshizawa, Makoto; Abe, Makoto; Homma, Noriyasu; Otake, Hideharu; Kim, Junghyun; Ohtaki, Yukio

    2015-12-01

    Physiological indices that reflect autonomic nervous activity are considered useful for monitoring people's health on a daily basis. A number of such indices are derived from heart rate variability, which can be obtained by a radiofrequency (RF) motion sensor without making physical contact with the user's body. However, the bulkiness of the RF motion sensors used in previous studies makes them unsuitable for home use. In this study, a new method to measure heart rate variability using a compact RF motion sensor that is sufficiently small to fit in a user's shirt pocket is proposed. To extract a heart-rate-related component from the sensor signal, an algorithm that optimizes a digital filter based on the power spectral density of the signal is proposed. The signals of the RF motion sensor were measured for 29 subjects in the resting state and their heart rate variability was estimated from the measured signals using the proposed method and a conventional method. The correlation coefficient between the true heart rate and the heart rate estimated by the proposed method was 0.69. Further, the experimental results showed the viability of the RF sensor for monitoring autonomic nervous activity. However, some improvements, such as controlling the direction of sensing, were necessary for stable measurement. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
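
    A sketch of the general idea of tuning a band-pass filter to the dominant cardiac peak of the signal's power spectral density; this is not the authors' specific optimization, and the search band, passband half-width and filter order are assumptions.

      import numpy as np
      from scipy.signal import welch, butter, filtfilt

      def heart_band_filter(rf_signal, fs, search_band=(0.8, 2.5), half_width=0.3, order=3):
          """Band-pass an RF motion-sensor signal around its dominant cardiac peak.

          search_band : Hz range in which to look for the heart-rate peak (48-150 bpm)
          half_width  : half-width of the resulting passband in Hz (illustrative)
          """
          f, pxx = welch(rf_signal, fs=fs, nperseg=min(len(rf_signal), 4096))
          in_band = (f >= search_band[0]) & (f <= search_band[1])
          f_peak = f[in_band][np.argmax(pxx[in_band])]   # dominant cardiac frequency
          lo = max(f_peak - half_width, 0.1) / (fs / 2)
          hi = (f_peak + half_width) / (fs / 2)
          b, a = butter(order, [lo, hi], btype="band")
          return filtfilt(b, a, rf_signal), f_peak

      # Example: a 1.2 Hz (72 bpm) component buried in noise, sampled at 100 Hz
      fs = 100.0
      t = np.arange(0, 60, 1 / fs)
      x = 0.3 * np.sin(2 * np.pi * 1.2 * t) + np.random.default_rng(2).standard_normal(t.size)
      filtered, f_peak = heart_band_filter(x, fs)
      print(f"estimated heart rate: {60 * f_peak:.1f} bpm")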

  10. Highly stretchable and wearable graphene strain sensors with controllable sensitivity for human motion monitoring.

    PubMed

    Park, Jung Jin; Hyun, Woo Jin; Mun, Sung Cik; Park, Yong Tae; Park, O Ok

    2015-03-25

    Because of their outstanding electrical and mechanical properties, graphene strain sensors have attracted extensive attention for electronic applications in virtual reality, robotics, medical diagnostics, and healthcare. Although several strain sensors based on graphene have been reported, the stretchability and sensitivity of these sensors remain limited, and also there is a pressing need to develop a practical fabrication process. This paper reports the fabrication and characterization of new types of graphene strain sensors based on stretchable yarns. Highly stretchable, sensitive, and wearable sensors are realized by a layer-by-layer assembly method that is simple, low-cost, scalable, and solution-processable. Because of the yarn structures, these sensors exhibit high stretchability (up to 150%) and versatility, and can detect both large- and small-scale human motions. For this study, wearable electronics are fabricated with implanted sensors that can monitor diverse human motions, including joint movement, phonation, swallowing, and breathing.

  11. A Motion Tracking and Sensor Fusion Module for Medical Simulation.

    PubMed

    Shen, Yunhe; Wu, Fan; Tseng, Kuo-Shih; Ye, Ding; Raymond, John; Konety, Badrinath; Sweet, Robert

    2016-01-01

    Here we introduce a motion tracking or navigation module for medical simulation systems. Our main contribution is a sensor fusion method for proximity or distance sensors integrated with inertial measurement unit (IMU). Since IMU rotation tracking has been widely studied, we focus on the position or trajectory tracking of the instrument moving freely within a given boundary. In our experiments, we have found that this module reliably tracks instrument motion.

  12. Scalable sensing electronics towards a motion capture suit

    NASA Astrophysics Data System (ADS)

    Xu, Daniel; Gisby, Todd A.; Xie, Shane; Anderson, Iain A.

    2013-04-01

    Being able to accurately record body motion allows complex movements to be characterised and studied. This is especially important in the film or sport coaching industry. Unfortunately, the human body has over 600 skeletal muscles, giving rise to multiple degrees of freedom. In order to accurately capture motion such as hand gestures, elbow or knee flexion and extension, vast numbers of sensors are required. Dielectric elastomer (DE) sensors are an emerging class of electroactive polymer (EAP) that is soft, lightweight and compliant. These characteristics are ideal for a motion capture suit. One challenge is to design sensing electronics that can simultaneously measure multiple sensors. This paper describes a scalable capacitive sensing device that can measure up to 8 different sensors with an update rate of 20Hz.

  13. Second Interim Report on the Installation and Evaluation of Weigh-In-Motion Utilizing Quartz-Piezo Sensor Technology

    DOT National Transportation Integrated Search

    1999-11-01

    The objective of this study is to determine the sensor survivability, accuracy and reliability of quartz-piezoelectric weigh-in-motion (WIM) sensors under actual traffic conditions in Connecticut's environment. This second interim report provides a s...

  14. Validation of cardiac accelerometer sensor measurements.

    PubMed

    Remme, Espen W; Hoff, Lars; Halvorsen, Per Steinar; Naerum, Edvard; Skulstad, Helge; Fleischer, Lars A; Elle, Ole Jakob; Fosse, Erik

    2009-12-01

    In this study we have investigated the accuracy of an accelerometer sensor designed for the measurement of cardiac motion and automatic detection of motion abnormalities caused by myocardial ischaemia. The accelerometer, attached to the left ventricular wall, changed its orientation relative to the direction of gravity during the cardiac cycle. This caused a varying gravity component in the measured acceleration signal that introduced an error in the calculation of myocardial motion. Circumferential displacement, velocity and rotation of the left ventricular apical region were calculated from the measured acceleration signal. We developed a mathematical method to separate translational and gravitational acceleration components based on a priori assumptions of myocardial motion. The accuracy of the measured motion was investigated by comparison with known motion of a robot arm programmed to move like the heart wall. The accuracy was also investigated in an animal study. The sensor measurements were compared with simultaneously recorded motion from a robot arm attached next to the sensor on the heart and with measured motion by echocardiography and a video camera. The developed compensation method for the varying gravity component improved the accuracy of the calculated velocity and displacement traces, giving very good agreement with the reference methods.

  15. Extraction and Analysis of Respiratory Motion Using Wearable Inertial Sensor System during Trunk Motion

    PubMed Central

    Gaidhani, Apoorva; Moon, Kee S.; Ozturk, Yusuf; Lee, Sung Q.; Youm, Woosub

    2017-01-01

    Respiratory activity is an essential vital sign of life that can indicate changes in typical breathing patterns and irregular body functions such as asthma and panic attacks. Many times, there is a need to monitor breathing activity while performing day-to-day functions such as standing, bending, trunk stretching or during yoga exercises. A single IMU (inertial measurement unit) can be used in measuring respiratory motion; however, breathing motion data may be influenced by a body trunk movement that occurs while recording respiratory activity. This research employs a pair of wireless, wearable IMU sensors custom-made by the Department of Electrical Engineering at San Diego State University. After appropriate sensor placement for data collection, this research applies principles of robotics, using the Denavit-Hartenberg convention, to extract relative angular motion between the two sensors. One of the obtained relative joint angles in the “Sagittal” plane predominantly yields respiratory activity. An improvised version of the proposed method and wearable, wireless sensors can be suitable to extract respiratory information while performing sports or exercises, as they do not restrict body motion or the choice of location to gather data. PMID:29258214
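
    The paper derives the relative joint angle with the Denavit-Hartenberg convention; the sketch below takes a simpler route for illustration, assuming each IMU already supplies an orientation quaternion in a common reference frame and reading off one Euler component as the sagittal-plane angle. Which axis corresponds to the sagittal plane depends on sensor mounting and is an assumption here.

      import numpy as np
      from scipy.spatial.transform import Rotation as R

      def relative_sagittal_angle(q_upper, q_lower):
          """Relative angle between two torso-mounted IMUs about one body axis.

          q_upper, q_lower : orientation quaternions (x, y, z, w) of the two sensors
          in a common reference frame. The sagittal-plane component of the relative
          rotation predominantly reflects respiratory chest motion.
          """
          r_rel = R.from_quat(q_lower).inv() * R.from_quat(q_upper)
          # Intrinsic x-y-z Euler decomposition; treating index 1 as the
          # "sagittal" axis is an assumption tied to the mounting.
          angles_deg = r_rel.as_euler("xyz", degrees=True)
          return angles_deg[1]

      # Example: 5 degrees of relative pitch between the two sensors
      q_low = R.identity().as_quat()
      q_up = R.from_euler("y", 5, degrees=True).as_quat()
      print(relative_sagittal_angle(q_up, q_low))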

  16. Inertial navigation sensor integrated motion analysis for autonomous vehicle navigation

    NASA Technical Reports Server (NTRS)

    Roberts, Barry; Bhanu, Bir

    1992-01-01

    Recent work on INS integrated motion analysis is described. Results were obtained with a maximally passive system of obstacle detection (OD) for ground-based vehicles and rotorcraft. The OD approach involves motion analysis of imagery acquired by a passive sensor in the course of vehicle travel to generate range measurements to world points within the sensor FOV. INS data and scene analysis results are used to enhance interest point selection, the matching of the interest points, and the subsequent motion-based computations, tracking, and OD. The most important lesson learned from the research described here is that the incorporation of inertial data into the motion analysis program greatly improves the analysis and makes the process more robust.

  17. Validation of enhanced kinect sensor based motion capturing for gait assessment

    PubMed Central

    Müller, Björn; Ilg, Winfried; Giese, Martin A.

    2017-01-01

    Optical motion capturing systems are expensive and require substantial dedicated space to be set up. On the other hand, they provide unsurpassed accuracy and reliability. In many situations however flexibility is required and the motion capturing system can only temporarily be placed. The Microsoft Kinect v2 sensor is comparatively cheap and with respect to gait analysis promising results have been published. We here present a motion capturing system that is easy to set up, flexible with respect to the sensor locations and delivers high accuracy in gait parameters comparable to a gold standard motion capturing system (VICON). Further, we demonstrate that sensor setups which track the person only from one-side are less accurate and should be replaced by two-sided setups. With respect to commonly analyzed gait parameters, especially step width, our system shows higher agreement with the VICON system than previous reports. PMID:28410413

  18. Using the Scroll Wheel on a Wireless Mouse as a Motion Sensor

    NASA Astrophysics Data System (ADS)

    Taylor, Richard S.; Wilson, William R.

    2010-12-01

    Since its inception in the mid-80s, the computer mouse has undergone several design changes. As the mouse has evolved, physicists have found new ways to utilize it as a motion sensor. For example, the rollers in a mechanical mouse have been used as pulleys to study the motion of a magnet moving through a copper tube as a quantitative demonstration of Lenz's law and to study mechanical oscillators (e.g., mass-spring system and compound pendulum).1-3 Additionally, the optical system in an optical mouse has been used to study a mechanical oscillator (e.g., mass-spring system).4 The argument for using a mouse as a motion sensor has been and continues to be availability and cost. This paper continues this tradition by detailing the use of the scroll wheel on a wireless mouse as a motion sensor.

  19. Android Based Area Web Monitoring

    NASA Astrophysics Data System (ADS)

    Kanigoro, Bayu; Galih Salman, Afan; Moniaga, Jurike V.; Chandra, Eric; Rezky Chandra, Zein

    2014-03-01

    The research objective is to develop an application that can be used to monitor an area using a webcam. It aims to give users a sense of security, because they can monitor an area from a mobile phone anywhere. The result obtained in this study is an area-monitoring application based on a webcam whose output can be accessed anywhere with internet access, including through an Android-based mobile phone.

  20. Highly Sensitive Flexible Human Motion Sensor Based on ZnSnO3/PVDF Composite

    NASA Astrophysics Data System (ADS)

    Yang, Young Jin; Aziz, Shahid; Mehdi, Syed Murtuza; Sajid, Memoon; Jagadeesan, Srikanth; Choi, Kyung Hyun

    2017-07-01

    A highly sensitive body motion sensor has been fabricated based on a composite active layer of zinc stannate (ZnSnO3) nano-cubes and poly(vinylidene fluoride) (PVDF) polymer. The thin film-based active layer was deposited on polyethylene terephthalate flexible substrate through D-bar coating technique. Electrical and morphological characterizations of the films and sensors were carried out to discover the physical characteristics and the output response of the devices. The synergistic effect between piezoelectric ZnSnO3 nanocubes and β phase PVDF provides the composite with a desirable electrical conductivity, remarkable bend sensitivity, and excellent stability, ideal for the fabrication of a motion sensor. The recorded resistance of the sensor towards the bending angles of -150° to 0° to 150° changed from 20 MΩ to 55 MΩ to 100 MΩ, respectively, showing the composite to be a very good candidate for motion sensing applications.

  1. Effect of tilt on strong motion data processing

    USGS Publications Warehouse

    Graizer, V.M.

    2005-01-01

    In the near-field of an earthquake the effects of the rotational components of ground motion may not be negligible compared to the effects of translational motions. Analyses of the equations of motion of horizontal and vertical pendulums show that horizontal sensors are sensitive not only to translational motion but also to tilts. Ignoring this tilt sensitivity may produce unreliable results, especially in calculations of permanent displacements and long-period calculations. In contrast to horizontal sensors, vertical sensors do not have these limitations, since they are less sensitive to tilts. In general, only six-component systems measuring rotations and accelerations, or three-component systems similar to systems used in inertial navigation assuring purely translational motion of accelerometers can be used to calculate residual displacements. ?? 2004 Elsevier Ltd. All rights reserved.

  2. Self-evaluation on Motion Adaptation for Service Robots

    NASA Astrophysics Data System (ADS)

    Funabora, Yuki; Yano, Yoshikazu; Doki, Shinji; Okuma, Shigeru

    We suggest a self-evaluation method for motion that allows service robots to adapt to environmental changes. Several motions, such as walking, dancing and demonstration, are described as time-series patterns. These motions are optimized for the architecture of the robot under a particular surrounding environment, so under an unknown operating environment the robot cannot accomplish its tasks. We propose autonomous motion generation techniques based on heuristic search over histories of internal sensor values. New motion patterns are explored in the unknown operating environment based on self-evaluation. The robot has prepared motions that realize its tasks under the designed environment, and the internal sensor values observed while executing them reflect the interaction with that environment. The self-evaluation is composed of the difference in internal sensor values between the designed environment and the unknown operating environment, and the proposed method modifies the motions so that the interaction results in both environments agree. New motion patterns are generated to maximize the self-evaluation function without external information such as run length, the global position of the robot, or human observation. Experimental results show the possibility of autonomously adapting patterned motions to environmental changes.

  3. Video-based eye tracking for neuropsychiatric assessment.

    PubMed

    Adhikari, Sam; Stark, David E

    2017-01-01

    This paper presents a video-based eye-tracking method, ideally deployed via a mobile device or laptop-based webcam, as a tool for measuring brain function. Eye movements and pupillary motility are tightly regulated by brain circuits, are subtly perturbed by many disease states, and are measurable using video-based methods. Quantitative measurement of eye movement by readily available webcams may enable early detection and diagnosis, as well as remote/serial monitoring, of neurological and neuropsychiatric disorders. We successfully extracted computational and semantic features for 14 testing sessions, comprising 42 individual video blocks and approximately 17,000 image frames generated across several days of testing. Here, we demonstrate the feasibility of collecting video-based eye-tracking data from a standard webcam in order to assess psychomotor function. Furthermore, we were able to demonstrate through systematic analysis of this data set that eye-tracking features (in particular, radial and tangential variance on a circular visual-tracking paradigm) predict performance on well-validated psychomotor tests. © 2017 New York Academy of Sciences.
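
    A sketch of one plausible reading of the radial and tangential variance features mentioned above, decomposing gaze error relative to the moving target of a circular tracking paradigm; the exact feature definition in the study may differ.

      import numpy as np

      def radial_tangential_variance(gaze_xy, target_xy, center):
          """Variance of gaze error along the radial and tangential directions of a
          circular visual-tracking target.

          gaze_xy, target_xy : N x 2 arrays of gaze and target positions (pixels)
          center             : (cx, cy) of the circular trajectory
          """
          gaze_xy = np.asarray(gaze_xy, float)
          target_xy = np.asarray(target_xy, float)
          center = np.asarray(center, float)

          radial_dir = target_xy - center
          radial_dir /= np.linalg.norm(radial_dir, axis=1, keepdims=True)
          tangential_dir = np.column_stack([-radial_dir[:, 1], radial_dir[:, 0]])

          err = gaze_xy - target_xy
          radial_err = np.sum(err * radial_dir, axis=1)
          tangential_err = np.sum(err * tangential_dir, axis=1)
          return float(np.var(radial_err)), float(np.var(tangential_err))

      # Example: gaze hugging the inside of a radius-100 circle centred at (320, 240)
      theta = np.linspace(0, 2 * np.pi, 200)
      target = np.column_stack([320 + 100 * np.cos(theta), 240 + 100 * np.sin(theta)])
      gaze = target + np.column_stack([-3 * np.cos(theta), -3 * np.sin(theta)])
      print(radial_tangential_variance(gaze, target, (320, 240)))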

  4. Imaging the Moon II: Webcam CCD Observations and Analysis (a Two-Week Lab for Non-Majors)

    NASA Astrophysics Data System (ADS)

    Sato, T.

    2014-07-01

    Imaging the Moon is a successful two-week lab involving real sky observations of the Moon in which students make telescopic observations and analyze their own images. Originally developed around the 35 mm film camera, a common household object adapted for astronomical work, the lab now uses webcams as film photography has evolved into an obscure specialty technology and increasing numbers of students have little familiarity with it. The printed circuit board with the CCD is harvested from a commercial webcam and affixed to a tube to mount on a telescope in place of an eyepiece. Image frames are compiled to form a lunar mosaic, and crater sizes are measured. Students also work through the logistical steps of telescope time assignment and scheduling. They learn to keep a schedule and work with uncertainties of weather in ways paralleling research observations. Because there is no need for a campus observatory, this lab can be replicated at a wide variety of institutions.

  5. ShakeMapple : tapping laptop motion sensors to map the felt extents of an earthquake

    NASA Astrophysics Data System (ADS)

    Bossu, Remy; McGilvary, Gary; Kamb, Linus

    2010-05-01

    There is a significant pool of untapped sensor resources available in portable computer embedded motion sensors. Included primarily to detect sudden strong motion in order to park the disk heads and prevent damage to the disks in the event of a fall or other severe motion, these sensors may also be tapped for other uses. We have developed a system that takes advantage of the Apple Macintosh laptops' embedded Sudden Motion Sensors to record earthquake strong motion data and rapidly build maps of where and to what extent an earthquake has been felt. After an earthquake, it is vital to understand the damage caused, especially in urban environments, which often sustain large amounts of earthquake damage. Gathering as much information as possible about these impacts, to determine which areas are likely to be most affected, can aid in distributing emergency services effectively. The ShakeMapple system operates in the background, continuously saving the most recent data from the motion sensors. After an earthquake has occurred, the ShakeMapple system calculates the peak acceleration within a time window around the expected arrival and sends it to servers at the EMSC. A map plotting the felt responses is then generated and presented on the web. Because large-scale testing of such an application is inherently difficult, we propose to organize a broadly distributed "simulated event" test. The software will be available for download in April, after which we plan to organize a large-scale test by the summer. At a specified time, participating testers will be asked to create their own strong motion to be registered and submitted by the ShakeMapple client. From these responses, a felt map will be produced representing the broadly-felt effects of the simulated event.
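
    A minimal sketch of the peak-acceleration step described above; the window width, buffer layout and sampling rate are assumptions, not details of the ShakeMapple client.

      import numpy as np

      def peak_acceleration(acc_buffer, fs, t_expected, half_window_s=30.0):
          """Peak absolute acceleration in a window centred on the expected arrival.

          acc_buffer    : N x 3 array of the laptop's recent x, y, z accelerations
          fs            : sampling rate of the sudden-motion-sensor log (Hz)
          t_expected    : expected arrival time, in seconds from the start of the buffer
          half_window_s : half-width of the search window (illustrative)
          """
          i0 = max(int((t_expected - half_window_s) * fs), 0)
          i1 = min(int((t_expected + half_window_s) * fs), len(acc_buffer))
          window = np.asarray(acc_buffer)[i0:i1]
          magnitude = np.linalg.norm(window, axis=1)
          return float(magnitude.max())

      # The resulting value, together with the laptop's location, would be what the
      # client reports to the collecting server after an event.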

  6. Using Passive Sensing to Estimate Relative Energy Expenditure for Eldercare Monitoring

    PubMed Central

    2012-01-01

    This paper describes ongoing work in analyzing sensor data logged in the homes of seniors. An estimation of relative energy expenditure is computed using motion density from passive infrared motion sensors mounted in the environment. We introduce a new algorithm for detecting visitors in the home using motion sensor data and a set of fuzzy rules. The visitor algorithm, as well as a previous algorithm for identifying time-away-from-home (TAFH), is used to filter the logged motion sensor data. Thus, the energy expenditure estimate uses data collected only when the resident is home alone. Case studies are included from TigerPlace, an Aging in Place community, to illustrate how the relative energy expenditure estimate can be used to track health conditions over time. PMID:25266777
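
    A minimal sketch of a motion-density estimate from passive infrared firing timestamps, assuming the visitor and time-away-from-home filtering described above has already been applied to the event list; the one-hour bin width is an illustrative choice.

      import numpy as np

      def motion_density(event_times_s, day_length_s=86400, bin_s=3600):
          """Hourly motion density from passive-infrared firing timestamps.

          event_times_s : 1-D array of sensor firing times (seconds since midnight),
                          already filtered to times when the resident is home alone.
          Returns counts of firings per bin, a simple proxy for relative energy
          expenditure over the day.
          """
          edges = np.arange(0, day_length_s + bin_s, bin_s)
          counts, _ = np.histogram(event_times_s, bins=edges)
          return counts

      # Example: a quiet night, an active morning
      events = np.concatenate([np.random.default_rng(3).uniform(7 * 3600, 12 * 3600, 400),
                               np.random.default_rng(4).uniform(0, 6 * 3600, 20)])
      print(motion_density(events))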

  7. In vitro validation and reliability study of electromagnetic skin sensors for evaluation of end range of motion positions of the hip.

    PubMed

    Audenaert, E A; Vigneron, L; Van Hoof, T; D'Herde, K; van Maele, G; Oosterlinck, D; Pattyn, C

    2011-12-01

    There is growing evidence that femoroacetabular impingement (FAI) is a probable risk factor for the development of early osteoarthritis in the nondysplastic hip. As FAI arises with end range of motion activities, measurement errors related to skin movement might be higher than anticipated when using previously reported methods for kinematic evaluation of the hip. We performed an in vitro validation and reliability study of a noninvasive method to define pelvic and femur positions in end range of motion activities of the hip using an electromagnetic tracking device. Motion data, collected from sensors attached to the bone and skin of 11 cadaver hips, were simultaneously obtained and compared in a global reference frame. Motion data were then transposed in the hip joint local coordinate systems. Observer-related variability in locating the anatomical landmarks required to define the local coordinate system and variability of determining the hip joint center was evaluated. Angular root mean square (RMS) differences between the bony and skin sensors averaged 3.2° (SD 3.5°) and 1.8° (SD 2.3°) in the global reference frame for the femur and pelvic sensors, respectively. Angular RMS differences between the bony and skin sensors in the hip joint local coordinate systems ranged at end range of motion and dependent on the motion under investigation from 1.91 to 5.81°. The presented protocol for evaluation of hip motion seems to be suited for the 3-D description of motion relevant to the experimental and clinical evaluation of femoroacetabular impingement.

  8. Homemade laparoscopic simulators for surgical trainees.

    PubMed

    Khine, Myo; Leung, Edward; Morran, Chris; Muthukumarasamy, Giri

    2011-06-01

    Laparoscopic surgery has become increasingly popular in recent times. Laparoscopic skills and dexterity can be improved by using simulators. We provide a step-by-step guide with diagrams to build an individual homemade laparoscopic trainer box, which is easily available and affordable. We collected the required material for our homemade trainer box from a local DIY shop and purchased a high-definition (HD) webcam online. We used a 12-litre plastic storage box and mounted the webcam inside the lid of the plastic box. The ultraslim energy-saving fluorescent light was mounted behind the webcam. Holes were made in the plastic lid and patched with circular pieces of Neoprene to accommodate the insertion of laparoscopic instruments. The trainer box can be built in 3 hours. The trainer box weighs 1.2 kg with a light source, and is easily portable. It was demonstrated to a cohort of surgical trainees and they were very receptive, and liked the idea of an easy to assemble, low-cost trainer box with high-quality images. Our homemade trainer box offers HD vision that can be viewed on a personal computer, and the webcam is adjustable so it gives hands-free stability. It is built with a lightweight plastic box so it can be easily carried around by a trainee. This simple, inexpensive, easy-to-build trainer box makes a perfect solution for individuals who want to practise basic laparoscopic skills at home or in the workplace. © Blackwell Publishing Ltd 2011.

  9. Smart lighting using a liquid crystal modulator

    NASA Astrophysics Data System (ADS)

    Baril, Alexandre; Thibault, Simon; Galstian, Tigran

    2017-08-01

    Now that LEDs have massively invaded the illumination market, a clear trend has emerged for more efficient and targeted lighting. The project described here is at the leading edge of the trend and aims at developing an evaluation board to test smart lighting applications. This is made possible thanks to a new liquid crystal light modulator recently developed for broadening LED light beams. The modulator is controlled by electrical signals and is characterized by a linear working zone. This feature allows the implementation of a closed loop control with a sensor feedback. This project shows that the use of computer vision is a promising opportunity for cheap closed loop control. The developed evaluation board integrates the liquid crystal modulator, a webcam, a LED light source and all the required electronics to implement a closed loop control with a computer vision algorithm.

  10. Smart Braid Feedback for the Closed-Loop Control of Soft Robotic Systems.

    PubMed

    Felt, Wyatt; Chin, Khai Yi; Remy, C David

    2017-09-01

    This article experimentally investigates the potential of using flexible, inductance-based contraction sensors in the closed-loop motion control of soft robots. Accurate motion control remains a highly challenging task for soft robotic systems. Precise models of the actuation dynamics and environmental interactions are often unavailable. This renders open-loop control impossible, while closed-loop control suffers from a lack of suitable feedback. Conventional motion sensors, such as linear or rotary encoders, are difficult to adapt to robots that lack discrete mechanical joints. The rigid nature of these sensors runs contrary to the aspirational benefits of soft systems. As truly soft sensor solutions are still in their infancy, motion control of soft robots has so far relied on laboratory-based sensing systems such as motion capture, electromagnetic (EM) tracking, or Fiber Bragg Gratings. In this article, we used embedded flexible sensors known as Smart Braids to sense the contraction of McKibben muscles through changes in inductance. We evaluated closed-loop control on two systems: a revolute joint and a planar, one degree of freedom continuum manipulator. In the revolute joint, our proposed controller compensated for elasticity in the actuator connections. The Smart Braid feedback allowed motion control with a steady-state root-mean-square (RMS) error of 1.5°. In the continuum manipulator, Smart Braid feedback enabled tracking of the desired tip angle with a steady-state RMS error of 1.25°. This work demonstrates that Smart Braid sensors can provide accurate position feedback in closed-loop motion control suitable for field applications of soft robotic systems.
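
    A minimal sketch of closed-loop control driven by an inductance-derived contraction estimate; the inductance-to-contraction calibration, the PI gains and the actuator interface are placeholders, not values or code from the article.

      import numpy as np

      def contraction_from_inductance(L_measured, calib_L, calib_contraction):
          """Map a braid inductance reading to a muscle contraction estimate via a
          monotonic calibration table (illustrative values)."""
          return float(np.interp(L_measured, calib_L, calib_contraction))

      class PIController:
          """Simple PI loop on the contraction error (gains are illustrative)."""
          def __init__(self, kp=2.0, ki=0.5, dt=0.01):
              self.kp, self.ki, self.dt = kp, ki, dt
              self.integral = 0.0

          def update(self, setpoint, measurement):
              error = setpoint - measurement
              self.integral += error * self.dt
              return self.kp * error + self.ki * self.integral   # actuator command

      # One control step: inductance reading -> contraction estimate -> actuator command
      calib_L = [8.0e-6, 9.0e-6, 10.0e-6]   # henries (increasing, illustrative)
      calib_c = [0.25, 0.12, 0.00]          # contraction fraction at those inductances
      controller = PIController()
      contraction = contraction_from_inductance(9.2e-6, calib_L, calib_c)
      command = controller.update(setpoint=0.10, measurement=contraction)
      print(contraction, command)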

  11. A sensor fusion method for tracking vertical velocity and height based on inertial and barometric altimeter measurements.

    PubMed

    Sabatini, Angelo Maria; Genovese, Vincenzo

    2014-07-24

    A sensor fusion method was developed for vertical channel stabilization by fusing inertial measurements from an Inertial Measurement Unit (IMU) and pressure altitude measurements from a barometric altimeter integrated in the same device (baro-IMU). An Extended Kalman Filter (EKF) estimated the quaternion from the sensor frame to the navigation frame; the sensed specific force was rotated into the navigation frame and compensated for gravity, yielding the vertical linear acceleration; finally, a complementary filter driven by the vertical linear acceleration and the measured pressure altitude produced estimates of height and vertical velocity. A method was also developed to condition the measured pressure altitude using a whitening filter, which helped to remove the short-term correlation due to environment-dependent pressure changes from raw pressure altitude. The sensor fusion method was implemented to work on-line using data from a wireless baro-IMU and tested for the capability of tracking low-frequency small-amplitude vertical human-like motions that can be critical for stand-alone inertial sensor measurements. Validation tests were performed in different experimental conditions, namely no motion, free-fall motion, forced circular motion and squatting. Accurate on-line tracking of height and vertical velocity was achieved, giving confidence to the use of the sensor fusion method for tracking typical vertical human motions: velocity Root Mean Square Error (RMSE) was in the range 0.04-0.24 m/s; height RMSE was in the range 5-68 cm, with statistically significant performance gains when the whitening filter was used by the sensor fusion method to track relatively high-frequency vertical motions.
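
    A minimal sketch of the final complementary-filter stage, assuming the EKF attitude estimation and gravity compensation have already produced a vertical linear acceleration signal; the feedback gains and the example motion are illustrative, not the paper's tuning.

      import numpy as np

      def complementary_height_filter(acc_vert, baro_alt, dt, k1=2.0, k2=1.0):
          """Fuse vertical linear acceleration (gravity removed, m/s^2) with
          barometric altitude (m) to track height and vertical velocity.

          A simple second-order complementary structure: the barometer corrects the
          drift of the integrated acceleration through feedback gains k1, k2
          (illustrative values).
          """
          h = baro_alt[0]
          v = 0.0
          heights, velocities = [], []
          for a, z in zip(acc_vert, baro_alt):
              err = z - h                  # innovation from the barometric altimeter
              v += (a + k2 * err) * dt     # acceleration integration, corrected
              h += (v + k1 * err) * dt     # velocity integration, corrected
              heights.append(h)
              velocities.append(v)
          return np.array(heights), np.array(velocities)

      # Example: a slow 0.2 Hz, 0.3 m vertical oscillation with noisy sensors
      dt = 0.01
      t = np.arange(0, 20, dt)
      true_h = 0.3 * np.sin(2 * np.pi * 0.2 * t)
      true_a = -0.3 * (2 * np.pi * 0.2) ** 2 * np.sin(2 * np.pi * 0.2 * t)
      rng = np.random.default_rng(5)
      est_h, est_v = complementary_height_filter(true_a + 0.05 * rng.standard_normal(t.size),
                                                 true_h + 0.5 * rng.standard_normal(t.size),
                                                 dt)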

  12. On-site semi-quantitative analysis for ammonium nitrate detection using digital image colourimetry.

    PubMed

    Choodum, Aree; Boonsamran, Pichapat; NicDaeid, Niamh; Wongniramaikul, Worawit

    2015-12-01

    Digital image colourimetry was successfully applied in the semi-quantitative analysis of ammonium nitrate using Griess's test with zinc reduction. A custom-built detection box was developed to enable reproducible lighting of samples, and was used with the built-in webcams of a netbook and an ultrabook for on-site detection. The webcams were used for colour imaging of chemical reaction products in the samples, while the netbook was used for on-site colour analysis. The analytical performance was compared to a commercial external webcam and a digital single-lens reflex (DSLR) camera. The relationship between Red-Green-Blue intensities and ammonium nitrate concentration was investigated. The green channel intensity (IG) was the most sensitive for the pink-violet products from ammonium nitrate that revealed a spectrometric absorption peak at 546 nm. A wide linear range (5 to 250 mgL⁻¹) with a high sensitivity was obtained with the built-in webcam of the ultrabook. A considerably lower detection limit (1.34 ± 0.05 mgL⁻¹) was also obtained using the ultrabook, in comparison with the netbook (2.6 ± 0.2 mgL⁻¹), the external webcam (3.4 ± 0.1 mgL⁻¹) and the DSLR (8.0 ± 0.5 mgL⁻¹). The best inter-day precision (over 3 days) was obtained with the external webcam (0.40 to 1.34%RSD), while the netbook and the ultrabook had 0.52 to 3.62% and 1.25 to 4.99% RSDs, respectively. The relative errors were +3.6, +5.6 and -7.1%, on analysing standard ammonium nitrate solutions of known concentration using IG, for the ultrabook, the external webcam, and the netbook, respectively, while the DSLR gave -4.4% relative error. However, the IG of the pink-violet reaction product suffers from interference by soil, so that blank subtraction (|IG-IGblank| or |AG-AGblank|) is recommended for soil sample analysis. This method also gave very good accuracies of -0.11 to -5.61% for spiked soil samples, and the results presented for five seized samples showed good correlations between the various imaging devices and the spectrophotometer used to determine ammonium nitrate concentrations. Five post-blast soil samples were also analysed and pink-violet products were observed using Griess's test without zinc reduction, indicating the absence of ammonium nitrate. This demonstrates significant potential for practical and accurate on-site semi-quantitative determinations of ammonium nitrate concentration. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
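
    A hedged sketch of the green-channel (IG) readout with blank subtraction described above; the image paths and the calibration slope are placeholders that would have to come from ammonium nitrate standards imaged in the same detection box.

    ```python
    import numpy as np
    from PIL import Image

    def mean_green(path):
        """Mean green-channel intensity of an image of the reaction zone."""
        img = np.asarray(Image.open(path).convert("RGB"), dtype=float)
        return img[:, :, 1].mean()

    def concentration_mg_per_l(sample_path, blank_path, slope=0.45):
        """Concentration from |IG - IGblank| under an assumed linear calibration;
        slope (intensity units per mg/L) is illustrative, not the paper's value."""
        delta = abs(mean_green(sample_path) - mean_green(blank_path))
        return delta / slope
    ```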

  13. Automatic, Satellite-Linked "Webcams" as a Tool in Ice-Shelf and Iceberg Research.

    NASA Astrophysics Data System (ADS)

    Ross, R.; Okal, M. H.; Thom, J. E.; Macayeal, D. R.

    2004-12-01

    Important dynamic events governing the behavior of ice shelves and icebergs are episodic in time and small in scale, making them difficult to observe. Traditional satellite imagery is acquired on a rigid schedule with coarse spatial resolution and this means that collisions between icebergs or the processes which create ice "mélange" that fills detachment rifts leading to ice-shelf calving, to give examples, cannot be readily observed. To overcome the temporal and spatial gaps in traditional remote sensing, we have deployed cameras at locations in Antarctica where research is conducted on the calving and subsequent evolution of icebergs. One camera is located at the edge of iceberg C16 in the Ross Sea, and is positioned to capture visual imagery of collisions between C16 and neighboring B15A. The second camera is located within the anticipated detachment rift of a "nascent" iceberg on the Ross Ice Shelf. The second camera is positioned to capture visual imagery of the rift's propagation and the in-fill of ice mélange, which constrains the mechanical influence of such rifts on the surrounding ice shelf. Both cameras are designed for connection to the internet (hence are referred to as "webcams") and possess variable image qualities and image-control technology. The cameras are also connected to data servers via the Iridium satellite telephone network and produce a daily image that is transmitted to the internet through the Iridium connection. Results of the initial trial deployments will be presented as a means of assessing both the techniques involved and the value of the scientific information acquired by these webcams. In the case of the iceberg webcam, several collisions between B15A and C16 were monitored over the period between January, 2003 and December, 2004. The time-lapse imagery obtained through this period showed giant "push mounds" of damaged firn on the edge and surface of the icebergs within the zones of contact as a consequence of the collisions. The push mounds were subsequently unstable, and calved as small scale ice debris soon after the collision, thereby returning the iceberg edge to a clean, vertical cliff-like appearance. A correlation between the iceberg collision record available from the webcam and data from a seismometer located on C16 is anticipated once the seismometer data is recovered. The webcam associated with the detachment rift of the nascent iceberg on the Ross Ice Shelf is planned to be deployed in early November, 2004. If results are available from this deployment, they too will be discussed.

  14. Analysing Harmonic Motions with an iPhone's Magnetometer

    ERIC Educational Resources Information Center

    Yavuz, Ahmet; Temiz, Burak Kagan

    2016-01-01

    In this paper, we propose an experiment for analysing harmonic motion using an iPhone's (or iPad's) magnetometer. This experiment consists of the detection of magnetic field variations obtained from an iPhone's magnetometer sensor. A graph of harmonic motion is directly displayed on the iPhone's screen using the "Sensor Kinetics"…

  15. Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors

    PubMed Central

    Latt, Win Tun; Veluvolu, Kalyana Chakravarthy; Ang, Wei Tech

    2011-01-01

    Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires other aided sensors or prior knowledge of motion characteristics to remove position drift resulting from integration of acceleration or velocity so as to obtain accurate position estimation. A method based on analytical integration has previously been developed to obtain accurate position estimates of periodic or quasi-periodic motion from inertial sensors using prior knowledge of the motion but without using aided sensors. In this paper, a new method is proposed which employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The only prior knowledge of the motion that the proposed method requires is an approximate band of frequencies of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate and compare the performance of the proposed method with the method based on analytical integration, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method. PMID:22163935
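
    The record's WFLC/BMFLC formulation is not reproduced here; the sketch below only illustrates the underlying idea of exploiting an approximate frequency band, by band-pass filtering the acceleration before and after double integration to suppress drift (filter order and band edges are illustrative).

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt
    from scipy.integrate import cumulative_trapezoid

    def bandlimited_position(acc, fs, f_lo=1.0, f_hi=5.0):
        """Estimate the position of a roughly periodic motion confined to
        [f_lo, f_hi] Hz by band-pass filtering before and after double
        integration of acceleration (acc in m/s^2, sampled at fs Hz)."""
        b, a = butter(2, [f_lo, f_hi], btype="bandpass", fs=fs)
        acc_f = filtfilt(b, a, acc)
        vel = cumulative_trapezoid(acc_f, dx=1.0 / fs, initial=0.0)
        vel = filtfilt(b, a, vel)      # suppress integration drift
        pos = cumulative_trapezoid(vel, dx=1.0 / fs, initial=0.0)
        return filtfilt(b, a, pos)
    ```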

  16. Implementation of advanced fiber optic and piezoelectric sensors : fabrication and laboratory testing of piezoelectric ceramic-polymer composite sensors for weigh-in-motion systems.

    DOT National Transportation Integrated Search

    1999-02-01

    Weigh-in-motion (WIM) systems might soon replace the conventional techniques used to enforce weight restrictions for large vehicles on highways. Currently WIM systems use a piezoelectric polymer sensor that produces a voltage proportional to an a...

  17. Inertial Motion Capture Costume Design Study

    PubMed Central

    Szczęsna, Agnieszka; Skurowski, Przemysław; Lach, Ewa; Pruszowski, Przemysław; Pęszor, Damian; Paszkuta, Marcin; Słupik, Janusz; Lebek, Kamil; Janiak, Mateusz; Polański, Andrzej; Wojciechowski, Konrad

    2017-01-01

    The paper describes a scalable, wearable multi-sensor system for motion capture based on inertial measurement units (IMUs). Such a unit is composed of an accelerometer, gyroscope and magnetometer. The final quality of an obtained motion arises from all the individual parts of the described system. The proposed system is a sequence of the following stages: sensor data acquisition, sensor orientation estimation, system calibration, pose estimation and data visualisation. The construction of the system’s architecture with the dataflow programming paradigm makes it easy to add, remove and replace the data processing steps. The modular architecture of the system allows an effortless introduction of new sensor orientation estimation algorithms. The original contribution of the paper is the design study of the individual components used in the motion capture system. The two key steps of the system design are explored in this paper: the evaluation of sensors and algorithms for the orientation estimation. The three chosen algorithms have been implemented and investigated as part of the experiment. Due to the fact that the selection of the sensor has a significant impact on the final result, the sensor evaluation process is also explained and tested. The experimental results confirmed that the choice of sensor and orientation estimation algorithm affects the quality of the final results. PMID:28304337

  18. Electromagnetic tracking of motion in the proximity of computer generated graphical stimuli: a tutorial.

    PubMed

    Schnabel, Ulf H; Hegenloh, Michael; Müller, Hermann J; Zehetleitner, Michael

    2013-09-01

    Electromagnetic motion-tracking systems have the advantage of capturing the spatio-temporal kinematics of movements independently of the visibility of the sensors. However, they are limited in that they cannot be used in the proximity of electromagnetic field sources, such as computer monitors. This prevents exploiting the tracking potential of the sensor system together with that of computer-generated visual stimulation. Here we present a solution for presenting computer-generated visual stimulation that does not distort the electromagnetic field required for precise motion tracking, by means of a back projection medium. In one experiment, we verify that cathode ray tube monitors, as well as thin-film-transistor monitors, distort electromagnetic sensor signals even at a distance of 18 cm. Our back projection medium, by contrast, leads to no distortion of the motion-tracking signals even when the sensor is touching the medium. This novel solution permits combining the advantages of electromagnetic motion tracking with computer-generated visual stimulation.

  19. An error-based micro-sensor capture system for real-time motion estimation

    NASA Astrophysics Data System (ADS)

    Yang, Lin; Ye, Shiwei; Wang, Zhibo; Huang, Zhipei; Wu, Jiankang; Kong, Yongmei; Zhang, Li

    2017-10-01

    A wearable micro-sensor motion capture system with 16 IMUs and an error-compensatory complementary filter algorithm for real-time motion estimation has been developed to acquire accurate 3D orientation and displacement in real-life activities. In the proposed filter algorithm, the gyroscope bias error, orientation error and magnetic disturbance error are estimated and compensated, significantly reducing the orientation estimation error due to sensor noise and drift. Displacement estimation, especially for activities such as jumping, has been a key challenge in micro-sensor motion capture. An adaptive gait phase detection algorithm has been developed to accommodate accurate displacement estimation in different types of activities. The performance of this system is benchmarked against the results of a VICON optical capture system. The experimental results demonstrated the effectiveness of the system in daily activity tracking, with estimation errors of 0.16 ± 0.06 m for normal walking and 0.13 ± 0.11 m for jumping motions. Research supported by the National Natural Science Foundation of China (Nos. 61431017, 81272166).

  20. Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles.

    PubMed

    Atman, Jamal; Popp, Manuel; Ruppelt, Jan; Trommer, Gert F

    2016-09-16

    Micro Air Vehicles (MAVs) equipped with various sensors are able to carry out autonomous flights. However, the self-localization of autonomous agents is mostly dependent on Global Navigation Satellite Systems (GNSS). In order to provide an accurate navigation solution in the absence of GNSS signals, this article presents a hybrid sensor. The hybrid sensor is a deep integration of a monocular camera and a 2D laser rangefinder, from which the motion of the MAV is estimated. This realization is expected to be more flexible in terms of environments compared to laser-scan-matching approaches. The estimated ego-motion is then integrated into the MAV's navigation system. First, however, the relative pose between the two sensors is obtained using an improved calibration method. For both calibration and ego-motion estimation, 3D-to-2D correspondences are used and the Perspective-3-Point (P3P) problem is solved. Moreover, the covariance estimation of the relative motion is presented. The experiments show very accurate calibration and navigation results.
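
    The deep laser-camera integration itself is not shown here; the sketch below only illustrates how a pose can be recovered from 3D-to-2D correspondences by solving the P3P/PnP problem with OpenCV, with all point coordinates and camera intrinsics being made-up example values.

    ```python
    import numpy as np
    import cv2

    # Four 3D points (e.g., from the laser rangefinder, in metres) and their
    # 2D projections in the camera image (pixels); values are illustrative.
    object_pts = np.array([[0.0, 0.0, 1.0],
                           [0.1, 0.0, 1.0],
                           [0.0, 0.1, 1.1],
                           [0.1, 0.1, 1.2]], dtype=np.float64)
    image_pts = np.array([[320.0, 240.0],
                          [380.0, 242.0],
                          [322.0, 180.0],
                          [385.0, 170.0]], dtype=np.float64)
    K = np.array([[600.0, 0.0, 320.0],
                  [0.0, 600.0, 240.0],
                  [0.0, 0.0, 1.0]])            # intrinsic matrix (illustrative)

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None,
                                  flags=cv2.SOLVEPNP_P3P)
    if ok:
        R, _ = cv2.Rodrigues(rvec)             # relative rotation and translation
        print(R, tvec)
    ```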

  1. Note: Reliable and non-contact 6D motion tracking system based on 2D laser scanners for cargo transportation.

    PubMed

    Kim, Young-Keun; Kim, Kyung-Soo

    2014-10-01

    Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.

  2. Kinematic model for the space-variant image motion of star sensors under dynamical conditions

    NASA Astrophysics Data System (ADS)

    Liu, Chao-Shan; Hu, Lai-Hong; Liu, Guang-Bin; Yang, Bo; Li, Ai-Jun

    2015-06-01

    A kinematic description of a star spot in the focal plane is presented for star sensors under dynamical conditions, which involves all necessary parameters such as the image motion, velocity, and attitude parameters of the vehicle. Stars at different locations of the focal plane correspond to the slightly different orientation and extent of motion blur, which characterize the space-variant point spread function. Finally, the image motion, the energy distribution, and centroid extraction are numerically investigated using the kinematic model under dynamic conditions. A centroid error of eight successive iterations <0.002 pixel is used as the termination criterion for the Richardson-Lucy deconvolution algorithm. The kinematic model of a star sensor is useful for evaluating the compensation algorithms of motion-blurred images.

  3. Note: Reliable and non-contact 6D motion tracking system based on 2D laser scanners for cargo transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Young-Keun, E-mail: ykkim@handong.edu; Kim, Kyung-Soo

    Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.

  4. Note: Reliable and non-contact 6D motion tracking system based on 2D laser scanners for cargo transportation

    NASA Astrophysics Data System (ADS)

    Kim, Young-Keun; Kim, Kyung-Soo

    2014-10-01

    Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.

  5. Economy of scale: a motion sensor with variable speed tuning.

    PubMed

    Perrone, John A

    2005-01-26

    We have previously presented a model of how neurons in the primate middle temporal (MT/V5) area can develop selectivity for image speed by using common properties of the V1 neurons that precede them in the visual motion pathway (J. A. Perrone & A. Thiele, 2002). The motion sensor developed in this model is based on two broad classes of V1 complex neurons (sustained and transient). The S-type neuron has low-pass temporal frequency tuning, p(ω), and the T-type has band-pass temporal frequency tuning, m(ω). The outputs from the S and T neurons are combined in a special way (weighted intersection mechanism [WIM]) to generate a sensor tuned to a particular speed, v. Here I go on to show that if the S and T temporal frequency tuning functions have a particular form (i.e., p(ω)/m(ω) = k/ω), then a motion sensor with variable speed tuning can be generated from just two V1 neurons. A simple scaling of the S- or T-type neuron output before it is incorporated into the WIM model produces a motion sensor that can be tuned to a wide continuous range of optimal speeds.

  6. Development of esMOCA RULA, Motion Capture Instrumentation for RULA Assessment

    NASA Astrophysics Data System (ADS)

    Akhmad, S.; Arendra, A.

    2018-01-01

    The purpose of this research is to build motion capture instrumentation that fuses accelerometer and gyroscope sensors to assist in RULA assessment. Data processing of sensor orientation is done in every sensor node by a digital motion processor. Nine sensors are placed on the upper limb of the operator subject. Development of the kinematics model is done with Simmechanic Simulink. This kinematics model receives streaming data from the sensors via a wireless sensor network. The output of the kinematics model is the relative angle between upper limb members, visualized on the monitor. This angular information is compared to the look-up table of the RULA worksheet to give the RULA score. The assessment result of the instrument is compared with the result of an assessment by RULA assessors. To sum up, there is no significant difference between the assessment made by the instrument and that made by a human assessor.
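
    For illustration only, a simplified scoring helper in the spirit of the look-up-table comparison described above, using the standard RULA upper-arm angle ranges and ignoring the adjustment terms (abduction, shoulder raise, leaning); it is not the esMOCA RULA implementation.

    ```python
    def rula_upper_arm_score(flexion_deg):
        """Simplified RULA upper-arm score from the flexion angle alone
        (negative values denote extension); adjustment factors are omitted."""
        a = flexion_deg
        if -20.0 <= a <= 20.0:
            return 1
        if a < -20.0 or a <= 45.0:
            return 2
        if a <= 90.0:
            return 3
        return 4
    ```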

  7. Interactive augmented reality using Scratch 2.0 to improve physical activities for children with developmental disabilities.

    PubMed

    Lin, Chien-Yu; Chang, Yu-Ming

    2015-02-01

    This study uses a body motion interactive game developed in Scratch 2.0 to enhance the body strength of children with disabilities. Scratch 2.0, using an augmented-reality function on a program platform, creates real world and virtual reality displays at the same time. This study uses a webcam integration that tracks movements and allows participants to interact physically with the project, to enhance the motivation of children with developmental disabilities to perform physical activities. This study follows a single-case research design using an ABAB structure, in which A is the baseline and B is the intervention. The experimental period was 2 months. The experimental results demonstrated that the scores for 3 children with developmental disabilities increased considerably during the intervention phases. The developmental applications of these results are also discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Screen printing of a capacitive cantilever-based motion sensor on fabric using a novel sacrificial layer process for smart fabric applications

    NASA Astrophysics Data System (ADS)

    Wei, Yang; Torah, Russel; Yang, Kai; Beeby, Steve; Tudor, John

    2013-07-01

    Free-standing cantilevers have been fabricated by screen printing sacrificial and structural layers onto a standard polyester cotton fabric. By printing additional conductive layers, a complete capacitive motion sensor on fabric using only screen printing has been fabricated. This type of free-standing structure cannot currently be fabricated using conventional fabric manufacturing processes. In addition, compared to conventional smart fabric fabrication processes (e.g. weaving and knitting), screen printing offers the advantages of geometric design flexibility and the ability to simultaneously print multiple devices of the same or different designs. Furthermore, a range of active inks exists from the printed electronics industry which can potentially be applied to create many types of smart fabric. Four cantilevers with different lengths have been printed on fabric using a five-layer structure with a sacrificial material underneath the cantilever. The sacrificial layer is subsequently removed at 160 °C for 30 min to achieve a freestanding cantilever above the fabric. Two silver electrodes, one on top of the cantilever and the other on top of the fabric, are used to capacitively detect the movement of the cantilever. In this way, an entirely printed motion sensor is produced on a standard fabric. The motion sensor was initially tested on an electromechanical shaker rig at a low frequency range to examine the linearity and the sensitivity of each design. Then, these sensors were individually attached to a moving human forearm to evaluate more representative results. A commercial accelerometer (Microstrain G-link) was mounted alongside for comparison. The printed sensors have a similar motion response to the commercial accelerometer, demonstrating the potential of a printed smart fabric motion sensor for use in intelligent clothing applications.

  9. A Sensor Fusion Method for Tracking Vertical Velocity and Height Based on Inertial and Barometric Altimeter Measurements

    PubMed Central

    Sabatini, Angelo Maria; Genovese, Vincenzo

    2014-01-01

    A sensor fusion method was developed for vertical channel stabilization by fusing inertial measurements from an Inertial Measurement Unit (IMU) and pressure altitude measurements from a barometric altimeter integrated in the same device (baro-IMU). An Extended Kalman Filter (EKF) estimated the quaternion from the sensor frame to the navigation frame; the sensed specific force was rotated into the navigation frame and compensated for gravity, yielding the vertical linear acceleration; finally, a complementary filter driven by the vertical linear acceleration and the measured pressure altitude produced estimates of height and vertical velocity. A method was also developed to condition the measured pressure altitude using a whitening filter, which helped to remove the short-term correlation due to environment-dependent pressure changes from raw pressure altitude. The sensor fusion method was implemented to work on-line using data from a wireless baro-IMU and tested for the capability of tracking low-frequency small-amplitude vertical human-like motions that can be critical for stand-alone inertial sensor measurements. Validation tests were performed in different experimental conditions, namely no motion, free-fall motion, forced circular motion and squatting. Accurate on-line tracking of height and vertical velocity was achieved, giving confidence to the use of the sensor fusion method for tracking typical vertical human motions: velocity Root Mean Square Error (RMSE) was in the range 0.04–0.24 m/s; height RMSE was in the range 5–68 cm, with statistically significant performance gains when the whitening filter was used by the sensor fusion method to track relatively high-frequency vertical motions. PMID:25061835

  10. Energy Optimization on the Battlefield: How Integrating Energy Efficient Technologies at the Tactical Level Can Reduce Fuel Consumption and Lessen the Burden of Fuel Logistics

    DTIC Science & Technology

    2014-06-13

    a kill zone. This also created maintenance issues including diesel soot buildup in the vehicles' exhaust systems. Since it took three to four days to...cost savings when summed over the thousands of light fixtures used on bases and FOBs. Changing to LEDs in tents and installing motion sensors could...CFL systems. Motion sensors also aided in turning lights off when the tents are not occupied. Both LEDs and motion sensors produced significant

  11. Method and System for Physiologically Modulating Videogames and Simulations which Use Gesture and Body Image Sensing Control Input Devices

    NASA Technical Reports Server (NTRS)

    Pope, Alan T. (Inventor); Stephens, Chad L. (Inventor); Habowski, Tyler (Inventor)

    2017-01-01

    Method for physiologically modulating videogames and simulations includes utilizing input from a motion-sensing video game system and input from a physiological signal acquisition device. The inputs from the physiological signal sensors are utilized to change the response of a user's avatar to inputs from the motion-sensing sensors. The motion-sensing system comprises a 3D sensor system having full-body 3D motion capture of a user's body. This arrangement encourages health-enhancing physiological self-regulation skills or therapeutic amplification of healthful physiological characteristics. The system provides increased motivation for users to utilize biofeedback as may be desired for treatment of various conditions.

  12. A stretchable strain sensor based on a metal nanoparticle thin film for human motion detection

    NASA Astrophysics Data System (ADS)

    Lee, Jaehwan; Kim, Sanghyeok; Lee, Jinjae; Yang, Daejong; Park, Byong Chon; Ryu, Seunghwa; Park, Inkyu

    2014-09-01

    Wearable strain sensors for human motion detection are being highlighted in various fields such as medical, entertainment and sports industry. In this paper, we propose a new type of stretchable strain sensor that can detect both tensile and compressive strains and can be fabricated by a very simple process. A silver nanoparticle (Ag NP) thin film patterned on the polydimethylsiloxane (PDMS) stamp by a single-step direct transfer process is used as the strain sensing material. The working principle is the change in the electrical resistance caused by the opening/closure of micro-cracks under mechanical deformation. The fabricated stretchable strain sensor shows highly sensitive and durable sensing performances in various tensile/compressive strains, long-term cyclic loading and relaxation tests. We demonstrate the applications of our stretchable strain sensors such as flexible pressure sensors and wearable human motion detection devices with high sensitivity, response speed and mechanical robustness. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr03295k

  13. INS integrated motion analysis for autonomous vehicle navigation

    NASA Technical Reports Server (NTRS)

    Roberts, Barry; Bazakos, Mike

    1991-01-01

    The use of inertial navigation system (INS) measurements to enhance the quality and robustness of motion analysis techniques used for obstacle detection is discussed with particular reference to autonomous vehicle navigation. The approach to obstacle detection used here employs motion analysis of imagery generated by a passive sensor. Motion analysis of imagery obtained during vehicle travel is used to generate range measurements to points within the field of view of the sensor, which can then be used to provide obstacle detection. Results obtained with an INS integrated motion analysis approach are reviewed.

  14. Quantification of equine sacral and iliac motion during gait: a comparison between motion capture with skin-mounted and bone-fixated sensors.

    PubMed

    Goff, L; Van Weeren, P R; Jeffcott, L; Condie, P; McGowan, C

    2010-11-01

    Information regarding movement at the ilium and sacrum in nonlame horses during normal gait may assist in understanding the biomechanics of the equine sacroiliac joint. To determine the amount and direction of motion at the ilium and sacrum using 3D orientation sensors during walk and trot in sound Thoroughbreds. To compare results from sensors fixed to the skin with results from sensors fixed to bone-implanted pins. Three 3D wireless orientation sensors were mounted to the skin over the tuber sacrale (TS) and sacrum of 6 horses and motion at the ilium and sacrum was recorded for lateral bending (LB) flexion-extension (F-E) and axial rotation (AR) during walk and trot. This process was repeated with the orientation sensors mounted to the same pelvic landmarks via Steinmann pins. Mean walk values were greater than trot values using pin-mounted sensors for all planes of movement (P < 0.05). Walk had 1.64 ± 0.22° (mean ± s.e.) more LB than trot (pin-mounted) yet 0.68 ± 0.22° less than trot when skin-mounted; 3.45 ± 0.15° more F-E (pin- and skin-mounted), and 4.99 ± 0.4° more AR (pin-mounted), but trot had 3.4 ± 0.40° more AR than walk with skin mounting. Using pinned sensors for trot resulted in less LB (2.47 ± 0.22°), F-E (1.12 ± 0.15°) and AR (10.62 ± 0.40°); and for walk less F-E (1.12 ± 0.15°) and AR (2.15 ± 0.40°) compared to skin-mounted. Poor correlation existed between mean values for skin- and pin-mounted data for walk and trot, for all planes of motion. Movements were smaller at trot with bone-fixated sensors compared to walk, suggesting increased muscular control of movement at the trot. The apparent increase in skin motion at the trot and no clear correlation between skin- and bone-mounted sensors indicates inaccuracies when measuring sacral and iliac movement with skin mounting. © 2010 EVJ Ltd.

  15. Assessing Arthroscopic Skills Using Wireless Elbow-Worn Motion Sensors.

    PubMed

    Kirby, Georgina S J; Guyver, Paul; Strickland, Louise; Alvand, Abtin; Yang, Guang-Zhong; Hargrove, Caroline; Lo, Benny P L; Rees, Jonathan L

    2015-07-01

    Assessment of surgical skill is a critical component of surgical training. Approaches to assessment remain predominantly subjective, although more objective measures such as Global Rating Scales are in use. This study aimed to validate the use of elbow-worn, wireless, miniaturized motion sensors to assess the technical skill of trainees performing arthroscopic procedures in a simulated environment. Thirty participants were divided into three groups on the basis of their surgical experience: novices (n = 15), intermediates (n = 10), and experts (n = 5). All participants performed three standardized tasks on an arthroscopic virtual reality simulator while wearing wireless wrist and elbow motion sensors. Video output was recorded and a validated Global Rating Scale was used to assess performance; dexterity metrics were recorded from the simulator. Finally, live motion data were recorded via Bluetooth from the wireless wrist and elbow motion sensors and custom algorithms produced an arthroscopic performance score. Construct validity was demonstrated for all tasks, with Global Rating Scale scores and virtual reality output metrics showing significant differences between novices, intermediates, and experts (p < 0.001). The correlation of the virtual reality path length to the number of hand movements calculated from the wireless sensors was very high (p < 0.001). A comparison of the arthroscopic performance score levels with virtual reality output metrics also showed highly significant differences (p < 0.01). Comparisons of the arthroscopic performance score levels with the Global Rating Scale scores showed strong and highly significant correlations (p < 0.001) for both sensor locations, but those of the elbow-worn sensors were stronger and more significant (p < 0.001) than those of the wrist-worn sensors. A new wireless assessment of surgical performance system for objective assessment of surgical skills has proven valid for assessing arthroscopic skills. The elbow-worn sensors were shown to achieve an accurate assessment of surgical dexterity and performance. The validation of an entirely objective assessment of arthroscopic skill with wireless elbow-worn motion sensors introduces, for the first time, a feasible assessment system for the live operating theater with the added potential to be applied to other surgical and interventional specialties. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.

  16. Implementation of weigh-in-motion (WIM) systems.

    DOT National Transportation Integrated Search

    2009-02-01

    This research finished the development and implementation of a novel and durable, higher voltage, and lower temperature-dependent weigh-in-motion (WIM) sensor that was begun under an earlier research project. These better sensors will require few...

  17. 3D mouse shape reconstruction based on phase-shifting algorithm for fluorescence molecular tomography imaging system.

    PubMed

    Zhao, Yue; Zhu, Dianwen; Baikejiang, Reheman; Li, Changqing

    2015-11-10

    This work introduces a fast, low-cost, robust method based on fringe pattern and phase shifting to obtain three-dimensional (3D) mouse surface geometry for fluorescence molecular tomography (FMT) imaging. We used two pico projector/webcam pairs to project and capture fringe patterns from different views. We first calibrated the pico projectors and the webcams to obtain their system parameters. Each pico projector/webcam pair had its own coordinate system. We used a cylindrical calibration bar to calculate the transformation matrix between these two coordinate systems. After that, the pico projectors projected nine fringe patterns with a phase-shifting step of 2π/9 onto the surface of a mouse-shaped phantom. The deformed fringe patterns were captured by the corresponding webcam respectively, and then were used to construct two phase maps, which were further converted to two 3D surfaces composed of scattered points. The two 3D point clouds were further merged into one with the transformation matrix. The surface extraction process took less than 30 seconds. Finally, we applied the Digiwarp method to warp a standard Digimouse into the measured surface. The proposed method can reconstruct the surface of a mouse-sized object with an accuracy of 0.5 mm, which we believe is sufficient to obtain a finite element mesh for FMT imaging. We performed an FMT experiment using a mouse-shaped phantom with one embedded fluorescence capillary target. With the warped finite element mesh, we successfully reconstructed the target, which validated our surface extraction approach.
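
    A minimal sketch of the standard N-step phase-shifting computation referred to above (shift step 2π/N), which recovers the wrapped phase map from the webcam frames; phase unwrapping and the projector/camera calibration are not shown.

    ```python
    import numpy as np

    def phase_from_shifts(frames):
        """Wrapped phase map from N phase-shifted fringe images.
        frames: array of shape (N, H, W); for the record's nine-pattern case,
        N = 9 and the shift step is 2*pi/9."""
        n = frames.shape[0]
        deltas = 2.0 * np.pi * np.arange(n) / n
        num = np.tensordot(np.sin(deltas), frames, axes=(0, 0))
        den = np.tensordot(np.cos(deltas), frames, axes=(0, 0))
        return np.arctan2(-num, den)   # wrapped phase in (-pi, pi]
    ```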

  18. Multisensory visual servoing by a neural network.

    PubMed

    Wei, G Q; Hirzinger, G

    1999-01-01

    Conventional computer vision methods for determining a robot's end-effector motion based on sensory data need sensor calibration (e.g., camera calibration) and sensor-to-hand calibration (e.g., hand-eye calibration). This involves many computations and can introduce difficulties, especially when different kinds of sensors are involved. In this correspondence, we present a neural network approach to the motion determination problem without any calibration. Two kinds of sensory data, namely camera images and laser range data, are used as the input to a multilayer feedforward network to learn the direct transformation from the sensory data to the required motions. This provides a practical sensor fusion method. Using a recursive motion strategy with a network correction, we relax the requirement for the exactness of the learned transformation. Another important feature of our work is that the goal position can be changed without retraining the network. Experimental results show the effectiveness of our method.

  19. Ultrasensitive, passive and wearable sensors for monitoring human muscle motion and physiological signals.

    PubMed

    Cai, Feng; Yi, Changrui; Liu, Shichang; Wang, Yan; Liu, Lacheng; Liu, Xiaoqing; Xu, Xuming; Wang, Li

    2016-03-15

    Flexible sensors have attracted more and more attention as a fundamental part of anthropomorphic robot research, medical diagnosis and physical health monitoring. Here, we constructed an ultrasensitive, passive flexible sensor with the advantages of low cost, lightness, wearability, electrical safety and reliability. The fundamental mechanism of the sensor is the triboelectric effect, which induces electrostatic charges on the surfaces between two different materials. Like a plate capacitor, a current is generated when a small mechanical disturbance changes the distance or overlap between the charged surfaces, producing an output current/voltage. Typically, the passive sensor unambiguously monitors muscle motions including hand motion from stretch-clench-stretch, mouth motion from open-bite-open, blink and respiration. Moreover, this sensor records the details of the consecutive phases in a cardiac cycle of the apex cardiogram, and identifies the peaks, including the percussion wave, tidal wave and diastolic wave, of the radial pulse wave. Because it records subtle human physiological signals, including the radial pulsilogram and apex cardiogram, with excellent signal-to-noise ratio, stability and reproducibility, the sensor shows great potential in applications of medical diagnosis and daily health monitoring. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Accurate respiration measurement using DC-coupled continuous-wave radar sensor for motion-adaptive cancer radiotherapy.

    PubMed

    Gu, Changzhan; Li, Ruijiang; Zhang, Hualiang; Fung, Albert Y C; Torres, Carlos; Jiang, Steve B; Li, Changzhi

    2012-11-01

    Accurate respiration measurement is crucial in motion-adaptive cancer radiotherapy. Conventional methods for respiration measurement are undesirable because they are either invasive to the patient or do not have sufficient accuracy. In addition, measurement of external respiration signal based on conventional approaches requires close patient contact to the physical device which often causes patient discomfort and undesirable motion during radiation dose delivery. In this paper, a dc-coupled continuous-wave radar sensor was presented to provide a noncontact and noninvasive approach for respiration measurement. The radar sensor was designed with dc-coupled adaptive tuning architectures that include RF coarse-tuning and baseband fine-tuning, which allows the radar sensor to precisely measure movement with stationary moment and always work with the maximum dynamic range. The accuracy of respiration measurement with the proposed radar sensor was experimentally evaluated using a physical phantom, human subject, and moving plate in a radiotherapy environment. It was shown that respiration measurement with radar sensor while the radiation beam is on is feasible and the measurement has a submillimeter accuracy when compared with a commercial respiration monitoring system which requires patient contact. The proposed radar sensor provides accurate, noninvasive, and noncontact respiration measurement and therefore has a great potential in motion-adaptive radiotherapy.
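
    A small sketch of the standard arctangent demodulation used with I/Q continuous-wave radar, relating the baseband phase to chest displacement via x(t) = λ·φ(t)/(4π); the wavelength is an illustrative value and not necessarily the operating band of the sensor described above.

    ```python
    import numpy as np

    def radar_displacement(i_ch, q_ch, wavelength=0.0125):
        """Chest displacement (m) from the I/Q baseband channels of a CW radar
        via arctangent demodulation. wavelength = 12.5 mm corresponds to a
        24 GHz carrier (illustrative). DC offsets are assumed to be handled
        by the dc-coupled front end."""
        phase = np.unwrap(np.arctan2(q_ch, i_ch))
        return wavelength * phase / (4.0 * np.pi)
    ```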

  1. Can a low-cost webcam be used for a remote neurological exam?

    PubMed

    Wood, Jeffrey; Wallin, Mitchell; Finkelstein, Joseph

    2013-01-01

    Multiple sclerosis (MS) is a demyelinating and axonal degenerative disease of the central nervous system. It is the most common progressive neurological disorder of young adults, affecting over 1 million persons worldwide. Despite the increased use of neuroimaging and other tools to measure MS morbidity, the neurological examination remains the primary method to document relapses and progression of the disease. The goal of this study was to demonstrate the feasibility and validity of using a low-cost webcam for remote neurological examination in a home setting for patients with MS. Using a cross-over design, 20 MS patients were evaluated in person and via remote televisit, and the results of the neurological evaluations were compared. Overall, we found that agreement between face-to-face and remote EDSS evaluation was sufficient to provide clinically valid information. Another important finding of this study was the high acceptance by patients and their providers of using remote televisits for conducting neurological examinations in MS patients' homes. The results of this study demonstrated the potential of using low-cost webcams for remote neurological exams in patients with MS.

  2. An experimental study to investigate the effects of a motion tracking electromagnetic sensor during EEG data acquisition.

    PubMed

    Bashashati, Ali; Noureddin, Borna; Ward, Rabab K; Lawrence, Peter D; Birch, Gary E

    2006-03-01

    A power spectral analysis study was conducted to investigate the effects of using an electromagnetic motion tracking sensor on an electroencephalogram (EEG) recording system. The results showed that the sensors do not generate any consistent frequency component(s) in the power spectrum of the EEG in the frequencies of interest (0.1-55 Hz).

  3. Complex Human Activity Recognition Using Smartphone and Wrist-Worn Motion Sensors.

    PubMed

    Shoaib, Muhammad; Bosch, Stephan; Incel, Ozlem Durmaz; Scholten, Hans; Havinga, Paul J M

    2016-03-24

    The position of on-body motion sensors plays an important role in human activity recognition. Most often, mobile phone sensors at the trouser pocket or an equivalent position are used for this purpose. However, this position is not suitable for recognizing activities that involve hand gestures, such as smoking, eating, drinking coffee and giving a talk. To recognize such activities, wrist-worn motion sensors are used. However, these two positions are mainly used in isolation. To use richer context information, we evaluate three motion sensors (accelerometer, gyroscope and linear acceleration sensor) at both wrist and pocket positions. Using three classifiers, we show that the combination of these two positions outperforms the wrist position alone, mainly at smaller segmentation windows. Another problem is that less-repetitive activities, such as smoking, eating, giving a talk and drinking coffee, cannot be recognized easily at smaller segmentation windows unlike repetitive activities, like walking, jogging and biking. For this purpose, we evaluate the effect of seven window sizes (2-30 s) on thirteen activities and show how increasing window size affects these various activities in different ways. We also propose various optimizations to further improve the recognition of these activities. For reproducibility, we make our dataset publicly available.
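
    A generic sketch of the window-and-classify pipeline implied above (fixed-length segmentation, simple per-axis features, an off-the-shelf classifier); the feature set, window length and classifier are illustrative and not the authors' exact configuration.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(acc, fs, win_s=5.0):
        """Split an (N, 3) accelerometer stream into fixed windows and compute
        simple per-axis mean/std features (6 features per window)."""
        w = int(win_s * fs)
        feats = []
        for start in range(0, len(acc) - w + 1, w):
            seg = acc[start:start + w]
            feats.append(np.hstack([seg.mean(axis=0), seg.std(axis=0)]))
        return np.array(feats)

    # Illustrative training, assuming hypothetical arrays X_wrist and X_pocket
    # (raw 50 Hz streams from the two positions) and per-window labels y:
    # X = np.hstack([window_features(X_wrist, 50), window_features(X_pocket, 50)])
    # clf = RandomForestClassifier(n_estimators=100).fit(X, y)
    ```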

  4. Stretch sensors for human body motion

    NASA Astrophysics Data System (ADS)

    O'Brien, Ben; Gisby, Todd; Anderson, Iain A.

    2014-03-01

    Sensing motion of the human body is a difficult task. From an engineer's perspective, people are soft, highly mobile objects that move in and out of complex environments. As well as the technical challenge of sensing, concepts such as comfort, social intrusion, usability, and aesthetics are paramount in determining whether someone will adopt a sensing solution or not. At the same time the demands for human body motion sensing are growing fast. Athletes want feedback on posture and technique, consumers need new ways to interact with augmented reality devices, and healthcare providers wish to track the recovery of a patient. Dielectric elastomer stretch sensors are ideal for bridging this gap. They are soft, flexible, and precise. They are low power, lightweight, and can be easily mounted on the body or embedded into clothing. From a commercialisation point of view stretch sensing is easier than actuation or generation - such sensors can be low voltage and integrated with conventional microelectronics. This paper takes a bird's-eye view of the use of these sensors to measure human body motion. A holistic description of sensor operation and guidelines for sensor design will be presented to help technologists and developers in the space.

  5. Error analysis on spinal motion measurement using skin mounted sensors.

    PubMed

    Yang, Zhengyi; Ma, Heather Ting; Wang, Deming; Lee, Raymond

    2008-01-01

    Measurement errors of skin-mounted sensors in measuring forward bending movement of the lumbar spine are investigated. In this investigation, radiographic images capturing the entire lumbar spine's position were acquired and used as a 'gold' standard. Seventeen young male volunteers (21 (SD 1) years old) agreed to participate in the study. Light-weight miniature sensors of the electromagnetic tracking system (Fastrak) were attached to the skin overlying the spinous processes of the lumbar spine. With the sensors attached, the subjects were requested to take lateral radiographs in two postures: neutral upright and full flexion. The ranges of motion of the lumbar spine were calculated from two sets of digitized data (the bony markers of the vertebral bodies and the sensors) and compared. The differences between the two sets of results were then analyzed. The relative movement between sensor and vertebrae was decomposed into sensor sliding and tilting, from which sliding error and tilting error were introduced. The gross motion range of forward bending of the lumbar spine measured from bony markers of the vertebrae is 67.8 degrees (SD 10.6 degrees) and that from the sensors is 62.8 degrees (SD 12.8 degrees). The error and absolute error for gross motion range were 5.0 degrees (SD 7.2 degrees) and 7.7 degrees (SD 3.9 degrees). The contributions of sensors placed on S1 and L1 to the absolute error were 3.9 degrees (SD 2.9 degrees) and 4.4 degrees (SD 2.8 degrees), respectively.

  6. Lumbar joint torque estimation based on simplified motion measurement using multiple inertial sensors.

    PubMed

    Miyajima, Saori; Tanaka, Takayuki; Imamura, Yumeko; Kusaka, Takashi

    2015-01-01

    We estimate lumbar torque based on motion measurement using only three inertial sensors. First, human motion is measured by 6-axis motion tracking devices, each combining a 3-axis accelerometer and a 3-axis gyroscope, placed on the shank, thigh, and back. Next, the lumbar joint torque during the motion is estimated by kinematic musculoskeletal simulation. The conventional method for estimating joint torque uses full-body motion data measured by an optical motion capture system. However, in this research, joint torque is estimated using only the three link angles of the body, thigh, and shank. The utility of our method was verified by experiments. We measured bending motions of the knee and waist simultaneously. As a result, we were able to estimate the lumbar joint torque from the measured motion.

  7. Experiments and hands-on activities for geoscience observing and measuring by using low-priced instruments

    NASA Astrophysics Data System (ADS)

    Yang, S. S.; Lin, Y. Y.; Tang-Iunn, S. S.

    2016-12-01

    In this presentation, we will introduce five experiments and hands-on activities for geoscience observation and measurement using low-priced, small-sized commercial instruments. The Black Box for Environmental Measuring (BBEM) system is based on the Arduino platform; low-power-consumption sensors are employed to measure meteorological and environmental parameters. A commercial GPS receiver is used to observe the influence of geomagnetic storms on the GPS system. A webcam is an accessible instrument suitable for detecting and recording sprites, lightning, and the development of cumulonimbus. Real-time flight trackers and websites are employed to determine the altitude of the cloud base. A simple VLF receiver is built using the audio interface of a computer, and the observed signals show the variations of the D-region of the ionosphere. All these experiments and activities are practical and have been applied in classrooms and science outreach in Taiwan.

  8. Incorporating active-learning techniques into the photonics-related teaching in the Erasmus Mundus Master in "Color in Informatics and Media Technology"

    NASA Astrophysics Data System (ADS)

    Pozo, Antonio M.; Rubiño, Manuel; Hernández-Andrés, Javier; Nieves, Juan L.

    2014-07-01

    In this work, we present a teaching methodology using active-learning techniques in the course "Devices and Instrumentation" of the Erasmus Mundus Master's Degree in "Color in Informatics and Media Technology" (CIMET). A part of the course "Devices and Instrumentation" of this Master's is dedicated to the study of image sensors and methods to evaluate their image quality. The teaching methodology that we present consists of incorporating practical activities during the traditional lectures. One of the innovative aspects of this teaching methodology is that students apply the concepts and methods studied in class to real devices. For this, students use their own digital cameras, webcams, or cellphone cameras in class. These activities provide students a better understanding of the theoretical subject given in class and encourage the active participation of students.

  9. Inertial sensor-based smoother for gait analysis.

    PubMed

    Suh, Young Soo

    2014-12-17

    An off-line smoother algorithm is proposed to estimate foot motion using an inertial sensor unit (three-axis gyroscopes and accelerometers) attached to a shoe. The smoother gives more accurate foot motion estimation than filter-based algorithms by using all of the sensor data instead of only the current sensor data. The algorithm consists of two parts. In the first part, a Kalman filter is used to obtain an initial foot motion estimate. In the second part, the error in the initial estimation is compensated using a smoother, where the problem is formulated as a quadratic optimization problem. An efficient solution of the quadratic optimization problem is given using its sparse structure. Through experiments, it is shown that the proposed algorithm can estimate foot motion more accurately than a filter-based algorithm with reasonable computation time. In particular, there is significant improvement in the foot motion estimation when the foot is moving off the floor: the z-axis position error squared sum (total time: 3.47 s) when the foot is in the air is 0.0807 m² (Kalman filter) and 0.0020 m² (the proposed smoother).
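
    The paper's smoother is not reproduced here; as a toy illustration of posing smoothing as a sparse quadratic optimization problem, the sketch below smooths a noisy 1D trajectory with a second-difference penalty and a banded sparse solve.

    ```python
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    def qp_smoother(z, lam=100.0):
        """Smooth a noisy 1D trajectory z by solving
        min_x ||x - z||^2 + lam * ||D2 x||^2, where D2 is the second-difference
        operator. The normal equations are sparse and banded, so the solve is
        fast even for long recordings. lam is an illustrative weight."""
        n = len(z)
        D2 = sp.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
        A = sp.eye(n) + lam * (D2.T @ D2)
        return spsolve(A.tocsc(), z)
    ```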

  10. Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles

    PubMed Central

    Atman, Jamal; Popp, Manuel; Ruppelt, Jan; Trommer, Gert F.

    2016-01-01

    Micro Air Vehicles (MAVs) equipped with various sensors are able to carry out autonomous flights. However, the self-localization of autonomous agents is mostly dependent on Global Navigation Satellite Systems (GNSS). In order to provide an accurate navigation solution in the absence of GNSS signals, this article presents a hybrid sensor. The hybrid sensor is a deep integration of a monocular camera and a 2D laser rangefinder, from which the motion of the MAV is estimated. This realization is expected to be more flexible in terms of environments compared to laser-scan-matching approaches. The estimated ego-motion is then integrated into the MAV’s navigation system. First, however, the relative pose between the two sensors is obtained using an improved calibration method. For both calibration and ego-motion estimation, 3D-to-2D correspondences are used and the Perspective-3-Point (P3P) problem is solved. Moreover, the covariance estimation of the relative motion is presented. The experiments show very accurate calibration and navigation results. PMID:27649203

  11. In-motion optical sensing for assessment of animal well-being

    NASA Astrophysics Data System (ADS)

    Atkins, Colton A.; Pond, Kevin R.; Madsen, Christi K.

    2017-05-01

    The application of in-motion optical sensor measurements was investigated for inspecting livestock soundness as a means of animal well-being. An optical sensor-based platform was used to collect in-motion, weight-related information. Eight steers, weighing between 680 and 1134 kg, were evaluated twice. Six of the 8 steers were used for further evaluation and analysis. Hoof impacts caused plate flexion that was optically sensed. Observed kinetic differences between animals' strides at a walking or running/trotting gait with significant force distributions of animals' hoof impacts allowed for observation of real-time, biometric patterns. Overall, optical sensor-based measurements identified hoof differences between and within animals in motion that may allow for diagnosis of musculoskeletal unsoundness without visual evaluation.

  12. Monitoring stage fright outside the laboratory: an example in a professional musician using wearable sensors.

    PubMed

    Kusserow, Martin; Candia, Victor; Amft, Oliver; Hildebrandt, Horst; Folkers, Gerd; Tröster, Gerhard

    2012-03-01

    We implemented and tested a wearable sensor system to measure patterns of stress responses in a professional musician under public performance conditions. Using this sensor system, we monitored the cellist's heart activity, the motion of multiple body parts, and their gradual changes during three repeated performances of a skill-demanding piece in front of a professional audience. From the cellist and her teachers, we collected stage fright self-reports and performance ratings that were related to our sensor data analysis results. Concomitant to changes in body motion and heart rate, the cellist perceived a reduction in stage fright. Performance quality was objectively improved, as technical playing errors decreased throughout repeated renditions. In particular, from performance 1 to 3, the wearable sensors measured a significant increase in the cellist's bowing motion dynamics of approximately 6% and a decrease in heart rate. Bowing motion showed a marginal correlation to the observed heart rate patterns during playing. The wearable system did not interfere with the cellist's performance, thereby allowing investigation of stress responses during natural public performances.

  13. Low-cost spectrometers and learning applications for exposing kids to optics

    NASA Astrophysics Data System (ADS)

    Khodadad, Iman; Abedzadeh, Navid; Lakshminarayan, Vasudevan; Saini, Simarjeet S.

    2015-10-01

    We designed and built a low-cost imaging spectrometer using an in-house grating and a webcam and demonstrated its applications for active learning in science with experiments ranging from understanding light spectra from various sources to detecting adulteration in edible oils. The experiments were designed and run in an elementary school in Waterloo, Ontario with young students from grade 4 to grade 8. The performance of the spectrometer is benchmarked to commercial spectrometers and showed excellent correlation for wavelengths between 450 nm to 650 nm. The spectral range can be improved by removing infra-red filters integrated in webcams.

  14. Webcam camera as a detector for a simple lab-on-chip time based approach.

    PubMed

    Wongwilai, Wasin; Lapanantnoppakhun, Somchai; Grudpan, Supara; Grudpan, Kate

    2010-05-15

    A modification of a webcam camera for use as a small and low-cost detector was demonstrated with a simple lab-on-chip reactor. Real-time continuous monitoring of the reaction zone could be performed. Acid-base neutralization with phenolphthalein indicator was used as a model reaction. The fading of the pink color of the indicator as the acidic solution diffused into the basic solution zone was recorded as the change in red, blue and green colors (%RBG). The change was related to acid concentration. A low-cost, portable, semi-automated analysis system was achieved.
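
    The detection step amounts to tracking the red, blue and green channel percentages inside a fixed reaction-zone window of the webcam image over time. A minimal sketch of that kind of time-based colour readout, assuming OpenCV frame capture; the region-of-interest coordinates and frame count are illustrative.

      import cv2
      import numpy as np

      def channel_percentages(frame_bgr, roi):
          """Mean R, B, G intensities inside the reaction-zone ROI, as percentages of full scale.
          roi = (x, y, w, h); coordinates are illustrative."""
          x, y, w, h = roi
          patch = frame_bgr[y:y + h, x:x + w].astype(float)
          b, g, r = patch[..., 0].mean(), patch[..., 1].mean(), patch[..., 2].mean()
          return 100 * r / 255, 100 * b / 255, 100 * g / 255

      cap = cv2.VideoCapture(0)           # the webcam used as the detector
      roi = (300, 220, 40, 40)            # placeholder reaction-zone window
      history = []
      for _ in range(100):                # record 100 frames of the fading pink colour
          ok, frame = cap.read()
          if not ok:
              break
          history.append(channel_percentages(frame, roi))
      cap.release()
      print(np.array(history)[:5])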

  15. A webcam in Bayer-mode as a light beam profiler for the near infra-red

    PubMed Central

    Langer, Gregor; Hochreiner, Armin; Burgholzer, Peter; Berer, Thomas

    2013-01-01

    Beam profiles are commonly measured with complementary metal oxide semiconductors (CMOS) or charge coupled devices (CCD). The devices are fast and reliable but expensive. By making use of the fact that the Bayer-filter in commercial webcams is transparent in the near infra-red (>800 nm) and their CCD chips are sensitive up to about 1100 nm, we demonstrate a cheap and simple way to measure laser beam profiles with a resolution down to around ±1 μm, which is close to the resolution of the knife-edge technique. PMID:23645943

  16. A webcam in Bayer-mode as a light beam profiler for the near infra-red.

    PubMed

    Langer, Gregor; Hochreiner, Armin; Burgholzer, Peter; Berer, Thomas

    2013-05-01

    Beam profiles are commonly measured with complementary metal oxide semiconductors (CMOS) or charge coupled devices (CCD). The devices are fast and reliable but expensive. By making use of the fact that the Bayer-filter in commercial webcams is transparent in the near infra-red (>800 nm) and their CCD chips are sensitive up to about 1100 nm, we demonstrate a cheap and simple way to measure laser beam profiles with a resolution down to around ±1 μm, which is close to the resolution of the knife-edge technique.
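
    Once the near-infrared frame has been captured as a 2-D intensity array, a beam profile can be obtained by fitting a Gaussian to the row and column sums of the image. This is a minimal sketch of that fitting step only; the synthetic image below stands in for a real raw webcam capture, and the acquisition details of the Bayer-mode readout are not reproduced here.

      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian(x, amp, x0, sigma, offset):
          return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + offset

      def profile_1d(image, axis):
          """Fit a Gaussian to the image summed along one axis; return centre and 1/e^2 radius."""
          prof = image.astype(float).sum(axis=axis)
          x = np.arange(prof.size)
          p0 = [prof.max() - prof.min(), prof.argmax(), prof.size / 10, prof.min()]
          popt, _ = curve_fit(gaussian, x, prof, p0=p0)
          centre, sigma = popt[1], abs(popt[2])
          return centre, 2.0 * sigma        # 1/e^2 intensity radius of a Gaussian beam is 2*sigma

      # Synthetic stand-in for a raw near-infrared frame.
      yy, xx = np.mgrid[0:480, 0:640]
      image = 200 * np.exp(-((xx - 300) ** 2 + (yy - 250) ** 2) / (2 * 40.0 ** 2)) + 5
      cx, wx = profile_1d(image, axis=0)    # horizontal profile (sum over rows)
      cy, wy = profile_1d(image, axis=1)    # vertical profile (sum over columns)
      print(f"centre=({cx:.1f},{cy:.1f}) px, 1/e^2 radii=({wx:.1f},{wy:.1f}) px")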

  17. In-pavement fiber Bragg grating sensors for high-speed weigh-in-motion measurements

    NASA Astrophysics Data System (ADS)

    Al-Tarawneh, Mu'ath; Huang, Ying

    2017-04-01

    The demand on high-speed weigh-in-motion (WIM) measurement rises significantly in last decade to collect weight information for traffic managements especially after the introduction of weigh-station bypass programs such as Pre-Pass. In this study, a three-dimension glass fiber-reinforced polymer packaged fiber Bragg grating sensor (3D GFRP-FBG) is introduced to be embedded inside flexible pavements for weigh-in-motion (WIM) measurement at high speed. Sensitivity study showed that the developed sensor is very sensitive to the passing weights at high speed. Field tests also validated that the developed sensor was able to detect weights at a vehicle driving speed up to 55mph, which can be applied for WIM measurements at high speed.

  18. An ultrasensitive strain sensor with a wide strain range based on graphene armour scales.

    PubMed

    Yang, Yi-Fan; Tao, Lu-Qi; Pang, Yu; Tian, He; Ju, Zhen-Yi; Wu, Xiao-Ming; Yang, Yi; Ren, Tian-Ling

    2018-06-12

    An ultrasensitive strain sensor with a wide strain range based on graphene armour scales is demonstrated in this paper. The sensor shows an ultra-high gauge factor (GF, up to 1054) and a wide strain range (ε = 26%), both of which present an advantage compared to most other flexible sensors. Moreover, the sensor is developed by a simple fabrication process. Due to the excellent performance, this strain sensor can meet the demands of subtle, large and complex human motion monitoring, which indicates its tremendous application potential in health monitoring, mechanical control, real-time motion monitoring and so on.
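
    For reference, the gauge factor (GF) quoted above is the ratio of the relative resistance change to the applied strain:

      \[ \mathrm{GF} = \frac{\Delta R / R_{0}}{\varepsilon} \]

    where \( \Delta R \) is the change in resistance, \( R_{0} \) the unstrained resistance and \( \varepsilon \) the applied strain; a GF of 1054 thus corresponds to a relative resistance change roughly a thousand times larger than the strain producing it.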

  19. Weigh-in-Motion Sensor and Controller Operation and Performance Comparison

    DOT National Transportation Integrated Search

    2018-01-01

    This research project utilized statistical inference and comparison techniques to compare the performance of different Weigh-in-Motion (WIM) sensors. First, we analyzed test-vehicle data to perform an accuracy check of the results reported by the sen...

  20. Biomechanics of the Sensor–Tissue Interface—Effects of Motion, Pressure, and Design on Sensor Performance and the Foreign Body Response—Part I: Theoretical Framework

    PubMed Central

    Helton, Kristen L; Ratner, Buddy D; Wisniewski, Natalie A

    2011-01-01

    The importance of biomechanics in glucose sensor function has been largely overlooked. This article is the first part of a two-part review in which we look beyond commonly recognized chemical biocompatibility to explore the biomechanics of the sensor–tissue interface as an important aspect of continuous glucose sensor biocompatibility. Part I provides a theoretical framework to describe how biomechanical factors such as motion and pressure (typically micromotion and micropressure) give rise to interfacial stresses, which affect tissue physiology around a sensor and, in turn, impact sensor performance. Three main contributors to sensor motion and pressure are explored: applied forces, sensor design, and subject/patient considerations. We describe how acute forces can temporarily impact sensor signal and how chronic forces can alter the foreign body response and inflammation around an implanted sensor, and thus impact sensor performance. The importance of sensor design (e.g., size, shape, modulus, texture) and specific implant location on the tissue response are also explored. In Part II: Examples and Application (a sister publication), examples from the literature are reviewed, and the application of biomechanical concepts to sensor design are described. We believe that adding biomechanical strategies to the arsenal of material compositions, surface modifications, drug elution, and other chemical strategies will lead to improvements in sensor biocompatibility and performance. PMID:21722578

  1. Development of a Plantar Load Estimation Algorithm for Evaluation of Forefoot Load of Diabetic Patients during Daily Walks Using a Foot Motion Sensor

    PubMed Central

    Noguchi, Hiroshi; Sanada, Hiromi

    2017-01-01

    Forefoot load (FL) contributes to callus formation, which is one of the pathways to diabetic foot ulcers (DFU). In this study, we hypothesized that excessive FL, which cannot be detected by plantar load measurements within laboratory settings, occurs in daily walks. To demonstrate this, we created an FL estimation algorithm using foot motion data. Acceleration and angular velocity data were obtained from a motion sensor attached to each shoe of the subjects. The accuracy of the estimated FL was validated by its correlation with the FL measured by force sensors on the metatarsal heads, assessed using the Pearson correlation coefficient. The mean of the correlation coefficients of all the subjects was 0.63 on a level corridor, while it showed intersubject differences on a slope and stairs. We conducted daily walk measurements in two diabetic patients and additionally verified the safety of daily walk measurement using a wearable motion sensor attached to each shoe. We found that excessive FL occurred during their daily walks of approximately three hours in total, during which no adverse event was observed. This study indicated that FL evaluation using wearable motion sensors is a promising way to prevent DFUs. PMID:28840130

  2. Development of a Plantar Load Estimation Algorithm for Evaluation of Forefoot Load of Diabetic Patients during Daily Walks Using a Foot Motion Sensor.

    PubMed

    Watanabe, Ayano; Noguchi, Hiroshi; Oe, Makoto; Sanada, Hiromi; Mori, Taketoshi

    2017-01-01

    Forefoot load (FL) contributes to callus formation, which is one of the pathways to diabetic foot ulcers (DFU). In this study, we hypothesized that excessive FL, which cannot be detected by plantar load measurements within laboratory settings, occurs in daily walks. To demonstrate this, we created an FL estimation algorithm using foot motion data. Acceleration and angular velocity data were obtained from a motion sensor attached to each shoe of the subjects. The accuracy of the estimated FL was validated by its correlation with the FL measured by force sensors on the metatarsal heads, assessed using the Pearson correlation coefficient. The mean of the correlation coefficients of all the subjects was 0.63 on a level corridor, while it showed intersubject differences on a slope and stairs. We conducted daily walk measurements in two diabetic patients and additionally verified the safety of daily walk measurement using a wearable motion sensor attached to each shoe. We found that excessive FL occurred during their daily walks of approximately three hours in total, during which no adverse event was observed. This study indicated that FL evaluation using wearable motion sensors is a promising way to prevent DFUs.
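
    The validation step above reduces to computing the Pearson correlation coefficient between the estimated and force-sensor-measured forefoot load over a walk. A minimal sketch using SciPy; the two time series below are synthetic placeholders standing in for the estimated and measured FL signals, not the study's data.

      import numpy as np
      from scipy.stats import pearsonr

      rng = np.random.default_rng(0)
      t = np.linspace(0, 10, 500)                     # ~10 s of walking at 50 Hz (illustrative)
      fl_measured = np.clip(np.sin(2 * np.pi * 1.0 * t), 0, None)   # stance-phase loading
      fl_estimated = 0.9 * fl_measured + 0.1 * rng.standard_normal(t.size)

      r, p = pearsonr(fl_estimated, fl_measured)
      print(f"Pearson r = {r:.2f} (p = {p:.1e})")     # the study reports a mean r of 0.63 on level ground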

  3. Thermal noise variance of a receive radiofrequency coil as a respiratory motion sensor.

    PubMed

    Andreychenko, A; Raaijmakers, A J E; Sbrizzi, A; Crijns, S P M; Lagendijk, J J W; Luijten, P R; van den Berg, C A T

    2017-01-01

    A passive respiratory motion sensor based on the noise variance of the receive coil array was developed. Respiratory motion alters the body resistance. The noise variance of an RF coil depends on the body resistance and, thus, is also modulated by respiration. For the noise variance monitoring, noise samples were acquired without and with MR signal excitation on clinical 1.5/3 T MR scanners. The performance of the noise sensor was compared with the respiratory bellow and with the diaphragm displacement visible on MR images. Several breathing patterns were tested. The noise variance demonstrated a periodic temporal modulation that was synchronized with the respiratory bellow signal. The modulation depth of the noise variance resulting from respiration varied between the channels of the array and depended on the channel's location with respect to the body. The noise sensor combined with MR acquisition was able to detect the respiratory motion for every k-space read-out line. Within clinical MR systems, respiratory motion can be detected by the noise in the receive array. Unlike the bellow, the noise sensor does not require careful positioning, additional hardware, or a dedicated MR acquisition. Magn Reson Med 77:221-228, 2017. © 2016 Wiley Periodicals, Inc.

  4. Using the Scroll Wheel on a Wireless Mouse as a Motion Sensor

    ERIC Educational Resources Information Center

    Taylor, Richard S.; Wilson, William R.

    2010-01-01

    Since its inception in the mid-80s, the computer mouse has undergone several design changes. As the mouse has evolved, physicists have found new ways to utilize it as a motion sensor. For example, the rollers in a mechanical mouse have been used as pulleys to study the motion of a magnet moving through a copper tube as a quantitative demonstration…

  5. Mathematical Modeling Of The Terrain Around A Robot

    NASA Technical Reports Server (NTRS)

    Slack, Marc G.

    1992-01-01

    In a conceptual system for modeling the terrain around an autonomous mobile robot, the representation of terrain used for control is separated from the representation provided by the sensors. The concept takes the motion-planning system out from under the constraints imposed by the discrete spatial intervals of square terrain grid(s). The separation allows the sensing and motion-controlling systems to operate asynchronously, facilitating the integration of new map and sensor data into motion planning.

  6. Acoustic sensor array extracts physiology during movement

    NASA Astrophysics Data System (ADS)

    Scanlon, Michael V.

    2001-08-01

    An acoustic sensor attached to a person's neck can extract heart and breath sounds, as well as voice and other physiology related to their health and performance. Soldiers, firefighters, law enforcement, and rescue personnel, as well as people at home or in health care facilities, can benefit from being remotely monitored. ARL's acoustic sensor, when worn around a person's neck, picks up the carotid artery and breath sounds very well by matching the sensor's acoustic impedance to that of the body via a gel pad, while airborne noise is minimized by an impedance mismatch. Although the physiological sounds have high SNR, the acoustic sensor also responds to motion-induced artifacts that obscure the meaningful physiology. To complicate signal extraction, these interfering signals are usually covariant with the heart sounds, in that as a person walks faster the heart tends to beat faster, and motion noises tend to contain low-frequency components similar to the heart sounds. A noise-canceling configuration developed by ARL uses two acoustic sensors on the front sides of the neck as physiology sensors, and two additional acoustic sensors on the back sides of the neck as noise references. Breath and heart sounds, which occur with near symmetry and simultaneously at the two front sensors, will correlate well. The motion noise present on all four sensors will be used to cancel the noise on the two physiology sensors. This report will compare heart rate variability derived from both the acoustic array and from ECG data taken simultaneously on a treadmill test. Acoustically derived breath rate and volume approximations will be introduced as well. A miniature 3-axis accelerometer on the same neckband provides additional noise references to validate footfall and motion activity.

  7. Image Processing Occupancy Sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The Image Processing Occupancy Sensor, or IPOS, is a novel sensor technology developed at the National Renewable Energy Laboratory (NREL). The sensor is based on low-cost embedded microprocessors widely used by the smartphone industry and leverages mature open-source computer vision software libraries. Compared to traditional passive infrared and ultrasonic-based motion sensors currently used for occupancy detection, IPOS has shown the potential for improved accuracy and a richer set of feedback signals for occupant-optimized lighting, daylighting, temperature setback, ventilation control, and other occupancy and location-based uses. Unlike traditional passive infrared (PIR) or ultrasonic occupancy sensors, which infer occupancy based only on motion, IPOS uses digital image-based analysis to detect and classify various aspects of occupancy, including the presence of occupants regardless of motion, their number, location, and activity levels, as well as the illuminance properties of the monitored space. The IPOS software leverages the recent availability of low-cost embedded computing platforms, computer vision software libraries, and camera elements.

  8. Hand Motion Classification Using a Multi-Channel Surface Electromyography Sensor

    PubMed Central

    Tang, Xueyan; Liu, Yunhui; Lv, Congyi; Sun, Dong

    2012-01-01

    The human hand has multiple degrees of freedom (DOF) for achieving high-dexterity motions. Identifying and replicating human hand motions are necessary to perform precise and delicate operations in many applications, such as haptic applications. Surface electromyography (sEMG) sensors are a low-cost method for identifying hand motions, in addition to the conventional methods that use data gloves and vision detection. The identification of multiple hand motions is challenging because the error rate typically increases significantly with the addition of more hand motions. Thus, the current study proposes two new methods for feature extraction to solve the problem above. The first method is the extraction of the energy ratio features in the time-domain, which are robust and invariant to motion forces and speeds for the same gesture. The second method is the extraction of the concordance correlation features that describe the relationship between every two channels of the multi-channel sEMG sensor system. The concordance correlation features of a multi-channel sEMG sensor system were shown to provide a vast amount of useful information for identification. Furthermore, a new cascaded-structure classifier is also proposed, in which 11 types of hand gestures can be identified accurately using the newly defined features. Experimental results show that the success rate for the identification of the 11 gestures is significantly high. PMID:22438703

  9. Hand motion classification using a multi-channel surface electromyography sensor.

    PubMed

    Tang, Xueyan; Liu, Yunhui; Lv, Congyi; Sun, Dong

    2012-01-01

    The human hand has multiple degrees of freedom (DOF) for achieving high-dexterity motions. Identifying and replicating human hand motions are necessary to perform precise and delicate operations in many applications, such as haptic applications. Surface electromyography (sEMG) sensors are a low-cost method for identifying hand motions, in addition to the conventional methods that use data gloves and vision detection. The identification of multiple hand motions is challenging because the error rate typically increases significantly with the addition of more hand motions. Thus, the current study proposes two new methods for feature extraction to solve the problem above. The first method is the extraction of the energy ratio features in the time-domain, which are robust and invariant to motion forces and speeds for the same gesture. The second method is the extraction of the concordance correlation features that describe the relationship between every two channels of the multi-channel sEMG sensor system. The concordance correlation features of a multi-channel sEMG sensor system were shown to provide a vast amount of useful information for identification. Furthermore, a new cascaded-structure classifier is also proposed, in which 11 types of hand gestures can be identified accurately using the newly defined features. Experimental results show that the success rate for the identification of the 11 gestures is significantly high.
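
    The concordance correlation features described above are based on the concordance correlation coefficient (Lin's CCC) between pairs of sEMG channels. A minimal sketch of computing that coefficient for every channel pair, with a random array standing in for a multi-channel sEMG window; it illustrates the feature definition, not the paper's full pipeline.

      import numpy as np

      def concordance_ccc(x, y):
          """Lin's concordance correlation coefficient between two 1-D signals."""
          mx, my = x.mean(), y.mean()
          vx, vy = x.var(), y.var()
          cov = ((x - mx) * (y - my)).mean()
          return 2 * cov / (vx + vy + (mx - my) ** 2)

      rng = np.random.default_rng(1)
      window = rng.standard_normal((4, 256))          # 4 sEMG channels, 256-sample window
      features = [concordance_ccc(window[i], window[j])
                  for i in range(4) for j in range(i + 1, 4)]
      print(np.round(features, 3))                    # 6 pairwise concordance features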

  10. The fiber optic gyroscope - a portable rotational ground motion sensor

    NASA Astrophysics Data System (ADS)

    Wassermann, J. M.; Bernauer, F.; Guattari, F.; Igel, H.

    2016-12-01

    It has already been shown that a portable broadband rotational ground motion sensor will have a large impact on several fields of seismological research, such as volcanology, marine geophysics, seismic tomography and planetary seismology. Here, we present results of tests and experiments with one of the first broadband rotational motion sensors available. BlueSeis-3A is a fiber optic gyroscope (FOG) especially designed for the needs of seismology, developed by iXBlue, France, in close collaboration with researchers financed by the European Research Council project ROMY (Rotational motions - a new observable for seismology). We first present the instrument characteristics, which were estimated by different standard laboratory tests, e.g. self noise using operational range diagrams or Allan deviation. Next we present the results of a field experiment designed to demonstrate the value of a 6C measurement (3 components of translation and 3 components of rotation). This field test took place at Mt. Stromboli volcano, Italy, and was accompanied by a seismic array installation to check the FOG output against the more commonly known array-derived rotation. As already shown with synthetic data, an additional direct measurement of three components of rotation can reduce the ambiguity in source mechanism estimation and can be used to correct for dynamic tilt of the translational sensors (i.e. seismometers). We can therefore demonstrate that the deployment of a weak-motion broadband rotational motion sensor in fact produces superior results while reducing the number of deployed instruments.

  11. Inertial Sensor-Based Motion Analysis of Lower Limbs for Rehabilitation Treatments

    PubMed Central

    Sun, Tongyang; Duan, Lihong; Wang, Yulong

    2017-01-01

    The diagnosis of the hemiplegic rehabilitation state performed by therapists can be biased by their subjective experience, which may deteriorate the rehabilitation effect. In order to improve this situation, a quantitative evaluation is proposed. Though many motion analysis systems are available, they are too complicated for practical application by therapists. In this paper, a method for detecting the motion of human lower limbs, including all degrees of freedom (DOFs), via inertial sensors is proposed, which permits analyzing the patient's motion ability. This method is applicable to arbitrary walking directions and tracks of the persons under study, and its results are unbiased compared to therapists' qualitative estimations. Using a simplified mathematical model of the human body, the rotation angles for each lower limb joint are calculated from the input signals acquired by the inertial sensors. Finally, the rotation angle versus joint displacement curves are constructed, and the estimated values of joint motion angle and motion ability are obtained. Experimental verification of the proposed motion detection and analysis method was performed, which proved that it can efficiently detect the differences between the motion behaviors of disabled and healthy persons and provide a reliable quantitative evaluation of the rehabilitation state. PMID:29065575

  12. Webcams, Crowdsourcing, and Enhanced Crosswalks: Developing a Novel Method to Analyze Active Transportation.

    PubMed

    Hipp, J Aaron; Manteiga, Alicia; Burgess, Amanda; Stylianou, Abby; Pless, Robert

    2016-01-01

    Active transportation opportunities and infrastructure are an important component of a community's design, livability, and health. Features of the built environment influence active transportation, but objective study of the natural-experiment effects of built environment improvements on active transportation is challenging. The purpose of this study was to develop and present a novel method of active transportation research using webcams and crowdsourcing, and to determine if crosswalk enhancement was associated with changes in active transportation rates, including across a variety of weather conditions. A total of 20,529 publicly available webcam images from two street intersections in Washington, DC, USA were used to examine the impact of an improved crosswalk on active transportation. A crowdsourcing service, Amazon Mechanical Turk, was used to annotate the image data. Temperature data were collected from the National Oceanic and Atmospheric Administration, and precipitation data were annotated from images by trained research assistants. Summary analyses demonstrated slight, bi-directional differences in the percent of images with pedestrians and bicyclists captured before and after the enhancement of the crosswalks. Chi-square analyses revealed these changes were not significant. In general, pedestrian presence increased in images captured during moderate temperatures compared to images captured during hot or cold temperatures. Chi-square analyses indicated the crosswalk improvement may have encouraged walking and biking in uncomfortable outdoor conditions (P < 0.05). The methods employed provide an objective, cost-effective alternative to traditional means of examining the effects of built environment changes on active transportation. The use of webcams to collect active transportation data has applications for community policymakers, planners, and health professionals. Future research will work to validate this method in a variety of settings as well as across different built environment and community policy initiatives.
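
    The before/after comparisons described above amount to chi-square tests on counts of images with and without pedestrians. A minimal sketch with scipy.stats.chi2_contingency; the counts in the table are invented for illustration and are not the study's data.

      from scipy.stats import chi2_contingency

      # Rows: before / after crosswalk enhancement; columns: images with / without pedestrians.
      # These counts are illustrative placeholders.
      table = [[412, 9588],
               [451, 9549]]
      chi2, p, dof, expected = chi2_contingency(table)
      print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")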

  13. Registration of Large Motion Blurred Images

    DTIC Science & Technology

    2016-05-09

    ...in handling the dynamics of the capturing system, for example, a drone. CMOS sensors, used in recent times, when employed in these cameras produce two types of ... blur in the captured image when there is camera motion during exposure. However, contemporary CMOS sensors employ an electronic rolling shutter (RS...

  14. Commercial Motion Sensor Based Low-Cost and Convenient Interactive Treadmill.

    PubMed

    Kim, Jonghyun; Gravunder, Andrew; Park, Hyung-Soon

    2015-09-17

    Interactive treadmills were developed to improve the simulation of overground walking compared to conventional treadmills. However, currently available interactive treadmills are expensive and inconvenient, which limits their use. We propose a low-cost and convenient version of the interactive treadmill that does not require expensive equipment or a complicated setup. As a substitute for high-cost sensors, such as motion capture systems, a low-cost motion sensor was used to recognize the subject's intention to change speed. Moreover, the sensor enables the subject to make a convenient and safe stop using gesture recognition. For further cost reduction, the interactive treadmill was based on an inexpensive treadmill platform, and a novel high-level speed control scheme was applied to maximize performance in simulating overground walking. Pilot tests with ten healthy subjects were conducted, and the results demonstrated that the proposed treadmill achieves performance similar to that of a typical, costly interactive treadmill comprising a motion capture system and an instrumented treadmill, while providing a convenient and safe method for stopping.

  15. Autonomous Landmark Calibration Method for Indoor Localization

    PubMed Central

    Kim, Jae-Hoon; Kim, Byoung-Seop

    2017-01-01

    Machine-generated data expansion is a global phenomenon in recent Internet services. The proliferation of mobile communication and smart devices has increased the utilization of machine-generated data significantly. One of the most promising applications of machine-generated data is the estimation of the location of smart devices. The motion sensors integrated into smart devices generate continuous data that can be used to estimate the location of pedestrians in an indoor environment. We focus on the estimation of the accurate location of smart devices by determining the landmarks appropriately for location error calibration. In the motion sensor-based location estimation, the proposed threshold control method determines valid landmarks in real time to avoid the accumulation of errors. A statistical method analyzes the acquired motion sensor data and proposes a valid landmark for every movement of the smart devices. Motion sensor data used in the testbed are collected from the actual measurements taken throughout a commercial building to demonstrate the practical usefulness of the proposed method. PMID:28837071

  16. Complex Human Activity Recognition Using Smartphone and Wrist-Worn Motion Sensors

    PubMed Central

    Shoaib, Muhammad; Bosch, Stephan; Incel, Ozlem Durmaz; Scholten, Hans; Havinga, Paul J. M.

    2016-01-01

    The position of on-body motion sensors plays an important role in human activity recognition. Most often, mobile phone sensors at the trouser pocket or an equivalent position are used for this purpose. However, this position is not suitable for recognizing activities that involve hand gestures, such as smoking, eating, drinking coffee and giving a talk. To recognize such activities, wrist-worn motion sensors are used. However, these two positions are mainly used in isolation. To use richer context information, we evaluate three motion sensors (accelerometer, gyroscope and linear acceleration sensor) at both wrist and pocket positions. Using three classifiers, we show that the combination of these two positions outperforms the wrist position alone, mainly at smaller segmentation windows. Another problem is that less-repetitive activities, such as smoking, eating, giving a talk and drinking coffee, cannot be recognized easily at smaller segmentation windows unlike repetitive activities, like walking, jogging and biking. For this purpose, we evaluate the effect of seven window sizes (2–30 s) on thirteen activities and show how increasing window size affects these various activities in different ways. We also propose various optimizations to further improve the recognition of these activities. For reproducibility, we make our dataset publicly available. PMID:27023543

  17. An automatic fall detection framework using data fusion of Doppler radar and motion sensor network.

    PubMed

    Liu, Liang; Popescu, Mihail; Skubic, Marjorie; Rantz, Marilyn

    2014-01-01

    This paper describes ongoing work on detecting falls in independent-living senior apartments. We have developed a fall detection system with a Doppler radar sensor and implemented ceiling radar in real senior apartments. However, the detection accuracy on real-world data is affected by false alarms inherent in the real living environment, such as motion from visitors. To solve this issue, this paper proposes an improved framework that fuses the Doppler radar sensor result with a motion sensor network. As a result, performance is significantly improved after the data fusion by discarding the false alarms generated by visitors. The improvement of this new method is tested on one week of continuous data from an actual elderly person who frequently falls while living in her senior home.

  18. Turbulence Measurements from Compliant Moorings. Part II: Motion Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kilcher, Levi F.; Thomson, Jim; Harding, Samuel

    2017-06-01

    Acoustic Doppler velocimeters (ADVs) are a valuable tool for making high-precision measurements of turbulence, and moorings are a convenient and ubiquitous platform for making many kinds of measurements in the ocean. However—because of concerns that mooring motion can contaminate turbulence measurements and acoustic Doppler profilers are relatively easy to deploy—ADVs are not frequently deployed from moorings. This work details a method for measuring turbulence using moored ADVs that corrects for mooring motion using measurements from inertial motion sensors. Three distinct mooring platforms were deployed in a tidal channel with inertial motion-sensor-equipped ADVs. In each case, the motion correction based on the inertial measurements dramatically reduced contamination from mooring motion. The spectra from these measurements have a shape that is consistent with other measurements in tidal channels, and have a f^(-5/3) slope at high frequencies—consistent with Kolmogorov's theory of isotropic turbulence. Motion correction also improves estimates of cross-spectra and Reynolds stresses. Comparison of turbulence dissipation with flow speed and turbulence production indicates a bottom boundary layer production-dissipation balance during ebb and flood that is consistent with the strong tidal forcing at the site. These results indicate that inertial-motion-sensor-equipped ADVs are a valuable new tool for measuring turbulence from moorings.
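
    The essence of such a motion correction is to add the sensor-head velocity, obtained by integrating and high-pass filtering the IMU acceleration, to the velocity recorded by the ADV. The sketch below illustrates that idea in one dimension only, with illustrative sample rate, filter cutoff, and synthetic signals; the full method (including rotation into the earth frame and the paper's specific processing) is considerably more involved.

      import numpy as np
      from scipy.signal import butter, filtfilt
      from scipy.integrate import cumulative_trapezoid

      fs = 16.0                              # ADV/IMU sample rate in Hz (illustrative)
      t = np.arange(0, 600, 1 / fs)
      rng = np.random.default_rng(2)

      accel_imu = 0.05 * np.sin(2 * np.pi * 0.1 * t)      # mooring sway acceleration (m/s^2)
      u_true = 0.02 * rng.standard_normal(t.size)         # stand-in "turbulent" water velocity (m/s)
      u_mooring = cumulative_trapezoid(accel_imu, t, initial=0.0)   # true head velocity
      u_adv = u_true - u_mooring                          # ADV sees water velocity minus head motion

      # High-pass the integrated IMU velocity to suppress integration drift, then correct.
      b, a = butter(2, 0.03 / (fs / 2), btype="highpass")
      u_head = filtfilt(b, a, u_mooring)
      u_corrected = u_adv + u_head

      print(f"rms error before: {np.std(u_adv - u_true):.4f}  after: {np.std(u_corrected - u_true):.4f}")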

  19. Analysing harmonic motions with an iPhone’s magnetometer

    NASA Astrophysics Data System (ADS)

    Yavuz, Ahmet; Kağan Temiz, Burak

    2016-05-01

    In this paper, we propose an experiment for analysing harmonic motion using an iPhone’s (or iPad’s) magnetometer. This experiment consists of the detection of magnetic field variations obtained from an iPhone’s magnetometer sensor. A graph of harmonic motion is directly displayed on the iPhone’s screen using the Sensor Kinetics application. Data from this application was analysed with Eureqa software to establish the equation of the harmonic motion. Analyses show that the use of an iPhone’s magnetometer to analyse harmonic motion is a practical and effective method for small oscillations and frequencies less than 15-20 Hz.
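
    The analysis step amounts to fitting a sinusoid (or damped sinusoid) to the magnetometer trace exported from the phone. A minimal sketch using scipy.optimize.curve_fit; the synthetic trace below stands in for an exported Sensor Kinetics log, and the oscillation parameters are illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      def damped_sine(t, amp, f, phi, tau, offset):
          return amp * np.exp(-t / tau) * np.sin(2 * np.pi * f * t + phi) + offset

      # Synthetic magnetometer trace: a 1.2 Hz oscillation with slow decay plus noise.
      rng = np.random.default_rng(3)
      t = np.linspace(0, 10, 1000)
      b_field = damped_sine(t, 12.0, 1.2, 0.3, 8.0, 45.0) + 0.5 * rng.standard_normal(t.size)

      p0 = [10.0, 1.0, 0.0, 5.0, b_field.mean()]     # rough initial guesses
      popt, _ = curve_fit(damped_sine, t, b_field, p0=p0)
      print(f"fitted frequency: {popt[1]:.2f} Hz, decay time: {popt[3]:.1f} s")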

  20. OFSETH: smart medical textile for continuous monitoring of respiratory motions under magnetic resonance imaging.

    PubMed

    De Jonckheere, J; Narbonneau, F; Jeanne, M; Kinet, D; Witt, J; Krebber, K; Paquet, B; Depre, A; Logier, R

    2009-01-01

    The potential impact of optical fiber sensors embedded into medical textiles for the continuous monitoring of the patient during Magnetic Resonance Imaging is presented. We report on two purely optical sensing technologies for respiratory movement monitoring - a macro-bending sensor and a Bragg grating sensor - designed to measure the elongation due to abdominal and thoracic motions during breathing. We demonstrate that the two sensors can successfully sense textile elongation between 0% and 3%, while maintaining the stretching properties of the textile substrates for good patient comfort.

  1. Visualization of Heart Sounds and Motion Using Multichannel Sensor

    NASA Astrophysics Data System (ADS)

    Nogata, Fumio; Yokota, Yasunari; Kawamura, Yoko

    2010-06-01

    As there are various difficulties associated with auscultation techniques, we have devised a technique for visualizing heart motion in order to assist in the understanding of the heartbeat for both doctors and patients. Auscultatory sounds were first visualized using FFT and wavelet analysis. Next, to show global and simultaneous heart motions, a new visualization technique was established. The visualization system consists of a 64-channel unit (63 acceleration sensors and one ECG sensor) and a signal/image analysis unit. The acceleration sensors were arranged in a square array (8×8) with a 20-mm pitch interval, which was adhered to the chest surface. The heart motion of one cycle was visualized at a sampling frequency of 3 kHz and a quantization of 12 bits. The visualized results showed the typical waveform motion of the strong pressure shock due to the closing of the tricuspid and mitral valves at the cardiac apex (first sound), and the closing of the aortic and pulmonic valves (second sound), in sequence. To overcome difficulties in auscultation, the system can be applied to the detection of heart disease and to the digital database management of auscultation examinations in medical settings.

  2. The instantaneous linear motion information measurement method based on inertial sensors for ships

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Huang, Jing; Gao, Chen; Quan, Wei; Li, Ming; Zhang, Yanshun

    2018-05-01

    Instantaneous linear motion information is an important foundation for ship control and needs to be measured accurately. For this purpose, an instantaneous linear motion measurement method based on inertial sensors is put forward for ships. By introducing a half-fixed coordinate system to realize the separation between the instantaneous linear motion and the ship's master movement, the instantaneous linear motion acceleration of ships can be obtained with higher accuracy. Then, a digital high-pass filter is applied to suppress the velocity error caused by low-frequency signals such as the Schuler period. Finally, the instantaneous linear motion displacement of ships can be measured accurately. Results of simulation experiments show that the method is reliable and effective, and can realize the precise measurement of the velocity and displacement of the instantaneous linear motion of ships.

  3. Computer-Guided Deep Brain Stimulation Programming for Parkinson's Disease.

    PubMed

    Heldman, Dustin A; Pulliam, Christopher L; Urrea Mendoza, Enrique; Gartner, Maureen; Giuffrida, Joseph P; Montgomery, Erwin B; Espay, Alberto J; Revilla, Fredy J

    2016-02-01

    This pilot study evaluated computer-guided deep brain stimulation (DBS) programming designed to optimize stimulation settings using objective motion sensor-based motor assessments. Seven subjects (five males; 54-71 years) with Parkinson's disease (PD) and recently implanted DBS systems participated in the study. Within two months of lead implantation, each subject returned to the clinic to undergo computer-guided programming and parameter selection. A motion sensor was placed on the index finger of the more affected hand. Software guided a monopolar survey during which monopolar stimulation on each contact was iteratively increased, followed by an automated assessment of tremor and bradykinesia. After completing assessments at each setting, a software algorithm determined stimulation settings designed to minimize symptom severities, side effects, and battery usage. Optimal DBS settings were chosen based on the average severity of motor symptoms measured by the motion sensor. Settings chosen by the software algorithm identified a therapeutic window and improved tremor and bradykinesia by an average of 35.7% compared with baseline in the "off" state (p < 0.01). Motion sensor-based computer-guided DBS programming identified stimulation parameters that significantly improved tremor and bradykinesia with minimal clinician involvement. Automated motion sensor-based mapping is worthy of further investigation and may one day serve to extend programming to populations without access to specialized DBS centers. © 2015 International Neuromodulation Society.

  4. Relative-Motion Sensors and Actuators for Two Optical Tables

    NASA Technical Reports Server (NTRS)

    Gursel, Yekta; McKenney, Elizabeth

    2004-01-01

    Optoelectronic sensors and magnetic actuators have been developed as parts of a system for controlling the relative position and attitude of two massive optical tables that float on separate standard air suspensions that attenuate ground vibrations. In the specific application for which these sensors and actuators were developed, one of the optical tables holds an optical system that mimics distant stars, while the other optical table holds a test article that simulates a spaceborne stellar interferometer that would be used to observe the stars. The control system is designed to suppress relative motion of the tables or, on demand, to impose controlled relative motion between the tables. The control system includes a sensor system that detects relative motion of the tables in six independent degrees of freedom and a drive system that can apply force to the star-simulator table in the six degrees of freedom. The sensor system includes (1) a set of laser heterodyne gauges and (2) a set of four diode lasers on the star-simulator table, each aimed at one of four quadrant photodiodes at nominal corresponding positions on the test-article table. The heterodyne gauges are used to measure relative displacements along the x axis.

  5. Contact lenses fitting teaching: learning improvement with monitor visualization of webcam video recordings

    NASA Astrophysics Data System (ADS)

    Gargallo, Ana; Arines, Justo

    2014-08-01

    We have adapted low-cost webcams to the slit lamp objectives with the aim of improving contact lens fitting practice. With this solution we obtain good-quality pictures and videos; we also recorded videos of eye examinations, contact lens fitting evaluation routines, and the final practice exam of our students. In addition, the video system increases the interaction between students, because they can see what their colleagues are doing and become aware of their mistakes, helping and correcting each other. We think that the proposed system is a low-cost solution for supporting training in contact lens fitting practice.

  6. An investigation of pupil-based cognitive load measurement with low cost infrared webcam under light reflex interference.

    PubMed

    Chen, Siyuan; Epps, Julien; Chen, Fang

    2013-01-01

    Using the task-evoked pupillary response (TEPR) to index cognitive load can contribute significantly to the assessment of memory function and cognitive skills in patients. However, the measurement of pupillary response is currently limited to a well-controlled lab environment due to light reflex and also relies heavily on expensive video-based eye trackers. Furthermore, commercial eye trackers are usually dedicated to gaze direction measurement, and their calibration procedure and computing resource are largely redundant for pupil-based cognitive load measurement (PCLM). In this study, we investigate the validity of cognitive load measurement with (i) pupil light reflex in a less controlled luminance background; (ii) a low-cost infrared (IR) webcam for the TEPR in a controlled luminance background. ANOVA results show that with an appropriate baseline selection and subtraction, the light reflex is significantly reduced, suggesting the possibility of less constrained practical applications of PCLM. Compared with the TEPR from a commercial remote eye tracker, a low-cost IR webcam achieved a similar TEPR pattern and no significant difference was found between the two devices in terms of cognitive load measurement across five induced load levels.
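
    The baseline selection and subtraction mentioned above is the key step for separating the task-evoked pupillary response from luminance-driven changes. A minimal sketch of that idea: subtract the mean pupil diameter in a short pre-stimulus window from the post-stimulus trace. Window lengths, frame rate, and the synthetic trace are illustrative assumptions, not the study's protocol.

      import numpy as np

      def tepr(pupil_diameter, fs, stim_onset_s, baseline_s=1.0):
          """Task-evoked pupillary response: pupil trace re-referenced to a pre-stimulus baseline."""
          onset = int(stim_onset_s * fs)
          base = pupil_diameter[onset - int(baseline_s * fs):onset].mean()
          return pupil_diameter[onset:] - base

      fs = 30.0                                       # webcam frame rate (illustrative)
      t = np.arange(0, 20, 1 / fs)
      # Synthetic trace: slow luminance-driven drift plus a small task-evoked dilation at t = 10 s.
      pupil = 4.0 - 0.02 * t + 0.15 * np.where(t > 10, 1 - np.exp(-(t - 10) / 2.0), 0.0)
      response = tepr(pupil, fs, stim_onset_s=10.0)
      print(response[:5].round(3))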

  7. Webcams as a tool for teaching in Optometry training

    NASA Astrophysics Data System (ADS)

    Gargallo, A.; Arines, J.

    2015-04-01

    Clinical Optometry lab training is devoted to developing the skills students need in eye healthcare professional practice. Nevertheless, students always find difficulties in the management of some optometric instruments and in the understanding of the evaluation techniques. Moreover, teachers also have problems explaining eye evaluation tests or demonstrating instrument handling. In order to facilitate the learning process, webcams adapted to the optometric devices represent a helpful and useful tool. In this work we present the use of webcams in some of the most common clinical tests in Optometry, such as ocular refraction, colour vision testing, eye health evaluation with the slit lamp, retinoscopy, ophthalmoscopy and contact lens fitting. Our experience shows that with this simple approach we can do things more easily: show instrument handling to all the students at the same time; take pictures or videos of different eye health conditions or exploratory routines for later visualization with all the students; recreate the visual experience of the patient during the optometric exam; simulate colour vision pathologies; increase the interaction between students, allowing them to help and correct each other; and also record the final routine exam in order to make its revision with the students possible.

  8. Mobile user identity sensing using the motion sensor

    NASA Astrophysics Data System (ADS)

    Zhao, Xi; Feng, Tao; Xu, Lei; Shi, Weidong

    2014-05-01

    Employing mobile sensor data to recognize user behavioral activities has been well studied in recent years. However, adopting such data as a biometric modality has rarely been explored. Existing methods either used the data to recognize gait, which is considered a distinguishing identity feature, or segmented a specific kind of motion for user recognition, such as the phone picking-up motion. Since the identity and the motion gesture jointly affect the motion data, fixing the gesture (walking or phone picking-up) definitively simplifies the identity sensing problem. However, it also introduces the complexity of gesture detection or the requirement of a higher sample rate for motion sensor readings, which may drain the battery quickly and affect the usability of the phone. In general, it is still under investigation whether motion-based user authentication at large scale satisfies the accuracy requirements of a stand-alone biometric modality. In this paper, we propose a novel approach to use motion sensor readings for user identity sensing. Instead of decoupling the user identity from a gesture, we reasonably assume users have their own distinguishing phone usage habits and extract the identity from fuzzy activity patterns, represented by a combination of body movements, whose signal chains span a relatively low frequency spectrum, and hand movements, whose signals span a relatively high frequency spectrum. Bayesian rules are then applied to analyze the dependency of the different frequency components in the signals. During testing, a posterior probability of user identity given the observed chains can be computed for authentication. Tested on an accelerometer dataset with 347 users, our approach has demonstrated promising results.

  9. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Smith, Philip W.

    1993-01-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the significant non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but, this leads to the motion-structure paradox - most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.

  10. Wearable Wide-Range Strain Sensors Based on Ionic Liquids and Monitoring of Human Activities

    PubMed Central

    Zhang, Shao-Hui; Wang, Feng-Xia; Li, Jia-Jia; Peng, Hong-Dan; Yan, Jing-Hui; Pan, Ge-Bo

    2017-01-01

    Wearable sensors for detection of human activities have encouraged the development of highly elastic sensors. In particular, to capture subtle and large-scale body motion, stretchable and wide-range strain sensors are highly desired, but still a challenge. Herein, a highly stretchable and transparent strain sensor based on ionic liquids and an elastic polymer has been developed. The as-obtained sensor exhibits impressive stretchability over a wide strain range (from 0.1% to 400%), good bending properties and high sensitivity, with a gauge factor that can reach 7.9. Importantly, the sensors show excellent biological compatibility and succeed in monitoring diverse human activities ranging from complex large-scale multidimensional motions to subtle signals, including wrist, finger and elbow joint bending, finger touch, breath, speech, swallowing behavior and the pulse wave. PMID:29135928

  11. A portable platform to collect and review behavioral data simultaneously with neurophysiological signals.

    PubMed

    Tianxiao Jiang; Siddiqui, Hasan; Ray, Shruti; Asman, Priscella; Ozturk, Musa; Ince, Nuri F

    2017-07-01

    This paper presents a portable platform to collect and review behavioral data simultaneously with neurophysiological signals. The whole system comprises four parts: a sensor data acquisition interface, a socket server for real-time data streaming, a Simulink system for real-time processing and an offline data review and analysis toolbox. A low-cost microcontroller is used to acquire data from external sensors such as an accelerometer and a hand dynamometer. The microcontroller transfers the data either directly through USB or wirelessly through a Bluetooth module to a data server written in C++ for MS Windows. The data server also interfaces with a digital glove and captures HD video from a webcam. The acquired sensor data are streamed over the User Datagram Protocol (UDP) to other applications such as Simulink/Matlab for real-time analysis and recording. Neurophysiological signals such as electroencephalography (EEG), electrocorticography (ECoG) and local field potential (LFP) recordings can be collected simultaneously in Simulink and fused with the behavioral data. In addition, we developed a customized Matlab graphical user interface (GUI) to review, annotate and analyze the data offline. The software provides a fast, user-friendly data visualization environment with a synchronized video playback feature and is also capable of reviewing long-term neural recordings. Other featured functions such as fast preprocessing with multithreaded filters, annotation, montage selection, power spectral density (PSD) estimation, time-frequency maps and spatial spectral maps are also implemented.
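
    The real-time path in such a platform streams fixed-format sensor samples over UDP to the processing application. Below is a minimal sketch of a sender and receiver pair using Python's standard socket module; the packet layout (timestamp plus three accelerometer axes as little-endian doubles) and the port number are assumptions for illustration, not the platform's actual protocol.

      import socket
      import struct
      import time

      PORT = 25000                       # illustrative port
      FMT = "<dddd"                      # timestamp + ax, ay, az (little-endian doubles)

      def send_sample(sock, ax, ay, az):
          packet = struct.pack(FMT, time.time(), ax, ay, az)
          sock.sendto(packet, ("127.0.0.1", PORT))

      def receive_samples(n=3):
          rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          rx.bind(("127.0.0.1", PORT))
          for _ in range(n):
              data, _ = rx.recvfrom(struct.calcsize(FMT))
              ts, ax, ay, az = struct.unpack(FMT, data)
              print(f"{ts:.3f}: {ax:.2f} {ay:.2f} {az:.2f}")
          rx.close()

      if __name__ == "__main__":
          # In a real deployment the receiver (e.g. the Simulink side) runs in another process;
          # here we only demonstrate the sender side.
          tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          send_sample(tx, 0.01, -0.02, 0.98)
          tx.close()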

  12. Highly sensitive strain sensors based on fragmentized carbon nanotube/polydimethylsiloxane composites

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Fang, Xiaoliang; Tan, Jianping; Lu, Ting; Pan, Likun; Xuan, Fuzhen

    2018-06-01

    Wearable strain sensors based on nanomaterial/elastomer composites have potential applications in flexible electronic skin, human motion detection, human–machine interfaces, etc. In this research, a type of high performance strain sensors has been developed using fragmentized carbon nanotube/polydimethylsiloxane (CNT/PDMS) composites. The CNT/PDMS composites were ground into fragments, and a liquid-induced densification method was used to fabricate the strain sensors. The strain sensors showed high sensitivity with gauge factors (GFs) larger than 200 and a broad strain detection range up to 80%, much higher than those strain sensors based on unfragmentized CNT/PDMS composites (GF < 1). The enhanced sensitivity of the strain sensors is ascribed to the sliding of individual fragmentized-CNT/PDMS-composite particles during mechanical deformation, which causes significant resistance change in the strain sensors. The strain sensors can differentiate mechanical stimuli and monitor various human body motions, such as bending of the fingers, human breathing, and blood pulsing.

  13. Highly sensitive strain sensors based on fragmentized carbon nanotube/polydimethylsiloxane composites.

    PubMed

    Gao, Yang; Fang, Xiaoliang; Tan, Jianping; Lu, Ting; Pan, Likun; Xuan, Fuzhen

    2018-06-08

    Wearable strain sensors based on nanomaterial/elastomer composites have potential applications in flexible electronic skin, human motion detection, human-machine interfaces, etc. In this research, a type of high performance strain sensors has been developed using fragmentized carbon nanotube/polydimethylsiloxane (CNT/PDMS) composites. The CNT/PDMS composites were ground into fragments, and a liquid-induced densification method was used to fabricate the strain sensors. The strain sensors showed high sensitivity with gauge factors (GFs) larger than 200 and a broad strain detection range up to 80%, much higher than those strain sensors based on unfragmentized CNT/PDMS composites (GF < 1). The enhanced sensitivity of the strain sensors is ascribed to the sliding of individual fragmentized-CNT/PDMS-composite particles during mechanical deformation, which causes significant resistance change in the strain sensors. The strain sensors can differentiate mechanical stimuli and monitor various human body motions, such as bending of the fingers, human breathing, and blood pulsing.

  14. A study on validating KinectV2 in comparison of Vicon system as a motion capture system for using in Health Engineering in industry

    NASA Astrophysics Data System (ADS)

    Jebeli, Mahvash; Bilesan, Alireza; Arshi, Ahmadreza

    2017-06-01

    The currently available commercial motion capture systems are constrained by space requirements and thus pose difficulties when used to develop kinematic descriptions of human movements within existing manufacturing and production cells. The Kinect sensor does not share these limitations but is not as accurate. The proposition made in this article is to adopt the Kinect sensor to facilitate the implementation of Health Engineering concepts in industrial environments. This article is an evaluation of the Kinect sensor's accuracy when providing three-dimensional kinematic data. The sensor is thus utilized to assist in the modeling and simulation of worker performance within an industrial cell. For this purpose, Kinect 3D data were compared to those of a Vicon motion capture system in a gait analysis laboratory. Results indicated that the Kinect sensor exhibited a coefficient of determination of 0.9996 on the depth axis, 0.9849 along the horizontal axis and 0.2767 on the vertical axis. The results prove the competency of the Kinect sensor to be used in industrial environments.

  15. An experimental protocol for the definition of upper limb anatomical frames on children using magneto-inertial sensors.

    PubMed

    Ricci, L; Formica, D; Tamilia, E; Taffoni, F; Sparaci, L; Capirci, O; Guglielmelli, E

    2013-01-01

    Motion capture based on magneto-inertial sensors is a technology enabling data collection in unstructured environments, allowing "out of the lab" motion analysis. This technology is a good candidate for the motion analysis of children thanks to its reduced weight and size, as well as the use of wireless communication, which has improved its wearability and reduced its obtrusiveness. A key issue in the application of such technology for motion analysis is its calibration, i.e. a process that allows mapping orientation information from each sensor to a physiological reference frame. To date, even though several calibration procedures are available for adults, no specific calibration procedures have been developed for children. This work addresses this specific issue by presenting a calibration procedure for motion capture of the thorax and upper limbs in healthy children. Reported results suggest performance comparable with similar studies on adults and emphasize some critical issues, opening the way to further improvements.

  16. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-04-22

    The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measurement and does not add any mass to the measured object, in contrast with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel-level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
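
    The subpixel motion extraction the sensor relies on can be illustrated with the standard upsampled phase cross-correlation against which the paper's faster algorithms are benchmarked. A minimal sketch using scikit-image; the synthetic template and the applied subpixel shift are illustrative, and this is the baseline technique, not the paper's modified algorithms.

      import numpy as np
      from scipy.ndimage import shift as nd_shift
      from skimage.registration import phase_cross_correlation

      rng = np.random.default_rng(4)
      reference = rng.random((64, 64))                       # template around a tracked target
      moving = nd_shift(reference, (0.37, -0.21), order=3)   # frame displaced by a subpixel amount

      # upsample_factor=100 resolves motion to roughly 1/100 of a pixel.
      measured_shift, error, _ = phase_cross_correlation(reference, moving,
                                                         upsample_factor=100)
      # Magnitude close to (0.37, 0.21); the sign follows skimage's registration convention.
      print("recovered shift (row, col):", measured_shift)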

  17. Generalized compliant motion primitive

    NASA Technical Reports Server (NTRS)

    Backes, Paul G. (Inventor)

    1994-01-01

    This invention relates to a general primitive for controlling a telerobot with a set of input parameters. The primitive includes a trajectory generator; a teleoperation sensor; a joint limit generator; a force setpoint generator; a dither function generator, which produces telerobot motion inputs in a common coordinate frame for simultaneous combination in sensor summers. Virtual return spring motion input is provided by a restoration spring subsystem. The novel features of this invention include use of a single general motion primitive at a remote site to permit the shared and supervisory control of the robot manipulator to perform tasks via a remotely transferred input parameter set.

  18. A New Multi-Sensor Fusion Scheme to Improve the Accuracy of Knee Flexion Kinematics for Functional Rehabilitation Movements.

    PubMed

    Tannous, Halim; Istrate, Dan; Benlarbi-Delai, Aziz; Sarrazin, Julien; Gamet, Didier; Ho Ba Tho, Marie Christine; Dao, Tien Tuan

    2016-11-15

    Exergames have been proposed as a potential tool to improve the current practice of musculoskeletal rehabilitation. Inertial or optical motion capture sensors are commonly used to track the subject's movements. However, these motion capture tools suffer from a lack of accuracy in estimating joint angles, which could lead to incorrect data interpretation. In this study, we proposed a real-time quaternion-based fusion scheme, based on the extended Kalman filter, between inertial and visual motion capture sensors, to improve the estimation accuracy of joint angles. The fusion outcome was compared to angles measured using a goniometer. The fusion output shows a better estimation than either the inertial measurement units or the Kinect outputs alone. We noted a smaller error (3.96°) compared to the one obtained using inertial sensors (5.04°). The proposed multi-sensor fusion system is therefore accurate enough to be applied, in future works, to our serious game for musculoskeletal rehabilitation.
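
    The sketch below is a much-simplified, scalar stand-in for the fusion idea: two noisy estimates of the same knee-flexion angle are combined by inverse-variance weighting, which is the static special case of a Kalman update. It is not the paper's quaternion-based extended Kalman filter; the function name and variance inputs are assumptions.

```python
import numpy as np

def fuse_angles(theta_imu, theta_kinect, var_imu, var_kinect):
    """Inverse-variance weighted fusion of two noisy knee-flexion angle streams.

    theta_imu, theta_kinect : per-sample angle estimates (degrees)
    var_imu, var_kinect     : assumed measurement variances of each stream
    """
    w_imu, w_kin = 1.0 / var_imu, 1.0 / var_kinect
    fused = (w_imu * np.asarray(theta_imu) + w_kin * np.asarray(theta_kinect)) / (w_imu + w_kin)
    fused_var = 1.0 / (w_imu + w_kin)   # always smaller than either input variance
    return fused, fused_var
```

    Because the fused variance is the harmonic combination of the two input variances, the fused estimate is, under these assumptions, no noisier than the better of the two sources, which is the intuition behind the error reduction reported above.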

  19. The validity of the first and second generation Microsoft Kinect™ for identifying joint center locations during static postures.

    PubMed

    Xu, Xu; McGorry, Raymond W

    2015-07-01

    The Kinect™ sensor released by Microsoft is a low-cost, portable, marker-less motion tracking system for the video game industry. Since the first generation Kinect sensor was released in 2010, many studies have been conducted to examine the validity of this sensor when used to measure body movement in different research areas. In 2014, Microsoft released the second generation Kinect sensor for computer use, with a depth sensor of higher resolution. However, very few studies have performed a direct comparison between all the Kinect sensor-identified joint center locations and their corresponding motion tracking system-identified counterparts, the result of which may provide some insight into the error of Kinect-identified segment lengths and joint angles, as well as the feasibility of applying inverse dynamics to Kinect-identified joint centers. The purpose of the current study is first to propose a method to align the coordinate system of the Kinect sensor with respect to the global coordinate system of a motion tracking system, and then to examine the accuracy of the Kinect sensor-identified coordinates of joint locations during 8 standing and 8 sitting postures of daily activities. The results indicate the proposed alignment method can effectively align the Kinect sensor with respect to the motion tracking system. The accuracy level of the Kinect-identified joint center location is posture-dependent and joint-dependent. For the upright standing posture, the average error across all the participants and all Kinect-identified joint centers is 76 mm and 87 mm for the first and second generation Kinect sensor, respectively. In general, standing postures can be identified with better accuracy than sitting postures, and the identification accuracy of the joints of the upper extremities is better than that of the lower extremities. This result may provide some information regarding the feasibility of using the Kinect sensor in future studies. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
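
    A standard way to align one sensor's coordinate frame with a motion tracking system's global frame, given paired 3D points seen by both systems, is the Kabsch (SVD-based) rigid alignment shown below. The paper's exact alignment method is not spelled out in the abstract, so this is a hedged illustration; the function name is an assumption.

```python
import numpy as np

def rigid_align(P, Q):
    """Kabsch alignment: least-squares rotation R and translation t such that
    Q ≈ P @ R.T + t, where P holds points in the Kinect frame and Q the same
    points in the motion tracking system's global frame (both N x 3)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)                      # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Qc - R @ Pc
    return R, t
```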

  20. In vivo measurement of the 3D kinematics of the temporomandibular joint using miniaturized electromagnetic trackers: technical report.

    PubMed

    Baeyens, J-P; Gilomen, H; Erdmann, B; Clijsen, R; Cabri, J; Vissers, D

    2013-04-01

    The aim of this study was to evaluate the use of miniaturized electromagnetic trackers (1 × 0.5 × 0.5 cm) fixed on teeth of the maxilla and mandible to analyse in vivo the 3D kinematics of the temporomandibular joint (TMJ). A third sensor was fixed to the forehead, and a fourth sensor was used as a stylus pointer to detect several anatomical landmarks in order to embed a local frame on the cranium. Temporomandibular opening/closing, chewing, laterotrusion and protrusion were examined. The prime objective within this study was to rigidly attach electromagnetic minisensors to teeth. The key to a successful affixation was the Kevlar interface. The distances between the two mandible-affixed sensors and between the two maxilla-affixed sensors remained smaller than 0.033 cm for position and 0.2° for attitude throughout the temporomandibular motions. The relative motions between the forehead sensor and the maxilla-affixed sensor are too large for a forehead sensor to serve as an alternative to a maxilla-affixed sensor. The technique using miniaturized electromagnetic trackers builds on earlier methods that used electromagnetic trackers on external appliances. The method allows a full range of motion of the TMJ and does not disturb normal TMJ function.

  1. Development of an image operation system with a motion sensor in dental radiology.

    PubMed

    Sato, Mitsuru; Ogura, Toshihiro; Yasumoto, Yoshiaki; Kadowaki, Yuta; Hayashi, Norio; Doi, Kunio

    2015-07-01

    During examinations and/or treatment, a dentist in the examination room needs to view images with a proper display system. However, dentists cannot operate the image display system by hand, because they always wear gloves and must keep their hands away from unsanitized materials. Therefore, we developed a new image operating system that uses a motion sensor. We used the Leap Motion sensor to read the hand movements of the dentist. We programmed the system in C++ to support various operations of the display system, i.e., click, double-click, drag, and drop. Thus, dentists with their gloves on in the examination room can control dental and panoramic images on the image display system intuitively and quickly with movements of their hands only. We compared the time required by the conventional method using a mouse and by the new method using finger operation. The average operation time with the finger method was significantly shorter than that with the mouse method. This motion sensor method, with appropriate training in finger movements, can provide better operating performance than the conventional mouse method.

  2. Data fusion of multiple kinect sensors for a rehabilitation system.

    PubMed

    Huibin Du; Yiwen Zhao; Jianda Han; Zheng Wang; Guoli Song

    2016-08-01

    Kinect-like depth sensors have been widely used in rehabilitation systems. However, a single depth sensor handles limb blocking, data loss and data errors poorly, making it less reliable. This paper focuses on using two Kinect sensors and a data fusion method to solve these problems. First, two Kinect sensors capture the motion data of the healthy arm of the hemiplegic patient; second, the data are merged using the Set-Membership Filter (SMF) method; the merged motion data are then mirrored about the body mid-plane; finally, the mirrored data control a wearable robotic arm that drives the patient's paralytic arm, so that the patient can interactively and actively complete a variety of recovery actions prompted by a computer with 3D animation games.
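
    The mirroring step described above, reflecting the healthy arm's joint positions about the body mid-plane so they can drive the robot on the affected side, is simple to write down; a minimal sketch follows. The Set-Membership Filter fusion itself is not shown, and the function name and plane parameterization are assumptions.

```python
import numpy as np

def mirror_about_midplane(points, plane_point, plane_normal):
    """Reflect N x 3 joint positions of the healthy arm about the body mid-plane,
    defined by a point on the plane and its (not necessarily unit) normal."""
    pts = np.asarray(points, float)
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    d = (pts - np.asarray(plane_point, float)) @ n   # signed distances to the plane
    return pts - 2.0 * d[:, None] * n                # reflected joint positions
```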

  3. FPGA-based fused smart sensor for dynamic and vibration parameter extraction in industrial robot links.

    PubMed

    Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the interest parameters, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA).
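
    As a software analogue of part of the pipeline described here (oversampling, averaging decimation and finite differences applied to the encoder channel), the sketch below estimates angular velocity from oversampled encoder counts. It only illustrates the signal-processing idea; the FPGA implementation, filter orders and parameter names in the paper differ, and every name used here is an assumption.

```python
import numpy as np

def encoder_velocity(counts, fs, decimate_by=8, counts_per_rev=4096):
    """Angular velocity (rad/s) from an oversampled encoder channel:
    average-and-decimate, convert counts to radians, then finite differences."""
    counts = np.asarray(counts, float)
    n = len(counts) // decimate_by * decimate_by
    avg = counts[:n].reshape(-1, decimate_by).mean(axis=1)   # averaging decimation
    theta = avg * 2.0 * np.pi / counts_per_rev               # counts -> radians
    return np.gradient(theta, decimate_by / fs)              # finite differences
```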

  4. FPGA-Based Fused Smart Sensor for Dynamic and Vibration Parameter Extraction in Industrial Robot Links

    PubMed Central

    Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the interest parameters, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA). PMID:22319345

  5. Using the Xbox Kinect sensor for positional data acquisition

    NASA Astrophysics Data System (ADS)

    Ballester, Jorge; Pheatt, Chuck

    2013-01-01

    The Kinect sensor was introduced in November 2010 by Microsoft for the Xbox 360 video game system. It is designed to be positioned above or below a video display to track player body and hand movements in three dimensions (3D). The sensor contains a red, green, and blue (RGB) camera, a depth sensor, an infrared (IR) light source, a three-axis accelerometer, and a multi-array microphone, as well as hardware required to transmit sensor information to an external receiver. In this article, we evaluate the capabilities of the Kinect sensor as a 3D data-acquisition platform for use in physics experiments. Data obtained for a simple pendulum, a spherical pendulum, projectile motion, and a bouncing basketball are presented. Overall, the Kinect sensor is found to be a useful data-acquisition tool for motion studies in the physics laboratory.

  6. Apparatus and Method for Assessing Vestibulo-Ocular Function

    NASA Technical Reports Server (NTRS)

    Shelhamer, Mark J. (Inventor)

    2015-01-01

    A system for assessing vestibulo-ocular function includes a motion sensor system adapted to be coupled to a user's head; a data processing system configured to communicate with the motion sensor system to receive the head-motion signals; a visual display system configured to communicate with the data processing system to receive image signals from the data processing system; and a gain control device arranged to be operated by the user and to communicate gain adjustment signals to the data processing system.

  7. Investigation of the rolling motion of a hollow cylinder using a smartphone’s digital compass

    NASA Astrophysics Data System (ADS)

    Wattanayotin, Phattara; Puttharugsa, Chokchai; Khemmani, Supitch

    2017-07-01

    This study used a smartphone’s digital compass to observe the rolling motion of a hollow cylinder on an inclined plane. The smartphone (an iPhone 4s) was attached to the end of one side of a hollow cylinder to record the experimental data using the SensorLog application. In the experiment, the change of angular position was measured by the smartphone’s digital compass. The obtained results were then analyzed and calculated to determine various parameters of the motion, such as the angular velocity, angular acceleration, critical angle, and coefficient of static friction. The experimental results obtained from using the digital compass were compared with those obtained from using a gyroscope sensor. Moreover, the results obtained from both sensors were consistent with the calculations for the rolling motion. We expect that this experiment will be valuable for use in physics laboratories.
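
    To make the data reduction concrete, the sketch below derives angular velocity and angular acceleration from sampled compass headings by unwrapping and numerically differentiating them. This is one plausible way to process the logged heading data, not necessarily the authors' exact procedure; the function name is an assumption.

```python
import numpy as np

def angular_kinematics(t, heading_deg):
    """Angular velocity and acceleration of the rolling cylinder from sampled
    compass headings. t: time stamps (s); heading_deg: compass readings (deg)."""
    theta = np.unwrap(np.radians(heading_deg))   # remove 360-degree wrap-arounds
    omega = np.gradient(theta, t)                # angular velocity (rad/s)
    alpha = np.gradient(omega, t)                # angular acceleration (rad/s^2)
    return omega, alpha
```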

  8. Model of human visual-motion sensing

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Ahumada, A. J., Jr.

    1985-01-01

    A model of how humans sense the velocity of moving images is proposed. The model exploits constraints provided by human psychophysics, notably that motion-sensing elements appear tuned for two-dimensional spatial frequency, and by the frequency spectrum of a moving image, namely, that its support lies in the plane in which the temporal frequency equals the dot product of the spatial frequency and the image velocity. The first stage of the model is a set of spatial-frequency-tuned, direction-selective linear sensors. The temporal frequency of the response of each sensor is shown to encode the component of the image velocity in the sensor direction. At the second stage, these components are resolved in order to measure the velocity of image motion at each of a number of spatial locations and spatial frequencies. The model has been applied to several illustrative examples, including apparent motion, coherent gratings, and natural image sequences. The model agrees qualitatively with human perception.
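
    The frequency-plane constraint invoked above can be stated compactly. The short derivation below uses one common Fourier sign convention, so the sign of the dot product may differ from the wording of the abstract; it is included only to make the constraint explicit.

```latex
% A rigidly translating image I(x, t) = I_0(x - v t) has the spatio-temporal spectrum
% (with the Fourier kernel e^{+i 2\pi(\mathbf{f}\cdot\mathbf{x} + \omega t)}):
\[
  \hat{I}(\mathbf{f}, \omega) \;=\; \hat{I}_0(\mathbf{f})\,
  \delta\!\bigl(\omega + \mathbf{f}\cdot\mathbf{v}\bigr),
\]
% i.e. all energy lies on the plane \omega = -\mathbf{f}\cdot\mathbf{v}.  With the opposite
% temporal sign convention this reads \omega = \mathbf{f}\cdot\mathbf{v}, the form quoted above.
```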

  9. A Self-Alignment Algorithm for SINS Based on Gravitational Apparent Motion and Sensor Data Denoising

    PubMed Central

    Liu, Yiting; Xu, Xiaosu; Liu, Xixiang; Yao, Yiqing; Wu, Liang; Sun, Jin

    2015-01-01

    Initial alignment is always a key topic and difficult to achieve in an inertial navigation system (INS). In this paper a novel self-alignment algorithm is proposed using gravitational apparent motion vectors at three different moments and vector operations. Simulation and analysis showed that this method easily suffers from the random noise contained in the accelerometer measurements, which are used to construct the apparent motion directly. Aiming to resolve this problem, an online sensor data denoising method based on a Kalman filter is proposed, and a novel reconstruction method for apparent motion is designed to avoid collinearity among the vectors participating in the alignment solution. Simulation, turntable tests and vehicle tests indicate that the proposed alignment algorithm can fulfill initial alignment of strapdown INS (SINS) under both static and swinging conditions. The accuracy can either reach or approach the theoretical values determined by sensor precision under static or swinging conditions. PMID:25923932
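
    The denoising step can be illustrated with the simplest possible case: a scalar random-walk Kalman filter smoothing one noisy accelerometer channel before the apparent-motion vectors are built. This is a hedged sketch, not the paper's filter design; the process and measurement noise values are placeholders to tune.

```python
import numpy as np

def kalman_denoise(z, q=1e-5, r=1e-2):
    """Scalar random-walk Kalman filter smoothing one noisy accelerometer channel.
    q: assumed process-noise variance, r: assumed measurement-noise variance."""
    z = np.asarray(z, float)
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                    # predict (random-walk state model)
        K = p / (p + r)              # Kalman gain
        x = x + K * (zk - x)         # measurement update
        p = (1.0 - K) * p
        out[k] = x
    return out
```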

  10. Physical activity classification using time-frequency signatures of motion artifacts in multi-channel electrical impedance plethysmographs.

    PubMed

    Khan, Hassan Aqeel; Gore, Amit; Ashe, Jeff; Chakrabartty, Shantanu

    2017-07-01

    Physical activities are known to introduce motion artifacts in electrical impedance plethysmographic (EIP) sensors. The existing literature considers motion artifacts a nuisance and generally discards the artifact-containing portion of the sensor output. This paper examines the notion of exploiting motion artifacts to detect the underlying physical activities that give rise to them. In particular, we investigate whether the artifact pattern associated with a physical activity is unique, and whether it varies from one human subject to another. Data were recorded from 19 adult human subjects while they performed 5 distinct, artifact-inducing activities. A set of novel features based on the time-frequency signatures of the sensor outputs is then constructed. Our analysis demonstrates that these features enable high-accuracy detection of the underlying physical activity. Using an SVM classifier we are able to differentiate between 5 distinct physical activities (coughing, reaching, walking, eating and rolling-on-bed) with an average accuracy of 85.46%. Classification is performed solely using features designed specifically to capture the time-frequency signatures of different physical activities. This enables us to measure both respiratory and motion information using only one type of sensor, in contrast to conventional approaches to physical activity monitoring, which rely on additional hardware such as accelerometers to capture activity information.
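
    A minimal sketch of the classification pipeline, crude time-frequency features (band-averaged log spectrogram power) fed to an SVM, is shown below. The paper's actual feature set is more elaborate; the feature function, band count and the training variables in the commented usage are all hypothetical.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC

def tf_features(x, fs=100.0, n_bands=8):
    """Crude time-frequency signature of one EIP segment: mean log power in a
    handful of frequency bands (illustrative only)."""
    f, t, Sxx = spectrogram(np.asarray(x, float), fs=fs, nperseg=128)
    bands = np.array_split(np.log(Sxx + 1e-12), n_bands, axis=0)
    return np.array([b.mean() for b in bands])

# Hypothetical usage: `segments` is a list of 1-D EIP recordings and `labels`
# their activity names (coughing, reaching, walking, eating, rolling-on-bed).
# X = np.vstack([tf_features(s) for s in segments])
# clf = SVC(kernel="rbf").fit(X, labels)
```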

  11. Measurement of three-dimensional posture and trajectory of lower body during standing long jumping utilizing body-mounted sensors.

    PubMed

    Ibata, Yuki; Kitamura, Seiji; Motoi, Kosuke; Sagawa, Koichi

    2013-01-01

    A method for measuring the three-dimensional posture and flying trajectory of the lower body during jumping motion using body-mounted wireless inertial measurement units (WIMUs) is introduced. Each WIMU is composed of two kinds of three-dimensional (3D) accelerometers and gyroscopes with different dynamic ranges, plus one 3D geomagnetic sensor, so that it can follow quick movements. Three WIMUs are mounted on the chest, right thigh and right shank. Thin-film pressure sensors are connected to the shank WIMU and installed under the right heel and toe to distinguish whether the body is grounded or airborne. The initial and final postures of the trunk, thigh and shank while standing still are obtained using gravitational acceleration and geomagnetism. The posture of the body is determined from the 3D direction of each segment, updated by numerical integration of the angular velocity. The flight phase is detected from the pressure sensors, and the 3D flying trajectory is derived by double integration of the trunk acceleration using the 3D trunk velocity at takeoff. Standing long jump experiments were performed, and the experimental results show that the joint angles and flying trajectory agree with the actual motion measured by an optical motion capture system.
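
    The trajectory step, double integration of gravity-compensated trunk acceleration over the airborne phase seeded with the takeoff velocity, can be sketched in a few lines, as below. It assumes the acceleration has already been rotated into the world frame and gravity removed; the function name and use of SciPy are assumptions.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def flight_trajectory(t, acc_world, v_takeoff, p_takeoff):
    """Double-integrate world-frame trunk acceleration (N x 3, gravity removed)
    over the airborne phase, seeded with the takeoff velocity and position."""
    acc_world = np.asarray(acc_world, float)
    v = v_takeoff + cumulative_trapezoid(acc_world, t, axis=0, initial=0.0)
    p = p_takeoff + cumulative_trapezoid(v, t, axis=0, initial=0.0)
    return p   # N x 3 flying trajectory
```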

  12. FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision

    PubMed Central

    Botella, Guillermo; Martín H., José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features found in mammalian visual systems, which would demand huge computational resources and are therefore not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms. PMID:22164069

  13. Fiber-based generator for wearable electronics and mobile medication.

    PubMed

    Zhong, Junwen; Zhang, Yan; Zhong, Qize; Hu, Qiyi; Hu, Bin; Wang, Zhong Lin; Zhou, Jun

    2014-06-24

    Smart garments for monitoring physiological and biomechanical signals of the human body are key sensors for personalized healthcare. However, they typically require bulky battery packs or have to be plugged into an electrical outlet in order to operate. Thus, a smart shirt that can extract energy from human body motion to run body-worn healthcare sensors is particularly desirable. Here, we demonstrated a metal-free fiber-based generator (FBG) via a simple, cost-effective method using commodity cotton threads, a polytetrafluoroethylene aqueous suspension, and carbon nanotubes as source materials. The FBGs can convert biomechanical motion/vibration energy into electricity via the electrostatic effect, with an average output power density of ∼0.1 μW/cm², and have been identified as an effective building element for a power shirt that triggers a wireless body temperature sensor system. Furthermore, the FBG was demonstrated as a self-powered active sensor to quantitatively detect human motion.

  14. FPGA-based multimodal embedded sensor system integrating low- and mid-level vision.

    PubMed

    Botella, Guillermo; Martín H, José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features found in mammalian visual systems, which would demand huge computational resources and are therefore not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms.

  15. Application of a Leap Motion Sensor for Improved Drone Control

    DTIC Science & Technology

    2017-12-01

    This thesis studies the application of a Leap Motion sensor for improved drone control (Alfredo Belaunde Sara-Lafosse, December 2017; thesis advisor: Xiaoping Yun; second reader: James Calusdian). The command u(t) needed to control the distance error e(t) was obtained using the PID law u(t) = K_p [ e(t) + (1/T_i) ∫_0^t e(τ) dτ + T_d de(t)/dt ].
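
    For reference, the PID law reconstructed above maps directly to a short discrete-time controller, sketched below. The gains and sample time are placeholders; this is not code from the thesis.

```python
class PID:
    """Discrete PID controller in the ideal form
    u = Kp * (e + (1/Ti) * integral(e) + Td * de/dt); gains are placeholders."""

    def __init__(self, Kp, Ti, Td, dt):
        self.Kp, self.Ti, self.Td, self.dt = Kp, Ti, Td, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt                      # accumulate integral term
        derivative = (error - self.prev_error) / self.dt      # backward-difference derivative
        self.prev_error = error
        return self.Kp * (error + self.integral / self.Ti + self.Td * derivative)
```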

  16. Structural action recognition in body sensor networks: distributed classification based on string matching.

    PubMed

    Ghasemzadeh, Hassan; Loseu, Vitali; Jafari, Roozbeh

    2010-03-01

    Mobile sensor-based systems are emerging as promising platforms for healthcare monitoring. An important goal of these systems is to extract physiological information about the subject wearing the network. Such information can be used for life logging, quality-of-life measures, fall detection, extraction of contextual information, and many other applications. The volume of data collected by these sensor nodes is overwhelming, and hence an efficient data processing technique is essential. In this paper, we present a system using inexpensive, off-the-shelf inertial sensor nodes that constructs motion transcripts from biomedical signals and identifies movements by taking collaboration between the nodes into consideration. Transcripts are built from motion primitives and aim to reduce the complexity of the original data. We then label each primitive with a unique symbol and generate a sequence of symbols, known as a motion template, representing a particular action. This model leads to a distributed algorithm for action recognition using edit distance with respect to motion templates. The algorithm reduces the number of active nodes during every classification decision. We present our results using data collected from five normal subjects performing transitional movements. The results clearly illustrate the effectiveness of our framework. In particular, we obtain a classification accuracy of 84.13% with only one sensor node involved in the classification process.
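
    The core matching step, comparing an observed symbol sequence against stored motion templates by edit distance and picking the closest action, can be sketched in plain Python as below. The dynamic-programming distance is the standard Levenshtein distance; the classifier wrapper and variable names are assumptions.

```python
def edit_distance(a, b):
    """Levenshtein distance between two motion transcripts (symbol sequences)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def classify(transcript, templates):
    """Return the action whose stored template is closest to the observed transcript."""
    return min(templates, key=lambda action: edit_distance(transcript, templates[action]))
```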

  17. In vivo real-time recording of UV-induced changes in the autofluorescence of a melanin-containing fungus using a micro-spectrofluorimeter and a low-cost webcam.

    PubMed

    Raimondi, V; Agati, G; Cecchi, G; Gomoiu, I; Lognoli, D; Palombi, L

    2009-12-07

    An optical epifluorescence microscope, coupled to a CCD camera, a standard webcam and a microspectrofluorimeter, is used to record in vivo real-time changes in the autofluorescence of spores and hyphae of Aspergillus niger, a melanin-containing fungus, during exposure to UV irradiation. The results point out major changes in both the intensity and the spectral shape of the autofluorescence signal after only a few minutes of exposure, and can contribute to the interpretation of data obtained with other fluorescence techniques, including those, such as GFP labeling, in which endogenous fluorophores constitute a major disturbance.

  18. Simple and inexpensive hardware and software method to measure volume changes in Xenopus oocytes expressing aquaporins.

    PubMed

    Dorr, Ricardo; Ozu, Marcelo; Parisi, Mario

    2007-04-15

    Members of the water channel (aquaporin) family have been identified in central nervous system cells. A classic method to measure membrane water permeability and its regulation is to capture and analyse images of Xenopus laevis oocytes expressing them. Laboratories dedicated to the analysis of motion images usually have powerful equipment valued at thousands of dollars. However, some scientists consider that new approaches are needed to reduce costs in scientific labs, especially in developing countries. The objective of this work is to share a very low-cost hardware and software setup based on a well-selected webcam, a hand-made adapter to a microscope and the use of free software to measure membrane water permeability in Xenopus oocytes. One of the main purposes of this setup is to maintain a high level of image quality at brief acquisition intervals (shorter than 70 ms). The presented setup helps to economize without sacrificing image analysis requirements.

  19. Resolving the Role of the Dynamic Pressure in the Burial, Exposure, Scour, and Mobility of Underwater Munitions

    NASA Astrophysics Data System (ADS)

    Gilooly, S.; Foster, D. L.

    2017-12-01

    In nearshore environments, the motion of munitions results from a mixture of sediment transport conditions including sheet flow, scour, bedform migration, and momentary liquefaction. Incipient motion can be caused by disruptive shear stresses and pressure gradients. Foster et al. (2006) incorporated both processes into a single parameter, indicating incipient motion as a function of the bed state. This research seeks to evaluate the role of the pressure gradient in positional state changes such as burial, exposure, and mobility. In the case of munitions, this may include pressure gradients induced by vortex shedding or the passing wave. Pressure-mapped model munitions are being developed to measure the orientation, rotation, and surface pressure of the munitions during threshold events leading to a new positional state. These munitions will be deployed in inner surf zone and estuary environments along with acoustic Doppler velocimeters (ADVs), pore water pressure sensors, a laser grid, and a pencil beam sonar with an azimuth drive. The additional instruments allow for near-bed and far-field water column and sediment bed sampling. Currently, preliminary assessments of various pressure sensors and munition designs are underway. Two pressure sensors were selected: the thin FlexiForce A201 sensors will be used to indicate munition rolling during threshold events, and diaphragm sensors will be used to understand changes in the surrounding pore water pressure as the munition begins to bury or unbury. Both sensors are expected to give quantitative measurements of dynamic pressure gradients in the flow field surrounding the munition. Resolving the role of this process will give insight into an improved incipient motion parameter and allow for better predictions of munition motion.

  20. Continuous monitoring of large civil structures using a digital fiber optic motion sensor system

    NASA Astrophysics Data System (ADS)

    Hodge, Malcolm H.; Kausel, Theodore C., Jr.

    1998-03-01

    There is no single attribute which can always predict structural deterioration. Accordingly, we have developed a scheme for monitoring a wide range of incipient deterioration parameters, all based on a single motion sensor concept. In this presentation, we describe how an intrinsically low power-consumption fiber optic harness can be permanently deployed to poll an array of optical sensors. The function and design of these simple, durable, and naturally digital sensors are described, along with the manner in which they have been configured to collect information on changes in the most important structural aspects. The SIMS system, directed primarily towards bridges, is designed to interrogate each sensor up to five thousand times per second for the life of the structure and to report sensor data back to a remote computer base for current and long-term analysis. By suitably modifying the actuation of this very precise motion sensor, SIMS is able to track bridge deck deflection and vibration, expansion joint travel, concrete and rebar corrosion, pothole development, pier scour and tilt. Other sensors will track bolt clamp load, cable tension, and metal fatigue. All of these data are received within microseconds, which means that appropriate computer algorithms can be applied to correlate one sensor with other sensors in real time. This internal verification feature automatically enhances confidence in the system's predictive ability and alerts the user to any anomalous behavior.

  1. Verification of real sensor motion for a high-dynamic 3D measurement inspection system

    NASA Astrophysics Data System (ADS)

    Breitbarth, Andreas; Correns, Martin; Zimmermann, Manuel; Zhang, Chen; Rosenberger, Maik; Schambach, Jörg; Notni, Gunther

    2017-06-01

    Inline three-dimensional measurements are a growing part of optical inspection. Considering increasing production capacities and economic aspects, dynamic measurements under motion are inescapable. When a sequence of different patterns is used, as is generally done in fringe projection systems, relative movements of the measurement object with respect to the 3D sensor between the images of one pattern sequence have to be compensated. Motivated by the application of fully automated optical inspection of circuit boards at an assembly line, the known relative speed of movement between the measurement object and the 3D sensor system should be used inside the motion-compensation algorithms. Optimally, this relative speed is constant over the whole measurement process and consists of only one motion direction, to avoid sensor vibrations. The quantified evaluation of these two assumptions and the impact of the resulting errors on the 3D accuracy are the content of the research project described in this paper. For our experiments we use a glass etalon with non-transparent circles and transmitted light. Detecting the circle borders with a set of search rays is one of the most reliable methods to determine subpixel positions; the intersection point of all rays characterizes the center of each circle. Based on these circle centers, determined with a precision of approximately 1/50 pixel, the motion vector between two images can be calculated and compared with the input motion vector. Overall, the results are used to optimize the weight distribution of the 3D sensor head and reduce non-uniform vibrations. The resulting dynamic 3D measurement system achieves a motion-vector error of about 4 micrometers. Based on this outcome, simulations give a 3D standard deviation at planar object regions of 6 micrometers; the same system yields a 3D standard deviation of 9 µm without the optimization of the weight distribution.

  2. Method for measuring tri-axial lumbar motion angles using wearable sheet stretch sensors

    PubMed Central

    Nakamoto, Hiroyuki; Yamaji, Tokiya; Ootaka, Hideo; Bessho, Yusuke; Nakamura, Ryo; Ono, Rei

    2017-01-01

    Background Body movements, such as trunk flexion and rotation, are risk factors for low back pain in occupational settings, especially for healthcare workers. Wearable motion capture systems are potentially useful to monitor lower back movement in healthcare workers and help them avoid these risk factors. In this study, we propose a novel system using sheet stretch sensors and investigate its validity for estimating lower back movement. Methods Six volunteers (female:male = 1:1, mean age: 24.8 ± 4.0 years, height 166.7 ± 5.6 cm, weight 56.3 ± 7.6 kg) participated in test protocols that involved executing seven types of movements. The movements were three uniaxial trunk movements (i.e., trunk flexion-extension, trunk side-bending, and trunk rotation) and four multiaxial trunk movements (i.e., flexion + rotation, flexion + side-bending, side-bending + rotation, and moving around the cranial–caudal axis). Each trial lasted approximately 30 s. Four stretch sensors were attached to each participant's lower back. The lumbar motion angles were estimated using simple linear regression analysis based on the stretch sensor outputs and compared with those obtained by an optical motion capture system. Results The estimated lumbar motion angles showed a good correlation with the actual angles, with correlation values of r = 0.68 (SD = 0.35), r = 0.60 (SD = 0.19), and r = 0.72 (SD = 0.18) for the flexion-extension, side-bending, and rotation movements, respectively (all P < 0.05). The estimation errors in all three directions were less than 3°. Conclusion The stretch sensors mounted on the back provided reasonable estimates of the lumbar motion angles. The novel motion capture system provided three directional angles without the space limits of optical capture. The wearable system has great potential to monitor lower back movement in healthcare workers and to help prevent low back pain. PMID:29020053
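
    The calibration described above, a linear regression from the four stretch-sensor outputs to each lumbar angle, amounts to an ordinary least-squares fit, sketched below with NumPy. The function names, the intercept handling and the assumption that the mapping is purely linear are illustrative, not taken from the paper.

```python
import numpy as np

def fit_angle_model(S, theta):
    """Least-squares calibration mapping stretch-sensor outputs S (N x 4) to one
    lumbar angle theta (N,), i.e. a simple linear regression with intercept."""
    S = np.asarray(S, float)
    A = np.hstack([S, np.ones((S.shape[0], 1))])     # add intercept column
    coef, *_ = np.linalg.lstsq(A, np.asarray(theta, float), rcond=None)
    return coef                                      # 4 weights followed by the intercept

def predict_angle(S, coef):
    S = np.asarray(S, float)
    return np.hstack([S, np.ones((S.shape[0], 1))]) @ coef
```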

  3. United States Naval Academy Polar Science Program's Visual Arctic Observing Buoys; The IceGoat

    NASA Astrophysics Data System (ADS)

    Woods, J. E.; Clemente-Colon, P.; Nghiem, S. V.; Rigor, I.; Valentic, T. A.

    2012-12-01

    The U.S. Naval Academy Oceanography Department currently has a curriculum-based Polar Science Program (USNA PSP). Within the PSP there is an Arctic Buoy Program (ABP) student research component that includes the design, build, testing and deployment of Arctic buoys. Establishing an active, field-research program in Polar Science will greatly enhance Midshipman education and research, as well as introduce future Naval Officers to the Arctic environment. The Oceanography Department has engaged the USNA Ocean Engineering, Systems Engineering, Aerospace Engineering, and Computer Science Departments and developed a USNA Visual Arctic Observing Buoy, IceGoat1, which was designed, built, and deployed by midshipmen. The experience gained through Polar field studies and the data derived from these buoys will be used to enhance course materials and laboratories and will also be used directly in Midshipman independent research projects. The USNA PSP successfully deployed IceGoat1 during the BROMEX 2012 field campaign out of Barrow, AK in March 2012. This buoy reports near real-time observations of air temperature, sea temperature, atmospheric pressure, position and images from 2 mounted webcams. The importance of this unique type of buoy being inserted into the U.S. Interagency Arctic Buoy Program and the International Arctic Buoy Programme (USIABP/IABP) array lies in cross-validating satellite observations of sea ice cover in the Arctic with the buoys' webcams. We also propose to develop multiple sensor packages for the IceGoat, to include a more robust weather suite and a passive acoustic hydrophone. Remote cameras on buoys have provided crucial qualitative information that complements the quantitative measurements of geophysical parameters. For example, the mechanical anemometers on the IABP Polar Arctic Weather Station at the North Pole Environmental Observatory (NPEO) have at times reported zero wind speeds, and inspection of the images from the NPEO cameras during these same periods showed frosting on the camera, indicating that the anemometer had temporarily frozen up. Later, when the camera lens cleared, the anemometers resumed providing reasonable wind speeds. The cameras have also provided confirmation of the onset of melt and freeze, and indications of cloudy and clear skies. USNA PSP will monitor meteorological and oceanographic parameters of the Arctic environment remotely via its own buoys. Web cameras will provide near real-time visual observations at the buoys' current positions, allowing for instant validation of other remote sensors and modeled data. Each buoy will be developed with, at a minimum, a meteorological sensor package in accordance with IABP protocol (2 m air temperature, SLP). Platforms will also be developed with new sensor packages, possibly including wind speed, ice temperature, sea ice thickness, underwater acoustics, and new communications suites (Iridium, radio). The uniqueness of the IceGoat is that it is based on the new AXIB buoy designed by LBI, Inc., which has a proven record of surviving the harsh marginal ice zone environment. IceGoat1 will be deployed in the High Arctic during the USCGC HEALY cruise in late August 2012.

  4. Gait Recognition Using Wearable Motion Recording Sensors

    NASA Astrophysics Data System (ADS)

    Gafurov, Davrondzhon; Snekkenes, Einar

    2009-12-01

    This paper presents an alternative approach, in which gait data are collected by sensors attached to the person's body. Such wearable sensors record the motion (e.g. acceleration) of body parts during walking. The recorded motion signals are then investigated for person recognition purposes. We analyzed acceleration signals from the foot, hip, pocket and arm. Applying various methods, the best equal error rates (EER) obtained for foot-, pocket-, arm- and hip-based user authentication were 5%, 7%, 10% and 13%, respectively. Furthermore, we present the results of our analysis on the security assessment of gait. Studying gait-based user authentication (in the case of hip motion) under three attack scenarios, we revealed that minimal-effort mimicking does not help to improve the acceptance chances of impostors. However, impostors who know their closest person in the database or the genders of the users can be a threat to gait-based authentication. We also provide some new insights into the uniqueness of gait in the case of foot motion. In particular, we revealed the following: sideways motion of the foot provides the most discrimination, compared with the up-down or forward-backward directions; and different segments of the gait cycle provide different levels of discrimination.

  5. Clinically acceptable agreement between the ViMove wireless motion sensor system and the Vicon motion capture system when measuring lumbar region inclination motion in the sagittal and coronal planes.

    PubMed

    Mjøsund, Hanne Leirbekk; Boyle, Eleanor; Kjaer, Per; Mieritz, Rune Mygind; Skallgård, Tue; Kent, Peter

    2017-03-21

    Wireless, wearable, inertial motion sensor technology introduces new possibilities for monitoring spinal motion and pain in people during their daily activities of work, rest and play. There are many types of these wireless devices currently available but the precision in measurement and the magnitude of measurement error from such devices is often unknown. This study investigated the concurrent validity of one inertial motion sensor system (ViMove) for its ability to measure lumbar inclination motion, compared with the Vicon motion capture system. To mimic the variability of movement patterns in a clinical population, a sample of 34 people were included - 18 with low back pain and 16 without low back pain. ViMove sensors were attached to each participant's skin at spinal levels T12 and S2, and Vicon surface markers were attached to the ViMove sensors. Three repetitions of end-range flexion inclination, extension inclination and lateral flexion inclination to both sides while standing were measured by both systems concurrently with short rest periods in between. Measurement agreement through the whole movement range was analysed using a multilevel mixed-effects regression model to calculate the root mean squared errors and the limits of agreement were calculated using the Bland Altman method. We calculated root mean squared errors (standard deviation) of 1.82° (±1.00°) in flexion inclination, 0.71° (±0.34°) in extension inclination, 0.77° (±0.24°) in right lateral flexion inclination and 0.98° (±0.69°) in left lateral flexion inclination. 95% limits of agreement ranged between -3.86° and 4.69° in flexion inclination, -2.15° and 1.91° in extension inclination, -2.37° and 2.05° in right lateral flexion inclination and -3.11° and 2.96° in left lateral flexion inclination. We found a clinically acceptable level of agreement between these two methods for measuring standing lumbar inclination motion in these two cardinal movement planes. Further research should investigate the ViMove system's ability to measure lumbar motion in more complex 3D functional movements and to measure changes of movement patterns related to treatment effects.
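
    For readers who want to reproduce this style of agreement analysis on their own paired recordings, the sketch below computes a per-trial root-mean-square error and 95% Bland-Altman limits of agreement. It is a simplified, single-level version; the paper itself used a multilevel mixed-effects model, which is not shown, and the function name is an assumption.

```python
import numpy as np

def agreement_stats(vimove_deg, vicon_deg):
    """RMSE, bias and 95% Bland-Altman limits of agreement between two
    concurrently recorded inclination signals (degrees)."""
    diff = np.asarray(vimove_deg, float) - np.asarray(vicon_deg, float)
    rmse = np.sqrt(np.mean(diff ** 2))
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)      # 95% limits assume roughly normal differences
    return rmse, bias, (bias - half_width, bias + half_width)
```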

  6. Studies of human dynamic space orientation using techniques of control theory

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1974-01-01

    Studies of human orientation and manual control in high order systems are summarized. Data cover techniques for measuring and altering orientation perception, role of non-visual motion sensors, particularly the vestibular and tactile sensors, use of motion cues in closed loop control of simple stable and unstable systems, and advanced computer controlled display systems.

  7. A Review of Accelerometry-Based Wearable Motion Detectors for Physical Activity Monitoring

    PubMed Central

    Yang, Che-Chang; Hsu, Yeh-Liang

    2010-01-01

    Characteristics of physical activity are indicative of one’s mobility level, latent chronic diseases and aging process. Accelerometers have been widely accepted as useful and practical sensors for wearable devices to measure and assess physical activity. This paper reviews the development of wearable accelerometry-based motion detectors. The principle of accelerometry measurement, sensor properties and sensor placements are first introduced. Various research using accelerometry-based wearable motion detectors for physical activity monitoring and assessment, including posture and movement classification, estimation of energy expenditure, fall detection and balance control evaluation, are also reviewed. Finally this paper reviews and compares existing commercial products to provide a comprehensive outlook of current development status and possible emerging technologies. PMID:22163626

  8. Smart Sensor-Based Motion Detection System for Hand Movement Training in Open Surgery.

    PubMed

    Sun, Xinyao; Byrns, Simon; Cheng, Irene; Zheng, Bin; Basu, Anup

    2017-02-01

    We introduce a smart sensor-based motion detection technique for the objective measurement and assessment of surgical dexterity among users at different experience levels. The goal is to allow trainees to evaluate their performance against a reference model shared through communication technology, e.g., the Internet, without the physical presence of an evaluating surgeon. While in the current implementation we used a Leap Motion Controller to obtain motion data for analysis, our technique can be applied to motion data captured by other smart sensors, e.g., OptiTrack. To differentiate motions captured from different participants, measurement and assessment in our approach are achieved using two strategies: (1) low-level descriptive statistical analysis, and (2) Hidden Markov Model (HMM) classification. Based on our surgical knot-tying task experiment, we can conclude that finger motions generated by users with different surgical dexterity, e.g., expert and novice performers, display differences in path length, number of movements and task completion time. In order to validate the discriminatory ability of the HMM for classifying different movement patterns, a non-surgical task was included in our analysis. Experimental results demonstrate that our approach had 100% accuracy in discriminating between expert and novice performances. Our proposed motion analysis technique applied to open surgical procedures is a promising step towards the development of objective computer-assisted assessment and training systems.
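
    The low-level descriptive statistics mentioned in strategy (1), path length, number of movements and completion time from a fingertip trajectory, can be computed as sketched below. The speed threshold used to segment movements is an assumed placeholder, and the function is illustrative rather than the paper's implementation.

```python
import numpy as np

def dexterity_features(t, xyz, pause_speed=5.0):
    """Path length (mm), number of movement segments and completion time (s)
    from a fingertip trajectory xyz (N x 3, mm) with time stamps t (s).
    pause_speed (mm/s) is an assumed threshold separating movements from pauses."""
    t = np.asarray(t, float)
    xyz = np.asarray(xyz, float)
    step = np.linalg.norm(np.diff(xyz, axis=0), axis=1)   # per-sample displacement
    speed = step / np.diff(t)
    path_length = step.sum()
    moving = speed > pause_speed
    n_movements = int(moving[0]) + int(np.count_nonzero(np.diff(moving.astype(int)) == 1))
    return path_length, n_movements, t[-1] - t[0]
```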

  9. Wearable Inertial Sensors Allow for Quantitative Assessment of Shoulder and Elbow Kinematics in a Cadaveric Knee Arthroscopy Model.

    PubMed

    Rose, Michael; Curtze, Carolin; O'Sullivan, Joseph; El-Gohary, Mahmoud; Crawford, Dennis; Friess, Darin; Brady, Jacqueline M

    2017-12-01

    To develop a model using wearable inertial sensors to assess the performance of orthopaedic residents while performing a diagnostic knee arthroscopy. Fourteen subjects performed a diagnostic arthroscopy on a cadaveric right knee. Participants were divided into novices (5 postgraduate year 3 residents), intermediates (5 postgraduate year 4 residents), and experts (4 faculty) based on experience. Arm movement data were collected by inertial measurement units (Opal sensors) by securing 2 sensors to each upper extremity (dorsal forearm and lateral arm) and 2 sensors to the trunk (sternum and lumbar spine). Kinematics of the elbow and shoulder joints were calculated from the inertial data by biomechanical modeling based on a sequence of links connected by joints. Range of motion required to complete the procedure was calculated for each group. Histograms were used to compare the distribution of joint positions for an expert, intermediate, and novice. For both the right and left upper extremities, skill level corresponded well with shoulder abduction-adduction and elbow prono-supination. Novices required on average 17.2° more motion in the right shoulder abduction-adduction plane than experts to complete the diagnostic arthroscopy (P = .03). For right elbow prono-supination (probe hand), novices required on average 23.7° more motion than experts to complete the procedure (P = .03). Histogram data showed novices had markedly more variability in shoulder abduction-adduction and elbow prono-supination compared with the other groups. Our data show wearable inertial sensors can measure joint kinematics during diagnostic knee arthroscopy. Range-of-motion data in the shoulder and elbow correlated inversely with arthroscopic experience. Motion pattern-based analysis shows promise as a metric of resident skill acquisition and development in arthroscopy. Wearable inertial sensors show promise as metrics of arthroscopic skill acquisition among residents. Copyright © 2017 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  10. BlueSeis3A - performance, laboratory tests and applications

    NASA Astrophysics Data System (ADS)

    Bernauer, F.; Wassermann, J. M.; de Toldi, E.; Guattari, F.; Ponceau, D.; Ripepe, M.; Igel, H.

    2017-12-01

    One of the most exciting emerging developments in seismic instrumentation is the application of fiber optic gyroscopes as portable rotational ground motion sensors. In the framework of the European Research Council project ROMY (ROtational Motions in seismologY), BlueSeis3A was developed in a collaboration between researchers from Ludwig-Maximilians University of Munich, Germany, and the fiber optic sensor manufacturer iXblue, France. With its high sensitivity (20 nrad s⁻¹ Hz⁻¹/²) in a broad frequency range (0.001 Hz to 50 Hz), BlueSeis3A opens up a variety of applications which were until now hampered by the lack of such an instrument. In this contribution, we first present the performance characteristics of BlueSeis3A, with a focus on timing stability and scale factor linearity. In a second part we demonstrate the benefit of directly measured rotational motion for the dynamic tilt correction of measurements made with a classical seismometer. A well-known tilt signal was produced with a shake table and recorded simultaneously with a classical seismometer and BlueSeis3A. The seismometer measurement could be improved significantly by subtracting the coherent tilt signal measured directly with the rotational motion sensor. Finally, we show the advantage of directly measured rotational motion for applications in civil engineering. Results from a measurement campaign in the Giotto bell tower in the city of Florence, Italy, show the possibility of directly observing torsional modes by deploying a rotational motion sensor inside the structure.
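
    A first-order version of the tilt correction described above, integrating the co-located rotation rate to a tilt angle and removing the gravity component it couples into a horizontal acceleration channel, is sketched below. The sign of the correction depends on the axis convention, and the function is an illustrative assumption, not the processing used in the study.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

G = 9.81  # gravitational acceleration, m/s^2

def remove_tilt(t, acc_horizontal, rot_rate):
    """First-order dynamic tilt correction: integrate the co-located rotation
    rate (rad/s) to a tilt angle and subtract the gravity component it injects
    into the horizontal acceleration channel (sign depends on axis conventions)."""
    tilt = cumulative_trapezoid(np.asarray(rot_rate, float), t, initial=0.0)   # rad
    return np.asarray(acc_horizontal, float) - G * tilt
```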

  11. A new calibration methodology for thorax and upper limbs motion capture in children using magneto and inertial sensors.

    PubMed

    Ricci, Luca; Formica, Domenico; Sparaci, Laura; Lasorsa, Francesca Romana; Taffoni, Fabrizio; Tamilia, Eleonora; Guglielmelli, Eugenio

    2014-01-09

    Recent advances in wearable sensor technologies for motion capture have produced devices, mainly based on magneto and inertial measurement units (M-IMU), that are now suitable for out-of-the-lab use with children. In fact, the reduced size, weight and the wireless connectivity meet the requirement of minimum obtrusivity and give scientists the possibility to analyze children's motion in daily life contexts. Typical use of magneto and inertial measurement units (M-IMU) motion capture systems is based on attaching a sensing unit to each body segment of interest. The correct use of this setup requires a specific calibration methodology that allows mapping measurements from the sensors' frames of reference into useful kinematic information in the human limbs' frames of reference. The present work addresses this specific issue, presenting a calibration protocol to capture the kinematics of the upper limbs and thorax in typically developing (TD) children. The proposed method allows the construction, on each body segment, of a meaningful system of coordinates that are representative of real physiological motions and that are referred to as functional frames (FFs). We will also present a novel cost function for the Levenberg-Marquardt algorithm, to retrieve the rotation matrices between each sensor frame (SF) and the corresponding FF. Reported results on a group of 40 children suggest that the method is repeatable and reliable, opening the way to the extensive use of this technology for out-of-the-lab motion capture in children.

  12. Internet chameleons: an experimental study on imitating smoking peers through digital interaction.

    PubMed

    Harakeh, Zeena; Vollebergh, Wilma A M

    2012-03-01

    Existing experimental studies indicate that young adults are more likely to smoke in the company of real-life smoking peers. However, it is still unclear whether imitation can explain these findings or whether the mere smell, rather than the smoking behavior itself, may have been the trigger to smoke. One way to study this issue is to analyze exposure to real-life smoking peers without the possibility of smelling the smoker's cigarette, for example during digital interaction on the Internet. Although many youngsters meet and interact with each other online, exposure to smoking peers through the Internet has not yet been investigated. This experiment was conducted among 36 daily-smoking young people aged 16-24 years. Smoking behavior was observed during a 30-min joint music assignment. During this assignment, the confederate and participant sat in 2 separate rooms and interacted with each other online and via webcam. The findings show that young adults interacting with heavy-smoking peers on the Internet and via webcam smoked significantly more cigarettes than those who interacted with non-smoking peers. Young adult smokers strongly imitate smoking in interaction with peers in online communication via webcam, without smelling the smoker's cigarette. Antismoking policies and smoking cessation programs should focus on (raising awareness of) avoiding smoking peers, even during digital interaction.

  13. A system for beach video-monitoring: Beachkeeper plus

    NASA Astrophysics Data System (ADS)

    Brignone, Massimo; Schiaffino, Chiara F.; Isla, Federico I.; Ferrari, Marco

    2012-12-01

    A suitable knowledge of coastal systems, of their morphodynamic characteristics and of their response to storm events and man-made structures is essential for littoral conservation and management. Nowadays webcams are a useful means of obtaining information from beaches. Video-monitoring techniques are generally site-specific, and software that works with any image acquisition system is rare. Therefore, this work presents the theory and applications of an experimental video-monitoring software package: Beachkeeper plus, a freeware non-profit program, can be employed and redistributed without modification. A license file is provided inside the software package and in the user guide. Beachkeeper plus is based on Matlab® and can be used for the analysis of images and photos coming from any kind of acquisition system (webcams, digital cameras or images downloaded from the internet), without any a priori information or laboratory study of the acquisition system itself. Therefore, it could become a useful tool for beach planning. Through a simple guided interface, images can be analyzed by performing georeferencing, rectification, averaging and variance computation. This software was first applied in Pietra Ligure (Italy), using images from a tourist webcam, and in Mar del Plata (Argentina), using images from a digital camera. In both cases its reliability under different geomorphologic and morphodynamic conditions was confirmed by the good quality of the images obtained after georeferencing, rectification and averaging.

  14. Ultrathin flexible piezoelectric sensors for monitoring eye fatigue

    NASA Astrophysics Data System (ADS)

    Lü, Chaofeng; Wu, Shuang; Lu, Bingwei; Zhang, Yangyang; Du, Yangkun; Feng, Xue

    2018-02-01

    Eye fatigue is a symptom induced by long-term work of both the eyes and the brain. Without proper treatment, eye fatigue may lead to serious problems. Current studies on detecting eye fatigue mainly focus on computer-vision detection technology, which can be very unreliable under occasionally poor visual conditions. As a solution, we proposed a wearable, conformal, in vivo eye fatigue monitoring sensor that contains an array of piezoelectric nanoribbons integrated on an ultrathin flexible substrate. By detecting strains on the skin of the eyelid, the sensors collect information about eye blinking and can therefore reveal a person's fatigue state. We first report the design and fabrication of the piezoelectric sensor and the experimental characterization of the voltage responses of the piezoelectric sensors. Under bending stress, the output voltage curves yield key information about the motion of the human eyelid. We also develop a theoretical model to reveal the underlying mechanism of detecting eyelid motion. Both mechanical load tests and in vivo tests are conducted to confirm the working performance of the sensors. With satisfactory durability and high sensitivity, this sensor can efficiently detect abnormal eyelid motions, such as overlong closure, high blinking frequency, low closing speed and weak gazing strength, and may provide timely feedback for assessing eye fatigue so that unexpected situations can be prevented.

  15. RGO-coated elastic fibres as wearable strain sensors for full-scale detection of human motions

    NASA Astrophysics Data System (ADS)

    Mi, Qing; Wang, Qi; Zang, Siyao; Mao, Guoming; Zhang, Jinnan; Ren, Xiaomin

    2018-01-01

    In this study, we chose highly-elastic fabric fibres as the functional carrier and then simply coated the fibres with reduced graphene oxide (rGO) using plasma treatment, dip coating and hydrothermal reduction steps, finally making a wearable strain sensor. As a result, the full-scale detection of human motions, ranging from bending joints to the pulse beat, has been achieved by these sensors. Moreover, high sensitivity, good stability and excellent repeatability were realized. The good sensing performances and economical fabrication process of this wearable strain sensor have strengthened our confidence in practical applications in smart clothing, smart fabrics, healthcare, and entertainment fields.

  16. Flexible and Compressible PEDOT:PSS@Melamine Conductive Sponge Prepared via One-Step Dip Coating as Piezoresistive Pressure Sensor for Human Motion Detection.

    PubMed

    Ding, Yichun; Yang, Jack; Tolle, Charles R; Zhu, Zhengtao

    2018-05-09

    Flexible and wearable pressure sensors may offer convenient, timely, and portable solutions to human motion detection, yet it remains a challenge to develop cost-effective materials for pressure sensors with high compressibility and sensitivity. Herein, a cost-efficient and scalable approach is reported to prepare a highly flexible and compressible conductive sponge for a piezoresistive pressure sensor. The conductive sponge, poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS)@melamine sponge (MS), is prepared by one-step dip coating of a commercial melamine sponge in an aqueous PEDOT:PSS dispersion. Owing to the interconnected porous structure of the MS, the conductive PEDOT:PSS@MS has high compressibility and a stable piezoresistive response at compressive strains up to 80%, as well as good reproducibility over 1000 cycles. Versatile pressure sensors fabricated from the conductive PEDOT:PSS@MS sponges are then attached to different parts of the human body, and their capability to detect a variety of human motions, including speaking, finger bending, elbow bending, and walking, is evaluated. Furthermore, a prototype tactile sensor array based on these pressure sensors is demonstrated.

  17. Flexible wire-shaped strain sensor from cotton thread for human health and motion detection.

    PubMed

    Li, Yuan-Qing; Huang, Pei; Zhu, Wei-Bin; Fu, Shao-Yun; Hu, Ning; Liao, Kin

    2017-03-21

    In this work, a wire-shaped flexible strain sensor was fabricated by encapsulating conductive carbon thread (CT) with polydimethylsiloxane (PDMS) elastomer. The key strain-sensitive material, CT, was prepared by pyrolysing cotton thread in an N2 atmosphere. The CT/PDMS composite wire shows typical piezo-resistive behavior with high strain sensitivity. The gauge factors (GF) calculated at low strain (0-4%) and high strain (8-10%) are 8.7 and 18.5, respectively, much higher than that of a traditional metallic strain sensor (GF around 2). The wire-shaped CT/PDMS composite sensor shows an excellent response to cyclic tensile loading within a strain range of 0-10% and a frequency range of 0.01-10 Hz, for up to 2000 cycles. The potential of the wire sensor as a wearable strain sensor is demonstrated by finger motion and blood pulse monitoring. Given the low cost of cotton thread and PDMS resin, the simple structure and fabrication technique, and the high performance in a miniaturized size, the wire-shaped CT/PDMS composite sensor is believed to have great potential for application in wearable electronics for human health and motion monitoring.
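
    For reference, the gauge factor quoted above is the relative resistance change per unit strain. The short sketch below shows the calculation; the resistance readings are illustrative values, not measured data from the paper.

        def gauge_factor(r0, r, strain):
            """GF = (delta_R / R0) / strain."""
            return ((r - r0) / r0) / strain

        # Illustrative numbers only, not the paper's measurements.
        print(gauge_factor(r0=100.0, r=103.5, strain=0.04))   # low-strain regime
        print(gauge_factor(r0=100.0, r=118.5, strain=0.10))   # high-strain regime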

  18. Dynamic Metasurface Aperture as Smart Around-the-Corner Motion Detector.

    PubMed

    Del Hougne, Philipp; F Imani, Mohammadreza; Sleasman, Timothy; Gollub, Jonah N; Fink, Mathias; Lerosey, Geoffroy; Smith, David R

    2018-04-25

    Detecting and analysing motion is a key feature of Smart Homes and the connected sensor vision they embrace. At present, most motion sensors operate in line-of-sight Doppler shift schemes. Here, we propose an alternative approach suitable for indoor environments, which effectively constitute disordered cavities for radio frequency (RF) waves; we exploit the fundamental sensitivity of modes of such cavities to perturbations, caused here by moving objects. We establish experimentally three key features of our proposed system: (i) ability to capture the temporal variations of motion and discern information such as periodicity ("smart"), (ii) non line-of-sight motion detection, and (iii) single-frequency operation. Moreover, we explain theoretically and demonstrate experimentally that the use of dynamic metasurface apertures can substantially enhance the performance of RF motion detection. Potential applications include accurately detecting human presence and monitoring inhabitants' vital signs.

  19. Study on robot motion control for intelligent welding processes based on the laser tracking sensor

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Wang, Qian; Tang, Chen; Wang, Ju

    2017-06-01

    A robot motion control method is presented for intelligent welding of complex spatial free-form curve seams based on a laser tracking sensor. First, the tip position of the welding torch is calculated from the torch velocity and the seam trajectory detected by the sensor. Then, the optimal pose of the torch is found under constraints using genetic algorithms. As a result, the intersection point of the weld seam and the laser plane of the sensor remains within the detectable range of the sensor, while the angle between the axis of the welding torch and the tangent of the weld seam meets the requirements. The feasibility of the control method is demonstrated by simulation.

  20. A novel dynamic sensing of wearable digital textile sensor with body motion analysis.

    PubMed

    Yang, Chang-Ming; Lin, Zhan-Sheng; Hu, Chang-Lin; Chen, Yu-Shih; Ke, Ling-Yi; Chen, Yin-Rui

    2010-01-01

    This work proposes an innovative textile sensor system that monitors dynamic body movement and human posture by attaching wearable digital sensors for body motion analysis. The proposed system can display and analyze signals while individuals are walking, running, veering, walking up and down stairs, or falling down, using a wearable monitoring system that captures the coordination between the body and the feet. Several digital sensor designs are embedded in clothing and apparel, and the activated pressure points determine which activity is underway. Importantly, the wearable digital sensors and monitoring system enable adaptive real-time posture, velocity and acceleration estimation, non-invasive data transmission, and point-of-care (POC) healthcare for home and non-clinical environments.

  1. Can we develop an effective early warning system for volcanic eruptions using `off the shelf' webcams and low-light cameras?

    NASA Astrophysics Data System (ADS)

    Harrild, M.; Webley, P. W.; Dehn, J.

    2016-12-01

    An effective early warning system to detect volcanic activity is an invaluable tool, but often very expensive. Detecting and monitoring precursory events, thermal signatures, and ongoing eruptions in near real-time is essential, but conventional methods are often logistically challenging, expensive, and difficult to maintain. Our investigation explores the use of `off the shelf' webcams and low-light cameras, operating in the visible to near-infrared portions of the electromagnetic spectrum, to detect and monitor volcanic incandescent activity. Large databases of webcam imagery already exist at institutions around the world, but they are often extremely underutilised, and we aim to change this. We focus on the early detection of thermal signatures at volcanoes, using automated scripts to analyse individual images for changes in pixel brightness, allowing us to detect relative changes in incandescent activity. Primarily, our work focuses on freely available streams of webcam images from around the world, which we can download and analyse in near real-time. When changes in activity are detected, an alert is sent to users informing them of the change and of the need for further investigation. Although relatively rudimentary, this technique provides constant monitoring of volcanoes in remote locations and developing nations, where it is not financially viable to deploy expensive equipment. We also purchased several of our own cameras, which were extensively tested in controlled laboratory settings with a black body source to determine their individual spectral response. Our aim is to deploy these cameras at active volcanoes knowing exactly how they will respond to varying levels of incandescence. They are ideal for field deployments as they are cheap (roughly US$0-1,000), consume little power, are easily replaced, and can provide telemetered near real-time data. Data from Shiveluch volcano, Russia and from our spectral-response laboratory experiments are presented here.
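
    A minimal sketch of the kind of pixel-brightness monitoring described above, assuming frames have already been downloaded to disk; the paths, region of interest and alert threshold are assumptions rather than the authors' implementation.

        import glob
        import numpy as np
        from PIL import Image

        ROI = (slice(200, 400), slice(300, 600))   # rows, cols covering the vent (assumed)
        THRESHOLD = 15.0                           # mean-brightness jump that triggers an alert

        previous = None
        for path in sorted(glob.glob("webcam_frames/*.jpg")):
            gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
            brightness = gray[ROI].mean()
            if previous is not None and brightness - previous > THRESHOLD:
                print(f"ALERT: incandescence increase detected in {path} "
                      f"({previous:.1f} -> {brightness:.1f})")
            previous = brightness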

  2. The development and validation of using inertial sensors to monitor postural change in resistance exercise.

    PubMed

    Gleadhill, Sam; Lee, James Bruce; James, Daniel

    2016-05-03

    This research presented and validated a method of assessing postural changes during resistance exercise using inertial sensors. A simple lifting task was broken down into a series of well-defined tasks, which could be examined and measured in a controlled environment. The purpose of this research was to determine whether timing measures obtained from inertial sensor accelerometer outputs can provide accurate, quantifiable information about resistance exercise movement patterns. The aim was to complete a timing-measure validation of inertial sensor outputs. Eleven participants completed five repetitions of 15 different deadlift variations. Participants were monitored with inertial sensors and an infrared three-dimensional motion capture system. Validation was undertaken using a Will Hopkins Typical Error of the Estimate, with a Pearson's correlation and a Bland-Altman Limits of Agreement analysis. Statistical validation measured the timing agreement during deadlifts between the inertial sensor outputs and the motion capture system. Timing validation results demonstrated a Pearson's correlation of 0.9997, with trivial standardised error (0.026) and standardised bias (0.002). Inertial sensors can now be used in practical settings with as much confidence as motion capture systems for accelerometer timing measurements of resistance exercise. This research provides foundations for inertial sensors to be applied to qualitative activity recognition of resistance exercise and safe lifting practices. Copyright © 2016 Elsevier Ltd. All rights reserved.
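
    The agreement statistics used above can be reproduced in a few lines; this sketch computes the Pearson correlation and the Bland-Altman bias and 95% limits of agreement for two paired timing series (the arrays shown are illustrative, not the study's data).

        import numpy as np

        def agreement(sensor_times, mocap_times):
            """Pearson r plus Bland-Altman bias and 95% limits of agreement."""
            sensor_times = np.asarray(sensor_times, dtype=float)
            mocap_times = np.asarray(mocap_times, dtype=float)
            r = np.corrcoef(sensor_times, mocap_times)[0, 1]
            diff = sensor_times - mocap_times
            bias = diff.mean()
            loa = 1.96 * diff.std(ddof=1)
            return r, bias, (bias - loa, bias + loa)

        # Illustrative timing data (seconds).
        r, bias, limits = agreement([1.02, 0.98, 1.10, 1.05], [1.00, 0.97, 1.12, 1.04])
        print(r, bias, limits)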

  3. Low-cost human motion capture system for postural analysis onboard ships

    NASA Astrophysics Data System (ADS)

    Nocerino, Erica; Ackermann, Sebastiano; Del Pizzo, Silvio; Menna, Fabio; Troisi, Salvatore

    2011-07-01

    The study of human equilibrium, also known as postural stability, concerns different research sectors (medicine, kinesiology, biomechanics, robotics, sport) and is usually performed employing motion analysis techniques for recording human movements and posture. A wide range of techniques and methodologies has been developed, but the choice of instrumentation and sensors depends on the requirements of the specific application. Postural stability is a topic of great interest to the maritime community, since ship motions can make maintaining an upright stance demanding and difficult, with hazardous consequences for the safety of people onboard. The need to capture the motion of an individual standing on a ship during its daily service does not permit the use of the optical systems commonly employed for human motion analysis. These sensors are not designed to operate in adverse environmental conditions (water, wetness, saltiness) or under suboptimal lighting. The solution proposed in this study consists of a motion acquisition system that can be easily used onboard ships. It makes use of two different methodologies: (I) motion capture with videogrammetry and (II) motion measurement with an Inertial Measurement Unit (IMU). The developed image-based motion capture system, made up of three low-cost, light and compact video cameras, was validated against a commercial optical system and then used to test the reliability of the inertial sensors. In this paper, the whole process of planning, designing, calibrating, and assessing the accuracy of the motion capture system is reported and discussed. Results from the laboratory tests and preliminary campaigns in the field are presented.

  4. Non-contact cardiac pulse rate estimation based on web-camera

    NASA Astrophysics Data System (ADS)

    Wang, Yingzhi; Han, Tailin

    2015-12-01

    In this paper, we introduce a new methodology for non-contact cardiac pulse rate estimation based on imaging photoplethysmography (iPPG) and blind source separation. This novel approach can be applied to color video recordings of the human face and is based on automatic face tracking together with blind source separation of the RGB color channels into three source components. First, the data obtained from the color video are pre-processed with steps such as normalization and sphering. The cardiac pulse rate is then estimated by spectral analysis of the sources recovered with Independent Component Analysis (ICA) using the JADE algorithm. With Bland-Altman and correlation analysis, we compared the cardiac pulse rate extracted from videos recorded by a basic webcam against commercial pulse oximetry sensors and achieved high accuracy and correlation. The root mean square error of the estimates is 2.06 bpm, which indicates that the algorithm can realize non-contact measurement of cardiac pulse rate.
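
    A hedged sketch of the processing chain described above, using scikit-learn's FastICA as a stand-in for the JADE algorithm (JADE is not part of scikit-learn); it assumes per-frame mean RGB values have already been extracted from the face region, and the frame rate is an assumption.

        import numpy as np
        from scipy.signal import detrend
        from sklearn.decomposition import FastICA

        def estimate_pulse_bpm(rgb_means, fps=30.0):
            """rgb_means: array of shape (n_frames, 3) with mean R, G, B of the face ROI."""
            x = detrend(np.asarray(rgb_means, dtype=float), axis=0)
            x = (x - x.mean(axis=0)) / x.std(axis=0)          # normalization
            sources = FastICA(n_components=3, random_state=0).fit_transform(x)

            freqs = np.fft.rfftfreq(len(sources), d=1.0 / fps)
            band = (freqs >= 0.75) & (freqs <= 4.0)           # 45-240 bpm
            best_bpm, best_power = 0.0, -1.0
            for k in range(sources.shape[1]):
                power = np.abs(np.fft.rfft(sources[:, k])) ** 2
                idx = np.argmax(power[band])
                if power[band][idx] > best_power:
                    best_power = power[band][idx]
                    best_bpm = 60.0 * freqs[band][idx]
            return best_bpm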

  5. Smart mobile robot system for rubbish collection

    NASA Astrophysics Data System (ADS)

    Ali, Mohammed A. H.; Sien Siang, Tan

    2018-03-01

    This paper documents the research and procedures involved in developing a smart mobile robot with a detection system for collecting rubbish. The objective is to design a mobile robot that can detect and recognize medium-size rubbish, such as drink cans. A further objective is to enable the robot to estimate the position of the rubbish relative to itself and to approach the rubbish based on that estimated position. The paper reviews the relevant types of image processing, detection and recognition methods, and image filters. The project implements an RGB subtraction method as the primary detection system, and an algorithm for distance measurement based on the image plane is also implemented. The project is limited to using a computer webcam as the sensor, and the robot is only able to approach the nearest rubbish within the camera's field of view that contains the targeted RGB colour components on its body.
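
    The RGB subtraction idea can be illustrated with a small NumPy sketch (not the authors' code): subtract a reference background frame from the current frame, threshold the per-pixel colour difference, and take the centroid of the changed pixels as the target location. Frame sources and the threshold are assumptions.

        import numpy as np

        def detect_target(frame, background, threshold=40.0):
            """Return (row, col) centroid of pixels that differ strongly from the background,
            or None if nothing is detected. Both inputs are HxWx3 uint8 arrays."""
            diff = np.abs(frame.astype(np.int16) - background.astype(np.int16)).sum(axis=2)
            mask = diff > threshold
            if not mask.any():
                return None
            rows, cols = np.nonzero(mask)
            return rows.mean(), cols.mean()

        # Usage sketch: estimate distance from the vertical image coordinate (calibration assumed).
        # row, col = detect_target(current_frame, reference_frame)
        # distance_m = a * row + b   # a, b from a simple image-plane calibration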

  6. Note: Compact and light displacement sensor for a precision measurement system in large motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sang Heon, E-mail: shlee@andong.ac.kr

    We developed a compact and light displacement sensor applicable to systems that require wide-range motion of the sensing device. The proposed sensor utilizes the optical pickup unit of an optical disk drive, which has previously been applied to atomic force microscopy (AFM) because of its compactness and lightness as well as its high performance. We modified the structure of the optical pickup unit and made the compact sensor driver attachable to an AFM probe head to accommodate large rotations. The feasibility of the developed sensor for a general probe-moving measurement device and for probe-rotating AFM was verified. Moreover, a simple and precise measurement of the alignment between the centers of the rotator and the probe tip in probe-rotation AFM was experimentally demonstrated using the developed sensor.

  7. Reference respiratory waveforms by minimum jerk model analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anetai, Yusuke, E-mail: anetai@radonc.med.osaka-u.ac.jp; Sumida, Iori; Takahashi, Yutaka

    Purpose: The CyberKnife® robotic surgery system can deliver radiation to a tumor subject to respiratory movement using Synchrony® mode with better than 2 mm tracking accuracy. However, rapid and rough motion tracking causes mechanical tracking errors and puts mechanical stress on the robotic joints, leading to unexpected radiation delivery errors. During clinical treatment, patient respiratory motion is much more complicated, suggesting the need for patient-specific modeling of respiratory motion. The purpose of this study was to propose a novel method that provides a reference respiratory wave to enable smooth tracking for each patient. Methods: The minimum jerk model, which mathematically characterizes smoothness by means of jerk (the third derivative of position with respect to time, i.e., the derivative of acceleration, which is proportional to the time rate of change of force), was introduced to model a patient-specific respiratory motion wave that allows smooth motion tracking by CyberKnife®. To verify that patient-specific minimum jerk respiratory waves were tracked smoothly in Synchrony® mode, the tracking laser projection from CyberKnife® was optically analyzed every 0.1 s using a webcam and a calibrated grid on a motion phantom whose motion followed three wave patterns (cosine, typical free-breathing, and the minimum jerk theoretical wave model) in the clinically relevant superior-inferior direction, derived from six volunteers and assessed on the same node of the same isocentric plan. Results: The tracking discrepancy from the center of the grid to the beam projection was evaluated. The minimum jerk theoretical wave reduced the maximum peak amplitude of the radial tracking discrepancy by 22% and 35% compared with the cosine and typical free-breathing waveforms, respectively, and provided smooth tracking in the radial direction. Motion tracking constancy, as indicated by the dependence of the radial tracking discrepancy on respiratory phase, was improved with the minimum jerk theoretical model by 7.0% and 13% compared with the cosine and free-breathing waveforms, respectively. Conclusions: The minimum jerk theoretical respiratory wave can achieve smooth tracking by CyberKnife® and may provide patient-specific respiratory modeling, which may be useful for respiratory training and coaching, as well as for quality assurance of the mechanical CyberKnife® robotic trajectory.
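
    For context, the classic minimum-jerk trajectory between two rest positions is the quintic polynomial x(tau) = x0 + (xf - x0)(10 tau^3 - 15 tau^4 + 6 tau^5) with tau = t/T. The sketch below builds a smooth reference respiratory wave by alternating minimum-jerk segments between end-exhale and end-inhale positions; the amplitude and segment durations are illustrative, not the patient-specific values derived in the paper.

        import numpy as np

        def min_jerk_segment(x0, xf, duration_s, fs=10.0):
            """Minimum-jerk position profile from x0 to xf over duration_s seconds."""
            tau = np.linspace(0.0, 1.0, int(duration_s * fs), endpoint=False)
            return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

        def reference_respiratory_wave(amplitude_mm=10.0, inhale_s=1.8, exhale_s=2.2,
                                       n_cycles=5, fs=10.0):
            """Concatenate inhale/exhale minimum-jerk segments into a reference wave."""
            cycle = np.concatenate([
                min_jerk_segment(0.0, amplitude_mm, inhale_s, fs),
                min_jerk_segment(amplitude_mm, 0.0, exhale_s, fs),
            ])
            return np.tile(cycle, n_cycles)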

  8. A Saccade Based Framework for Real-Time Motion Segmentation Using Event Based Vision Sensors

    PubMed Central

    Mishra, Abhishek; Ghosh, Rohan; Principe, Jose C.; Thakor, Nitish V.; Kukreja, Sunil L.

    2017-01-01

    Motion segmentation is a critical pre-processing step for autonomous robotic systems to facilitate tracking of moving objects in cluttered environments. Event based sensors are low power analog devices that represent a scene by means of asynchronous information updates of only the dynamic details at high temporal resolution and, hence, require significantly less calculations. However, motion segmentation using spatiotemporal data is a challenging task due to data asynchrony. Prior approaches for object tracking using neuromorphic sensors perform well while the sensor is static or a known model of the object to be followed is available. To address these limitations, in this paper we develop a technique for generalized motion segmentation based on spatial statistics across time frames. First, we create micromotion on the platform to facilitate the separation of static and dynamic elements of a scene, inspired by human saccadic eye movements. Second, we introduce the concept of spike-groups as a methodology to partition spatio-temporal event groups, which facilitates computation of scene statistics and characterize objects in it. Experimental results show that our algorithm is able to classify dynamic objects with a moving camera with maximum accuracy of 92%. PMID:28316563

  9. Motion-related resource allocation in dynamic wireless visual sensor network environments.

    PubMed

    Katsenou, Angeliki V; Kondi, Lisimachos P; Parsopoulos, Konstantinos E

    2014-01-01

    This paper investigates quality-driven cross-layer optimization for resource allocation in direct sequence code division multiple access wireless visual sensor networks. We consider a single-hop network topology, where each sensor transmits directly to a centralized control unit (CCU) that manages the available network resources. Our aim is to enable the CCU to jointly allocate the transmission power and source-channel coding rates for each node, under four different quality-driven criteria that take into consideration the varying motion characteristics of each recorded video. For this purpose, we studied two approaches with a different tradeoff of quality and complexity. The first one allocates the resources individually for each sensor, whereas the second clusters them according to the recorded level of motion. In order to address the dynamic nature of the recorded scenery and re-allocate the resources whenever it is dictated by the changes in the amount of motion in the scenery, we propose a mechanism based on the particle swarm optimization algorithm, combined with two restarting schemes that either exploit the previously determined resource allocation or conduct a rough estimation of it. Experimental simulations demonstrate the efficiency of the proposed approaches.

  10. Highly Stretchable and Transparent Microfluidic Strain Sensors for Monitoring Human Body Motions.

    PubMed

    Yoon, Sun Geun; Koo, Hyung-Jun; Chang, Suk Tai

    2015-12-16

    We report a new class of simple microfluidic strain sensors with high stretchability, transparency, sensitivity, and long-term stability, with no considerable hysteresis and a fast response to various deformations, obtained by combining the merits of microfluidic techniques and ionic liquids. The high optical transparency of the strain sensors was achieved by introducing refractive-index-matched ionic liquids into microfluidic networks or channels embedded in an elastomeric matrix. The microfluidic strain sensors offer outstanding performance under a variety of deformations induced by stretching, bending, pressing, and twisting. The principle of our microfluidic strain sensor is explained by a theoretical model based on elastic channel deformation. To demonstrate practical usage, the simple-structured microfluidic strain sensors were applied to a finger, wrist, and arm. The highly stretchable and transparent microfluidic strain sensors were successfully used as potential platforms for distinctly monitoring a wide range of human body motions in real time. Our novel microfluidic strain sensors show great promise for future stretchable electronic devices.

  11. Model-Based Reinforcement of Kinect Depth Data for Human Motion Capture Applications

    PubMed Central

    Calderita, Luis Vicente; Bandera, Juan Pedro; Bustos, Pablo; Skiadopoulos, Andreas

    2013-01-01

    Motion capture systems have recently experienced a strong evolution. New cheap depth sensors and open source frameworks, such as OpenNI, allow for perceiving human motion on-line without using invasive systems. However, these proposals do not evaluate the validity of the obtained poses. This paper addresses this issue using a model-based pose generator to complement the OpenNI human tracker. The proposed system enforces kinematics constraints, eliminates odd poses and filters sensor noise, while learning the real dimensions of the performer's body. The system is composed by a PrimeSense sensor, an OpenNI tracker and a kinematics-based filter and has been extensively tested. Experiments show that the proposed system improves pure OpenNI results at a very low computational cost. PMID:23845933

  12. Precision and repeatability of the Optotrak 3020 motion measurement system.

    PubMed

    States, R A; Pappas, E

    2006-01-01

    Several motion analysis systems are used by researchers to quantify human motion and to perform accurate surgical procedures. The Optotrak 3020 is one of these systems and despite its widespread use there is not any published information on its precision and repeatability. We used a repeated measures design study to evaluate the precision and repeatability of the Optotrak 3020 by measuring distance and angle in three sessions, four distances and three conditions (motion, static vertical, and static tilted). Precision and repeatability were found to be excellent for both angle and distance although they decreased with increasing distance from the sensors and with tilt from the plane of the sensors. Motion did not have a significant effect on the precision of the measurements. In conclusion, the measurement error of the Optotrak is minimal. Further studies are needed to evaluate its precision and repeatability under human motion conditions.

  13. Region-confined restoration method for motion-blurred star image of the star sensor under dynamic conditions.

    PubMed

    Ma, Liheng; Bernelli-Zazzera, Franco; Jiang, Guangwen; Wang, Xingshu; Huang, Zongsheng; Qin, Shiqiao

    2016-06-10

    Under dynamic conditions, the centroiding accuracy of the motion-blurred star image decreases and the number of identified stars reduces, which leads to the degradation of the attitude accuracy of the star sensor. To improve the attitude accuracy, a region-confined restoration method, which concentrates on the noise removal and signal to noise ratio (SNR) improvement of the motion-blurred star images, is proposed for the star sensor under dynamic conditions. A multi-seed-region growing technique with the kinematic recursive model for star image motion is given to find the star image regions and to remove the noise. Subsequently, a restoration strategy is employed in the extracted regions, taking the time consumption and SNR improvement into consideration simultaneously. Simulation results indicate that the region-confined restoration method is effective in removing noise and improving the centroiding accuracy. The identification rate and the average number of identified stars in the experiments verify the advantages of the region-confined restoration method.

  14. Statistical data mining of streaming motion data for fall detection in assistive environments.

    PubMed

    Tasoulis, S K; Doukas, C N; Maglogiannis, I; Plagianakos, V P

    2011-01-01

    The analysis of human motion data is interesting for the purpose of activity recognition or emergency event detection, especially in the case of elderly or disabled people living independently in their homes. Several techniques have been proposed for identifying such distress situations using either motion, audio or video sensors on the monitored subject (wearable sensors) or the surrounding environment. The output of such sensors is data streams that require real time recognition, especially in emergency situations, thus traditional classification approaches may not be applicable for immediate alarm triggering or fall prevention. This paper presents a statistical mining methodology that may be used for the specific problem of real time fall detection. Visual data captured from the user's environment, using overhead cameras along with motion data are collected from accelerometers on the subject's body and are fed to the fall detection system. The paper includes the details of the stream data mining methodology incorporated in the system along with an initial evaluation of the achieved accuracy in detecting falls.

  15. Mobile robotic sensors for perimeter detection and tracking.

    PubMed

    Clark, Justin; Fierro, Rafael

    2007-02-01

    Mobile robot/sensor networks have emerged as tools for environmental monitoring, search and rescue, exploration and mapping, evaluation of civil infrastructure, and military operations. These networks consist of many sensors each equipped with embedded processors, wireless communication, and motion capabilities. This paper describes a cooperative mobile robot network capable of detecting and tracking a perimeter defined by a certain substance (e.g., a chemical spill) in the environment. Specifically, the contributions of this paper are twofold: (i) a library of simple reactive motion control algorithms and (ii) a coordination mechanism for effectively carrying out perimeter-sensing missions. The decentralized nature of the methodology implemented could potentially allow the network to scale to many sensors and to reconfigure when adding/deleting sensors. Extensive simulation results and experiments verify the validity of the proposed cooperative control scheme.

  16. Development of Gravity Acceleration Measurement Using Simple Harmonic Motion Pendulum Method Based on Digital Technology and Photogate Sensor

    NASA Astrophysics Data System (ADS)

    Yulkifli; Afandi, Zurian; Yohandri

    2018-04-01

    A system for measuring gravitational acceleration using the simple harmonic motion pendulum method, digital technology and a photogate sensor has been developed. Digital technology is more practical and optimizes the duration of the experiment. The pendulum method calculates the acceleration of gravity using a solid ball connected to a rope attached to a stative pole. The pendulum is swung at a small angle, resulting in simple harmonic motion. The measurement system consists of a power supply, photogate sensors, an Arduino Pro Mini and a seven-segment display. The Arduino Pro Mini receives digital data from the photogate sensor and processes it into the timing of the pendulum oscillation. The calculated oscillation time is shown on the seven-segment display. Based on the measured data, the accuracy and precision of the experimental system are 98.76% and 99.81%, respectively, so the system can be used in physics experiments, especially for determining gravitational acceleration.
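
    The underlying calculation is the small-angle pendulum relation g = 4*pi^2*L/T^2. The sketch below (illustrative values, not the authors' firmware) converts photogate-timed oscillation periods into an estimate of g.

        import math

        def gravity_from_pendulum(length_m, periods_s):
            """g = 4*pi^2 * L / T^2, averaged over repeated period measurements."""
            mean_T = sum(periods_s) / len(periods_s)
            return 4.0 * math.pi**2 * length_m / mean_T**2

        # Illustrative photogate timings for a 0.50 m pendulum.
        print(gravity_from_pendulum(0.50, [1.418, 1.421, 1.419]))   # ~9.8 m/s^2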

  17. Extremely Elastic Wearable Carbon Nanotube Fiber Strain Sensor for Monitoring of Human Motion.

    PubMed

    Ryu, Seongwoo; Lee, Phillip; Chou, Jeffrey B; Xu, Ruize; Zhao, Rong; Hart, Anastasios John; Kim, Sang-Gook

    2015-06-23

    The increasing demand for wearable electronic devices has made the development of highly elastic strain sensors that can monitor various physical parameters an essential factor in realizing next-generation electronics. Here, we report an ultrahighly stretchable and wearable device fabricated from dry-spun carbon nanotube (CNT) fibers. Stretching the highly oriented CNT fibers grown on a flexible substrate (Ecoflex) induces a steady decrease in the conductive pathways and contact areas between nanotubes that depends on the stretching distance; this enables the CNT fibers to behave as highly sensitive strain sensors. Owing to its unique structure and mechanism, the device can be stretched by over 900% while retaining high sensitivity, responsiveness, and durability. Furthermore, a device with biaxially oriented CNT fiber arrays shows independent cross-sensitivity, which facilitates simultaneous measurement of strains along multiple axes. We demonstrate potential applications of the proposed device, such as strain gauges and single- and multi-axis motion-detecting sensors. These devices can be incorporated into various motion detecting systems whose applications have previously been limited by achievable strain.

  18. Self-powered Real-time Movement Monitoring Sensor Using Triboelectric Nanogenerator Technology.

    PubMed

    Jin, Liangmin; Tao, Juan; Bao, Rongrong; Sun, Li; Pan, Caofeng

    2017-09-05

    The triboelectric nanogenerator (TENG) has great potential in the field of self-powered sensor fabrication. Recently, smart electronic devices and movement monitoring sensors have attracted the attention of scientists because of their application in the field of artificial intelligence. In this article, a self-powered TENG sensor for finger movement monitoring is designed and analysed. Under finger movements, the TENG undergoes contact and separation, converting mechanical energy into an electrical signal. A pulsed output current of 7.8 μA is generated by the bending and straightening motions of the artificial finger, and optimal output power is obtained when the external resistance is approximately 30 MΩ. Random motions of the finger are detected by a system with multiple TENG sensors in series. This type of flexible, self-powered sensor has potential applications in artificial intelligence and robot manufacturing.

  19. 3D Data Acquisition Platform for Human Activity Understanding

    DTIC Science & Technology

    2016-03-02

    In this project, we incorporated motion capture devices, 3D vision sensors, and EMG sensors to cross validate ... multimodality data acquisition, and address fundamental research problems of representation and invariant description of 3D data, human motion modeling and ... The support for the acquisition of such research instrumentation has significantly facilitated our current and future research and education in 3D data.

  20. A new smart traffic monitoring method using embedded cement-based piezoelectric sensors

    NASA Astrophysics Data System (ADS)

    Zhang, Jinrui; Lu, Youyuan; Lu, Zeyu; Liu, Chao; Sun, Guoxing; Li, Zongjin

    2015-02-01

    Cement-based piezoelectric composites are employed as the sensing elements of a new smart traffic monitoring system. The piezoelectricity of the cement-based piezoelectric sensors enables powerful and accurate real-time detection of the pressure induced by the traffic flow. To describe the mechanical-electrical conversion mechanism between traffic flow and the electrical output of the embedded piezoelectric sensors, a mathematical model is established based on Duhamel's integral, the constitutive law and the charge-leakage characteristics of the piezoelectric composite. Laboratory tests show that the voltage magnitude of the sensor is linearly proportional to the applied pressure, which ensures the reliability of the cement-based piezoelectric sensors for traffic monitoring. A series of on-site road tests with a 10 tonne truck and a 6.8 tonne van shows that vehicle weigh-in-motion can be predicted from the mechanical-electrical model by taking into account the vehicle speed and the charge-leakage property of the piezoelectric sensor. In the speed range from 20 km/h to 70 km/h, the error of the repeated weigh-in-motion measurements of the 6.8 tonne van is less than 1 tonne. The results indicate that the embedded cement-based piezoelectric sensors and the associated measurement setup have good capability for smart traffic monitoring, such as traffic flow detection, vehicle speed detection and weigh-in-motion measurement.

  1. Orientation-independent measures of ground motion

    USGS Publications Warehouse

    Boore, D.M.; Watson-Lamprey, Jennie; Abrahamson, N.A.

    2006-01-01

    The geometric mean of the response spectra for two orthogonal horizontal components of motion, commonly used as the response variable in predictions of strong ground motion, depends on the orientation of the sensors as installed in the field. This means that the measure of ground-motion intensity could differ for the same actual ground motion. This dependence on sensor orientation is most pronounced for strongly correlated motion (the extreme example being linearly polarized motion), such as often occurs at periods of 1 sec or longer. We propose two new measures of the geometric mean, GMRotDpp, and GMRotIpp, that are independent of the sensor orientations. Both are based on a set of geometric means computed from the as-recorded orthogonal horizontal motions rotated through all possible non-redundant rotation angles. GMRotDpp is determined as the ppth percentile of the set of geometric means for a given oscillator period. For example, GMRotD00, GMRotD50, and GMRotD100 correspond to the minimum, median, and maximum values, respectively. The rotations that lead to GMRotDpp depend on period, whereas a single period-independent rotation is used for GMRotIpp, the angle being chosen to minimize the spread of the rotation-dependent geometric mean (normalized by GMRotDpp) over the usable range of oscillator periods. GMRotI50 is the ground-motion intensity measure being used in the development of new ground-motion prediction equations by the Pacific Earthquake Engineering Center Next Generation Attenuation project. Comparisons with as-recorded geometric means for a large dataset show that the new measures are systematically larger than the geometric-mean response spectra using the as-recorded values of ground acceleration, but only by a small amount (less than 3%). The theoretical advantage of the new measures is that they remove sensor orientation as a contributor to aleatory uncertainty. Whether the reduction is of practical significance awaits detailed studies of large datasets. A preliminary analysis contained in a companion article by Beyer and Bommer finds that the reduction is small-to-nonexistent for equations based on a wide range of magnitudes and distances. The results of Beyer and Bommer do suggest, however, that there is an increasing reduction as period increases. Whether the reduction increases with other subdivisions of the dataset for which strongly correlated motions might be expected (e.g., pulselike motions close to faults) awaits further analysis.
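
    A simplified sketch of the orientation-independent idea, using peak ground acceleration in place of oscillator response spectra to keep the example short (the actual GMRotDpp definition works period by period on response spectra); the percentile convention follows the description above.

        import numpy as np

        def gmrotd(acc1, acc2, percentile=50):
            """Orientation-independent geometric mean of a simple intensity measure (here PGA):
            rotate the two horizontal recordings through all non-redundant angles and take
            the requested percentile of the geometric means."""
            acc1 = np.asarray(acc1, dtype=float)
            acc2 = np.asarray(acc2, dtype=float)
            gms = []
            for angle_deg in range(90):                     # non-redundant rotations
                theta = np.radians(angle_deg)
                r1 = acc1 * np.cos(theta) + acc2 * np.sin(theta)
                r2 = -acc1 * np.sin(theta) + acc2 * np.cos(theta)
                gms.append(np.sqrt(np.max(np.abs(r1)) * np.max(np.abs(r2))))
            return np.percentile(gms, percentile)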

  2. Refining the effects of aircraft motion on an airborne beam-type gravimeter

    NASA Astrophysics Data System (ADS)

    Childers, V. A.; Weil, C.

    2016-12-01

    A challenge of modern airborne gravimetry is identifying an aircraft/autopilot combination that will allow for high quality data collection. The natural motion of the aircraft coupled with the autopilot's reaction to changing winds and turbulence can result in a successful data collection effort when the motion is benign or in total failure when the motion is at its worst. Aircraft motion plays such an important role in airborne gravimetry for several reasons, but most importantly to this study it affects the behavior of the gravimeter's gyro-stabilized platform. The gyro-stabilized platform keeps the sensor aligned with a time-averaged local vertical to produce a scalar measurement along the plumb direction. However, turbulence can cause the sensor to align temporarily with aircraft horizontal accelerations that can both decrease the measured gravity (because the sensor is no longer aligned with the gravity field) and increase the measured gravity (because horizontal accelerations are coupling into the measurement). NOAA's Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project has collected airborne gravity data using a Micro-g LaCoste TAGS (Turnkey Airborne Gravity System) beam-type meter on a variety of mostly turboprop aircraft with a wide range of outcomes, some different than one would predict. Some aircraft that seem the smoothest to the operator in flight do not produce as high quality a measurement as one would expect. Alternatively, some aircraft that have significant motion produce very high quality data. Due to the extensive nature of the GRAV-D survey, significant quantities of data exist on our various successful aircraft. In addition, we have numerous flights, although fewer, that were not successful for a number of reasons. In this study, we use spectral analysis to evaluate the aircraft motion for our various successful aircraft and compare with the problem flights in our effort to identify the signature motions indicative of aircraft that could be successful or not successful for airborne gravity collection with a beam-type sensor.

  3. Fault detection and isolation in motion monitoring system.

    PubMed

    Kim, Duk-Jin; Suk, Myoung Hoon; Prabhakaran, B

    2012-01-01

    Pervasive computing has become a very active research field. A watch can trace human movement to record motion boundaries, and localized visiting areas can be used to study patterns of social life. Pervasive computing also supports patient monitoring: a daily monitoring system helps longitudinal studies such as Alzheimer's, Parkinson's, or obesity monitoring. Due to the nature of the monitoring sensors (on-body wireless sensors), however, signal noise or faulty-sensor errors can be present at any time. Many research works have addressed these problems, but usually with a large number of deployed sensors. In this paper, we present faulty sensor detection and isolation using only two on-body sensors. We investigate three different types of sensor errors: the SHORT error, the CONSTANT error, and the NOISY SENSOR error (see Section V for details). Our experimental results show that the success rate of isolating faulty signals is on average over 91.5% for fault type 1, over 92% for fault type 2, and over 99% for fault type 3, with a fault prior of 30% sensor errors.
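
    To make the three fault types concrete, here is a hedged sketch (not the authors' detector) that labels a window of accelerometer samples as SHORT (isolated spike), CONSTANT (stuck value), or NOISY SENSOR using simple statistics; all thresholds are assumptions.

        import numpy as np

        def classify_fault(window, spike_z=6.0, stuck_std=1e-3, noisy_std=2.0):
            """Label a 1-D window of accelerometer samples with a coarse fault type."""
            window = np.asarray(window, dtype=float)
            std = window.std()
            if std < stuck_std:
                return "CONSTANT"                      # stuck-at value
            z = np.abs(window - np.median(window)) / (std + 1e-9)
            if z.max() > spike_z and (z > spike_z).sum() <= 2:
                return "SHORT"                         # isolated spike(s)
            if std > noisy_std:
                return "NOISY SENSOR"                  # excessive variance
            return "OK"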

  4. Unobtrusive Monitoring of Neonatal Brain Temperature Using a Zero-Heat-Flux Sensor Matrix.

    PubMed

    Atallah, Louis; Bongers, Edwin; Lamichhane, Bishal; Bambang-Oetomo, Sidarto

    2016-01-01

    The temperature of preterm neonates must be maintained within a narrow window to ensure their survival. Continuously measuring their core temperature provides an optimal means of monitoring their thermoregulation and their response to environmental changes. However, existing methods of measuring core temperature can be very obtrusive, such as rectal probes, or inaccurate/lagging, such as skin temperature sensors and spot-checks using tympanic temperature sensors. This study investigates an unobtrusive method of measuring brain temperature continuously using an embedded zero-heat-flux (ZHF) sensor matrix placed under the head of the neonate. The measured temperature profile is used to segment areas of motion and incorrect positioning, where the neonate's head is not above the sensors. We compare our measurements during low motion/stable periods to esophageal temperatures for 12 preterm neonates, measured for an average of 5 h per neonate. The method we propose shows good correlation with the reference temperature for most of the neonates. The unobtrusive embedding of the matrix in the neonate's environment poses no harm or disturbance to the care work-flow, while measuring core temperature. To address the effect of motion on the ZHF measurements in the current embodiment, we recommend a more ergonomic embedding ensuring the sensors are continuously placed under the neonate's head.

  5. Development and testing of a magnetic position sensor system for automotive and avionics applications

    NASA Astrophysics Data System (ADS)

    Jacobs, Bryan C.; Nelson, Carl V.

    2001-08-01

    A magnetic sensor system has been developed to measure the 3-D location and orientation of a rigid body relative to an array of magnetic dipole transmitters. A generalized solution to the measurement problem has been formulated, allowing the transmitter and receiver parameters (position, orientation, number, etc.) to be optimized for various applications. Additionally, the method of images has been used to mitigate the impact of metallic materials in close proximity to the sensor. The resulting system allows precise tracking of high-speed motion in confined metal environments. The sensor system was recently configured and tested as an abdomen displacement sensor for an automobile crash-test dummy. The test results indicate a positional accuracy of approximately 1 mm rms during 20 m/s motions. The dynamic test results also confirmed earlier covariance model predictions, which were used to optimize the sensor geometry. A covariance analysis was performed to evaluate the applicability of this magnetic position system for tracking a pilot's head motion inside an aircraft cockpit. Realistic design parameters indicate that a robust tracking system, consisting of lightweight pickup coils mounted on a pilot's helmet, and an array of transmitter coils distributed throughout a cockpit, is feasible. Recent test and covariance results are presented.

  6. Blind multirigid retrospective motion correction of MR images.

    PubMed

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2015-04-01

    Physiological nonrigid motion is inevitable when imaging, e.g., abdominal viscera, and can lead to serious deterioration of the image quality. Prospective techniques for motion correction can handle only special types of nonrigid motion, as they only allow global correction. Retrospective methods developed so far need guidance from navigator sequences or external sensors. We propose a fully retrospective nonrigid motion correction scheme that only needs raw data as an input. Our method is based on a forward model that describes the effects of nonrigid motion by partitioning the image into patches with locally rigid motion. Using this forward model, we construct an objective function that we can optimize with respect to both unknown motion parameters per patch and the underlying sharp image. We evaluate our method on both synthetic and real data in 2D and 3D. In vivo data was acquired using standard imaging sequences. The correction algorithm significantly improves the image quality. Our compute unified device architecture (CUDA)-enabled graphic processing unit implementation ensures feasible computation times. The presented technique is the first computationally feasible retrospective method that uses the raw data of standard imaging sequences, and allows to correct for nonrigid motion without guidance from external motion sensors. © 2014 Wiley Periodicals, Inc.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polese, Luigi Gentile; Brackney, Larry

    An image-based occupancy sensor includes a motion detection module that receives and processes an image signal to generate a motion detection signal, a people detection module that processes the image signal to generate a people detection signal, a face detection module that processes the image signal to generate a face detection signal, and a sensor integration module that receives the three detection signals and generates an occupancy signal from them. The occupancy signal indicates vacancy or occupancy, with an occupancy indication specifying that one or more people are detected within the monitored volume.
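
    A minimal sketch of the described sensor-integration step, assuming the three detection modules each report a confidence value for the current image; the voting rule below is an illustrative assumption, not the patented logic.

        from dataclasses import dataclass

        @dataclass
        class DetectionSignals:
            motion: float      # 0..1 confidence from the motion detection module
            people: float      # 0..1 confidence from the people detection module
            face: float        # 0..1 confidence from the face detection module

        def occupancy_signal(sig: DetectionSignals, threshold: float = 0.5) -> bool:
            """Return True (occupied) if any module is confident, or if two modules
            agree weakly. This weighting is illustrative only."""
            votes = [sig.motion, sig.people, sig.face]
            if max(votes) >= threshold:
                return True
            return sum(v >= 0.3 for v in votes) >= 2

        print(occupancy_signal(DetectionSignals(motion=0.2, people=0.4, face=0.35)))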

  8. Webcam delivery of the Camperdown Program for adolescents who stutter: a phase I trial.

    PubMed

    Carey, Brenda; O'Brian, Sue; Onslow, Mark; Packman, Ann; Menzies, Ross

    2012-07-01

    This Phase I clinical trial explored the viability of webcam Internet delivery of the Camperdown Program for adolescents who stutter. Participants were 3 adolescents, ages 13, 15, and 16 years, with moderate-severe stuttering. Each was treated with the Camperdown Program delivered by webcam with no clinic attendance. Primary outcome measures were percentage of syllables stuttered and number of treatment sessions to maintenance. Secondary outcome measures were speech naturalness, situation avoidance, self-reported stuttering severity, and parent and adolescent satisfaction. Data were collected pre treatment and at 1 day, 6 months, and 12 months post entry to maintenance. Participants entered maintenance after means of 18 sessions and 11 clinician hours. Group mean reduction of stuttering from pre treatment to entry to maintenance was 83%, from pre treatment to 6 months post entry to maintenance was 93%, and from pre treatment to 12 months post entry to maintenance was 74%. Self-reported stuttering severity ratings confirmed these results. Post entry to maintenance speech naturalness for participants fell within the range of that of 3 matched controls. However, avoidance of speech situations showed no corresponding improvements for 2 of the participants. The service delivery model was efficacious and efficient. All of the participants and their parents also found it appealing. Results justify a Phase II trial of the delivery model.

  9. Implementation of webcam-based hyperspectral imaging system

    NASA Astrophysics Data System (ADS)

    Balooch, Ali; Nazeri, Majid; Abbasi, Hamed

    2018-02-01

    In the present work, a hyperspectral imaging system (imaging spectrometer) using a commercial webcam has been designed and developed. This system was able to capture two-dimensional spectra (in emission, transmission and reflection modes) directly from the scene in the desired wavelengths. Imaging of the object is done directly by linear sweep (pushbroom method). To do so, the spectrometer is equipped with a suitable collecting lens and a linear travel stage. A 1920 x 1080 pixel CMOS webcam was used as a detector. The spectrometer has been calibrated by the reference spectral lines of standard lamps. The spectral resolution of this system was about 2nm and its spatial resolution was about 1 mm for a 10 cm long object. The hardware solution is based on data acquisition working on the USB platform and controlled by a LabVIEW program. In this system, the initial output was a three-dimensional matrix in which two dimensions of the matrix were related to the spatial information of the object and the third dimension was the spectrum of any point of the object. Finally, the images in different wavelengths were created by reforming the data of the matrix. The free spectral range (FSR) of the system was 400 to 1100 nm. The system was successfully tested for some applications, such as plasma diagnosis as well as applications in food and agriculture sciences.
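
    The pushbroom acquisition described above can be summarized in a short sketch: each webcam frame captures one spatial line of the object with its spectrum spread along one image axis, and stacking frames as the stage sweeps builds the data cube. File names, the assumed frame layout, and the two calibration reference points are all placeholders, not the authors' values.

        import glob
        import numpy as np
        from PIL import Image

        # Each frame: rows = position along the slit, cols = wavelength (assumed layout).
        frames = [np.asarray(Image.open(p).convert("L"), dtype=np.float64)
                  for p in sorted(glob.glob("sweep_frames/*.png"))]

        # Hypercube axes: (scan position, position along slit, wavelength index).
        cube = np.stack(frames)

        # Linear wavelength calibration from two reference lines (placeholder values).
        wavelengths = np.interp(np.arange(cube.shape[2]), [100, 1500], [435.8, 811.5])

        # Band image near 633 nm: the cube slice closest to that wavelength.
        band = cube[:, :, np.argmin(np.abs(wavelengths - 633.0))]
        print(band.shape)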

  10. Image deblurring in smartphone devices using built-in inertial measurement sensors

    NASA Astrophysics Data System (ADS)

    Šindelář, Ondřej; Šroubek, Filip

    2013-01-01

    Long-exposure handheld photography is degraded with blur, which is difficult to remove without prior information about the camera motion. In this work, we utilize inertial sensors (accelerometers and gyroscopes) in modern smartphones to detect exact motion trajectory of the smartphone camera during exposure and remove blur from the resulting photography based on the recorded motion data. The whole system is implemented on the Android platform and embedded in the smartphone device, resulting in a close-to-real-time deblurring algorithm. The performance of the proposed system is demonstrated in real-life scenarios.

  11. Spin Stabilized Impulsively Controlled Missile (SSICM)

    NASA Astrophysics Data System (ADS)

    Crawford, J. I.; Howell, W. M.

    1985-12-01

    This patent is for the Spin Stabilized Impulsively Controlled Missile (SSICM). SSICM is a missile configuration which employs spin stabilization, nutational motion, and impulsive thrusting, and a body mounted passive or semiactive sensor to achieve very small miss distances against a high speed moving target. SSICM does not contain an autopilot, control surfaces, a control actuation system, nor sensor stabilization gimbals. SSICM spins at a rate sufficient to provide frequency separation between body motions and inertial target motion. Its impulsive thrusters provide near instantaneous changes in lateral velocity, whereas conventional missiles require a significant time delay to achieve lateral acceleration.

  12. Advances in Rotational Seismic Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierson, Robert; Laughlin, Darren; Brune, Robert

    2016-10-19

    Rotational motion is increasingly understood to be a significant part of seismic wave motion. Rotations can be important in earthquake strong motion and in Induced Seismicity Monitoring. Rotational seismic data can also enable shear selectivity and improve wavefield sampling for vertical geophones in 3D surveys, among other applications. However, sensor technology has been a limiting factor to date. The US Department of Energy (DOE) and Applied Technology Associates (ATA) are funding a multi-year project that is now entering Phase 2 to develop and deploy a new generation of rotational sensors for validation of rotational seismic applications. Initial focus is on induced seismicity monitoring, particularly for Enhanced Geothermal Systems (EGS) with fracturing. The sensors employ Magnetohydrodynamic (MHD) principles with broadband response, improved noise floors, robustness, and repeatability. This paper presents a summary of Phase 1 results and Phase 2 status.

  13. Early Improper Motion Detection in Golf Swings Using Wearable Motion Sensors: The First Approach

    PubMed Central

    Stančin, Sara; Tomažič, Sašo

    2013-01-01

    This paper presents an analysis of a golf swing to detect improper motion in the early phase of the swing. Led by the desire to achieve a consistent shot outcome, a particular golfer would (in multiple trials) prefer to perform completely identical golf swings. In reality, some deviations from the desired motion are always present due to the comprehensive nature of the swing motion. Swing motion deviations that are not detrimental to performance are acceptable. This analysis is conducted using a golfer's leading arm kinematic data, which are obtained from a golfer wearing a motion sensor that is comprised of gyroscopes and accelerometers. Applying the principal component analysis (PCA) to the reference observations of properly performed swings, the PCA components of acceptable swing motion deviations are established. Using these components, the motion deviations in the observations of other swings are examined. Any unacceptable deviations that are detected indicate an improper swing motion. Arbitrarily long observations of an individual player's swing sequences can be included in the analysis. The results obtained for the considered example show an improper swing motion in early phase of the swing, i.e., the first part of the backswing. An early detection method for improper swing motions that is conducted on an individual basis provides assistance for performance improvement. PMID:23752563

  14. Early improper motion detection in golf swings using wearable motion sensors: the first approach.

    PubMed

    Stančin, Sara; Tomažič, Sašo

    2013-06-10

    This paper presents an analysis of a golf swing to detect improper motion in the early phase of the swing. Led by the desire to achieve a consistent shot outcome, a particular golfer would (in multiple trials) prefer to perform completely identical golf swings. In reality, some deviations from the desired motion are always present due to the comprehensive nature of the swing motion. Swing motion deviations that are not detrimental to performance are acceptable. This analysis is conducted using a golfer's leading arm kinematic data, which are obtained from a golfer wearing a motion sensor that is comprised of gyroscopes and accelerometers. Applying the principal component analysis (PCA) to the reference observations of properly performed swings, the PCA components of acceptable swing motion deviations are established. Using these components, the motion deviations in the observations of other swings are examined. Any unacceptable deviations that are detected indicate an improper swing motion. Arbitrarily long observations of an individual player's swing sequences can be included in the analysis. The results obtained for the considered example show an improper swing motion in early phase of the swing, i.e., the first part of the backswing. An early detection method for improper swing motions that is conducted on an individual basis provides assistance for performance improvement.
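
    A hedged sketch of the PCA-based idea described in the two records above: fit principal components on reference (properly performed) swing observations, then flag a new observation whose component scores fall outside the reference spread. The feature representation and the acceptance limit are assumptions, not the authors' exact procedure.

        import numpy as np
        from sklearn.decomposition import PCA

        def fit_reference(reference_swings, n_components=3):
            """reference_swings: (n_swings, n_features) matrix of kinematic features."""
            pca = PCA(n_components=n_components).fit(reference_swings)
            scores = pca.transform(reference_swings)
            limit = 3.0 * scores.std(axis=0)      # acceptable deviation per component (assumed)
            return pca, limit

        def is_improper(swing, pca, limit):
            score = pca.transform(swing.reshape(1, -1))[0]
            return bool(np.any(np.abs(score) > limit))

        # Usage with random data standing in for leading-arm kinematic features.
        rng = np.random.default_rng(0)
        ref = rng.normal(size=(20, 50))
        pca, limit = fit_reference(ref)
        print(is_improper(rng.normal(size=50) * 4.0, pca, limit))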

  15. Hydrostatic Level Sensors as High Precision Ground Motion Instrumentation for Tevatron and Other Energy Frontier Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volk, James; Hansen, Sten; Johnson, Todd

    2012-01-01

    Particle accelerators require very tight tolerances on the alignment and stability of their elements: magnets, accelerating cavities, vacuum chambers, etc. In this article we describe the Hydrostatic Level Sensors (HLS) for very low frequency measurements used in a variety of facilities at Fermilab. We present the design features of the sensors, outline their technical parameters, describe their test and calibration procedures, discuss different regimes of operation and give a few illustrative examples of the experimental data. Detailed experimental results of the ground motion measurements with these detectors will be presented in subsequent papers.

  16. Real-time motion artifacts compensation of ToF sensors data on GPU

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Hoegg, Thomas; Kolb, Andreas

    2013-05-01

    Over the last decade, ToF sensors attracted many computer vision and graphics researchers. Nevertheless, ToF devices suffer from severe motion artifacts for dynamic scenes as well as low-resolution depth data which strongly justifies the importance of a valid correction. To counterbalance this effect, a pre-processing approach is introduced to greatly improve range image data on dynamic scenes. We first demonstrate the robustness of our approach using simulated data to finally validate our method using sensor range data. Our GPU-based processing pipeline enhances range data reliability in real-time.

  17. IMU-Based Gait Recognition Using Convolutional Neural Networks and Multi-Sensor Fusion.

    PubMed

    Dehzangi, Omid; Taherisadr, Mojtaba; ChangalVala, Raghvendar

    2017-11-27

    The widespread use of wearable sensors such as smart watches has provided continuous access to valuable user-generated data such as human motion, which could be used to identify an individual based on his/her motion patterns, such as gait. Several methods have been suggested to extract various heuristic and high-level features from gait motion data to identify discriminative gait signatures and distinguish the target individual from others. However, manual and hand-crafted feature extraction is error-prone and subjective. Furthermore, the motion data collected from inertial sensors have a complex structure, and the detachment between the manual feature extraction module and the predictive learning models might limit the generalization capabilities. In this paper, we propose a novel approach for human gait identification using time-frequency (TF) expansion of human gait cycles in order to capture joint two-dimensional (2D) spectral and temporal patterns of gait cycles. Then, we design a deep convolutional neural network (DCNN) to extract discriminative features from the 2D expanded gait cycles and jointly optimize the identification model and the spectro-temporal features in a discriminative fashion. We collect raw motion data from five inertial sensors placed at the chest, lower back, right hand wrist, right knee, and right ankle of each human subject synchronously in order to investigate the impact of sensor location on the gait identification performance. We then present two methods for early (input level) and late (decision score level) multi-sensor fusion to improve the gait identification generalization performance. We specifically propose the minimum error score fusion (MESF) method that discriminatively learns the linear fusion weights of individual DCNN scores at the decision level by minimizing the error rate on the training data in an iterative manner. Ten subjects participated in this study; hence, the problem is a 10-class identification task. Based on our experimental results, 91% subject identification accuracy was achieved using the best individual IMU and 2DTF-DCNN. We then investigated our proposed early and late sensor fusion approaches, which improved the gait identification accuracy of the system to 93.36% and 97.06%, respectively.
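
    As a rough illustration of the decision-level fusion idea, the sketch below fuses per-sensor class scores with linear weights chosen to minimise training error; a brute-force grid search stands in for the paper's iterative MESF procedure, and all data, names and grid values are illustrative assumptions.

    ```python
    # Sketch (assumptions, not the paper's code): fuse per-sensor class scores
    # with linear weights that minimise training error, classify by argmax.
    import numpy as np
    from itertools import product

    def fuse(scores, weights):
        """scores: list of (n_samples, n_classes) arrays, one per sensor."""
        return sum(w * s for w, s in zip(weights, scores))

    def fit_fusion_weights(scores, labels, grid=np.linspace(0, 1, 11)):
        """Brute-force grid search (a stand-in for the iterative MESF weight learning)."""
        best_w, best_err = None, np.inf
        for w in product(grid, repeat=len(scores)):
            if sum(w) == 0:
                continue
            pred = fuse(scores, w).argmax(axis=1)
            err = np.mean(pred != labels)
            if err < best_err:
                best_err, best_w = err, w
        return np.array(best_w), best_err

    # toy usage: two sensors, 3 classes, 30 samples
    rng = np.random.default_rng(1)
    labels = rng.integers(0, 3, size=30)
    onehot = np.eye(3)[labels]
    s1 = onehot + 0.8 * rng.standard_normal((30, 3))   # noisier sensor
    s2 = onehot + 0.3 * rng.standard_normal((30, 3))   # cleaner sensor
    w, err = fit_fusion_weights([s1, s2], labels)
    print(w, err)
    ```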

  18. inertial orientation tracker having automatic drift compensation using an at rest sensor for tracking parts of a human body

    NASA Technical Reports Server (NTRS)

    Foxlin, Eric M. (Inventor)

    2004-01-01

    A self contained sensor apparatus generates a signal that corresponds to at least two of the three orientational aspects of yaw, pitch and roll of a human-scale body, relative to an external reference frame. A sensor generates first sensor signals that correspond to rotational accelerations or rates of the body about certain body axes. The sensor may be mounted to the body. Coupled to the sensor is a signal processor for generating orientation signals relative to the external reference frame that correspond to the angular rate or acceleration signals. The first sensor signals are impervious to interference from electromagnetic, acoustic, optical and mechanical sources. The sensors may be rate sensors. An integrator may integrate the rate signal over time. A drift compensator is coupled to the rate sensors and the integrator. The drift compensator may include a gravitational tilt sensor or a magnetic field sensor or both. A verifier periodically measures the orientation of the body by a means different from the drift sensitive rate sensors. The verifier may take into account characteristic features of human motion, such as stillness periods. The drift compensator may be, in part, a Kalman filter, which may utilize statistical data about human head motion.
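
    The drift-compensation idea described here (integrating rate signals and correcting them with a gravity-tilt reference) can be sketched, for a single pitch axis, as a complementary filter; the gain, sample rate and data below are assumptions, and the patent's Kalman-filter formulation is not reproduced.

    ```python
    # Minimal complementary-filter sketch of gyro integration with gravity-based
    # drift correction for one axis (pitch); gains and sample rate are assumptions.
    import math

    def complementary_pitch(gyro_rates, accels, dt=0.01, alpha=0.98):
        """gyro_rates: pitch rate [rad/s] per sample; accels: (ax, az) pairs [m/s^2]."""
        pitch = 0.0
        estimates = []
        for rate, (ax, az) in zip(gyro_rates, accels):
            gyro_pitch = pitch + rate * dt              # integrate the rate sensor
            accel_pitch = math.atan2(ax, az)            # tilt from gravity (valid when near-still)
            pitch = alpha * gyro_pitch + (1 - alpha) * accel_pitch   # blend to cancel drift
            estimates.append(pitch)
        return estimates

    # usage: a still body with a slightly biased gyro; the estimate stays bounded
    biased_gyro = [0.002] * 1000                        # rad/s bias, would drift if integrated alone
    still_accel = [(0.0, 9.81)] * 1000                  # gravity only -> true pitch = 0
    print(complementary_pitch(biased_gyro, still_accel)[-1])
    ```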

  19. Sensor fusion IV: Control paradigms and data structures; Proceedings of the Meeting, Boston, MA, Nov. 12-15, 1991

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Various papers on control paradigms and data structures in sensor fusion are presented. The general topics addressed include: decision models and computational methods, sensor modeling and data representation, active sensing strategies, geometric planning and visualization, task-driven sensing, motion analysis, models motivated biology and psychology, decentralized detection and distributed decision, data fusion architectures, robust estimation of shapes and features, application and implementation. Some of the individual subjects considered are: the Firefly experiment on neural networks for distributed sensor data fusion, manifold traversing as a model for learning control of autonomous robots, choice of coordinate systems for multiple sensor fusion, continuous motion using task-directed stereo vision, interactive and cooperative sensing and control for advanced teleoperation, knowledge-based imaging for terrain analysis, physical and digital simulations for IVA robotics.

  20. VO2 estimation using 6-axis motion sensor with sports activity classification.

    PubMed

    Nagata, Takashi; Nakamura, Naoteru; Miyatake, Masato; Yuuki, Akira; Yomo, Hiroyuki; Kawabata, Takashi; Hara, Shinsuke

    2016-08-01

    In this paper, we focus on oxygen consumption (VO2) estimation using a 6-axis motion sensor (3-axis accelerometer and 3-axis gyroscope) for people playing sports with diverse intensities. The VO2 estimated with a small motion sensor can be used to calculate the energy expenditure; however, its accuracy depends on the intensities of various types of activities. In order to achieve high accuracy over a wide range of intensities, we employ an estimation framework that first classifies activities with a simple machine-learning based classification algorithm. We prepare different coefficients of a linear regression model for different types of activities, which are determined with training data obtained by experiments. The best-suited model is used for each type of activity when VO2 is estimated. The accuracy of the employed framework depends on the trade-off between the degradation due to classification errors and the improvement brought by applying a separate, optimum model to VO2 estimation. Taking this trade-off into account, we evaluate the accuracy of the employed estimation framework by using a set of experimental data consisting of VO2 and motion data of people over a wide range of exercise intensities, which were measured by a VO2 meter and motion sensor, respectively. Our numerical results show that the employed framework can improve the estimation accuracy in comparison to a reference method that uses a common regression model for all types of activities.
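
    A minimal sketch of the classify-then-regress framework follows; the threshold classifier, the intensity feature, the activity classes and the regression coefficients are hypothetical placeholders, not the paper's trained models.

    ```python
    # Sketch of the classify-then-regress idea (feature and coefficients are
    # hypothetical, not the paper's trained models).
    import numpy as np

    # one linear regression model per activity class: VO2 ~ a * intensity_feature + b
    REGRESSION_COEFFS = {
        "rest":    (2.0, 3.5),
        "walking": (8.0, 5.0),
        "running": (14.0, 7.0),
    }

    def classify_activity(acc_magnitude_std):
        """Trivial threshold classifier standing in for the machine-learning step."""
        if acc_magnitude_std < 0.05:
            return "rest"
        if acc_magnitude_std < 0.5:
            return "walking"
        return "running"

    def estimate_vo2(acc_window, gyro_window):
        """acc_window, gyro_window: (n, 3) arrays from the 6-axis motion sensor."""
        acc_mag = np.linalg.norm(acc_window, axis=1)
        intensity = acc_mag.std() + 0.1 * np.linalg.norm(gyro_window, axis=1).mean()
        activity = classify_activity(acc_mag.std())
        a, b = REGRESSION_COEFFS[activity]       # model selected by the classifier
        return activity, a * intensity + b       # VO2 in ml/kg/min (illustrative units)

    rng = np.random.default_rng(2)
    acc = rng.standard_normal((100, 3)) * 0.3 + [0, 0, 1]   # walking-like window in g
    gyro = rng.standard_normal((100, 3)) * 0.2
    print(estimate_vo2(acc, gyro))
    ```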

  1. DNA Encoding Training Using 3D Gesture Interaction.

    PubMed

    Nicola, Stelian; Handrea, Flavia-Laura; Crişan-Vida, Mihaela; Stoicu-Tivadar, Lăcrămioara

    2017-01-01

    The work described in this paper summarizes the development process and presents the results of a human genetics training application, studying the 20 amino acids formed by the combination of 3 nucleotides of DNA and targeting mainly medical and bioinformatics students. Currently, applications in this domain that use human gestures recognized by the Leap Motion sensor are employed for controlling molecules, for learning from the Mendeleev table, or for visualizing the animated reactions of specific molecules with water. The novelty of the current application consists in using the Leap Motion sensor to create new gestures for application control and in creating a tag-based algorithm corresponding to each amino acid, depending on the type and position in the 3D virtual space of the 4 nucleotides of DNA. The team proposes a 3D application based on the Unity editor and the Leap Motion sensor in which the user is free to form different combinations of the 20 amino acids. The results confirm that this new type of study of medicine/biochemistry using the Leap Motion sensor for handling amino acids is suitable for students. The application is original and interactive, and users can create their own amino acid structures in a 3D-like environment, which they could not do otherwise with traditional pen and paper.

  2. Drift Reduction in Pedestrian Navigation System by Exploiting Motion Constraints and Magnetic Field.

    PubMed

    Ilyas, Muhammad; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-09-09

    Pedestrian navigation systems (PNS) using foot-mounted MEMS inertial sensors use zero-velocity updates (ZUPTs) to reduce drift in navigation solutions and estimate inertial sensor errors. However, it is well known that ZUPTs cannot reduce all errors, especially as heading error is not observable. Hence, the position estimates tend to drift even when cyclic ZUPTs are applied in the update steps of the Extended Kalman Filter (EKF). This motivates the use of other motion constraints for pedestrian gait and of any other valuable heading-correction information that is available. In this paper, we exploit two more motion-constraint scenarios of pedestrian gait: (1) walking along straight paths; (2) standing still for a long time. It is observed that these motion constraints (called a "virtual sensor"), though considerably reducing drift in the PNS, still need an absolute heading reference. One common absolute heading estimation sensor is the magnetometer, which senses the Earth's magnetic field so that the true heading angle can be calculated. However, magnetometers are susceptible to magnetic distortions, especially in indoor environments. In this work, an algorithm called magnetic anomaly detection (MAD) and compensation is designed by incorporating only healthy magnetometer data in the EKF update step, to reduce drift in the zero-velocity updated INS. Experiments are conducted in GPS-denied and magnetically distorted environments to validate the proposed algorithms.
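
    The magnetic-anomaly-detection step can be sketched as a simple gating test: a magnetometer sample is passed to the EKF heading update only if its field magnitude stays close to the expected local Earth-field norm. The reference norm, tolerance and heading convention below are assumptions.

    ```python
    # Sketch of magnetic anomaly detection (MAD): use a magnetometer sample for
    # the EKF heading update only if its magnitude is close to the expected local
    # Earth-field norm. Reference norm and tolerance are assumptions.
    import math

    EARTH_FIELD_NORM_UT = 50.0     # typical local field strength in microtesla (site-dependent)
    TOLERANCE_UT = 5.0             # accept samples within +/- 5 uT of the reference

    def is_magnetometer_healthy(mx, my, mz):
        return abs(math.sqrt(mx*mx + my*my + mz*mz) - EARTH_FIELD_NORM_UT) < TOLERANCE_UT

    def heading_from_mag(mx, my):
        """Heading (rad) assuming the sensor is level; sign convention is illustrative."""
        return math.atan2(-my, mx)

    # usage: only healthy samples would be passed to the EKF update step
    samples = [(30.0, 40.0, 0.0),      # |m| = 50 uT -> healthy
               (80.0, 10.0, 5.0)]      # distorted (e.g. near a steel structure) -> rejected
    for m in samples:
        print(is_magnetometer_healthy(*m), round(heading_from_mag(m[0], m[1]), 3))
    ```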

  3. A simple webcam-based approach for the measurement of rodent locomotion and other behavioural parameters.

    PubMed

    Tort, Adriano B L; Neto, Waldemar P; Amaral, Olavo B; Kazlauckas, Vanessa; Souza, Diogo O; Lara, Diogo R

    2006-10-15

    We hereby describe a simple and inexpensive approach to evaluate the position and locomotion of rodents in an arena. The system is based on webcam recording of animal behaviour with subsequent analysis in customized software. Based on black/white differentiation, it provides rapid evaluation of animal position over a period of time, and can be used in a myriad of behavioural tasks in which locomotion, velocity or place preference are variables of interest. A brief review of the results obtained so far with this system and a discussion of other possible applications in behavioural neuroscience are also included. Such a system can be easily implemented in most laboratories and can significantly reduce the time and costs involved in behavioural analysis, especially in developing countries.
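
    A minimal sketch of the black/white differentiation idea follows: threshold each grayscale frame, take the centroid of the dark pixels as the animal's position, and accumulate the path length. The synthetic frames and threshold are illustrative; real frames would come from the webcam.

    ```python
    # Sketch: threshold a grayscale frame and take the centroid of the dark
    # (animal) pixels as its position; frames here are synthetic.
    import numpy as np

    def animal_position(gray_frame, dark_threshold=60):
        """gray_frame: 2D uint8 array; returns (row, col) centroid of dark pixels or None."""
        mask = gray_frame < dark_threshold
        if not mask.any():
            return None
        rows, cols = np.nonzero(mask)
        return rows.mean(), cols.mean()

    def path_length(positions):
        """Total locomotion (in pixels) from a sequence of centroids."""
        pts = np.array([p for p in positions if p is not None])
        return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

    # synthetic arena: white background with a dark 'rodent' blob that moves
    frames = []
    for x in (20, 25, 31):
        frame = np.full((120, 160), 255, dtype=np.uint8)
        frame[50:60, x:x + 10] = 10
        frames.append(frame)
    positions = [animal_position(f) for f in frames]
    print(positions, path_length(positions))
    ```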

  4. Webcam autofocus mechanism used as a delay line for the characterization of femtosecond pulses.

    PubMed

    Castro-Marín, Pablo; Kapellmann-Zafra, Gabriel; Garduño-Mejía, Jesús; Rosete-Aguilar, Martha; Román-Moreno, Carlos J

    2015-08-01

    In this work, we present an electromagnetic focusing mechanism (EFM), from a commercial webcam, implemented as a delay line of a femtosecond laser pulse characterization system. The characterization system consists of a second-order autocorrelator based on two-photon-absorption detection. The results presented here were obtained for two different home-made femtosecond oscillators: Ti:sapph @ 820 nm and highly chirped pulses generated with an Erbium Doped Fiber @ 1550 nm. The EFM applied as a delay line represents an excellent alternative due to its performance in terms of stability, resolution, and long scan range of up to 3 ps. Due to its low power consumption, the device can be connected through the Universal Serial Bus (USB) port. Details of components, schematics of electronic controls, and detection systems are presented.

  5. Webcam autofocus mechanism used as a delay line for the characterization of femtosecond pulses

    NASA Astrophysics Data System (ADS)

    Castro-Marín, Pablo; Kapellmann-Zafra, Gabriel; Garduño-Mejía, Jesús; Rosete-Aguilar, Martha; Román-Moreno, Carlos J.

    2015-08-01

    In this work, we present an electromagnetic focusing mechanism (EFM), from a commercial webcam, implemented as a delay line of a femtosecond laser pulse characterization system. The characterization system consists of a second-order autocorrelator based on two-photon-absorption detection. The results presented here were obtained for two different home-made femtosecond oscillators: Ti:sapph @ 820 nm and highly chirped pulses generated with an Erbium Doped Fiber @ 1550 nm. The EFM applied as a delay line represents an excellent alternative due to its performance in terms of stability, resolution, and long scan range of up to 3 ps. Due to its low power consumption, the device can be connected through the Universal Serial Bus (USB) port. Details of components, schematics of electronic controls, and detection systems are presented.

  6. A robust vision-based sensor fusion approach for real-time pose estimation.

    PubMed

    Assa, Akbar; Janabi-Sharifi, Farrokh

    2014-02-01

    Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
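
    The measurement-fusion building block behind such Kalman-based schemes can be sketched as inverse-covariance weighting of two noisy camera pose measurements (the static special case of a Kalman update); the poses and covariances below are illustrative and not the paper's formulation.

    ```python
    # Minimal sketch: combine two noisy camera pose measurements by
    # inverse-covariance weighting. Values are illustrative.
    import numpy as np

    def fuse_measurements(z1, R1, z2, R2):
        """z: pose vectors; R: measurement covariance matrices."""
        info = np.linalg.inv(R1) + np.linalg.inv(R2)
        fused_cov = np.linalg.inv(info)
        fused = fused_cov @ (np.linalg.inv(R1) @ z1 + np.linalg.inv(R2) @ z2)
        return fused, fused_cov

    # camera 1 is more precise in x, camera 2 in y (e.g. due to viewing geometry)
    z1, R1 = np.array([0.10, 0.22]), np.diag([0.01, 0.09])
    z2, R2 = np.array([0.14, 0.20]), np.diag([0.09, 0.01])
    pose, cov = fuse_measurements(z1, R1, z2, R2)
    print(pose, np.diag(cov))
    ```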

  7. Sea ice motion from low-resolution satellite sensors: An alternative method and its validation in the Arctic

    NASA Astrophysics Data System (ADS)

    Lavergne, T.; Eastwood, S.; Teffah, Z.; Schyberg, H.; Breivik, L.-A.

    2010-10-01

    The retrieval of sea ice motion with the Maximum Cross-Correlation (MCC) method from low-resolution (10-15 km) spaceborne imaging sensors is challenged by a dominating quantization noise as the time span of displacement vectors is shortened. To allow investigating shorter displacements from these instruments, we introduce an alternative sea ice motion tracking algorithm that builds on the MCC method but relies on a continuous optimization step for computing the motion vector. The prime effect of this method is to effectively dampen the quantization noise, an artifact of the MCC. It allows for retrieving spatially smooth 48 h sea ice motion vector fields in the Arctic. Strategies to detect and correct erroneous vectors as well as to optimally merge several polarization channels of a given instrument are also described. A test processing chain is implemented and run with several active and passive microwave imagers (Advanced Microwave Scanning Radiometer-EOS (AMSR-E), Special Sensor Microwave Imager, and Advanced Scatterometer) during three Arctic autumn, winter, and spring seasons. Ice motion vectors are collocated to and compared with GPS positions of in situ drifters. Error statistics are shown to be ranging from 2.5 to 4.5 km (standard deviation for components of the vectors) depending on the sensor, without significant bias. We discuss the relative contribution of measurement and representativeness errors by analyzing monthly validation statistics. The 37 GHz channels of the AMSR-E instrument allow for the best validation statistics. The operational low-resolution sea ice drift product of the EUMETSAT OSI SAF (European Organisation for the Exploitation of Meteorological Satellites Ocean and Sea Ice Satellite Application Facility) is based on the algorithms presented in this paper.
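
    The core MCC step can be sketched as follows: slide a patch from the earlier image over a search window in the later image and keep the integer offset with the highest normalised cross-correlation. The continuous (sub-pixel) optimisation that the paper adds on top is not reproduced; image sizes and the synthetic drift below are assumptions.

    ```python
    # Sketch of the maximum cross-correlation (MCC) step: find the integer offset
    # of a patch between two images by maximising normalised correlation.
    import numpy as np

    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0

    def mcc_displacement(img0, img1, top, left, size=16, search=8):
        """Best (drow, dcol) of the size x size patch at (top, left) between img0 and img1."""
        patch = img0[top:top + size, left:left + size]
        best, best_c = (0, 0), -np.inf
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                r, c = top + dr, left + dc
                if r < 0 or c < 0 or r + size > img1.shape[0] or c + size > img1.shape[1]:
                    continue
                cand = ncc(patch, img1[r:r + size, c:c + size])
                if cand > best_c:
                    best_c, best = cand, (dr, dc)
        return best, best_c

    rng = np.random.default_rng(3)
    img0 = rng.random((64, 64))
    img1 = np.roll(img0, shift=(3, -2), axis=(0, 1))   # simulate a (3, -2) pixel drift
    print(mcc_displacement(img0, img1, top=24, left=24))
    ```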

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edmunds, D; Donovan, E

    Purpose: To determine whether the Microsoft Kinect Version 2 (Kinect v2), a commercial off-the-shelf (COTS) depth sensor designed for entertainment purposes, was robust to the radiotherapy treatment environment and could be suitable for monitoring of voluntary breath-hold compliance. This could complement current visual monitoring techniques and be useful for heart-sparing left breast radiotherapy. Methods: In-house software to control Kinect v2 sensors, and capture output information, was developed using the free Microsoft software development kit and the Cinder creative coding C++ library. Each sensor was used with a 12 m USB 3.0 active cable. A solid water block was used as the object. The depth accuracy and precision of the sensors were evaluated by comparing the Kinect-reported distance to the object with a precision laser measurement across a distance range of 0.6 m to 2.0 m. The object was positioned on a high-precision programmable motion platform and moved in two programmed motion patterns, and the Kinect-reported distance was logged. Robustness to the radiation environment was tested by repeating all measurements with a linear accelerator operating over a range of pulse repetition frequencies (6 Hz to 400 Hz) and dose rates of 50 to 1500 monitor units (MU) per minute. Results: The complex, consistent relationship between true and measured distance was unaffected by the radiation environment, as was the ability to detect motion. Sensor precision was < 1 mm and the accuracy between 1.3 mm and 1.8 mm when a distance correction was applied. Both motion patterns were tracked successfully with a root mean squared error (RMSE) of 1.4 and 1.1 mm, respectively. Conclusion: Kinect v2 sensors are capable of tracking pre-programmed motion patterns with an accuracy of <2 mm and appear robust to the radiotherapy treatment environment. A clinical trial using the Kinect v2 sensor for monitoring voluntary breath hold has ethical approval and is open to recruitment. The authors are supported by a National Institute of Health Research (NIHR) Career Development Fellowship (CDF-2013-06-005). Microsoft Corporation donated three sensors. The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the National Institute for Health Research or the Department of Health.

  9. Inertial orientation tracker having automatic drift compensation for tracking human head and other similarly sized body

    NASA Technical Reports Server (NTRS)

    Foxlin, Eric M. (Inventor)

    2000-01-01

    A self contained sensor apparatus generates a signal that corresponds to at least two of the three orientational aspects of yaw, pitch and roll of a human-scale body, relative to an external reference frame. A sensor generates first sensor signals that correspond to rotational accelerations or rates of the body about certain body axes. The sensor may be mounted to the body. Coupled to the sensor is a signal processor for generating orientation signals relative to the external reference frame that correspond to the angular rate or acceleration signals. The first sensor signals are impervious to interference from electromagnetic, acoustic, optical and mechanical sources. The sensors may be rate sensors. An integrator may integrate the rate signal over time. A drift compensator is coupled to the rate sensors and the integrator. The drift compensator may include a gravitational tilt sensor or a magnetic field sensor or both. A verifier periodically measures the orientation of the body by a means different from the drift sensitive rate sensors. The verifier may take into account characteristic features of human motion, such as stillness periods. The drift compensator may be, in part, a Kalman filter, which may utilize statistical data about human head motion.

  10. Inertial orientation tracker having gradual automatic drift compensation for tracking human head and other similarly sized body

    NASA Technical Reports Server (NTRS)

    Foxlin, Eric M. (Inventor)

    2002-01-01

    A self contained sensor apparatus generates a signal that corresponds to at least two of the three orientational aspects of yaw, pitch and roll of a human-scale body, relative to an external reference frame. A sensor generates first sensor signals that correspond to rotational accelerations or rates of the body about certain body axes. The sensor may be mounted to the body. Coupled to the sensor is a signal processor for generating orientation signals relative to the external reference frame that correspond to the angular rate or acceleration signals. The first sensor signals are impervious to interference from electromagnetic, acoustic, optical and mechanical sources. The sensors may be rate sensors. An integrator may integrate the rate signal over time. A drift compensator is coupled to the rate sensors and the integrator. The drift compensator may include a gravitational tilt sensor or a magnetic field sensor or both. A verifier periodically measures the orientation of the body by a means different from the drift sensitive rate sensors. The verifier may take into account characteristic features of human motion, such as stillness periods. The drift compensator may be, in part, a Kalman filter, which may utilize statistical data about human head motion.

  11. Inertial orientation tracker apparatus method having automatic drift compensation for tracking human head and other similarly sized body

    NASA Technical Reports Server (NTRS)

    Foxlin, Eric M. (Inventor)

    1998-01-01

    A self contained sensor apparatus generates a signal that corresponds to at least two of the three orientational aspects of yaw, pitch and roll of a human-scale body, relative to an external reference frame. A sensor generates first sensor signals that correspond to rotational accelerations or rates of the body about certain body axes. The sensor may be mounted to the body. Coupled to the sensor is a signal processor for generating orientation signals relative to the external reference frame that correspond to the angular rate or acceleration signals. The first sensor signals are impervious to interference from electromagnetic, acoustic, optical and mechanical sources. The sensors may be rate sensors. An integrator may integrate the rate signal over time. A drift compensator is coupled to the rate sensors and the integrator. The drift compensator may include a gravitational tilt sensor or a magnetic field sensor or both. A verifier periodically measures the orientation of the body by a means different from the drift sensitive rate sensors. The verifier may take into account characteristic features of human motion, such as stillness periods. The drift compensator may be, in part, a Kalman filter, which may utilize statistical data about human head motion.

  12. Inertial orientation tracker apparatus having automatic drift compensation for tracking human head and other similarly sized body

    NASA Technical Reports Server (NTRS)

    Foxlin, Eric M. (Inventor)

    1997-01-01

    A self contained sensor apparatus generates a signal that corresponds to at least two of the three orientational aspects of yaw, pitch and roll of a human-scale body, relative to an external reference frame. A sensor generates first sensor signals that correspond to rotational accelerations or rates of the body about certain body axes. The sensor may be mounted to the body. Coupled to the sensor is a signal processor for generating orientation signals relative to the external reference frame that correspond to the angular rate or acceleration signals. The first sensor signals are impervious to interference from electromagnetic, acoustic, optical and mechanical sources. The sensors may be rate sensors. An integrator may integrate the rate signal over time. A drift compensator is coupled to the rate sensors and the integrator. The drift compensator may include a gravitational tilt sensor or a magnetic field sensor or both. A verifier periodically measures the orientation of the body by a means different from the drift sensitive rate sensors. The verifier may take into account characteristic features of human motion, such as stillness periods. The drift compensator may be, in part, a Kalman filter, which may utilize statistical data about human head motion.

  13. Measurement of six-degree-of-freedom planar motions by using a multiprobe surface encoder

    NASA Astrophysics Data System (ADS)

    Li, Xinghui; Shimizu, Yuki; Ito, Takeshi; Cai, Yindi; Ito, So; Gao, Wei

    2014-12-01

    A multiprobe surface encoder for optical metrology of six-degree-of-freedom (six-DOF) planar motions is presented. The surface encoder is composed of an XY planar scale grating with identical microstructures in the X- and Y-axes and an optical sensor head. In the optical sensor head, three parallel laser beams were used as laser probes. After being divided by a beam splitter, the three laser probes were projected onto the scale grating and a reference grating with identical microstructures, respectively. For each probe, the first-order positive and negative diffraction beams along the X- and Y-directions from the scale grating and from the reference grating were superimposed on each other, generating four interference signals. Three-DOF translational motions of the scale grating Δx, Δy, and Δz can be obtained simultaneously from the interference signals of each probe. Three-DOF angular error motions θX, θY, and θZ can also be calculated simultaneously from the differences between the displacement output variations and the geometric relationship among the three probes. A prototype optical sensor head was designed, constructed, and evaluated. Experimental results verified that this surface encoder could provide measurement resolutions at the subnanometer level and better than 0.1 arc sec for the three-DOF translational motions and three-DOF angular error motions, respectively.

  14. Development of a Shipboard Remote Control and Telemetry Experimental System for Large-Scale Model’s Motions and Loads Measurement in Realistic Sea Waves

    PubMed Central

    Jiao, Jialong; Ren, Huilong; Adenya, Christiaan Adika; Chen, Chaohe

    2017-01-01

    Wave-induced motion and load responses are important criteria for ship performance evaluation. Physical experiments have long been an indispensable tool in the prediction of a ship's navigation state, speed, motions, accelerations, sectional loads and wave impact pressure. Currently, the majority of these experiments are conducted in a laboratory tank environment, where the wave environments are different from realistic sea waves. In this paper, a laboratory tank testing system for ship motion and load measurement is reviewed and reported first. Then, a novel large-scale model measurement technique is developed based on the laboratory testing foundations to obtain accurate motion and load responses of ships in realistic sea conditions. For this purpose, a suite of advanced remote control and telemetry experimental systems was developed in-house to allow for the implementation of large-scale model seakeeping measurement at sea. The experimental system includes a series of sensors, e.g., the Global Position System/Inertial Navigation System (GPS/INS) module, course top, optical fiber sensors, strain gauges, pressure sensors and accelerometers. The developed measurement system was tested by field experiments in coastal seas, which indicates that the proposed large-scale model testing scheme is capable and feasible. Meaningful data including ocean environment parameters, ship navigation state, motions and loads were obtained through the sea trial campaign. PMID:29109379

  15. Simulation on measurement of five-DOF motion errors of high precision spindle with cylindrical capacitive sensor

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Wang, Wen; Xiang, Kui; Lu, Keqing; Fan, Zongwei

    2015-02-01

    This paper describes a novel cylindrical capacitive sensor (CCS) to measure the five degree-of-freedom (DOF) motion errors of a high-precision spindle. The operating principle and mathematical models of the CCS are presented. Using Ansoft Maxwell software to calculate the different capacitances in different configurations, structural parameters of the end-face electrode are then investigated. Radial, axial and tilt motions are also simulated by comparing the given displacements with the simulated values, respectively. It is found that the proposed CCS has a high accuracy for measuring radial motion error when the average eccentricity is about 15 μm. Besides, the maximum relative error of the axial displacement is 1.3% when the axial motion is within [0.7, 1.3] mm, and the maximum relative error of the tilt displacement is 1.6% as the rotor tilts around a single axis within [-0.6, 0.6]°. Finally, the feasibility of the CCS for measuring five-DOF motion errors is verified through simulation and analysis.

  16. Rotational motions for teleseismic surface waves

    NASA Astrophysics Data System (ADS)

    Lin, Chin-Jen; Huang, Han-Pang; Pham, Nguyen Dinh; Liu, Chun-Chi; Chi, Wu-Cheng; Lee, William H. K.

    2011-08-01

    We report the findings for the first teleseismic six degree-of-freedom (6-DOF) measurements including three components of rotational motions recorded by a sensitive rotation-rate sensor (model R-1, made by eentec) and three components of translational motions recorded by a traditional seismometer (STS-2) at the NACB station in Taiwan. The consistent observations in waveforms of rotational motions and translational motions in sections of Rayleigh and Love waves are presented in reference to the analytical solution for these waves in a half space of Poisson solid. We show that additional information (e.g., Rayleigh wave phase velocity, shear wave velocity of the surface layer) might be exploited from six degree-of-freedom recordings of teleseismic events at only one station. We also find significant errors in the translational records of these teleseismic surface waves due to the sensitivity of inertial translation sensors (seismometers) to rotational motions. The result suggests that the effects of such errors need to be counted in surface wave inversions commonly used to derive earthquake source parameters and Earth structure.

  17. Samba: a real-time motion capture system using wireless camera sensor networks.

    PubMed

    Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai

    2014-03-20

    There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments.

  18. Samba: A Real-Time Motion Capture System Using Wireless Camera Sensor Networks

    PubMed Central

    Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai

    2014-01-01

    There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments. PMID:24658618

  19. 49 CFR 234.265 - Timing relays and timing devices.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GRADE CROSSING SIGNAL SYSTEM SAFETY AND STATE ACTION PLANS... devices which perform internal functions associated with motion detectors, motion sensors, and grade...

  20. Gyroscope-reduced inertial navigation system for flight vehicle motion estimation

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Xiao, Lu

    2017-01-01

    In this paper, a novel configuration of strategically distributed accelerometer sensors, with the aid of one gyro, to infer a flight vehicle's angular motion is presented. The MEMS accelerometer and gyro sensors are integrated to form a gyroscope-reduced inertial measurement unit (GR-IMU). The motivation for the gyro-aided accelerometer array is to have direct measurements of angular rates, which is an improvement over the traditional gyroscope-free inertial system that employs only direct measurements of specific force. Some technical issues regarding error calibration of the accelerometers and gyro in the GR-IMU are put forward. The GR-IMU based inertial navigation system can be used to find a complete attitude solution for flight vehicle motion estimation. Results of numerical simulation are given to illustrate the effectiveness of the proposed configuration. The gyroscope-reduced inertial navigation system based on distributed accelerometer sensors can be developed into a cost-effective solution for a fast-reaction, MEMS-based motion capture system. Future work will include aiding from external navigation references (e.g., GPS) to improve long-duration mission performance.

  1. The Effect of Flexible Pavement Mechanics on the Accuracy of Axle Load Sensors in Vehicle Weigh-in-Motion Systems

    PubMed Central

    Rys, Dawid

    2017-01-01

    Weigh-in-Motion systems are tools to protect road pavements from the adverse phenomenon of vehicle overloading. However, the effectiveness of these systems can be significantly increased by improving weighing accuracy, which is currently insufficient for direct enforcement against overloaded vehicles. Field tests show that the accuracy of Weigh-in-Motion axle load sensors installed in flexible (asphalt) pavements depends on pavement temperature and vehicle speed. Although this is a known phenomenon, it has not been explained yet. The aim of our study is to fill this gap in the knowledge. The explanation of this phenomenon presented in the paper is based on pavement/sensor mechanics and the application of multilayer elastic half-space theory. We show that differences in the distribution of vertical and horizontal stresses in the pavement structure are the cause of vehicle weight measurement errors. These studies are important for Weigh-in-Motion systems for direct enforcement and will help to improve the accuracy of the weighing results. PMID:28880215

  2. The Effect of Flexible Pavement Mechanics on the Accuracy of Axle Load Sensors in Vehicle Weigh-in-Motion Systems.

    PubMed

    Burnos, Piotr; Rys, Dawid

    2017-09-07

    Weigh-in-Motion systems are tools to protect road pavements from the adverse phenomenon of vehicle overloading. However, the effectiveness of these systems can be significantly increased by improving weighing accuracy, which is currently insufficient for direct enforcement against overloaded vehicles. Field tests show that the accuracy of Weigh-in-Motion axle load sensors installed in flexible (asphalt) pavements depends on pavement temperature and vehicle speed. Although this is a known phenomenon, it has not been explained yet. The aim of our study is to fill this gap in the knowledge. The explanation of this phenomenon presented in the paper is based on pavement/sensor mechanics and the application of multilayer elastic half-space theory. We show that differences in the distribution of vertical and horizontal stresses in the pavement structure are the cause of vehicle weight measurement errors. These studies are important for Weigh-in-Motion systems for direct enforcement and will help to improve the accuracy of the weighing results.

  3. Sensing of minute airflow motions near walls using pappus-type nature-inspired sensors

    PubMed Central

    Mikulich, Vladimir

    2017-01-01

    This work describes the development and use of pappus-like structures as sensitive sensors to detect minute air-flow motions. We made such sensors from pappi taken from nature-grown seeds, whose filiform hairs have a length scale suitable for the study of large-scale turbulent convection flows. The stem with the pappus on top is fixed to an elastic membrane on the wall and tilts under wind load in proportion to the velocity magnitude in the direction of the wind, similar to the biological sensory hairs found in spiders; herein, however, the sensory hair has multiple filiform protrusions at the tip. As the sensor response is proportional to the drag on the tip and a low mass ensures a larger bandwidth, lightweight pappus structures similar to those found in nature with documented large drag are useful for improving the response of artificial sensors. The pappus of a dandelion represents such a structure, which has evolved to maximize wind-driven dispersion; it is therefore used herein as the head of our sensor. Because of its multiple hairs arranged radially around the stem, it generates uniform drag for all wind directions. While still being permeable to the flow, the hundreds of individual hairs on the tip of the sensor head maximize the drag and minimize the influence of pressure gradients or shear-induced lift forces on the sensor response, as they occur in non-permeable protrusions. In addition, the flow disturbance by the sensor itself is limited. Optical recording of the head motion allows continuous remote monitoring of the flow fluctuations in direction and magnitude. Application is shown for the measurement of a reference flow under isothermal conditions to detect the early occurrence of instabilities. PMID:28658272

  4. Flexible Piezoelectric Sensor-Based Gait Recognition.

    PubMed

    Cha, Youngsu; Kim, Hojoon; Kim, Doik

    2018-02-05

    Most motion recognition research has required tight-fitting suits for precise sensing. However, tight-suit systems have difficulty adapting to real applications, because people normally wear loose clothes. In this paper, we propose a gait recognition system with flexible piezoelectric sensors in loose clothing. The gait recognition system does not directly sense lower-body angles. It does, however, detect the transition between standing and walking. Specifically, we use the signals from the flexible sensors attached to the knee and hip parts on loose pants. We detect the periodic motion component using the discrete time Fourier series from the signal during walking. We adapt the gait detection method to a real-time patient motion and posture monitoring system. In the monitoring system, the gait recognition operates well. Finally, we test the gait recognition system with 10 subjects, for which the proposed system successfully detects walking with a success rate over 93 %.
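
    The periodic-component detection can be sketched with a discrete Fourier transform: declare "walking" when a dominant spectral peak falls in a typical gait band. The sample rate, band and peak-ratio threshold below are assumptions, not the paper's values.

    ```python
    # Sketch: flag 'walking' if a dominant spectral peak lies in a typical gait
    # band; band and threshold are assumptions.
    import numpy as np

    def is_walking(signal, fs=50.0, band=(0.5, 3.0), peak_ratio=5.0):
        """signal: 1D sensor samples; fs: sample rate in Hz."""
        x = signal - np.mean(signal)
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        if not in_band.any():
            return False
        peak = spectrum[in_band].max()
        background = np.median(spectrum[1:]) + 1e-12    # skip the DC bin
        return peak / background > peak_ratio

    t = np.arange(0, 10, 1 / 50.0)
    walking = np.sin(2 * np.pi * 1.2 * t) + 0.2 * np.random.default_rng(4).standard_normal(t.size)
    standing = 0.2 * np.random.default_rng(5).standard_normal(t.size)
    print(is_walking(walking), is_walking(standing))
    ```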

  5. Design and development of LED-based irregular leather area measuring machine

    NASA Astrophysics Data System (ADS)

    Adil, Rehan; Khan, Sarah Jamal

    2012-01-01

    Using an optical sensor array, a precision motion control system in a conveyer follows an irregularly shaped leather sheet to measure its surface area. In operation, the irregularly shaped leather sheet passes on the conveyer belt and the optical sensor array detects the leather sheet edge. In this way the outside curvature of the leather sheet is detected and then fed to the controller to measure its approximate area. Such a system can measure irregular shapes by neglecting rounded corners, ellipses, etc. To minimize the error in calculating the surface area of an irregular curve with the above-mentioned system, the motion control system only requires the footprint of the optical sensor to be small and the distance between the sensors to be minimized. In the proposed technique, surface area measurement of the irregularly shaped leather sheet is done by defining the velocity and detecting the position of the move. The motion controller takes the information and creates the necessary edge profile on a point-to-point basis. As a result, the irregular shape of the leather sheet is mapped and then fed to the controller to calculate the surface area.
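
    A minimal sketch of the area computation: at each sampling instant, multiply the covered width reported by the optical sensor array by the belt travel since the previous sample and accumulate the product. Sensor spacing, belt speed and sampling interval below are illustrative values.

    ```python
    # Sketch: accumulate (covered width) x (belt travel per sample) over time.
    def leather_area(sensor_frames, sensor_spacing_m=0.005, belt_speed_mps=0.2, dt_s=0.01):
        """sensor_frames: list of per-sample boolean lists (True = sensor covered)."""
        step_length = belt_speed_mps * dt_s            # belt travel between samples
        area = 0.0
        for frame in sensor_frames:
            covered_width = sum(frame) * sensor_spacing_m
            area += covered_width * step_length
        return area

    # usage: a rectangular test piece 10 sensors wide passing under the array for 100 samples
    frames = [[True] * 10 + [False] * 22 for _ in range(100)]
    print(leather_area(frames))   # 10*0.005 m wide * 100*0.002 m long = 0.01 m^2
    ```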

  6. Microsoft Kinect Sensor Evaluation

    NASA Technical Reports Server (NTRS)

    Billie, Glennoah

    2011-01-01

    My summer project evaluates the Kinect game sensor input/output and its suitability to perform as part of a human interface for a spacecraft application. The primary objective is to evaluate, understand, and communicate the Kinect system's ability to sense and track fine (human) position and motion. The project will analyze the performance characteristics and capabilities of this game system hardware and its applicability for gross and fine motion tracking. The software development kit for the Kinect was also investigated and some experimentation has begun to understand its development environment. To better understand the software development of the Kinect game sensor, research in hacking communities has brought a better understanding of the potential for a wide range of personal computer (PC) application development. The project also entails the disassembly of the Kinect game sensor. This analysis would involve disassembling a sensor, photographing it, and identifying components and describing its operation.

  7. Liquid-Embedded Elastomer Electronics

    NASA Astrophysics Data System (ADS)

    Kramer, Rebecca; Majidi, Carmel; Park, Yong-Lae; Paik, Jamie; Wood, Robert

    2012-02-01

    Hyperelastic sensors are fabricated by embedding a silicone rubber film with microchannels of conductive liquid. In the case of soft tactile sensors, pressing the surface of the elastomer will deform the cross-section of underlying channels and change their electrical resistance. Soft pressure sensors may be employed in a variety of applications. For example, a network of pressure sensors can serve as artificial skin by yielding detailed information about contact pressures. This concept was demonstrated in a hyperelastic keypad, where perpendicular conductive channels form a quasi-planar network within an elastomeric matrix that registers the location, intensity and duration of applied pressure. In a second demonstration, soft curvature sensors were used for joint angle proprioception. Because the sensors are soft and stretchable, they conform to the host without interfering with the natural mechanics of motion. This marked the first use of liquid-embedded elastomer electronics to monitor human or robotic motion. Finally, liquid-embedded elastomers may be implemented as conductors in applications that call for flexible or stretchable circuitry, such as robotic origami.

  8. An Inexpensive Digital Infrared Camera

    ERIC Educational Resources Information Center

    Mills, Allan

    2012-01-01

    Details are given for the conversion of an inexpensive webcam to a camera specifically sensitive to the near infrared (700-1000 nm). Some experiments and practical applications are suggested and illustrated. (Contains 9 figures.)

  9. Design considerations for computationally constrained two-way real-time video communication

    NASA Astrophysics Data System (ADS)

    Bivolarski, Lazar M.; Saunders, Steven E.; Ralston, John D.

    2009-08-01

    Today's video codecs have evolved primarily to meet the requirements of the motion picture and broadcast industries, where high-complexity studio encoding can be utilized to create highly-compressed master copies that are then broadcast one-way for playback using less-expensive, lower-complexity consumer devices for decoding and playback. Related standards activities have largely ignored the computational complexity and bandwidth constraints of wireless or Internet based real-time video communications using devices such as cell phones or webcams. Telecommunications industry efforts to develop and standardize video codecs for applications such as video telephony and video conferencing have not yielded image size, quality, and frame-rate performance that match today's consumer expectations and market requirements for Internet and mobile video services. This paper reviews the constraints and the corresponding video codec requirements imposed by real-time, 2-way mobile video applications. Several promising elements of a new mobile video codec architecture are identified, and more comprehensive computational complexity metrics and video quality metrics are proposed in order to support the design, testing, and standardization of these new mobile video codecs.

  10. Controlling a robot with intention derived from motion.

    PubMed

    Crick, Christopher; Scassellati, Brian

    2010-01-01

    We present a novel, sophisticated intention-based control system for a mobile robot built from an extremely inexpensive webcam and radio-controlled toy vehicle. The system visually observes humans participating in various playground games and infers their goals and intentions through analyzing their spatiotemporal activity in relation to itself and each other, and then builds a coherent narrative out of the succession of these intentional states. Starting from zero information about the room, the rules of the games, or even which vehicle it controls, it learns rich relationships between players, their goals and intentions, probing uncertain situations with its own behavior. The robot is able to watch people playing various playground games, learn the roles and rules that apply to specific games, and participate in the play. The narratives it constructs capture essential information about the observed social roles and types of activity. After watching play for a short while, the system is able to participate appropriately in the games. We demonstrate how the system acts appropriately in scenarios such as chasing, follow-the-leader, and variants of tag. Copyright © 2009 Cognitive Science Society, Inc.

  11. Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras

    PubMed Central

    Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Guo, Yin

    2016-01-01

    The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in fast measurement of large-scale objects or high-speed moving objects. The innovative line scan technology opens up new potentialities owing to the ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained. PMID:27869731

  12. Interaction force and motion estimators facilitating impedance control of the upper limb rehabilitation robot.

    PubMed

    Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Bengoa, Pablo; Jung, Je Hyung

    2017-07-01

    In order to enhance the performance of rehabilitation robots, it is imperative to know both the force and the motion caused by the interaction between user and robot. However, the common direct measurement of both signals through force and motion sensors not only increases the complexity of the system but also impedes its affordability. As an alternative to direct measurement, in this work we present new force and motion estimators for the proper control of the upper-limb rehabilitation Universal Haptic Pantograph (UHP) robot. The estimators are based on the kinematic and dynamic model of the UHP and the use of signals measured by means of common low-cost sensors. In order to demonstrate the effectiveness of the estimators, several experimental tests were carried out. The force and impedance control of the UHP was first implemented by directly measuring the interaction force using accurate extra sensors, and the robot's performance was compared to the case where the proposed estimators replace the directly measured values. The experimental results reveal that the controller based on the estimators has similar performance to that using direct measurement (less than 1 N difference in root mean square error between the two cases), indicating that the proposed force and motion estimators can facilitate the implementation of interactive controllers for the UHP in robot-mediated rehabilitation training.

  13. Wearable carbon nanotube-based fabric sensors for monitoring human physiological performance

    NASA Astrophysics Data System (ADS)

    Wang, Long; Loh, Kenneth J.

    2017-05-01

    A target application of wearable sensors is to detect human motion and to monitor physical activity for improving athletic performance and for delivering better physical therapy. In addition, measuring human vital signals (e.g., respiration rate and body temperature) provides rich information that can be used to assess a subject’s physiological or psychological condition. This study aims to design a multifunctional, wearable, fabric-based sensing system. First, carbon nanotube (CNT)-based thin films were fabricated by spraying. Second, the thin films were integrated with stretchable fabrics to form the fabric sensors. Third, the strain and temperature sensing properties of sensors fabricated using different CNT concentrations were characterized. Furthermore, the sensors were demonstrated to detect human finger bending motions, so as to validate their practical strain sensing performance. Finally, to monitor human respiration, the fabric sensors were integrated with a chest band, which was directly worn by a human subject. Quantification of respiration rates were successfully achieved. Overall, the fabric sensors were characterized by advantages such as flexibility, ease of fabrication, lightweight, low-cost, noninvasiveness, and user comfort.

  14. Accuracy and precision of smartphone applications and commercially available motion sensors in multiple sclerosis

    PubMed Central

    Balto, Julia M; Kinnett-Hopkins, Dominique L

    2016-01-01

    Background There is increased interest in the application of smartphone applications and wearable motion sensors among multiple sclerosis (MS) patients. Objective This study examined the accuracy and precision of common smartphone applications and motion sensors for measuring steps taken by MS patients while walking on a treadmill. Methods Forty-five MS patients (Expanded Disability Status Scale (EDSS) = 1.0–5.0) underwent two 500-step walking trials at comfortable walking speed on a treadmill. Participants wore five motion sensors: the Digi-Walker SW-200 pedometer (Yamax), the UP2 and UP Move (Jawbone), and the Flex and One (Fitbit). The smartphone applications were Health (Apple), Health Mate (Withings), and Moves (ProtoGeo Oy). Results The Fitbit One had the best absolute (mean = 490.6 steps, 95% confidence interval (CI) = 485.6–495.5 steps) and relative accuracy (1.9% error), and absolute (SD = 16.4) and relative precision (coefficient of variation (CV) = 0.0), for the first 500-step walking trial; this was repeated with the second trial. Relative accuracy was correlated with slower walking speed for the first (rs = −.53) and second (rs = −.53) trials. Conclusion The results suggest that the waist-worn Fitbit One is the most precise and accurate sensor for measuring steps when walking on a treadmill, but future research is needed (testing the device across a broader range of disability, at different speeds, and in real-life walking conditions) before inclusion in clinical research and practice with MS patients. PMID:28607720

  15. Accuracy and precision of smartphone applications and commercially available motion sensors in multiple sclerosis.

    PubMed

    Balto, Julia M; Kinnett-Hopkins, Dominique L; Motl, Robert W

    2016-01-01

    There is increased interest in the application of smartphone applications and wearable motion sensors among multiple sclerosis (MS) patients. This study examined the accuracy and precision of common smartphone applications and motion sensors for measuring steps taken by MS patients while walking on a treadmill. Forty-five MS patients (Expanded Disability Status Scale (EDSS) = 1.0-5.0) underwent two 500-step walking trials at comfortable walking speed on a treadmill. Participants wore five motion sensors: the Digi-Walker SW-200 pedometer (Yamax), the UP2 and UP Move (Jawbone), and the Flex and One (Fitbit). The smartphone applications were Health (Apple), Health Mate (Withings), and Moves (ProtoGeo Oy). The Fitbit One had the best absolute (mean = 490.6 steps, 95% confidence interval (CI) = 485.6-495.5 steps) and relative accuracy (1.9% error), and absolute (SD = 16.4) and relative precision (coefficient of variation (CV) = 0.0), for the first 500-step walking trial; this was repeated with the second trial. Relative accuracy was correlated with slower walking speed for the first (rs = -.53) and second (rs = -.53) trials. The results suggest that the waist-worn Fitbit One is the most precise and accurate sensor for measuring steps when walking on a treadmill, but future research is needed (testing the device across a broader range of disability, at different speeds, and in real-life walking conditions) before inclusion in clinical research and practice with MS patients.
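
    The accuracy and precision metrics quoted above (percent error against the 500-step criterion and the coefficient of variation) can be computed as in the sketch below; the per-participant step counts are made up for illustration.

    ```python
    # Sketch of the reported metrics: relative accuracy as percent error against
    # the 500-step criterion, relative precision as coefficient of variation.
    import numpy as np

    def step_count_metrics(device_counts, true_steps=500):
        counts = np.asarray(device_counts, dtype=float)
        mean = counts.mean()
        relative_error_pct = 100.0 * abs(mean - true_steps) / true_steps
        coefficient_of_variation = counts.std(ddof=1) / mean
        return mean, relative_error_pct, coefficient_of_variation

    fitbit_one = [492, 488, 495, 490, 487]          # hypothetical per-participant counts
    print(step_count_metrics(fitbit_one))
    ```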

  16. Eye motion triggered self-powered mechnosensational communication system using triboelectric nanogenerator.

    PubMed

    Pu, Xianjie; Guo, Hengyu; Chen, Jie; Wang, Xue; Xi, Yi; Hu, Chenguo; Wang, Zhong Lin

    2017-07-01

    Mechnosensational human-machine interfaces (HMIs) can greatly extend communication channels between human and external devices in a natural way. The mechnosensational HMIs based on biopotential signals have been developing slowly owing to the low signal-to-noise ratio and poor stability. In eye motions, the corneal-retinal potential caused by hyperpolarization and depolarization is very weak. However, the mechanical micromotion of the skin around the corners of eyes has never been considered as a good trigger signal source. We report a novel triboelectric nanogenerator (TENG)-based micromotion sensor enabled by the coupling of triboelectricity and electrostatic induction. By using an indium tin oxide electrode and two opposite tribomaterials, the proposed flexible and transparent sensor is capable of effectively capturing eye blink motion with a super-high signal level (~750 mV) compared with the traditional electrooculogram approach (~1 mV). The sensor is fixed on a pair of glasses and applied in two real-time mechnosensational HMIs-the smart home control system and the wireless hands-free typing system with advantages of super-high sensitivity, stability, easy operation, and low cost. This TENG-based micromotion sensor is distinct and unique in its fundamental mechanism, which provides a novel design concept for intelligent sensor technique and shows great potential application in mechnosensational HMIs.

  17. Eye motion triggered self-powered mechnosensational communication system using triboelectric nanogenerator

    PubMed Central

    Pu, Xianjie; Guo, Hengyu; Chen, Jie; Wang, Xue; Xi, Yi; Hu, Chenguo; Wang, Zhong Lin

    2017-01-01

    Mechnosensational human-machine interfaces (HMIs) can greatly extend communication channels between human and external devices in a natural way. The mechnosensational HMIs based on biopotential signals have been developing slowly owing to the low signal-to-noise ratio and poor stability. In eye motions, the corneal-retinal potential caused by hyperpolarization and depolarization is very weak. However, the mechanical micromotion of the skin around the corners of eyes has never been considered as a good trigger signal source. We report a novel triboelectric nanogenerator (TENG)–based micromotion sensor enabled by the coupling of triboelectricity and electrostatic induction. By using an indium tin oxide electrode and two opposite tribomaterials, the proposed flexible and transparent sensor is capable of effectively capturing eye blink motion with a super-high signal level (~750 mV) compared with the traditional electrooculogram approach (~1 mV). The sensor is fixed on a pair of glasses and applied in two real-time mechnosensational HMIs—the smart home control system and the wireless hands-free typing system with advantages of super-high sensitivity, stability, easy operation, and low cost. This TENG-based micromotion sensor is distinct and unique in its fundamental mechanism, which provides a novel design concept for intelligent sensor technique and shows great potential application in mechnosensational HMIs. PMID:28782029

  18. Webcam autofocus mechanism used as a delay line for the characterization of femtosecond pulses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castro-Marín, Pablo; Kapellmann-Zafra, Gabriel; Garduño-Mejía, Jesús, E-mail: jesus.garduno@ccadet.unam.mx

    2015-08-15

    In this work, we present an electromagnetic focusing mechanism (EFM), from a commercial webcam, implemented as a delay line of a femtosecond laser pulse characterization system. The characterization system consists of a second-order autocorrelator based on two-photon-absorption detection. The results presented here were obtained for two different home-made femtosecond oscillators: Ti:sapph @ 820 nm and highly chirped pulses generated with an Erbium Doped Fiber @ 1550 nm. The EFM applied as a delay line represents an excellent alternative due to its performance in terms of stability, resolution, and long scan range of up to 3 ps. Due to its low power consumption, the device can be connected through the Universal Serial Bus (USB) port. Details of components, schematics of electronic controls, and detection systems are presented.

  19. Sequential monitoring of beach litter using webcams.

    PubMed

    Kako, Shin'ichiro; Isobe, Atsuhiko; Magome, Shinya

    2010-05-01

    This study attempts to establish a system for the sequential monitoring of beach litter using webcams placed at the Ookushi beach, Goto Islands, Japan, and to determine the temporal variability in the quantities of beach litter every 90 min over a one and a half year period. The time series of the quantities of beach litter, computed by counting pixels with a greater lightness than a threshold value in photographs, shows that litter does not increase monotonically on the beach, but fluctuates mainly on a monthly time scale or less. To investigate what factors influence this variability, the time derivative of the quantity of beach litter is compared with satellite-derived wind speeds. It is found that the beach litter quantities vary largely with winds, but there may be other influencing factors. (c) 2010 Elsevier Ltd. All rights reserved.
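
    A minimal sketch of the pixel-counting step described above (counting pixels whose lightness exceeds a threshold), assuming OpenCV and NumPy; the threshold value and any masking of the beach area are hypothetical, not those of the paper.

        import cv2
        import numpy as np

        def litter_pixel_count(image_path, lightness_threshold=200, beach_mask=None):
            """Count pixels brighter than the threshold in the L channel (HLS space)."""
            img = cv2.imread(image_path)
            if img is None:
                raise IOError("cannot read " + image_path)
            lightness = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)[:, :, 1]
            bright = lightness > lightness_threshold
            if beach_mask is not None:      # restrict to the beach area if a mask is supplied
                bright &= beach_mask.astype(bool)
            return int(np.count_nonzero(bright))

        # calling this on each 90-min photograph yields the litter-quantity time series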

  20. An objective measure of hyperactivity aspects with compressed webcam video.

    PubMed

    Wehrmann, Thomas; Müller, Jörg Michael

    2015-01-01

    Objective measures of physical activity are currently not considered in clinical guidelines for the assessment of hyperactivity in the context of Attention-Deficit/Hyperactivity Disorder (ADHD) due to low and inconsistent associations with clinical ratings, missing age-related norm data and high technical requirements. This pilot study introduces a new objective measure of physical activity using compressed webcam video footage, which should be less affected by age-related variables. A pre-test established a preliminary standard procedure for testing a clinical sample of 39 children aged 6-16 years (21 with a clinical ADHD diagnosis, 18 without). Subjects were filmed for 6 min while solving a standardized cognitive performance task. Our webcam-based video-activity score was compared with two independent video-based movement ratings by students, with ratings of Inattentiveness, Hyperactivity and Impulsivity by clinicians (DCL-ADHS), who gave the clinical ADHD diagnosis, and by parents (FBB-ADHD), and with physical features (age, weight, height, BMI), using mean scores, correlations and multiple regression. Our video-activity score showed a high agreement (r = 0.81) with the video-based movement ratings, but also considerable associations with age-related physical attributes. After controlling for age-related confounders, the video-activity score did not show the expected association with clinicians' or parents' hyperactivity ratings. Our preliminary conclusion is that our video-activity score assesses physical activity but not specific information related to hyperactivity. The general problem of defining and assessing hyperactivity with objective criteria remains.
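
    The video-activity score above is derived from compressed webcam footage; one common proxy for gross movement in such video is the mean absolute inter-frame difference. A minimal sketch under that assumption (not the authors' exact scoring), using OpenCV.

        import cv2
        import numpy as np

        def video_activity_score(video_path):
            """Mean absolute grayscale difference between consecutive frames,
            averaged over the whole clip (higher = more movement)."""
            cap = cv2.VideoCapture(video_path)
            ok, prev = cap.read()
            if not ok:
                raise IOError("cannot read video")
            prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
            diffs = []
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                diffs.append(np.mean(cv2.absdiff(gray, prev)))
                prev = gray
            cap.release()
            return float(np.mean(diffs)) if diffs else 0.0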

  1. Slit-lamp management in contact lenses laboratory classes: learning upgrade with monitor visualization of webcam video recordings

    NASA Astrophysics Data System (ADS)

    Arines, Justo; Gargallo, Ana

    2014-07-01

    Training in the use of the slit lamp has always been difficult for students of the degree in Optics and Optometry. Instruments with associated cameras help a lot in this task: they allow teachers to observe and check whether students evaluate eye health appropriately, to correct errors of use, and to show them how to proceed with a visual demonstration. However, these devices are more expensive than those without an integrated camera connected to a display unit. With the aim of improving students' skills in the management of the slit lamp, we have adapted USB HD webcams (Microsoft Lifecam HD-5000) to the objectives of the slit lamps available in our contact lens laboratory room. The webcams are connected to a PC running Linux Ubuntu 11.0, so the result is a low-cost device. Our experience shows that this simple method has several advantages. It allows us to take good-quality pictures of different eye health conditions; we can record videos of eye evaluations and give demonstrations of the instrument. Besides, it increases interaction between students, because they can see what their colleagues are doing, become aware of mistakes, and help and correct each other. It is a useful tool in the practical exam too. We think that the method supports training in optometric practice and increases the students' confidence without a large outlay.

  2. Motion sensors in mathematics teaching: learning tools for understanding general math concepts?

    NASA Astrophysics Data System (ADS)

    Urban-Woldron, Hildegard

    2015-05-01

    Incorporating technology tools into the mathematics classroom adds a new dimension to the teaching of mathematics concepts and establishes a whole new approach to mathematics learning. In particular, gathering data in a hands-on and real-time manner helps classrooms come alive. The focus of this paper is on bringing forward important mathematics concepts such as functions and rate of change with the motion detector. Findings from the author's studies suggest that the motion detector can be introduced from a very early age and used to enliven classes at any level. Using real-world data to present the main functions invites an experimental approach to mathematics and encourages students to engage actively in their learning. By emphasizing learning experiences with computer-based motion detectors and aiming to involve students in mathematical representations of real-world phenomena, six learning activities, which were developed in previous research studies, are presented. Students use motion sensors to collect physical data that are graphed in real time and then manipulate and analyse them. Because data are presented in an immediately understandable graphical form, students are able to take an active role in their learning by constructing mathematical knowledge from observation of the physical world. By utilizing a predict-observe-explain format, students learn about slope, determining slope, and distance vs. time graphs through motion-filled activities. Furthermore, by exploring the meaning of slope, viewed as the rate of change, students acquire competencies for reading, understanding and interpreting kinematics graphs involving a multitude of mathematical representations. Consequently, the students are empowered to move efficiently among tabular, graphical and symbolic representations to analyse patterns and discover the relationships between different representations of motion. In fact, there is a need for further research to explore how mathematics teachers can integrate motion sensors into their classrooms.

  3. 77 FR 74513 - Certain CMOS Image Sensors and Products Containing Same; Investigations: Terminations...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-14

    .... International Trade Commission. ACTION: Notice. SUMMARY: Notice is hereby given that the U.S. International... unopposed motions to terminate the above-captioned investigation based on a settlement agreement. The..., ``Nokia''); and Research In Motion Ltd., of Ontario, Canada, and Research In Motion Corp., of Irving...

  4. Sensorimotor Assessment and Rehabilitative Apparatus

    DTIC Science & Technology

    2017-10-01

    vestibulo-ocular assessment without measuring eye movements per se. VON uses a head-mounted motion sensor, laptop computer with user...powered laptop computer with extensive processing algorithms. Frequent occlusion of the pupil...The apparatus consists of a laptop computer, mirror galvanometer, back-projected laser target, data acquisition board, rate sensor, and motion-gain

  5. DURIP: Piezoresponse Force Microscope (PFM) with Controlled Environment for Characterization of Flexoelectric Nanostructures

    DTIC Science & Technology

    2015-04-21

    seismic sensors, acoustic sensors, electromagnetic sensors and infrared (IR) detectors are among in-need multimodal sensing of vehicles, personnel, weapons... sensors and detectors largely due to the fact that the nature of piezoelectricity renders both active and passive sensing with fast response, low profile...and low power consumption. Acoustic and seismic sensors are used to ascertain the exact target location, speed, direction of motion, and

  6. OceanCubes: An Affordable Cabled Observatory System for Integrated Long-Term, High Frequency Biological, Chemical, and Physical Measurements for Understanding Coastal Ecosystems

    NASA Astrophysics Data System (ADS)

    Gallager, S. M.

    2016-02-01

    Understanding how coastal ocean processes are forcing and/or responding to ecosystem change is a central premise in current oceanographic research and monitoring. A distributed, high capacity observing capability is necessary to address biological processes requiring high frequency observations on short (turbulence, internal waves), moderate (typhoons), and decadal time scales (e.g., NAO, El Nino-SO, PDO). The current belief that ocean observing systems need to be expensive, large, difficult to deploy and limited in capacity was tested by developing OceanCubes, an end-to-end cabled observational system with real-time telemetry, state-of-the-art sensor packages, a high level of expandability, and diver maintenance to reduce operating costs. A modular approach allows for a scalable system that can grow over time to accommodate budgets. The control volume design allows for measurement of material flux and energy from the water column to the benthos at a rate of s-1. The sensor package is connected by electro-optical cable to shore, providing the capability for internet-based teleoperation by scientists world-wide. The central node provides underwater mateable connections for > 22 serial and Ethernet-based sensors (CTD, four ADCPs, chlorophyll and CDOM fluorescence, O2, nitrate, pCO2, pH, a bio-optical package, a Continuous Plankton Imaging and Classification Sensor (CPICS) for mesoplankton, a pan and tilt webcam, and two stereo cameras to observe and track fish communities). ADCPs and temperature strings mark the corners of the 162,000 m3 control volume. Disparate data streams are remotely archived, correlated, and analyzed while plankton and fish are identified using state-of-the-art machine vision and learning techniques. Two OceanCubes have been installed in Japan (Okinawa and Oshima Island, Tokyo) and have survived several typhoon seasons. Two additional systems are planned for either side of the Panamanian Isthmus. Results of these systems will be discussed.

  7. Shallow outgassing changes disrupt steady lava lake activity, Kilauea Volcano

    NASA Astrophysics Data System (ADS)

    Patrick, M. R.; Orr, T. R.; Swanson, D. A.; Lev, E.

    2015-12-01

    Persistent lava lakes are a testament to sustained magma supply and outgassing in basaltic systems, and the surface activity of lava lakes has been used to infer processes in the underlying magmatic system. At Kilauea Volcano, Hawai`i, the lava lake in Halema`uma`u Crater has been closely studied for several years with webcam imagery, geophysical, petrological and gas emission techniques. The lava lake in Halema`uma`u is now the second largest on Earth, and provides an unprecedented opportunity for detailed observations of lava lake outgassing processes. We observe that steady activity is characterized by continuous southward motion of the lake's surface and slow changes in lava level, seismic tremor and gas emissions. This normal, steady activity can be abruptly interrupted by the appearance of spattering - sometimes triggered by rockfalls - on the lake surface, which abruptly shifts the lake surface motion, lava level and gas emissions to a more variable, unstable regime. The lake commonly alternates between (a) this normal, steady activity and (b) unstable behavior several times per day. The spattering represents outgassing of shallowly accumulated gas in the lake. Therefore, although steady lava lake behavior at Halema`uma`u may be deeply driven by upwelling of magma, we argue that the sporadic interruptions to this behavior are the result of shallow processes occurring near the lake surface. These observations provide a cautionary note that some lava lake behavior is not representative of deep-seated processes. This behavior also highlights the complex and dynamic nature of lava lake activity.

  8. Sensor fusion II: Human and machine strategies; Proceedings of the Meeting, Philadelphia, PA, Nov. 6-9, 1989

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1990-01-01

    Various papers on human and machine strategies in sensor fusion are presented. The general topics addressed include: active vision, measurement and analysis of visual motion, decision models for sensor fusion, implementation of sensor fusion algorithms, applying sensor fusion to image analysis, perceptual modules and their fusion, perceptual organization and object recognition, planning and the integration of high-level knowledge with perception, using prior knowledge and context in sensor fusion.

  9. Turbulence Measurements from Compliant Moorings. Part II: Motion Correction

    DOE PAGES

    Kilcher, Levi F.; Thomson, Jim; Harding, Samuel; ...

    2017-06-20

    Acoustic Doppler velocimeters (ADVs) are a valuable tool for making high-precision measurements of turbulence, and moorings are a convenient and ubiquitous platform for making many kinds of measurements in the ocean. However, because of concerns that mooring motion can contaminate turbulence measurements, and because acoustic Doppler profilers make middepth velocity measurements relatively easy, ADVs are not frequently deployed from moorings. This work demonstrates that inertial motion measurements can be used to reduce motion contamination from moored ADV velocity measurements. Three distinct mooring platforms were deployed in a tidal channel with inertial-motion-sensor-equipped ADVs. In each case, motion correction based on the inertial measurements reduces mooring motion contamination of velocity measurements. The spectra from these measurements are consistent with other measurements in tidal channels and have an f^(-5/3) slope at high frequencies, consistent with Kolmogorov's theory of isotropic turbulence. Motion correction also improves estimates of cross spectra and Reynolds stresses. A comparison of turbulence dissipation with flow speed and turbulence production indicates a bottom boundary layer production-dissipation balance during ebb and flood that is consistent with the strong tidal forcing at the site. Finally, these results indicate that inertial-motion-sensor-equipped ADVs are a valuable new tool for making high-precision turbulence measurements from moorings.
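
    A minimal sketch of the correction step described above, assuming the ADV velocities and IMU accelerations have already been rotated into a common earth frame and share a sample rate; the head velocity is obtained by high-pass filtering and integrating the accelerometer, and lever-arm/rotation-rate terms are omitted. The cutoff frequency is a placeholder, and this is not the authors' full processing chain.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def head_velocity(accel_earth, fs_hz, hp_cutoff_hz=0.03):
            """Integrate high-pass-filtered acceleration (earth frame, m/s^2) to velocity."""
            b, a = butter(2, hp_cutoff_hz / (fs_hz / 2), btype="highpass")
            accel_hp = filtfilt(b, a, accel_earth, axis=0)
            vel = np.cumsum(accel_hp, axis=0) / fs_hz
            return filtfilt(b, a, vel, axis=0)      # high-pass again to suppress integration drift

        def motion_correct(u_measured, accel_earth, fs_hz):
            """The ADV measures water velocity relative to the moving head, so adding the
            head velocity back recovers the earth-frame water velocity."""
            return u_measured + head_velocity(accel_earth, fs_hz)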

  10. Real-time 3D visualization of volumetric video motion sensor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, J.; Stansfield, S.; Shawver, D.

    1996-11-01

    This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.

  11. Occupant Motion Sensors

    DOT National Transportation Integrated Search

    1971-03-01

    An analysis was made of methods for measuring vehicle occupant motion during crash or impact conditions. The purpose of the measurements is to evaluate restraint performance using human, anthropometric dummy, or animal occupants. A detailed Fourier f...

  12. Can earthquake source inversion benefit from rotational ground motion observations?

    NASA Astrophysics Data System (ADS)

    Igel, H.; Donner, S.; Reinwald, M.; Bernauer, M.; Wassermann, J. M.; Fichtner, A.

    2015-12-01

    With the prospect of instruments to observe rotational ground motions in a wide frequency and amplitude range in the near future, we engage with the question of how this type of ground motion observation can be used to solve seismic inverse problems. Here, we focus on the question of whether point or finite source inversions can benefit from additional observations of rotational motions. In an attempt to be fair, we compare observations from a surface seismic network with N 3-component translational sensors (classic seismometers) with those obtained with N/2 6-component sensors (with additional colocated 3-component rotational motions). Thus we keep the overall number of traces constant. Synthetic seismograms are calculated for known point- or finite-source properties. The corresponding inverse problem is posed in a probabilistic way using the Shannon information content as a measure of how well the observations constrain the seismic source properties. The results show that with the 6-C subnetworks the source properties are not only equally well recovered (even that would be beneficial because of the substantially reduced logistics of installing N/2 sensors), but, statistically significantly, some source properties are almost always better resolved. We assume that this can be attributed to the fact that the (in particular vertical) gradient information is contained in the additional rotational motion components. We compare these effects for strike-slip and normal-faulting type sources. Thus the answer to the question raised is a definite "yes". The challenge now is to demonstrate these effects on real data.

  13. A Study on the Development of a Robot-Assisted Automatic Laser Hair Removal System

    PubMed Central

    Lim, Hyoung-woo; Park, Sungwoo; Noh, Seungwoo; Lee, Dong-Hun; Yoon, Chiyul; Koh, Wooseok; Kim, Youdan; Chung, Jin Ho; Kim, Hee Chan

    2014-01-01

    Background and Objective: The robot-assisted automatic laser hair removal (LHR) system is developed to automatically detect any arbitrary shape of the desired LHR treatment area and to provide uniform laser irradiation to the designated skin area. Methods: For uniform delivery of laser energy, a unit of a commercial LHR device, a laser distance sensor, and a high-resolution webcam are attached at the six axis industrial robot's end-effector, which can be easily controlled using a graphical user interface (GUI). During the treatment, the system provides real-time treatment progress as well as the total number of “pick and place” automatically. Results: During the test, it was demonstrated that the arbitrary shapes were detected, and that the laser was delivered uniformly. The localization error test and the area-per-spot test produced satisfactory outcome averages of 1.04 mm error and 38.22 mm2/spot, respectively. Conclusions: Results showed that the system successfully demonstrated accuracy and effectiveness. The proposed system is expected to become a promising device in LHR treatment. PMID:25343281

  14. On the feasibility of interoperable schemes in hand biometrics.

    PubMed

    Morales, Aythami; González, Ester; Ferrer, Miguel A

    2012-01-01

    Personal recognition through hand-based biometrics has attracted the interest of many researchers in the last twenty years. A significant number of proposals based on different procedures and acquisition devices have been published in the literature. However, comparisons between devices and their interoperability have not been thoroughly studied. This paper tries to fill this gap by proposing procedures to improve the interoperability among different hand biometric schemes. The experiments were conducted on a database made up of 8,320 hand images acquired from six different hand biometric schemes, including a flat scanner, webcams at different wavelengths, high quality cameras, and contactless devices. Acquisitions on both sides of the hand were included. Our experiment includes four feature extraction methods which determine the best performance among the different scenarios for two of the most popular hand biometrics: hand shape and palm print. We propose smoothing techniques at the image and feature levels to reduce interdevice variability. Results suggest that comparative hand shape offers better performance in terms of interoperability than palm prints, but palm prints can be more effective when using similar sensors.

  15. A study on the development of a robot-assisted automatic laser hair removal system.

    PubMed

    Lim, Hyoung-Woo; Park, Sungwoo; Noh, Seungwoo; Lee, Dong-Hun; Yoon, Chiyul; Koh, Wooseok; Kim, Youdan; Chung, Jin Ho; Kim, Hee Chan; Kim, Sungwan

    2014-11-01

    Background and Objective: The robot-assisted automatic laser hair removal (LHR) system is developed to automatically detect any arbitrary shape of the desired LHR treatment area and to provide uniform laser irradiation to the designated skin area. For uniform delivery of laser energy, a unit of a commercial LHR device, a laser distance sensor, and a high-resolution webcam are attached at the six axis industrial robot's end-effector, which can be easily controlled using a graphical user interface (GUI). During the treatment, the system provides real-time treatment progress as well as the total number of "pick and place" automatically. During the test, it was demonstrated that the arbitrary shapes were detected, and that the laser was delivered uniformly. The localization error test and the area-per-spot test produced satisfactory outcome averages of 1.04 mm error and 38.22 mm²/spot, respectively. Results showed that the system successfully demonstrated accuracy and effectiveness. The proposed system is expected to become a promising device in LHR treatment.

  16. A machine learning approach to improve contactless heart rate monitoring using a webcam.

    PubMed

    Monkaresi, Hamed; Calvo, Rafael A; Yan, Hong

    2014-07-01

    Unobtrusive, contactless recordings of physiological signals are very important for many health and human-computer interaction applications. Most current systems require sensors which intrusively touch the user's skin. Recent advances in contact-free physiological signals open the door to many new types of applications. This technology promises to measure heart rate (HR) and respiration using video only. The effectiveness of this technology, its limitations, and ways of overcoming them deserves particular attention. In this paper, we evaluate this technique for measuring HR in a controlled situation, in a naturalistic computer interaction session, and in an exercise situation. For comparison, HR was measured simultaneously using an electrocardiography device during all sessions. The results replicated the published results in controlled situations, but show that they cannot yet be considered as a valid measure of HR in naturalistic human-computer interaction. We propose a machine learning approach to improve the accuracy of HR detection in naturalistic measurements. The results demonstrate that the root mean squared error is reduced from 43.76 to 3.64 beats/min using the proposed method.

  17. On the Feasibility of Interoperable Schemes in Hand Biometrics

    PubMed Central

    Morales, Aythami; González, Ester; Ferrer, Miguel A.

    2012-01-01

    Personal recognition through hand-based biometrics has attracted the interest of many researchers in the last twenty years. A significant number of proposals based on different procedures and acquisition devices have been published in the literature. However, comparisons between devices and their interoperability have not been thoroughly studied. This paper tries to fill this gap by proposing procedures to improve the interoperability among different hand biometric schemes. The experiments were conducted on a database made up of 8,320 hand images acquired from six different hand biometric schemes, including a flat scanner, webcams at different wavelengths, high quality cameras, and contactless devices. Acquisitions on both sides of the hand were included. Our experiment includes four feature extraction methods which determine the best performance among the different scenarios for two of the most popular hand biometrics: hand shape and palm print. We propose smoothing techniques at the image and feature levels to reduce interdevice variability. Results suggest that comparative hand shape offers better performance in terms of interoperability than palm prints, but palm prints can be more effective when using similar sensors. PMID:22438714

  18. Video pulse rate variability analysis in stationary and motion conditions.

    PubMed

    Melchor Rodríguez, Angel; Ramos-Castro, J

    2018-01-29

    In the last few years, some studies have measured heart rate (HR) or heart rate variability (HRV) parameters using a video camera. This technique focuses on the measurement of the small changes in skin colour caused by blood perfusion. To date, most of these works have obtained HRV parameters in stationary conditions, and there are practically no studies that obtain these parameters in motion scenarios and by conducting an in-depth statistical analysis. In this study, a video pulse rate variability (PRV) analysis is conducted by measuring the pulse-to-pulse (PP) intervals in stationary and motion conditions. Firstly, given the importance of the sampling rate in a PRV analysis and the low frame rate of commercial cameras, we carried out an analysis of two camera models to evaluate their performance in the measurements. We propose a selective tracking method using the Viola-Jones and KLT algorithms, with the aim of carrying out a robust video PRV analysis in stationary and motion conditions. Data and results of the proposed method are contrasted with those reported in the state of the art. The webcam achieved better results in the performance analysis of the video cameras. In stationary conditions, high correlation values were obtained in PRV parameters, with results above 0.9. The PP time series achieved an RMSE (mean ± standard deviation) of 19.45 ± 5.52 ms (1.70 ± 0.75 bpm). In the motion analysis, most of the PRV parameters also achieved good correlation results, but with lower values than in stationary conditions. The PP time series presented an RMSE of 21.56 ± 6.41 ms (1.79 ± 0.63 bpm). The statistical analysis showed good agreement between the reference system and the proposed method. In stationary conditions, the results of PRV parameters were improved by our method in comparison with data reported in related works. An overall comparative analysis of PRV parameters in motion conditions was more limited due to the lack of studies or studies containing insufficient data analysis. Based on the results, the proposed method could provide a low-cost, contactless and reliable alternative for measuring HR or PRV parameters in non-clinical environments.
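
    A minimal sketch of the selective tracking idea named above (Viola-Jones detection to seed a KLT tracker), using OpenCV's stock frontal-face cascade; the pulse-signal extraction and PRV computation of the paper are not reproduced, and the sketch assumes a face is found in the first frame.

        import cv2
        import numpy as np

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")  # Viola-Jones detector

        def track_face_features(video_path):
            """Yield KLT-tracked face feature points for each frame of the video."""
            cap = cv2.VideoCapture(video_path)
            ok, frame = cap.read()
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            x, y, w, h = cascade.detectMultiScale(gray, 1.1, 5)[0]   # first detected face
            mask = np.zeros_like(gray)
            mask[y:y + h, x:x + w] = 255
            pts = cv2.goodFeaturesToTrack(gray, 100, 0.01, 7, mask=mask)  # KLT seed points
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                nxt = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                pts, status, _ = cv2.calcOpticalFlowPyrLK(gray, nxt, pts, None)
                pts = pts[status.ravel() == 1].reshape(-1, 1, 2)     # keep successfully tracked points
                gray = nxt
                yield pts
            cap.release()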

  19. Segmentation of human upper body movement using multiple IMU sensors.

    PubMed

    Aoki, Takashi; Lin, Jonathan Feng-Shun; Kulic, Dana; Venture, Gentiane

    2016-08-01

    This paper proposes an approach for the segmentation of human body movements measured by inertial measurement unit sensors. Using the angular velocity and linear acceleration measurements directly, without converting to joint angles, we perform segmentation by formulating the problem as a classification problem, and training a classifier to differentiate between motion end-point and within-motion points. The proposed approach is validated with experiments measuring the upper body movement during reaching tasks, demonstrating classification accuracy of over 85.8%.

  20. Surface Craft Motion Parameter Estimation Using Multipath Delay Measurements from Hydrophones

    DTIC Science & Technology

    2011-12-01

    the sensor is d_c. The slant range of the source from the sensor at time t is given by R(t) = {[v(t - τ_c)]^2 + R_c^2}^(1/2) (1), where R_c = [(h_t - h_r)^2 + d_c^2]^(1/2)... Surface Craft Motion Parameter Estimation Using Multipath Delay Measurements from Hydrophones. Kam W. Lo and Brian G. Ferguson, Maritime... Eveleigh, NSW 2015 Australia (kam.lo@dsto.defence.gov.au, brian.ferguson@dsto.defence.gov.au). Abstract: An equation-error (EE) method is

  1. Design of a Capacitive Flexible Weighing Sensor for Vehicle WIM System

    PubMed Central

    Cheng, Lu; Zhang, Hongjian; Li, Qing

    2007-01-01

    With the development of highway transportation and business trade, vehicle weigh-in-motion (WIM) technology has become a key technology and trend for measuring traffic loads. In this paper, a novel capacitive flexible weighing sensor, which is lightweight, small in volume and easy to carry, was applied in a vehicle WIM system. The dynamic behavior of the sensor is modeled using the Maxwell-Kelvin model because the sensor materials are rubbers, which are viscoelastic. A signal processing method based on the model is presented to overcome the effects of the rubber's mechanical properties on the dynamic weight signal. The results showed that the measurement error is less than ±10%. All the theoretical analysis and numerical results demonstrated that applying this system to weigh-in-motion is feasible and convenient for traffic inspection.

  2. A system for respiratory motion detection using optical fibers embedded into textiles.

    PubMed

    D'Angelo, L T; Weber, S; Honda, Y; Thiel, T; Narbonneau, F; Luth, T C

    2008-01-01

    In this contribution, a first prototype for mobile respiratory motion detection using optical fibers embedded into textiles is presented. The developed system consists of a T-shirt with an integrated fiber sensor and a portable monitoring unit with a wireless communication link enabling data analysis and visualization on a PC. A great effort is being made worldwide to develop mobile solutions for health monitoring of vital signs for patients needing continuous medical care. Wearable, comfortable and smart textiles incorporating sensors are good approaches to solving this problem. In most cases, electrical sensors are integrated, showing significant limits, for example for the monitoring of anaesthetized patients during Magnetic Resonance Imaging (MRI). OFSETH (Optical Fibre Embedded into technical Textile for Healthcare) uses optical sensor technologies to extend the current capabilities of medical technical textiles.

  3. Optimal full motion video registration with rigorous error propagation

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn

    2014-06-01

    Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.

  4. Variational optical flow estimation for images with spectral and photometric sensor diversity

    NASA Astrophysics Data System (ADS)

    Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin

    2015-03-01

    Motion estimation of objects in image sequences is an essential computer vision task. To this end, optical flow methods compute pixel-level motion, with the purpose of providing low-level input to higher-level algorithms and applications. Robust flow estimation is crucial for the success of applications, which in turn depends on the quality of the captured image data. This work explores the use of sensor diversity in the image data within a framework for variational optical flow. In particular, a custom image sensor setup intended for vehicle applications is tested. Experimental results demonstrate the improved flow estimation performance when IR sensitivity or flash illumination is added to the system.

  5. Can mobile phones be used in strong motion seismology?

    NASA Astrophysics Data System (ADS)

    D'Alessandro, Antonino; D'Anna, Giuseppe

    2013-04-01

    Micro Electro-Mechanical Systems (MEMS) accelerometers are electromechanical devices able to measure static or dynamic accelerations. In the 1990s MEMS accelerometers revolutionized the automotive-airbag system industry and are currently widely used in laptops, game controllers and mobile phones. Nowadays MEMS accelerometers seem to provide adequate sensitivity, noise level and dynamic range to be applicable to earthquake strong motion acquisition. The current use of 3-axis MEMS accelerometers in mobile phones may provide a new means to easily increase the number of observations when a strong earthquake occurs. However, before using the signals recorded by a mobile phone equipped with a 3-axis MEMS accelerometer for any scientific purpose, it is fundamental to verify that the collected signals provide reliable records of ground motion. For this reason we have investigated the suitability of the iPhone 5 mobile phone (one of the most popular mobile phones in the world) for strong motion acquisition. It is equipped with several MEMS devices, such as a three-axis gyroscope, a three-axis electronic compass and the LIS331DLH three-axis accelerometer. The LIS331DLH sensor is a low-cost, high-performance three-axis linear accelerometer, with 16-bit digital output, produced by STMicroelectronics Inc. We have tested the LIS331DLH MEMS accelerometer using a vibrating table and the EpiSensor FBA ES-T as reference sensor. In our experiments the reference sensor was rigidly co-mounted with the LIS331DLH MEMS sensor on the vibrating table. We assessed the MEMS accelerometer in the frequency range 0.2-20 Hz, the typical range of interest in strong motion seismology and earthquake engineering. We generated both constant and damped sine waves with central frequencies from 0.2 Hz to 20 Hz in steps of 0.2 Hz. For each frequency analyzed we generated sine waves with mean amplitudes of 50, 100, 200, 400, 800 and 1600 mg0. For damped sine waves we generated waveforms with an initial amplitude of 2 g0. Our tests show that, in the frequency and amplitude range analyzed (0.2-20 Hz, 10-2000 mg0), the LIS331DLH MEMS accelerometer has an excellent frequency and phase response, comparable with that of some standard FBA accelerometers used in strong motion seismology. However, we found that the signal recorded by the LIS331DLH MEMS accelerometer slightly underestimates the real acceleration (by about 2.5%). This suggests that it may be important to calibrate a MEMS sensor before using it in scientific applications. A drawback of the LIS331DLH MEMS accelerometer is its low sensitivity. This is an important limitation of all low-cost MEMS accelerometers; therefore, at present they are suitable only for strong motion seismology. However, the rapid development of this technology will lead in the coming years to high-sensitivity, low-noise digital MEMS sensors that may replace the current accelerometers used in seismology. At present, the real main advantage of these sensors is their widespread use in mobile phones.
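
    A minimal sketch of the comparison step, assuming the MEMS and reference records have been resampled onto a common time base: project each record onto a complex exponential at the driving frequency and compare amplitude and phase. The single-frequency projection and variable names are illustrative, not the authors' processing.

        import numpy as np

        def amplitude_phase(signal, fs_hz, f0_hz):
            """Single-frequency amplitude and phase by projecting onto exp(-j*2*pi*f0*t)."""
            t = np.arange(signal.size) / fs_hz
            z = np.dot(signal, np.exp(-2j * np.pi * f0_hz * t)) * 2 / signal.size
            return np.abs(z), np.angle(z)

        def compare_at(f0_hz, mems, reference, fs_hz):
            """Amplitude ratio (MEMS/reference) and phase difference (degrees) at f0."""
            a_m, p_m = amplitude_phase(mems, fs_hz, f0_hz)
            a_r, p_r = amplitude_phase(reference, fs_hz, f0_hz)
            return a_m / a_r, np.degrees(p_m - p_r)

    An amplitude ratio of roughly 0.975 at a test frequency would correspond to the ~2.5% underestimation noted in the abstract.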

  6. Optical and Acoustic Sensor-Based 3D Ball Motion Estimation for Ball Sport Simulators †.

    PubMed

    Seo, Sang-Woo; Kim, Myunggyu; Kim, Yejin

    2018-04-25

    Estimation of the motion of ball-shaped objects is essential for the operation of ball sport simulators. In this paper, we propose an estimation system for 3D ball motion, including speed and angle of projection, by using acoustic vector and infrared (IR) scanning sensors. Our system comprises three steps to estimate a ball motion: sound-based ball firing detection, sound source localization, and IR scanning for motion analysis. First, an impulsive sound classification based on the mel-frequency cepstrum and a feed-forward neural network is introduced to detect the ball launch sound. An impulsive sound source localization using 2D microelectromechanical system (MEMS) microphones and delay-and-sum beamforming is presented to estimate the firing position. The time and position of a ball in 3D space are determined from a high-speed infrared scanning method. Our experimental results demonstrate that the estimation of ball motion based on sound allows a wider activity area than similar camera-based methods. Thus, it can be practically applied to various simulations in sports such as soccer and baseball.
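
    A minimal sketch of time-domain delay-and-sum beamforming for azimuth estimation of an impulsive source, under far-field assumptions; the array geometry, sample rate and angular grid are hypothetical and the paper's MEMS array details are not reproduced.

        import numpy as np

        C = 343.0  # speed of sound, m/s

        def delay_and_sum_doa(frames, mic_xy, fs_hz, n_angles=360):
            """frames: (n_mics, n_samples) snippet around the impulsive sound;
            mic_xy: (n_mics, 2) microphone positions in metres.
            Returns the azimuth (radians) with maximum steered energy."""
            angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
            best_angle, best_power = 0.0, -np.inf
            for theta in angles:
                direction = np.array([np.cos(theta), np.sin(theta)])
                delays = mic_xy @ direction / C                     # relative far-field delays (s)
                shifts = np.round((delays - delays.min()) * fs_hz).astype(int)
                aligned = [np.roll(frames[m], -shifts[m]) for m in range(frames.shape[0])]
                power = np.sum(np.sum(aligned, axis=0) ** 2)        # energy of the aligned sum
                if power > best_power:
                    best_angle, best_power = theta, power
            return best_angle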

  7. Noncontact respiration-monitoring system using fiber grating sensor

    NASA Astrophysics Data System (ADS)

    Sato, Isao; Nakajima, Masato

    2004-10-01

    In this research, a new non-contact breathing motion monitoring system using a fiber grating 3-dimensional sensor is used to measure the respiratory movement of the chest and the abdomen and the shape of the human body simultaneously. Respiratory trouble during sleep brings about various kinds of diseases. In particular, Sleep Apnea Syndrome (SAS), which restricts respiration during sleep, has been in the spotlight in recent years. However, present equipment for analyzing breathing motion requires attaching various sensors to the patient's body. This system adopted two CCD cameras to measure the movements of projected infrared bright spots on the patient's body, which capture the body form and the breathing motion of the chest and the abdomen in detail. Since the equipment does not contact the patient's body, the patient does not feel any discomfort, and there is no need to worry about the equipment coming off. Sleep Apnea Syndrome is classified into three types by respiratory pattern - Obstructive, Central and Mixed SAS - based on their characteristics. This paper reports a method of diagnosing SAS automatically. It is thought that this method will be helpful not only for the diagnosis of SAS but also for the diagnosis of other kinds of complicated respiratory disease.

  8. A novel yet effective motion artefact reduction method for continuous physiological monitoring

    NASA Astrophysics Data System (ADS)

    Alzahrani, A.; Hu, S.; Azorin-Peris, V.; Kalawsky, R.; Zhang, X.; Liu, C.

    2014-03-01

    This study presents a non-invasive and wearable optical technique to continuously monitor vital human signs as required for personal healthcare in today's increasingly ageing population. The study has researched an effective way to capture critical human physiological parameters, i.e., oxygen saturation (SaO2%), heart rate, respiration rate, body temperature and heart rate variability, by a closely coupled wearable opto-electronic patch sensor (OEPS) together with real-time and secure wireless communication functionalities. The work presents the first step of this research: an automatic noise cancellation method using a 3-axis MEMS accelerometer to recover signals corrupted by body movement, which is one of the biggest sources of motion artefacts. The effects of these motion artefacts have been reduced by an enhanced electronic design providing self-cancellation of noise and improved stability of the sensor. The signals from the accelerometer and the opto-electronic sensor are highly correlated, allowing the desired pulse waveform, with its rich bioinformatic content, to be retrieved with reduced motion artefacts. The preliminary results from the bench tests and the laboratory setup demonstrate that the goal of high-performance wearable opto-electronics is viable and feasible.

  9. A Simulation Study of a Radiofrequency Localization System for Tracking Patient Motion in Radiotherapy.

    PubMed

    Ostyn, Mark; Kim, Siyong; Yeo, Woon-Hong

    2016-04-13

    One of the most widely used tools in cancer treatment is external beam radiotherapy. However, the major risk involved in radiotherapy is excess radiation dose to healthy tissue, exacerbated by patient motion. Here, we present a simulation study of a potential radiofrequency (RF) localization system designed to track intrafraction motion (target motion during the radiation treatment). This system includes skin-wearable RF beacons and an external tracking system. We develop an analytical model for direction of arrival measurement with radio frequencies (GHz range) for use in a localization estimate. We use a Monte Carlo simulation to investigate the relationship between a localization estimate and angular resolution of sensors (signal receivers) in a simulated room. The results indicate that the external sensor needs an angular resolution of about 0.03 degrees to achieve millimeter-level localization accuracy in a treatment room. This fundamental study of a novel RF localization system offers the groundwork to design a radiotherapy-compatible patient positioning system for active motion compensation.

  10. Human Age Estimation Method Robust to Camera Sensor and/or Face Movement

    PubMed Central

    Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung

    2015-01-01

    Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, limitations exist for current age estimation systems because of various factors such as camera motion and optical blurring, facial expressions, gender, etc. Motion blurring can be present in face images because of the movement of the camera sensor and/or the movement of the face during image acquisition. Therefore, the facial features in captured images can be transformed according to the amount of motion, which causes performance degradation of age estimation systems. In this paper, the problem caused by motion blurring is addressed and its solution is proposed in order to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method is more efficient for enhancing age estimation performance compared with systems that do not employ our method. PMID:26334282

  11. Control of a Quadcopter Aerial Robot Using Optic Flow Sensing

    NASA Astrophysics Data System (ADS)

    Hurd, Michael Brandon

    This thesis focuses on the motion control of a custom-built quadcopter aerial robot using optic flow sensing. Optic flow sensing is a vision-based approach that can give a robot the ability to fly in global positioning system (GPS) denied environments, such as indoor environments. In this work, optic flow sensors are used to stabilize the motion of the quadcopter robot, where an optic flow algorithm provides odometry measurements to the quadcopter's central processing unit to monitor the flight heading. The optic-flow sensor and algorithm are capable of gathering and processing the images at 250 frames/sec, and the sensor package weighs 2.5 g and has a footprint of 6 cm2 in area. The odometry value from the optic flow sensor is then used as feedback in a simple proportional-integral-derivative (PID) controller on the quadcopter. Experimental results are presented to demonstrate the effectiveness of using optic flow for controlling the motion of the quadcopter aerial robot. The technique presented herein can be applied to different types of aerial robotic systems or unmanned aerial vehicles (UAVs), as well as unmanned ground vehicles (UGVs).
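
    A minimal sketch of the feedback loop described above: a textbook PID law with the optic-flow odometry value as the measurement. The gains, setpoint and loop rate are placeholders rather than values from the thesis.

        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral, self.prev_error = 0.0, 0.0

            def update(self, setpoint, measurement):
                """Return the control effort for one loop iteration."""
                error = setpoint - measurement
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        # hypothetical use: hold lateral position using optic-flow displacement as feedback
        controller = PID(kp=0.8, ki=0.05, kd=0.2, dt=1.0 / 250)   # 250 Hz, matching the sensor rate
        # command = controller.update(setpoint=0.0, measurement=optic_flow_displacement)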

  12. Feasibility Study of TRISCAN Landing System.

    DTIC Science & Technology

    1977-10-01

    as inclinometers, tiltmeters, vertical sensors, level sensors, pendulums, and gravity sensing electrolytic transducers. Of course, the common...5.0 NAVTOLAND SENSOR REQUIREMENTS 5.1 TRISCAN PERFORMANCE 5.2 SHIPS MOTION SENSING 5.3 DATA LINK 6.0 CONCLUSIONS AND RECOMMENDATIONS...involved in enabling the pilot to fly V/STOL Aircraft onto Navy Ships and Marine Corps tactical sites. Guidance sensors have been identified as being

  13. Do lower vertebrates suffer from motion sickness?

    NASA Astrophysics Data System (ADS)

    Lychakov, Dmitri

    The poster presents literature data and results of the author's studies aimed at finding out whether lower animals are susceptible to motion sickness (Lychakov, 2012). In our studies, fish and amphibians were tested for 2 h or more using a rotating device (f = 0.24 Hz, a_centrifugal = 0.144 g) and a parallel swing (f = 0.2 Hz, a_horizontal = 0.059 g). The studies did not reveal, in 4 fish species or in toads, any characteristic reactions of motion sickness (sopite syndrome, prodromal preparatory behavior, vomiting). At the same time, toads showed characteristic stress reactions (escape response, an increase in the number of urinations, inhibition of appetite), as well as some other reactions not associated with motion sickness (regular head movements, eye retractions). In trout fry the stimulation used promoted division of the individuals into groups differing in locomotor reaction to stress, as well as individuals with a well-expressed compensatory reaction that we called the otolithotropic reaction. Analysis of results obtained by other authors confirms our conclusions. Thus, the lower vertebrates, unlike mammals, are immune to motion sickness both under land conditions and under conditions of weightlessness. On the basis of available experimental data and theoretical concepts of the mechanisms of development of motion sickness, formulated in several hypotheses (mismatch hypothesis, Treisman's hypothesis, resonance hypothesis), a synthetic hypothesis of motion sickness with conceptual significance is presented. According to the hypothesis, unusual stimulation producing a sensor-motor or sensor-sensor conflict, or the action of vestibular and visual stimuli at frequencies of about 0.2 Hz, is perceived by the CNS as poisoning and causes the corresponding reactions. Motion sickness is actually a byproduct of technical evolution. It is suggested that in the lower vertebrates, unlike mammals, the hypothetical center of subjective «nauseating» sensations is absent; therefore, they are immune to motion sickness. This work was partly supported by Russian grant RFFI 14-04-00601.

  14. Detection of abnormal living patterns for elderly living alone using support vector data description.

    PubMed

    Shin, Jae Hyuk; Lee, Boreom; Park, Kwang Suk

    2011-05-01

    In this study, we developed an automated behavior analysis system using infrared (IR) motion sensors to assist the independent living of elderly people who live alone and to improve the efficiency of their healthcare. An IR motion-sensor-based activity-monitoring system was installed in the houses of the elderly subjects to collect motion signals, and three different feature values - activity level, mobility level, and nonresponse interval (NRI) - were calculated from the measured motion signals. The support vector data description (SVDD) method was used to classify normal behavior patterns and to detect abnormal behavioral patterns based on the aforementioned three feature values. Simulation data and real data were used to verify the proposed method in the individual analysis. A robust scheme is presented in this paper for optimally selecting the values of different parameters, especially the scale parameter of the Gaussian kernel function involved in training the SVDD and the window length T of the circadian rhythmic approach, with the aim of applying the SVDD to the daily behavior patterns calculated over 24 h. Accuracies by positive predictive value (PPV) were 95.8% and 90.5% for the simulation and real data, respectively. The results suggest that the monitoring system utilizing the IR motion sensors and abnormal-behavior-pattern detection with SVDD are effective methods for home healthcare of elderly people living alone.
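
    A minimal sketch of the classification stage, using scikit-learn's OneClassSVM with an RBF kernel as a stand-in for SVDD (the two formulations are closely related for a Gaussian kernel); the daily feature vectors and the kernel scale gamma, which plays the role of the scale parameter discussed above, are hypothetical.

        import numpy as np
        from sklearn.svm import OneClassSVM

        # rows: one day each; columns: activity level, mobility level, non-response interval (h)
        normal_days = np.array([[120, 0.8, 2.0], [115, 0.7, 2.5], [130, 0.9, 1.8],
                                [110, 0.6, 3.0], [125, 0.8, 2.2]])

        model = OneClassSVM(kernel="rbf", gamma=0.001, nu=0.1).fit(normal_days)

        new_day = np.array([[20, 0.1, 14.0]])   # very low activity, long non-response interval
        print(model.predict(new_day))           # -1 flags the day as abnormal, +1 as normal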

  15. SVM-Based Spectral Analysis for Heart Rate from Multi-Channel WPPG Sensor Signals.

    PubMed

    Xiong, Jiping; Cai, Lisang; Wang, Fei; He, Xiaowei

    2017-03-03

    Although wrist-type photoplethysmographic (hereafter referred to as WPPG) sensor signals can measure heart rate quite conveniently, the subjects' hand movements can cause strong motion artifacts, and these motion artifacts heavily contaminate WPPG signals. Hence, it is challenging to accurately estimate heart rate from WPPG signals during intense physical activities. The WPPG method has attracted more attention thanks to the popularity of wrist-worn wearable devices. In this paper, a mixed approach called Mix-SVM is proposed; it can use multi-channel WPPG sensor signals and simultaneous acceleration signals to measure heart rate. Firstly, we combine principal component analysis and adaptive filtering to remove part of the motion artifacts. Due to the strong correlation between motion artifacts and acceleration signals, the further denoising problem is regarded as a sparse signal reconstruction problem. Then, we use a spectrum subtraction method to eliminate motion artifacts effectively. Finally, the spectral peak corresponding to heart rate is sought by an SVM-based spectral analysis method. Using the public PPG database from the 2015 IEEE Signal Processing Cup, we obtained the following experimental results: the average absolute error was 1.01 beats per minute, and the Pearson correlation was 0.9972. These results confirm that the proposed Mix-SVM approach has potential for multi-channel WPPG-based heart rate estimation in the presence of intense physical exercise.
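
    A minimal sketch of the spectrum-subtraction idea: attenuate the PPG spectrum at frequencies where the acceleration spectrum is strong, then pick the heart-rate peak (here by a simple in-band argmax rather than the paper's SVM-based selection). The scaling rule and search band are assumptions.

        import numpy as np

        def heart_rate_bpm(ppg, accel, fs_hz, band=(0.7, 3.5)):
            """Estimate HR by subtracting a scaled acceleration spectrum from the PPG spectrum."""
            freqs = np.fft.rfftfreq(ppg.size, d=1.0 / fs_hz)
            p_ppg = np.abs(np.fft.rfft(ppg - ppg.mean()))
            p_acc = np.abs(np.fft.rfft(accel - accel.mean()))
            p_acc *= p_ppg.max() / (p_acc.max() + 1e-12)         # scale before subtraction
            cleaned = np.clip(p_ppg - p_acc, 0.0, None)
            in_band = (freqs >= band[0]) & (freqs <= band[1])    # roughly 42-210 bpm search range
            return 60.0 * freqs[in_band][np.argmax(cleaned[in_band])]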

  16. On the correlation between motion data captured from low-cost gaming controllers and high precision encoders.

    PubMed

    Purkayastha, Sagar N; Byrne, Michael D; O'Malley, Marcia K

    2012-01-01

    Gaming controllers are attractive devices for research due to their onboard sensing capabilities and low-cost. However, a proper quantitative analysis regarding their suitability for use in motion capture, rehabilitation and as input devices for teleoperation and gesture recognition has yet to be conducted. In this paper, a detailed analysis of the sensors of two of these controllers, the Nintendo Wiimote and the Sony Playstation 3 Sixaxis, is presented. The acceleration and angular velocity data from the sensors of these controllers were compared and correlated with computed acceleration and angular velocity data derived from a high resolution encoder. The results show high correlation between the sensor data from the controllers and the computed data derived from the position data of the encoder. From these results, it can be inferred that the Wiimote is more consistent and better suited for motion capture applications and as an input device than the Sixaxis. The applications of the findings are discussed with respect to potential research ventures.

  17. Algorithm architecture co-design for ultra low-power image sensor

    NASA Astrophysics Data System (ADS)

    Laforest, T.; Dupret, A.; Verdant, A.; Lattard, D.; Villard, P.

    2012-03-01

    In a context of embedded video surveillance, stand-alone, left-behind image sensors are used to detect events with a high level of confidence, but also with a very low power consumption. Using a steady camera, motion detection algorithms based on background estimation to find regions in movement are simple to implement and computationally efficient. To reduce power consumption, the background is estimated using a down-sampled image formed of macropixels. In order to extend the class of moving objects to be detected, we propose an original mixed-mode architecture developed thanks to an algorithm-architecture co-design methodology. This programmable architecture is composed of a vector of SIMD processors. A basic RISC architecture was optimized in order to implement motion detection algorithms with a dedicated set of 42 instructions. Defining delta modulation as a calculation primitive has allowed algorithms to be implemented in a very compact way. Thereby, a 1920x1080@25fps CMOS image sensor performing integrated motion detection is proposed with a power estimation of 1.8 mW.
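
    A minimal sketch of background estimation on macropixels with a delta-modulation update, in the spirit of the primitive described above; the step size, macropixel size and detection threshold are placeholders, not values from the paper.

        import numpy as np

        STEP, MACRO, THRESH = 1.0, 16, 20.0   # delta-mod step, macropixel size, detection threshold

        def to_macropixels(frame):
            """Down-sample a grayscale frame by averaging MACRO x MACRO blocks."""
            h = (frame.shape[0] // MACRO) * MACRO
            w = (frame.shape[1] // MACRO) * MACRO
            return frame[:h, :w].reshape(h // MACRO, MACRO, w // MACRO, MACRO).mean(axis=(1, 3))

        def update_and_detect(frame, background):
            """Move the background one step toward the new frame (delta modulation)
            and flag macropixels whose difference exceeds the threshold."""
            macro = to_macropixels(frame.astype(np.float32))
            background += STEP * np.sign(macro - background)
            return np.abs(macro - background) > THRESH, background

        # usage: background = to_macropixels(first_frame.astype(np.float32)); then call
        # update_and_detect(frame, background) for each subsequent frame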

  18. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    PubMed

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-11-04

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  19. Wideband Motion Control by Position and Acceleration Input Based Disturbance Observer

    NASA Astrophysics Data System (ADS)

    Irie, Kouhei; Katsura, Seiichiro; Ohishi, Kiyoshi

    The disturbance observer can observe and suppress the disturbance torque within its bandwidth. Recent motion systems are spreading through society and are required to have the ability to contact unknown environments. Such haptic motion requires a much wider bandwidth. However, since the conventional disturbance observer attains the acceleration response by the second-order derivative of the position response, the bandwidth is limited by derivative noise. This paper proposes a novel structure for a disturbance observer. The proposed disturbance observer uses an acceleration sensor to enlarge the bandwidth. Generally, the bandwidth of an acceleration sensor is from 1 Hz to more than 1 kHz. To cover the DC range, the conventional position-sensor-based disturbance observer is integrated. Thus, the performance of the proposed Position and Acceleration input based Disturbance Observer (PADO) is superior to the conventional one. The PADO is applied to position control (infinite stiffness) and force control (zero stiffness). The numerical and experimental results show the viability of the proposed method.
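
    The PADO idea of covering DC with the position-derived acceleration and the higher band with the acceleration sensor amounts to a complementary blend of the two signals. The following offline sketch uses first-order low/high-pass filters with an assumed crossover frequency; it illustrates the blending only and is not the authors' observer structure.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def fused_acceleration(position, accel_meas, fs, crossover_hz=1.0):
        """Blend acceleration from double-differentiated position (low band)
        with an acceleration sensor (high band), complementary-filter style."""
        dt = 1.0 / fs
        accel_from_pos = np.gradient(np.gradient(position, dt), dt)
        b_lo, a_lo = butter(1, crossover_hz, btype="low", fs=fs)
        b_hi, a_hi = butter(1, crossover_hz, btype="high", fs=fs)
        return filtfilt(b_lo, a_lo, accel_from_pos) + filtfilt(b_hi, a_hi, accel_meas)

    # Hypothetical signals: slow motion plus fast content seen only by the accelerometer.
    fs = 1000.0
    t = np.arange(0, 2, 1 / fs)
    position = 0.01 * np.sin(2 * np.pi * 0.5 * t)
    true_accel = -0.01 * (2 * np.pi * 0.5) ** 2 * np.sin(2 * np.pi * 0.5 * t)
    accel_meas = true_accel + 0.05 * np.sin(2 * np.pi * 200 * t)
    fused = fused_acceleration(position, accel_meas, fs)
    print("fused acceleration samples:", fused[:5])
    ```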

  20. Noncontact spirometry with a webcam

    NASA Astrophysics Data System (ADS)

    Liu, Chenbin; Yang, Yuting; Tsow, Francis; Shao, Dangdang; Tao, Nongjian

    2017-05-01

    We present an imaging-based method for noncontact spirometry. The method tracks the subtle respiratory-induced shoulder movement of a subject, builds a calibration curve, and determines the flow-volume spirometry curve and vital respiratory parameters, including forced expiratory volume in the first second, forced vital capacity, and peak expiratory flow rate. We validate the accuracy of the method by comparing the data with those simultaneously recorded with a gold standard reference method and examine the reliability of the noncontact spirometry with a pilot study including 16 subjects. This work demonstrates that the noncontact method can provide accurate and reliable spirometry tests with a webcam. Compared to traditional spirometers, the present noncontact spirometry does not require using a spirometer, breathing into a mouthpiece, or wearing a nose clip, thus making spirometry testing more easily accessible to the growing population of patients with asthma and chronic obstructive pulmonary disease.

  1. Noncontact spirometry with a webcam.

    PubMed

    Liu, Chenbin; Yang, Yuting; Tsow, Francis; Shao, Dangdang; Tao, Nongjian

    2017-05-01

    We present an imaging-based method for noncontact spirometry. The method tracks the subtle respiratory-induced shoulder movement of a subject, builds a calibration curve, and determines the flow-volume spirometry curve and vital respiratory parameters, including forced expiratory volume in the first second, forced vital capacity, and peak expiratory flow rate. We validate the accuracy of the method by comparing the data with those simultaneously recorded with a gold standard reference method and examine the reliability of the noncontact spirometry with a pilot study including 16 subjects. This work demonstrates that the noncontact method can provide accurate and reliable spirometry tests with a webcam. Compared to traditional spirometers, the present noncontact spirometry does not require using a spirometer, breathing into a mouthpiece, or wearing a nose clip, thus making spirometry testing more easily accessible to the growing population of patients with asthma and chronic obstructive pulmonary disease.
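
    Once the tracked shoulder displacement has been mapped to volume through a calibration curve, the reported spirometry parameters follow from elementary signal processing. The sketch below assumes a linear calibration factor and a synthetic exhalation; it is illustrative only and not the authors' algorithm.

    ```python
    import numpy as np

    def spirometry_parameters(displacement, fs, liters_per_unit):
        """Derive FVC, FEV1 and PEF from a displacement signal.

        displacement    : shoulder displacement during a forced exhalation
        fs              : sampling rate (Hz)
        liters_per_unit : slope of the (assumed linear) calibration curve
        """
        volume = liters_per_unit * (displacement - displacement[0])   # exhaled volume (L)
        flow = np.gradient(volume, 1.0 / fs)                          # flow (L/s)
        fvc = volume.max()
        fev1 = volume[min(int(fs), volume.size - 1)]                  # volume after 1 s
        pef = flow.max()
        return fvc, fev1, pef

    # Hypothetical forced exhalation, fast early emptying, sampled at 30 fps.
    fs = 30.0
    t = np.arange(0, 6, 1 / fs)
    displacement = 1.0 - np.exp(-1.5 * t)        # arbitrary units
    fvc, fev1, pef = spirometry_parameters(displacement, fs, liters_per_unit=4.0)
    print(f"FVC={fvc:.2f} L, FEV1={fev1:.2f} L, PEF={pef:.2f} L/s")
    ```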

  2. Investigating diffusion with technology

    NASA Astrophysics Data System (ADS)

    Miller, Jon S.; Windelborn, Augden F.

    2013-07-01

    The activities described here allow students to explore the concept of diffusion with the use of common equipment such as computers, webcams and analysis software. The procedure includes taking a series of digital pictures of a container of water with a webcam as a dye slowly diffuses. At known time points, measurements of the pixel densities (darkness) of the digital pictures are recorded and then plotted on a graph. The resulting graph of darkness versus time allows students to see the results of diffusion of the dye over time. Through modification of the basic lesson plan, students are able to investigate the influence of a variety of variables on diffusion. Furthermore, students are able to expand the boundaries of their thinking by formulating hypotheses and testing their hypotheses through experimentation. As a result, students acquire a relevant science experience through taking measurements, organizing data into tables, analysing data and drawing conclusions.
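
    The measurement described above, mean pixel darkness of the dye region plotted against time, can be scripted in a few lines. File names, region of interest and time stamps below are placeholders rather than the activity's actual materials.

    ```python
    import numpy as np
    from PIL import Image

    def mean_darkness(image_path, roi):
        """Return mean darkness (255 - intensity) of a grayscale region of interest.

        roi : (left, upper, right, lower) pixel box covering the dye region.
        """
        gray = np.asarray(Image.open(image_path).convert("L"), dtype=float)
        left, upper, right, lower = roi
        return 255.0 - gray[upper:lower, left:right].mean()

    # Hypothetical series of webcam snapshots taken at known times (minutes).
    snapshots = [("frame_000.png", 0), ("frame_001.png", 5), ("frame_002.png", 10)]
    roi = (100, 100, 300, 300)
    for path, minute in snapshots:
        print(minute, mean_darkness(path, roi))   # darkness vs. time for plotting
    ```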

  3. Sequential webcam monitoring and modeling of marine debris abundance.

    PubMed

    Kako, Shin'ichiro; Isobe, Atsuhiko; Kataoka, Tomoya; Yufu, Kei; Sugizono, Shuto; Plybon, Charlie; Murphy, Thomas A

    2018-05-14

    The amount of marine debris washed ashore on a beach in Newport, Oregon, USA was observed automatically and sequentially using a webcam system. To investigate potential causes of the temporal variability of marine debris abundance, its time series was compared with those of satellite-derived wind speeds and sea surface height off the Oregon coast. Shoreward flow induced by downwelling-favorable southerly winds increases marine debris washed ashore on the beach in winter. We also found that local sea-level rise caused by westerly winds, especially at spring tide, moved the high-tide line toward the land, so that marine debris littered on the beach was likely to re-drift into the ocean. Seasonal and sub-monthly fluctuations of debris abundance were well reproduced using a simple numerical model driven by satellite-derived wind data, with significant correlation at 95% confidence level. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Sensing human physiological response using wearable carbon nanotube-based fabrics

    NASA Astrophysics Data System (ADS)

    Wang, Long; Loh, Kenneth J.; Koo, Helen S.

    2016-04-01

    Flexible and wearable sensors for human monitoring have received increased attention. Besides detecting motion and physical activity, measuring human vital signals (e.g., respiration rate and body temperature) provide rich data for assessing subjects' physiological or psychological condition. Instead of using conventional, bulky, sensing transducers, the objective of this study was to design and test a wearable, fabric-like sensing system. In particular, multi-walled carbon nanotube (MWCNT)-latex thin films of different MWCNT concentrations were first fabricated using spray coating. Freestanding MWCNT-latex films were then sandwiched between two layers of flexible fabric using iron-on adhesive to form the wearable sensor. Second, to characterize its strain sensing properties, the fabric sensors were subjected to uniaxial and cyclic tensile load tests, and they exhibited relatively stable electromechanical responses. Finally, the wearable sensors were placed on a human subject for monitoring simple motions and for validating their practical strain sensing performance. Overall, the wearable fabric sensor design exhibited advances such as flexibility, ease of fabrication, light weight, low cost, noninvasiveness, and user comfort.

  5. Simultaneous Detection of Displacement, Rotation Angle, and Contact Pressure Using Sandpaper Molded Elastomer Based Triple Electrode Sensor

    PubMed Central

    Sul, Onejae; Lee, Seung-Beck

    2017-01-01

    In this article, we report on a flexible sensor based on a sandpaper-molded elastomer that simultaneously detects planar displacement, rotation angle, and vertical contact pressure. When displacement, rotation, and contact pressure are applied, the contact area between the translating top elastomer electrode and the three stationary bottom electrodes changes characteristically depending on the movement, making it possible to distinguish between them. The sandpaper-molded undulating surface of the elastomer reduces friction at the contact, allowing the sensor not to affect the movement during measurement. The sensor showed a 0.25 mm−1 displacement sensitivity with ±33 μm accuracy, a 0.027 degree−1 rotation sensitivity with ~0.95 degree accuracy, and a 4.96 kPa−1 pressure sensitivity. For possible application to joint movement detection, we demonstrated that our sensor effectively detected the up-and-down motion of a human forefinger and the bending and straightening motion of a human arm. PMID:28878166

  6. Simultaneous Detection of Displacement, Rotation Angle, and Contact Pressure Using Sandpaper Molded Elastomer Based Triple Electrode Sensor.

    PubMed

    Choi, Eunsuk; Sul, Onejae; Lee, Seung-Beck

    2017-09-06

    In this article, we report on a flexible sensor based on a sandpaper-molded elastomer that simultaneously detects planar displacement, rotation angle, and vertical contact pressure. When displacement, rotation, and contact pressure are applied, the contact area between the translating top elastomer electrode and the three stationary bottom electrodes changes characteristically depending on the movement, making it possible to distinguish between them. The sandpaper-molded undulating surface of the elastomer reduces friction at the contact, allowing the sensor not to affect the movement during measurement. The sensor showed a 0.25 mm−1 displacement sensitivity with ±33 μm accuracy, a 0.027 degree−1 rotation sensitivity with ~0.95 degree accuracy, and a 4.96 kPa−1 pressure sensitivity. For possible application to joint movement detection, we demonstrated that our sensor effectively detected the up-and-down motion of a human forefinger and the bending and straightening motion of a human arm.

  7. Identifying compensatory movement patterns in the upper extremity using a wearable sensor system.

    PubMed

    Ranganathan, Rajiv; Wang, Rui; Dong, Bo; Biswas, Subir

    2017-11-30

    Movement impairments such as those due to stroke often result in the nervous system adopting atypical movements to compensate for movement deficits. Monitoring these compensatory patterns is critical for improving functional outcomes during rehabilitation. The purpose of this study was to test the feasibility and validity of a wearable sensor system for detecting compensatory trunk kinematics during activities of daily living. Participants with no history of neurological impairments performed reaching and manipulation tasks with their upper extremity, and their movements were recorded by a wearable sensor system and validated using a motion capture system. Compensatory movements of the trunk were induced using a brace that limited range of motion at the elbow. Our results showed that the elbow brace elicited compensatory movements of the trunk during reaching tasks but not manipulation tasks, and that a wearable sensor system with two sensors could reliably classify compensatory movements (~90% accuracy). These results show the potential of the wearable system to assess and monitor compensatory movements outside of a lab setting.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kilcher, Levi F.; Thomson, Jim; Harding, Samuel

    Acoustic Doppler velocimeters (ADVs) are a valuable tool for making high-precision measurements of turbulence, and moorings are a convenient and ubiquitous platform for making many kinds of measurements in the ocean. However, because of concerns that mooring motion can contaminate turbulence measurements, and because acoustic Doppler profilers make mid-depth velocity measurements relatively easy, ADVs are not frequently deployed from moorings. This work demonstrates that inertial motion measurements can be used to reduce motion contamination from moored ADV velocity measurements. Three distinct mooring platforms were deployed in a tidal channel with inertial-motion-sensor-equipped ADVs. In each case, motion correction based on the inertial measurements reduces mooring motion contamination of velocity measurements. The spectra from these measurements are consistent with other measurements in tidal channels and have an f^(-5/3) slope at high frequencies, consistent with Kolmogorov's theory of isotropic turbulence. Motion correction also improves estimates of cross spectra and Reynolds stresses. A comparison of turbulence dissipation with flow speed and turbulence production indicates a bottom boundary layer production-dissipation balance during ebb and flood that is consistent with the strong tidal forcing at the site. Finally, these results indicate that inertial-motion-sensor-equipped ADVs are a valuable new tool for making high-precision turbulence measurements from moorings.

  9. Acoustic monitoring of first responder's physiology for health and performance surveillance

    NASA Astrophysics Data System (ADS)

    Scanlon, Michael V.

    2002-08-01

    Acoustic sensors have been used to monitor firefighter and soldier physiology to assess health and performance. The Army Research Laboratory has developed a unique body-contacting acoustic sensor that can monitor the health and performance of firefighters and soldiers while they are doing their mission. A gel-coupled sensor has acoustic impedance properties similar to the skin that facilitate the transmission of body sounds into the sensor pad, yet significantly repel ambient airborne noises due to an impedance mismatch. This technology can monitor heartbeats, breaths, blood pressure, motion, voice, and other indicators that can provide vital feedback to the medics and unit commanders. Diverse physiological parameters can be continuously monitored with acoustic sensors and transmitted for remote surveillance of personnel status. Body-worn acoustic sensors located at the neck, breathing mask, and wrist do an excellent job at detecting heartbeats and activity. However, they have difficulty extracting physiology during rigorous exercise or movements due to the motion artifacts sensed. Rigorous activity often indicates that the person is healthy by virtue of being active, and injury often causes the subject to become less active or incapacitated making the detection of physiology easier. One important measure of performance, heart rate variability, is the measure of beat-to-beat timing fluctuations derived from the interval between two adjacent beats. The Lomb periodogram is optimized for non-uniformly sampled data, and can be applied to non-stationary acoustic heart rate features (such as 1st and 2nd heart sounds) to derive heart rate variability and help eliminate errors created by motion artifacts. Simple peak-detection above or below a certain threshold or waveform derivative parameters can produce the timing and amplitude features necessary for the Lomb periodogram and cross-correlation techniques. High-amplitude motion artifacts may contribute to a different frequency or baseline noise due to the timing differences between the noise artifacts and heartbeat features. Data from a firefighter experiment is presented.
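
    The Lomb periodogram mentioned above operates directly on the unevenly spaced beat-to-beat series without resampling. The sketch below evaluates it with SciPy on a synthetic beat-time series; the beat statistics and frequency grid are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    def hrv_lomb(beat_times, freqs_hz):
        """Lomb periodogram of the interbeat-interval series.

        beat_times : detected heartbeat times in seconds (unevenly spaced)
        freqs_hz   : frequencies (Hz) at which to evaluate the periodogram
        """
        ibi = np.diff(beat_times)                 # interbeat intervals (s)
        t = beat_times[1:]                        # time stamp of each interval
        ibi = ibi - ibi.mean()                    # remove the mean before the transform
        return lombscargle(t, ibi, 2 * np.pi * freqs_hz)

    # Hypothetical beat times: ~60 bpm with respiratory modulation and jitter.
    rng = np.random.default_rng(0)
    intervals = 1.0 + 0.05 * np.sin(2 * np.pi * 0.25 * np.arange(120)) \
                + rng.normal(0, 0.01, 120)
    beat_times = np.cumsum(intervals)
    freqs = np.linspace(0.04, 0.5, 200)           # HRV band of interest
    power = hrv_lomb(beat_times, freqs)
    print("peak HRV frequency: %.2f Hz" % freqs[np.argmax(power)])
    ```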

  10. Fusion of smartphone motion sensors for physical activity recognition.

    PubMed

    Shoaib, Muhammad; Bosch, Stephan; Incel, Ozlem Durmaz; Scholten, Hans; Havinga, Paul J M

    2014-06-10

    For physical activity recognition, smartphone sensors, such as an accelerometer and a gyroscope, are being utilized in many research studies. So far, particularly, the accelerometer has been extensively studied. In a few recent studies, a combination of a gyroscope, a magnetometer (in a supporting role) and an accelerometer (in a lead role) has been used with the aim to improve the recognition performance. How and when are various motion sensors, which are available on a smartphone, best used for better recognition performance, either individually or in combination? This is yet to be explored. In order to investigate this question, in this paper, we explore how these various motion sensors behave in different situations in the activity recognition process. For this purpose, we designed a data collection experiment where ten participants performed seven different activities carrying smart phones at different positions. Based on the analysis of this data set, we show that these sensors, except the magnetometer, are each capable of taking the lead roles individually, depending on the type of activity being recognized, the body position, the used data features and the classification method employed (personalized or generalized). We also show that their combination only improves the overall recognition performance when their individual performances are not very high, so that there is room for performance improvement. We have made our data set and our data collection application publicly available, thereby making our experiments reproducible.
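
    A common pipeline behind studies of this kind windows the accelerometer and gyroscope streams, extracts simple statistics per sensor, and feeds the concatenated features to a classifier. The sketch below illustrates that combination on synthetic data; the window length, features and classifier are illustrative choices, not the paper's protocol.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(signal, win):
        """Mean and standard deviation per axis for each non-overlapping window.

        signal : array of shape (n_samples, 3) from one motion sensor
        """
        n = (len(signal) // win) * win
        windows = signal[:n].reshape(-1, win, 3)
        return np.hstack([windows.mean(axis=1), windows.std(axis=1)])

    # Hypothetical labeled recordings (accelerometer + gyroscope, 50 Hz, 2 s windows).
    rng = np.random.default_rng(1)
    acc = rng.normal(0, 1, (10000, 3))
    gyr = rng.normal(0, 1, (10000, 3))
    win = 100
    X = np.hstack([window_features(acc, win), window_features(gyr, win)])
    y = rng.integers(0, 7, len(X))               # seven activity labels (placeholder)
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print("training accuracy (illustrative):", clf.score(X, y))
    ```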

  11. Developments in hot-film anemometry measurements of hydroacoustic particle motion

    NASA Astrophysics Data System (ADS)

    Dubbelday, Pieter S.; Apostolico, Virgil V.; Diebel, Dean L.

    1988-08-01

    Hot film anemometry may be used to measure particle motion in hydroacoustic fields. Since the cylindrical sensors used thus far are very fragile, the method is little suited for use outside the laboratory. The measurement of the response of a more rugged conical sensor is reported here. Another way of protecting the sensor consists of packaging the sensor in a rubber liquid filled boot. This also prevents fouling and bubble formation on the heated film. The response shows a resonance at low frequency, ascribed to the liquid filled boot, which may be used for enhanced response in a limited frequency region. The response of a hot film anemometer to vertical hydroacoustic particle motion is influenced by free convection, which acts as a bias flow. The output was shown to be proportional to particle displacement for a wide range of parameters. It was expected that an imposed bias flow would increase the output and remove the dependence on the direction of gravity. Therefore, a hot-film sensor (diameter d) was subjected to an underwater jet from a nozzle. The output showed a transition from being proportional to particle speed, to being proportional to particle displacement, depending on the angular frequency omega and imposed flow speed omega. The transition takes place when a dimensionless number omega, defined as omega = omega/nu is of order 1.

  12. Multimodal integration in rostral fastigial nucleus provides an estimate of body movement

    PubMed Central

    Brooks, Jessica X.; Cullen, Kathleen E.

    2012-01-01

    The ability to accurately control posture and perceive self motion and spatial orientation requires knowledge of both the motion of the head and body. However, while the vestibular sensors and nuclei directly encode head motion, no sensors directly encode body motion. Instead, the convergence of vestibular and neck proprioceptive inputs during self-motion is generally believed to underlie the ability to compute body motion. Here, we provide evidence that the brain explicitly computes an internal estimate of body motion at the level of single cerebellar neurons. Neuronal responses were recorded from the rostral fastigial nucleus, the most medial of the deep cerebellar nuclei, during whole-body, body-under-head, and head-on-body rotations. We found that approximately half of the neurons encoded the motion of the body-in-space, while the other half encoded the motion of the head-in-space in a manner similar to neurons in the vestibular nuclei. Notably, neurons encoding body motion responded to both vestibular and proprioceptive stimulation (accordingly termed bimodal neurons). In contrast, neurons encoding head motion were only sensitive to vestibular inputs (accordingly termed unimodal neurons). Comparison of the proprioceptive and vestibular responses of bimodal neurons further revealed similar tuning in response to changes in head-on-body position. We propose that the similarity in nonlinear processing of vestibular and proprioceptive signals underlies the accurate computation of body motion. Furthermore, the same neurons that encode body motion (i.e., bimodal neurons) most likely encode vestibular signals in a body referenced coordinate frame, since the integration of proprioceptive and vestibular information is required for both computations. PMID:19710303

  13. Potential of IMU Sensors in Performance Analysis of Professional Alpine Skiers

    PubMed Central

    Yu, Gwangjae; Jang, Young Jae; Kim, Jinhyeok; Kim, Jin Hae; Kim, Hye Young; Kim, Kitae; Panday, Siddhartha Bikram

    2016-01-01

    In this paper, we present an analysis to identify a sensor location for an inertial measurement unit (IMU) on the body of a skier and propose the best location to capture turn motions for training. We also validate the manner in which the data from the IMU sensor on the proposed location can characterize ski turns and performance with a series of statistical analyses, including a comparison with data collected from foot pressure sensors. The goal of the study is to logically identify the ideal location on the skier’s body to attach the IMU sensor and the best use of the data collected for the skier. The statistical analyses and the hierarchical clustering method indicate that the pelvis is the best location for attachment of an IMU, and numerical validation shows that the data collected from this location can effectively estimate the performance and characteristics of the skier. Moreover, placement of the sensor at this location does not distract the skier’s motion, and the sensor can be easily attached and detached. The findings of this study can be used for the development of a wearable device for the routine training of professional skiers. PMID:27043579

  14. Compact Hip-Force Sensor for a Gait-Assistance Exoskeleton System.

    PubMed

    Choi, Hyundo; Seo, Keehong; Hyung, Seungyong; Shim, Youngbo; Lim, Soo-Chul

    2018-02-13

    In this paper, we propose a compact force sensor system for a hip-mounted exoskeleton for seniors with difficulties in walking due to muscle weakness. It senses and monitors the delivered force and power of the exoskeleton for motion control and for taking urgent safety action. Two force-sensitive resistor (FSR) sensors are used to measure the assistance force when the user is walking. The sensor system directly measures the interaction force between the exoskeleton and the lower limb of the user, instead of a previously reported force-sensing method that estimated the hip assistance force from the motor current and lookup tables. Furthermore, the sensor system has the advantage of generating torque in the walking-assistant actuator based on the directly measured hip-assistance force. Thus, the gait-assistance exoskeleton system can control the power and torque delivered to the user. The force-sensing structure is designed to decouple the force caused by hip motion from forces in other directions, so that only that force is measured. We confirmed that the hip-assistance force could be measured with the proposed prototype compact force sensor attached to a thigh frame through an experiment with a real system.

  15. An intelligent surveillance platform for large metropolitan areas with dense sensor deployment.

    PubMed

    Fernández, Jorge; Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio; Alonso-López, Jesus A; Smilansky, Zeev

    2013-06-07

    This paper presents an intelligent surveillance platform based on the usage of large numbers of inexpensive sensors designed and developed inside the European Eureka Celtic project HuSIMS. With the aim of maximizing the number of deployable units while keeping monetary and resource/bandwidth costs at a minimum, the surveillance platform is based on the usage of inexpensive visual sensors which apply efficient motion detection and tracking algorithms to transform the video signal in a set of motion parameters. In order to automate the analysis of the myriad of data streams generated by the visual sensors, the platform's control center includes an alarm detection engine which comprises three components applying three different Artificial Intelligence strategies in parallel. These strategies are generic, domain-independent approaches which are able to operate in several domains (traffic surveillance, vandalism prevention, perimeter security, etc.). The architecture is completed with a versatile communication network which facilitates data collection from the visual sensors and alarm and video stream distribution towards the emergency teams. The resulting surveillance system is extremely suitable for its deployment in metropolitan areas, smart cities, and large facilities, mainly because cheap visual sensors and autonomous alarm detection facilitate dense sensor network deployments for wide and detailed coverage.

  16. Compact Hip-Force Sensor for a Gait-Assistance Exoskeleton System

    PubMed Central

    Choi, Hyundo; Seo, Keehong; Hyung, Seungyong; Shim, Youngbo; Lim, Soo-Chul

    2018-01-01

    In this paper, we propose a compact force sensor system for a hip-mounted exoskeleton for seniors with difficulties in walking due to muscle weakness. It senses and monitors the delivered force and power of the exoskeleton for motion control and for taking urgent safety action. Two force-sensitive resistor (FSR) sensors are used to measure the assistance force when the user is walking. The sensor system directly measures the interaction force between the exoskeleton and the lower limb of the user, instead of a previously reported force-sensing method that estimated the hip assistance force from the motor current and lookup tables. Furthermore, the sensor system has the advantage of generating torque in the walking-assistant actuator based on the directly measured hip-assistance force. Thus, the gait-assistance exoskeleton system can control the power and torque delivered to the user. The force-sensing structure is designed to decouple the force caused by hip motion from forces in other directions, so that only that force is measured. We confirmed that the hip-assistance force could be measured with the proposed prototype compact force sensor attached to a thigh frame through an experiment with a real system. PMID:29438300

  17. Use of Finite Elements Analysis for a Weigh-in-Motion Sensor Design

    PubMed Central

    Opitz, Rigobert; Goanta, Viorel; Carlescu, Petru; Barsanescu, Paul-Doru; Taranu, Nicolae; Banu, Oana

    2012-01-01

    High speed weigh-in-motion (WIM) sensors are utilized as components of complex traffic monitoring and measurement systems. They should be able to determine the weights on wheels, axles and vehicle gross weights, and to help the classification of vehicles (depending on the number of axles). WIM sensors must meet the following main requirements: good accuracy, high endurance, low price and easy installation in the road structure. It is not advisable to use cheap materials in constructing these devices for lower prices, since the sensors are normally working in harsh environmental conditions such as temperatures between −40 °C and +70 °C, dust, temporary water immersion, shocks and vibrations. Consequently, less expensive manufacturing technologies are recommended. Because the installation cost in the road structure is high and proportional to the WIM sensor cross section (especially with its thickness), the device needs to be made as flat as possible. The WIM sensor model presented and analyzed in this paper uses a spring element equipped with strain gages. Using Finite Element Analysis (FEA), the authors have attempted to obtain a more sensitive, reliable, lower profile and overall cheaper elastic element for a new WIM sensor. PMID:22969332

  18. Muscle Strength Endurance Testing Development Based Photo Transistor with Motion Sensor Ultrasonic

    NASA Astrophysics Data System (ADS)

    Rusdiana, A.

    2017-03-01

    The endurance of upper-body muscles is one of the most important physical fitness components. As technology develops, the process of testing and assessment is becoming digital; for instance, there are sensors attached to the shoe (Foot Pod, Polar, and Suunto), the Global Positioning System (GPS) and Differential Global Positioning System (DGPS), radar, photo finish, kinematic analysis, and photocells. These devices aim to analyze the performance and fitness of athletes, particularly the endurance of the arm, chest, and shoulder muscles. In relation to that, this study attempts to create software and hardware for pull-up testing using a phototransistor with an ultrasonic motion sensor. The components needed to develop this device consist of an MCS-51 microcontroller, a phototransistor, a light-emitting diode, a buzzer, an ultrasonic sensor, and an infrared sensor. The infrared sensor is put under the buffer while the ultrasonic sensor is attached to the upper pole. The components are integrated with an LED or a laptop using software made with Visual Basic 12. The results show that the pull-up test using the digital device (mean: 9.4 rep) yields lower counts than manual counting (mean: 11.3 rep). This is because the digital test requires the test-takers to perform pull-ups perfectly.
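
    Counting repetitions from the ultrasonic sensor reduces to detecting when the measured distance crosses a "chin over the bar" threshold and later returns past an "arms extended" threshold. The sketch below shows such a two-threshold counter on a synthetic distance trace; thresholds and units are assumptions, not the device's calibration.

    ```python
    import numpy as np

    def count_pull_ups(distance_cm, up_threshold=10.0, down_threshold=25.0):
        """Count repetitions from an ultrasonic distance trace (sensor above the bar).

        A repetition is counted when the distance drops below up_threshold
        (chin over the bar) and later rises above down_threshold (arms extended).
        """
        count, at_top = 0, False
        for d in distance_cm:
            if not at_top and d < up_threshold:
                at_top = True
                count += 1
            elif at_top and d > down_threshold:
                at_top = False
        return count

    # Hypothetical trace: distance oscillating as the athlete pulls up and lowers.
    t = np.linspace(0, 20, 400)
    distance = 20 + 15 * np.cos(2 * np.pi * 0.5 * t)      # 10 repetitions in 20 s
    print("repetitions:", count_pull_ups(distance))
    ```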

  19. The feasibility of using Microsoft Kinect v2 sensors during radiotherapy delivery.

    PubMed

    Edmunds, David M; Bashforth, Sophie E; Tahavori, Fatemeh; Wells, Kevin; Donovan, Ellen M

    2016-11-08

    Consumer-grade distance sensors, such as the Microsoft Kinect devices (v1 and v2), have been investigated for use as marker-free motion monitoring systems for radiotherapy. The radiotherapy delivery environment is challenging for such sensors because of the proximity to electromagnetic interference (EMI) from the pulse forming network which fires the magnetron and electron gun of a linear accelerator (linac) during radiation delivery, as well as the requirement to operate them from the control area. This work investigated whether using Kinect v2 sensors as motion monitors was feasible during radiation delivery. Three sensors were used, each with a 12 m USB 3.0 active cable which replaced the supplied 3 m USB 3.0 cable. Distance output data from the Kinect v2 sensors were recorded under four conditions of linac operation: (i) powered up only, (ii) pulse forming network operating with no radiation, (iii) pulse repetition frequency varied between 6 Hz and 400 Hz, (iv) dose rate varied between 50 and 1450 monitor units (MU) per minute. A solid water block was used as an object and imaged when static, when moved in a set of steps from 0.6 m to 2.0 m from the sensor, and when moving dynamically in two sinusoidal-like trajectories. Few additional image artifacts were observed and there was no impact on the tracking of the motion patterns (root mean squared accuracy of 1.4 and 1.1 mm, respectively). The sensors' distance accuracy varied by 2.0 to 3.8 mm (1.2 to 1.4 mm post distance calibration) across the range measured; the precision was 1 mm. There was minimal effect from the EMI on the distance calibration data: 0 mm or 1 mm reported distance change (2 mm maximum change at one position). Kinect v2 sensors operated with 12 m USB 3.0 active cables appear robust to the radiotherapy treatment environment. © 2016 The Authors.

  20. An open architecture motion controller

    NASA Technical Reports Server (NTRS)

    Rossol, Lothar

    1994-01-01

    Nomad, an open architecture motion controller, is described. It is formed by a combination of TMOS, C-WORKS, and other utilities. Nomad software runs in a UNIX environment and provides for sensor-controlled robotic motions, with user replaceable kinematics. It can also be tailored for highly specialized applications. Open controllers such as Nomad should have a major impact on the robotics industry.

  1. A markerless system based on smartphones and webcam for the measure of step length, width and duration on treadmill.

    PubMed

    Barone, V; Verdini, F; Burattini, L; Di Nardo, F; Fioretti, S

    2016-03-01

    A markerless low cost prototype has been developed for the determination of some spatio-temporal parameters of human gait: step-length, step-width and cadence have been considered. Only a smartphone and a high-definition webcam have been used. The signals obtained by the accelerometer embedded in the smartphone are used to recognize the heel strike events, while the feet positions are calculated through image processing of the webcam stream. Step length and width are computed during gait trials on a treadmill at various speeds (3, 4 and 5 km/h). Six subjects have been tested for a total of 504 steps. Results were compared with those obtained by a stereo-photogrammetric system (Elite, BTS Engineering). The maximum average errors were 3.7 cm (5.36%) for the right step length and 1.63 cm (15.16%) for the right step width at 5 km/h. The maximum average error for step duration was 0.02 s (1.69%) at 5 km/h for the right steps. The system is characterized by a very high level of automation that allows its use by non-expert users in non-structured environments. A low cost system able to automatically provide a reliable and repeatable evaluation of some gait events and parameters during treadmill walking, is relevant also from a clinical point of view because it allows the analysis of hundreds of steps and consequently an analysis of their variability. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
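
    The two-stream approach above, heel-strike timing from the smartphone accelerometer and foot placement from the webcam, can be sketched as follows. The peak-detection settings, pixel-to-metre scale and data layout are hypothetical and only show how the two modalities combine into step duration, length and width.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def heel_strikes(accel_mag, fs, min_step_s=0.4):
        """Indices of heel-strike events from the acceleration magnitude signal."""
        peaks, _ = find_peaks(accel_mag,
                              height=accel_mag.mean() + accel_mag.std(),
                              distance=int(min_step_s * fs))
        return peaks

    def step_parameters(strike_idx, foot_xy, fs, metres_per_pixel):
        """Step duration (s), length and width (m) between consecutive heel strikes.

        foot_xy : (n_frames, 2) foot position in webcam pixels, frame-aligned with
                  the accelerometer samples (walking direction along x).
        """
        durations = np.diff(strike_idx) / fs
        steps = np.diff(foot_xy[strike_idx], axis=0) * metres_per_pixel
        return durations, np.abs(steps[:, 0]), np.abs(steps[:, 1])

    # Hypothetical synchronized recordings at 100 Hz.
    fs = 100.0
    t = np.arange(0, 10, 1 / fs)
    accel_mag = 9.81 + 2.0 * np.maximum(0, np.sin(2 * np.pi * 1.8 * t)) ** 8
    foot_xy = np.column_stack([np.cumsum(np.full(t.size, 0.7)), 50 + 5 * np.sin(t)])
    idx = heel_strikes(accel_mag, fs)
    dur, length, width = step_parameters(idx, foot_xy, fs, metres_per_pixel=0.001)
    print("mean step duration %.2f s" % dur.mean())
    ```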

  2. Development of a smartphone application to measure physical activity using sensor-assisted self-report.

    PubMed

    Dunton, Genevieve Fridlund; Dzubur, Eldin; Kawabata, Keito; Yanez, Brenda; Bo, Bin; Intille, Stephen

    2014-01-01

    Despite the known advantages of objective physical activity monitors (e.g., accelerometers), these devices have high rates of non-wear, which leads to missing data. Objective activity monitors are also unable to capture valuable contextual information about behavior. Adolescents recruited into physical activity surveillance and intervention studies will increasingly have smartphones, which are miniature computers with built-in motion sensors. This paper describes the design and development of a smartphone application ("app") called Mobile Teen that combines objective and self-report assessment strategies through (1) sensor-informed context-sensitive ecological momentary assessment (CS-EMA) and (2) sensor-assisted end-of-day recall. The Mobile Teen app uses the mobile phone's built-in motion sensor to automatically detect likely bouts of phone non-wear, sedentary behavior, and physical activity. The app then uses transitions between these inferred states to trigger CS-EMA self-report surveys measuring the type, purpose, and context of activity in real-time. The end of the day recall component of the Mobile Teen app allows users to interactively review and label their own physical activity data each evening using visual cues from automatically detected major activity transitions from the phone's built-in motion sensors. Major activity transitions are identified by the app, which cues the user to label that "chunk," or period, of time using activity categories. Sensor-driven CS-EMA and end-of-day recall smartphone apps can be used to augment physical activity data collected by objective activity monitors, filling in gaps during non-wear bouts and providing additional real-time data on environmental, social, and emotional correlates of behavior. Smartphone apps such as these have potential for affordable deployment in large-scale epidemiological and intervention studies.

  3. Development of a Smartphone Application to Measure Physical Activity Using Sensor-Assisted Self-Report

    PubMed Central

    Dunton, Genevieve Fridlund; Dzubur, Eldin; Kawabata, Keito; Yanez, Brenda; Bo, Bin; Intille, Stephen

    2013-01-01

    Introduction: Despite the known advantages of objective physical activity monitors (e.g., accelerometers), these devices have high rates of non-wear, which leads to missing data. Objective activity monitors are also unable to capture valuable contextual information about behavior. Adolescents recruited into physical activity surveillance and intervention studies will increasingly have smartphones, which are miniature computers with built-in motion sensors. Methods: This paper describes the design and development of a smartphone application (“app”) called Mobile Teen that combines objective and self-report assessment strategies through (1) sensor-informed context-sensitive ecological momentary assessment (CS-EMA) and (2) sensor-assisted end-of-day recall. Results: The Mobile Teen app uses the mobile phone’s built-in motion sensor to automatically detect likely bouts of phone non-wear, sedentary behavior, and physical activity. The app then uses transitions between these inferred states to trigger CS-EMA self-report surveys measuring the type, purpose, and context of activity in real-time. The end of the day recall component of the Mobile Teen app allows users to interactively review and label their own physical activity data each evening using visual cues from automatically detected major activity transitions from the phone’s built-in motion sensors. Major activity transitions are identified by the app, which cues the user to label that “chunk,” or period, of time using activity categories. Conclusion: Sensor-driven CS-EMA and end-of-day recall smartphone apps can be used to augment physical activity data collected by objective activity monitors, filling in gaps during non-wear bouts and providing additional real-time data on environmental, social, and emotional correlates of behavior. Smartphone apps such as these have potential for affordable deployment in large-scale epidemiological and intervention studies. PMID:24616888
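
    The trigger logic described in these two records, prompting a context-sensitive survey when the inferred state changes, reduces to watching transitions in a stream of state labels. The minimal sketch below uses hypothetical state names and a debounce interval; it is not the Mobile Teen implementation.

    ```python
    from datetime import datetime, timedelta

    def ema_triggers(labeled_states, min_gap=timedelta(minutes=15)):
        """Yield (time, old_state, new_state) whenever the inferred state changes,
        rate-limited so prompts are at least min_gap apart.

        labeled_states : iterable of (timestamp, state) pairs, e.g. state in
                         {"non-wear", "sedentary", "active"}.
        """
        last_state, last_prompt = None, None
        for ts, state in labeled_states:
            if last_state is not None and state != last_state:
                if last_prompt is None or ts - last_prompt >= min_gap:
                    yield ts, last_state, state
                    last_prompt = ts
            last_state = state

    # Hypothetical inferred states at 5-minute resolution.
    start = datetime(2014, 1, 1, 9, 0)
    stream = [(start + timedelta(minutes=5 * i), s)
              for i, s in enumerate(["sedentary"] * 4 + ["active"] * 3 + ["non-wear"] * 2)]
    for ts, old, new in ema_triggers(stream):
        print(f"{ts:%H:%M}: prompt survey ({old} -> {new})")
    ```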

  4. PSD Camera Based Position and Posture Control of Redundant Robot Considering Contact Motion

    NASA Astrophysics Data System (ADS)

    Oda, Naoki; Kotani, Kentaro

    The paper describes a position and posture controller design based on the absolute position by external PSD vision sensor for redundant robot manipulator. The redundancy enables a potential capability to avoid obstacle while continuing given end-effector jobs under contact with middle link of manipulator. Under contact motion, the deformation due to joint torsion obtained by comparing internal and external position sensor, is actively suppressed by internal/external position hybrid controller. The selection matrix of hybrid loop is given by the function of the deformation. And the detected deformation is also utilized in the compliant motion controller for passive obstacle avoidance. The validity of the proposed method is verified by several experimental results of 3link planar redundant manipulator.

  5. A collaborative computing framework of cloud network and WBSN applied to fall detection and 3-D motion reconstruction.

    PubMed

    Lai, Chin-Feng; Chen, Min; Pan, Jeng-Shyang; Youn, Chan-Hyun; Chao, Han-Chieh

    2014-03-01

    As cloud computing and wireless body sensor network technologies become gradually developed, ubiquitous healthcare services prevent accidents instantly and effectively, as well as provides relevant information to reduce related processing time and cost. This study proposes a co-processing intermediary framework integrated cloud and wireless body sensor networks, which is mainly applied to fall detection and 3-D motion reconstruction. In this study, the main focuses includes distributed computing and resource allocation of processing sensing data over the computing architecture, network conditions and performance evaluation. Through this framework, the transmissions and computing time of sensing data are reduced to enhance overall performance for the services of fall events detection and 3-D motion reconstruction.

  6. A Low-Power ASIC Signal Processor for a Vestibular Prosthesis.

    PubMed

    Töreyin, Hakan; Bhatti, Pamela T

    2016-06-01

    A low-power ASIC signal processor for a vestibular prosthesis (VP) is reported. Fabricated with TI 0.35 μm CMOS technology and designed to interface with implanted inertial sensors, the digitally assisted analog signal processor operates extensively in the CMOS subthreshold region. During its operation the ASIC encodes head motion signals captured by the inertial sensors as electrical pulses ultimately targeted for in-vivo stimulation of vestibular nerve fibers. To achieve this, the ASIC implements a coordinate system transformation to correct for misalignment between natural sensors and implanted inertial sensors. It also mimics the frequency response characteristics and frequency encoding mappings of angular and linear head motions observed at the peripheral sense organs, semicircular canals and otolith. Overall the design occupies an area of 6.22 mm² and consumes 1.24 mW when supplied with ± 1.6 V.

  7. A Low-Power ASIC Signal Processor for a Vestibular Prosthesis

    PubMed Central

    Töreyin, Hakan; Bhatti, Pamela T.

    2017-01-01

    A low-power ASIC signal processor for a vestibular prosthesis (VP) is reported. Fabricated with TI 0.35 μm CMOS technology and designed to interface with implanted inertial sensors, the digitally assisted analog signal processor operates extensively in the CMOS subthreshold region. During its operation the ASIC encodes head motion signals captured by the inertial sensors as electrical pulses ultimately targeted for in-vivo stimulation of vestibular nerve fibers. To achieve this, the ASIC implements a coordinate system transformation to correct for misalignment between natural sensors and implanted inertial sensors. It also mimics the frequency response characteristics and frequency encoding mappings of angular and linear head motions observed at the peripheral sense organs, semicircular canals and otolith. Overall the design occupies an area of 6.22 mm2 and consumes 1.24 mW when supplied with ± 1.6 V. PMID:26800546
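
    Two of the processing stages named in this record, re-expressing the implanted sensor's readings in the axes of the natural sensors and shaping them with canal-like dynamics, can be illustrated in software. The alignment matrix, time constant and firing-rate mapping below are placeholders, not the ASIC's fixed-point design.

    ```python
    import numpy as np
    from scipy.signal import lfilter

    def align_to_canals(gyro_xyz, R):
        """Rotate sensor-frame angular velocity into the canal coordinate frame."""
        return gyro_xyz @ R.T

    def canal_like_filter(omega, fs, tau=6.0):
        """First-order high-pass with time constant tau, a common approximation
        of semicircular-canal dynamics, applied per channel."""
        alpha = tau / (tau + 1.0 / fs)
        b, a = [alpha, -alpha], [1.0, -alpha]
        return lfilter(b, a, omega, axis=0)

    # Hypothetical misalignment (5 degrees about z) and a yaw rotation stimulus.
    fs = 1000.0
    t = np.arange(0, 4, 1 / fs)
    gyro = np.column_stack([np.zeros_like(t), np.zeros_like(t),
                            50 * np.sin(2 * np.pi * 1.0 * t)])      # deg/s
    c, s = np.cos(np.radians(5)), np.sin(np.radians(5))
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    aligned = align_to_canals(gyro, R)
    rates = 100 + 2.0 * canal_like_filter(aligned, fs)              # pulses/s mapping
    print("stimulation rate range:", rates.min(), rates.max())
    ```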

  8. Avoiding space robot collisions utilizing the NASA/GSFC tri-mode skin sensor

    NASA Technical Reports Server (NTRS)

    Prinz, F. B.

    1991-01-01

    Sensor-based robot motion planning research has primarily focused on mobile robots. Consider, however, the case of a robot manipulator expected to operate autonomously in a dynamic environment where unexpected collisions can occur with many parts of the robot. Only a sensor-based system capable of generating collision-free paths would be acceptable in such situations. Recently, work in this area has been reported in which a deterministic solution for 2-DOF systems was generated; the arm was sensitized with a 'skin' of infrared sensors. We have proposed a heuristic (potential field based) methodology for redundant robots with many DOFs. The key concepts are solving the path planning problem by cooperating global and local planning modules, the use of complete information from the sensors and partial (but appropriate) information from a world model, representation of objects with hyper-ellipsoids in the world model, and the use of variational planning. We intend to sensitize the robot arm with a 'skin' of capacitive proximity sensors. These sensors were developed at NASA and are exceptionally well suited for space applications. In the first part of the report, we discuss the development and modeling of the capacitive proximity sensor. In the second part we discuss the motion planning algorithm.
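
    The potential-field planning sketched above, an attractive pull toward the goal plus a repulsive push away from obstacles reported by the proximity skin, can be written compactly. The gains, influence distance and treatment of sensor readings as point obstacles are assumptions made for illustration.

    ```python
    import numpy as np

    def potential_field_step(q, goal, obstacle_points, k_att=1.0, k_rep=0.5,
                             influence=0.3, step=0.05):
        """One gradient-descent step on an attractive/repulsive potential.

        q, goal         : current and target configuration/position (arrays)
        obstacle_points : points detected by the proximity sensors, same space as q
        """
        force = k_att * (goal - q)                        # attractive term
        for obs in obstacle_points:
            d = np.linalg.norm(q - obs)
            if 1e-6 < d < influence:                      # only nearby obstacles repel
                force += k_rep * (1.0 / d - 1.0 / influence) / d**2 * (q - obs) / d
        return q + step * force / (np.linalg.norm(force) + 1e-9)

    # Hypothetical planar example: move toward a goal past one sensed obstacle.
    q, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0])
    obstacles = [np.array([0.5, 0.05])]
    for _ in range(40):
        q = potential_field_step(q, goal, obstacles)
    print("final position:", np.round(q, 3))
    ```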

  9. Highly stretchable strain sensor based on polyurethane substrate using hydrogen bond-assisted laminated structure for monitoring of tiny human motions

    NASA Astrophysics Data System (ADS)

    Huang, Ying; Zhao, Yunong; Wang, Yang; Guo, Xiaohui; Zhang, Yangyang; Liu, Ping; Liu, Caixia; Zhang, Yugang

    2018-03-01

    Strain sensors used as flexible and wearable electronic devices have promising prospects in the fields of artificial skin, robotics, human-machine interfaces, and healthcare. This work introduces a highly stretchable fiber-based strain sensor with a laminated structure made up of a graphene nanoplatelet layer and a carbon black/single-walled carbon nanotube synergetic conductive network layer. An ultrathin, flexible, and elastic two-layer polyurethane (PU) yarn substrate was successively deposited by a novel chemical bonding-based layered dip-coating process. These strain sensors demonstrated high stretchability (~350%), little hysteresis, and long-term durability (over 2400 cycles) due to the favorable tensile properties of the PU substrate. The linearity of the strain sensor could reach an adjusted R-squared of 0.990 at 100% strain, which is better than most of the recently reported strain sensors. Meanwhile, the strain sensor exhibited good sensitivity, rapid response, and a low detection limit. The low detection limit benefited from the hydrogen bond-assisted laminated structure and continuous conductive path. Finally, a series of experiments were carried out based on the special features of the PU strain sensor to show its capacity for detecting and monitoring tiny human motions.

  10. Nurses lead the way with webcam consultations.

    PubMed

    Pearce, Lynne

    2017-09-06

    More than a decade ago, Airedale NHS Foundation Trust in West Yorkshire began using a video link to deliver consultations to prisoners at a high-security jail. It meant prisoners no longer had to be escorted to the outpatient department.

  11. Thousand-fold fluorescent signal amplification for mHealth diagnostics

    PubMed Central

    Balsam, Joshua; Rasooly, Reuven; Bruck, Hugh Alan; Rasooly, Avraham

    2013-01-01

    The low sensitivity of Mobile Health (mHealth) optical detectors, such as those found on mobile phones, is a limiting factor for many mHealth clinical applications. To improve sensitivity, we have combined two approaches for optical signal amplification: (1) a computational approach based on an image stacking algorithm to decrease the image noise and enhance weak signals, and (2) an optical signal amplifier utilizing a capillary tube array. These approaches were used in a detection system which includes multi-wavelength LEDs capable of exciting many fluorophores in multiple wavelengths, a mobile phone or a webcam as a detector, and a capillary tube array configured with 36 capillary tubes for signal enhancement. The capillary array enables a ~100X increase in signal sensitivity for fluorescein, reducing the limit of detection (LOD) for mobile phones and webcams from 1000 nM to 10 nM. Computational image stacking enables another ~10X increase in signal sensitivity, further reducing the LOD for webcam from 10 nM to 1 nM. To demonstrate the feasibility of the device for the detection of disease-related biomarkers, Adenovirus DNA labeled with SYBR Green or fluorescein was analyzed by both our capillary array and a commercial plate reader. The LOD for the capillary array was 5 ug/mL, and that of the plate reader was 1 ug/mL. Similar results were obtained using DNA stained with fluorescein. The combination of the two signal amplification approaches enables a ~1000X increase in LOD for the webcam platform. This brings it into the range of a conventional plate reader while using a smaller sample volume (10 ul) than the plate reader requires (100 ul). This suggests that such a device could be suitable for biosensing applications where up to 10-fold smaller sample sizes are needed. The simple optical configuration for mHealth described in this paper employing the combined capillary and image processing signal amplification is capable of measuring weak fluorescent signals without the need for dedicated laboratories. It has the potential to be used to increase sensitivity of other optically based mHealth technologies, and may increase mHealth’s clinical utility, especially for telemedicine and for resource-poor settings and global health applications. PMID:23928092

  12. Thousand-fold fluorescent signal amplification for mHealth diagnostics.

    PubMed

    Balsam, Joshua; Rasooly, Reuven; Bruck, Hugh Alan; Rasooly, Avraham

    2014-01-15

    The low sensitivity of Mobile Health (mHealth) optical detectors, such as those found on mobile phones, is a limiting factor for many mHealth clinical applications. To improve sensitivity, we have combined two approaches for optical signal amplification: (1) a computational approach based on an image stacking algorithm to decrease the image noise and enhance weak signals, and (2) an optical signal amplifier utilizing a capillary tube array. These approaches were used in a detection system which includes multi-wavelength LEDs capable of exciting many fluorophores in multiple wavelengths, a mobile phone or a webcam as a detector, and capillary tube array configured with 36 capillary tubes for signal enhancement. The capillary array enables a ~100× increase in signal sensitivity for fluorescein, reducing the limit of detection (LOD) for mobile phones and webcams from 1000 nM to 10nM. Computational image stacking enables another ~10× increase in signal sensitivity, further reducing the LOD for webcam from 10nM to 1 nM. To demonstrate the feasibility of the device for the detection of disease-related biomarkers, adenovirus DNA labeled with SYBR green or fluorescein was analyzed by both our capillary array and a commercial plate reader. The LOD for the capillary array was 5 ug/mL, and that of the plate reader was 1 ug/mL. Similar results were obtained using DNA stained with fluorescein. The combination of the two signal amplification approaches enables a ~1000× increase in LOD for the webcam platform. This brings it into the range of a conventional plate reader while using a smaller sample volume (10 ul) than the plate reader requires (100 ul). This suggests that such a device could be suitable for biosensing applications where up to 10 fold smaller sample sizes are needed. The simple optical configuration for mHealth described in this paper employing the combined capillary and image processing signal amplification is capable of measuring weak fluorescent signals without the need of dedicated laboratories. It has the potential to be used to increase sensitivity of other optically based mHealth technologies, and may increase mHealth's clinical utility, especially for telemedicine and for resource-poor settings and global health applications. Published by Elsevier B.V.
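
    The computational amplification step shared by these two records, stacking many frames so that uncorrelated noise averages down by roughly the square root of the number of frames, is simple to sketch. The frame count, image size and signal level below are arbitrary and illustrate the principle only.

    ```python
    import numpy as np

    def stack_frames(frames):
        """Average a list of aligned frames to suppress uncorrelated noise."""
        return np.mean(np.stack(frames, axis=0), axis=0)

    # Hypothetical weak fluorescent spot buried in sensor noise.
    rng = np.random.default_rng(42)
    signal = np.zeros((64, 64))
    signal[30:34, 30:34] = 2.0                      # weak spot, ~2 counts
    frames = [signal + rng.normal(0, 10, signal.shape) for _ in range(100)]

    single_snr = signal.max() / frames[0].std()
    stacked = stack_frames(frames)
    stacked_snr = signal.max() / (stacked - signal).std()
    print(f"SNR single frame: {single_snr:.2f}, after stacking 100 frames: {stacked_snr:.2f}")
    ```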

  13. Design constraints of the LST fine guidance sensor

    NASA Technical Reports Server (NTRS)

    Wissinger, A. B.

    1975-01-01

    The LST Fine Guidance Sensor design is shaped by the rate of occurrence of suitable guide stars, the competition for telescope focal plane space with the Science Instruments, and the sensitivity of candidate image motion sensors. The relationship between these parameters is presented, and sensitivity to faint stars is shown to be of prime importance. An interferometric technique of image motion sensing is shown to have improved sensitivity and, therefore, a reduced focal plane area requirement in comparison with other candidate techniques (image-splitting prism and image dissector tube techniques). Another design requirement is speed in acquiring the guide star in order to maximize the time available for science observations. The design constraints are shown parametrically, and modelling results are presented.

  14. Autonomous microsystems for ground observation (AMIGO)

    NASA Astrophysics Data System (ADS)

    Laou, Philips

    2005-05-01

    This paper reports the development of a prototype autonomous surveillance microsystem AMIGO that can be used for remote surveillance. Each AMIGO unit is equipped with various sensors and electronics. These include passive infrared motion sensor, acoustic sensor, uncooled IR camera, electronic compass, global positioning system (GPS), and spread spectrum wireless transceiver. The AMIGO unit was configured to multipoint (AMIGO units) to point (base station) communication mode. In addition, field trials were conducted with AMIGO in various scenarios. These scenarios include personnel and vehicle intrusion detection (motion or sound) and target imaging; determination of target GPS position by triangulation; GPS position real time tracking; entrance event counting; indoor surveillance; and aerial surveillance on a radio controlled model plane. The architecture and test results of AMIGO will be presented.

  15. 47 CFR 15.253 - Operation within the bands 46.7-46.9 GHz and 76.0-77.0 GHz.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... restricted to vehicle-mounted field disturbance sensors used as vehicle radar systems. The transmission of...-mounted field disturbance sensor. Operation under the provisions of this section is not permitted on... structure. (2) For forward-looking vehicle mounted field disturbance sensors, if the vehicle is in motion...

  16. 47 CFR 15.253 - Operation within the bands 46.7-46.9 GHz and 76.0-77.0 GHz.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... restricted to vehicle-mounted field disturbance sensors used as vehicle radar systems. The transmission of...-mounted field disturbance sensor. Operation under the provisions of this section is not permitted on... structure. (2) For forward-looking vehicle mounted field disturbance sensors, if the vehicle is in motion...

  17. 47 CFR 15.253 - Operation within the bands 46.7-46.9 GHz and 76.0-77.0 GHz.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... restricted to vehicle-mounted field disturbance sensors used as vehicle radar systems. The transmission of...-mounted field disturbance sensor. Operation under the provisions of this section is not permitted on... structure. (2) For forward-looking vehicle mounted field disturbance sensors, if the vehicle is in motion...

  18. The Sense-It App: A Smartphone Sensor Toolkit for Citizen Inquiry Learning

    ERIC Educational Resources Information Center

    Sharples, Mike; Aristeidou, Maria; Villasclaras-Fernández, Eloy; Herodotou, Christothea; Scanlon, Eileen

    2017-01-01

    The authors describe the design and formative evaluation of a sensor toolkit for Android smartphones and tablets that supports inquiry-based science learning. The Sense-it app enables a user to access all the motion, environmental and position sensors available on a device, linking these to a website for shared crowd-sourced investigations. The…

  19. Optimal geometry for a quartz multipurpose SPM sensor.

    PubMed

    Stirling, Julian

    2013-01-01

    We propose a geometry for a piezoelectric SPM sensor that can be used for combined AFM/LFM/STM. The sensor utilises symmetry to provide a lateral mode without the need to excite torsional modes. The symmetry allows normal and lateral motion to be completely isolated, even when introducing large tips to tune the dynamic properties to optimal values.

  20. Image-Aided Navigation Using Cooperative Binocular Stereopsis

    DTIC Science & Technology

    2014-03-27

    ...an inertial measurement unit (IMU). This technique capitalizes on an IMU's ability to capture quick motion and the ability of GPS to constrain long...the sensor-aided IMU framework. Visual sensors provide a number of benefits, such as low cost and weight. These sensors are also able to measure...
