Sample records for camera-based microswitch technology

  1. Camera-Based Microswitch Technology to Monitor Mouth, Eyebrow, and Eyelid Responses of Children with Profound Multiple Disabilities

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Lang, Russell; Didden, Robert

    2011-01-01

    A camera-based microswitch technology was recently used to successfully monitor small eyelid and mouth responses of two adults with profound multiple disabilities (Lancioni et al., Res Dev Disab 31:1509-1514, 2010a). This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on…

  2. Camera-Based Microswitch Technology for Eyelid and Mouth Responses of Persons with Profound Multiple Disabilities: Two Case Studies

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff

    2010-01-01

    These two studies assessed camera-based microswitch technology for eyelid and mouth responses of two persons with profound multiple disabilities and minimal motor behavior. This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on the participants' face but only small color…

  3. Two Persons with Multiple Disabilities Use Camera-Based Microswitch Technology to Control Stimulation with Small Mouth and Eyelid Responses

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Lang, Russell

    2012-01-01

    Background: A camera-based microswitch technology was recently developed to monitor small facial responses of persons with multiple disabilities and allow those responses to control environmental stimulation. This study assessed such a technology with 2 new participants using slight variations of previous responses. Method: The technology involved…

  4. New camera-based microswitch technology to monitor small head and mouth responses of children with multiple disabilities.

    PubMed

    Lancioni, Giulio E; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N; O'Reilly, Mark F; Green, Vanessa A; Furniss, Fred

    2014-06-01

    This study assessed a new camera-based microswitch technology that did not require the use of color marks on the participants' face. Two children with extensive multiple disabilities participated. The responses selected for them consisted of small lateral head movements and mouth closing or opening. The intervention was carried out according to a multiple probe design across responses. The technology involved a computer with a 2-GHz CPU, a USB video camera with a 16-mm lens, a USB cable connecting the camera and the computer, and a special software program written in ISO C++. The new technology was used satisfactorily with both children. Large increases in their responding were observed during the intervention periods (i.e., when the responses were followed by preferred stimulation). The new technology may be an important resource for persons with multiple disabilities and minimal motor behavior.
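
    The studies above do not publish their detection algorithm (the original software was written in C++). The following is a minimal, self-contained Python sketch of one common way a camera-based microswitch of this kind can be implemented: frame differencing within a region of interest placed over the monitored facial area. All names, thresholds, and dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def roi_motion_switch(prev_frame, frame, roi, diff_thresh=25, pixel_frac=0.02):
    """Return True if enough pixels changed inside the region of interest.

    prev_frame, frame: 2-D uint8 grayscale arrays of equal shape.
    roi: (row0, row1, col0, col1) bounding the monitored facial region.
    diff_thresh: per-pixel intensity change counted as movement.
    pixel_frac: fraction of ROI pixels that must change to trigger.
    """
    r0, r1, c0, c1 = roi
    a = prev_frame[r0:r1, c0:c1].astype(np.int16)   # signed to avoid uint8 wrap
    b = frame[r0:r1, c0:c1].astype(np.int16)
    changed = np.abs(b - a) > diff_thresh
    return bool(changed.mean() > pixel_frac)

# Synthetic check: a small bright patch appears inside the ROI.
f0 = np.zeros((120, 160), dtype=np.uint8)
f1 = f0.copy()
f1[40:60, 50:70] = 200              # simulated mouth-opening change
roi = (30, 70, 40, 80)
print(roi_motion_switch(f0, f1, roi))   # movement detected -> True
print(roi_motion_switch(f0, f0, roi))   # identical frames  -> False
```

    In a real setup the frames would come from the USB camera (e.g., via OpenCV's `VideoCapture`), and a detected response would trigger the preferred stimulation.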

  5. Persons with multiple disabilities select environmental stimuli through a smile response monitored via camera-based technology.

    PubMed

    Lancioni, Giulio E; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N; O'Reilly, Mark F; Lang, Russell; Didden, Robert; Bosco, Andrea

    2011-01-01

    To assess whether two persons with multiple disabilities could use smile expressions and new camera-based microswitch technology to select environmental stimuli. Within each session, a computer system provided samples/reminders of preferred and non-preferred stimuli. The camera-based microswitch determined whether the participants produced smile expressions in relation to those samples. If they did, stimuli matching the specific samples to which they responded were presented for 20 seconds. The smile expression could be used profitably by the participants, who selected means of approximately 70% and 75% of the preferred stimulus opportunities made available by the environment while avoiding almost all the non-preferred stimulus opportunities. Smile expressions (a) might be an effective and rapid means for selecting preferred stimulation and (b) might develop into cognitively more elaborate forms of responding through the learning experience (i.e., their consistent association with positive/reinforcing consequences).

  6. Post-coma persons emerging from a minimally conscious state with multiple disabilities make technology-aided phone contacts with relevant partners.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Oliva, Doretta; Campodonico, Francesca; D'Amico, Fiora; Buonocunto, Francesca; Sacco, Valentina; Didden, Robert

    2013-10-01

    Post-coma individuals emerging from a minimally conscious state with multiple disabilities may enjoy contact with relevant partners (e.g., family members and friends), but may not have easy access to them. These two single-case studies assessed whether those individuals could make contact with partners through computer-aided telephone technology and enjoy such contact. The technology involved a computer system with special software, a global system for mobile communication modem (GSM), and microswitch devices. In Study I, the computer system presented a 23-year-old man the names of the partners that he could contact, one at a time, automatically. Together with each partner's name, the system also presented the voice of the partner asking the man whether he wanted to call him or her. The man could (a) place a call to that partner by activating a camera-based microswitch through mouth movements or (b) bypass that partner and wait for the next one to be presented. In Study II, the system presented a 36-year-old man the partners' names only after he had activated his wobble microswitch with a hand movement. The man could place a call or bypass a partner as in Study I. The results showed that both men (a) were able to contact relevant partners through the technology, (b) seemed to enjoy their telephone-mediated communication contacts with the partners, and (c) showed preferences among the partners. Implications of the findings are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.
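
    Neither study publishes its control software, but the interaction it describes — partners announced one at a time, with a single microswitch activation selecting the partner currently presented and inactivity bypassing to the next — is the classic single-switch scanning pattern. A minimal Python sketch of that pattern follows; all names and parameters are hypothetical.

```python
from typing import Iterable, List, Optional

def scan_and_select(partners: List[str], switch_events: Iterable[bool],
                    max_cycles: int = 2) -> Optional[str]:
    """Present partners one at a time; a switch activation selects the
    partner currently announced, no activation bypasses to the next.

    switch_events yields one boolean per presentation window:
    True = microswitch activated during that partner's announcement.
    Returns the selected partner, or None if every one was bypassed.
    """
    events = iter(switch_events)
    for _ in range(max_cycles):          # re-present the list a few times
        for partner in partners:
            activated = next(events, False)
            if activated:
                return partner           # a real system would place the call here
    return None

partners = ["mother", "brother", "friend"]
# Bypass the first two announcements, activate on the third.
print(scan_and_select(partners, [False, False, True]))  # -> friend
```

    In the actual studies the "switch event" came from a camera-based microswitch (mouth movements, Study I) or a wobble microswitch (hand movement, Study II), and selection dialed the partner through the GSM modem.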

  7. Technology-based intervention programs to promote stimulation control and communication in post-coma persons with different levels of disability

    PubMed Central

    Lancioni, Giulio E.; Bosco, Andrea; Olivetti Belardinelli, Marta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Oliva, Doretta

    2013-01-01

    Post-coma persons in a minimally conscious state and with extensive motor impairment or emerging/emerged from such a state, but affected by lack of speech and motor impairment, tend to be passive and isolated. A way to help them develop functional responding to control environmental events and communication involves the use of intervention programs relying on assistive technology. This paper provides an overview of technology-based intervention programs for enabling the participants to (a) access brief periods of stimulation through one or two microswitches, (b) pursue stimulation and social contact through the combination of a microswitch and a sensor connected to a speech generating device (SGD) or through two SGD-related sensors, (c) control stimulation options through computer or radio systems and a microswitch, (d) communicate through modified messaging or telephone systems operated via microswitch, and (e) control combinations of leisure and communication options through computer systems operated via microswitch. Twenty-six studies, involving a total of 52 participants, were included in this paper. The intervention programs were carried out using single-subject methodology, and their outcomes were generally considered positive from the standpoint of the participants and their context. Practical implications of the programs are discussed. PMID:24574992

  8. Post-Coma Persons Emerged from a Minimally Conscious State and Showing Multiple Disabilities Learn to Manage a Radio-Listening Activity

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Colonna, Fabio; Buonocunto, Francesca; Sacco, Valentina; Megna, Marisa; Oliva, Doretta

    2012-01-01

    This study assessed microswitch-based technology to enable three post-coma adults, who had emerged from a minimally conscious state but presented motor and communication disabilities, to operate a radio device. The material involved a modified radio device, a microprocessor-based electronic control unit, a personal microswitch, and an amplified…

  9. Post-coma persons with extensive multiple disabilities use microswitch technology to access selected stimulus events or operate a radio device.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Alberti, Gloria; Oliva, Doretta; Megna, Gianfranco; Iliceto, Carla; Damiani, Sabino; Ricci, Irene; Spica, Antonella

    2011-01-01

    The present two studies extended research evidence on the use of microswitch technology by post-coma persons with multiple disabilities. Specifically, Study I examined whether three adults with a diagnosis of minimally conscious state and multiple disabilities could use microswitches as tools to access brief, selected stimulus events. Study II assessed whether an adult, who had emerged from a minimally conscious state but was affected by multiple disabilities, could manage the use of a radio device via a microswitch-aided program. Results showed that the participants of Study I had a significant increase of microswitch responding during the intervention phases. The participant of Study II learned to change radio stations and seemed to spend different amounts of session time on the different stations available (suggesting preferences among the programs characterizing them). The importance of microswitch technology for assisting post-coma persons with multiple disabilities to positively engage with their environment was discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Post-Coma Persons with Extensive Multiple Disabilities Use Microswitch Technology to Access Selected Stimulus Events or Operate a Radio Device

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Alberti, Gloria; Oliva, Doretta; Megna, Gianfranco; Iliceto, Carla; Damiani, Sabino; Ricci, Irene; Spica, Antonella

    2011-01-01

    The present two studies extended research evidence on the use of microswitch technology by post-coma persons with multiple disabilities. Specifically, Study I examined whether three adults with a diagnosis of minimally conscious state and multiple disabilities could use microswitches as tools to access brief, selected stimulus events. Study II…

  11. Microswitch- and VOCA-Assisted Programs for Two Post-Coma Persons with Minimally Conscious State and Pervasive Motor Disabilities

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Buonocunto, Francesca; Sacco, Valentina; Colonna, Fabio; Navarro, Jorge; Oliva, Doretta; Signorino, Mario; Megna, Gianfranco

    2009-01-01

    Intervention programs, based on learning principles and assistive technology, were assessed in two studies with two post-coma men with minimally conscious state and pervasive motor disabilities. Study I assessed a program that included (a) an optic microswitch, activated via double blinking, which allowed a man direct access to brief music…

  12. Two Boys with Multiple Disabilities Increasing Adaptive Responding and Curbing Dystonic/Spastic Behavior via a Microswitch-Based Program

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Didden, Robert; Oliva, Doretta

    2009-01-01

    A recent study has shown that microswitch clusters (i.e., combinations of microswitches) and contingent stimulation could be used to increase adaptive responding and reduce dystonic/spastic behavior in two children with multiple disabilities [Lancioni, G. E., Singh, N. N., Oliva, D., Scalini, L., & Groeneweg, J. (2003). Microswitch clusters to…

  13. Simulation and characterization of a laterally-driven inertial micro-switch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Wenguo; Wang, Yang; Wang, Huiying

    2015-04-15

    A laterally-driven inertial micro-switch was designed and fabricated using surface micromachining technology. The dynamic response was simulated with ANSYS software, which revealed the vibration of the movable electrode when the proof mass is shocked by an acceleration in the sensitive direction. Tests of fabricated inertial micro-switches with and without anti-shock beams indicated that the contact process of the micro-switch with anti-shock beams is more reliable than that of the one without. Three contact signals were observed in the contact process of the inertial switch without anti-shock beams, but only one contact signal in the inertial switch with anti-shock beams, demonstrating that the anti-shock beams can effectively constrain vibration in the non-sensitive direction.

  14. Microswitch and Keyboard-Emulator Technology to Facilitate the Writing Performance of Persons with Extensive Motor Disabilities

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Green, Vanessa; Oliva, Doretta; Lang, Russell

    2011-01-01

    This study assessed the effectiveness of microswitches for simple responses (i.e., partial hand closure, vocalization, and hand stroking) and a keyboard emulator to facilitate the writing performance of three participants with extensive motor disabilities. The study was carried out according to an ABAB design. During the A phases, the participants…

  15. Fiber Optic Microswitch For Industrial Use

    NASA Astrophysics Data System (ADS)

    Desforges, F. X.; Jeunhomme, L. B.; Graindorge, Ph.; LeBoudec, G.

    1988-03-01

    Process control instrumentation is a large potential market for fiber optic sensors, and particularly for fiber optic microswitches. Such devices offer many advantages, including lighter cables, electromagnetic immunity, intrinsic safety due to optical measurement, and freedom from grounding problems. However, commercially available fiber optic microswitches exhibit high insertion losses as well as non-optimal mechanical design. These drawbacks stem from operating principles based on a mobile shutter displaced between two fibers. The fiber optic microswitch presented here has been specially designed for harsh environments (the oil industry). The patented operating principle uses only one fiber placed in front of a retroreflecting material by means of a fiber optic connector. The retroreflecting material allows an important reduction of the position tolerances required in two-fiber devices, as well as easier fabrication and potential mass production of the optical microswitch. Moreover, this configuration yields good performance in terms of reflection coefficient, leading to a large dynamic range and consequently large distances (up to 250 m) between the optical microswitch and its optoelectronic instrument. The optomechanical design of the microswitch as well as the electronic design of the optoelectronic instrument are examined and discussed.

  16. Post-Coma Persons with Motor and Communication/Consciousness Impairments Choose among Environmental Stimuli and Request Stimulus Repetitions via Assistive Technology

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Buonocunto, Francesca; Sacco, Valentina; Colonna, Fabio; Navarro, Jorge; Lanzilotti, Crocifissa; Oliva, Doretta; Megna, Gianfranco

    2010-01-01

    This study assessed whether a program based on microswitch and computer technology would enable three post-coma participants (adults) with motor and communication/consciousness impairments to choose among environmental stimuli and request their repetition whenever they so desired. Within each session, 16 stimuli (12 preferred and 4 non-preferred)…

  17. Post-coma persons emerged from a minimally conscious state and showing multiple disabilities learn to manage a radio-listening activity.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Colonna, Fabio; Buonocunto, Francesca; Sacco, Valentina; Megna, Marisa; Oliva, Doretta

    2012-01-01

    This study assessed microswitch-based technology to enable three post-coma adults, who had emerged from a minimally conscious state but presented motor and communication disabilities, to operate a radio device. The material involved a modified radio device, a microprocessor-based electronic control unit, a personal microswitch, and an amplified MP3 player. The study was carried out according to a non-concurrent multiple baseline design across participants. During the intervention, all three participants learned to operate the radio device, changing stations and staying tuned to some of them for longer amounts of time than to others (i.e., suggesting preferences among the topics covered by those stations). They also ended a number of sessions before the maximum length of time allowed had elapsed. The practical (rehabilitation) implications of the findings were discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Walker devices and microswitch technology to enhance assisted indoor ambulation by persons with multiple disabilities: three single-case studies.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Oliva, Doretta; Campodonico, Francesca; Buono, Serafino

    2013-07-01

    These three single-case studies assessed the use of walker devices and microswitch technology for promoting ambulation behavior among persons with multiple disabilities. The walker devices were equipped with support and weight lifting features. The microswitch technology ensured that brief stimulation followed the participants' ambulation responses. The participants were two children (i.e., Study I and Study II) and one man (i.e., Study III) with poor ambulation performance. The ambulation efforts of the child in Study I involved regular steps, while those of the child in Study II involved pushing responses (i.e., he pushed himself forward with both feet while sitting on the walker's saddle). The man involved in Study III combined his poor ambulation performance with problem behavior, such as shouting or slapping his face. The results were positive for all three participants. The first two participants had a large increase in the number of steps/pushes performed during the ambulation events provided and in the percentages of those events that they completed independently. The third participant improved his ambulation performance as well as his general behavior (i.e., had a decline in problem behavior and an increase in indices of happiness). The wide-ranging implications of the results are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Concepts, characterization, and modeling of MEMS microswitches with gold contacts in MUMPs

    NASA Astrophysics Data System (ADS)

    Lafontan, Xavier; Dufaza, Christian; Robert, Michel; Pressecq, Francis; Perez, Guy

    2001-04-01

    This paper demonstrates that RF MEMS micro-switches can be realized with a low-cost MEMS technology such as MUMPs. Two different switches are proposed, namely the hinged beam switch and the gold overflowing switch. Their concepts, design, and characterization are described in detail. On-resistances as low as 5-6 Ω for the gold overflowing switch and 2-3 Ω for the hinged beam switch have been measured. Finally, experimental measurements showed that force and electrical current have a strong influence on the overall electrical contact.

  20. Design and Optimization of a Stationary Electrode in a Vertically-Driven MEMS Inertial Switch for Extending Contact Duration

    PubMed Central

    Xu, Qiu; Yang, Zhuo-Qing; Fu, Bo; Bao, Yan-Ping; Wu, Hao; Sun, Yun-Na; Zhao, Meng-Yuan; Li, Jian; Ding, Gui-Fu; Zhao, Xiao-Lin

    2017-01-01

    A novel micro-electro-mechanical systems (MEMS) inertial microswitch with a flexible contact-enhanced structure to extend the contact duration is proposed in the present work. To investigate the stiffness k of the stationary electrodes, stationary electrodes with different shapes, thicknesses h, widths b, and lengths l were designed, analyzed, and simulated using ANSYS software. Both the analytical and the simulated results indicate that the stiffness k increases with thickness h and width b, decreases with increasing length l, and depends on the shape. Inertial micro-switches with different kinds of stationary electrodes were simulated using ANSYS software and fabricated using surface micromachining technology. The dynamic simulation indicates that the contact time decreases with increasing thickness h and width b, increases with length l, and likewise depends on the shape. As a result, the contact time decreases as the stiffness k of the stationary electrode increases. Furthermore, the simulated results reveal that the stiffness k changes more rapidly with h and l than with b. However, an overly large overall microswitch dimension conflicts with the expectation of a small footprint area, so it is unreasonable to extend the contact duration by increasing the length l excessively. The best and most convenient way to prolong the contact time is therefore to reduce the thickness h of the stationary electrode while keeping the planar geometry of the inertial micro-switch unchanged. Finally, the fabricated micro-switches with different shapes of stationary electrodes were evaluated with a standard dropping hammer system. The maximum measured contact time under 288 g acceleration reached 125 µs, and the test results accord with the simulated results. The conclusions obtained in this work can provide guidance for the future design and fabrication of inertial microswitches. PMID:28272330
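
    The paper's electrodes have several shapes, and its exact stiffness expressions are not reproduced in the abstract; for the simplest case, a fixed-free rectangular cantilever, the reported trends follow directly from the textbook bending-stiffness formula k = 3EI/l³ with I = bh³/12. A short Python sketch with purely illustrative material and dimensions:

```python
def cantilever_stiffness(E, b, h, l):
    """End-load bending stiffness of a fixed-free rectangular cantilever:
    k = 3*E*I / l**3 with I = b*h**3/12, i.e. k = E*b*h**3 / (4*l**3)."""
    return E * b * h**3 / (4.0 * l**3)

E = 100e9                     # Pa, illustrative electroplated-metal modulus
k0 = cantilever_stiffness(E, b=20e-6, h=10e-6, l=500e-6)

# The abstract's trends: k grows with h and b, falls with l ...
assert cantilever_stiffness(E, 20e-6, 12e-6, 500e-6) > k0   # thicker -> stiffer
assert cantilever_stiffness(E, 24e-6, 10e-6, 500e-6) > k0   # wider   -> stiffer
assert cantilever_stiffness(E, 20e-6, 10e-6, 600e-6) < k0   # longer  -> softer
# ... and k is more sensitive to h (cubic) than to b (linear):
assert (cantilever_stiffness(E, 20e-6, 11e-6, 500e-6) / k0
        > cantilever_stiffness(E, 22e-6, 10e-6, 500e-6) / k0)
```

    The cubic dependence on h also explains the paper's conclusion that thinning the stationary electrode is the most effective way to soften it (and thus lengthen the contact time) without enlarging the device footprint.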

  1. Design and Optimization of a Stationary Electrode in a Vertically-Driven MEMS Inertial Switch for Extending Contact Duration.

    PubMed

    Xu, Qiu; Yang, Zhuo-Qing; Fu, Bo; Bao, Yan-Ping; Wu, Hao; Sun, Yun-Na; Zhao, Meng-Yuan; Li, Jian; Ding, Gui-Fu; Zhao, Xiao-Lin

    2017-03-07

    A novel micro-electro-mechanical systems (MEMS) inertial microswitch with a flexible contact-enhanced structure to extend the contact duration is proposed in the present work. To investigate the stiffness k of the stationary electrodes, stationary electrodes with different shapes, thicknesses h, widths b, and lengths l were designed, analyzed, and simulated using ANSYS software. Both the analytical and the simulated results indicate that the stiffness k increases with thickness h and width b, decreases with increasing length l, and depends on the shape. Inertial micro-switches with different kinds of stationary electrodes were simulated using ANSYS software and fabricated using surface micromachining technology. The dynamic simulation indicates that the contact time decreases with increasing thickness h and width b, increases with length l, and likewise depends on the shape. As a result, the contact time decreases as the stiffness k of the stationary electrode increases. Furthermore, the simulated results reveal that the stiffness k changes more rapidly with h and l than with b. However, an overly large overall microswitch dimension conflicts with the expectation of a small footprint area, so it is unreasonable to extend the contact duration by increasing the length l excessively. The best and most convenient way to prolong the contact time is therefore to reduce the thickness h of the stationary electrode while keeping the planar geometry of the inertial micro-switch unchanged. Finally, the fabricated micro-switches with different shapes of stationary electrodes were evaluated with a standard dropping hammer system. The maximum measured contact time under 288 g acceleration reached 125 µs, and the test results accord with the simulated results. The conclusions obtained in this work can provide guidance for the future design and fabrication of inertial microswitches.

  2. A Microswitch-Based Program to Enable Students with Multiple Disabilities to Choose among Environmental Stimuli

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; O'Reilly, Mark F.; Singh, Nirbhay N.; Sigafoos, Jeff; Didden, Robert; Oliva, Doretta; Severini, Laura

    2006-01-01

    Students with multiple disabilities, such as severe to profound mental retardation combined with motor and visual impairment, are usually unable to engage in constructive activity or play a positive role in their daily context. Microswitches are technical tools that may help them improve their status by allowing them to control environmental…

  3. Microswitch and keyboard-emulator technology to facilitate the writing performance of persons with extensive motor disabilities.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Green, Vanessa; Oliva, Doretta; Lang, Russell

    2011-01-01

    This study assessed the effectiveness of microswitches for simple responses (i.e., partial hand closure, vocalization, and hand stroking) and a keyboard emulator to facilitate the writing performance of three participants with extensive motor disabilities. The study was carried out according to an ABAB design. During the A phases, the participants (one child and two adults) were to write using the responses and technology available to them prior to this study. During the B phases, they used the new responses and technology. Data showed that two of the three participants had a faster writing performance during the B phases while the third participant had a slower writing performance. All three participants indicated a clear preference for the use of the new responses and technology, which were considered relatively easy and comfortable to manage and did not seem to cause any specific signs of tiredness. Implications of the findings are discussed. Copyright © 2010 Elsevier Ltd. All rights reserved.

  4. A learning setup for a post-coma adolescent with profound multiple disabilities involving small forehead movements and new microswitch technology.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Didden, Robert; Oliva, Doretta; Calzolari, Cinzia; Montironi, Gianluigi

    2007-09-01

    A learning setup was arranged for an adolescent with profound multiple disabilities and a diagnosis of vegetative state. Signs of learning by the adolescent would underline an improvement in his immediate situation with potential implications for his general prospect, and could help revise his diagnosis. The response adopted in the learning setup was forehead skin movement. The microswitch technology used for detecting such a response consisted of (a) an optic sensor (i.e., barcode reader), (b) a small tag with horizontal bars attached to the participant's forehead, and (c) an electronic control system that activated stimuli in relation to the participant's forehead responses. The study followed an ABABACAB sequence, in which A represented baseline periods, B intervention periods with stimuli contingent on the response, and C a control condition with stimuli presented non-contingently. Data showed that the level of responding during the B phases was significantly higher than the levels observed during the A phases as well as the C phase, indicating clear signs of learning. Intervention strategies based on a learning format and suitable technology might be useful to improve the situation and prospect of persons with profound multiple disabilities and a diagnosis of vegetative state.

  5. Entrepreneur Grows Microswitch Company

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czaja, Danny; Christenson, Todd

    2014-10-24

    Todd Christenson took advantage of Sandia National Laboratories’ Entrepreneurial Separation to Transfer Technology (ESTT) program to start HT MicroAnalytical (HT Micro) in 2003 in order to apply his specialized expertise in high aspect ratio microfabrication (HARM) technology gained while at Sandia to the creation of the world’s smallest electromechanical switches.

  6. Entrepreneur Grows Microswitch Company

    ScienceCinema

    Czaja, Danny; Christenson, Todd

    2018-05-30

    Todd Christenson took advantage of Sandia National Laboratories’ Entrepreneurial Separation to Transfer Technology (ESTT) program to start HT MicroAnalytical (HT Micro) in 2003 in order to apply his specialized expertise in high aspect ratio microfabrication (HARM) technology gained while at Sandia to the creation of the world’s smallest electromechanical switches.

  7. A Microswitch-Cluster Program to Foster Adaptive Responses and Head Control in Students with Multiple Disabilities: Replication and Validation Assessment

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Oliva, Doretta; Gatti, Michela; Manfredi, Francesco; Megna, Gianfranco; La Martire, Maria L.; Tota, Alessia; Smaldone, Angela; Groeneweg, Jop

    2008-01-01

    A program relying on microswitch clusters (i.e., combinations of microswitches) and preferred stimuli was recently developed to foster adaptive responses and head control in persons with multiple disabilities. In the last version of this program, preferred stimuli (a) are scheduled for adaptive responses occurring in combination with head control…

  8. Physical mechanisms for reduction of the breakdown voltage in the circuit of a rod lightning protector with an opening microswitch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobrov, Yu. K.; Zhuravkov, I. V.; Ostapenko, E. I.

    2010-12-15

    The effect of air-gap breakdown voltage reduction in a circuit with an opening microswitch is substantiated from the physical point of view. This effect can be used to increase the efficiency of a lightning protection system with a rod lightning protector. The processes which take place in the electric circuit of a lightning protector with a microswitch during a voltage breakdown are investigated. Openings of the microswitch are shown to lead to resonance overvoltages in the dc circuit and, as a result, an efficient reduction in the breakdown voltage in the lightning protector-thundercloud air gap.

  9. A learning assessment procedure to re-evaluate three persons with a diagnosis of post-coma vegetative state and pervasive motor impairment.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; de Tommaso, Marina; Megna, Gianfranco; Bosco, Andrea; Buonocunto, Francesca; Sacco, Valentina; Chiapparino, Claudia

    2009-02-01

    Detecting signs of learning in persons with a diagnosis of post-coma vegetative state and profound motor disabilities could modify their diagnostic label and provide new hope. In this study, three adults with such a diagnosis underwent learning assessment to search for those signs. The assessment procedure relied on the participants' eye-blinking responses and microswitch-based technology. The technology consisted of an electronically regulated optic microswitch mounted on an eyeglass frame that the participants wore during the study and an electronic control system connected to stimulus sources. Each participant followed an ABABCB design, in which A represented baseline periods, B intervention periods with stimuli contingent on the responses, and C a control condition with stimuli presented non-contingently. The level of responding during the B phases was significantly higher than during the A phases as well as the C phase for all participants (i.e., indicating clear signs of learning). These findings may have important implications for (a) changing the participants' diagnostic label and offering them new programme opportunities and (b) including learning assessment within the evaluation package used for persons with post-coma profound multiple disabilities.

  10. Children with Multiple Disabilities and Minimal Motor Behavior Using Chin Movements to Operate Microswitches to Obtain Environmental Stimulation

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; O'Reilly, Mark F.; Singh, Nirbhay N.; Sigafoos, Jeff; Tota, Alessia; Antonucci, Massimo; Oliva, Doretta

    2006-01-01

    In these two studies, two children with multiple disabilities and minimal motor behavior were assessed to see if they could use chin movements to operate microswitches to obtain environmental stimulation. In Study I, we applied an adapted version of a recently introduced electronic microswitch [Lancioni, G. E., O'Reilly, M. F., Singh, N. N.,…

  11. Technology-aided leisure and communication opportunities for two post-coma persons emerged from a minimally conscious state and affected by multiple disabilities.

    PubMed

    Lancioni, Giulio E; O'Reilly, Mark F; Singh, Nirbhay N; Sigafoos, Jeff; Buonocunto, Francesca; Sacco, Valentina; Navarro, Jorge; Lanzilotti, Crocifissa; De Tommaso, Marina; Megna, Marisa; Oliva, Doretta

    2013-02-01

    This study assessed technology-aided programs for helping two post-coma persons, who had emerged from a minimally conscious state and were affected by multiple disabilities, to (a) engage with leisure stimuli and request caregiver procedures, (b) send and listen to text messages for communication with distant partners, and (c) combine leisure engagement and procedure requests with text messaging within the same sessions. The program for leisure engagement and procedure requests relied on the use of a portable computer with commercial software, and a microswitch for the participants' response. The program for text messaging communication involved the use of a portable computer, a GSM modem, a microswitch for the participants' response, and specifically developed software. Results indicated that the participants were successful at each of the three stages of the study, thus providing relevant evidence on performance achievements that were previously only minimally documented. The implications of the findings in terms of technology and practical opportunities for post-coma persons with multiple disabilities are discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Technique for microswitch manufacture

    NASA Technical Reports Server (NTRS)

    Kitamura, T.; Kiyoyama, S.

    1983-01-01

    A five-step technique for microswitch manufacture is described: (1) A clad board is inlaid with a precious metal and the board is pressed. (2) One end of the fixed contact containing a precious metal inlay section is curved, and this edge of the precious metal inlay section becomes a fixed contact. (3) Inserts are formed in the unit body and terminal strips are placed through the top and bottom of the base and held. (4) The unit body is held by the base and the sequential contact strips are cut off. (5) Movable strips are attached to the support of the terminal strips on the movable side and movable contacts are placed opposite the fixed contacts.

  13. A Low-G Silicon Inertial Micro-Switch with Enhanced Contact Effect Using Squeeze-Film Damping.

    PubMed

    Peng, Yingchun; Wen, Zhiyu; Li, Dongling; Shang, Zhengguo

    2017-02-16

    Contact time is one of the most important properties of inertial micro-switches. However, it is usually less than 20 μs for a switch with a rigid electrode, which makes the closure difficult for the external circuit to recognize. This issue is traditionally addressed by designing the switch with a keep-close function or with a flexible electrode. However, a switch with a keep-close function requires an additional operation to re-open itself, which is inconvenient for applications in which repeated monitoring is needed. A switch with a flexible electrode is usually fabricated by electroplating, and low-g switches (<50 g) are difficult to realize in this way due to inherent fabrication errors. This paper reports contact enhancement using the squeeze-film damping effect for low-g switches. A vertically driven switch with a large proof mass and flexible springs was designed based on silicon micromachining, in order to achieve a damping ratio of 2 and a threshold value of 10 g. The proposed contact enhancement was investigated theoretically and experimentally. The results show that the damping effect not only prolongs the contact time under a dynamic acceleration load, but also reduces contact bounce under a quasi-static acceleration load. The contact times under dynamic and quasi-static loads were 40 μs and 570 μs, respectively.
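    The bounce-reduction effect under a quasi-static load can be illustrated with a minimal single-degree-of-freedom sketch (all parameters here are invented; the paper's actual device model is more detailed). A lightly damped proof mass repeatedly bounces off the contact gap, while a heavily damped one (damping ratio 2, as in the paper) closes the gap once and stays closed:

    ```python
    import math

    def bounce_count(zeta, omega=2 * math.pi * 1000.0, accel=98.1,
                     gap_frac=0.8, t_end=0.02, dt=1e-7):
        """Count separate contact intervals for a spring-mass switch model
        under a step (quasi-static) acceleration:
            x'' + 2*zeta*omega*x' + omega**2 * x = accel
        Contact occurs when x exceeds the gap, taken here as a fraction of
        the static deflection accel/omega**2 so the load closes the switch."""
        gap = gap_frac * accel / omega**2
        x = v = 0.0
        in_contact = False
        contacts = 0
        t = 0.0
        while t < t_end:
            a = accel - 2 * zeta * omega * v - omega**2 * x
            v += a * dt          # semi-implicit Euler: stable for oscillators
            x += v * dt
            if x >= gap and not in_contact:
                contacts += 1    # a new contact interval begins
                in_contact = True
            elif x < gap:
                in_contact = False
            t += dt
        return contacts

    light = bounce_count(zeta=0.05)  # underdamped: repeated contact bounce
    heavy = bounce_count(zeta=2.0)   # squeeze-film damped: one clean closure
    print(light, heavy)
    ```

    The overdamped response rises monotonically to its static deflection, so it crosses the gap exactly once, whereas the underdamped mass oscillates above and below the gap before settling.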

  14. Teaching Individuals with Profound Multiple Disabilities to Access Preferred Stimuli with Multiple Microswitches

    ERIC Educational Resources Information Center

    Tam, Gee May; Phillips, Katrina J.; Mudford, Oliver C.

    2011-01-01

    We replicated and extended previous research on microswitch facilitated choice making by individuals with profound multiple disabilities. Following an assessment of stimulus preferences, we taught 6 adults with profound multiple disabilities to emit 2 different responses to activate highly preferred stimuli. All participants learnt to activate…

  15. Microswitch-aided programs to support physical exercise or adequate ambulation in persons with multiple disabilities.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Alberti, Gloria; Perilli, Viviana; Oliva, Doretta; Buono, Serafino

    2014-09-01

    Three microswitch-aided programs were assessed in three single-case studies to enhance physical exercise or ambulation in participants with multiple disabilities. Study I was aimed at helping a woman, who tended to hold her head bent forward and her arms down, to exercise a combination of appropriate head and arm movements. Study II was aimed at promoting ambulation continuity with a man who tended to have ambulation breaks. Study III was aimed at promoting ambulation with appropriate foot position in a girl who usually showed toe walking. The experimental designs of the studies consisted of a multiple probe across responses (Study I), an ABAB sequence (Study II), and an ABABB(1) sequence (Study III). The last phase of each study was followed by a post-intervention check. The microswitches monitored the target responses selected for the participants and triggered a computer system to provide preferred stimuli contingent on those responses during the intervention phases of the studies. Data showed that the programs were effective with each of the participants, who learned to exercise head and arm movements, increased ambulation continuity, and acquired high levels of appropriate foot position during ambulation, respectively. The positive performance levels were retained during the post-intervention checks. The discussion focused on (a) the potential of technology-aided programs for persons with multiple disabilities and (b) the need for replication studies to extend the evidence available in the area. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Soldering iron temperature is automatically reduced

    NASA Technical Reports Server (NTRS)

    Lum, J. Y.

    1966-01-01

    Hinged cradle-microswitch arrangement maintains a soldering iron at less than peak temperature when not in use. The microswitch introduces a voltage reducing element into the soldering iron power circuit when the iron is placed on the cradle. The iron, when removed from the cradle, returns to operating temperature in 15 to 30 seconds.

  17. On the Nonlinear Dynamics of a Tunable Shock Micro-switch

    NASA Astrophysics Data System (ADS)

    Azizi, Saber; Javaheri, Hamid; Ghanati, Parisa

    2016-12-01

    A tunable shock micro-switch based on piezoelectric excitation is proposed in this study. The model comprises a clamped-clamped micro-beam sandwiched between two piezoelectric layers that run the entire length of the beam. Actuation of the piezoelectric layers via a DC voltage induces an initial axial force in the micro-beam and directly affects its overall bending stiffness, thereby enabling two-sided tuning of both the trigger time and the threshold shock. The governing equation of motion, in the presence of an electrostatic actuation and a shock wave, is derived using Hamilton's principle. We employ the finite element method based on the Galerkin technique to obtain the temporal and phase responses subjected to three different shock waves: half-sine, triangular and rectangular. Subsequently, we investigate the effect of the piezoelectric excitation on the threshold shock amplitude and the trigger time.

  18. Two Persons with Multiple Disabilities Use a Mouth-Drying Response to Reduce the Effects of Their Drooling

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Oliva, Doretta; Smaldone, Angela; La Martire, Maria L.

    2009-01-01

    These two studies involved a boy and a man with multiple disabilities, who were taught to use a mouth-drying response to reduce the effects of their drooling. Both studies relied on microswitch technology to monitor the drying response and follow it with positive stimulation (i.e., during intervention). In Study I, the boy performed the drying…

  19. Promoting Mouth-Drying Responses to Reduce Drooling Effects by Persons with Intellectual and Multiple Disabilities: A Study of Two Cases

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Oliva, Doretta; Smaldone, Angela; La Martire, Maria L.; Pichierri, Sabrina; Groeneweg, Jop

    2011-01-01

    This study assessed the use of microswitch technology to promote mouth-drying responses and thereby reduce the effects of drooling by two adults with severe intellectual and multiple disabilities. Mouth-drying responses were performed via a special napkin that contained pressure sensors, a microprocessor and an MP3 to monitor the responses and…

  20. A Computer System Serving as a Microswitch for Vocal Utterances of Persons with Multiple Disabilities: Two Case Evaluations. Research Report

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Oliva, Doretta; Montironi, Gianluigi

    2004-01-01

    The use of microswitches has been considered a crucial strategy to help individuals with extensive multiple disabilities overcome passivity and achieve control of environmental stimulation (Crawford & Schuster, 1993; Gutowski, 1996; Ko, McConachie, & Jolleff, 1998). In recent years, considerable efforts have been made to extend the evaluation of…

  1. Fostering Locomotor Behavior of Children with Developmental Disabilities: An Overview of Studies Using Treadmills and Walkers with Microswitches

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Didden, Robert; Manfredi, Francesco; Putignano, Pietro; Stasolla, Fabrizio; Basili, Gabriella

    2009-01-01

    This paper provides an overview of studies using programs with treadmills or walkers with microswitches and contingent stimulation to foster locomotor behavior of children with developmental disabilities. Twenty-six studies were identified in the period 2000-2008 (i.e., the period in which research in this area has actually taken shape).…

  2. RF MEMS microswitches design and characterization

    NASA Astrophysics Data System (ADS)

    Lafontan, Xavier; Dufaza, Christian; Robert, Michel; Perez, Guy; Pressecq, Francis

    2000-08-01

    This paper presents work performed in MUMPs on RF MEMS micro-switches. Concepts, design and characterization of the switches are studied. The study particularly focuses on the characterization and modeling of the electrical contact resistance. The switches developed use two different principles: overflowed gold and hinged beam. The realized contacts exhibited high on-resistance (~20 Ω) due to nanoscopic asperities of the contacts and insulating interfacial films. Results of a typical contact-cleaning method are also presented.

  3. Smart Materials for Electromagnetic and Optical Applications

    NASA Astrophysics Data System (ADS)

    Ramesh, Prashanth

    The research presented in this dissertation focuses on the development of solid-state materials that have the ability to sense, act, think and communicate. Two broad classes of materials, namely ferroelectrics and wide-bandgap semiconductors, were investigated for this purpose. Ferroelectrics possess coupled electromechanical behavior which makes them sensitive to mechanical strains and fluctuations in ambient temperature. Use of ferroelectrics in antenna structures, especially those subject to mechanical and thermal loads, requires knowledge of the phenomenological relationship between the ferroelectric properties of interest (especially dielectric permittivity) and the external physical variables, viz. electric field(s), mechanical strains and temperature. To this end, a phenomenological model of ferroelectric materials based on the Devonshire thermodynamic theory was developed. This model was then used to obtain a relationship expressing the dependence of the dielectric permittivity on the mechanical strain, applied electric field and ambient temperature. The relationship is shown to compare well with published experimental data and other related models in the literature. A model relating ferroelectric loss tangent to the applied electric field and temperature is also discussed. Subsequently, relationships expressing the dependence of antenna operating frequency and radiation efficiency on those external physical quantities are described. These relationships demonstrate the tunability of load-bearing antenna structures that integrate ferroelectrics when they are subjected to mechanical and thermal loads. In order to address the inability of ferroelectrics to integrate microelectronic devices, a feature needed in a material capable of sensing, acting, thinking and communicating, the material Gallium Nitride (GaN) is pursued next. There is an increasing utilization of GaN in the area of microelectronics due to the advantages it offers over other semiconductors.
This dissertation demonstrates GaN as a candidate material well suited for novel microelectromechanical systems. The potential of GaN for MEMS is demonstrated via the design, analysis, fabrication, testing and characterization of an optical microswitch device actuated by piezoelectric and electrostrictive means. The piezoelectric and electrostrictive properties of GaN and its differences from common piezoelectrics are discussed before elaborating on the device configuration used to implement the microswitch device. Next, the development of two recent fabrication technologies, Photoelectrochemical etch and Bias-enabled Dark Electrochemical etch, used to realize the 3-dimensional device structure in GaN are described in detail. Finally, an ultra-low-cost, laser-based, non-contact approach to test and characterize the microswitch device is described, followed by the device testing results.

  4. Transient Region Coverage in the Propulsion IVHM Technology Experiment

    NASA Technical Reports Server (NTRS)

    Balaban, Edward; Sweet, Adam; Bajwa, Anupa; Maul, William; Fulton, Chris; Chicatelli, Amy

    2004-01-01

    Over the last several years researchers at NASA Glenn and Ames Research Centers have developed a real-time fault detection and isolation system for propulsion subsystems of future space vehicles. The Propulsion IVHM Technology Experiment (PITEX), as it is called, follows the model-based diagnostic methodology and employs Livingstone, developed at NASA Ames, as its reasoning engine. The system has been tested on flight-like hardware through a series of nominal and fault scenarios. These scenarios have been developed using a highly detailed simulation of the X-34 flight demonstrator main propulsion system and include realistic failures involving valves, regulators, microswitches, and sensors. This paper focuses on one of the recent research and development efforts under PITEX: providing more complete transient region coverage. It describes the development of the transient monitors, the corresponding modeling methodology, and the interface software responsible for coordinating the flow of information between the quantitative monitors and the qualitative, discrete representation employed by Livingstone.

  5. Helping people in a minimally conscious state develop responding and stimulation control through a microswitch-aided program.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; D'Amico, Fiora; Buonocunto, Francesca; Navarro, Jorge; Lanzilotti, Crocifissa; Fiore, Pietro; Megna, Marisa; Damiani, Sabino; Marvulli, Riccardo

    2017-06-01

    Postcoma persons in a minimally conscious state (MCS) with extensive motor impairment cannot independently access and control environmental stimulation. The study aimed to assess the effects of a microswitch-aided program for helping MCS persons develop responding and stimulation control, and to conduct a social validation/evaluation of the program. A single-subject ABAB design was used for each participant to determine the impact of the program on his or her responding, and staff interviews were used for the social validation/evaluation. The setting consisted of the rehabilitation and care facilities that the participants attended. The participants were eleven MCS persons with extensive motor impairment and no speech or other functional communication. For each participant, baseline (A) phases were alternated with intervention (B) phases during which the program was used. The program relied on microswitches to monitor participants' specific responses (e.g., prolonged eyelid closures) and on a computer system that enabled those responses to control stimulation. In practice, a participant could use a simple response such as prolonged eyelid closure to generate a new stimulation input. Sixty-six staff members took part in the social validation of the program. They were asked to compare the program to basic and elaborate forms of externally controlled stimulation, scoring each on a six-item questionnaire. All participants showed increased response frequencies (and thus higher levels of independent stimulation input/control) during the B phases of the study. Their frequencies in each intervention phase more than doubled those of the preceding baseline phase, with the difference between the two being clearly significant (P<0.01). Staff involved in the social validation procedure provided significantly higher scores (P<0.01) for the program on five of the six questionnaire items. A microswitch-aided program can thus be an effective and socially acceptable tool in work with MCS persons. The participant and staff data can be taken as encouragement for the use of microswitch-aided programs within care and rehabilitation settings for MCS persons.

  6. Fostering locomotor behavior of children with developmental disabilities: An overview of studies using treadmills and walkers with microswitches.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Didden, Robert; Manfredi, Francesco; Putignano, Pietro; Stasolla, Fabrizio; Basili, Gabriella

    2009-01-01

    This paper provides an overview of studies using programs with treadmills or walkers with microswitches and contingent stimulation to foster locomotor behavior of children with developmental disabilities. Twenty-six studies were identified in the period 2000-2008 (i.e., the period in which research in this area has actually taken shape). Twenty-one of the studies involved the use of treadmills (i.e., 13 were aimed at children with cerebral palsy, 6 at children with Down syndrome, and 2 at children with Rett syndrome or cerebellar ataxia). The remaining five studies concerned the use of walkers with microswitches and contingent stimulation with children with multiple disabilities. The outcomes of the studies tended to be positive, but occasional failures also occurred. The outcomes were analyzed considering the characteristics of the approaches employed, the implications of the approaches for the participants' overall functioning situation (development), as well as methodological and practical aspects related to those approaches. Issues for future research were also examined.

  7. Intelligent Therapeutics and Metabolic Programming Through Tailormade, Ligand-Controlled RNA Switches

    DTIC Science & Technology

    2007-02-05

    lines. Three regulatory mechanisms have been examined in our laboratory: antisense inhibition, ribozyme cleavage, and RNA interference (RNAi...cell lines. However, the latter two regulatory mechanisms, ribozyme -based inactivation and RNAi-mediated silencing, demonstrated significant activity...in these cell lines as is briefly described below. Microswitches responsive to the small molecule theophylline and targeting GFP based on a ribozyme

  8. A basic technology-aided programme for leisure and communication of persons with advanced amyotrophic lateral sclerosis: performance and social rating.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; D'Amico, Fiora; Ferlisi, Gabriele; Zullo, Valeria; Denitto, Floriana; Lauta, Enrico; Abbinante, Crescenza; Pesce, Caterina V

    2017-02-01

    This study assessed (a) the impact of a technology-aided programme on the leisure and communication engagement of persons with advanced amyotrophic lateral sclerosis (ALS) and (b) the opinion of rehabilitation and care personnel regarding the programme. The programme's impact was assessed with four participants who were allowed to activate leisure and communication options through basic responses (e.g. knee, finger or lip movements) and microswitches. Forty-two care and health professionals rated the programme after watching video clips of persons with ALS (three of the four involved in this study and three involved in previous studies) during and outside of the programme. The programme was effective with all participants. Their mean percentages of session time with independently initiated leisure and communication engagements were zero during baseline and increased to between nearly 70% and 80% during the intervention. The care and health professionals rated the technology-aided programme as beneficial for the participants' positive engagement and social image, fairly practical for daily contexts and interesting from a personal standpoint. The programme might be viewed as a viable resource for persons with advanced ALS. Implications for Rehabilitation: A programme characterised by versatility, simplicity and relatively low cost could be considered practically relevant for persons with ALS and their contexts. A programme that is effective in fostering participants' independent leisure and communication engagement and is positively rated by care and rehabilitation personnel is more likely to be accepted and used with consistency. Any programme directed at persons affected by ALS needs to be adapted to the persons' progressive deterioration, starting from the response and microswitch used for accessing the programme's options.

  9. An optical microswitch chip integrated with silicon waveguides and touch-down electrostatic micromirrors

    NASA Astrophysics Data System (ADS)

    Jin, Young-Hyun; Seo, Kyoung-Sun; Cho, Young-Ho; Lee, Sang-Shin; Song, Ki-Chang; Bu, Jong-Uk

    2004-12-01

    We present a silicon-on-insulator (SOI) optical microswitch composed of silicon waveguides and electrostatically actuated, gold-coated silicon micromirrors integrated with laser diode (LD) transmitters and photodiode (PD) receivers. For a low switching voltage, we modify the conventional curved-electrode microactuator into a new microactuator with touch-down beams. We fabricate the waveguides and the actuated micromirror using an inductively coupled plasma (ICP) etching process on SOI wafers. The fabricated microswitch operates at a switching voltage of 31.7 ± 4 V with a resonant frequency of 6.89 kHz. Compared to the conventional microactuator, the touch-down beam microactuator achieves a 77.4% reduction of the switching voltage. We observe single-mode wave propagation through the silicon waveguide with a measured micromirror loss of 4.18 ± 0.25 dB. We discuss a feasible method to achieve a switching voltage lower than 10 V by reducing the residual stress in the insulation layers of the touch-down beams to the level of 30 MPa. We also analyze the major source of micromirror loss, thereby presenting design guidelines for low-loss micromirror switches.
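    As a quick arithmetic check (assuming the reported 77.4% reduction is measured relative to the conventional design's switching voltage), the figures above imply a conventional switching voltage of roughly 140 V:

    ```python
    # Sanity check of the reported switching-voltage reduction.
    # Assumption: 31.7 V is 77.4% lower than the conventional actuator's voltage.
    v_touch_down = 31.7           # measured switching voltage, volts
    reduction = 0.774             # reported fractional reduction
    v_conventional = v_touch_down / (1.0 - reduction)
    print(round(v_conventional, 1))  # ≈ 140.3 V implied for the conventional design
    ```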

  10. Memory switches based on metal oxide thin films

    NASA Technical Reports Server (NTRS)

    Ramesham, Rajeshuni (Inventor); Thakoor, Anilkumar P. (Inventor); Lambe, John J. (Inventor)

    1990-01-01

    MnO₂₋ₓ thin films (12) exhibit irreversible memory switching (28) with an OFF/ON resistance ratio of at least about 10³ and tailorability of the ON-state (20) resistance. Such films are potentially extremely useful as connection elements in a variety of microelectronic circuits and arrays (24). They provide a pre-tailored, finite, non-volatile resistive element at a desired place in an electric circuit, which can be electrically turned OFF (22) or disconnected as desired by application of an electrical pulse. Microswitch structures (10) comprise the thin film element, contacted by a pair of separate electrodes (16a, 16b), and have a finite, pre-selected ON resistance which is ideally suited, for example, as a programmable binary synaptic connection for electronic implementation of neural network architectures. The MnO₂₋ₓ microswitch is non-volatile, patternable, insensitive to ultraviolet light, and adherent to a variety of insulating substrates (14), such as glass and silicon dioxide-coated silicon substrates.

  11. A microswitch program to foster simple foot and leg movements in adult wheelchair users with multiple disabilities.

    PubMed

    Lancioni, Giulio E; O'Reilly, Mark F; Singh, Nirbhay N; Campodonico, Francesca; Marziani, Monia; Oliva, Doretta

    2004-01-01

    This study assessed a microswitch program to foster simple foot and leg movements in 2 adult wheelchair users with multiple disabilities. The participants' mood (indices of happiness) was recorded throughout the study. Data showed that participants rapidly increased the target foot and leg movements and maintained those movements during the course of the study, which lasted about 4.5 months. With regard to indices of happiness, 1 participant showed a fairly modest increase during the intervention while the other participant showed a substantial increase. Implications of the findings are discussed.

  12. Assistive technology for promoting adaptive skills of children with cerebral palsy: ten cases evaluation.

    PubMed

    Stasolla, Fabrizio; Caffò, Alessandro O; Perilli, Viviana; Boccasini, Adele; Damiani, Rita; D'Amico, Fiora

    2018-05-06

    To extend the use of assistive technology for promoting adaptive skills of children with cerebral palsy, to assess its effects on the positive participation of the ten participants involved, and to carry out a social validation recruiting parents, physiotherapists and support teachers as external raters. A multiple probe design was implemented for Studies I and II. Study I involved five participants exposed to a combined program aimed at enhancing the process of choosing preferred items and locomotion fluency. Study II involved five further children in a combined intervention aimed at providing them with literacy access and ambulation responses. Study III recruited 60 external raters for a social validation assessment. All participants improved their performance, although differences among children occurred. Indices of positive participation increased as well. Social raters favorably scored the use of both the technology and the programs. Assistive technology-based programs were effective in promoting the independence of children with cerebral palsy. Implications for Rehabilitation: A basic form of assistive technology such as a microswitch-based program may be useful and helpful for supporting adaptive skills of children with cerebral palsy and different levels of functioning. The same program may improve the participants' indices of positive participation and constructive engagement, with beneficial effects on their quality of life. The positive social rating provided by external experts sensitive to the matter may recommend a favorable acceptance and implementation of the program in daily settings.

  13. Development of camera technology for monitoring nests. Chapter 15

    Treesearch

    W. Andrew Cox; M. Shane Pruett; Thomas J. Benson; Scott J. Chiavacci; Frank R. Thompson III

    2012-01-01

    Photo and video technology has become increasingly useful in the study of avian nesting ecology. However, researchers interested in using camera systems are often faced with insufficient information on the types and relative advantages of available technologies. We reviewed the literature for studies of nests that used cameras and summarized them based on study...

  14. Promoting step responses of children with multiple disabilities through a walker device and microswitches with contingent stimuli.

    PubMed

    Lancioni, G E; De Pace, C; Singh, N N; O'Reilly, M F; Sigafoos, J; Didden, R

    2008-08-01

    Children with severe or profound intellectual and motor disabilities often present problems of balance and locomotion and spend much of their time sitting or lying, with negative consequences for their development and social image. This study provides a replication of recent (pilot) studies using a walker (support) device and microswitches with preferred stimuli to promote locomotion in two children with multiple disabilities. One child used an ABAB design; the other only an AB sequence. Both succeeded in increasing their frequencies of step responses during the B (intervention) phase(s). These findings support the positive evidence already available on the effectiveness of this intervention in motivating and promoting children's locomotion.

  15. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  16. Impact of New Camera Technologies on Discoveries in Cell Biology.

    PubMed

    Stuurman, Nico; Vale, Ronald D

    2016-08-01

    New technologies can make previously invisible phenomena visible. Nowhere is this more obvious than in the field of light microscopy. Beginning with the observation of "animalcules" by Antonie van Leeuwenhoek, who figured out how to achieve high magnification by shaping lenses, microscopy has advanced to this day through a continued march of discoveries driven by technical innovations. Recent advances in single-molecule-based technologies have achieved unprecedented resolution, and were the basis of the Nobel Prize in Chemistry in 2014. In this article, we focus on developments in camera technologies and associated image processing that have been a major driver of technical innovations in light microscopy. We describe five types of developments in camera technology: video-based analog contrast enhancement, charge-coupled devices (CCDs), intensified sensors, electron multiplying gain, and scientific complementary metal-oxide-semiconductor cameras, which, together, have had major impacts in light microscopy. © 2016 Marine Biological Laboratory.

  17. Case studies of technology for adults with multiple disabilities to make telephone calls independently.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Boccasini, Adele; La Martire, Maria L; Lang, Russell

    2014-08-01

    Recent literature has shown the possibility of enabling individuals with multiple disabilities to make telephone calls independently via computer-aided telephone technology. These two case studies assessed a modified version of such technology and a commercial alternative to it for a woman and a man with multiple disabilities, respectively. The modified version used in Study 1 (a) presented the names of the persons available for a call and (b) reminded the participant of the response she needed to perform (i.e., pressing a microswitch) if she wanted to call any of those names/persons. The commercial device used in Study 2 was a Galaxy S3 (Samsung) equipped with the S-voice module, which allowed the participant to activate phone calls by uttering the word "Call" followed by the name of the person he wanted to call. The results of the studies showed that the participants learned to make phone calls independently using the technology/device available. Implications of the results are discussed.

  18. Utilising the Intel RealSense Camera for Measuring Health Outcomes in Clinical Research.

    PubMed

    Siena, Francesco Luke; Byrom, Bill; Watts, Paul; Breedon, Philip

    2018-02-05

    Applications utilising 3D camera technologies for the measurement of health outcomes in the health and wellness sector continue to expand. The Intel® RealSense™ is one of the leading 3D depth-sensing cameras currently available on the market and lends itself to use in many applications, including robotics, automation, and medical systems. One of the most prominent areas is the production of interactive solutions for rehabilitation, which includes gait analysis and facial tracking. Advancements in depth camera technology have resulted in a noticeable increase in the integration of these technologies into portable platforms, suggesting significant future potential for pervasive in-clinic and field-based health assessment solutions. This paper reviews the Intel RealSense technology's technical capabilities, discusses its application to clinical research, and includes examples where the Intel RealSense camera range has been used for the measurement of health outcomes. This review supports the use of the technology to develop robust, objective movement- and mobility-based endpoints to enable accurate tracking of the effects of treatment interventions in clinical trials.

  19. Effects of ambient stimuli on measures of behavioral state and microswitch use in adults with profound multiple impairments.

    PubMed

    Murphy, Kathleen M; Saunders, Muriel D; Saunders, Richard R; Olswang, Lesley B

    2004-01-01

    The effects of different types and amounts of environmental stimuli (visual and auditory) on microswitch use and behavioral states of three individuals with profound multiple impairments were examined. The individuals' switch use and behavioral states were measured under three setting conditions: natural stimuli (typical visual and auditory stimuli in a recreational situation), reduced visual stimuli, and reduced visual and auditory stimuli. Results demonstrated differential switch use in all participants across the varying environmental setting conditions. No consistent effects of environmental condition on behavioral state were observed. Predominant behavioral state scores and switch use did not systematically covary for any participant. Results suggest the importance of considering environmental stimuli in relation to switch use when working with individuals with profound multiple impairments.

  20. A technology-assisted learning setup as assessment supplement for three persons with a diagnosis of post-coma vegetative state and pervasive motor impairment.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Buonocunto, Francesca; Sacco, Valentina; Colonna, Fabio; Navarro, Jorge; Lanzilotti, Crocifissa; Bosco, Andrea; Megna, Gianfranco; De Tommaso, Marina

    2009-01-01

    Post-coma persons in an apparent condition of vegetative state and pervasive motor impairment pose serious problems in terms of assessment and intervention options. A technology-based learning assessment procedure might serve for them as a diagnostic supplement with possible implications for rehabilitation intervention. The learning assessment procedure adopted in this study relied on hand-closure and eye-blinking responses and on microswitch technology to detect such responses and to present stimuli. Three participants were involved in the study. The technology consisted of a touch/pressure sensor fixed on the hand or an optic sensor mounted on an eyeglasses' frame, which were combined with a control system linked to stimulus sources. The study adopted an ABABCB sequence, in which A represented baseline periods, B intervention periods with stimuli contingent on the responses, and C a control condition with stimuli presented non-contingently. Data showed that the level of responding during the B phases was significantly higher than the levels observed during the A phases as well as the C phase for two of the three participants (i.e., indicating clear signs of learning by them). Learning might be deemed to represent basic levels of knowledge/consciousness. Thus, detecting signs of learning might help one revise a previous diagnosis of vegetative state with wide implications for rehabilitation perspectives.

  1. Motor actuated vacuum door. [for photography from sounding rockets]

    NASA Technical Reports Server (NTRS)

    Hanagud, A. V.

    1986-01-01

    Doors that allow scientific instruments to record and retrieve the observed data are often required to be designed and installed as part of sounding rocket hardware. The motor-actuated vacuum door was designed to maintain a medium vacuum on the order of 0.0001 torr or better while closed, and to provide an opening 15 inches long x 8.5 inches wide while open for cameras to image Halley's comet. When the electric motor receives the instruction to open the door through the payload battery, timer, and relay circuit, the first operation is to unlock the door. After unlatching, the torque transmitted by the motor to the main shaft through the links opens the door. A microswitch actuator, which rides on the linear motion conversion mechanism, is adjusted to trip the limit switch at the end of the travel. The process is repeated in reverse order to close the door. O-rings are designed to maintain the seal. Door mechanisms similar to the one described have flown on Aerobee 17.018 and Black Brant 27.047 payloads.

  2. Qualification Tests of Micro-camera Modules for Space Applications

    NASA Astrophysics Data System (ADS)

    Kimura, Shinichi; Miyasaka, Akira

    Visual capability is very important for space-based activities, for which small, low-cost space cameras are desired. Although cameras for terrestrial applications are continually being improved, little progress has been made on cameras used in space, which must be extremely robust to withstand harsh environments. This study focuses on commercial off-the-shelf (COTS) CMOS digital cameras because they are very small and are based on an established mass-market technology. Radiation and ultrahigh-vacuum tests were conducted on a small COTS camera that weighs less than 100 mg (including optics). This paper presents the results of the qualification tests for COTS cameras and for a small, low-cost COTS-based space camera.

  3. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  4. Technology-aided leisure and communication: Opportunities for persons with advanced Parkinson's disease.

    PubMed

    Lancioni, Giulio; Singh, Nirbhay; O'Reilly, Mark; Sigafoos, Jeff; D'Amico, Fiora; Sasanelli, Giovanni; Denitto, Floriana; Lang, Russell

    2016-12-01

    This study investigated whether simple technology-aided programs could be used to promote leisure and communication engagement in three persons with advanced Parkinson's disease. The programs included music and video options, which were combined with (a) text messaging and telephone calls for the first participant, (b) verbal statements/requests, text messaging, and reading for the second participant, and (c) verbal statements/requests and prayers for the third participant. The participants could activate those options via hand movement or vocal emission and specific microswitches. All three participants were successful in activating the options available. The mean cumulative frequencies of option activations were about five per 15-min session for the first two participants and about four per 10-min session for the third participant. The results were considered encouraging and relevant given the limited amount of evidence available on helping persons with advanced Parkinson's disease with leisure and communication.

  5. 12. VIEW OF TYPICAL CELL LOCKING MECHANISM, BUILDING 220 CELL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. VIEW OF TYPICAL CELL LOCKING MECHANISM, BUILDING 220 CELL BLOCK 'A'. THE FACE PLATE OF THE CELL LOCK IS SHOWN REMOVED, EXPOSING THE ELECTROMAGNETIC LOCKING MECHANISM COMPRISING 2 MICROSWITCHES FOR LOCK POSITION INDICATION (FRONT LEFT CENTER AND REAR RIGHT CENTER OF PANEL); KEY SLOT MECHANICAL LOCK; LOCK SPRING (UPPER RIGHT OF PANEL); ELECTRIC SOLENOID (BOTTOM RIGHT CORNER OF PANEL); AND MISCELLANEOUS MECHANICAL LINKAGES. - U.S. Naval Base, Pearl Harbor, Brig, Neville Way near Ninth Street at Marine Barracks, Pearl City, Honolulu County, HI

  6. New light field camera based on physical based rendering tracing

    NASA Astrophysics Data System (ADS)

    Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung

    2014-03-01

    Even though light field technology was first invented more than 50 years ago, it did not gain popularity because of the limitations imposed by the computation technology of the time. With the rapid advancement of computer technology over the last decade, that limitation has been lifted and light field technology has quickly returned to the research spotlight. In this paper, PBRT (Physically Based Rendering Tracing) was introduced to overcome the limitations of using a traditional optical simulation approach to study light field camera technology. More specifically, traditional optical simulation can present light energy distributions but typically lacks the capability to render pictures of realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation for creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provided a way to link virtual scenes with real measurement results. Several images developed with the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It is shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. Operational constraints, performance metrics, required computation resources, etc. associated with this newly developed light field camera technique are presented in detail.

  7. Non-invasive brain-computer interface system: towards its application as assistive technology.

    PubMed

    Cincotti, Febo; Mattia, Donatella; Aloise, Fabio; Bufalari, Simona; Schalk, Gerwin; Oriolo, Giuseppe; Cherubini, Andrea; Marciani, Maria Grazia; Babiloni, Fabio

    2008-04-15

    The quality of life of people suffering from severe motor disabilities can benefit from the use of current assistive technology capable of ameliorating communication, house-environment management, and mobility, according to the user's residual motor abilities. Brain-computer interfaces (BCIs) are systems that can translate brain activity into signals that control external devices. Thus they can represent the only technology that allows severely paralyzed patients to increase or maintain their communication and control options. Here we report on a pilot study in which a system was implemented and validated to allow disabled persons to improve or recover their mobility (directly or by emulation) and communication within the surrounding environment. The system is based on a software controller that offers the user a communication interface matched with the individual's residual motor abilities. Patients (n=14) with severe motor disabilities due to progressive neurodegenerative disorders were trained to use the system prototype under a rehabilitation program carried out in a house-like furnished space. All users utilized regular assistive control options (e.g., microswitches or head trackers). In addition, four subjects learned to operate the system by means of a non-invasive EEG-based BCI. This system was controlled by the subjects' voluntary modulations of EEG sensorimotor rhythms recorded on the scalp; this skill was learnt even though the subjects had not had control over their limbs for a long time. We conclude that such a prototype system, which integrates several different assistive technologies including a BCI system, can potentially facilitate the translation from pre-clinical demonstrations to a clinically useful BCI.

  8. Non invasive Brain-Computer Interface system: towards its application as assistive technology

    PubMed Central

    Cincotti, Febo; Mattia, Donatella; Aloise, Fabio; Bufalari, Simona; Schalk, Gerwin; Oriolo, Giuseppe; Cherubini, Andrea; Marciani, Maria Grazia; Babiloni, Fabio

    2010-01-01

    The quality of life of people suffering from severe motor disabilities can benefit from the use of current assistive technology capable of ameliorating communication, house-environment management, and mobility, according to the user's residual motor abilities. Brain Computer Interfaces (BCIs) are systems that can translate brain activity into signals that control external devices. Thus they can represent the only technology that allows severely paralyzed patients to increase or maintain their communication and control options. Here we report on a pilot study in which a system was implemented and validated to allow disabled persons to improve or recover their mobility (directly or by emulation) and communication within the surrounding environment. The system is based on a software controller that offers the user a communication interface matched with the individual's residual motor abilities. Patients (n=14) with severe motor disabilities due to progressive neurodegenerative disorders were trained to use the system prototype under a rehabilitation program carried out in a house-like furnished space. All users utilized regular assistive control options (e.g., microswitches or head trackers). In addition, four subjects learned to operate the system by means of a non-invasive EEG-based BCI. This system was controlled by the subjects' voluntary modulations of EEG sensorimotor rhythms recorded on the scalp; this skill was learnt even though the subjects had not had control over their limbs for a long time. We conclude that such a prototype system, which integrates several different assistive technologies including a BCI system, can potentially facilitate the translation from pre-clinical demonstrations to a clinically useful BCI. PMID:18394526

  9. Fundus Photography in the 21st Century--A Review of Recent Technological Advances and Their Implications for Worldwide Healthcare.

    PubMed

    Panwar, Nishtha; Huang, Philemon; Lee, Jiaying; Keane, Pearse A; Chuan, Tjin Swee; Richhariya, Ashutosh; Teoh, Stephen; Lim, Tock Han; Agrawal, Rupesh

    2016-03-01

    The introduction of fundus photography has significantly impacted retinal imaging and retinal screening programs. Fundus cameras play a vital role in addressing the causes of preventable blindness. More attention is being turned to developing countries, where infrastructure and access to healthcare are limited. One of the major limitations for tele-ophthalmology is restricted access to the office-based fundus camera. Recent advances in access to telecommunications, coupled with the introduction of portable cameras and smartphone-based fundus imaging systems, have resulted in an exponential surge in available technologies for portable fundus photography. Retinal cameras in the near future will have to cater to these needs by featuring a low-cost, portable design with automated controls and digitized images with Web-based transfer. In this review, we aim to highlight the advances in fundus photography for retinal screening as well as discuss the advantages, disadvantages, and implications of the various technologies that are currently available.

  10. Fundus Photography in the 21st Century—A Review of Recent Technological Advances and Their Implications for Worldwide Healthcare

    PubMed Central

    Panwar, Nishtha; Huang, Philemon; Lee, Jiaying; Keane, Pearse A.; Chuan, Tjin Swee; Richhariya, Ashutosh; Teoh, Stephen; Lim, Tock Han

    2016-01-01

    Abstract Background: The introduction of fundus photography has significantly impacted retinal imaging and retinal screening programs. Literature Review: Fundus cameras play a vital role in addressing the causes of preventable blindness. More attention is being turned to developing countries, where infrastructure and access to healthcare are limited. One of the major limitations for tele-ophthalmology is restricted access to the office-based fundus camera. Results: Recent advances in access to telecommunications, coupled with the introduction of portable cameras and smartphone-based fundus imaging systems, have resulted in an exponential surge in available technologies for portable fundus photography. Retinal cameras in the near future will have to cater to these needs by featuring a low-cost, portable design with automated controls and digitized images with Web-based transfer. Conclusions: In this review, we aim to highlight the advances in fundus photography for retinal screening as well as discuss the advantages, disadvantages, and implications of the various technologies that are currently available. PMID:26308281

  11. Enhanced technologies for unattended ground sensor systems

    NASA Astrophysics Data System (ADS)

    Hartup, David C.

    2010-04-01

    Progress in several technical areas is being leveraged to advantage in Unattended Ground Sensor (UGS) systems. This paper discusses advanced technologies that are appropriate for use in UGS systems. While some technologies provide evolutionary improvements, other technologies result in revolutionary performance advancements for UGS systems. Some specific technologies discussed include wireless cameras and viewers, commercial PDA-based system programmers and monitors, new materials and techniques for packaging improvements, low power cueing sensor radios, advanced long-haul terrestrial and SATCOM radios, and networked communications. Other technologies covered include advanced target detection algorithms, high pixel count cameras for license plate and facial recognition, small cameras that provide large stand-off distances, video transmissions of target activity instead of still images, sensor fusion algorithms, and control center hardware. The impact of each technology on the overall UGS system architecture is discussed, along with the advantages provided to UGS system users. Areas of analysis include required camera parameters as a function of stand-off distance for license plate and facial recognition applications, power consumption for wireless cameras and viewers, sensor fusion communication requirements, and requirements to practically implement video transmission through UGS systems. Examples of devices that have already been fielded using technology from several of these areas are given.

  12. The LSST Camera 500 watt -130 degC Mixed Refrigerant Cooling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowden, Gordon B.; Langton, Brian J.; /SLAC

    2014-05-28

    The LSST Camera has a higher cryogenic heat load than previous CCD telescope cameras due to its large size (634 mm diameter focal plane, 3.2 gigapixels) and its close-coupled front-end electronics operating at low temperature inside the cryostat. Various refrigeration technologies were considered for this telescope/camera environment, and MMR-Technology's mixed refrigerant technology was chosen. A collaboration with that company was started in 2009. The system, based on a cluster of Joule-Thomson refrigerators running a special blend of mixed refrigerants, is described. Both the advantages and problems of applying this technology to telescope camera refrigeration are discussed. Test results from a prototype refrigerator running in a realistic telescope configuration are reported. Current and future stages of the development program are described. (auth)

  13. Camera-on-a-Chip

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Jet Propulsion Laboratory's research on a second-generation, solid-state image sensor technology has resulted in the Complementary Metal-Oxide Semiconductor (CMOS) Active Pixel Sensor, establishing an alternative to the Charge-Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products of its own based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS "active-pixel" digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years. The CMOS sensors draw less energy, they use the same manufacturing platform as most microprocessors and memory chips, and they allow on-chip programming of frame size, exposure, and other parameters.

  14. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  15. Comparison of the effectiveness of three retinal camera technologies for malarial retinopathy detection in Malawi

    NASA Astrophysics Data System (ADS)

    Soliz, Peter; Nemeth, Sheila C.; Barriga, E. Simon; Harding, Simon P.; Lewallen, Susan; Taylor, Terrie E.; MacCormick, Ian J.; Joshi, Vinayak S.

    2016-03-01

    The purpose of this study was to test the suitability of three available camera technologies (desktop, portable, and iPhone-based) for imaging comatose children who presented with clinical symptoms of malaria. Ultimately, the results of the project would form the basis for the design of a future camera to screen for malarial retinopathy (MR) in a resource-challenged environment. The desktop, portable, and iPhone-based cameras were represented by the Topcon, Pictor Plus, and Peek cameras, respectively. These cameras were tested on N=23 children presenting with symptoms of cerebral malaria (CM) at a malaria clinic, Queen Elizabeth Teaching Hospital in Malawi, Africa. Each patient was dilated for a binocular indirect ophthalmoscopy (BIO) exam by an ophthalmologist, followed by imaging with all three cameras. Each of the cases was graded according to an internationally established protocol and compared to the BIO as the clinical ground truth. The reader used three principal retinal lesions as markers for MR: hemorrhages, retinal whitening, and vessel discoloration. The study found that the mid-priced Pictor Plus hand-held camera performed considerably better than the lower-priced mobile-phone-based camera and slightly better than the higher-priced tabletop camera. When comparing the readings of digital images against the clinical reference standard (BIO), the Pictor Plus camera had a sensitivity and specificity for MR of 100% and 87%, respectively. This compares to a sensitivity and specificity of 87% and 75% for the iPhone-based camera and 100% and 75% for the desktop camera. The drawback of all the cameras was their limited field of view, which did not allow a complete view of the periphery, where vessel discoloration occurs most frequently. The consequence was that vessel discoloration was not addressed in this study. None of the cameras offered real-time image quality assessment to ensure high-quality images and thus afford the best possible opportunity for reading by a remotely located specialist.
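
    The sensitivity and specificity figures in this record follow from standard confusion-matrix arithmetic. The sketch below illustrates the calculation; the per-camera counts are hypothetical (the record does not report confusion matrices), chosen only to be consistent with N=23 and close to the reported 100%/87% for the Pictor Plus:

    ```python
    def sensitivity_specificity(tp, fn, tn, fp):
        """Standard screening metrics:
        sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical counts: 15 true positives, 0 false negatives,
    # 7 true negatives, 1 false positive (15 + 0 + 7 + 1 = 23 patients).
    sens, spec = sensitivity_specificity(tp=15, fn=0, tn=7, fp=1)
    print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
    ```

    With these counts the sketch yields 100% sensitivity and 87.5% specificity, near the 100%/87% the record reports for the Pictor Plus.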

  16. System for critical infrastructure security based on multispectral observation-detection module

    NASA Astrophysics Data System (ADS)

    Trzaskawka, Piotr; Kastek, Mariusz; Życzkowski, Marek; Dulski, Rafał; Szustakowski, Mieczysław; Ciurapiński, Wiesław; Bareła, Jarosław

    2013-10-01

    Recent terrorist attacks, and the possibility of such actions in the future, have driven the development of security systems for critical infrastructures that embrace both sensor technologies and the technical organization of systems. The perimeter protection of stationary objects used until now, based on the construction of a ring with two-zone fencing and visual cameras with illumination, is being efficiently displaced by multisensor systems that consist of: visible technology - day/night cameras registering the optical contrast of a scene; thermal technology - inexpensive bolometric cameras recording the thermal contrast of a scene; and active ground radars - microwave and millimetre wavelengths that detect and record reflected radiation. Merging these three different technologies into one system requires a methodology for selecting the technical conditions of installation and the parameters of the sensors. This procedure enables us to construct a system with correlated range, resolution, field of view, and object identification. An important technical problem connected with the multispectral system is its software, which couples the radar with the cameras. This software can be used for automatic focusing of the cameras, automatic guiding of the cameras to an object detected by the radar, tracking of the object, and localization of the object on a digital map, as well as target identification and alerting. Based on a "plug and play" architecture, the system provides unmatched flexibility and simple integration of sensors and devices in TCP/IP networks. Using a graphical user interface, it is possible to control sensors, monitor streaming video and other data over the network, visualize the results of the data fusion process, and obtain detailed information about detected intruders on a digital map. The system provides high-level applications and operator workload reduction with features such as sensor-to-sensor cueing from detection devices, automatic e-mail notification, and alarm triggering. The paper presents the structure and some elements of a critical infrastructure protection solution based on a modular multisensor security system. The system description focuses mainly on the methodology for selecting sensor parameters. The results of tests in real conditions are also presented.

  17. [Results of testing of MINISKAN mobile gamma-ray camera and specific features of its design].

    PubMed

    Utkin, V M; Kumakhov, M A; Blinov, N N; Korsunskiĭ, V N; Fomin, D K; Kolesnikova, N V; Tultaev, A V; Nazarov, A A; Tararukhina, O B

    2007-01-01

    The main results of engineering, biomedical, and clinical testing of MINISKAN mobile gamma-ray camera are presented. Specific features of the camera hardware and software, as well as the main technical specifications, are described. The gamma-ray camera implements a new technology based on reconstructive tomography, aperture encoding, and digital processing of signals.

  18. Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor

    NASA Astrophysics Data System (ADS)

    Dragone, A.; Kenney, C.; Lozinskaya, A.; Tolbanov, O.; Tyazhev, A.; Zarubin, A.; Wang, Zhehui

    2016-11-01

    A multilayer stacked X-ray camera concept is described. This type of technology is called a `4H' X-ray camera, where 4H stands for high-Z (Z>30) sensor, high resolution (less than 300 micron pixel pitch), high speed (above 100 MHz), and high energy (above 30 keV in photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use an integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on modifications to the ePix family of ASICs is discussed. Applications in X-ray free electron lasers (XFELs), synchrotrons, medicine, and non-destructive testing are possible.

  19. Streak camera receiver definition study

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.

    1990-01-01

    Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.

  20. Neural network based feed-forward high density associative memory

    NASA Technical Reports Server (NTRS)

    Daud, T.; Moopenn, A.; Lamb, J. L.; Ramesham, R.; Thakoor, A. P.

    1987-01-01

    A novel thin-film approach to neural-network-based high-density associative memory is described. The information is stored locally in a memory matrix of passive, nonvolatile, binary connection elements with the potential to achieve a storage density of 10 to the 9th bits/sq cm. Microswitches based on memory switching in thin-film hydrogenated amorphous silicon, and alternatively in manganese oxide, have been used as programmable read-only memory elements. Low-energy switching has been ascertained in both of these materials. Fabrication and testing of the memory matrix are described. High-speed associative recall approaching 10 to the 7th bits/sec and high storage capacity in such a connection matrix memory system are also described.

  1. Spinoff 1999

    NASA Technical Reports Server (NTRS)

    1999-01-01

    A survey is presented of NASA-developed technologies and systems that were reaching commercial application in the course of 1999. Attention is given to the contributions of each major NASA Research Center. Representative 'spinoff' technologies include the predictive AI engine monitoring system EMPAS, the GPS-based Wide Area Augmentation System for aircraft navigation, a CMOS-Active Pixel Sensor camera-on-a-chip, a marine spectroradiometer, portable fuel cells, hyperspectral camera technology, and a rapid-prototyping process for ceramic components.

  2. Visual Positioning Indoors: Human Eyes vs. Smartphone Cameras

    PubMed Central

    Wu, Dewen; Chen, Ruizhi; Chen, Liang

    2017-01-01

    Artificial Intelligence (AI) technologies and their related applications are now developing at a rapid pace. Indoor positioning will be one of the core technologies that enable AI applications because people spend 80% of their time indoors. Humans can locate themselves relative to a visually well-defined object, e.g., a door, based on their visual observations. Can a smartphone camera do a similar job when it points to an object? In this paper, a visual positioning solution was developed based on a single image captured from a smartphone camera pointing at a well-defined object. The smartphone camera simulates the process by which human eyes locate themselves relative to a well-defined object. Extensive experiments were conducted with five types of smartphones in three different indoor settings: a meeting room, a library, and a reading room. Experimental results show that the average positioning accuracy of the solution based on five smartphone cameras is 30.6 cm, while that of the human-observed solution with 300 samples from 10 different people is 73.1 cm. PMID:29144420

  3. Visual Positioning Indoors: Human Eyes vs. Smartphone Cameras.

    PubMed

    Wu, Dewen; Chen, Ruizhi; Chen, Liang

    2017-11-16

    Artificial Intelligence (AI) technologies and their related applications are now developing at a rapid pace. Indoor positioning will be one of the core technologies that enable AI applications because people spend 80% of their time indoors. Humans can locate themselves relative to a visually well-defined object, e.g., a door, based on their visual observations. Can a smartphone camera do a similar job when it points to an object? In this paper, a visual positioning solution was developed based on a single image captured from a smartphone camera pointing at a well-defined object. The smartphone camera simulates the process by which human eyes locate themselves relative to a well-defined object. Extensive experiments were conducted with five types of smartphones in three different indoor settings: a meeting room, a library, and a reading room. Experimental results show that the average positioning accuracy of the solution based on five smartphone cameras is 30.6 cm, while that of the human-observed solution with 300 samples from 10 different people is 73.1 cm.
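    As a rough illustration of the single-image geometry such a solution relies on, the pinhole model relates object size, focal length, and distance. The function name and numbers below are hypothetical, not taken from the paper:

    ```python
    def distance_from_pinhole(object_height_m, focal_length_px, pixel_height_px):
        """Pinhole-camera range to an object of known physical height:
        distance = real_height * focal_length / imaged_height."""
        return object_height_m * focal_length_px / pixel_height_px

    # Hypothetical example: a 2.0 m door imaged 500 px tall by a camera with a
    # 1000 px focal length lies 4.0 m away.
    d = distance_from_pinhole(2.0, 1000.0, 500.0)
    ```

    A full solution additionally needs the camera's intrinsic calibration and the object's bearing, but this relation is the core of relative positioning against a well-defined object.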

  4. Image quality assessment for selfies with and without super resolution

    NASA Astrophysics Data System (ADS)

    Kubota, Aya; Gohshi, Seiichi

    2018-04-01

    With the advent of cellphone cameras, in particular on smartphones, many people now take photos of themselves alone and with others in the frame; such photos are popularly known as "selfies". Most smartphones are equipped with two cameras: the front-facing and rear cameras. The camera located on the back of the smartphone is referred to as the "out-camera," whereas the one located on the front is called the "in-camera." In-cameras are mainly used for selfies. Some smartphones feature high-resolution cameras. However, the original image quality cannot be obtained because smartphone cameras often have low-performance lenses. Super resolution (SR) is one of the recent technological advancements that has increased image resolution. We developed a new SR technology that can be processed on smartphones. Smartphones with the new SR technology are currently available in the market and have already registered sales. However, the effective use of the new SR technology has not yet been verified. Comparing the image quality with and without SR on a smartphone display is necessary to confirm the usefulness of this new technology. Methods based on objective and subjective assessments are required to quantitatively measure image quality. It is known that typical objective assessment values, such as the Peak Signal to Noise Ratio (PSNR), do not always agree with how we perceive image and video quality. When digital broadcasting started, the standard was determined using subjective assessment. Although subjective assessment usually comes at a high cost because of personnel expenses for observers, the results are highly reproducible when the assessments are conducted under the right conditions and with statistical analysis. In this study, the subjective assessment results for selfie images are reported.
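    Since the abstract contrasts PSNR with subjective scores, a minimal sketch of how PSNR itself is computed may help (pure Python over flattened grayscale pixel lists; names are illustrative):

    ```python
    import math

    def psnr(original, degraded, max_value=255.0):
        """Peak Signal-to-Noise Ratio in dB between two equal-length pixel
        lists: PSNR = 10 * log10(MAX^2 / MSE)."""
        mse = sum((a - b) ** 2 for a, b in zip(original, degraded)) / len(original)
        if mse == 0:
            return float("inf")  # identical images
        return 10.0 * math.log10(max_value ** 2 / mse)
    ```

    A higher PSNR means smaller pixel-wise error, but, as the abstract notes, it does not always track perceived quality, which is why the study relies on subjective assessment.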

  5. Infrared detectors and test technology of cryogenic camera

    NASA Astrophysics Data System (ADS)

    Yang, Xiaole; Liu, Xingxin; Xing, Mailing; Ling, Long

    2016-10-01

    A cryogenic camera, widely used in deep-space detection, cools down its optical system and support structure by cryogenic refrigeration technology, thereby improving sensitivity. The characteristics and design points of the infrared detector are discussed in combination with the camera's characteristics. At the same time, cryogenic-background test systems for the chip and the detector assembly are established. The chip test system is based on a variable-temperature, multilayer Dewar, and the assembly test system is based on a target and background simulator in a thermal vacuum environment. The core of the test is to establish a cryogenic background. Finally, test results for non-uniformity, dead-pixel ratio, and noise are given. The established test systems support the design and calculation of infrared systems.

  6. Promoting ambulation responses among children with multiple disabilities through walkers and microswitches with contingent stimuli.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Oliva, Doretta; Smaldone, Angela; La Martire, Maria L; Stasolla, Fabrizio; Castagnaro, Francesca; Groeneweg, Jop

    2010-01-01

    Children with severe or profound intellectual and motor disabilities often present problems of balance and ambulation and spend much of their time sitting or lying, with negative consequences for their development and social status. Recent research has shown the possibility of using a walker (support) device and microswitches with preferred stimuli to promote ambulation with these children. This study served as a replication of the aforementioned research and involved five new children with multiple disabilities. For four children, the study involved an ABAB design; for the fifth child, only an AB sequence was used. All children succeeded in increasing their frequencies of step responses during the B (intervention) phase(s) of the study, although the overall frequencies of those responses varied widely across children. These findings add to the positive evidence already available on the effectiveness of this intervention approach in motivating and promoting children's ambulation. Practical implications of the findings are discussed. 2010 Elsevier Ltd. All rights reserved.

  7. CMOS Image Sensors: Electronic Camera On A Chip

    NASA Technical Reports Server (NTRS)

    Fossum, E. R.

    1995-01-01

    Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On- chip analog to digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low cost uses.

  8. Semi-autonomous wheelchair system using stereoscopic cameras.

    PubMed

    Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T

    2009-01-01

    This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras has the purpose of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment displayed the effectiveness of this assistive technology.
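    A minimal sketch of the SAD block matching the abstract describes, on 1-D grayscale rows (the window size and search range are illustrative, not the wheelchair system's actual parameters):

    ```python
    def sad(left_window, right_window):
        """Sum of Absolute Differences between two equal-size pixel windows."""
        return sum(abs(a - b) for a, b in zip(left_window, right_window))

    def best_disparity(left_row, right_row, x, window=3, max_disp=4):
        """For the window centred at column x of the left row, find the
        leftward shift into the right row that minimises the SAD cost."""
        half = window // 2
        ref = left_row[x - half : x + half + 1]
        costs = {}
        for d in range(max_disp + 1):
            if x - d - half < 0:
                break  # candidate window would fall off the image
            costs[d] = sad(ref, right_row[x - d - half : x - d + half + 1])
        return min(costs, key=costs.get)

    left = [0, 0, 9, 8, 7, 0, 0, 0]   # feature at columns 2-4
    right = [9, 8, 7, 0, 0, 0, 0, 0]  # same feature shifted 2 px left
    ```

    Here `best_disparity(left, right, x=3)` returns 2; larger disparities correspond to nearer objects, which is the information the disparity image and the derived depth map encode.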

  9. Ultrahigh- and high-speed photography, videography, and photonics '91; Proceedings of the Meeting, San Diego, CA, July 24-26, 1991

    NASA Astrophysics Data System (ADS)

    Jaanimagi, Paul A.

    1992-01-01

    This volume presents papers grouped under the topics on advances in streak and framing camera technology, applications of ultrahigh-speed photography, characterizing high-speed instrumentation, high-speed electronic imaging technology and applications, new technology for high-speed photography, high-speed imaging and photonics in detonics, and high-speed velocimetry. The papers presented include those on a subpicosecond X-ray streak camera, photocathodes for ultrasoft X-ray region, streak tube dynamic range, high-speed TV cameras for streak tube readout, femtosecond light-in-flight holography, and electrooptical systems characterization techniques. Attention is also given to high-speed electronic memory video recording techniques, high-speed IR imaging of repetitive events using a standard RS-170 imager, use of a CCD array as a medium-speed streak camera, the photography of shock waves in explosive crystals, a single-frame camera based on the type LD-S-10 intensifier tube, and jitter diagnosis for pico- and femtosecond sources.

  10. Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dragone, Angelo; Kenney, Chris; Lozinskaya, Anastassiya

    Here, we describe a multilayer stacked X-ray camera concept. This type of technology is called `4H' X-ray cameras, where 4H stands for high-Z (Z>30) sensor, high-resolution (less than 300 micron pixel pitch), high-speed (above 100 MHz), and high-energy (above 30 keV in photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use the integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on a modification of the ePix family of ASICs is discussed. Applications in X-ray free electron lasers (XFELs), synchrotrons, medicine and non-destructive testing are possible.

  11. Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor

    DOE PAGES

    Dragone, Angelo; Kenney, Chris; Lozinskaya, Anastassiya; ...

    2016-11-29

    Here, we describe a multilayer stacked X-ray camera concept. This type of technology is called `4H' X-ray cameras, where 4H stands for high-Z (Z>30) sensor, high-resolution (less than 300 micron pixel pitch), high-speed (above 100 MHz), and high-energy (above 30 keV in photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use the integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on a modification of the ePix family of ASICs is discussed. Applications in X-ray free electron lasers (XFELs), synchrotrons, medicine and non-destructive testing are possible.

  12. Assistive technology to help persons in a minimally conscious state develop responding and stimulation control: Performance assessment and social rating.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; D'Amico, Fiora; Buonocunto, Francesca; Navarro, Jorge; Lanzilotti, Crocifissa; Fiore, Piero; Megna, Marisa; Damiani, Sabino

    2015-01-01

    Post-coma persons in a minimally conscious state (MCS) and with extensive motor impairment and lack of speech tend to be passive and isolated. This study aimed to (a) further assess a technology-aided approach for fostering MCS participants' responding and stimulation control and (b) carry out a social validation check about the approach. Eight MCS participants were exposed to the aforementioned approach according to an ABAB design. The technology included optic, pressure or touch microswitches to monitor eyelid, hand or finger responses and a computer system that allowed those responses to produce brief periods of positive stimulation during the B (intervention) phases of the study. Eighty-four university psychology students and 42 care and health professionals were involved in the social validation check. The MCS participants showed clear increases in their response frequencies, thus producing increases in their levels of environmental stimulation input, during the B phases of the study. The students and care and health professionals involved in the social validation check rated the technology-aided approach more positively than a control condition in which stimulation was automatically presented to the participants. A technology-aided approach to foster responding and stimulation control in MCS persons may be effective and socially desirable.

  13. Real-world evaluation of the effectiveness of reversing camera and parking sensor technologies in preventing backover pedestrian injuries.

    PubMed

    Keall, M D; Fildes, B; Newstead, S

    2017-02-01

    Backover injuries to pedestrians are a significant road safety issue, but their prevalence is underestimated because the majority of such injuries fall outside the scope of official road injury recording systems, which focus only on public roads. Based on experimental evidence, reversing cameras have been found to be effective in reducing the rate of collisions when reversing; the evidence for the effectiveness of reverse parking sensors has been mixed. The wide availability of these technologies in recent model vehicles provides impetus for real-world evaluations using crash data. A logistic model was fitted to data from crashes that occurred on public roads, comprising 3172 pedestrian injuries in New Zealand and four Australian states, to estimate the odds of backover injury (compared with other sorts of pedestrian injury crashes) for the different technology combinations fitted as standard equipment (both reversing cameras and sensors; just reversing cameras; just sensors; neither cameras nor sensors), controlling for vehicle type, jurisdiction, speed limit area and year of manufacture restricted to the range 2007-2013. Compared with vehicles without any of these technologies, reduced odds of backover injury were estimated for all three technology configurations: 0.59 (95% CI 0.39-0.88) for reversing cameras by themselves; 0.70 (95% CI 0.49-1.01) for both reversing cameras and sensors; 0.69 (95% CI 0.47-1.03) for reverse parking sensors by themselves. These findings are important as they are, to our knowledge, the first assessment of the real-world safety effectiveness of these technologies. Copyright © 2016 Elsevier Ltd. All rights reserved.
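    The study's estimates come from a logistic model with covariate adjustment; as a hedged, unadjusted analogue, the odds ratio and Wald 95% CI from a 2×2 crash table can be sketched as follows (the counts below are made up, not the study's data):

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
        a = backovers with the technology, b = other pedestrian crashes with it,
        c = backovers without it,          d = other crashes without it."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, lo, hi
    ```

    An interval whose upper bound crosses 1.0, as for two of the three reported configurations, means the reduction is not statistically significant at the 5% level.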

  14. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    PubMed

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new, non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show the relevant imaging characteristics and the applicability of such imaging technology for PSP. Camera performance is benchmarked against standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.

  15. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras

    PubMed Central

    Spinosa, Emanuele; Roberts, David A.

    2017-01-01

    Measurements of pressure-sensitive paint (PSP) have been performed using new, non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show the relevant imaging characteristics and the applicability of such imaging technology for PSP. Camera performance is benchmarked against standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access. PMID:28757553

  16. 16 CFR 1610.5 - Test apparatus and materials.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... electronic circuits, in addition to miscellaneous custom made cams and rods, shock absorbing linkages, and... burn time to 0.1 second. An electronic or mechanical timer can be used to record the burn time, and electro-mechanical devices (i.e., servo-motors, solenoids, micro-switches, and electronic circuits, in...

  17. 16 CFR 1610.5 - Test apparatus and materials.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... electronic circuits, in addition to miscellaneous custom made cams and rods, shock absorbing linkages, and... burn time to 0.1 second. An electronic or mechanical timer can be used to record the burn time, and electro-mechanical devices (i.e., servo-motors, solenoids, micro-switches, and electronic circuits, in...

  18. 16 CFR § 1610.5 - Test apparatus and materials.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... electronic circuits, in addition to miscellaneous custom made cams and rods, shock absorbing linkages, and... burn time to 0.1 second. An electronic or mechanical timer can be used to record the burn time, and electro-mechanical devices (i.e., servo-motors, solenoids, micro-switches, and electronic circuits, in...

  19. Studies on a silicon-photomultiplier-based camera for Imaging Atmospheric Cherenkov Telescopes

    NASA Astrophysics Data System (ADS)

    Arcaro, C.; Corti, D.; De Angelis, A.; Doro, M.; Manea, C.; Mariotti, M.; Rando, R.; Reichardt, I.; Tescaro, D.

    2017-12-01

    Imaging Atmospheric Cherenkov Telescopes (IACTs) represent a class of instruments dedicated to the ground-based observation of cosmic VHE gamma-ray emission, based on the detection of the Cherenkov radiation produced in the interaction of gamma rays with the Earth's atmosphere. One of the key elements of such instruments is a pixelized focal-plane camera consisting of photodetectors. To date, photomultiplier tubes (PMTs) have been the common choice given their high photon detection efficiency (PDE) and fast time response. Recently, silicon photomultipliers (SiPMs) have emerged as an alternative. This rapidly evolving technology has strong potential to become superior to PMTs in terms of PDE, which would further improve the sensitivity of IACTs, and to see a price reduction per square millimeter of detector area. We are working to develop a SiPM-based module for the focal-plane cameras of the MAGIC telescopes in order to probe this technology for IACTs with large focal-plane cameras of an area of a few square meters. We will describe the solutions we are exploring in order to balance competitive performance with minimal impact on the overall MAGIC camera design, using ray-tracing simulations. We further present a comparative study of the overall light throughput based on Monte Carlo simulations, considering the properties of the major hardware elements of an IACT.

  20. Low-cost uncooled VOx infrared camera development

    NASA Astrophysics Data System (ADS)

    Li, Chuan; Han, C. J.; Skidmore, George D.; Cook, Grady; Kubala, Kenny; Bates, Robert; Temple, Dorota; Lannon, John; Hilton, Allan; Glukh, Konstantin; Hardy, Busbee

    2013-06-01

    The DRS Tamarisk® 320 camera, introduced in 2011, is a low-cost commercial camera based on the 17 µm pixel pitch 320×240 VOx microbolometer technology. A higher-resolution 17 µm pixel pitch 640×480 Tamarisk® 640 has also been developed and is now in production serving the commercial markets. Recently, under the DARPA-sponsored Low Cost Thermal Imager-Manufacturing (LCTI-M) program and an internal project, DRS is leading a team of industrial experts from FiveFocal, RTI International and MEMSCAP to develop a small form factor uncooled infrared camera for the military and commercial markets. The objective of the DARPA LCTI-M program is to develop a low-SWaP camera (<3.5 cm3 in volume and <500 mW in power consumption) that costs less than US $500 at a 10,000 units per month production rate. To meet this challenge, DRS is developing several innovative technologies including a small pixel pitch 640×512 VOx uncooled detector, an advanced digital ROIC and low-power miniature camera electronics. In addition, DRS and its partners are developing innovative manufacturing processes to reduce production cycle time and costs, including wafer-scale optics and vacuum packaging manufacturing and a 3-dimensional integrated camera assembly. This paper provides an overview of the DRS Tamarisk® project and LCTI-M related uncooled technology development activities. Highlights of recent progress and challenges will also be discussed. It should be noted that BAE Systems and Raytheon Vision Systems are also participants in the DARPA LCTI-M program.

  1. Lytro camera technology: theory, algorithms, performance analysis

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization, aided by the increase in computational power, that characterizes mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, considering the Lytro camera as a black box and using our interpretation of the Lytro image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.

  2. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  3. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects

    PubMed Central

    Lambers, Martin; Kolb, Andreas

    2017-01-01

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, the automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single-bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials, and quantitatively compare the range sensor data. PMID:29271888

  4. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.

    PubMed

    Bulczak, David; Lambers, Martin; Kolb, Andreas

    2017-12-22

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, the automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single-bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials, and quantitatively compare the range sensor data.
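    A minimal sketch of the four-bucket AMCW phase-to-distance relation such cameras use (sign conventions vary by sensor; the convention and names below are illustrative, not the paper's simulator code):

    ```python
    import math

    C = 299_792_458.0  # speed of light, m/s

    def amcw_depth(q0, q90, q180, q270, f_mod):
        """Distance from four phase-stepped correlation samples of an AMCW
        ToF pixel: phase = atan2(q90 - q270, q0 - q180),
        distance = c * phase / (4 * pi * f_mod)."""
        phase = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)
        return C * phase / (4.0 * math.pi * f_mod)
    ```

    The factor 4π (rather than 2π) accounts for the round trip, so the unambiguous range at a 20 MHz modulation frequency is c/(2f) ≈ 7.5 m; multipath interference corrupts exactly these four samples, which is why the simulator models it at the charge level.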

  5. CMOS Imaging Sensor Technology for Aerial Mapping Cameras

    NASA Astrophysics Data System (ADS)

    Neumann, Klaus; Welzenbach, Martin; Timm, Martin

    2016-06-01

    In June 2015 Leica Geosystems launched the first large-format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation for changing from CCD sensor technology to CMOS in the development of this new aerial mapping camera. In 2002 the first-generation DMC was developed by Z/I Imaging. It was the first large-format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large-format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II, using the same system design with one large monolithic PAN sensor and four multispectral camera heads for R, G, B and NIR. For the first time a large 391-megapixel CMOS sensor has been used as the panchromatic sensor, which is an industry record. Along with CMOS technology comes a range of technical benefits. The dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.
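    The claimed factor-of-two dynamic-range advantage can be put in familiar units: dynamic range in dB is 20·log10(full-well capacity / read noise), so doubling the ratio adds about 6 dB (one stop). A hedged sketch with made-up sensor numbers, not Leica's specifications:

    ```python
    import math

    def dynamic_range_db(full_well_e, read_noise_e):
        """Sensor dynamic range in dB from full-well capacity and read
        noise, both in electrons: DR = 20 * log10(full_well / read_noise)."""
        return 20.0 * math.log10(full_well_e / read_noise_e)
    ```

    For example, a hypothetical sensor with a 10,000 e⁻ full well and 10 e⁻ read noise has 60 dB of dynamic range; doubling the full well (or halving the noise) adds roughly 6 dB.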

  6. Overview of Digital Forensics Algorithms in Dslr Cameras

    NASA Astrophysics Data System (ADS)

    Aminova, E.; Trapeznikov, I.; Priorov, A.

    2017-05-01

    The widespread use of mobile technologies and improvements in digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, an important task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of the DSLR (Digital Single Lens Reflex) camera and for improving image formation algorithms. Most research in this area is based on the premise that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the in-camera imaging process. This study focuses on the problem of determining unique features of DSLR cameras based on optical-subsystem artifacts and sensor noise.
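    One widely used sensor trace of the kind the abstract alludes to is the photo-response non-uniformity (PRNU) fingerprint. A toy, pure-Python sketch of averaging denoising residuals and matching a query residual by normalised correlation (flattened pixel lists; an illustration of the general idea, not the authors' algorithm):

    ```python
    import math

    def mean_residuals(residuals):
        """Average denoising residuals from several images taken by one
        camera to estimate its sensor-noise fingerprint."""
        n = len(residuals)
        return [sum(col) / n for col in zip(*residuals)]

    def ncc(a, b):
        """Normalised cross-correlation between a query residual and a
        fingerprint; values near 1 suggest the same sensor."""
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = math.sqrt(sum((x - ma) ** 2 for x in a)
                        * sum((y - mb) ** 2 for y in b))
        return num / den
    ```

    In practice the residuals come from subtracting a denoised image from the original, so the fixed-pattern sensor noise survives the averaging while scene content cancels out.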

  7. Interferometric Dynamic Measurement: Techniques Based on High-Speed Imaging or a Single Photodetector

    PubMed Central

    Fu, Yu; Pedrini, Giancarlo

    2014-01-01

    In recent years, optical interferometry-based techniques have been widely used to perform noncontact measurement of dynamic deformation in different industrial areas. In these applications, various physical quantities need to be measured at every instant, and the Nyquist sampling theorem has to be satisfied along the time axis at each measurement point. Two types of techniques were developed for such measurements: one is based on high-speed cameras and the other uses a single photodetector. The limitation of the temporal measurement range in camera-based technology is mainly due to the low capture rate, while photodetector-based technology can only measure a single point. In this paper, several aspects of these two technologies are discussed. For camera-based interferometry, the discussion includes the introduction of the carrier, the processing of the recorded images, phase extraction algorithms in various domains, and how to increase the temporal measurement range by using multiwavelength techniques. For detector-based interferometry, the discussion mainly focuses on single-point and multipoint laser Doppler vibrometers and their applications for measurement under extreme conditions. The results show the efforts made by researchers to improve the measurement capabilities of interferometry-based techniques to meet the requirements of industrial applications. PMID:24963503
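    For the laser Doppler vibrometer case, the core relation is simple: out-of-plane surface velocity maps to a Doppler frequency shift via v = λ·f_D/2. A sketch assuming a HeNe source at 632.8 nm (the wavelength and names are assumptions, not from the paper):

    ```python
    def ldv_velocity(doppler_shift_hz, wavelength_m=632.8e-9):
        """Out-of-plane surface velocity from the measured Doppler shift of
        a laser Doppler vibrometer: v = wavelength * f_D / 2, with the
        factor 2 from back-reflection along the beam."""
        return wavelength_m * doppler_shift_hz / 2.0
    ```

    A 1 MHz shift at 632.8 nm corresponds to about 0.316 m/s; since the photodetector only needs bandwidth to follow f_D, single-point LDVs reach far higher temporal sampling rates than camera-based interferometry, which is exactly the trade-off the paper discusses.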

  8. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; Macleod, Todd; Gagliano, Larry

    2015-01-01

    On-orbit small debris tracking and characterization is a technical gap in current National Space Situational Awareness that must be closed to safeguard orbital assets and crew, since small debris poses a major risk of MOD (meteoroid and orbital debris) damage to the ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population need to be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or restricted operations on board a host vehicle as a secondary payload. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects against ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras, and using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.
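In the idealized rectified case, the stereo ranging mentioned above reduces to Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity between the two images. A minimal sketch with hypothetical numbers (not mission parameters):

```python
def stereo_range(focal_px, baseline_m, disparity_px):
    # Idealized rectified stereo: depth is inversely proportional
    # to the pixel disparity between the two camera images.
    if disparity_px <= 0:
        raise ValueError("object at infinity or behind the cameras")
    return focal_px * baseline_m / disparity_px

# Hypothetical example: 4000 px effective focal length, 0.5 m baseline.
z_near = stereo_range(4000.0, 0.5, 20.0)  # 20 px disparity -> 100 m
z_far = stereo_range(4000.0, 0.5, 2.0)    # 2 px disparity -> 1000 m
```

Because depth scales as 1/d, a fixed disparity uncertainty produces a range error that grows roughly quadratically with distance, which is why long focal lengths and the widest practical baseline matter for debris ranging.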

  9. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2016-01-01

    On-orbit small debris tracking and characterization is a technical gap in current National Space Situational Awareness that must be closed to safeguard orbital assets and crew, since small debris poses a major risk of MOD (meteoroid and orbital debris) damage to the ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population need to be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or restricted operations on board a host vehicle as a secondary payload. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects against ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras, and using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  10. Controlling the Universe

    ERIC Educational Resources Information Center

    Evanson, Nick

    2004-01-01

    Basic electronic devices have been used to great effect with console computer games. This paper looks at a range of devices from the very simple, such as microswitches and potentiometers, up to the more complex Hall effect probe. There is a great deal of relatively straightforward use of simple devices in computer games systems, and having read…

  11. An Evaluation of Resurgence during Functional Communication Training

    ERIC Educational Resources Information Center

    Wacker, David P.; Harding, Jay W.; Morgan, Theresa A.; Berg, Wendy K.; Schieltz, Kelly M.; Lee, John F.; Padilla, Yaniz C.

    2013-01-01

    Three children who displayed destructive behavior maintained by negative reinforcement received functional communication training (FCT). During FCT, the children were required to complete a demand and then to mand (touch a card attached to a microswitch, sign, or vocalize) to receive brief play breaks. Prior to and 1 to 3 times following the…

  12. Electrostatic Radio Frequency (RF) Microelectromechanical Systems (MEMS) Switches With Metal Alloy Electric Contacts

    DTIC Science & Technology

    2004-09-01


  13. Differences in glance behavior between drivers using a rearview camera, parking sensor system, both technologies, or no technology during low-speed parking maneuvers.

    PubMed

    Kidd, David G; McCartt, Anne T

    2016-02-01

    This study characterized the use of various fields of view during low-speed parking maneuvers by drivers with a rearview camera, a sensor system, a camera and sensor system combined, or neither technology. Participants performed four different low-speed parking maneuvers five times. Glances to different fields of view the second time through the four maneuvers were coded, along with the glance locations at the onset of the audible warning from the sensor system and immediately after the warning for participants in the sensor and camera-plus-sensor conditions. Overall, the results suggest that information from cameras and/or sensor systems is used in place of mirrors and shoulder glances. Participants with a camera, sensor system, or both technologies looked over their shoulders significantly less than participants without technology. Participants with cameras (camera and camera-plus-sensor conditions) used their mirrors significantly less compared with participants without cameras (no-technology and sensor conditions). Participants in the camera-plus-sensor condition looked at the center console/camera display for a smaller percentage of the time during the low-speed maneuvers than participants in the camera condition and glanced more frequently to the center console/camera display immediately after the warning from the sensor system compared with the frequency of glances to this location at warning onset. Although this increase was not statistically significant, the pattern suggests that participants in the camera-plus-sensor condition may have used the warning as a cue to look at the camera display. The observed differences in glance behavior between study groups were illustrated by relating them to the visibility of a 12-15-month-old child-size object. These findings provide evidence that drivers adapt their glance behavior during low-speed parking maneuvers following extended use of rearview cameras and parking sensors, and suggest that other technologies which augment the driving task may do the same. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Verification technology of remote sensing camera satellite imaging simulation based on ray tracing

    NASA Astrophysics Data System (ADS)

    Gu, Qiongqiong; Chen, Xiaomei; Yang, Deyun

    2017-08-01

    Remote sensing satellite camera imaging simulation technology is broadly used to evaluate satellite imaging quality and to test data application systems, but the simulation precision is difficult to verify. In this paper, we propose an experimental simulation verification method based on comparing the variation of test parameters. For a simulation model based on ray tracing, the experiment verifies the model precision by changing the types of devices, which correspond to parameters of the model. The experimental results show that the similarity between the ray-tracing-based imaging model and the experimental image is 91.4%, indicating that the model can simulate the remote sensing satellite imaging system very well.

  15. Conditions that influence the accuracy of anthropometric parameter estimation for human body segments using shape-from-silhouette

    NASA Astrophysics Data System (ADS)

    Mundermann, Lars; Mundermann, Annegret; Chaudhari, Ajit M.; Andriacchi, Thomas P.

    2005-01-01

    Anthropometric parameters are fundamental for a wide variety of applications in biomechanics, anthropology, medicine and sports. Recent technological advancements provide methods for constructing 3D surfaces directly. Of these new technologies, visual hull construction may be the most cost-effective yet sufficiently accurate method. However, the conditions influencing the accuracy of anthropometric measurements based on visual hull reconstruction are unknown. The purpose of this study was to evaluate the conditions that influence the accuracy of 3D shape-from-silhouette reconstruction of body segments as a function of the number of cameras, camera resolution and object contours. The results demonstrate that the visual hulls lacked accuracy in concave regions and narrow spaces, but setups with a high number of cameras reconstructed a human form with an average accuracy of 1.0 mm. In general, setups with fewer than 8 cameras yielded largely inaccurate visual hull constructions, while setups with 16 or more cameras provided good volume estimations. Body segment volumes were obtained with an average error of 10% at a 640x480 resolution using 8 cameras. Changes in resolution did not significantly affect the average error. However, substantial decreases in error were observed with increasing number of cameras (33.3% using 4 cameras; 10.5% using 8 cameras; 4.1% using 16 cameras; 1.2% using 64 cameras).
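The reported dependence of accuracy on camera count can be reproduced in a toy 2-D setting: the visual hull of a disk built from orthographic silhouettes always overestimates the true area, and the overestimate shrinks as cameras are added. A minimal rasterized sketch, not the study's reconstruction pipeline:

```python
import numpy as np

def hull_area(n_cams, r=1.0, grid=1501):
    # Rasterized 2-D visual hull of a disk of radius r.
    xs = np.linspace(-1.5, 1.5, grid)
    X, Y = np.meshgrid(xs, xs)
    hull = np.ones_like(X, dtype=bool)
    for k in range(n_cams):
        theta = np.pi * k / n_cams  # viewing directions over 180 degrees
        # Each orthographic silhouette of the disk constrains the hull
        # to a slab of half-width r; the hull is the slab intersection.
        hull &= np.abs(X * np.cos(theta) + Y * np.sin(theta)) <= r
    cell = (xs[1] - xs[0]) ** 2
    return hull.sum() * cell

true_area = np.pi
err4 = hull_area(4) / true_area - 1.0    # relative overestimate, 4 cameras
err16 = hull_area(16) / true_area - 1.0  # much smaller with 16 cameras
```

Even this convex toy shows the carving effect saturating with camera count; the study's much larger errors at low counts reflect the concavities of real body segments, which silhouettes can never carve out.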

  16. Research on inosculation between master of ceremonies or players and virtual scene in virtual studio

    NASA Astrophysics Data System (ADS)

    Li, Zili; Zhu, Guangxi; Zhu, Yaoting

    2003-04-01

    A technical principle for the construction of a virtual studio is proposed in which an orientation tracker and telemeter are used to improve a conventional BETACAM pickup camera and connect it with the software module of the host. A virtual camera model named Camera & Post-camera Coupling Pair is put forward, which differs from the common model in computer graphics and is bound to the real BETACAM pickup camera for shooting. A formula is derived to compute the foreground and background frame buffer images of the virtual scene, whose boundary is based on the depth information of the target point of the real BETACAM pickup camera's projective ray. Real-time consistency is achieved between the video image sequences of the master of ceremonies or players and the CG video image sequences of the virtual scene in spatial position, perspective relationship and image object masking. The experimental results show that the proposed scheme for constructing a virtual studio is feasible, and more applicable and effective than the existing technology of establishing a virtual studio based on color-key and image synthesis with the background using non-linear video editing techniques.

  17. The integrated design and archive of space-borne signal processing and compression coding

    NASA Astrophysics Data System (ADS)

    He, Qiang-min; Su, Hao-hang; Wu, Wen-bo

    2017-10-01

    With the increasing demand from users for the extraction of remote sensing image information, it is urgent to significantly enhance the whole system's imaging quality and imaging ability by using an integrated design to achieve a compact structure, light weight and higher attitude maneuver ability. At present, the remote sensing camera's video signal processing unit and its image compression and coding unit are distributed in different devices. The volume, weight and power consumption of these two units are relatively large, which cannot meet the requirements of a high-mobility remote sensing camera. According to the technical requirements of the high-mobility remote sensing camera, this paper designs a space-borne integrated signal processing and compression circuit by drawing on several technologies, such as high-speed, high-density analog-digital mixed PCB design, embedded DSP technology and image compression technology based on special-purpose chips. This circuit lays a solid foundation for the research of the high-mobility remote sensing camera.

  18. Presence capture cameras - a new challenge to the image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to the market and a new era of visual entertainment is starting to take shape. Since true presence capturing is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary considerably. Presence capture cameras still have the same quality issues to tackle as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones and, especially, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range create the base of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors, and new quality features also have to be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? This work describes the quality factors which remain valid in presence capture cameras and defines their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view, with consideration of how well current measurement methods can be applied to presence capture cameras.

  19. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2012-10-01

    In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy of the application are enhanced by the use of fiduciary markers fixed to the skull. MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the augmented reality marker is obtained by a registration procedure using the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid by the solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.
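The extended-Kalman-filter tracking at the heart of MonoSLAM can be illustrated with a simplified 1-D constant-velocity Kalman filter (a linear analogue of the pose filter; all numbers are hypothetical, not the system's tuning):

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition for [pos, vel]
H = np.array([[1.0, 0.0]])             # only position is observed
Q = 1e-4 * np.eye(2)                   # process noise covariance
R = np.array([[0.05]])                 # measurement noise covariance

x = np.zeros((2, 1))                   # initial state estimate
P = np.eye(2)                          # initial state covariance

true_vel = 1.0
for k in range(1, 101):
    z = np.array([[true_vel * k * dt]])  # position measurement
    # Predict: propagate state and covariance through the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend the prediction with the measurement via the gain.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

pos_err = abs(float(x[0, 0]) - true_vel * 100 * dt)
vel_err = abs(float(x[1, 0]) - true_vel)
```

MonoSLAM uses the same predict/update cycle, but with a 6-DOF camera pose plus map features in the state and nonlinear projection models linearized at each step (hence "extended").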

  20. Hydrogen Flame Imaging System Soars to New, Different Heights

    NASA Technical Reports Server (NTRS)

    2002-01-01

    When Judy and Dave Duncan of Auburn, Calif.-based Duncan Technologies Inc. (DTI) developed their color hydrogen flame imaging system in the early 1990s, their market prospects were limited. 'We talked about commercializing the technology in the hydrogen community, but we also looked at commercialization on a much broader aspect. While there were some hydrogen applications, the market was not large enough to support an entire company; also, safety issues were a concern,' said Judy Duncan, owner and CEO of Duncan Technologies. Using the basic technology developed under the Small Business Innovation Research Program (SBIR), DTI conducted market research, identified other applications, formulated a plan for next-generation development, and implemented a far-reaching marketing strategy. 'We took that technology, reinvested our own funds and energy into a second-generation design on the overall camera electronics and deployed that basic technology initially in a series of what we call multi-spectral cameras; cameras that could image in both the visible range and the infrared,' explains Duncan. 'The SBIR program allowed us to develop the technology to do a 3CCD camera, which very few companies in the world do, particularly not small companies. The fact that we designed our own prism and spec'ed the coating as we had for the hydrogen application, we were able to create a custom spectral configuration which could support varying types of research and applications.' As a result, Duncan Technologies Inc. of Auburn, Calif., has achieved a milestone of $1 million in sales.

  1. Low-cost low-power uncooled a-Si-based micro infrared camera for unattended ground sensor applications

    NASA Astrophysics Data System (ADS)

    Schimert, Thomas R.; Ratcliff, David D.; Brady, John F., III; Ropson, Steven J.; Gooch, Roland W.; Ritchey, Bobbi; McCardel, P.; Rachels, K.; Wand, Marty; Weinstein, M.; Wynn, John

    1999-07-01

    Low power and low cost are primary requirements for an imaging infrared camera used in unattended ground sensor arrays. In this paper, an amorphous silicon (a-Si) microbolometer-based uncooled infrared camera technology offering a low-cost, low-power solution to infrared surveillance for UGS applications is presented. A 15 × 31 micro infrared camera (MIRC) has been demonstrated which exhibits an f/1 noise equivalent temperature difference sensitivity of approximately 67 mK. This sensitivity has been achieved without the use of a thermoelectric cooler for array temperature stabilization, thereby significantly reducing the power requirements. The chopperless camera is capable of operating from snapshot mode (1 Hz) to video frame rate (30 Hz). Power consumption of 0.4 W without display and 0.75 W with display has been demonstrated at 30 Hz operation. The demonstrated 15 × 31 camera has a 35 mm camera form factor employing a low-cost f/1 singlet optic and LED display, as well as low-cost vacuum packaging. A larger 120 × 160 version of the MIRC is also in development and will be discussed. The 120 × 160 MIRC exhibits a substantially smaller form factor and incorporates all the low-cost, low-power features demonstrated in the 15 × 31 MIRC prototype. In this paper, the a-Si microbolometer technology for the MIRC is presented, along with its key features and performance parameters.

  2. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer.

    PubMed

    Shen, Bailey Y; Mukai, Shizuo

    2017-01-01

    Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133mm × 91mm × 45mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.

  3. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer

    PubMed Central

    Shen, Bailey Y.

    2017-01-01

    Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133mm × 91mm × 45mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient. PMID:28396802

  4. Sensitivity, accuracy, and precision issues in opto-electronic holography based on fiber optics and high-spatial- and high-digital-resolution cameras

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Yokum, Jeffrey S.; Pryputniewicz, Ryszard J.

    2002-06-01

    Sensitivity, accuracy, and precision characteristics in quantitative optical metrology techniques, and specifically in optoelectronic holography based on fiber optics and high-spatial- and high-digital-resolution cameras, are discussed in this paper. It is shown that sensitivity, accuracy, and precision depend on both the effective determination of optical phase and the effective characterization of the illumination-observation conditions. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gages, demonstrating the applicability of quantitative optical metrology techniques to satisfy constantly increasing needs for the study and development of emerging technologies.

  5. Coherent infrared imaging camera (CIRIC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.

    1995-07-01

    New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.

  6. Who Goes There? Linking Remote Cameras and Schoolyard Science to Empower Action

    ERIC Educational Resources Information Center

    Tanner, Dawn; Ernst, Julie

    2013-01-01

    Taking Action Opportunities (TAO) is a curriculum that combines guided reflection, a focus on the local environment, and innovative use of wildlife technology to empower student action toward improving the environment. TAO is experientially based and uses remote cameras as a tool for schoolyard exploration. Through TAO, students engage in research…

  7. Assessing the Impact and Social Perception of Self-Regulated Music Stimulation with Patients with Alzheimer's Disease

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; O'Reilly, Mark F.; Singh, Nirbhay N.; Sigafoos, Jeff; Grumo, Gianluca; Pinto, Katia; Stasolla, Fabrizio; Signorino, Mario; Groeneweg, Jop

    2013-01-01

    We assessed the impact and social rating of an active and a passive music condition implemented with six patients with Alzheimer's disease. In the active condition, the patients used a simple hand response and a microswitch to self-regulate music stimulation inputs. In the passive condition, music stimulation was automatically presented throughout…

  8. In situ oxygen plasma cleaning of microswitch surfaces—comparison of Ti and graphite electrodes

    NASA Astrophysics Data System (ADS)

    Oh, Changho; Streller, Frank; Ashurst, W. Robert; Carpick, Robert W.; de Boer, Maarten P.

    2016-11-01

    Ohmic micro- and nanoswitches are of interest for a wide variety of applications including radio frequency communications and as low-power complements to transistors. In these switches, it is of paramount importance to maintain surface cleanliness in order to prevent frequent failure by tribopolymer growth. To prepare surfaces, an oxygen plasma clean is expected to be beneficial compared to a high-temperature vacuum bakeout because of the shorter cleaning time (<5 min compared to ~24 h) and active removal of organic contaminants. We demonstrate that sputtering of the electrode material during oxygen plasma cleaning is a critical consideration for effective cleaning of switch surfaces. With Ti electrodes, a TiOx layer forms that increases electrical contact resistance. When plasma-cleaned using graphite electrodes, Pt-coated microswitches exhibit a long lifetime with consistently low resistance (<0.5 Ω variation over 300 million cycles) if the test chamber is refilled with ultra-high-purity nitrogen and if the devices are not exposed to laboratory air. Their current-voltage characteristic is also linear at the millivolt level. This is important for nanoswitches, which will be operated in that range.

  9. A multiple camera tongue switch for a child with severe spastic quadriplegic cerebral palsy.

    PubMed

    Leung, Brian; Chau, Tom

    2010-01-01

    The present study proposed a video-based access technology that facilitated a non-contact tongue protrusion access modality for a 7-year-old boy with severe spastic quadriplegic cerebral palsy (GMFCS level 5). The proposed system featured a centre camera and two peripheral cameras to extend coverage of the frontal face view of this user for longer durations. The child participated in a descriptive case study. The participant underwent 3 months of tongue protrusion training while the multiple camera tongue switch prototype was being prepared. Later, the participant was brought back for five experiment sessions where he worked on a single-switch picture matching activity, using the multiple camera tongue switch prototype in a controlled environment. The multiple camera tongue switch achieved an average sensitivity of 82% and specificity of 80%. In three of the experiment sessions, the peripheral cameras were associated with most of the true positive switch activations. These activations would have been missed by a centre-camera-only setup. The study demonstrated proof-of-concept of a non-contact tongue access modality implemented by a video-based system involving three cameras and colour video processing.
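The reported 82% sensitivity and 80% specificity follow directly from the switch's confusion counts. A minimal sketch with hypothetical counts chosen only to reproduce those rates (not the study's raw data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # Sensitivity: fraction of true tongue protrusions detected.
    # Specificity: fraction of non-protrusion intervals correctly ignored.
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical session counts: 41/50 protrusions detected,
# 80/100 non-protrusion intervals correctly rejected.
sens, spec = sensitivity_specificity(tp=41, fn=9, tn=80, fp=20)
```

For an access switch, the two rates trade off against each other through the detection threshold: raising it reduces false activations (higher specificity) at the cost of missed protrusions (lower sensitivity).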

  10. Microprocessor-controlled wide-range streak camera

    NASA Astrophysics Data System (ADS)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  11. Detecting method of subjects' 3D positions and experimental advanced camera control system

    NASA Astrophysics Data System (ADS)

    Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi

    1997-04-01

    Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality or tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements at studios and during sports broadcasts. This time, we have developed a method for detecting subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) Two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates. (2) The system is capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results will also be reported.

  12. UCXp camera imaging principle and key technologies of data post-processing

    NASA Astrophysics Data System (ADS)

    Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao

    2014-03-01

    The large format digital aerial camera product UCXp was introduced into the Chinese market in 2008; its image consists of 17310 columns and 11310 rows with a pixel size of 6 µm. The UCXp camera has many advantages compared with other cameras of its generation, with multiple lenses exposed almost at the same time and no oblique lens. The camera has a complex imaging process, whose principle is detailed in this paper. In addition, the UCXp image post-processing method, including data pre-processing and orthophoto production, is emphasized. Based on data from the new Beichuan County, this paper describes the data processing and its effects.

  13. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-08-31

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest.
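    The lens trade-off the study investigates can be sketched with the standard thin-lens depth-of-field formulas. This is the textbook model, not the paper's own computation, and the example focal length, f-number, and circle of confusion below are illustrative values.

    ```python
    def depth_of_field(f_mm, n, c_mm, s_mm):
        """Near/far limits of acceptable sharpness for focal length f_mm,
        f-number n, circle of confusion c_mm, subject distance s_mm."""
        h = f_mm * f_mm / (n * c_mm) + f_mm               # hyperfocal distance
        near = s_mm * (h - f_mm) / (h + s_mm - 2 * f_mm)
        far = s_mm * (h - f_mm) / (h - s_mm) if s_mm < h else float("inf")
        return near, far
    ```

    For a camera focused on a user at 5 m, stopping the lens down (larger n) widens the DOF, which tolerates more head movement at the cost of light.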

  14. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement

    PubMed Central

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest. PMID:27589768

  15. Technology and Technique Standards for Camera-Acquired Digital Dermatologic Images: A Systematic Review.

    PubMed

    Quigley, Elizabeth A; Tokay, Barbara A; Jewell, Sarah T; Marchetti, Michael A; Halpern, Allan C

    2015-08-01

    Photographs are invaluable dermatologic diagnostic, management, research, teaching, and documentation tools. Digital Imaging and Communications in Medicine (DICOM) standards exist for many types of digital medical images, but there are no DICOM standards for camera-acquired dermatologic images to date. To identify and describe existing or proposed technology and technique standards for camera-acquired dermatologic images in the scientific literature. Systematic searches of the PubMed, EMBASE, and Cochrane databases were performed in January 2013 using photography and digital imaging, standardization, and medical specialty and medical illustration search terms and augmented by a gray literature search of 14 websites using Google. Two reviewers independently screened titles of 7371 unique publications, followed by 3 sequential full-text reviews, leading to the selection of 49 publications with the most recent (1985-2013) or detailed description of technology or technique standards related to the acquisition or use of images of skin disease (or related conditions). No universally accepted existing technology or technique standards for camera-based digital images in dermatology were identified. Recommendations are summarized for technology imaging standards, including spatial resolution, color resolution, reproduction (magnification) ratios, postacquisition image processing, color calibration, compression, output, archiving and storage, and security during storage and transmission. Recommendations are also summarized for technique imaging standards, including environmental conditions (lighting, background, and camera position), patient pose and standard view sets, and patient consent, privacy, and confidentiality. Proposed standards for specific-use cases in total body photography, teledermatology, and dermoscopy are described. 
The literature is replete with descriptions of obtaining photographs of skin disease, but universal imaging standards have not been developed, validated, and adopted to date. Dermatologic imaging is evolving without defined standards for camera-acquired images, leading to variable image quality and limited exchangeability. The development and adoption of universal technology and technique standards may first emerge in scenarios when image use is most associated with a defined clinical benefit.

  16. Bandit: Technologies for Proximity Operations of Teams of Sub-10Kg Spacecraft

    DTIC Science & Technology

    2007-10-16

    and adding a dedicated overhead camera system. As will be explained below, the forced-air system did not work and the existing system has proven too...erratic to justify the expense of the camera system. 6DOF Software Simulator. The existing Java-based graphical 6DOF simulator was to be improved for...proposed camera system for a nonfunctional table. The C-9 final report is enclosed. [Figure 1. Forced-air table schematic; Figure 2]

  17. Fiber optic TV direct

    NASA Technical Reports Server (NTRS)

    Kassak, John E.

    1991-01-01

    The objective of the operational television (OTV) technology was to develop a multiple camera system (up to 256 cameras) for NASA Kennedy installations in which camera video, synchronization, control, and status data are transmitted bidirectionally via a single fiber cable at distances in excess of five miles. It is shown that the benefits (such as improved video performance, immunity from electromagnetic interference and radio frequency interference, elimination of repeater stations, and more system configuration flexibility) can be realized through application of the proven fiber optic transmission concept. The control system will marry the lens, pan and tilt, and camera control functions into a modular, Local Area Network (LAN)-based control network. Such a system does not exist commercially at present, since the Television Broadcast Industry's current practice is to divorce the positional controls from the camera control system. The application software developed for this system will have direct applicability to similar systems in industry using LAN-based control systems.

  18. WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research.

    PubMed

    Nazir, Sajid; Newey, Scott; Irvine, R Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; Wal, René van der

    2017-01-01

    The widespread availability of relatively cheap, reliable and easy to use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named 'WiseEye', designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which the Passive Infrared triggering is confirmed through other modalities (e.g. radar, pixel change) to reduce the occurrence of false positive images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management.
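    The confirmatory-sensing idea, accepting a PIR trigger only when a second modality agrees, can be sketched with simple frame differencing. The thresholds below are illustrative, not WiseEye's actual values.

    ```python
    import numpy as np

    def pixel_change_confirms(frame_prev, frame_curr, diff_thresh=25, frac_thresh=0.01):
        """Confirm a trigger by checking that enough pixels changed noticeably
        between two consecutive grayscale frames."""
        diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
        changed_fraction = np.mean(diff > diff_thresh)
        return bool(changed_fraction >= frac_thresh)

    def confirmed_capture(pir_triggered, frame_prev, frame_curr):
        """Save an image only when the PIR trigger is confirmed by pixel change."""
        return pir_triggered and pixel_change_confirms(frame_prev, frame_curr)
    ```

    A PIR pulse with no accompanying pixel change (e.g. warm air movement) is thus rejected as a false positive.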

  19. WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research

    PubMed Central

    Nazir, Sajid; Newey, Scott; Irvine, R. Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; van der Wal, René

    2017-01-01

    The widespread availability of relatively cheap, reliable and easy to use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named 'WiseEye', designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which the Passive Infrared triggering is confirmed through other modalities (e.g. radar, pixel change) to reduce the occurrence of false positive images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management. PMID:28076444

  20. Blinded evaluation of the effects of high definition and magnification on perceived image quality in laryngeal imaging.

    PubMed

    Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M

    2006-02-01

    Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor significantly better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value in perceived image quality over currently available video systems when a small monitor is used.
Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.

  1. CdTe Based Hard X-ray Imager Technology For Space Borne Missions

    NASA Astrophysics Data System (ADS)

    Limousin, Olivier; Delagnes, E.; Laurent, P.; Lugiez, F.; Gevin, O.; Meuris, A.

    2009-01-01

    CEA Saclay has recently developed an innovative technology for CdTe-based pixelated hard X-ray imagers with high spectral performance and high timing resolution for efficient background rejection when the camera is coupled to an active veto shield. This development was done in an R&D program supported by CNES (French National Space Agency) and has been optimized towards the Simbol-X mission requirements. In the latter telescope, the hard X-ray imager is 64 cm² and is equipped with 625 µm pitch pixels (16384 independent channels) operating at -40°C in the range of 4 to 80 keV. The camera we demonstrate in this paper consists of a mosaic of 64 independent cameras, divided into 8 independent sectors. Each elementary detection unit, called Caliste, is the hybridization of a 256-pixel Cadmium Telluride (CdTe) detector with full custom front-end electronics into a unique 1 cm² component, juxtaposable on its four sides. Recently, promising results have been obtained from the first micro-camera prototypes, called Caliste 64, and will be presented to illustrate the capabilities of the device as well as the expected performance of an instrument based on it. The modular design of Caliste makes it possible to consider extended developments toward an IXO-type mission, according to its specific scientific requirements.
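    The detector geometry quoted above is self-consistent: a 64 cm² focal plane tiled at a 625 µm pixel pitch gives exactly the stated 16384 independent channels, which also equals 64 Caliste units of 256 pixels each. A quick arithmetic check:

    ```python
    def pixel_count(area_cm2, pitch_um):
        """Number of square pixels of the given pitch tiling the given area."""
        side_mm = pitch_um / 1000.0                       # 625 um -> 0.625 mm
        return (area_cm2 * 100.0) / (side_mm * side_mm)   # cm^2 -> mm^2
    ```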

  2. Microprocessor-controlled, wide-range streak camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Amy E.; Hollabaugh, Craig

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  3. Camera Systems Rapidly Scan Large Structures

    NASA Technical Reports Server (NTRS)

    2013-01-01

    Needing a method to quickly scan large structures like an aircraft wing, Langley Research Center developed the line scanning thermography (LST) system. LST works in tandem with a moving infrared camera to capture how a material responds to changes in temperature. Princeton Junction, New Jersey-based MISTRAS Group Inc. now licenses the technology and uses it in power stations and industrial plants.

  4. High dynamic range image acquisition based on multiplex cameras

    NASA Astrophysics Data System (ADS)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High dynamic range imaging is an important technology for photoelectric information acquisition, providing higher dynamic range and more image details, and it can better reflect the real environment, light and color information. Currently, methods of high dynamic range image synthesis based on different-exposure image sequences cannot adapt to dynamic scenes: they fail to overcome the effects of moving targets, resulting in ghosting. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system is proposed. Firstly, different-exposure image sequences were captured with the camera array; the deviation between images was obtained using derivative optical flow based on color gradient, and the images were aligned. Then, the high dynamic range image fusion weighting function was established by combining the inverse camera response function and the deviation between images, and was applied to generate a high dynamic range image. The experiments show that the proposed method can effectively obtain high dynamic range images in dynamic scenes and achieves good results.
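    The fusion step, weighting each exposure by a function of the pixel value and dividing out exposure time through the inverse response, can be sketched as follows. The hat-shaped weight and linear (identity) response used here are generic illustrative choices, not the paper's exact weighting function.

    ```python
    import numpy as np

    def fuse_hdr(images, exposures, response_inv=None):
        """Merge differently exposed images into one radiance map.
        images: list of float arrays in [0, 1]; exposures: exposure times.
        response_inv: inverse camera response; a linear sensor is assumed by default."""
        if response_inv is None:
            response_inv = lambda z: z
        num = np.zeros_like(images[0], dtype=np.float64)
        den = np.zeros_like(images[0], dtype=np.float64)
        for img, t in zip(images, exposures):
            z = img.astype(np.float64)
            w = 1.0 - np.abs(2.0 * z - 1.0)     # hat weight: trust mid-tones
            num += w * response_inv(z) / t      # per-exposure radiance estimate
            den += w
        return num / np.maximum(den, 1e-8)
    ```

    Each exposure votes for the scene radiance it implies, with saturated and underexposed pixels down-weighted.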

  5. Geometric rectification of camera-captured document images.

    PubMed

    Liang, Jian; DeMenthon, Daniel; Doermann, David

    2008-04-01

    Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images.
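    For the planar-document case, rectification reduces to warping the image with a homography once four page corners are located. A minimal DLT homography estimator, standard computer-vision machinery rather than the authors' full texture-flow pipeline:

    ```python
    import numpy as np

    def homography(src, dst):
        """DLT estimate of the 3x3 homography mapping src[i] -> dst[i] (>= 4 points)."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
        return Vt[-1].reshape(3, 3)   # null vector, reshaped to a 3x3 matrix

    def warp_point(H, p):
        """Apply homography H to a 2D point (homogeneous divide included)."""
        q = H @ np.array([p[0], p[1], 1.0])
        return q[:2] / q[2]
    ```

    Mapping the four detected corners of the distorted page to a flat rectangle and warping every pixel through H yields the frontal-flat view.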

  6. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. 
A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
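    The two near-field figures quoted above are mutually consistent: the Fraunhofer distance 2D²/λ grows as the square of the aperture diameter, so the wavelength cancels when scaling between telescopes. Taking AEOS as roughly a 3.6-m aperture (our assumption; the abstract does not state it), scaling the 1-m figure reproduces the quoted 46,000 km to within a few percent:

    ```python
    def scaled_near_field(d_ref_km, aperture_ref_m, aperture_m):
        """Scale a known near-field distance to another aperture.
        The Fraunhofer distance 2*D**2/lambda is proportional to D**2,
        so the wavelength drops out of the ratio."""
        return d_ref_km * (aperture_m / aperture_ref_m) ** 2
    ```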

  7. Accurate estimation of camera shot noise in the real-time

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and video cameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise comprises the random component, while spatial noise comprises the pattern component. Temporal noise can be further divided into signal-dependent shot noise and signal-independent dark temporal noise. The most widely used approaches for measuring camera noise characteristics are standards (for example, EMVA Standard 1288), which allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measuring the temporal noise of photo- and video cameras based on the automatic segmentation of nonuniform targets (ASNT); only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated the shot and dark temporal noises of cameras in real time using the modified ASNT method. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12-bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12-bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10-bit ADC) and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8-bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time needed to register and process the frames used for temporal noise estimation was measured: using a standard computer, frames were registered and processed in a fraction of a second to several seconds. The accuracy of the obtained temporal noise values was also estimated.
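    The two-frame principle behind such measurements can be sketched in its simplest form: differencing two frames of the same static scene cancels both the scene content and the fixed-pattern noise, leaving √2 times the temporal noise. This is a generic illustration of the idea, not the full ASNT segmentation method.

    ```python
    import numpy as np

    def temporal_noise_from_pair(f1, f2):
        """Estimate the temporal noise standard deviation from two frames of
        the same static scene: std(f1 - f2) = sqrt(2) * sigma_temporal."""
        d = f1.astype(np.float64) - f2.astype(np.float64)
        return d.std() / np.sqrt(2.0)
    ```

    For shot noise, repeating this at several signal levels should trace out the Poisson relation (variance proportional to signal) away from saturation.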

  8. Development of an all-metal electrothermal actuator and its applications

    NASA Astrophysics Data System (ADS)

    Luo, JiKui; He, Johnny H.; Flewitt, Andrew J.; Moore, David F.; Spearing, S. Mark; Fleck, Norman A.; Milne, Williams I.

    2004-01-01

    The in-plane motion of a microelectrothermal actuator ("heatuator") has been analysed for Si-based and metallic devices. It was found that the lateral deflection of a heatuator made of Ni metal is about 60% larger than that of a Si-based actuator at the same power consumption. Metals are much better suited for thermal actuators, as they provide relatively large deflection and force at a low operating temperature and power consumption. Electroplated Ni films were used to fabricate heatuators. The electrical and mechanical properties of electroplated Ni thin films have been investigated as a function of temperature and plating current density, and the process conditions have been optimised to obtain stress-free films suitable for MEMS applications. Lateral thermal actuators have been successfully fabricated and electrically tested. Microswitches and microtweezers utilising the heatuator have also been fabricated and tested.

  9. Two adults with multiple disabilities use a computer-aided telephone system to make phone calls independently.

    PubMed

    Lancioni, Giulio E; O'Reilly, Mark F; Singh, Nirbhay N; Sigafoos, Jeff; Oliva, Doretta; Alberti, Gloria; Lang, Russell

    2011-01-01

    This study extended the assessment of a newly developed computer-aided telephone system with two adult participants who presented with blindness or severe visual impairment and motor or motor and intellectual disabilities. For each participant, the study was carried out according to an ABAB design, in which A represented baseline phases and B represented intervention phases, during which the special telephone system was available. The system involved, among other components, a netbook computer provided with specific software, a global system for mobile communication (GSM) modem, and a microswitch. Both participants learned to use the system very rapidly and managed to make phone calls independently to a variety of partners such as family members, friends and staff personnel. The results were discussed in terms of the technology under investigation (its advantages, drawbacks, and need for improvement) and the social-communication impact it can have for persons with multiple disabilities. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Handheld hyperspectral imager for standoff detection of chemical and biological aerosols

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Jensen, James O.; McAnally, Gerard

    2004-02-01

    Pacific Advanced Technology has developed a small handheld imaging spectrometer, Sherlock, for gas leak and aerosol detection and imaging. The system is based on a patented technique (IMSS, Image Multi-spectral Sensing) that uses diffractive optics and image processing algorithms to detect spectral information about objects in the scene of the camera. This camera has been tested at Dugway Proving Ground and the Dstl Porton Down facility looking at chemical and biological agent simulants, and has been used to investigate surfaces contaminated with chemical agent simulants. In addition to chemical and biological detection, the camera has been used for environmental monitoring of greenhouse gases and is currently undergoing extensive laboratory and field testing by the Gas Technology Institute, British Petroleum and Shell Oil for gas leak detection and repair applications. The camera contains an embedded PowerPC and a real-time image processor for performing image processing algorithms to assist in the detection and identification of gas phase species in real time. In this paper we will present an overview of the technology and show how it has performed for different applications, such as gas leak detection, surface contamination, remote sensing and surveillance. In addition, a sampling of the results from TRE field testing at Dugway in July of 2002 and Dstl at Porton Down in September of 2002 will be given.

  11. Handheld hyperspectral imager for standoff detection of chemical and biological aerosols

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Jensen, James O.; McAnally, Gerard

    2004-08-01

    Pacific Advanced Technology has developed a small handheld imaging spectrometer, Sherlock, for gas leak and aerosol detection and imaging. The system is based on a patented technique (IMSS, Image Multi-spectral Sensing) that uses diffractive optics and image processing algorithms to detect spectral information about objects in the scene of the camera. This camera technology has been tested at Dugway Proving Ground and Dstl Porton Down facilities looking at chemical and biological agent simulants. In addition to chemical and biological detection, the camera has been used for environmental monitoring of greenhouse gases and is currently undergoing extensive laboratory and field testing by the Gas Technology Institute, British Petroleum and Shell Oil for gas leak detection and repair applications. In this paper we will present some of the results from the data collection at the TRE test at Dugway Proving Ground during the summer of 2002 and laboratory testing at the Dstl facility at Porton Down in the UK in the fall of 2002.

  12. United States Homeland Security and National Biometric Identification

    DTIC Science & Technology

    2002-04-09

    security number. Biometrics is the use of unique individual traits such as fingerprints, iris patterns, voice recognition, and facial recognition to...technology to control access onto their military bases using a Defense Manpower Management Command developed software application. Facial recognition systems...installed facial recognition systems in conjunction with a series of 200 cameras to fight street crime and identify terrorists. The cameras, which are

  13. Uncooled infrared sensors: rapid growth and future perspective

    NASA Astrophysics Data System (ADS)

    Balcerak, Raymond S.

    2000-07-01

    Uncooled infrared cameras are now available for both the military and commercial markets. The current camera technology incorporates the fruits of many years of development, focusing on the details of pixel design, novel material processing, and low-noise read-out electronics. The rapid insertion of cameras into systems is testimony to the successful completion of this 'first phase' of development. In the military market, the first uncooled infrared cameras will be used for weapon sights, drivers' viewers and helmet-mounted cameras. Major commercial applications include night driving, security, police and fire fighting, and thermography, primarily for preventive maintenance and process control. The technology for the next generation of cameras is even more demanding, but within reach. The paper outlines the technology program planned for the next generation of cameras, and the approaches to further enhance performance, even to the radiation limit of thermal detectors.

  14. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    NASA Astrophysics Data System (ADS)

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

    Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. In many applications, however, the cameras have no or only narrow overlapping FOVs, which poses a major challenge for global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras’ coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
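    The Levenberg–Marquardt step described above can be sketched as a rigid-transform refinement. The following is a minimal illustration, not the authors' implementation: it aligns synthetic target feature points seen in two camera frames using SciPy's LM solver, with a 3D point residual standing in for the paper's image reprojection error.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def residuals(params, pts_cam2, pts_cam1):
    """Mismatch between camera-1 points and transformed camera-2 points."""
    R, t = rodrigues(params[:3]), params[3:]
    return ((pts_cam2 @ R.T + t) - pts_cam1).ravel()

# Synthetic target feature points expressed in camera 2's frame.
rng = np.random.default_rng(0)
pts2 = rng.uniform(-1.0, 1.0, (20, 3))
R_true = rodrigues(np.array([0.10, -0.20, 0.05]))
t_true = np.array([0.5, -0.3, 2.0])
pts1 = pts2 @ R_true.T + t_true        # the same points in camera 1's frame

# Levenberg-Marquardt refinement of the camera-2 -> camera-1 transform.
sol = least_squares(residuals, np.zeros(6), args=(pts2, pts1), method="lm")
R_est, t_est = rodrigues(sol.x[:3]), sol.x[3:]
```

    With noise-free synthetic data the recovered transform matches the ground truth to numerical precision; in the paper the optimized quantity is the reprojection error of target feature points in both cameras.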

  15. Development and characterization of a round hand-held silicon photomultiplier based gamma camera for intraoperative imaging

    PubMed Central

    Popovic, Kosta; McKisson, Jack E.; Kross, Brian; Lee, Seungjoon; McKisson, John; Weisenberger, Andrew G.; Proffitt, James; Stolin, Alexander; Majewski, Stan; Williams, Mark B.

    2017-01-01

    This paper describes the development of a hand-held gamma camera for intraoperative surgical guidance that is based on silicon photomultiplier (SiPM) technology. The camera incorporates a cerium doped lanthanum bromide (LaBr3:Ce) plate scintillator, an array of 80 SiPM photodetectors and a two-layer parallel-hole collimator. The field of view is circular with a 60 mm diameter. The disk-shaped camera housing is 75 mm in diameter, approximately 40.5 mm thick and has a mass of only 1.4 kg, permitting either hand-held or arm-mounted use. All camera components are integrated on a mobile cart that allows easy transport. The camera was developed for use in surgical procedures including determination of the location and extent of primary carcinomas, detection of secondary lesions and sentinel lymph node biopsy (SLNB). Here we describe the camera design and its principal operating characteristics, including spatial resolution, energy resolution, sensitivity uniformity, and geometric linearity. The gamma camera has an intrinsic spatial resolution of 4.2 mm FWHM, an energy resolution of 21.1 % FWHM at 140 keV, and a sensitivity of 481 and 73 cps/MBq when using the single- and double-layer collimators, respectively. PMID:28286345
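    The sensitivity figures quoted above are counts per second per unit of source activity. A trivial sketch of that calculation, with illustrative numbers chosen only to reproduce the single-layer-collimator figure (these are not the measurement values from the paper):

```python
# Planar sensitivity from a point-source acquisition (illustrative numbers).
counts = 4.81e5      # total counts recorded in the photopeak window
t_acq = 100.0        # acquisition time, s
activity = 10.0      # source activity, MBq

sensitivity = counts / t_acq / activity
print(sensitivity)   # -> 481.0 cps/MBq
```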

  16. Solid state replacement of rotating mirror cameras

    NASA Astrophysics Data System (ADS)

    Frank, Alan M.; Bartolick, Joseph M.

    2007-01-01

    Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotary mirror camera for the combination of frame count, speed, resolution and dynamic range. The rotary mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all-solid-state architecture, dubbed the 'In-situ Storage Image Sensor' or 'ISIS' by Prof. Goji Etoh, has made its first appearance in the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluating the presently available technologies and on exploring the capabilities of the ISIS architecture. Although there is presently no single-chip camera that can simultaneously match the rotary mirror cameras, it is clear that the ISIS architecture has the potential to approach their performance.

  17. COBRA ATD multispectral camera response model

    NASA Astrophysics Data System (ADS)

    Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.

    2000-08-01

    A new multispectral camera response model has been developed in support of the US Marine Corps (USMC) Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) Program. This analytical model accurately estimates the response of the five Xybion intensified IMC 201 multispectral cameras used for COBRA ATD airborne minefield detection. The camera model design is based on a series of camera response curves which were generated through optical laboratory tests performed by the Naval Surface Warfare Center, Dahlgren Division, Coastal Systems Station (CSS). Data-fitting techniques were applied to these measured response curves to obtain nonlinear expressions that estimate digitized camera output as a function of irradiance, intensifier gain, and exposure. This COBRA camera response model proved to be very accurate, stable over a wide range of parameters, analytically invertible, and relatively simple. This practical camera model was subsequently incorporated into the COBRA toolbox of sensor performance evaluation and computational tools for research analysis, in order to enhance COBRA modeling and simulation capabilities. Details of the camera model design and comparisons of modeled response to measured experimental data are presented.
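    The data-fitting step can be illustrated with a generic nonlinear least-squares fit. The functional form below, a power law in the irradiance-gain-exposure product, is a hypothetical stand-in rather than the actual COBRA model, whose expression is not given in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def response(X, a, gamma, dn0):
    """Hypothetical camera response: digital number = a*(E*G*t)^gamma + dn0."""
    E, G, t = X
    return a * (E * G * t) ** gamma + dn0

# Synthetic "measured" response data over irradiance, gain and exposure.
rng = np.random.default_rng(1)
E = rng.uniform(0.1, 10.0, 200)        # irradiance
G = rng.uniform(1.0, 100.0, 200)       # intensifier gain
t = rng.uniform(1e-3, 1e-2, 200)       # exposure time
dn = response((E, G, t), 40.0, 0.7, 12.0) + rng.normal(0.0, 0.5, 200)

popt, _ = curve_fit(response, (E, G, t), dn, p0=[1.0, 1.0, 0.0])
a_fit, gamma_fit, dn0_fit = popt
```

    A closed form like this is analytically invertible for irradiance, E = ((DN - dn0)/a)**(1/gamma) / (G*t), which is the kind of property the abstract highlights as useful for modeling and simulation.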

  18. Automated face detection for occurrence and occupancy estimation in chimpanzees.

    PubMed

    Crunchant, Anne-Sophie; Egerer, Monika; Loos, Alexander; Burghardt, Tilo; Zuberbühler, Klaus; Corogenes, Katherine; Leinert, Vera; Kulik, Lars; Kühl, Hjalmar S

    2017-03-01

    Surveying endangered species is necessary to evaluate conservation effectiveness. Camera trapping and biometric computer vision are recent technological advances that have changed the methods applicable to field surveys, and these methods have gained significant momentum over the last decade. Yet most researchers inspect footage manually, and few studies have used automated semantic processing of video trap data from the field. The particular aim of this study is to evaluate methods that incorporate automated face detection technology as an aid to estimating site use by two chimpanzee communities based on camera trapping. As a comparative baseline we employ traditional manual inspection of footage. Our analysis focuses specifically on the basic parameter of occurrence, for which we assess the performance and practical value of chimpanzee face detection software. We found that the semi-automated data processing required only 2-4% of the time needed for purely manual analysis. This is a non-negligible gain in efficiency that is critical when assessing the feasibility of camera trap occupancy surveys. Our evaluations suggest that our methodology estimates the proportion of sites used relatively reliably. Chimpanzees are mostly detected when they are present and when videos are filmed in high resolution: the highest recall rate was 77%, with a false alarm rate of 2.8%, for videos containing only chimpanzee frontal face views. Certainly, our study is only a first step toward transferring face detection software from the lab into field application. Our results are promising and indicate that the current limitation of detecting chimpanzees in camera trap footage, due to the lack of suitable face views, can easily be overcome at the level of field data collection, that is, by the combined placement of multiple high-resolution cameras facing in opposite directions. This will enable routine chimpanzee occupancy surveys based on camera trapping and semi-automated processing of footage. Using semi-automated ape face detection technology to process camera trap footage requires only 2-4% of the time needed for manual analysis and allows site use by chimpanzees to be estimated relatively reliably. © 2017 Wiley Periodicals, Inc.
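    Given a recall and a false-alarm rate like those reported above, a naive detection-based estimate of site use can be corrected with a standard misclassification identity (the Rogan-Gladen form). This is a minimal sketch with illustrative numbers, not the study's occupancy model:

```python
# Correct a naive site-use estimate for imperfect automated detection.
# The observed detection fraction is hypothetical; recall and the
# false-alarm rate echo the best case reported above.
recall = 0.77      # P(detection | chimpanzees present)
fa_rate = 0.028    # P(detection | chimpanzees absent)
p_detect = 0.40    # observed fraction of sites with a detection (assumed)

# P(detect) = recall*psi + fa_rate*(1 - psi)  =>  solve for true use psi.
psi = (p_detect - fa_rate) / (recall - fa_rate)
print(round(psi, 3))   # -> 0.501
```

    Full occupancy models additionally account for repeat visits and per-visit detection probability; this identity only shows why the raw detection fraction under- or over-states true site use.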

  19. The infrared imaging radiometer for PICASSO-CENA

    NASA Astrophysics Data System (ADS)

    Corlay, Gilles; Arnolfo, Marie-Christine; Bret-Dibat, Thierry; Lifferman, Anne; Pelon, Jacques

    2017-11-01

    Microbolometers are infrared detectors based on an emerging technology that has been developed mainly in the US and a few other countries in recent years. The main targets of these developments are low-performance, low-cost military and civilian applications such as surveillance cameras. Applications in space are now arising thanks to the design simplification and the associated cost reduction allowed by this new technology. Among the four instruments of the PICASSO-CENA payload, the Imaging Infrared Radiometer (IIR) is based on microbolometer technology. An infrared camera in development for the IASI instrument is the core of the IIR. The aim of the paper is to recall the PICASSO-CENA mission goals, to describe the IIR instrument architecture, to highlight its main features and performances, and to give its development status.

  20. A self-learning camera for the validation of highly variable and pseudorandom patterns

    NASA Astrophysics Data System (ADS)

    Kelley, Michael

    2004-05-01

    Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to handle applications with irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural-network-based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer, enables a vision system to 'think' and 'inspect' like a human, with the speed and reliability of a machine.

  1. Clinical evaluation of pixellated NaI:Tl and continuous LaBr3:Ce, compact scintillation cameras for breast tumors imaging

    NASA Astrophysics Data System (ADS)

    Pani, R.; Pellegrini, R.; Betti, M.; De Vincentis, G.; Cinti, M. N.; Bennati, P.; Vittorini, F.; Casali, V.; Mattioli, M.; Orsolini Cencelli, V.; Navarria, F.; Bollini, D.; Moschini, G.; Iurlaro, G.; Montani, L.; de Notaristefani, F.

    2007-02-01

    The principal limiting factor in the clinical acceptance of scintimammography is certainly its low sensitivity for cancers sized <1 cm, mainly due to the lack of equipment specifically designed for breast imaging. The National Institute of Nuclear Physics (INFN) has been developing a new scintillation camera based on a Lanthanum tri-Bromide Cerium-doped crystal (LaBr3:Ce), which has demonstrated superior imaging performance with respect to the dedicated scintillation γ-camera that was previously developed. The proposed detector consists of a continuous LaBr3:Ce scintillator crystal coupled to a Hamamatsu H8500 Flat Panel PMT. A one-centimeter-thick crystal has been chosen to increase crystal detection efficiency. In this paper, we propose a comparison and evaluation between the lanthanum γ-camera and a Multi-PSPMT camera based on discrete NaI(Tl) pixels, previously developed under the "IMI" Italian project for technological transfer of INFN. A phantom study has been developed to test both cameras before introducing them in clinical trials. High-resolution scans produced by the LaBr3:Ce camera showed higher tumor contrast, with more detailed imaging of the uptake area, than the pixellated NaI(Tl) dedicated camera. Furthermore, with the lanthanum camera, the signal-to-noise ratio (SNR) value was increased for a lesion as small as 5 mm, with a consequent strong improvement in detectability.

  2. A time-resolved image sensor for tubeless streak cameras

    NASA Astrophysics Data System (ADS)

    Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji

    2014-03-01

    This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, the device requires high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows a short gating clock to be created and delivered to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel is designed and implemented using 0.11 um CMOS image sensor technology. The image array has 30 (vertical) x 128 (memory length) pixels with a pixel pitch of 22.4 um.

  3. Trends in high-speed camera development in the Union of Soviet Socialist Republics /USSR/ and People's Republic of China /PRC/

    NASA Astrophysics Data System (ADS)

    Hyzer, W. G.

    1981-10-01

    Significant advances in high-speed camera technology are being made in the Union of Soviet Socialist Republics (USSR) and People's Republic of China (PRC), which were revealed to the author during recent visits to both of these countries. Past and present developments in high-speed cameras are described in this paper based on personal observations by the author and on private communications with other technical observers. Detailed specifications on individual instruments are presented in those specific cases where such information has been revealed and could be verified.

  4. Use of Programmable Logic Controllers to Automate Control and Monitoring of U.S. Army Wastewater Treatment Systems

    DTIC Science & Technology

    1991-07-01

    transmitter, usually reliable over extended periods (i.e., several months). Vendors: Bindicator Inc. Endress and Hauser Instruments Port Huron, MI 48061...Inc. MicroSwitch Division Michigan City, IN 46360 Honeywell (219/872- 𔃻. 11) Dayton, Ohio 45424 (513/237-4075) Endress & Hauser Instruments Greenwood...Honeywell (312/355-3055) Dayton, Ohio 45424 (513/237-4075) Endress & Hauser Instruments Greenwood, IN 46143 Omega Engineering Inc. (317/535-7138

  5. Development and use of an L3CCD high-cadence imaging system for Optical Astronomy

    NASA Astrophysics Data System (ADS)

    Sheehan, Brendan J.; Butler, Raymond F.

    2008-02-01

    A high-cadence imaging system, based on a Low Light Level CCD (L3CCD) camera, has been developed for photometric and polarimetric applications. The camera system is an iXon DV-887 from Andor Technology, which uses a CCD97 L3CCD detector from E2V Technologies. This is a back-illuminated device, giving it an extended blue response, and it has an active area of 512×512 pixels. The camera system allows frame rates ranging from 30 fps (full frame) to 425 fps (windowed and binned frame). We outline the system design, concentrating on the calibration and control of the L3CCD camera. The L3CCD detector can be either triggered directly by a GPS timeserver/frequency generator or internally triggered. A central PC remotely controls the camera computer system and timeserver. The data are saved as standard 'FITS' files. The large data loads associated with high frame rates lead to issues with gathering and storing the data effectively. To overcome such problems, a specific data management approach is used, and a Python/PyRAF data reduction pipeline was written for the Linux environment. This uses calibration data collected either on-site or from lab-based measurements, and enables a fast and reliable method for reducing images. To date, the system has been used twice on the 1.5 m Cassini Telescope in Loiano (Italy); we present the reduction methods and the observations made.

  6. Pixel-based characterisation of CMOS high-speed camera systems

    NASA Astrophysics Data System (ADS)

    Weber, V.; Brübach, J.; Gordon, R. L.; Dreizler, A.

    2011-05-01

    Quantifying high-repetition rate laser diagnostic techniques for measuring scalars in turbulent combustion relies on a complete description of the relationship between detected photons and the signal produced by the detector. CMOS-chip based cameras are becoming an accepted tool for capturing high frame rate cinematographic sequences for laser-based techniques such as Particle Image Velocimetry (PIV) and Planar Laser Induced Fluorescence (PLIF) and can be used with thermographic phosphors to determine surface temperatures. At low repetition rates, imaging techniques have benefitted from significant developments in the quality of CCD-based camera systems, particularly with the uniformity of pixel response and minimal non-linearities in the photon-to-signal conversion. The state of the art in CMOS technology displays a significant number of technical aspects that must be accounted for before these detectors can be used for quantitative diagnostics. This paper addresses these issues.

  7. Error modeling and analysis of star cameras for a class of 1U spacecraft

    NASA Astrophysics Data System (ADS)

    Fowler, David M.

    As spacecraft today become increasingly smaller, the demand for smaller components and sensors rises as well. The smartphone, a cutting-edge consumer technology, has an impressive collection of both sensors and processing capabilities and may have the potential to fill this demand in the spacecraft market. If the technologies of a smartphone can be used in space, the cost of building miniature satellites would drop significantly and give a boost to the aerospace and scientific communities. Concentrating on the problem of spacecraft orientation, this study sets out to determine the capabilities of a smartphone camera when acting as a star camera. Orientations determined from star images taken with a smartphone camera are compared to those of higher-quality cameras in order to determine the associated accuracies. The results of the study reveal the abilities of low-cost off-the-shelf imagers in space and give a starting point for future research in the field. The study began with a complete geometric calibration of each analyzed imager so that all comparisons start from the same base. After the cameras were calibrated, image processing techniques were introduced to correct for atmospheric, lens, and image sensor effects. Orientations for each test image are calculated by identifying the stars exposed on each image. Analyses of these orientations allow the overall errors of each camera to be defined and provide insight into the abilities of low-cost imagers.

  8. Miniaturized Autonomous Extravehicular Robotic Camera (Mini AERCam)

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven E.

    2001-01-01

    The NASA Johnson Space Center (JSC) Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a low-volume, low-mass free-flying camera system. AERCam project team personnel recently initiated development of a miniaturized version of AERCam known as Mini AERCam. The Mini AERCam target design is a spherical "nanosatellite" free-flyer 7.5 inches in diameter and weighing 10 pounds. Mini AERCam is building on the success of the AERCam Sprint STS-87 flight experiment by adding new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving enhanced capability in a smaller package depends on applying miniaturization technology across virtually all subsystems. Technology innovations being incorporated include micro electromechanical system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, a rechargeable xenon gas propulsion system, a rechargeable lithium ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximate flight-like configuration for demonstration on an airbearing table. A pilot-in-the-loop and hardware-in-the-loop simulation of on-orbit navigation and dynamics will complement the airbearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides beneficial on-orbit views unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by EVA crewmembers.

  9. Towards next generation 3D cameras

    NASA Astrophysics Data System (ADS)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 microns resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.

  10. Intellectual Dummies

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Goddard Space Flight Center and Triangle Research & Development Corporation collaborated to create "Smart Eyes," a charge-coupled device camera that, for the first time, could read and measure bar codes without the use of lasers. The camera operated in conjunction with software and algorithms created by Goddard and Triangle R&D that could track bar code position and direction with speed and precision, as well as with software that could control robotic actions based on vision system input. This accomplishment was intended for robotic assembly of the International Space Station, helping NASA to increase production while using less manpower. After successfully completing the two-phase SBIR project with Goddard, Triangle R&D was awarded a separate contract from the U.S. Department of Transportation (DOT), which was interested in using the newly developed NASA camera technology to heighten automotive safety standards. In 1990, Triangle R&D and the DOT developed a mask made from a synthetic, plastic skin covering to measure facial lacerations resulting from automobile accidents. By pairing NASA's camera technology with Triangle R&D's and the DOT's newly developed mask, a system that could provide repeatable, computerized evaluations of laceration injury was born.

  11. Mosad and Stream Vision For A Telerobotic, Flying Camera System

    NASA Technical Reports Server (NTRS)

    Mandl, William

    2002-01-01

    Two full-custom camera systems using the Multiplexed OverSample Analog to Digital (MOSAD) conversion technology for visible light sensing were built and demonstrated. They include a photo gate sensor and a photo diode sensor. The system includes the camera assembly, a driver interface assembly, a frame grabber board with an integrated decimator, and Windows 2000 compatible software for real-time image display. An array size of 320x240 with a 16 micron pixel pitch was developed for compatibility with 0.3 inch CCTV optics. With 1.2 micron technology, a 73% fill factor was achieved. Noise measurements indicated 9 to 11 bits in operation, with 13.7 bits in the best case. Power measured under 10 milliwatts at 400 samples per second. Non-uniformity variation was below the noise floor. Pictures were taken with the different cameras during the characterization study to demonstrate the operable range. The successful conclusion of this program demonstrates the utility of the MOSAD for NASA missions, providing superior performance over CMOS and lower cost and power consumption than CCDs. The MOSAD approach also provides a path to radiation hardening for space-based applications.

  12. An HDR imaging method with DTDI technology for push-broom cameras

    NASA Astrophysics Data System (ADS)

    Sun, Wu; Han, Chengshan; Xue, Xucheng; Lv, Hengyi; Shi, Junxia; Hu, Changhong; Li, Xiangzhi; Fu, Yao; Jiang, Xiaonan; Huang, Liang; Han, Hongyin

    2018-03-01

    Conventionally, high-dynamic-range (HDR) imaging is based on taking two or more pictures of the same scene with different exposures. However, due to the high-speed relative motion between the camera and the scene, it is hard to apply this technique to push-broom remote sensing cameras. To enable HDR imaging in push-broom remote sensing applications, the present paper proposes an innovative method which can generate HDR images without redundant image sensors or optical components. Specifically, this paper adopts an area-array CMOS (complementary metal oxide semiconductor) sensor with digital-domain time-delay-integration (DTDI) technology for imaging, instead of adopting more than one row of image sensors, and thereby captures more than one picture with different exposures. A new HDR image can then be obtained by fusing the two original images with a simple algorithm. In the experiment conducted, the dynamic range (DR) of the image increased by 26.02 dB. The proposed method is shown to be effective and has potential in other imaging applications where there is relative motion between the camera and the scene.
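    The fusion of two differently exposed images and the dynamic-range bookkeeping can be sketched as follows. This is a generic merge (keep the long exposure where unsaturated, substitute the scaled short exposure where it clips), not the paper's algorithm, and the factor of 20 is chosen only because it corresponds to the reported ~26.02 dB gain:

```python
import math
import numpy as np

def fuse(short, long_, ratio, sat=255):
    """Merge two exposures: keep the long exposure where it is not
    saturated, and substitute the short exposure scaled by the
    exposure-time ratio where it is."""
    out = long_.astype(float)
    clipped = long_ >= sat
    out[clipped] = short[clipped].astype(float) * ratio
    return out

# Tiny example: the second pixel saturates in the long exposure.
short = np.array([10, 30])
long_ = np.array([80, 255])
print(fuse(short, long_, ratio=8.0))   # -> [ 80. 240.]

# A 20x extension of the usable signal span is a ~26.02 dB DR gain.
gain_db = 20.0 * math.log10(20.0)
print(round(gain_db, 2))               # -> 26.02
```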

  13. A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes

    PubMed Central

    Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-yung

    2016-01-01

    Wilderness search and rescue entails performing a wide range of work in complex environments and large regions. Given the concerns inherent in large regions due to limited rescue distribution, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location, and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency. PMID:27792156

  14. A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes.

    PubMed

    Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-Yung

    2016-10-25

    Wilderness search and rescue entails performing a wide range of work in complex environments and large regions. Given the concerns inherent in large regions due to limited rescue distribution, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location, and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency.

  15. Monocular-Based 6-Degree of Freedom Pose Estimation Technology for Robotic Intelligent Grasping Systems

    PubMed Central

    Liu, Tao; Guo, Yin; Yang, Shourui; Yin, Shibin; Zhu, Jigui

    2017-01-01

    Industrial robots are expected to undertake ever more advanced tasks in the modern manufacturing industry, such as intelligent grasping, in which robots should be capable of recognizing the position and orientation of a part before grasping it. In this paper, a monocular-based 6-degree of freedom (DOF) pose estimation technology to enable robots to grasp large-size parts at informal poses is proposed. A camera was mounted on the robot end-flange and oriented to measure several featured points on the part before the robot moved to grasp it. In order to estimate the part pose, a nonlinear optimization model based on the camera object space collinearity error in different poses is established, and the initial iteration value is estimated with the differential transformation. Measuring poses of the camera are optimized based on uncertainty analysis. Also, the principle of the robotic intelligent grasping system was developed, with which the robot could adjust its pose to grasp the part. In experimental tests, the part poses estimated with the method described in this paper were compared with those produced by a laser tracker, and the results show the RMS angle and position errors are about 0.0228° and 0.4603 mm. Robotic intelligent grasping tests were also successfully performed in the experiments. PMID:28216555
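    The pose-estimation step can be sketched as a reprojection-error minimization over the six pose parameters. This is a simplified stand-in, with image-space residuals, an ideal pinhole camera, and an initial guess taken near the truth, rather than the authors' object-space collinearity formulation or their differential-transformation initialization:

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(pts, pose, f=1000.0, cx=320.0, cy=240.0):
    """Ideal pinhole projection of 3D part points under a 6-DOF pose."""
    p = pts @ rodrigues(pose[:3]).T + pose[3:]
    return np.column_stack((f * p[:, 0] / p[:, 2] + cx,
                            f * p[:, 1] / p[:, 2] + cy))

def residual(pose, pts3d, observed):
    return (project(pts3d, pose) - observed).ravel()

# Featured points on the part (part frame), deliberately non-coplanar.
pts3d = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],
                  [0, 0, 0.5], [1, 0, 0.5], [0, 1, 0.5], [1, 1, 0.5]], float)
pose_true = np.array([0.2, -0.1, 0.3, 0.1, -0.05, 5.0])
observed = project(pts3d, pose_true)          # noise-free image points

# Refine from a perturbed initial guess (the paper derives its own).
x0 = pose_true + np.array([0.05, -0.05, 0.05, 0.1, 0.1, 0.2])
sol = least_squares(residual, x0, args=(pts3d, observed), method="lm")
```

    With noise-free synthetic observations the refined pose converges to the true pose; real measurements would carry the image noise that produces errors of the order reported in the abstract.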

  16. Monocular-Based 6-Degree of Freedom Pose Estimation Technology for Robotic Intelligent Grasping Systems.

    PubMed

    Liu, Tao; Guo, Yin; Yang, Shourui; Yin, Shibin; Zhu, Jigui

    2017-02-14

    Industrial robots are expected to undertake ever more advanced tasks in the modern manufacturing industry, such as intelligent grasping, in which a robot should be capable of recognizing the position and orientation of a part before grasping it. In this paper, a monocular-based 6-degree-of-freedom (DOF) pose estimation technology that enables robots to grasp large-size parts at informal poses is proposed. A camera was mounted on the robot end-flange and oriented to measure several featured points on the part before the robot moved to grasp it. To estimate the part pose, a nonlinear optimization model based on the camera object-space collinearity error in different poses is established, and the initial iteration value is estimated with the differential transformation. The measuring poses of the camera are optimized based on uncertainty analysis. The principle of the robotic intelligent grasping system, with which the robot can adjust its pose to grasp the part, was also developed. In experimental tests, the part poses estimated with the described method were compared with those produced by a laser tracker; the results show that the RMS angle and position errors are about 0.0228° and 0.4603 mm, respectively. Robotic intelligent grasping tests were also performed successfully.
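The object-space collinearity error the authors minimize is, in essence, a reprojection error. A toy sketch assuming a pinhole model and, for brevity, rotation about the optical axis only (the paper's model is full 6-DOF):

```python
import math

def project(p, theta_z, t, f):
    # Pinhole projection with a rotation about the optical (z) axis only;
    # a simplification of the full 6-DOF model in the abstract.
    x, y, z = p
    c, s = math.cos(theta_z), math.sin(theta_z)
    xc, yc, zc = c * x - s * y + t[0], s * x + c * y + t[1], z + t[2]
    return f * xc / zc, f * yc / zc

def rms_reprojection_error(pts3d, pts2d, theta_z, t, f):
    # Collinearity holds exactly when this error is zero; a nonlinear
    # optimizer would search (theta_z, t) to minimize it.
    se = 0.0
    for p, (u, v) in zip(pts3d, pts2d):
        up, vp = project(p, theta_z, t, f)
        se += (up - u) ** 2 + (vp - v) ** 2
    return math.sqrt(se / len(pts3d))
```

At the true pose the error vanishes; perturbing the pose drives it up, which is the signal the optimization exploits.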

  17. Quantitative optical metrology with CMOS cameras

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Kolenovic, Ervin; Ferguson, Curtis F.

    2004-08-01

    Recent advances in laser technology, optical sensing, and computer processing of data have led to the development of advanced quantitative optical metrology techniques for high-accuracy measurements of absolute shapes and deformations of objects. These techniques provide noninvasive, remote, full-field-of-view information about the objects of interest. The information obtained relates to changes in shape and/or size of the objects, characterizes anomalies, and provides tools to enhance fabrication processes. Factors that influence the selection and applicability of an optical technique include the sensitivity, accuracy, and precision required for a particular application. In this paper, sensitivity, accuracy, and precision characteristics of quantitative optical metrology techniques, and specifically of optoelectronic holography (OEH) based on CMOS cameras, are discussed. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gauges, demonstrating the applicability of CMOS cameras in quantitative optical metrology techniques. It is shown that the advanced nature of CMOS technology can be applied to challenging engineering applications, including the study of rapidly evolving phenomena occurring in MEMS and micromechatronics.
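Quantitative holographic techniques of this kind recover phase from several intensity frames. A generic four-step phase-shifting sketch (a standard textbook method, not necessarily the authors' exact OEH algorithm):

```python
import math

def four_step_phase(i1, i2, i3, i4):
    # Frames recorded with reference phase shifts of 0, 90, 180 and 270
    # degrees: I_k = A + B*cos(phi + k*pi/2).  Then
    # I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi), so the wrapped
    # phase follows from phi = atan2(I4 - I2, I1 - I3).
    return math.atan2(i4 - i2, i1 - i3)
```

Applied per pixel, this yields a wrapped phase map; shape or deformation then follows from phase differences and the optical sensitivity vector.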

  18. Advanced imaging research and development at DARPA

    NASA Astrophysics Data System (ADS)

    Dhar, Nibir K.; Dat, Ravi

    2012-06-01

    Advances in imaging technology have a huge impact on our daily lives. Innovations in optics, focal plane arrays (FPA), microelectronics and computation have revolutionized camera design. As a result, new approaches to camera design and low-cost manufacturing are now possible. These advances are clearly evident in the visible wavelength band due to pixel scaling and improvements in silicon material and CMOS technology; CMOS cameras are available in cell phones and many other consumer products. Advances in infrared imaging technology have been slower due to market volume, many technological barriers in detector materials and optics, and fundamental limits imposed by the scaling laws of optics. There is, of course, much room for improvement in both visible and infrared imaging technology. This paper highlights various technology development projects at DARPA to advance imaging technology for both visible and infrared. Challenges and potential solutions are highlighted in areas related to wide field-of-view camera design, small-pitch pixels, and broadband and multiband detectors and focal plane arrays.

  19. Report 11HL: Technologies for Trusted Maritime Situational Awareness

    DTIC Science & Technology

    2011-10-01

    Olympics. The AIS antenna can be seen on the wooden pole to the right. The ASIA camera is contained within the Pelco enclosure (i.e., white case) on...tracks based on GPS and radar. The physical deployment of ASIA, radar and the acoustic array are also shown...the 2010 Vancouver Olympics.

  20. JPRS Report, Science & Technology, Japan, 4th Intelligent Robots Symposium, Volume 2

    DTIC Science & Technology

    1989-03-16

    accidents caused by strikes by robots [5], a quantitative model for safety evaluation [6], and evaluations of actual systems [7] in order to contribute to...Mobile Robot Position Referencing Using Map-Based Vision Systems.... 160 Safety Evaluation of Man-Robot System 171 Fuzzy Path Pattern of Automatic...camera are made after the robot stops to prevent damage from occurring through obstacle interference. The position of the camera is indicated on the

  1. Invisible watermarking optical camera communication and compatibility issues of IEEE 802.15.7r1 specification

    NASA Astrophysics Data System (ADS)

    Le, Nam-Tuan

    2017-05-01

    Copyright protection and information security are two of the most important concerns for digital data, following the development of the internet and computer networks. As an important protection mechanism, watermarking has become a challenging topic in both industry and academic research. Watermarking techniques fall into two categories: visible and invisible watermarking. Invisible techniques have an advantage for user interaction because the watermark cannot be seen. Applying watermarking to communication opens a challenging new direction for communication technology. In this paper we propose a new approach to communication using optical camera communication (OCC) based on invisible watermarking. Besides analyzing the performance of the proposed system, we also suggest a PHY- and MAC-layer frame structure for the IEEE 802.15.7r1 specification, which is a revision of the visible light communication (VLC) standard.
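Invisible watermarking is often illustrated with least-significant-bit (LSB) embedding; the sketch below is that generic textbook technique, not the OCC watermarking scheme proposed in the paper:

```python
def embed_lsb(pixels, bits):
    # Write watermark bits into the least-significant bit of each pixel
    # value; the +/-1 intensity change is imperceptible, i.e. invisible.
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract_lsb(pixels, n):
    # Recover the first n embedded bits from the watermarked pixels.
    return [p & 1 for p in pixels[:n]]
```

In an OCC setting the "receiver" would be a camera observing a display or LED panel carrying such an imperceptible modulation.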

  2. Intimate partner violence, technology, and stalking.

    PubMed

    Southworth, Cynthia; Finn, Jerry; Dawson, Shawndell; Fraser, Cynthia; Tucker, Sarah

    2007-08-01

    This research note describes the use of a broad range of technologies in intimate partner stalking, including cordless and cellular telephones, fax machines, e-mail, Internet-based harassment, global positioning systems, spyware, video cameras, and online databases. The concept of "stalking with technology" is reviewed, and the need for an expanded definition of cyberstalking is presented. Legal issues and advocacy-centered responses, including training, legal remedies, public policy issues, and technology industry practices, are discussed.

  3. Clementine mission

    NASA Astrophysics Data System (ADS)

    Rustan, Pedro L.

    1995-01-01

    The U.S. Department of Defense (DoD) and the National Aeronautics and Space Administration (NASA) started a cooperative program in 1992 to flight qualify recently developed lightweight technologies in a radiation-stressed environment. The spacecraft, referred to as Clementine, was designed, built, and launched in less than two years. The spacecraft was launched into a high-inclination orbit from Vandenberg Air Force Base in California on a Titan IIG launch vehicle in January 1994. The spacecraft was injected into a 420 by 3000 km orbit around the Moon and remained there for over two months. Unfortunately, after the Lunar phase of the mission was successfully completed, a software malfunction prevented the accomplishment of the near-Earth asteroid (NEA) phase. Some of the technologies incorporated in the Clementine spacecraft include: a 370 gram, 7 watt star tracker camera; a 500 gram, 6 watt UV/Vis camera; a 1600 gram, 30 watt indium antimonide focal plane array NIR camera; a 1650 gram, 30 watt mercury cadmium telluride LWIR camera; and a LIDAR camera consisting of a diode-pumped Nd:YAG laser for ranging and an intensified photocathode charge-coupled detector for imaging. The scientific results of the mission will first be analyzed by a NASA-selected team and then made available to the entire community.

  4. Camera Concepts for the Advanced Gamma-Ray Imaging System (AGIS)

    NASA Astrophysics Data System (ADS)

    Nepomuk Otte, Adam

    2009-05-01

    The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next-generation observatory in ground-based very high energy gamma-ray astronomy. Design goals are ten times better sensitivity, higher angular resolution, and a lower energy threshold than existing Cherenkov telescopes. Each telescope is equipped with a camera that detects and records the Cherenkov-light flashes from air showers. The camera comprises a pixelated focal plane of blue-sensitive and fast (nanosecond) photon detectors that detect the photon signal and convert it into an electrical one. The incorporation of trigger electronics and signal digitization into the camera is under study. Given the size of AGIS, the camera must be reliable, robust, and cost effective. We are investigating several directions, including innovative technologies such as Geiger-mode avalanche photodiodes as a possible detector and switched capacitor arrays for the digitization.

  5. Utilization of Open Source Technology to Create Cost-Effective Microscope Camera Systems for Teaching.

    PubMed

    Konduru, Anil Reddy; Yelikar, Balasaheb R; Sathyashree, K V; Kumar, Ankur

    2018-01-01

    Open source technologies and mobile innovations have radically changed the way people interact with technology. These innovations and advancements have been used across various disciplines and already have a significant impact. Microscopy, with its focus on visually appealing, contrasting colors for better appreciation of morphology, forms the core of disciplines such as pathology, microbiology, and anatomy. Here, learning happens with the aid of multi-head microscopes and digital camera systems for teaching larger groups and for organizing interactive sessions for students or faculty of other departments. The cost of original equipment manufacturer (OEM) camera systems is a limiting factor in bringing this useful technology to all locations. To avoid this, we have used low-cost technologies such as the Raspberry Pi, Mobile High-Definition Link, and 3D-printed adapters to create portable camera systems. Adopting these open source technologies enabled us to connect any binocular or trinocular microscope to a projector or HD television at a fraction of the cost of OEM camera systems, with comparable quality. These systems, in addition to being cost-effective, have also provided the added advantage of portability, thus providing much-needed flexibility at various teaching locations.

  6. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious, requiring acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of standard 5-mm and 10-mm Stryker® laparoscopes obtained using rdCalib and using the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for our augmented reality visualization system for laparoscopic surgery.
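TRE as used here is simply the mean distance between points localized through the calibrated system and their ground-truth positions. A minimal sketch (the helper name is hypothetical, not from the paper):

```python
import math

def mean_tre(measured_pts, reference_pts):
    # Target registration error: mean Euclidean distance (e.g. in mm)
    # between points localized through the calibrated camera system and
    # their ground-truth counterparts.
    dists = [math.dist(m, r) for m, r in zip(measured_pts, reference_pts)]
    return sum(dists) / len(dists)
```

Reporting the accompanying standard deviation (as in "1.18 ± 0.35 mm") would use the same per-point distance list.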

  7. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    PubMed Central

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious, requiring acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of standard 5-mm and 10-mm Stryker® laparoscopes obtained using rdCalib and using the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for our augmented reality visualization system for laparoscopic surgery. PMID:28943703

  8. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.

    PubMed

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious, requiring acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of standard 5-mm and 10-mm Stryker® laparoscopes obtained using rdCalib and using the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for our augmented reality visualization system for laparoscopic surgery.

  9. Hypervelocity impact studies using a rotating mirror framing laser shadowgraph camera

    NASA Technical Reports Server (NTRS)

    Parker, Vance C.; Crews, Jeanne Lee

    1988-01-01

    The need to study the effects of the impact of micrometeorites and orbital debris on various space-based systems has brought together the technologies of several companies and individuals in order to provide a successful instrumentation package. A light gas gun was employed to accelerate small projectiles to speeds in excess of 7 km/sec. Their impact on various targets is being studied with the help of a specially designed continuous-access rotating-mirror framing camera. The camera provides 80 frames of data at up to 1 × 10^6 frames/sec with exposure times of 20 nsec.

  10. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology.

    PubMed

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-08-25

    Surface defect detection and dimension measurement of automotive bevel gears by manual inspection are costly, inefficient, slow, and inaccurate. To solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology was developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms, named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM), and Fast Rotation-Position (FRP), are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity, repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm, the precision of dimension measurement is about 40-50 μm, and one inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production.
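The Neighborhood Average Difference idea can be sketched as flagging pixels that deviate from their local mean by more than a threshold; the implementation below is an illustrative reading of the name, not the paper's exact NAD algorithm:

```python
def nad_defects(img, win=1, thresh=20.0):
    """Flag pixels whose value differs from the average of their
    neighborhood by more than `thresh` (a sketch of the NAD concept)."""
    h, w = len(img), len(img[0])
    defects = []
    for y in range(h):
        for x in range(w):
            # Collect the (2*win+1)^2 window around (x, y), excluding it,
            # clipped at the image borders.
            neigh = [img[j][i]
                     for j in range(max(0, y - win), min(h, y + win + 1))
                     for i in range(max(0, x - win), min(w, x + win + 1))
                     if (i, j) != (x, y)]
            if abs(img[y][x] - sum(neigh) / len(neigh)) > thresh:
                defects.append((x, y))
    return defects
```

A single bright blemish on an otherwise uniform gear surface is flagged while its neighbors, whose local averages barely shift, are not.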

  11. Research on camera on orbit radial calibration based on black body and infrared calibration stars

    NASA Astrophysics Data System (ADS)

    Wang, YuDu; Su, XiaoFeng; Zhang, WanYing; Chen, FanSheng

    2018-05-01

    Affected by the launch process and the space environment, the response of a space camera inevitably degrades, so on-orbit radiometric calibration is necessary. In this paper, we propose a calibration method based on accurate infrared standard stars to increase infrared radiation measurement precision. Because stars can be considered point targets, we use them as the radiometric calibration source, establishing a Taylor expansion method and an energy extrapolation model based on the WISE and 2MASS catalogs. We then update the calibration results obtained from the black body. Finally, the calibration mechanism is designed and the design is verified by an on-orbit test. The experimental results show that the irradiance extrapolation error is about 3% and the accuracy of the calibration method is about 10%, which satisfies the requirements of on-orbit calibration.
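Energy extrapolation from catalog data typically starts from the magnitude-to-irradiance relation and a low-order fit across bands. A toy sketch with an assumed zero-magnitude flux constant (illustrative only, not the paper's exact model):

```python
def band_irradiance(mag, zero_mag_flux):
    # Catalog magnitude to in-band irradiance: E = E0 * 10**(-0.4 * m),
    # where E0 is the band's zero-magnitude flux (an assumed constant here).
    return zero_mag_flux * 10.0 ** (-0.4 * mag)

def extrapolate_irradiance(lam1, e1, lam2, e2, lam):
    # First-order (Taylor-style) extrapolation from two catalog bands to
    # the camera's band -- a sketch of the idea, not the paper's model.
    slope = (e2 - e1) / (lam2 - lam1)
    return e1 + slope * (lam - lam1)
```

Given 2MASS and WISE magnitudes of a standard star, the two steps yield an estimated irradiance in the camera's own band, against which the measured response is calibrated.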

  12. Optical Indoor Positioning System Based on TFT Technology.

    PubMed

    Gőzse, István

    2015-12-24

    A novel indoor positioning system is presented in this paper. Similarly to camera-based solutions, it is based on visual detection, but it conceptually differs from the classical approaches. First, the objects are marked by LEDs, and second, a special sensing unit is applied, instead of a camera, to track the motion of the markers. This sensing unit realizes a modified pinhole camera model in which the light-sensing area is fixed and consists of a small number of sensing elements (photodiodes), and it is the hole that can be moved. The markers are tracked by controlling the motion of the hole such that the light of the LEDs always hits the photodiodes. The proposed concept has several advantages: apart from its low computational demands, it is insensitive to disturbing ambient light. Moreover, as every component of the system can be realized with simple and inexpensive elements, the overall cost of the system can be kept low.

  13. Temperature measurement with industrial color camera devices

    NASA Astrophysics Data System (ADS)

    Schmidradler, Dieter J.; Berndorfer, Thomas; van Dyck, Walter; Pretschuh, Juergen

    1999-05-01

    This paper discusses color-camera-based temperature measurement. Usually, visual imaging and infrared image sensing are treated as two separate disciplines. We will show that a well-selected color camera device can be a cheaper, more robust, and more sophisticated solution for optical temperature measurement in several cases. Herein, only implementation fragments and important restrictions for the sensing element are discussed. Our aim is to draw the reader's attention to the use of visual image sensors for measuring thermal radiation and temperature, and to give reasons for the need for improved technologies for infrared camera devices. With our industry partner AVL-List, we successfully used the proposed sensor to measure the temperature of flames inside the combustion chamber of diesel engines, which finally led to the presented insights.
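One standard way a color camera can yield temperature is two-color (ratio) pyrometry on its channel intensities under the Wien approximation; the sketch below shows that generic principle, not necessarily the method used with the AVL flame measurements:

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam, temp):
    # Wien approximation to Planck's law (good when lam*temp << C2),
    # up to a constant factor that cancels in the ratio below.
    return lam ** -5 * math.exp(-C2 / (lam * temp))

def ratio_temperature(i1, lam1, i2, lam2):
    # Two-color pyrometry: solve the Wien intensity ratio for temperature.
    # ln(i1/i2) = 5*ln(lam2/lam1) - (C2/T)*(1/lam1 - 1/lam2)
    num = C2 * (1.0 / lam2 - 1.0 / lam1)
    den = math.log(i1 / i2) - 5.0 * math.log(lam2 / lam1)
    return num / den
```

Using, say, the red and green channel responses as the two bands makes emissivity largely cancel, which is what makes a color sensor usable for flame thermometry.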

  14. Characterization of a multi-user indoor positioning system based on low cost depth vision (Kinect) for monitoring human activity in a smart home.

    PubMed

    Sevrin, Loïc; Noury, Norbert; Abouchi, Nacer; Jumel, Fabrice; Massot, Bertrand; Saraydaryan, Jacques

    2015-01-01

    An increasing number of systems use indoor positioning for scenarios such as asset tracking, health care, games, manufacturing, logistics, shopping, and security. Many technologies are available, and the use of depth cameras is becoming more and more attractive as this kind of device becomes affordable and easy to handle. This paper contributes to the effort of creating an indoor positioning system based on low-cost depth cameras (Kinect). A method is proposed to optimize the calibration of the depth cameras, to describe the multi-camera data fusion, and to specify a global positioning projection to maintain compatibility with outdoor positioning systems. The monitoring of people's trajectories at home is intended for the early detection of a shift in daily activities, which highlights disabilities and loss of autonomy. This system is meant to improve homecare health management for a better end of life at a sustainable cost for the community.
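The multi-camera fusion step amounts to mapping each Kinect's local ground-plane coordinates into a shared global frame and combining overlapping detections. A minimal 2-D sketch with hypothetical pose tuples (not the paper's calibration procedure):

```python
import math

def to_global(local_xy, cam_pose):
    # cam_pose = (tx, ty, theta): rigid transform from one depth camera's
    # local ground-plane frame into the shared global frame.
    x, y = local_xy
    tx, ty, th = cam_pose
    return (tx + x * math.cos(th) - y * math.sin(th),
            ty + x * math.sin(th) + y * math.cos(th))

def fuse(detections):
    # detections: list of (local_xy, cam_pose) for one person seen by
    # several overlapping cameras; fuse by averaging the global estimates.
    pts = [to_global(p, pose) for p, pose in detections]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)
```

The same transform chain, extended with a geodetic projection, is what keeps the indoor track compatible with outdoor positioning.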

  15. Views of Caregivers on the Ethics of Assistive Technology Used for Home Surveillance of People Living with Dementia.

    PubMed

    Mulvenna, Maurice; Hutton, Anton; Coates, Vivien; Martin, Suzanne; Todd, Stephen; Bond, Raymond; Moorhead, Anne

    2017-01-01

    This paper examines the ethics of using assistive technology, such as video surveillance, in the homes of people living with dementia. Ideation and concept elaboration around the introduction of a camera-based surveillance service in the homes of people with dementia, typically living alone, are explored. The paper reviews relevant literature on surveillance of people living with dementia and summarises the findings from ideation and concept elaboration workshops designed to capture the views of those involved in the care of people living with dementia at home. The research question relates to the ethical considerations of using assistive technologies that include video surveillance in the homes of people living with dementia, and the implications for a person living with dementia whenever video surveillance is used in their home and access to the camera is given to the person's family. The review of related work indicated that such video surveillance may result in loss of autonomy or freedom for the person with dementia. The workshops reflected the findings from the related work and revealed useful information to inform the service design, in particular in fine-tuning the service to find the best balance between privacy and usefulness. Those who took part in the workshops supported the concept of using cameras in the homes of people living with dementia, with some significant caveats around privacy. The research carried out in this work is small in scale but points towards an acceptance of surveillance technologies by many caregivers of people living with dementia. It indicates that those who care for people living with dementia at home are willing to make use of camera technology, and the value of this work is therefore to help shed light on the direction of future research.

  16. Advances in Gamma-Ray Imaging with Intensified Quantum-Imaging Detectors

    NASA Astrophysics Data System (ADS)

    Han, Ling

    Nuclear medicine, an important branch of modern medical imaging, is an essential tool for both diagnosis and treatment of disease. As the fundamental element of nuclear medicine imaging, the gamma camera is able to detect gamma-ray photons emitted by radiotracers injected into a patient and form an image of the radiotracer distribution, reflecting biological functions of organs or tissues. Recently, an intensified CCD/CMOS-based quantum detector, called iQID, was developed in the Center for Gamma-Ray Imaging. Originally designed as a novel type of gamma camera, iQID demonstrated ultra-high spatial resolution (< 100 micron) and many other advantages over traditional gamma cameras. This work focuses on advancing this conceptually-proven gamma-ray imaging technology to make it ready for both preclinical and clinical applications. To start with, a Monte Carlo simulation of the key light-intensification device, i.e. the image intensifier, was developed, which revealed the dominating factor(s) that limit energy resolution performance of the iQID cameras. For preclinical imaging applications, a previously-developed iQID-based single-photon-emission computed-tomography (SPECT) system, called FastSPECT III, was fully advanced in terms of data acquisition software, system sensitivity and effective FOV by developing and adopting a new photon-counting algorithm, thicker columnar scintillation detectors, and system calibration method. Originally designed for mouse brain imaging, the system is now able to provide full-body mouse imaging with sub-350-micron spatial resolution. To further advance the iQID technology to include clinical imaging applications, a novel large-area iQID gamma camera, called LA-iQID, was developed from concept to prototype. Sub-mm system resolution in an effective FOV of 188 mm x 188 mm has been achieved. The camera architecture, system components, design and integration, data acquisition, camera calibration, and performance evaluation are presented in this work. Mounted on a castered counter-weighted clinical cart, the camera also features portable and mobile capabilities for easy handling and on-site applications at remote locations where hospital facilities are not available.

  17. Development, Deployment, and Cost Effectiveness of a Self-Administered Stereo Non Mydriatic Automated Retinal Camera (SNARC) Containing Automated Retinal Lesion (ARL) Detection Using Adaptive Optics

    DTIC Science & Technology

    2010-10-01

    Requirements: Application Server: BEA WebLogic Express 9.2 or higher; Java v5; Apache Struts v2; Hibernate v2; C3P0; SQL*Net client / JDBC. Database Server...designed for the desktop; an HTML and JavaScript browser-based front end designed for mobile smartphones; a Java-based framework utilizing Apache...Technology Requirements: The recommended technologies are as follows: Java application provides the backend application

  18. SpectraCAM SPM: a camera system with high dynamic range for scientific and medical applications

    NASA Astrophysics Data System (ADS)

    Bhaskaran, S.; Baiko, D.; Lungu, G.; Pilon, M.; VanGorden, S.

    2005-08-01

    A high-dynamic-range scientific camera system designed and manufactured by Thermo Electron for scientific and medical applications is presented. The newly developed CID820 image sensor with preamplifier-per-pixel technology is employed in this camera system. The 4-megapixel imaging sensor has a raw dynamic range of 82 dB. Each high-transparency pixel is based on a preamplifier-per-pixel architecture and contains two photogates for non-destructive readout (NDRO) of the photon-generated charge. Readout is achieved via parallel row processing with on-chip correlated double sampling (CDS). The imager is capable of true random pixel access with a maximum operating speed of 4 MHz. The camera controller consists of a custom camera signal processor (CSP) with an integrated 16-bit A/D converter and a PowerPC-based CPU running an embedded Linux operating system. The imager is cooled to −40 °C via a three-stage cooler to minimize dark current. The camera housing is sealed and is designed to maintain the CID820 imager in the evacuated chamber for at least 5 years. Thermo Electron has also developed custom software and firmware to drive the SpectraCAM SPM camera. Included in this firmware package is the new Extreme DR™ algorithm, designed to extend the effective dynamic range of the camera by several orders of magnitude, up to a 32-bit dynamic range. The RACID Exposure graphical user interface image analysis software runs on a standard PC connected to the camera via Gigabit Ethernet.
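Non-destructive readout underlies this kind of dynamic-range extension: a pixel can be sampled repeatedly during one integration, and the charge ramp fitted from the reads taken before saturation. The sketch below illustrates that general idea; the actual Extreme DR algorithm is proprietary and may differ:

```python
def charge_rate(samples, t_step, full_well):
    """Estimate a pixel's charge accumulation rate (signal per unit time)
    by least-squares fitting the NDRO samples that precede saturation."""
    pts = [(k * t_step, s) for k, s in enumerate(samples) if s < full_well]
    if len(pts) < 2:
        return None  # saturated immediately; rate not recoverable
    n = len(pts)
    mt = sum(t for t, _ in pts) / n
    ms = sum(s for _, s in pts) / n
    num = sum((t - mt) * (s - ms) for t, s in pts)
    den = sum((t - mt) ** 2 for t, _ in pts)
    return num / den
```

Bright pixels are characterized by their first few reads and dim pixels by the full ramp, so one exposure covers a far wider intensity range than the raw well depth allows.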

  19. Passive stand-off terahertz imaging with 1 hertz frame rate

    NASA Astrophysics Data System (ADS)

    May, T.; Zieger, G.; Anders, S.; Zakosarenko, V.; Starkloff, M.; Meyer, H.-G.; Thorwirth, G.; Kreysa, E.

    2008-04-01

    Terahertz (THz) cameras are expected to be a powerful tool for future security applications. If such technology is to be useful in typical security scenarios (e.g., airport check-in), it has to meet some minimum standards: a THz camera should record images at video rate from a safe (stand-off) distance. Although active cameras are conceivable, a passive system has the benefit of concealed operation. Additionally, from an ethical perspective, the lack of exposure to a radiation source is a considerable advantage for public acceptance. Taking all these requirements into account, only cooled detectors are able to achieve the needed sensitivity. A big leap forward in detector performance and scalability was driven by the astrophysics community: superconducting bolometers and mid-sized arrays of them have been developed and are in routine use. Although devices with many pixels are foreseeable, at present a device with an additional scanning optic is the most direct route to an imaging system with useful resolution. We demonstrate the capabilities of a concept for a passive terahertz video camera based on superconducting technology. The current prototype utilizes a small Cassegrain telescope with a gyrating secondary mirror to record 2-kilopixel THz images at a 1-second frame rate.

  20. Investigation of Phototriangulation Accuracy with Using of Various Techniques Laboratory and Field Calibration

    NASA Astrophysics Data System (ADS)

    Chibunichev, A. G.; Kurkov, V. M.; Smirnov, A. V.; Govorov, A. V.; Mikhalin, V. A.

    2016-10-01

    Nowadays, aerial survey technology based on unmanned aerial vehicles (UAVs) is becoming more popular. UAVs are physically unable to carry professional aerial cameras, so consumer digital cameras are used instead. Such cameras usually have a rolling, lamellar, or global shutter. Quite often, manufacturers and users of such aerial systems do not perform camera calibration and rely on self-calibration techniques instead. However, this approach has not been confirmed by extensive theoretical and practical research. In this paper we compare the results of phototriangulation based on laboratory, test-field, and self-calibration. For the investigation we use the Zaoksky test area as an experimental field, which provides a dense network of targeted and natural control points. Racurs PHOTOMOD and Agisoft PhotoScan software were used in the evaluation. The results of the investigation, conclusions, and practical recommendations are presented in this article.

  1. Design of a 2-mm Wavelength KIDs Prototype Camera for the Large Millimeter Telescope

    NASA Astrophysics Data System (ADS)

    Velázquez, M.; Ferrusca, D.; Castillo-Dominguez, E.; Ibarra-Medel, E.; Ventura, S.; Gómez-Rivera, V.; Hughes, D.; Aretxaga, I.; Grant, W.; Doyle, S.; Mauskopf, P.

    2016-08-01

    A new camera is being developed for the Large Millimeter Telescope (Sierra Negra, México) by an international collaboration with the University of Massachusetts, Cardiff University, and Arizona State University. The camera is based on kinetic inductance detectors (KIDs), a very promising technology due to their sensitivity and, especially, their compatibility with frequency-domain multiplexing at microwave frequencies, which allows large-format arrays in comparison with other detection technologies for mm-wavelength astronomy. The instrument will have a 100-pixel array of KIDs to image the 2-mm wavelength band and is designed for closed-cycle operation using a pulse-tube cryocooler along with a three-stage sub-kelvin 3He cooler that provides a 250 mK detector stage. RF cabling is used to read out the detectors from room temperature to the 250 mK focal plane, and the amplification stage is achieved with a low-noise amplifier operating at 4 K. The readout electronics will be based on open-source reconfigurable open-architecture computing hardware in order to perform real-time microwave transmission measurements, monitor the resonance frequency of each detector, and carry out the detection process.

  2. In-Home Exposure Therapy for Veterans with PTSD

    DTIC Science & Technology

    2017-10-01

    telehealth (HBT; Veterans stay at home and meet with the therapist using the computer and video cameras), and (3) PE delivered in home, in person (IHIP; the therapist comes to the Veterans' homes for treatment). We will be checking to see...when providing treatment in homes and through home-based video technology. BODY: Our focus in the past year (30 Sept 2016 – 10 Oct 2017) has been to

  3. Extending technology-aided leisure and communication programs to persons with spinal cord injury and post-coma multiple disabilities.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Ricciuti, Riccardo A; Trignani, Roberto; Oliva, Doretta; Signorino, Mario; D'Amico, Fiora; Sasanelli, Giovanni

    2015-01-01

    These two studies extended technology-aided programs promoting leisure and communication opportunities to a man with cervical spinal cord injury and a post-coma man with multiple disabilities. The studies involved the use of ABAB designs, in which A and B represented baseline and intervention phases, respectively. The programs focused on enabling the participants to activate songs, videos, requests, text messages, and telephone calls. These options were presented on a computer screen and activated through a small pressure microswitch by the man with spinal cord injury and through a special touch screen by the post-coma man. To help the latter participant, who had no verbal skills, with requests and telephone calls, a series of words and phrases was made available that he could activate in those situations. Data showed that both participants were successful in managing the programs arranged for them. The man with spinal cord injury activated a mean of more than five options per 10-min session; the post-coma man activated a mean of about 12 options per 20-min session. Technology-aided programs for promoting leisure and communication opportunities might thus be successfully tailored to persons with spinal cord injury and persons with post-coma multiple disabilities. Implications for rehabilitation: technology-aided programs may be critical to enable persons with pervasive motor impairment to engage in leisure activities and communication events independently. Persons with spinal cord injury, post-coma extended brain damage, and forms of neurodegenerative disease, such as amyotrophic lateral sclerosis, may benefit from those programs. The programs could be adapted to the participants' characteristics, both in terms of technology and content, so as to improve their overall impact on the participants' functioning and general mood.

  4. Compact Autonomous Hemispheric Vision System

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.

    2012-01-01

    Solar System Exploration camera implementations to date have involved either single cameras with a wide field of view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel with selectable positions was often added, which introduced moving parts, size, mass, and power draw, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a 92° FOV, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color by stacking two more systems (for a total of three, each equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. The system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.

  5. Travtek Evaluation Task C3: Camera Car Study

    DOT National Transportation Integrated Search

    1998-11-01

    A "biometric" technology is an automatic method for the identification, or identity verification, of an individual based on physiological or behavioral characteristics. The primary objective of the study summarized in this tech brief was to make reco...

  6. Lights, Camera, Lesson: Teaching Literacy through Film

    ERIC Educational Resources Information Center

    Lipiner, Michael

    2011-01-01

    This in-depth case study explores a modern approach to education: the benefits of using film, technology and other creative, non-conventional pedagogical methods in the classroom to enhance students' understanding of literature. The study explores the positive effects of introducing a variety of visual-based (and auditory-based) teaching methods…

  7. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology

    PubMed Central

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-01-01

    Surface-defect detection and dimension measurement of automotive bevel gears by manual inspection are costly, inefficient, slow, and inaccurate. In order to solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms, named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM), and Fast Rotation-Position (FRP), are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity, repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40–50 μm. One inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production. PMID:27571078

  8. Assessing the reliability and validity of direct observation and traffic camera streams to measure helmet and motorcycle use.

    PubMed

    Zaccaro, Heather N; Carbone, Emily C; Dsouza, Nishita; Xu, Michelle R; Byrne, Mary C; Kraemer, John D

    2015-12-01

    There is a need to develop motorcycle helmet surveillance approaches that are less labour intensive than direct observation (DO), which is the commonly recommended but never formally validated approach, particularly in developing settings. This study sought to assess public traffic camera feeds as an alternative to DO, in addition to the reliability of DO under field conditions. DO had high inter-rater reliability, κ=0.88 and 0.84, respectively, for cycle type and helmet type, which reinforces its use as a gold standard. However, traffic camera-based data collection was found to be unreliable, with κ=0.46 and 0.53 for cycle type and helmet type. When bicycles, motorcycles and scooters were classified based on traffic camera streams, only 68.4% of classifications concurred with those made via DO. Given the current technology, helmet surveillance via traffic camera streams is infeasible, and there remains a need for innovative traffic safety surveillance approaches in low-income urban settings. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
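    The inter-rater agreement figures above are Cohen's kappa values. As a quick illustration of how such a chance-corrected agreement statistic is computed (a generic sketch, not the authors' code; the function name is ours):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters labelled independently with the
    # same marginal label frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

    On the usual interpretive scale, the reported κ = 0.88 for cycle type indicates near-perfect agreement, while κ = 0.46 for the camera feeds falls only in the moderate range.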

  9. Evaluation of Digital Camera Technology For Bridge Inspection

    DOT National Transportation Integrated Search

    1997-07-18

    As part of a cooperative agreement between the Tennessee Department of Transportation and the Federal Highway Administration, a study was conducted to evaluate current levels of digital camera and color printing technology with regard to their applic...

  10. Using digital time-lapse cameras to monitor species-specific understorey and overstorey phenology in support of wildlife habitat assessment.

    PubMed

    Bater, Christopher W; Coops, Nicholas C; Wulder, Michael A; Hilker, Thomas; Nielsen, Scott E; McDermid, Greg; Stenhouse, Gordon B

    2011-09-01

    Critical to habitat management is an understanding not only of the location of animal food resources, but also of the timing of their availability. Grizzly bear (Ursus arctos) diets, for example, shift seasonally as different vegetation species enter key phenological phases. In this paper, we describe the use of a network of seven ground-based digital camera systems to monitor understorey and overstorey vegetation within species-specific regions of interest. Established across an elevation gradient in western Alberta, Canada, the cameras collected true-colour (RGB) images daily from 13 April 2009 to 27 October 2009. Fourth-order polynomials were fit to an RGB-derived index, which was then compared to field-based observations of phenological phases. Using linear regression to statistically relate the camera and field data, results indicated that the cameras captured 61% (r² = 0.61, df = 1, F = 14.3, p = 0.0043) of the variance observed in the field phenological phase data for the start of the growing season and 72% (r² = 0.72, df = 1, F = 23.09, p = 0.0009) of the variance in the length of the growing season. Based on the linear regression models, the mean absolute differences in residuals between predicted and observed start of growing season and length of growing season were 4 and 6 days, respectively. This work extends previous research by demonstrating that specific understorey and overstorey species can be targeted for phenological monitoring in a forested environment, using readily available digital camera technology and RGB-based vegetation indices.
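    The pipeline described (fit a fourth-order polynomial to an RGB-derived index, then extract phenological dates) can be sketched as follows. The greenness index 2G − R − B and the half-amplitude threshold for the start of season are illustrative assumptions, not necessarily the exact index or criterion the authors used:

```python
import numpy as np

def fit_greenness(days, r, g, b, degree=4):
    """Fit a polynomial to a per-day greenness index (here 2G - R - B)."""
    index = 2.0 * np.asarray(g) - np.asarray(r) - np.asarray(b)
    return np.poly1d(np.polyfit(days, index, degree))

def start_of_season(poly, days, threshold=0.5):
    """First day the fitted curve rises above `threshold` of its amplitude
    (a common, simple heuristic for start-of-season extraction)."""
    t = np.linspace(min(days), max(days), 1000)
    y = poly(t)
    level = y.min() + threshold * (y.max() - y.min())
    return t[np.argmax(y >= level)]
```

    With daily mean RGB values per region of interest, the fitted curve smooths day-to-day illumination noise, and the same thresholding applied to the falling limb would give the end (and hence length) of the growing season.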

  11. 3D vision system for intelligent milking robot automation

    NASA Astrophysics Data System (ADS)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limits and does not allow optimal positioning of the milking cups; moreover, in the presence of occlusions the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g., the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-of-Flight (TOF) and RGBD cameras. The proposed algorithms permit online segmentation of the teats by combining 2D and 3D visual information, from which the 3D position of each teat is computed. This information is then sent to the milking robot for teat-cup positioning. The vision system runs in real time and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system, with the best performance obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  12. Binary pressure-sensitive paint measurements using miniaturised, colour, machine vision cameras

    NASA Astrophysics Data System (ADS)

    Quinn, Mark Kenneth

    2018-05-01

    Recent advances in machine vision technology and capability have led to machine vision cameras becoming applicable for scientific imaging. This study aims to demonstrate the applicability of machine vision colour cameras for the measurement of dual-component pressure-sensitive paint (PSP). The presence of a second luminophore component in the PSP mixture significantly reduces its inherent temperature sensitivity, increasing its applicability at low speeds. All of the devices tested are smaller than the cooled CCD cameras traditionally used, and most are of significantly lower cost, thereby increasing the accessibility of such technology and techniques. Comparisons between three machine vision cameras, a three-CCD camera, and a commercially available specialist PSP camera are made on a range of parameters, and a detailed PSP calibration is conducted in a static calibration chamber. The findings demonstrate that colour machine vision cameras can be used for quantitative, dual-component, pressure measurements. These results give rise to the possibility of performing on-board dual-component PSP measurements in wind tunnels or on real flight/road vehicles.
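    PSP intensity data are typically reduced to pressure through a Stern-Volmer-type calibration of the intensity ratio. A minimal sketch of such a calibration, assuming the standard linear form I_ref/I = A + B·(P/P_ref) (this is the generic textbook relation, not code or coefficients from this study):

```python
import numpy as np

def fit_stern_volmer(i_ratio, p_ratio):
    """Least-squares fit of the linear Stern-Volmer relation
    I_ref/I = A + B * (P/P_ref) from calibration-chamber data."""
    design = np.vstack([np.ones_like(p_ratio), p_ratio]).T
    (a, b), *_ = np.linalg.lstsq(design, i_ratio, rcond=None)
    return a, b

def pressure_from_intensity(i_ratio, a, b):
    """Invert the calibration: recover P/P_ref from a measured ratio."""
    return (i_ratio - a) / b
```

    In the dual-component case, the second luminophore supplies a pressure-insensitive reference channel, so the ratio largely cancels temperature and illumination variations before this calibration is applied.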

  13. Terahertz Real-Time Imaging Uncooled Arrays Based on Antenna-Coupled Bolometers or FET Developed at CEA-Leti

    NASA Astrophysics Data System (ADS)

    Simoens, François; Meilhan, Jérôme; Nicolas, Jean-Alain

    2015-10-01

    Sensitive and large-format terahertz focal plane arrays (FPAs), integrated in compact hand-held cameras that deliver real-time terahertz (THz) imaging, are required for many application fields, such as non-destructive testing (NDT), security, and quality control in the food and agricultural products industry. Two technologies of uncooled THz arrays being studied at CEA-Leti, i.e., bolometers and complementary metal oxide semiconductor (CMOS) field-effect transistors (FETs), are able to meet these requirements. This paper reviews the technological approaches followed and focuses on the latest modeling and performance analysis. The applicability of these arrays to NDT and security is then demonstrated with experimental tests. In particular, the high technological maturity of the THz bolometer camera is illustrated by fast scanning of a large field of view of opaque scenes in a complete body-scanner prototype.

  14. A Flight Photon Counting Camera for the WFIRST Coronagraph

    NASA Astrophysics Data System (ADS)

    Morrissey, Patrick

    2018-01-01

    A photon counting camera based on the Teledyne-e2v CCD201-20 electron multiplying CCD (EMCCD) is being developed for the NASA WFIRST coronagraph, an exoplanet imaging technology development of the Jet Propulsion Laboratory (Pasadena, CA) that is scheduled to launch in 2026. The coronagraph is designed to directly image planets around nearby stars and to characterize their spectra. The planets are exceedingly faint, providing signals similar to the detector dark current, and therefore require photon counting detectors. Red sensitivity (600-980 nm) is preferred to capture spectral features of interest. Since radiation in space affects the ability of the EMCCD to transfer the required single-electron signals, care has been taken to develop appropriate shielding that will protect the cameras during a five-year mission. In this poster, the effects of space radiation on photon counting observations are described together with the mitigating features of the camera design. An overview of the current camera flight system electronics requirements and design is also presented.
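    Photon counting with an EMCCD is commonly done by thresholding: the large multiplication gain lifts single-photon events well above the read noise, so any pixel above a threshold is scored as one photon. A minimal sketch of that standard single-threshold scheme (the threshold factor k is an illustrative assumption; this is not the flight firmware):

```python
import numpy as np

def photon_count(frame, bias, read_noise, k=5.0):
    """Score a bias-subtracted EMCCD pixel as one photon when it exceeds
    k standard deviations of read noise above the bias level."""
    return (np.asarray(frame) > bias + k * read_noise).astype(np.uint8)
```

    The scheme deliberately ignores multi-photon pixels, which is why photon-counting frames are read out fast enough that at most one photon per pixel per frame is expected.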

  15. A feasibility study of damage detection in beams using high-speed camera (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wan, Chao; Yuan, Fuh-Gwo

    2017-04-01

    In this paper a method for damage detection in beam structures using a high-speed camera is presented. Traditional methods of damage detection in structures typically involve contact sensors (e.g., piezoelectric sensors or accelerometers) or non-contact sensors (e.g., laser vibrometers), which can be costly and time-consuming when inspecting an entire structure. With the popularity of digital cameras and the development of computer vision technology, video cameras offer a viable measurement capability, including high spatial resolution, remote sensing, and low cost. In this study, a damage detection method based on a high-speed camera is proposed. The setup comprises a high-speed camera and a line laser, which together capture the out-of-plane displacement of a cantilever beam. A cantilever beam with an artificial crack was excited, and the vibration was recorded by the camera. A methodology called motion magnification, which amplifies subtle motions in a video, is used for modal identification of the beam. A finite element model was used for validation of the proposed method. Suggestions for applications of this methodology and challenges in future work are discussed.
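    Motion magnification, in its simplest Eulerian form, temporally band-passes each pixel's intensity series and adds an amplified copy back to the video. A minimal FFT-based sketch of that idea (the band and gain are illustrative assumptions; the published technique uses more elaborate spatial decompositions):

```python
import numpy as np

def magnify_motion(frames, fs, band=(5.0, 15.0), alpha=20.0):
    """Amplify subtle periodic motion in a (T, H, W) intensity video:
    band-pass each pixel's time series in the frequency domain, then
    add the amplified band-passed component back to the original."""
    frames = np.asarray(frames, dtype=float)
    spec = np.fft.rfft(frames, axis=0)
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fs)
    keep = (freqs >= band[0]) & (freqs <= band[1])
    spec[~keep] = 0.0                     # zero everything outside the band
    bandpassed = np.fft.irfft(spec, n=frames.shape[0], axis=0)
    return frames + alpha * bandpassed
```

    For modal identification, the band would be centered on a candidate natural frequency of the beam so that the corresponding mode shape becomes visible in the magnified video.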

  16. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency, but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  18. SFDT-1 Camera Pointing and Sun-Exposure Analysis and Flight Performance

    NASA Technical Reports Server (NTRS)

    White, Joseph; Dutta, Soumyo; Striepe, Scott

    2015-01-01

    The Supersonic Flight Dynamics Test (SFDT) vehicle was developed to advance and test technologies of NASA's Low Density Supersonic Decelerator (LDSD) Technology Demonstration Mission. The first flight test (SFDT-1) occurred on June 28, 2014. To maximize the usefulness of the camera data, analyses were performed to optimize parachute visibility in the camera field of view during deployment and inflation, and to determine the probability of sun-exposure issues with the cameras given the vehicle heading and launch time. This paper documents the analysis, its results, and a comparison with flight video from SFDT-1.

  19. [Development of a Surgical Navigation System with Beam Split and Fusion of the Visible and Near-Infrared Fluorescence].

    PubMed

    Yang, Xiaofeng; Wu, Wei; Wang, Guoan

    2015-04-01

    This paper presents a surgical optical navigation system with non-invasive, real-time positioning capability for open surgical procedures. The design is based on the principle of near-infrared fluorescence molecular imaging, using in vivo fluorescence excitation technology, multi-channel spectral camera technology, and image fusion software. A visible and near-infrared ring LED excitation source, multi-channel band-pass filters, a two-CCD spectral camera sensor, and a computer system were integrated, and as a result a new surgical optical navigation system was successfully developed. When a near-infrared fluorescent agent is injected, the system can simultaneously display anatomical images of the tissue surface and near-infrared fluorescence functional images of the surgical field. The system can identify lymphatic vessels, lymph nodes, and tumor margins that the surgeon cannot detect with the naked eye intraoperatively. This will effectively guide the surgeon in removing tumor tissue and significantly improve the success rate of surgery. The technologies have obtained a national patent, No. ZI. 2011 1 0292374. 1.

  20. The system analysis of light field information collection based on the light field imaging

    NASA Astrophysics Data System (ADS)

    Wang, Ye; Li, Wenhua; Hao, Chenyang

    2016-10-01

    Augmented reality (AR) technology is becoming a focus of study, and the AR effect of light field imaging makes research on light field cameras attractive. Micro-array structures have been adopted in most light field information acquisition systems (LFIAS) since the emergence of the light field camera, mainly micro lens array (MLA) and micro pinhole array (MPA) systems. This paper reviews the LFIAS structures commonly used in light field cameras in recent years, and analyzes LFIAS based on the theory of geometrical optics. Meanwhile, this paper presents a novel LFIAS, a plane grating system, which we call a "micro aperture array (MAA)"; this LFIAS is analyzed using the tools of information optics. We show that there is little difference among the multiple images produced by the plane grating system, and that the plane grating system can collect and record the amplitude and phase information of the light field.

  1. Optical Indoor Positioning System Based on TFT Technology

    PubMed Central

    Gőzse, István

    2015-01-01

    A novel indoor positioning system is presented in the paper. Similarly to camera-based solutions, it is based on visual detection, but it conceptually differs from the classical approaches. First, the objects are marked by LEDs, and second, a special sensing unit is applied, instead of a camera, to track the motion of the markers. This sensing unit realizes a modified pinhole camera model, where the light-sensing area is fixed and consists of a small number of sensing elements (photodiodes), and it is the hole that can be moved. The markers are tracked by controlling the motion of the hole, such that the light of the LEDs always hits the photodiodes. The proposed concept has several advantages: apart from its low computational demands, it is insensitive to disturbing ambient light. Moreover, as every component of the system can be realized with simple and inexpensive elements, the overall cost of the system can be kept low. PMID:26712753

  2. Accurate measurement of imaging photoplethysmographic signals based camera using weighted average

    NASA Astrophysics Data System (ADS)

    Pang, Zongguang; Kong, Lingqin; Zhao, Yuejin; Sun, Huijuan; Dong, Liquan; Hui, Mei; Liu, Ming; Liu, Xiaohua; Liu, Lingling; Li, Xiaohui; Li, Rongji

    2018-01-01

    Imaging photoplethysmography (IPPG) is an emerging technique for extracting human vital signs from video recordings. With advantages such as non-contact measurement, low cost, and easy operation, IPPG has become a research hotspot in biomedicine. However, noise arising from non-microarterial areas cannot be removed, because the micro-arteries are unevenly distributed and the signal strength differs across regions, which results in a low signal-to-noise ratio (SNR) of IPPG signals and low heart-rate accuracy. In this paper, we propose a method for improving the SNR of camera-based IPPG signals by combining sub-regions of the face with a weighted average. First, we obtain regions of interest (ROIs) of the subject's face from the camera. Second, each ROI is tracked and feature-matched in each frame of the video, and each tracked region is divided into 60×60-pixel blocks. Third, the weight of the PPG signal from each sub-region is calculated based on that sub-region's signal-to-noise ratio. Finally, we combine the IPPG signals from all tracked ROIs using the weighted average. Compared with existing approaches, the results show that the proposed method yields a modest but significant improvement in the SNR of the camera-based PPG estimate and in the accuracy of heart-rate measurement.
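    The SNR-weighted combination step can be sketched as follows. The in-band/out-of-band power ratio used as the SNR proxy and the heart-rate band limits are assumptions for illustration, not the paper's exact estimator:

```python
import numpy as np

def estimate_snr(signal, fs, hr_band=(0.7, 4.0)):
    """Crude SNR proxy: spectral power inside the expected heart-rate
    band divided by power outside it."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    in_band = (freqs >= hr_band[0]) & (freqs <= hr_band[1])
    return power[in_band].sum() / max(power[~in_band].sum(), 1e-12)

def weighted_average_ppg(block_signals, fs):
    """Combine per-block PPG traces using SNR-derived weights, so that
    blocks dominated by noise contribute little to the final signal."""
    snrs = np.array([estimate_snr(s, fs) for s in block_signals])
    weights = snrs / snrs.sum()
    return np.average(block_signals, axis=0, weights=weights), weights
```

    A block over a strong microarterial area receives a large weight, while a block of facial skin with no pulsatile content is effectively suppressed.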

  3. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors.

    PubMed

    Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung

    2017-05-08

    Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. Existing research using visible light cameras has mainly focused on human detection during daytime hours when there is outside light, but human detection during nighttime hours, when there is no outside light, is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras, or thermal cameras, have been used. However, NIR illuminators are limited in illumination angle and distance, and the illuminator power must be adaptively adjusted depending on whether the object is close or far away. Thermal cameras are still costly, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at short distances in indoor environments, or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk nighttime human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show that the method achieves high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.

  4. Automated Ground-based Time-lapse Camera Monitoring of West Greenland ice sheet outlet Glaciers: Challenges and Solutions

    NASA Astrophysics Data System (ADS)

    Ahn, Y.; Box, J. E.; Balog, J.; Lewinter, A.

    2008-12-01

    Monitoring Greenland outlet glaciers using remotely sensed data has drawn great attention in the earth science community for decades, and time-series analysis of sensor data has provided important information on glacier flow variability by detecting speed and thickness changes, tracking features, and supplying model input. Thanks to advances in commercial digital camera technology and increased solid-state storage, we activated automatic ground-based time-lapse camera stations with high spatial/temporal resolution at west Greenland outlet glaciers, collecting one-hour-interval data continuously for more than one year at some, but not all, sites. We believe that important information on ice dynamics is contained in these data, and that terrestrial mono- and stereo-photogrammetry, together with digital image processing techniques, can provide the theoretical and practical foundations for processing them. The time-lapse images from west Greenland reveal various phenomena. Problematic are rain, snow, fog, shadows, freezing of water on the camera enclosure window, image over-exposure, camera motion, sensor platform drift, foxes chewing instrument cables, and ravens pecking at the plastic window. Other problems include feature identification, camera orientation, image registration, feature matching in image pairs, and feature tracking. Another obstacle is that non-metric digital cameras exhibit large distortion that must be compensated for precise photogrammetric use. Further, a massive number of images must be processed in a computationally efficient way. We meet these challenges by 1) identifying problems in possible photogrammetric processes, 2) categorizing them based on feasibility, and 3) clarifying limitations and alternatives, while emphasizing displacement computation and analyzing regional/temporal variability. We experiment with mono- and stereo-photogrammetric techniques with the aid of automatic correlation matching to efficiently handle the enormous data volumes.

  5. Evolution of Instrumentation for Detection of the Raman Effect as Driven by Available Technologies and by Developing Applications

    ERIC Educational Resources Information Center

    Adar, Fran; Delhaye, Michel; DaSilva, Edouard

    2007-01-01

    The evolution of Raman instrumentation from the time of the initial report of the phenomenon in 1928 to 2006 is discussed. The first instruments were prism-based spectrographs using lenses for collimation and focusing and the 21st century instruments are also spectrographs, but they use CCD cameras. The Lippmann filter technology that appears to…

  6. High-immersion three-dimensional display of the numerical computer model

    NASA Astrophysics Data System (ADS)

    Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu

    2013-08-01

    High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as the design and construction of buildings, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, and military uses. However, most technologies provide the 3D display in front of screens that are parallel to the walls, which decreases the sense of immersion. To obtain a correct multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the common focus plane, and the cameras' optical axes should be offset towards the center of the common focus plane in both the vertical and horizontal directions. It is common to use virtual cameras, i.e. ideal pinhole cameras, to display a 3D model in a computer system, and we use virtual cameras to simulate the shooting method for multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the observer's eye position in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. When the observer stands outside the circumcircle, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras: the near clip plane setting is the main point of the first method, while the rotation angle of the virtual cameras is the main point of the second. To validate the results, we use Direct3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models, viewed horizontally, is constructed and demonstrated, providing high-immersion 3D visualization. The displayed 3D scenes are compared with real objects in the real world.
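
    The offset perspective projection the abstract refers to can be sketched as an OpenGL-style off-centre frustum matrix. The function below is an illustrative construction under standard OpenGL conventions, not code from the paper; shifting the frustum window (asymmetric left/right, bottom/top) is what lets a virtual camera keep its image plane parallel to the common focus plane while its optical axis is offset towards the plane's centre.

    ```python
    import numpy as np

    def offset_frustum(left, right, bottom, top, near, far):
        """OpenGL-style off-centre (offset) perspective projection matrix.

        Asymmetric left/right and bottom/top bounds shift the frustum
        sideways without tilting the image plane."""
        m = np.zeros((4, 4))
        m[0, 0] = 2 * near / (right - left)
        m[0, 2] = (right + left) / (right - left)   # horizontal offset term
        m[1, 1] = 2 * near / (top - bottom)
        m[1, 2] = (top + bottom) / (top - bottom)   # vertical offset term
        m[2, 2] = -(far + near) / (far - near)
        m[2, 3] = -2 * far * near / (far - near)
        m[3, 2] = -1.0
        return m
    ```

    With symmetric bounds the offset terms vanish and the matrix reduces to an ordinary centred perspective projection.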

  7. QuadCam - A Quadruple Polarimetric Camera for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Skuljan, J.

    A specialised quadruple polarimetric camera for space situational awareness, QuadCam, has been built at the Defence Technology Agency (DTA), New Zealand, as part of a collaboration with the Defence Science and Technology Laboratory (Dstl), United Kingdom. The design was based on a similar system originally developed at Dstl, with some significant modifications for improved performance. The system is made up of four identical CCD cameras looking in the same direction, but in different planes of polarisation at 0, 45, 90 and 135 degrees with respect to the reference plane. A standard set of Stokes parameters can be derived from the four images in order to describe the state of polarisation of an object captured in the field of view. The modified design of the DTA QuadCam makes use of four small Raspberry Pi computers, so that each camera is controlled by its own computer in order to speed up the readout process and ensure that the four individual frames are taken simultaneously (to within 100-200 microseconds). In addition, new firmware was requested from the camera manufacturer so that an output signal is generated to indicate the state of the camera shutter. A specialised GPS unit (also developed at DTA) is then used to monitor the shutter signals from the four cameras and record the actual time of exposure to an accuracy of about 100 microseconds. This makes the system well suited for the observation of fast-moving objects in low Earth orbit (LEO). The QuadCam is currently mounted on a Paramount MEII robotic telescope mount at the newly built DTA space situational awareness observatory located on Whangaparaoa Peninsula near Auckland, New Zealand. The system will be used for tracking satellites in low Earth orbit as well as in the geostationary belt. The performance of the camera has been evaluated, and a series of test images has been collected in order to derive the polarimetric signatures of selected satellites.
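
    The standard linear Stokes parameters mentioned above follow directly from the four intensity images. The sketch below assumes ideal polarisers and co-registered frames; `stokes_from_quad` is a hypothetical helper name, not part of the QuadCam software.

    ```python
    import numpy as np

    def stokes_from_quad(i0, i45, i90, i135):
        """Linear Stokes parameters from intensities measured through
        polarisers at 0, 45, 90 and 135 degrees (QuadCam arrangement)."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)      # total intensity
        s1 = i0 - i90                           # 0/90 linear component
        s2 = i45 - i135                         # 45/135 linear component
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree of linear polarisation
        aolp = 0.5 * np.arctan2(s2, s1)         # angle of linear polarisation, radians
        return s0, s1, s2, dolp, aolp
    ```

    For fully polarised light aligned with the 0-degree axis, the degree of linear polarisation evaluates to 1 and the angle to 0.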

  8. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley; deNolfo, G. A.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.

    2008-01-01

    The Neutron Imaging Camera (NIC) is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large-volume time-projection chamber, provides accurate (approximately 0.4 mm resolution) 3-D tracking of charged particles. The incident direction of fast neutrons, En > 0.5 MeV, is reconstructed from the momenta and energies of the proton and triton fragments resulting from ³He(n,p)³H interactions in the 3-DTI volume. The performance of the NIC in laboratory and accelerator tests is presented.
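
    In the simplest picture, the direction reconstruction described above reduces to momentum conservation with the ³He nucleus assumed at rest: the incident neutron momentum equals the vector sum of the fragment momenta. The sketch below illustrates that idea only and is not the NIC reconstruction code.

    ```python
    import numpy as np

    def neutron_direction(p_proton, p_triton):
        """Unit vector of the incident neutron direction from the measured
        momentum vectors (same units, e.g. MeV/c) of the proton and triton
        fragments of a 3He(n,p)3H interaction, target nucleus at rest."""
        p_n = np.asarray(p_proton, float) + np.asarray(p_triton, float)
        return p_n / np.linalg.norm(p_n)
    ```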

  9. Development of SPIES (Space Intelligent Eyeing System) for smart vehicle tracing and tracking

    NASA Astrophysics Data System (ADS)

    Abdullah, Suzanah; Ariffin Osoman, Muhammad; Guan Liyong, Chua; Zulfadhli Mohd Noor, Mohd; Mohamed, Ikhwan

    2016-06-01

    SPIES, or Space-based Intelligent Eyeing System, is an intelligent technology that can be used for various applications, such as gathering spatial information on features on Earth, tracking the movement of an object, tracing historical information, monitoring driving behaviour, serving as a real-time security and alarm observer, and many more. Because SPIES will be developed and supplied modularly, usage according to users' needs and affordability is encouraged. SPIES is a complete system with camera, GSM, GPS/GNSS and G-sensor modules with intelligent functions and capabilities. The camera is mainly used to capture pictures and video, sometimes with audio, of an event. Its use is not limited to nostalgic purposes; it can also serve as a reference for security and as material evidence when an undesirable event such as a crime occurs. When integrated with the space-based technology of the Global Navigation Satellite System (GNSS), photos and videos can be recorded together with positioning information. Integrating these technologies with Information and Communication Technology (ICT) and a Geographic Information System (GIS) produces an innovative method of gathering still pictures or video with positioning information that can be conveyed in real time via the web to display location on a map, hence creating an intelligent eyeing system based on space technology. Providing global positioning information continuously is a challenge, but SPIES overcomes it even in areas without GNSS signal reception, for the purpose of continuous tracking and tracing capability.

  10. Precise color images by a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High-speed imaging systems have been used in many fields of science and engineering. Although high-speed camera systems have improved to high performance, most of their applications only capture high-speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasma and molten materials. Recent digital high-speed video imaging technology should be able to extract such information from those objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 × 64 pixels and 4,500 pps at 256 × 256 pixels, with 256 (8-bit) intensity levels for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. In order to obtain precise color images from this camera system, we need a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, it was adjusted to within 0.2 pixels at most by this method.
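
    Sub-pixel displacement adjustment of the kind proposed can be illustrated with phase correlation refined by a parabolic fit around the correlation peak. This is a generic sketch of the technique class, not the authors' implementation; it returns the (row, col) shift to apply to `img` to align it with `ref`.

    ```python
    import numpy as np

    def _parabolic(cm, c0, cp):
        # Sub-pixel offset from the peak value c0 and its two neighbours.
        denom = cm - 2 * c0 + cp
        return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom

    def subpixel_shift(ref, img):
        """Estimate the (row, col) shift aligning `img` to `ref` by phase
        correlation with parabolic sub-pixel refinement."""
        n0, n1 = ref.shape
        f = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
        corr = np.fft.ifft2(f / np.maximum(np.abs(f), 1e-12)).real
        p0, p1 = np.unravel_index(np.argmax(corr), corr.shape)
        d0 = p0 + _parabolic(corr[(p0 - 1) % n0, p1], corr[p0, p1],
                             corr[(p0 + 1) % n0, p1])
        d1 = p1 + _parabolic(corr[p0, (p1 - 1) % n1], corr[p0, p1],
                             corr[p0, (p1 + 1) % n1])
        # Wrap shifts larger than half the image into negative displacements.
        if d0 > n0 / 2:
            d0 -= n0
        if d1 > n1 / 2:
            d1 -= n1
        return d0, d1
    ```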

  11. QWIP technology for both military and civilian applications

    NASA Astrophysics Data System (ADS)

    Gunapala, Sarath D.; Kukkonen, Carl A.; Sirangelo, Mark N.; McQuiston, Barbara K.; Chehayeb, Riad; Kaufmann, M.

    2001-10-01

    Advanced thermal imaging infrared cameras have been a cost-effective and reliable method of obtaining the temperature of objects. Quantum Well Infrared Photodetector (QWIP) based thermal imaging systems have advanced the state of the art and are the most sensitive commercially available thermal systems. QWIP Technologies LLC, under exclusive agreement with Caltech, is currently manufacturing the QWIP-Chip(TM), a 320 × 256 element, bound-to-quasibound QWIP FPA. The camera operates in the long-wave IR band, spectrally peaked at 8.5 μm. The camera is equipped with a 32-bit floating-point digital signal processor combined with multi-tasking software, delivering a digital acquisition resolution of 12 bits at a nominal power consumption of less than 50 W. With a variety of video interface options, remote control capability via an RS-232 connection, and an integrated control driver circuit to support motorized zoom- and focus-compatible lenses, this camera design has excellent applications in both the military and commercial sectors. In the area of remote sensing, high-performance QWIP systems can be used for high-resolution target recognition as part of a new system of airborne platforms (including UAVs). Such systems also have direct application in law enforcement, surveillance, industrial monitoring and road hazard detection systems. This presentation will cover the current performance of the commercial QWIP cameras, conceptual platform systems, and advanced image processing for use in both military remote sensing and civilian applications currently being developed in road hazard monitoring.

  12. Micro optical fiber display switch based on the magnetohydrodynamic (MHD) principle

    NASA Astrophysics Data System (ADS)

    Lian, Kun; Heng, Khee-Hang

    2001-09-01

    This paper reports on a research effort to design, microfabricate and test an optical fiber display switch based on the magnetohydrodynamic (MHD) principle. The switch is driven by the Lorentz force and can be used to turn the light on and off. SU-8 photoresist and a UV light source were used for prototype fabrication in order to lower the cost. With a magnetic field supplied by an external permanent magnet, and an electrical current supplied across the two inert sidewall electrodes, the distributed body force generated produces a pressure difference on the liquid mercury in the switch chamber. By changing the direction of current flow, the mercury can open or cut off the light path in less than 10 ms. The major advantages of an MHD-based micro-switch are that it does not contain any solid moving parts and that its power consumption is much smaller compared to relay-type switches. This switch can be manufactured by molding in batch production and may have potential applications in extremely bright traffic control, high-intensity advertising displays, and communication.

  13. Linear Acceleration Measurement Utilizing Inter-Instrument Synchronization: A Comparison between Accelerometers and Motion-Based Tracking Approaches

    ERIC Educational Resources Information Center

    Callaway, Andrew J.; Cobb, Jon E.

    2012-01-01

    Whereas video cameras are a reliable and established technology for the measurement of kinematic parameters, accelerometers are increasingly being employed for this type of measurement due to their ease of use, performance, and comparatively low cost. However, the majority of accelerometer-based studies involve a single channel due to the…
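
    The accelerometer side of such a comparison typically double-integrates acceleration to displacement. Below is a minimal sketch of that processing path under idealised conditions; real measurements need proper drift handling (filtering), not the crude mean removal used here.

    ```python
    import numpy as np

    def accel_to_displacement(a, fs):
        """Double-integrate a linear acceleration signal (m/s^2) sampled at
        fs Hz to displacement (m) with the cumulative trapezoidal rule.
        Crude bias/drift removal by mean subtraction, for illustration only."""
        dt = 1.0 / fs
        a = np.asarray(a, float)
        a = a - a.mean()                                        # remove constant bias
        v = np.concatenate(([0.0], np.cumsum((a[1:] + a[:-1]) * dt / 2)))
        v -= v.mean()                                           # remove integration offset
        x = np.concatenate(([0.0], np.cumsum((v[1:] + v[:-1]) * dt / 2)))
        return x
    ```

    For a sinusoidal motion the recovered displacement amplitude matches the analytic value A when a(t) = -A·ω²·sin(ωt) over whole periods.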

  14. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    NASA Astrophysics Data System (ADS)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

    Advanced Driver Assistance Systems (ADAS) are the most advanced technologies for fighting road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame-rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity with respect to CCD/CMOS rangefinders, inherently better time resolution, higher accuracy and better linearity. Moreover, iTOF requires neither high-bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which would otherwise hinder the detection of fast-moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40°×20° field-of-view. The whole system is very rugged and compact, a perfect solution for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm and less than 1 W consumption. To provide the required optical power (1.5 W, eye-safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source based on five laser driver cards, with three 808 nm lasers each.
We present the full characterization of the 3D automotive system, operated both at night and during daytime, indoors and outdoors, in real traffic scenarios. The achieved long range (up to 45 m), high dynamic range (118 dB), high speed (over 200 fps) 3D depth measurement, and high precision (better than 90 cm at 45 m) highlight the excellent performance of this CMOS SPAD camera for automotive applications.
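
    The iTOF principle behind the camera can be illustrated with the common four-bucket demodulation scheme, in which depth follows from the phase delay of the modulated illumination. The paper does not publish its exact demodulation, so the following is a generic sketch; note the unambiguous range is c/(2·f_mod).

    ```python
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def itof_depth(c0, c1, c2, c3, f_mod):
        """Indirect time-of-flight depth (m) from four correlation samples
        taken at 0/90/180/270 degrees of the modulation period at frequency
        f_mod (Hz). Generic 4-bucket demodulation:
            phase = atan2(c3 - c1, c0 - c2),  d = C * phase / (4*pi*f_mod)."""
        phase = np.arctan2(c3 - c1, c0 - c2) % (2 * np.pi)  # phase delay in [0, 2*pi)
        return C * phase / (4 * np.pi * f_mod)
    ```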

  15. Applying AR technology with a projector-camera system in a history museum

    NASA Astrophysics Data System (ADS)

    Miyata, Kimiyoshi; Shiroishi, Rina; Inoue, Yuka

    2011-01-01

    In this research, an AR (augmented reality) technology with a projector-camera system is proposed for a history museum to provide a user-friendly interface and a pseudo hands-on exhibition. The proposed system is a desktop application designed around old Japanese coins, to enhance visitors' interest and motivation to investigate them. The old coins are too small for their features to be recognized easily, and their surfaces have fine structures on both sides, so showing the reverse side and an enlarged image of the coins is meaningful for enhancing visitors' interest and motivation. The image of the reverse side of a coin is displayed, based on the AR technology, when the user flips the AR marker. The information augmenting the coins is projected with a data projector and placed near the coins. The proposed system contributes to the development of exhibition methods that combine real artifacts with AR technology, and it demonstrated the flexibility and capability to offer background information relating to the old Japanese coins. However, improved accuracy of marker detection and tracking, and a visitor evaluation survey, are required to improve the effectiveness of the system.

  16. A novel optical investigation technique for railroad track inspection and assessment

    NASA Astrophysics Data System (ADS)

    Sabato, Alessandro; Beale, Christopher H.; Niezrecki, Christopher

    2017-04-01

    Track failures due to crosstie degradation or loss of ballast support may result in problems ranging from simple service interruptions to derailments. Structural Health Monitoring (SHM) of railway track is important for safety reasons and to reduce downtime and maintenance costs. Current track inspection technologies are insufficient, so novel and cost-effective technologies for assessing track health are needed. Advances achieved in recent years in camera technology, optical sensors, and image-processing algorithms have made machine vision, Structure from Motion (SfM), and three-dimensional (3D) Digital Image Correlation (DIC) systems extremely appealing techniques for extracting structural deformations and geometry profiles. Therefore, optically based, non-contact measurement techniques may be used for assessing surface defects, rail and tie deflection profiles, and ballast condition. In this study, the design of two camera-based measurement systems is proposed for crosstie-ballast condition assessment and track examination. The first consists of four pairs of cameras installed on the underside of a rail car to detect the induced deformation and displacement along the whole length of the track's crossties using 3D DIC measurement techniques. The second consists of another set of cameras using SfM techniques to obtain a 3D rendering of the infrastructure from a series of two-dimensional (2D) images, in order to evaluate the state of the track qualitatively. The feasibility of the proposed optical systems is evaluated through extensive laboratory tests, demonstrating their ability to measure the parameters of interest (e.g. the crosstie's full-field displacement, vertical deflection, shape, etc.) for assessment and SHM of railroad track.

  17. Digital Elevation Model from Non-Metric Camera in Uas Compared with LIDAR Technology

    NASA Astrophysics Data System (ADS)

    Dayamit, O. M.; Pedro, M. F.; Ernesto, R. R.; Fernando, B. L.

    2015-08-01

    Digital Elevation Model (DEM) data, as a representation of surface topography, are in high demand for spatial analysis and modelling. To that end, many methods of acquiring and processing such data have been developed, from traditional surveying to modern technologies like LIDAR. On the other hand, over the past four years the development of Unmanned Aerial Systems (UAS) for geomatics has brought the possibility of acquiring surface data with an on-board non-metric digital camera in a short time and with good quality for some analyses. Data collection by UAS has attracted tremendous attention because it enables the determination of volume changes over time, monitoring of breakwaters, and hydrological modelling including flood simulation and drainage networks, among other applications that rely on DEMs for proper analysis. DEM quality is considered a combination of DEM accuracy and DEM suitability, so this paper analyses the quality of a DEM derived from a non-metric digital camera on a UAS compared with a DEM from LIDAR covering the same 4 km2 of geographic space in Artemisa province, Cuba. This area is part of an urban planning framework that requires knowledge of the topographic characteristics in order to analyse hydrological behaviour and decide the best places for roads, buildings and so on. Because LIDAR is still the more accurate method, it offers a benchmark against which to test DEMs from non-metric digital cameras on UAS, which are much more flexible and provide a solution for the many applications that need detailed DEMs.
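
    A LIDAR-benchmarked DEM accuracy check of the kind described usually reduces to elevation-difference statistics on a common grid. The sketch below is illustrative only (function name and choice of statistics are ours, and co-registration of the two DEMs is assumed already done); NMAD is a robust spread estimate commonly used alongside RMSE.

    ```python
    import numpy as np

    def dem_quality(dem_test, dem_ref):
        """Compare a test DEM (e.g. UAS-derived) against a reference DEM
        (e.g. LIDAR) on the same grid. Returns (bias, rmse, nmad) of the
        elevation differences, ignoring non-finite cells."""
        d = (np.asarray(dem_test, float) - np.asarray(dem_ref, float)).ravel()
        d = d[np.isfinite(d)]
        bias = d.mean()                                   # mean error
        rmse = np.sqrt((d ** 2).mean())                   # root mean square error
        nmad = 1.4826 * np.median(np.abs(d - np.median(d)))  # robust sigma estimate
        return bias, rmse, nmad
    ```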

  18. A novel camera localization system for extending three-dimensional digital image correlation

    NASA Astrophysics Data System (ADS)

    Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher

    2018-03-01

    The monitoring of civil, mechanical, and aerospace structures is important, especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advances in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision system's extrinsic and intrinsic parameters. This means that the position of the cameras relative to each other (i.e. separation distance, camera angle, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between them. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitor large structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability to determine the cameras' positions in space for performing accurate 3D-DIC calibration and measurements.

  19. Investigation of the influence of spatial degrees of freedom on thermal infrared measurement

    NASA Astrophysics Data System (ADS)

    Fleuret, Julien R.; Yousefi, Bardia; Lei, Lei; Djupkep Dizeu, Frank Billy; Zhang, Hai; Sfarra, Stefano; Ouellet, Denis; Maldague, Xavier P. V.

    2017-05-01

    Long Wavelength Infrared (LWIR) cameras provide a representation of the part of the light spectrum that is sensitive to temperature. These cameras, also named Thermal Infrared (TIR) cameras, are powerful tools for detecting features that cannot be seen by other imaging technologies. For instance, they enable defect detection in materials, detection of fever and anxiety in mammals, and many other features for numerous applications. However, the accuracy of thermal cameras can be affected by many parameters; the most critical involves the relative position of the camera with respect to the object of interest. Several models have been proposed to minimize the influence of some of these parameters, but they are mostly tied to specific applications. Because such models are based on prior information related to their context, their applicability to other contexts cannot be easily assessed, and the few remaining models are mostly associated with a specific device. In this paper, the authors study the influence of camera position on measurement accuracy. Modeling the position of the camera relative to the object of interest depends on many parameters, so in order to make the study as accurate as possible, the position of the camera is represented by a five-dimensional model. The aim of this study is to investigate, and attempt to introduce, a model that is as independent of the device as possible.

  20. Timing generator of scientific grade CCD camera and its implementation based on FPGA technology

    NASA Astrophysics Data System (ADS)

    Si, Guoliang; Li, Yunfei; Guo, Yongfei

    2010-10-01

    The functions of the timing generator of a scientific-grade CCD camera are briefly presented: it generates various kinds of pulse sequences for the TDI-CCD, the video processor, and imaging data output, acting as the timing coordinator of the CCD imaging unit. The IL-E2 TDI-CCD sensor produced by DALSA Co. Ltd. is used in the scientific-grade CCD camera. The driving schedules of the IL-E2 TDI-CCD sensor were examined in detail, and the timing generator was designed for the camera accordingly. An FPGA was chosen as the hardware design platform, and the schedule generator was described in VHDL. The designed generator successfully passed functional simulation with EDA software and was fitted into an XC2VP20-FF1152 (an FPGA product made by XILINX). The experiments indicate that the new method improves the level of system integration; high reliability, stability and low power consumption are achieved for the scientific-grade CCD camera system, while the design and experiment period is sharply shortened.

  1. Using virtual reality for science mission planning: A Mars Pathfinder case

    NASA Technical Reports Server (NTRS)

    Kim, Jacqueline H.; Weidner, Richard J.; Sacks, Allan L.

    1994-01-01

    NASA's Mars Pathfinder Project requires a Ground Data System (GDS) that supports both engineering and scientific payloads with reduced mission operations staffing and short planning schedules. Successful surface operation of the lander camera also requires efficient mission planning and accurate pointing of the camera. To meet these challenges, a new software strategy was developed that integrates virtual reality technology with existing navigational ancillary information and image processing capabilities. The result is an interactive, workstation-based application that provides a high-resolution, 3-dimensional, stereo display of Mars as if it were viewed through the lander camera. The design, implementation strategy, and parametric specification phases of the development of this software were completed, and the prototype tested. When completed, the software will allow scientists and mission planners to access simulated and actual scenes of the Martian surface. The perspective from the lander camera will enable scientists to plan activities more accurately and completely. The application will also support the sequence and command generation process and will allow testing and verification of camera pointing commands via simulation.

  2. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Jie; Zhu, Chang'an

    2016-01-01

    The development of optics and computer technologies enables the application of vision-based techniques, using digital cameras, to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows for remote measurement, is non-intrusive, and does not introduce additional mass. In this study, a high-speed camera system is developed to perform the displacement measurement in real time. The system consists of a high-speed camera and a notebook computer. The high-speed camera can capture images at a speed of hundreds of frames per second. To process the captured images on the computer, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve its efficiency. The modified algorithm can accomplish one displacement extraction within 1 ms without having to install any pre-designed target panel on the structures in advance. The accuracy and efficiency of the system in the remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on a sound barrier on a suspension viaduct. Experimental results show that the proposed algorithm can extract accurate displacement signals and accomplish the vibration measurement of large-scale structures.
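
    The Lucas-Kanade idea the system builds on can be shown in its simplest translation-only, forward-additive form. This is a didactic sketch, not the authors' modified inverse compositional algorithm; function names and the bilinear sampler are ours.

    ```python
    import numpy as np

    def _bilinear(img, y, x):
        """Bilinear sampling of img at float coordinate arrays (y, x)."""
        y = np.clip(y, 0, img.shape[0] - 1.001)
        x = np.clip(x, 0, img.shape[1] - 1.001)
        y0 = y.astype(int); x0 = x.astype(int)
        fy = y - y0; fx = x - x0
        return (img[y0, x0] * (1 - fy) * (1 - fx) + img[y0 + 1, x0] * fy * (1 - fx)
                + img[y0, x0 + 1] * (1 - fy) * fx + img[y0 + 1, x0 + 1] * fy * fx)

    def track_translation(template, image, p=(0.0, 0.0), n_iter=50):
        """Translation-only Lucas-Kanade: find (dy, dx) such that
        image[y + dy, x + dx] ~= template[y, x], via Gauss-Newton."""
        h, w = template.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(float)
        dy, dx = p
        for _ in range(n_iter):
            sampled = _bilinear(image, ys + dy, xs + dx)   # warped image patch
            gy, gx = np.gradient(sampled)                  # Jacobian wrt (dy, dx)
            err = (template - sampled).ravel()
            J = np.stack([gy.ravel(), gx.ravel()], axis=1)
            step, *_ = np.linalg.lstsq(J, err, rcond=None)  # Gauss-Newton update
            dy, dx = dy + step[0], dx + step[1]
            if np.hypot(*step) < 1e-4:
                break
        return dy, dx
    ```

    The inverse compositional variant the paper refines precomputes the Jacobian on the template instead of re-evaluating it every iteration, which is where its speed advantage comes from.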

  3. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    NASA Astrophysics Data System (ADS)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

    Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal planes with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.

  4. Vehicular camera pedestrian detection research

    NASA Astrophysics Data System (ADS)

    Liu, Jiahui

    2018-03-01

    With the rapid development of science and technology, highway traffic and transportation have become far more convenient. At the same time, however, traffic safety accidents occur more and more frequently in China. In order to deal with the increasingly heavy traffic, protecting the safety of people's lives and property and facilitating travel have become top priorities. Real-time, accurate information about pedestrians and the driving environment is obtained through a vehicular camera, which is used to detect and track the moving targets ahead. This approach is popular in the domains of intelligent vehicle safety driving, autonomous navigation, and traffic system research. Based on pedestrian video obtained by the vehicular camera, this paper studies pedestrian detection and tracking and the associated algorithms.

  5. Multi-MGy Radiation Hardened Camera for Nuclear Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Girard, Sylvain; Boukenter, Aziz; Ouerdane, Youcef

    There is increasing interest in developing cameras for surveillance systems to monitor nuclear facilities or nuclear waste storages. In particular, for today's and the next generation of nuclear facilities, the increased safety requirements that followed the Fukushima Daiichi disaster have to be considered. For some applications, radiation tolerance needs to reach doses in the MGy(SiO2) range, whereas the most tolerant commercial or prototype products based on solid-state image sensors withstand doses up to a few kGy. The objective of this work is to present the radiation hardening strategy developed by our research groups to enhance the tolerance to ionizing radiation of the various subparts of these imaging systems by working simultaneously at the component and system design levels. Developing a radiation-hardened camera implies combining several radiation-hardening strategies. In our case, we decided not to use the simplest one, the shielding approach. This approach is efficient but limits camera miniaturization and is not compatible with future integration in remote-handling or robotic systems. The hardening-by-component strategy then appears mandatory to avoid the failure of one of the camera subparts at doses lower than the MGy. Concerning the image sensor itself, the technology used is a CMOS Image Sensor (CIS) designed by the ISAE team with custom pixel designs that mitigate the total ionizing dose (TID) effects that occur well below the MGy range in classical image sensors (e.g. Charge Coupled Devices (CCD), Charge Injection Devices (CID) and classical Active Pixel Sensors (APS)), such as the complete loss of functionality, the dark current increase, and the gain drop. We'll present at the conference a comparative study of these radiation-hardened pixel radiation responses with respect to conventional ones, demonstrating the efficiency of the choices made.
The targeted strategy to develop the complete radiation-hard camera electronics will be exposed. Another important element of the camera is the optical system that transports the image from the scene to the image sensor. This arrangement of glass-based lenses is affected by radiation through two mechanisms: radiation-induced absorption and radiation-induced refractive index changes. The first limits the signal-to-noise ratio of the image, whereas the second directly affects the resolution of the camera. We'll present at the conference a coupled simulation/experiment study of these effects for various commercial glasses, together with a vulnerability study of typical optical systems at MGy doses. The last very important part of the camera is the illumination system, which can be based on various technologies of emitting devices such as LEDs, SLEDs or lasers. The most promising solutions for high radiation doses will be presented at the conference. In addition to this hardening-by-component approach, the global radiation tolerance of the camera can be drastically improved by working at the system level, combining innovative approaches, e.g. for the optical and illumination systems. We'll present at the conference the developed approach allowing the camera lifetime to be extended up to the MGy dose range. (authors)

  6. Uncooled Terahertz real-time imaging 2D arrays developed at LETI: present status and perspectives

    NASA Astrophysics Data System (ADS)

    Simoens, François; Meilhan, Jérôme; Dussopt, Laurent; Nicolas, Jean-Alain; Monnier, Nicolas; Sicard, Gilles; Siligaris, Alexandre; Hiberty, Bruno

    2017-05-01

    As in other imaging sensor markets, whatever the technology, the commercial spread of terahertz (THz) cameras requires simultaneously meeting the criteria of high sensitivity and low cost and SWAP (size, weight and power). Monolithic silicon-based 2D sensors integrated in uncooled THz real-time cameras are good candidates to meet these requirements. Over the past decade, LETI has been studying and developing such arrays with two complementary technological approaches, i.e. antenna-coupled silicon bolometers and CMOS field-effect transistors (FET), both compatible with standard silicon microelectronics processes. LETI has leveraged its know-how in thermal infrared bolometer sensors to develop a proprietary architecture for THz sensing. High technological maturity has been achieved, as illustrated by the demonstration of fast scanning of a large field of view and the recent launch of a commercial camera. In the FET-based THz field, recent work has focused on innovative CMOS read-out integrated circuit designs. The studied architectures take advantage of the large pixel pitch to enhance flexibility and sensitivity: an embedded, in-pixel, configurable signal-processing chain dramatically reduces the noise. Video sequences at 100 frames per second have been achieved using our 31x31-pixel 2D Focal Plane Arrays (FPA). The authors describe the present status of these developments and discuss perspectives on performance evolution. Several experimental imaging tests are also presented to illustrate the capability of these arrays to address industrial applications such as non-destructive testing (NDT), security, and quality control of food.

  7. UAV-Based Photogrammetry and Integrated Technologies for Architectural Applications—Methodological Strategies for the After-Quake Survey of Vertical Structures in Mantua (Italy)

    PubMed Central

    Achille, Cristiana; Adami, Andrea; Chiarini, Silvia; Cremonesi, Stefano; Fassi, Francesco; Fregonese, Luigi; Taffurelli, Laura

    2015-01-01

    This paper examines the survey of tall buildings in emergency contexts such as post-seismic events. An after-earthquake survey has to guarantee time savings, high precision and safety during the operational stages. The main goal is to optimize the application of methodologies based on the acquisition and automatic processing of photogrammetric data, including the use of Unmanned Aerial Vehicle (UAV) systems, in order to provide fast and low-cost operations. The suggested methods integrate new technologies with commonly used ones such as TLS and topographic acquisition. The value of the photogrammetric approach is demonstrated by a test case comparing acquisition, calibration and 3D modeling results obtained with a laser scanner, a metric camera and an amateur reflex camera. The test demonstrates the efficiency of image-based methods in the acquisition of complex architecture. The case study is the Santa Barbara bell tower in Mantua. The applied survey solution yields a complete 3D database of the complex architectural structure, from which all the information needed for significant intervention can be extracted. This demonstrates the applicability of UAV photogrammetry to the survey of vertical structures, complex buildings and hard-to-access architectural parts, providing high-precision results. PMID:26134108

  8. UAV-Based Photogrammetry and Integrated Technologies for Architectural Applications--Methodological Strategies for the After-Quake Survey of Vertical Structures in Mantua (Italy).

    PubMed

    Achille, Cristiana; Adami, Andrea; Chiarini, Silvia; Cremonesi, Stefano; Fassi, Francesco; Fregonese, Luigi; Taffurelli, Laura

    2015-06-30

    This paper examines the survey of tall buildings in emergency contexts such as post-seismic events. An after-earthquake survey has to guarantee time savings, high precision and safety during the operational stages. The main goal is to optimize the application of methodologies based on the acquisition and automatic processing of photogrammetric data, including the use of Unmanned Aerial Vehicle (UAV) systems, in order to provide fast and low-cost operations. The suggested methods integrate new technologies with commonly used ones such as TLS and topographic acquisition. The value of the photogrammetric approach is demonstrated by a test case comparing acquisition, calibration and 3D modeling results obtained with a laser scanner, a metric camera and an amateur reflex camera. The test demonstrates the efficiency of image-based methods in the acquisition of complex architecture. The case study is the Santa Barbara bell tower in Mantua. The applied survey solution yields a complete 3D database of the complex architectural structure, from which all the information needed for significant intervention can be extracted. This demonstrates the applicability of UAV photogrammetry to the survey of vertical structures, complex buildings and hard-to-access architectural parts, providing high-precision results.

  9. Traceable Calibration, Performance Metrics, and Uncertainty Estimates of Minirhizotron Digital Imagery for Fine-Root Measurements

    PubMed Central

    Roberti, Joshua A.; SanClements, Michael D.; Loescher, Henry W.; Ayres, Edward

    2014-01-01

    Even though fine-root turnover is a highly studied topic, it is often poorly understood as a result of uncertainties inherent in its sampling, e.g., quantifying spatial and temporal variability. While many methods exist to quantify fine-root turnover, use of minirhizotrons has increased over the last two decades, making sensor errors another source of uncertainty. Currently, no standardized methodology exists to test and compare minirhizotron camera capability, imagery, and performance. This paper presents a reproducible, laboratory-based method by which minirhizotron cameras can be tested and validated in a traceable manner. The performance of camera characteristics was identified and test criteria were developed: we quantified the precision of camera location for successive images, estimated the trueness and precision of each camera's ability to quantify root diameter and root color, and also assessed the influence of heat dissipation introduced by the minirhizotron cameras and electrical components. We report detailed and defensible metrology analyses that examine the performance of two commercially available minirhizotron cameras. These cameras performed differently with regard to the various test criteria and uncertainty analyses. We recommend a defensible metrology approach to quantify the performance of minirhizotron camera characteristics and determine sensor-related measurement uncertainties prior to field use. This approach is also extensible to other digital imagery technologies. In turn, these approaches facilitate a greater understanding of measurement uncertainties (signal-to-noise ratio) inherent in the camera performance and allow such uncertainties to be quantified and mitigated so that estimates of fine-root turnover can be more confidently quantified. PMID:25391023
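
    The trueness/precision terminology in this abstract follows standard metrology usage: trueness is the offset of the mean measurement from a reference value, precision the spread of repeated measurements. A minimal sketch of how such camera test criteria reduce to statistics, using hypothetical repeated root-diameter readings against a known reference (not the study's data):

```python
from statistics import mean, stdev

def trueness(measurements, reference):
    """Systematic error: mean measured value minus the reference value."""
    return mean(measurements) - reference

def precision(measurements):
    """Random error: sample standard deviation of repeated measurements."""
    return stdev(measurements)

# Ten hypothetical repeated diameter readings (mm) of a 2.00 mm reference wire:
readings = [2.03, 1.98, 2.05, 2.01, 1.99, 2.04, 2.02, 1.97, 2.03, 2.00]
bias = trueness(readings, 2.00)   # positive -> the camera over-estimates
spread = precision(readings)
```

Reporting both numbers separately, as the paper recommends, distinguishes a correctable calibration offset (trueness) from irreducible measurement noise (precision).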

  10. QUANTITATIVE DETECTION OF ENVIRONMENTALLY IMPORTANT DYES USING DIODE LASER/FIBER-OPTIC RAMAN

    EPA Science Inventory

    A compact diode laser/fiber-optic Raman spectrometer is used for quantitative detection of environmentally important dyes. This system is based on diode laser excitation at 782 nm, fiber optic probe technology, an imaging spectrometer, and a state-of-the-art scientific CCD camera. ...

  11. CCDs in the Mechanics Lab--A Competitive Alternative? (Part I).

    ERIC Educational Resources Information Center

    Pinto, Fabrizio

    1995-01-01

    Reports on the implementation of a relatively low-cost, versatile, and intuitive system to teach basic mechanics based on the use of a Charge-Coupled Device (CCD) camera and inexpensive image-processing and analysis software. Discusses strengths and limitations of CCD imaging technologies. (JRH)

  12. DOTD support for UTC project : traffic counting using existing video detection cameras, [research project capsule].

    DOT National Transportation Integrated Search

    2013-10-01

    This study will evaluate the video detection technologies currently adopted by the city : of Baton Rouge, LA, and DOTD with the purpose of establishing design guidelines based : on the detection needs, functionality, and cost. The study will also dev...

  13. Automatic food detection in egocentric images using artificial intelligence technology

    USDA-ARS?s Scientific Manuscript database

    Our objective was to develop an artificial intelligence (AI)-based algorithm which can automatically detect food items from images acquired by an egocentric wearable camera for dietary assessment. To study human diet and lifestyle, large sets of egocentric images were acquired using a wearable devic...

  14. Monocular camera/IMU/GNSS integration for ground vehicle navigation in challenging GNSS environments.

    PubMed

    Chu, Tianxing; Guo, Ningyan; Backén, Staffan; Akos, Dennis

    2012-01-01

    Low-cost MEMS-based IMUs, video cameras and portable GNSS devices are commercially available for automotive applications, and some manufacturers have already integrated such facilities into their vehicle systems. GNSS provides positioning, navigation and timing solutions to users worldwide. However, signal attenuation, reflections or blockages may give rise to positioning difficulties. As opposed to GNSS, a generic IMU, which is independent of electromagnetic wave reception, can calculate a high-bandwidth navigation solution; however, the output from a self-contained IMU accumulates errors over time. In addition, video cameras also possess great potential as alternate sensors in the navigation community, particularly in challenging GNSS environments, and are becoming more common as options in vehicles. To take advantage of these existing onboard technologies for ground vehicle navigation in challenging environments, this paper develops an integrated camera/IMU/GNSS system based on the extended Kalman filter (EKF). Our proposed integration architecture is examined using a live dataset collected in an operational traffic environment. The experimental results demonstrate that the proposed integrated system provides accurate estimations and potentially outperforms the tightly coupled GNSS/IMU integration in challenging environments with sparse GNSS observations.
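
    The core of such a fusion filter is the familiar predict/update cycle: the IMU drives the high-rate prediction, and intermittent GNSS fixes correct the drift. A deliberately simplified one-dimensional Kalman filter sketch of this idea (not the authors' EKF; the noise values q and r are hypothetical):

```python
def kf_step(x, P, accel, dt, z=None, q=0.1, r=4.0):
    """One predict (+ optional update) cycle for the state x = (pos, vel).

    accel: IMU-style acceleration input; z: GNSS-style position fix or None;
    q, r: hypothetical process and measurement noise variances.
    """
    pos, vel = x
    p00, p01, p10, p11 = P
    # Predict: dead-reckon with the acceleration input, F = [[1, dt], [0, 1]].
    pos += vel * dt + 0.5 * accel * dt * dt
    vel += accel * dt
    p00, p01, p10, p11 = (p00 + dt * (p01 + p10) + dt * dt * p11 + q,
                          p01 + dt * p11,
                          p10 + dt * p11,
                          p11 + q)
    if z is not None:
        # Update with the position fix, measurement model H = [1, 0].
        s = p00 + r                        # innovation covariance
        k0, k1 = p00 / s, p10 / s          # Kalman gain
        y = z - pos                        # innovation
        pos, vel = pos + k0 * y, vel + k1 * y
        p00, p01, p10, p11 = ((1 - k0) * p00, (1 - k0) * p01,
                              p10 - k1 * p00, p11 - k1 * p01)
    return (pos, vel), (p00, p01, p10, p11)

# One second of coasting, then a 5 m position fix pulls the estimate over
# and shrinks the position uncertainty.
x, P = kf_step((0.0, 0.0), (10.0, 0.0, 0.0, 10.0), accel=0.0, dt=1.0, z=5.0)
```

In the paper's full system the state additionally carries attitude and sensor biases, and the camera contributes its own measurement updates, but the predict/correct structure is the same.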

  15. Monocular Camera/IMU/GNSS Integration for Ground Vehicle Navigation in Challenging GNSS Environments

    PubMed Central

    Chu, Tianxing; Guo, Ningyan; Backén, Staffan; Akos, Dennis

    2012-01-01

    Low-cost MEMS-based IMUs, video cameras and portable GNSS devices are commercially available for automotive applications, and some manufacturers have already integrated such facilities into their vehicle systems. GNSS provides positioning, navigation and timing solutions to users worldwide. However, signal attenuation, reflections or blockages may give rise to positioning difficulties. As opposed to GNSS, a generic IMU, which is independent of electromagnetic wave reception, can calculate a high-bandwidth navigation solution; however, the output from a self-contained IMU accumulates errors over time. In addition, video cameras also possess great potential as alternate sensors in the navigation community, particularly in challenging GNSS environments, and are becoming more common as options in vehicles. To take advantage of these existing onboard technologies for ground vehicle navigation in challenging environments, this paper develops an integrated camera/IMU/GNSS system based on the extended Kalman filter (EKF). Our proposed integration architecture is examined using a live dataset collected in an operational traffic environment. The experimental results demonstrate that the proposed integrated system provides accurate estimations and potentially outperforms the tightly coupled GNSS/IMU integration in challenging environments with sparse GNSS observations. PMID:22736999

  16. An affordable wearable video system for emergency response training

    NASA Astrophysics Data System (ADS)

    King-Smith, Deen; Mikkilineni, Aravind; Ebert, David; Collins, Timothy; Delp, Edward J.

    2009-02-01

    Many emergency response units are currently faced with restrictive budgets that prohibit their use of advanced technology-based training solutions. Our work focuses on creating an affordable, mobile, state-of-the-art emergency response training solution through the integration of low-cost, commercially available products. The system we have developed consists of tracking, audio, and video capability, coupled with other sensors that can all be viewed through a unified visualization system. In this paper we focus on the video sub-system which helps provide real time tracking and video feeds from the training environment through a system of wearable and stationary cameras. These two camera systems interface with a management system that handles storage and indexing of the video during and after training exercises. The wearable systems enable the command center to have live video and tracking information for each trainee in the exercise. The stationary camera systems provide a fixed point of reference for viewing action during the exercise and consist of a small Linux based portable computer and mountable camera. The video management system consists of a server and database which work in tandem with a visualization application to provide real-time and after action review capability to the training system.

  17. Digital Cameras for Student Use.

    ERIC Educational Resources Information Center

    Simpson, Carol

    1997-01-01

    Describes the features, equipment and operations of digital cameras and compares three different digital cameras for use in education. Price, technology requirements, features, transfer software, and accessories for the Kodak DC25, Olympus D-200L and Casio QV-100 are presented in a comparison table. (AEF)

  18. Space infrared telescope facility wide field and diffraction limited array camera (IRAC)

    NASA Technical Reports Server (NTRS)

    Fazio, G. G.

    1986-01-01

    IRAC focal plane detector technology was developed and studies of alternate focal plane configurations were supported. While any of the alternate focal planes under consideration would have a major impact on the Infrared Array Camera, it was possible to proceed with detector development and optical analysis research based on the proposed design since, to a large degree, the studies undertaken are generic to any SIRTF imaging instrument. Development of the proposed instrument was also important in a situation in which none of the alternate configurations has received the approval of the Science Working Group.

  19. Single-photon sensitive fast ebCMOS camera system for multiple-target tracking of single fluorophores: application to nano-biophotonics

    NASA Astrophysics Data System (ADS)

    Cajgfinger, Thomas; Chabanat, Eric; Dominjon, Agnes; Doan, Quang T.; Guerin, Cyrille; Houles, Julien; Barbier, Remi

    2011-03-01

    Nano-biophotonics applications will benefit from new fluorescence microscopy methods based essentially on super-resolution techniques (beyond the diffraction limit) applied to large biological structures (membranes) at fast frame rates (1000 Hz). This trend pushes photon detectors toward the single-photon counting regime and camera acquisition systems toward real-time, dynamic, multiple-target tracking. The LUSIPHER prototype presented in this paper takes a different approach from Electron-Multiplied CCD (EMCCD) technology and aims to answer the stringent demands of the new nano-biophotonics imaging techniques. The electron-bombarded CMOS (ebCMOS) device has the potential to meet this challenge, thanks to the linear gain of the accelerating high voltage of the photocathode, the possible ultra-fast frame rate of CMOS sensors, and its single-photon sensitivity. We produced a camera system based on a 640 kPixel ebCMOS together with its acquisition system. The proof of concept of single-photon-based tracking of multiple single emitters is the main result of this paper.
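
    At its core, multiple-target tracking of single emitters links detections from one frame to the next. A minimal nearest-neighbour association sketch with hypothetical coordinates (real trackers add gating, track birth/death handling and motion models):

```python
def link_frames(prev, curr, max_dist=5.0):
    """Greedily match each previous position to its nearest current detection
    within max_dist; returns {prev_index: curr_index}."""
    links, taken = {}, set()
    for i, (px, py) in enumerate(prev):
        best, best_d2 = None, max_dist ** 2
        for j, (cx, cy) in enumerate(curr):
            if j in taken:
                continue
            d2 = (px - cx) ** 2 + (py - cy) ** 2
            if d2 < best_d2:
                best, best_d2 = j, d2
        if best is not None:
            links[i] = best
            taken.add(best)
    return links

# Two emitters drifting slightly between consecutive frames:
frame0 = [(10.0, 10.0), (40.0, 12.0)]
frame1 = [(41.0, 12.5), (10.5, 9.5)]
links = link_frames(frame0, frame1)
```

Each emitter is matched to its displaced counterpart even though the detection order differs between frames; at kilohertz frame rates the inter-frame displacement is small, which is what makes this simple association viable.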

  20. Public Speaking Anxiety: Comparing Face-to-Face and Web-Based Speeches

    ERIC Educational Resources Information Center

    Campbell, Scott; Larson, James

    2013-01-01

    This study determines whether students experience a different level of anxiety when giving a speech to a group of people in a traditional face-to-face classroom setting than when giving a speech to an audience (visible on a projected screen) through a camera using distance or web-based technology. The study included approximately 70 students.…

  1. Laser Research

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Eastman Kodak Company, Rochester, New York is a broad-based firm which produces photographic apparatus and supplies, fibers, chemicals and vitamin concentrates. Much of the company's research and development effort is devoted to photographic science and imaging technology, including laser technology. Eastman Kodak is using a COSMIC computer program called LACOMA in the analysis of laser optical systems and camera design studies. The company reports that use of the program has provided development time savings and reduced computer service fees.

  2. Contactless physiological signals extraction based on skin color magnification

    NASA Astrophysics Data System (ADS)

    Suh, Kun Ha; Lee, Eui Chul

    2017-11-01

    Although the human visual system is not sufficiently sensitive to perceive blood circulation, blood flow driven by cardiac activity causes slight changes on human skin surfaces. With advances in imaging technology, it has become possible to capture these changes with digital cameras. However, it is difficult to obtain clear physiological signals from such changes because of their fineness and noise factors such as motion artifacts and camera sensing disturbances. We propose a method for extracting physiological signals of improved quality from skin-color videos recorded with a remote RGB camera. The results showed that our skin color magnification method remarkably reveals the hidden physiological components in the time-series signal. A Korea Food and Drug Administration-approved heart rate monitor was used to verify that the resulting signal is synchronized with the actual cardiac pulse, and comparisons of signal peaks showed correlation coefficients of almost 1.0. In particular, our method can serve as an effective preprocessing step before additional postfiltering techniques are applied to improve accuracy in image-based physiological signal extraction.
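
    The general camera-based pulse-extraction pipeline this abstract builds on is typically: average a color channel over a skin region per frame, remove slow illumination drift, then estimate the pulse rate from the oscillation. A toy sketch of that pipeline on synthetic data (an illustration of the general approach, not the authors' skin color magnification method):

```python
import math

def mean_green(frame):
    """Per-frame skin signal: mean green value; frame is a list of (r, g, b)."""
    return sum(g for _, g, _ in frame) / len(frame)

def detrend(signal, half_window=15):
    """Subtract a moving-average baseline to remove slow illumination drift."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half_window), min(len(signal), i + half_window + 1)
        out.append(signal[i] - sum(signal[lo:hi]) / (hi - lo))
    return out

def pulse_rate_bpm(signal, fps):
    """Estimate pulse rate from positive-going zero crossings of the signal."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a < 0 <= b)
    return 60.0 * crossings / (len(signal) / fps)

# Synthetic test: a 1.5 Hz (90 bpm) pulse riding on a slow drift, 30 fps, 10 s.
fps = 30
raw = [0.5 * math.sin(2 * math.pi * 1.5 * t / fps) + 0.01 * t
       for t in range(300)]
bpm = pulse_rate_bpm(detrend(raw), fps)
```

The estimate lands near 90 bpm once the drift is removed; on real video the signal is far noisier, which is where magnification and postfiltering earn their keep.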

  3. Design of video interface conversion system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller serves as the information-interaction control unit between the FPGA and a PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data-stream de-interleaving and de-interlacing, color space conversion and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from a CCD camera into Low Voltage Differential Signaling (LVDS), which is then collected by a video processing unit with a Camera Link interface. The processed video signals are then fed to the system output board and displayed on the monitor. The current experiment shows that the system achieves high-quality video conversion with minimal board size.

  4. Visual object recognition for mobile tourist information systems

    NASA Astrophysics Data System (ADS)

    Paletta, Lucas; Fritz, Gerald; Seifert, Christin; Luley, Patrick; Almer, Alexander

    2005-03-01

    We describe a mobile vision system that is capable of automated object identification using images captured from a PDA or a camera phone. We present a solution for the enabling technology of outdoor vision-based object recognition that will extend state-of-the-art location- and context-aware services towards object-based awareness in urban environments. In the proposed application scenario, tourist pedestrians are equipped with GPS, W-LAN and a camera attached to a PDA or a camera phone. They are interested in whether their field of view contains tourist sights that would point to more detailed information. Multimedia data about related history, architecture, or other cultural context of historic or artistic relevance might be explored by a mobile user intending to learn within the urban environment. Learning from ambient cues is achieved by pointing the device towards an urban sight, capturing an image, and consequently getting information about the object on site and within the focus of attention, i.e., the user's current field of view.

  5. Person and gesture tracking with smart stereo cameras

    NASA Astrophysics Data System (ADS)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are well suited to provide the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo camera platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.

  6. Clinical usefulness of augmented reality using infrared camera based real-time feedback on gait function in cerebral palsy: a case study.

    PubMed

    Lee, Byoung-Hee

    2016-04-01

    [Purpose] This study investigated the effects of real-time feedback using infrared camera recognition technology-based augmented reality in gait training for children with cerebral palsy. [Subjects] Two subjects with cerebral palsy were recruited. [Methods] In this study, augmented reality based real-time feedback training was conducted for the subjects in two 30-minute sessions per week for four weeks. Spatiotemporal gait parameters were used to measure the effect of augmented reality-based real-time feedback training. [Results] Velocity, cadence, bilateral step and stride length, and functional ambulation improved after the intervention in both cases. [Conclusion] Although additional follow-up studies of the augmented reality based real-time feedback training are required, the results of this study demonstrate that it improved the gait ability of two children with cerebral palsy. These findings suggest a variety of applications of conservative therapeutic methods which require future clinical trials.
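
    The spatiotemporal gait parameters reported above (velocity, cadence, step and stride length) reduce to simple arithmetic once heel-strike events and the walked distance are available. A sketch with hypothetical event data, not the study's measurements:

```python
def gait_parameters(heel_strike_times_s, distance_m):
    """Derive cadence, velocity and mean step length from heel-strike times
    recorded over a walkway of known length."""
    steps = len(heel_strike_times_s) - 1                 # intervals = steps
    duration = heel_strike_times_s[-1] - heel_strike_times_s[0]
    cadence_spm = 60.0 * steps / duration                # steps per minute
    velocity_ms = distance_m / duration                  # metres per second
    step_length_m = distance_m / steps                   # mean step length
    return cadence_spm, velocity_ms, step_length_m

# Six heel strikes while covering a 3 m walkway in 3 s:
times = [0.0, 0.6, 1.2, 1.8, 2.4, 3.0]
cadence, velocity, step_len = gait_parameters(times, 3.0)
```

An infrared-camera feedback system provides exactly these event timestamps, which is why improvements in the intervention show up directly in these derived parameters.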

  7. Comparison of parameters of modern cooled and uncooled thermal cameras

    NASA Astrophysics Data System (ADS)

    Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał

    2017-10-01

    During the design of a system employing thermal cameras, one always faces the problem of choosing the camera types best suited for the task. In many cases such a choice is far from optimal, and there are several reasons for that. System designers often favor tried-and-tested solutions they are used to; they do not follow the latest developments in the field of infrared technology, and sometimes their choices are based on prejudice rather than facts. The paper presents the results of measurements of the basic parameters of MWIR and LWIR thermal cameras, carried out in a specialized testing laboratory. The measured parameters are decisive for the image quality generated by thermal cameras. All measurements were conducted according to current procedures and standards. However, the camera settings were not optimized for specific test conditions or parameter measurements; instead, the settings used in normal camera operation were applied, to obtain realistic camera performance figures. For example, there were significant differences between the measured values of noise parameters and the catalogue data provided by manufacturers, due to the application of edge-detection filters to increase detection and recognition ranges. The purpose of this paper is to help in choosing the optimal thermal camera for a particular application, answering the question of whether to opt for a cheaper microbolometer device or a slightly better (in terms of specifications) yet more expensive cooled unit. Measurements and analysis were performed by qualified personnel with several dozen years of experience in both designing and testing thermal camera systems with cooled and uncooled focal plane arrays. Cameras of similar array sizes and optics were compared, and for each tested group the best-performing devices were selected.

  8. Direct Reflectance Measurements from Drones: Sensor Absolute Radiometric Calibration and System Tests for Forest Reflectance Characterization.

    PubMed

    Hakala, Teemu; Markelin, Lauri; Honkavaara, Eija; Scott, Barry; Theocharous, Theo; Nevalainen, Olli; Näsi, Roope; Suomalainen, Juha; Viljanen, Niko; Greenwell, Claire; Fox, Nigel

    2018-05-03

    Drone-based remote sensing has evolved rapidly in recent years. Miniaturized hyperspectral imaging sensors are becoming more common, as they provide more abundant information about the object than traditional cameras. Reflectance is a physically defined object property and is therefore often the preferred output of remote sensing data capture for use in further processing. Absolute calibration of the sensor makes physical modelling of the imaging process possible and enables efficient procedures for reflectance correction. Our objective is to develop a method for direct reflectance measurements for drone-based remote sensing, based on an imaging spectrometer and an irradiance spectrometer. This approach is highly attractive for many practical applications, as it does not require in situ reflectance panels for converting sensor radiance to ground reflectance factors. We performed SI-traceable spectral and radiance calibration of a tuneable Fabry-Pérot Interferometer-based (FPI) hyperspectral camera at the National Physical Laboratory (NPL, Teddington, UK). The camera represents novel technology, collecting 2D-format hyperspectral image cubes using a time-sequential spectral scanning principle. The radiance accuracy of different channels varied within ±4% when evaluated using independent test data, and the linearity of the camera response was on average 0.9994. The spectral response calibration showed side peaks on several channels, due to the multiple orders of interference of the FPI. The drone-based direct reflectance measurement system showed promising results with imagery collected over Wytham Forest (Oxford, UK).
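
    The "direct reflectance" idea converts calibrated at-sensor radiance to a reflectance factor using a co-located irradiance measurement instead of ground reference panels. Under a Lambertian assumption, the standard per-band conversion is R = pi * L / E; a sketch with hypothetical band values (not the paper's data):

```python
import math

def reflectance_factor(radiance, irradiance):
    """Reflectance factor for a Lambertian target: R = pi * L / E."""
    return math.pi * radiance / irradiance

# Hypothetical values for one spectral band: radiance L from the calibrated
# imaging spectrometer, irradiance E from the onboard irradiance spectrometer.
L = 0.05   # W m^-2 sr^-1 nm^-1
E = 1.2    # W m^-2 nm^-1
r = reflectance_factor(L, E)
```

Because both L and E are measured on the drone at the same instant, changing illumination (e.g., passing clouds) cancels out of the ratio, which is what makes the panel-free workflow attractive.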

  9. Direct Reflectance Measurements from Drones: Sensor Absolute Radiometric Calibration and System Tests for Forest Reflectance Characterization

    PubMed Central

    Hakala, Teemu; Scott, Barry; Theocharous, Theo; Näsi, Roope; Suomalainen, Juha; Greenwell, Claire; Fox, Nigel

    2018-01-01

    Drone-based remote sensing has evolved rapidly in recent years. Miniaturized hyperspectral imaging sensors are becoming more common, as they provide more abundant information about the object than traditional cameras. Reflectance is a physically defined object property and is therefore often the preferred output of remote sensing data capture for use in further processing. Absolute calibration of the sensor makes physical modelling of the imaging process possible and enables efficient procedures for reflectance correction. Our objective is to develop a method for direct reflectance measurements for drone-based remote sensing, based on an imaging spectrometer and an irradiance spectrometer. This approach is highly attractive for many practical applications, as it does not require in situ reflectance panels for converting sensor radiance to ground reflectance factors. We performed SI-traceable spectral and radiance calibration of a tuneable Fabry-Pérot Interferometer-based (FPI) hyperspectral camera at the National Physical Laboratory (NPL, Teddington, UK). The camera represents novel technology, collecting 2D-format hyperspectral image cubes using a time-sequential spectral scanning principle. The radiance accuracy of different channels varied within ±4% when evaluated using independent test data, and the linearity of the camera response was on average 0.9994. The spectral response calibration showed side peaks on several channels, due to the multiple orders of interference of the FPI. The drone-based direct reflectance measurement system showed promising results with imagery collected over Wytham Forest (Oxford, UK). PMID:29751560

  10. Securing quality of camera-based biomedical optics

    NASA Astrophysics Data System (ADS)

    Guse, Frank; Kasper, Axel; Zinter, Bob

    2009-02-01

    As sophisticated optical imaging technologies move into clinical applications, manufacturers need to guarantee that their products meet required performance criteria over long lifetimes and in very different environmental conditions. Consistent quality management tracks critical component features derived from end-user requirements in a top-down approach. Careful risk analysis in the design phase defines the sample sizes for production tests, whereas first-article inspection assures the reliability of the production processes. We demonstrate the application of these basic quality principles to camera-based biomedical optics for a variety of examples, including molecular diagnostics, dental imaging, ophthalmology, and digital radiography, covering a wide range of CCD/CMOS chip sizes and resolutions. Novel concepts in fluorescence detection and structured illumination are also highlighted.

  11. Measurement of marine picoplankton cell size by using a cooled, charge-coupled device camera with image-analyzed fluorescence microscopy.

    PubMed Central

    Viles, C L; Sieracki, M E

    1992-01-01

    Accurate measurement of the biomass and size distribution of picoplankton cells (0.2 to 2.0 microns) is paramount in characterizing their contribution to the oceanic food web and global biogeochemical cycling. Image-analyzed fluorescence microscopy, usually based on video camera technology, allows detailed measurements of individual cells to be taken. The application of an imaging system employing a cooled, slow-scan charge-coupled device (CCD) camera to automated counting and sizing of individual picoplankton cells from natural marine samples is described. A slow-scan CCD-based camera was compared to a video camera and was superior for detecting and sizing very small, dim particles such as fluorochrome-stained bacteria. Several edge detection methods for accurately measuring picoplankton cells were evaluated. Standard fluorescent microspheres and a Sargasso Sea surface water picoplankton population were used in the evaluation. Global thresholding was inappropriate for these samples. Methods used previously in image analysis of nanoplankton cells (2 to 20 microns) also did not work well with the smaller picoplankton cells. A method combining an edge detector and an adaptive edge strength operator worked best for rapidly generating accurate cell sizes. A complete sample analysis of more than 1,000 cells averages about 50 min and yields size, shape, and fluorescence data for each cell. With this system, the entire size range of picoplankton can be counted and measured. Images PMID:1610183

  12. Meeting Challenges of the '90s.

    ERIC Educational Resources Information Center

    Smith, Jamie

    1993-01-01

    Describes three new technological devices and possible educational applications: (1) Canon's Xapshot Camera that records photographs as digitized information on disk to be viewed on television, videotapes, or computers; (2) Kodak's Photo CD Player, that stores photographs to be viewed on a CD player; and (3) Apple's Pen-Based Pocket Computer. (LRW)

  13. Surveillance Jumps on the Network

    ERIC Educational Resources Information Center

    Raths, David

    2011-01-01

    Internet protocol (IP) network-based cameras and digital video management software are maturing, and many issues that have surrounded them, including bandwidth, data storage, ease of use, and integration are starting to become clearer as the technology continues to evolve. Prices are going down and the number of features is going up. Many school…

  14. Application of infrared camera to bituminous concrete pavements: measuring vehicle

    NASA Astrophysics Data System (ADS)

    Janků, Michal; Stryk, Josef

    2017-09-01

    Infrared thermography (IR) has been used for decades in certain fields, but the technological level of measuring devices has not been sufficient for some applications. In recent years, good-quality thermal cameras with high resolution and very high thermal sensitivity have started to appear on the market. This development in measuring technology has opened infrared thermography to new fields and to a larger number of users. This article describes research in progress at the Transport Research Centre focused on the use of infrared thermography for diagnostics of bituminous road pavements. A measuring vehicle equipped with a thermal camera, a digital camera, and a GPS sensor was designed for pavement diagnostics. New, highly sensitive thermal cameras make it possible to measure very small temperature differences from a moving vehicle. This study shows the potential of high-speed inspection without lane closures using IR thermography.

  15. First experience with THE AUTOLAP™ SYSTEM: an image-based robotic camera steering device.

    PubMed

    Wijsman, Paul J M; Broeders, Ivo A M J; Brenkman, Hylke J; Szold, Amir; Forgione, Antonello; Schreuder, Henk W R; Consten, Esther C J; Draaisma, Werner A; Verheijen, Paul M; Ruurda, Jelle P; Kaufman, Yuval

    2018-05-01

    Robotic camera holders for endoscopic surgery have been available for 20 years, but market penetration is low. Current camera holders are controlled by voice, joystick, eyeball tracking, or head movements; this type of steering has proven successful, but excessive disturbance of the surgical workflow has blocked widespread introduction. The AutoLap™ system (MST, Israel) uses a radically different steering concept based on image analysis, which may improve acceptance through smooth, interactive, and fast steering. These two studies were conducted to prove safe and efficient performance of the core technology. A total of 66 laparoscopic procedures were performed with the AutoLap™ by nine experienced surgeons in two multi-center studies: 41 cholecystectomies, 13 fundoplications including hiatal hernia repair, 4 endometriosis surgeries, 2 inguinal hernia repairs, and 6 (bilateral) salpingo-oophorectomies. The use of the AutoLap™ system was evaluated in terms of safety, image stability, setup and procedural time, accuracy of image-based movements, and user satisfaction. Surgical procedures were completed with the AutoLap™ system in 64 cases (97%). The mean overall setup time of the AutoLap™ system was 4 min (04:08 ± 0.10). Procedure times were not prolonged by use of the system when compared to the literature average. Reported user satisfaction was 3.85 and 3.96 on a scale of 1 to 5 in the two studies. More than 90% of the image-based movements were accurate. No system-related adverse events were recorded while using the system. Safe and efficient use of the core technology of the AutoLap™ system was demonstrated, with high image stability and good surgeon satisfaction. The results support further clinical studies focusing on usability, improved ergonomics, and additional image-based features.

  16. C-RED One and C-RED2: SWIR high-performance cameras using Saphira e-APD and Snake InGaAs detectors

    NASA Astrophysics Data System (ADS)

    Gach, Jean-Luc; Feautrier, Philippe; Stadler, Eric; Clop, Fabien; Lemarchand, Stephane; Carmignani, Thomas; Wanwanscappel, Yann; Boutolleau, David

    2018-02-01

    After the development of the OCAM2 EMCCD fast visible camera dedicated to advanced adaptive optics wavefront sensing, First Light Imaging moved to the SWIR fast cameras with the development of the C-RED One and the C-RED 2 cameras. First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with a subelectron readout noise and very low background. C-RED One is based on the last version of the SAPHIRA detector developed by Leonardo UK. This breakthrough has been made possible thanks to the use of an e-APD infrared focal plane array which is a real disruptive technology in imagery. C-RED One is an autonomous system with an integrated cooling system and a vacuum regeneration system. It operates its sensor with a wide variety of read out techniques and processes video on-board thanks to an FPGA. We will show its performances and expose its main features. In addition to this project, First Light Imaging developed an InGaAs 640x512 fast camera with unprecedented performances in terms of noise, dark and readout speed based on the SNAKE SWIR detector from Sofradir. The camera was called C-RED 2. The C-RED 2 characteristics and performances will be described. The C-RED One project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944. The C-RED 2 development is supported by the "Investments for the future" program and the Provence Alpes Côte d'Azur Region, in the frame of the CPER.

  17. Imaging Emission Spectra with Handheld and Cellphone Cameras

    NASA Astrophysics Data System (ADS)

    Sitar, David

    2012-12-01

    As point-and-shoot digital camera technology advances, it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1-megapixel (MP) auto-focusing Canon point-and-shoot digital camera and two different cellphone cameras: one at 6.1 MP and the other at 5.1 MP.

  18. Students' Framing of Laboratory Exercises Using Infrared Cameras

    ERIC Educational Resources Information Center

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-01-01

    Thermal science is challenging for students due to its largely imperceptible nature. Handheld infrared cameras offer a pedagogical opportunity for students to see otherwise invisible thermal phenomena. In the present study, a class of upper secondary technology students (N = 30) partook in four IR-camera laboratory activities, designed around the…

  19. Use of wildlife webcams - Literature review and annotated bibliography

    USGS Publications Warehouse

    Ratz, Joan M.; Conk, Shannon J.

    2010-01-01

    The U.S. Fish and Wildlife Service National Conservation Training Center requested a literature review product that would serve as a resource for natural resource professionals interested in using webcams to connect people with nature. The literature review focused on the effects on the public of viewing wildlife through webcams and on information regarding the installation and use of webcams. We searched the peer-reviewed, published literature for three topics: wildlife cameras, virtual tourism, and technological nature. Very few publications directly addressed the effect of viewing wildlife webcams. The review of information on installation and use of cameras yielded information about many aspects of remote photography, but not much specifically regarding webcams. Aspects of wildlife camera use covered in the literature review include camera options, image retrieval, system maintenance and monitoring, time to assemble, power source, light source, camera mount, frequency of image recording, consequences for animals, and equipment security. Webcam technology is relatively new, and more published work on its use is needed. Future research should specifically study the effect that viewing wildlife through webcams has on viewers' conservation attitudes, behaviors, and sense of connectedness to nature.

  20. Backing collisions: a study of drivers' eye and backing behaviour using combined rear-view camera and sensor systems.

    PubMed

    Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe

    2010-04-01

    Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 years result in death. Technology for assisting drivers when backing has had limited success in preventing backing crashes. Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor camera system; controls were not. Three crash scenarios were introduced. Setting: a parking facility at UMass Amherst, USA. Subjects: 46 drivers (33 men, 13 women), average age 29 years, who were Massachusetts residents licensed within the USA for an average of 9.3 years. Interventions: vehicles equipped with a rear-view camera and sensor system-based parking aid. Outcome measures: subjects' eye fixations while driving and researchers' observation of collisions with objects during backing. Only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system.

  1. Backing collisions: a study of drivers’ eye and backing behaviour using combined rear-view camera and sensor systems

    PubMed Central

    Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe

    2012-01-01

    Context: Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 years result in death. Technology for assisting drivers when backing has had limited success in preventing backing crashes. Objectives: Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? Design: 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor camera system; controls were not. Three crash scenarios were introduced. Setting: Parking facility at UMass Amherst, USA. Subjects: 46 drivers (33 men, 13 women), average age 29 years, who were Massachusetts residents licensed within the USA for an average of 9.3 years. Interventions: Vehicles equipped with a rear-view camera and sensor system-based parking aid. Main Outcome Measures: Subjects' eye fixations while driving and researchers' observation of collisions with objects during backing. Results: Only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. Conclusions: This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system. PMID:20363812

  2. A new mapping function in table-mounted eye tracker

    NASA Astrophysics Data System (ADS)

    Tong, Qinqin; Hua, Xiao; Qiu, Jian; Luo, Kaiqing; Peng, Li; Han, Peng

    2018-01-01

    The eye tracker is a relatively new human-computer interaction apparatus that has attracted much attention in recent years. Eye-tracking technology obtains the subject's current direction of visual attention (gaze) using mechanical, electronic, optical, image-processing, and other means of detection. The mapping function is one of the key elements of the image processing and determines the accuracy of the whole eye-tracker system. In this paper, we present a new mapping model based on the relationship among the eyes, the camera, and the screen being gazed at. First, from the geometrical relationship among the eyes, the camera, and the screen, the framework of the mapping function between the pupil center and the screen coordinates is constructed. Second, to simplify the vector inversion in the mapping function, the coordinates of the eyes, the camera, and the screen are modeled as coaxial systems. A corresponding experiment was carried out to verify the mapping function, and it was compared with the traditional quadratic polynomial function. The results show that our approach improves the accuracy of gaze-point determination. Compared with other methods, this mapping function is simple and valid.
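    The traditional quadratic polynomial mapping that this paper compares against can be illustrated by fitting screen coordinates as a second-order polynomial of pupil-center coordinates over a calibration grid. A sketch with synthetic data; the grid, coefficients, and units are hypothetical, and this is not the authors' coaxial model:

    ```python
    import numpy as np

    def quad_features(px, py):
        # Second-order polynomial basis of pupil-center coordinates
        return np.column_stack([np.ones_like(px), px, py, px*py, px**2, py**2])

    # Synthetic 9-point calibration grid (hypothetical normalized pupil coords)
    px, py = np.meshgrid(np.linspace(-1, 1, 3), np.linspace(-1, 1, 3))
    px, py = px.ravel(), py.ravel()
    sx = 960 + 400*px + 15*px**2        # synthetic "ground-truth" screen x (pixels)
    sy = 540 + 300*py + 10*px*py        # synthetic "ground-truth" screen y (pixels)

    A = quad_features(px, py)
    wx, *_ = np.linalg.lstsq(A, sx, rcond=None)  # per-axis least-squares fit
    wy, *_ = np.linalg.lstsq(A, sy, rcond=None)

    # Map a new pupil position to a screen coordinate
    gx = quad_features(np.array([0.5]), np.array([-0.2])) @ wx
    ```

    With nine calibration points and six basis terms the fit is overdetermined, which is the usual arrangement for this kind of gaze calibration.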

  3. Development of a Remote Accessibility Assessment System through three-dimensional reconstruction technology.

    PubMed

    Kim, Jong Bae; Brienza, David M

    2006-01-01

    A Remote Accessibility Assessment System (RAAS) that uses three-dimensional (3-D) reconstruction technology is being developed; it enables clinicians to assess the wheelchair accessibility of users' built environments from a remote location. The RAAS uses commercial software to construct 3-D virtualized environments from photographs. We developed custom screening algorithms and instruments for analyzing accessibility. Characteristics of the camera and 3-D reconstruction software chosen for the system significantly affect its overall reliability. In this study, we performed an accuracy assessment to verify that commercial hardware and software can construct accurate 3-D models by analyzing the accuracy of dimensional measurements in a virtual environment and a comparison of dimensional measurements from 3-D models created with four cameras/settings. Based on these two analyses, we were able to specify a consumer-grade digital camera and PhotoModeler (EOS Systems, Inc, Vancouver, Canada) software for this system. Finally, we performed a feasibility analysis of the system in an actual environment to evaluate its ability to assess the accessibility of a wheelchair user's typical built environment. The field test resulted in an accurate accessibility assessment and thus validated our system.

  4. Evaluation of a stereoscopic camera-based three-dimensional viewing workstation for ophthalmic surgery.

    PubMed

    Bhadri, Prashant R; Rowley, Adrian P; Khurana, Rahul N; Deboer, Charles M; Kerns, Ralph M; Chong, Lawrence P; Humayun, Mark S

    2007-05-01

    To evaluate the effectiveness of a prototype stereoscopic camera-based viewing system (Digital Microsurgical Workstation, three-dimensional (3D) Vision Systems, Irvine, California, USA) for anterior and posterior segment ophthalmic surgery. Institution-based prospective study. Anterior and posterior segment surgeons performed designated standardized tasks on porcine eyes after training on prosthetic plastic eyes. Both anterior and posterior segment surgeons were able to complete tasks requiring minimal or moderate stereoscopic viewing. The results indicate that the system provides improved ergonomics. Improvements in key viewing performance areas would further enhance its value over a conventional operating microscope. The performance of the prototype system is not on par with the planned commercial system. With continued development of this technology, the three-dimensional system may become a novel viewing system in ophthalmic surgery with improved ergonomics relative to traditional microscopic viewing.

  5. Implementation and Evaluation of a Mobile Mapping System Based on Integrated Range and Intensity Images for Traffic Signs Localization

    NASA Astrophysics Data System (ADS)

    Shahbazi, M.; Sattari, M.; Homayouni, S.; Saadatseresht, M.

    2012-07-01

    Recent advances in positioning techniques have made it possible to develop Mobile Mapping Systems (MMS) for detection and 3D localization of various objects from a moving platform. Meanwhile, automatic traffic sign recognition from an equipped mobile platform has recently been a challenging issue for both intelligent transportation and municipal database collection. However, there are several inevitable problems inherent to all recognition methods that rely entirely on passive chromatic or grayscale images. This paper presents the implementation and evaluation of an operational MMS. Distinct from others, the developed MMS comprises one range camera based on Photonic Mixer Device (PMD) technology and one standard 2D digital camera. The system uses algorithms to detect, recognize, and localize traffic signs by fusing shape, color, and object information from both range and intensity images. In the calibration stage, a self-calibration method based on integrated bundle adjustment via a joint setup with the digital camera is applied for PMD camera calibration. As a result, independent accuracy assessments show an improvement of 83% in RMS of range error and 72% in RMS of coordinate residuals for the PMD camera over that achieved with basic calibration. Furthermore, conventional photogrammetric techniques based on controlled network adjustment are utilized for platform calibration. Likewise, the well-known Extended Kalman Filter (EKF) is applied to integrate the navigation sensors, namely GPS and INS. The overall acquisition system, along with the proposed techniques, achieves 90% true-positive recognition and an average 3D positioning accuracy of 12 centimetres.
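    The GPS/INS integration mentioned in the abstract can be illustrated with a linear Kalman filter in one dimension, where INS acceleration drives the prediction step and GPS position fixes drive the correction step. The paper uses the full Extended Kalman Filter on real sensors; all matrices and noise values below are hypothetical:

    ```python
    import numpy as np

    dt = 0.1
    F = np.array([[1, dt], [0, 1]])      # constant-velocity state transition
    B = np.array([[0.5*dt**2], [dt]])    # INS acceleration enters as control input
    H = np.array([[1.0, 0.0]])           # GPS observes position only
    Q = 1e-3 * np.eye(2)                 # process noise (assumed)
    R = np.array([[4.0]])                # GPS noise variance, ~2 m std (assumed)

    x = np.zeros((2, 1))                 # state: [position, velocity]
    P = np.eye(2)

    def kf_step(x, P, accel, gps_pos):
        # Predict using the INS acceleration measurement
        x = F @ x + B * accel
        P = F @ P @ F.T + Q
        # Correct using the GPS position fix
        y = np.array([[gps_pos]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P

    # Synthetic run: constant 1 m/s^2 acceleration, noiseless GPS for clarity
    for t in range(50):
        truth = 0.5 * 1.0 * ((t + 1) * dt) ** 2
        x, P = kf_step(x, P, 1.0, truth)
    ```

    With consistent, noise-free inputs the filter tracks the true trajectory; in practice the EKF version handles the nonlinear navigation equations and real sensor noise.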

  7. Visibility of children behind 2010-2013 model year passenger vehicles using glances, mirrors, and backup cameras and parking sensors.

    PubMed

    Kidd, David G; Brethwaite, Andrew

    2014-05-01

    This study identified the areas behind vehicles where younger and older children are not visible and measured the extent to which vehicle technologies improve visibility. Rear visibility of targets simulating the heights of a 12-15-month-old, a 30-36-month-old, and a 60-72-month-old child was assessed in 21 2010-2013 model year passenger vehicles with a backup camera or a backup camera plus parking sensor system. The average blind zone for a 12-15-month-old was twice as large as it was for a 60-72-month-old. Large SUVs had the worst rear visibility and small cars had the best. Increases in rear visibility provided by backup cameras were larger than the non-visible areas detected by parking sensors, but parking sensors detected objects in areas near the rear of the vehicle that were not visible in the camera or other fields of view. Overall, backup cameras and backup cameras plus parking sensors reduced the blind zone by around 90 percent on average and have the potential to prevent backover crashes if drivers use the technology appropriately. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Computer vision research with new imaging technology

    NASA Astrophysics Data System (ADS)

    Hou, Guangqi; Liu, Fei; Sun, Zhenan

    2015-12-01

    Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods perform well only on strongly textured surfaces; the depth map contains numerous holes and large ambiguities in textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. The depth map is then estimated through the epipolar plane image (EPI) method. Finally, a high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera in different poses.

  9. An evaluation of video cameras for collecting observational data on sanctuary-housed chimpanzees (Pan troglodytes).

    PubMed

    Hansen, Bethany K; Fultz, Amy L; Hopper, Lydia M; Ross, Stephen R

    2018-05-01

    Video cameras are increasingly being used to monitor captive animals in zoo, laboratory, and agricultural settings. This technology may also be useful in sanctuaries with large and/or complex enclosures. However, the cost of camera equipment and a lack of formal evaluations regarding the use of cameras in sanctuary settings make it challenging for facilities to decide whether and how to implement this technology. To address this, we evaluated the feasibility of using a video camera system to monitor chimpanzees at Chimp Haven. We viewed a group of resident chimpanzees in a large forested enclosure and compared observations collected in person and with remote video cameras. We found that via camera, the observer viewed fewer chimpanzees in some outdoor locations (GLMM post hoc test: est. = 1.4503, SE = 0.1457, Z = 9.951, p < 0.001) and identified a lower proportion of chimpanzees (GLMM post hoc test: est. = -2.17914, SE = 0.08490, Z = -25.666, p < 0.001) compared to in-person observations. However, the observer could view the 2 ha enclosure 15 times faster by camera compared to in person. In addition to these results, we provide recommendations to animal facilities considering the installation of a video camera system. Despite some limitations of remote monitoring, we posit that there are substantial benefits of using camera systems in sanctuaries to facilitate animal care and observational research. © 2018 Wiley Periodicals, Inc.

  10. Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory

    PubMed Central

    Vega, Julio; Perdices, Eduardo; Cañas, José M.

    2013-01-01

    Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. PMID:23337333

  11. Thermal feature extraction of servers in a datacenter using thermal image registration

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan

    2017-09-01

    Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.
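    The standardization step this abstract describes can be illustrated by z-scoring each server's temperature patch, which removes dependence on the absolute offset and gain of the camera response before features are extracted. An assumption-laden sketch, not the authors' exact procedure; the patch values are hypothetical:

    ```python
    import numpy as np

    def standardize_patch(patch):
        """Z-score a server's thermal patch so downstream features are
        invariant to offset/gain changes in the camera response
        (illustrative standardization, not the paper's method)."""
        mu, sigma = patch.mean(), patch.std()
        return (patch - mu) / sigma if sigma > 0 else patch - mu

    # Hypothetical 4x4 temperature patch for one server (degrees C)
    patch = np.array([[30.0, 31, 32, 35],
                      [30, 33, 36, 40],
                      [31, 34, 38, 45],
                      [31, 33, 37, 42]])
    z = standardize_patch(patch)
    ```

    After standardization every patch has zero mean and unit variance, so a hot spot is characterized by its relative pattern rather than absolute readings.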

  12. An evolution of technologies and applications of gamma imagers in the nuclear cycle industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalil, R. A.; Carrel, F.; Menaa, N.

    The tracking of radiation contamination and distribution has become a high priority in the nuclear cycle industry in order to respect the ALARA principle, a main challenge during decontamination and dismantling activities. To support this need, AREVA/CANBERRA and CEA LIST have been actively carrying out research and development on a gamma-radiation imager. In this paper we present the new generation of gamma camera, called GAMPIX. The system is based on the Timepix chip, hybridized with a CdTe substrate. A coded mask can be used to increase the sensitivity of the camera. Moreover, thanks to the USB connection with a standard computer, this gamma camera is immediately operational and user-friendly. The final system is a very compact gamma camera (global weight less than 1 kg without any shielding) which can be used as a hand-held device for radioprotection purposes. In this article, we present the main characteristics of this new generation of gamma camera and report experimental results obtained during in situ measurements. Although these are preliminary results, the final product is in the industrialization phase to address various application specifications. (authors)

  13. Hybrid motion sensing and experimental modal analysis using collocated smartphone camera and accelerometers

    NASA Astrophysics Data System (ADS)

    Ozer, Ekin; Feng, Dongming; Feng, Maria Q.

    2017-10-01

    State-of-the-art multisensory technologies and heterogeneous sensor networks offer a wide range of response measurement opportunities for structural health monitoring (SHM). Measuring and fusing different physical quantities related to structural vibrations can provide alternative acquisition methods and improve the quality of modal testing results. This study builds on a recently introduced SHM concept, SHM with smartphones, utilizing multisensory smartphone features for a hybridized structural vibration response measurement framework. Based on vibration testing of a small-scale multistory laboratory model, displacement and acceleration responses are monitored using two different smartphone sensors: an embedded camera and an accelerometer, respectively. Double integration or differentiation among the different measurement types is performed to combine the multisensory measurements on a comparative basis. In addition, distributed sensor signals from collocated devices are processed for modal identification, and the performance of smartphone-based sensing platforms is tested under different configuration scenarios and heterogeneity levels. The results of these tests show a novel and successful implementation of a hybrid motion sensing platform through the integration of multiple sensor types and devices. Despite the heterogeneity of motion data obtained from different smartphone devices and technologies, it is shown that multisensory response measurements can be blended for experimental modal analysis. Benefiting from the accessibility of smartphone technology, similar smartphone-based dynamic testing methodologies can provide innovative SHM solutions with mobile, programmable, and cost-free interfaces.
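
    The conversion between the two measurement types described above rests on numerical double integration (accelerometer to displacement) and double differentiation (camera displacement to acceleration). A minimal sketch, not the authors' code; a synthetic sine signal stands in for the measured responses, and the initial velocity is assumed known:

```python
import numpy as np

def cumtrapz(y, dt, y0=0.0):
    """Cumulative trapezoidal integration of samples y with step dt, initial value y0."""
    out = np.empty_like(y)
    out[0] = y0
    out[1:] = y0 + np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)
    return out

# Synthetic ground truth: displacement x(t) = sin(w t), so a(t) = -w^2 sin(w t)
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
w = 2.0 * np.pi
x_true = np.sin(w * t)
a_true = -w**2 * np.sin(w * t)

# Accelerometer route: double integration (initial velocity assumed known)
v_rec = cumtrapz(a_true, dt, y0=w)      # v(0) = w * cos(0) = w
x_rec = cumtrapz(v_rec, dt, y0=0.0)

# Camera route: double differentiation of the displacement signal
a_rec = np.gradient(np.gradient(x_true, dt), dt)
```

    In practice, sensor bias makes integrated displacement drift, so baseline correction or high-pass filtering would precede any comparison of the two routes.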

  14. 2010 A Digital Odyssey: Exploring Document Camera Technology and Computer Self-Efficacy in a Digital Era

    ERIC Educational Resources Information Center

    Hoge, Robert Joaquin

    2010-01-01

    Within the sphere of education, navigating a digital world has become a matter of necessity for the developing professional, particularly with the advent of Document Camera Technology (DCT). This study explores the pedagogical implications of implementing DCT, to see if there is a relationship between teachers' comfort with DCT and the…

  15. The 3-D image recognition based on fuzzy neural network technology

    NASA Technical Reports Server (NTRS)

    Hirota, Kaoru; Yamauchi, Kenichi; Murakami, Jun; Tanaka, Kei

    1993-01-01

    A three-dimensional stereoscopic image recognition system based on fuzzy-neural-network technology was developed. The system consists of three parts: a preprocessing part, a feature extraction part, and a matching part. Two CCD color camera images are fed to the preprocessing part, where several operations, including an RGB-HSV transformation, are performed. A multi-layer perceptron is used for line detection in the feature extraction part. A fuzzy matching technique is then introduced in the matching part. The system is realized on a Sun SPARCstation with a special image input hardware system. An experimental result on bottle images is also presented.
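
    The RGB-HSV step mentioned in the preprocessing stage is a standard colour-space conversion. A minimal re-implementation, written here for illustration (Python's stdlib colorsys provides the same conversion):

```python
def rgb_to_hsv(r, g, b):
    """Convert RGB (each channel in [0, 1]) to HSV with h, s, v in [0, 1]."""
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx                                  # value = brightest channel
    d = mx - mn
    s = 0.0 if mx == 0 else d / mx          # saturation = relative spread
    if d == 0:
        h = 0.0                             # grey: hue undefined, use 0
    elif mx == r:
        h = (((g - b) / d) % 6.0) / 6.0     # between yellow and magenta
    elif mx == g:
        h = ((b - r) / d + 2.0) / 6.0       # between cyan and yellow
    else:
        h = ((r - g) / d + 4.0) / 6.0       # between magenta and cyan
    return h, s, v
```

    Separating hue from value this way is what makes the preprocessing useful: hue is far less sensitive to illumination changes between the two camera views than raw RGB.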

  16. Scaling-up camera traps: monitoring the planet's biodiversity with networks of remote sensors

    USGS Publications Warehouse

    Steenweg, Robin; Hebblewhite, Mark; Kays, Roland; Ahumada, Jorge A.; Fisher, Jason T.; Burton, Cole; Townsend, Susan E.; Carbone, Chris; Rowcliffe, J. Marcus; Whittington, Jesse; Brodie, Jedediah; Royle, Andy; Switalski, Adam; Clevenger, Anthony P.; Heim, Nicole; Rich, Lindsey N.

    2017-01-01

    Countries committed to implementing the Convention on Biological Diversity's 2011–2020 strategic plan need effective tools to monitor global trends in biodiversity. Remote cameras are a rapidly growing technology that has great potential to transform global monitoring for terrestrial biodiversity and can be an important contributor to the call for measuring Essential Biodiversity Variables. Recent advances in camera technology and methods enable researchers to estimate changes in abundance and distribution for entire communities of animals and to identify global drivers of biodiversity trends. We suggest that interconnected networks of remote cameras will soon monitor biodiversity at a global scale, help answer pressing ecological questions, and guide conservation policy. This global network will require greater collaboration among remote-camera studies and citizen scientists, including standardized metadata, shared protocols, and security measures to protect records about sensitive species. With modest investment in infrastructure, and continued innovation, synthesis, and collaboration, we envision a global network of remote cameras that not only provides real-time biodiversity data but also serves to connect people with nature.

  17. Infrared Imaging Camera Final Report CRADA No. TC02061.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roos, E. V.; Nebeker, S.

    This was a collaborative effort between the University of California, Lawrence Livermore National Laboratory (LLNL) and the Cordin Company (Cordin) to enhance the U.S. ability to develop a commercial infrared camera capable of capturing high-resolution images in a 100 nanosecond (ns) time frame. The Department of Energy (DOE), under an Initiative for Proliferation Prevention (IPP) project, funded the Russian Federation Nuclear Center All-Russian Scientific Institute of Experimental Physics (RFNC-VNIIEF) in Sarov. VNIIEF was funded to develop a prototype commercial infrared (IR) framing camera and to deliver a prototype IR camera to LLNL. LLNL and Cordin were partners with VNIIEF on this project. A prototype IR camera was delivered by VNIIEF to LLNL in December 2006. In June of 2007, LLNL and Cordin evaluated the camera, and the test results revealed that it exceeded the performance of presently available commercial IR cameras. Cordin believes that the camera can be sold on the international market. The camera is currently being used as a scientific tool within Russian nuclear centers. This project was originally designated as a two-year project. The project was not started on time due to changes in the IPP project funding conditions; the project funding was re-directed through the International Science and Technology Center (ISTC), which delayed the project start by over one year. The project was not completed on schedule due to changes within the Russian government's export regulations, specifically export controls on high-technology items that can be used to develop military weapons. The IR camera was on the list of items subject to these controls. After negotiations, the ISTC and the Russian government allowed the delivery of the camera to LLNL. There were no significant technical or business changes to the original project.

  18. Hyperspectral imaging from space: Warfighter-1

    NASA Astrophysics Data System (ADS)

    Cooley, Thomas; Seigel, Gary; Thorsos, Ivan

    1999-01-01

    The Air Force Research Laboratory Integrated Space Technology Demonstrations (ISTD) Program Office has partnered with Orbital Sciences Corporation (OSC) to complement the commercial satellite's high-resolution panchromatic and multispectral imaging (MSI) systems with a moderate-resolution hyperspectral imaging (HSI) spectrometer camera. The program is an advanced technology demonstration utilizing a commercially based space capability to provide unique functionality in remote sensing technology. This leveraging of commercial industry to enhance the value of the Warfighter-1 program follows the precepts of acquisition reform and is a significant departure from the old method of contracting for government-managed large demonstration satellites with long development times and technology obsolescence concerns. The HSI system will be able to detect targets from the spectral signature measured by the hyperspectral camera. The Warfighter-1 program will also demonstrate the utility of the spectral information to theater military commanders and intelligence analysts by transmitting HSI data directly to a mobile ground station that receives and processes the data. After a brief history of the project origins, this paper presents the details of the Warfighter-1 system and expected results from exploitation of HSI data, as well as the benefits realized by this collaboration between the Air Force and commercial industry.

  19. 3D Perception Technologies for Surgical Operating Theatres.

    PubMed

    Beyl, T; Schreiter, L; Nicolai, P; Raczkowsky, J; Wörn, H

    2016-01-01

    3D perception technologies have been explored in various fields. This paper explores the application of such technologies in surgical operating theatres. Clinical applications can be found in workflow detection, tracking and analysis, collision avoidance with medical robots, perception of interaction between participants in the operation, training of the operating room crew, patient calibration, and many more. In this paper a complete perception solution for the operating room is shown. The system is based on the time-of-flight (ToF) technology integrated into the Microsoft Kinect One and implements a multi-camera approach. Special emphasis is placed on the tracking of personnel and on the evaluation of the system's performance and accuracy.

  20. Image intensification; Proceedings of the Meeting, Los Angeles, CA, Jan. 17, 18, 1989

    NASA Astrophysics Data System (ADS)

    Csorba, Illes P.

    Various papers on image intensification are presented. Individual topics discussed include: status of high-speed optical detector technologies, super second generation image intensifier, gated image intensifiers and applications, resistive-anode position-sensing photomultiplier tube operational modeling, undersea imaging and target detection with gated image intensifier tubes, image intensifier modules for use with commercially available solid state cameras, specifying the components of an intensified solid state television camera, superconducting IR focal plane arrays, one-inch TV camera tube with very high resolution capacity, CCD-Digicon detector system performance parameters, high-resolution X-ray imaging device, high-output technology microchannel plate, preconditioning of microchannel plate stacks, recent advances in small-pore microchannel plate technology, performance of long-life curved channel microchannel plates, low-noise microchannel plates, development of a quartz envelope heater.

  1. Cowboys with Cameras: An Interactive Expedition

    ERIC Educational Resources Information Center

    Robert, Kenny; Lenz, Adam

    2009-01-01

    Utilizing the same technologies pioneered by the embedded journalists in Iraq, the University of Central Florida (UCF) teamed up with TracStar, Inc to create a small-scale, satellite-based expedition transmission package to accompany a university film and digital media professor into parts of Utah and the Moab Desert that had a historical…

  2. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality.

    PubMed

    Chen, Long; Tang, Wen; John, Nigel W; Wan, Tao Ruan; Zhang, Jian Jun

    2018-05-01

    While Minimally Invasive Surgery (MIS) offers considerable benefits to patients, it also imposes big challenges on a surgeon's performance due to well-known issues and restrictions associated with the field of view (FOV), hand-eye misalignment and disorientation, as well as the lack of stereoscopic depth perception in monocular endoscopy. Augmented Reality (AR) technology can help to overcome these limitations by augmenting the real scene with annotations, labels, tumour measurements or even a 3D reconstruction of anatomy structures at the target surgical locations. However, previous research attempts of using AR technology in monocular MIS surgical scenes have been mainly focused on the information overlay without addressing correct spatial calibrations, which could lead to incorrect localization of annotations and labels, and inaccurate depth cues and tumour measurements. In this paper, we present a novel intra-operative dense surface reconstruction framework that is capable of providing geometry information from only monocular MIS videos for geometry-aware AR applications such as site measurements and depth cues. We address a number of compelling issues in augmenting a scene for a monocular MIS environment, such as drifting and inaccurate planar mapping. A state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithm used in robotics has been extended to deal with monocular MIS surgical scenes for reliable endoscopic camera tracking and salient point mapping. A robust global 3D surface reconstruction framework has been developed for building a dense surface using only unorganized sparse point clouds extracted from the SLAM. The 3D surface reconstruction framework employs the Moving Least Squares (MLS) smoothing algorithm and the Poisson surface reconstruction framework for real time processing of the point clouds data set. 
Finally, the 3D geometric information of the surgical scene allows better understanding and accurate placement of AR augmentations based on a robust 3D calibration. We demonstrate the clinical relevance of our proposed system through two examples: (a) measurement of the surface; (b) depth cues in monocular endoscopy. The performance and accuracy evaluations of the proposed framework consist of two steps. First, we have created a computer-generated endoscopy simulation video to quantify the accuracy of the camera tracking by comparing the results of the video camera tracking with the recorded ground-truth camera trajectories. The accuracy of the surface reconstruction is assessed by evaluating the Root Mean Square Distance (RMSD) of surface vertices of the reconstructed mesh against the ground-truth 3D models. An error of 1.24 mm for the camera trajectories has been obtained, and the RMSD for surface reconstruction is 2.54 mm, which compare favourably with previous approaches. Second, in vivo laparoscopic videos are used to examine the quality of accurate AR-based annotation and measurement, and the creation of depth cues. These results show the promise of our geometry-aware AR technology for use in MIS surgical scenes. The evaluations show that the new framework is robust and accurate in dealing with challenging situations such as rapid endoscopic camera movements in monocular MIS scenes. Both camera tracking and surface reconstruction based on a sparse point cloud are effective and operate in real time. This demonstrates the potential of our algorithm for accurate AR localization and depth augmentation with geometric cues and correct surface measurements in MIS with monocular endoscopes. Copyright © 2018 Elsevier B.V. All rights reserved.
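
    The RMSD metric used in the evaluation above can be made concrete. A minimal sketch of vertex-to-vertex RMSD, assuming the reconstructed mesh and the ground-truth model are already aligned and in one-to-one correspondence (published evaluations often use a nearest-surface, point-to-mesh distance instead):

```python
import numpy as np

def rmsd(vertices, ground_truth):
    """Root Mean Square Distance between corresponding 3D vertices (N x 3 arrays)."""
    sq_dist = np.sum((vertices - ground_truth) ** 2, axis=1)  # squared per-vertex distance
    return float(np.sqrt(np.mean(sq_dist)))

# Toy example: two vertices displaced by 3 mm and 4 mm
v  = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
gt = np.array([[0.0, 0.0, 3.0], [1.0, 4.0, 0.0]])
print(rmsd(v, gt))  # sqrt((9 + 16) / 2) ≈ 3.54
```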

  3. A high-sensitivity EM-CCD camera for the open port telescope cavity of SOFIA

    NASA Astrophysics Data System (ADS)

    Wiedemann, Manuel; Wolf, Jürgen; McGrotty, Paul; Edwards, Chris; Krabbe, Alfred

    2016-08-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) has three target acquisition and tracking cameras. All three imagers originally used the same cameras, which did not meet the sensitivity requirements due to low quantum efficiency and high dark current. The Focal Plane Imager (FPI) suffered the most from high dark current, since it operated in the aircraft cabin at room temperature without active cooling. In early 2013 the FPI was upgraded with an iXon3 888 from Andor Technology. Compared to the original cameras, the iXon3 has a factor of five higher QE, thanks to its back-illuminated sensor, and orders of magnitude lower dark current, due to a thermo-electric cooler and "inverted mode operation." This leads to an increase in sensitivity of about five stellar magnitudes. The Wide Field Imager (WFI) and Fine Field Imager (FFI) shall now be upgraded with equally sensitive cameras. However, they are exposed to stratospheric conditions in flight (typical conditions: T ≈ -40 °C, p ≈ 0.1 atm), and there are no off-the-shelf CCD cameras with the performance of an iXon3 suited for these conditions. Therefore, Andor Technology and the Deutsches SOFIA Institut (DSI) are jointly developing and qualifying a camera for these conditions, based on the iXon3 888. The modifications include replacement of electrical components with MIL-SPEC or industrial-grade components, various system optimizations, a new data interface that allows image data transmission over 30 m of cable from the camera to the controller, a new power converter in the camera that generates all necessary operating voltages locally, and a new housing that fulfills airworthiness requirements. A prototype of this camera has been built and tested in an environmental test chamber at temperatures down to T = -62 °C and pressure equivalent to 50,000 ft altitude. In this paper, we report on the development of the camera and present results from the environmental testing.

  4. Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras

    PubMed Central

    Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Guo, Yin

    2016-01-01

    The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in the fast measurement of large-scale or high-speed moving objects. Innovative line-scan technology opens up new possibilities owing to its ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained. PMID:27869731

  5. The Camera-Based Assessment Survey System (C-BASS): A towed camera platform for reef fish abundance surveys and benthic habitat characterization in the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Lembke, Chad; Grasty, Sarah; Silverman, Alex; Broadbent, Heather; Butcher, Steven; Murawski, Steven

    2017-12-01

    An ongoing challenge for fisheries management is to provide cost-effective and timely estimates of habitat-stratified fish densities. Traditional approaches use modified commercial fishing gear (such as trawls and baited hooks) that have biases in species selectivity and may also be inappropriate for deployment in some habitat types. Underwater visual and optical approaches offer the promise of more precise and less biased assessments of relative fish abundance, as well as direct estimates of absolute fish abundance. A number of video-based approaches have been developed, and the technology for data acquisition, calibration, and synthesis has been advancing rapidly. Beginning in 2012, our group of engineers and researchers at the University of South Florida has been working towards the goal of completing large-scale, video-based surveys in the eastern Gulf of Mexico. This paper discusses design considerations and development of a towed camera system for collection of video-based data on commercially and recreationally important reef fishes and benthic habitat on the West Florida Shelf. Factors considered during development included the potential habitat types to be assessed, sea-floor bathymetry, vessel support requirements, personnel requirements, and the cost-effectiveness of system components. This region-specific effort has resulted in a towed platform called the Camera-Based Assessment Survey System, or C-BASS, which has proven capable of surveying tens of kilometers of video transects per day and of producing cost-effective population estimates of reef fishes with coincident benthic habitat classification.

  6. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    PubMed Central

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirements for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC; camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of the flight experiments illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between a traditional system (MADC II) and the proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also maintain that using higher-accuracy TSCs in the new MDCS should further improve the accuracy of higher-level photogrammetric products. PMID:25835187

  7. A novel multi-digital camera system based on tilt-shift photography technology.

    PubMed

    Sun, Tao; Fang, Jun-Yong; Zhao, Dong; Liu, Xue; Tong, Qing-Xi

    2015-03-31

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirements for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC; camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of the flight experiments illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between a traditional system (MADC II) and the proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also maintain that using higher-accuracy TSCs in the new MDCS should further improve the accuracy of higher-level photogrammetric products.

  8. Recent results obtained on the APEX 12 m antenna with the ArTeMiS prototype camera

    NASA Astrophysics Data System (ADS)

    Talvard, M.; André, P.; Rodriguez, L.; Le-Pennec, Y.; De Breuck, C.; Revéret, V.; Agnèse, P.; Boulade, O.; Doumayrou, E.; Dubreuil, D.; Ercolani, E.; Gallais, P.; Horeau, B.; Lagage, PO; Leriche, B.; Lortholary, M.; Martignac, J.; Minier, V.; Pantin, E.; Rabanus, D.; Relland, J.; Willmann, G.

    2008-07-01

    ArTeMiS is a camera designed to operate on large ground-based submillimetre telescopes in the three atmospheric windows at 200, 350 and 450 µm. The focal plane of this camera will be equipped with 5760 bolometric pixels cooled to 300 mK by an autonomous cryogenic system. The pixels have been manufactured using the same technology processes as the Herschel-PACS space photometer. We review in this paper the present status and future plans of this project. A prototype camera, named P-ArTeMiS, was developed and successfully tested in 2006 on the KOSMA telescope at Gornergrat (3100 m, Switzerland). Preliminary results were presented at the previous SPIE conference in Orlando (Talvard et al., 2006). Since then, the prototype camera has been proposed and successfully installed on APEX, a 12 m antenna operated by the Max-Planck-Institut für Radioastronomie, the European Southern Observatory and the Onsala Space Observatory on the Chajnantor site at 5100 m altitude in Chile. Two runs were carried out in 2007, the first in March and the second in November. We present in the second part of this paper the first processed images obtained on star-forming regions and on circumstellar and debris disks. Calculated sensitivities are compared with expectations. These illustrate the improvements achieved on P-ArTeMiS during the three experimental campaigns.

  9. Intelligent viewing control for robotic and automation systems

    NASA Astrophysics Data System (ADS)

    Schenker, Paul S.; Peters, Stephen F.; Paljug, Eric D.; Kim, Won S.

    1994-10-01

    We present a new system for supervisory automated control of multiple remote cameras. Our primary purpose in developing this system has been to provide capability for knowledge-based, `hands-off' viewing during execution of teleoperation/telerobotic tasks. The reported technology has broader applicability to remote surveillance, telescience observation, automated manufacturing workcells, etc. We refer to this new capability as `Intelligent Viewing Control (IVC),' distinguishing it from simple programmed camera motion control. In the IVC system, camera viewing assignment, sequencing, positioning, panning, and parameter adjustment (zoom, focus, aperture, etc.) are invoked and interactively executed in real time by a knowledge-based controller, drawing on a priori known task models and constraints, including operator preferences. This multi-camera control is integrated with a real-time, high-fidelity 3D graphics simulation, which is correctly calibrated in perspective to the actual cameras and their platform kinematics (translation/pan-tilt). Such a merged graphics-with-video design allows the system user to preview and modify the planned (`choreographed') viewing sequences. Further, during actual task execution, the system operator has available both the resulting optimized video sequence and supplementary graphics views from arbitrary perspectives. IVC, including operator-interactive designation of robot task actions, is presented to the user as a well-integrated, single-screen video-graphic user interface allowing easy access to all relevant telerobot communication/command/control resources. We describe and show pictorial results of a preliminary IVC system implementation for telerobotic servicing of a satellite.

  10. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    PubMed Central

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-01-01

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision. PMID:27892454
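
    As a sanity check on the "nearly diffraction-limited" claim, the ideal Airy-disk diameter implied by the quoted wavelength and f-number can be computed with the standard 2.44 λN estimate. This is a textbook calculation using only values from the abstract, not a figure reported in the paper:

```python
# Values quoted in the abstract
wavelength_um = 0.85   # operating wavelength, 850 nm
f_number = 0.9         # f-number of the metasurface doublet

# First-null (Airy disk) diameter of an ideal, aberration-free lens: d = 2.44 * lambda * N
airy_diameter_um = 2.44 * wavelength_um * f_number
print(round(airy_diameter_um, 2))  # ~1.87 um spot, the resolution floor for this lens
```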

  11. Determination of the Actual Land Use Pattern Using Unmanned Aerial Vehicles and Multispectral Camera

    NASA Astrophysics Data System (ADS)

    Dindaroğlu, T.; Gündoğan, R.; Gülci, S.

    2017-11-01

    The international initiatives developed in the context of combating global warming are based on the monitoring of Land Use, Land-Use Change and Forestry (LULUCF). Determination of changes in land use patterns is used to assess the effects of greenhouse gas emissions and to reduce adverse effects in subsequent processes. This task, which requires the investigation and control of quite large areas, has undoubtedly increased the importance of technological tools and equipment. The use of carrier platforms and of various commercially cheaper sensors has become widespread. In this study, a multispectral camera was used to determine the land use pattern with high sensitivity. Unmanned aerial flights were carried out over the research fields of the Kahramanmaras Sutcu Imam University campus area. An unmanned aerial vehicle (UAV; a multi-propeller hexacopter) was used as the carrier platform for the aerial photographs.

  12. The exploration of outer space with cameras: A history of the NASA unmanned spacecraft missions

    NASA Astrophysics Data System (ADS)

    Mirabito, M. M.

    The use of television cameras and other video imaging devices to explore the solar system's planetary bodies with unmanned spacecraft is chronicled. Attention is given to the missions and the imaging devices, beginning with the Ranger 7 moon mission, which featured the first successfully operated electrooptical subsystem, six television cameras with vidicon image sensors. NASA established a network of parabolic, ground-based antennas on the earth (the Deep Space Network) to receive signals from spacecraft travelling farther than 16,000 km into space. The image processing and enhancement techniques used to convert spacecraft data transmissions into black and white and color photographs are described, together with the technological requirements that drove the development of the various systems. Terrestrial applications of the planetary imaging systems are explored, including medical and educational uses. Finally, the implementation and functional characteristics of CCDs are detailed, noting their installation on the Space Telescope.

  13. The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.

    2017-02-01

    Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits, including increasingly high pixel counts and shrinking pixel sizes; nevertheless, they are also hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full well capacity, signal-to-noise ratio, sensitivity, spectral flexibility, and in some cases, imager response time. Recently invented is the Coded Access Optical Sensor (CAOS) camera platform, which works in unison with current PDA technology to counter the fundamental limitations of PDA-based imagers while providing sufficiently high imaging spatial resolution and pixel counts. Using, for example, the Texas Instruments (TI) Digital Micromirror Device (DMD) to engineer the CAOS camera platform ushers in a paradigm change in advanced imager design, particularly for extreme dynamic range applications.
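
    The coded-access idea can be illustrated with a toy time-multiplexing model. This is an illustrative sketch, not the actual CAOS hardware scheme: each "pixel" is modulated in time with an orthogonal ±1 Walsh-Hadamard code, a single point detector records the summed signal, and correlating against each code recovers the per-pixel values.

```python
import numpy as np

def walsh_codes(n):
    """Generate an n x n Walsh-Hadamard code matrix (n must be a power of two)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

n_pixels = 8
codes = walsh_codes(n_pixels)                 # one +/-1 code per "pixel"
pixel_values = np.array([3.0, 0.0, 1.5, 7.0, 2.0, 0.5, 4.0, 1.0])

# Encode: each pixel's light is time-modulated by its code; a single
# point detector sees the sum of all modulated contributions per time slot.
detector_signal = codes.T @ pixel_values

# Decode: correlate the detector signal with each code to recover the pixels
# (Hadamard rows are orthogonal, so H @ H.T = n * I).
recovered = codes @ detector_signal / n_pixels
```

    The practical appeal is that the single decoding detector can be a high-dynamic-range photodetector, sidestepping the per-pixel full-well limits of a conventional array.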

  14. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    NASA Astrophysics Data System (ADS)

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-11-01

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.

  15. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.

    PubMed

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-11-07

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
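
    The adaptive execution module can be sketched as a simple policy that runs the cheap optical-flow tracker when inter-frame motion is small and falls back to full visual-inertial odometry otherwise. The threshold and function names below are illustrative assumptions, not the authors' code.

    ```python
    # Hypothetical sketch of an adaptive VIO / fast-VO selection policy.

    def mean_flow_magnitude(flow_vectors):
        """Average optical-flow displacement (pixels) over tracked features."""
        return sum((dx * dx + dy * dy) ** 0.5 for dx, dy in flow_vectors) / len(flow_vectors)

    def select_tracker(flow_vectors, threshold_px=2.0):
        """Choose which pose estimator to run for this frame."""
        if mean_flow_magnitude(flow_vectors) < threshold_px:
            return "fast_optical_flow_vo"        # low motion: cheap update suffices
        return "full_visual_inertial_odometry"   # large motion: run the full pipeline

    assert select_tracker([(0.5, 0.2), (0.1, -0.3)]) == "fast_optical_flow_vo"
    assert select_tracker([(5.0, 4.0), (6.0, -3.0)]) == "full_visual_inertial_odometry"
    ```

    The reported 7.8-18.8% tracking-time reductions correspond to progressively more aggressive versions of this kind of policy.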

  16. Imaging Emission Spectra with Handheld and Cellphone Cameras

    ERIC Educational Resources Information Center

    Sitar, David

    2012-01-01

    As point-and-shoot digital camera technology advances, it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon…

  17. Surveillance Cameras and Their Use as a Dissecting Microscope in the Teaching of Biological Sciences

    ERIC Educational Resources Information Center

    Vale, Marcus R.

    2016-01-01

    Surveillance cameras are prevalent in various public and private areas, and they can also be coupled to optical microscopes and telescopes with excellent results. They are relatively simple cameras without sophisticated technological features and are much less expensive and more accessible to many people. These features enable them to be used in…

  18. Seeing Red: Discourse, Metaphor, and the Implementation of Red Light Cameras in Texas

    ERIC Educational Resources Information Center

    Hayden, Lance Alan

    2009-01-01

    This study examines the deployment of automated red light camera systems in the state of Texas from 2003 through late 2007. The deployment of new technologies in general, and surveillance infrastructures in particular, can prove controversial and challenging for the formation of public policy. Red light camera surveillance during this period in…

  19. HST Solar Arrays photographed by Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This close-up view of one of two Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  20. Maximizing the Performance of Automated Low Cost All-sky Cameras

    NASA Technical Reports Server (NTRS)

    Bettonvil, F.

    2011-01-01

    Thanks to the wide spread of digital camera technology in the consumer market, a steady increase in the number of active All-sky cameras has been noticed Europe-wide. In this paper I look into the details of such All-sky systems and try to optimize their performance in terms of astrometric accuracy, velocity determination and photometry. With autonomous operation in mind, suggestions are made for an optimal low cost All-sky camera.

  1. SLR digital camera for forensic photography

    NASA Astrophysics Data System (ADS)

    Har, Donghwan; Son, Youngho; Lee, Sungwon

    2004-06-01

    Forensic photography, which was systematically established in the late 19th century by Alphonse Bertillon of France, has developed considerably over the past 100 years, and this development will accelerate further with advances in high technology, in particular digital technology. This paper reviews three studies to answer the question: can the SLR digital camera replace traditional silver halide ultraviolet and infrared photography? 1. Comparison of the relative ultraviolet and infrared sensitivity of the SLR digital camera to silver halide photography. 2. How much is ultraviolet or infrared sensitivity improved when removing the UV/IR cutoff filter built into the SLR digital camera? 3. Comparison of the relative sensitivity of CCD and CMOS sensors for ultraviolet and infrared. The test results showed that the SLR digital camera has a very low sensitivity for ultraviolet and infrared. The cause was found to be the UV/IR cutoff filter mounted in front of the image sensor. Removing the UV/IR cutoff filter significantly improved the sensitivity for ultraviolet and infrared. Particularly for infrared, the sensitivity of the SLR digital camera was better than that of silver halide film. This shows the possibility of replacing silver halide ultraviolet and infrared photography with the SLR digital camera. Thus, the SLR digital camera seems to be useful for forensic photography, which deals with many ultraviolet and infrared photographs.

  2. First results from the TOPSAT camera

    NASA Astrophysics Data System (ADS)

    Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve

    2017-11-01

    The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of the camera's in-orbit performance.

  3. Heart Imaging System

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Johnson Space Center's device to test astronauts' heart function in microgravity has led to the MultiWire Gamma Camera, which images heart conditions six times faster than conventional devices. Dr. Jeffrey Lacy, who developed the technology as a NASA researcher, later formed Proportional Technologies, Inc. to develop a commercially viable process that would enable use of Tantalum-178 (Ta-178), a radio-pharmaceutical. His company supplies the generator for the radioactive Ta-178 to Xenos Medical Systems, which markets the camera. Ta-178 can only be optimally imaged with the camera. Because the body is subjected to it for only nine minutes, the radiation dose is significantly reduced and the technique can be used more frequently. Ta-178 also enables the camera to be used on pediatric patients, who are rarely studied with conventional isotopes because of the high radiation dosage.

  4. Development of Next Generation Lifetime PSP Imaging Systems

    NASA Technical Reports Server (NTRS)

    Watkins, A. Neal; Jordan, Jeffrey D.; Leighty, Bradley D.; Ingram, JoAnne L.; Oglesby, Donald M.

    2002-01-01

    This paper describes a lifetime PSP system that has recently been developed using pulsed light-emitting diode (LED) lamps and a new interline transfer CCD camera technology. This system alleviates noise sources associated with lifetime PSP systems that use either flash-lamp or laser excitation sources and intensified CCD cameras for detection. Calibration curves have been acquired for a variety of PSP formulations using this system, and a validation test was recently completed in the Subsonic Aerodynamic Research Laboratory (SARL) at Wright-Patterson Air Force Base (WPAFB). In this test, global surface pressure distributions were recovered using both a standard intensity-based method and the new lifetime system. Results from the lifetime system agree both qualitatively and quantitatively with those measured using the intensity-based method. Finally, an advanced lifetime imaging technique capable of measuring temperature and pressure simultaneously is introduced and initial results are presented.

  5. Visual fatigue modeling for stereoscopic video shot based on camera motion

    NASA Astrophysics Data System (ADS)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and comfortable zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on specific objects for static cameras and backgrounds; relative motion should be considered for different camera conditions, determining different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. Visual fatigue degree is predicted using the multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be computed according to the proposed algorithm. Compared with conventional algorithms, which ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
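
    The regression step can be sketched as fitting a weighted sum of the influence factors to subjective fatigue ratings. The factor values and ratings below are invented for illustration; the paper's actual factors and weights are not reproduced here.

    ```python
    import numpy as np

    # Toy multiple-linear-regression fatigue model:
    #   predicted fatigue = w0 + w1*spatial + w2*motion + w3*comfort
    X = np.array([  # columns: spatial structure, motion scale, comfort-zone factor
        [0.2, 0.1, 0.9],
        [0.5, 0.4, 0.6],
        [0.7, 0.8, 0.3],
        [0.9, 0.6, 0.2],
    ])
    y = np.array([1.2, 2.5, 3.9, 4.4])  # subjective fatigue ratings per shot

    A = np.hstack([np.ones((len(X), 1)), X])   # prepend an intercept column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit of the weights

    def predict_fatigue(spatial, motion, comfort):
        """Predicted fatigue score for one stereoscopic shot."""
        return float(w @ np.array([1.0, spatial, motion, comfort]))
    ```

    In the paper's setting the camera-motion condition of each shot would determine which coefficients and weights apply before this regression is evaluated.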

  6. A new high-speed IR camera system

    NASA Technical Reports Server (NTRS)

    Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.

    1994-01-01

    A multi-organizational team at the Goddard Space Flight Center is developing a new far infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array which exhibits responsivity over a broad wavelength band and which is capable of operating at 1000 frames/sec, and consists of a focal plane dewar, a compact camera head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.

  7. Thermal imaging as a smartphone application: exploring and implementing a new concept

    NASA Astrophysics Data System (ADS)

    Yanai, Omer

    2014-06-01

    Today's world is going mobile. Smartphone devices have become an important part of everyday life for billions of people around the globe. Thermal imaging cameras have been around for half a century and are now making their way into our daily lives. Originally built for military applications, thermal cameras are starting to be considered for personal use, enabling enhanced vision and temperature mapping for different groups of professional individuals. Through a revolutionary concept that turns smartphones into fully functional thermal cameras, we have explored how these two worlds can converge by utilizing the best of each technology. We will present the thought process, design considerations and outcome of our development process, resulting in a low-power, high resolution, lightweight USB thermal imaging device that turns Android smartphones into thermal cameras. We will discuss the technological challenges that we faced during the development of the product, and what are the system design decisions taken during the implementation. We will provide some insights we came across during this development process. Finally, we will discuss the opportunities that this innovative technology brings to the market.

  8. Automatic treatment of flight test images using modern tools: SAAB and Aeritalia joint approach

    NASA Astrophysics Data System (ADS)

    Kaelldahl, A.; Duranti, P.

    The use of onboard cine cameras, as well as that of on-ground cinetheodolites, is very popular in flight tests. The high resolution of film and the high frame rate of cine cameras are still not exceeded by video technology. Video technology can successfully enter the flight test scenario once the availability of solid-state optical sensors dramatically reduces the dimensions and weight of TV cameras, thus allowing them to be located in positions compatible with space or operational limitations (e.g., HUD cameras). A proper combination of cine and video cameras is the typical solution for a complex flight test program. The output of such devices is very helpful in many flight areas. Several successful applications of this technology are summarized. Analysis of the large amount of data produced (frames of images) requires a very long time and is normally carried out manually. To improve this situation, in the last few years several flight test centers have devoted their attention to techniques that allow for quicker and more effective image treatment.

  9. Clinical usefulness of augmented reality using infrared camera based real-time feedback on gait function in cerebral palsy: a case study

    PubMed Central

    Lee, Byoung-Hee

    2016-01-01

    [Purpose] This study investigated the effects of real-time feedback using infrared camera recognition technology-based augmented reality in gait training for children with cerebral palsy. [Subjects] Two subjects with cerebral palsy were recruited. [Methods] In this study, augmented reality based real-time feedback training was conducted for the subjects in two 30-minute sessions per week for four weeks. Spatiotemporal gait parameters were used to measure the effect of augmented reality-based real-time feedback training. [Results] Velocity, cadence, bilateral step and stride length, and functional ambulation improved after the intervention in both cases. [Conclusion] Although additional follow-up studies of the augmented reality based real-time feedback training are required, the results of this study demonstrate that it improved the gait ability of two children with cerebral palsy. These findings suggest a variety of applications of conservative therapeutic methods which require future clinical trials. PMID:27190489

  10. Automatic inoculating apparatus. [includes movable carriage, drive motor, and swabbing motor]

    NASA Technical Reports Server (NTRS)

    Wilkins, J. R.; Mills, S. M. (Inventor)

    1974-01-01

    An automatic inoculating apparatus for agar trays is described that uses a simple inoculating element, such as a cotton swab or inoculating loop. The apparatus includes a movable carriage for supporting the tray to be inoculated, a drive motor for moving the tray along a trackway, and a swabbing motor for automatically swabbing the tray during the movement. An actuator motor controls lowering of the inoculating element onto the tray and lifting of the inoculating element. An electrical control system, including limit microswitches, enables automatic control of the actuator motor and return of the carriage to the initial position after inoculating is completed.

  11. Location-Based Augmented Reality for Mobile Learning: Algorithm, System, and Implementation

    ERIC Educational Resources Information Center

    Tan, Qing; Chang, William; Kinshuk

    2015-01-01

    AR technology can be considered as mainly consisting of two aspects: identification of a real-world object, and display of computer-generated digital contents related to the identified real-world object. The technical challenge of mobile AR is to identify the real-world object that the mobile device's camera aims at. In this paper, we will present a…

  12. User Interface Preferences in the Design of a Camera-Based Navigation and Wayfinding Aid

    ERIC Educational Resources Information Center

    Arditi, Aries; Tian, YingLi

    2013-01-01

    Introduction: Development of a sensing device that can provide a sufficient perceptual substrate for persons with visual impairments to orient themselves and travel confidently has been a persistent rehabilitation technology goal, with the user interface posing a significant challenge. In the study presented here, we enlist the advice and ideas of…

  13. Center for Coastline Security Technology, Year 3

    DTIC Science & Technology

    2008-05-01

    Excerpts describe a 3D imaging system that combines a pair of FAU's HD-MAX video cameras with a pair of Sony SRX-R105 digital cinema projectors for stereo imaging and projection, covering polarization control for 3D imaging, the HDMAX camera and SRX-R105 projector configuration for 3D, and the effect of camera rotation on the projected overlay image.

  14. Cameras Monitor Spacecraft Integrity to Prevent Failures

    NASA Technical Reports Server (NTRS)

    2014-01-01

    The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained while working with NASA to develop an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

  15. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

    Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications which had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD which is available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto tonemapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by making it possible to monitor processes. Example applications of thermography [2] with a thermal camera include monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc. [3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.
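
    The count-to-temperature idea can be sketched as a blackbody calibration: record raw detector counts while viewing references at known temperatures, fit a smooth curve, then invert it for scene pixels. The numbers below are invented for illustration; a real camera of this kind also corrects for FPA temperature, lens and shutter contributions, per-pixel non-uniformity, and emissivity.

    ```python
    import numpy as np

    # Hypothetical calibration data: blackbody setpoints vs. measured raw counts.
    bb_temps_c = np.array([0.0, 20.0, 40.0, 60.0, 80.0])       # deg C
    bb_counts = np.array([7200., 7900., 8700., 9600., 10600.])  # raw ADC counts

    # Quadratic least-squares fit mapping counts -> temperature (deg C).
    coeffs = np.polyfit(bb_counts, bb_temps_c, deg=2)

    def counts_to_temp_c(raw_counts):
        """Estimate scene temperature (deg C) from a raw pixel count."""
        return np.polyval(coeffs, raw_counts)
    ```

    Characterizing how this curve drifts with camera operating temperature is exactly the kind of system performance the paper plots.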

  16. Fast time-of-flight camera based surface registration for radiotherapy patient positioning.

    PubMed

    Placht, Simon; Stancanello, Joseph; Schaller, Christian; Balda, Michael; Angelopoulou, Elli

    2012-01-01

    This work introduces a rigid registration framework for patient positioning in radiotherapy, based on real-time surface acquisition by a time-of-flight (ToF) camera. Dynamic properties of the system are also investigated for future gating/tracking strategies. A novel preregistration algorithm, based on translation and rotation-invariant features representing surface structures, was developed. Using these features, corresponding three-dimensional points were computed in order to determine initial registration parameters. These parameters became a robust input to an accelerated version of the iterative closest point (ICP) algorithm for the fine-tuning of the registration result. Distance calibration and Kalman filtering were used to compensate for ToF-camera dependent noise. Additionally, the advantage of using the feature based preregistration over an "ICP only" strategy was evaluated, as well as the robustness of the rigid-transformation-based method to deformation. The proposed surface registration method was validated using phantom data. A mean target registration error (TRE) for translations and rotations of 1.62 ± 1.08 mm and 0.07° ± 0.05°, respectively, was achieved. There was a temporal delay of about 65 ms in the registration output, which can be seen as negligible considering the dynamics of biological systems. Feature based preregistration allowed for accurate and robust registrations even at very large initial displacements. Deformations affected the accuracy of the results, necessitating particular care in cases of deformed surfaces. The proposed solution is able to solve surface registration problems with an accuracy suitable for radiotherapy cases where external surfaces offer primary or complementary information to patient positioning. The system shows promising dynamic properties for its use in gating/tracking applications. The overall system is competitive with commonly-used surface registration technologies. 
Its main benefit is the usage of a cost-effective off-the-shelf technology for surface acquisition. Further strategies to improve the registration accuracy are under development.
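
    The rigid alignment at the core of the pipeline above can be sketched in a few lines: given corresponding 3-D point pairs (here assumed known, e.g. from the feature-based preregistration), the rotation and translation follow from the SVD/Kabsch method. This is a minimal sketch, not the authors' accelerated ICP; a full ICP would re-estimate correspondences and iterate.

    ```python
    import numpy as np

    def rigid_align(src, dst):
        """Return R, t minimizing ||R @ src_i + t - dst_i|| over corresponding rows."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)         # cross-covariance of centered points
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
        R = Vt.T @ D @ U.T
        t = dst_c - R @ src_c
        return R, t

    # Usage: recover a known transform from noiseless correspondences.
    rng = np.random.default_rng(0)
    src = rng.random((30, 3))
    angle = np.deg2rad(10.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([0.05, -0.02, 0.10])
    dst = src @ R_true.T + t_true
    R, t = rigid_align(src, dst)
    assert np.allclose(R, R_true) and np.allclose(t, t_true)
    ```

    In the ToF setting the preregistration supplies the large initial displacement, so this closed-form step converges even where plain ICP would fall into a local minimum.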

  17. Electronic Still Camera view of Aft end of Wide Field/Planetary Camera in HST

    NASA Image and Video Library

    1993-12-06

    S61-E-015 (6 Dec 1993) --- A close-up view of the aft part of the new Wide Field/Planetary Camera (WFPC-II) installed on the Hubble Space Telescope (HST). WFPC-II was photographed with the Electronic Still Camera (ESC) from inside Endeavour's cabin as astronauts F. Story Musgrave and Jeffrey A. Hoffman moved it from its stowage position onto the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  18. Auto-converging stereo cameras for 3D robotic tele-operation

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Aycock, Todd; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for the TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow operator control of the camera convergence angle. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an automatic convergence algorithm in a field programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than on adjustment of the vision system. The autoconvergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
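
    The underlying geometry can be sketched in a toy form: given the stereo baseline and an estimated distance to the dominant scene content (which the real system derives from image content in the FPGA), each camera is toed in so the optical axes intersect at that distance. The parameter values are illustrative assumptions.

    ```python
    import math

    def convergence_angle_deg(baseline_m, distance_m):
        """Per-camera toe-in angle (degrees) so the optical axes cross at distance_m."""
        return math.degrees(math.atan((baseline_m / 2.0) / distance_m))

    # e.g. a 6.5 cm baseline with the object of interest at 2 m:
    angle = convergence_angle_deg(0.065, 2.0)  # just under one degree of toe-in
    ```

    Driving this angle from scene content rather than an operator control is what frees the operator to focus on the tele-operation task.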

  19. Melon yield prediction using small unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Zhao, Tiebiao; Wang, Zhongdao; Yang, Qi; Chen, YangQuan

    2017-05-01

    Thanks to the development of camera technologies and small unmanned aerial systems (sUAS), it is possible to collect aerial images of fields with more flexible visits, higher resolution and much lower cost. Furthermore, the performance of object detection based on deeply trained convolutional neural networks (CNNs) has improved significantly. In this study, we applied these technologies to melon production, where high-resolution aerial images were used to count melons in the field and predict the yield. The CNN-based object detection framework Faster R-CNN is applied to melon classification. Our results showed that sUAS plus CNNs were able to detect melons accurately in the late harvest season.
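
    The counting step that turns detections into a yield prediction can be sketched as follows; the detector itself (Faster R-CNN in the study) is treated as a black box that returns boxes with confidence scores, and the threshold and data here are illustrative assumptions.

    ```python
    # Hypothetical post-detection counting: filter each image's detections by a
    # confidence threshold and sum counts over the surveyed images.

    def count_melons(detections, min_score=0.8):
        """detections: list of (box, score) tuples for one image."""
        return sum(1 for _box, score in detections if score >= min_score)

    images = [
        [((10, 10, 50, 50), 0.95), ((60, 20, 90, 55), 0.70)],  # one low-confidence box dropped
        [((5, 5, 40, 40), 0.88)],
    ]
    total = sum(count_melons(dets) for dets in images)  # predicted melon count
    ```

    A field-scale pipeline would additionally deduplicate melons appearing in overlapping flight-line images before summing.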

  20. Testing of a Methane Cryogenic Heat Pipe with a Liquid Trap Turn-Off Feature for use on Space Interferometer Mission (SIM)

    NASA Technical Reports Server (NTRS)

    Cepeda-Rizo, Juan; Krylo, Robert; Fisher, Melanie; Bugby, David C.

    2011-01-01

    Camera cooling for SIM presents three thermal control challenges: stable operation at 163 K (-110 C), decontamination heating to +20 C, and a long span from the cameras to the radiator. A novel cryogenic cooling system based on a methane heat pipe meets these challenges. The SIM thermal team, with the help of heat pipe vendor ATK, designed and tested a complete, low temperature cooling system. The system accommodates the two SIM cameras with a double-ended conduction bar, a single methane heat pipe, independent turn-off devices, and a flight-like radiator. The turn-off devices consist of a liquid trap, for removing the methane from the pipe, and an electrical heater to raise the methane temperature above the critical point, thus preventing two-phase operation. This is the first time a cryogenic heat pipe has been tested at JPL, and it is also the first heat pipe to incorporate the turn-off features. Operation at 163 K with a methane heat pipe is an important new thermal control capability for the lab. In addition, the two turn-off technologies enhance the "bag of tricks" available to the JPL thermal community. The successful test program brings this heat pipe to a high level of technology readiness.

  1. World's fastest and most sensitive astronomical camera

    NASA Astrophysics Data System (ADS)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240x240 pixel images with the world's fastest high precision faint light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component for the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets, but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. 
    The new generation instruments require these corrections to be done at an even higher rate, more than one thousand times a second, and this is where OCam is essential. "The quality of the adaptive optics correction strongly depends on the speed of the camera and on its sensitivity," says Philippe Feautrier from the LAOG, France, who coordinated the whole project. "But these are a priori contradictory requirements, as in general the faster a camera is, the less sensitive it is." This is why cameras normally used for very high frame-rate movies require extremely powerful illumination, which is of course not an option for astronomical cameras. OCam and its CCD220 detector, developed by the British manufacturer e2v technologies, solve this dilemma by being not only the fastest available, but also very sensitive, making a significant jump in performance for such cameras. Because of the imperfect operation of any physical electronic device, a CCD camera suffers from so-called readout noise. OCam has a readout noise ten times smaller than the detectors currently used on the VLT, making it much more sensitive and able to take pictures of the faintest of sources. "Thanks to this technology, all the new generation instruments of ESO's Very Large Telescope will be able to produce the best possible images, with an unequalled sharpness," declares Jean-Luc Gach, from the Laboratoire d'Astrophysique de Marseille, France, who led the team that built the camera. "Plans are now underway to develop the adaptive optics detectors required for ESO's planned 42-metre European Extremely Large Telescope, together with our research partners and the industry," says Hubin. Using sensitive detectors developed in the UK, with a control system developed in France, with German and Spanish participation, OCam is truly an outcome of a European collaboration that will be widely used and commercially produced. 
More information: The three French laboratories involved are the Laboratoire d'Astrophysique de Marseille (LAM/INSU/CNRS, Université de Provence; Observatoire Astronomique de Marseille Provence), the Laboratoire d'Astrophysique de Grenoble (LAOG/INSU/CNRS, Université Joseph Fourier; Observatoire des Sciences de l'Univers de Grenoble), and the Observatoire de Haute Provence (OHP/INSU/CNRS; Observatoire Astronomique de Marseille Provence). OCam and the CCD220 are the result of five years' work, financed by the European Commission, ESO and CNRS-INSU, within the OPTICON project of the 6th Research and Development Framework Programme of the European Union. The development of the CCD220, supervised by ESO, was undertaken by the British company e2v technologies, one of the world leaders in the manufacture of scientific detectors. The corresponding OPTICON activity was led by the Laboratoire d'Astrophysique de Grenoble, France. The OCam camera was built by a team of French engineers from the Laboratoire d'Astrophysique de Marseille, the Laboratoire d'Astrophysique de Grenoble and the Observatoire de Haute Provence. In order to secure the continuation of this successful project, a new OPTICON project started in June 2009, as part of the 7th Research and Development Framework Programme of the European Union and with the same partners, with the aim of developing a detector and camera with even more powerful functionality for use with an artificial laser star. This development is necessary to ensure the image quality of the future 42-metre European Extremely Large Telescope. ESO, the European Southern Observatory, is the foremost intergovernmental astronomy organisation in Europe and the world's most productive astronomical observatory. It is supported by 14 countries: Austria, Belgium, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. 
ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world's most advanced visible-light astronomical observatory. ESO is the European partner of ALMA, a revolutionary astronomical telescope and the largest astronomical project in existence. ESO is currently planning a 42-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become "the world's biggest eye on the sky".

  2. Fuzzy-neural control of an aircraft tracking camera platform

    NASA Technical Reports Server (NTRS)

    Mcgrath, Dennis

    1994-01-01

    A fuzzy-neural control system simulation was developed for the control of a camera platform used to observe aircraft on final approach to an aircraft carrier. The fuzzy-neural approach to control combines the structure of a fuzzy knowledge base with a supervised neural network's ability to adapt and improve. The performance characteristics of this hybrid system were compared to those of a fuzzy system and a neural network system developed independently to determine if the fusion of these two technologies offers any advantage over the use of one or the other. The results of this study indicate that the fuzzy-neural approach to control offers some advantages over either fuzzy or neural control alone.
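As a rough illustration of the fuzzy half of such a hybrid controller, the sketch below implements a minimal fuzzy inference step mapping camera pointing error to a slew-rate command; in a fuzzy-neural system, a neural network would tune parameters like these from training data. All breakpoints and rates are hypothetical, not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_rate_command(error_deg, max_rate=4.0):
    """Map camera pointing error (deg) to a slew-rate command (deg/s) using
    three rules with singleton consequents and weighted-average defuzzification."""
    neg = tri(error_deg, -10.0, -5.0, 0.0)   # rule: error is negative -> slew negative
    zero = tri(error_deg, -5.0, 0.0, 5.0)    # rule: error near zero   -> hold
    pos = tri(error_deg, 0.0, 5.0, 10.0)     # rule: error is positive -> slew positive
    total = neg + zero + pos
    if total == 0.0:                          # outside the rule base: saturate
        return max_rate if error_deg > 0 else -max_rate
    return (neg * -max_rate + zero * 0.0 + pos * max_rate) / total
```

The weighted average blends overlapping rules smoothly, so the command varies continuously with the error rather than switching between discrete gains.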

  3. Wavevector multiplexed atomic quantum memory via spatially-resolved single-photon detection.

    PubMed

    Parniak, Michał; Dąbrowski, Michał; Mazelanik, Mateusz; Leszczyński, Adam; Lipka, Michał; Wasilewski, Wojciech

    2017-12-15

    Parallelized quantum information processing requires tailored quantum memories to simultaneously handle multiple photons. The spatial degree of freedom is a promising candidate to facilitate such photonic multiplexing. Using a single-photon resolving camera, we demonstrate a wavevector multiplexed quantum memory based on a cold atomic ensemble. Observation of nonclassical correlations between Raman scattered photons is confirmed by an average value of the second-order correlation function [Formula: see text] in 665 separated modes simultaneously. The proposed protocol utilizing the multimode memory along with the camera will facilitate generation of multi-photon states, which are a necessity in quantum-enhanced sensing technologies and as an input to photonic quantum circuits.

  4. Research on virtual Guzheng based on Kinect

    NASA Astrophysics Data System (ADS)

    Li, Shuyao; Xu, Kuangyi; Zhang, Heng

    2018-05-01

    There is a large body of research on virtual instruments, but little of it addresses classical Chinese instruments, and the techniques used are very limited. This paper uses Unity 3D and a Kinect camera, combined with virtual reality technology and a gesture recognition method, to design a virtual playing system, with a demonstration function, for the Guzheng, a traditional Chinese musical instrument. In this paper, the real scene obtained by the Kinect camera is fused with a virtual Guzheng in Unity 3D. The depth data obtained by the Kinect and the Suzuki85 algorithm are used to recognize the relative position of the user's right hand and the virtual Guzheng, and the user's hand gesture is recognized by the Kinect.

  5. Use of iris recognition camera technology for the quantification of corneal opacification in mucopolysaccharidoses.

    PubMed

    Aslam, Tariq Mehmood; Shakir, Savana; Wong, James; Au, Leon; Ashworth, Jane

    2012-12-01

    Mucopolysaccharidoses (MPS) can cause corneal opacification that is currently difficult to objectively quantify. With newer treatments for MPS comes an increased need for a more objective, valid and reliable index of disease severity for clinical and research use. Clinical evaluation by slit lamp is very subjective and techniques based on colour photography are difficult to standardise. In this article the authors present evidence for the utility of dedicated image analysis algorithms applied to images obtained by a highly sophisticated iris recognition camera that is small, manoeuvrable and adapted to achieve rapid, reliable and standardised objective imaging in a wide variety of patients while minimising artefactual interference in image quality.

  6. Practical aspects of modern interferometry for optical manufacturing quality control: Part 2

    NASA Astrophysics Data System (ADS)

    Smythe, Robert

    2012-07-01

    Modern phase shifting interferometers enable the manufacture of optical systems that drive the global economy. Semiconductor chips, solid-state cameras, cell phone cameras, infrared imaging systems, space based satellite imaging and DVD and Blu-Ray disks are all enabled by phase shifting interferometers. Theoretical treatments of data analysis and instrument design advance the technology but often are not helpful towards the practical use of interferometers. An understanding of the parameters that drive system performance is critical to produce useful results. Any interferometer will produce a data map and results; this paper, in three parts, reviews some of the key issues to minimize error sources in that data and provide a valid measurement.

  7. Practical aspects of modern interferometry for optical manufacturing quality control, Part 3

    NASA Astrophysics Data System (ADS)

    Smythe, Robert A.

    2012-09-01

    Modern phase shifting interferometers enable the manufacture of optical systems that drive the global economy. Semiconductor chips, solid-state cameras, cell phone cameras, infrared imaging systems, space-based satellite imaging, and DVD and Blu-Ray disks are all enabled by phase-shifting interferometers. Theoretical treatments of data analysis and instrument design advance the technology but often are not helpful toward the practical use of interferometers. An understanding of the parameters that drive the system performance is critical to produce useful results. Any interferometer will produce a data map and results; this paper, in three parts, reviews some of the key issues to minimize error sources in that data and provide a valid measurement.

  8. Broadband image sensor array based on graphene-CMOS integration

    NASA Astrophysics Data System (ADS)

    Goossens, Stijn; Navickaite, Gabriele; Monasterio, Carles; Gupta, Shuchi; Piqueras, Juan José; Pérez, Raúl; Burwell, Gregory; Nikitskiy, Ivan; Lasanta, Tania; Galán, Teresa; Puma, Eric; Centeno, Alba; Pesquera, Amaia; Zurutuza, Amaia; Konstantatos, Gerasimos; Koppens, Frank

    2017-06-01

    Integrated circuits based on complementary metal-oxide-semiconductors (CMOS) are at the heart of the technological revolution of the past 40 years, enabling compact and low-cost microelectronic circuits and imaging systems. However, the diversification of this platform into applications other than microcircuits and visible-light cameras has been impeded by the difficulty to combine semiconductors other than silicon with CMOS. Here, we report the monolithic integration of a CMOS integrated circuit with graphene, operating as a high-mobility phototransistor. We demonstrate a high-resolution, broadband image sensor and operate it as a digital camera that is sensitive to ultraviolet, visible and infrared light (300-2,000 nm). The demonstrated graphene-CMOS integration is pivotal for incorporating 2D materials into the next-generation microelectronics, sensor arrays, low-power integrated photonics and CMOS imaging systems covering visible, infrared and terahertz frequencies.

  9. Wide-Field-of-View, High-Resolution, Stereoscopic Imager

    NASA Technical Reports Server (NTRS)

    Prechtl, Eric F.; Sedwick, Raymond J.

    2010-01-01

    A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through a head-tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. Head-mounted displays are one likely implementation; 3D projection is another option under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package for better mobility. Power-savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
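The head-tracked cropping described above can be sketched as a simple window-placement calculation. The linear mapping and all dimensions below are assumed for illustration, not the prototype's actual geometry:

```python
def crop_window(pano_w, pano_h, view_w, view_h, yaw_deg, pitch_deg,
                yaw_range=180.0, pitch_range=60.0):
    """Map head-tracker yaw/pitch to the top-left corner of a cropped
    viewport inside a stitched panoramic frame (linear mapping, clamped
    at the panorama edges)."""
    # Normalize the tracked angles to [0, 1] across the usable range.
    u = (yaw_deg + yaw_range / 2.0) / yaw_range
    v = (pitch_deg + pitch_range / 2.0) / pitch_range
    x = round(u * (pano_w - view_w))
    y = round((1.0 - v) * (pano_h - view_h))  # pitching up moves the window up
    return (min(max(x, 0), pano_w - view_w),
            min(max(y, 0), pano_h - view_h))

# Head centered: the 1920x1080 viewport sits in the middle of an 8000x2000 map.
corner = crop_window(8000, 2000, 1920, 1080, yaw_deg=0.0, pitch_deg=0.0)
```

Clamping at the edges means extreme head angles simply pin the viewport to the border of the stitched image rather than reading outside it.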

  10. PtSi gimbal-based FLIR for airborne applications

    NASA Astrophysics Data System (ADS)

    Wallace, Joseph; Ornstein, Itzhak; Nezri, M.; Fryd, Y.; Bloomberg, Steve; Beem, S.; Bibi, B.; Hem, S.; Perna, Steve N.; Tower, John R.; Lang, Frank B.; Villani, Thomas S.; McCarthy, D. R.; Stabile, Paul J.

    1997-08-01

    A new gimbal-based FLIR camera for several types of airborne platforms has been developed. The FLIR is based on PtSi-on-silicon technology, developed for high volume and minimum cost. The gimbal scans an area of 360 degrees in azimuth and an elevation range of +15 degrees to -105 degrees. It is stabilized to 25 µrad rms. A combination of uniformity correction, defect substitution, and compact optics results in a long-range, low-cost FLIR for all low-speed airborne platforms.

  11. Recording Technologies: Sights & Sounds. Resources in Technology.

    ERIC Educational Resources Information Center

    Deal, Walter F., III

    1994-01-01

    Provides information on recording technologies such as laser disks, audio and videotape, and video cameras. Presents a design brief that includes objectives, student outcomes, and a student quiz. (JOW)

  12. Clinical photography in dermatology using smartphones: An overview

    PubMed Central

    Ashique, K. T.; Kaliyadan, Feroze; Aurangabadkar, Sanjeev J.

    2015-01-01

    The smartphone is one of the biggest revolutions in the era of information technology, and its built-in camera offers several advantages. Dermatologists, who practice a specialty that is inherently visual, benefit greatly from this handy technology. In this article, we provide an overview of smartphone photography in clinical dermatology, to help the dermatologist get the best out of the available camera for clinical imaging and storage. PMID:26009708

  13. HST Solar Arrays photographed by Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This view, backdropped against the blackness of space, shows one of the two original Solar Arrays (SA) on the Hubble Space Telescope (HST). The scene was photographed with an Electronic Still Camera (ESC) and downlinked to ground controllers soon afterward. Electronic still photography is a technology that provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  14. The application of high-speed photography in z-pinch high-temperature plasma diagnostics

    NASA Astrophysics Data System (ADS)

    Wang, Kui-lu; Qiu, Meng-tong; Hei, Dong-wei

    2007-01-01

    This invited paper discusses applications of high-speed photography to z-pinch high-temperature plasma diagnostics at the Northwest Institute of Nuclear Technology in recent years. The developments and applications of a soft x-ray framing camera, a soft x-ray curved crystal spectrometer, an optical framing camera, an ultraviolet four-frame framing camera, and an ultraviolet-visible spectrometer are introduced.

  15. On a gas electron multiplier based synthetic diagnostic for soft x-ray tomography on WEST with focus on impurity transport studies

    NASA Astrophysics Data System (ADS)

    Jardin, A.; Mazon, D.; Malard, P.; O'Mullane, M.; Chernyshova, M.; Czarski, T.; Malinowski, K.; Kasprowicz, G.; Wojenski, A.; Pozniak, K.

    2017-08-01

    The tokamak WEST aims at testing ITER divertor high heat flux component technology in long pulse operation. Unfortunately, heavy impurities like tungsten (W) sputtered from the plasma facing components can pollute the plasma core by radiation cooling in the soft x-ray (SXR) range, which is detrimental for the energy confinement and plasma stability. SXR diagnostics give valuable information to monitor impurities and study their transport. The WEST SXR diagnostic is composed of two new cameras based on the Gas Electron Multiplier (GEM) technology. The WEST GEM cameras will be used for impurity transport studies by performing 2D tomographic reconstructions with spectral resolution in tunable energy bands. In this paper, we characterize the GEM spectral response and investigate W density reconstruction thanks to a synthetic diagnostic recently developed and coupled with a tomography algorithm based on the minimum Fisher information (MFI) inversion method. The synthetic diagnostic includes the SXR source from a given plasma scenario, the photoionization, electron cloud transport and avalanche in the detection volume using Magboltz, and tomographic reconstruction of the radiation from the GEM signal. Preliminary studies of the effect of transport on the W ionization equilibrium and on the reconstruction capabilities are also presented.

  16. Applications and Innovations for Use of High Definition and High Resolution Digital Motion Imagery in Space Operations

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2016-01-01

    The first live High Definition Television (HDTV) from a spacecraft was in November 2006, nearly ten years before the 2016 SpaceOps Conference. Much has changed since then. Now, live HDTV from the International Space Station (ISS) is routine. HDTV cameras stream live video views of the Earth from the exterior of the ISS every day on UStream, and HDTV has even flown around the Moon on a Japanese Space Agency spacecraft. A great deal has been learned about the operations applicability of HDTV and high resolution imagery since that first live broadcast. This paper will discuss the current state of real-time and file-based HDTV and higher resolution video for space operations. A potential roadmap will be provided for further development and innovations of high-resolution digital motion imagery, including gaps in technology enablers, especially for deep space and unmanned missions. Specific topics to be covered in the paper include: an update on radiation tolerance and performance of various camera types and sensors, and ramifications on the future applicability of these types of cameras for space operations; practical experience with downlinking very large imagery files with breaks in link coverage; ramifications of larger camera resolutions like Ultra-High Definition, 6,000-pixel, and 8,000-pixel formats in space applications; enabling technologies such as the High Efficiency Video Codec, Bundle Streaming Delay Tolerant Networking, optical communications, Bayer-pattern sensors, and other similar innovations; and likely future operations scenarios for deep space missions with extreme latency and intermittent communications links.

  17. The new camera calibration system at the US Geological Survey

    USGS Publications Warehouse

    Light, D.L.

    1992-01-01

    Modern computerized photogrammetric instruments can utilize both radial and decentering camera calibration parameters, which can increase plotting accuracy over that of the older analog instrumentation of previous decades. Recent design improvements in aerial cameras have also minimized distortions and increased the resolving power of camera systems, which should improve the performance of the overall photogrammetric process. In concert with these improvements, the Geological Survey has adopted the rigorous mathematical model for camera calibration developed by Duane Brown. The Geological Survey's calibration facility and the additional calibration parameters now provided in the USGS calibration certificate are reviewed.
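Brown's model expresses lens distortion with radial terms (k1, k2, k3) and decentering terms (p1, p2) applied to normalized image coordinates. A minimal sketch of the forward model, with illustrative coefficients rather than any USGS calibration values:

```python
def brown_distort(x, y, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Apply Brown's radial (k1..k3) and decentering (p1, p2) distortion
    to a point (x, y) in normalized image coordinates."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

Calibration estimates these coefficients by minimizing the residual between observed image points and points predicted through this model; the inverse (undistortion) is usually solved iteratively.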

  18. Mechanically assisted liquid lens zoom system for mobile phone cameras

    NASA Astrophysics Data System (ADS)

    Wippermann, F. C.; Schreiber, P.; Bräuer, A.; Berge, B.

    2006-08-01

    Camera systems with small form factor are an integral part of today's mobile phones, which recently feature auto-focus functionality. Ready-to-market solutions without moving parts have been developed using electrowetting technology. Besides exhibiting virtually no deterioration, requiring simple control electronics, and allowing simple and therefore cost-effective fabrication, this type of liquid lens enables extremely fast settling times compared to mechanical approaches. As a next evolutionary step, mobile phone cameras will be equipped with zoom functionality. We present first-order considerations for the optical design of a miniaturized zoom system based on liquid lenses and compare it to its mechanical counterpart. We propose a design for a zoom lens with a zoom factor of 2.5 considering state-of-the-art commercially available liquid lens products. The lens possesses auto-focus capability and is based on liquid lenses and one additional mechanical actuator. The combination of liquid lenses and a single mechanical actuator enables extremely short settling times of about 20 ms for the auto focus and a simplified mechanical system design, leading to lower production cost and longer lifetime. The camera system has a mechanical outline of 24 mm in length and 8 mm in diameter. The lens, with f/# 3.5, provides market-relevant optical performance and is designed for an image circle of 6.25 mm (1/2.8" format sensor).
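A first-order feel for how a variable-power liquid lens changes the system focal length comes from the thin-lens combination formula 1/f = 1/f1 + 1/f2 - d/(f1*f2). The sketch below sweeps a tunable front element between two assumed states in front of a fixed lens; all values are hypothetical and not from the paper's design (the modest ratio obtained is consistent with why a real 2.5x design adds further elements and an actuator):

```python
def efl_two_lens(f1, f2, d):
    """Effective focal length of two thin lenses with focal lengths f1, f2
    separated by distance d: 1/f = 1/f1 + 1/f2 - d/(f1*f2)."""
    return 1.0 / (1.0 / f1 + 1.0 / f2 - d / (f1 * f2))

# Tunable liquid lens (f1 swept) ahead of a fixed 10 mm lens, 4 mm apart (assumed).
f_fixed, gap = 10.0, 4.0
f_wide = efl_two_lens(40.0, f_fixed, gap)    # weak positive liquid-lens state
f_tele = efl_two_lens(-40.0, f_fixed, gap)   # weak negative liquid-lens state
zoom = f_tele / f_wide
```

Because the liquid lens changes power with no moving glass, the focal-length sweep itself is essentially instantaneous; the mechanical actuator handles the remaining travel.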

  19. Department of the Navy Supporting Data for Fiscal Year 1984 Budget Estimates Descriptive Summaries Submitted to Congress January 1983. Research, Development, Test and Evaluation, Navy. Book 1. Technology Base, Advanced Technology Development, Strategic Programs.

    DTIC Science & Technology

    1983-01-01

    … access; (2) assess maturity of on-going efforts and integrate appropriate development into an effective globally distributed command support … numerical techniques for nonlinear media-structure shock interaction, including effects of elastic-plastic deformation, have been developed and used to … shuttle flight; develop camera payload for SPARTAN (free flyer) flight from shuttle. Develop detailed interpretive system capability for global ultraviolet …

  20. Physiologically Modulating Videogames or Simulations which use Motion-Sensing Input Devices

    NASA Technical Reports Server (NTRS)

    Pope, Alan T. (Inventor); Stephens, Chad L. (Inventor); Blanson, Nina Marie (Inventor)

    2014-01-01

    New types of controllers allow players to make inputs to a video game or simulation by moving the entire controller itself. This capability is typically accomplished using a wireless input device having accelerometers, gyroscopes, and an infrared LED tracking camera. The present invention exploits these wireless motion-sensing technologies to modulate the player's movement inputs to the videogame based upon physiological signals. Such biofeedback-modulated video games train valuable mental skills beyond eye-hand coordination. These psychophysiological training technologies enhance personal improvement, not just the diversion, of the user.

  1. Smartphone based face recognition tool for the blind.

    PubMed

    Kramer, K M; Hedin, D S; Rolkosky, D J

    2010-01-01

    The inability to identify people during group meetings is a disadvantage for blind people in many professional and educational situations. To explore the efficacy of face recognition using smartphones in these settings, we have prototyped and tested a face recognition tool for blind users. The tool utilizes smartphone technology in conjunction with a wireless network to provide audio feedback identifying the people in front of the blind user. Testing indicated that the face recognition technology can tolerate up to a 40-degree angle between the direction a person is looking and the camera's axis, and achieved a 96% success rate with no false positives. Future work will further develop the technology for local face recognition on the smartphone in addition to remote server-based face recognition.

  2. Capturing Fine Details Involving Low-Cost Sensors -a Comparative Study

    NASA Astrophysics Data System (ADS)

    Rehany, N.; Barsi, A.; Lovas, T.

    2017-11-01

    Capturing the fine details on the surface of small objects is a real challenge for many conventional surveying methods. Our paper discusses the investigation of several data acquisition technologies, such as an arm scanner, a structured light scanner, a terrestrial laser scanner, an object line-scanner, a DSLR camera, and a mobile phone camera. A palm-sized embossed sculpture reproduction was used as a test object; it was surveyed with all the instruments. The resulting point clouds and meshes were then analyzed, using the arm scanner's dataset as reference. In addition to general statistics, the results were evaluated based both on 3D deviation maps and 2D deviation graphs; the latter allows even more accurate analysis of the characteristics of the different data acquisition approaches. Additionally, custom local-minimum maps were created that visualize the potential level of detail provided by the applied technologies. Besides the usual geometric assessment, the paper discusses the different resource needs (cost, time, expertise) of the discussed techniques. Our results proved that even amateur sensors operated by amateur users can provide high-quality datasets that enable engineering analysis. Based on the results, the paper concludes with an outlook on potential future investigations in this field.

  3. Nanosatellite optical downlink experiment: design, simulation, and prototyping

    NASA Astrophysics Data System (ADS)

    Clements, Emily; Aniceto, Raichelle; Barnes, Derek; Caplan, David; Clark, James; Portillo, Iñigo del; Haughwout, Christian; Khatsenko, Maxim; Kingsbury, Ryan; Lee, Myron; Morgan, Rachel; Twichell, Jonathan; Riesing, Kathleen; Yoon, Hyosang; Ziegler, Caleb; Cahoy, Kerri

    2016-11-01

    The nanosatellite optical downlink experiment (NODE) implements a free-space optical communications (lasercom) capability on a CubeSat platform that can support low earth orbit (LEO) to ground downlink rates>10 Mbps. A primary goal of NODE is to leverage commercially available technologies to provide a scalable and cost-effective alternative to radio-frequency-based communications. The NODE transmitter uses a 200-mW 1550-nm master-oscillator power-amplifier design using power-efficient M-ary pulse position modulation. To facilitate pointing the 0.12-deg downlink beam, NODE augments spacecraft body pointing with a microelectromechanical fast steering mirror (FSM) and uses an 850-nm uplink beacon to an onboard CCD camera. The 30-cm aperture ground telescope uses an infrared camera and FSM for tracking to an avalanche photodiode detector-based receiver. Here, we describe our approach to transition prototype transmitter and receiver designs to a full end-to-end CubeSat-scale system. This includes link budget refinement, drive electronics miniaturization, packaging reduction, improvements to pointing and attitude estimation, implementation of modulation, coding, and interleaving, and ground station receiver design. We capture trades and technology development needs and outline plans for integrated system ground testing.
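Link-budget refinement of the kind mentioned above amounts to decibel bookkeeping followed by a photons-per-bit sanity check at the detector. The sketch below uses placeholder gains, losses, and powers, not NODE's actual budget:

```python
def received_power_dbm(tx_power_dbm, tx_gain_db, rx_gain_db,
                       path_loss_db, other_losses_db):
    """Free-space optical link budget: add gains, subtract losses (dB terms)."""
    return tx_power_dbm + tx_gain_db + rx_gain_db - path_loss_db - other_losses_db

def photons_per_bit(p_rx_watts, rate_bps, wavelength_m=1550e-9):
    """Average photon arrivals per bit at the receiver for a given data rate."""
    h, c = 6.626e-34, 3.0e8          # Planck's constant, speed of light
    photon_energy = h * c / wavelength_m
    return p_rx_watts / rate_bps / photon_energy

# 200 mW transmitter (23 dBm) with assumed antenna gains and losses.
p_rx_dbm = received_power_dbm(23.0, 100.0, 110.0, 260.0, 10.0)  # -> -37 dBm
```

A sensitive photon-counting receiver needs the photons-per-bit figure to stay comfortably above its sensitivity floor; margin is then whatever the budget leaves over the required value.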

  4. 3D Data Acquisition Based on OpenCV for Close-Range Photogrammetry Applications

    NASA Astrophysics Data System (ADS)

    Jurjević, L.; Gašparović, M.

    2017-05-01

    Development of technology in the area of cameras, computers, and algorithms for 3D reconstruction of objects from images has resulted in the increased popularity of photogrammetry. Algorithms for 3D model reconstruction are so advanced that almost anyone can make a 3D model of a photographed object. The main goal of this paper is to examine the possibility of obtaining 3D data for close-range photogrammetry applications based on open-source technologies. All steps of obtaining a 3D point cloud are covered in this paper. Special attention is given to camera calibration, for which a two-step calibration process is used. Both the presented algorithm and the accuracy of the point cloud are tested by calculating the spatial difference between reference and produced point clouds. During algorithm testing, the robustness and speed of obtaining 3D data were noted, and usage of this and similar algorithms certainly has a lot of potential in real-time applications. That is why this research can find application in architecture, spatial planning, protection of cultural heritage, forensics, mechanical engineering, traffic management, medicine, and other fields.
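Point-cloud accuracy ultimately rests on the camera model recovered during calibration, and the usual quality figure reported after calibrating is the RMS reprojection error. A minimal pinhole-model sketch with hypothetical intrinsics (not the paper's calibration values):

```python
import math

def project(point3d, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates."""
    X, Y, Z = point3d
    return (fx * X / Z + cx, fy * Y / Z + cy)

def rms_reprojection_error(points3d, observed_px, fx, fy, cx, cy):
    """RMS distance between projected and observed pixels, the standard
    quality figure for a camera calibration."""
    se = 0.0
    for p3, (u, v) in zip(points3d, observed_px):
        up, vp = project(p3, fx, fy, cx, cy)
        se += (up - u) ** 2 + (vp - v) ** 2
    return math.sqrt(se / len(points3d))
```

In a full pipeline, a distortion model is applied between projection and the pixel comparison, and the same residual drives the calibration optimizer itself.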

  5. New Optics See More With Less

    NASA Technical Reports Server (NTRS)

    Nabors, Sammy

    2015-01-01

    NASA offers companies an optical system that provides a unique panoramic perspective with a single camera. NASA's Marshall Space Flight Center has developed a technology that combines a panoramic refracting optic (PRO) lens with a unique detection system to acquire a true 360-degree field of view. Although current imaging systems can acquire panoramic images, they must use up to five cameras to obtain the full field of view. MSFC's technology obtains its panoramic images from one vantage point.

  6. Mini AERCam: A Free-Flying Robot for Space Inspection

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven

    2001-01-01

    The NASA Johnson Space Center Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a free-flying camera system for remote viewing and inspection of human spacecraft. The AERCam project team is currently developing a miniaturized version of AERCam known as Mini AERCam, a spherical nanosatellite 7.5 inches in diameter. Mini AERCam development builds on the success of AERCam Sprint, a 1997 Space Shuttle flight experiment, by integrating new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving these productivity-enhancing capabilities in a smaller package depends on aggressive component miniaturization. Technology innovations being incorporated include micro-electromechanical system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, rechargeable xenon gas propulsion, a rechargeable lithium-ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio-frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximate flight-like configuration for laboratory demonstration on an air-bearing table. A pilot-in-the-loop, hardware-in-the-loop simulation of on-orbit navigation and dynamics will complement the air-bearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides on-orbit views of the Space Shuttle and International Space Station unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by space-walking crewmembers.

  7. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

    PubMed Central

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-01-01

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method. PMID:29112143
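The translation RMSE reported for the keyframe trajectory can be computed as below. This is a sketch assuming the estimated and ground-truth trajectories are already associated and aligned; in practice an alignment step (e.g. Horn's method, or a Sim(3) alignment for monocular scale) is applied first:

```python
import math

def translation_rmse(estimated, ground_truth):
    """Translation RMSE between two aligned keyframe trajectories,
    each given as a list of (x, y, z) positions."""
    se = 0.0
    for (xe, ye, ze), (xg, yg, zg) in zip(estimated, ground_truth):
        se += (xe - xg) ** 2 + (ye - yg) ** 2 + (ze - zg) ** 2
    return math.sqrt(se / len(estimated))
```

Reporting the error over keyframe positions, as the paper does, makes the figure comparable across SLAM systems that keep different numbers of intermediate frames.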

  8. Human tracking over camera networks: a review

    NASA Astrophysics Data System (ADS)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. Tracking humans over camera networks is inherently challenging due to changing human appearance, yet it has enormous potential for a wide range of practical applications, from security surveillance to retail and health care. This review surveys the most widely used techniques and recent advances in human tracking over camera networks. Two important functional modules are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of tracking within a camera are discussed from two perspectives, generative trackers and discriminative trackers. The core techniques of tracking across non-overlapping cameras are then discussed in terms of human re-identification, camera-link model-based tracking, and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on an analysis of the current progress in human tracking over camera networks.

  9. Teaching with Technology: Step Back and Hand over the Cameras! Using Digital Cameras to Facilitate Mathematics Learning with Young Children in K-2 Classrooms

    ERIC Educational Resources Information Center

    Northcote, Maria

    2011-01-01

    Digital cameras are now commonplace in many classrooms and in the lives of many children in early childhood centres and primary schools. They are regularly used by adults and teachers for "saving special moments and documenting experiences." The use of previously expensive photographic and recording equipment has often remained in the domain of…

  10. Evaluating effectiveness and cost of time-lapse triggered camera trapping techniques to detect terrestrial squamate diversity

    Treesearch

    Connor S. Adams; Wade A. Ryberg; Toby J. Hibbitts; Brian L. Pierce; Josh B. Pierce; D. Craig Rudolph

    2017-01-01

    Recent advancements in camera trap technology have allowed researchers to explore methodologies that are minimally invasive, and both time and cost efficient (Long et al. 2008; O’Connell et al. 2010; Gregory et al. 2014; Meek et al. 2014; Swinnen et al. 2014; Newey et al. 2015). The use of cameras for understanding the distribution and ecology of mammals is advanced;...

  11. On-line measurement of diameter of hot-rolled steel tube

    NASA Astrophysics Data System (ADS)

    Zhu, Xueliang; Zhao, Huiying; Tian, Ailing; Li, Bin

    2015-02-01

    This paper presents the design of an online diameter-measurement system for a hot-rolled seamless steel tube production line. Such a system can both stimulate the development of domestic pipe-measurement techniques and give domestic hot-rolled seamless steel tube producers stronger product competitiveness at low cost. After analyzing and comparing various detection methods and techniques, a CCD camera-based online caliper design was chosen. The system comprises a hardware measurement section and an image-processing section; combined with software control and image-processing technology, it performs online measurement of the hot tube's diameter. Given the complexity of the actual production site, a relatively simple and robust layout was adopted. The image-processing section addresses camera calibration and is implemented in Matlab, computing the diameter directly from the image and displaying it. A simulation platform built in the final design phase successfully collected and processed images, demonstrating the feasibility and soundness of the design with a measurement error below 2%. The design thus applies photoelectric detection technology to solve a practical industrial problem.
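    The core measurement step — converting a calibrated scanline across the glowing tube into a diameter — can be sketched as below. The brightness threshold and mm-per-pixel scale are hypothetical stand-ins for the system's actual calibration:

```python
def tube_diameter_mm(scanline, threshold, mm_per_px):
    """Estimate tube diameter from one image row: the hot tube appears as a
    bright band, so the diameter is the bright-pixel extent times the
    calibrated pixel pitch (mm per pixel)."""
    bright = [i for i, v in enumerate(scanline) if v > threshold]
    if not bright:
        return 0.0
    return (bright[-1] - bright[0] + 1) * mm_per_px

# Synthetic scanline: a 200-pixel bright band at 0.5 mm/pixel -> 100 mm.
row = [0.0] * 640
for i in range(200, 400):
    row[i] = 255.0
print(tube_diameter_mm(row, 128, 0.5))  # -> 100.0
```

    A real system would repeat this over many scanlines and frames and use subpixel edge localization, which is how the sub-2% error budget is met.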

  12. A Three-Line Stereo Camera Concept for Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Sandau, Rainer; Hilbert, Stefan; Venus, Holger; Walter, Ingo; Fang, Wai-Chi; Alkalai, Leon

    1997-01-01

    This paper presents a low-weight stereo camera concept for planetary exploration. The camera uses three CCD lines within the image plane of one single objective. Main features of the camera include: focal length 90 mm, FOV 18.5 deg, IFOV 78 µrad, convergence angles ±10 deg, radiometric dynamics 14 bit, weight 2 kg, and power consumption 12.5 W. From an orbit altitude of 250 km the ground pixel size is 20 m x 20 m and the swath width is 82 km. The CCD line data are buffered in the camera's internal 1 Gbit mass memory. After radiometric correction and application-dependent preprocessing, the data are compressed and ready for downlink. Through the aggressive application of advanced microelectronics and innovative optics, the low mass and power budgets of 2 kg and 12.5 W are achieved while still maintaining high performance. The design of the proposed light-weight camera is also general-purpose enough to be applicable to other planetary missions, such as the exploration of Mars, Mercury, and the Moon. Moreover, it is an example of excellent international collaboration on advanced technology concepts developed at DLR, Germany, and NASA's Jet Propulsion Laboratory, USA.
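    The quoted ground coverage follows directly from the stated optics: ground pixel size ≈ IFOV × altitude, and swath ≈ 2 × altitude × tan(FOV/2). A quick check:

```python
import math

altitude_m = 250e3   # orbit altitude from the abstract
ifov_rad = 78e-6     # instantaneous field of view
fov_deg = 18.5       # total field of view

gsd_m = ifov_rad * altitude_m                                  # ground pixel size
swath_m = 2 * altitude_m * math.tan(math.radians(fov_deg / 2)) # swath width

print(round(gsd_m, 1))       # -> 19.5, which the paper rounds to 20 m
print(round(swath_m / 1e3))  # -> 81, close to the quoted 82 km
```

    The small residual on the swath figure is expected: the flat-ground tangent formula ignores Earth curvature, which widens the swath slightly at this altitude.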

  13. Potential use of ground-based sensor technologies for weed detection.

    PubMed

    Peteinatos, Gerassimos G; Weis, Martin; Andújar, Dionisio; Rueda Ayala, Victor; Gerhards, Roland

    2014-02-01

    Site-specific weed management is the part of precision agriculture (PA) that tries to effectively control weed infestations with the least economic and environmental burden. This can be achieved with the aid of ground-based or near-range sensors in combination with decision rules and precise application technologies. Near-range sensor technologies, developed for mounting on a vehicle, have been emerging for PA applications during the last three decades. These technologies focus on identifying plants and measuring their physiological status with the aid of their spectral and morphological characteristics. Cameras, spectrometers, fluorometers and distance sensors are the most prominent sensors for PA applications. The objective of this article is to describe ground-based sensors that have the potential to be used for weed detection and measurement of weed infestation level. An overview of current sensor systems is presented, describing their concepts, the results that have been achieved, commercial systems already in use, and problems that persist. A perspective for the development of these sensors is given. © 2013 Society of Chemical Industry.

  14. Nondestructive assessment of the severity of occlusal caries lesions with near-infrared imaging at 1310 nm.

    PubMed

    Lee, Chulsung; Lee, Dustin; Darling, Cynthia L; Fried, Daniel

    2010-01-01

    The high transparency of dental enamel in the near-infrared (NIR) at 1310 nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study is to determine whether the lesion contrast derived from NIR imaging in both transmission and reflectance can be used to estimate lesion severity. Two NIR imaging detector technologies are investigated: a new Ge-enhanced complementary metal-oxide-semiconductor (CMOS)-based NIR imaging camera, and an InGaAs focal plane array (FPA). Natural occlusal caries lesions are imaged with both cameras at 1310 nm, and the image contrast between sound and carious regions is calculated. After NIR imaging, teeth are sectioned and examined using polarized light microscopy (PLM) and transverse microradiography (TMR) to determine lesion severity. Lesions are then classified into four categories according to lesion severity. Lesion contrast increases significantly with lesion severity for both cameras (p<0.05). The Ge-enhanced CMOS camera equipped with the larger array and smaller pixels yields higher contrast values compared with the smaller InGaAs FPA (p<0.01). Results demonstrate that NIR lesion contrast can be used to estimate lesion severity.
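    The contrast figure used in these studies is, in essence, a normalized intensity difference between the lesion and the surrounding sound enamel. A minimal sketch, using the common (I_lesion − I_sound)/I_lesion form appropriate for reflectance images where lesions appear brighter (the exact formula is an assumption, not quoted from the paper):

```python
def lesion_contrast(i_lesion, i_sound):
    """Normalized contrast between mean lesion intensity and mean
    sound-enamel intensity; 0 means no contrast, values toward 1
    indicate increasingly severe lesions."""
    if i_lesion == 0:
        return 0.0
    return (i_lesion - i_sound) / i_lesion

# Mean gray levels from hypothetical lesion and sound-enamel regions.
print(lesion_contrast(200.0, 50.0))   # -> 0.75
print(lesion_contrast(100.0, 100.0))  # -> 0.0
```

    Because the measure is a ratio, it is insensitive to overall illumination level, which is what allows contrast values from two different cameras to be compared against the same severity categories.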

  15. Nondestructive assessment of the severity of occlusal caries lesions with near-infrared imaging at 1310 nm

    PubMed Central

    Lee, Chulsung; Lee, Dustin; Darling, Cynthia L.; Fried, Daniel

    2010-01-01

    The high transparency of dental enamel in the near-infrared (NIR) at 1310 nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study is to determine whether the lesion contrast derived from NIR imaging in both transmission and reflectance can be used to estimate lesion severity. Two NIR imaging detector technologies are investigated: a new Ge-enhanced complementary metal-oxide-semiconductor (CMOS)-based NIR imaging camera, and an InGaAs focal plane array (FPA). Natural occlusal caries lesions are imaged with both cameras at 1310 nm, and the image contrast between sound and carious regions is calculated. After NIR imaging, teeth are sectioned and examined using polarized light microscopy (PLM) and transverse microradiography (TMR) to determine lesion severity. Lesions are then classified into four categories according to lesion severity. Lesion contrast increases significantly with lesion severity for both cameras (p<0.05). The Ge-enhanced CMOS camera equipped with the larger array and smaller pixels yields higher contrast values compared with the smaller InGaAs FPA (p<0.01). Results demonstrate that NIR lesion contrast can be used to estimate lesion severity. PMID:20799842

  16. Nondestructive assessment of the severity of occlusal caries lesions with near-infrared imaging at 1310 nm

    NASA Astrophysics Data System (ADS)

    Lee, Chulsung; Lee, Dustin; Darling, Cynthia L.; Fried, Daniel

    2010-07-01

    The high transparency of dental enamel in the near-infrared (NIR) at 1310 nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study is to determine whether the lesion contrast derived from NIR imaging in both transmission and reflectance can be used to estimate lesion severity. Two NIR imaging detector technologies are investigated: a new Ge-enhanced complementary metal-oxide-semiconductor (CMOS)-based NIR imaging camera, and an InGaAs focal plane array (FPA). Natural occlusal caries lesions are imaged with both cameras at 1310 nm, and the image contrast between sound and carious regions is calculated. After NIR imaging, teeth are sectioned and examined using polarized light microscopy (PLM) and transverse microradiography (TMR) to determine lesion severity. Lesions are then classified into four categories according to lesion severity. Lesion contrast increases significantly with lesion severity for both cameras (p<0.05). The Ge-enhanced CMOS camera equipped with the larger array and smaller pixels yields higher contrast values compared with the smaller InGaAs FPA (p<0.01). Results demonstrate that NIR lesion contrast can be used to estimate lesion severity.

  17. Video approach to chemiluminescence detection using a low-cost complementary metal oxide semiconductor (CMOS)-based camera: determination of paracetamol in pharmaceutical formulations.

    PubMed

    Lahuerta-Zamora, Luis; Mellado-Romero, Ana M

    2017-06-01

    A new system for continuous-flow chemiluminescence detection, based on a simple, low-priced lens-free digital camera (with complementary metal oxide semiconductor technology) as detector, is proposed for the quantitative determination of paracetamol in commercial pharmaceutical formulations. Through the camera software, AVI video files of the chemiluminescence emission are captured and then processed with the public-domain ImageJ software (from the National Institutes of Health) to extract the analytical information. The calibration graph was linear over the ranges 0.01-0.10 mg L-1 and 1.0-100.0 mg L-1 of paracetamol, with a limit of detection of 10 μg L-1. No significant interferences were found. Paracetamol was determined in three different pharmaceutical formulations: Termalgin®, Efferalgan® and Gelocatil®. The results compared well with those declared on the formulation labels and with those obtained by the official analytical method of the British Pharmacopoeia. Graphical abstract: abbreviated scheme of the proposed chemiluminescence detection system.
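    The video-based readout amounts to integrating frame intensity over the emission transient; a schematic version, with tiny made-up frames standing in for the decoded AVI data (the frame format and numbers are hypothetical), might look like:

```python
def frame_mean(frame):
    """Mean gray level of one frame (a list of pixel rows)."""
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels)

def cl_signal(frames):
    """Integrate mean intensity over the frames spanning the CL
    emission peak; this integrated signal is what gets plotted
    against concentration in the calibration graph."""
    return sum(frame_mean(f) for f in frames)

# Two synthetic 2x2 frames standing in for decoded video frames.
frames = [[[10, 10], [10, 10]],
          [[30, 30], [30, 30]]]
print(cl_signal(frames))  # -> 40.0
```

    Repeating this for a series of paracetamol standards and fitting signal versus concentration gives the linear calibration ranges reported above.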

  18. Detecting personnel around UGVs using stereo vision

    NASA Astrophysics Data System (ADS)

    Bajracharya, Max; Moghaddam, Baback; Howard, Andrew; Matthies, Larry H.

    2008-04-01

    Detecting people around unmanned ground vehicles (UGVs) to facilitate safe operation of UGVs is one of the highest priority issues in the development of perception technology for autonomous navigation. Research to date has not achieved the detection ranges or reliability needed in deployed systems to detect upright pedestrians in flat, relatively uncluttered terrain, let alone in more complex environments and with people in postures that are more difficult to detect. Range data is essential to solve this problem. Combining range data with high resolution imagery may enable higher performance than range data alone because image appearance can complement shape information in range data and because cameras may offer higher angular resolution than typical range sensors. This makes stereo vision a promising approach for several reasons: image resolution is high and will continue to increase, the physical size and power dissipation of the cameras and computers will continue to decrease, and stereo cameras provide range data and imagery that are automatically spatially and temporally registered. We describe a stereo vision-based pedestrian detection system, focusing on recent improvements to a shape-based classifier applied to the range data, and present frame-level performance results that show great promise for the overall approach.
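    The range data that makes stereo attractive here comes from the standard pinhole triangulation relation Z = f·B/d; a quick sketch (the focal length, baseline, and disparity values are illustrative, not from the paper):

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Depth from rectified stereo: Z = f * B / d.
    focal_px: focal length expressed in pixels;
    baseline_m: separation between the two cameras;
    disparity_px: horizontal pixel offset of a matched feature."""
    return focal_px * baseline_m / disparity_px

# A 1000 px focal length and 30 cm baseline: 10 px disparity -> 30 m range.
print(stereo_depth_m(1000.0, 0.3, 10.0))  # -> 30.0
```

    Because depth error grows quadratically with range for a fixed disparity error, long pedestrian-detection ranges demand high image resolution and subpixel matching — which is exactly the trend the authors rely on.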

  19. Design of smartphone-based spectrometer to assess fresh meat color

    NASA Astrophysics Data System (ADS)

    Jung, Youngkee; Kim, Hyun-Wook; Kim, Yuan H. Brad; Bae, Euiwon

    2017-02-01

    Based on its integrated camera, a new optical attachment, and inherent computing power, we propose and validate an instrument design that can provide an objective and accurate method to determine surface meat color change and myoglobin redox forms using a smartphone-based spectrometer. The system is designed as a reflection spectrometer, mimicking the conventional spectrometry commonly used for meat color assessment. A 3D-printed optical cradle holds all of the optical components for light collection, collimation, and dispersion, along with a suitable sample chamber. Light reflected from a sample enters a pinhole and is collimated by a convex lens; a diffraction grating then spreads the spectrum across the camera's pixels at high resolution. Pixel positions in the smartphone image are calibrated to wavelength using three laser pointers of different wavelengths: 405, 532, and 650 nm. Using an in-house app, the camera images are converted into a spectrum over the visible wavelength range based on the exterior light source. A controlled experiment simulating the refrigeration and shelving of meat demonstrated the capability to accurately measure color change in a quantitative, spectroscopic manner. We expect that this technology can be adapted to any smartphone and used for field-deployable color spectrum assays as a practical tool for various food sectors.
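    The three laser lines give a pixel-to-wavelength calibration by simple regression; a sketch, where the pixel columns of the laser lines are hypothetical values, not measurements from the paper:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical pixel columns where the 405/532/650 nm laser lines land.
laser_px = [210.0, 464.0, 700.0]
laser_nm = [405.0, 532.0, 650.0]
a, b = linear_fit(laser_px, laser_nm)

def px_to_nm(px):
    """Map a camera pixel column to wavelength via the fitted line."""
    return a * px + b

print(round(px_to_nm(464), 1))  # -> 532.0 (recovers the green laser line)
```

    With three calibration points a linear fit also exposes any grating nonlinearity as residuals; a quadratic fit could be substituted if those residuals were large.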

  20. Packet based serial link realized in FPGA dedicated for high resolution infrared image transmission

    NASA Astrophysics Data System (ADS)

    Bieszczad, Grzegorz

    2015-05-01

    This article describes the external digital interface designed for a thermographic camera built at the Military University of Technology, illustrating challenges encountered during the design of a thermal vision camera, especially those related to infrared data processing and transmission. It explains the main requirements for an interface transferring infrared or video digital data and describes the solution we elaborated, based on the Low Voltage Differential Signaling (LVDS) physical layer and signaling scheme. The image-transmission link is built on an FPGA with built-in high-speed serial transceivers achieving up to 2.5 Gbps throughput. Image transmission uses a proprietary packet protocol; the transmission protocol engine was described in VHDL and tested in FPGA hardware. The link is able to transmit 1280x1024 @ 60 Hz 24-bit video data over a single signal pair, and was tested by transmitting the thermal camera's picture to a remote monitor. The dedicated video link reduces power consumption compared to solutions with ASIC-based encoders and decoders for links such as DVI or packet-based DisplayPort, while also reducing the wiring to a single pair. The article describes the functions of the modules integrated in the FPGA design: synchronization to the video source, video stream packetization, interfacing the transceiver module, and dynamic clock generation for video standard conversion.
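    The claimed link budget is easy to check: a 1280 × 1024, 24-bit stream at 60 Hz needs about 1.89 Gbps of raw payload, which fits within the transceivers' 2.5 Gbps with headroom for packet overhead:

```python
width, height, bits_per_px, fps = 1280, 1024, 24, 60
line_rate_bps = 2.5e9  # transceiver throughput quoted in the abstract

payload_bps = width * height * bits_per_px * fps
print(payload_bps)                  # -> 1887436800 (about 1.89 Gbps)
print(payload_bps < line_rate_bps)  # -> True: fits one serial lane
```

    The roughly 25% of spare capacity is what accommodates packet headers and any line-coding overhead of the serial transceivers.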

  1. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    DOE PAGES

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; ...

    2016-11-28

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.

  2. Method to implement the CCD timing generator based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin

    2010-07-01

    With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement CCD timing generators based on FPGAs and VHDL. This paper presents the principles and implementation details of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of a timing generator implemented in the camera. The generator is composed of a top module and a bottom module; the bottom module is made up of 4 sub-modules corresponding to 4 different operation modes. The modules are implemented by 5 VHDL programs, whose architecture is shown in frame charts in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft-core processor that controls it. Some test results are presented at the end.

  3. In-Space Structural Validation Plan for a Stretched-Lens Solar Array Flight Experiment

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Woods-Vedeler, Jessica A.; Jones, Thomas W.

    2001-01-01

    This paper summarizes in-space structural validation plans for a proposed Space Shuttle-based flight experiment. The test article is an innovative, lightweight solar array concept that uses pop-up, refractive stretched-lens concentrators to achieve a power/mass density of at least 175 W/kg, which is more than three times greater than current capabilities. The flight experiment will validate this new technology to retire the risk associated with its first use in space. The experiment includes structural diagnostic instrumentation to measure the deployment dynamics, static shape, and modes of vibration of the 8-meter-long solar array and several of its lenses. These data will be obtained by photogrammetry using the Shuttle payload-bay video cameras and miniature video cameras on the array. Six accelerometers are also included in the experiment to measure base excitations and small-amplitude tip motions.

  4. Autonomous Exploration for Gathering Increased Science

    NASA Technical Reports Server (NTRS)

    Bornstein, Benjamin J.; Castano, Rebecca; Estlin, Tara A.; Gaines, Daniel M.; Anderson, Robert C.; Thompson, David R.; DeGranville, Charles K.; Chien, Steve A.; Tang, Benyang; Burl, Michael C.

    2010-01-01

    The Autonomous Exploration for Gathering Increased Science System (AEGIS) provides automated targeting for remote sensing instruments on the Mars Exploration Rover (MER) mission, which at the time of this reporting has had two rovers exploring the surface of Mars (see figure). Currently, targets for rover remote-sensing instruments must be selected manually based on imagery already on the ground with the operations team. AEGIS enables the rover flight software to analyze imagery onboard in order to autonomously select and sequence targeted remote-sensing observations in an opportunistic fashion. In particular, this technology will be used to automatically acquire sub-framed, high-resolution, targeted images taken with the MER panoramic cameras. This software provides: 1) Automatic detection of terrain features in rover camera images, 2) Feature extraction for detected terrain targets, 3) Prioritization of terrain targets based on a scientist target feature set, and 4) Automated re-targeting of rover remote-sensing instruments at the highest priority target.

  5. Image quality testing of assembled IR camera modules

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12 µm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are increasingly becoming a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the imaging-performance requirements on objectives and on the alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods like minimum resolvable temperature difference (MRTD), which give only a subjective overall test result. Measurable parameters include image quality via the modulation transfer function (MTF), broadband or with various bandpass filters, on- and off-axis, as well as optical parameters such as effective focal length (EFL) and distortion. If the camera module allows refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, and chief ray angle can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment during mechanical assembly of optics to high-resolution sensors. We also discuss the flexibility of the technology for testing IR modules with different form factors and electrical interfaces, and, last but not least, its suitability for fully automated measurements in mass production.
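    At its core, an MTF measurement reduces to normalizing the frequency response derived from a measured line-spread function (LSF). A minimal sketch of that final step, using a plain DFT magnitude (the five-sample LSF is a toy illustration, not test-bench data):

```python
import cmath

def mtf_from_lsf(lsf):
    """MTF = DFT magnitude of the line-spread function, normalized so the
    zero-frequency (DC) value is 1. Returns values up to Nyquist."""
    n = len(lsf)
    mags = []
    for k in range(n // 2 + 1):
        s = sum(v * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, v in enumerate(lsf))
        mags.append(abs(s))
    return [m / mags[0] for m in mags]

# A perfect (delta-function) LSF yields MTF = 1 at every frequency;
# any real optic's blur lowers the curve at higher frequencies.
mtf = mtf_from_lsf([0.0, 0.0, 1.0, 0.0, 0.0])
print([round(m, 6) for m in mtf])  # -> [1.0, 1.0, 1.0]
```

    Production testers obtain the LSF from slanted-edge or slit targets at several field positions, which is how the on- and off-axis MTF figures mentioned above are produced.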

  6. An on-line calibration algorithm for external parameters of visual system based on binocular stereo cameras

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua

    2014-11-01

    Stereo vision is key in visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after camera calibration, but the positional relationship between the cameras can change because of vibration, knocks, and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes unusable. A technology for both real-time checking and on-line recalibration of the external parameters of a stereo system is therefore particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate them without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimating the fundamental matrix from the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the distribution of features in actual scene images and introduce a regionally weighted normalization algorithm to improve the accuracy of fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data show that the method improves the robustness and accuracy of fundamental matrix estimation. Finally, an experiment computing the relationship of a pair of stereo cameras demonstrates the accurate performance of the algorithm.
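    The epipolar constraint underlying steps (i)–(iii) — x2ᵀ E x1 = 0 with E = [t]× R for normalized image points — can be illustrated with a minimal sketch. The identity-rotation, unit-baseline rig and the point coordinates below are hypothetical, chosen only to make the geometry transparent:

```python
def skew(t):
    """3x3 skew-symmetric matrix [t]_x, so that skew(t) @ v = t x v."""
    return [[0.0, -t[2], t[1]],
            [t[2], 0.0, -t[0]],
            [-t[1], t[0], 0.0]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def epipolar_residual(x1, x2, r, t):
    """x2^T E x1 with E = [t]_x R; zero for a geometrically consistent
    correspondence between normalized image points."""
    return dot(x2, matvec(skew(t), matvec(r, x1)))

# Identity rotation, unit translation along x; a point at depth 5 on the axis.
r_identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [1.0, 0.0, 0.0]
x1 = [0.0, 0.0, 1.0]       # normalized image point in camera 1
x2_good = [0.2, 0.0, 1.0]  # its correct match in camera 2
x2_bad = [0.2, 0.3, 1.0]   # a mismatched point

print(epipolar_residual(x1, x2_good, r_identity, t))  # -> 0.0
print(epipolar_residual(x1, x2_bad, r_identity, t))   # -> -0.3
```

    In an estimation setting this logic runs in reverse: residuals like these, accumulated over many matches, drive the fundamental-matrix fit whose factorization then recovers R and t.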

  7. High-speed optical 3D sensing and its applications

    NASA Astrophysics Data System (ADS)

    Watanabe, Yoshihiro

    2016-12-01

    This paper reviews high-speed optical 3D sensing technologies for obtaining the 3D shape of a target using a camera. The sensing speeds considered range from 100 to 1000 fps, exceeding normal camera frame rates, which are typically 30 fps. In particular, contactless, active, real-time systems are introduced. Three example applications of this type of sensing are also presented: surface reconstruction from time-sequential depth images, high-speed 3D user interaction, and high-speed digital archiving.

  8. Liquid lens: advances in adaptive optics

    NASA Astrophysics Data System (ADS)

    Casey, Shawn Patrick

    2010-12-01

    'Liquid lens' technologies promise significant advances in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications exemplify the versatile nature of this technology. Liquid lens elements allow the cost-effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern-recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.

  9. Reversible Bending Behaviors of Photomechanical Soft Actuators Based on Graphene Nanocomposites.

    PubMed

    Niu, Dong; Jiang, Weitao; Liu, Hongzhong; Zhao, Tingting; Lei, Biao; Li, Yonghao; Yin, Lei; Shi, Yongsheng; Chen, Bangdao; Lu, Bingheng

    2016-06-06

    Photomechanical nanocomposites embedded with light-absorbing nanoparticles show promise for photoresponsive actuation. Near-infrared (nIR)-responsive nanocomposite-based photomechanical soft actuators offer a lightweight, functional, and underexploited entry into soft robotics, active optics, drug delivery, and related fields. A novel graphene-based photomechanical soft actuator, consisting of a polydimethylsiloxane (PDMS)/graphene-nanoplatelets (GNPs) layer (PDMS/GNPs) and a pristine PDMS layer, has been constructed. Owing to the mismatch in the coefficients of thermal expansion of the two layers induced by the dispersion of GNPs, controllable and reversible bending responses to nIR irradiation are observed. Interestingly, two different bending behaviors occur depending on which side the nIR light strikes: a gradual single-step photomechanical bending toward the PDMS/GNPs layer when irradiated from the PDMS side, and a dual-step bending (finally bending toward the PDMS/GNPs side but with a strong, fast backlash when the light is switched on or off) when irradiated from the PDMS/GNPs side. The two distinctive bending behaviors are analyzed in terms of heat transfer and thermal expansion, which reveals that they can be attributed to differences in the temperature gradient along the thickness when irradiating from different sides. In addition, the versatile photomechanical bending properties provide an alternative route to drug delivery, soft robotics, microswitches, and other applications.

  10. Reversible Bending Behaviors of Photomechanical Soft Actuators Based on Graphene Nanocomposites

    PubMed Central

    Niu, Dong; Jiang, Weitao; Liu, Hongzhong; Zhao, Tingting; Lei, Biao; Li, Yonghao; Yin, Lei; Shi, Yongsheng; Chen, Bangdao; Lu, Bingheng

    2016-01-01

    Photomechanical nanocomposites embedded with light-absorbing nanoparticles show promise for photoresponsive actuation. Near-infrared (nIR)-responsive nanocomposite-based photomechanical soft actuators offer a lightweight, functional, and underexploited entry into soft robotics, active optics, drug delivery, and related fields. A novel graphene-based photomechanical soft actuator, consisting of a polydimethylsiloxane (PDMS)/graphene-nanoplatelets (GNPs) layer (PDMS/GNPs) and a pristine PDMS layer, has been constructed. Owing to the mismatch in the coefficients of thermal expansion of the two layers induced by the dispersion of GNPs, controllable and reversible bending responses to nIR irradiation are observed. Interestingly, two different bending behaviors occur depending on which side the nIR light strikes: a gradual single-step photomechanical bending toward the PDMS/GNPs layer when irradiated from the PDMS side, and a dual-step bending (finally bending toward the PDMS/GNPs side but with a strong, fast backlash when the light is switched on or off) when irradiated from the PDMS/GNPs side. The two distinctive bending behaviors are analyzed in terms of heat transfer and thermal expansion, which reveals that they can be attributed to differences in the temperature gradient along the thickness when irradiating from different sides. In addition, the versatile photomechanical bending properties provide an alternative route to drug delivery, soft robotics, microswitches, and other applications. PMID:27265380

  11. A Comparative Study of Microscopic Images Captured by a Box Type Digital Camera Versus a Standard Microscopic Photography Camera Unit

    PubMed Central

    Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai

    2014-01-01

    Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU) (MIPS, Microscopic Image Projection System) is costly and not available in resource-poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (box-type digital camera) with an Olympus CH20i microscope and a fluorescent microscope for the purpose of this study. Results: We obtained comparable results when capturing light-microscopy images, but the results were not as satisfactory for fluorescent microscopy. Conclusion: A box-type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350

  12. Performance evaluation and clinical applications of 3D plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel

    2015-06-01

    The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical-robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, and assesses plenoptic imaging in a clinically relevant context and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, along with precision and accuracy results in ideal and simulated surgical settings, and then report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.

  13. The future of consumer cameras

    NASA Astrophysics Data System (ADS)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades, multimedia devices, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have spread dramatically. Moreover, their increasing computational performance, combined with higher storage capacity, allows them to process large amounts of data. This paper gives an overview of current trends in the consumer camera market and technology, providing some details about the recent past (from the digital still camera up to today) and forthcoming key issues.

  14. Night Vision and Electro-Optics Technology Transfer, 1972-1981

    DTIC Science & Technology

    1981-09-15

    Lixiscope offers potential applications as: a handheld instrument for dental radiography giving real-time observation in orthodontic procedures; a portable...laboratory are described below. There are however, no hard and fast rules. The laboratory's experimentation with different films, brackets, cameras and...good single lens reflex camera; an exposure meter; a tripod; and a custom-built bracket to mate the camera and intensifier (Figure 2-1). Figure 2-1

  15. HST Solar Arrays photographed by Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This medium close-up view of one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view shows the cell side of the minus V-2 panel. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  16. Digest of NASA earth observation sensors

    NASA Technical Reports Server (NTRS)

    Drummond, R. R.

    1972-01-01

    A digest of technical characteristics of remote sensors and supporting technological experiments uniquely developed under NASA Applications Programs for Earth Observation Flight Missions is presented. Included are camera systems, sounders, interferometers, communications and experiments. In the text, these are grouped by types, such as television and photographic cameras, lasers and radars, radiometers, spectrometers, technology experiments, and transponder technology experiments. Coverage of the brief history of development extends from the first successful earth observation sensor aboard Explorer 7 in October, 1959, through the latest funded and flight-approved sensors under development as of October 1, 1972. A standard resume format is employed to normalize and mechanize the information presented.

  17. Intelligent traffic lights based on MATLAB

    NASA Astrophysics Data System (ADS)

    Nie, Ying

    2018-04-01

    In this paper, an intelligent traffic-light system is described. Using MATLAB, the camera photographs are transformed into digital signals, and the road's vehicle load is classified into three levels: heavy congestion, moderate congestion, and light congestion. Through MCU programming, different roads are then given different delay times; this method saves time and resources, thereby reducing road congestion.
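The processing chain outlined in the abstract (camera frame to digital signal, a three-level congestion classification, then per-road delay times) can be illustrated with a hypothetical Python version; the occupancy thresholds and green-light durations below are invented for the example and are not from the paper.

```python
import numpy as np

def congestion_level(vehicle_mask):
    """Classify a road frame from a boolean mask (True = vehicle pixel)."""
    occupancy = vehicle_mask.mean()   # fraction of the road covered by vehicles
    if occupancy > 0.5:
        return "heavy"
    if occupancy > 0.2:
        return "moderate"
    return "light"

# Assumed green-light durations (seconds) for each congestion level.
GREEN_TIME = {"heavy": 60, "moderate": 40, "light": 20}

# Synthetic frame: vehicles occupy 30% of the pixels -> "moderate".
mask = np.zeros((100, 100), dtype=bool)
mask[:30, :] = True
level = congestion_level(mask)
green = GREEN_TIME[level]
```

In a real system the mask would come from background subtraction or vehicle detection on the camera image, and the chosen duration would be sent to the MCU controlling the lights.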

  18. A connectionist model for dynamic control

    NASA Technical Reports Server (NTRS)

    Whitfield, Kevin C.; Goodall, Sharon M.; Reggia, James A.

    1989-01-01

    The application of a connectionist modeling method known as competition-based spreading activation to a camera tracking task is described. The potential for automating control and planning applications using connectionist technology is explored. The emphasis is on applications suitable for use in the NASA Space Station and in related space activities. The results are quite general and could be applicable to other control systems.

  19. YouTube War: Fighting in a World of Cameras in Every Cell Phone and Photoshop on Every Computer

    DTIC Science & Technology

    2009-11-01

    free MovieMaker 2 program that Microsoft includes with its Windows XP operating system. Mike Wendland, “From ENG to SNG : TV Technology for Covering the...the deployed soldier. 51. Wendland, “From ENG to SNG .” 52. This is based in part on the typology in Ben Venzke, “Jihadi Master Video Guide, JMVG

  20. New information technology tools for a medical command system for mass decontamination.

    PubMed

    Fuse, Akira; Okumura, Tetsu; Hagiwara, Jun; Tanabe, Tomohide; Fukuda, Reo; Masuno, Tomohiko; Mimura, Seiji; Yamamoto, Kaname; Yokota, Hiroyuki

    2013-06-01

    In a mass decontamination during a nuclear, biological, or chemical (NBC) response, the capability to command, control, and communicate is crucial for the proper flow of casualties at the scene and their subsequent evacuation to definitive medical facilities. Information Technology (IT) tools can be used to strengthen medical control, command, and communication during such a response. Novel IT tools comprise a vehicle-based, remote video camera and communication network systems. During an on-site verification event, an image from a remote video camera system attached to the personal protective garment of a medical responder working in the warm zone was transmitted to the on-site Medical Commander for aid in decision making. Similarly, a communication network system was used for personnel at the following points: (1) the on-site Medical Headquarters; (2) the decontamination hot zone; (3) an on-site coordination office; and (4) a remote medical headquarters of a local government office. A specially equipped, dedicated vehicle was used for the on-site medical headquarters, and facilitated the coordination with other agencies. The use of these IT tools proved effective in assisting with the medical command and control of medical resources and patient transport decisions during a mass-decontamination exercise, but improvements are required to overcome transmission delays and camera direction settings, as well as network limitations in certain areas.

  1. Impact of multi-focused images on recognition of soft biometric traits

    NASA Astrophysics Data System (ADS)

    Chiesa, V.; Dugelay, J. L.

    2016-09-01

    In video surveillance, the estimation of semantic traits such as gender and age has always been a debated topic because of the uncontrolled environment: while light and pose variations have been largely studied, defocused images are still rarely investigated. Recently, the emergence of new technologies such as plenoptic cameras has made it possible to tackle these problems by analyzing multi-focus images. Thanks to a microlens array arranged between the sensor and the main lens, light field cameras are able to record not only the RGB values but also information related to the direction of light rays: the additional data make it possible to render the image with different focal planes after the acquisition. For our experiments, we use the GUC Light Field Face Database, which includes pictures from the first-generation Lytro camera. Taking advantage of light field images, we explore the influence of defocusing on gender recognition and age estimation. Evaluations are computed with up-to-date and competitive technologies based on deep learning algorithms. After studying the relationship between focus and gender recognition and between focus and age estimation, we compare the results obtained from images defocused by the Lytro software with images blurred by more standard filters, in order to explore the difference between defocusing and blurring effects. In addition, we investigate the impact of deblurring on defocused images with the goal of better understanding the different impacts of defocusing and standard blurring on gender and age estimation.
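The defocus-versus-blur distinction studied here can be reproduced in a simple simulation: lens defocus is commonly modeled by a disk (pillbox) point-spread function, whereas a "standard filter" blur is typically Gaussian. The following sketch, with assumed kernel sizes, builds both kernels and applies them by FFT convolution; it illustrates the two effects and is not the authors' pipeline.

```python
import numpy as np

def disk_psf(radius, size):
    """Pillbox (disk) kernel, a common model of lens defocus."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = ((xx ** 2 + yy ** 2) <= radius ** 2).astype(float)
    return k / k.sum()

def gaussian_psf(sigma, size):
    """Gaussian kernel, the 'standard blur filter' baseline."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(img, psf):
    """Circular convolution via FFT, with the kernel centred at the origin."""
    pad = np.zeros_like(img)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)).real

rng = np.random.default_rng(0)
img = rng.random((64, 64))
defocused = blur(img, disk_psf(3, 9))            # disk PSF: simulated defocus
gauss_blurred = blur(img, gaussian_psf(1.5, 9))  # Gaussian: standard blur
```

The disk PSF produces the flat-topped "bokeh"-like smearing characteristic of optical defocus, while the Gaussian falls off smoothly, which is one reason the two degradations affect recognition differently.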

  2. Mobile phone-based biosensing: An emerging "diagnostic and communication" technology.

    PubMed

    Quesada-González, Daniel; Merkoçi, Arben

    2017-06-15

    In this review we discuss recent developments on the use of mobile phones and similar devices for biosensing applications in which diagnostics and communications are coupled. Owing to the capabilities of mobile phones (their cameras, connectivity, portability, etc.) and to advances in biosensing, the coupling of these two technologies is enabling portable and user-friendly analytical devices. Any user can now perform quick, robust and easy (bio)assays anywhere and at any time. Among the most widely reported of such devices are paper-based platforms. Herein we provide an overview of a broad range of biosensing possibilities, from optical to electrochemical measurements; explore the various reported designs for adapters; and consider future opportunities for this technology in fields such as health diagnostics, safety & security, and environment monitoring. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. The Use of Video-Tacheometric Technology for Documenting and Analysing Geometric Features of Objects

    NASA Astrophysics Data System (ADS)

    Woźniak, Marek; Świerczyńska, Ewa; Jastrzębski, Sławomir

    2015-12-01

    This paper analyzes selected aspects of the use of video-tacheometric technology for inventorying and documenting the geometric features of objects. Data was collected with the video-tacheometer Topcon Image Station IS-3 and the professional camera Canon EOS 5D Mark II. During the field work and the processing of the data, the following experiments were performed: multiple determination of the camera interior orientation parameters and distortion parameters of five lenses with different focal lengths, and reflectorless measurements of profiles for the elevation and inventory of the decorative surface wall of the Warsaw Ballet School building. During the research, the process of acquiring and integrating video-tacheometric data was analysed, as well as the process of combining the "point cloud" acquired by the video-tacheometer in the scanning process with independent photographs taken by a digital camera. On the basis of the tests performed, the utility of video-tacheometric technology in geodetic surveys of the geometric features of buildings has been established.

  4. The Practical Application of Uav-Based Photogrammetry Under Economic Aspects

    NASA Astrophysics Data System (ADS)

    Sauerbier, M.; Siegrist, E.; Eisenbeiss, H.; Demir, N.

    2011-09-01

    Nowadays, small-size UAVs (Unmanned Aerial Vehicles) have reached a level of practical reliability and functionality that enables this technology to enter the geomatics market as an additional platform for spatial data acquisition. Though one could imagine a wide variety of interesting sensors to be mounted on such a device, here we will focus on photogrammetric applications using digital cameras. In practice, UAV-based photogrammetry will only be accepted if it (a) provides the required accuracy and an additional value, and (b) is competitive in terms of economic application compared to other measurement technologies. While (a) has already been proven by the scientific community, with results published comprehensively during the last decade, (b) still has to be verified under real conditions. For this purpose, a test data set representing a realistic scenario provided by ETH Zurich was used to investigate cost effectiveness and to identify weak points in the processing chain that require further development. Our investigations are limited to UAVs carrying digital consumer cameras; for larger UAVs equipped with medium-format cameras the situation has to be considered significantly different. Image data was acquired during flights using a microdrones MD4-1000 quadrocopter equipped with an Olympus PE-1 digital compact camera. From these images, a subset of 5 images was selected for processing in order to register the time required for the whole production chain of photogrammetric products. We see the potential of mini-UAV-based photogrammetry mainly in smaller areas, up to a size of ca. 100 hectares. Larger areas can be covered efficiently by small airplanes with few images, reducing the processing effort drastically. In the case of smaller areas of only a few hectares, it depends more on the products required. UAVs can be an enhancement of or alternative to GNSS measurements, terrestrial laser scanning and ground-based photogrammetry.
We selected the above mentioned test data from a project featuring an area of interest within the practical range for mini UAVs. While flight planning and flight operation are already quite efficient processes, the bottlenecks identified are mainly related to image processing. Although we used specific software for image processing, the identified gaps in the processing chain today are valid for most commercial photogrammetric software systems on the market. An outlook proposing improvements for a practicable workflow applicable in projects in private economy will be given.

  5. Light in flight photography and applications (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Faccio, Daniele

    2017-02-01

    The first successful attempts (Abramson) at capturing light in flight relied on the holographic interference between the "object" beam scattered from a screen and a short reference pulse propagating at an angle, acting as an ultrafast shutter. This interference pattern was recorded on a photographic plate or film and allowed the visualisation of light as it propagated through complex environments with unprecedented temporal and spatial resolution. More recently, advances in ultrafast camera technology, and in particular the use of picosecond-resolution streak cameras, allowed the direct digital recording of a light pulse propagating through a plastic bottle (Raskar et al.). This represented a remarkable step forward, as it provided the first ever video recording (in the traditional sense in which one intends a video, i.e. something that can be played back directly on a screen and saved in digital format) of a pulse of light in flight. We will discuss a different technology that is based on an imaging camera with a pixel array in which each individual pixel is a single-photon avalanche diode (SPAD). SPADs offer both sensitivity to single photons and picosecond temporal resolution of the photon arrival time (with respect to a trigger event). When adding imaging capability, SPAD arrays can deliver videos of light pulses propagating in free space, without the need for a scattering medium or diffuser as in all previous work (Gariepy et al.). This capability can then be harnessed for a variety of applications. We will discuss the details of SPAD camera detection of moving objects (e.g. human beings) that are hidden from view and then conclude with a discussion of future perspectives in the field of bio-imaging.

  6. Reading Out Single-Molecule Digital RNA and DNA Isothermal Amplification in Nanoliter Volumes with Unmodified Camera Phones

    PubMed Central

    2016-01-01

    Digital single-molecule technologies are expanding diagnostic capabilities, enabling the ultrasensitive quantification of targets, such as viral load in HIV and hepatitis C infections, by directly counting single molecules. Replacing fluorescent readout with a robust visual readout that can be captured by any unmodified cell phone camera will facilitate the global distribution of diagnostic tests, including in limited-resource settings where the need is greatest. This paper describes a methodology for developing a visual readout system for digital single-molecule amplification of RNA and DNA by (i) selecting colorimetric amplification-indicator dyes that are compatible with the spectral sensitivity of standard mobile phones, and (ii) identifying an optimal ratiometric image-processing scheme for a selected dye to achieve a readout that is robust to lighting conditions and camera hardware and provides unambiguous quantitative results, even for colorblind users. We also include an analysis of the limitations of this methodology, and provide a microfluidic approach that can be applied to expand dynamic range and improve reaction performance, allowing ultrasensitive, quantitative measurements at volumes as low as 5 nL. We validate this methodology using SlipChip-based digital single-molecule isothermal amplification with λDNA as a model and hepatitis C viral RNA as a clinically relevant target. The innovative combination of isothermal amplification chemistry in the presence of a judiciously chosen indicator dye and ratiometric image processing with SlipChip technology allowed the sequence-specific visual readout of single nucleic acid molecules in nanoliter volumes with an unmodified cell phone camera. When paired with devices that integrate sample preparation and nucleic acid amplification, this hardware-agnostic approach will increase the affordability and the distribution of quantitative diagnostic and environmental tests. PMID:26900709
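The paper's key idea of a ratiometric readout can be illustrated with a toy example: scoring each well by a ratio of color-channel means cancels global brightness changes, which is what makes the readout robust to lighting and camera hardware. The channel choice (green over blue) and the threshold below are assumptions for illustration, not the dye-specific ratio the authors selected.

```python
import numpy as np

def well_ratio(rgb):
    """Ratiometric score of one well image (H x W x 3 float RGB):
    mean green over mean blue.  A global brightness change scales both
    channels equally, so it cancels in the ratio."""
    return rgb[..., 1].mean() / rgb[..., 2].mean()

def is_positive(rgb, threshold=1.3):
    """Call a well 'amplified' when its ratio exceeds the assumed threshold."""
    return well_ratio(rgb) > threshold

rng = np.random.default_rng(1)
negative = rng.uniform(0.4, 0.6, (20, 20, 3))   # unreacted well, ratio ~ 1
positive = negative.copy()
positive[..., 1] *= 1.8                         # indicator dye shifts colour
dim = 0.5 * positive                            # same well under dimmer light
```

Because `dim` is just `positive` scaled by a constant, it yields the same ratio and the same call, which is the invariance the methodology exploits.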

  7. Digital Camera Project Fosters Communication Skills

    ERIC Educational Resources Information Center

    Fisher, Ashley; Lazaros, Edward J.

    2009-01-01

    This article details the many benefits of educators' use of digital camera technology and provides an activity in which students practice taking portrait shots of classmates, manipulate the resulting images, and add language arts practice by interviewing their subjects to produce a photo-illustrated Word document. This activity gives…

  8. High spatial resolution infrared camera as ISS external experiment

    NASA Astrophysics Data System (ADS)

    Eckehard, Lorenz; Frerker, Hap; Fitch, Robert Alan

    A high-spatial-resolution infrared camera as an ISS external experiment for monitoring global climate changes uses ISS internal and external resources (e.g., data storage). The optical experiment will consist of an infrared camera for monitoring global climate changes from the ISS. This technology was evaluated by the German small-satellite mission BIRD and further developed in different ESA projects. Compared to BIRD, the presented instrument uses proven, advanced sensor technologies (ISS external) and ISS on-board processing and storage capabilities (internal). The instrument will be equipped with a serial interface for TM/TC and several relay commands for the power supply. For data processing and storage, a mass memory is required. Access to actual attitude data is highly desired to produce geo-referenced maps, if possible by on-board processing.

  9. Band registration of tuneable frame format hyperspectral UAV imagers in complex scenes

    NASA Astrophysics Data System (ADS)

    Honkavaara, Eija; Rosnell, Tomi; Oliveira, Raquel; Tommaselli, Antonio

    2017-12-01

    A recent revolution in miniaturised sensor technology has provided markets with novel hyperspectral imagers operating on the frame format principle. In the case of unmanned aerial vehicle (UAV) based remote sensing, the frame format technology is highly attractive in comparison to the commonly utilised pushbroom scanning technology, because it offers better stability and the possibility to capture stereoscopic data sets, bringing an opportunity for 3D hyperspectral object reconstruction. Tuneable filters are one of the approaches for capturing multi- or hyperspectral frame images. The individual bands are not aligned when operating a sensor based on tuneable filters from a mobile platform, such as a UAV, because the full spectrum recording is carried out on the time-sequential principle. The objective of this investigation was to study the aspects of band registration of an imager based on tuneable filters and to develop a rigorous and efficient approach for band registration in complex 3D scenes, such as forests. The method first determines the orientations of selected reference bands and reconstructs the 3D scene using structure-from-motion and dense image matching technologies. The bands without orientation are then matched to the oriented bands, accounting for the 3D scene, to provide exterior orientations; afterwards, hyperspectral orthomosaics, or hyperspectral point clouds, are calculated. The uncertainty aspects of the novel approach were studied. An empirical assessment was carried out in a forested environment using hyperspectral images captured with a hyperspectral 2D frame format camera, based on a tuneable Fabry-Pérot interferometer (FPI), on board a multicopter and supported by a high-spatial-resolution consumer colour camera. A theoretical assessment showed that the method was capable of providing a band registration accuracy better than 0.5 pixel. The empirical assessment confirmed this performance and showed that, with the novel method, most of the band misalignments were smaller than the pixel size. Furthermore, it was shown that the performance of the band alignment depended on the spatial distance from the reference band.
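A basic building block for the band-matching step described above, here simplified to a pure 2D translation rather than the paper's full 3D scene-aware orientation, is phase correlation, which estimates the shift between a reference band and an unaligned band from the normalized cross-power spectrum. A minimal sketch:

```python
import numpy as np

def phase_correlate(ref, band):
    """Estimate the integer (dy, dx) shift such that
    band ~ np.roll(ref, (dy, dx), axis=(0, 1)),
    using the normalized cross-power spectrum."""
    f_ref, f_band = np.fft.fft2(ref), np.fft.fft2(band)
    cross = np.conj(f_ref) * f_band
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:                          # wrap to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(2)
reference_band = rng.random((128, 128))
misaligned_band = np.roll(reference_band, (5, -3), axis=(0, 1))
shift = phase_correlate(reference_band, misaligned_band)  # recovers (5, -3)
```

In a complex 3D scene such as a forest, a single global shift is insufficient, which is exactly why the paper orients each band against the reconstructed scene instead.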

  10. Improvement of passive THz camera images

    NASA Astrophysics Data System (ADS)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

    Terahertz technology is one of the emerging technologies that has the potential to change our lives. There are many attractive applications in fields like security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an undiscovered, or most importantly, an unexploited area of the electromagnetic spectrum. The reasons for this were the difficulties in generating and detecting THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate through various materials. However, automated processing of THz images can be challenging. The THz frequency band is especially suited to clothes penetration because this radiation does not have any harmful ionizing effects and is thus safe for human beings. Strong technological development in this band has produced a few interesting devices. Even though the development of THz cameras is an emerging topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. Therefore, THz image processing is a challenging and urgent topic. Digital THz image processing is a promising and cost-effective way to meet demanding security and defense applications. In the article we demonstrate the results of image quality enhancement and image fusion of images captured by a commercially available passive THz camera by means of various combined methods. Our research is focused on the detection of dangerous objects - guns, knives and bombs hidden under some popular types of clothing.
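As a minimal illustration of the enhancement-plus-fusion idea applied to low-quality passive THz frames, the sketch below combines a percentile contrast stretch with pixel-wise maximum fusion; both steps are generic assumptions for illustration, not the specific combined methods evaluated in the paper.

```python
import numpy as np

def stretch(img, lo_pct=2, hi_pct=98):
    """Percentile contrast stretch to [0, 1], a basic enhancement step."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def fuse_max(a, b):
    """Pixel-wise maximum fusion: keep the stronger response per pixel."""
    return np.maximum(a, b)

rng = np.random.default_rng(3)
frame_a = 0.5 * rng.random((32, 32))   # two low-contrast THz-like frames
frame_b = 0.5 * rng.random((32, 32))
fused = fuse_max(stretch(frame_a), stretch(frame_b))
```

Max fusion is a deliberately simple rule; in practice multi-scale or wavelet-based fusion is often preferred for concealed-object detection.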

  11. White light phase shifting interferometry and color fringe analysis for the detection of contaminants in water

    NASA Astrophysics Data System (ADS)

    Dubey, Vishesh; Singh, Veena; Ahmad, Azeem; Singh, Gyanendra; Mehta, Dalip Singh

    2016-03-01

    We report white-light phase-shifting interferometry in conjunction with color fringe analysis for the detection of contaminants in water such as Escherichia coli (E. coli), Campylobacter coli and Bacillus cereus. The experimental setup is based on a common-path interferometer using a Mirau interferometric objective lens. White-light interferograms are recorded using a 3-chip color CCD camera based on prism technology. The 3-chip color camera has less color crosstalk and better spatial resolution than a single-chip CCD camera. A piezo-electric transducer (PZT) phase shifter is fixed to the Mirau objective, and both are attached to a conventional microscope. Five phase-shifted white-light interferograms are recorded by the 3-chip color CCD camera, and each phase-shifted interferogram is decomposed into its red, green and blue constituent colors, thus producing three sets of five phase-shifted interferograms for three different colors from a single set of white-light interferograms. This makes the system less time consuming and less affected by the surrounding environment. Initially, 3D phase maps of the bacteria are reconstructed for the red, green and blue wavelengths from these interferograms using MATLAB; from these phase maps we determine the refractive index (RI) of the bacteria. Experimental results of 3D shape measurement and RI at multiple wavelengths will be presented. These results might find applications in the detection of contaminants in water without using any chemical processing or fluorescent dyes.
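The abstract records five phase-shifted interferograms per colour channel. Assuming uniform phase steps of pi/2 (the step size is not stated in the abstract), the wrapped phase can be recovered with the classical Hariharan five-frame algorithm; the sketch below demonstrates it on a synthetic fringe.

```python
import numpy as np

def hariharan_phase(i1, i2, i3, i4, i5):
    """Wrapped phase from five interferograms taken with pi/2 phase steps
    (classical Hariharan five-frame algorithm):
        phi = atan2(2 * (I2 - I4), 2 * I3 - I1 - I5)
    """
    return np.arctan2(2.0 * (i2 - i4), 2.0 * i3 - i1 - i5)

# Synthetic single-wavelength fringe with a known phase ramp.
x = np.linspace(-np.pi / 2, np.pi / 2, 256)
phi_true = 2.0 * x                    # ground-truth phase in [-pi, pi]
a, b = 1.0, 0.7                       # background and fringe modulation
frames = [a + b * np.cos(phi_true + k * np.pi / 2) for k in (-2, -1, 0, 1, 2)]
phi = hariharan_phase(*frames)        # recovers phi_true
```

In the paper's setup this recovery would be repeated for each of the red, green and blue channels after decomposing the colour interferograms, giving three wrapped phase maps from one acquisition.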

  12. Moving mobile: using an open-sourced framework to enable a web-based health application on touch devices.

    PubMed

    Lindsay, Joseph; McLean, J Allen; Bains, Amrita; Ying, Tom; Kuo, M H

    2013-01-01

    Computer devices using touch-enabled technology are becoming more prevalent today. The application of a touch screen high definition surgical monitor could allow not only high definition video from an endoscopic camera to be displayed, but also the display and interaction with relevant patient and health related data. However, this technology has not been quickly embraced by all health care organizations. Although traditional keyboard or mouse-based software programs may function flawlessly on a touch-based device, many are not practical due to the usage of small buttons, fonts and very complex menu systems. This paper describes an approach taken to overcome these problems. A real case study was used to demonstrate the novelty and efficiency of the proposed method.

  13. Instrumentation for Aim Point Determination in the Close-in Battle

    DTIC Science & Technology

    2007-12-01

    Rugged camcorder with remote “ lipstick ” camera (http://www.samsung.com/Products/ Camcorder/DigitalMemory/files/scx210wl.pdf). ........ 5 Figure 5...target. One way of making a measurement is to mount a small “ lipstick ” camera to the rifle with a mount similar to the laser-tag transmitter mount...technology.com/contractors/surveillance/viotac-inc/viotac-inc1.html). Figure 4. Rugged camcorder with remote “ lipstick ” camera (http://www.samsung.com

  14. Data Mining and Information Technology: Its Impact on Intelligence Collection and Privacy Rights

    DTIC Science & Technology

    2007-11-26

    sources include: Cameras - Digital cameras (still and video) have been improving in capability while simultaneously dropping in cost at a rate...citizen is caught on camera 300 times each day.5 The power of extensive video coverage is magnified greatly by the nascent capability for voice and...software on security videos and tracking cell phone usage in the local area. However, it would only return the names and data of those who

  15. Designing informed game-based rehabilitation tasks leveraging advances in virtual reality.

    PubMed

    Lange, Belinda; Koenig, Sebastian; Chang, Chien-Yen; McConnell, Eric; Suma, Evan; Bolas, Mark; Rizzo, Albert

    2012-01-01

    This paper details a brief history and rationale for the use of virtual reality (VR) technology for clinical research and intervention, and then focuses on game-based VR applications in the area of rehabilitation. An analysis of the match between rehabilitation task requirements and the assets available with VR technology is presented. Low-cost camera-based systems capable of tracking user behavior at sufficient levels for game-based virtual rehabilitation activities are currently available for in-home use. Authoring software is now being developed that aims to provide clinicians with a usable toolkit for leveraging this technology. This will facilitate informed professional input on software design, development and application to ensure safe and effective use in the rehabilitation context. The field of rehabilitation generally stands to benefit from the continual advances in VR technology, concomitant system cost reductions and an expanding clinical research literature and knowledge base. Home-based activity within VR systems that are low-cost, easy to deploy and maintain, and meet the requirements for "good" interactive rehabilitation tasks could radically improve users' access to care, adherence to prescribed training and subsequently enhance functional activity in everyday life in clinical populations.

  16. Cameras Reveal Elements in the Short Wave Infrared

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Goodrich ISR Systems Inc. (formerly Sensors Unlimited Inc.), based out of Princeton, New Jersey, received Small Business Innovation Research (SBIR) contracts from the Jet Propulsion Laboratory, Marshall Space Flight Center, Kennedy Space Center, Goddard Space Flight Center, Ames Research Center, Stennis Space Center, and Langley Research Center to assist in advancing and refining indium gallium arsenide imaging technology. Used on the Lunar Crater Observation and Sensing Satellite (LCROSS) mission in 2009 for imaging the short wave infrared wavelengths, the technology has dozens of applications in military, security and surveillance, machine vision, medical, spectroscopy, semiconductor inspection, instrumentation, thermography, and telecommunications.

  17. SAE Mil-1394 For Military and Aerospace Vehicle Applications

    NASA Technical Reports Server (NTRS)

    Dunga, Larry; Wroble, Mike; Kreska, Jack

    2004-01-01

    Unique opportunity to utilize new technology while increasing vehicle and crew member safety. Demonstration of new technology that can be utilized for Crew Exploration Vehicle and other future manned vehicles. Future work for other cameras in the vehicle that can be IEEE1394 based without major vehicle modifications. Demonstrates that LM can share information and knowledge between internal groups and NASA to assist in providing a product in support of the NASA Return to Flight Activities. This upgrade will provide a flight active data bus that is 100 times faster than any similar bus on the vehicle.

  18. A Dynamic View of Molecular Switch Behavior at Serotonin Receptors: Implications for Functional Selectivity

    PubMed Central

    Martí-Solano, Maria; Sanz, Ferran; Pastor, Manuel; Selent, Jana

    2014-01-01

    Functional selectivity is a property of G protein-coupled receptors that allows them to preferentially couple to particular signaling partners upon binding of biased agonists. Publication of the X-ray crystal structure of serotonergic 5-HT1B and 5-HT2B receptors in complex with ergotamine, a drug capable of activating G protein coupling and β-arrestin signaling at the 5-HT1B receptor but clearly favoring β-arrestin over G protein coupling at the 5-HT2B subtype, has recently provided structural insight into this phenomenon. In particular, these structures highlight the importance of specific residues, also called micro-switches, for differential receptor activation. In our work, we apply classical molecular dynamics simulations and enhanced sampling approaches to analyze the behavior of these micro-switches and their impact on the stabilization of particular receptor conformational states. Our analysis shows that differences in the conformational freedom of helix 6 between both receptors could explain their different G protein-coupling capacity. In particular, as compared to the 5-HT1B receptor, helix 6 movement in the 5-HT2B receptor can be constrained by two different mechanisms. On the one hand, an anchoring effect of ergotamine, which shows an increased capacity to interact with the extracellular part of helices 5 and 6 and stabilize them, hinders activation of a hydrophobic connector region at the center of the receptor. On the other hand, this connector region in an inactive conformation is further stabilized by unconserved contacts extending to the intracellular part of the 5-HT2B receptor, which hamper opening of the G protein binding site. This work highlights the importance of considering receptor capacity to adopt different conformational states from a dynamic perspective in order to underpin the structural basis of functional selectivity. PMID:25313636

  20. Extracting 3d Semantic Information from Video Surveillance System Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Zhang, J. S.; Cao, J.; Mao, B.; Shen, D. Q.

    2018-04-01

At present, intelligent video analysis technology is widely used in many fields. Object tracking is an important part of intelligent video surveillance, but traditional target tracking based on the pixel coordinate system of the image has some unavoidable problems: pixel-based tracking cannot reflect the real position of a target, and it makes tracking objects across scenes difficult. Building on Zhengyou Zhang's camera calibration method, this paper presents a target tracking method based on the target's spatial coordinate system, obtained by converting the target's 2-D pixel coordinates into 3-D coordinates. The experimental results show that our method recovers the real position changes of targets well and accurately obtains each target's trajectory in space.
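The paper's exact pipeline is not given here, but the core idea of converting a tracked target's pixel coordinates into world coordinates can be sketched for the common flat-ground case. All calibration values below (intrinsics `K`, rotation `R`, translation `t`) are hypothetical illustrative numbers, not figures from the paper:

```python
import numpy as np

# Hypothetical calibration for illustration: camera 5 m above a flat
# ground plane (world Z = 0), optical axis pointing straight down.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])   # intrinsics (e.g., from Zhang calibration)
R = np.eye(3)                            # world-to-camera rotation
t = np.array([0.0, 0.0, 5.0])            # world-to-camera translation (m)

def pixel_to_ground(u, v, K, R, t):
    """Back-project pixel (u, v) onto the world plane Z = 0.

    For points on Z = 0 the projection reduces to a homography
    H = K [r1 r2 t]; inverting it maps pixels back onto the plane.
    """
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    X = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return X[:2] / X[2]   # (world X, world Y) in metres

# Forward-project a known ground point, then recover it from its pixel.
p_world = np.array([1.0, 2.0, 0.0])
p_cam = R @ p_world + t
uvw = K @ p_cam
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
print(pixel_to_ground(u, v, K, R, t))   # ≈ [1. 2.]
```

Tracking in these recovered ground coordinates, rather than in pixels, is what lets trajectories from non-overlapping cameras be stitched into one spatial frame.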

  1. A review of micro-contact physics for microelectromechanical systems (MEMS) metal contact switches

    NASA Astrophysics Data System (ADS)

    Toler, Benjamin F.; Coutu, Ronald A., Jr.; McBride, John W.

    2013-10-01

Innovations in relevant micro-contact areas are highlighted; these include design, contact resistance modeling, contact materials, performance, and reliability. For each area, the basic theory and relevant innovations are explored. A brief comparison of actuation methods shows why electrostatic actuation is most commonly used by radio-frequency microelectromechanical systems designers. Important characteristics of the contact interface, such as modeling and material choice, are then examined. Micro-contact resistance models based on plastic, elastic-plastic and elastic deformations are reviewed. Much of the modeling for metal-contact micro-switches centers on contact area and surface roughness, so surface roughness and its effect on contact area are stressed when considering micro-contact resistance modeling. Finite element models and various approaches for describing surface roughness are compared. Different contact materials, including gold, gold alloys, carbon nanotubes, composite gold-carbon nanotubes, ruthenium, ruthenium oxide, and tungsten, have been shown to enhance contact performance and reliability, with distinct trade-offs for each. Finally, physical and electrical failure modes observed by researchers are detailed and examined.
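The plastic-deformation contact-resistance models reviewed above can be illustrated with the classic Holm constriction formula for a single circular contact spot. The material numbers below are rough textbook-order values for gold, chosen for illustration only and not taken from the paper:

```python
import math

def holm_contact_resistance(force_N, hardness_Pa, resistivity_ohm_m):
    """Constriction resistance R = rho / (2a) of one circular contact spot.

    Under fully plastic deformation the real contact area is A = F / H,
    so the spot radius is a = sqrt(F / (pi * H)).
    """
    a = math.sqrt(force_N / (math.pi * hardness_Pa))
    return resistivity_ohm_m / (2.0 * a)

# Rough illustrative values for a gold-gold micro-contact
RHO_AU = 2.44e-8   # electrical resistivity, ohm*m
H_AU = 1.0e9       # indentation hardness, Pa

for f in (50e-6, 100e-6, 200e-6):   # typical micro-switch contact forces, N
    r = holm_contact_resistance(f, H_AU, RHO_AU)
    print(f"{f * 1e6:6.0f} uN -> {r * 1e3:.1f} mOhm")
```

Because R scales as F^(-1/2) in this model, doubling the contact force lowers the resistance by only a factor of sqrt(2), which is one reason material choice matters as much as actuation force.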

  2. Portable, low-priced retinal imager for eye disease screening

    NASA Astrophysics Data System (ADS)

    Soliz, Peter; Nemeth, Sheila; VanNess, Richard; Barriga, E. S.; Zamora, Gilberto

    2014-02-01

The objective of this project was to develop and demonstrate a portable, low-priced, easy-to-use non-mydriatic retinal camera for eye disease screening in underserved urban and rural locations. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease of use) to be distributed widely to low-volume clinics, such as the offices of single primary care physicians serving rural communities or other economically stressed healthcare facilities. Our approach for Smart i-Rx is a significant departure from current generations of desktop and hand-held commercial retinal cameras as well as those under development. Our techniques include: 1) exclusive use of off-the-shelf components; 2) integration of the retinal imaging device into a low-cost, high-utility camera mount and chin rest; 3) a unique optical and illumination design for a small form factor; 4) exploitation of the autofocus technology built into present digital SLR recreational cameras; and 5) integration of a polarization technique to avoid the corneal reflex. In a prospective study, 41 out of 44 diabetics were imaged successfully. No imaging was attempted on three of the subjects due to noticeably small pupils (less than 2 mm). The images were of sufficient quality to detect abnormalities related to diabetic retinopathy, such as microaneurysms and exudates. These images were compared with ones taken non-mydriatically with a Canon CR-1 Mark II camera. No cases identified as having DR by expert retinal graders were missed in the Smart i-Rx images.

  3. Material of LAPAN's thermal IR camera equipped with two microbolometers in one aperture

    NASA Astrophysics Data System (ADS)

    Bustanul, A.; Irwan, P.; Andi M., T.

    2017-11-01

Besides the wavelength range used, another factor must be considered when designing an optical system: choosing materials appropriate for the selected spectral bands. Because the range of available materials is limited and they are expensive, choosing and specifying materials for infrared (IR) wavelengths is more difficult and complex than for the visible spectrum. We faced the same problem while designing our thermal IR camera, which has two microbolometers sharing one aperture. Two spectral bands, 3-4 μm (MWIR) and 8-12 μm (LWIR), were chosen for the camera to address its missions, i.e., peat-land fires, volcanic activity, and sea surface temperature (SST). Based on these bands, we selected the appropriate materials for the optics of LAPAN's IR camera. This paper describes the materials of LAPAN's IR camera equipped with two microbolometers in one aperture. We first studied the properties of optical materials across IR technology, including its bandwidths. The analysis then considered several aspects: transmission, index of refraction, and thermal properties, covering the index gradient and the coefficient of thermal expansion (CTE). We also used commercial software, Thermal Desktop/Sinda Fluint, to strengthen the process. Constraints such as the space environment, low cost, and performance (mainly durability and transmission) were also taken into account throughout the trade-off work. The results of these analyses, in both graphs and measurements, indicate that the lenses of LAPAN's shared-aperture IR camera should be based on germanium and zinc selenide.
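One reason the thermal properties above matter so much for germanium optics: an IR singlet defocuses with temperature through the thermo-optic effect. A back-of-the-envelope sketch using the standard thin-lens thermal-defocus relation, with textbook-order values for germanium that are assumptions for illustration, not numbers from the paper:

```python
def thermal_defocus(f_mm, dT, n, dn_dT, alpha):
    """Focal shift of a thin singlet with temperature:
    df = -f * (dn/dT / (n - 1) - alpha) * dT
    where alpha is the lens material's thermal expansion coefficient."""
    return -f_mm * (dn_dT / (n - 1.0) - alpha) * dT

# Textbook-order germanium values (assumed, not from the paper):
# n ~ 4.0, dn/dT ~ 4e-4 /K (very large), CTE ~ 5.7e-6 /K
dz = thermal_defocus(f_mm=100.0, dT=20.0, n=4.003, dn_dT=3.96e-4, alpha=5.7e-6)
print(f"focal shift of a 100 mm Ge lens over 20 K: {dz:.2f} mm")
```

The shift is a sizeable fraction of a millimetre for a modest 20 K swing, which is why germanium systems for variable environments typically need athermalized mounts or mixed-material designs such as Ge/ZnSe.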

  4. Implementing a Reliability Centered Maintenance Program at NASA's Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Tuttle, Raymond E.; Pete, Robert R.

    1998-01-01

Maintenance practices have long focused on time-based "preventive maintenance" techniques: components were changed out and parts replaced based on how long they had been in place rather than on their condition. A reliability centered maintenance (RCM) program seeks to offer equal or greater reliability at decreased cost by ensuring that only applicable, effective maintenance is performed, and in large part by replacing time-based maintenance with condition-based maintenance. A significant portion of this program involved introducing non-intrusive technologies, such as vibration analysis, oil analysis and I/R cameras, to an existing labor force and management team.

  5. Automatic calibration method for plenoptic camera

    NASA Astrophysics Data System (ADS)

    Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao

    2016-04-01

An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images in the white image are found and recognized automatically using digital morphology. The center points of the microlens images are then rearranged according to their relative positions. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated, without prior knowledge of the camera parameters. Furthermore, the method is appropriate for all types of microlens-based plenoptic cameras, including the multifocus plenoptic camera, cameras with arbitrarily arranged microlenses, and cameras with microlenses of different sizes. Finally, we verify the method on raw data from a Lytro camera. The experiments show that our method is more automated than previously published methods.
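The center-finding step can be made concrete with a simple intensity-weighted centroid: given a rough integer center for each microlens spot on the white image, refine it to sub-pixel accuracy. This is a minimal stand-in for the paper's morphology-based search, run here on a synthetic spot rather than real Lytro data:

```python
import numpy as np

def refine_center(img, cy, cx, win=5):
    """Sub-pixel spot centre: intensity-weighted centroid of a
    (2*win+1) x (2*win+1) window around an integer guess (cy, cx)."""
    y0, x0 = int(cy) - win, int(cx) - win
    patch = img[y0:y0 + 2 * win + 1, x0:x0 + 2 * win + 1]
    ys, xs = np.mgrid[y0:y0 + 2 * win + 1, x0:x0 + 2 * win + 1]
    m = patch.sum()
    return (ys * patch).sum() / m, (xs * patch).sum() / m

# Synthetic "white image" with one Gaussian microlens spot at (20.3, 31.7)
yy, xx = np.mgrid[0:48, 0:64]
img = np.exp(-((yy - 20.3) ** 2 + (xx - 31.7) ** 2) / (2 * 1.5 ** 2))

print(refine_center(img, 20, 32))   # close to (20.3, 31.7)
```

In a full calibration, these refined centers would then be sorted into rows and columns by their relative positions, exactly the "rearranging" step the abstract describes.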

  6. LPT. Shield test control building (TAN645), north facade. Camera facing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LPT. Shield test control building (TAN-645), north facade. Camera facing south. Obsolete sign dating from post-1970 program says "Energy and Systems Technology Experimental Facility, INEL." INEEL negative no. HD-40-5-4 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  7. ProxiScan™: A Novel Camera for Imaging Prostate Cancer

    ScienceCinema

    Ralph James

    2017-12-09

    ProxiScan is a compact gamma camera suited for high-resolution imaging of prostate cancer. Developed by Brookhaven National Laboratory and Hybridyne Imaging Technologies, Inc., ProxiScan won a 2009 R&D 100 Award, sponsored by R&D Magazine to recognize t

  8. Ontological Representation of Light Wave Camera Data to Support Vision-Based AmI

    PubMed Central

    Serrano, Miguel Ángel; Gómez-Romero, Juan; Patricio, Miguel Ángel; García, Jesús; Molina, José Manuel

    2012-01-01

Recent advances in technologies for capturing video data have opened a vast number of new application areas in visual sensor networks. Among them, the incorporation of light wave cameras into Ambient Intelligence (AmI) environments provides more accurate tracking capabilities for activity recognition. Although the performance of tracking algorithms has quickly improved, the symbolic models used to represent the resulting knowledge have not yet been adapted to smart environments. This lack of representation prevents taking advantage of the semantic quality of the information provided by new sensors. This paper advocates the introduction of a part-based representational level in cognitive systems in order to accurately represent the knowledge from these novel sensors. The paper also reviews the theoretical and practical issues in part-whole relationships, proposing a specific taxonomy for computer vision approaches. General part-based patterns for the human body, together with transitive part-based representation and inference, are incorporated into a previous ontology-based framework to enhance scene interpretation in the area of video-based AmI. The advantages and new features of the model are demonstrated in a Social Signal Processing (SSP) application for conducting live market research.

  9. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. In the first, the camera image is focused onto the lenslet array, which is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. In the second form, the lenslet array relays the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to produce a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.

  10. Coral Reef Surveillance: Infrared-Sensitive Video Surveillance Technology as a New Tool for Diurnal and Nocturnal Long-Term Field Observations.

    PubMed

    Dirnwoeber, Markus; Machan, Rudolf; Herler, Juergen

    2012-10-31

Direct field observations of fine-scaled biological processes and interactions of the benthic community of corals and associated reef organisms (e.g., feeding, reproduction, mutualistic or agonistic behavior, behavioral responses to changing abiotic factors) usually involve a disturbing intervention. Modern digital camcorders (without inflexible land- or ship-based cable connection) such as the GoPro camera enable undisturbed and unmanned, stationary close-up observations. Such observations, however, are also very time-limited (~3 h), and full 24-h recordings throughout day and night, including nocturnal observations without artificial daylight illumination, are not possible. Herein we introduce the application of modern standard video surveillance technology with the main objective of providing a tool for monitoring coral reef or other sessile and mobile organisms for periods of 24 h and longer. This system includes nocturnal close-up observations with miniature infrared (IR)-sensitive cameras and separate high-power IR-LEDs. Integrating this easy-to-set-up and portable remote-sensing equipment into coral reef research is expected to significantly advance our understanding of fine-scaled biotic processes on coral reefs. Rare events and long-lasting processes can easily be recorded, in situ experiments can be monitored live on land, and nocturnal IR observations reveal undisturbed behavior. The options and equipment choices in IR-sensitive surveillance technology are numerous and subject to a steadily increasing technical supply and quality at decreasing prices. Accompanied by short video examples, this report introduces a radio-transmission system for simultaneous recordings and real-time monitoring of multiple cameras with synchronized timestamps, and a surface-independent underwater-recording system.

  11. Anaglyph Image Technology As a Visualization Tool for Teaching Geology of National Parks

    NASA Astrophysics Data System (ADS)

    Stoffer, P. W.; Phillips, E.; Messina, P.

    2003-12-01

Anaglyphic stereo viewing technology emerged in the mid-1800s. Anaglyphs use offset images in contrasting colors (typically red and cyan) that, when viewed through color filters, produce a three-dimensional (3-D) image. Modern anaglyph image technology has become increasingly easy to use and relatively inexpensive using digital cameras, scanners, color printing, and common image manipulation software. Perhaps the primary drawbacks of anaglyph images are visualization problems with primary colors (such as flowers, bright clothing, or blue sky) and distortion factors in large depth-of-field images. However, anaglyphs are more versatile than polarization techniques since they can be printed, displayed on computer screens (such as on websites), or projected with a single projector (as slides or digital images), and red and cyan viewing glasses cost less than polarization glasses and other 3-D viewing alternatives. Anaglyph images are especially well suited for most natural landscapes, such as views dominated by natural earth tones (grays, browns, greens), and they work well for sepia and black-and-white images (making the conversion of historic stereo photography into anaglyphs easy). We used a simple stereo camera setup incorporating two digital cameras with a rigid base to photograph landscape features in national parks (including arches, caverns, cactus, forests, and coastlines). We also scanned historic stereographic images. Using common digital image manipulation software, we created websites featuring anaglyphs of geologic features from national parks. We used the same images for popular 3-D poster displays at the U.S. Geological Survey Open House 2003 in Menlo Park, CA. Anaglyph photography could easily be used in combined educational outdoor activities and laboratory exercises.
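The red-cyan encoding itself is essentially a channel swap on RGB arrays: take the red channel from the left-eye image and the green/blue (cyan) channels from the right-eye image. A minimal numpy sketch, using a tiny synthetic stereo pair since the authors' exact software is not specified:

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Red channel from the left eye, green+blue (cyan) from the right.

    Both inputs are HxWx3 arrays of the same shape; viewed through
    red/cyan glasses, each eye sees (approximately) only its own image.
    """
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out

# Tiny synthetic stereo pair: left is red-dominant, right is cyan-dominant
left = np.zeros((2, 2, 3), dtype=np.uint8);  left[..., 0] = 200
right = np.zeros((2, 2, 3), dtype=np.uint8); right[..., 1:] = 120

ana = make_anaglyph(left, right)
print(ana[0, 0])   # [200 120 120]
```

This simple version exhibits exactly the drawback noted above: saturated primary colors in either source image leak into the wrong eye, which is why gray-toned landscapes convert so cleanly.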

  13. Idea Technology and Product Technology: Seeing beyond the Text to the Technology That Works

    ERIC Educational Resources Information Center

    Bednar, Maryanne R.

    2004-01-01

    Sifting through the myriad "idea" technologies (such as multiple intelligence theories or Piaget's Theory of Cognitive Development) and "product" technologies (such as PowerPoint or digital cameras) can be overwhelming, but Bednar persuades us that it's not about having the most recent technology, it's about using what works for "your" students in…

  14. Sound localization with communications headsets: comparison of passive and active systems.

    PubMed

    Abel, Sharon M; Tsang, Suzanne; Boyne, Stephen

    2007-01-01

    Studies have demonstrated that conventional hearing protectors interfere with sound localization. This research examines possible benefits from advanced communications devices. Horizontal plane sound localization was compared in normal-hearing males with the ears unoccluded and fitted with Peltor H10A passive attenuation earmuffs, Racal Slimgard II communications muffs in active noise reduction (ANR) and talk-through-circuitry (TTC) modes and Nacre QUIETPRO TM communications earplugs in off (passive attenuation) and push-to-talk (PTT) modes. Localization was assessed using an array of eight loudspeakers, two in each spatial quadrant. The stimulus was 75 dB SPL, 300-ms broadband noise. One block of 120 forced-choice loudspeaker identification trials was presented in each condition. Subjects responded using a laptop response box with a set of eight microswitches in the same configuration as the speaker array. A repeated measures ANOVA was applied to the dataset. The results reveal that the overall percent correct response was highest in the unoccluded condition (94%). A significant reduction of 24% was observed for the communications devices in TTC and PTT modes and a reduction of 49% for the passive muff and plug and muff with ANR. Disruption in performance was due to an increase in front-back reversal errors for mirror image spatial positions. The results support the conclusion that communications devices with advanced technologies are less detrimental to directional hearing than conventional, passive, limited amplification and ANR devices.
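The front-back-reversal scoring described above can be made concrete: a response counts as a reversal when the chosen loudspeaker is the mirror image of the target about the interaural (left-right) axis, i.e. azimuth a maps to (180 - a) mod 360. The eight azimuths and the trial data below are hypothetical, with two speakers per quadrant as in the study:

```python
# Hypothetical speaker azimuths (degrees, clockwise from front), two per quadrant
AZIMUTHS = [30, 60, 120, 150, 210, 240, 300, 330]

def mirror(az):
    """Front-back mirror about the interaural axis: 30 <-> 150, 240 <-> 300, ..."""
    return (180 - az) % 360

def score(trials):
    """trials: list of (target_az, response_az) pairs.
    Returns (percent correct, percent front-back reversals)."""
    n = len(trials)
    correct = sum(t == r for t, r in trials)
    reversals = sum(r == mirror(t) for t, r in trials)
    return 100.0 * correct / n, 100.0 * reversals / n

# Five made-up trials: three exact hits, two front-back confusions
trials = [(30, 30), (60, 60), (150, 30), (210, 330), (300, 300)]
print(score(trials))   # (60.0, 40.0)
```

Separating reversals from other errors in this way is what lets the study attribute the performance drop under hearing protection specifically to front-back confusion rather than random responding.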

  15. Application Analysis of BIM Technology in Metro Rail Transit

    NASA Astrophysics Data System (ADS)

    Liu, Bei; Sun, Xianbin

    2018-03-01

With the rapid development of urban transport, the construction of subway rail transit has become an effective way to alleviate urban traffic congestion, but such projects face engineering problems including limited site space, complex resource allocation, tight schedules, and complicated underground pipelines. BIM technology, with its advantages of three-dimensional visualization, parameterization, and virtual simulation, can effectively address these technical problems. Based on the Shenzhen Metro Line 9 project, BIM technology is applied innovatively throughout the project lifecycle, in a context where BIM is still rarely used in metro rail transit. The model information file is imported into Navisworks for four-dimensional animation simulation to determine the optimal construction scheme for the shield machine. A subway construction management platform based on BIM and private-cloud technology uses cameras and sensors to achieve electronic integration and dynamic monitoring of the operation and maintenance of underground facilities. Making full use of the many advantages of BIM technology improves the engineering quality and construction efficiency of the subway rail transit project and supports its operation and maintenance.

  16. Can reliable sage-grouse lek counts be obtained using aerial infrared technology

    USGS Publications Warehouse

    Gillette, Gifford L.; Coates, Peter S.; Petersen, Steven; Romero, John P.

    2013-01-01

    More effective methods for counting greater sage-grouse (Centrocercus urophasianus) are needed to better assess population trends through enumeration or location of new leks. We describe an aerial infrared technique for conducting sage-grouse lek counts and compare this method with conventional ground-based lek count methods. During the breeding period in 2010 and 2011, we surveyed leks from fixed-winged aircraft using cryogenically cooled mid-wave infrared cameras and surveyed the same leks on the same day from the ground following a standard lek count protocol. We did not detect significant differences in lek counts between surveying techniques. These findings suggest that using a cryogenically cooled mid-wave infrared camera from an aerial platform to conduct lek surveys is an effective alternative technique to conventional ground-based methods, but further research is needed. We discuss multiple advantages to aerial infrared surveys, including counting in remote areas, representing greater spatial variation, and increasing the number of counted leks per season. Aerial infrared lek counts may be a valuable wildlife management tool that releases time and resources for other conservation efforts. Opportunities exist for wildlife professionals to refine and apply aerial infrared techniques to wildlife monitoring programs because of the increasing reliability and affordability of this technology.

  17. Crop 3D-a LiDAR based platform for 3D high-throughput crop phenotyping.

    PubMed

    Guo, Qinghua; Wu, Fangfang; Pang, Shuxin; Zhao, Xiaoqian; Chen, Linhai; Liu, Jin; Xue, Baolin; Xu, Guangcai; Li, Le; Jing, Haichun; Chu, Chengcai

    2018-03-01

With a growing population and shrinking arable land, breeding is considered an effective way to address the food crisis. As an important part of breeding, high-throughput phenotyping can effectively accelerate the breeding process. Light detection and ranging (LiDAR) is an active remote sensing technology capable of acquiring three-dimensional (3D) data accurately, and it has great potential in crop phenotyping. Given that crop phenotyping based on LiDAR technology is not common in China, we developed a high-throughput crop phenotyping platform, named Crop 3D, which integrates a LiDAR sensor, a high-resolution camera, a thermal camera and a hyperspectral imager. Compared with traditional crop phenotyping techniques, Crop 3D can acquire multi-source phenotypic data across the whole crop growing period and extract plant height, plant width, leaf length, leaf width, leaf area, leaf inclination angle and other parameters for plant biology and genomics analysis. In this paper, we describe the design, functions and testing results of the Crop 3D platform, and briefly discuss its potential applications and future development in phenotyping. We conclude that platforms integrating LiDAR and traditional remote sensing techniques may be the future trend of high-throughput crop phenotyping.
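One of the simplest traits extracted from such LiDAR data, plant height, can be sketched as a percentile spread between canopy and ground returns. The point cloud below is synthetic, and the 99th/1st-percentile choice is a common robustness heuristic, not necessarily Crop 3D's exact algorithm:

```python
import numpy as np

def plant_height(z_values, canopy_pct=99, ground_pct=1):
    """Estimate plant height from the z coordinates of a LiDAR point cloud
    as the spread between a high canopy percentile and a low ground one
    (percentiles are more robust to stray returns than max - min)."""
    z = np.asarray(z_values, dtype=float)
    return np.percentile(z, canopy_pct) - np.percentile(z, ground_pct)

# Synthetic plot: ground returns near 0 m, canopy returns near 1.2 m
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0.0, 0.02, 500),    # ground hits
                    rng.normal(1.2, 0.05, 500)])   # canopy hits
print(f"estimated height: {plant_height(z):.2f} m")   # top of canopy minus ground
```

A real pipeline would first classify ground points (e.g., by gridding the plot and taking per-cell minima) before measuring each plant, but the percentile idea is the core of the height trait.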

  18. Scientific CCD technology at JPL

    NASA Technical Reports Server (NTRS)

    Janesick, J.; Collins, S. A.; Fossum, E. R.

    1991-01-01

    Charge-coupled devices (CCD's) were recognized for their potential as an imaging technology almost immediately following their conception in 1970. Twenty years later, they are firmly established as the technology of choice for visible imaging. While consumer applications of CCD's, especially the emerging home video camera market, dominated manufacturing activity, the scientific market for CCD imagers has become significant. Activity of the Jet Propulsion Laboratory and its industrial partners in the area of CCD imagers for space scientific instruments is described. Requirements for scientific imagers are significantly different from those needed for home video cameras, and are described. An imager for an instrument on the CRAF/Cassini mission is described in detail to highlight achieved levels of performance.

  19. Astronaut Kathryn Thornton on HST photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-05

    S61-E-011 (5 Dec 1993) --- This view of astronaut Kathryn C. Thornton working on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Thornton, anchored to the end of the Remote Manipulator System (RMS) arm, is installing the +V2 Solar Array Panel as a replacement for the original one removed earlier. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  20. DataPlay's mobile recording technology

    NASA Astrophysics Data System (ADS)

    Bell, Bernard W., Jr.

    2002-01-01

A small rotating memory device has been developed that utilizes prerecorded and writeable optical technology to provide a mobile recording solution for digital cameras, cell phones, music players, PDAs, and hybrid multipurpose devices. The solution encompasses writeable, read-only, and encrypted storage media.

  1. The Sensor Irony: How Reliance on Sensor Technology is Limiting Our View of the Battlefield

    DTIC Science & Technology

    2010-05-10

thermal) camera, as well as a laser illuminator/range finder. Similar to the MQ-1, the MQ-9 Reaper is primarily a strike asset for emerging targets... Wescam 14TS. Both systems have an electro-optical (daylight) TV camera, an infra-red (thermal) camera, as well as a laser illuminator/range finder...

  2. Leading Edge. Sensors Challenges and Solutions for the 21st Century. Volume 7, Issue Number 2

    DTIC Science & Technology

    2010-01-01

    above, microbolometer technology is not very sensitive. To gain sensitivity, one needs to go to IR cameras that have cryogenically cooled detector ...QWIP) and detector arrays made from mercury cadmium telluride (MCT). Both types can be very sensitive. QWIP cameras have spectral detection bands...commercially available IR camera to meet the needs of CAPTC. One MCT camera was located that had a detection band from 7.7 µm to 11.6 µm and included an

  3. IRAIT project: future mid-IR operations at Dome C during summer

    NASA Astrophysics Data System (ADS)

    Tosti, Gino; IRAIT Collaboration

    The IRAIT project consists of a robotic mid-infrared telescope that will be hosted at Dome C, at the Italian-French Concordia station on the Antarctic Plateau. The telescope was built in collaboration with the PNRA (Technology and Earth-Sun Interaction and Astrophysics sectors). Its focal-plane instrumentation is a mid-infrared camera (5-25 μm), based on the TIRCAM II prototype, which is the result of a joint effort between institutes of CNR and INAF. International collaborations with French and Spanish institutes for the construction of a near-infrared spectrographic camera have also been started. We present the status of the project and the ongoing developments that will make it possible to start infrared observations at Dome C during the 2005-2006 summer Antarctic campaign.

  4. Efficient large-scale graph data optimization for intelligent video surveillance

    NASA Astrophysics Data System (ADS)

    Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming

    2017-08-01

    Society is rapidly adopting cameras in a wide variety of locations and applications: traffic monitoring, parking-lot surveillance, vehicles, and smart spaces. These cameras provide data every day that must be analyzed effectively. Recent advances in sensor manufacturing, communications, and computing are stimulating the development of new applications that transform traditional vision systems into pervasive smart-camera networks. Analysis of visual cues in multi-camera networks enables a wide range of applications, from smart home and office automation to large-area and traffic surveillance. Dense camera networks, in which most cameras have large overlapping fields of view, are already well studied; we focus instead on sparse camera networks. A sparse camera network covers a large surveillance area with as few cameras as possible, so most cameras do not overlap each other's field of view. This setting is challenging because of the lack of knowledge of the network topology, the changes in target appearance and motion across different views, and the difficulty of understanding complex events in the network. In this review paper, we present a comprehensive survey of recent results on topology learning, object appearance modeling, and global activity understanding in sparse camera networks. Some open research issues are also discussed.

  5. (99m)Tc-MDP bone scintigraphy of the hand: comparing the use of novel cadmium zinc telluride (CZT) and routine NaI(Tl) detectors.

    PubMed

    Koulikov, Victoria; Lerman, Hedva; Kesler, Mikhail; Even-Sapir, Einat

    2015-12-01

    Cadmium zinc telluride (CZT) solid-state detectors have recently been introduced in nuclear medicine for cardiology and breast imaging. The aim of the current study was to evaluate the performance of the novel CZT detectors compared with that of routine NaI(Tl) detectors in bone scintigraphy. A dual-headed CZT-based camera originally dedicated to breast imaging was used and, in view of the limited size of the detectors, the hands were chosen as the organ for assessment. This is a clinical study. Fifty-eight consecutive patients (116 hands in total) referred for bone scan for suspected hand pathology gave their informed consent to two acquisitions, one on the routine camera and one on the CZT-based camera. The CZT acquisition was performed both at full dose and full acquisition time (FD CZT) and at reduced dose and short acquisition time (RD CZT), so three image sets were available for analysis. Data analysis included comparing the detection of hot lesions and the identification of the metacarpophalangeal, proximal interphalangeal, and distal interphalangeal joints. A total of 69 hot lesions were detected on the CZT image sets; of these, 61 were identified as focal sites of uptake on NaI(Tl) data. On FD CZT data, 385 joints were identified compared to 168 on NaI(Tl) data (p < 0.001). There was no statistically significant difference in delineation of joints between FD and RD CZT data, as the latter identified 383 joints. Bone scintigraphy using a CZT-based gamma camera is associated with improved lesion detection and anatomic definition. The superior physical characteristics of this technology suggest that administered dose and/or acquisition time could be reduced without compromising image quality.

  6. Radar based autonomous sensor module

    NASA Astrophysics Data System (ADS)

    Styles, Tim

    2016-10-01

    Most surveillance systems combine camera sensors with other detection sensors that trigger an alert to a human operator when an object is detected. The detection sensors typically require careful installation and configuration for each application and there is a significant burden on the operator to react to each alert by viewing camera video feeds. A demonstration system known as Sensing for Asset Protection with Integrated Electronic Networked Technology (SAPIENT) has been developed to address these issues using Autonomous Sensor Modules (ASM) and a central High Level Decision Making Module (HLDMM) that can fuse the detections from multiple sensors. This paper describes the 24 GHz radar based ASM, which provides an all-weather, low power and license exempt solution to the problem of wide area surveillance. The radar module autonomously configures itself in response to tasks provided by the HLDMM, steering the transmit beam and setting range resolution and power levels for optimum performance. The results show the detection and classification performance for pedestrians and vehicles in an area of interest, which can be modified by the HLDMM without physical adjustment. The module uses range-Doppler processing for reliable detection of moving objects and combines Radar Cross Section and micro-Doppler characteristics for object classification. Objects are classified as pedestrian or vehicle, with vehicle sub classes based on size. Detections are reported only if the object is detected in a task coverage area and it is classified as an object of interest. The system was shown in a perimeter protection scenario using multiple radar ASMs, laser scanners, thermal cameras and visible band cameras. This combination of sensors enabled the HLDMM to generate reliable alerts with improved discrimination of objects and behaviours of interest.

  7. Field test studies of our infrared-based human temperature screening system embedded with a parallel measurement approach

    NASA Astrophysics Data System (ADS)

    Sumriddetchkajorn, Sarun; Chaitavon, Kosom

    2009-07-01

    This paper introduces a parallel measurement approach for fast infrared-based human temperature screening suitable for use in a large public area. Our key idea is based on the combination of simple image processing algorithms, infrared technology, and human flow management. With this multidisciplinary concept, we arrange as many people as possible in a two-dimensional space in front of a thermal imaging camera and then highlight all human facial areas through simple image filtering, image morphology, and particle analysis. In this way, each individual's face in the live thermal image can be located and the maximum facial skin temperature monitored and displayed. Our experiment shows a measured 1 ms processing time for highlighting all human face areas. With a thermal imaging camera having a 24° × 18° FOV lens and 320 × 240 active pixels, the maximum facial skin temperatures of three people located 1.3 m from the camera can be simultaneously monitored and displayed at a measured rate of 31 fps, limited by the looping process that determines the coordinates of all faces. In our 3-day test under an ambient temperature of 24-30 °C, 57-72% relative humidity, and weak wind from outside the hospital building, hyperthermic patients could be identified with 100% sensitivity and 36.4% specificity when the temperature threshold level and the offset temperature value were appropriately chosen. Locating the system away from building doors, air conditioners, and electric fans, so as to eliminate wind blowing toward the camera lens, can significantly improve its specificity.
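The screening pipeline described above (threshold the thermal image, isolate warm blobs as candidate faces, check each blob's maximum temperature against an alarm level) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code; the function names and the threshold and alarm temperatures are invented placeholders:

```python
# Hypothetical sketch of thermal-face screening: threshold a temperature frame
# to find warm blobs, label connected components, and flag blobs whose maximum
# temperature exceeds an alarm level. All names and thresholds are invented.
import numpy as np
from collections import deque

def label_blobs(mask):
    """4-connected component labelling of a boolean mask (simple BFS)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        current += 1
        queue = deque([(sy, sx)])
        labels[sy, sx] = current
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

def screen_frame(frame_c, face_threshold=33.0, alarm=37.5):
    """Return a (max_temp, is_hot) tuple for each detected warm blob."""
    labels, n = label_blobs(frame_c >= face_threshold)
    results = []
    for k in range(1, n + 1):
        t_max = float(frame_c[labels == k].max())
        results.append((t_max, t_max >= alarm))
    return results

# Toy frame with two "faces": one febrile, one normal.
frame = np.full((8, 8), 25.0)
frame[1:3, 1:3] = 38.2   # hot blob
frame[5:7, 5:7] = 34.0   # normal blob
print(screen_frame(frame))
```

A production system would add the morphological filtering and particle-size analysis the paper mentions, to reject small warm artifacts that are not faces.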

  8. Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera

    NASA Astrophysics Data System (ADS)

    Liu, Rui-Xue; Zheng, Xian-Liang; Li, Da-Yu; Xia, Ming-Liang; Hu, Li-Fa; Cao, Zhao-Liang; Mu, Quan-Quan; Xuan, Li

    2014-09-01

    With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. Aiming to reduce patient discomfort and to avoid potential drug-induced complications, we attempted to image the retina with a dilated pupil and frozen accommodation, without drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel eye-stared system was used for stimulating accommodation and fixating the imaging area. The illumination sources and imaging camera moved in linkage for focusing on and imaging different layers. Four subjects with diverse degrees of myopia were imaged. Based on the optical properties of the human eye, the eye-stared system reduced the defocus to less than the typical ocular depth of focus. In this way, the illumination light could be projected onto a given retinal layer precisely. Since the defocus had been compensated by the eye-stared system, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the spatial fidelity needed to fully compensate high-order aberrations. The Strehl ratio of a subject with -8 diopter myopia was improved to 0.78, close to diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and imaging camera, cone photoreceptors, blood vessels, and the nerve fiber layer were all imaged clearly.

  9. Virtual-stereo fringe reflection technique for specular free-form surface testing

    NASA Astrophysics Data System (ADS)

    Ma, Suodong; Li, Bo

    2016-11-01

    Due to their excellent ability to improve the performance of optical systems, free-form optics have attracted extensive interest in many fields, e.g. optical design of astronomical telescopes, laser beam expanders, and spectral imagers. However, compared with traditional simple surfaces, testing such optics is usually more complex and difficult, which has been a major barrier to their manufacture and application. Fortunately, owing to the rapid development of electronic devices and computer vision technology, the fringe reflection technique (FRT), with its simple system structure, high measurement accuracy, and large dynamic range, is becoming a powerful tool for specular free-form surface testing. To obtain absolute surface shape distributions of test objects, two or more cameras are often required in the conventional FRT, which makes the system structure more complex and the measurement cost much higher; high-precision synchronization between the cameras is also a troublesome issue. To overcome these drawbacks, a virtual-stereo FRT for specular free-form surface testing is put forward in this paper. It achieves absolute profiles with only a single biprism and one camera, while avoiding the problems of stereo FRT based on binocular or multi-ocular cameras. Preliminary experimental results demonstrate the feasibility of the proposed technique.

  10. Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis

    PubMed Central

    Shaukat, Affan; Blacker, Peter C.; Spiteri, Conrad; Gao, Yang

    2016-01-01

    In recent decades, terrain modelling and reconstruction techniques have attracted increasing research interest for precise short- and long-distance autonomous navigation, localisation and mapping within field robotics. One of the most challenging applications is autonomous planetary exploration using mobile robots. Rovers deployed to explore extraterrestrial surfaces are required to perceive and model the environment with little or no intervention from the ground station. To date, stereopsis represents the state-of-the-art method and can achieve short-distance planetary surface modelling. However, future space missions will require scene reconstruction at greater distance, fidelity and feature complexity, potentially using other sensors like Light Detection And Ranging (LIDAR). LIDAR has been extensively exploited for target detection, identification, and depth estimation in terrestrial robotics, but is still under development as a viable technology for space robotics. This paper first reviews current methods for scene reconstruction and terrain modelling using cameras in planetary robotics and LIDARs in terrestrial robotics; we then propose camera-LIDAR fusion as a feasible technique to overcome the limitations of either of these individual sensors for planetary exploration. A comprehensive analysis is presented to demonstrate the advantages of camera-LIDAR fusion in terms of range, fidelity, accuracy and computation. PMID:27879625

  11. Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta

    2012-10-01

    A mobile mapping system (MMS) is the geoinformation community's answer to the exponentially growing demand for various geospatial data with increasingly higher accuracies, captured by multiple sensors. As mobile mapping technology is pushed to explore its use for various applications on water, rail, or road, the need emerges for an external sensor calibration procedure that is portable, fast and easy to perform. This way, sensors can be mounted and demounted depending on the application requirements without the need for time-consuming calibration procedures. A new methodology is presented to provide a high-quality external calibration of cameras that is automatic, robust and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. Here, a mutual-information-based image registration technique is studied for automatic alignment of the ranging pole. Finally, a few benchmarking tests under various lighting conditions prove the methodology's robustness, showing high absolute stereo measurement accuracies of a few centimeters.

  12. Distributed processing method for arbitrary view generation in camera sensor network

    NASA Astrophysics Data System (ADS)

    Tehrani, Mehrdad P.; Fujii, Toshiaki; Tanimoto, Masayuki

    2003-05-01

    A camera sensor network is a new kind of network in which each sensor node can capture video signals, process them, and communicate with other nodes. The processing task in this network is to generate an arbitrary view, which can be requested by a central node or a user. To avoid unnecessary communication between nodes in the camera sensor network and to speed up processing, we have distributed the processing tasks between nodes. In this method, each sensor node processes part of the interpolation algorithm to generate the interpolated image, with local communication between nodes. The processing task in the camera sensor network is ray-space interpolation, an object-independent method based on MSE minimization using adaptive filtering. Two methods were proposed for distributing the processing tasks, Fully Image Shared Decentralized Processing (FIS-DP) and Partially Image Shared Decentralized Processing (PIS-DP), which share image data locally. Comparison of the proposed methods with the Centralized Processing (CP) method shows that PIS-DP has the highest processing speed after FIS-DP, and CP has the lowest. The communication rates of CP and PIS-DP are almost the same, and better than FIS-DP's. PIS-DP is therefore recommended for its better overall performance than CP and FIS-DP.

  13. Hardware Testing for the Optical PAyload for Lasercomm Science (OPALS)

    NASA Technical Reports Server (NTRS)

    Slagle, Amanda

    2011-01-01

    Hardware for several subsystems of the proposed Optical PAyload for Lasercomm Science (OPALS), including the gimbal and avionics, was tested. Microswitches installed on the gimbal were evaluated to verify that their point of actuation would remain within the acceptable range even if the switches themselves move slightly during launch. An inspection of the power board was conducted to ensure that all power and ground signals were isolated, that polarized components were correctly oriented, and that all components were intact and securely soldered. Initial testing on the power board revealed several minor problems, but once they were fixed the power board was shown to function correctly. All tests and inspections were documented for future use in verifying launch requirements.

  14. Characterization and application of selective all-wet metallization of silicon

    NASA Astrophysics Data System (ADS)

    Uncuer, Muhammet; Koser, Hur

    2012-01-01

    We demonstrate selective, two-level metallization of silicon using electroless deposition of copper and gold. In this process, adhesion between the copper and silicon is improved with the formation of intermediary copper-silicide, and the gold layer protects copper from oxidation. The resistivity and residual stress of Au/Cu is 450 Ω nm (220 Ω nm annealed) and 56 MPa (tensile), respectively. These Au/Cu films allow a truly conformal and selective coating of high-aspect-ratio Si structures with good adhesion. We demonstrate the potential of these films in microswitches/relays, accelerometers and sensors by conformally coating the sidewalls of long (up to 1 mm in length), slender microbeams (5 µm × 5 µm) without inducing curvature.

  15. Integration of USB and firewire cameras in machine vision applications

    NASA Astrophysics Data System (ADS)

    Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard

    1999-08-01

    Digital cameras have been around for many years, but a new breed of consumer-market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, such as image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper briefly describes a couple of the consumer digital standards and then discusses some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.

  16. Multi-color IR sensors based on QWIP technology for security and surveillance applications

    NASA Astrophysics Data System (ADS)

    Sundaram, Mani; Reisinger, Axel; Dennis, Richard; Patnaude, Kelly; Burrows, Douglas; Cook, Robert; Bundas, Jason

    2006-05-01

    Room-temperature targets are detected at the furthest distance by imaging them in the long wavelength (LW: 8-12 μm) infrared spectral band where they glow brightest. Focal plane arrays (FPAs) based on quantum well infrared photodetectors (QWIPs) have sensitivity, noise, and cost metrics that have enabled them to become the best commercial solution for certain security and surveillance applications. Recently, QWIP technology has advanced to provide pixel-registered dual-band imaging in both the midwave (MW: 3-5 μm) and longwave infrared spectral bands in a single chip. This elegant technology affords a degree of target discrimination as well as the ability to maximize detection range for hot targets (e.g. missile plumes) by imaging in the midwave and for room-temperature targets (e.g. humans, trucks) by imaging in the longwave with one simple camera. Detection-range calculations are illustrated and FPA performance is presented.

  17. Capitalizing on mobile technology to support healthy eating in ethnic minority college students.

    PubMed

    Rodgers, Rachel F; Pernal, Wendy; Matsumoto, Atsushi; Shiyko, Mariya; Intille, Stephen; Franko, Debra L

    2016-01-01

    To evaluate the capacity of a mobile technology-based intervention to support healthy eating among ethnic minority female students. Forty-three African American and Hispanic female students participated in a 3-week intervention between January and May 2013. Participants photographed their meals using their smart phone camera and received motivational text messages 3 times a day. At baseline, postintervention, and 10 weeks after the intervention, participants reported on fruit, vegetable, and sugar-sweetened beverage consumption. Participants were also weighed at baseline. Among participants with body mass index (BMI) ≥25, fruit and vegetable consumption increased with time (p < .01). Among participants with BMI <21, consumption of fruit decreased (p < .05), whereas the consumption of vegetables remained stable. No effects were found for sugar-sweetened beverage consumption. Mobile technology-based interventions could facilitate healthy eating among female ethnic minority college students, particularly those with higher BMI.

  18. Toward a digital camera to rival the human eye

    NASA Astrophysics Data System (ADS)

    Skorka, Orit; Joseph, Dileepan

    2011-07-01

    All things considered, electronic imaging systems do not rival the human visual system despite notable progress over 40 years since the invention of the CCD. This work presents a method that allows design engineers to evaluate the performance gap between a digital camera and the human eye. The method identifies limiting factors of the electronic systems by benchmarking against the human system. It considers power consumption, visual field, spatial resolution, temporal resolution, and properties related to signal and noise power. A figure of merit is defined as the performance gap of the weakest parameter. Experimental work done with observers and cadavers is reviewed to assess the parameters of the human eye, and assessment techniques are also covered for digital cameras. The method is applied to 24 modern image sensors of various types, where an ideal lens is assumed to complete a digital camera. Results indicate that dynamic range and dark limit are the most limiting factors. The substantial functional gap, from 1.6 to 4.5 orders of magnitude, between the human eye and digital cameras may arise from architectural differences between the human retina, arranged in a multiple-layer structure, and image sensors, mostly fabricated in planar technologies. Functionality of image sensors may be significantly improved by exploiting technologies that allow vertical stacking of active tiers.
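The benchmarking idea described above (express each camera parameter as a gap relative to the human eye, in orders of magnitude, and let the weakest parameter set the figure of merit) might be sketched like this. The parameter names and values below are illustrative assumptions, not the paper's measurements:

```python
# Hedged sketch of a weakest-parameter figure of merit: each parameter's gap
# is the orders of magnitude by which the camera trails the eye, and the
# figure of merit is the largest (worst) gap. Values are invented placeholders.
import math

# (eye value, camera value, True if larger-is-better) -- illustrative only
PARAMS = {
    "dynamic_range":  (1e6, 1e4, True),
    "dark_limit_lux": (1e-6, 1e-3, False),   # smaller is better
    "spatial_res":    (1.0, 0.8, True),
}

def gap_orders(eye, cam, larger_is_better):
    """Orders of magnitude by which the camera trails the eye (>= 0)."""
    ratio = eye / cam if larger_is_better else cam / eye
    return max(0.0, math.log10(ratio))

def figure_of_merit(params):
    gaps = {name: gap_orders(*spec) for name, spec in params.items()}
    worst = max(gaps, key=gaps.get)
    return worst, gaps[worst]

print(figure_of_merit(PARAMS))
```

With these toy numbers the dark limit trails by three orders of magnitude, so it, rather than the smaller dynamic-range gap, sets the figure of merit, mirroring the paper's finding that dynamic range and dark limit dominate.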

  19. Novel hyperspectral prediction method and apparatus

    NASA Astrophysics Data System (ADS)

    Kemeny, Gabor J.; Crothers, Natalie A.; Groth, Gard A.; Speck, Kathy A.; Marbach, Ralf

    2009-05-01

    Both the power and the challenge of hyperspectral technologies lie in the very large amount of data produced by spectral cameras. While off-line methodologies allow the collection of gigabytes of data, extended data analysis sessions are required to convert the data into useful information. In contrast, real-time monitoring, such as on-line process control, requires that compression of spectral data and analysis occur at a sustained full camera data rate. Efficient, high-speed practical methods for calibration and prediction are therefore sought to optimize the value of hyperspectral imaging. A novel method of matched filtering known as science-based multivariate calibration (SBC) was developed for hyperspectral calibration. Classical (MLR) and inverse (PLS, PCR) methods are combined by spectroscopically measuring the spectral "signal" and statistically estimating the spectral "noise." The accuracy of the inverse model is thus combined with the easy interpretability of the classical model. The SBC method is optimized for hyperspectral data in the Hyper-Cal(TM) software used for the present work. The prediction algorithms can then be downloaded into a dedicated FPGA-based High-Speed Prediction Engine(TM) module. Spectral pretreatments and calibration coefficients are stored on interchangeable SD memory cards, and predicted compositions are produced on a USB interface at real-time camera output rates. Applications include minerals, pharmaceuticals, food processing and remote sensing.
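To illustrate why prediction can keep up with the camera data rate: once calibration has produced a coefficient vector, per-pixel prediction reduces to a dot product over the spectral axis, which maps naturally onto FPGA hardware. The sketch below is a generic linear-prediction example with invented coefficients, not the SBC calibration itself:

```python
# Generic per-pixel linear prediction over a hypercube (rows, cols, bands):
# one dot product per pixel along the spectral axis. Coefficients are toy
# values for illustration, not an SBC calibration.
import numpy as np

def predict_frame(hypercube, b, b0=0.0):
    """hypercube: (rows, cols, bands); b: (bands,). Returns (rows, cols)."""
    return hypercube @ b + b0   # matmul contracts the last (spectral) axis

rng = np.random.default_rng(0)
cube = rng.random((4, 4, 8))        # toy 4x4 frame with 8 spectral bands
b = np.ones(8) / 8                  # toy coefficient vector: band average
out = predict_frame(cube, b)
print(out.shape)                    # -> (4, 4)
```

Because the per-pixel work is a fixed-length multiply-accumulate, throughput scales linearly with pixel rate, which is what makes sustained full-camera-rate prediction feasible.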

  20. Compact Hyperspectral Imaging System (cosi) for Small Remotely Piloted Aircraft Systems (rpas) - System Overview and First Performance Evaluation Results

    NASA Astrophysics Data System (ADS)

    Sima, A. A.; Baeck, P.; Nuyts, D.; Delalieux, S.; Livens, S.; Blommaert, J.; Delauré, B.; Boonen, M.

    2016-06-01

    This paper gives an overview of the new COmpact hyperSpectral Imaging (COSI) system recently developed at the Flemish Institute for Technological Research (VITO, Belgium) and suitable for remotely piloted aircraft systems. A hyperspectral dataset captured from a multirotor platform over a strawberry field is presented and explored in order to assess spectral band co-registration quality. Thanks to the application of line-based interference filters deposited directly on the detector wafer, the COSI camera is compact and lightweight (total mass of 500 g) and captures 72 narrow bands (FWHM: 5 nm to 10 nm) in the spectral range of 600-900 nm. Covering the red-edge region (680 nm to 730 nm) allows plant chlorophyll content, biomass, and hydric status indicators to be derived, making the camera suitable for agricultural purposes. In addition to the orthorectified hypercube, a digital terrain model can be derived, enabling analyses that require object height, e.g. plant height in vegetation growth monitoring. Geometric data quality assessment proves that the COSI camera and its dedicated data processing chain can deliver very high resolution data (centimetre level) from which spectral information can be correctly derived. The results obtained are comparable to or better than those reported in similar studies for an alternative system based on the Fabry-Pérot interferometer.

  1. Using Technology To Reduce Public School Violence.

    ERIC Educational Resources Information Center

    Brown, John A.; Brown, Robert C.; Ledford, Bruce R.

    1996-01-01

    Describes technology-driven strategies for reducing school violence: (1) commitment communicated by newsletters and cable television; (2) elimination of weapons using metal detectors, surveillance cameras, breathalyzers, student passes, alarm systems, and school emergency plans; (3) two-way communications and low technology; (4) educational…

  2. Field Demonstration of Multi-Sensor Technology for Condition Assessment of Wastewater Collection Systems (Abstract)

    EPA Science Inventory

    The purpose of the field demonstration program is to gather technically reliable cost and performance information on selected condition assessment technologies under defined field conditions. The selected technologies include zoom camera, focused electrode leak location (FELL), ...

  3. 25 CFR 543.2 - What are the definitions for this part?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ..., mechanical, or other technologic form, that function together to aid the play of one or more Class II games... a particular game, player interface, shift, or other period. Count room. A secured room where the... validated directly by a voucher system. Dedicated camera. A video camera that continuously records a...

  4. Development of a digital camera tree evaluation system

    Treesearch

    Neil Clark; Daniel L. Schmoldt; Philip A. Araman

    2000-01-01

    Within the Strategic Plan for Forest Inventory and Monitoring (USDA Forest Service 1998), there is a call to "conduct applied research in the use of [advanced technology] towards the end of increasing the operational efficiency and effectiveness of our program". The digital camera tree evaluation system is part of that research, aimed at decreasing field...

  5. LSST camera grid structure made out of ceramic composite material, HB-Cesic

    NASA Astrophysics Data System (ADS)

    Kroedel, Matthias R.; Langton, J. Bryan

    2016-08-01

    In this paper we present the ceramic design and fabrication of the camera grid structure, which uses the unique manufacturing features of the HB-Cesic technology together with a dedicated metrology device to ensure the challenging flatness requirement of 4 microns over the full array.

  6. W-Band Free Space Permittivity Measurement Setup for Candidate Radome Materials

    NASA Technical Reports Server (NTRS)

    Fralick, Dion T.

    1997-01-01

    This paper presents a measurement system used for W-band complex permittivity measurements performed in NASA Langley Research Center's Electromagnetics Research Branch. The system was used to characterize candidate radome materials for the passive millimeter wave (PMMW) camera experiment. The PMMW camera is a new technology sensor, with goals of all-weather landings of civilian and military aircraft. The sensor is being developed under a NASA Technology Reinvestment program with TRW, McDonnell Douglas, Honeywell, and Composite Optics, Inc. as participants. The experiment is scheduled to be flight tested on the Air Force's 'Speckled Trout' aircraft in late 1997. The camera operates at W-band, in a radiometric capacity, and generates an image of the viewable field. Because the camera is a radiometer, the system is very sensitive to losses. Minimal transmission loss through the radome at the operating frequency, 89 GHz, was critical to the success of the experiment. This paper details the design, set-up, calibration and operation of a free space measurement system developed and used to characterize the candidate radome materials for this program.

  7. SPECT detectors: the Anger Camera and beyond

    PubMed Central

    Peterson, Todd E.; Furenlid, Lars R.

    2011-01-01

    The development of radiation detectors capable of delivering spatial information about gamma-ray interactions was one of the key enabling technologies for nuclear medicine imaging and, eventually, single-photon emission computed tomography (SPECT). The continuous NaI(Tl) scintillator crystal coupled to an array of photomultiplier tubes, almost universally referred to as the Anger Camera after its inventor, has long been the dominant SPECT detector system. Nevertheless, many alternative materials and configurations have been investigated over the years. Technological advances as well as the emerging importance of specialized applications, such as cardiac and preclinical imaging, have spurred innovation such that alternatives to the Anger Camera are now part of commercial imaging systems. Increased computing power has made it practical to apply advanced signal processing and estimation schemes to make better use of the information contained in the detector signals. In this review we discuss the key performance properties of SPECT detectors and survey developments in both scintillator and semiconductor detectors and their readouts with an eye toward some of the practical issues at least in part responsible for the continuing prevalence of the Anger Camera in the clinic. PMID:21828904
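For background, the position logic that gives the Anger Camera its name can be illustrated in a few lines: the gamma-ray interaction position is estimated as the centroid of the photomultiplier-tube positions weighted by each tube's signal amplitude. The tube layout and signal values below are invented for illustration:

```python
# Classic "Anger logic" position estimate (illustrative sketch): the event
# position is the signal-weighted centroid of the PMT positions, and the
# summed signal serves as the energy estimate.
def anger_position(tube_xy, signals):
    total = sum(signals)
    x = sum(s * tx for s, (tx, _) in zip(signals, tube_xy)) / total
    y = sum(s * ty for s, (_, ty) in zip(signals, tube_xy)) / total
    return x, y, total   # total acts as the energy signal

# Four tubes at the corners of a unit square; scintillation light shared
# mostly by the two right-hand tubes -> event localized toward x = 1.
tubes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
sig = [1.0, 3.0, 1.0, 3.0]
print(anger_position(tubes, sig))  # -> (0.75, 0.5, 8.0)
```

The advanced estimation schemes mentioned above (e.g. maximum-likelihood positioning) replace this simple centroid with statistical models of the light response, but the centroid remains the conceptual starting point.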

  8. Aerodynamic Measurement Technology

    NASA Technical Reports Server (NTRS)

    Burner, Alpheus W.

    2002-01-01

    Ohio State University developed a new spectrally filtered light-scattering apparatus based on a diode-laser injection-locked titanium:sapphire laser and a rubidium vapor filter at 780.2 nm. When the device was combined with a stimulated Brillouin scattering phase-conjugate mirror, the realizable peak attenuation of elastic scattering interferences exceeded 10^5. The potential of the system was demonstrated by performing Thomson scattering measurements. Under USAF-NASA funding, West Virginia University developed a Doppler global velocimetry system using inexpensive 8-bit charge-coupled device cameras and digitizers and a CW argon-ion laser; it demonstrated a precision of +/- 2.5 m/sec in a swirling jet flow. Low-noise silicon-micromachined microphones, developed and incorporated in a novel two-tier hybrid packaging scheme at the University of Florida, used printed circuit board technology to realize a MEMS-based directional acoustic array. The array demonstrated excellent performance relative to conventional sensor technologies and provides scaling technologies that can reduce cost and increase speed and mobility.

  9. A Wireless Sensor Network-Based Ubiquitous Paprika Growth Management System

    PubMed Central

    Hwang, Jeonghwan; Shin, Changsun; Yoe, Hyun

    2010-01-01

    Wireless Sensor Network (WSN) technology can facilitate advances in productivity, safety and human quality of life through its applications in various industries. In particular, the application of WSN technology to the agricultural area, which is labor-intensive compared to other industries, and in addition is typically lacking in IT technology applications, adds value and can increase the agricultural productivity. This study attempts to establish a ubiquitous agricultural environment and improve the productivity of farms that grow paprika by suggesting a ‘Ubiquitous Paprika Greenhouse Management System’ using WSN technology. The proposed system can collect and monitor information related to the growth environment of crops outside and inside paprika greenhouses by installing WSN sensors and monitoring images captured by CCTV cameras. In addition, the system provides a paprika greenhouse environment control facility for manual and automatic control from a distance, improves the convenience and productivity of users, and facilitates an optimized environment to grow paprika based on the growth environment data acquired by operating the system. PMID:22163543

  10. Principal axis-based correspondence between multiple cameras for people tracking.

    PubMed

    Hu, Weiming; Hu, Min; Zhou, Xue; Tan, Tieniu; Lou, Jianguang; Maybank, Steve

    2006-04-01

    Visual surveillance using multiple cameras has attracted increasing interest in recent years. Correspondence between multiple cameras is one of the most important and basic problems which visual surveillance using multiple cameras brings. In this paper, we propose a simple and robust method, based on principal axes of people, to match people across multiple cameras. The correspondence likelihood reflecting the similarity of pairs of principal axes of people is constructed according to the relationship between "ground-points" of people detected in each camera view and the intersections of principal axes detected in different camera views and transformed to the same view. Our method has the following desirable properties: 1) Camera calibration is not needed. 2) Accurate motion detection and segmentation are less critical due to the robustness of the principal axis-based feature to noise. 3) Based on the fused data derived from correspondence results, positions of people in each camera view can be accurately located even when the people are partially occluded in all views. The experimental results on several real video sequences from outdoor environments have demonstrated the effectiveness, efficiency, and robustness of our method.
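    The principal-axis feature at the heart of the method can be illustrated with a standard PCA computation on a binary silhouette. This is a hypothetical sketch of only the axis-detection step; the paper's full correspondence machinery (ground points, cross-view transformation, likelihood construction) is not reproduced here:

```python
import numpy as np

def principal_axis(mask):
    """Return the centroid and unit principal-axis direction of a
    binary silhouette, via the dominant eigenvector of the pixel
    coordinate covariance matrix."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    mean = pts.mean(axis=0)
    cov = np.cov((pts - mean).T)
    evals, evecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    axis = evecs[:, np.argmax(evals)]           # direction of largest variance
    return mean, axis / np.linalg.norm(axis)

# Example: a thin vertical bar -> principal axis close to (0, 1),
# i.e. vertical, as expected for an upright person silhouette.
mask = np.zeros((50, 20), dtype=bool)
mask[5:45, 9:11] = True
mean, axis = principal_axis(mask)
```

    For an upright walking person, this axis is robust to moderate segmentation noise, which is the property the paper exploits.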

  11. Future Technology Workshop: A Collaborative Method for the Design of New Learning Technologies and Activities

    ERIC Educational Resources Information Center

    Vavoula, Giasemi N.; Sharples, Mike

    2007-01-01

    We describe the future technology workshop (FTW), a method whereby people with everyday knowledge or experience in a specific area of technology use (such as using digital cameras) envision and design the interactions between current and future technology and activity. Through a series of structured workshop sessions, participants collaborate to…

  12. Intermediate view synthesis algorithm using mesh clustering for rectangular multiview camera system

    NASA Astrophysics Data System (ADS)

    Choi, Byeongho; Kim, Taewan; Oh, Kwan-Jung; Ho, Yo-Sung; Choi, Jong-Soo

    2010-02-01

    A multiview video-based three-dimensional (3-D) video system offers a realistic impression and free view navigation to the user. Efficient compression and intermediate view synthesis are key technologies, since 3-D video systems deal with multiple views. We propose an intermediate view synthesis using a rectangular multiview camera system that is suitable for realizing 3-D video systems. The rectangular multiview camera system not only offers free view navigation both horizontally and vertically but also can employ three reference views (left, right, and bottom) for intermediate view synthesis. The proposed view synthesis method first represents each reference view as a mesh and then finds the best disparity for each mesh element by stereo matching between reference views. Before stereo matching, we separate the virtual image to be synthesized into several regions to enhance the accuracy of the disparities. The mesh is classified into foreground and background groups by disparity values and then affine transformed. Experiments confirm that the proposed method synthesizes high-quality images and is suitable for 3-D video systems.

  13. A fast new catadioptric design for fiber-fed spectrographs

    NASA Astrophysics Data System (ADS)

    Saunders, Will

    2012-09-01

    The next generation of massively multiplexed multi-object spectrographs (DESpec, SUMIRE, BigBOSS, 4MOST, HECTOR) demands fast, efficient, and affordable spectrographs with higher resolutions (R = 3000-5000) than current designs. Beam size is a (relatively) free parameter in the design, but the properties of VPH gratings are such that, for fixed resolution and wavelength coverage, the effect of beam size on overall VPH efficiency is very small. For all-transmissive cameras, this suggests modest beam sizes (say 80-150 mm) to minimize costs, while for catadioptric (Schmidt-type) cameras, much larger beam sizes (say 250 mm+) are preferred to improve image quality and to minimize obstruction losses. Schmidt designs have benefits in terms of image quality, camera speed, and scattered-light performance, and recent advances such as MRF technology mean that the required aspherics are no longer a prohibitive cost or risk. The main objections to traditional Schmidt designs are the inaccessibility of the detector package and the loss in throughput caused by it being in the beam. With expected count rates and current read-noise technology, the gain in camera speed allowed by Schmidt optics largely compensates for the additional obstruction losses; however, future advances in readout technology may erase most of this compensation. A new Schmidt/Maksutov-derived design is presented, which differs from previous designs in having the detector package outside the camera, adjacent to the spectrograph pupil. The telescope pupil already contains a hole at its center because of the obstruction from the telescope top-end. With a 250 mm beam, it is possible to largely hide a 6 cm × 6 cm detector package and its dewar within this hole. This means that the design achieves a very high efficiency, competitive with transmissive designs. The optics are excellent, at least as good as classic Schmidt designs, allowing F/1.25 or even faster cameras. The principal hardware has been costed at $300K per arm, making the design affordable.

  14. Vertically integrated photonic multichip module architecture for vision applications

    NASA Astrophysics Data System (ADS)

    Tanguay, Armand R., Jr.; Jenkins, B. Keith; von der Malsburg, Christoph; Mel, Bartlett; Holt, Gary; O'Brien, John D.; Biederman, Irving; Madhukar, Anupam; Nasiatka, Patrick; Huang, Yunsong

    2000-05-01

    The development of a truly smart camera, with inherent capability for low latency semi-autonomous object recognition, tracking, and optimal image capture, has remained an elusive goal notwithstanding tremendous advances in the processing power afforded by VLSI technologies. These features are essential for a number of emerging multimedia- based applications, including enhanced augmented reality systems. Recent advances in understanding of the mechanisms of biological vision systems, together with similar advances in hybrid electronic/photonic packaging technology, offer the possibility of artificial biologically-inspired vision systems with significantly different, yet complementary, strengths and weaknesses. We describe herein several system implementation architectures based on spatial and temporal integration techniques within a multilayered structure, as well as the corresponding hardware implementation of these architectures based on the hybrid vertical integration of multiple silicon VLSI vision chips by means of dense 3D photonic interconnections.

  15. Frequency division multiplexed multi-color fluorescence microscope system

    NASA Astrophysics Data System (ADS)

    Le, Vu Nam; Yang, Huai Dong; Zhang, Si Chun; Zhang, Xin Rong; Jin, Guo Fan

    2017-10-01

    A grayscale camera can only capture a gray-scale image of an object, whereas multicolor imaging uses color information to distinguish sample structures that have the same shape but different colors. In fluorescence microscopy, current multicolor imaging methods are flawed: they reduce the efficiency of fluorescence imaging, lower the effective sampling rate of the CCD, and so on. In this paper, we propose a novel multicolor fluorescence microscopy imaging method based on frequency-division multiplexing (FDM) technology, which modulates the excitation lights and demodulates the fluorescence signal in the frequency domain. The method uses periodic functions of different frequencies to modulate the amplitude of each excitation light and then combines these beams for illumination in a fluorescence microscopy imaging system. The imaging system records a multicolor fluorescence image with a grayscale camera. During data processing, the signal obtained by each pixel of the camera is processed with a discrete Fourier transform, decomposed by color in the frequency domain, and then inverse-transformed. After applying this process to the signals from all pixels, monochrome images of each color on the image plane are obtained and the multicolor image is thereby acquired. Based on this method, we constructed a two-color fluorescence microscope system with two excitation wavelengths of 488 nm and 639 nm. Using this system to observe the linear movement of two kinds of fluorescent microspheres, we obtained, after data processing, a two-color fluorescence dynamic video consistent with the original scene. This experiment shows that dynamic phenomena in multicolor fluorescent biological samples can be observed with this method. Compared with current methods, this method obtains the image signals of each color at the same time, and the color video's frame rate matches the frame rate of the camera. The optical system is simpler and needs no extra color-separation element. In addition, the method has a good filtering effect on ambient light or other light signals that are not affected by the modulation process.
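    The per-pixel modulate/transform/demodulate pipeline described above can be sketched numerically. The frame rate, modulation frequencies, and amplitudes below are assumptions chosen for illustration, not the paper's parameters:

```python
import numpy as np

# Two fluorescence channels are amplitude-modulated at distinct frequencies
# f1 and f2; a single grayscale pixel records their sum over time, and a
# discrete Fourier transform separates the channels in the frequency domain.

fs, n = 1000.0, 1000                 # assumed camera frame rate (Hz) and frame count
t = np.arange(n) / fs
f1, f2 = 50.0, 120.0                 # assumed modulation frequencies of the two lights
a1, a2 = 3.0, 7.0                    # true per-pixel fluorescence amplitudes
pixel = a1 * (1 + np.cos(2 * np.pi * f1 * t)) + a2 * (1 + np.cos(2 * np.pi * f2 * t))

spec = np.fft.rfft(pixel) / n
freqs = np.fft.rfftfreq(n, 1 / fs)
rec1 = 2 * np.abs(spec[np.argmin(np.abs(freqs - f1))])  # amplitude at f1
rec2 = 2 * np.abs(spec[np.argmin(np.abs(freqs - f2))])  # amplitude at f2
print(round(rec1, 2), round(rec2, 2))  # -> 3.0 7.0
```

    Because each color is tagged by its own frequency, any unmodulated signal (e.g. ambient light) lands at DC and is ignored, which is the filtering effect the abstract mentions.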

  16. Situational Awareness from a Low-Cost Camera System

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using enough cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data; digitizing and channeling those data to a central computer and processing them in real time is difficult when using low-cost, commercially available components. A newly developed system places cameras on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security cameras. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of an event is reported to the host computer in Cartesian coordinates computed from data correlated across multiple cameras. In this way, events in the field of view present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing coverage without ignored surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, because a single cable is shared for power and data, installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.

  17. Young Child.

    ERIC Educational Resources Information Center

    DeVoogd, Glenn, Ed.

    This document contains the following papers focusing on contexts and activities in which teachers can use technology to promote learning with young children: (1) "Read, Write and Click: Using Digital Camera Technology in a Language Arts and Literacy K-5 Classroom" (Judith F. Robbins and Jacqueline Bedell); (2) "Technology for the…

  18. Light field imaging and application analysis in THz

    NASA Astrophysics Data System (ADS)

    Zhang, Hongfei; Su, Bo; He, Jingsuo; Zhang, Cong; Wu, Yaxiong; Zhang, Shengbo; Zhang, Cunlin

    2018-01-01

    The light field includes both direction and location information, and light-field imaging can capture the whole light field in a single exposure. The four-dimensional light-field function model with a two-plane parameterization, proposed by Levoy, is adopted. Acquisition of the light field is based on microlens arrays, camera arrays, or masks. We process the light-field data to synthesize light-field images. The processing techniques for light-field data include refocus rendering, synthetic aperture, and microscopic imaging. Introducing light-field imaging into the THz band makes 3D imaging more efficient than conventional THz 3D imaging technology. Its advantages over visible-light light-field imaging include large depth of field, wide dynamic range, and true three-dimensional imaging. It has broad application prospects.
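    The refocus-rendering technique mentioned in the abstract is commonly implemented as shift-and-add over sub-aperture views. A minimal sketch, assuming integer pixel shifts and a (u, v) lens-plane parameterization (an illustration of the general technique, not the authors' implementation):

```python
import numpy as np

def refocus(subviews, positions, alpha):
    """Shift-and-add refocusing: shift each sub-aperture view in
    proportion to its (u, v) position on the lens plane, scaled by the
    refocus parameter alpha, then average. Varying alpha moves the
    synthetic focal plane."""
    out = np.zeros_like(subviews[0], dtype=float)
    for img, (u, v) in zip(subviews, positions):
        dy, dx = int(round(alpha * v)), int(round(alpha * u))
        out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / len(subviews)
```

    Objects at the depth matching alpha add coherently and stay sharp, while objects at other depths are blurred by the misaligned sum; this is what lets a single light-field exposure be refocused after capture.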

  19. How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995, digital still cameras in expensive SLR formats had 6 mega-pixels and produced high-quality images (with significant image processing). In 2005, significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010, film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a "pixel war," where the driving feature of the cameras was the pixel count: even moderate-cost (~120) DSCs would have 14 mega-pixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only single-lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper explores why larger pixels and sensors are key to the future of DSCs.

  20. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    NASA Astrophysics Data System (ADS)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  1. Technology development: Future use of NASA's large format camera is uncertain

    NASA Astrophysics Data System (ADS)

    Rey, Charles F.; Fliegel, Ilene H.; Rohner, Karl A.

    1990-06-01

    The Large Format Camera, developed as a project to verify an engineering concept or design, has been flown only once, in 1984, on the shuttle Challenger. Since that flight, the camera has been in storage. NASA had expected that, following the camera's successful demonstration, other government agencies or private companies with special interests in photographic applications would absorb the costs of further flights using the Large Format Camera. But because shuttle transportation costs for the Large Format Camera were estimated at approximately $20 million (in 1987 dollars) per flight and the market for selling Large Format Camera products was limited, NASA was not successful in interesting other agencies or private companies in paying the costs. Using the camera on the space station does not appear to be a realistic alternative; using it aboard NASA's Earth Resources Research (ER-2) aircraft may be feasible. Until the final disposition of the camera is decided, NASA has taken actions to protect it from environmental deterioration. The General Accounting Office (GAO) recommends that the NASA Administrator should consider, first, using the camera on an aircraft such as the ER-2. NASA plans to solicit the private sector for expressions of interest in such use of the camera, at no cost to the government, and will be guided by the private sector response. Second, GAO recommends that if aircraft use is determined to be infeasible, NASA should consider transferring the camera to a museum, such as the National Air and Space Museum.

  2. Uav Photogrammetric Solution Using a Raspberry pi Camera Module and Smart Devices: Test and Results

    NASA Astrophysics Data System (ADS)

    Piras, M.; Grasso, N.; Jabbar, A. Abdul

    2017-08-01

    Nowadays, smart technologies are an important part of our activities and lives, both indoors and outdoors. Several smart devices are easy to set up, can be integrated and embedded with other sensors, and have a very low cost. The Raspberry Pi supports an internal camera, the Raspberry Pi Camera Module, in both RGB and NIR versions. The advantages of this system are its limited cost (< 60 euro), light weight, and simplicity of use and integration. This paper describes a research project in which a Raspberry Pi with the Camera Module was installed on a UAV hexacopter based on the ArduCopter system, with the purpose of collecting pictures for photogrammetry. First, the system was tested to verify the performance of the RPi camera in terms of frames per second/resolution and its power requirements. Moreover, a GNSS receiver (Ublox M8T) was installed and connected to the Raspberry platform in order to collect the real-time position and the raw data, for data processing and to define the time reference. The IMU was also tested to see the impact of UAV rotor noise on different sensors such as the accelerometer, gyroscope, and magnetometer. A comparison of the achieved accuracy on some check points of the point clouds obtained by the camera is also reported, in order to analyse in more depth the main discrepancies in the generated point cloud and the potential of the proposed approach. In this contribution, the assembly of the system is described; in particular, the dataset acquired and the results obtained are analysed.

  3. Re-scan confocal microscopy: scanning twice for better resolution.

    PubMed

    De Luca, Giulia M R; Breedijk, Ronald M P; Brandt, Rick A J; Zeelenberg, Christiaan H C; de Jong, Babette E; Timmermans, Wendy; Azar, Leila Nahidi; Hoebe, Ron A; Stallinga, Sjoerd; Manders, Erik M M

    2013-01-01

    We present a new super-resolution technique, Re-scan Confocal Microscopy (RCM), based on standard confocal microscopy extended with an optical (re-scanning) unit that projects the image directly onto a CCD camera. This new microscope has improved lateral resolution and strongly improved sensitivity while maintaining the sectioning capability of a standard confocal microscope. This simple technology is particularly useful for biological applications where the combination of high resolution and high sensitivity is required.

  4. Enhanced Virtual Presence for Immersive Visualization of Complex Situations for Mission Rehearsal

    DTIC Science & Technology

    1997-06-01

    taken. We propose to join both these technologies together in a registration device. The registration device would be small and portable and easily...registering the panning of the camera (or other sensing device) and also stitch together the shots to automatically generate panoramic files necessary to...database and as the base information changes each of the linked drawings is automatically updated. Filename Format: a specific naming convention should be

  5. X-ray detectors at the Linac Coherent Light Source.

    PubMed

    Blaj, Gabriel; Caragiulo, Pietro; Carini, Gabriella; Carron, Sebastian; Dragone, Angelo; Freytag, Dietrich; Haller, Gunther; Hart, Philip; Hasi, Jasmine; Herbst, Ryan; Herrmann, Sven; Kenney, Chris; Markovic, Bojan; Nishimura, Kurtis; Osier, Shawn; Pines, Jack; Reese, Benjamin; Segal, Julie; Tomada, Astrid; Weaver, Matt

    2015-05-01

    Free-electron lasers (FELs) present new challenges for camera development compared with conventional light sources. At SLAC a variety of technologies are being used to match the demands of the Linac Coherent Light Source (LCLS) and to support a wide range of scientific applications. In this paper an overview of X-ray detector design requirements at FELs is presented and the various cameras in use at SLAC are described for the benefit of users planning experiments or analysts looking at data. Features and operation of the CSPAD camera, which is currently deployed at LCLS, are discussed, and the ePix family, a new generation of cameras under development at SLAC, is introduced.

  6. Extreme Faint Flux Imaging with an EMCCD

    NASA Astrophysics Data System (ADS)

    Daigle, Olivier; Carignan, Claude; Gach, Jean-Luc; Guillaume, Christian; Lessard, Simon; Fortin, Charles-Anthony; Blais-Ouellette, Sébastien

    2009-08-01

    An EMCCD camera, designed from the ground up for extreme faint flux imaging, is presented. CCCP, the CCD Controller for Counting Photons, has been integrated with a CCD97 EMCCD from e2v technologies into a scientific camera at the Laboratoire d’Astrophysique Expérimentale (LAE), Université de Montréal. This new camera achieves subelectron readout noise and very low clock-induced charge (CIC) levels, which are mandatory for extreme faint flux imaging. It has been characterized in laboratory and used on the Observatoire du Mont Mégantic 1.6 m telescope. The performance of the camera is discussed and experimental data with the first scientific data are presented.

  7. X-ray detectors at the Linac Coherent Light Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaj, Gabriel; Caragiulo, Pietro; Carini, Gabriella

    Free-electron lasers (FELs) present new challenges for camera development compared with conventional light sources. At SLAC a variety of technologies are being used to match the demands of the Linac Coherent Light Source (LCLS) and to support a wide range of scientific applications. In this paper an overview of X-ray detector design requirements at FELs is presented and the various cameras in use at SLAC are described for the benefit of users planning experiments or analysts looking at data. Features and operation of the CSPAD camera, which is currently deployed at LCLS, are discussed, and the ePix family, a new generation of cameras under development at SLAC, is introduced.

  8. X-ray detectors at the Linac Coherent Light Source

    DOE PAGES

    Blaj, Gabriel; Caragiulo, Pietro; Carini, Gabriella; ...

    2015-04-21

    Free-electron lasers (FELs) present new challenges for camera development compared with conventional light sources. At SLAC a variety of technologies are being used to match the demands of the Linac Coherent Light Source (LCLS) and to support a wide range of scientific applications. In this paper an overview of X-ray detector design requirements at FELs is presented and the various cameras in use at SLAC are described for the benefit of users planning experiments or analysts looking at data. Features and operation of the CSPAD camera, which is currently deployed at LCLS, are discussed, and the ePix family, a new generation of cameras under development at SLAC, is introduced.

  9. X-ray detectors at the Linac Coherent Light Source

    PubMed Central

    Blaj, Gabriel; Caragiulo, Pietro; Carini, Gabriella; Carron, Sebastian; Dragone, Angelo; Freytag, Dietrich; Haller, Gunther; Hart, Philip; Hasi, Jasmine; Herbst, Ryan; Herrmann, Sven; Kenney, Chris; Markovic, Bojan; Nishimura, Kurtis; Osier, Shawn; Pines, Jack; Reese, Benjamin; Segal, Julie; Tomada, Astrid; Weaver, Matt

    2015-01-01

    Free-electron lasers (FELs) present new challenges for camera development compared with conventional light sources. At SLAC a variety of technologies are being used to match the demands of the Linac Coherent Light Source (LCLS) and to support a wide range of scientific applications. In this paper an overview of X-ray detector design requirements at FELs is presented and the various cameras in use at SLAC are described for the benefit of users planning experiments or analysts looking at data. Features and operation of the CSPAD camera, which is currently deployed at LCLS, are discussed, and the ePix family, a new generation of cameras under development at SLAC, is introduced. PMID:25931071

  10. Hubble Space Telescope photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-04

    S61-E-008 (4 Dec 1993) --- This view of the Earth-orbiting Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC) and downlinked to ground controllers soon afterward. This view was taken during rendezvous operations. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope. Over a period of five days, four of the crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  11. Electronic Still Camera image of Astronaut Claude Nicollier working with RMS

    NASA Image and Video Library

    1993-12-05

    S61-E-006 (5 Dec 1993) --- The robot arm controlling work of Swiss scientist Claude Nicollier was photographed with an Electronic Still Camera (ESC) and downlinked to ground controllers soon afterward. With the mission specialist's assistance, Endeavour's crew captured the Hubble Space Telescope (HST) on December 4, 1993. Four of the seven crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  12. A Bionic Camera-Based Polarization Navigation Sensor

    PubMed Central

    Wang, Daobin; Liang, Huawei; Zhu, Hui; Zhang, Shuai

    2014-01-01

    Navigation and positioning technology is closely related to our routine life activities, from travel to aerospace. Recently it has been found that Cataglyphis (a kind of desert ant) is able to detect the polarization direction of skylight and navigate according to this information. This paper presents a real-time bionic camera-based polarization navigation sensor. This sensor has two work modes: one is a single-point measurement mode and the other is a multi-point measurement mode. An indoor calibration experiment of the sensor has been done under a beam of standard polarized light. The experiment results show that after noise reduction the accuracy of the sensor can reach up to 0.3256°. It is also compared with GPS and INS (Inertial Navigation System) in the single-point measurement mode through an outdoor experiment. Through time compensation and location compensation, the sensor can be a useful alternative to GPS and INS. In addition, the sensor also can measure the polarization distribution pattern when it works in multi-point measurement mode. PMID:25051029
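    The core measurement in a camera-based polarization sensor can be illustrated with standard Stokes-parameter arithmetic. This is a hedged sketch assuming intensity measurements behind analyzers at 0°, 45°, and 90° (a common layout in such sensors; the paper does not specify its optics here):

```python
import numpy as np

def polarization_angle(i0, i45, i90):
    """Estimate the linear polarization angle (degrees) from intensities
    measured behind analyzers at 0, 45, and 90 degrees, using the linear
    Stokes parameters S1 and S2."""
    s1 = i0 - i90               # S1 = I(0) - I(90)
    s2 = 2 * i45 - i0 - i90     # S2 = 2*I(45) - I(0) - I(90)
    return 0.5 * np.degrees(np.arctan2(s2, s1))

# Example: fully polarized light at 30 degrees; by Malus's law the
# intensity behind an analyzer at angle theta is cos^2(30 - theta).
truth = 30.0
intensity = lambda theta: np.cos(np.radians(truth - theta)) ** 2
print(round(polarization_angle(intensity(0), intensity(45), intensity(90)), 1))  # -> 30.0
```

    A sky-pointing sensor applies this per pixel (or per measurement point) to recover the skylight polarization pattern that Cataglyphis ants exploit for heading estimation.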

  13. Mobile cosmetics advisor: an imaging based mobile service

    NASA Astrophysics Data System (ADS)

    Bhatti, Nina; Baker, Harlyn; Chao, Hui; Clearwater, Scott; Harville, Mike; Jain, Jhilmil; Lyons, Nic; Marguier, Joanna; Schettino, John; Süsstrunk, Sabine

    2010-01-01

    Selecting cosmetics requires visual information and often benefits from the assessments of a cosmetics expert. In this paper we present a unique mobile imaging application that enables women to use their cell phones to get immediate expert advice when selecting personal cosmetic products. We derive the visual information from analysis of camera phone images, and provide the judgment of the cosmetics specialist through use of an expert system. The result is a new paradigm for mobile interactions: image-based information services exploiting the ubiquity of camera phones. The application is designed to work with any handset over any cellular carrier using commonly available MMS and SMS features. Targeted at the unsophisticated consumer, it must be quick and easy to use, not requiring download capabilities or preplanning. Thus, all application processing occurs in the back-end system and not on the handset itself. We present the imaging pipeline technology and a comparison of the service's accuracy with respect to human experts.

  14. Develop Direct Geo-referencing System Based on Open Source Software and Hardware Platform

    NASA Astrophysics Data System (ADS)

    Liu, H. S.; Liao, H. M.

    2015-08-01

    A direct geo-referencing system uses remote sensing technology to quickly capture images, GPS tracks, and camera position. These data allow the construction of large volumes of images with geographic coordinates, so that users can take measurements directly on the images. To calculate positioning properly, all the sensor signals must be synchronized. Traditional aerial photography uses a Position and Orientation System (POS) to integrate image, coordinates, and camera position; however, it is very expensive, and users cannot use the result immediately because the position information is not embedded in the image. For reasons of economy and efficiency, this study aims to develop a direct geo-referencing system based on an open source software and hardware platform. After using an Arduino microcontroller board to integrate the signals, we can calculate positioning with the open source software OpenCV. Finally, we use the open source panorama browser Panini and integrate all of these into the open source GIS software Quantum GIS. In this way, a complete data collection and processing system can be constructed.
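    The core of the synchronization step the record describes is attaching a position to each image, and since GPS fixes rarely coincide exactly with shutter times, some interpolation of the track is needed. This is a minimal sketch of that idea, not the authors' implementation; the function name and the `(time, lat, lon)` track layout are assumptions:

    ```python
    from bisect import bisect_left

    def interpolate_position(track, t):
        """Estimate camera position at shutter time t by linear
        interpolation of a time-sorted GPS track, given as a list of
        (time, lat, lon) tuples. Times outside the track are clamped
        to the nearest endpoint."""
        times = [p[0] for p in track]
        i = bisect_left(times, t)
        if i == 0:
            return track[0][1:]          # before the first fix
        if i == len(track):
            return track[-1][1:]         # after the last fix
        (t0, lat0, lon0), (t1, lat1, lon1) = track[i - 1], track[i]
        w = (t - t0) / (t1 - t0)
        return (lat0 + w * (lat1 - lat0), lon0 + w * (lon1 - lon0))
    ```

    The interpolated coordinates could then be written into the image metadata so downstream GIS software can place each frame, which is exactly the embedding step the record says a traditional POS workflow lacks.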

  15. DAQ: Software Architecture for Data Acquisition in Sounding Rockets

    NASA Technical Reports Server (NTRS)

    Ahmad, Mohammad; Tran, Thanh; Nichols, Heidi; Bowles-Martinez, Jessica N.

    2011-01-01

    A multithreaded software application was developed by the Jet Propulsion Lab (JPL) to collect a set of correlated imagery, Inertial Measurement Unit (IMU), and GPS data for a Wallops Flight Facility (WFF) sounding rocket flight. The data set will be used to advance the Terrain Relative Navigation (TRN) technology algorithms being researched at JPL. This paper describes the software architecture and the tests used to meet the timing and data rate requirements for the software used to collect the dataset. Also discussed are the challenges of using commercial off-the-shelf (COTS) flight hardware and open source software, including multiple Camera Link (C-Link) based cameras, a Pentium-M based computer, and the Linux Fedora 11 operating system. Additionally, the paper covers the history of the software architecture's usage in other JPL projects and its applicability to future missions such as CubeSats, UAVs, and research planes/balloons. The paper also discusses the human aspect of the project, especially JPL's Phaeton program, and the results of the launch.
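    The paper's multithreaded design for correlating imagery, IMU, and GPS streams can be illustrated with a standard producer-consumer pattern: one acquisition thread per sensor, each tagging samples with a common clock so the streams can be aligned later. This is a generic sketch under that assumption, not JPL's flight code, and all names here are illustrative:

    ```python
    import queue
    import threading
    import time

    def acquisition_loop(read_sample, out_q, stop):
        """Producer thread body: poll one sensor and tag every sample
        with a shared monotonic timestamp so streams from different
        sensors can be correlated in post-processing."""
        while not stop.is_set():
            sample = read_sample()
            if sample is None:           # sensor exhausted or shut down
                break
            out_q.put((time.monotonic(), sample))

    def collect(readers, duration=0.1):
        """Run one acquisition thread per sensor reader for `duration`
        seconds and return all timestamped samples in capture order."""
        out_q, stop = queue.Queue(), threading.Event()
        threads = [threading.Thread(target=acquisition_loop,
                                    args=(r, out_q, stop))
                   for r in readers]
        for t in threads:
            t.start()
        time.sleep(duration)
        stop.set()
        for t in threads:
            t.join()
        samples = []
        while not out_q.empty():
            samples.append(out_q.get())
        return sorted(samples)
    ```

    A real flight system would add rate control and bounded queues to meet the timing and data rate requirements the paper discusses, but the thread-per-sensor, shared-clock structure is the same.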

  16. Stereo-Optic High Definition Imaging: A New Technology to Understand Bird and Bat Avoidance of Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Evan; Goodale, Wing; Burns, Steve

    There is a critical need to develop monitoring tools to track aerofauna (birds and bats) in three dimensions around wind turbines. New monitoring systems will reduce permitting uncertainty by increasing the understanding of how birds and bats are interacting with wind turbines, which will improve the accuracy of impact predictions. Biodiversity Research Institute (BRI), The University of Maine Orono School of Computing and Information Science (UMaine SCIS), HiDef Aerial Surveying Limited (HiDef), and SunEdison, Inc. (formerly First Wind) responded to this need by using stereo-optic cameras with near-infrared (nIR) technology to investigate new methods for documenting aerofauna behavior around wind turbines. The stereo-optic camera system used two synchronized high-definition video cameras with fisheye lenses and processing software that detected moving objects, which could be identified in post-processing. The stereo-optic imaging system offered the ability to extract 3-D position information from pairs of images captured from different viewpoints. Fisheye lenses allowed for a greater field of view, but required more complex image rectification to contend with fisheye distortion. The ability to obtain 3-D positions provided crucial data on the trajectory (speed and direction) of a target, which, when the technology is fully developed, will provide data on how animals are responding to and interacting with wind turbines. This project was focused on testing the performance of the camera system, improving video review processing time, advancing the 3-D tracking technology, and moving the system from Technology Readiness Level 4 to 5. To achieve these objectives, we determined the size and distance at which aerofauna (particularly eagles) could be detected and identified, created efficient data management systems, improved the video post-processing viewer, and attempted refinement of 3-D modeling with respect to fisheye lenses.
The 29-megapixel camera system successfully captured 16,173 five-minute video segments in the field. During nighttime field trials using nIR, we found that bat-sized objects could not be detected more than 60 m from the camera system. This led to a decision to focus research efforts exclusively on daytime monitoring and to redirect resources towards improving the video post-processing viewer. We redesigned the bird event post-processing viewer, which substantially decreased the review time necessary to detect and identify flying objects. During daytime field trials, we determined that eagles could be detected up to 500 m away using the fisheye wide-angle lenses, and eagle-sized targets could be identified to species within 350 m of the camera system. We used distance sampling survey methods to describe the probability of detecting and identifying eagles and other aerofauna as a function of distance from the system. The previously developed 3-D algorithm for object isolation and tracking was tested, but the image rectification (flattening) required to obtain accurate distance measurements with fisheye lenses was determined to be insufficient for distant eagles. We used MATLAB and OpenCV to improve fisheye lens rectification towards the center of the image, but accurate measurements towards the image corners could not be achieved. We believe that changing the fisheye lens to a rectilinear lens would greatly improve position estimation, but doing so would result in a decrease in viewing angle and depth of field. Finally, we generated simplified shape profiles of birds to look for similarities between unknown animals and known species. With further development, this method could provide a mechanism for filtering large numbers of shapes to reduce data storage and processing. These advancements further refined the camera system and brought this new technology closer to market.
Once commercialized, the stereo-optic camera system technology could be used to: a) research how different species interact with wind turbines in order to refine collision risk models and inform mitigation solutions; and b) monitor aerofauna interactions with terrestrial and offshore wind farms, replacing costly human observers and allowing for long-term monitoring in the offshore environment. The camera system will provide developers and regulators with data on the risk that wind turbines present to aerofauna, which will reduce uncertainty in the environmental permitting process.
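    The record's core measurement, extracting 3-D position from a pair of synchronized viewpoints, reduces in the rectified (distortion-free) case to classic stereo triangulation: range is focal length times baseline divided by disparity. The fisheye rectification problem the project struggled with is precisely what must be solved before this simple model applies. A sketch of the rectified case, with hypothetical parameter names:

    ```python
    def depth_from_disparity(f_px, baseline_m, x_left_px, x_right_px):
        """Rectified-stereo range: Z = f * B / d, where d is the
        horizontal disparity (pixels) between matched image points."""
        d = x_left_px - x_right_px
        if d <= 0:
            raise ValueError("non-positive disparity: point at infinity "
                             "or mismatched pair")
        return f_px * baseline_m / d

    def triangulate(f_px, baseline_m, cx, cy, xl, yl, xr):
        """3-D position (X, Y, Z) of a matched point in the left-camera
        frame, given the principal point (cx, cy) and the left/right
        horizontal pixel coordinates of the match."""
        z = depth_from_disparity(f_px, baseline_m, xl, xr)
        return ((xl - cx) * z / f_px, (yl - cy) * z / f_px, z)
    ```

    Successive 3-D fixes from this model are what yield the target trajectory (speed and direction) the project needed; with fisheye lenses, each pixel coordinate must first be remapped through the lens model, which is where the rectification accuracy described above becomes the limiting factor.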

  17. Image based performance analysis of thermal imagers

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image-capturing capability of thermal cameras, in order to enhance the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time-consuming/expensive one, e.g. requiring a field trial and/or an observer trial). A thermal camera equipped with turbulence mitigation capability is one example of such a closed system. The Fraunhofer IOSB has started to build a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around the IR-scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be, e.g., the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test scene selection, and on how to assemble an imaging suite (a set of image sequences) for the analysis of imaging thermal systems containing such black boxes in the image-forming path, are discussed.

  18. Visual Odometry for Autonomous Deep-Space Navigation

    NASA Technical Reports Server (NTRS)

    Robinson, Shane; Pedrotty, Sam

    2016-01-01

    Visual Odometry fills two critical needs shared by all future exploration architectures considered by NASA: Autonomous Rendezvous and Docking (AR&D), and autonomous navigation during loss of communications. To do this, a camera is combined with cutting-edge algorithms (called Visual Odometry) into a unit that provides accurate relative pose between the camera and the object in the imagery. Recent simulation analyses have demonstrated the ability of this new technology to reliably, accurately, and quickly compute a relative pose. This project advances this technology by both preparing the system to process flight imagery and creating an activity to capture said imagery. This technology can provide a pioneering optical navigation platform capable of supporting a wide variety of future mission scenarios: deep space rendezvous, asteroid exploration, and navigation during loss of communications.

  19. Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry

    NASA Technical Reports Server (NTRS)

    Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)

    2016-01-01

    A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine the first positions and the second positions of the corresponding features using the first camera image and the second camera image. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.
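    The final step the patent abstract describes, turning the change in feature positions between two frames into a VO-based velocity, can be sketched very simply once the feature positions are already in metric (range-corrected) coordinates. This is an illustrative reduction, not the patented method; the function names, the 2-D ground-plane coordinates, and the use of a plain mean over features are all assumptions:

    ```python
    import math

    def mean_displacement(pts1, pts2):
        """Average 2-D displacement of corresponding features between
        an earlier frame (pts1) and a later frame (pts2). Points are
        assumed to be in metric coordinates, e.g. after applying range
        data from a range detection unit."""
        n = len(pts1)
        dx = sum(b[0] - a[0] for a, b in zip(pts1, pts2)) / n
        dy = sum(b[1] - a[1] for a, b in zip(pts1, pts2)) / n
        return dx, dy

    def vo_velocity(pts1, pts2, dt):
        """VO-based velocity: mean feature displacement divided by the
        frame interval dt. Returns (vx, vy, speed)."""
        dx, dy = mean_displacement(pts1, pts2)
        return (dx / dt, dy / dt, math.hypot(dx, dy) / dt)
    ```

    A production system would reject outlier correspondences (e.g. with RANSAC) before averaging, since a single mismatched feature can badly skew a plain mean.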

  20. Physiologically Modulating Videogames or Simulations which Use Motion-Sensing Input Devices

    NASA Technical Reports Server (NTRS)

    Blanson, Nina Marie (Inventor); Stephens, Chad L. (Inventor); Pope, Alan T. (Inventor)

    2017-01-01

    New types of controllers allow a player to make inputs to a video game or simulation by moving the entire controller itself, by gesturing, or by moving the player's body in whole or in part. This capability is typically accomplished using a wireless input device having accelerometers, gyroscopes, and a camera. The present invention exploits these wireless motion-sensing technologies to modulate the player's movement inputs to the videogame based upon physiological signals. Such biofeedback-modulated video games train valuable mental skills beyond eye-hand coordination. These psychophysiological training technologies support the personal improvement, not just the diversion, of the user.
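    One simple way to realize the modulation the patent abstract describes is to scale the controller's motion input by a gain derived from a physiological index: full control authority when the player is in the desired physiological state, reduced authority as the index drifts away. This is a hedged sketch of that general idea, not the patented mechanism, and every name and parameter here is hypothetical:

    ```python
    def biofeedback_gain(physio, target, tolerance):
        """Map a physiological index to a control gain in [0, 1]:
        1.0 when the index is at target, falling linearly to 0.0 when
        it is `tolerance` or more away from target."""
        err = abs(physio - target) / tolerance
        return max(0.0, 1.0 - err)

    def modulate_motion_input(dx, dy, physio, target, tolerance):
        """Scale a motion-controller displacement by the biofeedback
        gain, so the game responds fully only when the player maintains
        the desired physiological state."""
        g = biofeedback_gain(physio, target, tolerance)
        return dx * g, dy * g
    ```

    Under this scheme the player is rewarded for self-regulation in-game, which is the training mechanism the abstract points to.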
