Sample records for single image sensor

  1. Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors.

    PubMed

    Dutton, Neale A W; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K

    2016-07-20

    SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed.
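
The peak-separation-and-width idea can be illustrated numerically: simulate an analogue photon-counting output, locate the 0- and 1-photon peaks of its histogram, and take the ratio of peak width to peak separation as the read noise in electrons. This is a toy sketch under assumed gain, noise, and flux values, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
gain, sigma = 10.0, 1.5                  # DN per electron, read noise in DN (assumed)
counts = rng.poisson(1.5, 100_000)       # photons detected per read
signal = counts * gain + rng.normal(0.0, sigma, counts.size)

hist, edges = np.histogram(signal, bins=400)
centers = 0.5 * (edges[:-1] + edges[1:])

# crude peak location: tallest bin on each side of the 0/1-photon midpoint
m0 = centers < gain / 2
m1 = (centers > gain / 2) & (centers < 1.5 * gain)
peak0 = centers[m0][hist[m0].argmax()]
peak1 = centers[m1][hist[m1].argmax()]

separation = peak1 - peak0               # estimates the conversion gain (DN/e-)
width = np.std(signal[np.abs(signal - peak0) < separation / 2])
read_noise_e = width / separation        # read noise in electrons
```

With the values above the recovered read noise comes out near the injected 0.15 e-.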

  2. Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors

    PubMed Central

    Dutton, Neale A. W.; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K.

    2016-01-01

    SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed. PMID:27447643

  3. Transmission-Type 2-Bit Programmable Metasurface for Single-Sensor and Single-Frequency Microwave Imaging

    PubMed Central

    Li, Yun Bo; Li, Lian Lin; Xu, Bai Bing; Wu, Wei; Wu, Rui Yuan; Wan, Xiang; Cheng, Qiang; Cui, Tie Jun

    2016-01-01

    The programmable and digital metamaterials or metasurfaces presented recently have huge potential for designing real-time-controlled electromagnetic devices. Here, we propose the first transmission-type 2-bit programmable coding metasurface for single-sensor and single-frequency imaging at microwave frequencies. Compared with the existing single-sensor imagers composed of active spatial modulators with their units controlled independently, we introduce a randomly programmable metasurface to transform the masks of modulators, in which the rows and columns are controlled simultaneously so that the complexity and cost of the imaging system can be reduced drastically. Different from the single-sensor approach using frequency agility, the proposed imaging system makes use of variable modulators under a single frequency, which can avoid object dispersion. In order to realize the transmission-type 2-bit programmable metasurface, we propose a two-layer binary coding unit, which is convenient for changing the voltages in rows and columns to switch the diodes in the top and bottom layers, respectively. In our imaging measurements, we generate random codes by computer to achieve different transmission patterns, which can support enough measurement modes to solve the inverse-scattering problem in single-sensor imaging. Simple experimental results are presented at microwave frequencies, validating our new single-sensor and single-frequency imaging system. PMID:27025907
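
A minimal sketch of the single-sensor measurement model described above: each programmable mask pattern yields one scalar sensor reading, and with enough random patterns the scene is recovered by solving the resulting linear inverse problem. The scene size, the 4-level (2-bit) mask values, and the plain least-squares solver are illustrative assumptions, not the paper's reconstruction method.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64                                               # 8x8 scene, flattened
scene = rng.random(n)
masks = rng.integers(0, 4, size=(3 * n, n)) / 3.0    # random 4-level (2-bit) patterns
meas = masks @ scene                                 # one sensor reading per pattern
recovered, *_ = np.linalg.lstsq(masks, meas, rcond=None)
```

With 3n noise-free measurements the overdetermined system recovers the scene essentially exactly; real systems need regularization against noise.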

  4. Transmission-Type 2-Bit Programmable Metasurface for Single-Sensor and Single-Frequency Microwave Imaging.

    PubMed

    Li, Yun Bo; Li, Lian Lin; Xu, Bai Bing; Wu, Wei; Wu, Rui Yuan; Wan, Xiang; Cheng, Qiang; Cui, Tie Jun

    2016-03-30

    The programmable and digital metamaterials or metasurfaces presented recently have huge potential for designing real-time-controlled electromagnetic devices. Here, we propose the first transmission-type 2-bit programmable coding metasurface for single-sensor and single-frequency imaging at microwave frequencies. Compared with the existing single-sensor imagers composed of active spatial modulators with their units controlled independently, we introduce a randomly programmable metasurface to transform the masks of modulators, in which the rows and columns are controlled simultaneously so that the complexity and cost of the imaging system can be reduced drastically. Different from the single-sensor approach using frequency agility, the proposed imaging system makes use of variable modulators under a single frequency, which can avoid object dispersion. In order to realize the transmission-type 2-bit programmable metasurface, we propose a two-layer binary coding unit, which is convenient for changing the voltages in rows and columns to switch the diodes in the top and bottom layers, respectively. In our imaging measurements, we generate random codes by computer to achieve different transmission patterns, which can support enough measurement modes to solve the inverse-scattering problem in single-sensor imaging. Simple experimental results are presented at microwave frequencies, validating our new single-sensor and single-frequency imaging system.

  5. Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe an experimental image acquisition system used to measure spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high-quality single-chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the frequency band for HDTV.
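
The colour filter array interpolation step mentioned above can be sketched, for a generic RGGB Bayer mosaic, as plain bilinear demosaicing; this is a textbook baseline, not the authors' improved interpolation method.

```python
import numpy as np

def conv3(img, k):
    """3x3 'same' convolution with edge padding (numpy only)."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(3):
        for dx in range(3):
            out = out + k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def demosaic_rggb(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic (textbook baseline)."""
    h, w = raw.shape
    yy, xx = np.mgrid[:h, :w]
    r_mask = (yy % 2 == 0) & (xx % 2 == 0)
    b_mask = (yy % 2 == 1) & (xx % 2 == 1)
    g_mask = ~(r_mask | b_mask)
    k_rb = np.array([[.25, .5, .25], [.5, 1., .5], [.25, .5, .25]])
    k_g = np.array([[0., .25, 0.], [.25, 1., .25], [0., .25, 0.]])
    return np.dstack([conv3(raw * r_mask, k_rb),
                      conv3(raw * g_mask, k_g),
                      conv3(raw * b_mask, k_rb)])

# demo: a flat grey mosaic should demosaic back to flat grey (away from borders)
raw = np.full((8, 8), 0.5)
rgb = demosaic_rggb(raw)
```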

  6. Photon Counting Imaging with an Electron-Bombarded Pixel Image Sensor

    PubMed Central

    Hirvonen, Liisa M.; Suhling, Klaus

    2016-01-01

    Electron-bombarded pixel image sensors, where a single photoelectron is accelerated directly into a CCD or CMOS sensor, allow wide-field imaging at extremely low light levels as they are sensitive enough to detect single photons. This technology allows the detection of up to hundreds or thousands of photon events per frame, depending on the sensor size, and photon event centroiding can be employed to recover resolution lost in the detection process. Unlike photon events from electron-multiplying sensors, the photon events from electron-bombarded sensors have a narrow, acceleration-voltage-dependent pulse height distribution. Thus a gain voltage sweep during exposure in an electron-bombarded sensor could allow photon arrival time determination from the pulse height with sub-frame exposure time resolution. We give a brief overview of our work with electron-bombarded pixel image sensor technology and recent developments in this field for single photon counting imaging, and examples of some applications. PMID:27136556

  7. Single-shot and single-sensor high/super-resolution microwave imaging based on metasurface.

    PubMed

    Wang, Libo; Li, Lianlin; Li, Yunbo; Zhang, Hao Chi; Cui, Tie Jun

    2016-06-01

    Real-time high-resolution (including super-resolution) imaging with low-cost hardware is a long-sought goal in various imaging applications. Here, we propose broadband single-shot and single-sensor high-/super-resolution imaging by using a spatio-temporal dispersive metasurface and an imaging reconstruction algorithm. The metasurface with spatio-temporal dispersive property ensures the feasibility of the single-shot and single-sensor imager for super- and high-resolution imaging, since it can efficiently convert the detailed spatial information of the probed object into a one-dimensional time- or frequency-dependent signal acquired by a single sensor fixed in the far-field region. The imaging quality can be improved by applying a feature-enhanced reconstruction algorithm in post-processing, and the achievable imaging resolution is related to the distance between the object and the metasurface. When the object is placed in the vicinity of the metasurface, super-resolution imaging can be realized. The proposed imaging methodology provides a unique means to perform real-time data acquisition and obtain high-/super-resolution images without employing expensive hardware (e.g., mechanical scanners or antenna arrays). We expect that this methodology could make potential breakthroughs in the areas of microwave, terahertz, optical, and even ultrasound imaging.

  8. High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.

    PubMed

    Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi

    2010-12-15

    A complementary metal oxide semiconductor (CMOS) image sensor was applied to high-content analysis of single cells which were assembled closely or directly onto the CMOS sensor surface. The direct assembling of cell groups on CMOS sensor surface allows large-field (6.66 mm×5.32 mm in entire active area of CMOS sensor) imaging within a second. Trypan blue-stained and non-stained cells in the same field area on the CMOS sensor were successfully distinguished as white- and blue-colored images under white LED light irradiation. Furthermore, the chemiluminescent signals of each cell were successfully visualized as blue-colored images on CMOS sensor only when HeLa cells were placed directly on the micro-lens array of the CMOS sensor. Our proposed approach will be a promising technique for real-time and high-content analysis of single cells in a large-field area based on color imaging. Copyright © 2010 Elsevier B.V. All rights reserved.

  9. Single-shot and single-sensor high/super-resolution microwave imaging based on metasurface

    PubMed Central

    Wang, Libo; Li, Lianlin; Li, Yunbo; Zhang, Hao Chi; Cui, Tie Jun

    2016-01-01

    Real-time high-resolution (including super-resolution) imaging with low-cost hardware is a long-sought goal in various imaging applications. Here, we propose broadband single-shot and single-sensor high-/super-resolution imaging by using a spatio-temporal dispersive metasurface and an imaging reconstruction algorithm. The metasurface with spatio-temporal dispersive property ensures the feasibility of the single-shot and single-sensor imager for super- and high-resolution imaging, since it can efficiently convert the detailed spatial information of the probed object into a one-dimensional time- or frequency-dependent signal acquired by a single sensor fixed in the far-field region. The imaging quality can be improved by applying a feature-enhanced reconstruction algorithm in post-processing, and the achievable imaging resolution is related to the distance between the object and the metasurface. When the object is placed in the vicinity of the metasurface, super-resolution imaging can be realized. The proposed imaging methodology provides a unique means to perform real-time data acquisition and obtain high-/super-resolution images without employing expensive hardware (e.g., mechanical scanners or antenna arrays). We expect that this methodology could make potential breakthroughs in the areas of microwave, terahertz, optical, and even ultrasound imaging. PMID:27246668

  10. Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image

    PubMed Central

    Wen, Wei; Khatibi, Siamak

    2017-01-01

    Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem, but they assume that the fill factor is known. However, the fill factor is treated as an industrial secret by most image sensor manufacturers due to its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and the sensor irradiance are estimated from the generation of virtual images. Then the global intensity values of the virtual images are obtained, which are the result of fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images. The fill factor is estimated by the conditional minimum of the inferred function. The method is verified using images of two datasets. The results show that our method estimates the fill factor correctly with significant stability and accuracy from one single arbitrary image, according to the low standard deviation of the estimated fill factors from each of the images and for each camera. PMID:28335459

  11. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications

    PubMed Central

    Park, Keunyeol; Song, Minkyu

    2018-01-01

    This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency. PMID:29495273

  12. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications.

    PubMed

    Park, Keunyeol; Song, Minkyu; Kim, Soo Youn

    2018-02-24

    This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.
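
A software analogue of the XOR edge-detection idea (which the sensor performs in hardware): binarize each pixel to a single bit, then XOR adjacent bits so only intensity transitions survive. The threshold and test pattern below are invented for the demo.

```python
import numpy as np

def xor_edges(img, thresh=128):
    bits = (img >= thresh).astype(np.uint8)   # single-bit image
    h_edges = bits[:, :-1] ^ bits[:, 1:]      # horizontal transitions
    v_edges = bits[:-1, :] ^ bits[1:, :]      # vertical transitions
    return h_edges[:-1, :] | v_edges[:, :-1]  # combined edge map

# demo: a bright square on a dark background leaves only its outline
img = np.zeros((6, 6), dtype=np.uint8)
img[2:5, 2:5] = 255
edges = xor_edges(img)
```

Interior pixels of the square XOR to 0 against identical neighbours, while pixels on its border XOR to 1, which is exactly the one-bit edge map the abstract describes feeding the recognition algorithm.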

  13. An infrared/video fusion system for military robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, A.W.; Roberts, R.S.

    1997-08-05

    Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information including robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser can easily neutralize visual sensors. In order to provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images; they are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.
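
The "enhance, not obfuscate" guideline can be sketched as the simplest possible fusion rule: a fixed-weight blend that keeps the visible image dominant. The fixed weight is an assumption for the sketch; real fusion systems weight locally by scene content.

```python
import numpy as np

def fuse(visible, infrared, ir_weight=0.3):
    """Fixed-weight blend keeping the visible image dominant (assumed rule)."""
    v = visible.astype(float)
    ir = infrared.astype(float)
    return np.clip((1 - ir_weight) * v + ir_weight * ir, 0, 255).astype(np.uint8)

# demo: a flat visible frame blended with a hotter IR frame
vis = np.full((4, 4), 100, dtype=np.uint8)
ir = np.full((4, 4), 200, dtype=np.uint8)
fused = fuse(vis, ir)
```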

  14. Multiocular image sensor with on-chip beam-splitter and inner meta-micro-lens for single-main-lens stereo camera.

    PubMed

    Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa

    2016-08-08

    We developed multiocular 1/3-inch 2.75-μm-pixel-size 2.1M-pixel image sensors by co-design of both an on-chip beam-splitter and a 100-nm-width 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field, with the on-chip beam-splitter horizontally dividing rays according to incident angle and the inner meta-micro-lens collecting the divided rays into pixels with small optical loss. Cross-talk between adjacent light field images of a fabricated binocular image sensor and of a quad-ocular image sensor is as low as 6% and 7%, respectively. With the selection of two images from the one-dimensional light field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest the electrically selectable or tunable baseline stereo vision to reduce 3D fatigue of viewers.

  15. High-Sensitivity Fiber-Optic Ultrasound Sensors for Medical Imaging Applications

    PubMed Central

    Wen, H.; Wiesler, D.G.; Tveten, A.; Danver, B.; Dandridge, A.

    2010-01-01

    This paper presents several designs of high-sensitivity, compact fiber-optic ultrasound sensors that may be used for medical imaging applications. These sensors translate ultrasonic pulses into strains in single-mode optical fibers, which are measured with fiber-based laser interferometers at high precision. The sensors are simpler and less expensive to make than piezoelectric sensors, and are not susceptible to electromagnetic interference. It is possible to make focal sensors with these designs, and several schemes are discussed. Because of the minimum bending radius of optical fibers, the designs are suitable for single element sensors rather than for arrays. PMID:9691368

  16. Evaluation of Sun Glint Correction Algorithms for High-Spatial Resolution Hyperspectral Imagery

    DTIC Science & Technology

    2012-09-01

    ACRONYMS AND ABBREVIATIONS AISA Airborne Imaging Spectrometer for Applications AVIRIS Airborne Visible/Infrared Imaging Spectrometer BIL Band...sensor bracket mount combining Airborne Imaging Spectrometer for Applications ( AISA ) Eagle and Hawk sensors into a single imaging system (SpecTIR 2011...The AISA Eagle is a VNIR sensor with a wavelength range of approximately 400–970 nm and the AISA Hawk sensor is a SWIR sensor with a wavelength

  17. Blur spot limitations in distal endoscope sensors

    NASA Astrophysics Data System (ADS)

    Yaron, Avi; Shechterman, Mark; Horesh, Nadav

    2006-02-01

    In years past, the picture quality of electronic video systems was limited by the image sensor. In the present, the resolution of miniature image sensors, as in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates the blur phenomena, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated by an array of microlenses (a plenoptic camera). The camera has the advantage of being compact, providing simultaneous acquisition of left and right images, and offering resolution comparable to a dual chip stereoscopic camera with low to medium resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and contributes to surgeon performance. An additional advantage of single chip stereo sensors is improvement of tolerance to electronic signal noise.

  18. Depth map generation using a single image sensor with phase masks.

    PubMed

    Jang, Jinbeum; Park, Sangwoo; Jo, Jieun; Paik, Joonki

    2016-06-13

    Conventional stereo matching systems generate a depth map using two or more digital imaging sensors, which makes them difficult to use in small camera systems because of their high cost and bulky size. In order to solve this problem, this paper presents a stereo matching system using a single image sensor with phase masks for phase-difference auto-focusing. A novel pattern of phase mask array is proposed to simultaneously acquire two pairs of stereo images. Furthermore, a noise-invariant depth map is generated from the raw format sensor output. The proposed method consists of four steps to compute the depth map: (i) acquisition of stereo images using the proposed mask array, (ii) variational segmentation using merging criteria to simplify the input image, (iii) disparity map generation using hierarchical block matching for disparity measurement, and (iv) image matting to fill holes and generate the dense depth map. The proposed system can be used in small digital cameras without additional lenses or sensors.
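
Step (iii) of the pipeline can be sketched as plain block matching: for each block of the left image, slide over a range of candidate disparities in the right image and keep the offset with the minimum sum of absolute differences (SAD). Block size and search range are illustrative; the paper's hierarchical scheme is not reproduced here.

```python
import numpy as np

def block_disparity(left, right, block=4, max_disp=6):
    """Per-block disparity by exhaustive SAD search (illustrative sizes)."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):   # candidate disparities
                cand = right[y:y + block, x - d:x - d + block]
                sad = np.abs(ref - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[by, bx] = best_d
    return disp

# demo: the right view is the left view shifted by 2 pixels
rng = np.random.default_rng(2)
left = rng.random((16, 16))
right = np.roll(left, -2, axis=1)
disp = block_disparity(left, right)
```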

  19. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme.

    PubMed

    Wang, Hao; Jiang, Jie; Zhang, Guangjun

    2017-04-21

    The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratory and night-sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters.
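
The star centroiding whose accuracy the model analyzes can be sketched as an intensity-weighted centre of mass over a spot window; the Gaussian spot below is an assumed model for the demo, not the paper's imaging model.

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centre of mass of a spot window."""
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    total = img.sum()
    return (yy * img).sum() / total, (xx * img).sum() / total

# demo: a synthetic Gaussian star spot centred at (7.3, 6.8)
yy, xx = np.mgrid[:15, :15]
spot = np.exp(-((yy - 7.3) ** 2 + (xx - 6.8) ** 2) / (2 * 1.5 ** 2))
cy, cx = centroid(spot)
```

On noise-free data the centroid lands on the true sub-pixel position; the paper's Monte Carlo analysis adds noise and exposure effects on top of exactly this kind of estimator.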

  20. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme

    PubMed Central

    Wang, Hao; Jiang, Jie; Zhang, Guangjun

    2017-01-01

    The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratory and night-sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters. PMID:28430132

  21. Phase aided 3D imaging and modeling: dedicated systems and case studies

    NASA Astrophysics Data System (ADS)

    Yin, Yongkai; He, Dong; Liu, Zeyi; Liu, Xiaoli; Peng, Xiang

    2014-05-01

    Dedicated prototype systems for 3D imaging and modeling (3DIM) are presented. The 3D imaging systems are based on the principle of phase-aided active stereo, which have been developed in our laboratory over the past few years. The reported 3D imaging prototypes range from single 3D sensor to a kind of optical measurement network composed of multiple node 3D-sensors. To enable these 3D imaging systems, we briefly discuss the corresponding calibration techniques for both single sensor and multi-sensor optical measurement network, allowing good performance of the 3DIM prototype systems in terms of measurement accuracy and repeatability. Furthermore, two case studies including the generation of high quality color model of movable cultural heritage and photo booth from body scanning are presented to demonstrate our approach.

  22. Single-exposure quantitative phase imaging in color-coded LED microscopy.

    PubMed

    Lee, Wonchan; Jung, Daeseong; Ryu, Suho; Joo, Chulmin

    2017-04-03

    We demonstrate single-shot quantitative phase imaging (QPI) in a platform of color-coded LED microscopy (cLEDscope). The light source in a conventional microscope is replaced by a circular LED pattern that is trisected into subregions with equal area, assigned to red, green, and blue colors. Image acquisition with a color image sensor and subsequent computation based on weak object transfer functions allow for the QPI of a transparent specimen. We also provide a correction method for color-leakage, which may be encountered in implementing our method with consumer-grade LEDs and image sensors. Most commercially available LEDs and image sensors do not provide spectrally isolated emissions and pixel responses, generating significant error in phase estimation in our method. We describe the correction scheme for this color-leakage issue, and demonstrate improved phase measurement accuracy. The computational model and single-exposure QPI capability of our method are presented by showing images of calibrated phase samples and cellular specimens.
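
The colour-leakage correction can be sketched as inverting a per-setup crosstalk matrix: if the sensor's RGB channels mix the three LED colours through a calibrated matrix C, the true channel intensities follow from solving C x = measured. The matrix entries below are invented for illustration, not calibration data from the paper.

```python
import numpy as np

# Assumed 3x3 crosstalk matrix: rows = sensor channels, cols = LED channels.
C = np.array([[0.90, 0.07, 0.03],
              [0.05, 0.88, 0.07],
              [0.02, 0.06, 0.92]])
true_channels = np.array([0.6, 0.3, 0.8])   # what spectrally pure LEDs would give
measured = C @ true_channels                # what the colour sensor reports
corrected = np.linalg.solve(C, measured)    # leakage-corrected channel values
```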

  23. Smart image sensors: an emerging key technology for advanced optical measurement and microsystems

    NASA Astrophysics Data System (ADS)

    Seitz, Peter

    1996-08-01

    Optical microsystems typically include photosensitive devices, analog preprocessing circuitry and digital signal processing electronics. The advances in semiconductor technology have made it possible today to integrate all photosensitive and electronic devices on one 'smart image sensor' or photo-ASIC (application-specific integrated circuits containing photosensitive elements). It is even possible to provide each 'smart pixel' with additional photoelectronic functionality, without compromising the fill factor substantially. This technological capability is the basis for advanced cameras and optical microsystems showing novel on-chip functionality: Single-chip cameras with on-chip analog-to-digital converters for less than $10 are advertised; image sensors have been developed including novel functionality such as real-time selectable pixel size and shape, the capability of performing arbitrary convolutions simultaneously with the exposure, as well as variable, programmable offset and sensitivity of the pixels leading to image sensors with a dynamic range exceeding 150 dB. Smart image sensors have been demonstrated offering synchronous detection and demodulation capabilities in each pixel (lock-in CCD), and conventional image sensors are combined with an on-chip digital processor for complete, single-chip image acquisition and processing systems. Technological problems of the monolithic integration of smart image sensors include offset non-uniformities, temperature variations of electronic properties, imperfect matching of circuit parameters, etc. These problems can often be overcome either by designing additional compensation circuitry or by providing digital correction routines. Where necessary for technological or economic reasons, smart image sensors can also be combined with or realized as hybrids, making use of commercially available electronic components.
It is concluded that the possibilities offered by custom smart image sensors will influence the design and the performance of future electronic imaging systems in many disciplines, reaching from optical metrology to machine vision on the factory floor and in robotics applications.

  24. Multiple-Event, Single-Photon Counting Imaging Sensor

    NASA Technical Reports Server (NTRS)

    Zheng, Xinyu; Cunningham, Thomas J.; Sun, Chao; Wang, Kang L.

    2011-01-01

    The single-photon counting imaging sensor is typically an array of silicon Geiger-mode avalanche photodiodes that are monolithically integrated with CMOS (complementary metal oxide semiconductor) readout, signal processing, and addressing circuits located in each pixel and the peripheral area of the chip. The major problem is its single-event method for photon count number registration. A single-event single-photon counting imaging array only allows registration of up to one photon count in each of its pixels during a frame time, i.e., the interval between two successive pixel reset operations. Since the frame time can't be too short, this will lead to very low dynamic range and make the sensor useful only for very low-flux environments. The second problem of the prior technique is a limited fill factor resulting from consumption of chip area by the monolithically integrated CMOS readout in pixels. The resulting low photon collection efficiency will substantially ruin any benefit gained from the very sensitive single-photon counting detection. The single-photon counting imaging sensor developed in this work has a novel multiple-event architecture, which allows each of its pixels to register one million or more photon-counting events during a frame time. Because of the consequently boosted dynamic range, the imaging array of the invention is capable of performing single-photon counting under ultra-low light through high-flux environments. On the other hand, since the multiple-event architecture is implemented in a hybrid structure, back-illumination and close-to-unity fill factor can be realized, and maximized quantum efficiency can also be achieved in the detector array.
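
The dynamic-range argument is easy to make concrete: a single-event pixel saturates at one count per frame, so a pixel that can register the quoted one million events per frame extends the count range by a factor of 10^6. Expressing that as 20·log10 decibels is our convention for the sketch, not the report's.

```python
import math

max_counts_multi = 1_000_000        # events per pixel per frame (from the text)
dr_gain_db = 20 * math.log10(max_counts_multi / 1)
print(f"dynamic-range gain: {dr_gain_db:.0f} dB")   # prints "dynamic-range gain: 120 dB"
```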

  5. Wavefront detection method of a single-sensor based adaptive optics system.

    PubMed

    Wang, Chongchong; Hu, Lifa; Xu, Huanyu; Wang, Yukun; Li, Dayu; Wang, Shaoxin; Mu, Quanquan; Yang, Chengliang; Cao, Zhaoliang; Lu, Xinghai; Xuan, Li

    2015-08-10

    In an adaptive optics system (AOS) for optical telescopes, the reported wavefront sensing strategy consists of two parts: a specific sensor for tip-tilt (TT) detection and another wavefront sensor for detecting the other distortions. Thus, part of the incident light has to be used for TT detection, which decreases the light energy available to the wavefront sensor and eventually reduces the precision of wavefront correction. In this paper, a wavefront measurement method based on a single Shack-Hartmann wavefront sensor is presented for measuring both large-amplitude TT and the other distortions. Experiments were performed to test the presented method and to validate the wavefront detection and correction ability of the single-sensor-based AOS. With adaptive correction, the root-mean-square of the residual TT was less than 0.2 λ, and a clear image was obtained in the lab. Mounted on a 1.23-meter optical telescope, the AOS clearly resolved binary stars with an angular separation of 0.6″. This wavefront measurement method removes the separate TT sensor, which not only simplifies the AOS but also saves light energy for subsequent wavefront sensing and imaging, and eventually improves the detection and imaging capability of the AOS.

  6. Application of Sensor Fusion to Improve Uav Image Classification

    NASA Astrophysics Data System (ADS)

    Jabari, S.; Fathollahi, F.; Zhang, Y.

    2017-08-01

    Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which in turn increases the accuracy of image classification. Here, we tested two sensor fusion configurations by using a panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to those acquired by a high-resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies than the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board in UAV missions and performing image fusion can help achieve higher-quality images and, accordingly, more accurate classification results.

  7. Imaging intracellular pH in live cells with a genetically encoded red fluorescent protein sensor.

    PubMed

    Tantama, Mathew; Hung, Yin Pun; Yellen, Gary

    2011-07-06

    Intracellular pH affects protein structure and function, and proton gradients underlie the function of organelles such as lysosomes and mitochondria. We engineered a genetically encoded pH sensor by mutagenesis of the red fluorescent protein mKeima, providing a new tool to image intracellular pH in live cells. This sensor, named pHRed, is the first ratiometric, single-protein red fluorescent sensor of pH. Fluorescence emission of pHRed peaks at 610 nm while exhibiting dual excitation peaks at 440 and 585 nm that can be used for ratiometric imaging. The intensity ratio responds with an apparent pKa of 6.6 and a >10-fold dynamic range. Furthermore, pHRed has a pH-responsive fluorescence lifetime that changes by ~0.4 ns over physiological pH values and can be monitored with single-wavelength two-photon excitation. After characterizing the sensor, we tested pHRed's ability to monitor intracellular pH by imaging energy-dependent changes in cytosolic and mitochondrial pH.
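    As a sketch of how such a ratiometric readout is used, the measured excitation ratio can be inverted through a single-site titration curve anchored at the reported apparent pKa of 6.6. The calibration limits `r_acid` and `r_base` below are hypothetical placeholders (real values come from calibrating the sensor itself), and the curve assumes the ratio increases with deprotonation.

```python
import math

def ratio_to_ph(r, r_acid, r_base, pka=6.6):
    """Invert a ratiometric single-site titration curve.

    r_acid / r_base are the ratio limits in fully protonated / deprotonated
    conditions (hypothetical calibration values); frac is the apparent
    deprotonated-to-protonated balance implied by the measured ratio r.
    """
    frac = (r - r_acid) / (r_base - r)
    return pka + math.log10(frac)

print(ratio_to_ph(1.5, 1.0, 2.0))  # midpoint ratio → 6.6 (the apparent pKa)
```

    Ratios approaching either calibration limit map to pH values progressively further from the pKa, which is why the sensor's >10-fold ratio range matters for usable dynamic range.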

  8. A 12-bit high-speed column-parallel two-step single-slope analog-to-digital converter (ADC) for CMOS image sensors.

    PubMed

    Lyu, Tao; Yao, Suying; Nie, Kaiming; Xu, Jiangtao

    2014-11-17

    A 12-bit high-speed column-parallel two-step single-slope (SS) analog-to-digital converter (ADC) for CMOS image sensors is proposed. The proposed ADC employs a single ramp voltage and multiple reference voltages, and the conversion is divided into a coarse phase and a fine phase to improve the conversion rate. An error calibration scheme is proposed to correct errors caused by offsets among the reference voltages. The digital-to-analog converter (DAC) used for the ramp generator is based on a split-capacitor array with an attenuation capacitor. An analysis of the DAC's linearity performance versus capacitor mismatch and parasitic capacitance is presented. A prototype 1024 × 32 Time Delay Integration (TDI) CMOS image sensor with the proposed ADC architecture has been fabricated in a standard 0.18 μm CMOS process. The proposed ADC has an average power consumption of 128 μW and a conversion rate 6 times higher than that of the conventional SS ADC. A high-quality image, captured at a line rate of 15.5 k lines/s, shows that the proposed ADC is suitable for high-speed CMOS image sensors.
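    The speed advantage of the coarse/fine split can be sketched numerically (the 4+8 bit partition below is an illustrative assumption, not the paper's exact design): a coarse ramp selects one of 2^4 reference segments, a fine ramp resolves 2^8 levels inside it, so a 12-bit result needs at most 16 + 256 = 272 ramp steps instead of the 4096 a conventional single-slope conversion would take.

```python
def two_step_ss_adc(vin, vref=1.0, coarse_bits=4, fine_bits=8):
    """Idealized two-step single-slope conversion: the coarse ramp picks the
    reference segment, then the fine ramp resolves the residue within it.
    (Bit partition is illustrative; no offset errors are modelled here.)"""
    n_coarse = 1 << coarse_bits
    n_fine = 1 << fine_bits
    lsb_coarse = vref / n_coarse
    coarse = min(int(vin / lsb_coarse), n_coarse - 1)   # segment index
    residue = vin - coarse * lsb_coarse
    fine = min(int(residue / (lsb_coarse / n_fine)), n_fine - 1)
    return (coarse << fine_bits) | fine

code = two_step_ss_adc(0.3)
print(code)  # → 1228, i.e. ≈ 0.3 of the 4096-code full scale
```

    The paper's error calibration scheme addresses exactly what this idealized model omits: offsets among the multiple reference voltages that define the segment boundaries.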

  9. Image Processing for Cameras with Fiber Bundle Image Relay

    DTIC Science & Technology

    length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems...coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image...vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with

  10. Combined imaging and chemical sensing using a single optical imaging fiber.

    PubMed

    Bronk, K S; Michael, K L; Pantano, P; Walt, D R

    1995-09-01

    Despite many innovations and developments in the field of fiber-optic chemical sensors, optical fibers have not been employed to both view a sample and concurrently detect an analyte of interest. While chemical sensors employing a single optical fiber or a noncoherent fiberoptic bundle have been applied to a wide variety of analytical determinations, they cannot be used for imaging. Similarly, coherent imaging fibers have been employed only for their originally intended purpose, image transmission. We herein report a new technique for viewing a sample and measuring surface chemical concentrations that employs a coherent imaging fiber. The method is based on the deposition of a thin, analyte-sensitive polymer layer on the distal surface of a 350-microns-diameter imaging fiber. We present results from a pH sensor array and an acetylcholine biosensor array, each of which contains approximately 6000 optical sensors. The acetylcholine biosensor has a detection limit of 35 microM and a fast (< 1 s) response time. In association with an epifluorescence microscope and a charge-coupled device, these modified imaging fibers can display visual information of a remote sample with 4-microns spatial resolution, allowing for alternating acquisition of both chemical analysis and visual histology.

  11. Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David

    2017-10-01

    The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors depends strongly on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable of acquiring illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for an HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding display, we investigated the image intensity statistics over time, and regarding image analysis, we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.

  12. A four-lens based plenoptic camera for depth measurements

    NASA Astrophysics Data System (ADS)

    Riou, Cécile; Deng, Zhiyuan; Colicchio, Bruno; Lauffenburger, Jean-Philippe; Kohler, Sophie; Haeberlé, Olivier; Cudel, Christophe

    2015-04-01

    In previous works, we have extended the principles of "variable homography", defined by Zhang and Greenspan, to measure the height of emergent fibers on glass and non-woven fabrics. That method was designed for fabric samples progressing on a conveyor belt, and triggered acquisition of two successive images was needed to perform the 3D measurement. In this work, we have retained the advantages of variable homography for measurements along the Z axis, but we have reduced the number of acquisitions to a single one by developing an acquisition device with 4 lenses placed in front of a single image sensor. The idea is to obtain four projected sub-images on a single CCD sensor. The device thus becomes a plenoptic or light-field camera, capturing multiple views on the same image sensor. We have adapted the variable homography formulation for this device, and we propose a new formulation to calculate depth with plenoptic cameras. With these results, we have transformed our plenoptic camera into a depth camera, and the first results are very promising.

  13. A study of thermographic diagnosis system and imaging algorithm by distributed thermal data using single infrared sensor.

    PubMed

    Yoon, Se Jin; Noh, Si Cheol; Choi, Heung Ho

    2007-01-01

    The infrared diagnosis device provides two-dimensional images and patient-oriented results that can be easily understood by the inspection target by using infrared cameras; however, it has disadvantages such as large size, high price, and inconvenient maintenance. In this regard, this study proposes a small-sized diagnosis device for body heat using a single infrared sensor, and implements an infrared detection system based on that sensor together with an algorithm that reconstructs thermography from the point-source temperature data it obtains. The developed system had a temperature resolution of 0.1 degree and a reproducibility of +/-0.1 degree. The accuracy was 90.39% at an error bound of +/-0 degree and 99.98% at an error bound of +/-0.1 degree. To evaluate the proposed algorithm and system, its output was compared with infrared images from a camera-based method. Clinically meaningful thermal images were obtained from a patient with a lesion, verifying the system's clinical applicability.

  14. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process.

    PubMed

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-12

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure high dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, the linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. By merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple-exposure high dynamic range (MEHDR) approach.
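    The merge of the two gain paths can be sketched as a simple per-pixel selection (the gain ratio, saturation threshold, and code values below are illustrative assumptions, not the paper's calibration):

```python
def merge_sehdr(hg, lg, gain_ratio=16.0, hg_max=4000):
    """Merge the two readouts of one exposure: keep the low-noise high-gain
    sample while it is below saturation, otherwise rescale the low-gain
    sample into the same linear domain. gain_ratio and hg_max are
    illustrative values, not taken from the paper."""
    return float(hg) if hg < hg_max else float(lg) * gain_ratio

print(merge_sehdr(1200, 80))    # dark pixel: high-gain path → 1200.0
print(merge_sehdr(4095, 1000))  # bright pixel: rescaled low-gain path → 16000.0
```

    Because both samples come from the same exposure interval, a moving edge cannot fall at different positions in the two source signals, which is the artifact mechanism the MEHDR approach suffers from.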

  15. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process †

    PubMed Central

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-01

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure high dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, the linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. By merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple-exposure high dynamic range (MEHDR) approach. PMID:29329210

  16. Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback

    PubMed Central

    Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie

    2017-01-01

    An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistance sensors or infrared sensors, the imaging sensor can realize a finer perception of the environmental light and thus guide a more precise lighting control. Before the system is deployed, a large set of typical imaging lighting data for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets, from which cluster benchmarks of the objective LEEMs are obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. When the system works, it first captures the lighting image using a wearable camera. It then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single-LEEM-based or the multiple-LEEMs-based control is applied to obtain an optimal lighting effect. Extensive experimental results show that the proposed system can tune the LED lamp automatically according to changes in environmental luminance. PMID:28208781

  17. Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback.

    PubMed

    Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie

    2017-02-09

    An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistance sensors or infrared sensors, the imaging sensor can realize a finer perception of the environmental light and thus guide a more precise lighting control. Before the system is deployed, a large set of typical imaging lighting data for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets, from which cluster benchmarks of the objective LEEMs are obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. When the system works, it first captures the lighting image using a wearable camera. It then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single-LEEM-based or the multiple-LEEMs-based control is applied to obtain an optimal lighting effect. Extensive experimental results show that the proposed system can tune the LED lamp automatically according to changes in environmental luminance.

  18. Single-shot digital holography by use of the fractional Talbot effect.

    PubMed

    Martínez-León, Lluís; Araiza-E, María; Javidi, Bahram; Andrés, Pedro; Climent, Vicent; Lancis, Jesús; Tajahuerce, Enrique

    2009-07-20

    We present a method for recording in-line single-shot digital holograms based on the fractional Talbot effect. In our system, an image sensor records the interference between the light field scattered by the object and a properly codified parallel reference beam. A simple binary two-dimensional periodic grating is used to codify the reference beam generating a periodic three-step phase distribution over the sensor plane by fractional Talbot effect. This provides a method to perform single-shot phase-shifting interferometry at frame rates only limited by the sensor capabilities. Our technique is well adapted for dynamic wavefront sensing applications. Images of the object are digitally reconstructed from the digital hologram. Both computer simulations and experimental results are presented.
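    The three-step phase distribution generated by the grating allows the standard three-step phase-shifting formula to recover the wrapped object phase. A generic sketch with 0°/120°/240° shifts follows (the specific shift values are our assumption for illustration; in the paper the Talbot grating fixes the actual phase pattern):

```python
import math

def three_step_phase(i1, i2, i3):
    """Standard three-step phase-shifting formula for reference shifts of
    0, 120, and 240 degrees: with I_k = A + B*cos(phi + k*120deg),
    I3 - I2 = sqrt(3)*B*sin(phi) and 2*I1 - I2 - I3 = 3*B*cos(phi)."""
    return math.atan2(math.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

# synthetic check: generate intensities for a known phase and recover it
A, B, phi = 1.0, 0.5, 0.8
i = [A + B * math.cos(phi + k * 2 * math.pi / 3) for k in range(3)]
print(round(three_step_phase(*i), 6))  # → 0.8
```

    Because the grating produces all three shifted interferograms in one exposure, this computation runs on a single captured frame, which is what makes the scheme single-shot.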

  19. Performance study of double SOI image sensors

    NASA Astrophysics Data System (ADS)

    Miyoshi, T.; Arai, Y.; Fujita, Y.; Hamasaki, R.; Hara, K.; Ikegami, Y.; Kurachi, I.; Nishimura, R.; Ono, S.; Tauchi, K.; Tsuboyama, T.; Yamada, M.

    2018-02-01

    Double silicon-on-insulator (DSOI) sensors composed of two thin silicon layers and one thick silicon layer have been developed since 2011. The thick substrate consists of high resistivity silicon with p-n junctions while the thin layers are used as SOI-CMOS circuitry and as shielding to reduce the back-gate effect and crosstalk between the sensor and the circuitry. In 2014, a high-resolution integration-type pixel sensor, INTPIX8, was developed based on the DSOI concept. This device is fabricated using a Czochralski p-type (Cz-p) substrate in contrast to a single SOI (SSOI) device having a single thin silicon layer and a Float Zone p-type (FZ-p) substrate. In the present work, X-ray spectra of both DSOI and SSOI sensors were obtained using an Am-241 radiation source at four gain settings. The gain of the DSOI sensor was found to be approximately three times that of the SSOI device because the coupling capacitance is reduced by the DSOI structure. An X-ray imaging demonstration was also performed and high spatial resolution X-ray images were obtained.

  20. Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope

    PubMed Central

    Adams, Jesse K.; Boominathan, Vivek; Avants, Benjamin W.; Vercosa, Daniel G.; Ye, Fan; Baraniuk, Richard G.; Robinson, Jacob T.; Veeraraghavan, Ashok

    2017-01-01

    Modern biology increasingly relies on fluorescence microscopy, which is driving demand for smaller, lighter, and cheaper microscopes. However, traditional microscope architectures suffer from a fundamental trade-off: As lenses become smaller, they must either collect less light or image a smaller field of view. To break this fundamental trade-off between device size and performance, we present a new concept for three-dimensional (3D) fluorescence imaging that replaces lenses with an optimized amplitude mask placed a few hundred micrometers above the sensor and an efficient algorithm that can convert a single frame of captured sensor data into high-resolution 3D images. The result is FlatScope: perhaps the world’s tiniest and lightest microscope. FlatScope is a lensless microscope that is scarcely larger than an image sensor (roughly 0.2 g in weight and less than 1 mm thick) and yet able to produce micrometer-resolution, high–frame rate, 3D fluorescence movies covering a total volume of several cubic millimeters. The ability of FlatScope to reconstruct full 3D images from a single frame of captured sensor data allows us to image 3D volumes roughly 40,000 times faster than a laser scanning confocal microscope while providing comparable resolution. We envision that this new flat fluorescence microscopy paradigm will lead to implantable endoscopes that minimize tissue damage, arrays of imagers that cover large areas, and bendable, flexible microscopes that conform to complex topographies. PMID:29226243

  1. Two-step single slope/SAR ADC with error correction for CMOS image sensor.

    PubMed

    Tang, Fang; Bermak, Amine; Amira, Abbes; Amor Benammar, Mohieddine; He, Debiao; Zhao, Xiaojin

    2014-01-01

    Conventional two-step ADCs for CMOS image sensors require full-resolution noise performance in the first-stage single-slope ADC, leading to high power consumption and large chip area. This paper presents an 11-bit two-step single-slope/successive approximation register (SAR) ADC scheme for CMOS image sensor applications. The first-stage single-slope ADC generates 3 bits of data and 1 redundant bit. The redundant bit is combined with the following 8-bit SAR ADC output code using a proposed error correction algorithm. Instead of requiring full-resolution noise performance, the first-stage single-slope circuit of the proposed ADC can tolerate up to 3.125% quantization noise. With the proposed error correction mechanism, the power consumption and chip area of the single-slope ADC are significantly reduced. The prototype ADC is fabricated using 0.18 μm CMOS technology. The chip area of the proposed ADC is 7 μm × 500 μm. The measurement results show that the energy-efficiency figure of merit (FOM) of the proposed ADC core is only 125 pJ/sample under a 1.4 V power supply, and the chip area efficiency is 84 k·μm²·cycles/sample.
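    The role of the redundant bit can be sketched as follows: because the fine stage's conversion window is wider than one coarse segment, a bounded coarse-stage decision error moves the coarse code up by one and the fine code down by the same amount, so the combined output is unchanged. In this toy model only the 3-bit coarse / 1 redundant bit / 8-bit fine split comes from the abstract; the offset-error model and the half-segment window placement are our assumptions.

```python
def two_step_with_redundancy(vin, coarse_error=0.0, vref=1.0):
    """Two-step conversion with one bit of redundancy: the 3-bit coarse
    stage may pick a segment off by up to half a segment (modelled by
    coarse_error), and the 9-bit fine stage spans 1.5 segments, so the
    recombination cancels the coarse mistake."""
    seg = vref / 8                       # coarse segment size (3 bits)
    coarse = max(0, min(int((vin + coarse_error) / seg), 7))
    residue = vin - coarse * seg         # may be negative or exceed seg
    fine = int(round((residue + seg / 2) / (seg / 256)))  # half-segment offset window
    return (coarse << 8) + fine - 128    # subtract the half-segment offset

ok = two_step_with_redundancy(0.49)
bad_coarse = two_step_with_redundancy(0.49, coarse_error=0.05)
print(ok, bad_coarse)  # both 1004: the redundancy absorbs the coarse error
```

    This is why the first stage can tolerate coarse quantization noise instead of needing full 11-bit accuracy: any bounded segment-selection error is digitally corrected when the codes are combined.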

  2. Advances in multi-sensor data fusion: algorithms and applications.

    PubMed

    Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying

    2009-01-01

    With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. This paper presents an overview of recent advances in multi-sensor satellite image fusion. First, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering-target tracking, are described. Both the advantages and the limitations of those applications are then discussed. Recommendations are given, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; (3) establishment of an automatic quality assessment scheme.

  3. Time-of-flight camera via a single-pixel correlation image sensor

    NASA Astrophysics Data System (ADS)

    Mao, Tianyi; Chen, Qian; He, Weiji; Dai, Huidong; Ye, Ling; Gu, Guohua

    2018-04-01

    A time-of-flight imager based on a single-pixel correlation image sensor is proposed for noise-free depth map acquisition in the presence of ambient light. A digital micro-mirror device and a time-modulated IR laser provide spatial and temporal illumination of the unknown object. Compressed sensing and the "four bucket principle" are combined to reconstruct the depth map from a sequence of measurements at a low sampling rate. A second-order correlation transform is also introduced to reduce the noise from the detector itself and from direct ambient light. Computer simulations are presented to validate the computational models and the improvement in the reconstructions.
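    The "four bucket principle" referred to here is the standard four-phase demodulation of a continuous-wave time-of-flight signal: four correlation samples at 0°, 90°, 180°, and 270° offsets yield the round-trip phase, which converts to depth. A generic sketch (the modulation frequency, offset level, and correlation model are assumptions of ours, not values from the paper):

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def four_bucket_depth(a0, a90, a180, a270, f_mod=20e6):
    """Recover depth from four correlation samples at 0/90/180/270 degree
    demodulation offsets: with a_k = A*cos(phi - theta_k) + B,
    a90 - a270 = 2A*sin(phi) and a0 - a180 = 2A*cos(phi)."""
    phase = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

# synthetic check: a target at 2 m produces a known correlation phase
f, d = 20e6, 2.0
phi = 4 * math.pi * f * d / C
buckets = [math.cos(phi - k * math.pi / 2) + 2.0 for k in range(4)]
print(four_bucket_depth(*buckets, f_mod=f))  # ≈ 2.0
```

    The constant ambient offset (the `+ 2.0` term) cancels in both differences, which is why the four-bucket scheme is robust to a static ambient light level; the paper's second-order correlation transform targets the noise that does not cancel.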

  4. Optimization of Self-Directed Target Coverage in Wireless Multimedia Sensor Network

    PubMed Central

    Yang, Yang; Wang, Yufei; Pi, Dechang; Wang, Ruchuan

    2014-01-01

    Video and image sensors in wireless multimedia sensor networks (WMSNs) have a directed view and a limited sensing angle, so the methods that solve the target coverage problem for traditional sensor networks, which use a circular sensing model, are not suitable for WMSNs. Based on the proposed FoV (field of view) sensing model and FoV disk model, how well a multimedia sensor is expected to cover a target is defined by the deflection angle between the target and the sensor's current orientation and by the distance between the target and the sensor. Target coverage optimization algorithms based on this expected coverage value are then presented separately for the single-sensor single-target, multi-sensor single-target, and single-sensor multi-target problems. For the multi-sensor multi-target problem, which is NP-complete, candidate orientations are selected by rotating each sensor to cover every target falling in its FoV disk, and a genetic algorithm then yields an approximated minimum subset of sensors that covers all the targets in the network. Simulation results show the algorithm's performance and the effect of the number of targets on the resulting subset. PMID:25136667

  5. Practical design and evaluation methods of omnidirectional vision sensors

    NASA Astrophysics Data System (ADS)

    Ohte, Akira; Tsuzuki, Osamu

    2012-01-01

    A practical omnidirectional vision sensor, consisting of a curved mirror, a mirror-supporting structure, and a megapixel digital imaging system, can view a field of 360 deg horizontally and 135 deg vertically. The authors theoretically analyzed and evaluated several curved mirrors, namely, a spherical mirror, an equidistant mirror, and a single viewpoint mirror (hyperboloidal mirror). The focus of their study was mainly on the image-forming characteristics, position of the virtual images, and size of blur spot images. The authors propose here a practical design method that satisfies the required characteristics. They developed image-processing software for converting circular images to images of the desired characteristics in real time. They also developed several prototype vision sensors using spherical mirrors. Reports dealing with virtual images and blur-spot size of curved mirrors are few; therefore, this paper will be very useful for the development of omnidirectional vision sensors.

  6. Enhanced image capture through fusion

    NASA Technical Reports Server (NTRS)

    Burt, Peter J.; Hanna, Keith; Kolczynski, Raymond J.

    1993-01-01

    Image fusion may be used to combine images from different sensors, such as IR and visible cameras, to obtain a single composite with extended information content. Fusion may also be used to combine multiple images from a given sensor to form a composite image in which information of interest is enhanced. We present a general method for performing image fusion and show that this method is effective for diverse fusion applications. We suggest that fusion may provide a powerful tool for enhanced image capture with broad utility in image processing and computer vision.

  7. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or the resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. A single sensor has been developed, representing a single facet of the fly's eye. This sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. The system "preprocesses" incoming image data, so that minimal data processing is required to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.

  8. Medipix2 based CdTe microprobe for dental imaging

    NASA Astrophysics Data System (ADS)

    Vykydal, Z.; Fauler, A.; Fiederle, M.; Jakubek, J.; Svestkova, M.; Zwerger, A.

    2011-12-01

    Medical imaging devices and techniques are required to provide high-resolution, low-dose images of samples or patients. Hybrid semiconductor single photon counting devices, together with suitable sensor materials and advanced techniques of image reconstruction, fulfil these requirements. In particular cases, such as the direct observation of dental implants, the size of the imaging device itself also plays a critical role. This work presents a comparison of 2D radiographs of a tooth provided by a standard commercial dental imaging system (Gendex 765DC X-ray tube with VisualiX scintillation detector) and two Medipix2 USB Lite detectors, one equipped with a Si sensor (300 μm thick) and one with a CdTe sensor (1 mm thick). The single photon counting capability of the Medipix2 device allows a virtually unlimited dynamic range of the images and thus increases the contrast significantly. The dimensions of the whole USB Lite device are only 15 mm × 60 mm, of which 25% is sensitive area. A detector of this compact size can be used directly inside the patient's mouth.

  9. Dosimetry of heavy ions by use of CCD detectors

    NASA Technical Reports Server (NTRS)

    Schott, J. U.

    1994-01-01

    The design and the atomic composition of Charge Coupled Devices (CCDs) make them unique for investigations of single energetic particle events. As a detector system for ionizing particles, they detect single particles with local resolution and near-real-time particle tracking. In combination with their properties as optical sensors, traversals of single particles can be correlated with any object attached to the light-sensitive surface of the sensor by simple imaging of its shadow and subsequent image analysis of both the optical image and the particle effects observed in the affected pixels. With biological objects it is possible for the first time to investigate effects of single heavy ions in tissue or organs of metabolizing (i.e., moving) systems with a local resolution better than 15 microns. Calibration data for particle detection in CCDs are presented for low-energy protons and heavy ions.

  10. A Multi-Resolution Mode CMOS Image Sensor with a Novel Two-Step Single-Slope ADC for Intelligent Surveillance Systems.

    PubMed

    Kim, Daehyeok; Song, Minkyu; Choe, Byeongseong; Kim, Soo Youn

    2017-06-25

    In this paper, we present a multi-resolution mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed for the 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) of a CIS that supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution images enable the CIS to reduce total power consumption while image quality holds steady in the absence of events. A prototype sensor of 176 × 144 pixels has been fabricated in a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (at full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital), at a frame rate of 14 frames/s.
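
    The two-step single-slope idea the abstract names can be sketched numerically: a coarse ramp locates the sub-range and a fine ramp resolves the residue, so an 8-bit code needs only 16 + 16 ramp steps instead of 256. A minimal sketch, not the authors' circuit-level design; all parameter values are illustrative:

```python
def tss_adc(v_in, v_ref=1.0, coarse_bits=4, fine_bits=4):
    """Two-step single-slope conversion sketch: a coarse ramp finds the
    sub-range, then a fine ramp digitizes the residue within it."""
    n_coarse = 1 << coarse_bits              # 16 coarse ramp steps
    n_fine = 1 << fine_bits                  # 16 fine ramp steps
    lsb_coarse = v_ref / n_coarse
    coarse = min(int(v_in / lsb_coarse), n_coarse - 1)
    residue = v_in - coarse * lsb_coarse
    fine = min(int(residue * n_fine / lsb_coarse), n_fine - 1)
    return (coarse << fine_bits) | fine      # combined 8-bit code

codes = [tss_adc(v) for v in (0.0, 0.25, 0.5, 0.999)]
```

    The speed advantage over a plain single-slope converter (32 ramp steps vs. 256 for the same resolution) is what makes the scheme attractive for a multi-resolution sensor readout.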

  11. Multispectral image fusion for detecting land mines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, G.A.; Sengupta, S.K.; Aimonetti, W.D.

    1995-04-01

    This report details a system which fuses information contained in registered images from multiple sensors to reduce the effects of clutter and improve the ability to detect surface and buried land mines. The sensor suite currently consists of a camera that acquires images in six bands (400 nm, 500 nm, 600 nm, 700 nm, 800 nm and 900 nm). Past research has shown that it is extremely difficult to distinguish land mines from background clutter in images obtained from a single sensor. It is hypothesized, however, that information fused from a suite of various sensors is likely to provide better detection reliability, because the suite of sensors detects a variety of physical properties that are more separable in feature space. The materials surrounding the mines can include natural materials (soil, rocks, foliage, water, etc.) and some artifacts.

  12. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    PubMed

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by a separate denoising process. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially-adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
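
    The core PCA step can be illustrated on a stack of flattened patches: project onto the top principal components and discard the low-variance, noise-dominated directions. This is a generic PCA-shrinkage sketch on synthetic data, not the paper's spatially-adaptive CFA algorithm; sizes and the noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_denoise_patches(patches, keep):
    """PCA shrinkage on flattened patches (one patch per row): keep only
    the top-`keep` principal components when reconstructing."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    cov = centered.T @ centered / len(patches)   # patch covariance
    vals, vecs = np.linalg.eigh(cov)             # ascending eigenvalues
    basis = vecs[:, -keep:]                      # top-`keep` axes
    return centered @ basis @ basis.T + mean

# Toy ensemble: a rank-1 signal across 500 patches plus white noise
signal = rng.normal(size=(500, 1)) @ rng.normal(size=(1, 16))
noisy = signal + 0.1 * rng.normal(size=signal.shape)
denoised = pca_denoise_patches(noisy, keep=1)
err_noisy = np.linalg.norm(noisy - signal)
err_denoised = np.linalg.norm(denoised - signal)
```

    Keeping one component removes the noise energy in the remaining 15 directions, which is why the reconstruction error drops.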

  13. Multispectral image-fused head-tracked vision system (HTVS) for driving applications

    NASA Astrophysics Data System (ADS)

    Reese, Colin E.; Bender, Edward J.

    2001-08-01

    Current military thermal driver vision systems consist of a single Long Wave Infrared (LWIR) sensor mounted on a manually operated gimbal, which is normally locked forward during driving. The sensor video imagery is presented on a large area flat panel display for direct view. The Night Vision and Electronics Sensors Directorate and Kaiser Electronics are cooperatively working to develop a driver's Head Tracked Vision System (HTVS) which directs dual waveband sensors in a more natural head-slewed imaging mode. The HTVS consists of LWIR and image intensified sensors, a high-speed gimbal, a head mounted display, and a head tracker. The first prototype systems have been delivered and have undergone preliminary field trials to characterize the operational benefits of a head tracked sensor system for tactical military ground applications. This investigation will address the advantages of head tracked vs. fixed sensor systems regarding peripheral sightings of threats, road hazards, and nearby vehicles. An additional thrust will investigate the degree to which additive (A+B) fusion of LWIR and image intensified sensors enhances overall driving performance. Typically, LWIR sensors are better for detecting threats, while image intensified sensors provide more natural scene cues, such as shadows and texture. This investigation will examine the degree to which the fusion of these two sensors enhances the driver's overall situational awareness.

  14. Optical Inspection In Hostile Industrial Environments: Single-Sensor VS. Imaging Methods

    NASA Astrophysics Data System (ADS)

    Cielo, P.; Dufour, M.; Sokalski, A.

    1988-11-01

    On-line and unsupervised industrial inspection for quality control and process monitoring is increasingly required in the modern automated factory. Optical techniques are particularly well suited to industrial inspection in hostile environments because of their noncontact nature, fast response time and imaging capabilities. Optical sensors can be used for remote inspection of high-temperature products or otherwise inaccessible parts, provided they are in a line-of-sight relation with the sensor. Moreover, optical sensors are much easier to adapt to a variety of part shapes, positions, orientations and conveyor speeds than contact-based sensors. This is an important requirement in a flexible automation environment. A number of choices are possible in the design of optical inspection systems. General-purpose two-dimensional (2-D) or three-dimensional (3-D) imaging techniques have advanced very rapidly in recent years thanks to a substantial research effort as well as to the availability of increasingly powerful and affordable hardware and software. Imaging can be realized using 2-D arrays or simpler one-dimensional (1-D) line-array detectors. Alternatively, dedicated single-spot sensors require a smaller amount of data processing and often lead to robust sensors which are particularly appropriate to on-line operation in hostile industrial environments. Many specialists now feel that dedicated sensors or clusters of sensors are often more effective for specific industrial automation and control tasks, at least in the short run. This paper will discuss optomechanical and electro-optical choices with reference to the design of a number of on-line inspection sensors which have been recently developed at our institute. Case studies will include real-time surface roughness evaluation on polymer cables extruded at high speed, surface characterization of hot-rolled or galvanized-steel sheets, temperature evaluation and pinhole detection in aluminum foil, multi-wavelength polymer sheet thickness gauging and thermographic imaging, 3-D lumber profiling, line-array inspection of textiles and glassware, as well as on-line optical inspection for the control of automated arc welding. In each case the design choices between single- or multiple-element detectors, mechanical vs. electronic scanning, laser vs. incoherent illumination, etc. will be discussed in terms of industrial constraints such as speed requirements, protection against the environment and reliability of the sensor output.

  15. Wide-field microscopy using microcamera arrays

    NASA Astrophysics Data System (ADS)

    Marks, Daniel L.; Youn, Seo Ho; Son, Hui S.; Kim, Jungsang; Brady, David J.

    2013-02-01

    A microcamera is a relay lens paired with an image sensor. Microcameras are grouped into arrays that relay overlapping views of a single large surface to the sensors to form a continuous synthetic image. The imaged surface may be curved or irregular, as each camera may independently be dynamically focused to a different depth. Microcamera arrays are akin to microprocessors in supercomputers in that both join individual processors by an optoelectronic routing fabric to increase capacity and performance. A microcamera may image ten or more megapixels and be grouped into an array of several hundred, as has already been demonstrated by the DARPA AWARE Wide-Field program with multiscale gigapixel photography. We adapt gigapixel microcamera array architectures to wide-field microscopy of irregularly shaped surfaces, imaging areas greater than 1000 square millimeters at resolutions of 3 microns or better in a single snapshot. The system includes a novel relay design, a sensor electronics package, and an FPGA-based networking fabric. Biomedical applications include screening for skin lesions, wide-field and resolution-agile microsurgical imaging, and microscopic cytometry of millions of cells performed in situ.

  16. A real-time ultrasonic field mapping system using a Fabry Pérot single pixel camera for 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Huynh, Nam; Zhang, Edward; Betcke, Marta; Arridge, Simon R.; Beard, Paul; Cox, Ben

    2015-03-01

    A system for dynamic mapping of broadband ultrasound fields has been designed, with high-frame-rate photoacoustic imaging in mind. A Fabry-Pérot interferometric ultrasound sensor was interrogated using a coherent-light single-pixel camera. Scrambled Hadamard measurement patterns were used to sample the acoustic field at the sensor, and either a fast Hadamard transform or a compressed sensing reconstruction algorithm was used to recover the acoustic pressure data. Frame rates of 80 Hz were achieved for 32 × 32 images even though no specialist hardware was used for the on-the-fly reconstructions. The ability of the system to obtain photoacoustic images with data compression as low as 10% was also demonstrated.
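
    The fully sampled case rests on the self-inverse property of the Hadamard transform, H @ H = n I: one scalar detector reading per pattern suffices to invert the field exactly. A small sketch with an assumed 16 × 16 field (the paper uses 32 × 32, and its compressed-sensing mode keeps only ~10% of the patterns):

```python
import numpy as np

def sylvester_hadamard(n):
    """Hadamard matrix of order n (a power of two), Sylvester construction."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

rng = np.random.default_rng(1)
n = 16 * 16
field = rng.random(n)                    # "true" acoustic pressure map
H = sylvester_hadamard(n)
perm = rng.permutation(n)                # scrambling of the basis
patterns = H[:, perm]                    # scrambled Hadamard patterns
measurements = patterns @ field          # one single-pixel reading per pattern
# Inversion: patterns.T @ patterns == n * I for any column permutation
recovered = patterns.T @ measurements / n
```

    In practice the product by H.T is computed with a fast Hadamard transform in O(n log n) rather than a dense matrix multiply, which is what permits on-the-fly reconstruction.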

  17. Synthetic Foveal Imaging Technology

    NASA Technical Reports Server (NTRS)

    Nikzad, Shouleh (Inventor); Monacos, Steve P. (Inventor); Hoenk, Michael E. (Inventor)

    2013-01-01

    Apparatuses and methods are disclosed that create a synthetic fovea in order to identify and highlight interesting portions of an image for further processing and rapid response. Synthetic foveal imaging implements a parallel processing architecture that uses reprogrammable logic to implement embedded, distributed, real-time foveal image processing from different sensor types while simultaneously allowing for lossless storage and retrieval of raw image data. Real-time, distributed, adaptive processing of multi-tap image sensors with coordinated processing hardware used for each output tap is enabled. In mosaic focal planes, a parallel-processing network can be implemented that treats the mosaic focal plane as a single ensemble rather than a set of isolated sensors. Various applications are enabled for imaging and robotic vision where processing and responding to enormous amounts of data quickly and efficiently is important.

  18. Wave analysis of a plenoptic system and its applications

    NASA Astrophysics Data System (ADS)

    Shroff, Sapna A.; Berkner, Kathrin

    2013-03-01

    Traditional imaging systems directly image a 2D object plane on to the sensor. Plenoptic imaging systems contain a lenslet array at the conventional image plane and a sensor at the back focal plane of the lenslet array. In this configuration the data captured at the sensor is not a direct image of the object. Each lenslet effectively images the aperture of the main imaging lens at the sensor. Therefore the sensor data retains angular light-field information which can be used for a posteriori digital computation of multi-angle images and axially refocused images. If a filter array, containing spectral filters or neutral density or polarization filters, is placed at the pupil aperture of the main imaging lens, then each lenslet images the filters on to the sensor. This enables the digital separation of multiple filter modalities giving single snapshot, multi-modal images. Due to the diversity of potential applications of plenoptic systems, their investigation is increasing. As the application space moves towards microscopes and other complex systems, and as pixel sizes become smaller, the consideration of diffraction effects in these systems becomes increasingly important. We discuss a plenoptic system and its wave propagation analysis for both coherent and incoherent imaging. We simulate a system response using our analysis and discuss various applications of the system response pertaining to plenoptic system design, implementation and calibration.

  19. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor.

    PubMed

    Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2016-02-22

    In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
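
    The forward model behind this compression is linear: each coded readout is a shutter-weighted sum of the time frames at every pixel. A toy sketch with the abstract's sizes (32 frames compressed into 15 measurements); the minimum-norm pseudo-inverse below only illustrates the model, the authors use a more sophisticated reconstruction:

```python
import numpy as np

rng = np.random.default_rng(2)

T, M, n_pix = 32, 15, 64                     # frames, measurements, pixels
codes = rng.integers(0, 2, size=(M, T)).astype(float)  # random shutters
video = rng.random((T, n_pix))               # "true" time-resolved frames
compressed = codes @ video                   # focal-plane coded readout

# Minimum-norm solution of the underdetermined system codes @ x = compressed
recon = np.linalg.pinv(codes) @ compressed
```

    Because 15 equations cannot pin down 32 unknowns per pixel, any practical reconstruction adds a prior (sparsity, temporal smoothness); the pseudo-inverse merely returns one consistent solution.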

  20. Compact SPAD-Based Pixel Architectures for Time-Resolved Image Sensors

    PubMed Central

    Perenzoni, Matteo; Pancheri, Lucio; Stoppa, David

    2016-01-01

    This paper reviews the state of the art of single-photon avalanche diode (SPAD) image sensors for time-resolved imaging. The focus of the paper is on pixel architectures featuring small pixel size (<25 μm) and high fill factor (>20%) as a key enabling technology for the successful implementation of high spatial resolution SPAD-based image sensors. A summary of the main CMOS SPAD implementations, their characteristics and integration challenges, is provided from the perspective of targeting large pixel arrays, where one of the key drivers is the spatial uniformity. The main analog techniques aimed at time-gated photon counting and photon timestamping suitable for compact and low-power pixels are critically discussed. The main features of these solutions are the adoption of analog counting techniques and time-to-analog conversion, in NMOS-only pixels. Reliable quantum-limited single-photon counting, self-referenced analog-to-digital conversion, time gating down to 0.75 ns and timestamping with 368 ps jitter are achieved. PMID:27223284

  1. Vidicon intensifier

    NASA Technical Reports Server (NTRS)

    Carpentier, R. P.; Pietrzyk, J. P.; Beyer, R. R.; Kalafut, J. S.

    1976-01-01

    A computer-designed sensor, consisting of a single-stage, electrostatically focused triode image intensifier, provides high-quality imaging characterized by exceptionally low geometric distortion, low shading, and high center-and-corner modulation transfer function.

  2. General Model of Photon-Pair Detection with an Image Sensor

    NASA Astrophysics Data System (ADS)

    Defienne, Hugo; Reichert, Matthew; Fleischer, Jason W.

    2018-05-01

    We develop an analytic model that relates intensity correlation measurements performed by an image sensor to the properties of photon pairs illuminating it. Experiments using an effective single-photon counting camera, a linear electron-multiplying charge-coupled device camera, and a standard CCD camera confirm the model. The results open the field of quantum optical sensing using conventional detectors.

  3. Photon counting phosphorescence lifetime imaging with TimepixCam

    DOE PAGES

    Hirvonen, Liisa M.; Fisher-Levine, Merlin; Suhling, Klaus; ...

    2017-01-12

    TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window, and read out by a Timepix ASIC. The 256 x 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting (TCSPC) imaging. We have characterised the photon detection capabilities of this detector system, and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.

  4. Photon counting phosphorescence lifetime imaging with TimepixCam.

    PubMed

    Hirvonen, Liisa M; Fisher-Levine, Merlin; Suhling, Klaus; Nomerotski, Andrei

    2017-01-01

    TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window and read out by a Timepix Application Specific Integrated Circuit. The 256 × 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting imaging. We have characterised the photon detection capabilities of this detector system and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.
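
    Per pixel, the lifetime-mapping step reduces to fitting an exponential decay to a TCSPC histogram. A toy sketch using the record's numbers (15 ns bins, ~1 μs lifetime); the weighted log-linear fit is an illustrative choice, not necessarily the authors' fitting method:

```python
import numpy as np

rng = np.random.default_rng(3)

tau_true = 1.0e-6                        # ~1 us phosphorescence lifetime
arrivals = rng.exponential(tau_true, size=200_000)   # photon arrival times
bin_w = 15e-9                            # 15 ns detector time resolution
counts, edges = np.histogram(arrivals, bins=np.arange(0.0, 5e-6, bin_w))
centers = edges[:-1] + bin_w / 2
mask = counts > 0
# Fit log N(t) = log N0 - t / tau, weighting bins by their Poisson precision
slope, _ = np.polyfit(centers[mask], np.log(counts[mask]), 1,
                      w=np.sqrt(counts[mask]))
tau_fit = -1.0 / slope
```

    Repeating this fit for every pixel's histogram yields the phosphorescence lifetime image.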

  5. Photon counting phosphorescence lifetime imaging with TimepixCam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirvonen, Liisa M.; Fisher-Levine, Merlin; Suhling, Klaus

    TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window, and read out by a Timepix ASIC. The 256 x 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting (TCSPC) imaging. We have characterised the photon detection capabilities of this detector system, and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.

  6. Photon counting phosphorescence lifetime imaging with TimepixCam

    NASA Astrophysics Data System (ADS)

    Hirvonen, Liisa M.; Fisher-Levine, Merlin; Suhling, Klaus; Nomerotski, Andrei

    2017-01-01

    TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window and read out by a Timepix Application Specific Integrated Circuit. The 256 × 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting imaging. We have characterised the photon detection capabilities of this detector system and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.

  7. Evaluation of space SAR as a land-cover classification

    NASA Technical Reports Server (NTRS)

    Brisco, B.; Ulaby, F. T.; Williams, T. H. L.

    1985-01-01

    The multidimensional approach to the mapping of land cover, crops, and forests is reported. Dimensionality is achieved by using data from sensors such as LANDSAT to augment Seasat and Shuttle Imaging Radar (SIR) data, using different image features such as tone and texture, and acquiring multidate data. Seasat, Shuttle Imaging Radar (SIR-A), and LANDSAT data are used both individually and in combination to map land cover in Oklahoma. The results indicate that radar is the best single sensor (72% accuracy) and produces the best sensor combination (97.5% accuracy) for discriminating among five land cover categories. Multidate Seasat data and a single date of LANDSAT coverage are then used in a crop classification study of western Kansas. The highest accuracy for a single channel is achieved using a Seasat scene, which produces a classification accuracy of 67%. Classification accuracy increases to approximately 75% when either a multidate Seasat combination or LANDSAT data in a multisensor combination is used. The tonal and textural elements of SIR-A data are then used both alone and in combination to classify forests into five categories.

  8. Joint estimation of high resolution images and depth maps from light field cameras

    NASA Astrophysics Data System (ADS)

    Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki

    2014-03-01

    Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure, where a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the positional resolution trade off against each other under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher resolution image from low-resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible; this registration is equivalent to depth estimation. Therefore, we propose a method where super-resolution and depth refinement are performed alternately. Most of the process of our method is implemented by image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image three times horizontally and vertically. Our method produces clearer images compared to the original sub-aperture images and to the case without depth refinement.
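
    The reason accurate registration enables super-resolution can be shown in the ideal case: if each low-resolution view samples the scene at a known sub-pixel shift, shift-and-add interleaving restores the fine grid exactly. A sketch with assumed perfect shifts (in the paper these shifts come from the estimated depth, and the views are additionally blurred and noisy):

```python
import numpy as np

rng = np.random.default_rng(6)

scale = 3                                   # 3x super-resolution per axis
hi = rng.random((30, 30))                   # "true" high-resolution scene
# Nine sub-aperture views, each sampling hi at a distinct sub-pixel shift
views = {(dy, dx): hi[dy::scale, dx::scale]
         for dy in range(scale) for dx in range(scale)}

recon = np.empty_like(hi)
for (dy, dx), lo in views.items():          # interleave registered views
    recon[dy::scale, dx::scale] = lo
```

    With imperfect registration the interleaved samples land on the wrong grid positions, which is why the authors alternate super-resolution with depth (i.e., registration) refinement.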

  9. Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs).

    PubMed

    Jaramillo, Carlos; Valenti, Roberto G; Guo, Ling; Xiao, Jizhong

    2016-02-06

    We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) computed from a single image captured in a real-life experiment. We expect our sensor design to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision applications under different circumstances.

  10. Photonic-crystal membranes for optical detection of single nano-particles, designed for biosensor application.

    PubMed

    Grepstad, Jon Olav; Kaspar, Peter; Solgaard, Olav; Johansen, Ib-Rune; Sudbø, Aasmund S

    2012-03-26

    A sensor designed to detect bio-molecules is presented. The sensor exploits a planar 2D photonic crystal (PC) membrane with sub-micron thickness and through holes, to induce high optical fields that allow detection of nano-particles smaller than the diffraction limit of an optical microscope. We report on our design and fabrication of a PC membrane with a nano-particle trapped inside. We have also designed and built an imaging system where an optical microscope and a CCD camera are used to take images of the PC membrane. Results show how the trapped nano-particle appears as a bright spot in the image. In a first experimental realization of the imaging system, single particles with a radius of 75 nm can be detected.

  11. Research on multi-source image fusion technology in haze environment

    NASA Astrophysics Data System (ADS)

    Ma, GuoDong; Piao, Yan; Li, Bing

    2017-11-01

    In a haze environment, the visible image collected by a single sensor can express the details of the shape, color and texture of the target very well, but because of the haze its sharpness is low and some of the target subject is lost. An infrared image collected by a single sensor, owing to its expression of thermal radiation and strong penetration ability, can clearly express the target subject, but it loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. Firstly, the improved Dark Channel Prior algorithm is used to preprocess the hazy visible image. Secondly, the improved SURF algorithm is used to register the infrared image and the dehazed visible image. Finally, a weighted fusion algorithm based on information complementarity is used to fuse the images. Experiments show that the proposed method can improve the clarity of the visible target and highlight the occluded infrared target for target recognition.
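
    One common way to realize complementarity-weighted fusion is to weight each pixel toward whichever source carries more local information, measured here by sliding-window variance. This is a generic sketch on random stand-in images, not the paper's exact weighting scheme; the window size and weight formula are assumptions:

```python
import numpy as np

def local_variance(img, k=3):
    """Local contrast via the variance in a k x k sliding window."""
    pad = np.pad(img, k // 2, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(pad, (k, k))
    return win.var(axis=(-1, -2))

def fuse(visible, infrared):
    """Per-pixel convex combination weighted toward the source with
    higher local contrast (more local information)."""
    wv = local_variance(visible)
    wi = local_variance(infrared)
    w = wi / (wv + wi + 1e-12)
    return (1.0 - w) * visible + w * infrared

rng = np.random.default_rng(4)
vis = rng.random((32, 32))               # stand-in for a dehazed visible frame
ir = rng.random((32, 32))                # stand-in for a registered IR frame
fused = fuse(vis, ir)
```

    Because the weights sum to one at every pixel, the fused value always lies between the two source values, which keeps the result radiometrically plausible.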

  12. Summary of current radiometric calibration coefficients for Landsat MSS, TM, ETM+, and EO-1 ALI sensors

    USGS Publications Warehouse

    Chander, G.; Markham, B.L.; Helder, D.L.

    2009-01-01

    This paper provides a summary of the current equations and rescaling factors for converting calibrated Digital Numbers (DNs) to absolute units of at-sensor spectral radiance, Top-Of-Atmosphere (TOA) reflectance, and at-sensor brightness temperature. It tabulates the necessary constants for the Multispectral Scanner (MSS), Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), and Advanced Land Imager (ALI) sensors. These conversions provide a basis for standardized comparison of data in a single scene or between images acquired on different dates or by different sensors. This paper forms a needed guide for Landsat data users who now have access to the entire Landsat archive at no cost.
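
    The conversions the paper tabulates constants for follow the standard Landsat handbook form: at-sensor radiance is an affine function of the digital number, and TOA reflectance normalizes that radiance by the solar irradiance geometry. A sketch of both formulas; the gain, bias, ESUN and geometry values below are illustrative placeholders, not entries from the paper's tables:

```python
import math

def dn_to_radiance(dn, gain, bias):
    """At-sensor spectral radiance from a calibrated digital number:
    L = gain * DN + bias, with band-specific rescaling factors."""
    return gain * dn + bias

def radiance_to_toa_reflectance(radiance, esun, d, sun_elev_deg):
    """TOA reflectance: rho = pi * L * d^2 / (ESUN * cos(theta_sz)),
    d = Earth-Sun distance in AU, theta_sz = solar zenith angle."""
    theta_sz = math.radians(90.0 - sun_elev_deg)
    return math.pi * radiance * d**2 / (esun * math.cos(theta_sz))

# Illustrative numbers only:
L = dn_to_radiance(120, gain=0.7757, bias=-6.2)
rho = radiance_to_toa_reflectance(L, esun=1533.0, d=1.0, sun_elev_deg=45.0)
```

    Converting to TOA reflectance removes the dependence on solar geometry and Earth-Sun distance, which is what makes scenes from different dates and sensors directly comparable.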

  13. Summary of Current Radiometric Calibration Coefficients for Landsat MSS, TM, ETM+, and EO-1 ALI Sensors

    NASA Technical Reports Server (NTRS)

    Chander, Gyanesh; Markham, Brian L.; Helder, Dennis L.

    2009-01-01

    This paper provides a summary of the current equations and rescaling factors for converting calibrated Digital Numbers (DNs) to absolute units of at-sensor spectral radiance, Top-Of-Atmosphere (TOA) reflectance, and at-sensor brightness temperature. It tabulates the necessary constants for the Multispectral Scanner (MSS), Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), and Advanced Land Imager (ALI) sensors. These conversions provide a basis for standardized comparison of data in a single scene or between images acquired on different dates or by different sensors. This paper forms a needed guide for Landsat data users who now have access to the entire Landsat archive at no cost.

  14. Surface chemistry and morphology in single particle optical imaging

    NASA Astrophysics Data System (ADS)

    Ekiz-Kanik, Fulya; Sevenler, Derin Deniz; Ünlü, Neşe Lortlar; Chiari, Marcella; Ünlü, M. Selim

    2017-05-01

    Biological nanoparticles such as viruses and exosomes are important biomarkers for a range of medical conditions, from infectious diseases to cancer. Biological sensors that detect whole viruses and exosomes with high specificity, yet without additional labeling, are promising because they reduce the complexity of sample preparation and may improve measurement quality by retaining information about the nanoscale physical structure of the bio-nanoparticle (BNP). Towards this end, a variety of BNP biosensor technologies have been developed, several of which are capable of enumerating the precise number of detected viruses or exosomes and analyzing physical properties of each individual particle. Optical imaging techniques are promising candidates among a broad range of label-free nanoparticle detectors. These imaging BNP sensors detect the binding of single nanoparticles on a flat surface functionalized with a specific capture molecule or an array of multiplexed capture probes. The functionalization step confers all molecular specificity for the sensor's target but can introduce an unforeseen problem: a rough and inhomogeneous surface coating can be a source of noise, as these sensors detect small local changes in optical refractive index. In this paper, we review several optical technologies for label-free BNP detectors with a focus on imaging systems. We compare surface-imaging methods including dark-field, surface plasmon resonance imaging and interference reflectance imaging. We discuss the importance of ensuring consistently uniform and smooth surface coatings of capture molecules for these types of biosensors and finally summarize several methods that have been developed towards addressing this challenge.

  15. NASA Tech Briefs, July 2008

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Topics covered include: Torque Sensor Based on Tunnel-Diode Oscillator; Shaft-Angle Sensor Based on Tunnel-Diode Oscillator; Ground Facility for Vicarious Calibration of Skyborne Sensors; Optical Pressure-Temperature Sensor for a Combustion Chamber; Impact-Locator Sensor Panels; Low-Loss Waveguides for Terahertz Frequencies; MEMS/ECD Method for Making Bi(2-x)Sb(x)Te3 Thermoelectric Devices; Low-Temperature Supercapacitors; Making a Back-Illuminated Imager with Back-Side Contact and Alignment Markers; Compact, Single-Stage MMIC InP HEMT Amplifier; Nb(x)Ti(1-x)N Superconducting-Nanowire Single-Photon Detectors; Improved Sand-Compaction Method for Lost-Foam Metal Casting; Improved Probe for Evaluating Compaction of Mold Sand; Polymer-Based Composite Catholytes for Li Thin-Film Cells; Using ALD To Bond CNTs to Substrates and Matrices; Alternating-Composition Layered Ceramic Barrier Coatings; Variable-Structure Control of a Model Glider Airplane; Axial Halbach Magnetic Bearings; Compact, Non-Pneumatic Rock-Powder Samplers; Biochips Containing Arrays of Carbon-Nanotube Electrodes; Neon as a Buffer Gas for a Mercury-Ion Clock; Miniature Incandescent Lamps as Fiber-Optic Light Sources; Bidirectional Pressure-Regulator System; Prism Window for Optical Alignment; Single-Grid-Pair Fourier Telescope for Imaging in Hard X-Rays and Gamma Rays; Range-Gated Metrology with Compact Optical Head; and Lossless Multi-Spectral Data Compressor for Improved Compression for Pushbroom-Type Instruments.

  16. Computational multispectral video imaging [Invited].

    PubMed

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera created by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information into a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrate a spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
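
    The inversion step described in this abstract can be illustrated with a small regularized least-squares sketch. The system matrix dimensions, the noise level, and the Tikhonov weight below are hypothetical stand-ins for the paper's calibrated diffractive-filter code, not the authors' actual implementation.

```python
import numpy as np

def recover_spectrum(A, y, lam=1e-2):
    """Regularization-based linear inversion (Tikhonov / ridge):
    minimize ||A x - y||^2 + lam * ||x||^2 over the spectrum x."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Toy calibration: 64 coded sensor pixels observe 16 spectral bands.
rng = np.random.default_rng(0)
A = rng.random((64, 16))            # calibrated spatial-code matrix (assumed known)
x_true = rng.random(16)             # unknown band intensities
y = A @ x_true + 1e-3 * rng.standard_normal(64)   # coded, slightly noisy measurement
x_hat = recover_spectrum(A, y)
```

    The regularization weight trades noise amplification against bias; in practice it would be tuned against the calibration data.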

  17. The Quanta Image Sensor: Every Photon Counts

    PubMed Central

    Fossum, Eric R.; Ma, Jiaju; Masoodian, Saleh; Anzagira, Leo; Zizza, Rachel

    2016-01-01

    The Quanta Image Sensor (QIS) was conceived when contemplating shrinking pixel sizes and storage capacities, and the steady increase in digital processing power. In the single-bit QIS, the output of each field is a binary bit plane, where each bit represents the presence or absence of at least one photoelectron in a photodetector. A series of bit planes is generated through high-speed readout, and a kernel or “cubicle” of bits (x, y, t) is used to create a single output image pixel. The size of the cubicle can be adjusted post-acquisition to optimize image quality. The specialized sub-diffraction-limit photodetectors in the QIS are referred to as “jots” and a QIS may have a gigajot or more, read out at 1000 fps, for a data rate exceeding 1 Tb/s. Basically, we are trying to count photons as they arrive at the sensor. This paper reviews the QIS concept and its imaging characteristics. Recent progress towards realizing the QIS for commercial and scientific purposes is discussed. This includes implementation of a pump-gate jot device in a 65 nm CIS BSI process yielding read noise as low as 0.22 e− r.m.s. and conversion gain as high as 420 µV/e−, power efficient readout electronics, currently as low as 0.4 pJ/b in the same process, creating high dynamic range images from jot data, and understanding the imaging characteristics of single-bit and multi-bit QIS devices. The QIS represents a possible major paradigm shift in image capture. PMID:27517926
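
    The bit-plane-to-pixel step the abstract describes (summing a "cubicle" of single-bit jot data over x, y, and t) can be sketched in a few lines. The jot array size, cubicle dimensions, and Poisson photon model below are illustrative assumptions, not the devices described in the paper.

```python
import numpy as np

def cubicle_sum(bit_planes, kx, ky, kt):
    """Form output pixels by summing binary jot data over an (x, y, t) cubicle.

    bit_planes: array of shape (T, H, W) with values 0/1.
    Returns counts of shape (T//kt, H//ky, W//kx), each in [0, kx*ky*kt]."""
    T, H, W = bit_planes.shape
    b = bit_planes[:T - T % kt, :H - H % ky, :W - W % kx]
    b = b.reshape(T // kt, kt, H // ky, ky, W // kx, kx)
    return b.sum(axis=(1, 3, 5))

# Simulate jots: each bit is 1 if at least one photoelectron arrived (Poisson model).
rng = np.random.default_rng(1)
flux = 0.2                                   # mean photoelectrons per jot per field
bits = (rng.poisson(flux, size=(16, 64, 64)) > 0).astype(np.uint8)
img = cubicle_sum(bits, kx=4, ky=4, kt=16)   # 16 bit planes of 64x64 jots -> 16x16 image
```

    Because the cubicle size is a post-acquisition choice, the same bit-plane stream could be re-binned with different (kx, ky, kt) to trade spatial, temporal, and intensity resolution.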

  18. CMOS Imaging Sensor Technology for Aerial Mapping Cameras

    NASA Astrophysics Data System (ADS)

    Neumann, Klaus; Welzenbach, Martin; Timm, Martin

    2016-06-01

    In June 2015 Leica Geosystems launched the first large-format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD to CMOS sensor technology for the development of this new aerial mapping camera. In 2002 the first-generation DMC was developed by Z/I Imaging. It was the first large-format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large-format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II, using the same system design with one large monolithic PAN sensor and four multispectral camera heads for R, G, B and NIR. For the first time, a 391-megapixel CMOS sensor is used as the panchromatic sensor, an industry record. CMOS technology brings a range of technical benefits: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.

  19. Method and apparatus for distinguishing actual sparse events from sparse event false alarms

    DOEpatents

    Spalding, Richard E.; Grotbeck, Carter L.

    2000-01-01

    Remote sensing method and apparatus wherein sparse optical events are distinguished from false events. "Ghost" images of actual optical phenomena are generated using an optical beam splitter and optics configured to direct split beams to a single sensor or segmented sensor. True optical signals are distinguished from false signals or noise based on whether the ghost image is present or absent. The invention obviates the need for dual-sensor systems to effect a false-target detection capability, thus significantly reducing system complexity and cost.

  20. Peptide secondary structure modulates single-walled carbon nanotube fluorescence as a chaperone sensor for nitroaromatics

    PubMed Central

    Heller, Daniel A.; Pratt, George W.; Zhang, Jingqing; Nair, Nitish; Hansborough, Adam J.; Boghossian, Ardemis A.; Reuel, Nigel F.; Barone, Paul W.; Strano, Michael S.

    2011-01-01

    A class of peptides from the bombolitin family, not previously identified for nitroaromatic recognition, allows near-infrared fluorescent single-walled carbon nanotubes to transduce specific changes in their conformation. In response to the binding of specific nitroaromatic species, such peptide-nanotube complexes form a virtual "chaperone sensor," which reports modulation of the peptide secondary structure via changes in single-walled carbon nanotube near-infrared photoluminescence. A split-channel microscope constructed to image quantized spectral wavelength shifts in real time, in response to nitroaromatic adsorption, results in the first single-nanotube imaging of solvatochromic events. The described indirect detection mechanism, as well as an additional exciton quenching-based optical nitroaromatic detection method, illustrate that functionalization of the carbon nanotube surface can result in completely unique sites for recognition, resolvable at the single-molecule level. PMID:21555544

  1. MOSES: a modular sensor electronics system for space science and commercial applications

    NASA Astrophysics Data System (ADS)

    Michaelis, Harald; Behnke, Thomas; Tschentscher, Matthias; Mottola, Stefano; Neukum, Gerhard

    1999-10-01

    The camera group of the DLR Institute of Space Sensor Technology and Planetary Exploration develops imaging instruments for scientific and space applications. One example is the ROLIS imaging system of the ESA scientific space mission 'Rosetta', which consists of a descent/downlooking imager and a close-up imager. Both are part of the Rosetta Lander payload and will operate in the extreme environment of a cometary nucleus. The Rosetta Lander Imaging System (ROLIS) will introduce a new concept for the sensor electronics, referred to as MOSES (Modular Sensor Electronics System). MOSES is a 3D-miniaturized CCD sensor electronics system based on single modules. Each module has some flexibility and enables simple adaptation to specific application requirements. MOSES is mainly designed for space applications where high performance and high reliability are required. The concept, however, can also be used in other scientific or commercial applications. This paper describes the concept of MOSES, its characteristics, performance and applications.

  2. NRL Fact Book

    DTIC Science & Technology

    2008-01-01

    Distributed network-based battle management High performance computing supporting uniform and nonuniform memory access with single and multithreaded...pallet Airborne EO/IR and radar sensors VNIR through SWIR hyperspectral systems VNIR, MWIR, and LWIR high-resolution systems Wideband SAR systems...meteorological sensors Hyperspectral sensor systems (PHILLS) Mid-wave infrared (MWIR) Indium Antimonide (InSb) imaging system Long-wave infrared (LWIR

  3. Single sensor that outputs narrowband multispectral images

    PubMed Central

    Kong, Linghua; Yi, Dingrong; Sprigle, Stephen; Wang, Fengtao; Wang, Chao; Liu, Fuhan; Adibi, Ali; Tummala, Rao

    2010-01-01

    We report the work of developing a hand-held (or miniaturized), low-cost, stand-alone, real-time-operation, narrow bandwidth multispectral imaging device for the detection of early stage pressure ulcers. PMID:20210418

  4. Single-snapshot 2D color measurement by plenoptic imaging system

    NASA Astrophysics Data System (ADS)

    Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana

    2014-03-01

    Plenoptic cameras enable capture of directional light-ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor, high-color-fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light-source information, object information, optical-system information, plenoptic image processing and color-estimation processing. The optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color-shading evaluation of displays and show that it achieves a color accuracy of ΔE<0.01.
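
    For reference, color accuracy figures such as ΔE<0.01 are differences in CIELAB space. The abstract does not state which ΔE formula is used; the simplest variant, CIE76, is just a Euclidean distance between (L*, a*, b*) triples, sketched below with made-up patch values.

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between (L*, a*, b*) triples."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

measured  = (53.24, 80.09, 67.20)   # hypothetical measured patch
reference = (53.23, 80.11, 67.22)   # hypothetical reference value
dE = delta_e76(measured, reference)
```

    For context, a ΔE of roughly 2.3 is often quoted as a just-noticeable difference, so sub-0.01 accuracy is far below human perceptibility.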

  5. Detection of Obstacles in Monocular Image Sequences

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia

    1997-01-01

    The ability to detect and locate runways/taxiways and obstacles in images captured using on-board sensors is an essential first step in the automation of the low-altitude flight, landing, takeoff, and taxiing phases of aircraft navigation. Automation of these functions under different weather and lighting situations can be facilitated by using sensors of different modalities. An aircraft-based Synthetic Vision System (SVS), with sensors of different modalities mounted on-board, complements current ground-based systems in functions such as detection and prevention of potential runway collisions, airport surface navigation, and landing and takeoff in all weather conditions. In this report, we address the problem of detecting objects in monocular image sequences obtained from two types of sensors, a Passive Millimeter Wave (PMMW) sensor and a video camera, mounted on-board a landing aircraft. Since the sensors differ in their spatial resolution and the quality of the images obtained using these sensors is not the same, different approaches are used for detecting obstacles depending on the sensor type. These approaches are described separately in two parts of this report. The goal of the first part is to develop a method for detecting runways/taxiways and objects on the runway in a sequence of images obtained from a moving PMMW sensor. Since the sensor resolution is low and the image quality is very poor, we propose a model-based approach for detecting runways/taxiways. We use the approximate runway model and the position information of the camera provided by the Global Positioning System (GPS) to define regions of interest in the image plane in which to search for the image features corresponding to the runway markers. Once the runway region is identified, we use histogram-based thresholding to detect obstacles on the runway and in regions outside the runway. This algorithm is tested using image sequences simulated from a single real PMMW image.
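
    The report does not specify which histogram-based threshold it uses; Otsu's method is one standard choice and illustrates the idea. The synthetic "runway patch" below is a stand-in for illustration, not PMMW data.

```python
import numpy as np

def otsu_threshold(img):
    """Pick the gray level that maximizes between-class variance (Otsu's method),
    one common form of histogram-based thresholding."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))      # class-0 mean numerator
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))       # empty/degenerate bins become NaN

# Synthetic patch: dark runway surface with a brighter obstacle region.
rng = np.random.default_rng(5)
img = rng.normal(60, 8, (64, 64))
img[20:30, 20:30] = rng.normal(180, 8, (10, 10))
t = otsu_threshold(np.clip(img, 0, 255))
mask = img > t                              # candidate obstacle pixels
```

    In the report's setting the thresholded mask would then be restricted to the GPS-defined runway region of interest before declaring obstacles.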

  6. Room temperature infrared imaging sensors based on highly purified semiconducting carbon nanotubes.

    PubMed

    Liu, Yang; Wei, Nan; Zhao, Qingliang; Zhang, Dehui; Wang, Sheng; Peng, Lian-Mao

    2015-04-21

    High performance infrared (IR) imaging systems usually require expensive cooling systems, which are highly undesirable. Here we report the fabrication and performance characteristics of room temperature carbon nanotube (CNT) IR imaging sensors. The CNT IR imaging sensor is based on aligned semiconducting CNT films with 99% purity, and each pixel or device of the imaging sensor consists of aligned strips of CNTs asymmetrically contacted by Sc and Pd. We found that the performance of the device depends on the CNT channel length. While short-channel devices provide a large photocurrent and a rapid response of about 110 μs, long-channel devices exhibit a low dark current and a high signal-to-noise ratio, which are critical for obtaining high detectivity. In total, 36 CNT IR imagers are constructed on a single chip, each consisting of a 3 × 3 pixel array. The demonstrated advantages of constructing a high performance IR system using purified semiconducting CNT aligned films include, among other things, fast response, excellent stability and uniformity, ideal linear photocurrent response, high imaging polarization sensitivity and low power consumption.

  7. Commercial CMOS image sensors as X-ray imagers and particle beam monitors

    NASA Astrophysics Data System (ADS)

    Castoldi, A.; Guazzoni, C.; Maffessanti, S.; Montemurro, G. V.; Carraresi, L.

    2015-01-01

    CMOS image sensors are widely used in several applications, such as mobile handsets, webcams and digital cameras, among others. Furthermore, they are available across a wide range of resolutions with excellent spectral and chromatic responses. In order to fulfill the need for cheap beam monitors and high-resolution image sensors for scientific applications, we exploited the possibility of using commercial CMOS image sensors as X-ray and proton detectors. Two different sensors have been mounted and tested. An Aptina MT9v034, featuring 752 × 480 pixels with a 6 μm × 6 μm pixel size, has been mounted and successfully tested as a bi-dimensional beam profile monitor, able to take pictures of the incoming proton bunches at the DeFEL beamline (1-6 MeV pulsed proton beam) of the LaBeC of INFN in Florence. The naked sensor is able to successfully detect the interactions of single protons. The sensor point-spread function (PSF) has been qualified with 1 MeV protons and is equal to one pixel (6 μm) r.m.s. in both directions. A second sensor, an MT9M032, featuring 1472 × 1096 pixels with a 2.2 μm × 2.2 μm pixel size, has been mounted on a dedicated board as a high-resolution imager to be used in X-ray imaging experiments with table-top generators. In order to ease and simplify data transfer and image acquisition, the system is controlled by a dedicated micro-processor board (DM3730 1 GHz SoC ARM Cortex-A8) on which a modified LINUX kernel has been implemented. The paper presents the architecture of the sensor systems and the results of the experimental measurements.

  8. Active-Pixel Image Sensor With Analog-To-Digital Converters

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.; Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.

    1995-01-01

    Proposed single-chip integrated-circuit image sensor contains 128 x 128 array of active pixel sensors at 50-micrometer pitch. Output terminals of all pixels in each given column connected to analog-to-digital (A/D) converter located at bottom of column. Pixels scanned in semiparallel fashion, one row at time; during time allocated to scanning row, outputs of all active pixel sensors in row fed to respective A/D converters. Design of chip based on complementary metal oxide semiconductor (CMOS) technology, and individual circuit elements fabricated according to 2-micrometer CMOS design rules. Active pixel sensors designed to operate at video rate of 30 frames/second, even at low light levels. A/D scheme based on first-order Sigma-Delta modulation.
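
    The per-column first-order Sigma-Delta conversion mentioned above can be sketched as an idealized behavioral model. This error-feedback form and the oversampling ratio are illustrative assumptions, not the chip's actual circuit.

```python
def sigma_delta_adc(x, osr=256):
    """Idealized first-order sigma-delta: integrate the input, emit a 1 and
    subtract the full-scale reference whenever the integrator crosses it.
    The density of 1s in the bitstream encodes x in [0, 1)."""
    integ, ones = 0.0, 0
    for _ in range(osr):
        integ += x
        if integ >= 1.0:
            ones += 1
            integ -= 1.0
    return ones / osr                       # decimation by simple averaging

row = [0.10, 0.50, 0.87]                    # normalized pixel values in one scanned row
codes = [sigma_delta_adc(v) for v in row]
```

    The digital estimate converges to the input within 1/osr, which is why a slow, simple per-column modulator can still deliver useful precision at video rates when many columns convert in parallel.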

  9. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to lead to a color and exposure level for that pixel. To speed processing three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.

  10. Evaluation of excitation strategy with multi-plane electrical capacitance tomography sensor

    NASA Astrophysics Data System (ADS)

    Mao, Mingxu; Ye, Jiamin; Wang, Haigang; Zhang, Jiaolong; Yang, Wuqiang

    2016-11-01

    Electrical capacitance tomography (ECT) is an imaging technique for measuring the permittivity change of materials. Using a multi-plane ECT sensor, three-dimensional (3D) distribution of permittivity may be represented. In this paper, three excitation strategies, including single-electrode excitation, dual-electrode excitation in the same plane, and dual-electrode excitation in different planes are investigated by numerical simulation and experiment for two three-plane ECT sensors with 12 electrodes in total. In one sensor, the electrodes on the middle plane are in line with the others. In the other sensor, they are rotated 45° with reference to the other two planes. A linear back projection algorithm is used to reconstruct the images and a correlation coefficient is used to evaluate the image quality. The capacitance data and sensitivity distribution with each measurement strategy and sensor model are analyzed. Based on simulation and experimental results using noise-free and noisy capacitance data, the performance of the three strategies is evaluated.
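
    The linear back projection reconstruction and correlation-coefficient metric used above can be sketched in a minimal 1-D setting. The Gaussian sensitivity maps, sizes, and test object below are invented for illustration and are not the paper's 3D sensor models.

```python
import numpy as np

def lbp_reconstruct(S, c):
    """Linear back projection: smear each capacitance value back over its
    sensitivity map, then normalize by the total sensitivity per pixel."""
    return (S.T @ c) / S.sum(axis=0)

def correlation_coefficient(g_true, g_rec):
    """Image-quality metric: Pearson correlation between true and reconstructed."""
    a = g_true - g_true.mean()
    b = g_rec - g_rec.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

n_pix, n_meas = 64, 16
x = np.linspace(0.0, 1.0, n_pix)
centers = np.linspace(0.0, 1.0, n_meas)
# Invented smooth sensitivity maps, one per electrode pair.
S = np.exp(-((x[None, :] - centers[:, None]) / 0.08) ** 2)

g_true = np.zeros(n_pix)
g_true[24:34] = 1.0                  # permittivity perturbation
c = S @ g_true                       # simulated normalized capacitance, noise-free
g_rec = lbp_reconstruct(S, c)
r = correlation_coefficient(g_true, g_rec)
```

    LBP recovers a blurred version of the true distribution; the correlation coefficient then gives a single number for comparing excitation strategies, as the paper does for its three strategies.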

  11. CMOS active pixel sensor type imaging system on a chip

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Nixon, Robert (Inventor)

    2011-01-01

    A single-chip camera which includes an integrated image acquisition portion and control portion and which has double sampling/noise reduction capabilities thereon. Part of the integrated structure reduces the noise that is picked up during imaging.

  12. Retinal fundus imaging with a plenoptic sensor

    NASA Astrophysics Data System (ADS)

    Thurin, Brice; Bloch, Edward; Nousias, Sotiris; Ourselin, Sebastien; Keane, Pearse; Bergeles, Christos

    2018-02-01

    Vitreoretinal surgery is moving towards 3D visualization of the surgical field. This requires an acquisition system capable of recording such 3D information. We propose a proof-of-concept imaging system based on a light-field camera, in which an array of micro-lenses is placed in front of a conventional sensor. With a single snapshot, a stack of images focused at different depths is produced on the fly, which provides enhanced depth perception for the surgeon. Difficulty in depth localization of features and frequent focus changes during surgery make current vitreoretinal heads-up surgical imaging systems cumbersome to use. To improve depth perception and eliminate the need to manually refocus on the instruments during surgery, we designed and implemented a proof-of-concept ophthalmoscope equipped with a commercial light-field camera. The sensor of our camera is composed of an array of micro-lenses which project an array of overlapping micro-images. We show that with a single light-field snapshot we can digitally refocus between the retina and a tool located in front of the retina, or display an extended depth-of-field image where everything is in focus. The design and system performance of the plenoptic fundus camera are detailed. We conclude by showing in vivo data recorded with our device.

  13. A source number estimation method for single optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu

    2015-10-01

    The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection, image processing and so on. Realizing blind source separation (BSS) from the data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods is degraded by inaccurate source number estimation. Many excellent algorithms have been proposed to deal with source number estimation in array signal processing, where the array consists of multiple sensors, but they cannot be applied directly to the single-sensor condition. This paper presents a source number estimation method for data received by a single optical fiber sensor. By a delay process, the single-sensor data are converted to a multidimensional form, and the data covariance matrix is constructed. The estimation algorithms used in array signal processing can then be utilized. The information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number in the single optical fiber sensor's received signal. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, which reduces the fluctuation and uncertainty of its eigenvalues. Simulation results show that the ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although its performance is poor at low SNR, is able to accurately estimate the number of sources under colored noise. The experiments also show that the proposed method can be applied to estimate the source number of single-sensor received data.
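
    The delay-embedding plus MDL pipeline described above can be sketched compactly. The window length, the test signal, and the use of non-overlapping windows (to keep snapshots nearly independent, a simpler stand-in for the paper's covariance smoothing) are illustrative choices, not the authors' exact procedure. Note that each real sinusoid occupies two eigenvalues of the covariance matrix, so the criterion reports 4 for two real tones.

```python
import numpy as np

def delay_embed(x, dim):
    """Split single-sensor data into non-overlapping length-`dim` windows,
    giving pseudo-multichannel snapshots of shape (dim, n_snapshots)."""
    n = len(x) // dim
    return x[:n * dim].reshape(n, dim).T

def mdl_source_count(X):
    """MDL criterion on the eigenvalues of the snapshot covariance matrix."""
    p, N = X.shape
    lam = np.sort(np.linalg.eigvalsh((X @ X.T) / N))[::-1]
    scores = []
    for k in range(p):
        tail = lam[k:]                          # candidate noise eigenvalues
        gm = np.exp(np.mean(np.log(tail)))      # geometric mean
        am = np.mean(tail)                      # arithmetic mean
        scores.append(-N * (p - k) * np.log(gm / am)
                      + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(scores))

# Two real sinusoids in white noise on one fiber-sensor channel.
t = np.arange(8000)
rng = np.random.default_rng(3)
x = (np.sin(0.3 * t) + 0.8 * np.sin(0.8 * t + 1.0)
     + 0.05 * rng.standard_normal(t.size))
k_hat = mdl_source_count(delay_embed(x, dim=8))  # 2 eigenvalues per real tone
```

    With overlapping windows the noise eigenvalues spread out and MDL tends to overestimate, which is exactly the fluctuation the paper's covariance smoothing is meant to suppress.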

  14. A Label-Free Fluorescent Array Sensor Utilizing Liposome Encapsulating Calcein for Discriminating Target Proteins by Principal Component Analysis

    PubMed Central

    Imamura, Ryota; Murata, Naoki; Shimanouchi, Toshinori; Yamashita, Kaoru; Fukuzawa, Masayuki; Noda, Minoru

    2017-01-01

    A new fluorescent arrayed biosensor has been developed to discriminate the species and concentrations of target proteins by using several different phospholipid liposome species encapsulating fluorescent molecules, exploiting differences in the permeation of the fluorescent molecules through the membrane caused by liposome-target protein interactions. This approach constitutes a fundamentally new label-free fluorescent sensor, in contrast to the common technique of fluorescent array sensors that require labeling. We confirmed a high fluorescence emission intensity, dependent on the concentration of the fluorescent molecules, when they leak from inside the liposomes through the perturbed lipid membrane. After taking an array image of the fluorescence emission from the sensor using a CMOS imager, the output intensities of the fluorescence were analyzed by the principal component analysis (PCA) statistical method. The PCA plots show that different protein species at several concentrations were successfully discriminated by using the different lipid membranes, with a high cumulative contribution ratio. We also confirmed that the accuracy of discrimination by the array sensor with a single shot is higher than that of a single sensor with multiple shots. PMID:28714873

  15. A Label-Free Fluorescent Array Sensor Utilizing Liposome Encapsulating Calcein for Discriminating Target Proteins by Principal Component Analysis.

    PubMed

    Imamura, Ryota; Murata, Naoki; Shimanouchi, Toshinori; Yamashita, Kaoru; Fukuzawa, Masayuki; Noda, Minoru

    2017-07-15

    A new fluorescent arrayed biosensor has been developed to discriminate the species and concentrations of target proteins by using several different phospholipid liposome species encapsulating fluorescent molecules, exploiting differences in the permeation of the fluorescent molecules through the membrane caused by liposome-target protein interactions. This approach constitutes a fundamentally new label-free fluorescent sensor, in contrast to the common technique of fluorescent array sensors that require labeling. We confirmed a high fluorescence emission intensity, dependent on the concentration of the fluorescent molecules, when they leak from inside the liposomes through the perturbed lipid membrane. After taking an array image of the fluorescence emission from the sensor using a CMOS imager, the output intensities of the fluorescence were analyzed by the principal component analysis (PCA) statistical method. The PCA plots show that different protein species at several concentrations were successfully discriminated by using the different lipid membranes, with a high cumulative contribution ratio. We also confirmed that the accuracy of discrimination by the array sensor with a single shot is higher than that of a single sensor with multiple shots.
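
    The PCA step that separates protein species can be sketched as follows. The number of liposome species, the intensity patterns, and the noise level are invented to illustrate the analysis, not the paper's measured data.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centered samples onto the top principal components."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Hypothetical fluorescence intensities: rows = repeated measurements,
# columns = three liposome species on the array.
rng = np.random.default_rng(4)
protein_A = np.array([1.0, 0.2, 0.6]) + 0.05 * rng.standard_normal((10, 3))
protein_B = np.array([0.3, 0.9, 0.1]) + 0.05 * rng.standard_normal((10, 3))
scores = pca_scores(np.vstack([protein_A, protein_B]))
```

    When the two proteins perturb the lipid membranes differently, their intensity vectors form separate clusters along the first principal component.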

  16. Estimation of the particle concentration in hydraulic liquid by the in-line automatic particle counter based on the CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Kornilin, Dmitriy V.; Kudryavtsev, Ilya A.; McMillan, Alison J.; Osanlou, Ardeshir; Ratcliffe, Ian

    2017-06-01

    Modern hydraulic systems should be monitored on a regular basis. One of the most effective ways to address this task is to use in-line automatic particle counters (APCs) built into the system. Measuring the particle concentration in hydraulic liquid with an APC is crucial, because an increasing number of particles indicates developing functional problems. Existing APCs have significant limitations: they cannot precisely measure the relatively low particle concentrations found in aerospace systems, or they are unable to measure the higher concentrations found in industrial ones. Both issues can be addressed by using a CMOS image sensor instead of the single photodiode used in most APCs. A CMOS image sensor helps overcome volume-measurement errors caused by the unequal particle speeds inside the tube. The correction is based on determining the particle position and the parabolic velocity distribution profile. The proposed algorithms also reduce errors caused by particle coincidences in the measurement volume. Simulation results show that accuracy increases by up to 90 percent and resolution improves tenfold compared with a single-photodiode sensor.
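
    The velocity correction can be sketched as follows. The parabolic (Poiseuille) profile is standard for laminar tube flow, but the function names, the per-particle weighting, and all numbers below are illustrative assumptions rather than the authors' algorithm.

```python
def local_velocity(r, r_tube, v_mean):
    """Parabolic (Poiseuille) profile: v(r) = 2 * v_mean * (1 - (r/R)^2),
    so particles on the tube axis move twice as fast as the mean flow."""
    return 2.0 * v_mean * (1.0 - (r / r_tube) ** 2)

def concentration_estimate(radii, r_tube, v_mean, area, t_obs):
    """Weight each detected particle by the fluid volume that actually swept
    past the sensing window at that particle's radial position, instead of
    assuming every particle travels at the mean velocity."""
    total = 0.0
    for r in radii:
        swept = local_velocity(r, r_tube, v_mean) * area * t_obs
        total += 1.0 / swept
    return total    # particles per unit volume

# A particle near the axis vs. one near the wall of a 0.5 mm radius tube.
c = concentration_estimate([0.05, 0.40], r_tube=0.5, v_mean=100.0,
                           area=0.01, t_obs=1.0)
```

    Without the positional correction, slow near-wall particles would be over-weighted and the concentration systematically misestimated; the image sensor supplies the radial position that a single photodiode cannot.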

  17. Improved detection probability of low level light and infrared image fusion system

    NASA Astrophysics Data System (ADS)

    Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang

    2018-02-01

    A low-level-light (LLL) image contains rich information on environmental details but is easily affected by the weather. In the case of smoke, rain, cloud or fog, much target information is lost. An infrared image, formed from the radiation emitted by the object itself, can actively capture target information in the scene. However, its contrast and resolution are poor, it captures few target details, and the imaging mode does not match human visual habits. Fusing LLL and infrared images compensates for the deficiencies of each sensor while retaining the advantages of each. First, we present the hardware design of the fusion circuit. Then, by calculating recognition probabilities for a target (one person) and background (trees), we find that the detection probability for trees is higher in the LLL image than in the infrared image, while the detection probability for the person is markedly higher in the infrared image than in the LLL image. The fused image yields higher detection probabilities for both the person and the trees than either single detector. Therefore, image fusion can significantly increase recognition probability and improve detection efficiency.

  18. Sensor fusion display evaluation using information integration models in enhanced/synthetic vision applications

    NASA Technical Reports Server (NTRS)

    Foyle, David C.

    1993-01-01

    Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed framework for evaluating the operator's ability to use such systems is a normative approach: the pilot's performance with the sensor fusion image is compared to model predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows one to determine when a sensor fusion system leads to: poorer performance than one of the original sensor displays, clearly an undesirable system in which the fused sensor system causes some distortion or interference; better performance than with either single sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or super-optimal performance, which may occur if the operator is able to use some highly diagnostic 'emergent features' in the sensor fusion display that were unavailable in the original sensor displays.

  19. A highly flexible platform for nanowire sensor assembly using a combination of optically induced and conventional dielectrophoresis.

    PubMed

    Lin, Yen-Heng; Ho, Kai-Siang; Yang, Chin-Tien; Wang, Jung-Hao; Lai, Chao-Sung

    2014-06-02

    The number and position of assembled nanowires cannot be controlled using most nanowire sensor assembly methods. In this paper, we demonstrate a high-yield, highly flexible platform for nanowire sensor assembly using a combination of optically induced dielectrophoresis (ODEP) and conventional dielectrophoresis (DEP). With the ODEP platform, optical images can be used as virtual electrodes to locally turn on a non-contact DEP force and manipulate micron- or nano-scale substances suspended in fluid. Nanowires were first moved next to previously deposited metal electrodes using optical images and then attracted to and arranged in the gap between two electrodes through DEP forces generated by switching on alternating-current signals to the metal electrodes. A single nanowire can be assembled within 24 seconds using this approach. In addition, the number of nanowires in a single nanowire sensor can be controlled, and the assembly of a single nanowire on each of the adjacent electrodes can also be achieved. The electrical properties of the assembled nanowires were characterized by I-V curve measurements. Additionally, the contact resistance between the nanowires and electrodes and the stickiness between the nanowires and substrates were further investigated in this study.

  20. Detecting higher-order wavefront errors with an astigmatic hybrid wavefront sensor.

    PubMed

    Barwick, Shane

    2009-06-01

    The reconstruction of wavefront errors from measurements over subapertures can be made more accurate if a fully characterized quadratic surface can be fitted to the local wavefront surface. An astigmatic hybrid wavefront sensor with added neural network postprocessing is shown to have this capability, provided that the focal image of each subaperture is sufficiently sampled. Furthermore, complete local curvature information is obtained with a single image without splitting beam power.

  1. 3D imaging of translucent media with a plenoptic sensor based on phase space optics

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanzhe; Shu, Bohong; Du, Shaojun

    2015-05-01

    Traditional stereo imaging technology does not work for dynamic translucent media, because such media show no distinctive characteristic patterns and multi-camera setups are not feasible in most cases. Phase space optics can solve the problem by extracting depth information directly from the "space-spatial frequency" distribution of the target, obtained with a single-lens plenoptic sensor. This paper discusses how depth information is represented in phase space data and presents reconstruction algorithms for different degrees of transparency. A 3D imaging example of a waterfall is given at the end.

  2. Active Sensor for Microwave Tissue Imaging with Bias-Switched Arrays.

    PubMed

    Foroutan, Farzad; Nikolova, Natalia K

    2018-05-06

    A prototype of a bias-switched active sensor was developed and measured to establish the achievable dynamic range in a new generation of active arrays for microwave tissue imaging. The sensor integrates a printed slot antenna, a low-noise amplifier (LNA) and an active mixer in a single unit, which is sufficiently small to enable inter-sensor separation distance as small as 12 mm. The sensor’s input covers the bandwidth from 3 GHz to 7.5 GHz. Its output intermediate frequency (IF) is 30 MHz. The sensor is controlled by a simple bias-switching circuit, which switches ON and OFF the bias of the LNA and the mixer simultaneously. It was demonstrated experimentally that the dynamic range of the sensor, as determined by its ON and OFF states, is 109 dB and 118 dB at resolution bandwidths of 1 kHz and 100 Hz, respectively.

  3. Development of integrated semiconductor optical sensors for functional brain imaging

    NASA Astrophysics Data System (ADS)

    Lee, Thomas T.

    Optical imaging of neural activity is a widely accepted technique for imaging brain function in the field of neuroscience research, and has been used to study the cerebral cortex in vivo for over two decades. Maps of brain activity are obtained by monitoring intensity changes in back-scattered light, called Intrinsic Optical Signals (IOS), that correspond to fluctuations in blood oxygenation and volume associated with neural activity. Current imaging systems typically employ bench-top equipment including lamps and CCD cameras to study animals using visible light. Such systems require the use of anesthetized or immobilized subjects with craniotomies, which imposes limitations on the behavioral range and duration of studies. The ultimate goal of this work is to overcome these limitations by developing a single-chip semiconductor sensor using arrays of sources and detectors operating at near-infrared (NIR) wavelengths. A single-chip implementation, combined with wireless telemetry, will eliminate the need for immobilization or anesthesia of subjects and allow in vivo studies of free behavior. NIR light offers additional advantages because it experiences less absorption in animal tissue than visible light, which allows for imaging through superficial tissues. This, in turn, reduces or eliminates the need for traumatic surgery and enables long-term brain-mapping studies in freely-behaving animals. This dissertation concentrates on key engineering challenges of implementing the sensor. This work shows the feasibility of using a GaAs-based array of vertical-cavity surface emitting lasers (VCSELs) and PIN photodiodes for IOS imaging. I begin with in-vivo studies of IOS imaging through the skull in mice, and use these results along with computer simulations to establish minimum performance requirements for light sources and detectors. I also evaluate the performance of a current commercial VCSEL for IOS imaging, and conclude with a proposed prototype sensor.

  4. Single sensor processing to obtain high resolution color component signals

    NASA Technical Reports Server (NTRS)

    Glenn, William E. (Inventor)

    2010-01-01

    A method for generating color video signals representative of color images of a scene includes the following steps: focusing light from the scene on an electronic image sensor via a filter having a tri-color filter pattern; producing, from outputs of the sensor, first and second relatively low resolution luminance signals; producing, from outputs of the sensor, a relatively high resolution luminance signal; producing, from a ratio of the relatively high resolution luminance signal to the first relatively low resolution luminance signal, a high band luminance component signal; producing, from outputs of the sensor, relatively low resolution color component signals; and combining each of the relatively low resolution color component signals with the high band luminance component signal to obtain relatively high resolution color component signals.
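
    The ratio-of-luminance scheme in this abstract can be sketched as follows. This is a minimal illustration, assuming all signals have already been resampled to a common pixel grid; the function name and the small `eps` guard against division by zero are hypothetical additions.

```python
import numpy as np

def sharpen_color_components(y_high, y_low, colors_low, eps=1e-6):
    """Recover high-resolution color components from a single sensor using the
    ratio-of-luminance scheme described above.

    y_high:     high-resolution luminance image (H, W)
    y_low:      low-resolution luminance, upsampled to (H, W)
    colors_low: dict of low-resolution color component images, each (H, W)
    """
    # high-band luminance component: ratio of high-res to low-res luminance
    high_band = y_high / (y_low + eps)
    # combine each low-res color component with the high-band component
    return {name: c * high_band for name, c in colors_low.items()}
```

A flat test image makes the effect visible: if the high-resolution luminance is twice the low-resolution one, every color component is scaled up by the same factor, restoring high-frequency detail shared across channels.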

  5. A Combined Laser-Communication and Imager for Microspacecraft (ACLAIM)

    NASA Technical Reports Server (NTRS)

    Hemmati, H.; Lesh, J.

    1998-01-01

    ACLAIM is a multi-function instrument consisting of a laser communication terminal and an imaging camera that share a common telescope. A single APS- (Active Pixel Sensor) based focal-plane-array is used to perform both the acquisition and tracking (for laser communication) and science imaging functions.

  6. Single-Grating Talbot Imaging for Wavefront Sensing and X-Ray Metrology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grizolli, Walan; Shi, Xianbo; Kolodziej, Tomasz

    2017-01-01

    Single-grating Talbot imaging relies on high-spatial-resolution detectors to perform accurate measurements of X-ray beam wavefronts. The wavefront can be retrieved with a single image, and a typical measurement and data analysis can be performed in a few seconds. These qualities make it an ideal tool for synchrotron beamline diagnostics and in-situ metrology. The wavefront measurement can be used both to obtain a phase contrast image of an object and to characterize an X-ray beam. In this work, we explore the concept in two cases: at-wavelength metrology of 2D parabolic beryllium lenses and a wavefront sensor using a diamond crystal beam splitter.

  7. High Dynamic Range Imaging at the Quantum Limit with Single Photon Avalanche Diode-Based Image Sensors †

    PubMed Central

    Mattioli Della Rocca, Francescopaolo

    2018-01-01

    This paper examines methods to best exploit the High Dynamic Range (HDR) of the single photon avalanche diode (SPAD) in a high fill-factor HDR photon counting pixel that is scalable to megapixel arrays. The proposed method combines multi-exposure HDR with temporal oversampling in-pixel. We present a silicon demonstration IC with 96 × 40 array of 8.25 µm pitch 66% fill-factor SPAD-based pixels achieving >100 dB dynamic range with 3 back-to-back exposures (short, mid, long). Each pixel sums 15 bit-planes or binary field images internally to constitute one frame providing 3.75× data compression, hence the 1k frames per second (FPS) output off-chip represents 45,000 individual field images per second on chip. Two future projections of this work are described: scaling SPAD-based image sensors to HDR 1 MPixel formats and shrinking the pixel pitch to 1–3 µm. PMID:29641479
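
    A generic multi-exposure HDR merge along these lines can be sketched as below. This is an assumed reconstruction, not the paper's exact on-chip scheme: for each pixel, the longest unsaturated exposure is selected and normalized to a photon rate.

```python
import numpy as np

def combine_exposures(counts, exposure_times, full_scale):
    """Merge back-to-back exposures (short, mid, long) into one HDR photon-rate map.

    counts:         list of per-exposure photon-count images (same shape)
    exposure_times: relative exposure durations, shortest first
    full_scale:     counter saturation level (e.g. 2**15 - 1 for 15 summed bit-planes)
    """
    counts = [np.asarray(c, dtype=float) for c in counts]
    rate = counts[0] / exposure_times[0]      # shortest exposure is always valid
    for c, t in zip(counts[1:], exposure_times[1:]):
        ok = c < full_scale                   # pixel not saturated in this exposure
        rate = np.where(ok, c / t, rate)      # prefer the longer, less noisy exposure
    return rate
```

Longer exposures give lower photon shot noise, so they are preferred wherever the counter has not saturated; saturated pixels fall back to a shorter exposure, extending dynamic range.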

  8. Image sensor with high dynamic range linear output

    NASA Technical Reports Server (NTRS)

    Yadid-Pecht, Orly (Inventor); Fossum, Eric R. (Inventor)

    2007-01-01

    Designs and operational methods to increase the dynamic range of image sensors, and APS devices in particular, by achieving more than one integration time for each pixel. An APS system with multiple column-parallel signal chains for readout is described for maintaining a high frame rate during readout. Each active pixel is sampled multiple times during a single frame readout, thus resulting in multiple integration times. The operational methods can also be used to obtain multiple integration times for each pixel with an APS design having a single column-parallel signal chain for readout. Furthermore, analog-to-digital conversion of high speed and high resolution can be implemented.

  9. Phase-sensitive two-dimensional neutron shearing interferometer and Hartmann sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kevin

    2015-12-08

    A neutron imaging system detects both the phase shift and absorption of neutrons passing through an object. The neutron imaging system is based on either of two different neutron wavefront sensor techniques: 2-D shearing interferometry and Hartmann wavefront sensing. Both approaches measure an entire two-dimensional neutron complex field, including its amplitude and phase. Each measures the full-field, two-dimensional phase gradients and, concomitantly, the two-dimensional amplitude mapping, requiring only a single measurement.
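
    For the Hartmann arm, the core computation is turning spot displacements into local wavefront gradients. A minimal sketch, assuming pre-computed spot centroids and the small-angle approximation (the function name is illustrative):

```python
import numpy as np

def subaperture_slopes(ref_centroids, meas_centroids, focal_length):
    """Convert Hartmann spot displacements into local wavefront slopes.

    ref_centroids, meas_centroids: (N, 2) arrays of spot positions, in the same
    units as focal_length. For small angles, slope = displacement / focal_length.
    A single measured frame yields both x and y gradients over all subapertures.
    """
    ref = np.asarray(ref_centroids, dtype=float)
    meas = np.asarray(meas_centroids, dtype=float)
    return (meas - ref) / focal_length   # (N, 2): [dW/dx, dW/dy] per subaperture
```

The resulting two-dimensional gradient field is what is then integrated (together with the amplitude map) to reconstruct the full complex field from one measurement.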

  10. A model-based approach for detection of runways and other objects in image sequences acquired using an on-board camera

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Devadiga, Sadashiva; Tang, Yuan-Liang

    1994-01-01

    This research was initiated as a part of the Advanced Sensor and Imaging System Technology (ASSIST) program at NASA Langley Research Center. The primary goal of this research is the development of image analysis algorithms for the detection of runways and other objects using an on-board camera. Initial effort was concentrated on images acquired using a passive millimeter wave (PMMW) sensor. The images obtained using PMMW sensors under poor visibility conditions due to atmospheric fog are characterized by very low spatial resolution but good image contrast compared to those images obtained using sensors operating in the visible spectrum. Algorithms developed for analyzing these images using a model of the runway and other objects are described in Part 1 of this report. Experimental verification of these algorithms was limited to a sequence of images simulated from a single frame of PMMW image. Subsequent development and evaluation of algorithms was done using video image sequences. These images have better spatial and temporal resolution compared to PMMW images. Algorithms for reliable recognition of runways and accurate estimation of spatial position of stationary objects on the ground have been developed and evaluated using several image sequences. These algorithms are described in Part 2 of this report. A list of all publications resulting from this work is also included.

  11. Passive IR polarization sensors: a new technology for mine detection

    NASA Astrophysics Data System (ADS)

    Barbour, Blair A.; Jones, Michael W.; Barnes, Howard B.; Lewis, Charles P.

    1998-09-01

    The problem of mine and minefield detection continues to provide a significant challenge to sensor systems. Although the various sensor technologies (infrared, ground-penetrating radar, etc.) may excel in certain situations, no single sensor technology can adequately detect mines under all conditions (time of day, weather, buried or surface-laid, etc.). A truly robust mine detection system will likely require the fusion of data from multiple sensor technologies. The performance of these systems, however, will ultimately depend on the performance of the individual sensors. Infrared (IR) polarimetry is a new and innovative sensor technology that adds substantial capabilities to the detection of mines. IR polarimetry improves on basic IR imaging by providing improved spatial resolution of the target, an inherent ability to suppress clutter, and the capability for zero-ΔT imaging. Nichols Research Corporation (Nichols) is currently evaluating the effectiveness of IR polarization for mine detection. This study is partially funded by the U.S. Army Night Vision & Electronic Sensors Directorate (NVESD). The goal of the study is to demonstrate, through phenomenology studies and limited field trials, that IR polarization outperforms conventional IR imaging in the mine detection arena.

  12. A Method for Imaging Oxygen Distribution and Respiration at a Microscopic Level of Resolution.

    PubMed

    Rolletschek, Hardy; Liebsch, Gregor

    2017-01-01

    Conventional oxygen (micro-) sensors assess oxygen concentration within a particular region or across a transect of tissue, but provide no information regarding its two-dimensional distribution. Here, a novel imaging technology is presented, in which an optical sensor foil (i.e., the planar optode) is attached to the surface of the sample. The sensor converts a fluorescent signal into an oxygen value. Since each single image captures an entire area of the sample surface, the system is able to deduce the distribution of oxygen at a resolution of a few micrometers. It can be deployed to dynamically monitor oxygen consumption, thereby providing a detailed respiration map at close to cellular resolution. Here, we demonstrate the application of the imaging tool to developing plant seeds; the protocol is explained step by step and some potential pitfalls are discussed.
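
    Optode read-out of this kind is commonly modeled with the Stern-Volmer quenching relation; a minimal sketch follows. The relation itself is standard for fluorescence quenching, but any real system uses a device-specific calibration, so the constant in the usage example is hypothetical.

```python
def oxygen_from_intensity(I, I0, Ksv):
    """Convert fluorescence intensity to oxygen concentration with the
    Stern-Volmer relation:  I0 / I = 1 + Ksv * [O2]
                        =>  [O2] = (I0 / I - 1) / Ksv

    I:   measured intensity (quenched by oxygen)
    I0:  intensity at zero oxygen
    Ksv: Stern-Volmer quenching constant (calibration-dependent)
    """
    return (I0 / I - 1.0) / Ksv
```

Applied pixel by pixel to the sensor-foil image, this mapping turns the fluorescence frame into the two-dimensional oxygen distribution described above.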

  13. Advanced Image Processing for NASA Applications

    NASA Technical Reports Server (NTRS)

    Le Moigne, Jacqueline

    2007-01-01

    The future of space exploration will involve cooperating fleets of spacecraft or sensor webs geared towards coordinated and optimal observation of Earth Science phenomena. The main advantage of such systems is to utilize multiple viewing angles as well as multiple spatial and spectral resolutions of sensors carried on multiple spacecraft but acting collaboratively as a single system. Within this framework, our research focuses on all areas related to sensing in collaborative environments, which means systems utilizing intracommunicating spatially distributed sensor pods or crafts being deployed to monitor or explore different environments. This talk will describe the general concept of sensing in collaborative environments, will give a brief overview of several technologies developed at NASA Goddard Space Flight Center in this area, and then will concentrate on specific image processing research related to that domain, specifically image registration and image fusion.

  14. Mathematical models and photogrammetric exploitation of image sensing

    NASA Astrophysics Data System (ADS)

    Puatanachokchai, Chokchai

    Mathematical models of image sensing are generally categorized into physical/geometrical sensor models and replacement sensor models. While the former are determined from the image sensing geometry, the latter are derived from knowledge of the physical/geometric sensor models and use those models in their implementation. The main thrust of this research is in replacement sensor models which have three important characteristics: (1) Highly accurate ground-to-image functions; (2) Rigorous error propagation that is essentially of the same accuracy as the physical model; and, (3) Adjustability, or the ability to upgrade the replacement sensor model parameters when additional control information becomes available after the replacement sensor model has replaced the physical model. In this research, such replacement sensor models are termed True Replacement Models, or TRMs. TRMs provide a significant advantage of universality, particularly for image exploitation functions. Several works have addressed replacement sensor models, but except for the so-called RSM (Replacement Sensor Model, a product described in the Manual of Photogrammetry), almost all pay little or no attention to errors and their propagation. This is suspected to be because the few physical sensor parameters are usually replaced by many more parameters, presenting a potential difficulty for error estimation. The third characteristic, adjustability, is perhaps the most demanding. It provides flexibility equivalent to that of triangulation using the physical model. Primary contributions of this thesis include not only "the eigen-approach", a novel means of replacing the original sensor parameter covariance matrices at the time of estimating the TRM, but also the implementation of the hybrid approach that combines the eigen-approach with the added parameters approach used in the RSM. 
Using either the eigen-approach or the hybrid approach, rigorous error propagation can be performed during image exploitation. Further, adjustability can be performed when additional control information becomes available after the TRM has been implemented. The TRM is shown to apply to imagery from sensors having different geometries, including an aerial frame camera, a spaceborne linear array sensor, an airborne pushbroom sensor, and an airborne whiskbroom sensor. TRM results show essentially negligible differences as compared to those from rigorous physical sensor models, both for geopositioning from single and overlapping images. Simulated as well as real image data are used to address all three characteristics of the TRM.

  15. Architecture and applications of a high resolution gated SPAD image sensor

    PubMed Central

    Burri, Samuel; Maruyama, Yuki; Michalet, Xavier; Regazzoni, Francesco; Bruschini, Claudio; Charbon, Edoardo

    2014-01-01

    We present the architecture and three applications of the largest resolution image sensor based on single-photon avalanche diodes (SPADs) published to date. The sensor, fabricated in a high-voltage CMOS process, has a resolution of 512 × 128 pixels and a pitch of 24 μm. The fill-factor of 5% can be increased to 30% with the use of microlenses. For precise control of the exposure and for time-resolved imaging, we use fast global gating signals to define exposure windows as small as 4 ns. The uniformity of the gate edge locations is ∼140 ps (FWHM) over the whole array, while in-pixel digital counting enables frame rates as high as 156 kfps. Currently, our camera is used as a highly sensitive sensor with high temporal resolution, for applications ranging from fluorescence lifetime measurements to fluorescence correlation spectroscopy and generation of true random numbers. PMID:25090572

  16. High responsivity CMOS imager pixel implemented in SOI technology

    NASA Technical Reports Server (NTRS)

    Zheng, X.; Wrigley, C.; Yang, G.; Pain, B.

    2000-01-01

    Availability of mature sub-micron CMOS technology and the advent of the new low noise active pixel sensor (APS) concept enabled the development of low power, miniature, single-chip, CMOS digital imagers in the 1990s.

  17. Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle †

    PubMed Central

    Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru

    2018-01-01

    We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy. PMID:29320434

  18. Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle.

    PubMed

    Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru

    2018-01-10

    We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy.

  19. Digital Image Processing Overview For Helmet Mounted Displays

    NASA Astrophysics Data System (ADS)

    Parise, Michael J.

    1989-09-01

    Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.

  20. Machine Learning Based Single-Frame Super-Resolution Processing for Lensless Blood Cell Counting

    PubMed Central

    Huang, Xiwei; Jiang, Yu; Liu, Xu; Xu, Hang; Han, Zhi; Rong, Hailong; Yang, Haiping; Yan, Mei; Yu, Hao

    2016-01-01

    A lensless blood cell counting system integrating a microfluidic channel and a complementary metal oxide semiconductor (CMOS) image sensor is a promising technique to miniaturize the conventional optical lens based imaging system for point-of-care testing (POCT). However, such a system has limited resolution, making it imperative to improve resolution at the system level using super-resolution (SR) processing. Yet, how to improve resolution towards better cell detection and recognition with low processing cost and without degrading system throughput is still a challenge. In this article, two machine-learning-based single-frame SR processing methods are proposed and compared for lensless blood cell counting, namely Extreme Learning Machine based SR (ELMSR) and Convolutional Neural Network based SR (CNNSR). Moreover, lensless blood cell counting prototypes using commercial CMOS image sensors and custom designed backside-illuminated CMOS image sensors are demonstrated with ELMSR and CNNSR. Given a captured low-resolution lensless cell image as input, an improved high-resolution cell image is output. The experimental results show that the cell resolution is improved by 4×, and CNNSR shows a 9.5% improvement over ELMSR in resolution-enhancement performance. The cell counting results also match well with a commercial flow cytometer. ELMSR and CNNSR therefore have the potential for efficient resolution improvement in lensless blood cell counting systems towards POCT applications. PMID:27827837
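
    The ELMSR idea, mapping low-resolution patch vectors to high-resolution patch vectors through a random hidden layer with a closed-form least-squares output layer, can be sketched as follows. This is a generic ELM regression on synthetic vectors, not the paper's trained model or patch pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, T, n_hidden=64):
    """Extreme Learning Machine regression: random fixed hidden layer,
    output weights solved in closed form by least squares.

    X: (n, d_in) low-resolution patch vectors
    T: (n, d_out) corresponding high-resolution patch vectors
    """
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                  # least-squares output weights
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta
```

In the paper's setting, X would hold vectorized low-resolution cell patches and T the matching high-resolution patches; only the output weights are learned, which keeps training cheap compared with a CNN.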

  1. Piezoelectric single crystals for ultrasonic transducers in biomedical applications

    PubMed Central

    Zhou, Qifa; Lam, Kwok Ho; Zheng, Hairong; Qiu, Weibao; Shung, K. Kirk

    2014-01-01

    Piezoelectric single crystals, which have excellent piezoelectric properties, have been extensively employed for various sensor and actuator applications. In this paper, the state of the art in piezoelectric single crystals for ultrasonic transducer applications is reviewed. Firstly, the basic principles and design considerations of piezoelectric ultrasonic transducers will be addressed. Then, the popular piezoelectric single crystals used for ultrasonic transducer applications, including LiNbO3 (LN), PMN-PT and PIN-PMN-PT, will be introduced. After describing the preparation and performance of the single crystals, the recent development of both the single-element and array transducers fabricated using the single crystals will be presented. Finally, various biomedical applications including eye imaging, intravascular imaging, blood flow measurement, photoacoustic imaging, and microbeam applications of the single crystal transducers will be discussed. PMID:25386032

  2. Aspects of detection and tracking of ground targets from an airborne EO/IR sensor

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam; Sithiravel, Rajiv; Daya, Zahir; Kirubarajan, Thiagalingam

    2015-05-01

    An airborne EO/IR (electro-optical/infrared) camera system comprises a suite of sensors, such as narrow and wide field of view (FOV) EO and mid-wave IR sensors. EO/IR camera systems are regularly employed on military and search and rescue aircraft. The EO/IR system can be used to detect and identify objects rapidly in daylight and at night, often with superior performance in challenging conditions such as fog. There exist several algorithms for detecting potential targets in the bearing-elevation grid. The nonlinear filtering problem is one of estimating the kinematic parameters from bearing and elevation measurements taken from a moving platform. In this paper, we developed a complete model for the state of a target as detected by an airborne EO/IR system and simulated a typical scenario with a single target and one or two airborne sensors. We have demonstrated the ability to track the target with 'high precision' and noted the improvement from using two sensors on a single platform or on separate platforms. The performance of the Extended Kalman filter (EKF) is investigated on simulated data. Image/video data collected from an IR sensor on an airborne platform are processed using an image tracking-by-detection algorithm.
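
    A minimal single-sensor EKF cycle along these lines can be sketched as follows. As a simplification, a 2-D constant-velocity target with a bearing-only measurement from a fixed sensor stands in for the paper's full bearing/elevation model from a moving platform; all noise values are hypothetical.

```python
import numpy as np

def ekf_step(x, P, z, sensor_pos, dt=1.0, q=1e-3, r=1e-4):
    """One predict/update cycle of an EKF tracking a 2-D constant-velocity
    target from a single bearing measurement.

    x: state [px, py, vx, vy]; P: state covariance; z: measured bearing (rad)
    """
    # --- predict with a constant-velocity motion model ---
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    x = F @ x
    P = F @ P @ F.T + q * np.eye(4)
    # --- update with the nonlinear bearing measurement ---
    dx, dy = x[0] - sensor_pos[0], x[1] - sensor_pos[1]
    h = np.arctan2(dy, dx)                          # predicted bearing
    d2 = dx * dx + dy * dy
    H = np.array([[-dy / d2, dx / d2, 0.0, 0.0]])   # Jacobian of h w.r.t. state
    y = np.arctan2(np.sin(z - h), np.cos(z - h))    # angle-wrapped innovation
    S = H @ P @ H.T + r
    K = P @ H.T / S                                 # Kalman gain (4, 1)
    x = x + (K * y).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Adding the elevation channel, a second (moving) sensor, and platform motion extends the same predict/update structure; the bearing-only case already shows why range is weakly observable from a single fixed sensor.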

  3. Quantum Random Number Generation Using a Quanta Image Sensor

    PubMed Central

    Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R.

    2016-01-01

    A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers at a remarkable data output rate. In this paper, the principle of photon statistics and the theory of entropy are discussed. Sample data were collected with a QIS jot device, and their randomness quality was analyzed. The randomness assessment method and results are discussed. PMID:27367698
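
    The extraction step of such a generator can be sketched with simulated Poisson photon counts: take each count's least-significant bit, then apply von Neumann debiasing. This is a generic illustration of photon-statistics-based bit extraction under assumed Poisson emission, not the paper's exact QIS processing chain.

```python
import numpy as np

def bits_from_counts(counts):
    """Extract random bits from photon counts via von Neumann debiasing of the
    counts' least-significant bits: pair 01 -> 0, pair 10 -> 1, 00/11 discarded.
    Debiasing removes any residual bias in the raw LSB stream."""
    lsb = np.asarray(counts) & 1
    pairs = lsb[: len(lsb) // 2 * 2].reshape(-1, 2)
    keep = pairs[:, 0] != pairs[:, 1]     # keep only unequal pairs
    return pairs[keep, 0]                 # 01 -> 0, 10 -> 1

# simulated Poisson photon counts standing in for QIS jot readings
rng = np.random.default_rng(1)
bits = bits_from_counts(rng.poisson(lam=3.0, size=10000))
```

About half of the pairs are discarded, so the output rate is roughly a quarter of the raw sample rate; in exchange, the kept bits are unbiased even if the parity of the counts is not exactly 50/50.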

  4. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor.

    PubMed

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-Ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-03-05

    The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons generated by the photodiode of a single pixel among the different exposure taps, and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes.
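
    The classical three-light solve that underlies the method can be sketched as follows (the standard Lambertian formulation; with the multi-tap sensor the three images come from one capture rather than three sequential ones).

```python
import numpy as np

def photometric_stereo(images, lights):
    """Classical three-light photometric stereo. For Lambertian shading
    I = L @ (rho * n), solving the 3x3 system per pixel recovers albedo rho
    and the unit surface normal n.

    images: sequence of three (H, W) intensity images
    lights: (3, 3) matrix whose rows are the unit lighting directions
    """
    I = np.stack([im.reshape(-1) for im in images])      # (3, H*W)
    G = np.linalg.solve(lights, I)                       # rho * n, per pixel
    rho = np.linalg.norm(G, axis=0)                      # albedo
    n = G / np.maximum(rho, 1e-12)                       # unit normals (3, H*W)
    return n, rho
```

With more than three lights the square solve becomes a least-squares fit; the per-pixel structure is what makes real-time normal-map estimation feasible.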

  5. Novel snapshot hyperspectral imager for fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Chandler, Lynn; Chandler, Andrea; Periasamy, Ammasi

    2018-02-01

    Hyperspectral imaging has emerged as a new technique for the identification and classification of biological tissue [1]. Benefiting from recent developments in sensor technology, the new class of hyperspectral imagers can capture entire hypercubes in single-shot operation and shows great potential for real-time imaging in the biomedical sciences. This paper explores the use of a SnapShot imager in fluorescence imaging via a microscope for the very first time. Utilizing the latest imaging sensor, the SnapShot imager is both compact and attachable via C-mount to any commercially available light microscope. Using this setup, fluorescence hypercubes of several cells were generated, containing both spatial and spectral information. The fluorescence images were acquired with one-shot operation over the entire emission range from visible to near infrared (VIS-IR). The paper will present the hypercubes obtained from example tissues (475-630 nm). This study demonstrates the potential for application in cell biology or biomedical settings for real-time monitoring.

  6. Thin wetting film lensless imaging

    NASA Astrophysics Data System (ADS)

    Allier, C. P.; Poher, V.; Coutard, J. G.; Hiernard, G.; Dinten, J. M.

    2011-03-01

    Lensless imaging has recently attracted a lot of attention as a compact, easy-to-use method to image or detect biological objects such as cells, but it has failed to detect micron-sized objects such as bacteria, which often do not scatter enough light. In order to detect a single bacterium, we have developed a method based on a thin wetting film that produces a micro-lens effect. Compared with previously reported results, a large improvement in signal-to-noise ratio is obtained due to the presence of a micro-lens on top of each bacterium. Under these conditions, standard CMOS sensors are able to detect a single bacterium, e.g. E. coli, Bacillus subtilis and Bacillus thuringiensis, with a large signal-to-noise ratio. This paper presents our sensor optimization to enhance the SNR, improve the detection of sub-micron objects, and increase the imaging FOV from 4.3 mm2 to 12 mm2 to 24 mm2, which allows the detection of bacteria contained in 0.5 μl, 4 μl and 10 μl, respectively.

  7. Cross delay line sensor characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, Israel J; Remelius, Dennis K; Tiee, Joe J

    There exists a wealth of information in the scientific literature on the physical properties and device characterization procedures for complementary metal oxide semiconductor (CMOS), charge coupled device (CCD) and avalanche photodiode (APD) format detectors. Numerous papers and books have also treated photocathode operation in the context of photomultiplier tube (PMT) operation for either non-imaging applications or limited night vision capability. However, much less information has been reported in the literature about the characterization procedures and properties of photocathode detectors with novel cross delay line (XDL) anode structures. These allow one to detect single photons and create images by recording space and time coordinate (X, Y & T) information. In this paper, we report on the physical characteristics and performance of a cross delay line anode sensor with an enhanced near-infrared wavelength response photocathode and high dynamic range micro channel plate (MCP) gain (>10^6) multiplier stage. Measurement procedures and results including the device dark event rate (DER), pulse height distribution, quantum and electronic device efficiency (QE & DQE) and spatial resolution per effective pixel region in a 25 mm sensor array are presented. The overall knowledge and information obtained from XDL sensor characterization allow us to optimize device performance and assess capability. These device performance properties and capabilities make XDL detectors ideal for remote sensing field applications that require single photon detection, imaging, sub-nanosecond timing response, high spatial resolution (tens of microns) and large effective image format.

  8. Integrated infrared and visible image sensors

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)

    2000-01-01

    Semiconductor imaging devices integrating an array of visible detectors and another array of infrared detectors into a single module to simultaneously detect both the visible and infrared radiation of an input image. The visible detectors and the infrared detectors may be formed either on two separate substrates or on the same substrate by interleaving visible and infrared detectors.

  9. Automatically augmenting lifelog events using pervasively generated content from millions of people.

    PubMed

    Doherty, Aiden R; Smeaton, Alan F

    2010-01-01

    In sensor research we take advantage of additional contextual sensor information to disambiguate potentially erroneous sensor readings or to make better informed decisions on a single sensor's output. This use of additional information reinforces, validates, semantically enriches, and augments sensed data. Lifelog data is challenging to augment, as it tracks one's life with many images including the places they go, making it non-trivial to find associated sources of information. We investigate realising the goal of pervasive user-generated content based on sensors, by augmenting passive visual lifelogs with "Web 2.0" content collected by millions of other individuals.

  10. Direct-Solve Image-Based Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G.

    2009-01-01

    A method of wavefront sensing (more precisely characterized as a method of determining the deviation of a wavefront from a nominal figure) has been invented as an improved means of assessing the performance of an optical system as affected by such imperfections as misalignments, design errors, and fabrication errors. The method is implemented by software running on a single-processor computer that is connected, via a suitable interface, to the image sensor (typically, a charge-coupled device) in the system under test. The software collects a digitized single image from the image sensor. The image is displayed on a computer monitor. The software directly solves for the wavefront in a time interval of a fraction of a second. A picture of the wavefront is displayed. The solution process involves, among other things, fast Fourier transforms. It has been reported to the effect that some measure of the wavefront is decomposed into modes of the optical system under test, but it has not been reported whether this decomposition is postprocessing of the solution or part of the solution process.

  11. Noise reduction techniques for Bayer-matrix images

    NASA Astrophysics Data System (ADS)

    Kalevo, Ossi; Rantanen, Henry

    2002-04-01

    In this paper, some arrangements for applying Noise Reduction (NR) techniques to images captured by a single-sensor digital camera are studied. Usually, the NR filter processes full three-color-component image data. This requires that the raw Bayer-matrix image data, available from the image sensor, first be interpolated using a Color Filter Array Interpolation (CFAI) method. Another choice is to process the raw Bayer-matrix image data directly. The advantages and disadvantages of both processing orders, before (pre-) CFAI and after (post-) CFAI, are studied with linear, multi-stage median, multi-stage median hybrid and median-rational filters. The comparison is based on the quality of the output image, the processing power requirements and the amount of memory needed. A solution that improves the preservation of details when NR filtering is applied before CFAI is also proposed.
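    The pre-CFAI processing order can be sketched as follows: filter each same-color plane of the raw mosaic before any interpolation, so that noise in one color channel never leaks into the others. The RGGB layout, 3×3 window, and plain median filter below are illustrative assumptions, not the specific filters compared in the paper:

```python
# Sketch of pre-CFAI noise reduction: median-filter one colour plane of a
# raw Bayer mosaic before interpolation. RGGB layout and the 3x3 median
# are illustrative assumptions.

def bayer_channel(raw, row0, col0):
    """Extract one colour plane (every second pixel) from an RGGB mosaic."""
    return [row[col0::2] for row in raw[row0::2]]

def median3x3(plane):
    """3x3 median filter with edge replication on a 2-D list."""
    h, w = len(plane), len(plane[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            win = []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    win.append(plane[yy][xx])
            win.sort()
            out[y][x] = win[4]          # median of 9 samples
    return out

# Toy 4x4 RGGB mosaic with one impulse-noise pixel in the red plane.
raw = [[200, 50, 200, 50],
       [ 50, 30,  50, 30],
       [999, 50, 200, 50],   # 999 is an impulsive outlier (hot pixel)
       [ 50, 30,  50, 30]]

red = bayer_channel(raw, 0, 0)           # [[200, 200], [999, 200]]
print(median3x3(red))                    # [[200, 200], [200, 200]]
```

Filtering the plane before CFAI means the 999-valued outlier never contaminates the interpolated green and blue estimates at neighboring sites, which is one motivation for the pre-CFAI order studied above.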

  12. Note: A disposable x-ray camera based on mass produced complementary metal-oxide-semiconductor sensors and single-board computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoidn, Oliver R.; Seidler, Gerald T., E-mail: seidler@uw.edu

    We have integrated mass-produced commercial complementary metal-oxide-semiconductor (CMOS) image sensors and off-the-shelf single-board computers into an x-ray camera platform optimized for acquisition of x-ray spectra and radiographs at energies of 2–6 keV. The CMOS sensor and single-board computer are complemented by custom mounting and interface hardware that can be easily acquired from rapid prototyping services. For single-pixel detection events, i.e., events where the deposited energy from one photon is substantially localized in a single pixel, we establish ∼20% quantum efficiency at 2.6 keV with ∼190 eV resolution and a 100 kHz maximum detection rate. The detector platform’s useful intrinsic energy resolution, 5-μm pixel size, ease of use, and obvious potential for parallelization make it a promising candidate for many applications at synchrotron facilities, in laser-heating plasma physics studies, and in laboratory-based x-ray spectrometry.
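    Selecting single-pixel events of the kind described above amounts to keeping hits whose charge is confined to one pixel while rejecting charge-shared clusters. A minimal sketch, where the thresholds and the ADU-to-eV conversion gain are illustrative assumptions, not the platform's calibrated values:

```python
# Sketch of single-pixel event selection for spectroscopy: accept a hit
# only if all eight neighbours are consistent with baseline, so the pixel
# value measures the full deposited photon energy.
# THRESH, NEIGH_MAX, and EV_PER_ADU are illustrative assumptions.

THRESH = 50          # event threshold (ADU)
NEIGH_MAX = 10       # neighbours must stay below this (ADU)
EV_PER_ADU = 13.0    # illustrative conversion gain (eV per ADU)

def single_pixel_events(frame):
    """Return (row, col, energy_eV) for isolated single-pixel hits."""
    h, w = len(frame), len(frame[0])
    events = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] < THRESH:
                continue
            isolated = all(
                frame[y + dy][x + dx] < NEIGH_MAX
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
                and 0 <= y + dy < h and 0 <= x + dx < w
            )
            if isolated:
                events.append((y, x, frame[y][x] * EV_PER_ADU))
    return events

frame = [[0,   0, 0,  0],
         [0, 200, 0,  0],    # isolated hit -> accepted
         [0,   0, 0, 60],
         [0,   0, 0, 55]]    # charge shared across two pixels -> rejected

print(single_pixel_events(frame))   # [(1, 1, 2600.0)]
```

Histogramming the accepted energies yields the x-ray spectrum; the charge-shared cluster is rejected because splitting it across pixels would smear the ∼190 eV energy resolution quoted above.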

  13. Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs) †

    PubMed Central

    Jaramillo, Carlos; Valenti, Roberto G.; Guo, Ling; Xiao, Jizhong

    2016-01-01

    We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor’s projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis on various system characteristics such as its size, catadioptric spatial resolution, and field-of-view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision systems under different circumstances. PMID:26861351

  14. Nanomechanical DNA origami pH sensors.

    PubMed

    Kuzuya, Akinori; Watanabe, Ryosuke; Yamanaka, Yusei; Tamaki, Takuya; Kaino, Masafumi; Ohya, Yuichi

    2014-10-16

    Single-molecule pH sensors have been developed by utilizing molecular imaging of pH-responsive shape transition of nanomechanical DNA origami devices with atomic force microscopy (AFM). Short DNA fragments that can form i-motifs were introduced to nanomechanical DNA origami devices with a pliers-like shape (DNA Origami Pliers), which consist of two levers, 170 nm long and 20 nm wide, connected at a Holliday-junction fulcrum. DNA Origami Pliers can be observed in three distinct forms, cross, antiparallel and parallel; the cross form is the dominant species when no additional interaction is introduced to DNA Origami Pliers. Introduction of nine pairs of a 12-mer sequence (5'-AACCCCAACCCC-3'), which dimerize into i-motif quadruplexes upon protonation of cytosine, drives transition of DNA Origami Pliers from the open cross form into the closed parallel form under acidic conditions. Such pH-dependent transition was clearly imaged on mica at molecular resolution by AFM, showing potential application of the system as a single-molecule pH sensor.

  15. Calibration of Kinect for Xbox One and Comparison between the Two Generations of Microsoft Sensors

    PubMed Central

    Pagliari, Diana; Pinto, Livio

    2015-01-01

    In recent years, the videogame industry has been characterized by a great boost in gesture recognition and motion tracking, following the increasing demand for immersive game experiences. The Microsoft Kinect sensor allows acquiring RGB, IR and depth images with a high frame rate. Because of the complementary nature of the information provided, it has proved an attractive resource for researchers with very different backgrounds. In summer 2014, Microsoft launched a new generation of Kinect on the market, based on time-of-flight technology. This paper proposes a calibration of the Kinect for Xbox One imaging sensors, focusing on the depth camera. The mathematical model that describes the error committed by the sensor as a function of the distance between the sensor itself and the object has been estimated. All the analyses presented here have been conducted for both generations of Kinect, in order to quantify the improvements that characterize every single imaging sensor. Experimental results show that the quality of the delivered model improved applying the proposed calibration procedure, which is applicable to both point clouds and the mesh model created with the Microsoft Fusion Libraries. PMID:26528979

  16. Calibration of Kinect for Xbox One and Comparison between the Two Generations of Microsoft Sensors.

    PubMed

    Pagliari, Diana; Pinto, Livio

    2015-10-30

    In recent years, the videogame industry has been characterized by a great boost in gesture recognition and motion tracking, following the increasing demand for immersive game experiences. The Microsoft Kinect sensor allows acquiring RGB, IR and depth images with a high frame rate. Because of the complementary nature of the information provided, it has proved an attractive resource for researchers with very different backgrounds. In summer 2014, Microsoft launched a new generation of Kinect on the market, based on time-of-flight technology. This paper proposes a calibration of the Kinect for Xbox One imaging sensors, focusing on the depth camera. The mathematical model that describes the error committed by the sensor as a function of the distance between the sensor itself and the object has been estimated. All the analyses presented here have been conducted for both generations of Kinect, in order to quantify the improvements that characterize every single imaging sensor. Experimental results show that the quality of the delivered model improved applying the proposed calibration procedure, which is applicable to both point clouds and the mesh model created with the Microsoft Fusion Libraries.
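    The distance-dependent error model mentioned above can be illustrated with a toy fit. Depth error of structured-light sensors is often modeled as growing roughly quadratically with range; the quadratic form e(d) = a·d² and the sample data below are illustrative assumptions, not the model actually estimated in the paper:

```python
# Minimal sketch of estimating a distance-dependent depth-error model
# e(d) = a*d**2 from calibration residuals, then applying it as a
# correction. Model form and data are illustrative assumptions.

def fit_quadratic_gain(distances, errors):
    """Least-squares fit of e = a*d**2 (single free parameter a)."""
    num = sum(e * d * d for d, e in zip(distances, errors))
    den = sum(d ** 4 for d in distances)
    return num / den

# Synthetic calibration data: measured-minus-true depth at several ranges (m).
d = [1.0, 1.5, 2.0, 2.5, 3.0]
e = [0.002, 0.0045, 0.008, 0.0125, 0.018]    # grows ~ d**2

a = fit_quadratic_gain(d, e)
corrected = [di - a * di * di for di in d]   # subtract the modeled bias
print(round(a, 4))                            # 0.002
```

Once fitted on calibration targets, the same correction can be applied per-point to a depth map or point cloud, which mirrors how the paper's model is applied to point clouds and Fusion meshes.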

  17. Multi-spectral imaging with infrared sensitive organic light emitting diode

    PubMed Central

    Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky

    2014-01-01

    Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxially grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding, which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low-cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions. PMID:25091589

  18. Multi-spectral imaging with infrared sensitive organic light emitting diode

    NASA Astrophysics Data System (ADS)

    Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky

    2014-08-01

    Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxially grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding, which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low-cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions.

  19. Simultaneous glacier surface elevation and flow velocity mapping from cross-track pushbroom satellite Imagery

    NASA Astrophysics Data System (ADS)

    Noh, M. J.; Howat, I. M.

    2017-12-01

    Glaciers and ice sheets are changing rapidly. Digital Elevation Models (DEMs) and Velocity Maps (VMs) obtained from repeat satellite imagery provide critical measurements of changes in glacier dynamics and mass balance over large, remote areas. DEMs created from stereopairs obtained during the same satellite pass through sensor re-pointing (i.e. "in-track stereo") have been most commonly used. In-track stereo has the advantage of minimizing the time separation and, thus, surface motion between image acquisitions, so that the ice surface can be assumed motionless when collocating pixels between image pairs. Since the DEM extraction process assumes that all motion between collocated pixels is due to parallax or sensor model error, significant ice motion results in DEM quality loss or failure. In-track stereo, however, puts a greater demand on satellite tasking resources and, therefore, is much less abundant than single-scan imagery. Thus, if ice surface motion can be mitigated, the ability to extract surface elevation measurements from pairs of repeat single-scan "cross-track" imagery would greatly increase the extent and temporal resolution of ice surface change measurements. Additionally, the ice motion measured by the DEM extraction process would itself provide a useful velocity measurement. We develop a novel algorithm for generating high-quality DEMs and VMs from cross-track image pairs without any prior information, using the Surface Extraction from TIN-based Searchspace Minimization (SETSM) algorithm and its sensor model bias correction capabilities. Using a test suite of repeat, single-scan imagery from WorldView and QuickBird sensors collected over fast-moving outlet glaciers, we develop a method by which RPC biases between images are first calculated and removed over ice-free surfaces. Subpixel displacements over the ice are then constrained and used to correct the parallax estimate. Initial tests yield DEM results with the same quality as in-track stereo for cases where snowfall has not occurred between the two images and when the images have similar ground sample distances. The resulting velocity map also closely matches independent measurements.

  20. The Characterization of a DIRSIG Simulation Environment to Support the Inter-Calibration of Spaceborne Sensors

    NASA Technical Reports Server (NTRS)

    Ambeau, Brittany L.; Gerace, Aaron D.; Montanaro, Matthew; McCorkel, Joel

    2016-01-01

    Climate change studies require long-term, continuous records that extend beyond the lifetime, and the temporal resolution, of a single remote sensing satellite sensor. The inter-calibration of spaceborne sensors is therefore desired to provide spatially, spectrally, and temporally homogeneous datasets. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool is a first-principles-based synthetic image generation model that has the potential to characterize the parameters that impact the accuracy of the inter-calibration of spaceborne sensors. To demonstrate the potential utility of the model, we compare the radiance observed in real image data to the radiance observed in simulated imagery from DIRSIG. In the present work, a synthetic landscape of the Algodones Sand Dunes System is created. The terrain is facetized using a 2-meter digital elevation model generated from NASA Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) imager. The material spectra are assigned using hyperspectral measurements of sand collected from the Algodones Sand Dunes System. Lastly, the bidirectional reflectance distribution function (BRDF) properties are assigned to the modeled terrain using the Moderate Resolution Imaging Spectroradiometer (MODIS) BRDF product in conjunction with DIRSIG's Ross-Li capability. The results of this work indicate that DIRSIG is in good agreement with real image data. The potential sources of residual error are identified and the possibilities for future work are discussed.

  1. The characterization of a DIRSIG simulation environment to support the inter-calibration of spaceborne sensors

    NASA Astrophysics Data System (ADS)

    Ambeau, Brittany L.; Gerace, Aaron D.; Montanaro, Matthew; McCorkel, Joel

    2016-09-01

    Climate change studies require long-term, continuous records that extend beyond the lifetime, and the temporal resolution, of a single remote sensing satellite sensor. The inter-calibration of spaceborne sensors is therefore desired to provide spatially, spectrally, and temporally homogeneous datasets. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool is a first-principles-based synthetic image generation model that has the potential to characterize the parameters that impact the accuracy of the inter-calibration of spaceborne sensors. To demonstrate the potential utility of the model, we compare the radiance observed in real image data to the radiance observed in simulated imagery from DIRSIG. In the present work, a synthetic landscape of the Algodones Sand Dunes System is created. The terrain is facetized using a 2-meter digital elevation model generated from NASA Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) imager. The material spectra are assigned using hyperspectral measurements of sand collected from the Algodones Sand Dunes System. Lastly, the bidirectional reflectance distribution function (BRDF) properties are assigned to the modeled terrain using the Moderate Resolution Imaging Spectroradiometer (MODIS) BRDF product in conjunction with DIRSIG's Ross-Li capability. The results of this work indicate that DIRSIG is in good agreement with real image data. The potential sources of residual error are identified and the possibilities for future work are discussed.

  2. Measurement of cosmic-ray muons with the Distributed Electronic Cosmic-ray Observatory, a network of smartphones

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, J.; BenZvi, S.; Bravo, S.; Jensen, K.; Karn, P.; Meehan, M.; Peacock, J.; Plewa, M.; Ruggles, T.; Santander, M.; Schultz, D.; Simons, A. L.; Tosi, D.

    2016-04-01

    Solid-state camera image sensors can be used to detect ionizing radiation in addition to optical photons. We describe the Distributed Electronic Cosmic-ray Observatory (DECO), an app and associated public database that enables a network of consumer devices to detect cosmic rays and other ionizing radiation. In addition to terrestrial background radiation, cosmic-ray muon candidate events are detected as long, straight tracks passing through multiple pixels. The distribution of track lengths can be related to the thickness of the active (depleted) region of the camera image sensor through the known angular distribution of muons at sea level. We use a sample of candidate muon events detected by DECO to measure the thickness of the depletion region of the camera image sensor in a particular consumer smartphone model, the HTC Wildfire S. The track length distribution is fit better by a cosmic-ray muon angular distribution than an isotropic distribution, demonstrating that DECO can detect and identify cosmic-ray muons despite a background of other particle detections. Using the cosmic-ray distribution, we measure the depletion thickness to be 26.3 ± 1.4 μm. With additional data, the same method can be applied to additional models of image sensor. Once measured, the thickness can be used to convert track length to incident polar angle on a per-event basis. Combined with a determination of the incident azimuthal angle directly from the track orientation in the sensor plane, this enables direction reconstruction of individual cosmic-ray events using a single consumer device. The results simultaneously validate the use of cell phone camera image sensors as cosmic-ray muon detectors and provide a measurement of a parameter of camera image sensor performance which is not otherwise publicly available.
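    The per-event direction reconstruction described above can be sketched directly: once the depletion thickness is known, a track's projected length gives the polar angle and its in-plane orientation gives the azimuth. The crossing geometry below (muon traverses the full depletion depth, so tan θ = length/thickness) is an illustrative simplification:

```python
# Sketch of DECO-style direction reconstruction from a single device:
# polar angle from track length vs. depletion thickness, azimuth from the
# track's orientation in the sensor plane. Geometry is illustrative.
import math

DEPLETION_UM = 26.3   # depletion thickness measured in the paper (μm)

def muon_direction(track_len_um, track_dx, track_dy):
    """Return (theta_deg from sensor normal, phi_deg in sensor plane).

    Assumes the muon crosses the full depletion depth, so
    tan(theta) = projected_length / thickness.
    """
    theta = math.atan2(track_len_um, DEPLETION_UM)
    phi = math.atan2(track_dy, track_dx)
    return math.degrees(theta), math.degrees(phi)

theta, phi = muon_direction(26.3, 1.0, 1.0)   # track as long as the depth
print(round(theta, 1), round(phi, 1))          # 45.0 45.0
```

This is also why the thickness measurement matters: without it, a track length in pixels cannot be converted to an incident polar angle on a per-event basis.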

  3. Convolutional Sparse Coding for RGB+NIR Imaging.

    PubMed

    Hu, Xuemei; Heide, Felix; Dai, Qionghai; Wetzstein, Gordon

    2018-04-01

    Emerging sensor designs increasingly rely on novel color filter arrays (CFAs) to sample the incident spectrum in unconventional ways. In particular, capturing a near-infrared (NIR) channel along with conventional RGB color is an exciting new imaging modality. RGB+NIR sensing has broad applications in computational photography, such as low-light denoising; in computer vision, such as facial recognition and tracking; and it paves the way toward low-cost single-sensor RGB and depth imaging using structured illumination. However, cost-effective commercial CFAs suffer from severe spectral cross talk. This cross talk represents a major challenge in high-quality RGB+NIR imaging, rendering existing spatially multiplexed sensor designs impractical. In this work, we introduce a new approach to RGB+NIR image reconstruction using learned convolutional sparse priors. We demonstrate high-quality color and NIR imaging for challenging scenes, even including high-frequency structured NIR illumination. The effectiveness of the proposed method is validated on a large data set of experimental captures and simulated benchmark results, which demonstrate that this work achieves unprecedented reconstruction quality.

  4. Attitude-correlated frames approach for a star sensor to improve attitude accuracy under highly dynamic conditions.

    PubMed

    Ma, Liheng; Zhan, Dejun; Jiang, Guangwen; Fu, Sihua; Jia, Hui; Wang, Xingshu; Huang, Zongsheng; Zheng, Jiaxing; Hu, Feng; Wu, Wei; Qin, Shiqiao

    2015-09-01

    The attitude accuracy of a star sensor decreases rapidly when star images become motion-blurred under dynamic conditions. Existing techniques concentrate on a single frame of star images to solve this problem, and improvements are obtained to a certain extent. An attitude-correlated frames (ACF) approach, which concentrates on the attitude transforms between adjacent star image frames, is proposed to improve upon the existing techniques. The attitude transforms between different star image frames are measured precisely by the strap-down gyro unit. With the ACF method, a much larger star image frame is obtained through the combination of adjacent frames. As a result, the degradation of attitude accuracy caused by motion-blurring is compensated for. The improvement of the attitude accuracy is approximately proportional to the square root of the number of correlated star image frames. Simulations and experimental results indicate that the ACF approach is effective in removing random noises and improving the attitude determination accuracy of the star sensor under highly dynamic conditions.
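    The square-root scaling claimed above is the familiar statistics of averaging independent noise: combining N frames reduces random error roughly as 1/√N. A toy Monte Carlo sketch (the Gaussian noise model is illustrative, not the star sensor's actual error budget):

```python
# Sketch of the sqrt(N) improvement: averaging N independent noisy
# measurements shrinks the RMS error by ~1/sqrt(N). Illustrative noise
# model, not the star sensor's actual statistics.
import random
import statistics

random.seed(42)

def rms_error(n_frames, sigma=1.0, trials=2000):
    """RMS error of an N-frame average of noisy measurements of 0."""
    sq_errs = []
    for _ in range(trials):
        est = sum(random.gauss(0.0, sigma) for _ in range(n_frames)) / n_frames
        sq_errs.append(est * est)
    return statistics.mean(sq_errs) ** 0.5

e1, e16 = rms_error(1), rms_error(16)
print(round(e1 / e16, 1))   # close to sqrt(16) = 4
```

Sixteen attitude-correlated frames should therefore yield roughly a fourfold accuracy gain over a single frame, matching the proportionality stated in the abstract.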

  5. Time stamping of single optical photons with 10 ns resolution

    NASA Astrophysics Data System (ADS)

    Chakaberia, Irakli; Cotlet, Mircea; Fisher-Levine, Merlin; Hodges, Diedra R.; Nguyen, Jayke; Nomerotski, Andrei

    2017-05-01

    High spatial and temporal resolution are key features for many modern applications, e.g. mass spectrometry, probing the structure of materials via neutron scattering, and studying molecular structure [1-5]. Fast imaging also provides the capability of coincidence detection, and the further addition of sensitivity to single optical photons, with the capability of timestamping them, further broadens the field of potential applications. Photon counting is already widely used in X-ray imaging [6], where the high energy of the photons makes their detection easier. TimepixCam is a novel optical imager [7] which achieves high spatial resolution using an array of 256×256 individually controlled pixels of 55 μm × 55 μm. It is based on a thin-entrance-window silicon sensor, bump-bonded to a Timepix ASIC [8]. TimepixCam provides high quantum efficiency in the optical wavelength range (400-1000 nm). We perform the timestamping of single photons with a time resolution of 20 ns by coupling TimepixCam to a fast image intensifier with a P47 phosphor screen. The fast emission time of the P47 [9] allows us to preserve good time resolution while maintaining the capability to focus the optical output of the intensifier onto the 256×256 pixel Timepix sensor area. We demonstrate the capability of the TimepixCam + image intensifier setup to provide high-resolution single-photon timestamping, with an effective frame rate of 50 MHz.

  6. Overview of Digital Forensics Algorithms in Dslr Cameras

    NASA Astrophysics Data System (ADS)

    Aminova, E.; Trapeznikov, I.; Priorov, A.

    2017-05-01

    The widespread usage of mobile technologies and improvements in digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, a pressing task for modern digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera, and the improvement of image formation algorithms. Most research in this area is based on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the in-camera imaging process. This study focuses on the problem of determining unique features of DSLR cameras based on optical subsystem artifacts and sensor noise.

  7. Superresolution with the focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew

    2011-03-01

    Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.

  8. Image restoration techniques as applied to Landsat MSS and TM data

    USGS Publications Warehouse

    Meyer, David

    1987-01-01

    Two factors are primarily responsible for the loss of image sharpness in processing digital Landsat images. The first factor is inherent in the data, because the sensor's optics and electronics, along with other sensor elements, blur and smear the data. Digital image restoration can be used to reduce this degradation. The second factor, which further degrades the image by blurring or aliasing, is the resampling performed during geometric correction. An image restoration procedure, when used in place of typical resampling techniques, reduces sensor degradation without introducing the artifacts associated with resampling. The EROS Data Center (EDC) has implemented the restoration procedure for Landsat multispectral scanner (MSS) and thematic mapper (TM) data. This capability, developed at the University of Arizona by Dr. Robert Schowengerdt and Lynette Wood, combines restoration and resampling in a single step to produce geometrically corrected MSS and TM imagery. As with resampling, restoration demands that a tradeoff be made between aliasing, which occurs when attempting to extract maximum sharpness from an image, and blurring, which reduces the aliasing problem but sacrifices image sharpness. The restoration procedure used at EDC minimizes these artifacts by being adaptive, tailoring the tradeoff to be optimal for individual images.

  9. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor †

    PubMed Central

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-01-01

    The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from the photodiode from a single pixel into the different taps of the exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes. PMID:29510599
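    The classical three-light step that this method builds on can be sketched per pixel: with intensities I_k observed under known lighting directions l_k, solve L·g = I for g = albedo·normal, then normalize. The lighting directions and intensity values below are illustrative assumptions:

```python
# Minimal sketch of classical (three-light) photometric stereo at one
# pixel: solve L g = I, where rows of L are the known light directions
# and g = albedo * unit normal. Values are illustrative.
import math

def solve3(L, I):
    """Solve the 3x3 linear system L g = I by Cramer's rule."""
    def det(m):
        return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    d = det(L)
    g = []
    for k in range(3):
        M = [row[:] for row in L]
        for r in range(3):
            M[r][k] = I[r]
        g.append(det(M) / d)
    return g

# Three known, linearly independent lighting directions (unit vectors).
L = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
# Observed intensities at one pixel under each light: I_k = albedo * (l_k . n)
I = [0.0, 0.0, 0.8]          # pixel faces the third light, albedo 0.8

g = solve3(L, I)
rho = math.sqrt(sum(c * c for c in g))        # recovered albedo
n = [c / rho for c in g]                      # recovered unit surface normal
print([round(c, 6) for c in n], round(rho, 2))   # [0.0, 0.0, 1.0] 0.8
```

The multi-tap sensor's contribution is to capture the three differently lit exposures at almost identical times, so this per-pixel solve remains valid even when the scene moves between conventional frames.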

  10. Single-Image Super Resolution for Multispectral Remote Sensing Data Using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Liebel, L.; Körner, M.

    2016-06-01

    In optical remote sensing, the spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to be affected by a lack of spatial resolution, due to their natural disadvantage of a large distance between the sensor and the sensed object. Thus, methods for single-image super resolution are desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations—e.g., segmentation or feature extraction—can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super resolution of conventional photographs, making use of deep learning techniques such as convolutional neural networks (CNNs), can successfully be applied to remote sensing data. With a huge amount of training data available, end-to-end learning is reasonably easy to apply and can achieve results unattainable using conventional handcrafted algorithms. We trained our CNN on a specifically designed, domain-specific dataset, in order to take into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available SENTINEL-2 images featuring 13 spectral bands, a ground resolution of up to 10 m, and a high radiometric resolution, thus satisfying our requirements in terms of quality and quantity. In experiments, we obtained results superior to competing approaches trained on generic image sets, which failed to reasonably scale satellite images with a high radiometric resolution, as well as to conventional interpolation methods.

  11. Shear sensing in bonded composites with cantilever beam microsensors and dual-plane digital image correlation

    NASA Astrophysics Data System (ADS)

    Baur, Jeffery W.; Slinker, Keith; Kondash, Corey

    2017-04-01

    Understanding the shear strain, viscoelastic response, and onset of damage within bonded composites is critical to their design, processing, and reliability. This presentation will discuss the multidisciplinary research which led to the conception, development, and demonstration of two methods for measuring the shear within a bonded joint: dual-plane digital image correlation (DIC) and a micro-cantilever shear sensor. The dual-plane DIC method was developed to measure the strain field on opposing sides of a transparent single-lap joint in order to spatially quantify the joint shear strain. The sensor consists of a single glass fiber cantilever beam with a radially-grown forest of carbon nanotubes (CNTs) within a capillary pore. When the fiber is deflected, the internal radial CNT array is compressed against an electrode within the pore, and the corresponding decrease in electrical resistance is correlated with the external loading. When this small, simple, and low-cost sensor was integrated within a composite bonded joint and cycled in tension, the onset of damage prior to joint failure was observed. In a second sample configuration, both the dual-plane DIC and the hair-like cantilever sensor detected viscoplastic changes in the strain of the sample in response to continued loading.

  12. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  13. Restoration of out-of-focus images based on circle of confusion estimate

    NASA Astrophysics Data System (ADS)

    Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto

    2002-11-01

    In this paper a new method for fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (Color Filter Array) images acquired by typical CCD/CMOS sensors. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique, carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored through the application of a new inverse filtering technique. This algorithm yields sharp images while reducing ringing and crisping artifacts over a wider frequency range. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques in the literature.
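
    The inverse-filtering step can be illustrated with the standard frequency-domain Wiener filter applied to a circle-of-confusion (uniform disk) blur model. This is a generic sketch of that class of restoration, not the authors' specific algorithm; the regularization constant `k` is an assumed tuning parameter:

```python
import numpy as np

def disk_psf(shape, radius):
    # Circle-of-confusion model: uniform disk of the estimated blur radius
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = ((y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2).astype(float)
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, k=0.01):
    # Regularized inverse filter: F = H* / (|H|^2 + k) * G
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))
```

    The regularization term `k` embodies the tradeoff the abstract mentions: a small `k` sharpens aggressively but amplifies ringing near the zeros of the disk's transfer function, while a larger `k` suppresses ringing at the cost of residual blur.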

  14. High-Throughput and Label-Free Single Nanoparticle Sizing Based on Time-Resolved On-Chip Microscopy

    DTIC Science & Technology

    2015-02-17

    ...soot, ice crystals in clouds, and engineered nanomaterials, among others. While there exist various nanoparticle detection and sizing...the sample of interest is placed on an optoelectronic sensor array with typically less than 0.5 mm gap (z2) between the sample and sensor planes such...that, under unit magnification, the entire sensor active area serves as the imaging FOV, easily reaching >20-30 mm2 with state-of-the-art CMOS

  15. Imaging Voltage in Genetically Defined Neuronal Subpopulations with a Cre Recombinase-Targeted Hybrid Voltage Sensor.

    PubMed

    Bayguinov, Peter O; Ma, Yihe; Gao, Yu; Zhao, Xinyu; Jackson, Meyer B

    2017-09-20

    Genetically encoded voltage indicators create an opportunity to monitor electrical activity in defined sets of neurons as they participate in the complex patterns of coordinated electrical activity that underlie nervous system function. Taking full advantage of genetically encoded voltage indicators requires a generalized strategy for targeting the probe to genetically defined populations of cells. To this end, we have generated a mouse line with an optimized hybrid voltage sensor (hVOS) probe within a locus designed for efficient Cre recombinase-dependent expression. Crossing this mouse with Cre drivers generated double transgenics expressing hVOS probe in GABAergic, parvalbumin, and calretinin interneurons, as well as hilar mossy cells, new adult-born neurons, and recently active neurons. In each case, imaging in brain slices from male or female animals revealed electrically evoked optical signals from multiple individual neurons in single trials. These imaging experiments revealed action potentials, dynamic aspects of dendritic integration, and trial-to-trial fluctuations in response latency. The rapid time response of hVOS imaging revealed action potentials with high temporal fidelity, and enabled accurate measurements of spike half-widths characteristic of each cell type. Simultaneous recording of rapid voltage changes in multiple neurons with a common genetic signature offers a powerful approach to the study of neural circuit function and the investigation of how neural networks encode, process, and store information. SIGNIFICANCE STATEMENT Genetically encoded voltage indicators hold great promise in the study of neural circuitry, but realizing their full potential depends on targeting the sensor to distinct cell types. Here we present a new mouse line that expresses a hybrid optical voltage sensor under the control of Cre recombinase. Crossing this line with Cre drivers generated double-transgenic mice, which express this sensor in targeted cell types. 
In brain slices from these animals, single-trial hybrid optical voltage sensor recordings revealed voltage changes with submillisecond resolution in multiple neurons simultaneously. This imaging tool will allow for the study of the emergent properties of neural circuits and permit experimental tests of the roles of specific types of neurons in complex circuit activity. Copyright © 2017 the authors 0270-6474/17/379305-15$15.00/0.

  16. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    PubMed Central

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-01-01

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor during radiometric response calibration, to eliminate the influence of the focusing effect of uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images, so that panoramas reflect the objective luminance more faithfully. This overcomes the limitation of stitched images that look realistic only through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857

  17. Application of a genetically encoded biosensor for live cell imaging of L-valine production in pyruvate dehydrogenase complex-deficient Corynebacterium glutamicum strains.

    PubMed

    Mustafi, Nurije; Grünberger, Alexander; Mahr, Regina; Helfrich, Stefan; Nöh, Katharina; Blombach, Bastian; Kohlheyer, Dietrich; Frunzke, Julia

    2014-01-01

    The majority of biotechnologically relevant metabolites do not impart a conspicuous phenotype to the producing cell. Consequently, the analysis of microbial metabolite production is still dominated by bulk techniques, which may obscure significant variation at the single-cell level. In this study, we have applied the recently developed Lrp-biosensor for monitoring of amino acid production in single cells of gradually engineered L-valine producing Corynebacterium glutamicum strains based on the pyruvate dehydrogenase complex-deficient (PDHC) strain C. glutamicum ΔaceE. Online monitoring of the sensor output (eYFP fluorescence) during batch cultivation proved the sensor's suitability for visualizing different production levels. In the following, we conducted live cell imaging studies on C. glutamicum sensor strains using microfluidic chip devices. As expected, the sensor output was higher in microcolonies of high-yield producers in comparison to the basic strain C. glutamicum ΔaceE. Microfluidic cultivation in minimal medium revealed a typical Gaussian distribution of single cell fluorescence during the production phase. Remarkably, low amounts of complex nutrients completely changed the observed phenotypic pattern of all strains, resulting in a phenotypic split of the population. Whereas some cells stopped growing and initiated L-valine production, others continued to grow or showed a delayed transition to production. Depending on the cultivation conditions, a considerable fraction of non-fluorescent cells was observed, suggesting a loss of metabolic activity. These studies demonstrate that genetically encoded biosensors are a valuable tool for monitoring single cell productivity and to study the phenotypic pattern of microbial production strains.

  18. Application of a Genetically Encoded Biosensor for Live Cell Imaging of L-Valine Production in Pyruvate Dehydrogenase Complex-Deficient Corynebacterium glutamicum Strains

    PubMed Central

    Mahr, Regina; Helfrich, Stefan; Nöh, Katharina; Blombach, Bastian; Kohlheyer, Dietrich; Frunzke, Julia

    2014-01-01

    The majority of biotechnologically relevant metabolites do not impart a conspicuous phenotype to the producing cell. Consequently, the analysis of microbial metabolite production is still dominated by bulk techniques, which may obscure significant variation at the single-cell level. In this study, we have applied the recently developed Lrp-biosensor for monitoring of amino acid production in single cells of gradually engineered L-valine producing Corynebacterium glutamicum strains based on the pyruvate dehydrogenase complex-deficient (PDHC) strain C. glutamicum ΔaceE. Online monitoring of the sensor output (eYFP fluorescence) during batch cultivation proved the sensor's suitability for visualizing different production levels. In the following, we conducted live cell imaging studies on C. glutamicum sensor strains using microfluidic chip devices. As expected, the sensor output was higher in microcolonies of high-yield producers in comparison to the basic strain C. glutamicum ΔaceE. Microfluidic cultivation in minimal medium revealed a typical Gaussian distribution of single cell fluorescence during the production phase. Remarkably, low amounts of complex nutrients completely changed the observed phenotypic pattern of all strains, resulting in a phenotypic split of the population. Whereas some cells stopped growing and initiated L-valine production, others continued to grow or showed a delayed transition to production. Depending on the cultivation conditions, a considerable fraction of non-fluorescent cells was observed, suggesting a loss of metabolic activity. These studies demonstrate that genetically encoded biosensors are a valuable tool for monitoring single cell productivity and to study the phenotypic pattern of microbial production strains. PMID:24465669

  19. Multispectral imaging with vertical silicon nanowires

    PubMed Central

    Park, Hyunsung; Crozier, Kenneth B.

    2013-01-01

    Multispectral imaging is a powerful tool that extends the capabilities of the human eye. However, multispectral imaging systems generally are expensive and bulky, and multiple exposures are needed. Here, we report the demonstration of a compact multispectral imaging system that uses vertical silicon nanowires to realize a filter array. Multiple filter functions covering visible to near-infrared (NIR) wavelengths are simultaneously defined in a single lithography step using a single material (silicon). Nanowires are then etched and embedded into polydimethylsiloxane (PDMS), thereby realizing a device with eight filter functions. By attaching it to a monochrome silicon image sensor, we successfully realize an all-silicon multispectral imaging system. We demonstrate visible and NIR imaging. We show that the latter is highly sensitive to vegetation and furthermore enables imaging through objects opaque to the eye. PMID:23955156

  20. Real time in vivo imaging and measurement of serine protease activity in the mouse hippocampus using a dedicated complementary metal-oxide semiconductor imaging device.

    PubMed

    Ng, David C; Tamura, Hideki; Tokuda, Takashi; Yamamoto, Akio; Matsuo, Masamichi; Nunoshita, Masahiro; Ishikawa, Yasuyuki; Shiosaka, Sadao; Ohta, Jun

    2006-09-30

    The aim of the present study is to demonstrate the application of complementary metal-oxide semiconductor (CMOS) imaging technology for studying the mouse brain. By using a dedicated CMOS image sensor, we have successfully imaged and measured brain serine protease activity in vivo, in real time, and for an extended period of time. We have developed a biofluorescence imaging device by packaging the CMOS image sensor to enable an on-chip imaging configuration. In this configuration, no optics are required; instead, an excitation filter applied directly onto the sensor replaces the filter cube block found in conventional fluorescence microscopes. The fully packaged device measures 350 microm thick x 2.7 mm wide, consists of an array of 176 x 144 pixels, and is small enough for measurement inside a single hemisphere of the mouse brain, while still providing sufficient imaging resolution. In the experiment, intraperitoneally injected kainic acid induced upregulation of serine protease activity in the brain. These events were captured in real time by imaging and measuring the fluorescence from a fluorogenic substrate that detected this activity. The entire device, which weighs less than 1% of the body weight of the mouse, holds promise for studying freely moving animals.

  1. Multichannel imager for littoral zone characterization

    NASA Astrophysics Data System (ADS)

    Podobna, Yuliya; Schoonmaker, Jon; Dirbas, Joe; Sofianos, James; Boucher, Cynthia; Gilbert, Gary

    2010-04-01

    This paper describes an approach to utilize a multi-channel, multi-spectral electro-optic (EO) system for littoral zone characterization. Advanced Coherent Technologies, LLC (ACT) presents their EO sensor systems for the surf zone environmental assessment and potential surf zone target detection. Specifically, an approach is presented to determine a Surf Zone Index (SZI) from the multi-spectral EO sensor system. SZI provides a single quantitative value of the surf zone conditions delivering an immediate understanding of the area and an assessment as to how well an airborne optical system might perform in a mine countermeasures (MCM) operation. Utilizing consecutive frames of SZI images, ACT is able to measure variability over time. A surf zone nomograph, which incorporates targets, sensor, and environmental data, including the SZI to determine the environmental impact on system performance, is reviewed in this work. ACT's electro-optical multi-channel, multi-spectral imaging system and test results are presented and discussed.

  2. Stereo Cloud Height and Wind Determination Using Measurements from a Single Focal Plane

    NASA Astrophysics Data System (ADS)

    Demajistre, R.; Kelly, M. A.

    2014-12-01

    We present here a method for extracting cloud heights and winds from an aircraft or orbital platform using measurements from a single focal plane, exploiting the motion of the platform to provide multiple views of the cloud tops. To illustrate this method we use data acquired during aircraft flight tests of a set of simple stereo imagers that are well suited to this purpose. Each of these imagers has three linear arrays on the focal plane, one looking forward, one looking aft, and one looking down. Push-broom images from each of these arrays are constructed, and then a spatial correlation analysis is used to deduce the delays and displacements required for wind and cloud height determination. We will present the algorithms necessary for the retrievals, as well as the methods used to determine the uncertainties of the derived cloud heights and winds. We will apply the retrievals and uncertainty determination to a number of image sets acquired by the airborne sensors. We then generalize these results to potential space based observations made by similar types of sensors.
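
    The core of the correlation analysis described above is lag estimation between push-broom traces of the same cloud seen by the forward-looking and nadir arrays; with platform speed v, line period τ, and forward view angle θ, the lag maps to height roughly as h ≈ v·(lag·τ)/tan θ. A hedged NumPy sketch of that idea (the variable names, integer peak-picking, and geometry are illustrative simplifications, not the authors' retrieval):

```python
import numpy as np

def estimate_lag(trace_a, trace_b):
    """Integer lag (in scan lines) at which trace_b best matches trace_a."""
    corr = np.correlate(trace_b - trace_b.mean(),
                        trace_a - trace_a.mean(), mode="full")
    return int(np.argmax(corr)) - (len(trace_a) - 1)

def cloud_height(lag, v, line_period, view_angle):
    """Along-track parallax geometry: h = v * (lag * tau) / tan(theta)."""
    return v * lag * line_period / np.tan(view_angle)
```

    A real retrieval would correlate 2-D image patches, interpolate the peak for sub-line precision, and propagate the correlation-peak width into the height and wind uncertainties the abstract mentions.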

  3. Capability of long distance 100  GHz FMCW using a single GDD lamp sensor.

    PubMed

    Levanon, Assaf; Rozban, Daniel; Aharon Akram, Avihai; Kopeika, Natan S; Yitzhaky, Yitzhak; Abramovich, Amir

    2014-12-20

    Millimeter wave (MMW)-based imaging systems are required for applications in medicine, homeland security, concealed weapon detection, and space technology. The lack of inexpensive room-temperature imaging sensors makes it difficult to provide a suitable MMW system for many of the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The radar system requires that the millimeter wave detector be able to operate as a heterodyne detector. Since the source of radiation is a frequency-modulated continuous wave (FMCW), the heterodyne-detected signal gives the object's depth information through the value of the difference (beat) frequency, in addition to the reflectance of the 2D image. New experiments show the capability of long-distance FMCW detection using a large-scale Cassegrain projection system, described first (to our knowledge) in this paper. The system demonstrates operation at a distance of at least 20 m with a low-cost plasma-based glow discharge detector (GDD) focal plane array (FPA). Each point on the object corresponds to a point in the image and includes the distance information. This will enable relatively inexpensive 3D MMW imaging.
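
    The depth-from-beat-frequency relation the abstract relies on is the standard FMCW one: for a linear sweep of bandwidth B over duration T, a target at range R produces a beat frequency f_b = 2RB/(cT), so R = c·f_b·T/(2B). A toy NumPy sketch with assumed sweep parameters (the 100 GHz carrier itself drops out; only the sweep matters):

```python
import numpy as np

c = 3e8          # speed of light, m/s
B = 1e9          # sweep bandwidth, Hz (assumed)
T = 1e-3         # sweep duration, s (assumed)
fs = 1e6         # sample rate of the detected beat signal, Hz (assumed)
R_true = 20.0    # target range, m

# A target at range R produces beat frequency f_b = 2 * R * B / (c * T)
f_beat = 2 * R_true * B / (c * T)
t = np.arange(0, T, 1 / fs)
beat = np.cos(2 * np.pi * f_beat * t)

# Recover the range from the spectral peak of the beat signal
spectrum = np.abs(np.fft.rfft(beat))
f_peak = np.fft.rfftfreq(len(t), 1 / fs)[np.argmax(spectrum)]
R_est = c * f_peak * T / (2 * B)
```

    The FFT bin spacing 1/T sets the range resolution, which is why each image point can carry distance information directly from its beat spectrum.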

  4. Panoramic thermal imaging: challenges and tradeoffs

    NASA Astrophysics Data System (ADS)

    Aburmad, Shimon

    2014-06-01

    Over the past decade, we have witnessed a growing demand for electro-optical systems that can provide continuous 360° coverage. Applications such as perimeter security, autonomous vehicles, and military warning systems are a few of the most common applications for panoramic imaging. There are several different technological approaches for achieving panoramic imaging. Solutions based on rotating elements do not provide continuous coverage as there is a time lag between updates. Continuous panoramic solutions either use "stitched" images from multiple adjacent sensors, or sophisticated optical designs which warp a panoramic view onto a single sensor. When dealing with panoramic imaging in the visible spectrum, high volume production and advancement of semiconductor technology has enabled the use of CMOS/CCD image sensors with a huge number of pixels, small pixel dimensions, and low cost devices. However, in the infrared spectrum, the growth of detector pixel counts, pixel size reduction, and cost reduction is taking place at a slower rate due to the complexity of the technology and limitations caused by the laws of physics. In this work, we will explore the challenges involved in achieving 360° panoramic thermal imaging, and will analyze aspects such as spatial resolution, FOV, data complexity, FPA utilization, system complexity, coverage and cost of the different solutions. We will provide illustrations, calculations, and tradeoffs between three solutions evaluated by Opgal: a unique 360° lens design using an LWIR XGA detector, stitching of three adjacent LWIR sensors equipped with a low-distortion 120° lens, and a fisheye lens with an HFOV of 180° and an XGA sensor.

  5. Novel compact panomorph lens based vision system for monitoring around a vehicle

    NASA Astrophysics Data System (ADS)

    Thibault, Simon

    2008-04-01

    Automotive applications are one of the largest vision-sensor market segments and one of the fastest growing ones. The trend to use increasingly more sensors in cars is driven both by legislation and consumer demands for higher safety and better driving experiences. Awareness of what directly surrounds a vehicle affects safe driving and manoeuvring. Consequently, panoramic 360° field-of-view imaging contributes more to the perception of the world around the driver than any other sensor. However, to obtain complete vision around the car, several sensor systems are normally necessary. To solve this issue, a customized imaging system based on a panomorph lens can provide the maximum information to the driver with a reduced number of sensors. A panomorph lens is a hemispheric wide-angle anamorphic lens with enhanced resolution in predefined zones of interest. Because panomorph lenses are optimized to a custom angle-to-pixel relationship, vision systems provide ideal image coverage that reduces and optimizes the processing. We present various scenarios which may benefit from the use of a custom panoramic sensor. We also discuss the technical requirements of such a vision system. Finally, we demonstrate how the panomorph-based visual sensor is probably one of the most promising ways to fuse many sensors into one. For example, a single panoramic sensor on the front of a vehicle could provide all necessary information for assistance in crash avoidance, lane tracking, early warning, park aids, road sign detection, and various video monitoring views.

  6. Multisource image fusion method using support value transform.

    PubMed

    Zheng, Sheng; Shi, Wen-Zhong; Liu, Jian; Zhu, Guang-Xi; Tian, Jin-Wen

    2007-07-01

    With the development of numerous imaging sensors, many images can be simultaneously captured by various sensors. However, there are many scenarios where no one sensor can give the complete picture. Image fusion is an important approach to solving this problem; it produces a single image which preserves all relevant information from a set of different sensors. In this paper, we propose a new image fusion method using the support value transform, which uses support values to represent the salient features of an image. This is based on the fact that, in support vector machines (SVMs), the data with larger support values have a physical meaning in the sense that they reveal the relatively greater importance of those data points for contributing to the SVM model. The mapped least squares SVM (mapped LS-SVM) is used to efficiently compute the support values of an image. The support value analysis is developed by using a series of multiscale support value filters, which are obtained by filling zeros in the basic support value filter deduced from the mapped LS-SVM to match the resolution of the desired level. Compared with widely used image fusion methods such as the Laplacian pyramid and discrete wavelet transform methods, the proposed method is an undecimated transform-based approach. The fusion experiments are undertaken on multisource images. The results demonstrate that the proposed approach is effective and is superior to the conventional image fusion methods in terms of quantitative fusion evaluation indexes, such as quality of visual information (Q(AB/F)), mutual information, etc.
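
    For context, the conventional multiscale baseline this method is compared against (Laplacian-pyramid fusion with a choose-max rule on detail bands) can be sketched as follows. The box-filter pyramid and max-absolute selection here are a minimal illustrative variant, not the paper's support value transform:

```python
import numpy as np

def _down(img):
    # 2x2 box-filter downsample
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def _up(img):
    # nearest-neighbour upsample back to the finer grid
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_fuse(a, b, levels=3):
    """Fuse two registered images: choose-max on detail bands, average the base."""
    details = []
    for _ in range(levels):
        da, db = _down(a), _down(b)
        ra, rb = a - _up(da), b - _up(db)        # detail (Laplacian) bands
        details.append(np.where(np.abs(ra) >= np.abs(rb), ra, rb))
        a, b = da, db
    fused = (a + b) / 2.0                        # coarsest approximation
    for d in reversed(details):
        fused = _up(fused) + d
    return fused
```

    The decimation in each `_down` step is exactly what the proposed undecimated support-value approach avoids, at the cost of more computation per level.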

  7. Detecting single-electron events in TEM using low-cost electronics and a silicon strip sensor.

    PubMed

    Gontard, Lionel C; Moldovan, Grigore; Carmona-Galán, Ricardo; Lin, Chao; Kirkland, Angus I

    2014-04-01

    There is great interest in developing novel position-sensitive direct detectors for transmission electron microscopy (TEM) that do not rely on the conversion of electrons into photons. Direct imaging improves contrast and efficiency and allows the operation of the microscope at lower energies and at lower doses without loss in resolution, which is especially important for studying soft materials and biological samples. We investigate the feasibility of employing a silicon strip detector as an imaging detector for TEM. This device, routinely used in high-energy particle physics, can detect small variations in electric current associated with the impact of a single charged particle. The main advantages of using this type of sensor for direct imaging in TEM are its intrinsic radiation hardness and large detection area. Here, we detail the design, simulation, fabrication and in-TEM testing of the front-end electronics developed using low-cost discrete components, and discuss the limitations and applications of this technology for TEM.

  8. A quantum spin-probe molecular microscope

    NASA Astrophysics Data System (ADS)

    Perunicic, V. S.; Hill, C. D.; Hall, L. T.; Hollenberg, L. C. L.

    2016-10-01

    Imaging the atomic structure of a single biomolecule is an important challenge in the physical biosciences. Whilst existing techniques all rely on averaging over large ensembles of molecules, the single-molecule realm remains unsolved. Here we present a protocol for 3D magnetic resonance imaging of a single molecule using a quantum spin probe acting simultaneously as the magnetic resonance sensor and source of magnetic field gradient. Signals corresponding to specific regions of the molecule's nuclear spin density are encoded on the quantum state of the probe, which is used to produce a 3D image of the molecular structure. Quantum simulations of the protocol applied to the rapamycin molecule (C51H79NO13) show that the hydrogen and carbon substructure can be imaged at the angstrom level using current spin-probe technology. With prospects for scaling to large molecules and/or fast dynamic conformation mapping using spin labels, this method provides a realistic pathway for single-molecule microscopy.

  9. CMOS imager for pointing and tracking applications

    NASA Technical Reports Server (NTRS)

    Sun, Chao (Inventor); Pain, Bedabrata (Inventor); Yang, Guang (Inventor); Heynssens, Julie B. (Inventor)

    2006-01-01

    Systems and techniques to realize pointing and tracking applications with CMOS imaging devices. In general, in one implementation, the technique includes: sampling multiple rows and multiple columns of an active pixel sensor array into a memory array (e.g., an on-chip memory array), and reading out the multiple rows and multiple columns sampled in the memory array to provide image data with reduced motion artifact. Various operation modes may be provided, including TDS, CDS, CQS, a tracking mode to read out multiple windows, and/or a mode employing a sample-first-read-later readout scheme. The tracking mode can take advantage of a diagonal switch array. The diagonal switch array, the active pixel sensor array and the memory array can be integrated onto a single imager chip with a controller. This imager device can be part of a larger imaging system for both space-based applications and terrestrial applications.

  10. FPGA-based multi-channel fluorescence lifetime analysis of Fourier multiplexed frequency-sweeping lifetime imaging

    PubMed Central

    Zhao, Ming; Li, Yu; Peng, Leilei

    2014-01-01

    We report a fast non-iterative lifetime data analysis method for the Fourier multiplexed frequency-sweeping confocal FLIM (Fm-FLIM) system [Opt. Express 22, 10221 (2014), PMID: 24921725]. The new method, named the R-method, allows fast multi-channel lifetime image analysis in the system's FPGA data processing board. Experimental tests proved that the performance of the R-method is equivalent to that of single-exponential iterative fitting, and its sensitivity is well suited for time-lapse FLIM-FRET imaging of live cells, for example cyclic adenosine monophosphate (cAMP) level imaging with GFP-Epac-mCherry sensors. With the R-method and its FPGA implementation, multi-channel lifetime images can now be generated in real time on the multi-channel frequency-sweeping FLIM system, and live readout of FRET sensors can be performed during time-lapse imaging. PMID:25321778

  11. Compressive hyperspectral sensor for LWIR gas detection

    NASA Astrophysics Data System (ADS)

    Russell, Thomas A.; McMackin, Lenore; Bridge, Bob; Baraniuk, Richard

    2012-06-01

Focal plane arrays with associated electronics and cooling are a substantial portion of the cost, complexity, size, weight, and power requirements of Long-Wave IR (LWIR) imagers. Hyperspectral LWIR imagers add significant data volume burden as they collect a high-resolution spectrum at each pixel. We report here on a LWIR Hyperspectral Sensor that applies Compressive Sensing (CS) in order to achieve benefits in these areas. The sensor applies single-pixel detection technology demonstrated by Rice University. The single-pixel approach uses a Digital Micro-mirror Device (DMD) to reflect and multiplex the light from a random assortment of pixels onto the detector. This is repeated for a number of measurements much less than the total number of scene pixels. We have extended this architecture to hyperspectral LWIR sensing by inserting a Fabry-Perot spectrometer in the optical path. This compressive hyperspectral imager collects all three dimensions on a single detection element, greatly reducing the size, weight and power requirements of the system relative to traditional approaches, while also reducing data volume. The CS architecture also supports innovative adaptive approaches to sensing, as the DMD device allows control over the selection of spatial scene pixels to be multiplexed on the detector. We are applying this advantage to the detection of plume gases, by adaptively locating and concentrating target energy. A key challenge in this system is the diffraction loss produced by the DMD in the LWIR. We report the results of testing DMD operation in the LWIR, as well as system spatial and spectral performance.

  12. Development of an electromagnetic imaging system for well bore integrity inspection

    NASA Astrophysics Data System (ADS)

    Plotnikov, Yuri; Wheeler, Frederick W.; Mandal, Sudeep; Climent, Helene C.; Kasten, A. Matthias; Ross, William

    2017-02-01

State-of-the-art imaging technologies for monitoring the integrity of oil and gas well bores are typically limited to the inspection of metal casings and cement bond interfaces close to the first casing region. The objective of this study is to develop and evaluate a novel well-integrity inspection system that is capable of providing enhanced information about the flaw structure and topology of hydrocarbon producing well bores. In order to achieve this, we propose the development of a multi-element electromagnetic (EM) inspection tool that can provide information about material loss in the first and second casing structure as well as information about eccentricity between multiple casing strings. Furthermore, the information gathered from the EM inspection tool will be combined with other imaging modalities (e.g. data from an x-ray backscatter imaging device). The independently acquired data are then fused to achieve a comprehensive assessment of integrity with greater accuracy. A test rig composed of several concentric metal casings with various defect structures was assembled and imaged. Initial test results were obtained with a scanning system design that includes a single transmitting coil and several receiving coils mounted on a single rod. A mechanical linear translation stage was used to move the EM sensors in the axial direction during data acquisition. For simplicity, a single receiving coil and repetitive scans were employed to simulate performance of the designed receiving sensor array system. The resulting electromagnetic images enable the detection of the metal defects in the steel pipes. Responses from several sensors were used to assess the location and amount of material loss in the first and second metal pipe as well as the relative eccentric position between these two pipes. The results from EM measurements and x-ray backscatter simulations demonstrate that data fusion from several sensing modalities can provide an enhanced assessment of flaw structures in producing well bores and potentially allow for early detection of anomalies that, if undetected, might lead to catastrophic failures.

  13. Phase-sensitive X-ray imager

    DOEpatents

    Baker, Kevin Louis

    2013-01-08

    X-ray phase sensitive wave-front sensor techniques are detailed that are capable of measuring the entire two-dimensional x-ray electric field, both the amplitude and phase, with a single measurement. These Hartmann sensing and 2-D Shear interferometry wave-front sensors do not require a temporally coherent source and are therefore compatible with x-ray tubes and also with laser-produced or x-pinch x-ray sources.

  14. Accurate reconstruction of hyperspectral images from compressive sensing measurements

    NASA Astrophysics Data System (ADS)

    Greer, John B.; Flake, J. C.

    2013-05-01

The emerging field of Compressive Sensing (CS) provides a new way to capture data by shifting the heaviest burden of data collection from the sensor to the computer on the user-end. This new means of sensing requires fewer measurements for a given amount of information than traditional sensors. We investigate the efficacy of CS for capturing HyperSpectral Imagery (HSI) remotely. We also introduce a new family of algorithms for constructing HSI from CS measurements with Split Bregman Iteration [Goldstein and Osher, 2009]. These algorithms combine spatial Total Variation (TV) with smoothing in the spectral dimension. We examine models for three different CS sensors: the Coded Aperture Snapshot Spectral Imager-Single Disperser (CASSI-SD) [Wagadarikar et al., 2008] and Dual Disperser (CASSI-DD) [Gehm et al., 2007] cameras, and a hypothetical random sensing model closer to CS theory, but not necessarily implementable with existing technology. We simulate the capture of remotely sensed images by applying the sensor forward models to well-known HSI scenes - an AVIRIS image of Cuprite, Nevada and the HYMAP Urban image. To measure accuracy of the CS models, we compare the scenes constructed with our new algorithm to the original AVIRIS and HYMAP cubes. The results demonstrate the possibility of accurately sensing HSI remotely with significantly fewer measurements than standard hyperspectral cameras.
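The structure of the objective, a spatial regularizer plus smoothing along the spectral axis, can be sketched in a heavily simplified form. The toy below replaces TV and Split Bregman with quadratic penalties and plain gradient descent, so only the shape of the cost function survives from the paper's algorithm:

```python
import numpy as np

def denoise_cube(f, lam_sp=1.0, lam_spec=2.0, step=0.1, n_iter=100):
    # Minimize ||u - f||^2 + lam_sp * spatial roughness (axes 0, 1)
    #                      + lam_spec * spectral roughness (axis 2)
    # by gradient descent, using periodic-boundary Laplacians.
    lap = lambda u, ax: np.roll(u, 1, ax) + np.roll(u, -1, ax) - 2.0 * u
    u = f.copy()
    for _ in range(n_iter):
        grad = (u - f) - lam_sp * (lap(u, 0) + lap(u, 1)) - lam_spec * lap(u, 2)
        u -= step * grad
    return u

rng = np.random.default_rng(3)
truth = np.ones((8, 8, 16))
truth[:4] = 2.0                               # a spatial edge, flat spectra
noisy = truth + 0.3 * rng.standard_normal(truth.shape)
clean = denoise_cube(noisy)
```

Even this crude quadratic version reduces the error of the noisy cube; the paper's TV term preserves spatial edges far better than the quadratic penalty used here.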

  15. An Overview of the Landsat Data Continuity Mission

    NASA Technical Reports Server (NTRS)

    Irons, James R.; Dwyer, John L.

    2010-01-01

The advent of the Landsat Data Continuity Mission (LDCM), currently with a launch readiness date of December, 2012, will see evolutionary changes in the Landsat data products available from the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center. The USGS initiated a revolution in 2009 when EROS began distributing Landsat data products at no cost to requestors, in contrast to the past practice of charging the cost of fulfilling each request ($600 per Landsat scene). To implement this drastic change, EROS terminated data processing options for requestors and began to produce all data products using a consistent processing recipe. EROS plans to continue this practice for the LDCM and will require new algorithms to process data from the LDCM sensors. All previous Landsat satellites flew multispectral scanners to collect image data of the global land surface. Additionally, Landsats 4, 5, and 7 flew sensors that acquired imagery for both reflective spectral bands and a single thermal band. In contrast, the LDCM will carry two pushbroom sensors: the Operational Land Imager (OLI) for reflective spectral bands and the Thermal InfraRed Sensor (TIRS) for two thermal bands. EROS is developing the ground data processing system that will calibrate and correct the data from the thousands of detectors employed by the pushbroom sensors and combine the data from the two sensors to create a single data product with registered data for all of the OLI and TIRS bands.

  16. Performance of a Medipix3RX spectroscopic pixel detector with a high resistivity gallium arsenide sensor.

    PubMed

    Hamann, Elias; Koenig, Thomas; Zuber, Marcus; Cecilia, Angelica; Tyazhev, Anton; Tolbanov, Oleg; Procz, Simon; Fauler, Alex; Baumbach, Tilo; Fiederle, Michael

    2015-03-01

    High resistivity gallium arsenide is considered a suitable sensor material for spectroscopic X-ray imaging detectors. These sensors typically have thicknesses between a few hundred μm and 1 mm to ensure a high photon detection efficiency. However, for small pixel sizes down to several tens of μm, an effect called charge sharing reduces a detector's spectroscopic performance. The recently developed Medipix3RX readout chip overcomes this limitation by implementing a charge summing circuit, which allows the reconstruction of the full energy information of a photon interaction in a single pixel. In this work, we present the characterization of the first Medipix3RX detector assembly with a 500 μm thick high resistivity, chromium compensated gallium arsenide sensor. We analyze its properties and demonstrate the functionality of the charge summing mode by means of energy response functions recorded at a synchrotron. Furthermore, the imaging properties of the detector, in terms of its modulation transfer functions and signal-to-noise ratios, are investigated. After more than one decade of attempts to establish gallium arsenide as a sensor material for photon counting detectors, our results represent a breakthrough in obtaining detector-grade material. The sensor we introduce is therefore suitable for high resolution X-ray imaging applications.

  17. Case studies for observation planning algorithm of a Japanese spaceborne sensor: Hyperspectral Imager Suite (HISUI)

    NASA Astrophysics Data System (ADS)

    Ogawa, Kenta; Konno, Yukiko; Yamamoto, Satoru; Matsunaga, Tsuneo; Tachikawa, Tetsushi; Komoda, Mako; Kashimura, Osamu; Rokugawa, Shuichi

    2016-10-01

Hyperspectral Imager Suite (HISUI) [1] is a future Japanese spaceborne hyperspectral instrument being developed by the Ministry of Economy, Trade, and Industry (METI); it will be delivered to the ISS in 2018. In the HISUI project, observation strategy is especially important for the hyperspectral sensor, and the relationship between the limitations of sensor operation and the planned observation scenarios has to be studied. We have developed the concept of a multiple-algorithm approach: two (or more) algorithm models (the Long Strip Model and the Score Downfall Model) are used to select observing scenes from complex data acquisition requests while satisfying sensor constraints. We have tested the algorithms and found that the performance of the two models depends on the remaining data acquisition requests, i.e., the distribution of scores along the orbits. We conclude that the multiple-algorithm approach will produce better collection plans for HISUI than a single fixed approach.

  18. A software package for evaluating the performance of a star sensor operation

    NASA Astrophysics Data System (ADS)

    Sarpotdar, Mayuresh; Mathew, Joice; Sreejith, A. G.; Nirmal, K.; Ambily, S.; Prakash, Ajin; Safonova, Margarita; Murthy, Jayant

    2017-02-01

We have developed a low-cost off-the-shelf component star sensor (StarSense) for use in minisatellites and CubeSats to determine the attitude of a satellite in orbit. StarSense is an imaging camera with a limiting magnitude of 6.5, which extracts information from star patterns it records in the images. The star sensor implements a centroiding algorithm to find centroids of the stars in the image, a Geometric Voting algorithm for star pattern identification, and a QUEST algorithm for attitude quaternion calculation. Here, we describe a software package to evaluate the performance of these algorithms operating together as a single star sensor system. We simulate the ideal case where sky background and instrument errors are omitted, and a more realistic case where noise and camera parameters are added to the simulated images. We evaluate such performance parameters of the algorithms as attitude accuracy, calculation time, required memory, star catalog size, sky coverage, etc., and estimate the errors introduced by each algorithm. This software package is written for use in MATLAB. The testing is parametrized for different hardware parameters, such as the focal length of the imaging setup, the field of view (FOV) of the camera, angle measurement accuracy, distortion effects, etc., and therefore can be applied to evaluate the performance of such algorithms in any star sensor. For its hardware implementation on our StarSense, we are currently porting the code to functions written in C, with a view to easy implementation on any star sensor electronics hardware.
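The first stage of the pipeline, centroiding, is simple enough to sketch. The package described above is MATLAB/C; the snippet below is a generic intensity-weighted centroid in Python for illustration, and StarSense's exact implementation may differ:

```python
import numpy as np

def centroid(img, thresh):
    # Intensity-weighted centroid of the background-subtracted flux:
    # gives the sub-pixel star position fed to pattern identification.
    w = np.clip(img - thresh, 0.0, None)
    ys, xs = np.indices(img.shape)
    return (ys * w).sum() / w.sum(), (xs * w).sum() / w.sum()

img = np.zeros((9, 9))
img[4, 4], img[4, 5], img[5, 4] = 4.0, 2.0, 2.0   # tiny synthetic star
cy, cx = centroid(img, 0.5)                       # ~ (4.23, 4.23)
```

The sub-pixel result is what makes arcsecond-level attitude possible from a modest-resolution detector.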

  19. A fiber-optic sensor based on no-core fiber and Faraday rotator mirror structure

    NASA Astrophysics Data System (ADS)

    Lu, Heng; Wang, Xu; Zhang, Songling; Wang, Fang; Liu, Yufang

    2018-05-01

    An optical fiber sensor based on the single-mode/no-core/single-mode (SNS) core-offset technology along with a Faraday rotator mirror structure has been proposed and experimentally demonstrated. A transverse optical field distribution of self-imaging has been simulated and experimental parameters have been selected under theoretical guidance. Results of the experiments demonstrate that the temperature sensitivity of the sensor is 0.0551 nm/°C for temperatures between 25 and 80 °C, and the correlation coefficient is 0.99582. The concentration sensitivity of the device for sucrose and glucose solutions was found to be as high as 12.5416 and 6.02248 nm/(g/ml), respectively. Curves demonstrating a linear fit between wavelength shift and solution concentration for three different heavy metal solutions have also been derived on the basis of experimental results. The proposed fiber-optic sensor design provides valuable guidance for the measurement of concentration and temperature.
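The quoted sensitivities and correlation coefficient are linear-fit quantities; the procedure is routine but worth showing. The calibration numbers below are invented to mirror the 0.0551 nm/°C figure, not the paper's measured data:

```python
import numpy as np

# Hypothetical peak-wavelength readings (nm) versus temperature (degC),
# generated around a 0.0551 nm/degC trend with small read noise.
T = np.array([25.0, 35.0, 45.0, 55.0, 65.0, 80.0])
rng = np.random.default_rng(1)
wl = 1550.0 + 0.0551 * T + rng.normal(0.0, 0.01, T.size)

slope, intercept = np.polyfit(T, wl, 1)   # sensitivity in nm/degC
r = np.corrcoef(T, wl)[0, 1]              # correlation coefficient
```

The same fit applied to wavelength shift versus solution concentration yields the nm/(g/ml) sensitivities reported in the abstract.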

  20. Chemical bond imaging using higher eigenmodes of tuning fork sensors in atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Ebeling, Daniel; Zhong, Qigang; Ahles, Sebastian; Chi, Lifeng; Wegner, Hermann A.; Schirmeisen, André

    2017-05-01

    We demonstrate the ability of resolving the chemical structure of single organic molecules using non-contact atomic force microscopy with higher normal eigenmodes of quartz tuning fork sensors. In order to achieve submolecular resolution, CO-functionalized tips at low temperatures are used. The tuning fork sensors are operated in ultrahigh vacuum in the frequency modulation mode by exciting either their first or second eigenmode. Despite the high effective spring constant of the second eigenmode (on the order of several tens of kN/m), the force sensitivity is sufficiently high to achieve atomic resolution above the organic molecules. This is observed for two different tuning fork sensors with different tip geometries (small tip vs. large tip). These results represent an important step towards resolving the chemical structure of single molecules with multifrequency atomic force microscopy techniques where two or more eigenmodes are driven simultaneously.

  1. Single photon detection imaging of Cherenkov light emitted during radiation therapy

    NASA Astrophysics Data System (ADS)

    Adamson, Philip M.; Andreozzi, Jacqueline M.; LaRochelle, Ethan; Gladstone, David J.; Pogue, Brian W.

    2018-03-01

Cherenkov imaging during radiation therapy has been developed as a tool for dosimetry, which could have applications in patient delivery verification or in regular quality audit. The cameras used are intensified imaging sensors, either ICCD or ICMOS cameras, which provide (1) nanosecond time gating and (2) amplification by 10³-10⁴; together these allow (1) real-time capture at 10-30 frames per second, (2) sensitivity at the single-photon-event level, and (3) the ability to suppress background light from the ambient room. However, the capability to achieve single photon imaging has not been fully analyzed to date, and as such was the focus of this study. How a single photon event appears in amplified camera images of Cherenkov emission was characterized quantitatively with image processing. The signal seen at normal gain levels appears as a blur of about 90 counts in the CCD detector, after passing through the chain of photocathode detection, amplification through a microchannel plate PMT, excitation of a phosphor screen, and imaging onto the CCD. The analysis of single photon events requires careful interpretation of the fixed pattern noise, statistical quantum noise distributions, and the spatial spread of each pulse through the ICCD.
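Counting single-photon events in such frames reduces to finding isolated blurs above the noise floor. A minimal sketch with synthetic ~90-count blurs (threshold, blob width, and blob positions are illustrative, not the paper's calibration):

```python
import numpy as np

def count_photon_events(frame, thresh):
    # A pixel is an event peak if it is above threshold and is the
    # maximum of its 3x3 neighbourhood (borders skipped for simplicity).
    h, w = frame.shape
    n = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if frame[y, x] > thresh and frame[y, x] >= frame[y-1:y+2, x-1:x+2].max():
                n += 1
    return n

# Synthetic intensified frame: three ~90-count Gaussian photon blurs.
yy, xx = np.mgrid[0:32, 0:32]
frame = np.zeros((32, 32))
for cy, cx in [(8, 8), (20, 25), (15, 5)]:
    frame += 90.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 2.0)

n_events = count_photon_events(frame, 30.0)
```

On real ICCD data the threshold must sit above the fixed-pattern and quantum noise the abstract describes, and overlapping events need more careful treatment.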

  2. In-situ device integration of large-area patterned organic nanowire arrays for high-performance optical sensors

    PubMed Central

    Wu, Yiming; Zhang, Xiujuan; Pan, Huanhuan; Deng, Wei; Zhang, Xiaohong; Zhang, Xiwei; Jie, Jiansheng

    2013-01-01

Single-crystalline organic nanowires (NWs) are important building blocks for future low-cost and efficient nano-optoelectronic devices due to their extraordinary properties. However, it remains a critical challenge to achieve large-scale organic NW array assembly and device integration. Herein, we demonstrate a feasible one-step method for large-area patterned growth of cross-aligned single-crystalline organic NW arrays and their in-situ device integration for optical image sensors. The integrated image sensor circuitry contained a 10 × 10 pixel array in an area of 1.3 × 1.3 mm², showing high spatial resolution, excellent stability and reproducibility. More importantly, 100% of the pixels successfully operated at a high response speed with relatively small pixel-to-pixel variation. The high yield and high spatial resolution of the operational pixels, along with the high integration level of the device, clearly demonstrate the great potential of the one-step organic NW array growth and device construction approach for large-scale optoelectronic device integration. PMID:24287887

  3. DUSTER: demonstration of an integrated LWIR-VNIR-SAR imaging system

    NASA Astrophysics Data System (ADS)

    Wilson, Michael L.; Linne von Berg, Dale; Kruer, Melvin; Holt, Niel; Anderson, Scott A.; Long, David G.; Margulis, Yuly

    2008-04-01

The Naval Research Laboratory (NRL) and Space Dynamics Laboratory (SDL) are executing a joint effort, DUSTER (Deployable Unmanned System for Targeting, Exploitation, and Reconnaissance), to develop and test a new tactical sensor system specifically designed for Tier II UAVs. The system is composed of two coupled near-real-time sensors: EyePod (VNIR/LWIR ball gimbal) and NuSAR (L-band synthetic aperture radar). EyePod consists of a jitter-stabilized LWIR sensor coupled with a dual focal-length optical system and a bore-sighted high-resolution VNIR sensor. The dual focal-length design coupled with precision pointing and step-stare capabilities enables EyePod to conduct wide-area survey and high resolution inspection missions from a single flight pass. NuSAR is being developed with partners Brigham Young University (BYU) and Artemis, Inc. and consists of a wideband L-band SAR capable of large area survey and embedded real-time image formation. Both sensors employ standard Ethernet interfaces and provide geo-registered NITFS output imagery. In the fall of 2007, field tests were conducted with both sensors, results of which will be presented.

  4. Probing mass-transport and binding inhomogeneity in macromolecular interactions by molecular interferometric imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Wang, Xuefeng; Nolte, David

    2009-02-01

    In solid-support immunoassays, the transport of target analyte in sample solution to capture molecules on the sensor surface controls the detected binding signal. Depletion of the target analyte in the sample solution adjacent to the sensor surface leads to deviations from ideal association, and causes inhomogeneity of surface binding as analyte concentration varies spatially across the sensor surface. In the field of label-free optical biosensing, studies of mass-transport-limited reaction kinetics have focused on the average response on the sensor surface, but have not addressed binding inhomogeneities caused by mass-transport limitations. In this paper, we employ Molecular Interferometric Imaging (MI2) to study mass-transport-induced inhomogeneity of analyte binding within a single protein spot. Rabbit IgG binding to immobilized protein A/G was imaged at various concentrations and under different flow rates. In the mass-transport-limited regime, enhanced binding at the edges of the protein spots was caused by depletion of analyte towards the center of the protein spots. The magnitude of the inhomogeneous response was a function of analyte reaction rate and sample flow rate.

  5. Room temperature 1040fps, 1 megapixel photon-counting image sensor with 1.1um pixel pitch

    NASA Astrophysics Data System (ADS)

    Masoodian, S.; Ma, J.; Starkey, D.; Wang, T. J.; Yamashita, Y.; Fossum, E. R.

    2017-05-01

    A 1Mjot single-bit quanta image sensor (QIS) implemented in a stacked backside-illuminated (BSI) process is presented. This is the first work to report a megapixel photon-counting CMOS-type image sensor to the best of our knowledge. A QIS with 1.1μm pitch tapered-pump-gate jots is implemented with cluster-parallel readout, where each cluster of jots is associated with its own dedicated readout electronics stacked under the cluster. Power dissipation is reduced with this cluster readout because of the reduced column bus parasitic capacitance, which is important for the development of 1Gjot arrays. The QIS functions at 1040fps with binary readout and dissipates only 17.6mW, including I/O pads. The readout signal chain uses a fully differential charge-transfer amplifier (CTA) gain stage before a 1b-ADC to achieve an energy/bit FOM of 16.1pJ/b and 6.9pJ/b for the whole sensor and gain stage+ADC, respectively. Analog outputs with on-chip gain are implemented for pixel characterization purposes.
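The single-bit jot statistics behind QIS photon counting are compact: with Poisson photoelectron arrivals of mean H (the quanta exposure), each jot reads 1 with probability 1 - e^(-H), so exposure can be recovered from the measured bit density as H = -ln(1 - D). A Monte Carlo check of this standard relation (numbers illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
H_true = 1.2                                   # mean photoelectrons per jot
bits = rng.poisson(H_true, size=1_000_000) >= 1  # single-bit readout: >=1 e-
D = bits.mean()                                # measured bit density
H_est = -np.log(1.0 - D)                       # invert D = 1 - exp(-H)
```

This inversion is what lets a binary sensor running at 1040 fps deliver linear grayscale images after temporal oversampling.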

  6. High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor.

    PubMed

    Ren, Ximing; Connolly, Peter W R; Halimi, Abderrahim; Altmann, Yoann; McLaughlin, Stephen; Gyongy, Istvan; Henderson, Robert K; Buller, Gerald S

    2018-03-05

    A CMOS single-photon avalanche diode (SPAD) quanta image sensor is used to reconstruct depth and intensity profiles when operating in a range-gated mode used in conjunction with pulsed laser illumination. By designing the CMOS SPAD array to acquire photons within a pre-determined temporal gate, the need for timing circuitry was avoided and it was therefore possible to have an enhanced fill factor (61% in this case) and a frame rate (100,000 frames per second) that is more difficult to achieve in a SPAD array which uses time-correlated single-photon counting. When coupled with appropriate image reconstruction algorithms, millimeter resolution depth profiles were achieved by iterating through a sequence of temporal delay steps in synchronization with laser illumination pulses. For photon data with high signal-to-noise ratios, depth images with millimeter scale depth uncertainty can be estimated using a standard cross-correlation approach. To enhance the estimation of depth and intensity images in the sparse photon regime, we used a bespoke clustering-based image restoration strategy, taking into account the binomial statistics of the photon data and non-local spatial correlations within the scene. For sparse photon data with total exposure times of 75 ms or less, the bespoke algorithm can reconstruct depth images with millimeter scale depth uncertainty at a stand-off distance of approximately 2 meters. We demonstrate a new approach to single-photon depth and intensity profiling using different target scenes, taking full advantage of the high fill-factor, high frame rate and large array format of this range-gated CMOS SPAD array.
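The standard cross-correlation depth estimate mentioned above can be sketched with a synthetic gate scan: slide the (assumed Gaussian) system response over the per-delay photon counts and convert the best-matching delay to range via d = c*t/2. Pulse shape, gate step, and target distance below are illustrative:

```python
import numpy as np

c = 3e8                                   # speed of light, m/s
delays = np.arange(0.0, 20e-9, 0.5e-9)    # gate delay steps (s)
t0_true = 13e-9                           # true round-trip time (~1.95 m)
sigma = 1e-9                              # effective system response width

# Noiseless photon counts versus gate delay (high-SNR regime).
counts = 1000.0 * np.exp(-((delays - t0_true) / sigma) ** 2 / 2)

# Cross-correlate the scan with the known response at candidate delays.
taus = np.arange(0.0, 20e-9, 0.05e-9)
scores = [np.dot(counts, np.exp(-((delays - tau) / sigma) ** 2 / 2)) for tau in taus]
t0_est = taus[int(np.argmax(scores))]
depth = c * t0_est / 2.0                  # estimated range in metres
```

In the sparse-photon regime this simple estimator degrades, which is where the paper's clustering-based restoration with binomial photon statistics takes over.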

  7. Real-time, wide-area hyperspectral imaging sensors for standoff detection of explosives and chemical warfare agents

    NASA Astrophysics Data System (ADS)

    Gomer, Nathaniel R.; Tazik, Shawna; Gardner, Charles W.; Nelson, Matthew P.

    2017-05-01

    Hyperspectral imaging (HSI) is a valuable tool for the detection and analysis of targets located within complex backgrounds. HSI can detect threat materials on environmental surfaces, where the concentration of the target of interest is often very low and is typically found within complex scenery. Unfortunately, current generation HSI systems have size, weight, and power limitations that prohibit their use for field-portable and/or real-time applications. Current generation systems commonly provide an inefficient area search rate, require close proximity to the target for screening, and/or are not capable of making real-time measurements. ChemImage Sensor Systems (CISS) is developing a variety of real-time, wide-field hyperspectral imaging systems that utilize shortwave infrared (SWIR) absorption and Raman spectroscopy. SWIR HSI sensors provide wide-area imagery with at or near real time detection speeds. Raman HSI sensors are being developed to overcome two obstacles present in standard Raman detection systems: slow area search rate (due to small laser spot sizes) and lack of eye-safety. SWIR HSI sensors have been integrated into mobile, robot based platforms and handheld variants for the detection of explosives and chemical warfare agents (CWAs). In addition, the fusion of these two technologies into a single system has shown the feasibility of using both techniques concurrently to provide higher probability of detection and lower false alarm rates. This paper will provide background on Raman and SWIR HSI, discuss the applications for these techniques, and provide an overview of novel CISS HSI sensors focusing on sensor design and detection results.

  8. Exploiting Satellite Focal Plane Geometry for Automatic Extraction of Traffic Flow from Single Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Krauß, T.

    2014-11-01

The focal plane assembly of most pushbroom scanner satellites is built up in a way that the different multispectral bands, or the multispectral and panchromatic bands, are not all acquired at exactly the same time. This effect is due to offsets of some millimeters between the CCD lines in the focal plane. Exploiting this special configuration allows the detection of objects moving during this small time span. In this paper we present a method for automatic detection and extraction of moving objects - mainly traffic - from single very high resolution optical satellite imagery of different sensors. The sensors investigated are WorldView-2, RapidEye, Pléiades and also the new SkyBox satellites. Different sensors require different approaches for detecting moving objects. Since a moving object is mapped to different positions in different spectral bands, the change of spectral properties between bands also has to be taken into account. When the main offset in the focal plane lies between the multispectral and the panchromatic CCD lines, as for Pléiades, an approach based on weighted integration to obtain nearly identical images is investigated. Other approaches for RapidEye and WorldView-2 are also shown. From these intermediate bands difference images are calculated, and a method for detecting the moving objects from these difference images is proposed. Based on these methods, images from different sensors are processed and the results are assessed for detection quality - how many moving objects are detected and how many are missed - and for accuracy - how accurately the speed and size of the objects are derived. Finally the results are discussed and an outlook on possible improvements towards operational processing is presented.
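Once the inter-band displacement of an object is measured, the speed estimate behind this approach is a one-line computation. The numbers below are illustrative, not taken from any of the listed sensors:

```python
# Ground speed from the inter-band time offset of a pushbroom sensor.
gsd = 0.5          # ground sample distance, m/pixel (illustrative)
dt = 0.2           # acquisition time lag between the two bands, s
shift_px = 3.4     # measured displacement of the object between bands

speed = shift_px * gsd / dt     # ground speed in m/s (about 8.5 here)
```

The accuracy of the derived speed is therefore limited jointly by the sub-pixel matching accuracy and by how well the inter-band time offset is known.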

  9. A novel digital image sensor with row wise gain compensation for Hyper Spectral Imager (HySI) application

    NASA Astrophysics Data System (ADS)

    Lin, Shengmin; Lin, Chi-Pin; Wang, Weng-Lyang; Hsiao, Feng-Ke; Sikora, Robert

    2009-08-01

A 256x512 element digital image sensor has been developed which has a large pixel size, slow scan and low power consumption for Hyper Spectral Imager (HySI) applications. The device is a mixed-mode, system-on-chip (SoC) IC. It combines analog circuitry, digital circuitry and optical sensor circuitry on a single chip. The chip integrates a 256x512 active pixel sensor array, a programmable gain amplifier (PGA) for row-wise gain setting, an I2C interface, SRAM, a 12-bit analog-to-digital converter (ADC), a voltage regulator, low-voltage differential signaling (LVDS) and a timing generator. The device provides 256 pixels of spatial resolution and 512 bands of spectral resolution ranging from 400 nm to 950 nm in wavelength. In row-wise gain readout mode, a different gain can be set on each row of the photo detector by storing the gain-setting data in the SRAM through the I2C interface. This unique row-wise gain setting can be used to compensate for the non-uniformity of the silicon spectral response, which makes the device well suited to hyperspectral imager applications. The HySI camera, located on board the Chandrayaan-1 satellite, was successfully launched to the moon on Oct. 22, 2008. The device is currently mapping the moon and sending back excellent images of the moon surface. The device design and the moon image data will be presented in the paper.

  10. Relating transverse ray error and light fields in plenoptic camera images

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim; Tyo, J. Scott

    2013-09-01

Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
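The 2D spatial + 2D angular indexing described above amounts to a reshape of the raw sensor image, assuming an idealized square lenslet grid with p x p pixels under each lenslet (real systems need calibration for lenslet centers and the circular pupil images):

```python
import numpy as np

def decode_lightfield(raw, p):
    # (s, t) indexes the lenslet (spatial sample); (u, v) indexes the
    # pixel beneath it (angular sample): L[s, t, u, v].
    S, T = raw.shape[0] // p, raw.shape[1] // p
    return raw.reshape(S, p, T, p).transpose(0, 2, 1, 3)

raw = np.arange(36.0).reshape(6, 6)   # toy raw frame: 2 x 2 lenslets, p = 3
lf = decode_lightfield(raw, 3)        # 4D light field, shape (2, 2, 3, 3)
```

Fixing (u, v) and varying (s, t) extracts one sub-aperture view, which is the operation that links the light field to scene depth.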

  11. Acoustic emission linear pulse holography

    DOEpatents

    Collins, H.D.; Busse, L.J.; Lemon, D.K.

    1983-10-25

    This device relates to the concept of and means for performing Acoustic Emission Linear Pulse Holography, which combines the advantages of linear holographic imaging and Acoustic Emission into a single non-destructive inspection system. This unique system produces a chronological, linear holographic image of a flaw by utilizing the acoustic energy emitted during crack growth. The innovation is the concept of utilizing the crack-generated acoustic emission energy to generate a chronological series of images of a growing crack by applying linear, pulse holographic processing to the acoustic emission data. The process is implemented by placing on a structure an array of piezoelectric sensors (typically 16 or 32 of them) near the defect location. A reference sensor is placed between the defect and the array.

  12. A robust approach for a filter-based monocular simultaneous localization and mapping (SLAM) system.

    PubMed

    Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni

    2013-07-03

    Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which moves freely through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of the parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.
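
    The "angular measurements only" point can be made concrete with a minimal sketch. This is not the authors' initialization scheme; it assumes a plain pinhole model with made-up intrinsics, and uses the common inverse-depth parametrization simply to illustrate why a bearing alone leaves range undetermined:

    ```python
    import numpy as np

    def pixel_to_bearing(u, v, fx, fy, cx, cy):
        """Back-project a pixel into a unit bearing vector (camera frame).
        A pinhole camera is a projective sensor: one observation constrains
        direction only, not range -- hence the need for special feature
        initialization in monocular SLAM."""
        ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
        return ray / np.linalg.norm(ray)

    def inverse_depth_point(cam_pos, bearing, rho):
        """Inverse-depth parametrization (illustrative): the feature lies at
        cam_pos + (1/rho) * bearing; rho = 1/range stays numerically
        well-behaved even for distant, poorly observed features."""
        return cam_pos + bearing / rho

    # Hypothetical camera: f = 500 px, principal point (320, 240)
    b = pixel_to_bearing(400.0, 300.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
    p = inverse_depth_point(np.zeros(3), b, rho=0.1)  # assumed range of 10 m
    ```

    A second observation from a displaced camera pose is what finally constrains `rho`; until then the filter carries the depth uncertainty explicitly.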

  13. Atomic-Scale Nuclear Spin Imaging Using Quantum-Assisted Sensors in Diamond

    NASA Astrophysics Data System (ADS)

    Ajoy, A.; Bissbort, U.; Lukin, M. D.; Walsworth, R. L.; Cappellaro, P.

    2015-01-01

    Nuclear spin imaging at the atomic level is essential for the understanding of fundamental biological phenomena and for applications such as drug discovery. The advent of novel nanoscale sensors promises to achieve the long-standing goal of single-protein, high spatial-resolution structure determination under ambient conditions. In particular, quantum sensors based on the spin-dependent photoluminescence of nitrogen-vacancy (NV) centers in diamond have recently been used to detect nanoscale ensembles of external nuclear spins. While NV sensitivity is approaching single-spin levels, extracting relevant information from a very complex structure is a further challenge since it requires not only the ability to sense the magnetic field of an isolated nuclear spin but also to achieve atomic-scale spatial resolution. Here, we propose a method that, by exploiting the coupling of the NV center to an intrinsic quantum memory associated with the nitrogen nuclear spin, can reach a tenfold improvement in spatial resolution, down to atomic scales. The spatial resolution enhancement is achieved through coherent control of the sensor spin, which creates a dynamic frequency filter selecting only a few nuclear spins at a time. We propose and analyze a protocol that would allow not only sensing individual spins in a complex biomolecule, but also unraveling couplings among them, thus elucidating local characteristics of the molecule structure.

  14. Development of a fusion approach selection tool

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Zeng, Y.

    2015-06-01

    During the last decades, the number and quality of available remote sensing satellite sensors for Earth observation have grown significantly. The amount of available multi-sensor images, along with their increased spatial and spectral resolution, presents new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST), the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means of producing images containing information that is not inherent in any single image alone. In the meantime, the user has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with a vast number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolutions and bands. Image fusion can be a machine- and time-consuming endeavour. In addition, it requires knowledge about remote sensing, image fusion, digital image processing and the application. FAST shall provide the user with a quick overview of processing flows to choose from to reach the target. FAST will ask for available images, application parameters and desired information, and process this input to produce a workflow for quickly obtaining the best results. It will optimize data and image fusion techniques. It provides an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.

  15. Gun muzzle flash detection using a CMOS single photon avalanche diode

    NASA Astrophysics Data System (ADS)

    Merhav, Tomer; Savuskan, Vitali; Nemirovsky, Yael

    2013-10-01

    Si-based sensors, in particular CMOS image sensors, have revolutionized low-cost imaging systems, but to date have hardly been considered as possible candidates for gun muzzle flash detection, due to performance limitations and low SNR in the visible spectrum. In this study, a CMOS Single Photon Avalanche Diode (SPAD) module is used to record and sample muzzle flash events in the visible spectrum from representative weapons common on the modern battlefield. SPADs possess two crucial properties for muzzle flash imaging: very high photon detection sensitivity, coupled with a unique ability to convert the optical signal to a digital signal at the source pixel, thus practically eliminating readout noise. This enables high sampling frequencies in the kilohertz range without SNR degradation, in contrast to regular CMOS image sensors. To date, the SPAD has not been utilized for flash detection in an uncontrolled environment, such as gun muzzle flash detection. Gun propellant manufacturers use alkali salts to suppress secondary flashes ignited during the muzzle flash event. Common alkali salts are compounds based on potassium or sodium, with spectral emission lines around 769 nm and 589 nm, respectively. A narrow-band filter around the potassium emission doublet is used in this study to favor the muzzle flash signal over solar radiation. This research demonstrates the SPAD's ability to accurately sample and reconstruct the temporal behavior of the muzzle flash in the visible wavelengths under the specified imaging conditions. The reconstructed signal is clearly distinguishable from background clutter through exploitation of flash temporal characteristics.

  16. Development of a prototype sensor system for ultra-high-speed LDA-PIV

    NASA Astrophysics Data System (ADS)

    Griffiths, Jennifer A.; Royle, Gary J.; Bohndiek, Sarah E.; Turchetta, Renato; Chen, Daoyi

    2008-04-01

    Laser Doppler Anemometry (LDA) and Particle Image Velocimetry (PIV) are commonly used in the analysis of particulates in fluid flows. Despite the successes of these techniques, current instrumentation has placed limitations on the size and shape of the particles undergoing measurement, thus restricting the available data for the many industrial processes now utilising nano/micro particles. Data for spherical and irregularly shaped particles down to the order of 0.1 µm is now urgently required. Therefore, an ultra-fast LDA-PIV system is being constructed for the acquisition of this data. A key component of this instrument is the PIV optical detection system. Both the size and speed of the particles under investigation place challenging constraints on the system specifications: magnification is required within the system in order to visualise particles of the size of interest, but this restricts the corresponding field of view in a linearly inverse manner. Thus, for several images of a single particle in a fast fluid flow to be obtained, the image capture rate and sensitivity of the system must be sufficiently high. In order to fulfil the instrumentation criteria, the optical detection system chosen is a high-speed, lensed, digital imaging system based on state-of-the-art CMOS technology - the 'Vanilla' sensor developed by the UK based MI3 consortium. This novel Active Pixel Sensor is capable of high frame rates and sparse readout. When coupled with an image intensifier, it will have single photon detection capabilities. An FPGA based DAQ will allow real-time operation with minimal data transfer.

  17. Open architecture of smart sensor suites

    NASA Astrophysics Data System (ADS)

    Müller, Wilmuth; Kuwertz, Achim; Grönwall, Christina; Petersson, Henrik; Dekker, Rob; Reinert, Frank; Ditzel, Maarten

    2017-10-01

    Experiences from recent conflicts show the strong need for smart sensor suites comprising different multi-spectral imaging sensors as core elements as well as additional non-imaging sensors. Smart sensor suites should be part of a smart sensor network - a network of sensors, databases, evaluation stations and user terminals. Its goal is to optimize the use of various information sources for military operations such as situation assessment, intelligence, surveillance, reconnaissance, target recognition and tracking. Such a smart sensor network will enable commanders to achieve higher levels of situational awareness. Within the study at hand, an open system architecture was developed in order to increase the efficiency of sensor suites. The open system architecture for smart sensor suites, based on a system-of-systems approach, enables combining different sensors in multiple physical configurations, such as distributed sensors, co-located sensors combined in a single package, tower-mounted sensors, sensors integrated in a mobile platform, and trigger sensors. The architecture was derived from a set of system requirements and relevant scenarios. Its mode of operation is adaptable to a series of scenarios with respect to relevant objects of interest, activities to be observed, available transmission bandwidth, etc. The presented open architecture is designed in accordance with the NATO Architecture Framework (NAF). The architecture allows smart sensor suites to be part of a surveillance network, linked e.g. to a sensor planning system and a C4ISR center, and to be used in combination with future RPAS (Remotely Piloted Aircraft Systems) for supporting a more flexible dynamic configuration of RPAS payloads.

  18. Chemistry integrated circuit: chemical system on a complementary metal oxide semiconductor integrated circuit.

    PubMed

    Nakazato, Kazuo

    2014-03-28

    By integrating chemical reactions on a large-scale integration (LSI) chip, new types of device can be created. For biomedical applications, monolithically integrated sensor arrays for potentiometric, amperometric and impedimetric sensing of biomolecules have been developed. The potentiometric sensor array detects pH and redox reaction as a statistical distribution of fluctuations in time and space. For the amperometric sensor array, a microelectrode structure for measuring multiple currents at high speed has been proposed. The impedimetric sensor array is designed to measure impedance up to 10 MHz. The multimodal sensor array will enable synthetic analysis and make it possible to standardize biosensor chips. Another approach is to create new functional devices by integrating molecular systems with LSI chips, for example image sensors that incorporate biological materials with a sensor array. The quantum yield of the photoelectric conversion of photosynthesis is 100%, which is extremely difficult to achieve by artificial means. In a recently developed process, a molecular wire is plugged directly into a biological photosynthetic system to efficiently conduct electrons to a gold electrode. A single photon can be detected at room temperature using such a system combined with a molecular single-electron transistor.

  19. Long-range time-of-flight scanning sensor based on high-speed time-correlated single-photon counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarthy, Aongus; Collins, Robert J.; Krichel, Nils J.

    2009-11-10

    We describe a scanning time-of-flight system which uses the time-correlated single-photon counting technique to produce three-dimensional depth images of distant, noncooperative surfaces when these targets are illuminated by a kHz to MHz repetition rate pulsed laser source. The data for the scene are acquired using a scanning optical system and an individual single-photon detector. Depth images have been successfully acquired with centimeter xyz resolution, in daylight conditions, for low-signature targets in field trials at distances of up to 325 m using an output illumination with an average optical power of less than 50 µW.
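
    The depth recovery behind time-correlated single-photon counting reduces to the range equation d = c·t/2 applied to the peak of a photon-arrival-time histogram. The sketch below is a simplification (a real system fits the instrument response function rather than picking a peak bin), and the timestamps, jitter and bin width are hypothetical values chosen so the round trip corresponds to roughly the 325 m range mentioned in the record:

    ```python
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def tof_depth(photon_times_ns, bin_ns=0.05):
        """Estimate one-way target range from TCSPC photon arrival times.
        Histogram the timestamps, take the peak bin as the round-trip
        time t, and convert via d = c * t / 2."""
        hist, edges = np.histogram(
            photon_times_ns,
            bins=np.arange(0.0, max(photon_times_ns) + bin_ns, bin_ns))
        t_peak_ns = edges[np.argmax(hist)] + bin_ns / 2
        return C * (t_peak_ns * 1e-9) / 2  # metres

    # Simulated return: photons clustered around a ~2168 ns round trip
    rng = np.random.default_rng(0)
    times = rng.normal(2168.0, 0.1, 500)
    print(round(tof_depth(times), 1))
    ```

    A 0.05 ns bin corresponds to a depth quantization of about 7.5 mm, which is consistent with the centimeter-scale resolution reported.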

  20. An update of commercial infrared sensing and imaging instruments

    NASA Technical Reports Server (NTRS)

    Kaplan, Herbert

    1989-01-01

    A classification of infrared sensing instruments by type and application, listing commercially available instruments, from single point thermal probes to on-line control sensors, to high speed, high resolution imaging systems is given. A review of performance specifications follows, along with a discussion of typical thermographic display approaches utilized by various imager manufacturers. An update report on new instruments, new display techniques and newly introduced features of existing instruments is given.

  1. An SSM/I radiometer simulator for studies of microwave emission from soil. [Special Sensor Microwave/Imager

    NASA Technical Reports Server (NTRS)

    Galantowicz, J. F.; England, A. W.

    1992-01-01

    A ground-based simulator of the Defense Meteorological Satellite Program Special Sensor Microwave/Imager (DMSP SSM/I) is described, and its integration with micrometeorological instrumentation for an investigation of microwave emission from moist and frozen soils is discussed. The simulator consists of three single-polarization radiometers which are capable of both Dicke radiometer and total power radiometer modes of operation. The radiometers are designed for untended operation through a local computer and a daily telephone link to a laboratory. The functional characteristics of the radiometers are described, together with their field deployment configuration and an example of performance parameters.

  2. 1T Pixel Using Floating-Body MOSFET for CMOS Image Sensors.

    PubMed

    Lu, Guo-Neng; Tournier, Arnaud; Roy, François; Deschamps, Benoît

    2009-01-01

    We present a single-transistor pixel for CMOS image sensors (CIS). It is a floating-body MOSFET structure, which is used as photo-sensing device and source-follower transistor, and can be controlled to store and evacuate charges. Our investigation into this 1T pixel structure includes modeling to obtain analytical description of conversion gain. Model validation has been done by comparing theoretical predictions and experimental results. On the other hand, the 1T pixel structure has been implemented in different configurations, including rectangular-gate and ring-gate designs, and variations of oxidation parameters for the fabrication process. The pixel characteristics are presented and discussed.

  3. Land mine detection using multispectral image fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, G.A.; Sengupta, S.K.; Aimonetti, W.D.

    1995-03-29

    Our system fuses information contained in registered images from multiple sensors to reduce the effects of clutter and improve the ability to detect surface and buried land mines. The sensor suite currently consists of a camera that acquires images in six bands (400 nm, 500 nm, 600 nm, 700 nm, 800 nm and 900 nm). Past research has shown that it is extremely difficult to distinguish land mines from background clutter in images obtained from a single sensor. It is hypothesized, however, that information fused from a suite of various sensors is likely to provide better detection reliability, because the suite of sensors detects a variety of physical properties that are more separable in feature space. The materials surrounding the mines can include natural materials (soil, rocks, foliage, water, etc.) and some artifacts. We use a supervised learning pattern recognition approach to detecting the metal and plastic land mines. The overall process consists of four main parts: preprocessing, feature extraction, feature selection, and classification. These parts are used in a two-step process to classify a subimage. We extract features from the images, and use feature selection algorithms to select only the most important features according to their contribution to correct detections. This allows us to save computational complexity and determine which of the spectral bands add value to the detection system. The most important features from the various sensors are fused using a supervised learning pattern classifier (the probabilistic neural network). We present results of experiments to detect land mines from real data collected from an airborne platform, and evaluate the usefulness of fusing feature information from multiple spectral bands.
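
    The probabilistic neural network named as the final classifier is, in its simplest form, a Parzen-window density classifier: each class score is a sum of Gaussian kernels centered on that class's training samples. The sketch below illustrates only that idea; the two-feature "mine vs. clutter" data and the smoothing width `sigma` are toy assumptions, not the authors' features:

    ```python
    import numpy as np

    def pnn_classify(x, train_X, train_y, sigma=0.5):
        """Minimal probabilistic neural network (Parzen-window) classifier:
        estimate each class's density at x with a Gaussian kernel sum and
        return the class with the largest response."""
        scores = {}
        for c in np.unique(train_y):
            d2 = np.sum((train_X[train_y == c] - x) ** 2, axis=1)
            scores[c] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
        return max(scores, key=scores.get)

    # Toy two-band features: class 1 = mine-like, class 0 = clutter-like
    X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
    y = np.array([0, 0, 1, 1])
    print(pnn_classify(np.array([0.85, 0.85]), X, y))  # nearer the class-1 cluster
    ```

    Feature selection, as described in the abstract, would shrink the columns of `X` to the bands that actually contribute to correct detections before this stage.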

  4. Airborne net-centric multi-INT sensor control, display, fusion, and exploitation systems

    NASA Astrophysics Data System (ADS)

    Linne von Berg, Dale C.; Lee, John N.; Kruer, Melvin R.; Duncan, Michael D.; Olchowski, Fred M.; Allman, Eric; Howard, Grant

    2004-08-01

    The NRL Optical Sciences Division has initiated a multi-year effort to develop and demonstrate an airborne net-centric suite of multi-intelligence (multi-INT) sensors and exploitation systems for real-time target detection and targeting product dissemination. The goal of this Net-centric Multi-Intelligence Fusion Targeting Initiative (NCMIFTI) is to develop an airborne real-time intelligence gathering and targeting system that can be used to detect concealed, camouflaged, and mobile targets. The multi-INT sensor suite will include high-resolution visible/infrared (EO/IR) dual-band cameras, hyperspectral imaging (HSI) sensors in the visible-to-near infrared, short-wave and long-wave infrared (VNIR/SWIR/LWIR) bands, Synthetic Aperture Radar (SAR), electronics intelligence sensors (ELINT), and off-board networked sensors. Other sensors are also being considered for inclusion in the suite to address unique target detection needs. Integrating a suite of multi-INT sensors on a single platform should optimize real-time fusion of the on-board sensor streams, thereby improving the detection probability and reducing the false alarms that occur in reconnaissance systems that use single-sensor types on separate platforms, or that use independent target detection algorithms on multiple sensors. In addition to the integration and fusion of the multi-INT sensors, the effort is establishing an open-systems net-centric architecture that will provide a modular "plug and play" capability for additional sensors and system components and provide distributed connectivity to multiple sites for remote system control and exploitation.

  5. Exploiting the speckle-correlation scattering matrix for a compact reference-free holographic image sensor

    PubMed Central

    Lee, KyeoReh; Park, YongKeun

    2016-01-01

    The word 'holography' means a drawing that contains all of the information for light: both amplitude and wavefront. However, because of the insufficient bandwidth of current electronics, the direct measurement of the wavefront of light has not yet been achieved. Though reference-field-assisted interferometric methods have been utilized in numerous applications, introducing a reference field raises several fundamental and practical issues. Here we demonstrate a reference-free holographic image sensor. To achieve this, we propose a speckle-correlation scattering matrix approach; light-field information passing through a thin disordered layer is recorded and retrieved from a single-shot recording of speckle intensity patterns. Self-interference via diffusive scattering enables access to impinging light-field information, when light transport in the diffusive layer is precisely calibrated. As a proof-of-concept, we demonstrate direct holographic measurements of three-dimensional optical fields using a compact device consisting of a regular image sensor and a diffusor. PMID:27796290

  6. Exploiting the speckle-correlation scattering matrix for a compact reference-free holographic image sensor.

    PubMed

    Lee, KyeoReh; Park, YongKeun

    2016-10-31

    The word 'holography' means a drawing that contains all of the information for light: both amplitude and wavefront. However, because of the insufficient bandwidth of current electronics, the direct measurement of the wavefront of light has not yet been achieved. Though reference-field-assisted interferometric methods have been utilized in numerous applications, introducing a reference field raises several fundamental and practical issues. Here we demonstrate a reference-free holographic image sensor. To achieve this, we propose a speckle-correlation scattering matrix approach; light-field information passing through a thin disordered layer is recorded and retrieved from a single-shot recording of speckle intensity patterns. Self-interference via diffusive scattering enables access to impinging light-field information, when light transport in the diffusive layer is precisely calibrated. As a proof-of-concept, we demonstrate direct holographic measurements of three-dimensional optical fields using a compact device consisting of a regular image sensor and a diffusor.

  7. Bioinspired design of a polymer gel sensor for the realization of extracellular Ca2+ imaging

    NASA Astrophysics Data System (ADS)

    Ishiwari, Fumitaka; Hasebe, Hanako; Matsumura, Satoko; Hajjaj, Fatin; Horii-Hayashi, Noriko; Nishi, Mayumi; Someya, Takao; Fukushima, Takanori

    2016-04-01

    Although the role of extracellular Ca2+ draws increasing attention as a messenger in intercellular communications, there is currently no tool available for imaging Ca2+ dynamics in extracellular regions. Here we report the first solid-state fluorescent Ca2+ sensor that fulfills the essential requirements for realizing extracellular Ca2+ imaging. Inspired by natural extracellular Ca2+-sensing receptors, we designed a particular type of chemically crosslinked polyacrylic acid gel, which can undergo single-chain aggregation in the presence of Ca2+. By attaching an aggregation-induced emission luminogen to the polyacrylic acid as a pendant, the conformational state of the main chain at a given Ca2+ concentration is successfully translated into a fluorescence readout. The Ca2+ sensor has a millimolar-order apparent dissociation constant compatible with extracellular Ca2+ concentrations, and exhibits sufficient dynamic range and excellent selectivity in the presence of physiological concentrations of biologically relevant ions, thus enabling monitoring of submillimolar fluctuations of Ca2+ in flowing analytes containing millimolar Ca2+ concentrations.
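
    Why a millimolar-order dissociation constant matters can be seen from the single-site binding isotherm θ = [Ca2+] / (Kd + [Ca2+]): sensitivity to concentration changes is greatest when Kd sits near the working concentration. The Kd of 1 mM below is an assumed illustrative value, not the sensor's measured constant:

    ```python
    def fraction_bound(ca_mM, kd_mM=1.0):
        """Single-site binding isotherm theta = [Ca] / (Kd + [Ca]).
        A millimolar Kd (assumed 1 mM here for illustration) centers the
        dynamic range on extracellular Ca2+ levels of roughly 1-2 mM."""
        return ca_mM / (kd_mM + ca_mM)

    # A submillimolar fluctuation around ~1 mM changes occupancy measurably:
    print(round(fraction_bound(1.5) - fraction_bound(1.0), 3))  # 0.1
    ```

    With a micromolar Kd, by contrast, the sensor would already be saturated at millimolar Ca2+ and the same fluctuation would be nearly invisible.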

  8. High-Accuracy Multisensor Geolocation Technology to Support Geophysical Data Collection at MEC Sites

    DTIC Science & Technology

    2012-12-01

    image with intensity data in a single step. Flash LiDAR can use both basic solutions to emit laser, either a single pulse with large aperture will... LASER SENSOR DEVELOPMENTS ...and a terrestrial laser scanner (TLS). State-of-the-art GPS navigation allows for cm-accurate positioning in open areas where a sufficient number...

  9. Design of a Low-Light-Level Image Sensor with On-Chip Sigma-Delta Analog-to- Digital Conversion

    NASA Technical Reports Server (NTRS)

    Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.; Fossum, Eric R.

    1993-01-01

    The design and projected performance of a low-light-level active-pixel-sensor (APS) chip with semi-parallel analog-to-digital (A/D) conversion is presented. The individual elements have been fabricated and tested using MOSIS 2 µm CMOS technology, although the integrated system has not yet been fabricated. The imager consists of a 128 x 128 array of active pixels at a 50 µm pitch. Each column of pixels shares a 10-bit A/D converter based on first-order oversampled sigma-delta (ΣΔ) modulation. The 10-bit outputs of each converter are multiplexed and read out through a single set of outputs. A semi-parallel architecture is chosen to achieve 30 frames/second operation even at low light levels. The sensor is designed for less than 12 e^- rms noise performance.
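
    The principle of first-order oversampled sigma-delta conversion can be sketched behaviorally: each clock cycle the input minus the previous 1-bit feedback is integrated, a comparator produces the next bit, and averaging (decimating) the bitstream recovers the input value. This is a generic textbook model, not the column converter of this chip; the 0.5 threshold and sample count are assumptions:

    ```python
    def sigma_delta_1st_order(x, n_samples):
        """First-order sigma-delta modulation of a constant input x in [0, 1).
        acc integrates the error between the input and the 1-bit DAC
        feedback; the comparator output is 1 when acc crosses the threshold.
        The mean of the bitstream approximates x to within ~1/n_samples."""
        acc, bits = 0.0, []
        for _ in range(n_samples):
            acc += x - (bits[-1] if bits else 0)
            bits.append(1 if acc >= 0.5 else 0)
        return bits

    bits = sigma_delta_1st_order(0.3, 1024)
    print(sum(bits) / len(bits))  # ~0.3 after decimation
    ```

    This is why the architecture suits a semi-parallel imager: the per-column hardware is just an integrator, a comparator and a 1-bit DAC, with resolution bought by oversampling in time.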

  10. Ultrafast Radiation Detection by Modulation of an Optical Probe Beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vernon, S P; Lowry, M E

    2006-02-22

    We describe a new class of radiation sensor that utilizes optical interferometry to measure radiation-induced changes in the optical refractive index of a semiconductor sensor medium. Radiation absorption in the sensor material produces a transient, non-equilibrium, electron-hole pair distribution that locally modifies the complex optical refractive index of the sensor medium. Changes in the real (imaginary) part of the local refractive index produce a differential phase shift (absorption) of an optical probe used to interrogate the sensor material. In contrast to conventional radiation detectors, where signal levels are proportional to the incident energy, signal levels in these optical sensors are proportional to the incident radiation energy flux. This allows for reduction of the sensor form factor with no degradation in detection sensitivity. Furthermore, since the radiation-induced, non-equilibrium electron-hole pair distribution is effectively measured "in place", there is no requirement to spatially separate and collect the generated charges; consequently, the sensor risetime is of the order of the hot-electron thermalization time (≤ 10 fs), and the duration of the index perturbation is determined by the carrier recombination time, which is of order ~600 fs in direct-bandgap semiconductors with a high density of recombination defects; thus the optical sensors can be engineered with sub-ps temporal response. A series of detectors were designed and incorporated into Mach-Zehnder and Fabry-Perot interferometer-based detection systems: proof-of-concept, lower-detection-sensitivity Mach-Zehnder detectors were characterized at beamline 6.3 at SSRL; three generations of high-sensitivity single-element and imaging Fabry-Perot detectors were measured at the LLNL Europa facility. Our results indicate that this technology can be used to provide x-ray detectors and x-ray imaging systems with single x-ray sensitivity and S/N ~30 at x-ray energies ~10 keV.
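
    The interferometric readout described here follows the standard relation Δφ = 2π·Δn·L/λ between a radiation-induced index change Δn over an interaction length L and the phase shift imparted on the probe at wavelength λ. The numbers below are purely illustrative, not taken from the cited detectors:

    ```python
    import math

    def phase_shift_mrad(delta_n, path_um, wavelength_nm=1550.0):
        """Differential phase (in milliradians) imparted on an optical probe
        by an index change delta_n over path_um micrometres of material:
        dphi = 2 * pi * dn * L / lambda. All values are illustrative."""
        return (2 * math.pi * delta_n * (path_um * 1e-6)
                / (wavelength_nm * 1e-9)) * 1e3

    # e.g. a transient dn = 1e-4 over a 10 um absorber at 1550 nm
    print(round(phase_shift_mrad(1e-4, 10.0), 2))  # ~4 mrad
    ```

    A Mach-Zehnder or Fabry-Perot interferometer converts this small phase excursion into an intensity change on the probe beam, which is what makes the "measure in place" scheme possible without charge collection.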

  11. Direct Sensor Orientation of a Land-Based Mobile Mapping System

    PubMed Central

    Rau, Jiann-Yeou; Habib, Ayman F.; Kersting, Ana P.; Chiang, Kai-Wei; Bang, Ki-In; Tseng, Yi-Hsing; Li, Yu-Hua

    2011-01-01

    A land-based mobile mapping system (MMS) is flexible and useful for the acquisition of road environment geospatial information. It integrates a set of imaging sensors and a position and orientation system (POS). The positioning quality of such systems is highly dependent on the accuracy of the utilized POS. The major drawback is the elevated cost associated with high-end GPS/INS units, particularly the inertial system. The potential accuracy of the direct sensor orientation depends on the architecture and quality of the GPS/INS integration process as well as the validity of the system calibration (i.e., calibration of the individual sensors as well as the system mounting parameters). In this paper, a novel single-step procedure using integrated sensor orientation with a relative orientation constraint for the estimation of the mounting parameters is introduced. A comparative analysis between the proposed single-step and the traditional two-step procedure is carried out. Moreover, the estimated mounting parameters using the different methods are used in a direct geo-referencing procedure to evaluate their performance and the feasibility of the implemented system. Experimental results show that the proposed system using the single-step system calibration method can achieve high 3D positioning accuracy. PMID:22164015

  12. Multi Reflection of Lamb Wave Emission in an Acoustic Waveguide Sensor

    PubMed Central

    Schmitt, Martin; Olfert, Sergei; Rautenberg, Jens; Lindner, Gerhard; Henning, Bernd; Reindl, Leonhard Michael

    2013-01-01

    Recently, an acoustic waveguide sensor based on multiple mode conversion of surface acoustic waves at the solid-liquid interfaces has been introduced for the concentration measurement of binary and ternary mixtures, liquid level sensing, investigation of spatial inhomogeneities or bubble detection. In this contribution the sound wave propagation within this acoustic waveguide sensor is visualized by Schlieren imaging for continuous and burst operation for the first time. In the acoustic waveguide the antisymmetrical zero-order Lamb wave mode is excited by a single-phase transducer of 1 MHz on thin glass plates of 1 mm thickness. On contact with the investigated liquid, Lamb waves propagating on the first plate emit pressure waves into the adjacent liquid, which excite Lamb waves on the second plate, which again cause pressure waves traveling inside the liquid back to the first plate, and so on. The Schlieren images prove this multi-reflection within the acoustic waveguide, which confirms former considerations and calculations based on the receiver signal. With this knowledge the sensor concepts based on the acoustic waveguide sensor can be better interpreted. PMID:23447010

  13. NASA Tech Briefs, April 2006

    NASA Technical Reports Server (NTRS)

    2006-01-01

    The topics covered include: 1) Replaceable Sensor System for Bioreactor Monitoring; 2) Unitary Shaft-Angle and Shaft-Speed Sensor Assemblies; 3) Arrays of Nano Tunnel Junctions as Infrared Image Sensors; 4) Catalytic-Metal/PdO(sub x)/SiC Schottky-Diode Gas Sensors; 5) Compact, Precise Inertial Rotation Sensors for Spacecraft; 6) Universal Controller for Spacecraft Mechanisms; 7) The Flostation - an Immersive Cyberspace System; 8) Algorithm for Aligning an Array of Receiving Radio Antennas; 9) Single-Chip T/R Module for 1.2 GHz; 10) Quantum Entanglement Molecular Absorption Spectrum Simulator; 11) FuzzObserver; 12) Internet Distribution of Spacecraft Telemetry Data; 13) Semi-Automated Identification of Rocks in Images; 14) Pattern-Recognition Algorithm for Locking Laser Frequency; 15) Designing Cure Cycles for Matrix/Fiber Composite Parts; 16) Controlling Herds of Cooperative Robots; 17) Modification of a Limbed Robot to Favor Climbing; 18) Vacuum-Assisted, Constant-Force Exercise Device; 19) Production of Tuber-Inducing Factor; 20) Quantum-Dot Laser for Wavelengths of 1.8 to 2.3 micron; 21) Tunable Filter Made From Three Coupled WGM Resonators; and 22) Dynamic Pupil Masking for Phasing Telescope Mirror Segments.

  14. Multi reflection of Lamb wave emission in an acoustic waveguide sensor.

    PubMed

    Schmitt, Martin; Olfert, Sergei; Rautenberg, Jens; Lindner, Gerhard; Henning, Bernd; Reindl, Leonhard Michael

    2013-02-27

    Recently, an acoustic waveguide sensor based on multiple mode conversion of surface acoustic waves at the solid-liquid interfaces has been introduced for the concentration measurement of binary and ternary mixtures, liquid level sensing, investigation of spatial inhomogeneities and bubble detection. In this contribution the sound wave propagation within this acoustic waveguide sensor is visualized by Schlieren imaging for the first time, for both continuous and burst operation. In the acoustic waveguide the antisymmetric zero-order Lamb wave mode is excited by a single-phase transducer at 1 MHz on thin glass plates of 1 mm thickness. On contact with the investigated liquid, Lamb waves propagating on the first plate emit pressure waves into the adjacent liquid, which excite Lamb waves on the second plate; these in turn emit pressure waves traveling through the liquid back to the first plate, and so on. The Schlieren images prove this multiple reflection within the acoustic waveguide, which confirms earlier considerations and calculations based on the receiver signal. With this knowledge the sensing concepts based on the acoustic waveguide sensor can be interpreted with greater confidence.
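    The back-and-forth pressure-wave emission described in this abstract is essentially a transit-time chain. Assuming an illustrative plate separation and the speed of sound in water (both numbers are assumptions, not values from the paper), the arrival times of successive crossings can be sketched as:

```python
# Transit times of successive pressure-wave crossings of the liquid gap
# between the two waveguide plates. The gap width and liquid are assumed
# for illustration only (10 mm gap, water at ~1480 m/s).
gap = 10e-3        # plate separation in m (assumed)
c_liquid = 1480.0  # speed of sound in water, m/s (approximate)

def crossing_times(n_crossings):
    """Arrival time of each successive crossing of the liquid gap."""
    return [k * gap / c_liquid for k in range(1, n_crossings + 1)]

for k, t in enumerate(crossing_times(4), start=1):
    print(f"crossing {k}: {t * 1e6:.2f} us")
```

    With these assumed numbers, each reflection arrives a few microseconds after the previous one, which is the spacing the Schlieren burst images would resolve.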

  15. UW Imaging of Seismic-Physical-Models in Air Using Fiber-Optic Fabry-Perot Interferometer.

    PubMed

    Rong, Qiangzhou; Hao, Yongxin; Zhou, Ruixiang; Yin, Xunli; Shao, Zhihua; Liang, Lei; Qiao, Xueguang

    2017-02-17

    A fiber-optic Fabry-Perot interferometer (FPI) has been proposed and demonstrated for ultrasound wave (UW) imaging of seismic physical models. The sensor probe comprises a single-mode fiber (SMF) inserted into a ceramic tube terminated by an ultra-thin gold film. The probe exhibits excellent UW sensitivity thanks to the nanolayer gold film, and is thus capable of detecting a weak UW in air. Furthermore, the compact sensor is structurally symmetric, so it presents good directionality in UW detection. The spectral side-band filter technique is used for UW interrogation. After scanning the models with the sensing probe in air, two-dimensional (2D) images of four physical models are reconstructed.

  16. A Robust Approach for a Filter-Based Monocular Simultaneous Localization and Mapping (SLAM) System

    PubMed Central

    Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni

    2013-01-01

    Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which moves freely through its environment, represents the sole sensory input to the system. The choice of sensors has a large impact on the algorithm used for SLAM. Cameras are used frequently because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step, and special techniques for feature initialization are needed to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is a novel and robust scheme for incorporating and measuring visual features in filter-based monocular SLAM systems. The proposed method is based on a two-step technique intended to exploit all the information available in the angular measurements. Unlike previous schemes, the values of the parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes. PMID:23823972
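    The angular-only measurement problem described here is commonly handled in filter-based monocular SLAM with an inverse-depth parameterization. The following is a minimal sketch of that generic idea, not the paper's specific two-step scheme; the function names and sample values are invented for illustration:

```python
import numpy as np

# Generic inverse-depth feature initialization sketch for filter-based
# monocular SLAM (not the exact scheme of this paper). A feature is
# stored as (camera position, bearing angles, inverse depth rho = 1/d);
# the implied 3D point stays well-behaved even for distant features.

def bearing(azimuth, elevation):
    """Unit direction vector from azimuth/elevation angles (radians)."""
    return np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])

def point_from_inverse_depth(cam_pos, azimuth, elevation, rho):
    """3D point implied by a camera position, a bearing, and rho = 1/d."""
    return np.asarray(cam_pos, dtype=float) + bearing(azimuth, elevation) / rho

# A feature seen straight down the x-axis with inverse depth 0.5 (depth 2 m)
p = point_from_inverse_depth([0.0, 0.0, 0.0], 0.0, 0.0, 0.5)
print(p)
```

    The advantage of this parameterization is that a nearly unknown depth maps to a small rho with near-Gaussian uncertainty, which is what allows an angular-only sensor to enter a Kalman-style filter immediately.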

  17. Putting a finishing touch on GECIs.

    PubMed

    Rose, Tobias; Goltstein, Pieter M; Portugues, Ruben; Griesbeck, Oliver

    2014-01-01

    More than a decade ago genetically encoded calcium indicators (GECIs) entered the stage as promising new tools to image calcium dynamics and neuronal activity in living tissues and designated cell types in vivo. From a variety of initial designs, two have emerged as promising prototypes for further optimization: FRET (Förster resonance energy transfer)-based sensors and single-fluorophore sensors of the GCaMP family. Recent efforts in structural analysis, engineering and screening have broken important performance thresholds in the latest generation of both classes. While these improvements have made GECIs a powerful means to perform physiology in living animals, a number of other aspects of sensor function deserve attention, including indicator linearity, toxicity and slow response kinetics. Furthermore, creating high-performance sensors with optically more favorable emission in red or infrared wavelengths, as well as new stably or conditionally GECI-expressing animal lines, remains on the wish list. When the remaining issues are solved, imaging of GECIs will finally have crossed the last milestone, evolving from an initial promise into a fully matured technology.

  18. Sensor performance and weather effects modeling for intelligent transportation systems (ITS) applications

    NASA Astrophysics Data System (ADS)

    Everson, Jeffrey H.; Kopala, Edward W.; Lazofson, Laurence E.; Choe, Howard C.; Pomerleau, Dean A.

    1995-01-01

    Optical sensors are used for several ITS applications, including lateral control of vehicles, traffic sign recognition, car following, autonomous vehicle navigation, and obstacle detection. This paper treats the performance assessment of a sensor/image processor used as part of an on-board countermeasure system to prevent single-vehicle roadway departure crashes. Sufficient image contrast between objects of interest and backgrounds is an essential factor influencing overall system performance. Contrast is determined by material properties affecting reflected/radiated intensities, as well as by weather and visibility conditions. This paper discusses the modeling of these parameters and characterizes the contrast effects of reduced visibility. The analysis process first involves generation of inherent road/off-road contrasts, followed by weather effects as a contrast modification. The sensor is modeled as a charge-coupled device (CCD) with variable parameters. The results of the sensor/weather modeling are used to predict the performance of an in-vehicle warning system under various levels of adverse weather. Software employed in this effort was previously developed for the U.S. Air Force Wright Laboratory to determine target/background detection and recognition ranges for different sensor systems operating under various mission scenarios.
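    A standard first-order model of how visibility modifies contrast is Koschmieder-style exponential attenuation. The sketch below is a generic stand-in for this kind of weather effect, not the Wright Laboratory software described above; the lane-marking contrast and visibility figures are illustrative assumptions:

```python
import math

# Koschmieder-style contrast attenuation: meteorological visibility V is
# the range at which contrast drops to a 2 % threshold, which fixes the
# extinction coefficient sigma = -ln(0.02) / V. Apparent contrast then
# decays exponentially with viewing range R.

def apparent_contrast(c_inherent, range_m, visibility_m, threshold=0.02):
    """Inherent contrast attenuated by atmospheric extinction over range_m."""
    sigma = -math.log(threshold) / visibility_m
    return c_inherent * math.exp(-sigma * range_m)

# Example (assumed numbers): lane-marking contrast 0.6 viewed at 100 m
# in fog with 500 m visibility
print(round(apparent_contrast(0.6, 100.0, 500.0), 3))
```

    A detection system can compare the attenuated contrast against the sensor's minimum resolvable contrast to predict the warning range under a given weather condition.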

  19. Updates to SCORPION persistent surveillance system with universal gateway

    NASA Astrophysics Data System (ADS)

    Coster, Michael; Chambers, Jon; Winters, Michael; Brunck, Al

    2008-10-01

    This paper addresses benefits derived from the universal gateway utilized in Northrop Grumman Systems Corporation's (NGSC) SCORPION, a persistent surveillance and target recognition system produced by the Xetron campus in Cincinnati, Ohio. SCORPION is currently deployed in Operations Iraqi Freedom (OIF) and Enduring Freedom (OEF). The SCORPION universal gateway is a flexible, field-programmable system that provides integration of over forty Unattended Ground Sensor (UGS) types from a variety of manufacturers, multiple visible and thermal electro-optical (EO) imagers, and numerous long-haul satellite and terrestrial communications links, including the Army Research Lab (ARL) Blue Radio. Xetron has been integrating best-in-class sensors with this universal gateway since 1998 to provide encrypted data exfiltration to Common Operational Picture (COP) systems and remote sensor command and control. In addition to being fed to COP systems, SCORPION data can be visualized in the Common sensor Status (CStat) graphical user interface, which allows viewing and analysis of images and sensor data from up to seven hundred SCORPION system gateways on single or multiple displays. This user-friendly visualization enables a large amount of sensor data and imagery to be used as actionable intelligence by a minimum number of analysts.

  20. Vulnerability of CMOS image sensors in Megajoule Class Laser harsh environment.

    PubMed

    Goiffon, V; Girard, S; Chabane, A; Paillet, P; Magnan, P; Cervantes, P; Martin-Gonthier, P; Baggio, J; Estribeau, M; Bourgade, J-L; Darbon, S; Rousseau, A; Glebov, V Yu; Pien, G; Sangster, T C

    2012-08-27

    CMOS image sensors (CIS) are promising candidates as part of optical imagers for the plasma diagnostics devoted to the study of fusion by inertial confinement. However, the harsh radiative environment of Megajoule Class Lasers threatens the performance of these optical sensors. In this paper, the vulnerability of CIS to the transient and mixed pulsed radiation environment associated with such facilities is investigated during an experiment at the OMEGA facility at the Laboratory for Laser Energetics (LLE), Rochester, NY, USA. The transient and permanent effects of the 14 MeV neutron pulse on CIS are presented. The behavior of the tested CIS shows that active pixel sensors (APS) exhibit better hardness to this harsh environment than a CCD. A first-order extrapolation of the reported results to the higher level of radiation expected for Megajoule Class Laser facilities (Laser Megajoule in France or the National Ignition Facility in the USA) shows that temporarily saturated pixels due to transient neutron-induced single-event effects will be the major issue for the development of radiation-tolerant plasma diagnostic instruments, whereas the permanent degradation of the CIS related to displacement damage or total ionizing dose effects could be reduced by applying well-known mitigation techniques.

  1. Optimizing Floating Guard Ring Designs for FASPAX N-in-P Silicon Sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Kyung-Wook; Bradford, Robert; Lipton, Ronald

    2016-10-06

    FASPAX (Fermi-Argonne Semiconducting Pixel Array X-ray detector) is being developed as a fast integrating area detector with wide dynamic range for time-resolved applications at the upgraded Advanced Photon Source (APS). A burst-mode detector with an intended 13 MHz image rate, FASPAX will also incorporate a novel integration circuit to achieve wide dynamic range, from single-photon sensitivity to 10^5 x-rays/pixel/pulse. To achieve these ambitious goals, a novel silicon sensor design is required. This paper will detail the early design of the FASPAX sensor. Results from TCAD optimization studies and characterization of prototype sensors will be presented.

  2. Collation of earth resources data collected by ERIM airborne sensors

    NASA Technical Reports Server (NTRS)

    Hasell, P. G., Jr.

    1975-01-01

    Earth resources imagery from nine years of data collection with developmental airborne sensors is cataloged for reference. The imaging sensors include single- and multiband line scanners and side-looking radars. The operating wavelengths of the sensors cover the ultraviolet, visible and infrared bands for the scanners, and the X- and L-bands for the radars. Imagery from all bands (radar and scanner) was collected at some sites, and many sites had repeated coverage. The multiband scanner data were radiometrically calibrated. Illustrations show how the data can be used in earth resource investigations. References are made to published reports which have made use of the data in completed investigations. Data collection sponsors are identified and a procedure is described for gaining access to the data.

  3. Tracking initially unresolved thrusting objects in 3D using a single stationary optical sensor

    NASA Astrophysics Data System (ADS)

    Lu, Qin; Bar-Shalom, Yaakov; Willett, Peter; Granström, Karl; Ben-Dov, R.; Milgrom, B.

    2017-05-01

    This paper considers the problem of estimating the 3D states of a salvo of thrusting/ballistic endo-atmospheric objects using 2D Cartesian measurements from the focal plane array (FPA) of a single fixed optical sensor. Since the initial separations in the FPA are smaller than the resolution of the sensor, this results in merged measurements in the FPA, compounding the usual false-alarm and missed-detection uncertainty. We present a two-step methodology. First, we assume a Wiener process acceleration (WPA) model for the motion of the images of the projectiles in the optical sensor's FPA. We model the merged measurements with increased variance, and thence employ a multi-Bernoulli (MB) filter using the 2D measurements in the FPA. Second, using the set of associated measurements for each confirmed MB track, we formulate a parameter estimation problem, whose maximum likelihood estimate can be obtained via numerical search and can be used for impact point prediction. Simulation results illustrate the performance of the proposed method.
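    The Wiener process acceleration (WPA) model mentioned above has a standard per-axis discretized form. The sketch below writes out that textbook discretization; the frame period and jerk power spectral density are illustrative parameters, not values from the paper:

```python
import numpy as np

# Per-axis state transition F and process noise Q for the Wiener process
# acceleration (WPA) model, in its standard discretized form. The state
# is [position, velocity, acceleration]; T is the frame period and q the
# power spectral density of the white jerk driving the model.

def wpa_matrices(T, q):
    F = np.array([[1.0, T,   T**2 / 2.0],
                  [0.0, 1.0, T],
                  [0.0, 0.0, 1.0]])
    Q = q * np.array([[T**5 / 20.0, T**4 / 8.0, T**3 / 6.0],
                      [T**4 / 8.0,  T**3 / 3.0, T**2 / 2.0],
                      [T**3 / 6.0,  T**2 / 2.0, T]])
    return F, Q

F, Q = wpa_matrices(T=0.01, q=1.0)          # 100 Hz frame rate (assumed)
x = np.array([0.0, 10.0, 2.0])              # px, px/s, px/s^2
print(F @ x)                                # predicted state one frame ahead
```

    In a multi-Bernoulli filter each image-plane track would carry such a state per axis, with the measurement noise inflated for merged detections as the abstract describes.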

  4. A Novel Multi-Aperture Based Sun Sensor Based on a Fast Multi-Point MEANSHIFT (FMMS) Algorithm

    PubMed Central

    You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei

    2011-01-01

    With the current widespread interest in the development and applications of micro/nanosatellites, a small high-accuracy satellite attitude determination system is needed, because the star trackers widely used in large satellites are large and heavy, and therefore not suitable for installation on micro/nanosatellites. A sun sensor plus magnetometer has proven to be a better alternative, but conventional sun sensors have low accuracy and cannot meet the requirements of the attitude determination systems of micro/nanosatellites, so the development of a small, highly accurate and reliable sun sensor is very significant. This paper presents a multi-aperture based sun sensor, which is composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixel sensor (APS) CMOS placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When sunlight illuminates the sensor, a sun-spot array image is formed on the APS detector. The sun angles can then be derived by analyzing the aperture image locations on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing weight or power consumption, even when missing apertures and bad pixels appear on the detector due to aging of the devices and operation in a harsh space environment, while the pointing accuracy of a single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770
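    The mean-shift centroiding idea can be illustrated with a plain single-point mean-shift step on an intensity image. This is a generic sketch, not the paper's FMMS algorithm (which runs such refinements for all 36 apertures at once); the window radius and the synthetic spot are invented for illustration:

```python
import numpy as np

# Plain mean-shift centroid refinement: each step moves the window centre
# to the intensity-weighted centroid of the pixels inside the window,
# converging to the local intensity mode (the sun-spot centre).

def mean_shift_centroid(img, x0, y0, radius=3, iters=10):
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    x, y = float(x0), float(y0)
    for _ in range(iters):
        mask = (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
        w = img * mask
        total = w.sum()
        if total == 0:
            break
        x, y = (xs * w).sum() / total, (ys * w).sum() / total
    return x, y

# Synthetic Gaussian sun spot centred at (12.3, 8.7), started off-centre
ys, xs = np.mgrid[0:20, 0:25]
spot = np.exp(-((xs - 12.3) ** 2 + (ys - 8.7) ** 2) / 4.0)
print(mean_shift_centroid(spot, 10, 10))  # converges near (12.3, 8.7)
```

    Sub-pixel convergence of this kind is what makes 0.01-pixel centroid accuracy plausible when many apertures are averaged.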

  5. Foveated optics

    NASA Astrophysics Data System (ADS)

    Bryant, Kyle R.

    2016-05-01

    Foveated imaging can deliver two different resolutions on a single focal plane, which might inexpensively add capability to military systems. The following design study results provide starting examples, lessons learned, and helpful setup equations and pointers to aid the lens designer in any foveated lens design effort. Our goal is to put a robust sensor in a small package with no moving parts that can still perform some of the functions of a sensor in a moving gimbal. All of the elegant solutions are out (for various reasons). This study is an attempt to see whether lens designs can solve this problem and realize some gains in performance versus cost for airborne sensors. We determined a series of design concepts that simultaneously deliver a wide field of view and high foveal resolution without scanning or gimbals. Separate sensors for each field of view are easy and relatively inexpensive, but lead to bulky detectors and electronics. Folding and beam-combining of separate optical channels reduce the sensor footprint, but induce image inversions and reduced transmission. Entirely common optics provide good resolution, but cannot provide a significant magnification increase in the foveal region. Offsetting the foveal region from the wide-field center may not be physically realizable, but may be required for some applications. The design study revealed good general guidance for foveated optics designs with a cold stop. Key lessons learned involve managing distortion, telecentric imagers, matching image inversions and numerical apertures between channels, reimaging lenses, and creating clean resolution-zone splits near internal focal planes.

  6. Adaptive time-sequential binary sensing for high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Hu, Chenhui; Lu, Yue M.

    2012-06-01

    We present a novel image sensor for high dynamic range imaging. The sensor performs an adaptive one-bit quantization at each pixel, with the pixel output switched from 0 to 1 only if the number of photons reaching that pixel is greater than or equal to a quantization threshold. With oracle knowledge of the incident light intensity, one can pick an optimal threshold (for that light intensity), and the corresponding Fisher information contained in the output sequence follows closely that of an ideal unquantized sensor over a wide range of intensity values. This observation suggests the potential gains one may achieve by adaptively updating the quantization thresholds. As the main contribution of this work, we propose a time-sequential threshold-updating rule that asymptotically approaches the performance of the oracle scheme. With every threshold mapped to a number of ordered states, the dynamics of the proposed scheme can be modeled as a parametric Markov chain. We show that the frequencies of different thresholds converge to a steady-state distribution that is concentrated around the optimal choice. Moreover, numerical experiments show that the theoretical performance measures (Fisher information and Cramér-Rao bounds) can be achieved by a maximum likelihood estimator, which is guaranteed to find the globally optimal solution due to the concavity of the log-likelihood functions. Compared with conventional image sensors and the constant single-photon threshold strategy considered in previous work, the proposed scheme attains orders-of-magnitude improvement in sensor dynamic range.
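    For the special case of a constant single-photon threshold (q = 1), the binary-read model above admits a closed-form intensity estimate: a pixel outputs 1 iff at least one photon arrives, so P(1) = 1 - exp(-lam). A minimal simulation sketch (the paper's adaptive threshold-updating rule itself is not reproduced here):

```python
import math, random

# One-bit sensing with a single-photon threshold. Thresholding a Poisson
# count at 1 is equivalent to a Bernoulli draw with p = 1 - exp(-lam),
# so the intensity lam can be recovered from many binary reads by
# inverting that relation (the ML estimate for this special case).

random.seed(0)
lam = 1.5                      # true mean photon count per exposure (assumed)
p_one = 1 - math.exp(-lam)
reads = [1 if random.random() < p_one else 0 for _ in range(200_000)]

p1 = sum(reads) / len(reads)   # empirical fraction of 1-reads
lam_hat = -math.log(1 - p1)    # inverted binary-read model
print(round(lam_hat, 2))
```

    The estimator degrades once lam grows large enough that nearly every read is 1, which is exactly why the paper raises the threshold adaptively with intensity.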

  7. Laser-induced damage threshold of camera sensors and micro-optoelectromechanical systems

    NASA Astrophysics Data System (ADS)

    Schwarz, Bastian; Ritt, Gunnar; Koerber, Michael; Eberle, Bernd

    2017-03-01

    The continuous development of laser systems toward more compact and efficient devices constitutes an increasing threat to electro-optical imaging sensors, such as complementary metal-oxide-semiconductor (CMOS) sensors and charge-coupled devices. These types of electronic sensors are used in day-to-day life but also in military and civil security applications. In camera systems dedicated to specific tasks, micro-optoelectromechanical systems, such as a digital micromirror device (DMD), are part of the optical setup. In such systems, the DMD can be located at an intermediate focal plane of the optics, where it is also susceptible to laser damage. The goal of our work is to enhance the knowledge of damage effects on such devices exposed to laser light. The experimental setup for the investigation of laser-induced damage is described in detail. As laser sources, both pulsed lasers and continuous-wave (CW) lasers are used. The laser-induced damage threshold is determined by the single-shot method, increasing the pulse energy from pulse to pulse or, in the case of CW lasers, increasing the laser power. Furthermore, we investigate the morphology of laser-induced damage patterns and the dependence of the number of destroyed device elements on the laser pulse energy or laser power. In addition to the destruction of single pixels, we observe aftereffects such as persistent dead columns or rows of pixels in the sensor image.

  8. Volumetric Forest Change Detection Through Vhr Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Akca, Devrim; Stylianidis, Efstratios; Smagas, Konstantinos; Hofer, Martin; Poli, Daniela; Gruen, Armin; Sanchez Martin, Victor; Altan, Orhan; Walli, Andreas; Jimeno, Elisa; Garcia, Alejandro

    2016-06-01

    Quick and economical ways of detecting planimetric and volumetric changes of forest areas are in high demand. A research platform, called FORSAT (A satellite processing platform for high resolution forest assessment), was developed for the extraction of 3D geometric information from VHR (very-high resolution) imagery from satellite optical sensors and automatic change detection. This 3D forest information solution was developed during a Eurostars project. FORSAT includes two main units. The first one is dedicated to the geometric and radiometric processing of satellite optical imagery and 2D/3D information extraction. This includes: image radiometric pre-processing, image and ground point measurement, improvement of geometric sensor orientation, quasi-epipolar image generation for stereo measurements, digital surface model (DSM) extraction using a precise and robust image matching approach specially designed for VHR satellite imagery, generation of orthoimages, and 3D measurements in single images using mono-plotting as well as in stereo images and triplets. FORSAT supports most of the VHR optical imagery commonly used for civil applications: IKONOS, OrbView-3, SPOT-5 HRS, SPOT-5 HRG, QuickBird, GeoEye-1, WorldView-1/2, Pléiades 1A/1B, SPOT 6/7, and sensors of similar type to be expected in the future. The second unit of FORSAT is dedicated to 3D surface comparison for change detection. It allows users to import digital elevation models (DEMs), align them using an advanced 3D surface matching approach, and calculate the 3D differences and volume changes between epochs. To this end our 3D surface matching method LS3D is used. FORSAT is a single-source and flexible forest information solution with a very competitive price/quality ratio, allowing expert and non-expert remote sensing users to monitor forests in three and four dimensions from VHR optical imagery for many forest information needs.
    The capacity and benefits of FORSAT have been tested in six case studies located in Austria, Cyprus, Spain, Switzerland and Turkey, using optical data from different sensors and with the purpose of monitoring forests with different geometric characteristics. The validation run on the Cyprus dataset is reported and commented on.
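    After the DEMs are aligned, the volumetric comparison step reduces to differencing two rasters. A minimal sketch of that volume computation (the LS3D surface matching itself is not shown, and the toy grid values are invented for illustration):

```python
import numpy as np

# Volumetric change between two co-registered DEM epochs: the per-cell
# height difference summed over the grid, times the cell footprint area.
# Net change signs gains against losses; gross change sums magnitudes.

def volume_change(dem_t1, dem_t2, cell_size_m):
    """Return (net, gross) volume change in m^3 between two DEM epochs."""
    dz = np.asarray(dem_t2, dtype=float) - np.asarray(dem_t1, dtype=float)
    cell_area = cell_size_m ** 2
    return dz.sum() * cell_area, np.abs(dz).sum() * cell_area

# Toy 3x3 grid with 2 m cells: one cell grows by 5 m, one is cleared by 3 m
t1 = np.zeros((3, 3))
t2 = np.zeros((3, 3))
t2[0, 0] = 5.0
t2[2, 2] = -3.0
net, gross = volume_change(t1, t2, cell_size_m=2.0)
print(net, gross)  # 8.0 m^3 net, 32.0 m^3 gross
```

    Accurate alignment before differencing is what makes the result meaningful; any residual shift between epochs turns directly into spurious volume.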

  9. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Smith, Philip W.

    1993-01-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single-beam, scanning, time-of-flight sensors, because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid, since the EVAHR robot is equipped with a sequentially scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the non-zero delay time between the sampling of successive image pixels. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but this leads to the motion-structure paradox: most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.
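    For the translation-only case, the scan-time distortion described above can be removed exactly once the velocity is known: each range sample carries its own acquisition time, and subtracting the motion accumulated up to that time maps all samples to a common reference instant. A minimal sketch (the paper's iterative estimation of the full rigid-body motion is not reproduced; the sample points and velocity are invented):

```python
import numpy as np

# Scan-time distortion correction for a sequentially scanned range image,
# translation-only case: sample i was acquired at time t_i while the
# object moved with constant velocity v, so subtracting v * t_i restores
# the shape at the reference time t = 0.

def undistort(points, times, v):
    """points: (N, 3) samples; times: (N,) acquisition times; v: (3,) velocity."""
    return np.asarray(points, dtype=float) - np.outer(times, v)

# Object moving at 0.2 m/s along x, scanned over 0.5 s
true_shape = np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 2.0], [0.2, 0.0, 2.0]])
times = np.array([0.0, 0.25, 0.5])
v = np.array([0.2, 0.0, 0.0])
distorted = true_shape + np.outer(times, v)   # what the scanner records
print(undistort(distorted, times, v))          # recovers the true shape
```

    The paradox noted in the abstract is visible here: computing `v` in the first place normally requires the undistorted structure, which is why an iterative scheme is needed.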

  10. Optimizing Radiometric Fidelity to Enhance Aerial Image Change Detection Utilizing Digital Single Lens Reflex (DSLR) Cameras

    NASA Astrophysics Data System (ADS)

    Kerr, Andrew D.

    Determining optimal imaging settings and best practices for the capture of aerial imagery using consumer-grade digital single lens reflex (DSLR) cameras should enable remote sensing scientists to generate consistent, high-quality, low-cost image data sets. Radiometric optimization, image fidelity, and image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is, in part, a dearth of relevant, contemporary literature on the utilization of consumer-grade DSLR cameras for remote sensing and the best practices associated with their use. The main radiometric control settings on a DSLR camera, EV (exposure value), WB (white balance), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. This testing was conducted from a terrestrial, rather than an airborne, collection platform, due to the large number of images per collection and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed exposure values as preferable for change detection and noise minimization. The makeup of the scene, the sensor, and the aerial platform influence the selection of the aperture and shutter speed, which, along with other variables, allow estimation of the apparent image motion (AIM) blur in the resulting images. The importance of image edges in the application will in part dictate the lowest usable f-stop, and allow the user to select a more optimal shutter speed and ISO.
    The single most important camera capture variable is exposure bias (EV), with a full dynamic range, a wide distribution of DN values, and high visual contrast and acuity occurring around -0.7 to -0.3 EV exposure bias. The ideal value for sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
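    The apparent image motion (AIM) blur mentioned above can be estimated to first order from platform speed, exposure time, and ground sample distance. A minimal sketch with illustrative numbers (the speeds and GSD are assumptions, not values from the study):

```python
# First-order apparent image motion (AIM) blur for an airborne frame
# camera: the blur in pixels is the ground distance covered during one
# exposure divided by the ground sample distance (GSD).

def blur_pixels(ground_speed_mps, exposure_s, gsd_m):
    """Motion blur in pixels accumulated during one exposure."""
    return ground_speed_mps * exposure_s / gsd_m

# Assumed example: 40 m/s platform, 1/1000 s shutter, 2 cm GSD
print(blur_pixels(40.0, 1.0 / 1000.0, 0.02))
```

    Keeping this figure well under one pixel is the usual target, which is why the choice of shutter speed interacts with the ISO and f-stop trade-offs discussed in the abstract.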

  11. Calibration Procedures on Oblique Camera Setups

    NASA Astrophysics Data System (ADS)

    Kemper, G.; Melykuti, B.; Yu, C.

    2016-06-01

    Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration, making it possible to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix with a 50 mm lens while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are robustly mounted inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms; these had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed on the basis of a special calibration flight with 351 shots of all 5 cameras and registered GPS/IMU data. This specific mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlaps, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration.
    In a first step, with the help of the nadir camera and the GPS/IMU data, an initial orientation correction and radial correction were calculated. With this approach, the whole project was calculated and calibrated in one step. During the iteration process the radial and tangential parameters were switched on individually for the camera heads, and after that the camera constants and principal point positions were checked and finally calibrated. Besides that, the boresight calibration can be performed either on the basis of the nadir camera and its offsets, or independently for each camera without correlation to the others. This must in any case be performed on a complete mission to obtain stability between the single camera heads. Determining the lever arms from the nodal points to the IMU centre needs more caution than for a single camera, especially due to the strong tilt angle. With all these steps completed, one obtains a highly accurate sensor that enables fully automated data extraction with a rapid update of existing data. Frequent monitoring of urban dynamics is then possible in a fully 3D environment.

  12. An ECT/ERT dual-modality sensor for oil-water two-phase flow measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Pitao; Wang, Huaxiang; Sun, Benyuan

    2014-04-11

    This paper presents a new sensor for an ECT/ERT dual-modality system which can simultaneously obtain the permittivity and conductivity of the materials in the pipeline. Quasi-static electromagnetic fields are produced by the inner electrode array sensor of the electrical capacitance tomography (ECT) system. Simulation results show that permittivity and conductivity data can be obtained simultaneously from the same measurement electrode, and that fusing the two kinds of data may improve the quality of the reconstructed images. For uniform oil-water mixtures, the performance of the designed dual-modality sensor in measuring various oil fractions has been tested on representative data, and the experiments show that the designed sensor broadens the measurement range compared to a single modality.

  13. Optomechanical System Development of the AWARE Gigapixel Scale Camera

    NASA Astrophysics Data System (ADS)

    Son, Hui S.

    Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.

  14. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder

    NASA Astrophysics Data System (ADS)

    August, Isaac; Oiknine, Yaniv; Abuleil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-01

    Spectroscopic imaging has been proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution it became extremely challenging to design and implement such systems in a miniaturized and cost effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from spectral scanning shots numbering an order of magnitude less than would be required using conventional systems.
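    The recovery step described here — reconstructing a spectrum from far fewer modulated shots than spectral bands — can be illustrated with a minimal compressive-sensing sketch. The random sensing matrix, band count, and ISTA solver below are illustrative assumptions, not the authors' LC-retarder system:

    ```python
    import numpy as np

    def ista(A, y, lam=0.01, n_iter=500):
        """Iterative shrinkage-thresholding for min ||Ax - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x - A.T @ (A @ x - y) / L           # gradient step on the data term
            x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(0)
    n_bands, n_shots = 64, 16                       # 4x fewer shots than bands
    A = rng.standard_normal((n_shots, n_bands)) / np.sqrt(n_shots)  # stand-in modulations
    x_true = np.zeros(n_bands)
    x_true[[5, 20, 41]] = [1.0, 0.6, 0.8]           # sparse toy spectrum
    y = A @ x_true                                  # the few compressive measurements
    x_hat = ista(A, y)                              # recovered spectrum
    ```

    In the real system each row of the sensing matrix would be one spectral transmission state of the LC retarder rather than random noise.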

  15. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder.

    PubMed

    August, Isaac; Oiknine, Yaniv; AbuLeil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-23

    Spectroscopic imaging has been proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution it became extremely challenging to design and implement such systems in a miniaturized and cost effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from spectral scanning shots numbering an order of magnitude less than would be required using conventional systems.

  16. Dual-mode lensless imaging device for digital enzyme linked immunosorbent assay

    NASA Astrophysics Data System (ADS)

    Sasagawa, Kiyotaka; Kim, Soo Heyon; Miyazawa, Kazuya; Takehara, Hironari; Noda, Toshihiko; Tokuda, Takashi; Iino, Ryota; Noji, Hiroyuki; Ohta, Jun

    2014-03-01

Digital enzyme linked immunosorbent assay (ELISA) is an ultra-sensitive technology for detecting biomarkers, viruses, etc. As in a conventional ELISA technique, a target molecule is bound to an antibody carrying an enzyme by an antigen-antibody reaction. In this technology, a femtoliter droplet chamber array is used as the reaction chambers. Due to the small chamber volume, the concentration of fluorescent product generated by a single enzyme can be sufficient for detection by fluorescence microscopy. In this work, we demonstrate a miniaturized lensless imaging device for digital ELISA using a custom image sensor. The pixel array of the sensor is coated with a 20 μm-thick yellow filter to eliminate the 470 nm excitation light and covered by a fiber optic plate (FOP) that protects the sensor without resolution degradation. The droplet chamber array, formed on a 50 μm-thick glass plate, is placed directly on the FOP. In the digital ELISA, antibody-coated microbeads are loaded into the droplet chamber array, and the ratio of fluorescent to non-fluorescent chambers containing microbeads is observed. In fluorescence imaging, the spatial resolution is degraded by spreading through the glass plate because the fluorescence is emitted omnidirectionally. This degradation is compensated by image processing, and a resolution of ~35 μm was achieved. In bright-field imaging, projected images of the beads under collimated illumination are observed. By varying the incident angle and compositing the images, the microbeads were successfully imaged.
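    The counting step at the heart of digital ELISA — inferring concentration from the fraction of fluorescent chambers — follows Poisson statistics: with mean occupancy λ per chamber, the "on" fraction is 1 − e^(−λ). A minimal sketch of that calculation (the chamber counts are made-up numbers, not data from this work):

    ```python
    import math

    # Illustrative sketch: target molecules distribute over chambers ~Poisson(lam),
    # so the fraction of fluorescent ("on") chambers is f_on = 1 - exp(-lam) and
    # lam is recovered from a simple chamber count.
    def mean_occupancy(n_on, n_total):
        """Estimate mean molecules per chamber from the on-chamber fraction."""
        f_on = n_on / n_total
        if f_on >= 1.0:
            raise ValueError("all chambers on: concentration out of digital range")
        return -math.log(1.0 - f_on)

    lam = mean_occupancy(3935, 10000)   # 39.35% of chambers fluorescent -> lam ~ 0.5
    ```

    Scaling λ by the chamber volume and sample dilution would give the molar concentration.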

  17. Electrophoretically mediated microanalysis of a nicotinamide adenine dinucleotide-dependent enzyme and its facile multiplexing using an active pixel sensor UV detector.

    PubMed

    Urban, Pawel L; Goodall, David M; Bergström, Edmund T; Bruce, Neil C

    2007-08-31

    An electrophoretically mediated microanalysis (EMMA) method has been developed for yeast alcohol dehydrogenase and quantification of reactant and product cofactors, NAD and NADH. The enzyme substrate ethanol (1% (v/v)) was added to the buffer (50 mM borate, pH 8.8). Results are presented for parallel capillary electrophoresis with a novel miniature UV area detector, with an active pixel sensor imaging an array of two or six parallel capillaries connected via a manifold to a single output capillary in a commercial CE instrument, allowing conversions with five different yeast alcohol dehydrogenase concentrations to be quantified in a single experiment.

  18. Single-sensor multispeaker listening with acoustic metamaterials

    PubMed Central

    Xie, Yangbo; Tsai, Tsung-Han; Konneker, Adam; Popa, Bogdan-Ioan; Brady, David J.; Cummer, Steven A.

    2015-01-01

    Designing a “cocktail party listener” that functionally mimics the selective perception of a human auditory system has been pursued over the past decades. By exploiting acoustic metamaterials and compressive sensing, we present here a single-sensor listening device that separates simultaneous overlapping sounds from different sources. The device with a compact array of resonant metamaterials is demonstrated to distinguish three overlapping and independent sources with 96.67% correct audio recognition. Segregation of the audio signals is achieved using physical layer encoding without relying on source characteristics. This hardware approach to multichannel source separation can be applied to robust speech recognition and hearing aids and may be extended to other acoustic imaging and sensing applications. PMID:26261314

  19. Investigation of Matlab® as platform in navigation and control of an Automatic Guided Vehicle utilising an omnivision sensor.

    PubMed

    Kotze, Ben; Jordaan, Gerrit

    2014-08-25

    Automatic Guided Vehicles (AGVs) are navigated utilising multiple types of sensors for detecting the environment. In this investigation such sensors are replaced and/or minimized by the use of a single omnidirectional camera picture stream. An area of interest is extracted, and by using image processing the vehicle is navigated on a set path. Reconfigurability is added to the route layout by signs incorporated in the navigation process. The result is the possible manipulation of a number of AGVs, each on its own designated colour-signed path. This route is reconfigurable by the operator with no programming alteration or intervention. A low resolution camera and a Matlab® software development platform are utilised. The use of Matlab® lends itself to speedy evaluation and implementation of image processing options on the AGV, but its functioning in such an environment needs to be assessed.
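    A minimal sketch of the colour-path navigation idea: threshold the AGV's designated path colour in an image row and steer toward the centroid of the matching pixels. The function and tolerance below are illustrative assumptions, not the authors' Matlab® implementation:

    ```python
    import numpy as np

    def steering_error(row_rgb, target, tol=30):
        """Signed pixel offset of the colour-matched centroid from image centre."""
        match = np.all(np.abs(row_rgb.astype(int) - target) <= tol, axis=-1)
        cols = np.flatnonzero(match)
        if cols.size == 0:
            return None                          # path lost: stop or search
        return float(cols.mean() - (row_rgb.shape[0] - 1) / 2.0)

    row = np.zeros((101, 3), dtype=np.uint8)     # one toy image row, 101 px wide
    row[60:71] = (255, 0, 0)                     # red path segment right of centre
    err = steering_error(row, target=(255, 0, 0))  # positive -> steer right
    ```

    A controller would turn the signed offset into a wheel-speed differential each frame.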

  20. Origami silicon optoelectronics for hemispherical electronic eye systems.

    PubMed

    Zhang, Kan; Jung, Yei Hwan; Mikael, Solomon; Seo, Jung-Hun; Kim, Munho; Mi, Hongyi; Zhou, Han; Xia, Zhenyang; Zhou, Weidong; Gong, Shaoqin; Ma, Zhenqiang

    2017-11-24

    Digital image sensors in hemispherical geometries offer unique imaging advantages over their planar counterparts, such as wide field of view and low aberrations. Deforming miniature semiconductor-based sensors with high-spatial resolution into such format is challenging. Here we report a simple origami approach for fabricating single-crystalline silicon-based focal plane arrays and artificial compound eyes that have hemisphere-like structures. Convex isogonal polyhedral concepts allow certain combinations of polygons to fold into spherical formats. Using each polygon block as a sensor pixel, the silicon-based devices are shaped into maps of truncated icosahedron and fabricated on flexible sheets and further folded either into a concave or convex hemisphere. These two electronic eye prototypes represent simple and low-cost methods as well as flexible optimization parameters in terms of pixel density and design. Results demonstrated in this work combined with miniature size and simplicity of the design establish practical technology for integration with conventional electronic devices.

  1. Investigation of Matlab® as Platform in Navigation and Control of an Automatic Guided Vehicle Utilising an Omnivision Sensor

    PubMed Central

    Kotze, Ben; Jordaan, Gerrit

    2014-01-01

    Automatic Guided Vehicles (AGVs) are navigated utilising multiple types of sensors for detecting the environment. In this investigation such sensors are replaced and/or minimized by the use of a single omnidirectional camera picture stream. An area of interest is extracted, and by using image processing the vehicle is navigated on a set path. Reconfigurability is added to the route layout by signs incorporated in the navigation process. The result is the possible manipulation of a number of AGVs, each on its own designated colour-signed path. This route is reconfigurable by the operator with no programming alteration or intervention. A low resolution camera and a Matlab® software development platform are utilised. The use of Matlab® lends itself to speedy evaluation and implementation of image processing options on the AGV, but its functioning in such an environment needs to be assessed. PMID:25157548

  2. Quantum efficiency and dark current evaluation of a backside illuminated CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Vereecke, Bart; Cavaco, Celso; De Munck, Koen; Haspeslagh, Luc; Minoglou, Kyriaki; Moore, George; Sabuncuoglu, Deniz; Tack, Klaas; Wu, Bob; Osman, Haris

    2015-04-01

We report on the development and characterization of monolithic backside illuminated (BSI) imagers at imec. Different surface passivation, anti-reflective coatings (ARCs), and anneal conditions were implemented, and their effects on dark current (DC) and quantum efficiency (QE) are analyzed. Two different single-layer ARC materials were developed, for visible light and near-UV applications, respectively. QE above 75% is measured over the entire visible spectrum from 400 to 700 nm. In the spectral range from 260 to 400 nm, QE values above 50% are achieved over the entire range. A new technique, high pressure hydrogen anneal at 20 atm, was applied to the photodiodes, and a 30% improvement in DC was observed for the BSI imager with HfO2 as ARC as well as for the front-side imager. The entire BSI process was developed on 200 mm wafers and evaluated on test diode structures. The know-how is then transferred to real imager sensor arrays.
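    QE figures like those above come from comparing collected electrons to incident photons at each wavelength. A sketch of that conversion (the photocurrent and power values are hypothetical, chosen to land near the reported >75% visible-band QE):

    ```python
    # Illustrative sketch: quantum efficiency from a measured photocurrent,
    # QE = (electrons per second) / (photons per second) at a given wavelength.
    Q = 1.602176634e-19        # electron charge, C
    H = 6.62607015e-34         # Planck constant, J*s
    C = 2.99792458e8           # speed of light, m/s

    def quantum_efficiency(photocurrent_a, optical_power_w, wavelength_m):
        electrons_per_s = photocurrent_a / Q
        photons_per_s = optical_power_w * wavelength_m / (H * C)
        return electrons_per_s / photons_per_s

    # Hypothetical numbers: 1 nW of 550 nm light producing 0.35 nA of photocurrent.
    qe = quantum_efficiency(0.35e-9, 1e-9, 550e-9)   # ~0.79
    ```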

  3. Multi-Sensor Mud Detection

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Matthies, Larry H.

    2010-01-01

    Robust mud detection is a critical perception requirement for Unmanned Ground Vehicle (UGV) autonomous offroad navigation. A military UGV stuck in a mud body during a mission may have to be sacrificed or rescued, both of which are unattractive options. There are several characteristics of mud that may be detectable with appropriate UGV-mounted sensors. For example, mud only occurs on the ground surface, is cooler than surrounding dry soil during the daytime under nominal weather conditions, is generally darker than surrounding dry soil in visible imagery, and is highly polarized. However, none of these cues are definitive on their own. Dry soil also occurs on the ground surface, shadows, snow, ice, and water can also be cooler than surrounding dry soil, shadows are also darker than surrounding dry soil in visible imagery, and cars, water, and some vegetation are also highly polarized. Shadows, snow, ice, water, cars, and vegetation can all be disambiguated from mud by using a suite of sensors that span multiple bands in the electromagnetic spectrum. Because there are military operations when it is imperative for UGV's to operate without emitting strong, detectable electromagnetic signals, passive sensors are desirable. JPL has developed a daytime mud detection capability using multiple passive imaging sensors. Cues for mud from multiple passive imaging sensors are fused into a single mud detection image using a rule base, and the resultant mud detection is localized in a terrain map using range data generated from a stereo pair of color cameras.
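    The cue-fusion idea can be sketched as a per-pixel rule base over boolean cue maps from the different passive sensors. The cue names and the all-cues-agree rule below are illustrative assumptions, not JPL's actual rule base:

    ```python
    import numpy as np

    def fuse_mud_cues(on_ground, cooler_than_soil, darker_than_soil, polarized):
        """A pixel is flagged as mud only when every cue agrees (toy rule)."""
        return on_ground & cooler_than_soil & darker_than_soil & polarized

    shape = (4, 4)                                  # toy 4x4 image
    ground = np.ones(shape, bool)                   # geometry cue from stereo
    cool = np.zeros(shape, bool);  cool[1:3, 1:3] = True   # thermal cue
    dark = np.zeros(shape, bool);  dark[1:3, :] = True     # visible-intensity cue
    polar = np.zeros(shape, bool); polar[:, 1:3] = True    # polarization cue
    mud = fuse_mud_cues(ground, cool, dark, polar)  # only the 2x2 overlap survives
    ```

    Requiring agreement across bands is what disambiguates mud from shadows, snow, water, and vegetation, each of which trips only a subset of the cues.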

  4. A monolithic 640 × 512 CMOS imager with high-NIR sensitivity

    NASA Astrophysics Data System (ADS)

    Lauxtermann, Stefan; Fisher, John; McDougal, Michael

    2014-06-01

    In this paper we present first results from a backside illuminated CMOS image sensor that we fabricated on high resistivity silicon. Compared to conventional CMOS imagers, a thicker photosensitive membrane can be depleted when using silicon with low background doping concentration while maintaining low dark current and good MTF performance. The benefits of such a fully depleted silicon sensor are high quantum efficiency over a wide spectral range and a fast photo detector response. Combining these characteristics with the circuit complexity and manufacturing maturity available from a modern, mixed signal CMOS technology leads to a new type of sensor, with an unprecedented performance spectrum in a monolithic device. Our fully depleted, backside illuminated CMOS sensor was designed to operate at integration times down to 100nsec and frame rates up to 1000Hz. Noise in Integrate While Read (IWR) snapshot shutter operation for these conditions was simulated to be below 10e- at room temperature. 2×2 binning with a 4× increase in sensitivity and a maximum frame rate of 4000 Hz is supported. For application in hyperspectral imaging systems the full well capacity in each row can individually be programmed between 10ke-, 60ke- and 500ke-. On test structures we measured a room temperature dark current of 360pA/cm2 at a reverse bias of 3.3V. A peak quantum efficiency of 80% was measured with a single layer AR coating on the backside. Test images captured with the 50μm thick VGA imager between 30Hz and 90Hz frame rate show a strong response at NIR wavelengths.

  5. Nanobridge SQUIDs as calorimetric inductive particle detectors

    NASA Astrophysics Data System (ADS)

    Gallop, John; Cox, David; Hao, Ling

    2015-08-01

Superconducting transition edge sensors (TESs) have made dramatic progress since their invention some 65 years ago (Andrews et al 1949 Phys. Rev. 76 154-155; Irwin and Hilton 2005 Topics Appl. Phys. 99 63-149); there are now major imaging arrays of TESs with as many as 7588 separate sensors, used extensively by astronomers for ground-breaking observations (Hattori et al 2013 Nucl. Instrum. Methods Phys. Res. A 732 299-302). The great success of TES systems has tended to overshadow other superconducting sensor developments. However, there are other types (Sobolewski et al 2003 IEEE Trans. Appl. Supercond. 13 1151-7; Hadfield 2009 Nat. Photonics 3 696-705), which are discussed in papers within this special edition of the journal. Here we describe a quite different type of detector, also applicable to single photon detection but possessing possible advantages (higher sensitivity, higher operating temperature) over the conventional TES, at least for single detectors.

  6. Enhancement of low light level images using color-plus-mono dual camera.

    PubMed

    Jung, Yong Ju

    2017-05-15

    In digital photography, the improvement of imaging quality in low light shooting is one of the users' needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separate image sensors, which simultaneously captures both a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies an adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental system of color-plus-mono camera, we demonstrate that the BJND-aware denoising and selective detail transfer is helpful in improving the image quality during low light shooting.

  7. Handheld and mobile hyperspectral imaging sensors for wide-area standoff detection of explosives and chemical warfare agents

    NASA Astrophysics Data System (ADS)

    Gomer, Nathaniel R.; Gardner, Charles W.; Nelson, Matthew P.

    2016-05-01

    Hyperspectral imaging (HSI) is a valuable tool for the investigation and analysis of targets in complex background with a high degree of autonomy. HSI is beneficial for the detection of threat materials on environmental surfaces, where the concentration of the target of interest is often very low and is typically found within complex scenery. Two HSI techniques that have proven to be valuable are Raman and shortwave infrared (SWIR) HSI. Unfortunately, current generation HSI systems have numerous size, weight, and power (SWaP) limitations that make their potential integration onto a handheld or field portable platform difficult. The systems that are field-portable do so by sacrificing system performance, typically by providing an inefficient area search rate, requiring close proximity to the target for screening, and/or eliminating the potential to conduct real-time measurements. To address these shortcomings, ChemImage Sensor Systems (CISS) is developing a variety of wide-field hyperspectral imaging systems. Raman HSI sensors are being developed to overcome two obstacles present in standard Raman detection systems: slow area search rate (due to small laser spot sizes) and lack of eye-safety. SWIR HSI sensors have been integrated into mobile, robot based platforms and handheld variants for the detection of explosives and chemical warfare agents (CWAs). In addition, the fusion of these two technologies into a single system has shown the feasibility of using both techniques concurrently to provide higher probability of detection and lower false alarm rates. This paper will provide background on Raman and SWIR HSI, discuss the applications for these techniques, and provide an overview of novel CISS HSI sensors focused on sensor design and detection results.

  8. Bio-inspired multi-mode optic flow sensors for micro air vehicles

    NASA Astrophysics Data System (ADS)

    Park, Seokjun; Choi, Jaehyuk; Cho, Jihyun; Yoon, Euisik

    2013-06-01

Monitoring wide-field surrounding information is essential for vision-based autonomous navigation in micro air vehicles (MAVs). Our image-cube (iCube) module, which consists of multiple sensors facing different angles in 3-D space, can be applied to wide-field optic flow estimation (μ-Compound eyes) and to attitude control (μ-Ocelli) in the Micro Autonomous Systems and Technology (MAST) platforms. In this paper, we report an analog/digital (A/D) mixed-mode optic-flow sensor, which generates both optic flows and normal images in different modes for μ-Compound eyes and μ-Ocelli applications. The sensor employs a time-stamp based optic flow algorithm, modified from the conventional EMD (Elementary Motion Detector) algorithm, to give an optimum partitioning of hardware blocks in the analog and digital domains as well as adequate allocation of pixel-level, column-parallel, and chip-level signal processing. Temporal filtering, which would require huge hardware resources if implemented in the digital domain, is retained in a pixel-level analog processing unit. The rest of the blocks, including feature detection and time-stamp latching, are implemented using digital circuits in a column-parallel processing unit. Finally, time-stamp information is decoded into velocity using look-up tables, multiplications, and simple subtraction circuits in a chip-level processing unit, thus significantly reducing core digital processing power consumption. In the normal image mode, the sensor generates 8-b digital images using single-slope ADCs in the column unit. In the optic flow mode, the sensor estimates 8-b 1-D optic flows from the integrated mixed-mode algorithm core and 2-D optic flows with external time-stamp processing.
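    The final decode stage of a time-stamp optic flow scheme reduces to dividing pixel pitch by the difference between the time stamps latched at neighbouring pixels. A toy sketch (units and values are illustrative, not the chip's actual parameters):

    ```python
    # Illustrative sketch: a feature crossing two neighbouring pixels latches a
    # time stamp at each; velocity is pixel pitch over the stamp difference
    # (the look-up-table / subtraction step done on-chip in hardware).
    def flow_velocity(pitch_um, t0_us, t1_us):
        """Pixel-plane speed in um/us from two latched time stamps."""
        dt = t1_us - t0_us
        if dt == 0:
            return float("inf")          # faster than the stamp resolution
        return pitch_um / dt

    v = flow_velocity(10.0, 120.0, 160.0)   # 10 um pitch, 40 us transit -> 0.25 um/us
    ```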

  9. High-Frequency Fiber-Optic Ultrasonic Sensor Using Air Micro-Bubble for Imaging of Seismic Physical Models.

    PubMed

    Gang, Tingting; Hu, Manli; Rong, Qiangzhou; Qiao, Xueguang; Liang, Lei; Liu, Nan; Tong, Rongxin; Liu, Xiaobo; Bian, Ce

    2016-12-14

A micro fiber-optic Fabry-Perot interferometer (FPI) is proposed and demonstrated experimentally for ultrasonic imaging of seismic physical models. The device consists of a micro-bubble formed at the end of a single-mode fiber (SMF). The micro-structure is created by a discharge operation on a short segment of hollow-core fiber (HCF) spliced to the SMF. This micro FPI is sensitive to ultrasonic waves (UWs), especially high-frequency (up to 10 MHz) UWs, thanks to its ultra-thin cavity wall and micro-scale diameter. A side-band filter technique is employed for UW interrogation, yielding a UW signal with a high signal-to-noise ratio (SNR). Finally, the sensor is used for lateral imaging of the physical model by scanning UW detection and two-dimensional signal reconstruction.

  10. Multisensor fusion for 3-D defect characterization using wavelet basis function neural networks

    NASA Astrophysics Data System (ADS)

    Lim, Jaein; Udpa, Satish S.; Udpa, Lalita; Afzal, Muhammad

    2001-04-01

    The primary objective of multi-sensor data fusion, which offers both quantitative and qualitative benefits, has the ability to draw inferences that may not be feasible with data from a single sensor alone. In this paper, data from two sets of sensors are fused to estimate the defect profile from magnetic flux leakage (MFL) inspection data. The two sensors measure the axial and circumferential components of the MFL. Data is fused at the signal level. If the flux is oriented axially, the samples of the axial signal are measured along a direction parallel to the flaw, while the circumferential signal is measured in a direction that is perpendicular to the flaw. The two signals are combined as the real and imaginary components of a complex valued signal. Signals from an array of sensors are arranged in contiguous rows to obtain a complex valued image. A boundary extraction algorithm is used to extract the defect areas in the image. Signals from the defect regions are then processed to minimize noise and the effects of lift-off. Finally, a wavelet basis function (WBF) neural network is employed to map the complex valued image appropriately to obtain the geometrical profile of the defect. The feasibility of the approach was evaluated using the data obtained from the MFL inspection of natural gas transmission pipelines. Results show the effectiveness of the approach.
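    The signal-level fusion step described above can be sketched in a few lines: the two MFL components become the real and imaginary parts of one complex-valued image, whose magnitude can feed the boundary-extraction stage (toy values, not inspection data):

    ```python
    import numpy as np

    # Illustrative sketch: fuse axial and circumferential MFL channels at the
    # signal level as one complex-valued image, rows taken from a sensor array.
    axial = np.array([[0.1, 0.8, 0.2],
                      [0.0, 0.9, 0.1]])            # measured parallel to the flaw
    circumferential = np.array([[0.0, 0.5, 0.1],
                                [0.1, 0.6, 0.0]])  # measured perpendicular to the flaw
    fused = axial + 1j * circumferential           # complex-valued image
    magnitude = np.abs(fused)                      # input for boundary extraction
    ```

    In the paper the complex image then goes through noise/lift-off compensation and the wavelet basis function network; only the fusion step is sketched here.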

  11. Single-sensor system for spatially resolved, continuous, and multiparametric optical mapping of cardiac tissue

    PubMed Central

    Lee, Peter; Bollensdorff, Christian; Quinn, T. Alexander; Wuskell, Joseph P.; Loew, Leslie M.; Kohl, Peter

    2011-01-01

    Background Simultaneous optical mapping of multiple electrophysiologically relevant parameters in living myocardium is desirable for integrative exploration of mechanisms underlying heart rhythm generation under normal and pathophysiologic conditions. Current multiparametric methods are technically challenging, usually involving multiple sensors and moving parts, which contributes to high logistic and economic thresholds that prevent easy application of the technique. Objective The purpose of this study was to develop a simple, affordable, and effective method for spatially resolved, continuous, simultaneous, and multiparametric optical mapping of the heart, using a single camera. Methods We present a new method to simultaneously monitor multiple parameters using inexpensive off-the-shelf electronic components and no moving parts. The system comprises a single camera, commercially available optical filters, and light-emitting diodes (LEDs), integrated via microcontroller-based electronics for frame-accurate illumination of the tissue. For proof of principle, we illustrate measurement of four parameters, suitable for ratiometric mapping of membrane potential (di-4-ANBDQPQ) and intracellular free calcium (fura-2), in an isolated Langendorff-perfused rat heart during sinus rhythm and ectopy, induced by local electrical or mechanical stimulation. Results The pilot application demonstrates suitability of this imaging approach for heart rhythm research in the isolated heart. In addition, locally induced excitation, whether stimulated electrically or mechanically, gives rise to similar ventricular propagation patterns. Conclusion Combining an affordable camera with suitable optical filters and microprocessor-controlled LEDs, single-sensor multiparametric optical mapping can be practically implemented in a simple yet powerful configuration and applied to heart rhythm research. 
The moderate system complexity and component cost is destined to lower the threshold to broader application of functional imaging and to ease implementation of more complex optical mapping approaches, such as multiparametric panoramic imaging. A proof-of-principle application confirmed that although electrically and mechanically induced excitation occur by different mechanisms, their electrophysiologic consequences downstream from the point of activation are not dissimilar. PMID:21459161
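    The frame-accurate LED multiplexing scheme implies a simple demultiplexing step on the recorded stream: with n illumination channels cycled per frame, frame k belongs to channel k mod n. A sketch under that assumption (four channels, toy frames; not the authors' acquisition code):

    ```python
    import numpy as np

    def demultiplex(frames, n_channels=4):
        """Split an interleaved single-camera frame stack into per-channel stacks."""
        return [frames[k::n_channels] for k in range(n_channels)]

    frames = np.arange(12).reshape(12, 1, 1)    # 12 toy "frames" from one camera
    chans = demultiplex(frames)                 # e.g. two ratiometric dye pairs
    ratio = chans[0] / np.maximum(chans[1], 1)  # e.g. a ratiometric Vm signal
    ```

    Ratioing the paired channels is what makes the membrane-potential and calcium readouts robust to illumination and dye-loading inhomogeneity.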

  12. Development of an optical Zn2+ probe based on a single fluorescent protein

    DOE PAGES

    Qin, Yan; Sammond, Deanne W.; Braselmann, Esther; ...

    2016-07-28

Various fluorescent probes have been developed to reveal the biological functions of intracellular labile Zn2+. Here we present Green Zinc Probe (GZnP), a novel genetically encoded Zn2+ sensor design based on a single fluorescent protein (single-FP). The GZnP sensor is generated by attaching two zinc fingers (ZF) of the transcription factor Zap1 (ZF1 and ZF2) to the two ends of a circularly permuted green fluorescent protein (cpGFP). Formation of ZF folds induces interaction between the two ZFs, which induces a change in the cpGFP conformation, leading to an increase in fluorescence. A small sensor library is created to include mutations in the ZFs, cpGFP and linkers between ZF and cpGFP to improve signal stability, sensor brightness and dynamic range based on rational protein engineering and computational design by Rosetta. Using a cell-based library screen, we identify sensor GZnP1 which demonstrates a stable maximum signal, decent brightness (QY = 0.42 at apo state), as well as specific and sensitive response to Zn2+ in HeLa cells (Fmax/Fmin = 2.6, Kd = 58 pM, pH 7.4). The subcellular localizing sensors mito-GZnP1 (in mitochondria matrix) and Lck-GZnP1 (on plasma membrane) display sensitivity to Zn2+ (Fmax/Fmin = 2.2). In conclusion, this sensor design provides freedom to be used in combination with other optical indicators and optogenetic tools for simultaneous imaging and advancing our understanding of cellular Zn2+ function.

  13. How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995, digital still cameras in expensive SLR formats had 6 megapixels and produced high quality images (with significant image processing). In 2005, significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010 film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a “pixel war” in which the driving feature of the cameras was the pixel count: even moderate cost (~$120) DSCs would have 14 megapixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper explores why larger pixels and sensors are key to the future of DSCs.
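    The title question has a simple arithmetic answer under the commonly cited 300 dpi print-viewing standard:

    ```python
    # Worked example: pixels needed for a 4"x6" print at 300 dpi.
    def print_pixels(width_in, height_in, dpi=300):
        return (width_in * dpi) * (height_in * dpi)

    mp = print_pixels(4, 6) / 1e6   # 1200 x 1800 pixels = 2.16 megapixels
    ```

    So a good 4"×6" print needs only about 2.2 megapixels, which is the paper's point: beyond that, pixel size (and hence sharpness, noise, speed, and exposure latitude) matters more than pixel count.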

  14. A Flexible Spatiotemporal Method for Fusing Satellite Images with Different Resolutions

    USDA-ARS?s Scientific Manuscript database

    Studies of land surface dynamics in heterogeneous landscapes often require remote sensing data with high acquisition frequency and high spatial resolution. However, no single sensor meets this requirement. This study presents a new spatiotemporal data fusion method, the Flexible Spatiotemporal DAta ...

  15. Advanced Sensors Boost Optical Communication, Imaging

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Brooklyn, New York-based Amplification Technologies Inc. (ATI) employed Phase I and II SBIR funding from NASA's Jet Propulsion Laboratory to advance the company's solid-state photomultiplier technology. Under the SBIR, ATI developed a small, energy-efficient, extremely high-gain sensor capable of detecting light down to single photons in the near-infrared wavelength range. The company has commercialized this technology in the form of its NIRDAPD photomultiplier, ideal for use in free-space optical communications, lidar and ladar, night vision goggles, and other light-sensing applications.

  16. Imaging Detonations of Explosives

    DTIC Science & Technology

    2016-04-01

    made using a full-color single-camera pyrometer where wavelength resolution is achieved using the Bayer-type mask covering the sensor chip and a ... many CHNO-based explosives (e.g., TNT [C7H5N3O6], the formulation C-4 [92% RDX, C3H6N6O6]), hot detonation products are mainly soot and permanent ... unreferenced). Essentially, 2 light sensors (cameras), each filtered over a narrow wavelength region, observe an event over the same line of sight. The

  17. Video sensor with range measurement capability

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Broderick, David J. (Inventor)

    2008-01-01

    A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser spaced a predetermined distance from the camera for, when activated, producing a laser beam. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and angles determined from the light spots on the video images produced by the camera.
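    The range computation the patent describes is classic laser triangulation: a known camera-to-laser baseline plus the spot's angular offset in the image yields the distance. A minimal sketch assuming a pinhole camera model with the laser mounted parallel to the optical axis (the baseline, focal length and pixel values below are illustrative, not from the patent):

```python
import math

def pixel_to_angle(pixel_offset, focal_length_px):
    """Pinhole model: angular offset of the laser spot from the optical axis."""
    return math.atan2(pixel_offset, focal_length_px)

def range_from_spot(baseline_m, spot_angle_rad):
    """Laser parallel to the camera axis at a known baseline: the spot's
    angular offset gives range = baseline / tan(angle)."""
    return baseline_m / math.tan(spot_angle_rad)

# Illustrative: 10 cm baseline, 1000 px focal length, spot 50 px off-axis.
r = range_from_spot(0.10, pixel_to_angle(50.0, 1000.0))
```

    With multiple spots from the diffractive optic, each spot provides an independent range sample over the target surface.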

  18. Multi sensor satellite imagers for commercial remote sensing

    NASA Astrophysics Data System (ADS)

    Cronje, T.; Burger, H.; Du Plessis, J.; Du Toit, J. F.; Marais, L.; Strumpfer, F.

    2005-10-01

    This paper will discuss and compare recent refractive and catadioptric imager designs developed and manufactured at SunSpace for Multi Sensor Satellite Imagers with Panchromatic, Multi-spectral, Area and Hyperspectral sensors on a single Focal Plane Array (FPA). These satellite optical systems were designed with applications such as food-supply, crop-yield and disaster monitoring in mind. The aim of these imagers is to achieve medium to high resolution (2.5 m to 15 m) spatial sampling, wide swaths (up to 45 km) and noise equivalent reflectance (NER) values of less than 0.5%. State-of-the-art FPA designs are discussed, addressing the choice of detectors to achieve this performance. Special attention is given to thermal robustness and compactness, the use of folding prisms to place multiple detectors in a large FPA, and a specially developed process to customize the spectral selection while minimizing mass, power and cost. A refractive imager with up to 6 spectral bands (6.25 m GSD) and a catadioptric imager with panchromatic (2.7 m GSD), multi-spectral (6 bands, 4.6 m GSD) and hyperspectral (400 nm to 2.35 μm, 200 bands, 15 m GSD) sensors on the same FPA will be discussed. Both of these imagers are also equipped with real-time video viewfinding capabilities. The electronics can be subdivided into Front-End Electronics and Control Electronics with analogue and digital signal processing. A dedicated Analogue Front-End is used for Correlated Double Sampling (CDS), black-level correction, variable gain, up to 12-bit digitizing and a high-speed LVDS data link to a mass memory unit.

  19. Thermal microphotonic sensor and sensor array

    DOEpatents

    Watts, Michael R [Albuquerque, NM; Shaw, Michael J [Tijeras, NM; Nielson, Gregory N [Albuquerque, NM; Lentine, Anthony L [Albuquerque, NM

    2010-02-23

    A thermal microphotonic sensor is disclosed for detecting infrared radiation using heat generated by the infrared radiation to shift the resonant frequency of an optical resonator (e.g. a ring resonator) to which the heat is coupled. The shift in the resonant frequency can be determined from light in an optical waveguide which is evanescently coupled to the optical resonator. An infrared absorber can be provided on the optical waveguide either as a coating or as a plate to aid in absorption of the infrared radiation. In some cases, a vertical resonant cavity can be formed about the infrared absorber to further increase the absorption of the infrared radiation. The sensor can be formed as a single device, or as an array for imaging the infrared radiation.

  20. Compact and portable X-ray imager system using Medipix3RX

    NASA Astrophysics Data System (ADS)

    Garcia-Nathan, T. B.; Kachatkou, A.; Jiang, C.; Omar, D.; Marchal, J.; Changani, H.; Tartoni, N.; van Silfhout, R. G.

    2017-10-01

    In this paper the design and implementation of a novel portable X-ray imager system is presented. The design features a direct X-ray detection scheme by making use of a hybrid detector (Medipix3RX). Taking advantage of the capabilities of the Medipix3RX, such as high resolution, zero dead time, single-photon detection and charge-sharing mode, the imager achieves better resolution and higher sensitivity than traditional indirect detection schemes. A detailed description of the system is presented. It consists of a vacuum chamber containing the sensor; an electronic board for temperature management, conditioning and readout of the sensor; and a data processing unit which also handles the network connection and communicates with clients by acting as a server. A field-programmable gate array (FPGA) device is used to implement the readout protocol for the Medipix3RX; beyond readout, the FPGA can perform complex image-processing functions such as feature extraction, histograms, profiling and image compression at high speed. The temperature of the sensor is monitored and controlled through a PID algorithm making use of a Peltier cooler, improving the energy resolution and response stability of the sensor. Without implementing data compression techniques, the system is capable of transferring 680 profiles/s or 240 images/s in continuous mode. Implementation of equalization procedures and tests in colour mode are presented in this paper. For the experimental measurements the Medipix3RX sensor was used with a silicon layer. One of the tested applications of the system is as an X-ray beam position monitor (XBPM) device for synchrotron applications. The XBPM allows a non-destructive real-time measurement of the beam position, size and intensity. A Kapton foil is placed in the beam path, scattering radiation towards a pinhole camera setup that allows the sensor to obtain an image of the beam. By using profiles of the synchrotron X-ray beam, high-frequency movement of the beam position can be studied, up to 340 Hz. The system is also capable of an independent energy measurement of the beam by using the Medipix3RX variable energy threshold feature.

  1. Fast Fourier single-pixel imaging via binary illumination.

    PubMed

    Zhang, Zibang; Wang, Xueying; Zheng, Guoan; Zhong, Jingang

    2017-09-20

    Fourier single-pixel imaging (FSI) employs Fourier basis patterns for encoding spatial information and is capable of reconstructing high-quality two-dimensional and three-dimensional images. Fourier-domain sparsity in natural scenes allows FSI to recover sharp images from undersampled data. The original FSI demonstration, however, requires grayscale Fourier basis patterns for illumination. This requirement imposes a limitation on the imaging speed, as digital micro-mirror devices (DMDs) generate grayscale patterns at a low refresh rate. In this paper, we report a new strategy to increase the speed of FSI by two orders of magnitude. In this strategy, we binarize the Fourier basis patterns based on upsampling and error diffusion dithering. We demonstrate a 20,000 Hz projection rate using a DMD and capture 256-by-256-pixel dynamic scenes at a speed of 10 frames per second. The reported technique substantially accelerates the image acquisition speed of FSI. It may find broad imaging applications at wavebands that are not accessible using conventional two-dimensional image sensors.
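    The binarization step the authors describe can be approximated with standard error-diffusion dithering. A sketch of dithering a grayscale Fourier basis pattern with the Floyd-Steinberg kernel (the paper additionally upsamples before dithering; the pattern size and spatial frequencies here are arbitrary choices for illustration):

```python
import numpy as np

def fourier_pattern(n, fx, fy, phase=0.0):
    """Grayscale Fourier basis pattern with values in [0, 1]."""
    y, x = np.mgrid[0:n, 0:n]
    return 0.5 + 0.5 * np.cos(2.0 * np.pi * (fx * x + fy * y) / n + phase)

def floyd_steinberg(img):
    """Binarize by error diffusion: each pixel's quantization error is
    pushed onto its unprocessed neighbours (7/16, 3/16, 5/16, 1/16)."""
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for yy in range(h):
        for xx in range(w):
            old = img[yy, xx]
            new = 1.0 if old >= 0.5 else 0.0
            out[yy, xx] = new
            err = old - new
            if xx + 1 < w:
                img[yy, xx + 1] += err * 7 / 16
            if yy + 1 < h:
                if xx > 0:
                    img[yy + 1, xx - 1] += err * 3 / 16
                img[yy + 1, xx] += err * 5 / 16
                if xx + 1 < w:
                    img[yy + 1, xx + 1] += err * 1 / 16
    return out

pattern = fourier_pattern(64, 3, 2)   # grayscale basis pattern
binary = floyd_steinberg(pattern)     # DMD-ready binary pattern
```

    Dithering preserves the local average intensity, which is why the binary pattern can stand in for the grayscale one at the DMD's full binary refresh rate.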

  2. Visualization of Content Release from Cell Surface-Attached Single HIV-1 Particles Carrying an Extra-Viral Fluorescent pH-Sensor.

    PubMed

    Sood, Chetan; Marin, Mariana; Mason, Caleb S; Melikyan, Gregory B

    2016-01-01

    HIV-1 fusion leading to productive entry has long been thought to occur at the plasma membrane. However, our previous single virus imaging data imply that, after Env engagement of CD4 and coreceptors at the cell surface, the virus enters into and fuses with intracellular compartments. We were unable to reliably detect viral fusion at the plasma membrane. Here, we implement a novel virus labeling strategy that biases towards detection of virus fusion that occurs in a pH-neutral environment, at the plasma membrane or possibly in early pH-neutral vesicles. Virus particles are co-labeled with an intra-viral content marker, which is released upon fusion, and an extra-viral pH sensor consisting of ecliptic pHluorin fused to the transmembrane domain of ICAM-1. This sensor fully quenches upon virus trafficking to a mildly acidic compartment, thus precluding subsequent detection of viral content release. As an interesting secondary observation, the incorporation of the pH sensor revealed that HIV-1 particles occasionally shuttle between neutral and acidic compartments in target cells expressing CD4, suggesting that a small fraction of viral particles is recycled to the plasma membrane and re-internalized. By imaging viruses bound to living cells, we found that HIV-1 content release in a neutral-pH environment was a rare event (~0.4% of particles). Surprisingly, viral content release was not significantly reduced by fusion inhibitors, implying that content release was due to spontaneous formation of viral membrane defects occurring at the cell surface. We did not measure a significant occurrence of HIV-1 fusion at neutral pH above this defect-mediated background loss of content, suggesting that the pH sensor may destabilize the membrane of the HIV-1 pseudovirus and thus preclude reliable detection of single virus fusion events at neutral pH.

  3. Reconfigurable Mobile System - Ground, sea and air applications

    NASA Astrophysics Data System (ADS)

    Lamonica, Gary L.; Sturges, James W.

    1990-11-01

    The Reconfigurable Mobile System (RMS) is a highly mobile data-processing unit for military users requiring real-time access to data gathered by airborne (and other) reconnaissance platforms. RMS combines high-performance computation and image-processing workstations with resources for command/control/communications in a single, lightweight shelter. RMS is composed of off-the-shelf components and is easily reconfigurable to land-vehicle or shipboard versions. Mission planning, which involves an airborne sensor platform's sensor coverage, considers aircraft and sensor capabilities in conjunction with weather, terrain, and threat scenarios. RMS's man-machine interface concept facilitates user familiarization and features icon-based function selection and windowing.

  4. Localization Using Visual Odometry and a Single Downward-Pointing Camera

    NASA Technical Reports Server (NTRS)

    Swank, Aaron J.

    2012-01-01

    Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization and Mapping (SLAM). Yet the process requires a number of image-processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and its use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
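    Once optical flow has matched features between consecutive frames, one core odometry step is fitting the rigid motion that best explains the matched displacements. A minimal sketch using a 2D Kabsch/Procrustes fit in NumPy (illustrative only; the report does not specify its estimator at this level of detail):

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t
    (2D Kabsch/Procrustes fit over matched feature points)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    h = (src - sc).T @ (dst - dc)          # cross-covariance of centred points
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, d]) @ u.T
    t = dc - r @ sc
    return r, t

# Synthetic check: a known 10-degree rotation plus a shift is recovered.
theta = np.deg2rad(10.0)
r_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
pts = np.random.default_rng(0).uniform(0.0, 100.0, (20, 2))
moved = pts @ r_true.T + np.array([5.0, -3.0])
r_est, t_est = estimate_rigid_2d(pts, moved)
```

    In a real pipeline the per-frame transforms are chained to integrate pose, and the result is fused with inertial measurements as the abstract describes.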

  5. Microscopic resolution broadband dielectric spectroscopy

    NASA Astrophysics Data System (ADS)

    Mukherjee, S.; Watson, P.; Prance, R. J.

    2011-08-01

    Results are presented for a non-contact measurement system capable of micron level spatial resolution. It utilises the novel electric potential sensor (EPS) technology, invented at Sussex, to image the electric field above a simple composite dielectric material. EP sensors may be regarded as analogous to a magnetometer and require no adjustments or offsets during either setup or use. The sample consists of a standard glass/epoxy FR4 circuit board, with linear defects machined into the surface by a PCB milling machine. The sample is excited with an a.c. signal over a range of frequencies from 10 kHz to 10 MHz, from the reverse side, by placing it on a conducting sheet connected to the source. The single sensor is raster scanned over the surface at a constant working distance, consistent with the spatial resolution, in order to build up an image of the electric field, with respect to the reference potential. The results demonstrate that both the surface defects and the internal dielectric variations within the composite may be imaged in this way, with good contrast being observed between the glass mat and the epoxy resin.

  6. A new omni-directional multi-camera system for high resolution surveillance

    NASA Astrophysics Data System (ADS)

    Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2014-05-01

    Omni-directional high-resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose were based on a parabolic mirror or fisheye lens, where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems, the image resolution is limited to that of a single image sensor. Recently, the Panoptic camera approach, which mimics the eyes of flying insects using multiple imagers, has been presented. This approach features a novel solution for constructing a spherically arranged wide-FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high-resolution visible-spectrum imaging and recording system inspired by the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over 17,700×4,650 pixels (82.3 MP). Real-time video capturing capability is also verified at 30 fps for a resolution over 9,000×2,400 pixels (21.6 MP). The next-generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain, such as large-perimeter object tracking, very-high-resolution depth-map estimation and high-dynamic-range imaging, which are beyond standard stitching and panorama generation methods.

  7. Universal Stochastic Multiscale Image Fusion: An Example Application for Shale Rock.

    PubMed

    Gerke, Kirill M; Karsanina, Marina V; Mallants, Dirk

    2015-11-02

    Spatial data captured with sensors of different resolution would provide a maximum degree of information if the data were to be merged into a single image representing all scales. We develop a general solution for merging multiscale categorical spatial data into a single dataset using stochastic reconstructions with rescaled correlation functions. The versatility of the method is demonstrated by merging three images of shale rock representing macro, micro and nanoscale spatial information on mineral, organic matter and porosity distribution. Merging multiscale images of shale rock is pivotal to quantify more reliably petrophysical properties needed for production optimization and environmental impacts minimization. Images obtained by X-ray microtomography and scanning electron microscopy were fused into a single image with predefined resolution. The methodology is sufficiently generic for implementation of other stochastic reconstruction techniques, any number of scales, any number of material phases, and any number of images for a given scale. The methodology can be further used to assess effective properties of fused porous media images or to compress voluminous spatial datasets for efficient data storage. Practical applications are not limited to petroleum engineering or more broadly geosciences, but will also find their way in material sciences, climatology, and remote sensing.
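    The correlation functions underlying such stochastic reconstructions can be illustrated with the simplest descriptor, the two-point probability S2(r), together with a lag-axis rescaling for merging data measured at different voxel sizes. A hedged sketch (function names and the random test image are invented for illustration; the paper's method involves full reconstruction, not just the descriptor):

```python
import numpy as np

def two_point_s2(img, max_lag):
    """Horizontal two-point probability S2(r): chance that two pixels a
    distance r apart both belong to phase 1. S2(0) is the phase fraction."""
    img = np.asarray(img, dtype=float)
    w = img.shape[1]
    return np.array([(img[:, :w - r] * img[:, r:]).mean() for r in range(max_lag)])

def rescale_s2(lags, s2, scale):
    """Re-express S2 measured at one voxel size on another lag axis
    (voxel-size ratio `scale`) by linear interpolation."""
    return np.interp(lags, np.asarray(lags) * scale, s2)

rng = np.random.default_rng(1)
img = (rng.random((128, 128)) < 0.3).astype(float)   # toy binary "pore" map
s2 = two_point_s2(img, 20)
s2_coarse = rescale_s2(np.arange(20), s2, 2.0)       # as seen at 2x voxel size
```

    Rescaling the lag axis is what lets descriptors computed at the nano-, micro- and macro-scales be combined into one consistent target for reconstruction.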

  8. Universal Stochastic Multiscale Image Fusion: An Example Application for Shale Rock

    PubMed Central

    Gerke, Kirill M.; Karsanina, Marina V.; Mallants, Dirk

    2015-01-01

    Spatial data captured with sensors of different resolution would provide a maximum degree of information if the data were to be merged into a single image representing all scales. We develop a general solution for merging multiscale categorical spatial data into a single dataset using stochastic reconstructions with rescaled correlation functions. The versatility of the method is demonstrated by merging three images of shale rock representing macro, micro and nanoscale spatial information on mineral, organic matter and porosity distribution. Merging multiscale images of shale rock is pivotal to quantify more reliably petrophysical properties needed for production optimization and environmental impacts minimization. Images obtained by X-ray microtomography and scanning electron microscopy were fused into a single image with predefined resolution. The methodology is sufficiently generic for implementation of other stochastic reconstruction techniques, any number of scales, any number of material phases, and any number of images for a given scale. The methodology can be further used to assess effective properties of fused porous media images or to compress voluminous spatial datasets for efficient data storage. Practical applications are not limited to petroleum engineering or more broadly geosciences, but will also find their way in material sciences, climatology, and remote sensing. PMID:26522938

  9. Airborne digital-image data for monitoring the Colorado River corridor below Glen Canyon Dam, Arizona, 2009 - Image-mosaic production and comparison with 2002 and 2005 image mosaics

    USGS Publications Warehouse

    Davis, Philip A.

    2012-01-01

    Airborne digital-image data were collected for the Arizona part of the Colorado River ecosystem below Glen Canyon Dam in 2009. These four-band image data are similar in wavelength band (blue, green, red, and near infrared) and spatial resolution (20 centimeters) to image collections of the river corridor in 2002 and 2005. These periodic image collections are used by the Grand Canyon Monitoring and Research Center (GCMRC) of the U.S. Geological Survey to monitor the effects of Glen Canyon Dam operations on the downstream ecosystem. The 2009 collection used the latest model of the Leica ADS40 airborne digital sensor (the SH52), which uses a single optic for all four bands and collects and stores band radiance in 12-bits, unlike the image sensors that GCMRC used in 2002 and 2005. This study examined the performance of the SH52 sensor, on the basis of the collected image data, and determined that the SH52 sensor provided superior data relative to the previously employed sensors (that is, an early ADS40 model and Zeiss Imaging's Digital Mapping Camera) in terms of band-image registration, dynamic range, saturation, linearity to ground reflectance, and noise level. The 2009 image data were provided as orthorectified segments of each flightline to constrain the size of the image files; each river segment was covered by 5 to 6 overlapping, linear flightlines. Most flightline images for each river segment had some surface-smear defects and some river segments had cloud shadows, but these two conditions did not generally coincide in the majority of the overlapping flightlines for a particular river segment. Therefore, the final image mosaic for the 450-kilometer (km)-long river corridor required careful selection and editing of numerous flightline segments (a total of 513 segments, each 3.2 km long) to minimize surface defects and cloud shadows. The final image mosaic has a total of only 3 km of surface defects. 
The final image mosaic for the western end of the corridor has areas of cloud shadow because of persistent inclement weather during data collection. This report presents visual comparisons of the 2002, 2005, and 2009 digital-image mosaics for various physical, biological, and cultural resources within the Colorado River ecosystem. All of the comparisons show the superior quality of the 2009 image data. In fact, the 2009 four-band image mosaic is perhaps the best image dataset that exists for the entire Arizona part of the Colorado River.

  10. New image-stabilizing system

    NASA Astrophysics Data System (ADS)

    Zhao, Yuejin

    1996-06-01

    In this paper, a new method for image stabilization with a three-axis image-stabilizing reflecting prism assembly is presented, and the principle of image stabilization in this prism assembly, formulae for image stabilization and working formulae with an approximation up to the third power are given in detail. In this image-stabilizing system, a single-chip microcomputer is used to calculate the values of the compensating angles and thus to control the prism assembly. Two gyroscopes act as sensors from which information on angular perturbation is obtained, and three stepping motors drive the prism assembly to compensate for the movement of the image produced by angular perturbation. The image-stabilizing device so established is a multidisciplinary system involving optics, mechanics, electronics and computing.

  11. Single-photon imaging in complementary metal oxide semiconductor processes

    PubMed Central

    Charbon, E.

    2014-01-01

    This paper describes the basics of single-photon counting in complementary metal oxide semiconductors, through single-photon avalanche diodes (SPADs), and the making of miniaturized pixels with photon-counting capability based on SPADs. Some applications, which may take advantage of SPAD image sensors, are outlined, such as fluorescence-based microscopy, three-dimensional time-of-flight imaging and biomedical imaging, to name just a few. The paper focuses on architectures that are best suited to those applications and the trade-offs they generate. In this context, architectures are described that efficiently collect the output of single pixels when designed in large arrays. Off-chip readout circuit requirements are described for a variety of applications in physics, medicine and the life sciences. Owing to the dynamic nature of SPADs, designs featuring a large number of SPADs require careful analysis of the target application for an optimal use of silicon real estate and of limited readout bandwidth. The paper also describes the main trade-offs involved in architecting such chips and the solutions adopted with focus on scalability and miniaturization. PMID:24567470

  12. Designing a practical system for spectral imaging of skylight.

    PubMed

    López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier; Lee, Raymond L

    2005-09-20

    In earlier work [J. Opt. Soc. Am. A 21, 13-23 (2004)], we showed that a combination of linear models and optimum Gaussian sensors obtained by an exhaustive search can recover daylight spectra reliably from broadband sensor data. Thus our algorithm and sensors could be used to design an accurate, relatively inexpensive system for spectral imaging of daylight. Here we improve our simulation of the multispectral system by (1) considering the different kinds of noise inherent in electronic devices such as charge-coupled devices (CCDs) or complementary metal-oxide semiconductors (CMOS) and (2) extending our research to a different kind of natural illumination, skylight. Because exhaustive searches are computationally expensive, here we switch to a simulated annealing algorithm to define the optimum sensors for recovering skylight spectra. The annealing algorithm requires us to minimize a single cost function, and so we develop one that calculates both the spectral and colorimetric similarity of any pair of skylight spectra. We show that the simulated annealing algorithm yields results similar to the exhaustive search but with much less computational effort. Our technique lets us study the properties of optimum sensors in the presence of noise, one side effect of which is that adding more sensors may not improve the spectral recovery.
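    The annealing search itself can be sketched generically: always accept an improvement, accept a worsening move with probability exp(-delta/T), and cool T geometrically. A minimal sketch with a toy quadratic cost standing in for the paper's combined spectral/colorimetric cost (which would require real skylight spectra; all parameter values here are illustrative):

```python
import math
import random

def simulated_annealing(cost, x0, neighbor, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Minimize `cost`: accept improvements unconditionally, accept worse
    neighbours with probability exp(-delta/T), cool T each step."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    t = t0
    for _ in range(steps):
        cand = neighbor(x, rng)
        cc = cost(cand)
        if cc < c or rng.random() < math.exp(-(cc - c) / max(t, 1e-12)):
            x, c = cand, cc
            if c < best_c:
                best_x, best_c = x, c
        t *= cooling
    return best_x, best_c

# Toy stand-in cost with its minimum at (3, -1).
cost = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
step = lambda p, rng: (p[0] + rng.gauss(0.0, 0.1), p[1] + rng.gauss(0.0, 0.1))
best, best_cost = simulated_annealing(cost, (0.0, 0.0), step)
```

    The early high-temperature phase lets the search escape local minima, which is the property that makes annealing a practical substitute for the exhaustive sensor search.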

  13. A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer.

    PubMed

    Pagoulatos, N; Haynor, D R; Kim, Y

    2001-09-01

    We have developed a fast calibration method for computing the position and orientation of 2-D ultrasound (US) images in 3-D space where a position sensor is mounted on the US probe. This calibration is required in the fields of 3-D ultrasound and registration of ultrasound with other imaging modalities. Most of the existing calibration methods require a complex and tedious experimental procedure. Our method is simple and it is based on a custom-built phantom. Thirty N-fiducials (markers in the shape of the letter "N") embedded in the phantom provide the basis for our calibration procedure. We calibrated a 3.5-MHz sector phased-array probe with a magnetic position sensor, and we studied the accuracy and precision of our method. A typical calibration procedure requires approximately 2 min. We conclude that we can achieve accurate and precise calibration using a single US image, provided that a large number (approximately ten) of N-fiducials are captured within the US image, enabling a representative sampling of the imaging plane.

  14. An Investigation of the Application of Artificial Neural Networks to Adaptive Optics Imaging Systems

    DTIC Science & Technology

    1991-12-01

    neural network and the feedforward neural network studied is the single-layer perceptron artificial neural network. The recurrent artificial neural network input ... features are the wavefront sensor slope outputs and neighboring actuator feedback commands. The feedforward artificial neural network input

  15. Active Multimodal Sensor System for Target Recognition and Tracking

    PubMed Central

    Zhang, Guirong; Zou, Zhaofan; Liu, Ziyue; Mao, Jiansen

    2017-01-01

    High accuracy target recognition and tracking systems using a single sensor or a passive multisensor set are susceptible to external interferences and exhibit environmental dependencies. These difficulties stem mainly from limitations to the available imaging frequency bands, and a general lack of coherent diversity of the available target-related data. This paper proposes an active multimodal sensor system for target recognition and tracking, consisting of a visible, an infrared, and a hyperspectral sensor. The system makes full use of its multisensor information collection abilities; furthermore, it can actively control different sensors to collect additional data, according to the needs of the real-time target recognition and tracking processes. This level of integration between hardware collection control and data processing is experimentally shown to effectively improve the accuracy and robustness of the target recognition and tracking system. PMID:28657609

  16. Progress towards barium daughter tagging in Xe136 decay using single molecule fluorescence imaging

    NASA Astrophysics Data System (ADS)

    McDonald, Austin; NEXT Collaboration

    2017-09-01

    The existence of Majorana fermions is of great interest as it may be related to the asymmetry between matter and antimatter particles in the universe. However, the search for them has proven to be a difficult one. Neutrinoless double beta decay (NLDB) offers a possible opportunity for direct observation of a Majorana fermion. The rate for NLDB decay may be as low as 1 count/ton/year if the mass ordering is inverted. Current detector technologies have background rates between 4 and 300 counts/ton/year/ROI at the 100 kg scale, which is much larger than the universal goal of 0.1 count/ton/year/ROI desired for ton-scale detectors. The premise of my research is to develop new detector technologies that will allow for a background-free experiment. My current work is to develop a sensor that will tag the daughter ion Ba++ from the Xe136 decay. The development of a sensor that is sensitive to single barium ion detection, based on the single-molecule fluorescence imaging technique, is the major focus of this work. If successful, this could provide a path to a background-free experiment.

  17. Progress towards barium daughter tagging in Xe136 decay using single molecule fluorescence imaging

    NASA Astrophysics Data System (ADS)

    McDonald, Austin; Jones, Ben; Benson, Jordan; Nygren, David; NEXT Collaboration

    2017-01-01

    The existence of Majorana fermions has been predicted and is of great interest, as it may be related to the asymmetry between matter and antimatter particles in the universe. However, the search for them has proven to be a difficult one. Neutrinoless double beta decay (NLDB) offers a possible opportunity for direct observation of a Majorana fermion. The rate for NLDB decay may be as low as 1 count/ton/year. Current detector technologies have background rates between 4 and 300 counts/ton/year/ROI, which is much larger than the universal goal of 0.1 count/ton/year/ROI desired for ton-scale detectors. The premise of my research is to develop new detector technologies that will allow for a background-free experiment. My current work is to develop a sensor that will tag the daughter ion Ba++ from the Xe136 decay. The development of a sensor that is sensitive to single barium ion detection, based on the single-molecule fluorescence imaging technique, is the major focus of this work. If successful, this could provide a path to a background-free experiment.

  18. Single-Photon Detectors for Time-of-Flight Range Imaging

    NASA Astrophysics Data System (ADS)

    Stoppa, David; Simoni, Andrea

    We live in a three-dimensional (3D) world, and thanks to the stereoscopic vision provided by our two eyes, in combination with the powerful neural network of the brain, we are able to perceive the distance of objects. Nevertheless, despite the huge market volume of digital cameras, solid-state image sensors can capture only a two-dimensional (2D) projection of the scene under observation, losing a variable of paramount importance, i.e., the scene depth. On the contrary, 3D vision tools could offer amazing possibilities of improvement in many areas thanks to the increased accuracy and reliability of the models representing the environment. Among the great variety of distance-measuring techniques and detection systems available, this chapter will treat only the emerging niche of solid-state, scannerless systems based on the TOF principle and using a detector with SPAD-based pixels. The chapter is organized into three main parts. First, TOF systems and measuring techniques are described. In the second part, the most meaningful sensor architectures for scannerless TOF distance measurement are analyzed, focusing on the circuit building blocks required by time-resolved image sensors. Finally, a performance summary is provided and a perspective on near-future developments of SPAD-TOF sensors is given.
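    The TOF principle behind these sensors reduces to converting a photon's round-trip time into distance. A minimal sketch (the 10 ns and 100 ps figures below are illustrative; actual SPAD TDC bin widths vary by design):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_s):
    """Direct TOF: the light pulse travels out and back, so d = c * t / 2."""
    return C * round_trip_s / 2.0

def depth_per_bin_mm(tdc_bin_s):
    """Depth quantization corresponding to one TDC timing bin, in mm."""
    return C * tdc_bin_s / 2.0 * 1000.0

d = tof_distance_m(10e-9)            # 10 ns round trip -> about 1.5 m
step_mm = depth_per_bin_mm(100e-12)  # 100 ps bin -> about 15 mm per step
```

    The factor of two and the picosecond-scale timing requirement explain why SPADs, with their single-photon sensitivity and fast timing, suit these scannerless systems.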

  19. A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors

    PubMed Central

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255

  20. A multi-resolution approach for an automated fusion of different low-cost 3D sensors.

    PubMed

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-04-24

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory.
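
    The final fitting step the authors describe is a standard point-to-point ICP; the abstract does not give their implementation, so the following is a generic numpy sketch of the idea (nearest-neighbour matching plus a Kabsch least-squares solve):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Point-to-point ICP: pair each source point with its nearest target
    point, solve for the rigid transform, apply it, and repeat."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Toy check: a slightly rotated and shifted copy of a point grid snaps back.
g = np.linspace(0.0, 1.0, 3)
gx, gy, gz = np.meshgrid(g, g, g)
dst = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
th = 0.02
Rz = np.array([[np.cos(th), -np.sin(th), 0], [np.sin(th), np.cos(th), 0], [0, 0, 1]])
src = dst @ Rz.T + np.array([0.01, 0.0, 0.01])
aligned = icp(src, dst)
```

    Production implementations add k-d trees for the nearest-neighbour search and outlier rejection, neither of which is modeled here.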

  1. Estimating Morning Change in Land Surface Temperature from MODIS Day/Night Observations: Applications for Surface Energy Balance Modeling.

    PubMed

    Hain, Christopher R; Anderson, Martha C

    2017-10-16

    Observations of land surface temperature (LST) are crucial for the monitoring of surface energy fluxes from satellite. Methods that require high temporal resolution LST observations (e.g., from geostationary orbit) can be difficult to apply globally because several geostationary sensors are required to attain near-global coverage (60°N to 60°S). While these LST observations are available from polar-orbiting sensors, providing global coverage at higher spatial resolutions, the temporal sampling (twice-daily observations) can pose significant limitations. For example, the Atmosphere Land Exchange Inverse (ALEXI) surface energy balance model, used for monitoring evapotranspiration and drought, requires an observation of the morning change in LST, a quantity not directly observable from polar-orbiting sensors. Therefore, we have developed and evaluated a data-mining approach to estimate the mid-morning rise in LST from a single sensor (two observations per day), the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua platform. In general, the data-mining approach produced estimates with low relative error (5 to 10%) and statistically significant correlations when compared against geostationary observations. This approach will facilitate global, near real-time applications of ALEXI with higher spatial and temporal coverage from a single sensor than is achievable with current geostationary datasets.
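
    The abstract does not specify the data-mining model, so the following is only a hedged stand-in: a least-squares fit on synthetic day/night LST pairs, illustrating how a morning-rise predictor and its relative error could be evaluated against a reference:

```python
import numpy as np

# Synthetic stand-in for the paper's data-mining step: predict the
# mid-morning LST rise from the two daily MODIS Aqua observations
# (day LST ~13:30, night LST ~01:30). All numbers below are made up.
rng = np.random.default_rng(1)
lst_day = 290 + 15 * rng.random(200)             # K
lst_night = lst_day - 5 - 10 * rng.random(200)   # K
# Assumed "truth": morning rise scales with the day-night amplitude.
rise_true = 0.4 * (lst_day - lst_night) + rng.normal(0, 0.3, 200)

# Linear model with intercept, day LST, and day-night amplitude as predictors.
X = np.column_stack([np.ones(200), lst_day, lst_day - lst_night])
coef, *_ = np.linalg.lstsq(X, rise_true, rcond=None)
rise_pred = X @ coef
rel_err = np.abs(rise_pred - rise_true).mean() / rise_true.mean()
```

    The paper's evaluation compares such predictions against geostationary observations; here the comparison is against the synthetic truth only.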

  2. Angstrom-Resolution Magnetic Resonance Imaging of Single Molecules via Wave-Function Fingerprints of Nuclear Spins

    NASA Astrophysics Data System (ADS)

    Ma, Wen-Long; Liu, Ren-Bao

    2016-08-01

    Single-molecule sensitivity of nuclear magnetic resonance (NMR) and angstrom resolution of magnetic resonance imaging (MRI) are among the greatest challenges in magnetic microscopy. Recent development in dynamical-decoupling- (DD) enhanced diamond quantum sensing has enabled single-nucleus NMR and nanoscale NMR. Similar to conventional NMR and MRI, current DD-based quantum sensing utilizes the "frequency fingerprints" of target nuclear spins. The frequency fingerprints by their nature cannot resolve different nuclear spins that have the same noise frequency or differentiate different types of correlations in nuclear-spin clusters, which limits the resolution of single-molecule MRI. Here we show that this limitation can be overcome by using "wave-function fingerprints" of target nuclear spins, which are much more sensitive than frequency fingerprints to the weak hyperfine interaction between the targets and a sensor under resonant DD control. We demonstrate a scheme of angstrom-resolution MRI that is capable of counting and individually localizing single nuclear spins of the same frequency and characterizing the correlations in nuclear-spin clusters. A nitrogen-vacancy-center spin sensor near a diamond surface, provided that the coherence time is improved by surface engineering in the near future, may be employed to determine with angstrom resolution the positions and conformation of single molecules that are isotope labeled. The scheme in this work offers an approach to breaking the resolution limit set by the "frequency gradients" in conventional MRI and to reaching angstrom-scale resolution.

  3. Putting a finishing touch on GECIs

    PubMed Central

    Rose, Tobias; Goltstein, Pieter M.; Portugues, Ruben; Griesbeck, Oliver

    2014-01-01

    More than a decade ago, genetically encoded calcium indicators (GECIs) entered the stage as promising new tools to image calcium dynamics and neuronal activity in living tissues and designated cell types in vivo. From a variety of initial designs, two have emerged as promising prototypes for further optimization: FRET (Förster Resonance Energy Transfer)-based sensors and single-fluorophore sensors of the GCaMP family. Recent efforts in structural analysis, engineering and screening have broken important performance thresholds in the latest generation of both classes. While these improvements have made GECIs a powerful means to perform physiology in living animals, a number of other aspects of sensor function deserve attention. These aspects include indicator linearity, toxicity and slow response kinetics. Furthermore, high-performance sensors with optically more favorable emission at red or infrared wavelengths, as well as new stably or conditionally GECI-expressing animal lines, are on the wish list. When the remaining issues are solved, imaging of GECIs will finally have crossed the last milestone, evolving from an initial promise into a fully matured technology. PMID:25477779

  4. Photographic films as remote sensors for measuring albedos of terrestrial surfaces

    NASA Technical Reports Server (NTRS)

    Pease, S. R.; Pease, R. W.

    1972-01-01

    To test the feasibility of remotely measuring the albedos of terrestrial surfaces from photographic images, an inquiry was carried out at ground level using several representative common surface targets. Problems of making such measurements with a spectrally selective sensor, such as photographic film, have been compared to previous work utilizing silicon cells. Two photographic approaches have been developed: a multispectral method which utilizes two or three photographic images made through conventional multispectral filters, and a single-shot method which utilizes the broad spectral sensitivity of black-and-white infrared film. Sensitometry related to the methods substitutes a Log Albedo scale for the conventional Log Exposure scale for creating characteristic curves. Certain constraints caused by illumination geometry are discussed.

  5. A Microfluidic Cytometer for Complete Blood Count With a 3.2-Megapixel, 1.1- μm-Pitch Super-Resolution Image Sensor in 65-nm BSI CMOS.

    PubMed

    Liu, Xu; Huang, Xiwei; Jiang, Yu; Xu, Hang; Guo, Jing; Hou, Han Wei; Yan, Mei; Yu, Hao

    2017-08-01

    Based on a 3.2-Megapixel, 1.1-μm-pitch super-resolution (SR) CMOS image sensor in a 65-nm backside-illumination process, a lens-free microfluidic cytometer for complete blood count (CBC) is demonstrated in this paper. Backside illumination improves resolution and contrast at the device level and eliminates the need for surface treatment when integrated with microfluidic channels. A single-frame machine-learning-based SR processing is further realized at the system level for resolution correction with minimum hardware resources. The demonstrated microfluidic cytometer can detect the platelet cells (<2 μm) required in CBC, and hence is promising for point-of-care diagnostics.

  6. Novel instrumentation of multispectral imaging technology for detecting tissue abnormity

    NASA Astrophysics Data System (ADS)

    Yi, Dingrong; Kong, Linghua

    2012-10-01

    Multispectral imaging is becoming a powerful tool in a wide range of biological and clinical studies by adding spectral, spatial and temporal dimensions to visualize tissue abnormality and the underlying biological processes. A conventional spectral imaging system includes two physically separated major components: a band-pass selection device (such as a liquid crystal tunable filter or diffraction grating) and a scientific-grade monochromatic camera, and is expensive and bulky. Recently, a micro-arrayed narrow-band optical mosaic filter was invented and successfully fabricated to reduce the size and cost of multispectral imaging devices in order to meet the clinical requirements for medical diagnostic imaging applications. However, the challenging issue of how to integrate and place the micro filter mosaic chip on the target focal plane, i.e., the imaging sensor, of an off-the-shelf CMOS/CCD camera has not been reported. This paper presents the methods and results of integrating such a miniaturized filter with off-the-shelf CMOS imaging sensors to produce handheld real-time multispectral imaging devices for the application of early stage pressure ulcer (ESPU) detection. Unlike conventional multispectral imaging devices, which are bulky and expensive, the resulting handheld real-time multispectral ESPU detector can produce multiple images at different center wavelengths with a single shot, thereby eliminating the image registration procedure required by traditional multispectral imaging technologies.
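
    The filter-mosaic approach is analogous to Bayer demosaicking: assuming a 2x2 tiling of four narrow bands (the actual mosaic layout is not given in the abstract), single-shot band separation reduces to strided slicing of the raw frame:

```python
import numpy as np

# Assumed 2x2 filter mosaic: four narrow bands tiled across the sensor.
# Splitting the raw frame into per-band sub-images is the single-shot
# analogue of the registration step that filter-wheel systems require.
def split_mosaic(raw):
    """Return the four band planes of a 2x2-tiled mosaic frame."""
    return {
        "band0": raw[0::2, 0::2],
        "band1": raw[0::2, 1::2],
        "band2": raw[1::2, 0::2],
        "band3": raw[1::2, 1::2],
    }

raw = np.arange(16).reshape(4, 4)  # toy 4x4 raw frame
bands = split_mosaic(raw)
```

    Each band plane has half the spatial resolution of the sensor; interpolating the planes back to full resolution is the multispectral analogue of demosaicking.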

  7. Woofer-tweeter adaptive optics scanning laser ophthalmoscopic imaging based on Lagrange-multiplier damped least-squares algorithm.

    PubMed

    Zou, Weiyao; Qi, Xiaofeng; Burns, Stephen A

    2011-07-01

    We implemented a Lagrange-multiplier (LM)-based damped least-squares (DLS) control algorithm in a woofer-tweeter dual deformable-mirror (DM) adaptive optics scanning laser ophthalmoscope (AOSLO). The algorithm uses data from a single Shack-Hartmann wavefront sensor to simultaneously correct large-amplitude low-order aberrations with a woofer DM and small-amplitude higher-order aberrations with a tweeter DM. We measured the in vivo performance of high resolution retinal imaging with the dual DM AOSLO. We compared the simultaneous LM-based DLS dual DM controller with both a single DM controller and a successive dual DM controller. We evaluated performance using both wavefront (RMS) and image quality metrics including brightness and power spectrum. The simultaneous LM-based dual DM AO can consistently provide near diffraction-limited in vivo routine imaging of the human retina.
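
    The damped least-squares core of such a controller can be sketched for a single DM (the paper's Lagrange-multiplier coupling of woofer and tweeter is not reproduced here; the influence matrix and damping value are illustrative):

```python
import numpy as np

def dls_commands(A, s, damping=1e-2):
    """Damped least-squares actuator commands for influence matrix A and
    measured slope vector s: minimizes ||A c - s||^2 + damping * ||c||^2,
    i.e. c = (A^T A + damping * I)^-1 A^T s."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + damping * np.eye(n), A.T @ s)

# Toy closed-form check: slopes generated by known actuator strokes.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 10))   # Shack-Hartmann slopes per unit actuator stroke
c_true = rng.normal(size=10)
s = A @ c_true
c_hat = dls_commands(A, s, damping=1e-6)
```

    The damping term regularizes poorly sensed actuator modes at the cost of a small bias; real AO loops apply `c_hat` incrementally with a loop gain rather than in one step.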

  8. A two-step A/D conversion and column self-calibration technique for low noise CMOS image sensors.

    PubMed

    Bae, Jaeyoung; Kim, Daeyun; Ham, Seokheon; Chae, Youngcheol; Song, Minkyu

    2014-07-04

    In this paper, a 120 frames per second (fps) low noise CMOS Image Sensor (CIS) based on a Two-Step Single Slope ADC (TS SS ADC) and a column self-calibration technique is proposed. The TS SS ADC is suitable for high speed video systems because its conversion speed is much faster (by more than 10 times) than that of the Single Slope ADC (SS ADC). However, mismatch errors arise between the coarse block and the fine block due to the two-step operation of the TS SS ADC. In general, this makes it difficult to implement the TS SS ADC beyond a 10-bit resolution. To reduce these errors, a new 4-input comparator is discussed and a high resolution TS SS ADC is proposed. Further, a feedback circuit that enables column self-calibration to reduce the Fixed Pattern Noise (FPN) is also described. The proposed chip has been fabricated with 0.13 μm Samsung CIS technology and supports VGA resolution. The pixel is based on the 4-TR Active Pixel Sensor (APS). A high frame rate of 120 fps is achieved at VGA resolution. The measured FPN is 0.38 LSB, and the measured dynamic range is about 64.6 dB.
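
    An idealized model shows where the two-step speed-up comes from: a coarse ramp of 2^m steps plus a fine ramp of 2^f steps replaces a single ramp of 2^(m+f) steps (for 10 bits, 64 cycles instead of 1024, roughly the >10x factor quoted above). A sketch that ignores the comparator mismatch the paper addresses:

```python
# Idealized two-step single-slope conversion: a coarse ramp locates the
# input's segment (upper m bits), a fine ramp resolves the residue
# (lower f bits). Clock cycles ~ 2**m + 2**f instead of 2**(m + f).
def ts_ss_adc(vin, vref=1.0, m=5, f=5):
    coarse_lsb = vref / (1 << m)
    coarse = min(int(vin / coarse_lsb), (1 << m) - 1)
    residue = vin - coarse * coarse_lsb
    fine_lsb = coarse_lsb / (1 << f)
    fine = min(int(residue / fine_lsb), (1 << f) - 1)
    return (coarse << f) | fine

code = ts_ss_adc(0.5)  # mid-scale input on a 10-bit scale
```

    In silicon the two ramps come from separate DACs, and any offset between them produces exactly the coarse/fine mismatch errors the proposed 4-input comparator and self-calibration are designed to suppress.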

  9. Laser-induced damage threshold of camera sensors and micro-opto-electro-mechanical systems

    NASA Astrophysics Data System (ADS)

    Schwarz, Bastian; Ritt, Gunnar; Körber, Michael; Eberle, Bernd

    2016-10-01

    The continuous development of laser systems towards more compact and efficient devices constitutes an increasing threat to electro-optical imaging sensors such as complementary metal-oxide-semiconductor (CMOS) and charge-coupled device (CCD) sensors. These types of electronic sensors are used in day-to-day life but also in military and civil security applications. In camera systems dedicated to specific tasks, micro-opto-electro-mechanical systems (MOEMS) such as a digital micromirror device (DMD) are also part of the optical setup. In such systems, the DMD can be located at an intermediate focal plane of the optics, where it is also susceptible to laser damage. The goal of our work is to enhance the knowledge of damaging effects on such devices exposed to laser light. The experimental setup for the investigation of laser-induced damage is described in detail. Both pulsed and continuous-wave (CW) lasers are used as laser sources. The laser-induced damage threshold (LIDT) is determined by the single-shot method, increasing the pulse energy from pulse to pulse or, in the case of CW lasers, increasing the laser power. Furthermore, we investigate the morphology of laser-induced damage patterns and the dependence of the number of destroyed device elements on the laser pulse energy or laser power. In addition to the destruction of single pixels, we observe aftereffects such as persisting dead columns or rows of pixels in the sensor image.

  10. Design, Fabrication, and Packaging of Mach-Zehnder Interferometers for Biological Sensing Applications

    NASA Astrophysics Data System (ADS)

    Novak, Joseph

    Optical biological sensors are widely used in the fields of medical testing, water treatment and safety, gene identification, and many others due to advances in nanofabrication technology. This work focuses on the design of fiber-coupled Mach-Zehnder Interferometer (MZI) based biosensors fabricated on a silicon-on-insulator (SOI) wafer. Silicon waveguide sensors are designed with multimode and single-mode dimensions. Input coupling efficiency is investigated by the design of various taper structures. Integration processing and packaging are performed for fiber attachment and enhancement of input coupling efficiency. Optical guided-wave sensors rely on single-mode operation to extract an induced phase shift from the output signal. A silicon waveguide MZI sensor was designed and fabricated for both multimode and single-mode dimensions. Sensitivity of the sensors is analyzed for waveguide dimensions and materials. An s-bend structure is designed for the multimode waveguide to eliminate higher-order mode power as an alternative to single-mode confinement. Single-mode confinement is experimentally demonstrated through near-field imaging of the waveguide output. Y-junctions are designed for 3 dB power splitting to the MZI arms and for power recombination after sensing to utilize the interferometric function of the MZI. Ultra-short 10 μm taper structures with curved geometries are designed to improve insertion loss from fiber to chip without significantly increasing device area and show potential for applications requiring misalignment tolerance. A novel v-groove process is developed for self-aligned integration of fiber grooves for attachment to sensor chips. Thermal oxidation at temperatures from 1050-1150°C during groove processing creates an SiO2 layer on the waveguide end facet to protect it during integration etch processing without additional e-beam lithography.
Experimental results show improved insertion loss with the thermal oxidation process compared to dicing preparation and focused ion beam methods.
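
    The single-mode phase readout that MZI sensors rely on follows from the standard two-beam interference relation; a minimal sketch (the wavelength and arm length below are illustrative, not the device's actual dimensions):

```python
import math

def mzi_output(delta_n_eff, length_m, wavelength_m=1.55e-6):
    """Normalized MZI transmission when the sensing arm's effective index
    shifts by delta_n_eff over interaction length L:
    delta_phi = 2*pi * delta_n_eff * L / lambda,  I/I0 = (1 + cos(delta_phi)) / 2."""
    delta_phi = 2 * math.pi * delta_n_eff * length_m / wavelength_m
    return (1 + math.cos(delta_phi)) / 2

# An index change producing a pi phase shift extinguishes the output.
i_min = mzi_output(delta_n_eff=1.55e-6 / (2 * 1e-3), length_m=1e-3)
```

    This cosine response is why single-mode operation matters: higher-order modes accumulate different phases and wash out the fringe contrast, which is what the s-bend and confinement measures above are meant to prevent.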

  11. Image Accumulation in Pixel Detector Gated by Late External Trigger Signal and its Application in Imaging Activation Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakubek, J.; Cejnarova, A.; Platkevic, M.

    Single quantum counting pixel detectors of the Medipix type are starting to be used in various radiographic applications. Compared to standard devices for digital imaging (such as CCDs or CMOS sensors), they present significant advantages: direct conversion of radiation to electric signal, energy sensitivity, noiseless image integration, unlimited dynamic range, and absolute linearity. In this article we describe the use of the pixel device TimePix for image accumulation gated by a late trigger signal. A demonstration of the technique is given on imaging coincidence instrumental neutron activation analysis (Imaging CINAA). This method allows one to determine the concentration and distribution of a preselected element in an inspected sample.
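
    Gated accumulation of counting frames can be sketched as follows; this is an idealized model of the idea, not the TimePix readout itself:

```python
import numpy as np

# Idealized gated accumulation: per-frame hit maps are summed into the
# image only while the (possibly late) external trigger gate is open.
# Because counting frames are noiseless, skipped frames add nothing.
def accumulate_gated(frames, gate):
    """Sum counting frames where gate[i] is True."""
    acc = np.zeros_like(frames[0])
    for f, g in zip(frames, gate):
        if g:
            acc += f
    return acc

frames = [np.full((2, 2), i) for i in range(4)]   # toy hit maps
image = accumulate_gated(frames, gate=[False, True, True, False])
```

    The absence of read noise is what makes this selective integration free: an ordinary CCD would pay a noise penalty for every frame read, gated or not.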

  12. Single lens 3D-camera with extended depth-of-field

    NASA Astrophysics Data System (ADS)

    Perwaß, Christian; Wietzke, Lennart

    2012-03-01

    Placing a micro lens array in front of an image sensor transforms a normal camera into a single lens 3D camera, which also allows the user to change the focus and the point of view after a picture has been taken. While the concept of the plenoptic camera has been known since 1908, only recently have the increased computing power of low-cost hardware and advances in micro lens array production made the application of plenoptic cameras feasible. This text presents a detailed analysis of plenoptic cameras as well as introducing a new type of plenoptic camera with an extended depth of field and a maximal effective resolution of up to a quarter of the sensor resolution.

  13. Method and algorithm for efficient calibration of compressive hyperspectral imaging system based on a liquid crystal retarder

    NASA Astrophysics Data System (ADS)

    Shecter, Liat; Oiknine, Yaniv; August, Isaac; Stern, Adrian

    2017-09-01

    Recently we presented a Compressive Sensing Miniature Ultra-spectral Imaging System (CS-MUSI). This system consists of a single Liquid Crystal (LC) phase retarder as a spectral modulator and a grayscale sensor array to capture a multiplexed signal of the imaged scene. By designing the LC spectral modulator in compliance with Compressive Sensing (CS) guidelines and applying appropriate algorithms, we demonstrated reconstruction of spectral (hyper/ultra) datacubes from an order of magnitude fewer samples than taken by conventional sensors. The LC modulator is designed to have an effective width of a few tens of micrometers, and is therefore prone to imperfections and spatial nonuniformity. In this work, we study this nonuniformity and present a mathematical algorithm that allows the inference of the spectral transmission over the entire cell area from only a few calibration measurements.
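
    The abstract does not state which reconstruction algorithm is used, so as a generic illustration of CS recovery from multiplexed measurements, here is a small ISTA (iterative soft-thresholding) sketch on a synthetic sparse signal; the measurement matrix and parameters are all made up:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    """Iterative soft-thresholding for min ||A x - y||^2 / 2 + lam * ||x||_1,
    a generic sparse-recovery solver; step size 1/L with L = ||A||_2^2."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 120)) / np.sqrt(40)  # compressive measurement matrix
x_true = np.zeros(120)
x_true[[5, 50, 90]] = [1.0, -0.8, 0.6]        # sparse "spectrum"
y = A @ x_true                                 # 40 multiplexed measurements
x_hat = ista(A, y)
```

    The point of the sketch is the sampling ratio: 40 measurements recover a 120-entry signal because only a few entries are nonzero, mirroring the order-of-magnitude reduction claimed above.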

  14. Shack-Hartmann reflective micro profilometer

    NASA Astrophysics Data System (ADS)

    Gong, Hai; Soloviev, Oleg; Verhaegen, Michel; Vdovin, Gleb

    2018-01-01

    We present a quantitative phase imaging microscope based on a Shack-Hartmann sensor that directly reconstructs the optical path difference (OPD) in reflective mode. Compared with holographic or interferometric methods, the SH technique needs no reference beam in the setup, which simplifies the system. With a preregistered reference, the OPD image can be reconstructed from a single shot. The method also has a rather relaxed requirement on illumination coherence, so a cheap light source such as an LED is feasible in the setup. In our previous research, we successfully verified that a conventional transmissive microscope can be transformed into an optical path difference microscope by using a Shack-Hartmann wavefront sensor under incoherent illumination. The key condition is that the numerical aperture of the illumination should be smaller than the numerical aperture of the imaging lens. This approach is also applicable to the characterization of reflective and slightly scattering surfaces.
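
    Reconstructing an OPD map from Shack-Hartmann slope measurements is, in its simplest zonal form, a linear least-squares problem; a sketch on a toy tilted wavefront (the authors' actual processing pipeline is not given in the abstract):

```python
import numpy as np

# Zonal least-squares reconstruction: stack forward-difference operators for
# the x and y slopes measured by the lenslets and solve for the OPD map,
# which is defined only up to piston (here removed by subtracting the mean).
def reconstruct_opd(sx, sy):
    n, m = sx.shape                      # slope grids, same size as OPD grid
    N = n * m
    rows, rhs = [], []
    def idx(i, j):
        return i * m + j
    for i in range(n):
        for j in range(m - 1):           # d(OPD)/dx ~ sx
            r = np.zeros(N); r[idx(i, j + 1)] = 1; r[idx(i, j)] = -1
            rows.append(r); rhs.append(sx[i, j])
    for i in range(n - 1):
        for j in range(m):               # d(OPD)/dy ~ sy
            r = np.zeros(N); r[idx(i + 1, j)] = 1; r[idx(i, j)] = -1
            rows.append(r); rhs.append(sy[i, j])
    opd, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    opd = opd.reshape(n, m)
    return opd - opd.mean()              # remove the arbitrary piston

# Tilted wavefront: constant slopes integrate to a plane.
sx = np.full((8, 8), 0.1)
sy = np.full((8, 8), -0.05)
opd = reconstruct_opd(sx, sy)
```

    With the preregistered reference the abstract mentions, the measured slopes would be differences against the reference spot positions, but the integration step is the same.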

  15. BOREAS Level-4b AVHRR-LAC Ten-Day Composite Images: At-sensor Radiance

    NASA Technical Reports Server (NTRS)

    Cihlar, Josef; Chen, Jing; Nickerson, Jaime; Newcomer, Jeffrey A.; Huang, Feng-Ting; Hall, Forrest G. (Editor)

    2000-01-01

    The BOReal Ecosystem-Atmosphere Study (BOREAS) Staff Science Satellite Data Acquisition Program focused on providing the research teams with the remotely sensed satellite data products they needed to compare and spatially extend point results. Manitoba Remote Sensing Center (MRSC) and BOREAS Information System (BORIS) personnel acquired, processed, and archived data from the Advanced Very High Resolution Radiometer (AVHRR) instruments on the National Oceanic and Atmospheric Administration (NOAA)-11 and -14 satellites. The AVHRR data were acquired by CCRS and were provided to BORIS for use by BOREAS researchers. These AVHRR level-4b data are gridded, 10-day composites of at-sensor radiance values produced from sets of single-day images. Temporally, the 10-day compositing periods begin 11-Apr-1994 and end 10-Sep-1994. Spatially, the data cover the entire BOREAS region. The data are stored in binary image format files.
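
    A per-pixel compositing rule of the kind used for such 10-day products can be sketched as follows; the maximum-value selection criterion here is an assumption, since the abstract does not state the rule CCRS actually applied:

```python
import numpy as np

# Per-pixel compositing over a multi-day window: for each pixel, keep the
# radiance from the day that scores highest (e.g., an NDVI-like score),
# which tends to prefer the least cloud-contaminated observation.
def composite(radiance_stack, score_stack):
    """radiance_stack, score_stack: (days, rows, cols) arrays.
    Returns a (rows, cols) composite of the best day per pixel."""
    best_day = score_stack.argmax(axis=0)
    rows, cols = np.indices(best_day.shape)
    return radiance_stack[best_day, rows, cols]

rad = np.array([[[1.0, 2.0]], [[3.0, 0.5]]])    # 2 days, 1x2 pixels
score = np.array([[[0.2, 0.9]], [[0.8, 0.1]]])  # per-day selection scores
comp = composite(rad, score)
```

    The result is one value per pixel drawn from different acquisition days, which is why composites can show seam-like day-to-day discontinuities.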

  16. Intelligent Network-Centric Sensors Development Program

    DTIC Science & Technology

    2012-07-31

    Image sensor configuration: cone 360-degree LWIR PFx sensor. Image sensor configuration: image MWIR configuration; cone 360-degree LWIR PFx sensor. Video configuration: cone 360-degree SWIR. 2. Reasoning Process to Match Sensor Systems to Algorithms. The ontological ... effects of coherent imaging because of aberrations. Another reason is the specular nature of active imaging. Both contribute to the nonuniformity ...

  17. Towards Single Biomolecule Imaging via Optical Nanoscale Magnetic Resonance Imaging.

    PubMed

    Boretti, Alberto; Rosa, Lorenzo; Castelletto, Stefania

    2015-09-09

    Nuclear magnetic resonance (NMR) spectroscopy is a physical marvel in which electromagnetic radiation is charged and discharged by nuclei in a magnetic field. In conventional NMR, the specific nuclear resonance frequency depends on the strength of the magnetic field and the magnetic properties of the isotope of the atoms. NMR is routinely utilized in clinical tests by converting nuclear spectroscopy into magnetic resonance imaging (MRI), providing 3D, noninvasive biological imaging. While this technique has revolutionized biomedical science, measuring the magnetic resonance spectrum of single biomolecules is still an intangible aspiration, because MRI resolution is limited to tens of micrometers. MRI and NMR have, however, recently greatly advanced, with many breakthroughs in nano-NMR and nano-MRI spurred by spin sensors based on atomic impurities in diamond. These techniques rely on magnetic dipole-dipole interactions rather than inductive detection. Here, novel nano-MRI methods based on nitrogen vacancy centers in diamond are highlighted that provide a solution to the imaging of single biomolecules with nanoscale resolution in vivo and in ambient conditions. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Neuronal network imaging in acute slices using Ca2+ sensitive bioluminescent reporter.

    PubMed

    Tricoire, Ludovic; Lambolez, Bertrand

    2014-01-01

    Genetically encoded indicators are valuable tools to study intracellular signaling cascades in real time using fluorescent or bioluminescent imaging techniques. Imaging of Ca(2+) indicators is widely used to record transient intracellular Ca(2+) increases associated with bioelectrical activity. The natural bioluminescent Ca(2+) sensor aequorin was historically the first Ca(2+) indicator used to address biological questions. Aequorin imaging offers several advantages over fluorescent reporters: it is virtually devoid of background signal, it does not require light excitation, and it interferes little with intracellular processes. Genetically encoded sensors such as aequorin are commonly used in dissociated cultured cells; however, it becomes more challenging to express them in differentiated intact specimens such as brain tissue. Here we describe a method to express a GFP-aequorin (GA) fusion protein in pyramidal cells of neocortical acute slices using a recombinant Sindbis virus. This technique allows GA to be expressed in several hundred neurons on the same slice and bioluminescence recording of Ca(2+) transients to be performed in single neurons or multiple neurons simultaneously.

  19. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    NASA Astrophysics Data System (ADS)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. The described method provides much better data coverage and accuracy in areas with sharp features or details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

  20. Multispectral data processing from unmanned aerial vehicles: application in precision agriculture using different sensors and platforms

    NASA Astrophysics Data System (ADS)

    Piermattei, Livia; Bozzi, Carlo Alberto; Mancini, Adriano; Tassetti, Anna Nora; Karel, Wilfried; Pfeifer, Norbert

    2017-04-01

    Unmanned aerial vehicles (UAVs) in combination with consumer grade cameras have become standard tools for photogrammetric applications and surveying. The recent generation of multispectral, cost-efficient and lightweight cameras has fostered a breakthrough in the practical application of UAVs for precision agriculture. For this application, multispectral cameras typically use Green, Red, Red-Edge (RE) and Near Infrared (NIR) wavebands to capture both visible and invisible images of crops and vegetation. These bands are very effective for deriving characteristics like soil productivity, plant health and overall growth. However, the quality of results is affected by the sensor architecture, the spatial and spectral resolutions, the pattern of image collection, and the processing of the multispectral images. In particular, collecting data with multiple sensors requires an accurate spatial co-registration of the various UAV image datasets. Multispectral processed data in precision agriculture are mainly presented as orthorectified mosaics used to export information maps and vegetation indices. This work aims to investigate the acquisition parameters and processing approaches of this new type of image data in order to generate orthoimages using different sensors and UAV platforms. Within our experimental area we placed a grid of artificial targets, whose position was determined with differential global positioning system (dGPS) measurements. Targets were used as ground control points to georeference the images and as checkpoints to verify the accuracy of the georeferenced mosaics. The primary aim is to present a method for the spatial co-registration of visible, Red-Edge, and NIR image sets. To demonstrate the applicability and accuracy of our methodology, multi-sensor datasets were collected over the same area and approximately at the same time using the fixed-wing UAV senseFly "eBee". 
The images were acquired with the camera Canon S110 RGB, the multispectral cameras Canon S110 NIR and S110 RE and with the multi-camera system Parrot Sequoia, which is composed of single-band cameras (Green, Red, Red Edge, NIR and RGB). Imagery from each sensor was georeferenced and mosaicked with the commercial software Agisoft PhotoScan Pro and different approaches for image orientation were compared. To assess the overall spatial accuracy of each dataset the root mean square error was computed between check point coordinates measured with dGPS and coordinates retrieved from georeferenced image mosaics. Additionally, image datasets from different UAV platforms (i.e. DJI Phantom 4Pro, DJI Phantom 3 professional, and DJI Inspire 1 Pro) were acquired over the same area and the spatial accuracy of the orthoimages was evaluated.
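
    The checkpoint-based accuracy assessment described above reduces to an RMSE computation between dGPS coordinates and the same points read from the georeferenced mosaic; a minimal sketch with illustrative numbers:

```python
import numpy as np

# RMSE between checkpoint coordinates measured with dGPS and coordinates
# retrieved from the georeferenced image mosaic, per axis and overall.
def rmse(measured, reference):
    """measured, reference: (N, 2) or (N, 3) coordinate arrays, same units."""
    d = np.asarray(measured) - np.asarray(reference)
    per_axis = np.sqrt((d ** 2).mean(axis=0))
    overall = np.sqrt((d ** 2).sum(axis=1).mean())
    return per_axis, overall

# Toy data: three checkpoints with a constant 3 cm / -4 cm mosaic offset.
gps = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0]])
mosaic = gps + np.array([0.03, -0.04])
per_axis, overall = rmse(mosaic, gps)
```

    A constant offset like this one points to a georeferencing bias, whereas scattered residuals indicate random co-registration error; reporting both the per-axis and overall RMSE distinguishes the two cases.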

  1. A 128 x 128 CMOS Active Pixel Image Sensor for Highly Integrated Imaging Systems

    NASA Technical Reports Server (NTRS)

    Mendis, Sunetra K.; Kemeny, Sabrina E.; Fossum, Eric R.

    1993-01-01

    A new CMOS-based image sensor that is intrinsically compatible with on-chip CMOS circuitry is reported. The new CMOS active pixel image sensor achieves low noise, high sensitivity, X-Y addressability, and has simple timing requirements. The image sensor was fabricated using a 2 micrometer p-well CMOS process, and consists of a 128 x 128 array of 40 micrometer x 40 micrometer pixels. The CMOS image sensor technology enables highly integrated smart image sensors, and makes the design, incorporation and fabrication of such sensors widely accessible to the integrated circuit community.

  2. Nanophotonic Image Sensors

    PubMed Central

    Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R. S.

    2016-01-01

    The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulating techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements on nanophotonic image sensors are presented and analyzed, including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors and nanostructure-based multispectral image sensors. This novel combination of cutting edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next generation image sensors beyond Moore's Law expectations. PMID:27239941

  3. Image quality evaluation of eight complementary metal-oxide semiconductor intraoral digital X-ray sensors.

    PubMed

    Teich, Sorin; Al-Rawi, Wisam; Heima, Masahiro; Faddoul, Fady F; Goldzweig, Gil; Gutmacher, Zvi; Aizenbud, Dror

    2016-10-01

    To evaluate the image quality generated by eight commercially available intraoral sensors. Eighteen clinicians ranked the quality of a bitewing acquired from one subject using eight different intraoral sensors. Analytical methods used to evaluate clinical image quality included the Visual Grading Characteristics method, which helps to quantify subjective opinions to make them suitable for analysis. The Dexis sensor was ranked significantly better than Sirona and Carestream-Kodak sensors; and the image captured using the Carestream-Kodak sensor was ranked significantly worse than those captured using Dexis, Schick and Cyber Medical Imaging sensors. The Image Works sensor image was rated the lowest by all clinicians. Other comparisons resulted in non-significant results. None of the sensors was considered to generate images of significantly better quality than the other sensors tested. Further research should be directed towards determining the clinical significance of the differences in image quality reported in this study. © 2016 FDI World Dental Federation.

  4. Single-sided magnetic resonance profiling in biological and materials science.

    PubMed

    Danieli, Ernesto; Blümich, Bernhard

    2013-04-01

    Single-sided NMR was inspired by the oil industry that strived to improve the performance of well-logging tools to measure the properties of fluids confined downhole. This unconventional way of implementing NMR, in which stray magnetic and radio frequency fields are used to recover information of arbitrarily large objects placed outside the magnet, motivated the development of handheld NMR sensors. These devices have moved the technique to different scientific disciplines. The current work gives a review of the most relevant magnets and methodologies developed to generate NMR information from spatially localized regions of samples placed in close proximity to the sensors. When carried out systematically, such measurements lead to 'single-sided depth profiles' or one-dimensional images. This paper presents recent and most relevant applications as well as future perspectives of this growing branch of MRI. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. CMOS sensors for atmospheric imaging

    NASA Astrophysics Data System (ADS)

    Pratlong, Jérôme; Burt, David; Jerram, Paul; Mayer, Frédéric; Walker, Andrew; Simpson, Robert; Johnson, Steven; Hubbard, Wendy

    2017-09-01

    Recent European atmospheric imaging missions have seen a move towards the use of CMOS sensors for the visible and NIR parts of the spectrum. These applications have particular challenges that are completely different to those that have driven the development of commercial sensors for applications such as cell-phone or SLR cameras. This paper will cover the design and performance of general-purpose image sensors that are to be used in the MTG (Meteosat Third Generation) and MetImage satellites and the technology challenges that they have presented. We will discuss how CMOS imagers have been designed with 4T pixel sizes of up to 250 μm square, achieving good charge transfer efficiency, or low lag, with signal levels up to 2M electrons and with high line rates. In both devices a low-noise analogue read-out chain is used with correlated double sampling to suppress the readout noise and give a maximum dynamic range that is significantly larger than in standard commercial devices. Radiation hardness is a particular challenge for CMOS detectors, and both of these sensors have been designed to be fully radiation hard with high latch-up and single-event-upset tolerances, which has now been proven in silicon on MTG. We will also cover the impact of ionising radiation on these devices. Because with such large pixels the photodiodes have a large open area, front illumination technology is sufficient to meet the detection efficiency requirements, but with thicker than standard epitaxial silicon to give improved IR response (note that this makes latch-up protection even more important). However, with narrow-band illumination, reflections from the front and back of the dielectric stack on top of the sensor produce Fabry-Perot étalon effects, which have been minimised with process modifications. We will also cover the addition of precision narrow-band filters inside the MTG package to provide a complete imaging subsystem. Control of reflected light is also critical in obtaining the required optical performance, and this has driven the development of a black coating layer that can be applied between the active silicon regions.
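    The correlated double sampling mentioned above can be illustrated with a toy simulation: the reset (kTC) noise is common to the reset and signal samples of a pixel read, so subtracting the two cancels it, leaving only the uncorrelated amplifier noise. The noise figures below are illustrative, not those of the MTG or MetImage sensors:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000                  # number of simulated pixel reads
    signal = 1000.0              # true signal level (electrons)

    reset_noise = rng.normal(0.0, 50.0, n)    # kTC reset noise, common to both samples
    amp_noise_1 = rng.normal(0.0, 5.0, n)     # uncorrelated amplifier noise, sample 1
    amp_noise_2 = rng.normal(0.0, 5.0, n)     # uncorrelated amplifier noise, sample 2

    sample_reset = reset_noise + amp_noise_1             # sampled just after reset
    sample_signal = signal + reset_noise + amp_noise_2   # sampled after integration

    cds = sample_signal - sample_reset   # correlated reset noise cancels

    print(round(sample_signal.std(), 1), round(cds.std(), 1))  # ~50 vs ~7 electrons
    ```

    The residual noise of the difference is the quadrature sum of the two uncorrelated amplifier terms, which is why CDS suppresses read noise without touching the signal.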

  6. Health monitoring with optical fiber sensors: from human body to civil structures

    NASA Astrophysics Data System (ADS)

    Pinet, Éric; Hamel, Caroline; Glišić, Branko; Inaudi, Daniele; Miron, Nicolae

    2007-04-01

    Although structural health monitoring and patient monitoring may benefit from the unique advantages of optical fiber sensors (OFS), such as electromagnetic interference (EMI) immunity, small sensor size and long-term reliability, the two applications are facing different realities. This paper presents, with practical examples, several OFS technologies ranging from single-point to distributed sensors used to address the health monitoring challenges in the medical and civil engineering fields. OFS for medical applications are single-point, measuring mainly vital parameters such as pressure or temperature. In intra-aortic balloon pumping (IABP) therapy, a miniature OFS can monitor in situ aortic blood pressure to trigger catheter balloon inflation/deflation in counter-pulsation with heartbeats. Similar sensors reliably monitor the intracranial pressure (ICP) of critical care patients, even during surgical interventions or examinations under magnetic resonance imaging (MRI). Temperature OFS are also the ideal monitoring solution for such harsh environments. Most OFS for structural health monitoring are distributed or have long gage lengths, although quasi-distributed short-gage sensors are also used. These sensors measure mainly strain/load, temperature, pressure and elongation. SOFO-type deformation sensors were used to monitor and secure the Bolshoi Moskvoretskiy Bridge in Moscow. The safety of the Plavinu dam, built on clay and sand in Latvia, was increased by monitoring bitumen joint displacement and temperature changes using SMARTape and a Temperature Sensitive Cable read with a DiTeSt unit. A similar solution was used for monitoring a pipeline built in an unstable area near Rimini in Italy.

  7. Quantum measurement of a rapidly rotating spin qubit in diamond.

    PubMed

    Wood, Alexander A; Lilette, Emmanuel; Fein, Yaakov Y; Tomek, Nikolas; McGuinness, Liam P; Hollenberg, Lloyd C L; Scholten, Robert E; Martin, Andy M

    2018-05-01

    A controlled qubit in a rotating frame opens new opportunities to probe fundamental quantum physics, such as geometric phases in physically rotating frames, and can potentially enhance detection of magnetic fields. Realizing a single qubit that can be measured and controlled during physical rotation is experimentally challenging. We demonstrate quantum control of a single nitrogen-vacancy (NV) center within a diamond rotated at 200,000 rpm, a rotational period comparable to the NV spin coherence time T2. We stroboscopically image individual NV centers that execute rapid circular motion in addition to rotation and demonstrate preparation, control, and readout of the qubit quantum state with lasers and microwaves. Using spin-echo interferometry of the rotating qubit, we are able to detect modulation of the NV Zeeman shift arising from the rotating NV axis and an external DC magnetic field. Our work establishes single NV qubits in diamond as quantum sensors in the physically rotating frame and paves the way for the realization of single-qubit diamond-based rotation sensors.

  8. Quantum measurement of a rapidly rotating spin qubit in diamond

    PubMed Central

    Fein, Yaakov Y.; Hollenberg, Lloyd C. L.; Scholten, Robert E.

    2018-01-01

    A controlled qubit in a rotating frame opens new opportunities to probe fundamental quantum physics, such as geometric phases in physically rotating frames, and can potentially enhance detection of magnetic fields. Realizing a single qubit that can be measured and controlled during physical rotation is experimentally challenging. We demonstrate quantum control of a single nitrogen-vacancy (NV) center within a diamond rotated at 200,000 rpm, a rotational period comparable to the NV spin coherence time T2. We stroboscopically image individual NV centers that execute rapid circular motion in addition to rotation and demonstrate preparation, control, and readout of the qubit quantum state with lasers and microwaves. Using spin-echo interferometry of the rotating qubit, we are able to detect modulation of the NV Zeeman shift arising from the rotating NV axis and an external DC magnetic field. Our work establishes single NV qubits in diamond as quantum sensors in the physically rotating frame and paves the way for the realization of single-qubit diamond-based rotation sensors. PMID:29736417

  9. Object acquisition and tracking for space-based surveillance

    NASA Astrophysics Data System (ADS)

    1991-11-01

    This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase 1) and N00014-89-C-0015 (Phase 2). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time-dependent, object-dependent, and data-dependent processing stages. In that approach, individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal-to-noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames and, accordingly, requires a smaller signal-to-noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track-before-detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
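    The core advantage of track before detect, integrating target energy over many frames so that the per-frame SNR can be small, can be sketched numerically. The example below uses a static target and white noise purely for illustration; a real tracker would sum along candidate motion paths:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_frames, amp, sigma = 100, 0.5, 1.0   # per-frame SNR of 0.5: invisible in one frame

    # A dim, static target at pixel (8, 8) of a 16x16 scene, plus white sensor noise
    frames = rng.normal(0.0, sigma, (n_frames, 16, 16))
    frames[:, 8, 8] += amp

    stacked = frames.sum(axis=0)                     # integrate along the (trivial) track
    expected_snr = np.sqrt(n_frames) * amp / sigma   # signal grows as N, noise as sqrt(N)

    print(expected_snr)   # -> 5.0: comfortably detectable after integration
    ```

    The sqrt(N) gain is exactly what allows a smaller per-frame signal-to-noise ratio, and hence longer detection range or smaller optics for a single sensor.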

  10. Object acquisition and tracking for space-based surveillance. Final report, Dec 88-May 90

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1991-11-27

    This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase I) and N00014-89-C-0015 (Phase II). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time dependent, object-dependent, and data-dependent processing stages. In that approach individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal to noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames, and, accordingly, requires a smaller signal to noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track before detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.

  11. Wavelength interrogation of fiber Bragg grating sensors using tapered hollow Bragg waveguides.

    PubMed

    Potts, C; Allen, T W; Azar, A; Melnyk, A; Dennison, C R; DeCorby, R G

    2014-10-15

    We describe an integrated system for wavelength interrogation, which uses tapered hollow Bragg waveguides coupled to an image sensor. Spectral shifts are extracted from the wavelength dependence of the light radiated at mode cutoff. Wavelength shifts as small as ~10 pm were resolved by employing a simple peak detection algorithm. Si/SiO₂-based cladding mirrors enable a potential operational range of several hundred nanometers in the 1550 nm wavelength region for a taper length of ~1 mm. Interrogation of a strain-tuned grating was accomplished using a broadband amplified spontaneous emission (ASE) source, and potential for single-chip interrogation of multiplexed sensor arrays is demonstrated.
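    The abstract does not specify the peak detection algorithm; a common choice for extracting sub-pixel shifts from a sampled intensity profile is parabolic interpolation around the brightest pixel, sketched here on a synthetic Gaussian-shaped peak:

    ```python
    import numpy as np

    def subpixel_peak(y):
        """Sub-pixel peak position via parabolic interpolation around the maximum."""
        i = int(np.argmax(y))
        a, b, c = y[i - 1], y[i], y[i + 1]
        return i + 0.5 * (a - c) / (a - 2.0 * b + c)

    # Synthetic radiated-intensity profile along the taper: a Gaussian-shaped peak
    x = np.arange(100, dtype=float)
    true_center = 42.3
    y = np.exp(-0.5 * ((x - true_center) / 5.0) ** 2)

    print(round(subpixel_peak(y), 2))  # -> 42.3, well below one-pixel resolution
    ```

    Tracking this sub-pixel position as the grating is strained converts a shift in cutoff location along the taper into a wavelength shift.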

  12. Ultrasonic Fingerprint Sensor With Transmit Beamforming Based on a PMUT Array Bonded to CMOS Circuitry.

    PubMed

    Jiang, Xiaoyue; Tang, Hao-Yen; Lu, Yipeng; Ng, Eldwin J; Tsai, Julius M; Boser, Bernhard E; Horsley, David A

    2017-09-01

    In this paper, we present a single-chip 65 × 42 element ultrasonic pulse-echo fingerprint sensor with transmit (TX) beamforming based on piezoelectric micromachined ultrasonic transducers directly bonded to a CMOS readout application-specific integrated circuit (ASIC). The readout ASIC was realized in a standard 180-nm CMOS process with a 24-V high-voltage transistor option. Pulse-echo measurements are performed column-by-column in sequence using either one column or five columns to TX the ultrasonic pulse at 20 MHz. TX beamforming is used to focus the ultrasonic beam at the imaging plane where the finger is located, increasing the ultrasonic pressure and narrowing the 3-dB beamwidth to [Formula: see text], a factor of 6.4 narrower than nonbeamformed measurements. The surface of the sensor is coated with a poly-dimethylsiloxane (PDMS) layer to provide good acoustic impedance matching to skin. Scanning laser Doppler vibrometry of the PDMS surface was used to map the ultrasonic pressure field at the imaging surface, demonstrating the expected increase in pressure, and reduction in beamwidth. Imaging experiments were conducted using both PDMS phantoms and real fingerprints. The average image contrast is increased by a factor of 1.5 when beamforming is used.
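    Transmit beamforming of this kind amounts to firing each column with a delay chosen so that all wavefronts arrive at the focal point simultaneously. A geometric sketch with made-up numbers (the paper's actual pitch, focal depth, and medium parameters are not reproduced here):

    ```python
    import numpy as np

    c = 1500.0        # assumed speed of sound in the coupling medium, m/s
    pitch = 43e-6     # assumed element pitch, m
    focus = 500e-6    # assumed focal depth below the array, m

    x = (np.arange(5) - 2) * pitch        # five TX columns centred on the axis
    path = np.sqrt(focus**2 + x**2)       # element-to-focus path lengths
    delays = (path.max() - path) / c      # outer elements fire first

    print(delays)   # edge elements fire at t = 0; the centre element fires last
    ```

    Equalizing the time of flight this way is what concentrates pressure at the focal plane and narrows the beamwidth relative to firing all columns simultaneously.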

  13. Single image super-resolution via regularized extreme learning regression for imagery from microgrid polarimeters

    NASA Astrophysics Data System (ADS)

    Sargent, Garrett C.; Ratliff, Bradley M.; Asari, Vijayan K.

    2017-08-01

    The advantage of division-of-focal-plane imaging polarimeters is their ability to obtain temporally synchronized intensity measurements across a scene; however, they sacrifice spatial resolution in doing so due to their spatially modulated arrangement of pixel-to-pixel polarizers, which often results in aliased imagery. Here, we propose a super-resolution method based upon two previously trained extreme learning machines (ELM) that attempt to recover missing high-frequency and low-frequency content beyond the spatial resolution of the sensor. This method yields a computationally fast and simple way of recovering lost high- and low-frequency content from demosaicing raw microgrid polarimetric imagery. The proposed method outperforms other state-of-the-art single-image super-resolution algorithms in terms of structural similarity and peak signal-to-noise ratio.
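    For context, a microgrid polarimeter interleaves analyzers at the pixel level; a 0/45/90/135-degree 2x2 pattern is a common layout and is assumed here. Each channel is therefore sampled at only a quarter of the focal-plane resolution, which is exactly the resolution loss the method above targets. A minimal demosaic-and-Stokes sketch:

    ```python
    import numpy as np

    def stokes_from_microgrid(raw):
        """Split a 2x2-patterned microgrid frame into its four analyzer channels
        and form the linear Stokes parameters (assumed 0/45/90/135 layout)."""
        i0, i45 = raw[0::2, 0::2], raw[0::2, 1::2]
        i135, i90 = raw[1::2, 0::2], raw[1::2, 1::2]
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
        s1 = i0 - i90                        # horizontal vs. vertical component
        s2 = i45 - i135                      # diagonal component
        return s0, s1, s2

    # Toy frame: light reaching only the 0-degree pixels of a 4x4 sensor patch
    raw = np.zeros((4, 4))
    raw[0::2, 0::2] = 2.0
    s0, s1, s2 = stokes_from_microgrid(raw)
    print(s0[0, 0], s1[0, 0], s2[0, 0])  # -> 1.0 2.0 0.0
    ```

    Because each Stokes channel comes from a strided quarter of the array, high-frequency scene content aliases into the channels, motivating super-resolution as a post-processing step.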

  14. Pixel pitch and particle energy influence on the dark current distribution of neutron irradiated CMOS image sensors.

    PubMed

    Belloir, Jean-Marc; Goiffon, Vincent; Virmontois, Cédric; Raine, Mélanie; Paillet, Philippe; Duhamel, Olivier; Gaillardin, Marc; Molina, Romain; Magnan, Pierre; Gilard, Olivier

    2016-02-22

    The dark current produced by neutron irradiation in CMOS Image Sensors (CIS) is investigated. Several CIS with different photodiode types and pixel pitches are irradiated with various neutron energies and fluences to study the influence of each of these optical detector and irradiation parameters on the dark current distribution. An empirical model is tested on the experimental data and validated on all the irradiated optical imagers. This model is able to describe all the presented dark current distributions with no parameter variation for neutron energies of 14 MeV or higher, regardless of the optical detector and irradiation characteristics. For energies below 1 MeV, it is shown that a single parameter has to be adjusted because of the lower mean damage energy per nuclear interaction. This model and these conclusions can be transposed to any silicon-based solid-state optical imagers such as CIS or Charge-Coupled Devices (CCD). This work can also be used when designing an optical imager instrument, to anticipate the dark current increase or to choose a mitigation technique.

  15. Design and testing of a dual-band enhanced vision system

    NASA Astrophysics Data System (ADS)

    Way, Scott P.; Kerr, Richard; Imamura, Joseph J.; Arnoldy, Dan; Zeylmaker, Dick; Zuro, Greg

    2003-09-01

    An effective enhanced vision system must operate over a broad spectral range in order to offer a pilot an optimized scene that includes runway background as well as airport lighting and aircraft operations. The large dynamic range of intensities of these images is best handled with separate imaging sensors. The EVS 2000 is a patented dual-band Infrared Enhanced Vision System (EVS) utilizing image fusion concepts. It has the ability to provide a single image from uncooled infrared imagers combined with SWIR, NIR or LLLTV sensors. The system is designed to provide commercial and corporate airline pilots with improved situational awareness at night and in degraded weather conditions but can also be used in a variety of applications where the fusion of dual band or multiband imagery is required. A prototype of this system was recently fabricated and flown on the Boeing Advanced Technology Demonstrator 737-900 aircraft. This paper will discuss the current EVS 2000 concept, show results taken from the Boeing Advanced Technology Demonstrator program, and discuss future plans for the fusion system.

  16. Luminescence materials for pH and oxygen sensing in microbial cells - structures, optical properties, and biological applications.

    PubMed

    Zou, Xianshao; Pan, Tingting; Chen, Lei; Tian, Yanqing; Zhang, Weiwen

    2017-09-01

    Luminescence sensors, including fluorescence and phosphorescence sensors, have been demonstrated to be important for studying cell metabolism and for diagnosing diseases and cancer. Various design principles have been employed for the development of sensors in different formats, such as organic molecules, polymers, polymeric hydrogels, and nanoparticles. The integration of sensing with fluorescence imaging provides valuable tools for biomedical research and applications not only at the bulk-cell level but also at the single-cell level. In this article, we critically review recent progress on pH, oxygen, and dual pH and oxygen sensors, specifically for their application in microbial cells. In addition, we focus not only on sensor materials with different chemical structures, but also on the design and applications of sensors for better understanding the cellular metabolism of microbial cells. Finally, we also provide an outlook on future materials design and key challenges in reaching broad applications in microbial cells.

  17. Multi-Sensor Characterization of the Boreal Forest: Initial Findings

    NASA Technical Reports Server (NTRS)

    Reith, Ernest; Roberts, Dar A.; Prentiss, Dylan

    2001-01-01

    Results are presented from an initial a priori knowledge approach to using complementary multi-sensor, multi-temporal imagery in characterizing vegetated landscapes over a site in the Boreal Ecosystem-Atmosphere Study (BOREAS). Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Airborne Synthetic Aperture Radar (AIRSAR) data were segmented using multiple endmember spectral mixture analysis and binary decision tree approaches. Individual date/sensor land cover maps had overall accuracies between 55.0% and 69.8%. The best eight land cover layers from all dates and sensors correctly characterized 79.3% of the cover types. An overlay approach was used to create a final land cover map. An overall accuracy of 71.3% was achieved in this multi-sensor approach, a 1.5% improvement over our most accurate single-scene technique, but 8% less than the original input. Black spruce was evaluated to be particularly undermapped in the final map, possibly because it was also contained within the jack pine and muskeg land cover classes.

  18. Developing handheld real time multispectral imager to clinically detect erythema in darkly pigmented skin

    NASA Astrophysics Data System (ADS)

    Kong, Linghua; Sprigle, Stephen; Yi, Dingrong; Wang, Fengtao; Wang, Chao; Liu, Fuhan

    2010-02-01

    Pressure ulcers have been identified as a public health concern by the US government through the Healthy People 2010 initiative and the National Quality Forum (NQF). Currently, no tools are available to assist clinicians in erythema detection, i.e. early-stage pressure ulcer detection. The results from our previous research (supported by an NIH grant) indicate that erythema in different skin tones can be identified using a set of wavelengths: 540, 577, 650 and 970 nm. This paper reports our recent work on developing a handheld, point-of-care, clinically viable, affordable, real-time multispectral imager to detect erythema in persons with darkly pigmented skin. Instead of using traditional filters (e.g. filter wheels, generalized Lyot filters, electrically tunable filters) or light-dispersing methods (e.g. acousto-optic crystals), a novel custom filter mosaic has been successfully designed and fabricated using lithography and vacuum multilayer-film technologies. The filter has been integrated with CMOS and CCD sensors. The filter incorporates four or more different wavelengths within the visible to near-infrared range, each having a narrow bandwidth of 30 nm or less. Each single-wavelength area is 20.8 μm × 20.8 μm. The filter can be deposited on regular optical glass as a substrate or directly on a CMOS or CCD imaging sensor. This design permits a multispectral image to be acquired in a single exposure, thereby providing great convenience in multispectral image acquisition.

  19. Imaging Sensor Development for Scattering Atmospheres.

    DTIC Science & Technology

    1983-03-01

    subtracted output from a CCD imaging detector for a single frame can be written as equation (2-22), in terms of shot-noise, thermal-noise, and dark-current shot-noise contributions ... In addition, the spectral responses of current devices are limited to the visible region and their sensitivities are not very high. Solid state detectors are generally much more sensitive than spatial light modulators, and some (e.g., HgCdTe detectors) can respond up to the 10 μm region. Several

  20. Nanophotonic Image Sensors.

    PubMed

    Chen, Qin; Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R S

    2016-09-01

    The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulating techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements on nanophotonic image sensors are presented and analyzed, including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors and nanostructure-based multispectral image sensors. This novel combination of cutting edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next generation image sensors beyond Moore's Law expectations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. A diamond-based scanning probe spin sensor operating at low temperature in ultra-high vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaefer-Nolte, E.; Wrachtrup, J.; 3rd Institute of Physics and Research Center SCoPE, University Stuttgart, 70569 Stuttgart

    2014-01-15

    We present the design and performance of an ultra-high vacuum (UHV) low temperature scanning probe microscope employing the nitrogen-vacancy color center in diamond as an ultrasensitive magnetic field sensor. Using this center as an atomic-size scanning probe has enabled imaging of nanoscale magnetic fields and single spins under ambient conditions. In this article we describe an experimental setup to operate this sensor in a cryogenic UHV environment. This will extend the applicability to a variety of molecular systems due to the enhanced target spin lifetimes at low temperature and the controlled sample preparation under UHV conditions. The instrument combines a tuning-fork based atomic force microscope (AFM) with a high numeric aperture confocal microscope and the facilities for application of radio-frequency (RF) fields for spin manipulation. We verify a sample temperature of <50 K even for strong laser and RF excitation and demonstrate magnetic resonance imaging with a magnetic AFM tip.

  2. Design and Application of Hybrid Magnetic Field-Eddy Current Probe

    NASA Technical Reports Server (NTRS)

    Wincheski, Buzz; Wallace, Terryl; Newman, Andy; Leser, Paul; Simpson, John

    2013-01-01

    The incorporation of magnetic field sensors into eddy current probes can result in novel probe designs with unique performance characteristics. One such example is a recently developed electromagnetic probe consisting of a two-channel magnetoresistive sensor with an embedded single-strand eddy current inducer. Magnetic flux leakage maps of ferrous materials are generated from the DC sensor response while high-resolution eddy current imaging is simultaneously performed at frequencies up to 5 megahertz. In this work the design and optimization of this probe will be presented, along with an application to the analysis of sensory materials with embedded ferromagnetic shape-memory alloy (FSMA) particles. The sensory material is designed to produce a paramagnetic-to-ferromagnetic transition in the FSMA particles under strain. Mapping of the stray magnetic field and eddy current response of the sample with the hybrid probe can thereby image locations in the structure which have experienced an overstrain condition. Numerical modeling of the probe response is performed, showing good agreement with experimental results.

  3. Lightweight UAV with on-board photogrammetry and single-frequency GPS positioning for metrology applications

    NASA Astrophysics Data System (ADS)

    Daakir, M.; Pierrot-Deseilligny, M.; Bosser, P.; Pichard, F.; Thom, C.; Rabot, Y.; Martin, O.

    2017-05-01

    This article presents a coupled system consisting of a single-frequency GPS receiver and a light photogrammetric-quality camera embedded in an Unmanned Aerial Vehicle (UAV). The aim is to produce high-quality data that can be used in metrology applications. The issue of Integrated Sensor Orientation (ISO) of camera poses using only GPS measurements is presented and discussed. The accuracy reached by our system, based on sensors developed at the French Mapping Agency (IGN) Opto-Electronics, Instrumentation and Metrology Laboratory (LOEMI), is qualified. These sensors are specially designed for close-range aerial image acquisition with a UAV. Lever-arm calibration and time synchronization are explained and performed to reach maximum accuracy. All processing steps are detailed, from data acquisition to quality control of the final products. We show that an accuracy of a few centimeters can be reached with this system, which uses a low-cost UAV and GPS module coupled with the IGN-LOEMI home-made camera.
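    Once calibrated, applying a lever-arm correction is a single rigid-body transformation: the camera projection centre is the GPS antenna phase centre plus the body-frame offset rotated into the mapping frame. A toy example with made-up values (not the IGN-LOEMI calibration):

    ```python
    import numpy as np

    def camera_centre(antenna_pos, R_body_to_world, lever_arm_body):
        """Camera projection centre from the GPS antenna position and the
        calibrated antenna-to-camera lever arm (all values hypothetical)."""
        return antenna_pos + R_body_to_world @ lever_arm_body

    R = np.eye(3)                           # level flight, body axes aligned (toy case)
    antenna = np.array([0.0, 0.0, 100.0])   # antenna phase centre position, m
    lever = np.array([0.02, 0.0, -0.10])    # antenna-to-camera offset in body frame, m

    print(camera_centre(antenna, R, lever))  # centre shifted 2 cm in x, 10 cm down
    ```

    In flight, R changes with attitude at every exposure, which is why an uncorrected lever arm of even a few centimetres would dominate the error budget of a centimetre-level system.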

  4. Evaluation of a single-pixel one-transistor active pixel sensor for fingerprint imaging

    NASA Astrophysics Data System (ADS)

    Xu, Man; Ou, Hai; Chen, Jun; Wang, Kai

    2015-08-01

    Since it first appeared in the iPhone 5S in 2013, fingerprint identification (ID) has rapidly gained popularity among consumers. Current fingerprint-enabled smartphones uniformly rely on a discrete sensor to perform fingerprint ID. This architecture not only incurs higher material and manufacturing cost, but also provides only static identification and limited authentication. Hence, as the demand for a thinner, lighter, and more secure handset grows, we propose a novel pixel architecture in which a photosensitive device embedded in a display pixel detects the light reflected from the finger touch, enabling high-resolution, high-fidelity and dynamic biometrics. To this end, an amorphous silicon (a-Si:H) dual-gate photo TFT working in both a fingerprint-imaging mode and a display-driving mode will be developed.

  5. Development of a DNA Sensor Based on Nanoporous Pt-Rich Electrodes

    NASA Astrophysics Data System (ADS)

    Van Hao, Pham; Thanh, Pham Duc; Xuan, Chu Thi; Hai, Nguyen Hoang; Tuan, Mai Anh

    2017-06-01

Nanoporous Pt-rich electrodes with 72 at.% Pt composition were fabricated by sputtering a Pt-Ag alloy, followed by an electrochemical dealloying process to selectively etch away Ag atoms. The surface properties of the nanoporous membranes were investigated by energy-dispersive x-ray spectroscopy (EDS), scanning electron microscopy (SEM), atomic force microscopy (AFM), and a gel documentation system (Gel Doc Imager). A single strand of probe deoxyribonucleic acid (DNA) was immobilized onto the electrode surface by physical adsorption. DNA probe-target hybridization was measured using a lock-in amplifier and electrochemical impedance spectroscopy (EIS). The nanoporous Pt-rich electrode-based DNA sensor offers a fast response time of 3.7 s, with a limit of detection (LOD) of 4.35 × 10⁻¹⁰ M of DNA target.

  6. Reborn quadrant anode image sensor

    NASA Astrophysics Data System (ADS)

    Prokazov, Yury; Turbin, Evgeny; Vitali, Marco; Herzog, Andreas; Michaelis, Bernd; Zuschratter, Werner; Kemnitz, Klaus

    2009-06-01

We describe a position-sensitive photon counting microchannel-plate-based detector with an improved quadrant anode (QA) readout system. The technique relies on a combination of the four planar elements pattern and an additional fifth electrode. The charge cloud induced by a single detected particle is split between the electrodes, and the measured charge values uniquely define the position of the initial event. The QA was first published in 1976 by Lampton and Malina; this anode configuration was undeservedly forgotten and its potential has been largely underestimated. The presented approach extends the operating spatial range to the whole sensitive area of the microchannel plate surface and demonstrates good linearity over the field of view. The novel image sensor thus achieves a spatial resolution better than 50 μm and count rates up to one million events per second.
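The centroid arithmetic behind such charge-division readouts can be sketched as follows; the quadrant layout and normalization are illustrative assumptions in the spirit of the 1976 four-electrode scheme, not the paper's actual five-electrode algorithm:

```python
def qa_position(qa, qb, qc, qd):
    """Estimate event position from the charge split across four quadrant
    electrodes (assumed layout: a=top-left, b=top-right, c=bottom-left,
    d=bottom-right). Returns normalized coordinates in [-1, 1]."""
    total = qa + qb + qc + qd
    x = ((qb + qd) - (qa + qc)) / total  # right minus left
    y = ((qa + qb) - (qc + qd)) / total  # top minus bottom
    return x, y
```

The paper's fifth electrode adds a further charge measurement that resolves the ambiguities limiting the classic two-ratio readout to a small central region.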

  7. New optical sensor systems for high-resolution satellite, airborne and terrestrial imaging systems

    NASA Astrophysics Data System (ADS)

    Eckardt, Andreas; Börner, Anko; Lehmann, Frank

    2007-10-01

The department of Optical Information Systems (OS) at the Institute of Robotics and Mechatronics of the German Aerospace Center (DLR) has more than 25 years of experience with high-resolution imaging technology. Changes in detector technology, together with significant improvements in manufacturing accuracy and ongoing engineering research, define the next generation of spaceborne sensor systems for Earth observation and remote sensing. The combination of large TDI lines, intelligent synchronization control, fast-readable sensors and new focal-plane concepts opens the door to new remote-sensing instruments. This class of instruments enables sensor systems with high geometric and radiometric resolution and data products such as 3D virtual reality. Systemic approaches are essential for the design of such complex sensor systems for dedicated tasks. The system theory of the instrument inside a simulated environment is the beginning of the optimization process for the optical, mechanical and electrical designs. Single modules and the entire system have to be calibrated and verified. Suitable procedures must be defined at component, module and system level for the assembly, test and verification process. This kind of development strategy allows hardware-in-the-loop design. The paper gives an overview of the current activities at DLR in the field of innovative sensor systems for photogrammetric and remote sensing purposes.

  8. Photoacoustic imaging with planoconcave optical microresonator sensors: feasibility studies based on phantom imaging

    NASA Astrophysics Data System (ADS)

    Guggenheim, James A.; Zhang, Edward Z.; Beard, Paul C.

    2017-03-01

The planar Fabry-Pérot (FP) sensor provides high quality photoacoustic (PA) images, but beam walk-off limits sensitivity and thus penetration depth to ≈1 cm. Planoconcave microresonator sensors eliminate beam walk-off, enabling sensitivity to be increased by an order of magnitude whilst retaining the highly favourable frequency response and directional characteristics of the FP sensor. The first tomographic PA images obtained in a tissue-realistic phantom using the new sensors are described. These show that the microresonator sensors provide image quality nearly identical to that of the planar FP sensor but with significantly greater penetration depth (e.g. 2-3 cm) due to their higher sensitivity. This offers the prospect of whole-body small animal imaging and clinical imaging to depths previously unattainable using the planar FP sensor.

  9. Enhancement of Tropical Land Cover Mapping with Wavelet-Based Fusion and Unsupervised Clustering of SAR and Landsat Image Data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

The characterization and mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by any single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lopé Reserve in Gabon. As in previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.

  10. Color filter array design based on a human visual model

    NASA Astrophysics Data System (ADS)

    Parmar, Manu; Reeves, Stanley J.

    2004-05-01

    To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.
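The sequential backward selection step can be sketched generically; `error_fn` below is a stand-in for the paper's perceptual error criterion, which is not reproduced here:

```python
def sequential_backward_selection(candidates, error_fn, k):
    """Greedily drop the candidate whose removal yields the lowest error
    until only k remain. error_fn maps a set of candidates to an error
    value (lower is better)."""
    selected = set(candidates)
    while len(selected) > k:
        # find the member whose removal hurts the criterion least
        removable = min(selected, key=lambda c: error_fn(selected - {c}))
        selected.remove(removable)
    return selected
```

In the paper's setting the candidates would be sampling positions/colors in the array pattern and the error function the perceptual reconstruction error after Wiener demosaicking.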

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leary, T.J.; Lamb, A.

The Department of Energy's Office of Arms Control and Non-Proliferation (NN-20) has developed a suite of airborne remote sensing systems that simultaneously collect coincident data from a US Navy P-3 aircraft. The primary objective of the Airborne Multisensor Pod System (AMPS) Program is "to collect multisensor data that can be used for data research, both to reduce interpretation problems associated with data overload and to develop information products more complete than can be obtained from any single sensor." The sensors are housed in wing-mounted pods and include: a Ku-Band Synthetic Aperture Radar; a CASI Hyperspectral Imager; a Daedalus 3600 Airborne Multispectral Scanner; a Wild Heerbrugg RC-30 motion compensated large format camera; various high resolution, light intensified and thermal video cameras; and several experimental sensors (e.g. the Portable Hyperspectral Imager for Low-Light Spectroscopy (PHILLS)). Over the past year or so, the Coastal Marine Resource Assessment (CAMRA) group at the Florida Department of Environmental Protection's Marine Research Institute (FMRI) has been working with the Department of Energy through the Naval Research Laboratory to develop applications and products from existing data. Considerable effort has been spent identifying image formats and integration parameters. 2 refs., 3 figs., 2 tabs.

  12. VisNAV 100: a robust, compact imaging sensor for enabling autonomous air-to-air refueling of aircraft and unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Katake, Anup; Choi, Heeyoul

    2010-01-01

To enable autonomous air-to-air refueling of manned and unmanned vehicles, a robust high-speed relative navigation sensor capable of providing high-accuracy 3DOF information in diverse operating conditions is required. To help address this problem, StarVision Technologies Inc. has been developing a compact, high update rate (100 Hz), wide field-of-view (90 deg) direction and range estimation imaging sensor called VisNAV 100. The sensor is fully autonomous, requiring no communication from the tanker aircraft, and contains high-reliability embedded avionics to provide range, azimuth and elevation (a 3-degrees-of-freedom, 3DOF, solution) and closing speed relative to the tanker aircraft. The sensor provides 3DOF with an error of 1% in range and 0.1 deg in azimuth/elevation up to a range of 30 m, and a 1 deg error in direction for ranges up to 200 m, at 100 Hz update rates. In this paper we discuss the algorithms that were developed in-house to enable robust beacon pattern detection, outlier rejection and 3DOF estimation in adverse conditions, and present the results of several outdoor tests. Results from the long range single beacon detection tests are also discussed.

  13. A flexible spatiotemporal method for fusing satellite images with different resolutions

    Treesearch

    Xiaolin Zhu; Eileen H. Helmer; Feng Gao; Desheng Liu; Jin Chen; Michael A. Lefsky

    2016-01-01

Studies of land surface dynamics in heterogeneous landscapes often require remote sensing data with high acquisition frequency and high spatial resolution. However, no single sensor meets this requirement. This study presents a new spatiotemporal data fusion method, the Flexible Spatiotemporal DAta Fusion (FSDAF) method, to generate synthesized frequent high spatial...

  14. High spatial resolution LWIR hyperspectral sensor

    NASA Astrophysics Data System (ADS)

    Roberts, Carson B.; Bodkin, Andrew; Daly, James T.; Meola, Joseph

    2015-06-01

    Presented is a new hyperspectral imager design based on multiple slit scanning. This represents an innovation in the classic trade-off between speed and resolution. This LWIR design has been able to produce data-cubes at 3 times the rate of conventional single slit scan devices. The instrument has a built-in radiometric and spectral calibrator.

  15. Nanoparticle-Enhanced Plasmonic Biosensor for Digital Biomarker Detection in a Microarray.

    PubMed

    Belushkin, Alexander; Yesilkoy, Filiz; Altug, Hatice

    2018-05-22

Nanoplasmonic devices have become a paradigm for biomolecular detection enabled by enhanced light-matter interactions, in fields ranging from biological and pharmaceutical research to medical diagnostics and global health. In this work, we present a bright-field imaging plasmonic biosensor that allows visualization of single subwavelength gold nanoparticles (NPs) on large-area gold nanohole arrays (Au-NHAs). The sensor generates image heatmaps that reveal the locations of single NPs as high-contrast spikes, enabling the detection of individual NP-labeled molecules. We implemented the proposed method in a sandwich immunoassay for the detection of biotinylated bovine serum albumin (bBSA) and human C-reactive protein (CRP), a clinical biomarker of acute inflammatory diseases. Our method can detect 10 pg/mL of bBSA and 27 pg/mL of CRP in 2 h, which is at least 4 orders of magnitude lower than the clinically relevant concentrations. Our sensitive and rapid detection approach, paired with robust large-area plasmonic sensor chips fabricated using scalable and low-cost manufacturing, provides a powerful platform for multiplexed biomarker detection in various settings.

  16. Vibration Pattern Imager (VPI): A control and data acquisition system for scanning laser vibrometers

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Brown, Donald E.; Shaffer, Thomas A.

    1993-01-01

The Vibration Pattern Imager (VPI) system was designed to control and acquire data from scanning laser vibrometer sensors. The PC-based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor, but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. The sensor itself is not part of the VPI system. A graphical interface program, which runs on a PC under the MS-DOS operating system, functions in an interactive mode and communicates with the DSP and I/O boards in a user-friendly fashion through the aid of pop-up menus. Two types of data may be acquired with the VPI system: single point or 'full field.' In the single point mode, time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and is stored by the PC. The position of the measuring point (adjusted by mirrors in the sensor) is controlled via a mouse input. The mouse input is translated to output voltages by the D/A converter on the I/O board to control the mirror servos. In the 'full field' mode, the measurement point is moved over a user-selectable rectangular area. The time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and converted to a root-mean-square (rms) value by the DSP board. The rms 'full field' velocity distribution is then uploaded for display and storage on the PC.
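The per-point reduction performed by the DSP board amounts to the standard rms of each sampled time series:

```python
import math

def rms(samples):
    """Root-mean-square of a sampled velocity time series, as computed
    per measurement point in the 'full field' mode."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))
```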

  17. Combining Radar and Optical Data for Forest Disturbance Studies

    NASA Technical Reports Server (NTRS)

    Ranson, K. Jon; Smith, David E. (Technical Monitor)

    2002-01-01

Disturbance is an important factor in determining the carbon balance and succession of forests. Until the early 1990s, researchers focused on using optical or thermal sensors to detect and map forest disturbances from wild fires, logging or insect outbreaks. As part of a NASA Siberian mapping project, a study evaluated the capability of three different radar sensors (ERS, JERS and Radarsat) and an optical sensor (Landsat 7) to detect fire scars, logging and insect damage in the boreal forest. This paper describes the data sets and techniques used to evaluate the use of remote sensing to detect disturbance in central Siberian forests. Using images from each sensor individually and in combination, an assessment of the utility of these sensors was developed. Transformed Divergence analysis and maximum likelihood classification revealed that Landsat data was the single best data type for this purpose. However, the combined use of the three radar and optical sensors did improve the results of discriminating these disturbances.

  18. Wide field-of-view dual-band multispectral muzzle flash detection

    NASA Astrophysics Data System (ADS)

    Montoya, J.; Melchor, J.; Spiliotis, P.; Taplin, L.

    2013-06-01

Sensor technologies are undergoing revolutionary advances, as seen in the rapid growth of multispectral methodologies. Increases in spatial, spectral, and temporal resolution, and in breadth of spectral coverage, render feasible sensors that function with unprecedented performance. A system was developed that addresses many of the key hardware requirements for a practical dual-band multispectral acquisition system, including a wide field of view and the spectral/temporal shift between dual bands. The system was designed using a novel dichroic beam splitter and dual band-pass filter configuration that creates two side-by-side images of a scene on a single sensor. A high-speed CMOS sensor was used to simultaneously capture data from the entire scene in both spectral bands using a short focal-length lens that provided a wide field of view. The beam-splitter components were arranged such that the two images were maintained in optical alignment and real-time intra-band processing could be carried out using only simple arithmetic on the image halves. An experiment was performed to probe the system's limitations against multispectral detection requirements; it characterized the system's low spectral variation across its wide field of view. This paper provides lessons learned on the general limitations of the key hardware components required for multispectral muzzle flash detection, using the system as a hardware example combined with simulated multispectral muzzle flash and background signatures.

  19. Radiometric Correction of Multitemporal Hyperspectral Uas Image Mosaics of Seedling Stands

    NASA Astrophysics Data System (ADS)

    Markelin, L.; Honkavaara, E.; Näsi, R.; Viljanen, N.; Rosnell, T.; Hakala, T.; Vastaranta, M.; Koivisto, T.; Holopainen, M.

    2017-10-01

Novel miniaturized multi- and hyperspectral imaging sensors on board unmanned aerial vehicles have recently shown great potential in various environmental monitoring and measuring tasks such as precision agriculture and forest management. These systems can be used to collect dense 3D point clouds and spectral information over small areas such as single forest stands or sample plots. Accurate radiometric processing and atmospheric correction are required when data sets from different dates and sensors, collected in varying illumination conditions, are combined. The performance of a novel radiometric block adjustment method, developed at the Finnish Geospatial Research Institute, is evaluated with a multitemporal hyperspectral data set of seedling stands collected during spring and summer 2016. Illumination conditions during the campaigns varied from bright to overcast. We use two different methods to produce homogeneous image mosaics and hyperspectral point clouds: image-wise relative correction, and image-wise relative correction with BRDF. Radiometric datasets are converted to reflectance using reference panels, and changes in the reflectance spectra are analysed. The tested methods improved image mosaic homogeneity by 5% to 25%. Results show that the evaluated method can produce consistent reflectance mosaics and reflectance spectra shapes between different areas and dates.
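Reference-panel conversion to reflectance is, at its core, a dark-corrected empirical line; a minimal sketch, with the variable names as illustrative assumptions rather than the authors' actual processing chain:

```python
def empirical_line(dn, dn_dark, dn_panel, panel_reflectance):
    """Map a raw digital number to reflectance using a dark reading and a
    reference panel of known reflectance imaged under the same illumination."""
    return (dn - dn_dark) / (dn_panel - dn_dark) * panel_reflectance
```

The block adjustment evaluated in the paper goes further, solving relative correction (and optionally BRDF) parameters per image before this conversion.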

  20. Multi-view line-scan inspection system using planar mirrors

    NASA Astrophysics Data System (ADS)

Holländer, Branislav; Štolc, Svorad; Huber-Mörk, Reinhold

    2013-04-01

We demonstrate the design, setup, and results for a line-scan stereo image acquisition system using a single area-scan sensor, a single lens and two planar mirrors attached to the acquisition device. The acquired object moves relative to the acquisition device and is observed under three different angles at the same time. Depending on the specific configuration, it is possible to observe the object under a straight view (i.e., looking along the optical axis) and two skewed views. The relative motion between the object and the acquisition device automatically fulfills the epipolar constraint in stereo vision. The choice of lines to be extracted from the CMOS sensor depends on various factors such as the number, position and size of the mirrors, the optical and sensor configuration, or other application-specific parameters like the desired depth resolution. The acquisition setup presented in this paper is suitable for the inspection of printed matter, small parts or security features such as optically variable devices and holograms. The image processing pipeline applied to the extracted sensor lines is explained in detail. The effective depth resolution achieved by the presented system, assembled from only off-the-shelf components, is approximately equal to the spatial resolution and can be smoothly controlled by changing the positions and angles of the mirrors. Actual performance of the device is demonstrated on a 3D-printed ground-truth object as well as two real-world examples: (i) the EUR-100 banknote, a high-quality printed matter, and (ii) the hologram on the EUR-50 banknote, an optically variable device.
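The depth resolution of such a mirror-based stereo rig rests on standard triangulation: with the mirror geometry providing an effective baseline B, depth follows Z = f·B/d for disparity d. A sketch with illustrative units (pixels for f and d, metres for B), not the paper's calibrated model:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px
```

Tilting the mirrors changes the effective baseline, which is why the depth resolution can be tuned by repositioning them.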

  1. Miniaturized optical wavelength sensors

    NASA Astrophysics Data System (ADS)

    Kung, Helen Ling-Ning

Recently, semiconductor processing technology has been applied to the miniaturization of optical wavelength sensors. Compact sensors enable new applications such as integrated diode-laser wavelength monitors and frequency lockers, portable chemical and biological detection, and portable and adaptive hyperspectral imaging arrays. Small sensing systems have trade-offs between resolution, operating range, throughput, multiplexing and complexity. We have developed a new wavelength sensing architecture that balances these parameters for applications involving hyperspectral imaging spectrometer arrays. In this thesis we discuss and demonstrate two new wavelength-sensing architectures whose single-pixel designs can easily be extended into spectrometer arrays. The first class of devices is based on sampling a standing wave: these devices measure the wavelength-dependent period of optical standing waves formed by the interference of forward and reflected waves at a mirror. We fabricated two different devices based on this principle. The first device is a wavelength monitor, which measures the wavelength and power of a monochromatic source. The second device is a spectrometer that can also act as a selective spectral coherence sensor. The spectrometer contains a large-displacement piston-motion MEMS mirror and a thin GaAs photodiode flip-chip bonded to a quartz substrate. The performance of this spectrometer is similar to that of a Michelson in resolution, operating range, throughput and multiplexing, but with the added advantages of fewer components and a one-dimensional architecture. The second class of devices is based on the Talbot self-imaging effect. The Talbot effect occurs when a periodic object is illuminated with a spatially coherent wave. Periodically spaced self-images are formed behind the object. The spacing of the self-images is proportional to the wavelength of the incident light.
We discuss and demonstrate how this effect can be used for spectroscopy. In the conclusion we compare these two new miniaturized spectrometer architectures to existing miniaturized spectrometers. We believe that the combination of miniaturized wavelength sensors and smart processing should facilitate the development of real-time, adaptive and portable sensing systems.
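The wavelength dependence of the self-image spacing follows the standard paraxial Talbot length, z_T = 2d²/λ for a grating of period d; a small sketch of that textbook relation (not the thesis's exact design equations):

```python
def talbot_length(period_m, wavelength_m):
    """Paraxial Talbot length z_T = 2 d^2 / lambda: the distance behind a
    periodic object at which exact self-images repeat."""
    return 2 * period_m ** 2 / wavelength_m
```

Because z_T scales as 1/λ, measuring the self-image spacing behind the grating yields the wavelength of the incident light.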

  2. Commercialization of Australian advanced infrared technology

    NASA Astrophysics Data System (ADS)

    Redpath, John; Brown, Allen; Woods, William F.

    1995-09-01

For several decades, the main thrust in infrared technology developments in Australia has been in two main sensor technologies: uncooled silicon chip printed bolometric sensors pioneered by DSTO's Kevin Liddiard, and precision-engineered high quality Cadmium Mercury Telluride developed at DSTO under the guidance of Dr. Richard Hartley. In late 1993 a low cost infrared imaging device was developed at DSTO as a sensor for guided missiles. The combination of these three innovations made up a unique package that enabled Australian industry to break through the barriers of commercializing infrared technology. The privately owned company R.J. Optronics Pty Ltd undertook the process of re-engineering a selection of these DSTO developments to be applicable to a wide range of infrared products. The first project was a novel infrared imager based on a Palmer scan (translated circle) mechanism. This device uses a spinning wedge and a single detector, with a video processor converting the image into a standard rectangular format. Originally developed as an imaging seeker for a stand-off weapon, it produces such high quality images at such a low cost that it is now also being adapted for a wide variety of other military and commercial applications. A technique for electronically stabilizing it has been developed which uses the inertial signals from co-mounted sensors to compensate for platform motions. This enables it to meet the requirements of aircraft, marine vessels and masthead sight applications without the use of gimbals. After tests on a three-axis motion table, several system configurations have now been successfully operated on a number of lightweight platforms, including a Cessna 172 and the Australian-made Seabird Seeker aircraft.

  3. Single-camera stereo-digital image correlation with a four-mirror adapter: optimized design and validation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2016-12-01

    A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirrors assisted pseudo-stereo imaging system can convert a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extreme high temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated to the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during stress relaxation test. Since the established single-camera stereo-DIC system only needs a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential in determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.

  4. Telemetry Standards, Part 1

    DTIC Science & Technology

    2015-07-01

Fragment of the standard's image-attribute tables: IMAGE FRAME RATE (R-x\IFR-n), PRE-TRIGGER FRAMES (R-x\PTG-n), TOTAL FRAMES (R-x\TOTF-n), EXPOSURE TIME (R-x\EXP-n), SENSOR ROTATION (R-x\...). The frame-mode attribute takes the values "0" (single frame), "1" (multi-frame), or "2" (continuous), and is allowed when R\CDT is "IMGIN"; IMAGE FRAME RATE is given by R-x\IFR-n (R/R, Ch 10 status: RO). A separate fragment describes a return value: a partial IHAL <configuration> element containing only the settings the user wishes to modify.

  5. Multidirectional seismo-acoustic wavefield of strombolian explosions at Yasur, Vanuatu using a broadband seismo-acoustic network, infrasound arrays, and infrasonic sensors on tethered balloons

    NASA Astrophysics Data System (ADS)

    Matoza, R. S.; Jolly, A. D.; Fee, D.; Johnson, R.; Kilgour, G.; Christenson, B. W.; Garaebiti, E.; Iezzi, A. M.; Austin, A.; Kennedy, B.; Fitzgerald, R.; Key, N.

    2016-12-01

    Seismo-acoustic wavefields at volcanoes contain rich information on shallow magma transport and subaerial eruption processes. Acoustic wavefields from eruptions are predicted to be directional, but sampling this wavefield directivity is challenging because infrasound sensors are usually deployed on the ground surface. We attempt to overcome this observational limitation using a novel deployment of infrasound sensors on tethered balloons in tandem with a suite of dense ground-based seismo-acoustic, geochemical, and eruption imaging instrumentation. We present preliminary results from a field experiment at Yasur Volcano, Vanuatu from July 26th to August 4th 2016. Our observations include data from a temporary network of 11 broadband seismometers, 6 single infrasonic microphones, 7 small-aperture 3-element infrasound arrays, 2 infrasound sensor packages on tethered balloons, an FTIR, a FLIR, 2 scanning Flyspecs, and various visual imaging data. An introduction to the dataset and preliminary analysis of the 3D seismo-acoustic wavefield and source process will be presented. This unprecedented dataset should provide a unique window into processes operating in the shallow magma plumbing system and their relation to subaerial eruption dynamics.

  6. Robotic Vehicle Communications Interoperability

    DTIC Science & Technology

    1988-08-01

Fragment of the report's control and sensor matrices (the X marks indicating which vehicle variants carry each item are lost): vehicle controls include starter (cold start), fire suppression, fording control, fuel control, fuel tank selector, garage toggle, gear selector, and hazard warning; the sensor list includes a sensor switch, video, radar, IR, a thermal imaging system, an image intensifier, a laser ranger, and a video camera selector (forward, stereo, rear) with sensor control.

  7. Efficient single-pixel multispectral imaging via non-mechanical spatio-spectral modulation.

    PubMed

    Li, Ziwei; Suo, Jinli; Hu, Xuemei; Deng, Chao; Fan, Jingtao; Dai, Qionghai

    2017-01-27

Combining spectral imaging with compressive sensing (CS) enables efficient data acquisition by fully utilizing the intrinsic redundancies in natural images. Current compressive multispectral imagers, which are mostly based on array sensors (e.g., CCD or CMOS), suffer from limited spectral range and relatively low photon efficiency. To address these issues, this paper reports a multispectral imaging scheme with a single-pixel detector. Inspired by the spatial resolution redundancy of current spatial light modulators (SLMs) relative to the target reconstruction, we design an all-optical spectral splitting device to spatially split the light emitted from the object into several counterparts with different spectra. The separated spectral channels are spatially modulated simultaneously with individual codes by an SLM. This no-moving-part modulation ensures a stable and fast system, and the spatial multiplexing ensures efficient acquisition. A proof-of-concept setup is built and validated for 8-channel multispectral imaging within the 420-720 nm wavelength range on both macro and micro objects, showing potential as an efficient multispectral imager for macroscopic and biomedical applications.
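The single-pixel measurement model can be sketched with orthogonal Hadamard patterns standing in for the paper's compressive (sub-sampled) codes; in this fully sampled toy case the reconstruction is just a scaled transpose rather than a CS solver:

```python
def hadamard(n):
    """Sylvester-construction Hadamard matrix of size n (n a power of two)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

def measure(H, x):
    """Single-pixel measurements: one bucket value per displayed pattern."""
    return [sum(h * xi for h, xi in zip(row, x)) for row in H]

def reconstruct(H, y):
    """Invert using orthogonality: H^T H = n I, so x = H^T y / n."""
    n = len(H)
    return [sum(H[j][i] * y[j] for j in range(n)) / n for i in range(n)]
```

With compressive codes, fewer than n measurements are taken and `reconstruct` is replaced by a sparsity-promoting solver; the spectral splitter in the paper runs one such code stream per spectral channel in parallel.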

  8. Depth-aware image seam carving.

    PubMed

    Shen, Jianbing; Wang, Dapeng; Li, Xuelong

    2013-10-01

    An image seam carving algorithm should preserve important and salient objects as much as possible when changing the image size, while removing secondary objects in the scene first. However, it is still difficult to identify the important and salient objects so that they are not distorted after the input image is resized. In this paper, we develop a novel depth-aware single-image seam carving approach that takes advantage of modern depth cameras such as the Kinect sensor, which captures an RGB color image and its corresponding depth map simultaneously. By considering both the depth information and the just noticeable difference (JND) model, we develop an efficient JND-based significance computation approach using multiscale graph-cut-based energy optimization. Our method achieves better seam carving performance by cutting fewer seams through near objects and removing more seams from distant objects. To the best of our knowledge, our algorithm is the first to use the true depth map captured by a Kinect depth camera for single-image seam carving. Experimental results demonstrate that the proposed approach produces better results than previous content-aware seam carving methods.
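
    The seam-removal step underlying this family of methods can be sketched with the classic dynamic-programming seam search. The depth weighting below is an illustrative stand-in for the paper's JND/graph-cut energy: raising the energy of near pixels makes seams prefer distant regions.

    ```python
    import numpy as np

    def min_vertical_seam(energy):
        """Return the column index of the minimum-energy vertical seam per row.

        Classic seam-carving dynamic programming: cumulative cost M[i, j] is
        the pixel energy plus the cheapest of its three upper neighbours.
        """
        h, w = energy.shape
        M = energy.astype(float).copy()
        for i in range(1, h):
            for j in range(w):
                lo, hi = max(j - 1, 0), min(j + 2, w)
                M[i, j] += M[i - 1, lo:hi].min()
        # Backtrack from the cheapest bottom-row cell.
        seam = [int(np.argmin(M[-1]))]
        for i in range(h - 2, -1, -1):
            j = seam[-1]
            lo, hi = max(j - 1, 0), min(j + 2, w)
            seam.append(lo + int(np.argmin(M[i, lo:hi])))
        return seam[::-1]

    # Illustrative depth-aware energy (assumed form, not the paper's JND model):
    # gradient magnitude boosted where the hypothetical depth map says "near",
    # so seams avoid near objects and cut through distant ones.
    rng = np.random.default_rng(1)
    gray = rng.random((20, 12))
    depth = np.tile(np.linspace(1.0, 0.0, 12), (20, 1))  # 1 = near, on the left
    grad = np.abs(np.gradient(gray, axis=1))
    energy = grad * (1.0 + 4.0 * depth)

    seam = min_vertical_seam(energy)
    ```

    Removing the returned seam and repeating shrinks the image one column at a time.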

  9. 3D digital image correlation using single color camera pseudo-stereo system

    NASA Astrophysics Data System (ADS)

    Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang

    2017-10-01

    Three-dimensional digital image correlation (3D-DIC) has been widely used by industry to measure 3D contours and whole-field displacement/strain. In this paper, a novel single-color-camera 3D-DIC setup using a reflection-based pseudo-stereo system is proposed. Compared to the conventional single-camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system captures both views with the whole CCD chip, without reducing the spatial resolution. In addition, as in a conventional 3D-DIC system, the center of both views lies at the center of the CCD chip, which minimizes image distortion relative to the conventional pseudo-stereo system. The two views overlapped on the CCD are separated in the color domain, and the standard 3D-DIC algorithm can be applied directly for evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
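
    The colour-domain separation can be sketched very simply. This assumes, for illustration only, that one optical path is red-filtered and the other blue-filtered, so the camera's R and B channels carry the two pseudo-stereo views of the same frame; the actual filter choice in the paper may differ.

    ```python
    import numpy as np

    # Hypothetical colour-domain view separation: the two overlapped views are
    # recovered from the R and B channels of a single colour frame.
    def split_views(rgb_frame):
        view_a = rgb_frame[..., 0]   # red channel  -> first stereo view
        view_b = rgb_frame[..., 2]   # blue channel -> second stereo view
        return view_a, view_b

    frame = np.zeros((4, 4, 3))
    frame[..., 0] = 1.0              # pure red content belongs to view A only
    view_a, view_b = split_views(frame)
    ```

    Each recovered view is then fed to the standard stereo 3D-DIC correlation step.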

  10. Applications of iQID cameras

    NASA Astrophysics Data System (ADS)

    Han, Ling; Miller, Brian W.; Barrett, Harrison H.; Barber, H. Bradford; Furenlid, Lars R.

    2017-09-01

    iQID is an intensified quantum imaging detector developed in the Center for Gamma-Ray Imaging (CGRI). Originally called BazookaSPECT, iQID was designed for high-resolution gamma-ray imaging and preclinical gamma-ray single-photon emission computed tomography (SPECT). With the use of a columnar scintillator, an image intensifier, and modern CCD/CMOS sensors, iQID cameras feature outstanding intrinsic spatial resolution. In recent years, many advances have greatly boosted the performance of iQID, broadening its applications to cover nuclear and particle imaging in preclinical, clinical, and homeland security settings. This paper presents an overview of the recent advances in iQID technology and its applications in preclinical and clinical scintigraphy, preclinical SPECT, particle imaging (alpha, neutron, beta, and fission fragment), and digital autoradiography.

  11. A Plenoptic Multi-Color Imaging Pyrometer

    NASA Technical Reports Server (NTRS)

    Danehy, Paul M.; Hutchins, William D.; Fahringer, Timothy; Thurow, Brian S.

    2017-01-01

    A three-color pyrometer has been developed based on plenoptic imaging technology. Three bandpass filters placed in front of a camera lens allow separate 2D images to be obtained on a single image sensor at three different, user-adjustable wavelengths. Images were obtained of several black- or grey-bodies, including a calibration furnace, a radiation heater, and a luminous sulfur match flame. The images of the calibration furnace and radiation heater were processed to determine 2D temperature distributions. Calibration results in the furnace showed that the instrument can measure temperature with an accuracy and precision of 10 K between 1100 and 1350 K. Time-resolved 2D temperature measurements of the radiation heater are shown.
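
    The core of multi-wavelength pyrometry is inverting the ratio of spectral intensities for temperature. A minimal two-colour sketch under the Wien approximation and a greybody (wavelength-independent emissivity) assumption follows; the band centres are illustrative, not the instrument's actual filters.

    ```python
    import math

    C2 = 1.4388e-2  # second radiation constant hc/k, in m*K

    def wien_intensity(wavelength, temperature):
        """Greybody spectral intensity (arbitrary scale) in the Wien limit."""
        return wavelength ** -5 * math.exp(-C2 / (wavelength * temperature))

    def ratio_temperature(i1, i2, lam1, lam2):
        """Invert the two-colour intensity ratio for temperature (Wien limit)."""
        ratio = i1 / i2
        return C2 * (1 / lam2 - 1 / lam1) / (math.log(ratio) - 5 * math.log(lam2 / lam1))

    # Round trip at an assumed source temperature of 1200 K with two
    # illustrative bandpass centres (650 nm and 750 nm).
    lam1, lam2 = 650e-9, 750e-9
    t_true = 1200.0
    i1, i2 = wien_intensity(lam1, t_true), wien_intensity(lam2, t_true)
    t_est = ratio_temperature(i1, i2, lam1, lam2)
    ```

    A third wavelength, as in the instrument above, over-determines the fit and helps reject emissivity variation.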

  12. International remote monitoring project Argentina Nuclear Power Station Spent Fuel Transfer Remote Monitoring System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, S.; Lucero, R.; Glidewell, D.

    1997-08-01

    The Autoridad Regulatoria Nuclear (ARN) and the United States Department of Energy (DOE) are cooperating on the development of a Remote Monitoring System for nuclear nonproliferation efforts. A Remote Monitoring System for spent fuel transfer will be installed at the Argentina Nuclear Power Station in Embalse, Argentina. The system has been designed by Sandia National Laboratories (SNL), with Los Alamos National Laboratory (LANL) and Oak Ridge National Laboratory (ORNL) providing gamma and neutron sensors. This project will test and evaluate the fundamental design and implementation of the Remote Monitoring System in its application to regional and international safeguards efficiency. This paper provides a description of the monitoring system and its functions. The Remote Monitoring System consists of gamma and neutron radiation sensors, RF systems, and video systems integrated into a coherent functioning whole. All sensor data communicate over an Echelon LonWorks network to a single data logger. The Neumann DCM 14 video module is integrated into the Remote Monitoring System. All sensor and image data are stored on a Data Acquisition System (DAS) and archived and reviewed on a Data and Image Review Station (DIRS). Conventional phone lines are used as the telecommunications link to transmit on-site collected data and images to remote locations. The data and images are authenticated before transmission. Data review stations will be installed at ARN in Buenos Aires, Argentina; ABACC in Rio de Janeiro; IAEA Headquarters in Vienna; and Sandia National Laboratories in Albuquerque, New Mexico. 2 refs., 2 figs.

  13. Computational imaging with a balanced detector.

    PubMed

    Soldevila, F; Clemente, P; Tajahuerce, E; Uribe-Patarroyo, N; Andrés, P; Lancis, J

    2016-06-29

    Single-pixel cameras make it possible to obtain images in a wide range of challenging scenarios, including broad regions of the electromagnetic spectrum and through scattering media. However, several drawbacks remain that single-pixel architectures must address, such as acquisition speed and imaging in the presence of ambient light. In this work we introduce balanced detection in combination with simultaneous complementary illumination in a single-pixel camera. This approach enables acquisition of information even when the power of the parasitic signal is higher than the signal itself. Furthermore, this novel detection scheme increases both the frame rate and the signal-to-noise ratio of the system. By means of a fast digital micromirror device together with a low-numerical-aperture collecting system, we are able to produce a live-feed video with a resolution of 64 × 64 pixels at 5 Hz. With advanced undersampling techniques, such as compressive sensing, we can acquire information at rates of 25 Hz. By using this strategy, we foresee real-time biological imaging with large-area detectors in conditions where array sensors are unable to operate properly, such as infrared imaging and dealing with objects embedded in turbid media.
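
    Why balanced detection rejects ambient light can be shown in a few lines. In this illustrative model (not the authors' optical layout), one detector sees the scene through pattern P and the other through its complement 1-P; subtracting the two readings cancels any pattern-independent background exactly.

    ```python
    import numpy as np

    # Balanced detection with complementary patterns (illustrative sketch).
    rng = np.random.default_rng(2)

    n = 64
    scene = rng.random(n)
    ambient = 50.0                            # strong parasitic background

    pattern = rng.integers(0, 2, size=n).astype(float)
    arm_a = pattern @ scene + ambient         # detector behind pattern P
    arm_b = (1 - pattern) @ scene + ambient   # detector behind complement 1-P

    balanced = arm_a - arm_b                  # ambient term cancels exactly
    expected = (2 * pattern - 1) @ scene      # equivalent +/-1 pattern reading
    ```

    The differential reading is also what a +/-1 (Hadamard-style) pattern would measure, so standard reconstruction applies unchanged.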

  14. Computational imaging with a balanced detector

    NASA Astrophysics Data System (ADS)

    Soldevila, F.; Clemente, P.; Tajahuerce, E.; Uribe-Patarroyo, N.; Andrés, P.; Lancis, J.

    2016-06-01

    Single-pixel cameras make it possible to obtain images in a wide range of challenging scenarios, including broad regions of the electromagnetic spectrum and through scattering media. However, several drawbacks remain that single-pixel architectures must address, such as acquisition speed and imaging in the presence of ambient light. In this work we introduce balanced detection in combination with simultaneous complementary illumination in a single-pixel camera. This approach enables acquisition of information even when the power of the parasitic signal is higher than the signal itself. Furthermore, this novel detection scheme increases both the frame rate and the signal-to-noise ratio of the system. By means of a fast digital micromirror device together with a low-numerical-aperture collecting system, we are able to produce a live-feed video with a resolution of 64 × 64 pixels at 5 Hz. With advanced undersampling techniques, such as compressive sensing, we can acquire information at rates of 25 Hz. By using this strategy, we foresee real-time biological imaging with large-area detectors in conditions where array sensors are unable to operate properly, such as infrared imaging and dealing with objects embedded in turbid media.

  15. Computational imaging with a balanced detector

    PubMed Central

    Soldevila, F.; Clemente, P.; Tajahuerce, E.; Uribe-Patarroyo, N.; Andrés, P.; Lancis, J.

    2016-01-01

    Single-pixel cameras make it possible to obtain images in a wide range of challenging scenarios, including broad regions of the electromagnetic spectrum and through scattering media. However, several drawbacks remain that single-pixel architectures must address, such as acquisition speed and imaging in the presence of ambient light. In this work we introduce balanced detection in combination with simultaneous complementary illumination in a single-pixel camera. This approach enables acquisition of information even when the power of the parasitic signal is higher than the signal itself. Furthermore, this novel detection scheme increases both the frame rate and the signal-to-noise ratio of the system. By means of a fast digital micromirror device together with a low-numerical-aperture collecting system, we are able to produce a live-feed video with a resolution of 64 × 64 pixels at 5 Hz. With advanced undersampling techniques, such as compressive sensing, we can acquire information at rates of 25 Hz. By using this strategy, we foresee real-time biological imaging with large-area detectors in conditions where array sensors are unable to operate properly, such as infrared imaging and dealing with objects embedded in turbid media. PMID:27353733

  16. Importance of network density of nanotube: Effect on nitrogen dioxide gas sensing by solid state resistive sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Prabhash; Grachyova, D. V.; Moskalenko, A. S.

    2016-04-13

    Dispersion of single-walled carbon nanotubes (SWCNTs) is well established; however, its effect on toxic gas sensing for solid-state resistive sensors has not been well reported. In this report, the dispersion quality of SWCNTs was investigated and improved, and the well-dispersed SWCNT network was used for sensor fabrication to monitor nitrogen dioxide gas. Ultraviolet (UV)-visible spectroscopic studies show the quality of the SWCNT dispersion, and scanning electron microscopy (SEM) imaging provides the morphological properties of the sensor device. Two sets of resistive-type sensors, each consisting of a pair of interdigitated electrodes (IDEs), were fabricated by dielectrophoresis with different SWCNT network densities. With a low-density SWCNT network, the fabricated sensor exhibits a high response to nitrogen dioxide. The sensing of nitrogen dioxide is mainly due to charge transfer from adsorbed molecules to the nanotube sidewalls, with tube-tube screening playing a major role in the charge-carrier transport properties.

  17. CMOS Image Sensors: Electronic Camera On A Chip

    NASA Technical Reports Server (NTRS)

    Fossum, E. R.

    1995-01-01

    Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On- chip analog to digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low cost uses.

  18. NeuroSeek dual-color image processing infrared focal plane array

    NASA Astrophysics Data System (ADS)

    McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.

    1998-09-01

    Several technologies have been developed in recent years to advance the state of the art of IR sensor systems, including dual-color affordable focal planes, on-focal-plane-array biologically inspired image and signal processing techniques, and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) which combines multicolor midwave-IR/longwave-IR radiometric response with on-focal-plane 'smart' neuromorphic analog image processing. The very-large-scale-integration (VLSI) readout and processing integrated circuit developed under this effort will be hybridized to a dual-color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Applying massively parallel processing algorithms to image data in the analog domain affords great advantages; the high speed and low power consumption of this device mimic operations performed in the human retina.

  19. The Filament Sensor for Near Real-Time Detection of Cytoskeletal Fiber Structures

    PubMed Central

    Eltzner, Benjamin; Wollnik, Carina; Gottschlich, Carsten; Huckemann, Stephan; Rehfeldt, Florian

    2015-01-01

    A reliable extraction of filament data from microscopic images is of high interest in the analysis of acto-myosin structures as early morphological markers in mechanically guided differentiation of human mesenchymal stem cells and the understanding of the underlying fiber arrangement processes. In this paper, we propose the filament sensor (FS), a fast and robust processing sequence which detects and records location, orientation, length, and width for each single filament of an image, and thus allows for the analysis described above. The extraction of these features has previously not been possible with existing methods. We evaluate the performance of the proposed FS in terms of accuracy and speed in comparison to three existing methods with respect to their limited output. Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert, as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source. PMID:25996921

  20. [Modeling continuous scaling of NDVI based on fractal theory].

    PubMed

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    Scale effect is one of the most important scientific problems in remote sensing. The scale effect in quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe how retrievals change across an entire series of scales; meanwhile, they face serious parameter-correction issues because imaging parameters (geometric, spectral, etc.) vary between sensors. Using a single-sensor image, a fractal methodology was applied to solve these problems. Taking NDVI (computed from land surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model the continuous scaling of retrievals. The experimental results indicated that: (1) for NDVI, a scale effect exists and can be described by a fractal model of continuous scaling; and (2) the fractal method is suitable for validation of NDVI. These results demonstrate that fractal analysis is an effective methodology for studying scaling in quantitative remote sensing.
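
    A continuous-scaling analysis of this kind can be sketched by block-aggregating an NDVI grid at successive scales and fitting a log-log line; the power-law form and the use of spatial standard deviation below are illustrative assumptions, not the paper's exact fractal model.

    ```python
    import numpy as np

    def upscale(ndvi, factor):
        """Aggregate an NDVI grid by block averaging (simple up-scaling)."""
        h, w = ndvi.shape
        return ndvi[: h - h % factor, : w - w % factor] \
            .reshape(h // factor, factor, -1, factor).mean(axis=(1, 3))

    # Illustrative fractal-style analysis: regress the log of the spatial
    # standard deviation of NDVI against the log of the aggregation scale;
    # the slope plays the role of a scaling exponent.
    rng = np.random.default_rng(3)
    ndvi = rng.normal(0.5, 0.1, size=(64, 64)).clip(-1, 1)

    scales = [1, 2, 4, 8]
    sigmas = [upscale(ndvi, s).std() for s in scales]
    slope, intercept = np.polyfit(np.log(scales), np.log(sigmas), 1)
    ```

    Real NDVI fields are spatially correlated, so the fitted exponent differs from the pure-noise case shown here.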

  1. Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.

    PubMed

    Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K

    2014-02-01

    Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution.
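
    The sampling scheme above can be sketched as a per-pixel binary shutter applied along the time axis and summed into one coded frame. The one-exposure-per-pixel pattern below is an illustrative assumption; the paper's dictionary-based sparse reconstruction is not reproduced here.

    ```python
    import numpy as np

    # Pixel-wise coded exposure, forward model only (illustrative sketch).
    rng = np.random.default_rng(4)

    frames = 8
    h = w = 16
    video = rng.random((frames, h, w))           # space-time volume x(t, y, x)

    # Each pixel is exposed during exactly one randomly chosen frame: a simple
    # instance of a per-pixel binary shutter function S(t, y, x).
    shutter = np.zeros_like(video)
    on_frame = rng.integers(0, frames, size=(h, w))
    shutter[on_frame, np.arange(h)[:, None], np.arange(w)] = 1.0

    coded_image = (shutter * video).sum(axis=0)  # the single captured frame
    ```

    Reconstruction then seeks the space-time volume, sparse in a learned dictionary, that is consistent with this one coded frame.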

  2. Leonardo (formerly Selex ES) infrared sensors for astronomy: present and future

    NASA Astrophysics Data System (ADS)

    Baker, Ian; Maxey, Chris; Hipwood, Les; Barnes, Keith

    2016-07-01

    Many branches of science require infrared detectors sensitive to individual photons. Applications range from low-background astronomy to high-speed imaging. Leonardo in Southampton, UK, has been developing HgCdTe avalanche photodiode (APD) sensors for astronomy in collaboration with the European Southern Observatory (ESO) since 2008 and, more recently, the University of Hawaii. The devices utilise HgCdTe grown by metal-organic vapour phase epitaxy (MOVPE) on low-cost GaAs substrates and, in combination with a mesa device structure, achieve very low dark current and near-ideal MTF. MOVPE provides the ability to grow complex HgCdTe heterostructures, and these have proved crucial to suppress breakdown currents and allow high avalanche gain in low-background situations. A custom device called Saphira (320x256/24μm) has been developed for wavefront sensors, interferometry and transient event imaging. This device has achieved read noise as low as 0.26 electrons rms and single photon imaging with avalanche gain up to ×450. It is used in the ESO Gravity program for adaptive optics and fringe tracking and has been successfully trialled on the 3m NASA IRTF, 8.2m Subaru and 60 inch Mt Palomar for lucky imaging and wavefront sensing. In the future, the technology offers much shorter observation times for read-noise-limited instruments, particularly spectroscopy. The paper will describe the MOVPE APD technology and current performance status.

  3. Dual-mode photosensitive arrays based on the integration of liquid crystal microlenses and CMOS sensors for obtaining the intensity images and wavefronts of objects.

    PubMed

    Tong, Qing; Lei, Yu; Xin, Zhaowei; Zhang, Xinyu; Sang, Hongshi; Xie, Changsheng

    2016-02-08

    In this paper, we present a dual-mode photosensitive array (DMPA) constructed by hybrid integration of an electrically driven liquid crystal microlens array (LCMLA) and a CMOS sensor array, which can be used to measure both conventional intensity images and the corresponding wavefronts of objects. We utilize liquid crystal materials to shape the microlens array with an electrically tunable focal length. By switching the voltage signal on and off, the wavefronts and the intensity images can be acquired sequentially through the DMPA. We use white light to obtain the object's wavefronts to avoid losing important wavefront information. We separate the white-light wavefronts, which contain a large number of spectral components, and then experimentally compare them with single-spectrum wavefronts of typical red, green, and blue lasers, respectively. We then mix the red, green, and blue wavefronts into a composite wavefront containing more optical information about the object.

  4. An Inventory of Impact Craters on the Martian South Polar Layered Deposits

    NASA Technical Reports Server (NTRS)

    Plaut, J. J.

    2005-01-01

    The polar layered deposits (PLD) of Mars continue to be a focus of study due to the possibility that these finely layered, volatile-rich deposits hold a record of recent eras in Martian climate history. Recently, the visible sensor on 2001 Mars Odyssey's Thermal Emission Imaging System (THEMIS) has acquired 36 meter/pixel contiguous single-band visible image data sets of both the north and the south polar layered deposits, during the local spring and summer seasons. In addition, significant coverage has been obtained at the THEMIS visible sensor's full resolution of 18 meters/pixel. This paper reports on the use of these data sets to further characterize the population of impact craters on the south polar layered deposits (SPLD), and the implications of the observed population for the age and evolution of the SPLD.

  5. Confocal Microscopy Imaging with an Optical Transition Edge Sensor

    NASA Astrophysics Data System (ADS)

    Fukuda, D.; Niwa, K.; Hattori, K.; Inoue, S.; Kobayashi, R.; Numata, T.

    2018-05-01

    Fluorescence color imaging at an extremely low excitation intensity was performed using an optical transition edge sensor (TES) embedded in a confocal microscope for the first time. Optical TES has the ability to resolve incident single photon energy; therefore, the wavelength of each photon can be measured without spectroscopic elements such as diffraction gratings. As target objects, animal cells labeled with two fluorescent dyes were irradiated with an excitation laser at an intensity below 1 μW. In our confocal system, an optical fiber-coupled TES device is used to detect photons instead of the pinhole and photomultiplier tube used in typical confocal microscopes. Photons emitted from the dyes were collected by the objective lens, and sent to the optical TES via the fiber. The TES measures the wavelength of each photon arriving in an exposure time of 70 ms, and a fluorescent photon spectrum is constructed. This measurement is repeated by scanning the target sample, and finally a two-dimensional RGB-color image is obtained. The obtained image showed that the photons emitted from the dyes of mitochondria and cytoskeletons were clearly resolved at a detection intensity level of tens of photons. TES exhibits ideal performance as a photon detector with a low dark count rate (< 1 Hz) and wavelength resolving power. In the single-mode fiber-coupled system, the confocal microscope can be operated in the super-resolution mode. These features are very promising to realize high-sensitivity and high-resolution photon spectral imaging, and would help avoid cell damage and photobleaching of fluorescence dyes.
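
    The energy-resolving step a TES performs can be sketched arithmetically: each pulse yields a photon energy, the wavelength follows from lambda = hc/E, and photons are histogrammed into colour bands. The band edges below are illustrative assumptions, not the instrument's calibration.

    ```python
    # Energy-resolved photon binning, as done conceptually by an optical TES.
    HC_EV_NM = 1239.84  # h*c in eV*nm

    def wavelength_nm(energy_ev):
        """Convert a measured photon energy (eV) to wavelength (nm)."""
        return HC_EV_NM / energy_ev

    def bin_photons(energies_ev):
        """Accumulate photon counts per (illustrative) colour band."""
        counts = {"blue": 0, "green": 0, "red": 0, "other": 0}
        for e in energies_ev:
            lam = wavelength_nm(e)
            if 450 <= lam < 495:
                counts["blue"] += 1
            elif 495 <= lam < 570:
                counts["green"] += 1
            elif 570 <= lam < 700:
                counts["red"] += 1
            else:
                counts["other"] += 1
        return counts

    # A 2.0 eV photon corresponds to ~620 nm, i.e. a red fluorescence photon.
    counts = bin_photons([2.0, 2.4, 2.6])
    ```

    Repeating this per scan position yields the per-pixel spectra from which the RGB image is composed.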

  6. Wide-field surface plasmon microscopy of nano- and microparticles: features, benchmarking, limitations, and bioanalytical applications

    NASA Astrophysics Data System (ADS)

    Nizamov, Shavkat; Scherbahn, Vitali; Mirsky, Vladimir M.

    2017-05-01

    Detection of nano- and micro-particles is an important task in chemical analytics, the food industry, biotechnology, environmental monitoring, and many other fields of science and industry. For this purpose, a method based on the detection and analysis of minute signals in surface plasmon resonance images due to the adsorption of single nanoparticles was developed. This new technology allows real-time detection of the interaction of single nano- and micro-particles with the sensor surface. Adsorption of each nanoparticle leads to a characteristic diffraction image whose intensity depends on the size and chemical composition of the particle. The adsorption rate characterizes the volume concentration of nano- and micro-particles. The large monitored surface area of the sensor enables a high dynamic range of counting and a correspondingly high dynamic range on the concentration scale. Depending on the type of particles and experimental conditions, the detection limit for aqueous samples can be below 1000 particles per microliter. For application of the method in complex media, nanoparticle images are discriminated from image perturbations due to matrix components. First, the characteristic SPRM images of nanoparticles (templates) are collected in aqueous suspensions or spiked real samples. Then, the detection of nanoparticles in complex media is performed using template matching. Detection of various NPs in consumer products such as cosmetics, mineral water, juices, and wines was shown at sub-ppb level. The method can be applied for ultrasensitive detection and analysis of nano- and micro-particles of biological (bacteria, viruses, endosomes), biotechnological (liposomes, protein nanoparticles for drug delivery), or technical origin.
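
    The template-matching step can be sketched with normalised cross-correlation, a generic matcher used here for illustration; the authors' actual detector and thresholds may differ.

    ```python
    import numpy as np

    def ncc_match(image, template):
        """Locate a template via normalised cross-correlation (brute force).

        Returns the (row, col) of the best-matching top-left corner and its
        correlation score in [-1, 1].
        """
        th, tw = template.shape
        t = template - template.mean()
        tnorm = np.sqrt((t ** 2).sum())
        best, best_pos = -np.inf, (0, 0)
        for i in range(image.shape[0] - th + 1):
            for j in range(image.shape[1] - tw + 1):
                win = image[i:i + th, j:j + tw]
                w = win - win.mean()
                denom = np.sqrt((w ** 2).sum()) * tnorm
                score = (w * t).sum() / denom if denom > 0 else 0.0
                if score > best:
                    best, best_pos = score, (i, j)
        return best_pos, best

    rng = np.random.default_rng(5)
    template = rng.random((5, 5))          # characteristic particle signature
    image = rng.random((30, 30)) * 0.1     # weak background clutter
    image[12:17, 8:13] = template          # embed one particle-like event

    pos, score = ncc_match(image, template)
    ```

    In practice an FFT-based correlation replaces the brute-force loops for full-frame, real-time operation.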

  7. A Wearable Real-Time and Non-Invasive Thoracic Cavity Monitoring System

    NASA Astrophysics Data System (ADS)

    Salman, Safa

    A surgery-free on-body monitoring system is proposed to evaluate the dielectric constant of internal body tissues (especially lung and heart) and effectively determine irregularities in real time. The proposed system includes a sensor, a post-processing technique, and an automated data-collection circuit. Data are automatically collected from the sensor electrodes and then post-processed to extract the electrical properties of the underlying biological tissue(s). To demonstrate the imaging concept, planar and wrap-around sensors are devised. These sensors are designed to detect changes in the dielectric constant of inner tissues (lung and heart). The planar sensor focuses on a single organ, while the wrap-around sensor allows imaging of the thoracic cavity's cross section. Moreover, post-processing techniques are proposed to complement the sensors for a more complete on-body monitoring system. The idea behind the post-processing technique is to suppress interference from the outer layers (skin, fat, muscle, and bone). The sensors and post-processing techniques yield a high ratio of signal (from the inner layers) to noise (from the outer layers). Additionally, data-collection circuits are proposed for a more robust, stand-alone system. The circuit sequentially activates each port of the sensor, and portions of the propagating signal are received at all passive ports as voltages at the probes. The voltages are converted to scattering parameters, which are then used in the post-processing technique to obtain εr. The concept of wearability is also considered through the use of electrically conductive fibers (E-fibers). These fibers show performance matching that of copper, especially at low frequencies, making them a viable substitute. For the cases considered, the proposed sensors show promising results in recovering the permittivity of deep tissues with a maximum error of 13.5%. These sensors pave the way for a new class of medical sensors through improved accuracy and avoidance of inverse scattering techniques.

  8. 77 FR 26787 - Certain CMOS Image Sensors and Products Containing Same; Notice of Receipt of Complaint...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-07

    ... INTERNATIONAL TRADE COMMISSION [Docket No. 2895] Certain CMOS Image Sensors and Products.... International Trade Commission has received a complaint entitled Certain CMOS Image Sensors and Products... importation, and the sale within the United States after importation of certain CMOS image sensors and...

  9. Virtually transparent epidermal imagery (VTEI): on new approaches to in vivo wireless high-definition video and image processing.

    PubMed

    Anderson, Adam L; Lin, Bingxiong; Sun, Yu

    2013-12-01

    This work first overviews a novel design, and prototype implementation, of a virtually transparent epidermal imagery (VTEI) system for laparo-endoscopic single-site (LESS) surgery. The system uses a network of multiple micro-cameras and multiview mosaicking to obtain a panoramic view of the surgery area. The prototype VTEI system also projects the generated panoramic view on the abdomen area to create a transparent display effect that mimics equivalent, but higher risk, open-cavity surgeries. The specific research focus of this paper is on two important aspects of a VTEI system: 1) in vivo wireless high-definition (HD) video transmission and 2) multi-image processing, both of which play key roles in next-generation systems. For transmission and reception, this paper proposes a theoretical wireless communication scheme for high-definition video in situations that require extremely small-footprint image sensors and zero-latency operation. In such situations, the metrics typically optimized in communication schemes, such as power and data rate, are far less important than latency and hardware footprint, which absolutely preclude a scheme's use if they are not satisfied. This work proposes a novel frequency-modulated voltage-division multiplexing (FM-VDM) scheme in which sensor data are kept analog and transmitted via "voltage-multiplexed" signals that are also frequency-modulated. Once images are received, a novel Homographic Image Mosaicking and Morphing (HIMM) algorithm is proposed to stitch images from the respective cameras into a single cohesive view of the surgical area while compensating for irregular surfaces in real time. In VTEI, this view is then visible to the surgeon directly on the patient to give an "open cavity" feel to laparoscopic procedures.
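
    The core of homographic mosaicking is warping pixel coordinates from one camera into a reference frame through a 3x3 homography. A minimal sketch of that projective mapping follows; the matrix shown is an illustrative pure translation, not a calibrated camera pair.

    ```python
    import numpy as np

    def apply_homography(H, points):
        """Map Nx2 pixel coordinates through a 3x3 homography.

        This is the generic projective warp at the heart of image mosaicking;
        in a real stitcher H is estimated from matched features per camera.
        """
        pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous
        mapped = pts @ H.T
        return mapped[:, :2] / mapped[:, 2:3]                 # de-homogenise

    # A pure translation expressed as a homography: shift by (+10, +5) pixels.
    H_shift = np.array([[1.0, 0.0, 10.0],
                        [0.0, 1.0, 5.0],
                        [0.0, 0.0, 1.0]])
    corners = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 80.0], [0.0, 80.0]])
    warped = apply_homography(H_shift, corners)
    ```

    Warping every camera's frame into the reference plane and blending the overlaps produces the single cohesive mosaic.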

  10. Evaluation of sensor, environment and operational factors impacting the use of multiple sensor constellations for long term resource monitoring

    NASA Astrophysics Data System (ADS)

    Rengarajan, Rajagopalan

    Moderate-resolution remote sensing data offer the potential to monitor long- and short-term trends in the condition of the Earth's resources at finer spatial scales and over longer time periods. While improved calibration (radiometric and geometric), free access (Landsat, Sentinel, CBERS), and higher-level products in reflectance units have made it easier for the science community to derive biophysical parameters from these remotely sensed data, a number of issues still affect the analysis of multi-temporal datasets. These arise primarily from sources inherent in the process of imaging from single or multiple sensors. These undesired or uncompensated sources of variation include variation in view angles, illumination angles, atmospheric effects, and sensor effects such as Relative Spectral Response (RSR) differences between sensors. The complex interaction of these sources of variation would make their study extremely difficult, if not impossible, with real data; therefore, a simulated analysis approach is used in this study. A synthetic forest canopy is produced using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, and its measured BRDFs are modeled using the RossLi canopy BRDF model. The simulated BRDF matches the real data to within 2% of the reflectance in the red and NIR spectral bands studied. The BRDF modeling process is extended to model and characterize the defoliation of a forest, which is used in factor sensitivity studies to estimate the effect of each factor under varying environment and sensor conditions. Finally, a factorial experiment is designed to understand the significance of the sources of variation, and regression-based analyses are performed to understand the relative importance of the factors. The design of experiments and the sensitivity analysis conclude that atmospheric attenuation and variations due to the illumination angles are the dominant sources impacting the at-sensor radiance.
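
The RossLi model named above expresses reflectance as a weighted sum of an isotropic term and scattering kernels. A minimal sketch of the volumetric (RossThick) kernel and a kernel-weighted reflectance follows; the weight values are illustrative, and the geometric (LiSparse) kernel is omitted for brevity:

```python
import math

def ross_thick(theta_s, theta_v, phi):
    """RossThick volumetric scattering kernel (angles in radians).

    theta_s: solar zenith, theta_v: view zenith, phi: relative azimuth.
    """
    cos_xi = (math.cos(theta_s) * math.cos(theta_v)
              + math.sin(theta_s) * math.sin(theta_v) * math.cos(phi))
    xi = math.acos(max(-1.0, min(1.0, cos_xi)))  # scattering phase angle
    return (((math.pi / 2 - xi) * math.cos(xi) + math.sin(xi))
            / (math.cos(theta_s) + math.cos(theta_v)) - math.pi / 4)

def brdf_reflectance(f_iso, f_vol, theta_s, theta_v, phi):
    # Kernel-weighted sum; the full RossLi model adds f_geo * (LiSparse kernel).
    return f_iso + f_vol * ross_thick(theta_s, theta_v, phi)
```

At nadir sun and view angles the RossThick kernel evaluates to zero, so the modeled reflectance reduces to the isotropic weight alone.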

  11. An Autonomous Sensor Tasking Approach for Large Scale Space Object Cataloging

    NASA Astrophysics Data System (ADS)

    Linares, R.; Furfaro, R.

    The field of Space Situational Awareness (SSA) has progressed over the last few decades with new sensors coming online, the development of new approaches for making observations, and new algorithms for processing them. Although there has been success in developing new approaches, a missing piece is the translation of SSA goals into sensor and resource allocation, otherwise known as the Sensor Management Problem (SMP). This work solves the SMP using an artificial intelligence approach called Deep Reinforcement Learning (DRL). Stable methods exist for training neural-network-based DRL approaches, but most are not suitable for high-dimensional systems. The Asynchronous Advantage Actor-Critic (A3C) method is a recently developed and effective approach for high-dimensional systems, and this work leverages these results and applies the approach to decision making in SSA. The decision space for SSA problems can be high dimensional, even for the tasking of a single telescope. Since the number of space objects (SOs) in orbit is large, each sensor has a large number of possible actions at any given time; therefore, efficient DRL approaches are required when solving the SMP for SSA. This work develops an A3C-based DRL method for SSA sensor tasking. One of the key benefits of DRL approaches is the ability to handle high-dimensional data: DRL methods have been applied to image processing for autonomous driving, where a single 256x256 RGB image contributes 196,608 input values (256 x 256 x 3), and deep learning approaches routinely take such images as inputs. Therefore, when applied to the whole catalog, the DRL approach offers the ability to solve this high-dimensional problem. This work has the potential to solve, for the first time, the non-myopic sensor tasking problem for the whole SO catalog (over 22,000 objects), providing a truly revolutionary result.
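
To give a feel for the dimensionality described above, here is a hypothetical linear softmax policy for single-telescope tasking over a catalog of candidate objects. All names, feature choices, and sizes are illustrative assumptions, not the paper's A3C architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

n_objects = 22_000  # approximate catalog size cited in the abstract
state_dim = 4       # illustrative per-object features, e.g. staleness,
                    # covariance size, elevation, priority

def softmax(x):
    z = x - x.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical linear policy: score every catalog object from its features,
# then sample the next observation target from the softmax distribution.
weights = rng.normal(size=state_dim)
features = rng.normal(size=(n_objects, state_dim))
probs = softmax(features @ weights)
action = int(rng.choice(n_objects, p=probs))
```

Even this toy policy must rank 22,000 discrete actions per decision, which is the scale that motivates an efficient DRL method such as A3C.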

  12. Hemispherical Field-of-View Above-Water Surface Imager for Submarines

    NASA Technical Reports Server (NTRS)

    Hemmati, Hamid; Kovalik, Joseph M.; Farr, William H.; Dannecker, John D.

    2012-01-01

    A document discusses solutions to the problem of submarines having to rise above water to detect airplanes in the general vicinity. Two solutions are provided, in which a sensor is located either just under the water surface or at a depth of a few to tens of meters. The first option is a fish-eye lens (FEL) and digital-camera combination, situated just under the water surface, that has a near-full-hemisphere (360° azimuth and 90° elevation) field of view for detecting objects on the water surface. This sensor can provide a three-dimensional picture of the airspace in both marine and land environments. The FEL is coupled to a camera and can continuously look at the entire sky above it. The camera can have an Active Pixel Sensor (APS) focal plane array that allows logic circuitry to be built directly into the sensor. The logic circuitry allows data processing to occur on the sensor head without the need for any other external electronics. In the second option, a single-photon-sensitive (photon counting) detector array is used at depth, without any optics in front of it, since at this location optical signals are scattered and arrive over a wide (tens of degrees) range of angles. Beam scattering through clouds and seawater effectively negates optical imaging at depths below a few meters under cloudy or turbulent conditions. Under those conditions, maximum collection efficiency can be achieved by using a non-imaging photon-counting detector behind narrowband filters. In either case, signals from these sensors may be fused and correlated or decorrelated with other sensor data to get an accurate picture of the object(s) above the submarine. These devices can complement traditional submarine periscopes, which have a limited field of view in elevation, and circumvent the need to expose the entire submarine or its periscope to the outside environment.

  13. Monitoring of sludge dewatering equipment by image classification

    NASA Astrophysics Data System (ADS)

    Maquine de Souza, Sandro; Grandvalet, Yves; Denoeux, Thierry

    2004-11-01

    Belt filter presses represent an economical means of dewatering the residual sludge generated in wastewater treatment plants. To ensure maximal water removal, the raw sludge is mixed with a chemical conditioner before being fed into the belt filter press. When the conditioner is properly dosed, the sludge acquires a coarse texture, with space between flocs. This information was exploited in the development of a software sensor, where digital images are the input signal and the output is a numeric value proportional to the dewatered sludge dry content. Three families of features were used to characterize the textures: Gabor filtering, wavelet decomposition, and co-occurrence matrix computation. A database of images, ordered by their corresponding dry contents, was used to calibrate the model that calculates the sensor output. The images were separated into groups corresponding to single experimental sessions. With the calibrated model, all images were correctly ranked within each experimental session, and the results were very similar regardless of the family of features used. The output can be fed to a control system or, under fixed experimental conditions, used to directly estimate the dewatered sludge dry content.
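
Of the three feature families mentioned, the co-occurrence matrix is the simplest to sketch. Below is a minimal gray-level co-occurrence matrix (GLCM) with a Haralick contrast feature; this is a generic illustration, not the paper's implementation:

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Count co-occurrences of gray levels at pixel offset (dx, dy)."""
    g = np.zeros((levels, levels), dtype=np.int64)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                g[img[y, x], img[y2, x2]] += 1
    return g

def contrast(g):
    """Haralick contrast: sum of p(i, j) * (i - j)^2 over the normalized GLCM."""
    p = g / g.sum()
    i, j = np.indices(g.shape)
    return float((p * (i - j) ** 2).sum())

img = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 2, 2]])
g = glcm(img, dx=1, dy=0, levels=3)  # horizontal neighbor pairs
```

A coarse, open texture produces more high-difference gray-level pairs and hence a higher contrast value, which is the kind of statistic such a software sensor can map to dry content.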

  14. High-speed particle tracking in microscopy using SPAD image sensors

    NASA Astrophysics Data System (ADS)

    Gyongy, Istvan; Davies, Amy; Miguelez Crespo, Allende; Green, Andrew; Dutton, Neale A. W.; Duncan, Rory R.; Rickman, Colin; Henderson, Robert K.; Dalgarno, Paul A.

    2018-02-01

    Single-photon avalanche diodes (SPADs) are used in a wide range of applications, from fluorescence lifetime imaging microscopy (FLIM) to time-of-flight (ToF) 3D imaging. SPAD arrays are becoming increasingly established, combining the unique properties of SPADs with widefield camera configurations. Traditionally, the photosensitive area (fill factor) of SPAD arrays has been limited by the in-pixel digital electronics. However, recent designs have demonstrated that by replacing the complex digital pixel logic with simple binary pixels and external frame summation, the fill factor can be increased considerably. A significant advantage of such binary SPAD arrays is the high frame rate offered by the sensors (>100 kFPS), which opens up new possibilities for capturing ultra-fast temporal dynamics in, for example, life science cellular imaging. In this work we consider the use of novel binary SPAD arrays for high-speed particle tracking in microscopy. We demonstrate the tracking of fluorescent microspheres undergoing Brownian motion, and of intracellular vesicle dynamics, at high frame rates. We thereby show how binary SPAD arrays can offer an important advance in live cell imaging in fields such as intercellular communication, cell trafficking, and cell signaling.
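
The external frame-summation idea can be sketched in a few lines: each binary frame records at most one count per pixel, and summing K such frames off-chip yields a multi-bit intensity image. The scene and numbers below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate K one-bit SPAD frames: each pixel fires with probability
# proportional to the local photon flux (at most one count per frame).
k_frames, h, w = 256, 32, 32
p_fire = np.full((h, w), 0.1)
p_fire[8:24, 8:24] = 0.6  # a brighter square in the center of the scene

binary_frames = rng.random((k_frames, h, w)) < p_fire

# Off-chip summation turns the 1-bit stream into a multi-bit image.
intensity = binary_frames.sum(axis=0)
```

Summing 256 one-bit frames gives an 8-bit-equivalent image; shortening the summation window trades gray-level depth for the high effective frame rates used in particle tracking.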

  15. Hyperspectral Imaging of Forest Resources: The Malaysian Experience

    NASA Astrophysics Data System (ADS)

    Mohd Hasmadi, I.; Kamaruzaman, J.

    2008-08-01

    Remote sensing using satellite and aircraft images is a well-established technology; the application of hyperspectral imaging, however, is relatively new to Malaysian forestry. Hyperspectral data capture narrow spectral bands across a wide range of wavelengths. Airborne sensors typically offer greatly enhanced spatial and spectral resolution over their satellite counterparts and allow the experimental design to be closely controlled during image acquisition. The first study using hyperspectral imaging for forest inventory in Malaysia was conducted by Professor Hj. Kamaruzaman of the Faculty of Forestry, Universiti Putra Malaysia, in 2002, using the AISA sensor manufactured by Specim Ltd., Finland. The main objective has been to develop methods directly suited to practical tropical forestry applications at a high level of accuracy. Forest inventory and tree classification, including the development of single spectral signatures, have been the primary interests in current practice. Experience from these studies shows that retrieval of timber volume and tree discrimination using this system performs well and is in some respects better than other remote sensing methods. This article reviews the research and application of airborne hyperspectral remote sensing for forest survey and assessment in Malaysia.

  16. Remote sensing of deep hermatypic coral reefs in Puerto Rico and the U.S. Virgin Islands using the Seabed autonomous underwater vehicle

    NASA Astrophysics Data System (ADS)

    Armstrong, Roy A.; Singh, Hanumant

    2006-09-01

    Optical imaging of coral reefs and other benthic communities present below one attenuation depth, the limit of effective airborne and satellite remote sensing, requires the use of in situ platforms such as autonomous underwater vehicles (AUVs). The Seabed AUV, which was designed for high-resolution underwater optical and acoustic imaging, was used to characterize several deep insular shelf reefs of Puerto Rico and the US Virgin Islands using digital imagery. The digital photo transects obtained by the Seabed AUV provided quantitative data on living coral, sponge, gorgonian, and macroalgal cover as well as coral species richness and diversity. Rugosity, an index of structural complexity, was derived from the pencil-beam acoustic data. The AUV benthic assessments could provide the required information for selecting unique areas of high coral cover, biodiversity and structural complexity for habitat protection and ecosystem-based management. Data from Seabed sensors and related imaging technologies are being used to conduct multi-beam sonar surveys, 3-D image reconstruction from a single camera, photo mosaicking, image based navigation, and multi-sensor fusion of acoustic and optical data.
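
Rugosity, as used above, is the ratio of the contoured (chain) length of the bottom profile to its straight-line planar length. A minimal sketch from a 1-D depth profile, assuming uniform horizontal spacing between acoustic samples:

```python
import math

def rugosity(depths, dx):
    """Chain length over planar length for a 1-D depth profile.

    depths: sequence of depth samples; dx: horizontal spacing between samples.
    """
    chain = sum(math.hypot(dx, depths[i + 1] - depths[i])
                for i in range(len(depths) - 1))
    planar = dx * (len(depths) - 1)
    return chain / planar
```

A perfectly flat bottom gives a rugosity of exactly 1.0, while structurally complex reef yields values above 1, which is why the index serves as a proxy for habitat complexity.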

  17. Inkjet-compatible single-component polydiacetylene precursors for thermochromic paper sensors.

    PubMed

    Yoon, Bora; Shin, Hyora; Kang, Eun-Mi; Cho, Dae Won; Shin, Kayeong; Chung, Hoeil; Lee, Chan Woo; Kim, Jong-Man

    2013-06-12

    Inkjet-printable diacetylene (DA) supramolecules, which can be dispersed in water without using additional surfactants, have been developed. The supramolecules are generated from DA monomers that contain bisurea groups, which are capable of forming hydrogen-bonding networks, and hydrophilic oligoethylene oxide moieties. Because of suitable size distribution and stability characteristics, the single DA component ink can be readily transferred to paper substrates by utilizing a common office inkjet printer. UV irradiation of the DA-printed paper results in generation of blue-colored polydiacetylene (PDA) images, which show reversible thermochromic transitions in specific temperature ranges. Inkjet-printed PDAs, in the format of a two-dimensional (2D) quick response (QR) code on a real parking ticket, serve as a dual anticounterfeiting system that combines easy decoding of the QR code and colorimetric PDA reversibility for validating the authenticity of the tickets. This single-component ink system has great potential for use in paper-based devices, temperature sensors, and anticounterfeiting barcodes.

  18. Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network.

    PubMed

    Yu, Zhibin; Wang, Yubo; Zheng, Bing; Zheng, Haiyong; Wang, Nan; Gu, Zhaorui

    2017-01-01

    Underwater inherent optical properties (IOPs) are fundamental clues in many research fields, such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered the ideal instruments for measuring IOPs, but these methods are inflexible and expensive to deploy. To overcome this problem, we aim to develop a novel measuring method that uses only a single underwater image, with the help of a deep artificial neural network. The power of artificial neural networks has been proven in image processing and computer vision with deep learning technology. However, image-based IOP estimation is a quite different and challenging task. Unlike traditional applications such as image classification or localization, IOP estimation examines the transparency of the water between the camera and the target objects to estimate multiple optical properties simultaneously. In this paper, we propose a novel Depth Aided (DA) deep neural network structure for IOP estimation based on a single RGB image, even a noisy one. The imaging depth information is treated as an aided input to help our model make better decisions.

  19. Report on recent results of the PERCIVAL soft X-ray imager

    NASA Astrophysics Data System (ADS)

    Khromova, A.; Cautero, G.; Giuressi, D.; Menk, R.; Pinaroli, G.; Stebel, L.; Correa, J.; Marras, A.; Wunderer, C. B.; Lange, S.; Tennert, M.; Niemann, M.; Hirsemann, H.; Smoljanin, S.; Reza, S.; Graafsma, H.; Göttlicher, P.; Shevyakov, I.; Supra, J.; Xia, Q.; Zimmer, M.; Guerrini, N.; Marsh, B.; Sedgwick, I.; Nicholls, T.; Turchetta, R.; Pedersen, U.; Tartoni, N.; Hyun, H. J.; Kim, K. S.; Rah, S. Y.; Hoenk, M. E.; Jewell, A. D.; Jones, T. J.; Nikzad, S.

    2016-11-01

    The PERCIVAL (Pixelated Energy Resolving CMOS Imager, Versatile And Large) soft X-ray 2D imaging detector is based on stitched, wafer-scale sensors possessing a thick epi-layer, which together with back-thinning and back-side illumination yields elevated quantum efficiency in the photon energy range of 125-1000 eV. The main application fields of PERCIVAL are foreseen in photon science with FELs and synchrotron radiation. This requires a high dynamic range, up to 10^5 ph @ 250 eV, paired with single-photon sensitivity at high confidence at moderate frame rates in the range of 10-120 Hz. These figures imply the availability of dynamic gain switching on a pixel-by-pixel basis and a highly parallel, low-noise analog and digital readout, which has been realized in the PERCIVAL sensor layout. Different aspects of the detector performance have been assessed using prototype sensors with different pixel and ADC types. This work reports on recent test results for the newest chip prototypes with the improved pixel and ADC architecture. For the target frame rates in the 10-120 Hz range, an average noise floor of 14 e- has been determined, indicating the ability to detect single photons with energies above 250 eV. Owing to the successfully implemented adaptive 3-stage multiple-gain switching, the integrated charge level exceeds 4 · 10^6 e-, or 57,000 X-ray photons at 250 eV per frame at 120 Hz. For all gains the noise level remains below the Poisson limit, also in high-flux conditions. Additionally, a short overview of updates on the upcoming 2 Mpixel (P2M) detector system (expected at the end of 2016) will be reported.
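
The single-photon claim can be sanity-checked with the standard figure of about 3.65 eV per electron-hole pair in silicon (an assumption of this sketch, not a number quoted in the abstract): a 250 eV photon yields roughly 68 signal electrons against a 14 e- noise floor, i.e. a detection at about 5 sigma.

```python
PAIR_ENERGY_EV = 3.65  # assumed mean energy per electron-hole pair in silicon

def photon_significance(photon_ev, read_noise_e):
    """Signal electrons from one absorbed photon, in units of read noise."""
    signal_e = photon_ev / PAIR_ENERGY_EV
    return signal_e, signal_e / read_noise_e

signal, sigma = photon_significance(250.0, 14.0)
```

At roughly 5 sigma, a single 250 eV photon stands well clear of the noise floor, consistent with the high-confidence single-photon sensitivity claimed above.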

  20. NASA Tech Briefs, October 2007

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Topics covered include: Wirelessly Interrogated Position or Displacement Sensors; Ka-Band Radar Terminal Descent Sensor; Metal/Metal Oxide Differential Electrode pH Sensors; Improved Sensing Coils for SQUIDs; Inductive Linear-Position Sensor/Limit-Sensor Units; Hilbert-Curve Fractal Antenna With Radiation-Pattern Diversity; Single-Camera Panoramic-Imaging Systems; Interface Electronic Circuitry for an Electronic Tongue; Inexpensive Clock for Displaying Planetary or Sidereal Time; Efficient Switching Arrangement for (N + 1)/N Redundancy; Lightweight Reflectarray Antenna for 7.115 and 32 GHz; Opto-Electronic Oscillator Using Suppressed Phase Modulation; Alternative Controller for a Fiber-Optic Switch; Strong, Lightweight, Porous Materials; Nanowicks; Lightweight Thermal Protection System for Atmospheric Entry; Rapid and Quiet Drill; Hydrogen Peroxide Concentrator; MMIC Amplifiers for 90 to 130 GHz; Robot Would Climb Steep Terrain; Measuring Dynamic Transfer Functions of Cavitating Pumps; Advanced Resistive Exercise Device; Rapid Engineering of Three-Dimensional, Multicellular Tissues With Polymeric Scaffolds; Resonant Tunneling Spin Pump; Enhancing Spin Filters by Use of Bulk Inversion Asymmetry; Optical Magnetometer Incorporating Photonic Crystals; WGM-Resonator/Tapered-Waveguide White-Light Sensor Optics; Raman-Suppressing Coupling for Optical Parametric Oscillator; CO2-Reduction Primary Cell for Use on Venus; Cold Atom Source Containing Multiple Magneto-Optical Traps; POD Model Reconstruction for Gray-Box Fault Detection; System for Estimating Horizontal Velocity During Descent; Software Framework for Peer Data-Management Services; Autogen Version 2.0; Tracking-Data-Conversion Tool; NASA Enterprise Visual Analysis; Advanced Reference Counting Pointers for Better Performance; C Namelist Facility; and Efficient Mosaicking of Spitzer Space Telescope Images.

  1. Inertial sensor self-calibration in a visually-aided navigation approach for a micro-AUV.

    PubMed

    Bonin-Font, Francisco; Massot-Campos, Miquel; Negre-Carrasco, Pep Lluis; Oliver-Codina, Gabriel; Beltran, Joan P

    2015-01-16

    This paper presents a new solution for underwater observation, image recording, mapping and 3D reconstruction in shallow waters. The platform, designed as a research and testing tool, is based on a small underwater robot equipped with a MEMS-based IMU, two stereo cameras and a pressure sensor. The data given by the sensors are fused, adjusted and corrected in a multiplicative error state Kalman filter (MESKF), which returns a single vector with the pose and twist of the vehicle and the biases of the inertial sensors (the accelerometer and the gyroscope). The inclusion of these biases in the state vector permits their self-calibration and stabilization, improving the estimates of the robot orientation. Experiments in controlled underwater scenarios and in the sea have demonstrated a satisfactory performance and the capacity of the vehicle to operate in real environments and in real time.

  2. Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor

    NASA Astrophysics Data System (ADS)

    Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick

    This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called the "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of an object, we set up a long, straight line of very fine string inside the robot workspace, and then allow the sensor mounted on the robot to measure the intersection point of the string and the projected laser line. The data, collected by changing the robot configuration and measuring the intersection points, are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The resulting calibration method is simple and accurate, and is also suitable for on-site calibration in an industrial environment. The method is implemented on a Hyundai VORG-35 robot to demonstrate its effectiveness.
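
The closed-loop constraint, that all measured intersection points must lie on one straight line, can be scored by fitting a 3-D line and taking the RMS point-to-line residual. A minimal sketch of that scoring step (not the paper's solver):

```python
import numpy as np

def line_fit_rms(points):
    """Fit a 3-D line by PCA and return the RMS distance of points to it.

    points: (N, 3) array of measured intersection points.
    """
    p = np.asarray(points, dtype=float)
    centroid = p.mean(axis=0)
    q = p - centroid
    # The dominant right-singular vector is the best-fit line direction.
    _, _, vt = np.linalg.svd(q, full_matrices=False)
    direction = vt[0]
    # Residual = component of each centered point orthogonal to the line.
    residual = q - np.outer(q @ direction, direction)
    return float(np.sqrt((residual ** 2).sum(axis=1).mean()))
```

With correct kinematic parameters the residual is near zero; a nonzero residual is the error signal that a closed-loop calibration can minimize by adjusting the parameters.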

  3. Imaging mitochondrial flux in single cells with a FRET sensor for pyruvate.

    PubMed

    San Martín, Alejandro; Ceballo, Sebastián; Baeza-Lehnert, Felipe; Lerchundi, Rodrigo; Valdebenito, Rocío; Contreras-Baeza, Yasna; Alegría, Karin; Barros, L Felipe

    2014-01-01

    Mitochondrial flux is currently accessible at low resolution. Here we introduce a genetically-encoded FRET sensor for pyruvate, and methods for quantitative measurement of pyruvate transport, pyruvate production and mitochondrial pyruvate consumption in intact individual cells at high temporal resolution. In HEK293 cells, neurons and astrocytes, mitochondrial pyruvate uptake was saturated at physiological levels, showing that the metabolic rate is determined by intrinsic properties of the organelle and not by substrate availability. The potential of the sensor was further demonstrated in neurons, where mitochondrial flux was found to rise by 300% within seconds of a calcium transient triggered by a short theta burst, while glucose levels remained unaltered. In contrast, astrocytic mitochondria were insensitive to a similar calcium transient elicited by extracellular ATP. We expect the improved resolution provided by the pyruvate sensor will be of practical interest for basic and applied researchers interested in mitochondrial function.

  4. Inertial Sensor Self-Calibration in a Visually-Aided Navigation Approach for a Micro-AUV

    PubMed Central

    Bonin-Font, Francisco; Massot-Campos, Miquel; Negre-Carrasco, Pep Lluis; Oliver-Codina, Gabriel; Beltran, Joan P.

    2015-01-01

    This paper presents a new solution for underwater observation, image recording, mapping and 3D reconstruction in shallow waters. The platform, designed as a research and testing tool, is based on a small underwater robot equipped with a MEMS-based IMU, two stereo cameras and a pressure sensor. The data given by the sensors are fused, adjusted and corrected in a multiplicative error state Kalman filter (MESKF), which returns a single vector with the pose and twist of the vehicle and the biases of the inertial sensors (the accelerometer and the gyroscope). The inclusion of these biases in the state vector permits their self-calibration and stabilization, improving the estimates of the robot orientation. Experiments in controlled underwater scenarios and in the sea have demonstrated a satisfactory performance and the capacity of the vehicle to operate in real environments and in real time. PMID:25602263

  5. Imaging Mitochondrial Flux in Single Cells with a FRET Sensor for Pyruvate

    PubMed Central

    Baeza-Lehnert, Felipe; Lerchundi, Rodrigo; Valdebenito, Rocío; Contreras-Baeza, Yasna; Alegría, Karin; Barros, L. Felipe

    2014-01-01

    Mitochondrial flux is currently accessible at low resolution. Here we introduce a genetically-encoded FRET sensor for pyruvate, and methods for quantitative measurement of pyruvate transport, pyruvate production and mitochondrial pyruvate consumption in intact individual cells at high temporal resolution. In HEK293 cells, neurons and astrocytes, mitochondrial pyruvate uptake was saturated at physiological levels, showing that the metabolic rate is determined by intrinsic properties of the organelle and not by substrate availability. The potential of the sensor was further demonstrated in neurons, where mitochondrial flux was found to rise by 300% within seconds of a calcium transient triggered by a short theta burst, while glucose levels remained unaltered. In contrast, astrocytic mitochondria were insensitive to a similar calcium transient elicited by extracellular ATP. We expect the improved resolution provided by the pyruvate sensor will be of practical interest for basic and applied researchers interested in mitochondrial function. PMID:24465702

  6. Evaluation and comparison of the IRS-P6 and the landsat sensors

    USGS Publications Warehouse

    Chander, G.; Coan, M.J.; Scaramuzza, P.L.

    2008-01-01

    The Indian Remote Sensing Satellite (IRS-P6), also called ResourceSat-1, was launched in a polar sun-synchronous orbit on October 17, 2003. It carries three sensors: the high-resolution Linear Imaging Self-Scanner (LISS-IV), the medium-resolution Linear Imaging Self-Scanner (LISS-III), and the Advanced Wide-Field Sensor (AWiFS). These three sensors provide images of different resolutions and coverage. To understand the absolute radiometric calibration accuracy of the IRS-P6 AWiFS and LISS-III sensors, image pairs from these sensors were compared to images from the Landsat-5 Thematic Mapper (TM) and Landsat-7 Enhanced TM Plus (ETM+) sensors. The approach involves calibration of surface observations based on image statistics from areas observed nearly simultaneously by the two sensors. This paper also evaluated the viability of data from these next-generation imagers for use in creating three National Land Cover Dataset (NLCD) products: land cover, percent tree canopy, and percent impervious surface. Individual products were consistent with previous studies but had slightly lower overall accuracies compared to data from the Landsat sensors.

  7. Integral imaging with Fourier-plane recording

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, M.; Barreiro, J. C.; Llavador, A.; Sánchez-Ortiga, E.; Sola-Pikabea, J.; Scrofani, G.; Saavedra, G.

    2017-05-01

    Integral imaging is well known for its capability of recording both the spatial and the angular information of three-dimensional (3D) scenes. Based on this idea, the plenoptic concept has been developed over the past two decades, leading to a new camera design capable of capturing the spatial-angular information with a single sensor and a single shot. However, the classical plenoptic design presents two drawbacks: one is the oblique recording made by external microlenses; the other is the loss of information due to diffraction effects. In this contribution we report a change of paradigm and propose the combination of a telecentric architecture and Fourier-plane recording. This new capture geometry permits substantial improvements in resolution, depth of field, and computation time.

  8. Performance test and image correction of CMOS image sensor in radiation environment

    NASA Astrophysics Data System (ADS)

    Wang, Congzheng; Hu, Song; Gao, Chunming; Feng, Chang

    2016-09-01

    CMOS image sensors rival CCDs in strong radiation resistance as well as simple drive signals, so they are widely applied in high-energy radiation environments such as space optical imaging and video monitoring of nuclear power equipment. However, the silicon material of CMOS image sensors is subject to ionizing dose effects under high-energy rays, and indicators such as signal-to-noise ratio (SNR), non-uniformity (NU), and bad points (BP) are degraded by the radiation. The radiation environment for the test experiments was generated by a 60Co γ-ray source, and a camera module based on the CMV2000 image sensor from CMOSIS Inc. was chosen as the research object. The rays were applied at a dose rate of 20 krad/h. In the test experiments, the output signals of the image sensor pixels were measured at different total doses. Data analysis showed that with accumulating irradiation dose, the SNR of the image sensor decreased, the NU increased, and the number of bad points grew. Correction of these indicators is necessary, as they are the main factors affecting image quality. An image processing algorithm was applied to the experimental data, combining a local threshold method with NU correction based on the non-local means (NLM) method. The processing results showed that image correction can effectively suppress bad points, improve the SNR, and reduce the NU.
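
The local-threshold bad-point step described above can be sketched as follows: flag any pixel whose deviation from its 3x3 neighborhood median exceeds a threshold, then replace it with that median. This is a simplified stand-in for the paper's combined method, not its actual algorithm:

```python
import numpy as np

def correct_bad_points(img, thresh):
    """Replace pixels that deviate from their 3x3 neighborhood median."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 3, x:x + 3]  # 3x3 neighborhood incl. center
            med = np.median(window)
            if abs(img[y, x] - med) > thresh:
                out[y, x] = med
    return out
```

A radiation-induced hot pixel stands far above its neighbors, so the median test isolates it while leaving uniform regions untouched; the NU correction would then operate on the repaired frame.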

  9. High speed three-dimensional laser scanner with real time processing

    NASA Technical Reports Server (NTRS)

    Lavelle, Joseph P. (Inventor); Schuet, Stefan R. (Inventor)

    2008-01-01

    A laser scanner computes a range from a laser line to an imaging sensor. The laser line illuminates a detail within an area covered by the imaging sensor, the area having a first dimension and a second dimension. The detail has a dimension perpendicular to the area. A traverse moves a laser emitter, coupled to the imaging sensor, at a height above the area. The laser emitter is positioned at an offset along the scan direction with respect to the imaging sensor and is oriented at a depression angle with respect to the area. The laser emitter projects the laser line along the second dimension of the area at a position where an image frame is acquired. The imaging sensor is sensitive to laser reflections from the detail produced by the laser line, and images those reflections to generate the image frame. A computer having a pipeline structure is connected to the imaging sensor for reception of the image frame and for computing the range to the detail using the height, depression angle, and/or offset. The computer displays the range to the area and the detail thereon covered by the image frame.
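
One triangulation geometry consistent with this description (the patent's exact formula is not given here, so this is an assumed model): with the emitter at height H aiming down at depression angle theta, the beam crosses height z at horizontal distance d = (H - z) / tan(theta) from the emitter, so an observed laser-line position d yields z = H - d * tan(theta). A minimal sketch under those assumptions:

```python
import math

def detail_height(h_emitter, depression_rad, d_observed):
    """Height of the illuminated detail above the reference surface.

    h_emitter: laser emitter height above the reference surface;
    depression_rad: beam depression angle below horizontal, in radians;
    d_observed: horizontal distance from emitter to the imaged laser line.
    """
    return h_emitter - d_observed * math.tan(depression_rad)
```

On a flat surface the line appears at d = H / tan(theta), giving z = 0; any detail with height shifts the line toward the emitter, and the shift encodes its height.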

  10. CMOS Active-Pixel Image Sensor With Intensity-Driven Readout

    NASA Technical Reports Server (NTRS)

    Langenbacher, Harry T.; Fossum, Eric R.; Kemeny, Sabrina

    1996-01-01

    Proposed complementary metal oxide/semiconductor (CMOS) integrated-circuit image sensor automatically provides readouts from pixels in order of decreasing illumination intensity. Sensor operated in integration mode. Particularly useful in number of image-sensing tasks, including diffractive laser range-finding, three-dimensional imaging, event-driven readout of sparse sensor arrays, and star tracking.

  11. In Vivo Deep Tissue Fluorescence and Magnetic Imaging Employing Hybrid Nanostructures.

    PubMed

    Ortgies, Dirk H; de la Cueva, Leonor; Del Rosal, Blanca; Sanz-Rodríguez, Francisco; Fernández, Nuria; Iglesias-de la Cruz, M Carmen; Salas, Gorka; Cabrera, David; Teran, Francisco J; Jaque, Daniel; Martín Rodríguez, Emma

    2016-01-20

    Breakthroughs in nanotechnology have made it possible to integrate different nanoparticles in one single hybrid nanostructure (HNS), constituting multifunctional nanosized sensors, carriers, and probes with great potential in the life sciences. In addition, such nanostructures could also offer therapeutic capabilities to achieve a wider variety of multifunctionalities. In this work, the encapsulation of both magnetic and infrared emitting nanoparticles into a polymeric matrix leads to a magnetic-fluorescent HNS with multimodal magnetic-fluorescent imaging abilities. The magnetic-fluorescent HNS are capable of simultaneous magnetic resonance imaging and deep tissue infrared fluorescence imaging, overcoming the tissue penetration limits of classical visible-light based optical imaging as reported here in living mice. Additionally, their applicability for magnetic heating in potential hyperthermia treatments is assessed.

  12. Flash LIDAR Systems for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Dissly, Richard; Weinberg, J.; Weimer, C.; Craig, R.; Earhart, P.; Miller, K.

    2009-01-01

    Ball Aerospace offers a mature, highly capable 3D flash-imaging LIDAR system for planetary exploration. Multi-mission applications include orbital, standoff, and surface terrain mapping; long-distance and rapid close-in ranging; descent and surface navigation; and rendezvous and docking. Our flash LIDAR is an optical, time-of-flight, topographic imaging system, leveraging innovations in focal plane arrays, real-time processing in the readout integrated circuit, and compact, efficient pulsed laser sources. Due to its modular design, it can be easily tailored to satisfy a wide range of mission requirements. Flash LIDAR offers several distinct advantages over traditional scanning systems. The entire scene within the sensor's field of view is imaged with a single laser flash. This directly produces an image with each pixel already correlated in time, making the sensor resistant to relative motion of the target. Additionally, images may be produced at rates much faster than are possible with a scanning system. And because the system captures a new complete image with each flash, optical glint and clutter are easily filtered and discarded. This allows for imaging under any lighting condition and makes the system virtually insensitive to stray light. Finally, because there are no moving parts, our flash LIDAR system is highly reliable and has a long life expectancy. As an industry leader in laser active sensor system development, Ball Aerospace has been working for more than four years to mature flash LIDAR systems for space applications, and is now under contract to provide the Vision Navigation System for NASA's Orion spacecraft. Our system uses heritage optics and electronics from our star tracker products, and space-qualified lasers similar to those used in our CALIPSO LIDAR, which has been in continuous operation since 2006, providing more than 1.3 billion laser pulses to date.
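
The time-of-flight principle behind flash LIDAR reduces, per pixel, to range = c * t / 2, since the pulse travels out and back. A minimal sketch of that per-pixel conversion (illustrative only):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s):
    """Range from a round-trip pulse time: the pulse covers twice the range."""
    return C * round_trip_s / 2.0

# A 1 microsecond round trip corresponds to roughly 150 m of range.
r = tof_range_m(1e-6)
```

In a flash system this conversion is applied to every pixel of the focal plane array from a single laser pulse, which is what yields a full range image per flash.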

  13. Vector sensor for scanning SQUID microscopy

    NASA Astrophysics Data System (ADS)

    Dang, Vu The; Toji, Masaki; Thanh Huy, Ho; Miyajima, Shigeyuki; Shishido, Hiroaki; Hidaka, Mutsuo; Hayashi, Masahiko; Ishida, Takekazu

    2017-07-01

We plan to build a novel 3-dimensional (3D) scanning SQUID microscope with high sensitivity and high spatial resolution. In the system, a vector sensor consisting of three SQUID sensors and three pick-up coils is realized on a single chip. The three pick-up coils are oriented orthogonally to each other to measure the X, Y, and Z components of the magnetic field vector. We fabricated SQUID chips with either one uniaxial pick-up coil or three vector pick-up coils and carried out fundamental measurements to reveal their basic characteristics. The Josephson junctions (JJs) of the sensors are designed to have a critical current density Jc of 320 A/cm², and the critical current Ic becomes 12.5 μA for the 2.2 μm × 2.2 μm JJ. We carefully positioned the three pick-up coils so that the centers of all three X, Y, and Z coils lie at the same height; this is done by arranging them along a single line parallel to the sample surface. With the aid of the multilayer technology of Nb-based fabrication, we attempted to reduce the inner diameter of the pick-up coils to enhance both sensitivity and spatial resolution. The spatial resolution of a local magnetic field image is further improved by employing an XYZ piezo-driven scanner to control the positions of the pick-up coils. The fundamental characteristics of our SQUID sensors confirmed their proper operation and showed good agreement with our design parameters.

  14. Generating Vegetation Leaf Area Index Earth System Data Record from Multiple Sensors. Part 1; Theory

    NASA Technical Reports Server (NTRS)

    Ganguly, Sangram; Schull, Mitchell A.; Samanta, Arindam; Shabanov, Nikolay V.; Milesi, Cristina; Nemani, Ramakrishna R.; Knyazikhin, Yuri; Myneni, Ranga B.

    2008-01-01

The generation of multi-decade-long Earth System Data Records (ESDRs) of Leaf Area Index (LAI) and Fraction of Photosynthetically Active Radiation absorbed by vegetation (FPAR) from remote sensing measurements of multiple sensors is key to monitoring long-term changes in vegetation due to natural and anthropogenic influences. Challenges in developing such ESDRs include problems in remote sensing science (modeling of variability in global vegetation, scaling, atmospheric correction) and sensor hardware (differences in spatial resolution, spectral bands, calibration, and information content). In this paper, we develop a physically based approach for deriving LAI and FPAR products from Advanced Very High Resolution Radiometer (AVHRR) data that are of comparable quality to the Moderate Resolution Imaging Spectroradiometer (MODIS) LAI and FPAR products, thus realizing the objective of producing a long (multi-decadal) time series of these products. The approach is based on the radiative transfer theory of canopy spectral invariants, which facilitates parameterization of the canopy spectral bidirectional reflectance factor (BRF). The methodology permits decoupling of the structural and radiometric components and obeys the energy conservation law. The approach is applicable to any optical sensor; however, it requires selection of sensor-specific values of configurable parameters, namely, the single scattering albedo and data uncertainty. According to the theory of spectral invariants, the single scattering albedo is a function of the spatial scale, and thus accounts for the variation in BRF with sensor spatial resolution. Likewise, the single scattering albedo accounts for the variation in spectral BRF with sensor bandwidths. The second adjustable parameter is data uncertainty, which accounts for the varying information content of the remote sensing measurements, i.e., Normalized Difference Vegetation Index (NDVI, low information content) vs. spectral BRF (higher information content). Implementation of this approach indicates good consistency in LAI values retrieved from NDVI (AVHRR mode) and spectral BRF (MODIS mode). Specific details of the implementation and evaluation of the derived products are presented in the second part of this two-paper series.
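As a minimal illustration of the two input types compared above: NDVI collapses two spectral bands into a single index, which is why it carries less information than the full spectral BRF. The band reflectance values below are invented for the example.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    red, nir = np.asarray(red), np.asarray(nir)
    return (nir - red) / (nir + red)

red = np.array([0.05, 0.10])   # red-band reflectances (invented)
nir = np.array([0.45, 0.30])   # near-infrared reflectances (invented)
vals = ndvi(red, nir)          # denser canopy -> higher NDVI
```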

  15. SPADnet: a fully digital, scalable, and networked photonic component for time-of-flight PET applications

    NASA Astrophysics Data System (ADS)

    Bruschini, Claudio; Charbon, Edoardo; Veerappan, Chockalingam; Braga, Leo H. C.; Massari, Nicola; Perenzoni, Matteo; Gasparini, Leonardo; Stoppa, David; Walker, Richard; Erdogan, Ahmet; Henderson, Robert K.; East, Steve; Grant, Lindsay; Játékos, Balázs; Ujhelyi, Ferenc; Erdei, Gábor; Lörincz, Emöke; André, Luc; Maingault, Laurent; Jacolin, David; Verger, L.; Gros d'Aillon, Eric; Major, Peter; Papp, Zoltan; Nemeth, Gabor

    2014-05-01

The SPADnet FP7 European project is aimed at a new generation of fully digital, scalable and networked photonic components to enable large-area image sensors, primarily targeting gamma-ray and coincidence detection in Time-of-Flight Positron Emission Tomography (PET). SPADnet relies on standard CMOS technology, therefore allowing for MRI compatibility. SPADnet innovates in several areas of PET systems, from optical coupling to single-photon sensor architectures, from intelligent ring networks to reconstruction algorithms. It is built around a natively digital, intelligent SPAD (Single-Photon Avalanche Diode)-based sensor device which comprises an array of 8×16 pixels, each composed of 4 mini-SiPMs with in situ time-to-digital conversion, a multi-ring network to filter, carry, and process data produced by the sensors at 2 Gbps, and a 130 nm CMOS process enabling mass production of photonic modules that are optically interfaced to scintillator crystals. A few tens of sensor devices are tightly abutted on a single PCB to form a so-called sensor tile, thanks to TSV (Through Silicon Via) connections to their backside (replacing conventional wire bonding). The sensor tile is in turn interfaced to an FPGA-based PCB on its back. The resulting photonic module acts as an autonomous sensing and computing unit, individually detecting gamma photons as well as thermal and Compton events. It determines in real time basic information for each scintillation event, such as exact time of arrival, position and energy, and communicates it to its peers in the field of view. Coincidence detection therefore occurs directly in the ring itself, in a deferred and distributed manner to ensure scalability. The selected true coincidence events are then collected by a snooper module, from which they are transferred to an external reconstruction computer using Gigabit Ethernet.
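The coincidence-detection step described above can be sketched in miniature: two modules report scintillation timestamps, and events whose arrival times differ by less than a coincidence window are paired. The window width and timestamp values are illustrative, not SPADnet parameters.

```python
def find_coincidences(ts_a, ts_b, window_ns=4.0):
    """Return index pairs (i, j) with |ts_a[i] - ts_b[j]| < window_ns.

    Both timestamp lists are assumed sorted in ascending order."""
    pairs = []
    j = 0
    for i, ta in enumerate(ts_a):
        while j < len(ts_b) and ts_b[j] < ta - window_ns:
            j += 1                       # too early to ever match again
        k = j
        while k < len(ts_b) and ts_b[k] <= ta + window_ns:
            if abs(ts_b[k] - ta) < window_ns:
                pairs.append((i, k))
            k += 1
    return pairs

# Two modules report event times (ns); only the first pair coincides.
pairs = find_coincidences([10.0, 55.0], [12.5, 80.0])
```

In the distributed scheme above, each module would run this matching against timestamps received from its peers rather than in a central unit.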

  16. Guidance Of A Mobile Robot Using An Omnidirectional Vision Navigation System

    NASA Astrophysics Data System (ADS)

    Oh, Sung J.; Hall, Ernest L.

    1987-01-01

    Navigation and visual guidance are key topics in the design of a mobile robot. Omnidirectional vision using a very wide angle or fisheye lens provides a hemispherical view at a single instant that permits target location without mechanical scanning. The inherent image distortion with this view and the numerical errors accumulated from vision components can be corrected to provide accurate position determination for navigation and path control. The purpose of this paper is to present the experimental results and analyses of the imaging characteristics of the omnivision system including the design of robot-oriented experiments and the calibration of raw results. Errors less than one picture element on each axis were observed by testing the accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor. Similar results were obtained for four different locations using corrected results of the linearity test between zenith angle and image location. Angular error of less than one degree and radial error of less than one Y picture element were observed at moderate relative speed. The significance of this work is that the experimental information and the test of coordinated operation of the equipment provide a greater understanding of the dynamic omnivision system characteristics, as well as insight into the evaluation and improvement of the prototype sensor for a mobile robot. Also, the calibration of the sensor is important, since the results provide a cornerstone for future developments. This sensor system is currently being developed for a robot lawn mower.

  17. Retrieving Land Surface Temperature and Emissivity from Multispectral and Hyperspectral Thermal Infrared Instruments

    NASA Astrophysics Data System (ADS)

    Hook, Simon; Hulley, Glynn; Nicholson, Kerry

    2017-04-01

Land Surface Temperature and Emissivity (LST&E) data are critical variables for studying a variety of Earth surface processes and surface-atmosphere interactions such as evapotranspiration, surface energy balance and water vapor retrievals. LST&E have been identified as an important Earth System Data Record (ESDR) by NASA and many other international organizations. Accurate knowledge of LST&E is a key requirement for many energy balance models to estimate important surface biophysical variables such as evapotranspiration and plant-available soil moisture. LST&E products are currently generated from sensors in low Earth orbit (LEO), such as the NASA Moderate Resolution Imaging Spectroradiometer (MODIS) instruments on the Terra and Aqua satellites, from sensors in geostationary Earth orbit (GEO), such as the Geostationary Operational Environmental Satellites (GOES), and from airborne sensors such as the Hyperspectral Thermal Emission Spectrometer (HyTES). LST&E products are generated with varying accuracies depending on the input data, including ancillary data such as atmospheric water vapor, as well as algorithmic approaches. NASA has identified the need to develop long-term, consistent, and calibrated data and products that are valid across multiple missions and satellite sensors. We will discuss the different approaches that can be used to retrieve surface temperature and emissivity from multispectral and hyperspectral thermal infrared sensors, using examples from a variety of sensors such as those mentioned, and planned new sensors like the ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) and the Hyperspectral Infrared Imager (HyspIRI). We will also discuss a project underway at NASA to develop a single unified product from some of the individual sensor products and assess the errors associated with the product.
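One widely used family of multispectral retrieval approaches is the split-window form, which estimates LST from two thermal brightness temperatures plus an emissivity term. The sketch below shows only the generic shape of such a regression; the coefficients are illustrative placeholders, not those of any operational product discussed above.

```python
def split_window_lst(t11, t12, emissivity, a=1.0, b=2.0, c=50.0):
    """Generic split-window form: LST from ~11 um and ~12 um brightness
    temperatures (K). Coefficients a, b, c are illustrative placeholders."""
    dt = t11 - t12                      # differential atmospheric absorption
    return t11 + a * dt + b * dt ** 2 + c * (1.0 - emissivity)

lst = split_window_lst(t11=295.0, t12=293.5, emissivity=0.97)  # Kelvin
```

Operational algorithms fit such coefficients against radiative transfer simulations and often make them functions of view angle and water vapor.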

  18. Micro-Hall devices for magnetic, electric and photo-detection

    NASA Astrophysics Data System (ADS)

    Gilbertson, A.; Sadeghi, H.; Panchal, V.; Kazakova, O.; Lambert, C. J.; Solin, S. A.; Cohen, L. F.

Multifunctional mesoscopic sensors capable of detecting local magnetic (B), electric (E), and optical fields can greatly facilitate image capture in nano-arrays that address a multitude of disciplines. The use of micro-Hall devices as B-field sensors and, more recently, as E-field sensors is well established. Here we report the real-space voltage response of InSb/AlInSb micro-Hall devices not only to local E- and B-fields but also to photo-excitation, using scanning probe microscopy. We show that the ultrafast generation of localised photocarriers results in conductance perturbations analogous to those produced by local E-fields. Our experimental results are in good agreement with tight-binding transport calculations in the diffusive regime. At room temperature, samples exhibit a magnetic sensitivity of >500 nT/√Hz, an optical noise equivalent power of >20 pW/√Hz (λ = 635 nm) comparable to commercial photoconductive detectors, and a charge sensitivity of >0.04 e/√Hz comparable to that of single-electron transistors. Work done while on sabbatical from Washington University. Co-founder of PixelEXX, a start-up whose focus is imaging nano-arrays.

  19. Remote sensing of aerosol plumes: a semianalytical model

    NASA Astrophysics Data System (ADS)

    Alakian, Alexandre; Marion, Rodolphe; Briottet, Xavier

    2008-04-01

    A semianalytical model, named APOM (aerosol plume optical model) and predicting the radiative effects of aerosol plumes in the spectral range [0.4,2.5 μm], is presented in the case of nadir viewing. It is devoted to the analysis of plumes arising from single strong emission events (high optical depths) such as fires or industrial discharges. The scene is represented by a standard atmosphere (molecules and natural aerosols) on which a plume layer is added at the bottom. The estimated at-sensor reflectance depends on the atmosphere without plume, the solar zenith angle, the plume optical properties (optical depth, single-scattering albedo, and asymmetry parameter), the ground reflectance, and the wavelength. Its mathematical expression as well as its numerical coefficients are derived from MODTRAN4 radiative transfer simulations. The DISORT option is used with 16 fluxes to provide a sufficiently accurate calculation of multiple scattering effects that are important for dense smokes. Model accuracy is assessed by using a set of simulations performed in the case of biomass burning and industrial plumes. APOM proves to be accurate and robust for solar zenith angles between 0° and 60° whatever the sensor altitude, the standard atmosphere, for plume phase functions defined from urban and rural models, and for plume locations that extend from the ground to a height below 3 km. The modeling errors in the at-sensor reflectance are on average below 0.002. They can reach values of 0.01 but correspond to low relative errors then (below 3% on average). This model can be used for forward modeling (quick simulations of multi/hyperspectral images and help in sensor design) as well as for the retrieval of the plume optical properties from remotely sensed images.

  20. Remote sensing of aerosol plumes: a semianalytical model.

    PubMed

    Alakian, Alexandre; Marion, Rodolphe; Briottet, Xavier

    2008-04-10

    A semianalytical model, named APOM (aerosol plume optical model) and predicting the radiative effects of aerosol plumes in the spectral range [0.4,2.5 microm], is presented in the case of nadir viewing. It is devoted to the analysis of plumes arising from single strong emission events (high optical depths) such as fires or industrial discharges. The scene is represented by a standard atmosphere (molecules and natural aerosols) on which a plume layer is added at the bottom. The estimated at-sensor reflectance depends on the atmosphere without plume, the solar zenith angle, the plume optical properties (optical depth, single-scattering albedo, and asymmetry parameter), the ground reflectance, and the wavelength. Its mathematical expression as well as its numerical coefficients are derived from MODTRAN4 radiative transfer simulations. The DISORT option is used with 16 fluxes to provide a sufficiently accurate calculation of multiple scattering effects that are important for dense smokes. Model accuracy is assessed by using a set of simulations performed in the case of biomass burning and industrial plumes. APOM proves to be accurate and robust for solar zenith angles between 0 degrees and 60 degrees whatever the sensor altitude, the standard atmosphere, for plume phase functions defined from urban and rural models, and for plume locations that extend from the ground to a height below 3 km. The modeling errors in the at-sensor reflectance are on average below 0.002. They can reach values of 0.01 but correspond to low relative errors then (below 3% on average). This model can be used for forward modeling (quick simulations of multi/hyperspectral images and help in sensor design) as well as for the retrieval of the plume optical properties from remotely sensed images.

  1. Recent Improvements in Retrieving Near-Surface Air Temperature and Humidity Using Microwave Remote Sensing

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent

    2010-01-01

Detailed studies of the energy and water cycles require accurate estimation of the turbulent fluxes of moisture and heat across the atmosphere-ocean interface at regional to basin scales. Providing estimates of these latent and sensible heat fluxes over the global ocean necessitates the use of satellite or reanalysis-based estimates of near-surface variables. Recent studies have shown that errors in the surface (10 meter) estimates of humidity and temperature are currently the largest sources of uncertainty in the production of turbulent fluxes from satellite observations. Therefore, emphasis has been placed on reducing the systematic errors in the retrieval of these parameters from microwave radiometers. This study discusses recent improvements in the retrieval of air temperature and humidity through improvements in the choice of algorithms (linear vs. nonlinear) and the choice of microwave sensors. Particular focus is placed on improvements using a neural network approach with a single sensor (Special Sensor Microwave/Imager) and the use of combined sensors from the NASA AQUA satellite platform. The latter algorithm utilizes the unique sampling available on AQUA from the Advanced Microwave Scanning Radiometer (AMSR-E) and the Advanced Microwave Sounding Unit (AMSU-A). Current estimates of uncertainty in the near-surface humidity and temperature from single and multi-sensor approaches are discussed and used to estimate errors in the turbulent fluxes.

  2. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo.

    PubMed

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-03-02

    Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
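The core of multi-spectral photometric stereo can be sketched compactly: under a Lambertian shading assumption with one effective light direction per color channel (a simplification of the tangled illumination/albedo/camera-response model the abstract describes), the per-pixel RGB intensities satisfy a 3×3 linear system whose solution, normalized, is the surface normal. The light directions below are invented for the example.

```python
import numpy as np

def normal_from_rgb(I, L):
    """I: (3,) per-channel intensities; L: (3, 3) rows are light directions."""
    n = np.linalg.solve(L, I)        # invert the Lambertian shading model
    return n / np.linalg.norm(n)

L = np.array([[0.0, 0.0, 1.0],       # assumed light direction per channel
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7]])
true_n = np.array([0.0, 0.0, 1.0])
I = L @ true_n                       # synthetic, noise-free "RGB" intensities
n_hat = normal_from_rgb(I, L)        # recovers the surface normal
```

In practice the system is ambiguous without extra information, which is why the paper seeds the optimization with a CNN-predicted initial normal.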

  3. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo

    PubMed Central

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-01-01

    Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods. PMID:29498703

  4. Smart sensors II; Proceedings of the Seminar, San Diego, CA, July 31, August 1, 1980

    NASA Astrophysics Data System (ADS)

    Barbe, D. F.

    1980-01-01

    Topics discussed include technology for smart sensors, smart sensors for tracking and surveillance, and techniques and algorithms for smart sensors. Papers are presented on the application of very large scale integrated circuits to smart sensors, imaging charge-coupled devices for deep-space surveillance, ultra-precise star tracking using charge coupled devices, and automatic target identification of blurred images with super-resolution features. Attention is also given to smart sensors for terminal homing, algorithms for estimating image position, and the computational efficiency of multiple image registration algorithms.

  5. Multispectral interference filter arrays with compensation of angular dependence or extended spectral range.

    PubMed

    Frey, Laurent; Masarotto, Lilian; Armand, Marilyn; Charles, Marie-Lyne; Lartigue, Olivier

    2015-05-04

    Thin film Fabry-Perot filter arrays with high selectivity can be realized with a single patterning step, generating a spatial modulation of the effective refractive index in the optical cavity. In this paper, we investigate the ability of this technology to address two applications in the field of image sensors. First, the spectral tuning may be used to compensate the blue-shift of the filters in oblique incidence, provided the filter array is located in an image plane of an optical system with higher field of view than aperture angle. The technique is analyzed for various types of filters and experimental evidence is shown with copper-dielectric infrared filters. Then, we propose a design of a multispectral filter array with an extended spectral range spanning the visible and near-infrared range, using a single set of materials and realizable on a single substrate.

  6. Supercontinuum as a light source for miniaturized endoscopes.

    PubMed

    Lu, M K; Lin, H Y; Hsieh, C C; Kao, F J

    2016-09-01

In this work, we have successfully implemented supercontinuum-based illumination through single-fiber coupling. The integration of single-fiber illumination with a miniature CMOS sensor forms a very slim and powerful camera module for endoscopic imaging. A set of tests and in vivo animal experiments are conducted accordingly to characterize the corresponding illuminance, spectral profile, intensity distribution, and image quality. The key illumination parameters of the supercontinuum, including color rendering index (CRI: 72%~97%) and correlated color temperature (CCT: 3,100K~5,200K), are modified with external filters and compared with those from an LED light source (CRI~76% & CCT~6,500K). The very high spatial coherence of the supercontinuum allows high luminosity conduction through a single multimode fiber (core size ~400 μm), whose distal end tip is fitted with a diffusion tip to broaden the solid angle of illumination (from less than 10° to more than 80°).

  7. CMOS image sensor-based implantable glucose sensor using glucose-responsive fluorescent hydrogel.

    PubMed

    Tokuda, Takashi; Takahashi, Masayuki; Uejima, Kazuhiro; Masuda, Keita; Kawamura, Toshikazu; Ohta, Yasumi; Motoyama, Mayumi; Noda, Toshihiko; Sasagawa, Kiyotaka; Okitsu, Teru; Takeuchi, Shoji; Ohta, Jun

    2014-11-01

    A CMOS image sensor-based implantable glucose sensor based on an optical-sensing scheme is proposed and experimentally verified. A glucose-responsive fluorescent hydrogel is used as the mediator in the measurement scheme. The wired implantable glucose sensor was realized by integrating a CMOS image sensor, hydrogel, UV light emitting diodes, and an optical filter on a flexible polyimide substrate. Feasibility of the glucose sensor was verified by both in vitro and in vivo experiments.

  8. Pesticide residue quantification analysis by hyperspectral imaging sensors

    NASA Astrophysics Data System (ADS)

    Liao, Yuan-Hsun; Lo, Wei-Sheng; Guo, Horng-Yuh; Kao, Ching-Hua; Chou, Tau-Meu; Chen, Junne-Jih; Wen, Chia-Hsien; Lin, Chinsu; Chen, Hsian-Min; Ouyang, Yen-Chieh; Wu, Chao-Cheng; Chen, Shih-Yu; Chang, Chein-I.

    2015-05-01

Pesticide residue detection in agricultural crops is a challenging issue, and it is even more difficult to quantify pesticide residues in agricultural produce and fruits. This paper conducts a series of baseline experiments particularly designed for three specific pesticides commonly used in Taiwan. The materials used for the experiments are single leaves of vegetable produce contaminated by various concentrations of pesticides. Two sensors are used to collect data. One is Fourier Transform Infrared (FTIR) spectroscopy. The other is a hyperspectral sensor, the Geophysical and Environmental Research (GER) 2600 spectroradiometer, which is a battery-operated, field-portable spectroradiometer with full real-time data acquisition from 350 nm to 2500 nm. In order to quantify data with different levels of pesticide residue concentration, several measures for spectral discrimination are developed. More specifically, new measures for calculating relative power between the two sensors are designed to evaluate the effectiveness of each sensor in quantifying the used pesticide residues. The experimental results show that the GER is a better sensor than FTIR for pesticide residue quantification.
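The paper develops its own discrimination measures; as a generic illustration of what a spectral discrimination measure does, the spectral angle between two measured spectra is a common choice: identical shapes give zero angle, and contamination that alters the spectral shape increases it. The spectra below are invented.

```python
import numpy as np

def spectral_angle(s1, s2):
    """Angle (radians) between two spectra; smaller means more similar."""
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

clean   = np.array([0.20, 0.40, 0.60, 0.40])   # invented leaf spectrum
treated = np.array([0.20, 0.35, 0.65, 0.45])   # invented contaminated spectrum
angle = spectral_angle(clean, treated)          # small but nonzero
```

A quantification study would then relate such a measure, computed per concentration level, to the applied pesticide amount.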

  9. Computational imaging through a fiber-optic bundle

    NASA Astrophysics Data System (ADS)

    Lodhi, Muhammad A.; Dumas, John Paul; Pierce, Mark C.; Bajwa, Waheed U.

    2017-05-01

Compressive sensing (CS) has proven to be a viable method for reconstructing high-resolution signals using low-resolution measurements. Integrating CS principles into an optical system allows for higher-resolution imaging using lower-resolution sensor arrays. In contrast to prior works on CS-based imaging, our focus in this paper is on imaging through fiber-optic bundles, in which manufacturing constraints limit individual fiber spacing to around 2 μm. This limitation essentially renders fiber-optic bundles as low-resolution sensors with relatively few resolvable points per unit area. These fiber bundles are often used in minimally invasive medical instruments for viewing tissue at macro and microscopic levels. While the compact nature and flexibility of fiber bundles allow for excellent tissue access in vivo, imaging through fiber bundles does not provide the fine details of tissue features that are demanded in some medical situations. Our hypothesis is that adapting existing CS principles to fiber bundle-based optical systems will overcome the resolution limitation inherent in fiber-bundle imaging. In a previous paper, we examined the practical challenges involved in implementing a highly parallel version of the single-pixel camera while focusing on synthetic objects. This paper extends the same architecture to fiber-bundle imaging under incoherent illumination and addresses some practical issues associated with imaging physical objects. Additionally, we model the optical non-idealities in the system to reduce modelling errors.
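The CS recovery idea invoked above, reconstructing more unknowns than measurements by exploiting sparsity, can be shown in a toy setting with ISTA (iterative soft thresholding), one standard sparse solver; this is a generic sketch, not the reconstruction algorithm of the paper, and all dimensions are illustrative.

```python
import numpy as np

def ista(A, y, lam=0.01, steps=1000):
    """Iterative soft thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2          # safe step size
    for _ in range(steps):
        x = x + t * A.T @ (y - A @ x)            # gradient step on the fit
        x = np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)  # 40 measurements, 100 unknowns
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -0.8, 0.5]            # sparse scene
x_hat = ista(A, A @ x_true)                       # recover from y = A @ x_true
```

In the optical setting, the rows of A correspond to the programmed measurement patterns and y to the low-resolution sensor readings.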

  10. Radiometric Normalization of Large Airborne Image Data Sets Acquired by Different Sensor Types

    NASA Astrophysics Data System (ADS)

    Gehrke, S.; Beshah, B. T.

    2016-06-01

Generating seamless mosaics of aerial images is a particularly challenging task when the mosaic comprises a large number of images, collected over longer periods of time and with different sensors under varying imaging conditions. Such large mosaics typically consist of very heterogeneous image data, both spatially (different terrain types and atmosphere) and temporally (unstable atmospheric properties and even changes in land coverage). We present a new radiometric normalization or, respectively, radiometric aerial triangulation approach that takes advantage of our knowledge about each sensor's properties. The current implementation supports medium and large format airborne imaging sensors of the Leica Geosystems family, namely the ADS line-scanner as well as DMC and RCD frame sensors. A hierarchical model, with parameters for the overall mosaic, the sensor type, different flight sessions, strips and individual images, allows for adaptation to each sensor's geometric and radiometric properties. Additional parameters at different hierarchy levels can compensate for radiometric differences of various origins, addressing shortcomings of the preceding radiometric sensor calibration as well as of the BRDF and atmospheric corrections. The final, relative normalization is based on radiometric tie points in overlapping images, absolute radiometric control points, and image statistics. It is computed in a global least-squares adjustment for the entire mosaic by altering each image's histogram using a location-dependent mathematical model. This model involves contrast and brightness corrections at radiometric fix points, with bilinear interpolation for corrections in between. The distribution of the radiometric fix points is adaptive to each image and generally increases with image size, hence enabling optimal local adaptation even for very long image strips as typically captured by a line-scanner sensor. The normalization approach is implemented in HxMap software. It has been successfully applied to large sets of heterogeneous imagery, including the adjustment of original sensor images prior to quality control and further processing, as well as radiometric adjustment for ortho-image mosaic generation.
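The tie-point adjustment idea can be shown in miniature: solve for a per-image brightness offset (a deliberate simplification of the paper's hierarchical contrast/brightness model) so that overlapping images agree at radiometric tie points, in a global least-squares sense with one image fixed as the reference. The tie-point values are invented.

```python
import numpy as np

# Tie points: (image_i, image_j, value_i, value_j); values are invented DNs.
ties = [(0, 1, 100.0, 90.0), (1, 2, 95.0, 105.0), (0, 2, 98.0, 110.0)]
n_img = 3
rows, rhs = [], []
for i, j, vi, vj in ties:
    r = np.zeros(n_img)
    r[i], r[j] = 1.0, -1.0            # want (v_i + o_i) - (v_j + o_j) = 0
    rows.append(r)
    rhs.append(vj - vi)
rows.append(np.eye(n_img)[0])          # gauge constraint: image 0 is reference
rhs.append(0.0)
offsets, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
```

The full approach additionally carries contrast terms, per-location fix points, and hierarchy levels, but the normal-equations structure is the same.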

  11. Development of a driving method suitable for ultrahigh-speed shooting in a 2M-fps 300k-pixel single-chip color camera

    NASA Astrophysics Data System (ADS)

    Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji

    2012-03-01

    We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames each. For every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rate) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that will enable much faster shooting speeds than what is currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.

  12. Quantitative phase imaging using a programmable wavefront sensor

    NASA Astrophysics Data System (ADS)

    Soldevila, F.; Durán, V.; Clemente, P.; Lancis, J.; Tajahuerce, E.

    2018-02-01

    We perform phase imaging using a non-interferometric approach to measure the complex amplitude of a wavefront. We overcome the limitations in spatial resolution, optical efficiency, and dynamic range that are found in Shack-Hartmann wavefront sensing. To do so, we sample the wavefront with a high-speed spatial light modulator. A single lens forms a time-dependent light distribution on its focal plane, where a position detector is placed. Our approach is lenslet-free and does not rely on any kind of iterative or unwrap algorithm. The validity of our technique is demonstrated by performing both aberration sensing and phase imaging of transparent samples.

  13. Spectral X-Ray Diffraction using a 6 Megapixel Photon Counting Array Detector.

    PubMed

    Muir, Ryan D; Pogranichniy, Nicholas R; Muir, J Lewis; Sullivan, Shane Z; Battaile, Kevin P; Mulichak, Anne M; Toth, Scott J; Keefe, Lisa J; Simpson, Garth J

    2015-03-12

    Pixel-array detectors allow single-photon counting to be performed on a massively parallel scale, with several million counting circuits and detectors in the array. Because the number of photoelectrons produced at the detector surface depends on the photon energy, these detectors offer the possibility of spectral imaging. In this work, a statistical model of the instrument response is used to calibrate the detector on a per-pixel basis. In turn, the calibrated sensor was used to separate dual-energy diffraction measurements into two monochromatic images. Target applications include multi-wavelength diffraction to aid in protein structure determination and X-ray diffraction imaging.

  14. Spectral x-ray diffraction using a 6 megapixel photon counting array detector

    NASA Astrophysics Data System (ADS)

    Muir, Ryan D.; Pogranichniy, Nicholas R.; Muir, J. Lewis; Sullivan, Shane Z.; Battaile, Kevin P.; Mulichak, Anne M.; Toth, Scott J.; Keefe, Lisa J.; Simpson, Garth J.

    2015-03-01

    Pixel-array detectors allow single-photon counting to be performed on a massively parallel scale, with several million counting circuits and detectors in the array. Because the number of photoelectrons produced at the detector surface depends on the photon energy, these detectors offer the possibility of spectral imaging. In this work, a statistical model of the instrument response is used to calibrate the detector on a per-pixel basis. In turn, the calibrated sensor was used to separate dual-energy diffraction measurements into two monochromatic images. Target applications include multi-wavelength diffraction to aid in protein structure determination and X-ray diffraction imaging.

  15. Single photon imaging and timing array sensor apparatus and method

    DOEpatents

    Smith, R. Clayton

    2003-06-24

    An apparatus and method are disclosed for generating a three-dimensional image of an object or target. The apparatus comprises a photon source for emitting photons at a target and a photon receiver for receiving the photons reflected from the target. The photon receiver determines a reflection time for each photon and further determines its arrival position on the receiver. An analyzer is communicatively coupled to the photon receiver, wherein the analyzer generates a three-dimensional image of the object based upon the reflection time and the arrival position.

  16. Unmanned Vehicle Guidance Using Video Camera/Vehicle Model

    NASA Technical Reports Server (NTRS)

    Sutherland, T.

    1999-01-01

    A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images in software. For a nominal image size of 256 x 256 pixels, this subtraction can take a large portion of the time between successive frames of standard-rate video, leaving very little time for other computations. The purpose of this project was to move the subtraction into hardware to speed up the process and allow more complex algorithms to be performed, both in hardware and software.
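
    The consecutive-frame subtraction described above is simple to express in software; the sketch below is a minimal NumPy illustration only — the array sizes, signed-arithmetic handling, and `threshold` value are assumptions, not details of the flight system.

```python
import numpy as np

def frame_difference(prev_frame: np.ndarray, curr_frame: np.ndarray,
                     threshold: int = 10) -> np.ndarray:
    """Subtract two consecutive frames and flag pixels whose brightness
    changed by more than `threshold` counts."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
    return (np.abs(diff) > threshold).astype(np.uint8)

# Example: a 256 x 256 frame pair in which a single pixel lights up
prev = np.zeros((256, 256), dtype=np.uint8)
curr = np.zeros((256, 256), dtype=np.uint8)
curr[100, 100] = 200
changed = frame_difference(prev, curr)
print(int(changed.sum()))  # prints 1 (one changed pixel)
```

    Even this vectorized form touches all 65,536 pixels per frame pair, which illustrates why moving the subtraction into hardware frees time for more complex processing.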

  17. Development of a distributed read-out imaging TES X-ray microcalorimeter

    NASA Astrophysics Data System (ADS)

    Trowell, S.; Holland, A. D.; Fraser, G. W.; Goldie, D.; Gu, E.

    2002-02-01

    We report on the development of a linear absorber detector for one-dimensional imaging spectroscopy, read-out by two Transition Edge Sensors (TESs). The TESs, based on a single layer of iridium, demonstrate stable and controllable superconducting-to-normal transitions in the region of 130 mK. Results from Monte Carlo simulations are presented indicating that the device configuration is capable of detecting photon positions to better than 200 μm, thereby meeting the resolution specification for missions such as XEUS of ~250 μm.

  18. Genetically encoded calcium indicators for multi-color neural activity imaging and combination with optogenetics

    PubMed Central

    Akerboom, Jasper; Carreras Calderón, Nicole; Tian, Lin; Wabnig, Sebastian; Prigge, Matthias; Tolö, Johan; Gordus, Andrew; Orger, Michael B.; Severi, Kristen E.; Macklin, John J.; Patel, Ronak; Pulver, Stefan R.; Wardill, Trevor J.; Fischer, Elisabeth; Schüler, Christina; Chen, Tsai-Wen; Sarkisyan, Karen S.; Marvin, Jonathan S.; Bargmann, Cornelia I.; Kim, Douglas S.; Kügler, Sebastian; Lagnado, Leon; Hegemann, Peter; Gottschalk, Alexander; Schreiter, Eric R.; Looger, Loren L.

    2013-01-01

    Genetically encoded calcium indicators (GECIs) are powerful tools for systems neuroscience. Here we describe red, single-wavelength GECIs, “RCaMPs,” engineered from circular permutation of the thermostable red fluorescent protein mRuby. High-resolution crystal structures of mRuby, the red sensor RCaMP, and the recently published red GECI R-GECO1 give insight into the chromophore environments of the Ca2+-bound state of the sensors and the engineered protein domain interfaces of the different indicators. We characterized the biophysical properties and performance of RCaMP sensors in vitro and in vivo in Caenorhabditis elegans, Drosophila larvae, and larval zebrafish. Further, we demonstrate 2-color calcium imaging both within the same cell (registering mitochondrial and somatic [Ca2+]) and between two populations of cells: neurons and astrocytes. Finally, we perform integrated optogenetics experiments, wherein neural activation via channelrhodopsin-2 (ChR2) or a red-shifted variant, and activity imaging via RCaMP or GCaMP, are conducted simultaneously, with the ChR2/RCaMP pair providing independently addressable spectral channels. Using this paradigm, we measure calcium responses of naturalistic and ChR2-evoked muscle contractions in vivo in crawling C. elegans. We systematically compare the RCaMP sensors to R-GECO1, in terms of action potential-evoked fluorescence increases in neurons, photobleaching, and photoswitching. R-GECO1 displays higher Ca2+ affinity and larger dynamic range than RCaMP, but exhibits significant photoactivation with blue and green light, suggesting that integrated channelrhodopsin-based optogenetics using R-GECO1 may be subject to artifact. Finally, we create and test blue, cyan, and yellow variants engineered from GCaMP by rational design. This engineered set of chromatic variants facilitates new experiments in functional imaging and optogenetics. PMID:23459413

  19. Defining the uncertainty of electro-optical identification system performance estimates using a 3D optical environment derived from satellite

    NASA Astrophysics Data System (ADS)

    Ladner, S. D.; Arnone, R.; Casey, B.; Weidemann, A.; Gray, D.; Shulman, I.; Mahoney, K.; Giddings, T.; Shirron, J.

    2009-05-01

    Current United States Navy Mine-Counter-Measure (MCM) operations primarily use electro-optical identification (EOID) sensors to identify underwater targets after detection via acoustic sensors. These EOID sensors, which are based on laser underwater imaging, by design work best in "clear" waters and are limited in coastal waters, especially those with strong optical layers. Optical properties, in particular scattering and absorption, play an important role in system performance. Surface optical properties from satellite alone are not adequate to determine how well a system will perform at depth because of subsurface optical layers. Characterizing the spatial and temporal variability of the 3-D optics of coastal waters, along with the strength and location of subsurface optical layers, maximizes the chances of identifying underwater targets by enabling optimum sensor deployment. Advanced methods have been developed to fuse optical measurements from gliders, "surface" optical properties from satellite snapshots, and 3-D ocean circulation models to extend the two-dimensional (2-D) surface satellite optical image into a three-dimensional (3-D) optical volume with subsurface optical layers. Modifications were made to an EOID performance model to accept as input a 3-D optical volume covering an entire region of interest and to derive a system performance field. These enhancements extend the present capability, based on glider optics and EOID sensor models, to estimate the system's "image quality", which yields performance information only for a single glider profile location in a very large operational region. Finally, we define the uncertainty of the system performance by coupling the EOID performance model with the 3-D optical volume uncertainties. Knowing the ensemble spread of the EOID performance field provides a new and unique capability for tactical decision makers and Navy operations.

  20. Microwave Sensors for Breast Cancer Detection

    PubMed Central

    2018-01-01

    Breast cancer is a leading cause of cancer death among females; early diagnostic methods with suitable treatments improve the 5-year survival rates significantly. Microwave breast imaging has been reported as the most promising alternative or additional tool to the current gold standard, X-ray mammography, for detecting breast cancer. Microwave breast image quality is affected by the microwave sensor, the sensor array, the number of sensors in the array, and the size of each sensor; the sensor and sensor array thus play an important role in the microwave breast imaging system. Numerous microwave biosensors have been developed for biomedical applications, with particular focus on breast tumor detection. Compared to conventional medical imaging and biosensor techniques, these microwave sensors not only enable better cancer detection and improved image resolution, but also provide attractive features such as label-free detection. This paper aims to provide an overview of recent important achievements in microwave sensors for biomedical imaging applications, with particular focus on breast cancer detection. The electrical properties of biological tissues in the microwave spectrum, microwave imaging approaches, microwave biosensors, current challenges, and future work are also discussed in the manuscript. PMID:29473867

  1. Microwave Sensors for Breast Cancer Detection.

    PubMed

    Wang, Lulu

    2018-02-23

    Breast cancer is a leading cause of cancer death among females; early diagnostic methods with suitable treatments improve the 5-year survival rates significantly. Microwave breast imaging has been reported as the most promising alternative or additional tool to the current gold standard, X-ray mammography, for detecting breast cancer. Microwave breast image quality is affected by the microwave sensor, the sensor array, the number of sensors in the array, and the size of each sensor; the sensor and sensor array thus play an important role in the microwave breast imaging system. Numerous microwave biosensors have been developed for biomedical applications, with particular focus on breast tumor detection. Compared to conventional medical imaging and biosensor techniques, these microwave sensors not only enable better cancer detection and improved image resolution, but also provide attractive features such as label-free detection. This paper aims to provide an overview of recent important achievements in microwave sensors for biomedical imaging applications, with particular focus on breast cancer detection. The electrical properties of biological tissues in the microwave spectrum, microwave imaging approaches, microwave biosensors, current challenges, and future work are also discussed in the manuscript.

  2. Performance Assessment of the Optical Transient Detector and Lightning Imaging Sensor. Part 2; Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Mach, Douglas M.; Christian, Hugh J.; Blakeslee, Richard; Boccippio, Dennis J.; Goodman, Steve J.; Boeck, William

    2006-01-01

    We describe the clustering algorithm used by the Lightning Imaging Sensor (LIS) and the Optical Transient Detector (OTD) for combining the lightning pulse data into events, groups, flashes, and areas. Events are single pixels that exceed the LIS/OTD background level during a single frame (2 ms). Groups are clusters of events that occur within the same frame and in adjacent pixels. Flashes are clusters of groups that occur within 330 ms and either 5.5 km (for LIS) or 16.5 km (for OTD) of each other. Areas are clusters of flashes that occur within 16.5 km of each other. Many investigators are utilizing the LIS/OTD flash data; therefore, we test how variations in the algorithms for the event-group and group-flash clustering affect the flash count for a subset of the LIS data. We divided the subset into areas with low (1-3), medium (4-15), high (16-63), and very high (64+) flash counts to see how changes in the clustering parameters affect the flash rates in these different sizes of areas. We found that as long as the cluster parameters are within about a factor of two of the current values, the flash counts do not change by more than about 20%. Therefore, the flash clustering algorithm used by the LIS and OTD sensors creates flash rates that are relatively insensitive to reasonable variations in the clustering parameters.
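
    The group-to-flash stage of the hierarchy above can be sketched as a single-linkage clustering with the stated 330 ms / 5.5 km thresholds. This is a hedged illustration only: positions are flattened to one dimension, distance is a plain coordinate difference rather than a geodesic distance, and a full implementation would also merge flashes that become connected through a newly added group.

```python
from dataclasses import dataclass

@dataclass
class Group:
    t_ms: float   # group time in milliseconds
    x_km: float   # group position, flattened to 1-D for illustration

def cluster_flashes(groups, dt_ms=330.0, dx_km=5.5):
    """Cluster groups into flashes: a group joins a flash if it lies
    within dt_ms and dx_km of ANY group already in that flash."""
    flashes = []
    for g in sorted(groups, key=lambda g: g.t_ms):
        for flash in flashes:
            if any(abs(g.t_ms - m.t_ms) <= dt_ms and
                   abs(g.x_km - m.x_km) <= dx_km for m in flash):
                flash.append(g)
                break
        else:
            flashes.append([g])   # no flash close enough: start a new one
    return flashes

groups = [Group(0, 0.0), Group(200, 3.0), Group(1000, 3.5), Group(1100, 40.0)]
print(len(cluster_flashes(groups)))  # prints 3
```

    Doubling or halving `dt_ms` and `dx_km` and re-running such a clustering is exactly the kind of sensitivity test the abstract describes.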

  3. Flexible ultrathin-body single-photon avalanche diode sensors and CMOS integration.

    PubMed

    Sun, Pengfei; Ishihara, Ryoichi; Charbon, Edoardo

    2016-02-22

    We proposed the world's first flexible ultrathin-body single-photon avalanche diode (SPAD) as a photon-counting device, providing a suitable solution for advanced implantable bio-compatible chronic medical monitoring, diagnostics, and other applications. In this paper, we investigate the Geiger-mode performance of this flexible ultrathin-body SPAD comprehensively, and we extend this work to the first flexible SPAD image sensor with in-pixel and off-pixel electronics integrated in CMOS. Experimental results show that the dark count rate (DCR) due to band-to-band tunneling can be reduced by optimizing the multiplication doping. DCR due to trap-assisted avalanche, believed to originate from the trench etching process, could be further reduced, resulting in a DCR density of tens to hundreds of hertz per square micrometer at cryogenic temperature. The influence of the trench etching process on DCR is also demonstrated by comparison with planar ultrathin-body SPAD structures without a trench. Photon detection probability (PDP) can be improved by wider depletion and drift regions and by carefully optimizing the body thickness. PDP in frontside illumination (FSI) and backside illumination (BSI) is comparable, making this technology suitable for both modes of illumination. Afterpulsing and crosstalk are negligible at a 2 µs dead time, and it has been proved, for the first time, that a CMOS SPAD pixel of this kind can work in a cryogenic environment. By appropriate choice of substrate, this technology is amenable to implantation for biocompatible photon-counting applications and wherever bent imaging sensors are essential.

  4. Current Status Of The NAVSEA Backscatter Absorption Gas Imaging (BAGI) Development Project

    NASA Astrophysics Data System (ADS)

    Kulp, Thomas J.; Kennedy, Randall B.; Garvis, Darrel G.; McRae, Thomas G.; Stahovec, Joe

    1989-07-01

    During the last five years, work has been underway at the Lawrence Livermore National Laboratory (LLNL) to develop a method for imaging gas clouds that are normally invisible to the human eye. The effort was initiated to provide an effective means of locating leaks of hazardous vapors. Although conventional point or line-of-sight detectors are well suited to the measurement of gas concentrations, their utility in identifying the origin and direction of travel of gas plumes is limited. To obtain spatial information from sensors that provide only zero- or one-dimensional readings, either sequential readings at many different locations from a single device, or multiplexed simultaneous measurements from a sensor array, must be taken. The former approach is time consuming and therefore impractical in emergency situations where rapid action is required. The latter is useful only in cases where the probability of a hazardous release is high enough to warrant the prior installation of a sensor network. Either method demands high measurement precision and sufficient discrimination against both interfering gases and interfering sources of the target gas. Backscatter Absorption Gas Imaging (BAGI) is a new technique that makes gas clouds and their surroundings "visible" in a real-time video image. It is superior to conventional sensors in characterizing the spatial properties of gas clouds because it provides data that are inherently two-dimensional. Less measurement precision is required by the BAGI technique because it conveys information as contrasts between different areas in an image rather than as absolute concentration values. Furthermore, the pictorial display of this information allows it to be rapidly assimilated by emergency-response teams. The size and orientation of the plume are evident through comparison with familiar objects that also appear in the image. Subtler evaluations can be made as well, such as the distinction between innocuous and hazardous sources of the target gas. For example, when using a conventional sensor to search for the source of a gas that is also present at low levels in automobile exhaust, one might be led astray near a highway. Gas imaging allows the searcher to recognize that the cars are producing the gas and that they are not the objective of the search.

  5. Feasibility Study of Inexpensive Thermal Sensors and Small Uas Deployment for Living Human Detection in Rescue Missions Application Scenarios

    NASA Astrophysics Data System (ADS)

    Levin, E.; Zarnowski, A.; McCarty, J. L.; Bialas, J.; Banaszek, A.; Banaszek, S.

    2016-06-01

    Significant efforts are invested by rescue agencies worldwide to save human lives during natural and man-made emergencies, including those that happen in wilderness locations. These emergency situations include, but are not limited to, accidents involving alpinists, mountain skiers, and hikers lost in remote areas. Sometimes hundreds of first responders are involved in a rescue operation to save a single human life. There are two critical tasks for which geospatial imaging can be a very useful asset in rescue operations: 1) human detection and 2) confirming that a detected human being is alive. An international group of researchers from the United States and Poland collaborated on a pilot research project to assess the feasibility of using small unmanned aerial vehicles (SUAVs) and inexpensive forward-looking infrared (FLIR) sensors for human detection and alive-human state confirmation. Equipment cost for both research teams was below $8,000, comprising a 3DR quadrotor UAV and a Lepton longwave infrared (LWIR) imager costing around $250 (for the US team) and a DJI Inspire 1 UAS with a commercial Tamarisc-320 thermal camera (for the Polish team). The two collaborating groups performed independent experiments in the USA and Poland and shared the ground-based and airborne electro-optical and FLIR imagery collected. In these experiments, dead bodies were simulated with medical training dummies, and real humans were placed nearby as live subjects. The electro-optical imagery was used for research on optimal human detection algorithms. Furthermore, given that a dead human body reaches the temperature of the surrounding environment after several hours, our experiments were challenged by SUAS data optimization, i.e., finding a distance from the SUAV to the object at which the FLIR sensor can still distinguish the temperature difference between a dummy and a real human.
    Our experiments indicated the feasibility of using SUAVs and small thermal sensors for the human detection scenarios described above. The temperature differences captured by the deployed imaging platform are visually interpretable on the FLIR images. Moreover, we applied ENVI image processing functions for calibration and numerical estimation of these temperature differences. Potential additional system functionalities include voice messages from rescue teams and even remote medication delivery for the victims of the described emergencies. This paper describes the experiments, processing results, and future research in more detail.

  6. An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.

    PubMed

    Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang

    2016-01-28

    Vision navigation, which determines position and attitude by real-time processing of images collected from imaging sensors, is advantageous when a high-performance global positioning system (GPS) and inertial measurement unit (IMU) are unavailable. Vision navigation is widely used in indoor navigation, deep-space navigation, and multiple-sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach, aided by imaging sensors, that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple-sensor platforms in environments with poor GPS coverage. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established, based on the linear index of a road segment, for fast image search and retrieval. Third, a robust image matching algorithm is presented to search for and match a real-time image against the GRID. The image matched with the real-time scene is then used to calculate the 3D navigation parameters of the multiple-sensor platform. Experimental results show that the proposed approach retrieves images efficiently and achieves navigation accuracies of 1.2 m in the horizontal plane and 1.8 m in height during a 5-min GPS outage over 1500 m.

  7. An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database

    PubMed Central

    Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang

    2016-01-01

    Vision navigation, which determines position and attitude by real-time processing of images collected from imaging sensors, is advantageous when a high-performance global positioning system (GPS) and inertial measurement unit (IMU) are unavailable. Vision navigation is widely used in indoor navigation, deep-space navigation, and multiple-sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach, aided by imaging sensors, that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple-sensor platforms in environments with poor GPS coverage. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established, based on the linear index of a road segment, for fast image search and retrieval. Third, a robust image matching algorithm is presented to search for and match a real-time image against the GRID. The image matched with the real-time scene is then used to calculate the 3D navigation parameters of the multiple-sensor platform. Experimental results show that the proposed approach retrieves images efficiently and achieves navigation accuracies of 1.2 m in the horizontal plane and 1.8 m in height during a 5-min GPS outage over 1500 m. PMID:26828496

  8. CMOS image sensor-based implantable glucose sensor using glucose-responsive fluorescent hydrogel

    PubMed Central

    Tokuda, Takashi; Takahashi, Masayuki; Uejima, Kazuhiro; Masuda, Keita; Kawamura, Toshikazu; Ohta, Yasumi; Motoyama, Mayumi; Noda, Toshihiko; Sasagawa, Kiyotaka; Okitsu, Teru; Takeuchi, Shoji; Ohta, Jun

    2014-01-01

    A CMOS image sensor-based implantable glucose sensor using an optical-sensing scheme is proposed and experimentally verified. A glucose-responsive fluorescent hydrogel is used as the mediator in the measurement scheme. The wired implantable glucose sensor was realized by integrating a CMOS image sensor, the hydrogel, UV light-emitting diodes, and an optical filter on a flexible polyimide substrate. The feasibility of the glucose sensor was verified by both in vitro and in vivo experiments. PMID:25426316

  9. Experimental extractions of particle position from inline holograms using single coefficient of Wigner-Ville analysis

    NASA Astrophysics Data System (ADS)

    Widjaja, Joewono; Dawprateep, Saowaros; Chuamchaitrakool, Porntip

    2017-07-01

    Extraction of particle positions from inline holograms using a single coefficient of the Wigner-Ville distribution (WVD) is experimentally verified. WVD analysis of holograms gives the local variation of fringe frequency. Regardless of the axial position of the particles, one of the WVD coefficients has the unique characteristics of having the lowest amplitude and being located on a line with a slope inversely proportional to the particle position. Experimental results obtained using two image sensors with different resolutions verify the feasibility of the present method.

  10. Minimal Power Latch for Single-Slope ADCs

    NASA Technical Reports Server (NTRS)

    Hancock, Bruce R. (Inventor)

    2015-01-01

    A latch circuit that uses two interoperating latches. The latch circuit has the beneficial feature that it switches only a single time during a measurement that uses a stair step or ramp function as an input signal in an analog to digital converter. This feature minimizes the amount of power that is consumed in the latch and also minimizes the amount of high frequency noise that is generated by the latch. An application using a plurality of such latch circuits in a parallel decoding ADC for use in an image sensor is given as an example.

  11. Charge integration successive approximation analog-to-digital converter for focal plane applications using a single amplifier

    NASA Technical Reports Server (NTRS)

    Zhou, Zhimin (Inventor); Pain, Bedabrata (Inventor)

    1999-01-01

    An analog-to-digital converter for on-chip focal-plane image sensor applications. The analog-to-digital converter utilizes a single charge integrating amplifier in a charge balancing architecture to implement successive approximation analog-to-digital conversion. This design requires minimal chip area and has high speed and low power dissipation for operation in the 2-10 bit range. The invention is particularly well suited to CMOS on-chip applications requiring many analog-to-digital converters, such as column-parallel focal-plane architectures.
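
    The successive-approximation logic at the heart of such a converter is a bit-by-bit binary search. The sketch below is a behavioral model only — it captures the decision sequence, not the charge-integrating amplifier or the charge-balancing circuit described in the patent.

```python
def sar_convert(vin: float, vref: float, bits: int) -> int:
    """Behavioral model of successive-approximation A/D conversion:
    trial one bit per cycle, MSB first, keeping each bit whose trial
    code does not push the equivalent voltage above the input."""
    code = 0
    for i in reversed(range(bits)):
        trial = code | (1 << i)
        # In the charge-balancing architecture this comparison is made
        # by integrating charge packets; here we compare voltages directly.
        if trial * vref / (1 << bits) <= vin:
            code = trial
    return code

print(sar_convert(0.5, 1.0, 8))   # mid-scale input: prints 128
```

    Because only one comparator decision is needed per bit, an n-bit conversion completes in n cycles, which is one reason the approach suits low-power, 2-10 bit, column-parallel focal-plane applications.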

  12. Spin electronic magnetic sensor based on functional oxides for medical imaging

    NASA Astrophysics Data System (ADS)

    Solignac, A.; Kurij, G.; Guerrero, R.; Agnus, G.; Maroutian, T.; Fermon, C.; Pannetier-Lecoeur, M.; Lecoeur, Ph.

    2015-09-01

    To detect magnetic signals coming from the body, in particular those produced by the electrical activity of the heart or the brain, the development of ultrasensitive sensors is required. In this regard, magnetoresistive sensors, stemming from spin electronics, are very promising devices. For example, tunnel magnetoresistance (TMR) junctions based on an MgO tunnel barrier have high sensitivity; nevertheless, TMR junctions also often have a high noise level. Fully spin-polarized materials such as the manganite La0.67Sr0.33MnO3 (LSMO) are attractive alternative candidates for such sensors, because LSMO exhibits very low 1/f noise when grown on single crystals, and TMR responses with values up to 2000% have been observed. This kind of tunnel junction, when combined with a high-Tc superconductor loop, opens up the possibility of full-oxide structures working at liquid-nitrogen temperature and suitable for medical imaging. In this work, we investigated, on LSMO-based tunnel junctions, the parameters controlling overall system performance, including not only the TMR ratio but also the pinning of the reference layer and the noise floor. We especially focused on the effects of the quality of the barrier, the interface, and the electrode, by varying materials and growth conditions.

  13. NASA Tech Briefs, July 2007

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Topics covered include: Miniature Intelligent Sensor Module; "Smart" Sensor Module; Portable Apparatus for Electrochemical Sensing of Ethylene; Increasing Linear Dynamic Range of a CMOS Image Sensor; Flight Qualified Micro Sun Sensor; Norbornene-Based Polymer Electrolytes for Lithium Cells; Making Single-Source Precursors of Ternary Semiconductors; Water-Free Proton-Conducting Membranes for Fuel Cells; Mo/Ti Diffusion Bonding for Making Thermoelectric Devices; Photodetectors on Coronagraph Mask for Pointing Control; High-Energy-Density, Low-Temperature Li/CFx Primary Cells; G4-FETs as Universal and Programmable Logic Gates; Fabrication of Buried Nanochannels From Nanowire Patterns; Diamond Smoothing Tools; Infrared Imaging System for Studying Brain Function; Rarefying Spectra of Whispering-Gallery-Mode Resonators; Large-Area Permanent-Magnet ECR Plasma Source; Slot-Antenna/Permanent-Magnet Device for Generating Plasma; Fiber-Optic Strain Gauge With High Resolution And Update Rate; Broadband Achromatic Telecentric Lens; Temperature-Corrected Model of Turbulence in Hot Jet Flows; Enhanced Elliptic Grid Generation; Automated Knowledge Discovery From Simulators; Electro-Optical Modulator Bias Control Using Bipolar Pulses; Generative Representations for Automated Design of Robots; Mars-Approach Navigation Using In Situ Orbiters; Efficient Optimization of Low-Thrust Spacecraft Trajectories; Cylindrical Asymmetrical Capacitors for Use in Outer Space; Protecting Against Faults in JPL Spacecraft; Algorithm Optimally Allocates Actuation of a Spacecraft; and Radar Interferometer for Topographic Mapping of Glaciers and Ice Sheets.

  14. Beam imaging sensor and method for using same

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McAninch, Michael D.; Root, Jeffrey J.

    The present invention relates generally to the field of sensors for beam imaging and, in particular, to a new and useful beam imaging sensor for use in determining, for example, the power density distribution of a beam including, but not limited to, an electron beam or an ion beam. In one embodiment, the beam imaging sensor of the present invention comprises, among other items, a circumferential slit that is either circular, elliptical or polygonal in nature. In another embodiment, the beam imaging sensor of the present invention comprises, among other things, a discontinuous partially circumferential slit. Also disclosed is a method for using the various beam sensor embodiments of the present invention.

  15. Analysis on the Effect of Sensor Views in Image Reconstruction Produced by Optical Tomography System Using Charge-Coupled Device.

    PubMed

    Jamaludin, Juliza; Rahim, Ruzairi Abdul; Fazul Rahiman, Mohd Hafiz; Mohd Rohani, Jemmy

    2018-04-01

    Optical tomography (OPT) is a method of capturing a cross-sectional image from data obtained by sensors distributed around the periphery of the analyzed system. The system is based on measuring the final light attenuation, or absorption of radiation, after crossing the measured objects. The number of sensor views affects the results of image reconstruction: a high number of sensor views per projection gives high image quality. This research presents an application of a charge-coupled device (CCD) linear sensor and a laser diode in an OPT system. Experiments on detecting solid and transparent objects in crystal-clear water were conducted. Two numbers of sensor views, 160 and 320, are evaluated for reconstructing the images. The image reconstruction algorithm used was a filtered linear back-projection algorithm. Comparison of the simulated and experimental image results shows that 320 views give a smaller area error than 160 views, suggesting that a higher number of views yields higher-resolution image reconstruction.
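
    The back-projection step underlying such reconstructions can be sketched as follows — an unfiltered, parallel-beam, nearest-neighbour toy version for illustration only; the filtered variant used in the paper additionally high-pass filters each projection before smearing it back.

```python
import numpy as np

def linear_back_projection(sinogram: np.ndarray, angles_deg) -> np.ndarray:
    """Smear each 1-D projection back across the image plane along its
    view angle and average the results."""
    n = sinogram.shape[1]
    half = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n] - half       # pixel coordinates about centre
    image = np.zeros((n, n))
    for proj, ang in zip(sinogram, np.deg2rad(angles_deg)):
        # Detector coordinate of every pixel for this view angle
        t = xs * np.cos(ang) + ys * np.sin(ang) + half
        idx = np.clip(np.round(t).astype(int), 0, n - 1)
        image += proj[idx]
    return image / len(angles_deg)

# Two orthogonal views of a single bright point reconstruct as a cross
sino = np.zeros((2, 65))
sino[:, 32] = 1.0
img = linear_back_projection(sino, [0, 90])
```

    With only two views the point smears into a cross of streak artifacts, which is the picture-level reason that more views (320 versus 160) reduce the reconstruction error.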

  16. Integration of piezo-capacitive and piezo-electric nanoweb based pressure sensors for imaging of static and dynamic pressure distribution.

    PubMed

    Jeong, Y J; Oh, T I; Woo, E J; Kim, K J

    2017-07-01

    Recently, highly flexible and soft pressure distribution imaging sensors have been in great demand for tactile sensing, gait analysis, ubiquitous life-care based on activity recognition, and therapeutics. In this study, we integrate piezo-capacitive and piezo-electric nanowebs with conductive fabric sheets for detecting static and dynamic pressure distributions over a large sensing area. Electrical impedance tomography (EIT) and electric source imaging are applied to reconstruct pressure distribution images from current-voltage data measured on the boundary of the hybrid fabric sensor. We evaluated the piezo-capacitive nanoweb sensor, the piezo-electric nanoweb sensor, and the hybrid fabric sensor. The results show the feasibility of static and dynamic pressure distribution imaging from boundary measurements of the fabric sensors.

  17. Terrestrial Applications of the Thermal Infrared Sensor, TIRS

    NASA Technical Reports Server (NTRS)

    Smith, Ramsey L.; Thome, Kurtis; Richardson, Cathleen; Irons, James; Reuter, Dennis

    2009-01-01

    Landsat satellites have acquired single-band thermal images since 1978. The next satellite in the heritage, the Landsat Data Continuity Mission (LDCM), is scheduled to launch in December 2012. LDCM will carry the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS), where TIRS operates in concert with, but independently of, OLI. This paper provides an overview of the TIRS remote sensing instrument. The TIRS instrument was designed at the National Aeronautics and Space Administration's (NASA) Goddard Space Flight Center (GSFC), where it will be fabricated and calibrated as well. Protecting the integrity of the scientific data to be collected from TIRS played a strong role in the definition of the calibration test equipment and procedures used for the optical, radiometric, and spatial calibration. The data produced by LDCM will continue to be used worldwide for environmental monitoring and resource management.

  18. Introductory review on `Flying Triangulation': a motion-robust optical 3D measurement principle

    NASA Astrophysics Data System (ADS)

    Ettl, Svenja

    2015-04-01

    'Flying Triangulation' (FlyTri) is a recently developed principle that allows for motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images, and from each camera image a 3D profile is generated. The series of generated 3D profiles is aligned by algorithms, without relying on any external tracking device. The principle delivers real-time feedback on the measurement process, which enables an all-around measurement of objects. It has great potential for small-space acquisition environments, such as measuring the interior of a car, and for motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed, and measurement examples are given to demonstrate the potential of the measurement principle.

  19. Interferometric Reflectance Imaging Sensor (IRIS)—A Platform Technology for Multiplexed Diagnostics and Digital Detection

    PubMed Central

    Avci, Oguzhan; Lortlar Ünlü, Nese; Yalçın Özkumur, Ayça; Ünlü, M. Selim

    2015-01-01

    Over the last decade, the growing need in disease diagnostics has stimulated rapid development of new technologies with unprecedented capabilities. Recent emerging infectious diseases and epidemics have revealed the shortcomings of existing diagnostics tools, and the necessity for further improvements. Optical biosensors can lay the foundations for future generation diagnostics by providing means to detect biomarkers in a highly sensitive, specific, quantitative and multiplexed fashion. Here, we review an optical sensing technology, Interferometric Reflectance Imaging Sensor (IRIS), and the relevant features of this multifunctional platform for quantitative, label-free and dynamic detection. We discuss two distinct modalities for IRIS: (i) low-magnification (ensemble biomolecular mass measurements) and (ii) high-magnification (digital detection of individual nanoparticles) along with their applications, including label-free detection of multiplexed protein chips, measurement of single nucleotide polymorphism, quantification of transcription factor DNA binding, and high sensitivity digital sensing and characterization of nanoparticles and viruses. PMID:26205273

  20. Single-cell imaging tools for brain energy metabolism: a review

    PubMed Central

    San Martín, Alejandro; Sotelo-Hitschfeld, Tamara; Lerchundi, Rodrigo; Fernández-Moncada, Ignacio; Ceballo, Sebastian; Valdebenito, Rocío; Baeza-Lehnert, Felipe; Alegría, Karin; Contreras-Baeza, Yasna; Garrido-Gerter, Pamela; Romero-Gómez, Ignacio; Barros, L. Felipe

    2014-01-01

    Abstract. Neurophotonics comes to light at a time in which advances in microscopy and improved calcium reporters are paving the way toward high-resolution functional mapping of the brain. This review relates to a parallel revolution in metabolism. We argue that metabolism needs to be approached both in vitro and in vivo, and that it does not just exist as a low-level platform but is also a relevant player in information processing. In recent years, genetically encoded fluorescent nanosensors have been introduced to measure glucose, glutamate, ATP, NADH, lactate, and pyruvate in mammalian cells. Reporting relative metabolite levels, absolute concentrations, and metabolic fluxes, these sensors are instrumental for the discovery of new molecular mechanisms. Sensors continue to be developed, which together with a continued improvement in protein expression strategies and new imaging technologies, herald an exciting era of high-resolution characterization of metabolism in the brain and other organs. PMID:26157964

  1. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    PubMed

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the noise models often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct this mismatch in tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmooths real sensor data, we propose a mixture-of-Poisson denoising method to remove denoising artifacts without affecting image details such as edges and textures. Experiments with real sensor data verify that denoising of real image sensor data is indeed improved by this new technique.
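    The tail argument can be made concrete with a toy simulation. This is a hypothetical sketch, not the paper's model or parameters: a two-component Poisson mixture is constructed with the same overall mean as a single Poisson, and the mass each model places beyond a high quantile is compared.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    mean = 20.0

    # single-Poisson model of sensor counts at a fixed exposure
    single = rng.poisson(mean, n)

    # two-component Poisson mixture with the same overall mean:
    # most pixels behave normally, a small fraction is much noisier
    # (weights and rates are illustrative: 0.9*15 + 0.1*65 = 20)
    w, lo, hi = 0.9, 15.0, 65.0
    comp = rng.random(n) < w
    mixture = np.where(comp, rng.poisson(lo, n), rng.poisson(hi, n))

    # fraction of samples beyond the single model's 99.9th percentile
    q = np.quantile(single, 0.999)
    tail_single = (single > q).mean()
    tail_mixture = (mixture > q).mean()
    ```

    Although both samples share the same mean, the mixture puts orders of magnitude more probability mass in the far tail, which is the behavior the quantile analysis attributes to real sensor noise.
    
    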

  2. Can single empirical algorithms accurately predict inland shallow water quality status from high resolution, multi-sensor, multi-temporal satellite data?

    NASA Astrophysics Data System (ADS)

    Theologou, I.; Patelaki, M.; Karantzalos, K.

    2015-04-01

    Assessing and monitoring water quality status in a timely, cost-effective and accurate manner is of fundamental importance for numerous environmental management and policy-making purposes. There is therefore a current need for validated methodologies which can effectively exploit, in an unsupervised way, the enormous amount of earth observation imaging datasets from various high-resolution satellite multispectral sensors. To this end, many research efforts are based on building concrete relationships and empirical algorithms from concurrent satellite and in-situ data collection campaigns. We experimented with Landsat 7 and Landsat 8 multi-temporal satellite data, coupled with hyperspectral data from a field spectroradiometer and in-situ ground truth data with several physico-chemical and other key monitoring indicators. All available datasets, covering a 4-year period in our case study of Lake Karla, Greece, were processed and fused under a quantitative evaluation framework. The comprehensive analysis performed posed certain questions regarding the applicability of single empirical models across multi-temporal, multi-sensor datasets for the accurate prediction of key water quality indicators for shallow inland systems. Single linear regression models did not establish concrete relations across multi-temporal, multi-sensor observations. Moreover, the shallower parts of the inland system followed, in accordance with the literature, different regression patterns. Landsat 7 and 8 yielded quite promising results, indicating that from the recreation of the lake onward, consistent per-sensor, per-depth prediction models can be successfully established. The highest rates were for chl-a (r2=89.80%), dissolved oxygen (r2=88.53%), conductivity (r2=88.18%), ammonium (r2=87.2%) and pH (r2=86.35%), while total phosphorus (r2=70.55%) and nitrates (r2=55.50%) resulted in lower correlation rates.
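    The r² figures above come from simple per-indicator linear fits between a satellite-derived predictor and in-situ measurements. The sketch below shows the computation on synthetic data; the "band ratio" predictor, the linear relation, and all numbers are invented for illustration and are not the study's data.

    ```python
    import numpy as np

    def r_squared(x, y):
        """Coefficient of determination of a simple linear fit y ~ a*x + b."""
        a, b = np.polyfit(x, y, 1)
        resid = y - (a * x + b)
        return 1.0 - resid.var() / y.var()

    rng = np.random.default_rng(1)
    # synthetic predictor (e.g. a reflectance band ratio) and a noisy
    # in-situ indicator (e.g. chl-a); purely illustrative values
    band_ratio = rng.uniform(0.2, 1.5, 40)
    chla = 30.0 * band_ratio + rng.normal(0.0, 3.0, 40)

    r2 = r_squared(band_ratio, chla)
    ```

    Fitting one such model per sensor and per depth class, as the abstract describes, amounts to repeating this regression on each data subset and comparing the resulting r² values.
    
    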

  3. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has been drastically improved in response to the current demand for high-quality digital images; for example, digital still cameras have several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture high-resolution and high-frame-rate information separately. We built a prototype camera that can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos, and we also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos to show the utility of the camera.

  4. Apparatus and method for a light direction sensor

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B. (Inventor)

    2011-01-01

    The present invention provides a light direction sensor for determining the direction of a light source. The system includes an image sensor, a spacer attached to the image sensor, and a pattern mask attached to said spacer. The pattern mask has a slit pattern such that, as light passes through the slit pattern, it casts a diffraction pattern onto the image sensor. The method operates by receiving a beam of light onto a patterned mask, wherein the patterned mask has a plurality of slit segments, then diffracting the beam of light onto an image sensor and determining the direction of the light source.
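    The underlying geometry of such a mask-over-sensor device is simple: a pattern that lands a lateral distance d from its normal-incidence position, behind a spacer of thickness s, implies an incidence angle of atan(d/s). The sketch below is a hypothetical illustration of that trigonometry, not the patented method; the pixel pitch, spacer thickness, and pattern positions are invented.

    ```python
    import math

    def incidence_angle(centroid_px, center_px, pixel_pitch_um, spacer_um):
        """Incidence angle (degrees) from the lateral shift of the mask
        pattern on the sensor: a pattern centred at centroid_px that would
        sit at center_px for normal incidence has shifted by
        d = spacer * tan(theta)."""
        shift_um = (centroid_px - center_px) * pixel_pitch_um
        return math.degrees(math.atan2(shift_um, spacer_um))

    # illustrative numbers: 5.5 um pixels, 500 um spacer,
    # pattern centroid shifted 40 pixels from the on-axis position
    angle = incidence_angle(1064, 1024, 5.5, 500.0)
    ```

    Locating the pattern centroid to sub-pixel precision is what gives this class of sensor fine angular resolution; the slit/diffraction design in the patent serves to make that centroid sharp and unambiguous.
    
    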

  5. Optical cell monitoring system for underwater targets

    NASA Astrophysics Data System (ADS)

    Moon, SangJun; Manzur, Fahim; Manzur, Tariq; Demirci, Utkan

    2008-10-01

    We demonstrate a cell-based detection system that could be used for monitoring an underwater target volume and environment using a microfluidic chip and a charge-coupled device (CCD). This technique allows us to capture specific cells and enumerate them over a large area on a microchip. The microfluidic chip and a lens-less imaging platform were then merged to monitor cell populations and morphologies as a system that may find use in distributed sensor networks. The chip, featuring surface chemistry and automatic cell imaging, was fabricated from a cover glass slide, double-sided adhesive film, and a transparent poly(methyl methacrylate) (PMMA) slab. The optically clear chip allows cells to be detected with a CCD sensor. These chips were fabricated with a laser cutter without the use of photolithography. We utilized CD4+ cells that are captured on the floor of a microfluidic chip owing to the ability to address specific target cells using antibody-antigen binding. Captured CD4+ cells were imaged with a fluorescence microscope to verify the chip's specificity and efficiency. We achieved 70.2 +/- 6.5% capture efficiency and 88.8 +/- 5.4% specificity for CD4+ T lymphocytes (n = 9 devices). Bright-field images of the captured cells in the 24 mm × 4 mm × 50 μm microfluidic chip were obtained with the CCD sensor in one second. We achieved an inexpensive system that rapidly captures cells and images them using a lens-less CCD system. This microfluidic device can be modified for use in single-cell detection utilizing a cheap light-emitting diode (LED) chip instead of a wide-range CCD system.

  6. Common-path low-coherence interferometry fiber-optic sensor guided microincision

    NASA Astrophysics Data System (ADS)

    Zhang, Kang; Kang, Jin U.

    2011-09-01

    We propose and demonstrate a common-path low-coherence interferometry (CP-LCI) fiber-optic sensor guided precise microincision. The method tracks the target surface and compensates the tool-to-surface relative motion with better than +/-5 μm resolution using a precision micromotor connected to the tool tip. A single-fiber distance probe integrated microdissector was used to perform an accurate 100 μm incision into the surface of an Intralipid phantom. The CP-LCI guided incision quality in terms of depth was evaluated afterwards using three-dimensional Fourier-domain optical coherence tomography imaging, which showed significant improvement of incision accuracy compared to free-hand-only operations.

  7. Nanospectrofluorometry inside single living cell by scanning near-field optical microscopy

    NASA Astrophysics Data System (ADS)

    Lei, F. H.; Shang, G. Y.; Troyon, M.; Spajer, M.; Morjani, H.; Angiboust, J. F.; Manfait, M.

    2001-10-01

    Near-field fluorescence spectra with subdiffraction limit spatial resolution have been taken in the proximity of mitochondrial membrane inside breast adenocarcinoma cells (MCF7) treated with the fluorescent dye (JC-1) by using a scanning near-field optical microscope coupled with a confocal laser microspectrofluorometer. The probe-sample distance control is based on a piezoelectric bimorph shear force sensor having a static spring constant k=5 μN/nm and a quality factor Q=40 in a physiological medium of viscosity η=1.0 cp. The sensitivity of the force sensor has been tested by imaging a MCF7 cell surface.

  8. SCORPION II persistent surveillance system update

    NASA Astrophysics Data System (ADS)

    Coster, Michael; Chambers, Jon

    2010-04-01

    This paper updates the improvements and benefits demonstrated in the next generation Northrop Grumman SCORPION II family of persistent surveillance and target recognition systems produced by the Xetron Campus in Cincinnati, Ohio. SCORPION II reduces the size, weight, and cost of all SCORPION components in a flexible, field programmable system that is easier to conceal and enables integration of over fifty different Unattended Ground Sensor (UGS) and camera types from a variety of manufacturers, with a modular approach to supporting multiple Line of Sight (LOS) and Beyond Line of Sight (BLOS) communications interfaces. Since 1998 Northrop Grumman has been integrating best in class sensors with its proven universal modular Gateway to provide encrypted data exfiltration to Common Operational Picture (COP) systems and remote sensor command and control. In addition to feeding COP systems, SCORPION and SCORPION II data can be directly processed using a common sensor status graphical user interface (GUI) that allows for viewing and analysis of images and sensor data from up to seven hundred SCORPION system gateways on single or multiple displays. This GUI enables a large amount of sensor data and imagery to be used for actionable intelligence as well as remote sensor command and control by a minimum number of analysts.

  9. Study the performance of star sensor influenced by space radiation damage of image sensor

    NASA Astrophysics Data System (ADS)

    Feng, Jie; Li, Yudong; Wen, Lin; Guo, Qi; Zhang, Xingyao

    2018-03-01

    The star sensor is an essential component of a spacecraft attitude control system. Space radiation can cause star sensor performance degradation and abnormal operation, reducing attitude measurement accuracy and reliability. Many studies have been dedicated to radiation effects on charge-coupled device (CCD) image sensors, but fewer focus on radiation effects on the star sensor itself. The innovation of this paper is to study radiation effects from the device level up to the system level. The influence of the degradation of the radiation-sensitive parameters of the CCD image sensor on the performance parameters of the star sensor is studied. The correlation among proton radiation effects, the non-uniformity noise of the CCD image sensor, and the performance parameters of the star sensor is analyzed. This paper establishes a foundation for the study of error prediction and correction for on-orbit star sensor attitude measurement, and provides a theoretical basis for the design of high-performance star sensors.
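    The link between pixel non-uniformity noise and star sensor accuracy can be illustrated with a toy centroiding experiment. This is a hypothetical sketch, not the paper's model: radiation-induced response non-uniformity is modelled as multiplicative pixel gain variation (10% rms, an invented figure), and its effect on the star spot centroid, the quantity a star sensor ultimately uses for attitude, is measured.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    size = 32
    X, Y = np.meshgrid(np.arange(size), np.arange(size))

    # Gaussian star spot at a known sub-pixel position
    x0, y0, sigma = 15.3, 16.7, 1.5
    spot = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))

    def centroid(img):
        """Intensity-weighted centre of the spot image."""
        s = img.sum()
        return (X * img).sum() / s, (Y * img).sum() / s

    # clean sensor: centroid recovers the true position almost exactly
    cx, cy = centroid(spot)
    err_clean = np.hypot(cx - x0, cy - y0)

    # degraded sensor: multiplicative pixel-to-pixel gain variation
    prnu = rng.normal(1.0, 0.10, spot.shape)
    ncx, ncy = centroid(spot * prnu)
    err_noisy = np.hypot(ncx - x0, ncy - y0)
    ```

    The centroid error grows with the non-uniformity amplitude, which is one mechanism by which device-level radiation damage propagates into system-level attitude error.
    
    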

  10. Sensor, signal, and image informatics - state of the art and current topics.

    PubMed

    Lehmann, T M; Aach, T; Witte, H

    2006-01-01

    The number of articles published annually in the fields of biomedical signal and image acquisition and processing is increasing. Based on selected examples, this survey aims to comprehensively demonstrate recent trends and developments. Four articles are selected for biomedical data acquisition, covering topics such as dose saving in CT, C-arm X-ray imaging systems for volume imaging, and the replacement of dose-intensive CT-based diagnostics with harmonic ultrasound imaging. Regarding biomedical signal analysis (BSA), the four selected articles discuss the equivalence of different time-frequency approaches for signal analysis; an application to cochlear implants, where time-frequency analysis is applied to control the replacement system; recent trends in the fusion of different modalities; and the role of BSA as part of brain-machine interfaces. To cover the broad spectrum of publications in the field of biomedical image processing, six papers are examined. Important topics are content-based image retrieval in medical applications, automatic classification of tongue photographs from traditional Chinese medicine, brain perfusion analysis in single photon emission computed tomography (SPECT), model-based visualization of vascular trees, and virtual surgery, where enhanced visualization and haptic feedback techniques are combined with a sphere-filled model of the organ. The selected papers emphasize the five fields forming the chain of biomedical data processing: (1) data acquisition, (2) data reconstruction and pre-processing, (3) data handling, (4) data analysis, and (5) data visualization. Fields 1 and 2 form sensor informatics, while fields 2 to 5 form signal or image informatics, with respect to the nature of the data considered. Biomedical data acquisition and pre-processing, as well as data handling, analysis, and visualization, aim at providing reliable tools for decision support that improve the quality of health care. Comprehensive evaluation of the processing methods and their reliable integration into routine applications are future challenges in the field of sensor, signal, and image informatics.

  11. Depth-of-interaction estimates in pixelated scintillator sensors using Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Sharma, Diksha; Sze, Christina; Bhandari, Harish; Nagarkar, Vivek; Badano, Aldo

    2017-01-01

    Image quality in thick scintillator detectors can be improved by minimizing parallax errors through depth-of-interaction (DOI) estimation. A novel sensor for low-energy single photon imaging having a thick, transparent, crystalline pixelated micro-columnar CsI:Tl scintillator structure has been described, with possible future application in small-animal single photon emission computed tomography (SPECT) imaging when using thicker structures under development. In order to understand the fundamental limits of this new structure, we introduce cartesianDETECT2, an open-source optical transport package that uses Monte Carlo methods to obtain estimates of DOI for improving the spatial resolution of nuclear imaging applications. Optical photon paths are calculated as a function of varying simulation parameters such as columnar surface roughness, bulk, and top-surface absorption. We use scanning electron microscope images to estimate appropriate surface roughness coefficients. Simulation results are analyzed to model and establish patterns between DOI and photon scattering. The effect of varying starting locations of optical photons on the spatial response is studied. Bulk and top-surface absorption fractions were varied to investigate their effect on spatial response as a function of DOI. We investigated the accuracy of our DOI estimation model for a particular screen with various training and testing sets; for all cases the percent error between the estimated and actual DOI over the majority of the detector thickness was within ±5%, with a maximum error of up to ±10% at deeper DOIs. In addition, we found that cartesianDETECT2 is computationally five times more efficient than MANTIS. Findings indicate that DOI estimates can be extracted from a double-Gaussian model of the detector response. We observed that our model predicts DOI in pixelated scintillator detectors reasonably well.

  12. The image of motor units architecture in the mechanomyographic signal during the single motor unit contraction: in vivo and simulation study.

    PubMed

    Kaczmarek, P; Celichowski, J; Drzymała-Celichowska, H; Kasiński, A

    2009-08-01

    Mechanomyographic (MMG) signal analysis was performed during single motor unit (MU) contractions of the rat medial gastrocnemius muscle. The MMG was recorded as muscle surface displacement using a laser distance sensor. The profiles of the MMG signal allowed the signals of particular MUs to be categorized into three classes. Class MMG-P (positive) comprises MUs whose MMG signal is similar to the force signal profile, where the distance between the muscle surface and the laser sensor increases with increasing force. Class MMG-N (negative) also has an MMG profile similar to the force profile; however, the MMG is inverted with respect to the force signal, and the distance measured by the laser sensor decreases with increasing force. The third class, MMG-M (mixed), is characterized by an MMG that initially increases with increasing force and, once the force exceeds a certain level, starts to decrease towards negative values. A semi-pennate muscle model is proposed, enabling estimation of the MMG generated by a single MU depending on its localization. The analysis showed that in a semi-pennate muscle the localization of the MU and the relative position of the laser distance sensor determine the MMG profile and amplitude. Thus, the proposed classification of the MMG recordings is not related to the physiological types of MUs, but only to the MU localization and the sensor position. When the distance sensor is located over the middle of the muscle belly, a part of the muscle fibers have endings near the location of the sensor beam. For MUs of class MMG-N, the deflection of the muscle surface proximal to the sensor mainly influences the MMG recording, whereas for MUs of class MMG-P it is mainly the distal muscle surface deformation; for MUs of class MMG-M, the effects of deformation within the proximal and distal muscle surfaces overlap. The model was verified with experimental recordings, and its responses are consistent with the experimental data.

  13. Energy dispersive CdTe and CdZnTe detectors for spectral clinical CT and NDT applications

    NASA Astrophysics Data System (ADS)

    Barber, W. C.; Wessel, J. C.; Nygard, E.; Iwanczyk, J. S.

    2015-06-01

    We are developing room temperature compound semiconductor detectors for applications in energy-resolved high-flux single x-ray photon-counting spectral computed tomography (CT), including functional imaging with nanoparticle contrast agents for medical applications and non-destructive testing (NDT) for security applications. Energy-resolved photon-counting can provide reduced patient dose through optimal energy weighting for a particular imaging task in CT, functional contrast enhancement through spectroscopic imaging of metal nanoparticles in CT, and compositional analysis through multiple basis function material decomposition in CT and NDT. These applications produce high input count rates from an x-ray generator delivered to the detector. Therefore, in order to achieve energy-resolved single photon counting in these applications, a high output count rate (OCR) for an energy-dispersive detector must be achieved at the required spatial resolution and across the required dynamic range for the application. The required performance in terms of the OCR, spatial resolution, and dynamic range must be obtained with sufficient field of view (FOV) for the application thus requiring the tiling of pixel arrays and scanning techniques. Room temperature cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) compound semiconductors, operating as direct conversion x-ray sensors, can provide the required speed when connected to application specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel provided the sensors are designed for rapid signal formation across the x-ray energy ranges of the application at the required energy and spatial resolutions, and at a sufficiently high detective quantum efficiency (DQE). We have developed high-flux energy-resolved photon-counting x-ray imaging array sensors using pixellated CdTe and CdZnTe semiconductors optimized for clinical CT and security NDT. 
We have also fabricated high-flux ASICs with a two dimensional (2D) array of inputs for readout from the sensors. The sensors are guard ring free and have a 2D array of pixels and can be tiled in 2D while preserving pixel pitch. The 2D ASICs have four energy bins with a linear energy response across sufficient dynamic range for clinical CT and some NDT applications. The ASICs can also be tiled in 2D and are designed to fit within the active area of the sensors. We have measured several important performance parameters including: the output count rate (OCR) in excess of 20 million counts per second per square mm with a minimum loss of counts due to pulse pile-up, an energy resolution of 7 keV full width at half-maximum (FWHM) across the entire dynamic range, and a noise floor about 20 keV. This is achieved by directly interconnecting the ASIC inputs to the pixels of the CdZnTe sensors incurring very little input capacitance to the ASICs. We present measurements of the performance of the CdTe and CdZnTe sensors including the OCR, FWHM energy resolution, noise floor, as well as the temporal stability and uniformity under the rapidly varying high flux expected in CT and NDT applications.

  14. Energy dispersive CdTe and CdZnTe detectors for spectral clinical CT and NDT applications

    PubMed Central

    Barber, W. C.; Wessel, J. C.; Nygard, E.; Iwanczyk, J. S.

    2014-01-01

    We are developing room temperature compound semiconductor detectors for applications in energy-resolved high-flux single x-ray photon-counting spectral computed tomography (CT), including functional imaging with nanoparticle contrast agents for medical applications and non destructive testing (NDT) for security applications. Energy-resolved photon-counting can provide reduced patient dose through optimal energy weighting for a particular imaging task in CT, functional contrast enhancement through spectroscopic imaging of metal nanoparticles in CT, and compositional analysis through multiple basis function material decomposition in CT and NDT. These applications produce high input count rates from an x-ray generator delivered to the detector. Therefore, in order to achieve energy-resolved single photon counting in these applications, a high output count rate (OCR) for an energy-dispersive detector must be achieved at the required spatial resolution and across the required dynamic range for the application. The required performance in terms of the OCR, spatial resolution, and dynamic range must be obtained with sufficient field of view (FOV) for the application thus requiring the tiling of pixel arrays and scanning techniques. Room temperature cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) compound semiconductors, operating as direct conversion x-ray sensors, can provide the required speed when connected to application specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel provided the sensors are designed for rapid signal formation across the x-ray energy ranges of the application at the required energy and spatial resolutions, and at a sufficiently high detective quantum efficiency (DQE). We have developed high-flux energy-resolved photon-counting x-ray imaging array sensors using pixellated CdTe and CdZnTe semiconductors optimized for clinical CT and security NDT. 
    We have also fabricated high-flux ASICs with a two dimensional (2D) array of inputs for readout from the sensors. The sensors are guard ring free and have a 2D array of pixels and can be tiled in 2D while preserving pixel pitch. The 2D ASICs have four energy bins with a linear energy response across sufficient dynamic range for clinical CT and some NDT applications. The ASICs can also be tiled in 2D and are designed to fit within the active area of the sensors. We have measured several important performance parameters including: the output count rate (OCR) in excess of 20 million counts per second per square mm with a minimum loss of counts due to pulse pile-up, an energy resolution of 7 keV full width at half-maximum (FWHM) across the entire dynamic range, and a noise floor about 20 keV. This is achieved by directly interconnecting the ASIC inputs to the pixels of the CdZnTe sensors incurring very little input capacitance to the ASICs. We present measurements of the performance of the CdTe and CdZnTe sensors including the OCR, FWHM energy resolution, noise floor, as well as the temporal stability and uniformity under the rapidly varying high flux expected in CT and NDT applications. PMID:25937684

  15. Energy dispersive CdTe and CdZnTe detectors for spectral clinical CT and NDT applications.

    PubMed

    Barber, W C; Wessel, J C; Nygard, E; Iwanczyk, J S

    2015-06-01

    We are developing room temperature compound semiconductor detectors for applications in energy-resolved high-flux single x-ray photon-counting spectral computed tomography (CT), including functional imaging with nanoparticle contrast agents for medical applications and non destructive testing (NDT) for security applications. Energy-resolved photon-counting can provide reduced patient dose through optimal energy weighting for a particular imaging task in CT, functional contrast enhancement through spectroscopic imaging of metal nanoparticles in CT, and compositional analysis through multiple basis function material decomposition in CT and NDT. These applications produce high input count rates from an x-ray generator delivered to the detector. Therefore, in order to achieve energy-resolved single photon counting in these applications, a high output count rate (OCR) for an energy-dispersive detector must be achieved at the required spatial resolution and across the required dynamic range for the application. The required performance in terms of the OCR, spatial resolution, and dynamic range must be obtained with sufficient field of view (FOV) for the application thus requiring the tiling of pixel arrays and scanning techniques. Room temperature cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) compound semiconductors, operating as direct conversion x-ray sensors, can provide the required speed when connected to application specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel provided the sensors are designed for rapid signal formation across the x-ray energy ranges of the application at the required energy and spatial resolutions, and at a sufficiently high detective quantum efficiency (DQE). We have developed high-flux energy-resolved photon-counting x-ray imaging array sensors using pixellated CdTe and CdZnTe semiconductors optimized for clinical CT and security NDT. 
We have also fabricated high-flux ASICs with a two-dimensional (2D) array of inputs for readout from the sensors. The sensors are guard ring free, have a 2D array of pixels, and can be tiled in 2D while preserving pixel pitch. The 2D ASICs have four energy bins with a linear energy response across sufficient dynamic range for clinical CT and some NDT applications. The ASICs can also be tiled in 2D and are designed to fit within the active area of the sensors. We have measured several important performance parameters, including: an OCR in excess of 20 million counts per second per square mm with minimal count loss due to pulse pile-up, an energy resolution of 7 keV full width at half maximum (FWHM) across the entire dynamic range, and a noise floor of about 20 keV. This is achieved by directly interconnecting the ASIC inputs to the pixels of the CdZnTe sensors, incurring very little input capacitance at the ASICs. We present measurements of the performance of the CdTe and CdZnTe sensors including the OCR, FWHM energy resolution, and noise floor, as well as the temporal stability and uniformity under the rapidly varying high flux expected in CT and NDT applications.

  16. The lucky image-motion prediction for simple scene observation based soft-sensor technology

    NASA Astrophysics Data System (ADS)

    Li, Yan; Su, Yun; Hu, Bin

    2015-08-01

    High resolution is important for Earth remote sensors, while vibration of the sensor platforms is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies to solve this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes utilizing soft-sensor technology for image-motion prediction and focuses on algorithm optimization for imaging image-motion prediction. Simulation results indicate that the improved lucky image-motion stabilization algorithm, combining a Back Propagation neural network (BP NN) and a support vector machine (SVM), is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the training speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.

  17. Fusion: ultra-high-speed and IR image sensors

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Dao, V. T. S.; Nguyen, Quang A.; Kimata, M.

    2015-08-01

    Most targets of ultra-high-speed video cameras operating at more than 1 Mfps, such as combustion, crack propagation, collision, plasma, spark discharge, an air bag in a car accident and a tire under sudden braking, generate sudden heat. Researchers in these fields require tools to measure high-speed motion and heat simultaneously. Ultra-high frame rate imaging is achieved by an in-situ storage image sensor: each pixel of the sensor is equipped with multiple memory elements to record a series of image signals simultaneously at all pixels, and the signals stored in each pixel are read out after the image capture. In 2002, we developed an in-situ storage image sensor operating at 1 Mfps [1]. However, the fill factor of the sensor was only 15% due to a light shield covering the wide in-situ storage area. Therefore, in 2011, we developed a backside-illuminated (BSI) in-situ storage image sensor to increase the sensitivity, with a 100% fill factor and a very high quantum efficiency [2]. The sensor also achieved a much higher frame rate, 16.7 Mfps, thanks to the greater wiring freedom on the front side [3]. The BSI structure has a further advantage: it poses fewer difficulties in attaching an additional layer, such as a scintillator, to the backside. This paper proposes the development of an ultra-high-speed IR image sensor that combines advanced nano-technologies for IR imaging with the in-situ storage technology for ultra-high-speed imaging, and discusses issues in the integration.

  18. Colorizing SENTINEL-1 SAR Images Using a Variational Autoencoder Conditioned on SENTINEL-2 Imagery

    NASA Astrophysics Data System (ADS)

    Schmitt, M.; Hughes, L. H.; Körner, M.; Zhu, X. X.

    2018-05-01

    In this paper, we have shown an approach for the automatic colorization of SAR backscatter images, which are usually provided in the form of single-channel gray-scale imagery. Using a deep generative model proposed for the purpose of photograph colorization and a Lab-space-based SAR-optical image fusion formulation, we are able to predict artificial color SAR images, which disclose much more information to the human interpreter than the original SAR data. Future work will aim at further adaptation of the employed procedure to our special case of multi-sensor remote sensing imagery. Furthermore, we will investigate whether the low-level representations learned intrinsically by the deep network can be used for SAR image interpretation in an end-to-end manner.

  19. Electrochemical imaging of cells and tissues

    PubMed Central

    Lin, Tzu-En; Rapino, Stefania; Girault, Hubert H.

    2018-01-01

    The technological and experimental progress in electrochemical imaging of biological specimens is discussed with a view on potential applications for skin cancer diagnostics, reproductive medicine and microbial testing. The electrochemical analysis of single cell activity inside cell cultures, 3D cellular aggregates and microtissues is based on the selective detection of electroactive species involved in biological functions. Electrochemical imaging strategies, based on nano/micrometric probes scanning over the sample and on sensor array chips, respectively, can be made sensitive and selective, and are not affected by the optical interference that limits many other microscopy techniques. The recent developments in microfabrication, electronics and cell culturing/tissue engineering have led to affordable and fast-sampling electrochemical imaging platforms. We believe that the topics discussed herein demonstrate the applicability of electrochemical imaging devices in many areas related to cellular functions. PMID:29899947

  20. Perspective: Advanced particle imaging

    DOE PAGES

    Chandler, David W.; Houston, Paul L.; Parker, David H.

    2017-05-26

    Since the first ion imaging experiment demonstrated the capability of collecting an image of the photofragments from a unimolecular dissociation event and analyzing that image to obtain the three-dimensional velocity distribution of the fragments, the efficacy and breadth of application of the ion imaging technique have continued to improve and grow. With the addition of velocity mapping, ion/electron centroiding, and slice imaging techniques, the versatility and velocity resolution have become unmatched. Recent improvements in molecular beam, laser, sensor, and computer technology are enabling even more advanced particle imaging experiments, and eventually we can expect multi-mass imaging with co-variance and full coincidence capability on a single-shot basis at repetition rates in the kilohertz range. This progress should further enable “complete” experiments, the holy grail of molecular dynamics, where all quantum numbers of reactants and products of a bimolecular scattering event are fully determined and even under our control.

  1. Design of intelligent vehicle control system based on single chip microcomputer

    NASA Astrophysics Data System (ADS)

    Zhang, Congwei

    2018-06-01

    The smart car microprocessor uses the KL25ZV128VLK4 from the Freescale series of single-chip microcomputers. The image sampling sensor is the CMOS digital camera OV7725. The captured track data are processed by the corresponding algorithm to obtain track sideline information. At the same time, pulse-width modulation (PWM) is used to control the motor and servo movements, and motor speed control and servo steering control are realized with a digital incremental PID algorithm. In the project design, the IAR Embedded Workbench IDE is used as the software development platform to program and debug the micro-control module, camera image processing module, hardware power distribution module, and motor drive and servo control module, completing the design of the intelligent car control system.
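
    The digital incremental PID mentioned above can be illustrated with a short sketch. This is a generic velocity-form PID loop driving a toy first-order motor model, not the paper's code; the gains and the motor dynamics are invented for illustration.

```python
# Incremental (velocity-form) PID, as commonly used for motor speed
# control on small microcontrollers. Gains here are illustrative.

class IncrementalPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0  # error at step k-1
        self.e2 = 0.0  # error at step k-2

    def step(self, setpoint, measured):
        """Return the *change* in actuator output (e.g. PWM duty)."""
        e = setpoint - measured
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        return du

# Example: drive a hypothetical first-order motor model toward 100 rpm.
pid = IncrementalPID(kp=0.4, ki=0.2, kd=0.05)
speed, duty = 0.0, 0.0
for _ in range(200):
    duty += pid.step(100.0, speed)
    speed += 0.1 * (duty - speed)   # toy motor dynamics
```

    The velocity form only outputs a delta, so the actuator value itself acts as the integrator; this avoids separate anti-windup state and suits incremental PWM registers.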

  2. Advanced sensor-simulation capability

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.

    1990-09-01

    This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (VISIBLE/INFRARED SENSOR TRADES, ANALYSES, AND SIMULATIONS) combines classical image processing techniques with detailed sensor models to produce static and time dependent simulations of a variety of sensor systems including imaging, tracking, and point target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring 2-dimensional array sensors which can be used for either imaging or point source detection.

  3. Fundamental performance differences between CMOS and CCD imagers: Part II

    NASA Astrophysics Data System (ADS)

    Janesick, James; Andrews, James; Tower, John; Grygon, Mark; Elliott, Tom; Cheng, John; Lesser, Michael; Pinter, Jeff

    2007-09-01

    A new class of CMOS imagers that compete with scientific CCDs is presented. The sensors are based on deep depletion backside illuminated technology to achieve high near infrared quantum efficiency and low pixel cross-talk. The imagers deliver very low read noise suitable for single photon counting and Fano-noise-limited soft x-ray applications. Digital correlated double sampling signal processing, necessary to achieve low read noise performance, is analyzed and demonstrated for CMOS use. Detailed experimental data products generated by different pixel architectures (notably 3TPPD, 5TPPD and 6TPG designs) are presented, including read noise, charge capacity, dynamic range, quantum efficiency, charge collection and transfer efficiency, and dark current generation. Radiation damage data taken for the imagers are also reported.

  4. Flexible phosphor sensors: a digital supplement or option to rigid sensors.

    PubMed

    Glazer, Howard S

    2014-01-01

    An increasing number of dental practices are upgrading from film radiography to digital radiography, for reasons that include faster image processing, easier image access, better patient education, enhanced data storage, and improved office productivity. Most practices that have converted to digital technology use rigid, or direct, sensors. Another digital option is flexible phosphor sensors, also called indirect sensors or phosphor storage plates (PSPs). Flexible phosphor sensors can be advantageous for use with certain patients who may be averse to direct sensors, and they can deliver a larger image area. Additionally, sensor cost for replacement PSPs is considerably lower than for hard sensors. As such, flexible phosphor sensors appear to be a viable supplement or option to direct sensors.

  5. Solar Weather Ice Monitoring Station (SWIMS). A low cost, extreme/harsh environment, solar powered, autonomous sensor data gathering and transmission system

    NASA Astrophysics Data System (ADS)

    Chetty, S.; Field, L. A.

    2013-12-01

    The Arctic Ocean's continuing decrease of summer-time ice is related to rapidly diminishing multi-year ice, an effect of climate change. Ice911 Research aims to develop environmentally respectful materials that, when deployed, increase the albedo, enhancing the formation and/or preservation of multi-year ice. Small-scale deployments using various materials have been carried out in Canada, California's Sierra Nevada Mountains and a pond in Minnesota to test the albedo performance and environmental characteristics of these materials. SWIMS is a sophisticated autonomous sensor system being developed to measure albedo, weather, water temperature and other environmental parameters. SWIMS employs low-cost, high-accuracy/precision sensors, high-resolution cameras, and an extreme-environment command and data handling computer system using satellite and terrestrial wireless communication. The entire system is solar powered, with redundant battery backup, on a floating buoy platform engineered for low-temperature (-40 °C) and high-wind conditions. The system also incorporates tilt sensors, sonar-based ice thickness sensors and a weather station. To keep costs low, each SWIMS unit measures incoming and reflected radiation in the four quadrants around the buoy, allowing data from the four sets of sensors, the cameras, the weather station and the water temperature probe to be collected and transmitted by a single on-board solar-powered computer. This presentation covers the technical, logistical and cost challenges in designing, developing and deploying these stations in remote, extreme environments.
    [Figure captions: setting sun at the SWIMS station, captured by camera #3; one of the images captured by SWIMS camera #4.]

  6. Using Imaging Spectrometry to Approach Crop Classification from a Water Management Perspective

    NASA Astrophysics Data System (ADS)

    Shivers, S.; Roberts, D. A.

    2017-12-01

    We use hyperspectral remote sensing imagery to classify crops in the Central Valley of California at a level that would be of use to water managers. In California, irrigated agriculture uses 80 percent of the state's water supply, with water application rates varying by as much as a factor of three depending on crop type. Therefore, accurate water resource accounting depends on accurate crop mapping. While on-the-ground crop accounting at the county level requires significant labor and time inputs, remote sensing has the potential to map crops over a greater spatial area at more frequent time intervals. Specifically, imaging spectrometry, with its wide spectral range, can detect small spectral differences at the field-level scale that may be indiscernible to multispectral sensors such as Landsat. In this study, crops in the Central Valley were classified into nine categories defined and used by the California Department of Water Resources as having similar water usages. We used the random forest classifier on Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) imagery from June 2013, 2014 and 2015 to analyze the accuracy of multi-temporal images and to investigate the extent to which cropping patterns changed over the course of the 2013-2015 drought. Initial results show accuracies of over 90% for all three years, indicating that hyperspectral imagery has the potential to identify crops by water use group at a single time step with a single sensor, allowing cropping patterns to be monitored in anticipation of water needs.
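
    As a library-free illustration of the per-pixel classification idea (the study itself uses a random forest on AVIRIS spectra), the sketch below assigns each pixel spectrum to the class with the nearest mean spectrum. The class names and reflectance values are hypothetical.

```python
# Toy per-pixel spectral classification: nearest class-mean spectrum.
# A stand-in for the random forest used in the study; all spectra and
# class names below are invented for illustration.

def nearest_centroid(spectrum, centroids):
    """Return the class whose mean spectrum is closest (Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist2(spectrum, centroids[c]))

# Hypothetical mean reflectance in four bands per water-use class.
centroids = {
    "orchard":  [0.05, 0.09, 0.40, 0.30],
    "row_crop": [0.08, 0.15, 0.55, 0.45],
    "fallow":   [0.20, 0.25, 0.30, 0.35],
}

pixel = [0.07, 0.14, 0.52, 0.44]
print(nearest_centroid(pixel, centroids))  # -> row_crop
```

    A random forest replaces the single distance rule with an ensemble of decision trees, but the per-pixel loop over spectra is the same.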

  7. Two micron pore size MCP-based image intensifiers

    NASA Astrophysics Data System (ADS)

    Glesener, John; Estrera, Joseph

    2010-02-01

    Image intensifiers (I2) have many advantages as detectors. They offer single-photon sensitivity in an imaging format, they are light in weight, and analog I2 systems can operate for hours on a single AA battery. Their light output is matched to the peak color sensitivity of the human eye. Until recent developments in CMOS sensors, they were also among the highest-resolution sensors available. The closest all-solid-state alternative, the Texas Instruments Impactron chip, comes in a 1 megapixel format; depending on the level of integration, an Impactron-based system can consume 20 to 40 watts in a system configuration. In continuing to invest in I2 technology, L-3 EOS determined that increasing I2 resolution merited high priority. Increased I2 resolution offers the system user two desirable options: 1) increased detection and identification ranges while maintaining the field of view (FOV), or 2) an increased FOV while maintaining the original system resolution. One of the areas where an investment in resolution is being made is the microchannel plate (MCP). Incorporating a 2 micron MCP into an image tube has the potential to increase the system resolution of currently fielded systems. Both inverting and non-inverting configurations are being evaluated: inverting tubes are being characterized in night vision goggles (NVG) and sights, while the non-inverting 2 micron tube is being characterized for high-resolution I2CMOS camera applications. Preliminary measurements show an increase in MTF over a standard 5 micron pore size, 6 micron pitch plate. Current results will be presented.

  8. Monitoring of Building Construction by 4D Change Detection Using Multi-temporal SAR Images

    NASA Astrophysics Data System (ADS)

    Yang, C. H.; Pang, Y.; Soergel, U.

    2017-05-01

    Monitoring urban changes is important for city management, urban planning, updating of cadastral maps, etc. In contrast to conventional field surveys, which are usually expensive and slow, remote sensing techniques are fast and cost-effective alternatives. Spaceborne synthetic aperture radar (SAR) sensors provide radar images captured rapidly over vast areas at fine spatiotemporal resolution. In addition, these active microwave sensors are capable of day-and-night vision and are independent of weather conditions. These advantages make multi-temporal SAR images suitable for scene monitoring. Persistent scatterer interferometry (PSI) detects and analyses PS points, which are characterized by strong, stable, and coherent radar signals throughout a SAR image sequence and can be regarded as substructures of buildings in built-up cities. Attributes of PS points, for example deformation velocities, are derived and used for further analysis. Based on PSI, a 4D change detection technique has been developed to detect the disappearance and emergence of PS points (3D) at specific times (1D). In this paper, we apply this 4D technique to the centre of Berlin, Germany, to investigate its feasibility for construction monitoring. The aims of the three case studies are to monitor construction progress, business districts, and single buildings, respectively. The disappearing and emerging substructures of the buildings are successfully recognized along with their occurrence times. The changed substructures are then clustered into single construction segments based on DBSCAN clustering and α-shape outlining for object-based analysis. Compared with the ground truth, these spatiotemporal results have proven able to provide more detailed information for construction monitoring.
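
    The DBSCAN step that groups changed PS points into construction segments can be sketched as follows. This is a minimal textbook DBSCAN over 2D points; eps, min_pts and the sample coordinates are chosen for illustration, not taken from the paper.

```python
# Minimal DBSCAN: density-connected points form clusters, isolated
# points are labeled noise (-1). Parameters and data are illustrative.

def dbscan(points, eps, min_pts):
    """Return one label per point: 0, 1, ... for clusters, -1 for noise."""
    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1           # noise (may later become a border point)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point: absorbed, not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:   # core point: expand the cluster
                queue.extend(jn)
    return labels

pts = [(0, 0), (0.5, 0), (0, 0.5), (5, 5), (5.2, 5.1), (5.1, 4.9), (20, 20)]
print(dbscan(pts, eps=1.0, min_pts=2))  # -> [0, 0, 0, 1, 1, 1, -1]
```

    In the paper's setting, the points would be the image coordinates of disappearing or emerging PS points, and each resulting cluster would be outlined (e.g. with an α-shape) as one construction segment.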

  9. Visualization of Concrete Slump Flow Using the Kinect Sensor

    PubMed Central

    Park, Minbeom

    2018-01-01

    Workability is regarded as one of the important parameters of high-performance concrete, and monitoring it is essential for concrete quality management at construction sites. The conventional workability test methods are based on lengths and times measured with a ruler and a stopwatch and, as such, inevitably involve human error. In this paper, we propose a 4D slump test method based on digital measurement and data processing as a novel concrete workability test. After acquiring the dynamically changing 3D surface of fresh concrete with a 3D depth sensor during the slump flow test, the stream of images is processed with the proposed 4D slump processing algorithm and the results are compressed into a single 4D slump image. This image represents the dynamically spreading cross-section of fresh concrete along the time axis. From the 4D slump image, it is possible to determine the slump flow diameter, slump flow time, and slump height at any location simultaneously. The proposed 4D slump test will be able to stimulate research related to concrete flow simulation and concrete rheology by providing spatiotemporal measurement data of concrete flow. PMID:29510510

  10. Visualization of Concrete Slump Flow Using the Kinect Sensor.

    PubMed

    Kim, Jung-Hoon; Park, Minbeom

    2018-03-03

    Workability is regarded as one of the important parameters of high-performance concrete, and monitoring it is essential for concrete quality management at construction sites. The conventional workability test methods are based on lengths and times measured with a ruler and a stopwatch and, as such, inevitably involve human error. In this paper, we propose a 4D slump test method based on digital measurement and data processing as a novel concrete workability test. After acquiring the dynamically changing 3D surface of fresh concrete with a 3D depth sensor during the slump flow test, the stream of images is processed with the proposed 4D slump processing algorithm and the results are compressed into a single 4D slump image. This image represents the dynamically spreading cross-section of fresh concrete along the time axis. From the 4D slump image, it is possible to determine the slump flow diameter, slump flow time, and slump height at any location simultaneously. The proposed 4D slump test will be able to stimulate research related to concrete flow simulation and concrete rheology by providing spatiotemporal measurement data of concrete flow.
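
    One ingredient of the 4D slump image, the slump flow diameter extracted per frame and tracked over time, can be sketched as follows. The frames here are synthetic 1D height cross-sections standing in for the Kinect's 3D surfaces; the threshold, pixel size and height values are all invented for illustration.

```python
# Collapse a stream of depth frames into a "flow diameter vs. time"
# profile, one slice of the proposed 4D slump image. Synthetic data.

def flow_diameter(heights, pixel_mm, threshold_mm=1.0):
    """Width of the region where the concrete height exceeds a threshold."""
    idx = [i for i, h in enumerate(heights) if h > threshold_mm]
    return (idx[-1] - idx[0] + 1) * pixel_mm if idx else 0.0

# Synthetic spreading cone: wider and flatter in each successive frame.
frames = [
    [0, 0, 40, 80, 40, 0, 0],
    [0, 20, 50, 60, 50, 20, 0],
    [10, 30, 40, 45, 40, 30, 10],
]
profile = [flow_diameter(f, pixel_mm=50.0) for f in frames]
print(profile)  # -> [150.0, 250.0, 350.0], i.e. the flow keeps spreading
```

    The full method does this radially over the 3D surface and stacks the cross-sections along the time axis, so diameter, flow time and height can all be read from one image.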

  11. Spaceborne imaging radar research in the 90's

    NASA Technical Reports Server (NTRS)

    Elachi, Charles

    1986-01-01

    The imaging radar experiments on SEASAT and on the space shuttle (SIR-A and SIR-B) have led to wide interest in the use of spaceborne imaging radars in Earth and planetary sciences. The radar sensors provide unique and complementary information to that acquired with visible and infrared imagers. This includes subsurface imaging in arid regions, all-weather observation of ocean surface dynamic phenomena, structural mapping, soil moisture mapping, stereo imaging and the resulting topographic mapping. However, experiments up to now have exploited only a very limited range of the generic capability of radar sensors. With planned sensor developments in the late 80's and early 90's, a quantum jump will be made in our ability to fully exploit the potential of these sensors. These developments include: multiparameter research sensors such as SIR-C and X-SAR, long-term and global monitoring sensors such as ERS-1, JERS-1, EOS, Radarsat, GLORI and the spaceborne sounder, planetary mapping sensors such as the Magellan and Cassini/Titan mappers, topographic three-dimensional imagers such as the scanning radar altimeter, and three-dimensional rain mapping. These sensors and their associated research are briefly described.

  12. Star centroiding error compensation for intensified star sensors.

    PubMed

    Jiang, Jie; Xiong, Kun; Yu, Wenbo; Yan, Jinyun; Zhang, Guangjun

    2016-12-26

    A star sensor provides high-precision attitude information by capturing stellar images; however, the traditional star sensor has poor dynamic performance, which is attributed to its low sensitivity. In an intensified star sensor, an image intensifier is utilized to improve the sensitivity and thereby the dynamic performance of the star sensor. However, the image intensifier decreases the star centroiding accuracy, which in turn degrades the attitude measurement precision of the star sensor. A star centroiding error compensation method for intensified star sensors is proposed in this paper to reduce these effects. First, the imaging model of the intensified detector, which includes the deformation parameter of the optical fiber panel, is established based on orthographic projection through an analysis of the errors introduced by the image intensifier. Then, the position errors at the target points are obtained from the model by using the Levenberg-Marquardt (LM) optimization method. Finally, a nearest trigonometric interpolation method is presented to compensate for the centroiding error at arbitrary positions on the image plane. Laboratory calibration results and night sky experiments show that the compensation method effectively eliminates the error introduced by the image intensifier, remarkably improving the precision of intensified star sensors.
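
    The quantity being corrected here is the star spot centroid, conventionally computed as an intensity-weighted center of mass over a small window. A minimal sketch with a synthetic 5x5 spot (the window values are invented; the paper's contribution is the compensation applied on top of such centroids):

```python
# Intensity-weighted centroid (center of mass) of a star spot window.

def centroid(window):
    """Return the (x, y) center of mass of a 2D intensity window."""
    total = sum(sum(row) for row in window)
    cx = sum(x * v for row in window for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(window) for v in row) / total
    return cx, cy

spot = [
    [0, 1,  2, 1, 0],
    [1, 4,  8, 4, 1],
    [2, 8, 16, 8, 2],
    [1, 4,  8, 4, 1],
    [0, 1,  2, 1, 0],
]
print(centroid(spot))  # symmetric spot -> (2.0, 2.0)
```

    The intensifier's fiber-panel distortion shifts these centroids systematically; the proposed method models and subtracts that shift.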

  13. Beam imaging sensor

    DOEpatents

    McAninch, Michael D.; Root, Jeffrey J.

    2016-07-05

    The present invention relates generally to the field of sensors for beam imaging and, in particular, to a new and useful beam imaging sensor for use in determining, for example, the power density distribution of a beam including, but not limited to, an electron beam or an ion beam. In one embodiment, the beam imaging sensor of the present invention comprises, among other items, a circumferential slit that is either circular, elliptical or polygonal in nature.

  14. Dynamical Nuclear Magnetic Resonance Imaging of Micron-scale Liquids

    NASA Astrophysics Data System (ADS)

    Sixta, Aimee; Choate, Alexandra; Maeker, Jake; Bogat, Sophia; Tennant, Daniel; Mozaffari, Shirin; Markert, John

    We report our efforts in the development of Nuclear Magnetic Resonance Force Microscopy (NMRFM) for dynamical imaging of liquid media at the micron scale. Our probe contains microfluidic samples sealed in thin-walled (µm) quartz tubes, with a micro-oscillator sensor nearby in vacuum to maintain its high mechanical resonance quality factor. Using 10 µm spherical permalloy magnets at the oscillator tips, a 3D T1-resolved image of spin density can be obtained by reconstruction from our magnetostatics-modelled resonance slices; as part of this effort, we are exploring single-shot T1 measurements for faster dynamical imaging. We aim to further enhance imaging by using a 2 ω technique to eliminate artifact signals during the cyclic inversion of nuclear spins. The ultimate intent of these efforts is to perform magnetic resonance imaging of individual biological cells.

  15. Energy deposition measurements of single 1H, 4He and 12C ions of therapeutic energies in a silicon pixel detector

    NASA Astrophysics Data System (ADS)

    Gehrke, T.; Burigo, L.; Arico, G.; Berke, S.; Jakubek, J.; Turecek, D.; Tessonnier, T.; Mairani, A.; Martišíková, M.

    2017-04-01

    In the field of ion-beam radiotherapy and space applications, measurements of the energy deposition of single ions in thin layers are of interest for dosimetry and imaging. The present work investigates the capability of the pixelated detector Timepix to measure the energy deposition of single ions from therapeutic proton, helium-ion and carbon-ion beams in a 300 μm-thick sensitive silicon layer. For twelve different incident beams, the measured energy deposition distributions of single ions are compared to the expected energy deposition spectra, predicted by detailed Monte Carlo simulations using the FLUKA code. A methodology for the analysis of the measured data is introduced in order to identify and reject signals that are either degraded or caused by multiple overlapping ions. After a newly proposed linear recalibration, the energy deposition measurements are in good agreement with the simulations: the twelve measured mean energy depositions, between 0.72 MeV/mm and 56.63 MeV/mm in a partially depleted silicon sensor, deviate by no more than 7% from the corresponding simulated values. Measurements of energy depositions above 10 MeV/mm with a fully depleted sensor are found to suffer from saturation effects due to an excessively high per-pixel signal. The utilization of thinner sensors, in which a lower signal is induced, could further improve the performance of the Timepix detector for energy deposition measurements.
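
    The linear recalibration step can be sketched as an ordinary least-squares fit mapping measured to simulated mean energy depositions, E_true ≈ a·E_meas + b. The paired values below are invented, not the paper's twelve measurements.

```python
# Least-squares linear recalibration of measured energy depositions
# against simulated reference values. All data here are hypothetical.

def linear_fit(xs, ys):
    """Ordinary least-squares slope a and intercept b for y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

measured  = [0.9, 5.0, 11.8, 30.5, 52.0]   # MeV/mm, hypothetical
simulated = [0.72, 4.6, 11.0, 28.9, 49.5]  # MeV/mm, hypothetical
a, b = linear_fit(measured, simulated)
recalibrated = [a * e + b for e in measured]
```

    Applying the fitted (a, b) to every subsequent measurement removes the systematic scale and offset error in one step, without touching the per-pixel calibration.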

  16. Development of a novel omnidirectional magnetostrictive transducer for plate applications

    NASA Astrophysics Data System (ADS)

    Vinogradov, Sergey; Cobb, Adam; Bartlett, Jonathan; Udagawa, Youichi

    2018-04-01

    The application of guided waves to the testing of plate-type structures has recently been investigated by a number of research groups because guided waves can detect corrosion in remote and hidden areas. Guided wave sensors for plate applications can be either directed (i.e., the waves propagate in a single direction) or omnidirectional, and each type has certain advantages and disadvantages. Omnidirectional sensors can inspect large areas from a single location, but it is challenging to determine where a feature is located; conversely, directed sensors can precisely locate an indication, but have no sensitivity to flaws away from the wave propagation direction. This work describes a newly developed sensor that combines the strengths of both sensor types to create a novel omnidirectional transducer. The transduction is based on a custom magnetostrictive transducer (MsT). In this new probe design, a directed, plate-application MsT with known characteristics was incorporated into an automated scanner that rotates the directed MsT for data collection at regular angular intervals. Coupling of the transducer to the plate is accomplished using a shear wave couplant. The resulting array of data is used for compiling B-scans and for imaging with a synthetic aperture focusing technique (SAFT). The performance of the probe was evaluated on a 0.5-inch thick carbon steel plate mockup with a surface area of over 100 square feet. The mockup had a variety of known anomalies representing localized and distributed pitting corrosion, gradual wall thinning, and notches of different depths. Experimental data was also acquired using the new probe on a retired storage tank with known corrosion damage. The performance of the new sensor and its limitations are discussed together with general directions in technology development.
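
    SAFT imaging is, at its core, delay-and-sum focusing: for each candidate pixel, every scan position contributes its A-scan sample at the corresponding round-trip delay, so contributions add coherently only at a true reflector. A minimal sketch with synthetic geometry, wave speed and pulse (all values invented, not from this probe):

```python
# Delay-and-sum SAFT sketch: synthetic A-scans from a single point
# reflector, focused back onto candidate pixels. Units are assumptions.

import math

C = 3.0        # assumed wave speed, mm/us
DT = 0.05      # A-scan sampling interval, us
POSITIONS = [float(x) for x in range(0, 21, 2)]  # scan positions, mm
SCATTERER = (10.0, 15.0)                         # true reflector, mm

def ascan(x):
    """Synthetic A-scan: a narrow Gaussian pulse at the round-trip delay."""
    t0 = 2 * math.hypot(x - SCATTERER[0], SCATTERER[1]) / C
    return [math.exp(-((i * DT - t0) / 0.1) ** 2) for i in range(600)]

SCANS = {x: ascan(x) for x in POSITIONS}

def saft_pixel(px, py):
    """Sum each A-scan at this pixel's round-trip delay (delay-and-sum)."""
    total = 0.0
    for x, s in SCANS.items():
        i = int(round(2 * math.hypot(px - x, py) / C / DT))
        if 0 <= i < len(s):
            total += s[i]
    return total
```

    Evaluated over a grid, saft_pixel peaks near the true reflector at (10, 15) mm and stays near zero at mismatched depths, which is why the rotating-MsT data set can be collapsed into a focused image.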

  17. Estimating Single and Multiple Target Locations Using K-Means Clustering with Radio Tomographic Imaging in Wireless Sensor Networks

    DTIC Science & Technology

    2015-03-26


  18. A System for Video Surveillance and Monitoring CMU VSAM Final Report

    DTIC Science & Technology

    1999-11-30

    motion-based skeletonization, neural network, spatio-temporal salience, patterns inside image chips, spurious motion rejection, model-based... network of sensors with respect to the model coordinate system, computation of 3D geolocation estimates, and graphical display of object hypotheses... algorithms have been developed. The first uses view-dependent visual properties to train a neural network classifier to recognize four classes: single

  19. Optical and Electric Multifunctional CMOS Image Sensors for On-Chip Biosensing Applications.

    PubMed

    Tokuda, Takashi; Noda, Toshihiko; Sasagawa, Kiyotaka; Ohta, Jun

    2010-12-29

    In this review, the concept, design, performance, and a functional demonstration of multifunctional complementary metal-oxide-semiconductor (CMOS) image sensors dedicated to on-chip biosensing applications are described. We developed a sensor architecture that allows flexible configuration of a sensing pixel array consisting of optical and electric sensing pixels, and designed multifunctional CMOS image sensors that can sense light intensity and electric potential or apply a voltage to an on-chip measurement target. We describe the sensors' architecture on the basis of the type of electric measurement or imaging functionalities.

  20. A 100 Mfps image sensor for biological applications

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Shimonomura, Kazuhiro; Nguyen, Anh Quang; Takehara, Kosei; Kamakura, Yoshinari; Goetschalckx, Paul; Haspeslagh, Luc; De Moor, Piet; Dao, Vu Truong Son; Nguyen, Hoang Dung; Hayashi, Naoki; Mitsui, Yo; Inumaru, Hideo

    2018-02-01

Two ultrahigh-speed CCD image sensors with different characteristics were fabricated for applications in advanced scientific measurement apparatuses. The sensors are BSI MCG (backside-illuminated multi-collection-gate) image sensors with multiple collection gates around the center of the front side of each pixel, placed like the petals of a flower. One has five collection gates and one drain gate at the center, and can capture five consecutive frames at 100 Mfps with a pixel count of about 600 kpixels (512 x 576 x 2 pixels). In-pixel signal accumulation is possible for repetitive image capture of reproducible events. The target application is FLIM. The other is equipped with four collection gates, each connected to an in-situ CCD memory with 305 elements, which enables capture of 1,220 (4 x 305) consecutive images at 50 Mfps. The CCD memory is folded and looped, with the first element connected to the last, which also makes in-pixel signal accumulation possible. This sensor is a small test sensor with 32 x 32 pixels. The target applications are imaging TOF MS, pulsed-neutron tomography and dynamic PSP. The paper also briefly explains an expression for the temporal resolution of silicon image sensors that the authors derived theoretically in 2017. It is shown that an image sensor designed on the basis of this theoretical analysis achieves imaging of consecutive frames at a frame interval of 50 ps.

  1. Testing and evaluation of tactical electro-optical sensors

    NASA Astrophysics Data System (ADS)

    Middlebrook, Christopher T.; Smith, John G.

    2002-07-01

As integrated electro-optical sensor payloads (multi-sensors) composed of infrared imagers, visible imagers, and lasers advance in performance, the tests and testing methods must also advance in order to fully evaluate them. Future operational requirements will require integrated sensor payloads to perform missions at longer ranges and with increased targeting accuracy. To meet these requirements, sensors will need advanced imaging algorithms, advanced tracking capability, high-powered lasers, and high-resolution imagers. To meet the U.S. Navy's testing requirements for such multi-sensors, the test and evaluation group in the Night Vision and Chemical Biological Warfare Department at NAVSEA Crane is developing automated testing methods and improved tests to evaluate imaging algorithms, and is procuring advanced testing hardware to measure high-resolution imagers and the line-of-sight stabilization of targeting systems. This paper addresses descriptions of the multi-sensor payloads tested, the testing methods used and under development, and the different types of testing hardware and specific payload tests being developed and used at NAVSEA Crane.

  2. Evaluation of realistic layouts for next generation on-scalp MEG: spatial information density maps.

    PubMed

    Riaz, Bushra; Pfeiffer, Christoph; Schneiderman, Justin F

    2017-08-01

While commercial magnetoencephalography (MEG) systems are the functional neuroimaging state-of-the-art in terms of spatio-temporal resolution, MEG sensors have not changed significantly since the 1990s. Interest in newer sensors that operate at less extreme temperatures, e.g., high critical temperature (high-Tc) SQUIDs, optically-pumped magnetometers, etc., is growing because they enable significant reductions in head-to-sensor standoff (on-scalp MEG). Various metrics quantify the advantages of on-scalp MEG, but a single straightforward one is lacking. Previous works have furthermore been limited to arbitrary and/or unrealistic sensor layouts. We introduce spatial information density (SID) maps for quantitative and qualitative evaluation of sensor arrays. SID maps present the spatial distribution of the information a sensor array extracts from a source space while accounting for relevant source and sensor parameters. We use them in a systematic comparison of three practical on-scalp MEG sensor array layouts (based on high-Tc SQUIDs) and the standard Elekta Neuromag TRIUX magnetometer array. The results strengthen the case for on-scalp, and specifically high-Tc SQUID-based, MEG while providing a path for the practical design of future MEG systems. SID maps are furthermore applicable to arbitrary magnetic sensor technologies and source spaces, and can thus be used for the design and evaluation of sensor arrays for magnetocardiography, magnetic particle imaging, etc.
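
    The SID idea can be caricatured in a few lines: given a leadfield matrix mapping source amplitudes to sensor readings, the per-source Fisher information under an assumed white-Gaussian sensor-noise model gives a crude information-density map. This is only a sketch, not the authors' actual metric; the `sid_map` function and the toy leadfields below are invented for illustration.

```python
import numpy as np

def sid_map(leadfield, noise_std=1.0):
    """Crude spatial-information-density sketch.

    leadfield: (n_sensors, n_sources) matrix mapping unit source amplitude
    to sensor readings. Returns per-source Fisher information (log10 scale)
    for a source-amplitude parameter under independent Gaussian noise:
    I_j = sum_i L_ij**2 / sigma**2.
    """
    info = (leadfield ** 2).sum(axis=0) / noise_std ** 2
    return np.log10(info)

# Toy comparison: halving the head-to-sensor standoff is taken to double
# the field amplitude of a shallow source, quadrupling its information.
L_far = np.array([[1.0], [0.5]])   # two sensors, one source
L_near = 2.0 * L_far               # same geometry, on-scalp standoff
gain = 10 ** (sid_map(L_near) - sid_map(L_far))
print(gain)  # standoff halved -> ~4x information
```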

  3. Study on the special vision sensor for detecting position error in robot precise TIG welding of some key part of rocket engine

    NASA Astrophysics Data System (ADS)

    Zhang, Wenzeng; Chen, Nian; Wang, Bin; Cao, Yipeng

    2005-01-01

The rocket engine is a core component of aerospace transportation and propulsion systems, whose research and development is very important in national defense, aviation and aerospace. A novel vision sensor is developed that can be used for error detection in arc-length control and seam tracking in precise pulsed TIG welding of the extending part of the rocket engine jet tube. The vision sensor has many advantages, such as high-quality imaging, compactness and multiple functions. The optical, mechanical and circuit designs of the vision sensor are described in detail. Utilizing the mirror image of the tungsten electrode in the weld pool, a novel method is proposed to detect, from a single weld image, the arc length and the seam-tracking error of the tungsten electrode relative to the center line of the joint seam. A calculation model for the method is derived from the geometric relation between the tungsten electrode, the weld pool, the mirror image of the electrode in the weld pool, and the joint seam. Methods are given to detect the arc length and the seam-tracking error. Based on analysis of the experimental results, a systematic-error correction based on a linear function is developed to improve the detection precision of the arc length and the seam-tracking error. Experimental results show that the final precision of the system reaches 0.1 mm in detecting the arc length and the seam-tracking error of the tungsten electrode relative to the center line of the joint seam.

  4. Geometric Calibration and Validation of Ultracam Aerial Sensors

    NASA Astrophysics Data System (ADS)

    Gruber, Michael; Schachinger, Bernhard; Muick, Marc; Neuner, Christian; Tschemmernegg, Helfried

    2016-03-01

We present details of the calibration and validation procedure of UltraCam aerial camera systems. Results from the laboratory calibration and from validation flights are presented for both the large-format nadir cameras and the oblique cameras. Thus in this contribution we show results from the UltraCam Eagle and the UltraCam Falcon, both nadir mapping cameras, and the UltraCam Osprey, our oblique camera system. This sensor offers a mapping-grade nadir component together with four oblique camera heads. The geometric processing after the flight mission is covered by the UltraMap software product, so we present details of that workflow as well. The first part consists of the initial post-processing, which combines image information with camera parameters derived from the laboratory calibration. The second part, the traditional automated aerial triangulation (AAT), is the step from single images to blocks and enables an additional optimization process. We also present some special features of our software, which are designed to better support the operator in analyzing large blocks of aerial images and judging the quality of the photogrammetric set-up.

  5. Wireless image-data transmission from an implanted image sensor through a living mouse brain by intra body communication

    NASA Astrophysics Data System (ADS)

    Hayami, Hajime; Takehara, Hiroaki; Nagata, Kengo; Haruta, Makito; Noda, Toshihiko; Sasagawa, Kiyotaka; Tokuda, Takashi; Ohta, Jun

    2016-04-01

Intra-body communication technology allows the fabrication of more compact implantable biomedical sensors than RF wireless technology. In this paper, we report the fabrication of an implantable image sensor of 625 µm width and 830 µm length, and demonstrate wireless image-data transmission through the brain tissue of a living mouse. The sensor was designed to transmit pixel values as output signals using pulse width modulation (PWM). The PWM signals from the sensor, transmitted through the brain tissue, were detected by a receiver electrode. Wireless data transmission of a two-dimensional image was successfully demonstrated in a living mouse brain. The technique reported here is expected to provide useful methods of data transmission for micro-sized implantable biomedical sensors.
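
    The PWM scheme can be illustrated with a minimal encode/decode round trip. The linear mapping of 8-bit pixel values to pulse widths is an assumption for illustration; the abstract says only that pixel values are transmitted by PWM, not which mapping is used.

```python
def pwm_encode(pixels, t_slot=1.0, bits=8):
    """Map each pixel value linearly to a pulse width within its time slot
    (assumed linear mapping, for illustration only)."""
    full_scale = 2 ** bits - 1
    return [p / full_scale * t_slot for p in pixels]

def pwm_decode(widths, t_slot=1.0, bits=8):
    """Recover pixel values from the measured pulse widths."""
    full_scale = 2 ** bits - 1
    return [int(round(w / t_slot * full_scale)) for w in widths]

row = [0, 64, 128, 255]
print(pwm_decode(pwm_encode(row)))  # [0, 64, 128, 255]
```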

  6. Orientation sensors by defocused imaging of single gold nano-bipyramids

    NASA Astrophysics Data System (ADS)

    Zhang, Fanwei; Li, Qiang; Rao, Wenye; Hu, Hongjin; Gao, Ye; Wu, Lijun

    2018-01-01

Optical probes for nanoscale orientation sensing have attracted much attention in the field of single-molecule detection. Noble-metal nanoparticles (NPs), especially Au NPs, exhibit extraordinary plasmonic properties, great photostability, excellent biocompatibility and nontoxicity, and could thereby be alternative labels to the conventionally applied organic dyes or quantum dots. One of the most interesting types of metallic NP is the Au nanorod (AuNR). Its anisotropic emission, accompanying its anisotropic shape, is potentially applicable to orientation sensing. Recently, we resolved the 3D orientation of single AuNRs within one frame by deliberately introducing an aberration (a slight shift of the dipole away from the focal plane) into the imaging system [1]. This defocused imaging technique is based on the electron transition dipole approximation and the fact that dipole radiation exhibits an angular anisotropy. Since the photoluminescence quantum yield (PLQY) can be enhanced by the "lightning rod effect" (at a sharply angled surface) and by localized SPR modes, the PLQY of a single Au nano-bipyramid (AuNB), which has more sharp tips and edges, was found to be double that of AuNRs with the same effective size [2]. Here, with 532 nm excitation, we find that the PL properties of individual AuNBs can be described by three perpendicularly arranged dipoles (with different ratios). Their defocused PL images are bright, clear and exhibit obvious anisotropy. These properties suggest that AuNBs are excellent candidates as orientation-sensing labels in single-molecule detection.

  7. Cross-calibration of the Landsat-7 ETM+ and Landsat-5 TM with the ResourceSat-1 (IRS-P6) AWiFS and LISS-III sensors

    USGS Publications Warehouse

    Chander, G.; Scaramuzza, P.L.

    2006-01-01

Increasingly, data from multiple sensors are used to gain a more complete understanding of land surface processes at a variety of scales. The Landsat suite of satellites has collected the longest continuous archive of multispectral data. The ResourceSat-1 satellite (also called IRS-P6) was launched into a polar sun-synchronous orbit on Oct 17, 2003. It carries three remote sensing sensors: the High Resolution Linear Imaging Self-Scanner (LISS-IV), the Medium Resolution Linear Imaging Self-Scanner (LISS-III), and the Advanced Wide Field Sensor (AWiFS). These three sensors are used together to provide images with different resolutions and coverages. To understand the absolute radiometric calibration accuracy of the IRS-P6 AWiFS and LISS-III sensors, image pairs from these sensors were compared to the Landsat-5 TM and Landsat-7 ETM+ sensors. The approach involved calibration against nearly simultaneous surface observations, based on image statistics from areas observed simultaneously by the two sensors.
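
    The cross-calibration step described here amounts to a linear fit between image statistics of near-simultaneous acquisitions over common areas. A minimal sketch with invented numbers (not the study's data):

```python
import numpy as np

# Hypothetical mean radiances over common targets observed nearly
# simultaneously by the two sensors (illustrative values only).
tm_ref = np.array([20.0, 45.0, 80.0, 130.0])   # Landsat-5 TM (reference)
awifs = np.array([11.0, 23.5, 41.0, 66.0])     # IRS-P6 AWiFS, same targets

# Linear cross-calibration: reference = gain * sensor + bias.
gain, bias = np.polyfit(awifs, tm_ref, 1)
print(round(gain, 3), round(bias, 3))  # 2.0 -2.0
```

A real cross-calibration would also account for spectral-band differences and atmospheric effects; the fit above only captures the image-statistics regression step.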

  8. Built-in hyperspectral camera for smartphone in visible, near-infrared and middle-infrared lights region (second report): sensitivity improvement of Fourier-spectroscopic imaging to detect diffuse reflection lights from internal human tissues for healthcare sensors

    NASA Astrophysics Data System (ADS)

    Kawashima, Natsumi; Hosono, Satsuki; Ishimaru, Ichiro

    2016-05-01

We proposed the snapshot-type Fourier spectroscopic imager for smartphones described in the first report of this conference. For spectroscopic component analysis, such as non-invasive blood glucose sensing, the diffuse reflection light from within human skin is too weak for conventional hyperspectral cameras, such as the AOTF (Acousto-Optic Tunable Filter) type. Furthermore, it is well known that the spectral absorption of mid-infrared light, and Raman spectroscopy especially in the long-wavelength region, are effective for quantitatively distinguishing specific biomedical components such as glucose concentration. The main issue, however, is that the photon energy of mid-infrared light and the intensity of Raman scattering are extremely low. To improve the sensitivity of our spectroscopic imager, we proposed a wide-field-stop and beam-expansion method. Our line spectroscopic imager introduces a single slit as a field stop on the conjugate objective plane. To increase the detected light intensity, the slit width of the field stop can be widened, at the cost of spatial resolution. Because our method is based on wavefront-division interferometry, however, a wider single slit narrows the diffraction angle: the narrower diameter of the collimated objective beams degrades the visibility of the interferograms. By installing a relatively inclined phase shifter on the optical Fourier-transform plane of the infinity-corrected optical system, the two collimated half-fluxes of the objective beam originating from a single bright point on the object surface pass through a wedge prism and a cuboid glass, respectively. These two beams interfere and form an interferogram as a spatial fringe pattern. We therefore installed a concave cylindrical lens between the wide slit and the objective lens as a beam expander.
We successfully obtained the spectroscopic signature of hemoglobin from light reflected from human fingers.
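
    The core of any Fourier-spectroscopic imager, including the spatial fringe patterns described above, is that the spectrum is recovered from the interferogram by a Fourier transform. A minimal one-dimensional sketch with synthetic fringe data (not the instrument's actual processing):

```python
import numpy as np

# Optical-path-difference axis sampled across the spatial fringe pattern.
x = np.linspace(0.0, 1.0, 4096, endpoint=False)
f1, f2 = 40.0, 90.0   # two spectral components, as fringe frequencies
interferogram = 2.0 + np.cos(2 * np.pi * f1 * x) + 0.5 * np.cos(2 * np.pi * f2 * x)

# Fourier transform of the mean-removed fringe pattern recovers the spectrum.
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])
peaks = sorted(freqs[np.argsort(spectrum)[-2:]])
print(peaks)  # [40.0, 90.0]
```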

  9. Proceedings of the Augmented VIsual Display (AVID) Research Workshop

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)

    1993-01-01

The papers, abstracts, and presentations collected here were given at a three-day workshop focused on sensor modeling and simulation, and on image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.

  10. Integrated Spectral Low Noise Image Sensor with Nanowire Polarization Filters for Low Contrast Imaging

    DTIC Science & Technology

    2015-11-05

    AFRL-AFOSR-VA-TR-2015-0359 Integrated Spectral Low Noise Image Sensor with Nanowire Polarization Filters for Low Contrast Imaging Viktor Gruev...To) 02/15/2011 - 08/15/2015 4. TITLE AND SUBTITLE Integrated Spectral Low Noise Image Sensor with Nanowire Polarization Filters for Low Contrast...investigate alternative spectral imaging architectures based on my previous experience in this research area. I will develop nanowire polarization

  11. High-Resolution Gamma-Ray Imaging Measurements Using Externally Segmented Germanium Detectors

    NASA Technical Reports Server (NTRS)

    Callas, J.; Mahoney, W.; Skelton, R.; Varnell, L.; Wheaton, W.

    1994-01-01

Fully two-dimensional gamma-ray imaging with simultaneous high-resolution spectroscopy has been demonstrated using an externally segmented germanium sensor. The system employs a single high-purity coaxial detector with its outer electrode segmented into 5 distinct charge-collection regions, and a lead coded aperture with a uniformly redundant array (URA) pattern. A series of one-dimensional responses was collected around 511 keV while the system was rotated in steps through 180 degrees. A non-negative linear least-squares algorithm was then employed to reconstruct a two-dimensional image. Corrections for multiple scattering in the detector and for the finite distance between source and detector are made in the reconstruction process.
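
    The reconstruction step can be sketched with a toy one-dimensional coded-aperture system and a simple projected-gradient non-negative least-squares solver. The 7-element quadratic-residue mask and the solver below are illustrative stand-ins; the paper's actual mask geometry and NNLS algorithm are not specified here.

```python
import numpy as np

def nnls_pg(A, b, iters=2000):
    """Non-negative least squares via projected gradient descent (a sketch,
    standing in for whatever NNLS solver the authors used)."""
    lr = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # Gradient step on 0.5*||Ax - b||^2, then project onto x >= 0.
        x = np.maximum(0.0, x - lr * (A.T @ (A @ x - b)))
    return x

# Toy system: cyclic shifts of a quadratic-residue mask (QRs mod 7) stand in
# for the rotated URA patterns that map the source scene to detector counts.
mask = np.array([0, 1, 1, 0, 1, 0, 0], dtype=float)
A = np.array([np.roll(mask, s) for s in range(7)])
x_true = np.array([0.0, 2.0, 0.0, 0.0, 5.0, 0.0, 1.0])  # sparse source scene
x_rec = nnls_pg(A, A @ x_true)
print(np.round(x_rec, 2))  # recovers x_true
```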

  12. Development of a Bioaerosol single particle detector (BIO IN) for the Fast Ice Nucleus CHamber FINCH

    NASA Astrophysics Data System (ADS)

    Bundke, U.; Reimann, B.; Nillius, B.; Jaenicke, R.; Bingemer, H.

    2010-02-01

In this work we present the setup and first tests of our new BIO IN detector. The detector was constructed to classify atmospheric ice nuclei (IN) by their biological content. It is designed to be coupled to the Fast Ice Nucleus CHamber FINCH. If a particle acts as an ice nucleus, it will be at least partly covered with ice at the end of the development section of the FINCH chamber. The device combines an auto-fluorescence detector and a circular depolarization detector for simultaneous detection of biological material and discrimination between water droplets, ice crystals and non-activated large aerosol particles. Excitation of biological material with UV light and analysis of its auto-fluorescence is a common principle used in flow cytometry, fluorescence microscopy, spectroscopy and imaging. Detecting the auto-fluorescence of single airborne particles, however, demands more experimental effort. Expensive commercial sensors are available for special purposes, e.g. size-distribution measurements, but these sensors do not meet the specifications needed for the FINCH IN counter (e.g. a high sample flow of up to 10 LPM). The newly developed low-cost BIO IN sensor uses a single high-power UV LED for electronic excitation instead of much more expensive UV lasers. Other key advantages of the new sensor are its low weight, compact size, and minimal effect on the aerosol sample, which allows it to be coupled with other instruments for further analysis. The instrument will be flown on one of the first missions of the new German research aircraft "HALO" (High Altitude and LOng range).
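
    The two-channel discrimination logic (auto-fluorescence for biological content, depolarization for particle phase) can be caricatured as a threshold rule. The thresholds and decision structure below are invented for illustration, not taken from the instrument:

```python
def classify_particle(fluorescence, depolarization,
                      ice_threshold=0.2, bio_threshold=0.5):
    """Toy two-channel decision rule (assumed thresholds, for illustration):
    circular depolarization separates ice crystals from droplets/aerosol,
    and UV auto-fluorescence flags biological material."""
    phase = "ice crystal" if depolarization > ice_threshold else "droplet/aerosol"
    if phase == "ice crystal" and fluorescence > bio_threshold:
        return "biological ice nucleus"
    return phase

print(classify_particle(0.9, 0.6))  # biological ice nucleus
print(classify_particle(0.1, 0.6))  # ice crystal
print(classify_particle(0.9, 0.1))  # droplet/aerosol
```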

  13. Application of passive imaging polarimetry in the discrimination and detection of different color targets of identical shapes using color-blind imaging sensors

    NASA Astrophysics Data System (ADS)

    El-Saba, A. M.; Alam, M. S.; Surpanani, A.

    2006-05-01

Important aspects of automatic pattern recognition systems are their ability to efficiently discriminate and detect the proper targets with few false alarms. In this paper we extend the application of passive imaging polarimetry to discriminate and detect different-color targets of identical shape using a color-blind imaging sensor. For this case study we demonstrate that traditional color-blind, polarization-insensitive imaging sensors, which rely only on the spatial distribution of targets, suffer from high false-detection rates, especially in scenarios where multiple targets of identical shape are present. In contrast, we show that color-blind polarization-sensitive imaging sensors can efficiently discriminate and detect true targets based on their color alone. We highlight the main advantages of the proposed polarization-encoded imaging sensor.
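
    Polarization-sensitive but color-blind imaging typically works from Stokes-parameter images. The following uses the standard textbook computation of the degree and angle of linear polarization from four polarizer orientations; it is not necessarily the authors' processing chain.

```python
import numpy as np

def dolp_aop(i0, i45, i90, i135):
    """Linear Stokes parameters from intensities behind polarizers at
    0, 45, 90 and 135 degrees; returns the degree of linear polarization
    (DoLP) and angle of polarization (AoP) per pixel."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)
    return dolp, aop

# Two pixels: fully polarized at 0 degrees vs. completely unpolarized.
dolp, _ = dolp_aop(np.array([1.0, 0.5]), np.array([0.5, 0.5]),
                   np.array([0.0, 0.5]), np.array([0.5, 0.5]))
print(np.round(dolp, 2))  # [1. 0.]
```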

  14. CMOS image sensor with organic photoconductive layer having narrow absorption band and proposal of stack type solid-state image sensors

    NASA Astrophysics Data System (ADS)

    Takada, Shunji; Ihama, Mikio; Inuiya, Masafumi

    2006-02-01

Digital still cameras overtook film cameras in the Japanese market in 2000 in terms of sales volume, owing to their versatile functions. However, the image-capturing capabilities of color films, such as sensitivity and latitude, are still superior to those of digital image sensors. In this paper, we attribute the high performance of color films to their multi-layered structure, and propose solid-state image sensors with stacked organic photoconductive layers having narrow absorption bands on CMOS read-out circuits.

  15. ALLFlight: detection of moving objects in IR and ladar images

    NASA Astrophysics Data System (ADS)

    Doehler, H.-U.; Peinecke, Niklas; Lueken, Thomas; Schmerwitz, Sven

    2013-05-01

Supporting a helicopter pilot during landing and takeoff in a degraded visual environment (DVE) is one of the challenges within DLR's project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites). Different types of sensors (TV, infrared, mmW radar and laser radar) are mounted on DLR's research helicopter FHS (flying helicopter simulator) to gather different sensor data of the surrounding world. A high-performance computer cluster architecture acquires and fuses all the information into one single comprehensive description of the outside situation. While both TV and IR cameras deliver images at frame rates of 25 Hz or 30 Hz, ladar and mmW radar provide georeferenced sensor data at only 2 Hz or even less. Therefore, it takes several seconds to detect or even track potential moving-obstacle candidates in mmW or ladar sequences. Especially if the helicopter is flying at higher speed, it is very important to minimize the detection time of obstacles in order to initiate re-planning of the helicopter's mission in a timely manner. Applying feature extraction algorithms to IR images, in combination with algorithms that fuse the extracted features with ladar data, can decrease the detection time appreciably. Based on real data from flight tests, the paper describes the applied feature extraction methods for moving object detection, as well as data fusion techniques for combining features from TV/IR and ladar data.

  16. DeMi Payload Progress Update and Adaptive Optics (AO) Control Comparisons – Meeting Space AO Requirements on a CubeSat

    NASA Astrophysics Data System (ADS)

    Grunwald, Warren; Holden, Bobby; Barnes, Derek; Allan, Gregory; Mehrle, Nicholas; Douglas, Ewan S.; Cahoy, Kerri

    2018-01-01

The Deformable Mirror (DeMi) CubeSat mission uses an adaptive optics (AO) control loop to correct incoming wavefronts as a technology demonstration for space-based imaging missions, such as high-contrast observations (of Earthlike exoplanets) and steering light into single-mode fiber cores for amplification. While AO has been used extensively in ground-based systems to correct for atmospheric aberrations, operating an AO system on board a small satellite presents different challenges. The DeMi payload's 140-actuator MEMS deformable mirror (DM) corrects the incoming wavefront in four control modes: 1) internal observation with a Shack-Hartmann wavefront sensor (SHWFS), 2) internal observation with an image-plane sensor, 3) external observation with a SHWFS, and 4) external observation with an image-plane sensor. All modes face wavefront aberrations from two main sources: time-invariant launch disturbances that have changed the optical path from the path expected when the system was calibrated in the lab, and very-low-temporal-frequency thermal variations as DeMi orbits the Earth. The external observation modes have additional error sources: pointing-precision error from the attitude control system and reaction-wheel jitter. Updates on DeMi's mechanical, thermal, electrical, and mission design are also presented. The analysis from the DeMi payload simulations and testing informs design options for future space-based AO systems.

  17. FRET-Based Nanobiosensors for Imaging Intracellular Ca²⁺ and H⁺ Microdomains.

    PubMed

    Zamaleeva, Alsu I; Despras, Guillaume; Luccardini, Camilla; Collot, Mayeul; de Waard, Michel; Oheim, Martin; Mallet, Jean-Maurice; Feltz, Anne

    2015-09-23

    Semiconductor nanocrystals (NCs) or quantum dots (QDs) are luminous point emitters increasingly being used to tag and track biomolecules in biological/biomedical imaging. However, their intracellular use as highlighters of single-molecule localization and nanobiosensors reporting ion microdomains changes has remained a major challenge. Here, we report the design, generation and validation of FRET-based nanobiosensors for detection of intracellular Ca(2+) and H⁺ transients. Our sensors combine a commercially available CANdot(®)565QD as an energy donor with, as an acceptor, our custom-synthesized red-emitting Ca(2+) or H⁺ probes. These 'Rubies' are based on an extended rhodamine as a fluorophore and a phenol or BAPTA (1,2-bis(o-aminophenoxy)ethane-N,N,N',N'-tetra-acetic acid) for H⁺ or Ca(2+) sensing, respectively, and additionally bear a linker arm for conjugation. QDs were stably functionalized using the same SH/maleimide crosslink chemistry for all desired reactants. Mixing ion sensor and cell-penetrating peptides (that facilitate cytoplasmic delivery) at the desired stoichiometric ratio produced controlled multi-conjugated assemblies. Multiple acceptors on the same central donor allow up-concentrating the ion sensor on the QD surface to concentrations higher than those that could be achieved in free solution, increasing FRET efficiency and improving the signal. We validate these nanosensors for the detection of intracellular Ca(2+) and pH transients using live-cell fluorescence imaging.
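
    The benefit of assembling multiple acceptors on one QD donor follows from the standard FRET relation: with n identical acceptors at distance r from a donor of Förster radius R0, E = n / (n + (r/R0)^6). A small sketch (the formula is textbook FRET; the specific numbers are illustrative, not measured values from this work):

```python
def fret_efficiency(r, r0, n_acceptors=1):
    """FRET efficiency for n identical acceptors at donor distance r:
    E = n / (n + (r / r0)**6)."""
    return n_acceptors / (n_acceptors + (r / r0) ** 6)

# At r = R0, a single acceptor gives E = 0.5; four acceptors on the same
# QD surface raise it to 0.8 -- the up-concentration benefit described above.
print(fret_efficiency(1.0, 1.0, 1), fret_efficiency(1.0, 1.0, 4))  # 0.5 0.8
```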

  18. Robust adaptive optics systems for vision science

    NASA Astrophysics Data System (ADS)

    Burns, S. A.; de Castro, A.; Sawides, L.; Luo, T.; Sapoznik, K.

    2018-02-01

Adaptive optics (AO) is of growing importance for understanding the impact of retinal and systemic diseases on the retina. While AO retinal imaging in healthy eyes is now routine, AO imaging in older eyes and in eyes with optical changes to the anterior segment can be difficult, and requires control and imaging systems that are resilient to scattering and occlusion from the cornea and lens, as well as to irregular and small pupils. Our AO retinal imaging system combines evaluation of local image quality across the pupil with spatially programmable detection. The wavefront control system uses a woofer-tweeter approach, combining an electromagnetic mirror, a MEMS mirror and a single Shack-Hartmann sensor. The SH sensor samples an 8 mm exit pupil, and the subject is aligned to a region within this larger system pupil using a chin and forehead rest. A spot-quality metric is calculated in real time for each lenslet. Individual lenslets that do not meet the quality metric are eliminated from the processing. Mirror shapes are smoothed outside the region of wavefront control when pupils are small. The system allows imaging even with smaller, irregular pupils; however, because the depth of field increases under these conditions, sectioning performance decreases. A retinal-conjugate micromirror array selectively directs mid-range scatter to additional detectors. This improves detection of retinal capillaries even when the confocal image has poorer image quality and includes both photoreceptors and blood vessels.
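
    The per-lenslet gating described here might look like the following sketch, where the quality metric is taken to be the fraction of spot energy concentrated around its peak. The metric and threshold are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def valid_lenslet_mask(spots, threshold=0.5):
    """Per-lenslet spot-quality gate (a sketch): keep a lenslet only if a
    sufficient fraction of its spot energy sits in the 3x3 neighbourhood
    of the brightest pixel. Lenslets whose spots are smeared by scatter
    or occlusion fall below threshold and are dropped from reconstruction."""
    mask = []
    for spot in spots:                      # spot: 2-D sub-image per lenslet
        total = spot.sum()
        if total <= 0:
            mask.append(False)
            continue
        r, c = np.unravel_index(np.argmax(spot), spot.shape)
        core = spot[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2].sum()
        mask.append(core / total >= threshold)
    return np.array(mask)

sharp = np.zeros((5, 5)); sharp[2, 2] = 1.0   # concentrated, valid spot
smeared = np.ones((5, 5)) / 25.0              # scattered light, rejected
print(valid_lenslet_mask([sharp, smeared]))   # [ True False]
```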

  19. Germanium "hexa" detector: production and testing

    NASA Astrophysics Data System (ADS)

    Sarajlić, M.; Pennicard, D.; Smoljanin, S.; Hirsemann, H.; Struth, B.; Fritzsch, T.; Rothermund, M.; Zuvic, M.; Lampert, M. O.; Askar, M.; Graafsma, H.

    2017-01-01

Here we present new results on the testing of a germanium sensor for X-ray radiation. The system is made of 3 × 2 Medipix3RX chips bump-bonded to a monolithic sensor, and is called "hexa". Its dimensions are 45 × 30 mm² and the sensor thickness is 1.5 mm. The total number of pixels is 393216, in a 768 × 512 matrix with a pixel pitch of 55 μm. The Medipix3RX read-out chip provides photon-counting read-out with single-photon sensitivity. The sensor is cooled to -126°C, and noise levels together with the flat-field response are measured. For a -200 V polarization bias, the leakage current was 4.4 mA (3.2 μA/mm²). Due to locally higher leakage, around 2.5% of all pixels are non-responsive. More than 99% of all pixels are bump-bonded correctly. In this paper we present the experimental set-up, the threshold equalization procedure, image acquisition, and the technique for estimating bump-bond quality.
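
    The quoted leakage current density is a simple arithmetic check, dividing the total leakage by the roughly 45 × 30 mm² tile footprint (the small difference from the quoted 3.2 μA/mm² presumably reflects the exact active area used):

```python
area_mm2 = 45 * 30                     # "hexa" tile footprint in mm^2
leakage_mA = 4.4                       # total leakage at -200 V bias
density_uA_per_mm2 = leakage_mA * 1000 / area_mm2
print(round(density_uA_per_mm2, 1))    # 3.3, consistent with the quoted value
```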

  20. Measurement of curvature and temperature using multimode interference devices

    NASA Astrophysics Data System (ADS)

    Guzman-Sepulveda, J. R.; Aguilar-Soto, J. G.; Torres-Cisneros, M.; Ibarra-Manzano, O. G.; May-Arrioja, D. A.

    2011-09-01

In this paper we propose the fabrication, implementation, and testing of a novel fiber-optic sensor based on multimode interference (MMI) effects for independent measurement of curvature and temperature. The development of fiber-based MMI devices is relatively new, and since they exhibit a band-pass filter response they can be used in various applications. The operating mechanism of our sensor is based on the self-imaging phenomenon that occurs in multimode fibers (MMF), which is related to the interference of the propagating modes and their accumulated phase. We demonstrate that the peak wavelength shifts with temperature variations as a result of changes in the accumulated phase through thermo-optic effects, while the intensity at the peak wavelength is reduced as the curvature increases, since higher-order modes begin to be lost. In this way both measurements are obtained independently with a single fiber device. Compared to other fiber-optic sensors, our sensor features an extremely simple structure and fabrication process, and is hence cost-effective.
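
    The self-imaging peak of an MMI fiber section follows the standard relation λ = P·n·D²/L (P is the self-image number). The sketch below shows how a thermo-optic index change shifts that peak; all numeric values (core diameter, section length, dn/dT, temperature step) are illustrative, not the device's parameters:

```python
def mmi_peak_wavelength(n_mmf, d_mmf, l_mmf, p=4):
    """Self-image peak wavelength of an MMI fiber section, using the
    standard relation lambda = p * n * D**2 / L (p = 4 for the usual
    fourth self-image)."""
    return p * n_mmf * d_mmf ** 2 / l_mmf

# Thermo-optic shift: dn/dT moves the peak wavelength, while curvature
# mainly reduces the peak intensity (illustrative silica-like values).
n0, dn_dt = 1.444, 1.2e-5            # index and thermo-optic coefficient
d, length = 105e-6, 58e-3            # core diameter and MMI length (m)
lam0 = mmi_peak_wavelength(n0, d, length)
lam_hot = mmi_peak_wavelength(n0 + dn_dt * 50, d, length)  # +50 C
print(round((lam_hot - lam0) * 1e9, 3), "nm shift")        # sub-nm shift
```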
