Design and implementation of non-linear image processing functions for CMOS image sensor
NASA Astrophysics Data System (ADS)
Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel
2012-11-01
Today, solid state image sensors are used in many applications, such as mobile phones, video surveillance systems, embedded medical imaging and industrial vision systems. These image sensors require the integration, in the focal plane (or near the focal plane), of complex image processing algorithms. Such devices must meet the constraints related to the quality of acquired images, the speed and performance of embedded processing, as well as low power consumption. To achieve these objectives, low-level analog processing allows the useful information in the scene to be extracted directly. For example, an edge detection step followed by local maxima extraction facilitates high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor, built in a standard 0.35 μm CMOS technology, that includes non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixel neighbourhood. Each MMU needs 52 transistors and the pixel pitch is 40×40 μm. The total area of the 64×64 pixel array is 12.5 mm². Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10K frames per second) and minima/maxima calculations in less than one ms.
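As an illustration of the 2×2-neighbourhood minima/maxima operation that the analog MMU performs, here is a minimal NumPy sketch of the equivalent digital computation (the non-overlapping block layout and array size are assumptions for this sketch, not details of the nLiRIC circuit):

```python
import numpy as np

def min_max_2x2(frame):
    """Per 2x2 neighbourhood, return local minima and maxima.

    frame: 2D array of pixel values (e.g. 64x64). The neighbourhoods are taken
    as non-overlapping 2x2 blocks, an assumption of this sketch.
    """
    h, w = frame.shape
    blocks = frame[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    local_min = blocks.min(axis=(1, 3))
    local_max = blocks.max(axis=(1, 3))
    return local_min, local_max

frame = np.random.randint(0, 256, (64, 64))
mins, maxs = min_max_2x2(frame)   # each 32x32
```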
Evaluation and comparison of the IRS-P6 and the landsat sensors
Chander, G.; Coan, M.J.; Scaramuzza, P.L.
2008-01-01
The Indian Remote Sensing Satellite (IRS-P6), also called ResourceSat-1, was launched in a polar sun-synchronous orbit on October 17, 2003. It carries three sensors: the high-resolution Linear Imaging Self-Scanner (LISS-IV), the medium-resolution Linear Imaging Self-Scanner (LISS-III), and the Advanced Wide-Field Sensor (AWiFS). These three sensors provide images of different resolutions and coverage. To understand the absolute radiometric calibration accuracy of the IRS-P6 AWiFS and LISS-III sensors, image pairs from these sensors were compared to images from the Landsat-5 Thematic Mapper (TM) and Landsat-7 Enhanced TM Plus (ETM+) sensors. The approach involves calibration of surface observations based on image statistics from areas observed nearly simultaneously by the two sensors. This paper also evaluated the viability of data from these next-generation imagers for use in creating three National Land Cover Dataset (NLCD) products: land cover, percent tree canopy, and percent impervious surface. Individual products were consistent with previous studies but had slightly lower overall accuracies as compared to data from the Landsat sensors.
Moving-Article X-Ray Imaging System and Method for 3-D Image Generation
NASA Technical Reports Server (NTRS)
Fernandez, Kenneth R. (Inventor)
2012-01-01
An x-ray imaging system and method for a moving article are provided for an article moved along a linear direction of travel while the article is exposed to non-overlapping x-ray beams. A plurality of parallel linear sensor arrays are disposed in the x-ray beams after they pass through the article. More specifically, a first half of the plurality are disposed in a first of the x-ray beams while a second half of the plurality are disposed in a second of the x-ray beams. Each of the parallel linear sensor arrays is oriented perpendicular to the linear direction of travel. Each of the parallel linear sensor arrays in the first half is matched to a corresponding one of the parallel linear sensor arrays in the second half in terms of an angular position in the first of the x-ray beams and the second of the x-ray beams, respectively.
Jamaludin, Juliza; Rahim, Ruzairi Abdul; Fazul Rahiman, Mohd Hafiz; Mohd Rohani, Jemmy
2018-04-01
Optical tomography (OPT) is a method of capturing a cross-sectional image based on the data obtained by sensors distributed around the periphery of the analyzed system. This system is based on the measurement of the final light attenuation or absorption of radiation after crossing the measured objects. The number of sensor views affects the results of image reconstruction, where a high number of sensor views per projection gives a high image quality. This research presents an application of a charge-coupled device linear sensor and a laser diode in an OPT system. Experiments in detecting solid and transparent objects in crystal clear water were conducted. Two numbers of sensor views, 160 and 320, are evaluated in this research for reconstructing the images. The image reconstruction algorithm used was a filtered linear back projection algorithm. Analysis comparing the simulation and experimental image results shows that 320 image views give a smaller area error than 160 views. This suggests that a high number of image views results in a high resolution of image reconstruction.
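For reference, the linear back projection step mentioned above can be sketched as follows, assuming a precomputed sensitivity (forward-model) matrix; the variable names and normalisation are illustrative, not the authors' implementation:

```python
import numpy as np

def lbp_reconstruct(measurements, sensitivity):
    """Linear back projection: image = normalised S^T * measurements.

    measurements: (n_views,) attenuation readings.
    sensitivity:  (n_views, n_pixels) forward-model sensitivity matrix.
    """
    back = sensitivity.T @ measurements
    norm = sensitivity.sum(axis=0)          # per-pixel normalisation
    return back / np.where(norm == 0, 1.0, norm)

# More views (e.g. 320 instead of 160) simply means more rows in `sensitivity`,
# i.e. a finer angular sampling of the object.
```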
An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor
NASA Astrophysics Data System (ADS)
Liscombe, Michael
3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still provides a fundamental limitation on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (e.g., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests are performed on the image sensor's innovative high dynamic range technology to determine its effects on range accuracy. As expected, experimental results have shown that the sensor provides a trade-off between dynamic range and range accuracy.
Chander, G.; Scaramuzza, P.L.
2006-01-01
Increasingly, data from multiple sensors are used to gain a more complete understanding of land surface processes at a variety of scales. The Landsat suite of satellites has collected the longest continuous archive of multispectral data. The ResourceSat-1 satellite (also called IRS-P6) was launched into a polar sun-synchronous orbit on Oct 17, 2003. It carries three remote sensing sensors: the High Resolution Linear Imaging Self-Scanner (LISS-IV), the Medium Resolution Linear Imaging Self-Scanner (LISS-III), and the Advanced Wide Field Sensor (AWiFS). These three sensors are used together to provide images with different resolution and coverage. To understand the absolute radiometric calibration accuracy of the IRS-P6 AWiFS and LISS-III sensors, image pairs from these sensors were compared to the Landsat-5 TM and Landsat-7 ETM+ sensors. The approach involved calibration of surface observations based on image statistics from areas observed nearly simultaneously by the two sensors.
Detection and recognition of simple spatial forms
NASA Technical Reports Server (NTRS)
Watson, A. B.
1983-01-01
A model of human visual sensitivity to spatial patterns is constructed. The model predicts the visibility and discriminability of arbitrary two-dimensional monochrome images. The image is analyzed by a large array of linear feature sensors, which differ in spatial frequency, phase, orientation, and position in the visual field. All sensors have one-octave frequency bandwidths, and increase in size linearly with eccentricity. Sensor responses are processed by an ideal Bayesian classifier, subject to uncertainty. The performance of the model is compared to that of the human observer in detecting and discriminating some simple images.
Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor
NASA Astrophysics Data System (ADS)
Yang, Jie; Messinger, David W.; Dube, Roger R.; Ientilucci, Emmett J.
2017-05-01
The filtered multispectral imaging technique is a potential method for crime scene documentation and evidence detection due to its abundant spectral information as well as its non-contact and non-destructive nature. A low-cost, portable multispectral crime scene imaging device would be highly useful and efficient. The second-generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass Interference Filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, and the major cause is Fixed Pattern Noise (FPN). IFs suffer from a "blue shift" effect and introduce spatially and spectrally correlated errors. Therefore, FPN correction is critical to enhance crime scene image quality and is also helpful for spatial-spectral noise de-correlation. In this paper, a pixel-wise linear radiance to Digital Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. The pixel-wise conversion gain G_i,j and Dark Signal Non-Uniformity (DSNU) Z_i,j are calculated. The conversion gain is divided into four components: an FPN row component, an FPN column component, a defects component and an effective photo response signal component. The conversion gain is then corrected by averaging the FPN column and row components and the defects component so that the sensor conversion gain is uniform. Based on the corrected conversion gain and the image incident radiance estimated from the inverse of the pixel-wise linear radiance to DC model, the spatial uniformity of the corrected image can be enhanced to 7 times that of the raw image, and the larger the image DC value within its dynamic range, the greater the enhancement.
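A simplified sketch of a pixel-wise linear correction of this kind, assuming averaged dark frames and flat fields at a known radiance are available (this collapses the paper's row/column/defect decomposition into a single per-pixel gain and offset):

```python
import numpy as np

def calibrate_pixelwise(dark_frames, flat_frames, flat_radiance):
    """Estimate per-pixel offset Z (DSNU) and gain G from the linear model
    DC = G * L + Z, using averaged dark frames (L = 0) and flat fields at a
    known radiance L = flat_radiance. A simplified sketch of the approach."""
    Z = dark_frames.mean(axis=0)
    G = (flat_frames.mean(axis=0) - Z) / flat_radiance
    return G, Z

def correct_frame(raw, G, Z):
    """Invert the model to estimate incident radiance, removing fixed pattern noise."""
    return (raw - Z) / np.where(G == 0, np.finfo(float).eps, G)
```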
Acoustic emission linear pulse holography
Collins, H.D.; Busse, L.J.; Lemon, D.K.
1983-10-25
This device relates to the concept of and means for performing Acoustic Emission Linear Pulse Holography, which combines the advantages of linear holographic imaging and Acoustic Emission into a single non-destructive inspection system. This unique system produces a chronological, linear holographic image of a flaw by utilizing the acoustic energy emitted during crack growth. The innovation is the concept of utilizing the crack-generated acoustic emission energy to generate a chronological series of images of a growing crack by applying linear, pulse holographic processing to the acoustic emission data. The process is implemented by placing on a structure an array of piezoelectric sensors (typically 16 or 32 of them) near the defect location. A reference sensor is placed between the defect and the array.
A Chip and Pixel Qualification Methodology on Imaging Sensors
NASA Technical Reports Server (NTRS)
Chen, Yuan; Guertin, Steven M.; Petkov, Mihail; Nguyen, Duc N.; Novak, Frank
2004-01-01
This paper presents a qualification methodology for imaging sensors. In addition to overall chip reliability characterization based on the sensor's overall figures of merit, such as Dark Rate, Linearity, Dark Current Non-Uniformity, Fixed Pattern Noise and Photon Response Non-Uniformity, a simulation technique is proposed and used to project pixel reliability. The projected pixel reliability is directly related to imaging quality and provides additional sensor reliability information and performance control.
Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori
2018-01-12
To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low-noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. By introducing a new split-pinned photodiode structure, the linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low-noise readout circuit. By merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple-exposure high dynamic range (MEHDR) approach.
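The dual-gain merge at the heart of the SEHDR readout can be illustrated with a minimal sketch; the threshold logic and gain-ratio scaling below are assumptions for illustration, not the on-chip linearization circuit:

```python
import numpy as np

def merge_sehdr(high_gain, low_gain, gain_ratio, clip_level):
    """Merge a high-gain and a low-gain readout of the same exposure into one
    linear HDR signal: use the high-gain sample where it is not saturated,
    otherwise fall back to the low-gain sample scaled by the gain ratio."""
    scaled_low = low_gain * gain_ratio
    return np.where(high_gain < clip_level, high_gain, scaled_low)
```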
NASA Astrophysics Data System (ADS)
Liang, Shiguo; Ye, Jiamin; Wang, Haigang; Wu, Meng; Yang, Wuqiang
2018-03-01
In the design of electrical capacitance tomography (ECT) sensors, the internal wall thickness can vary with specific applications, and it is a key factor that influences the sensitivity distribution and image quality. This paper will discuss the effect of the wall thickness of ECT sensors on image quality. Three flow patterns are simulated for wall thicknesses of 2.5 mm to 15 mm on eight-electrode ECT sensors. The sensitivity distributions and potential distributions are compared for different wall thicknesses. Linear back-projection and Landweber iteration algorithms are used for image reconstruction. Relative image error and correlation coefficients are used for image evaluation using both simulation and experimental data.
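The two reconstruction algorithms and the evaluation metrics referred to above can be sketched as follows (the step size, iteration count and the LBP initial guess are assumptions of this sketch):

```python
import numpy as np

def landweber(measurements, sensitivity, alpha=1e-2, iterations=200):
    """Landweber iteration: g <- g + alpha * S^T (lambda - S g)."""
    g = sensitivity.T @ measurements          # LBP result as the initial guess
    for _ in range(iterations):
        g = g + alpha * sensitivity.T @ (measurements - sensitivity @ g)
    return g

def relative_image_error(g, g_true):
    return np.linalg.norm(g - g_true) / np.linalg.norm(g_true)

def correlation_coefficient(g, g_true):
    return np.corrcoef(g.ravel(), g_true.ravel())[0, 1]
```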
NASA Astrophysics Data System (ADS)
Hutcheson, Joshua A.; Majid, Aneeka A.; Powless, Amy J.; Muldoon, Timothy J.
2015-09-01
Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match linear translation speed with line exposure periods to preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of image data. Fluorescent beads were imaged in suspension flowing through the microfluidic chamber, pumped by a mechanical syringe pump at 16 μl min−1 with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear-to-cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm pixel−1.
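The matching of line exposure period to translation speed that preserves the aspect ratio follows a simple relation, sketched below with illustrative helper names (these are assumptions, not the authors' calibration code):

```python
def line_period_for_speed(pixel_size_um, speed_um_per_s):
    """Line exposure period (s) that maps one object-space pixel per line,
    preserving the image aspect ratio: T_line = pixel_size / speed."""
    return pixel_size_um / speed_um_per_s

def aspect_ratio(measured_length_px, true_length_um, pixel_size_um):
    """Ratio of measured to expected object length; 1.0 means the aspect
    ratio is preserved (used here as a quality-control check)."""
    return (measured_length_px * pixel_size_um) / true_length_um
```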
Network compensation for missing sensors
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.
1991-01-01
A network learning translation invariance algorithm to compute interpolation functions is presented. This algorithm, with one fixed receptive field, can construct a linear transformation compensating for gain changes, sensor position jitter, and sensor loss when there are enough remaining sensors to adequately sample the input images. However, when the images are undersampled and complete compensation is not possible, the algorithm needs to be modified. For moderate sensor losses, the algorithm works if the transformation weight adjustment is restricted to the weights to output units affected by the loss.
The Design of Optical Sensor for the Pinhole/Occulter Facility
NASA Technical Reports Server (NTRS)
Greene, Michael E.
1990-01-01
Three optical sight sensor systems were designed, built and tested. Two optical line-of-sight sensor systems are capable of measuring the absolute pointing angle to the sun. The system is for use with the Pinhole/Occulter Facility (P/OF), a solar hard X-ray experiment to be flown from the Space Shuttle or Space Station. The sensor consists of a pinhole camera with two pairs of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the solar image produced by the pinhole, track-and-hold circuitry for data reduction, an analog-to-digital converter, and a microcomputer. The deflection of the image center is calculated from these data using an approximation for the solar image. A second system consists of a pinhole camera with a pair of perpendicularly mounted linear photodiode arrays, amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image is calculated by knowing the position of each pixel of the photodiode array and merely counting the pixel numbers until the threshold is surpassed. A third optical sensor system is capable of measuring the internal vibration of the P/OF between the mask and base. The system consists of a white light source, a mirror and a pair of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the solar image produced by the mirror, amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image, and hence the vibration of the structure, is calculated by knowing the position of each pixel of the photodiode array and merely counting the pixel numbers until the threshold is surpassed.
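The threshold-crossing readout used by the second and third systems reduces to locating the first pixel above threshold along each linear photodiode array; a minimal sketch (pixel pitch and threshold are assumed inputs):

```python
import numpy as np

def deflection_from_threshold(profile, pixel_pitch_um, threshold):
    """Return the position (um, relative to the array start) of the first
    pixel whose intensity exceeds the threshold, as a proxy for the image edge."""
    idx = int(np.argmax(profile > threshold))   # first crossing, if any
    if profile[idx] <= threshold:
        return None                             # threshold never surpassed
    return idx * pixel_pitch_um
```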
The AOLI Non-Linear Curvature Wavefront Sensor: High sensitivity reconstruction for low-order AO
NASA Astrophysics Data System (ADS)
Crass, Jonathan; King, David; Mackay, Craig
2013-12-01
Many adaptive optics (AO) systems in use today require bright reference objects to determine the effects of atmospheric distortions on incoming wavefronts. This requirement is because Shack Hartmann wavefront sensors (SHWFS) distribute incoming light from reference objects into a large number of sub-apertures. Bright natural reference objects occur infrequently across the sky leading to the use of laser guide stars which add complexity to wavefront measurement systems. The non-linear curvature wavefront sensor as described by Guyon et al. has been shown to offer a significant increase in sensitivity when compared to a SHWFS. This facilitates much greater sky coverage using natural guide stars alone. This paper describes the current status of the non-linear curvature wavefront sensor being developed as part of an adaptive optics system for the Adaptive Optics Lucky Imager (AOLI) project. The sensor comprises two photon-counting EMCCD detectors from E2V Technologies, recording intensity at four near-pupil planes. These images are used with a reconstruction algorithm to determine the phase correction to be applied by an ALPAO 241-element deformable mirror. The overall system is intended to provide low-order correction for a Lucky Imaging based multi CCD imaging camera. We present the current optical design of the instrument including methods to minimise inherent optical effects, principally chromaticity. Wavefront reconstruction methods are discussed and strategies for their optimisation to run at the required real-time speeds are introduced. Finally, we discuss laboratory work with a demonstrator setup of the system.
Multispectral photoacoustic tomography for detection of small tumors inside biological tissues
NASA Astrophysics Data System (ADS)
Hirasawa, Takeshi; Okawa, Shinpei; Tsujita, Kazuhiro; Kushibiki, Toshihiro; Fujita, Masanori; Urano, Yasuteru; Ishihara, Miya
2018-02-01
Visualization of small tumors inside biological tissue is important in cancer treatment because it promotes accurate surgical resection and enables therapeutic effect monitoring. For sensitive detection of tumors, we have been developing a photoacoustic (PA) imaging technique to visualize tumor-specific contrast agents, and have already succeeded in imaging a subcutaneous tumor of a mouse using the contrast agents. To image tumors inside biological tissues, extension of imaging depth and improvement of sensitivity were required. In this study, to extend imaging depth, we developed a PA tomography (PAT) system that can image an entire cross section of a mouse. To improve sensitivity, we discussed the use of a P(VDF-TrFE) linear array acoustic sensor that can detect PA signals over a wide range of frequencies. Because PA signals produced by low-absorbance optical absorbers shift to low frequency, we hypothesized that the detection of low-frequency PA signals improves sensitivity to low-absorbance optical absorbers. We developed a PAT system with both a PZT linear array acoustic sensor and the P(VDF-TrFE) sensor, and performed experiments using tissue-mimicking phantoms to evaluate the lower detection limits of absorbance. As a result, PAT images calculated from the low-frequency components of PA signals detected by the P(VDF-TrFE) sensor could visualize optical absorbers with lower absorbance.
An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.
Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang
2016-01-28
Vision navigation, which determines position and attitude via real-time image processing of data collected from imaging sensors, is advantageous when a high-performance global positioning system (GPS) and an inertial measurement unit (IMU) are not available. Vision navigation is widely used in indoor navigation, far space navigation, and multiple-sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach aided by imaging sensors that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS coverage. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image searches and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image with the GRID. Subsequently, the image matched with the real-time scene is used to calculate the 3D navigation parameters of multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and has navigation accuracies of 1.2 m in plane and 1.8 m in height under GPS loss for 5 min and within 1500 m.
Image quality assessment using deep convolutional networks
NASA Astrophysics Data System (ADS)
Li, Yezhou; Ye, Xiang; Li, Yong
2017-12-01
This paper proposes a method of accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training-based methods usually utilize a compact set of linear filters for learning features of images captured by different sensors to assess their quality. These methods may not be able to learn the semantic features that are intimately related to the features used in human subjective assessment. To address this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of any arbitrary size as input, spatial pyramid pooling (SPP) is introduced connecting the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method takes an image as input, carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the quality of images taken by different sensors and of varying sizes.
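A minimal sketch of a no-reference quality-assessment network with spatial pyramid pooling, written in PyTorch; the layer sizes, pooling levels and regressor head are illustrative assumptions, not the architecture reported in the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPPIQANet(nn.Module):
    """No-reference IQA CNN whose spatial pyramid pooling maps arbitrary
    input sizes to a fixed-length feature vector before regression."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.levels = levels
        feat_len = 64 * sum(l * l for l in levels)
        self.regressor = nn.Sequential(
            nn.Linear(feat_len, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, x):
        x = self.features(x)
        # Pool to fixed grids (1x1, 2x2, 4x4) regardless of input size.
        pooled = [F.adaptive_max_pool2d(x, l).flatten(1) for l in self.levels]
        return self.regressor(torch.cat(pooled, dim=1))
```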
Evaluating sensor linearity of chosen infrared sensors
NASA Astrophysics Data System (ADS)
Walczykowski, P.; Orych, A.; Jenerowicz, A.; Karcz, P.
2014-11-01
The paper describes a series of experiments conducted as part of the IRAMSWater Project, the aim of which is to establish methodologies for detecting and identifying pollutants in water bodies using aerial imagery data. The main idea is based on the hypothesis that it is possible to identify certain types of physical, biological and chemical pollutants based on their spectral reflectance characteristics. The knowledge of these spectral curves is then used to determine very narrow spectral bands in which the greatest reflectance variations occur between these pollutants. A frame camera is then equipped with a band-pass filter, which allows only the selected bandwidth to be registered. In order to obtain reliable reflectance data straight from the images, the team at the Military University of Technology had developed a methodology for determining the necessary acquisition parameters for the sensor (integration time and f-stop, depending on the distance from the scene and its illumination). This methodology, however, is based on the assumption that the imaging sensors have a linear response. This paper shows the results of experiments used to evaluate this linearity.
Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System
Lu, Yu; Wang, Keyi; Fan, Gongshu
2016-01-01
A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering an FOV of over 160° × 160°. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor in the process of radiometric response calibration, to eliminate the influence of the focusing effect of uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that the panoramas reflect the objective luminance more faithfully. This compensates for the limitation of stitching approaches that make images look realistic only through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
International Symposium on Applications of Ferroelectrics
1993-02-01
Yoon, Yeomin; Noh, Suwoo; Jeong, Jiseong; Park, Kyihwan
2018-05-01
The topology image is constructed from the 2D matrix (XY directions) of heights Z captured from the force-feedback loop controller. For small height variations, nonlinear effects such as hysteresis or creep of the PZT-driven Z nano scanner can be neglected and its calibration is quite straightforward. For large height variations, the linear approximation of the PZT-driven Z nano scanner fails and nonlinear behaviors must be considered, because these would cause inaccuracies in the measured image. In order to avoid such inaccuracies, an additional strain gauge sensor is used to directly measure the displacement of the PZT-driven Z nano scanner. However, this approach also has a disadvantage in its relatively low precision. In order to obtain high-precision data with good linearity, we propose a method of overcoming the low precision problem of the strain gauge while its feature of good linearity is maintained. We expect the topology image obtained from the strain gauge sensor to show significant noise at high frequencies. On the other hand, the topology image obtained from the controller output shows low noise at high frequencies. If the low- and high-frequency signals are separable from both topology images, the image can be constructed so that it is represented with high accuracy and low noise. In order to separate the low frequencies from the high frequencies, a 2D Haar wavelet transform is used. Our proposed method uses the 2D wavelet transform to obtain good linearity from the strain gauge sensor and good precision from the controller output. The advantages of the proposed method are experimentally validated using topology images. Copyright © 2018 Elsevier B.V. All rights reserved.
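The frequency-splitting fusion described above can be sketched with a single-level 2D Haar transform using the PyWavelets package; taking the approximation band from the strain gauge image and the detail bands from the controller output is the assumed fusion rule in this sketch:

```python
import pywt

def fuse_topography(strain_gauge_img, controller_img):
    """Single-level 2D Haar fusion: keep the approximation (low-frequency)
    coefficients from the strain-gauge image, which is linear but noisy, and
    the detail (high-frequency) coefficients from the controller output,
    which is quiet at high frequencies."""
    cA_sg, _details_sg = pywt.dwt2(strain_gauge_img, 'haar')
    _cA_ctrl, details_ctrl = pywt.dwt2(controller_img, 'haar')
    return pywt.idwt2((cA_sg, details_ctrl), 'haar')
```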
General Model of Photon-Pair Detection with an Image Sensor
NASA Astrophysics Data System (ADS)
Defienne, Hugo; Reichert, Matthew; Fleischer, Jason W.
2018-05-01
We develop an analytic model that relates intensity correlation measurements performed by an image sensor to the properties of photon pairs illuminating it. Experiments using an effective single-photon counting camera, a linear electron-multiplying charge-coupled device camera, and a standard CCD camera confirm the model. The results open the field of quantum optical sensing using conventional detectors.
Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image
Wen, Wei; Khatibi, Siamak
2017-01-01
Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem. In these solutions, the fill factor is known. However, it is kept as an industrial secret by most image sensor manufacturers due to its direct effect on the assessment of the sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and the sensor irradiance are estimated from the generation of virtual images. Then the global intensity values of the virtual images are obtained, which are the result of fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images. The fill factor is estimated by the conditional minimum of the inferred function. The method is verified using images from two datasets. The results show that our method estimates the fill factor correctly, with significant stability and accuracy, from one single arbitrary image, according to the low standard deviation of the estimated fill factors across the images and for each camera. PMID:28335459
A novel method to increase LinLog CMOS sensors' performance in high dynamic range scenarios.
Martínez-Sánchez, Antonio; Fernández, Carlos; Navarro, Pedro J; Iborra, Andrés
2011-01-01
Images from high dynamic range (HDR) scenes must be obtained with minimum loss of information. For this purpose it is necessary to take full advantage of the quantification levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors satisfy the above demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose to use an Adaptive Proportional-Integral-Derivative controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor's maximum dynamic range (120 dB) can be used to acquire good quality images from HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with a sensor response adjustment of less than eight frames when working in real time video mode. At least 67% of the scene entropy can be retained with this method.
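A much-simplified sketch of an exposure-control loop driven by the saturation level (with an entropy helper), in the spirit of the adaptive PID controller described above; the gains, target values and update rule are assumptions, not the authors' algorithm:

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy of the intensity histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins - 1), density=True)
    hist = hist[hist > 0]
    return -np.sum(hist * np.log2(hist))

class ExposureController:
    """PI(D) loop on the fraction of saturated pixels; the target fraction
    and gains below are illustrative."""
    def __init__(self, target_saturation=0.01, kp=0.5, ki=0.1, kd=0.0):
        self.target, self.kp, self.ki, self.kd = target_saturation, kp, ki, kd
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, exposure_us, frame, full_scale=255):
        error = self.target - np.mean(frame >= full_scale)
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        correction = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(1.0, exposure_us * (1.0 + correction))
```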
Onboard Image Processing System for Hyperspectral Sensor
Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun
2015-01-01
Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume and high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and it is implemented in the onboard circuitry that corrects the sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance in image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reducing onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost. PMID:26404281
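The Golomb-Rice entropy-coding stage named above can be illustrated with a short sketch of encoding a mapped prediction residual; this is a generic Rice coder, not the flight implementation:

```python
def map_signed(residual):
    """Interleave signed prediction residuals onto non-negative integers:
    0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * residual if residual >= 0 else -2 * residual - 1

def golomb_rice_encode(value, k):
    """Encode a non-negative integer with Rice parameter k: the quotient
    value >> k in unary ('1' * q followed by a terminating '0'), then the
    k low-order remainder bits."""
    q = value >> k
    r = value & ((1 << k) - 1)
    remainder_bits = format(r, '0{}b'.format(k)) if k > 0 else ''
    return '1' * q + '0' + remainder_bits

# Example: encode a residual of -3 with k = 2.
bits = golomb_rice_encode(map_signed(-3), k=2)   # '101'... adaptive coders pick k per context
```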
Incorporating signal-dependent noise for hyperspectral target detection
NASA Astrophysics Data System (ADS)
Morman, Christopher J.; Meola, Joseph
2015-05-01
The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
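Fitting the linear signal-dependent noise model described above can be sketched as a per-pixel mean/variance regression over repeated frames of a static scene (the estimation procedure here is an assumption, not the paper's method):

```python
import numpy as np

def fit_linear_noise_model(frames):
    """Fit var = a * mean + b, using the temporal mean and variance of each
    pixel over repeated frames of a static scene.

    frames: array of shape (n_frames, rows, cols)."""
    mean = frames.mean(axis=0).ravel()
    var = frames.var(axis=0, ddof=1).ravel()
    a, b = np.polyfit(mean, var, 1)
    return a, b   # a: signal-dependent (shot-noise-like) slope, b: noise floor
```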
Peng, Mingzeng; Li, Zhou; Liu, Caihong; Zheng, Qiang; Shi, Xieqing; Song, Ming; Zhang, Yang; Du, Shiyu; Zhai, Junyi; Wang, Zhong Lin
2015-03-24
A high-resolution dynamic tactile/pressure display is indispensable to the comprehensive perception of force/mechanical stimulations such as electronic skin, biomechanical imaging/analysis, or personalized signatures. Here, we present a dynamic pressure sensor array based on pressure/strain tuned photoluminescence imaging without the need for electricity. Each sensor is a nanopillar that consists of InGaN/GaN multiple quantum wells. Its photoluminescence intensity can be modulated dramatically and linearly by small strain (0-0.15%) owing to the piezo-phototronic effect. The sensor array has a high pixel density of 6350 dpi and an exceptionally small standard deviation of photoluminescence. High-quality tactile/pressure sensing distribution can be recorded in real time by parallel photoluminescence imaging without any cross-talk. The sensor array can be inexpensively fabricated over large areas by semiconductor product lines. The proposed dynamic all-optical pressure imaging with excellent resolution, high sensitivity, good uniformity, and ultrafast response time offers a suitable way for smart sensing, micro/nano-opto-electromechanical systems.
Method of orthogonally splitting imaging pose measurement
NASA Astrophysics Data System (ADS)
Zhao, Na; Sun, Changku; Wang, Peng; Yang, Qian; Liu, Xintong
2018-01-01
In order to meet the pose measurement needs of aviation and machinery manufacturing for high precision, fast speed and a wide measurement range, and to resolve the contradiction between the measurement range and resolution of a vision sensor, this paper proposes an orthogonally splitting imaging pose measurement method. This paper designs and realizes an orthogonally splitting imaging vision sensor and establishes a pose measurement system. The vision sensor consists of one imaging lens, a beam splitter prism, cylindrical lenses and dual linear CCDs. The dual linear CCDs each acquire one-dimensional image coordinate data of the target point, and the two data sets restore the two-dimensional image coordinates of the target point. According to the characteristics of the imaging system, this paper establishes a nonlinear distortion model to correct distortion. Based on cross-ratio invariability, a polynomial equation is established and solved by the least-squares fitting method. After completing distortion correction, this paper establishes the measurement mathematical model of the vision sensor and determines the intrinsic parameters for calibration. An array of feature points for calibration is built by placing a planar target in different positions several times. An iterative optimization method is presented to solve the parameters of the model. The experimental results show that the field angle is 52°, the focal distance is 27.40 mm, the image resolution is 5185×5117 pixels, the displacement measurement error is less than 0.1 mm, and the rotation angle measurement error is less than 0.15°. The method of orthogonally splitting imaging pose measurement can satisfy the pose measurement requirements of high precision, fast speed and a wide measurement range.
NASA Technical Reports Server (NTRS)
Macdonald, H.; Waite, W.; Elachi, C.; Babcock, R.; Konig, R.; Gattis, J.; Borengasser, M.; Tolman, D.
1980-01-01
Imaging radar was evaluated as an adjunct to conventional petroleum exploration techniques, especially linear mapping. Linear features were mapped from several remote sensor data sources including stereo photography, enhanced LANDSAT imagery, SLAR radar imagery, enhanced SAR radar imagery, and SAR radar/LANDSAT combinations. Linear feature maps were compared with surface joint data, subsurface and geophysical data, and gas production in the Arkansas part of the Arkoma basin. The best LANDSAT enhanced product for linear detection was found to be a winter scene, band 7, uniform distribution stretch. Of the individual SAR data products, the VH (cross polarized) SAR radar mosaic provides for detection of most linears; however, none of the SAR enhancements is significantly better than the others. Radar/LANDSAT merges may provide better linear detection than a single sensor mapping mode, but because of operator variability, the results are inconclusive. Radar/LANDSAT combinations appear promising as an optimum linear mapping technique, if the advantages and disadvantages of each remote sensor are considered.
NASA Astrophysics Data System (ADS)
Taylor, James S., Jr.; Davis, P. S.; Wolff, Lawrence B.
2003-09-01
Research has shown that naturally occurring light outdoors and underwater is partially linearly polarized. The polarized components can be combined to form an image that describes the polarization of the light in the scene. This image is known as the degree of linear polarization (DOLP) image or partial polarization image. These naturally occurring polarization signatures can provide a diver or an unmanned underwater vehicle (UUV) with more information to detect, classify, and identify threats such as obstacles and/or mines in the shallow water environment. The SHallow water Real-time IMaging Polarimeter (SHRIMP), recently developed under sponsorship of Dr. Tom Swean at the Office of Naval Research (Code 321OE), can measure underwater partial polarization imagery. This sensor is a passive, three-channel device that simultaneously measures the three components of the Stokes vector needed to determine the partial linear polarization of the scene. The testing of this sensor has been completed and the data has been analyzed. This paper presents performance results from the field-testing and quantifies the gain provided by the partial polarization signature of targets in the Very Shallow Water (VSW) and Surf Zone (SZ) regions.
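The degree-of-linear-polarization image can be computed from three simultaneously measured intensity channels; the sketch below assumes polarizer orientations of 0°, 45° and 90°, which may differ from SHRIMP's actual channel configuration:

```python
import numpy as np

def dolp_image(i0, i45, i90):
    """Degree of linear polarization from intensities behind polarizers at
    0, 45 and 90 degrees: S0 = I0 + I90, S1 = I0 - I90, S2 = 2*I45 - I0 - I90,
    DOLP = sqrt(S1^2 + S2^2) / S0."""
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = 2.0 * i45 - i0 - i90
    return np.sqrt(s1**2 + s2**2) / np.where(s0 == 0, 1.0, s0)
```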
Toward CMOS image sensor based glucose monitoring.
Devadhasan, Jasmine Pramila; Kim, Sanghyo
2012-09-07
Complementary metal oxide semiconductor (CMOS) image sensors are a powerful tool for biosensing applications. In the present study, a CMOS image sensor has been exploited for detecting glucose levels through simple photon count variation with high sensitivity. Various concentrations of glucose (100 mg dL−1 to 1000 mg dL−1) were added onto a simple poly-dimethylsiloxane (PDMS) chip and the oxidation of glucose was catalyzed with the aid of an enzymatic reaction. Oxidized glucose produces a brown color with the help of a chromogen during the enzymatic reaction, and the color density varies with the glucose concentration. Photons pass through the PDMS chip with varying color density and hit the sensor surface. The photon count was registered by the CMOS image sensor depending on the color density, with respect to the glucose concentration, and converted into digital form. By correlating the obtained digital results with glucose concentration, it is possible to measure a wide range of blood glucose levels with great linearity based on the CMOS image sensor, and therefore this technique will promote convenient point-of-care diagnosis.
Shafie, Suhaidi; Kawahito, Shoji; Halin, Izhal Abdul; Hasan, Wan Zuha Wan
2009-01-01
The partial charge transfer technique can expand the dynamic range of a CMOS image sensor by synthesizing two types of signal, namely the long and short accumulation time signals. However, the short accumulation time signal obtained from the partial transfer operation suffers from non-linearity with respect to the incident light. In this paper, an analysis of the non-linearity in the partial charge transfer technique has been carried out, and the relationship between dynamic range and the non-linearity is studied. The results show that the non-linearity is caused by two factors, namely the current diffusion, which has an exponential relation with the potential barrier, and the initial condition of the photodiodes, which shows that the error in the high illumination region increases as the ratio of the long to the short accumulation time rises. Moreover, an increment in the saturation level of the photodiodes also increases the error in the high illumination region.
NASA Astrophysics Data System (ADS)
Cabrera, Blas; Brink, Paul L.; Leman, Steven W.; Castle, Joseph P.; Tomada, Astrid; Young, Betty A.; Martínez-Galarce, Dennis S.; Stern, Robert A.; Deiker, Steve; Irwin, Kent D.
2004-03-01
For future solar X-ray satellite missions, we are developing a phonon-mediated macro-pixel composed of a Ge crystal absorber with four superconducting transition-edge sensors (TES) distributed on the backside. The X-rays are absorbed on the opposite side and the energy is converted into phonons, which are absorbed into the four TES sensors. By connecting together parallel elements into four channels, fractional total energy absorbed between two of the sensors provides x-position information and the other two provide y-position information. We determine the optimal distribution for the TES sub-elements to obtain linear position information while minimizing the degradation of energy resolution.
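The fractional-energy position estimate described above can be sketched as a simple ratio of channel energies; the estimator below is illustrative only and ignores the detector's calibrated position reconstruction:

```python
def macro_pixel_readout(e_left, e_right, e_bottom, e_top):
    """Estimate total deposited energy and a normalized (x, y) position from
    the four TES channel energies; x from the left/right pair, y from the
    bottom/top pair."""
    total = e_left + e_right + e_bottom + e_top
    x = (e_right - e_left) / (e_right + e_left)
    y = (e_top - e_bottom) / (e_top + e_bottom)
    return total, x, y
```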
Computational multispectral video imaging [Invited].
Wang, Peng; Menon, Rajesh
2018-01-01
Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
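The regularization-based inversion step can be sketched as a Tikhonov least-squares solve, assuming the calibration yields a linear operator A that maps spectra to the coded sensor measurements (names and the regularization weight are illustrative):

```python
import numpy as np

def reconstruct_spectra(measurements, A, lam=1e-3):
    """Tikhonov-regularized inversion of the calibrated code matrix A
    (sensor pixels x spectral bands):
    s = argmin ||A s - m||^2 + lam * ||s||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ measurements)
```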
Tri-linear color multi-linescan sensor with 200 kHz line rate
NASA Astrophysics Data System (ADS)
Schrey, Olaf; Brockherde, Werner; Nitta, Christian; Bechen, Benjamin; Bodenstorfer, Ernst; Brodersen, Jörg; Mayer, Konrad J.
2016-11-01
In this paper we present a newly developed linear CMOS high-speed line-scanning sensor realized in a 0.35 μm CMOS OPTO process for line-scan with 200 kHz true RGB and 600 kHz monochrome line rate, respectively. In total, 60 lines are integrated in the sensor allowing for electronic position adjustment. The lines are read out in rolling shutter manner. The high readout speed is achieved by a column-wise organization of the readout chain. At full speed, the sensor provides RGB color images with a spatial resolution down to 50 μm. This feature enables a variety of applications like quality assurance in print inspection, real-time surveillance of railroad tracks, in-line monitoring in flat panel fabrication lines and many more. The sensor has a fill-factor close to 100%, preventing aliasing and color artefacts. Hence the tri-linear technology is robust against aliasing ensuring better inspection quality and thus less waste in production lines.
Gao, Zhiyuan; Yang, Congjie; Xu, Jiangtao; Nie, Kaiming
2015-11-06
This paper presents a dynamic range (DR) enhanced readout technique with a two-step time-to-digital converter (TDC) for high-speed linear CMOS image sensors. A multi-capacitor and self-regulated capacitive trans-impedance amplifier (CTIA) structure is employed to extend the dynamic range. The gain of the CTIA is auto-adjusted by switching different capacitors to the integration node asynchronously according to the output voltage. A column-parallel ADC based on a two-step TDC is utilized to improve the conversion rate. The conversion is divided into a coarse phase and a fine phase. An error calibration scheme is also proposed to correct quantization errors caused by propagation delay skew within −T_clk to +T_clk. A linear CMOS image sensor pixel array is designed in a 0.13 μm CMOS process to verify this DR-enhanced high-speed readout technique. The post-simulation results indicate that the dynamic range of the readout circuit is 99.02 dB and the ADC achieves 60.22 dB SNDR and 9.71-bit ENOB at a conversion rate of 2 MS/s after calibration, an improvement of 14.04 dB and 2.4 bits compared with the SNDR and ENOB without calibration.
Model of human visual-motion sensing
NASA Technical Reports Server (NTRS)
Watson, A. B.; Ahumada, A. J., Jr.
1985-01-01
A model of how humans sense the velocity of moving images is proposed. The model exploits constraints provided by human psychophysics, notably that motion-sensing elements appear tuned for two-dimensional spatial frequency, and by the frequency spectrum of a moving image, namely, that its support lies in the plane in which the temporal frequency equals the dot product of the spatial frequency and the image velocity. The first stage of the model is a set of spatial-frequency-tuned, direction-selective linear sensors. The temporal frequency of the response of each sensor is shown to encode the component of the image velocity in the sensor direction. At the second stage, these components are resolved in order to measure the velocity of image motion at each of a number of spatial locations and spatial frequencies. The model has been applied to several illustrative examples, including apparent motion, coherent gratings, and natural image sequences. The model agrees qualitatively with human perception.
Room temperature infrared imaging sensors based on highly purified semiconducting carbon nanotubes.
Liu, Yang; Wei, Nan; Zhao, Qingliang; Zhang, Dehui; Wang, Sheng; Peng, Lian-Mao
2015-04-21
High performance infrared (IR) imaging systems usually require expensive cooling systems, which are highly undesirable. Here we report the fabrication and performance characteristics of room temperature carbon nanotube (CNT) IR imaging sensors. The CNT IR imaging sensor is based on aligned semiconducting CNT films with 99% purity, and each pixel or device of the imaging sensor consists of aligned strips of CNTs asymmetrically contacted by Sc and Pd. We found that the performance of the device is dependent on the CNT channel length. While short channel devices provide a large photocurrent and a rapid response of about 110 μs, long channel devices exhibit a low dark current and a high signal-to-noise ratio, which are critical for obtaining high detectivity. In total, 36 CNT IR imagers are constructed on a single chip, each consisting of a 3 × 3 pixel array. The demonstrated advantages of constructing a high performance IR system using purified semiconducting CNT aligned films include, among other things, fast response, excellent stability and uniformity, ideal linear photocurrent response, high imaging polarization sensitivity and low power consumption.
Evaluation of excitation strategy with multi-plane electrical capacitance tomography sensor
NASA Astrophysics Data System (ADS)
Mao, Mingxu; Ye, Jiamin; Wang, Haigang; Zhang, Jiaolong; Yang, Wuqiang
2016-11-01
Electrical capacitance tomography (ECT) is an imaging technique for measuring the permittivity change of materials. Using a multi-plane ECT sensor, three-dimensional (3D) distribution of permittivity may be represented. In this paper, three excitation strategies, including single-electrode excitation, dual-electrode excitation in the same plane, and dual-electrode excitation in different planes are investigated by numerical simulation and experiment for two three-plane ECT sensors with 12 electrodes in total. In one sensor, the electrodes on the middle plane are in line with the others. In the other sensor, they are rotated 45° with reference to the other two planes. A linear back projection algorithm is used to reconstruct the images and a correlation coefficient is used to evaluate the image quality. The capacitance data and sensitivity distribution with each measurement strategy and sensor model are analyzed. Based on simulation and experimental results using noise-free and noisy capacitance data, the performance of the three strategies is evaluated.
Charge-coupled device image sensor study
NASA Technical Reports Server (NTRS)
1973-01-01
The design specifications and predicted performance characteristics of a Charge-Coupled Device Area Imager and a Charge-Coupled Device Linear Imager are presented. The recommended imagers are intended for use in space-borne imaging systems and are designed to meet the requirements of that application. A unique overlapping metal electrode structure and a buried channel structure are described. Reasons for the particular imager designs are discussed.
A digital sedimentator for measuring erythrocyte sedimentation rate using a linear image sensor
NASA Astrophysics Data System (ADS)
Yoshikoshi, Akio; Sakanishi, Akio; Toyama, Yoshiharu
2004-11-01
A digital apparatus was fabricated to accurately determine the erythrocyte sedimentation rate (ESR) using a linear image sensor. Currently, ESR is utilized for clinical diagnosis, and in the laboratory as one of the many rheological properties of blood, assessed through the settling of red blood cells (RBCs). In this work, we aimed to measure ESR automatically using a small amount of a sample and without moving parts. The linear image sensor was placed behind a microhematocrit tube containing 36 μl of RBC suspension on a holder plate; the holder plate was fixed on an optical bench together with a tungsten lamp and an opal glass placed in front. RBC suspensions were prepared in autologous plasma with hematocrit H from 25% to 44%. The intensity profiles of transmitted light in 36 μl of RBC suspension were detected using the linear image sensor and sent to a personal computer every minute. ESR was observed at the settling interface between the plasma and RBC suspension in the profile in 1024 pixels (25 μm/pixel) along a microhematocrit tube of 25.6 mm total length for 1 h at a temperature of 37.0±0.1 °C. First, we determined the initial pixel position of the sample at the boundary with air. The boundary and the interface were defined by inflection points in the profile with 25 μm resolution. We obtained sedimentation curves that were determined by the RBC settling distance l(t) at the time t from the difference between pixel locations at the boundary and the interface. The sedimentation curves were well fitted to an empirical equation [Puccini et al., Biorheol. 14, 43 (1977)] from which we calculated the maximum sedimentation velocity smax at the time tmax. We reached tmax within 30 min at any H, and smax was linearly related to the settling distance l(60) at 60 min after the start of sedimentation from 30% to 44% H with the correlation coefficient r=0.993. Thus, we may estimate conventional ESR at 1 h from smax more quickly and accurately with less effort.
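As a rough illustration of the interface-detection step described above, the following Python sketch locates the settling interface in a 1024-pixel transmitted-light profile at the point of maximum gradient of the smoothed profile (the inflection point of the sigmoid-like transition); the smoothing width and the exact inflection criterion are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def find_interface(profile, pixel_pitch_um=25.0, sigma=3.0):
    """Locate the plasma/RBC interface in a transmitted-light profile.

    profile        : 1-D intensity profile along the tube (e.g. 1024 pixels)
    pixel_pitch_um : pixel pitch of the linear image sensor [um]
    sigma          : Gaussian smoothing width in pixels (an assumed value)
    Returns the interface position in mm from the first pixel.
    """
    smooth = gaussian_filter1d(np.asarray(profile, dtype=float), sigma)
    grad = np.gradient(smooth)
    idx = int(np.argmax(np.abs(grad)))   # steepest point = inflection of the transition
    return idx * pixel_pitch_um / 1000.0
```

The settling distance l(t) then follows from the difference between this interface position and the air/sample boundary located in the same way at the start of the measurement.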
Spatial filtering self-velocimeter for vehicle application using a CMOS linear image sensor
NASA Astrophysics Data System (ADS)
He, Xin; Zhou, Jian; Nie, Xiaoming; Long, Xingwu
2015-03-01
The idea of using a spatial filtering velocimeter (SFV) to measure the velocity of a vehicle for an inertial navigation system is put forward. The presented SFV is based on a CMOS linear image sensor with a high-speed data rate, large pixel size, and built-in timing generator. These advantages make the image sensor suitable for measuring vehicle velocity. The power spectrum of the output signal is obtained by a fast Fourier transform and is corrected by a frequency spectrum correction algorithm. This velocimeter was used to measure the velocity of a conveyor belt driven by a rotary table, and the measurement uncertainty is approximately 0.54%. Furthermore, it was also installed on a vehicle together with a laser Doppler velocimeter (LDV) to measure self-velocity. The measurement result of the designed SFV is compared with that of the LDV and is shown to be in good agreement with it. Therefore, the designed SFV is suitable for a vehicle's self-contained inertial navigation system.
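A minimal Python sketch of the spectral-peak processing described above is given below; the filter period, the optical magnification, and the parabolic peak interpolation (standing in for the paper's frequency spectrum correction algorithm) are all assumptions for illustration.

```python
import numpy as np

def sfv_velocity(signal, fs, filter_period_um=14.0, magnification=0.2):
    """Estimate velocity from a spatial-filtering velocimeter signal.

    signal           : 1-D output of the spatial filter (from the linear image sensor)
    fs               : sampling rate of the signal [Hz]
    filter_period_um : spatial period of the filter on the sensor [um] (assumed)
    magnification    : optical magnification of the imaging system (assumed)
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n))) ** 2
    k = 1 + int(np.argmax(spectrum[1:]))          # spectral peak, ignoring the DC bin

    # Three-point parabolic interpolation around the peak as a simple
    # stand-in for the spectrum correction step.
    if 1 <= k < len(spectrum) - 1:
        a, b, c = spectrum[k - 1], spectrum[k], spectrum[k + 1]
        k = k + 0.5 * (a - c) / (a - 2 * b + c)
    f_peak = k * fs / n

    # Object velocity: v = f * p / M for a spatial filter of period p.
    return f_peak * (filter_period_um * 1e-6) / magnification
```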
Fiber optic sensors and systems at the Federal University of Rio de Janeiro
NASA Astrophysics Data System (ADS)
Werneck, Marcelo M.; dos Santos, Paulo A. M.; Ferreira, Aldo P.; Maggi, Luis E.; de Carvalho, Carlos R., Jr.; Ribeiro, R. M.
1998-08-01
As widely known, fiberoptics (FO) are being used in a large variety of sensors and systems, particularly for their small dimensions and low cost, large bandwidth and favorable dielectric properties. These properties have allowed us to develop sensors and systems for general applications and, particularly, for biomedical engineering. The intravascular pressure sensor was designed for small dimensions and high bandwidth. The system is based on a light-intensity modulation technique and uses a 2 mm-diameter elastomer membrane as the sensor element and a pigtailed laser as a light source. The optical power output curve was linear for pressures within the range of 0 to 300 mmHg. The real time optical biosensor uses the evanescent field technique for monitoring Escherichia coli growth in culture media. The optical biosensor monitors interactions between the analyte (bacteria) and the evanescent field of an optical fiber passing through it. The FO based high voltage and current sensor is a measuring system designed for monitoring voltage and current in high voltage transmission lines. The linearity of the system is better than 2% in both ranges of 0 to 25 kV and 0 to 1000 A. The optical flowmeter uses a cross-correlation technique that analyses two light beams crossing the flow separated by a fixed distance. The x-ray image sensor uses a scintillating FO array, one FO for each image pixel, to form an image of the x-ray field. The systems described in this paper use general-purpose components, including optical fibers and optoelectronic devices, which are readily available and of low cost.
Research progress in fiber optic sensors and systems at the Federal University of Rio de Janeiro
NASA Astrophysics Data System (ADS)
Werneck, Marcelo M.; Ferreira, Aldo P.; Maggi, Luis E.; De Carvalho, C. C.; Ribeiro, R. M.
1999-02-01
As widely known, fiberoptics (FO) are being used in a large variety of sensors and systems, particularly for their small dimensions and low cost, large bandwidth and favorable dielectric properties. These properties have allowed us to develop sensors and systems for general applications and, particularly, for biomedical engineering. The intravascular pressure sensor was designed for small dimensions and high bandwidth. The system is based on a light-intensity modulation technique and uses a 2 mm-diameter elastomer membrane as the sensor element and a pigtailed laser as a light source. The optical power output curve was linear for pressures within the range of 0 to 300 mmHg. The real time optical biosensor uses the evanescent field technique for monitoring Escherichia coli growth in culture media. The optical biosensor monitors interactions between the analyte and the evanescent field of an optical fiber passing through it. The FO based high voltage and current sensor is a measuring system designed for monitoring voltage and current in high voltage transmission lines. The linearity of the system is better than 2 percent in both ranges of 0 to 25 kV and 0 to 1000 A. The optical flowmeter uses a cross-correlation technique that analyzes two light beams crossing the flow separated by a fixed distance. The x-ray image sensor uses a scintillating FO array, one FO for each image pixel, to form an image of the x-ray field. The systems described in this paper use general-purpose components, including optical fibers and optoelectronic devices, which are readily available and of low cost.
Low temperature performance of a commercially available InGaAs image sensor
NASA Astrophysics Data System (ADS)
Nakaya, Hidehiko; Komiyama, Yutaka; Kashikawa, Nobunari; Uchida, Tomohisa; Nagayama, Takahiro; Yoshida, Michitoshi
2016-08-01
We report the evaluation results of a commercially available InGaAs image sensor manufactured by Hamamatsu Photonics K. K., which has sensitivity between 0.95 μm and 1.7 μm at room temperature. The sensor format is 128×128 pixels with a 20 μm pitch. It was tested with our original readout electronics and cooled down to 80 K by a mechanical cooler to minimize the dark current. The readout noise and dark current were 200 e- and 20 e-/sec/pixel, respectively, and we found no serious problems with the linearity, wavelength response, or intra-pixel response.
A NIR-BODIPY derivative for sensing copper(II) in blood and mitochondrial imaging
NASA Astrophysics Data System (ADS)
He, Shao-Jun; Xie, Yu-Wen; Chen, Qiu-Yun
2018-04-01
In order to develop NIR BODIPY dyes as mitochondria-targeting imaging agents and metal sensors, a side-chain-modified BODIPY (BPN) was synthesized and spectroscopically characterized. BPN has NIR emission at 765 nm when excited at 704 nm. The emission at 765 nm responded differently to Cu2+ and Mn2+ ions. BPN coordinated with Cu2+, forming a [BPNCu]2+ complex with quenched emission, while Mn2+ induced aggregation of BPN with a specific fluorescence enhancement. Moreover, BPN can be applied to monitor Cu2+ in live cells and to image mitochondria. Further, BPN was used as a sensor for the detection of Cu2+ ions in serum with a linear detection range of 0.45 μM-36.30 μM. Results indicate that BPN is a good sensor for the detection of Cu2+ in serum and for imaging mitochondria. This study provides strategies for the future design of NIR sensors for the analysis of metal ions in blood.
NASA Technical Reports Server (NTRS)
2004-01-01
Topics covered include: Analysis of SSEM Sensor Data Using BEAM; Hairlike Percutaneous Photochemical Sensors; Video Guidance Sensors Using Remotely Activated Targets; Simulating Remote Sensing Systems; EHW Approach to Temperature Compensation of Electronics; Polymorphic Electronic Circuits; Micro-Tubular Fuel Cells; Whispering-Gallery-Mode Tunable Narrow-Band-Pass Filter; PVM Wrapper; Simulation of Hyperspectral Images; Algorithm for Controlling a Centrifugal Compressor; Hybrid Inflatable Pressure Vessel; Double-Acting, Locking Carabiners; Position Sensor Integral with a Linear Actuator; Improved Electromagnetic Brake; Flow Straightener for a Rotating-Drum Liquid Separator; Sensory-Feedback Exoskeletal Arm Controller; Active Suppression of Instabilities in Engine Combustors; Fabrication of Robust, Flat, Thinned, UV-Imaging CCDs; Chemical Thinning Process for Fabricating UV-Imaging CCDs; Pseudoslit Spectrometer; Waste-Heat-Driven Cooling Using Complex Compound Sorbents; Improved Refractometer for Measuring Temperatures of Drops; Semiconductor Lasers Containing Quantum Wells in Junctions; Phytoplankton-Fluorescence-Lifetime Vertical Profiler; Hexagonal Pixels and Indexing Scheme for Binary Images; Finding Minimum-Power Broadcast Trees for Wireless Networks; and Automation of Design Engineering Processes.
Measurement of charge transfer potential barrier in pinned photodiode CMOS image sensors
NASA Astrophysics Data System (ADS)
Chen, Cao; Bing, Zhang; Junfeng, Wang; Longsheng, Wu
2016-05-01
The charge transfer potential barrier (CTPB) formed beneath the transfer gate causes a noticeable image lag issue in pinned photodiode (PPD) CMOS image sensors (CIS), and is difficult to measure straightforwardly since it is embedded inside the device. From an understanding of the CTPB formation mechanism, we report on an alternative method to feasibly measure the CTPB height by performing a linear extrapolation coupled with a horizontal left-shift on the sensor photoresponse curve under steady-state illumination. A detailed theoretical study of the principle of the proposed method was performed. Application of the measurement on a prototype PPD-CIS chip with an array of 160 × 160 pixels is demonstrated. Such a method is intended to provide guidance for the optimization of lag-free, high-speed sensors based on PPD devices. Project supported by the National Defense Pre-Research Foundation of China (No. 51311050301095).
High-density Schottky barrier IRCCD sensors for remote sensing applications
NASA Astrophysics Data System (ADS)
Elabd, H.; Tower, J. R.; McCarthy, B. M.
1983-01-01
It is pointed out that the ambitious goals envisaged for the next generation of space-borne sensors challenge the state-of-the-art in solid-state imaging technology. Studies are being conducted with the aim to provide focal plane array technology suitable for use in future Multispectral Linear Array (MLA) earth resource instruments. An important new technology for IR-image sensors involves the use of monolithic Schottky barrier infrared charge-coupled device arrays. This technology is suitable for earth sensing applications in which moderate quantum efficiency and intermediate operating temperatures are required. This IR sensor can be fabricated by using standard integrated circuit (IC) processing techniques, and it is possible to employ commercial IC grade silicon. For this reason, it is feasible to construct Schottky barrier area and line arrays with large numbers of elements and high-density designs. A Pd2Si Schottky barrier sensor for multispectral imaging in the 1 to 3.5 micron band is under development.
Note: An absolute X-Y-Θ position sensor using a two-dimensional phase-encoded binary scale
NASA Astrophysics Data System (ADS)
Kim, Jong-Ahn; Kim, Jae Wan; Kang, Chu-Shik; Jin, Jonghan
2018-04-01
This Note presents a new absolute X-Y-Θ position sensor for measuring the planar motion of a precision multi-axis stage system. By analyzing the rotated image of a two-dimensional (2D) phase-encoded binary scale, the absolute 2D position values at two separated points were obtained, and the absolute X-Y-Θ position could be calculated by combining these values. The sensor head was constructed using a board-level camera, a light-emitting diode light source, an imaging lens, and a cube beam-splitter. To obtain uniform intensity profiles from the vignetted scale image, we selected the averaging directions deliberately, and higher resolution in the angle measurement could be achieved by increasing the allowable offset size. The performance of a prototype sensor was evaluated with respect to resolution, nonlinearity, and repeatability. The sensor could clearly resolve 25 nm linear and 0.001° angular displacements, and the standard deviations were less than 18 nm when 2D grid positions were measured repeatedly.
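A hedged geometric sketch of how the pose could be combined from the two decoded 2D positions follows; the nominal baseline vector and the use of the midpoint as the reported position are assumptions about the sensor-head geometry, not details taken from the Note.

```python
import numpy as np

def xy_theta(p1, p2, baseline):
    """Combine two absolute 2-D scale readings into an X-Y-Theta pose.

    p1, p2   : (x, y) positions decoded at the two separated points
    baseline : nominal vector from point 1 to point 2 when theta = 0
               (assumed known from the sensor-head geometry)
    Returns (x, y, theta_deg).
    """
    p1, p2, baseline = (np.asarray(v, dtype=float) for v in (p1, p2, baseline))
    d = p2 - p1
    # Rotation: angle between the measured and nominal baseline directions.
    theta = np.arctan2(d[1], d[0]) - np.arctan2(baseline[1], baseline[0])
    # Report the midpoint of the two decoded points as the X-Y position.
    x, y = 0.5 * (p1 + p2)
    return x, y, np.degrees(theta)
```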
Aquatic Debris Detection Using Embedded Camera Sensors
Wang, Yong; Wang, Dianhong; Lu, Qian; Luo, Dapeng; Fang, Wu
2015-01-01
Aquatic debris monitoring is of great importance to human health, aquatic habitats and water transport. In this paper, we first introduce the prototype of an aquatic sensor node equipped with an embedded camera sensor. Based on this sensing platform, we propose a fast and accurate debris detection algorithm. Our method is specifically designed based on compressive sensing theory to give full consideration to the unique challenges in aquatic environments, such as waves, swaying reflections, and a tight energy budget. To upload debris images, we use an efficient sparse recovery algorithm in which only a few linear measurements need to be transmitted for image reconstruction. In addition, we implement the host software and test the debris detection algorithm on realistically deployed aquatic sensor nodes. The experimental results demonstrate that our approach is reliable and feasible for debris detection using camera sensors in aquatic environments. PMID:25647741
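The abstract does not detail the sparse recovery algorithm; the generic iterative soft-thresholding (ISTA) sketch below only illustrates, in Python, how a sparse image representation can be recovered from a few linear measurements, assuming a known measurement matrix A.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """ISTA for min 0.5*||Ax - y||^2 + lam*||x||_1 (generic sparse recovery).

    A : (m, n) measurement matrix with m << n (few transmitted measurements)
    y : (m,) measurement vector received from the sensor node
    """
    n = A.shape[1]
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))   # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x
```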
The Feasibility of 3d Point Cloud Generation from Smartphones
NASA Astrophysics Data System (ADS)
Alsubaie, N.; El-Sheimy, N.
2016-06-01
This paper proposes a new technique for increasing the accuracy of direct geo-referenced image-based 3D point cloud generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/ phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.
Davis, Philip A.
2012-01-01
Airborne digital-image data were collected for the Arizona part of the Colorado River ecosystem below Glen Canyon Dam in 2009. These four-band image data are similar in wavelength band (blue, green, red, and near infrared) and spatial resolution (20 centimeters) to image collections of the river corridor in 2002 and 2005. These periodic image collections are used by the Grand Canyon Monitoring and Research Center (GCMRC) of the U.S. Geological Survey to monitor the effects of Glen Canyon Dam operations on the downstream ecosystem. The 2009 collection used the latest model of the Leica ADS40 airborne digital sensor (the SH52), which uses a single optic for all four bands and collects and stores band radiance in 12-bits, unlike the image sensors that GCMRC used in 2002 and 2005. This study examined the performance of the SH52 sensor, on the basis of the collected image data, and determined that the SH52 sensor provided superior data relative to the previously employed sensors (that is, an early ADS40 model and Zeiss Imaging's Digital Mapping Camera) in terms of band-image registration, dynamic range, saturation, linearity to ground reflectance, and noise level. The 2009 image data were provided as orthorectified segments of each flightline to constrain the size of the image files; each river segment was covered by 5 to 6 overlapping, linear flightlines. Most flightline images for each river segment had some surface-smear defects and some river segments had cloud shadows, but these two conditions did not generally coincide in the majority of the overlapping flightlines for a particular river segment. Therefore, the final image mosaic for the 450-kilometer (km)-long river corridor required careful selection and editing of numerous flightline segments (a total of 513 segments, each 3.2 km long) to minimize surface defects and cloud shadows. The final image mosaic has a total of only 3 km of surface defects. The final image mosaic for the western end of the corridor has areas of cloud shadow because of persistent inclement weather during data collection. This report presents visual comparisons of the 2002, 2005, and 2009 digital-image mosaics for various physical, biological, and cultural resources within the Colorado River ecosystem. All of the comparisons show the superior quality of the 2009 image data. In fact, the 2009 four-band image mosaic is perhaps the best image dataset that exists for the entire Arizona part of the Colorado River.
A robust color signal processing with wide dynamic range WRGB CMOS image sensor
NASA Astrophysics Data System (ADS)
Kawada, Shun; Kuroda, Rihito; Sugawa, Shigetoshi
2011-01-01
We have developed a robust color reproduction methodology using a simple calculation with a new color matrix for the formerly developed wide dynamic range WRGB lateral overflow integration capacitor (LOFIC) CMOS image sensor. The image sensor was fabricated in a 0.18 μm CMOS technology and has a 45-degree oblique pixel array, a 4.2 μm effective pixel pitch and W pixels. A W pixel was formed by replacing one of the two G pixels in the Bayer RGB color filter. The W pixel has a high sensitivity across the visible light waveband. An emerald green and yellow (EGY) signal is generated from the difference between the W signal and the sum of the RGB signals. This EGY signal mainly includes emerald green and yellow light. These colors are difficult to reproduce accurately with the conventional simple linear matrix because their wavelengths lie in the valleys of the spectral sensitivity characteristics of the RGB pixels. A new linear matrix based on the EGY-RGB signal was developed. Using this simple matrix, highly accurate color processing with a large margin against sensitivity fluctuation and noise has been achieved.
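A minimal sketch of the EGY-RGB processing described above; the 3x4 matrix coefficients used here are placeholders for illustration only and are not the matrix developed in the paper.

```python
import numpy as np

def egy_rgb_color_correct(w, r, g, b, matrix=None):
    """Apply a linear color matrix to the EGY-RGB signal of a WRGB sensor.

    w, r, g, b : pixel signal arrays of identical shape (after white balance)
    matrix     : 3x4 matrix mapping [EGY, R, G, B] -> [R', G', B'];
                 the default below is a placeholder, not the paper's matrix.
    """
    egy = w - (r + g + b)                     # emerald-green/yellow component
    if matrix is None:
        matrix = np.array([[0.2, 1.0, 0.0, 0.0],
                           [0.3, 0.0, 1.0, 0.0],
                           [0.1, 0.0, 0.0, 1.0]])
    stacked = np.stack([egy, r, g, b], axis=-1)   # (..., 4)
    return stacked @ matrix.T                     # (..., 3) corrected RGB
```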
Dagamseh, Ahmad; Wiegerink, Remco; Lammerink, Theo; Krijnen, Gijs
2013-01-01
In Nature, fish have the ability to localize prey, school, navigate, etc., using the lateral-line organ. Artificial hair flow sensors arranged in a linear array shape (inspired by the lateral-line system (LSS) in fish) have been applied to measure airflow patterns at the sensor positions. Here, we take advantage of both biomimetic artificial hair-based flow sensors arranged as LSS and beamforming techniques to demonstrate dipole-source localization in air. Modelling and measurement results show the artificial lateral-line ability to image the position of dipole sources accurately with estimation error of less than 0.14 times the array length. This opens up possibilities for flow-based, near-field environment mapping that can be beneficial to, for example, biologists and robot guidance applications. PMID:23594816
NASA Technical Reports Server (NTRS)
1982-01-01
The state-of-the-art of multispectral sensing is reviewed and recommendations for future research and development are proposed. Specifically, two generic sensor concepts are discussed. One is the multispectral pushbroom sensor utilizing linear array technology, which operates in six spectral bands, including two in the SWIR region, and incorporates capabilities for stereo and crosstrack pointing. The second concept is the imaging spectrometer (IS), which incorporates a dispersive element and area arrays to provide both spectral and spatial information simultaneously. Other key technology areas include very large scale integration and the computer-aided design of these devices.
Robust optical sensors for safety critical automotive applications
NASA Astrophysics Data System (ADS)
De Locht, Cliff; De Knibber, Sven; Maddalena, Sam
2008-02-01
Optical sensors for the automotive industry need to be robust, high performing and low cost. This paper focuses on the impact of automotive requirements on optical sensor design and packaging. Main strategies for lowering optical sensor entry barriers in the automotive market include performing sensor calibration and tuning at the sensor manufacturer, providing on-chip sensor test modes to guarantee functional integrity during operation, and choosing the right package technology. In conclusion, optical sensor applications in automotive systems are growing. Optical sensor robustness has matured to the level of safety-critical applications, such as Electrical Power Assisted Steering (EPAS) and Drive-by-Wire with systems based on optical linear arrays, and Automated Cruise Control (ACC), Lane Change Assist and Driver Classification/Smart Airbag Deployment with systems based on camera imagers.
NASA Technical Reports Server (NTRS)
Swift, C. T.; Goodberlet, M. A.; Wilkerson, J. C.
1990-01-01
For the Defense Meteorological Satellite Program's (DMSP) Special Sensor Microwave/Imager (SSM/I), an operational wind speed algorithm was developed. The algorithm is based on the D-matrix approach, which seeks a linear relationship between measured SSM/I brightness temperatures and environmental parameters. D-matrix performance was validated by comparing algorithm-derived wind speeds with near-simultaneous, co-located measurements made by off-shore ocean buoys. Other topics include error budget modeling, alternate wind speed algorithms, and D-matrix performance with one or more inoperative SSM/I channels.
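The D-matrix idea can be sketched as an ordinary least-squares fit of buoy wind speeds against SSM/I brightness temperatures, as in the Python fragment below; the channel set and the purely linear form are assumptions for illustration, not the operational algorithm itself.

```python
import numpy as np

def fit_d_matrix(tb, buoy_wind):
    """Fit a D-matrix-style linear wind-speed retrieval.

    tb        : (n_obs, n_channels) SSM/I brightness temperatures [K]
                (the channel selection is an assumption)
    buoy_wind : (n_obs,) co-located buoy wind speeds [m/s]
    Returns (c0, c) for wind = c0 + tb @ c.
    """
    X = np.column_stack([np.ones(len(buoy_wind)), tb])
    coeffs, *_ = np.linalg.lstsq(X, buoy_wind, rcond=None)
    return coeffs[0], coeffs[1:]

def retrieve_wind(tb, c0, c):
    """Apply the fitted linear retrieval to new brightness temperatures."""
    return c0 + tb @ c
```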
NASA Technical Reports Server (NTRS)
Ando, K.
1982-01-01
A substantial technology base of solid state pushbroom sensors exists and is in the process of further evolution at both GSFC and JPL. Technologies being developed relate to short wave infrared (SWIR) detector arrays; HgCdTe hybrid detector arrays; InSb linear and area arrays; passive coolers; spectral beam splitters; the deposition of spectral filters on detector arrays; and the functional design of the shuttle/space platform imaging spectrometer (SIS) system. Spatial and spectral characteristics of field, aircraft and space multispectral sensors are summarized. The status, field of view, and resolution of foreign land observing systems are included.
Mathematical models and photogrammetric exploitation of image sensing
NASA Astrophysics Data System (ADS)
Puatanachokchai, Chokchai
Mathematical models of image sensing are generally categorized into physical/geometrical sensor models and replacement sensor models. While the former is determined from image sensing geometry, the latter is based on knowledge of the physical/geometric sensor models and on using such models for its implementation. The main thrust of this research is in replacement sensor models which have three important characteristics: (1) Highly accurate ground-to-image functions; (2) Rigorous error propagation that is essentially of the same accuracy as the physical model; and, (3) Adjustability, or the ability to upgrade the replacement sensor model parameters when additional control information becomes available after the replacement sensor model has replaced the physical model. In this research, such replacement sensor models are considered as True Replacement Models or TRMs. TRMs provide a significant advantage of universality, particularly for image exploitation functions. There have been several writings about replacement sensor models, and except for the so called RSM (Replacement Sensor Model as a product described in the Manual of Photogrammetry), almost all of them pay very little or no attention to errors and their propagation. This is because, it is suspected, the few physical sensor parameters are usually replaced by many more parameters, thus presenting a potential error estimation difficulty. The third characteristic, adjustability, is perhaps the most demanding. It provides an equivalent flexibility to that of triangulation using the physical model. Primary contributions of this thesis include not only "the eigen-approach", a novel means of replacing the original sensor parameter covariance matrices at the time of estimating the TRM, but also the implementation of the hybrid approach that combines the eigen-approach with the added parameters approach used in the RSM. Using either the eigen-approach or the hybrid approach, rigorous error propagation can be performed during image exploitation. Further, adjustability can be performed when additional control information becomes available after the TRM has been implemented. The TRM is shown to apply to imagery from sensors having different geometries, including an aerial frame camera, a spaceborne linear array sensor, an airborne pushbroom sensor, and an airborne whiskbroom sensor. TRM results show essentially negligible differences as compared to those from rigorous physical sensor models, both for geopositioning from single and overlapping images. Simulated as well as real image data are used to address all three characteristics of the TRM.
A light sheet confocal microscope for image cytometry with a variable linear slit detector
NASA Astrophysics Data System (ADS)
Hutcheson, Joshua A.; Khan, Foysal Z.; Powless, Amy J.; Benson, Devin; Hunter, Courtney; Fritsch, Ingrid; Muldoon, Timothy J.
2016-03-01
We present a light sheet confocal microscope (LSCM) capable of high-resolution imaging of cell suspensions in a microfluidic environment. In lieu of conventional pressure-driven flow or mechanical translation of the samples, we have employed a novel method of fluid transport, redox-magnetohydrodynamics (redox-MHD). This method achieves fluid motion by inducing a small current into the suspension in the presence of a magnetic field via electrodes patterned onto a silicon chip. This on-chip transportation requires no moving parts, and is coupled to the remainder of the imaging system. The microscopy system comprises a 450 nm, 20 mW diode laser coupled to a single mode fiber and a cylindrical lens that converges the light sheet into the back aperture of a 10x, 0.3 NA objective lens in an epi-illumination configuration. The emission pathway contains a 150 mm tube lens that focuses the light onto the linear sensor at the conjugate image plane. The linear sensor (ELiiXA+ 8k/4k) has three lateral binning modes, which enable detection aperture widths of 5, 10, or 20 μm and can be used to vary axial resolution. We have demonstrated redox-MHD-enabled light sheet microscopy in a suspension of fluorescent polystyrene beads. This approach has potential as a high-throughput image cytometer with myriad cellular diagnostic applications.
Noise reduction techniques for Bayer-matrix images
NASA Astrophysics Data System (ADS)
Kalevo, Ossi; Rantanen, Henry
2002-04-01
In this paper, some arrangements for applying Noise Reduction (NR) techniques to images captured by a single-sensor digital camera are studied. Usually, the NR filter processes full three-color-component image data. This requires that the raw Bayer-matrix image data, available from the image sensor, is first interpolated using a Color Filter Array Interpolation (CFAI) method. Another choice is to process the raw Bayer-matrix image data directly. The advantages and disadvantages of both processing orders, before (pre-) CFAI and after (post-) CFAI, are studied with linear, multi-stage median, multi-stage median hybrid and median-rational filters. The comparison is based on the quality of the output image, the processing power requirements and the amount of memory needed. A solution that improves the preservation of details when NR filtering is applied before the CFAI is also proposed.
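A simplified Python sketch of the two processing orders compared in the paper is shown below, using a plain 3x3 median filter and an assumed RGGB pattern; the multi-stage median, hybrid, and median-rational filters studied by the authors are not reproduced here.

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_pre_cfai(bayer):
    """Pre-CFAI order: filter each colour plane of the raw Bayer data
    separately, before demosaicking (RGGB pattern assumed)."""
    out = np.asarray(bayer, dtype=float).copy()
    for dy in (0, 1):
        for dx in (0, 1):
            out[dy::2, dx::2] = median_filter(out[dy::2, dx::2], size=3)
    return out

def denoise_post_cfai(rgb):
    """Post-CFAI order: filter a demosaicked image channel by channel."""
    return np.stack([median_filter(np.asarray(rgb, dtype=float)[..., c], size=3)
                     for c in range(3)], axis=-1)
```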
Material condition assessment with eddy current sensors
NASA Technical Reports Server (NTRS)
Goldfine, Neil J. (Inventor); Washabaugh, Andrew P. (Inventor); Sheiretov, Yanko K. (Inventor); Schlicker, Darrell E. (Inventor); Lyons, Robert J. (Inventor); Windoloski, Mark D. (Inventor); Craven, Christopher A. (Inventor); Tsukernik, Vladimir B. (Inventor); Grundy, David C. (Inventor)
2010-01-01
Eddy current sensors and sensor arrays are used for process quality and material condition assessment of conducting materials. In an embodiment, changes in spatially registered high resolution images taken before and after cold work processing reflect the quality of the process, such as intensity and coverage. These images also permit the suppression or removal of local outlier variations. Anisotropy in a material property, such as magnetic permeability or electrical conductivity, can be intentionally introduced and used to assess material condition resulting from an operation, such as a cold work or heat treatment. The anisotropy is determined by sensors that provide directional property measurements. The sensor directionality arises from constructs that use a linear conducting drive segment to impose the magnetic field in a test material. Maintaining the orientation of this drive segment, and associated sense elements, relative to a material edge provides enhanced sensitivity for crack detection at edges.
An analog gamma correction scheme for high dynamic range CMOS logarithmic image sensors.
Cao, Yuan; Pan, Xiaofang; Zhao, Xiaojin; Wu, Huisi
2014-12-15
In this paper, a novel analog gamma correction scheme for a logarithmic image sensor, dedicated to minimizing the quantization noise in high dynamic range applications, is presented. The proposed implementation exploits a non-linear voltage-controlled-oscillator (VCO) based analog-to-digital converter (ADC) to perform the gamma correction during the analog-to-digital conversion. As a result, the quantization noise does not increase while the high dynamic range of the logarithmic image sensor is preserved. Moreover, by combining the gamma correction with the analog-to-digital conversion, the silicon area and overall power consumption can be greatly reduced. The proposed gamma correction scheme is validated by the reported simulation results and by experimental results measured on our designed test structure, which was fabricated in a 0.35 μm standard complementary-metal-oxide-semiconductor (CMOS) process.
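As an illustration of why performing gamma correction inside the converter helps, the Python sketch below contrasts gamma-spaced quantization levels (fine code spacing preserved at low signal) with digital gamma applied after a uniform ADC (coarse low-signal codes get stretched). The bit depth and gamma value are assumed; this is not a model of the VCO-based ADC itself.

```python
import numpy as np

def gamma_during_conversion(v, v_max=1.0, bits=10, gamma=0.45):
    """Quantize with gamma-spaced decision levels, mimicking gamma correction
    performed as part of the conversion (illustrative parameters)."""
    codes = 2 ** bits
    levels = v_max * (np.arange(codes) / codes) ** (1.0 / gamma)
    return np.digitize(v, levels) - 1            # gamma-domain output code

def gamma_after_conversion(v, v_max=1.0, bits=10, gamma=0.45):
    """Uniform quantization followed by digital gamma correction; the sparse
    low-signal codes are stretched, so quantization noise grows there."""
    codes = 2 ** bits
    q = np.clip(np.round(np.asarray(v) / v_max * (codes - 1)), 0, codes - 1)
    return (codes - 1) * (q / (codes - 1)) ** gamma
```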
The AOLI low-order non-linear curvature wavefront sensor: laboratory and on-sky results
NASA Astrophysics Data System (ADS)
Crass, Jonathan; King, David; MacKay, Craig
2014-08-01
Many adaptive optics (AO) systems in use today require the use of bright reference objects to determine the effects of atmospheric distortions. Typically these systems use Shack-Hartmann Wavefront sensors (SHWFS) to distribute incoming light from a reference object between a large number of sub-apertures. Guyon et al. evaluated the sensitivity of several different wavefront sensing techniques and proposed the non-linear Curvature Wavefront Sensor (nlCWFS) offering improved sensitivity across a range of orders of distortion. On large ground-based telescopes this can provide nearly 100% sky coverage using natural guide stars. We present work being undertaken on the nlCWFS development for the Adaptive Optics Lucky Imager (AOLI) project. The wavefront sensor is being developed as part of a low-order adaptive optics system for use in a dedicated instrument providing an AO corrected beam to a Lucky Imaging based science detector. The nlCWFS provides a total of four reference images on two photon-counting EMCCDs for use in the wavefront reconstruction process. We present results from both laboratory work using a calibration system and the first on-sky data obtained with the nlCWFS at the 4.2 metre William Herschel Telescope, La Palma. In addition, we describe the updated optical design of the wavefront sensor, strategies for minimising intrinsic effects and methods to maximise sensitivity using photon-counting detectors. We discuss on-going work to develop the high speed reconstruction algorithm required for the nlCWFS technique. This includes strategies to implement the technique on graphics processing units (GPUs) and to minimise computing overheads to obtain a prior for a rapid convergence of the wavefront reconstruction. Finally we evaluate the sensitivity of the wavefront sensor based upon both data and low-photon count strategies.
Charge shielding in the In-situ Storage Image Sensor for a vertex detector at the ILC
NASA Astrophysics Data System (ADS)
Zhang, Z.; Stefanov, K. D.; Bailey, D.; Banda, Y.; Buttar, C.; Cheplakov, A.; Cussans, D.; Damerell, C.; Devetak, E.; Fopma, J.; Foster, B.; Gao, R.; Gillman, A.; Goldstein, J.; Greenshaw, T.; Grimes, M.; Halsall, R.; Harder, K.; Hawes, B.; Hayrapetyan, K.; Heath, H.; Hillert, S.; Jackson, D.; Pinto Jayawardena, T.; Jeffery, B.; John, J.; Johnson, E.; Kundu, N.; Laing, A.; Lastovicka, T.; Lau, W.; Li, Y.; Lintern, A.; Lynch, C.; Mandry, S.; Martin, V.; Murray, P.; Nichols, A.; Nomerotski, A.; Page, R.; Parkes, C.; Perry, C.; O'Shea, V.; Sopczak, A.; Tabassam, H.; Thomas, S.; Tikkanen, T.; Velthuis, J.; Walsh, R.; Woolliscroft, T.; Worm, S.
2009-08-01
The Linear Collider Flavour Identification (LCFI) collaboration has successfully developed the first prototype of a novel particle detector, the In-situ Storage Image Sensor (ISIS). This device ideally suits the challenging requirements for the vertex detector at the future International Linear Collider (ILC), combining the charge storing capabilities of the Charge-Coupled Devices (CCD) with readout commonly used in CMOS imagers. The ISIS avoids the need for high-speed readout and offers low power operation combined with low noise, high immunity to electromagnetic interference and increased radiation hardness compared to typical CCDs. The ISIS is one of the most promising detector technologies for vertexing at the ILC. In this paper we describe the measurements on the charge-shielding properties of the p-well, which is used to protect the storage register from parasitic charge collection and is at the core of device's operation. We show that the p-well can suppress the parasitic charge collection by almost two orders of magnitude, satisfying the requirements for the application.
Simple Colorimetric Sensor for Trinitrotoluene Testing
NASA Astrophysics Data System (ADS)
Samanman, S.; Masoh, N.; Salah, Y.; Srisawat, S.; Wattanayon, R.; Wangsirikul, P.; Phumivanichakit, K.
2017-02-01
A simple, easy-to-operate colorimetric sensor for trinitrotoluene (TNT) determination, using a commercial scanner for image capture, was designed. The sensor is based on the chemical reaction between TNT and a sodium hydroxide reagent, which produces a color change within 96-well plates that is then observed and recorded using the scanner. The intensity of the color change increased with increasing TNT concentration, and the concentration of TNT could easily be quantified by digital image analysis using the free ImageJ software. Under optimum conditions, the sensor provided a linear dynamic range between 0.20 and 1.00 mg mL-1 (r = 0.9921) with a limit of detection of 0.10 ± 0.01 mg mL-1. The relative standard deviation of the sensitivity over eight experiments was 3.8%. When applied to the analysis of TNT in two soil extract samples, the concentrations were found to range from non-detectable to 0.26 ± 0.04 mg mL-1. The obtained recovery values (93-95%) were acceptable for the soil samples tested.
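A minimal sketch of the digital image analysis step, assuming a mean-intensity readout per well and a straight-line calibration; the ROI definition and any background correction used by the authors are not reproduced here.

```python
import numpy as np

def well_intensities(image, rois):
    """Mean intensity of each well ROI in a scanned grayscale image.

    rois : list of (row_slice, col_slice) pairs defining the wells (assumed known)
    """
    return np.array([image[r, c].mean() for r, c in rois])

def calibrate(concentrations, intensities):
    """Straight-line calibration: intensity = a * concentration + b."""
    a, b = np.polyfit(concentrations, intensities, 1)
    return a, b

def quantify(intensity, a, b):
    """Invert the calibration to estimate the TNT concentration of a sample."""
    return (intensity - b) / a
```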
Lyu, Tao; Yao, Suying; Nie, Kaiming; Xu, Jiangtao
2014-11-17
A 12-bit high-speed column-parallel two-step single-slope (SS) analog-to-digital converter (ADC) for CMOS image sensors is proposed. The proposed ADC employs a single ramp voltage and multiple reference voltages, and the conversion is divided into a coarse phase and a fine phase to improve the conversion rate. An error calibration scheme is proposed to correct errors caused by offsets among the reference voltages. The digital-to-analog converter (DAC) used for the ramp generator is based on a split-capacitor array with an attenuation capacitor. Analysis of the DAC's linearity performance versus capacitor mismatch and parasitic capacitance is presented. A prototype 1024 × 32 Time Delay Integration (TDI) CMOS image sensor with the proposed ADC architecture has been fabricated in a standard 0.18 μm CMOS process. The proposed ADC has an average power consumption of 128 μW and a conversion rate 6 times higher than that of the conventional SS ADC. A high-quality image, captured at a line rate of 15.5 k lines/s, shows that the proposed ADC is suitable for high-speed CMOS image sensors.
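A behavioural Python sketch of a two-step single-slope conversion follows, with an assumed 4-bit coarse / 8-bit fine split and ideal, offset-free reference voltages (the very non-ideality that the paper's calibration scheme addresses).

```python
def two_step_ss_convert(v_in, v_ref_max=1.0, coarse_bits=4, fine_bits=8):
    """Behavioural model of a two-step single-slope conversion.

    Coarse phase: locate the reference segment containing v_in.
    Fine phase: count ramp steps inside that segment only.
    The 4+8 resolution split and ideal references are assumptions.
    """
    n_coarse, n_fine = 2 ** coarse_bits, 2 ** fine_bits
    segment = v_ref_max / n_coarse

    coarse = min(int(v_in / segment), n_coarse - 1)              # coarse comparison
    residue = v_in - coarse * segment                            # left for the fine ramp
    fine = min(int(residue / (segment / n_fine)), n_fine - 1)    # fine ramp count

    return (coarse << fine_bits) | fine                          # assembled 12-bit code
```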
NASA Astrophysics Data System (ADS)
Yang, Huizhen; Ma, Liang; Wang, Bin
2018-01-01
In contrast to the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need a WFS to measure the wavefront aberrations. It is simpler than conventional AO in system architecture and can be applied under complex conditions. The model-based WFSless system has great potential in real-time correction applications because of its fast convergence. The control algorithm of the model-based WFSless system is based on an important theoretical result: the linear relation between the Mean-Square Gradient (MSG) magnitude of the wavefront aberration and the second moment of the masked intensity distribution in the focal plane (also called the Masked Detector Signal, MDS). The linear dependence between MSG and MDS for point-source imaging with a CCD sensor is discussed from theory and simulation in this paper. The theoretical relationship between MSG and MDS is given based on our previous work. To verify the linear relation for the point source, we set up an imaging model under atmospheric turbulence. Additionally, the value of the MDS will deviate from the theoretical value because of detector noise, and this deviation will affect the correction performance. The theoretical results under noise are obtained through derivation, and the linear relation between MSG and MDS under noise is then examined using the imaging model. Results show that the linear relation between MSG and MDS is maintained well under noise, which provides theoretical support for applications of the model-based WFSless system.
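One plausible form of the masked detector signal is sketched below in Python as the intensity-weighted second moment of the focal-plane image inside a circular mask; the mask shape, its radius, and the normalization are assumptions rather than the exact definition used by the authors.

```python
import numpy as np

def masked_detector_signal(image, mask_radius):
    """Second moment of the masked focal-plane intensity (one possible MDS form).

    image       : 2-D focal-plane intensity from the CCD
    mask_radius : radius of the central mask in pixels (an assumed value)
    """
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r2 = (x - (nx - 1) / 2.0) ** 2 + (y - (ny - 1) / 2.0) ** 2
    masked = np.where(r2 <= mask_radius ** 2, image, 0.0)
    total = masked.sum()
    if total == 0:
        return 0.0
    # Intensity-weighted second moment about the image centre.
    return float((masked * r2).sum() / total)
```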
NASA Astrophysics Data System (ADS)
Yuan, Qun; Zhu, Dan; Chen, Yueyang; Guo, Zhenyan; Zuo, Chao; Gao, Zhishan
2017-04-01
We present the optical design of a Czerny-Turner imaging spectrometer for which astigmatism is corrected using off-the-shelf optics resulting in spectral resolution of 0.1 nm. The classic Czerny-Turner imaging spectrometer, consisting of a plane grating, two spherical mirrors, and a sensor with 10-μm pixels, was used as the benchmark. We comparatively assessed three configurations of the spectrometer that corrected astigmatism with divergent illumination of the grating, by adding a cylindrical lens, or by adding a cylindrical mirror. When configured with the added cylindrical lens, the imaging spectrometer with a point field of view (FOV) and a linear sensor achieved diffraction-limited performance over a broadband width of 400 nm centered at 800 nm, while the maximum allowable bandwidth was only 200 nm for the other two configurations. When configured with the added cylindrical mirror, the imaging spectrometer with a one-dimensional field of view (1D FOV) and an area sensor showed its superiority on imaging quality, spectral nonlinearity, as well as keystone over 100 nm bandwidth and 10 mm spatial extent along the entrance slit.
Haffert, S Y
2016-08-22
Current wavefront sensors for high resolution imaging have either a large dynamic range or a high sensitivity. A new kind of wavefront sensor is developed which can have both: the Generalised Optical Differentiation wavefront sensor. This new wavefront sensor is based on the principles of optical differentiation by amplitude filters. We have extended the theory behind linear optical differentiation and generalised it to nonlinear filters. We used numerical simulations and laboratory experiments to investigate the properties of the generalised wavefront sensor. With this, we created a new filter that can decouple the dynamic range from the sensitivity. These properties make it suitable for adaptive optics systems where a large range of phase aberrations has to be measured with high precision.
USGS aerial resolution targets.
Salamonowicz, P.H.
1982-01-01
It is necessary to measure the achievable resolution of any airborne sensor that is to be used for metric purposes. Laboratory calibration facilities may be inadequate or inappropriate for determining the resolution of non-photographic sensors such as optical-mechanical scanners, television imaging tubes, and linear arrays. However, large target arrays imaged in the field can be used in testing such systems. The USGS has constructed an array of resolution targets in order to permit field testing of a variety of airborne sensing systems. The target array permits any interested organization with an airborne sensing system to accurately determine the operational resolution of its system. -from Author
NASA Astrophysics Data System (ADS)
Georges, F.; Remouche, M.; Meyrueis, P.
2011-06-01
Usually, manufacturers' specifications do not deal with the ability of linear sheet polarizers to provide a constant transmittance function over their geometric area. This property is fundamental for developing low-cost polarimetric sensors (for instance rotation, torque, and displacement sensors), specifically for hybrid cars (thermal + electric power). It is then necessary to specially characterize commercial polarizer sheets to find out whether they are suited to this kind of application. In this paper, we present the measuring methods and bench developed for this purpose, and some preliminary characterization results. We state conclusions for effective applications to hybrid car gearbox control and monitoring.
Linear mixing model applied to coarse spatial resolution data from multispectral satellite sensors
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1993-01-01
A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55-3.95 micron channel was used with the two reflective channels 0.58-0.68 micron and 0.725-1.1 micron to run a constrained least squares model to generate fraction images for an area in the west central region of Brazil. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images show the potential of the unmixing techniques when using coarse spatial resolution data for global studies.
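A hedged Python sketch of constrained least-squares unmixing for a single coarse-resolution pixel is given below; the endmember set (for example vegetation, soil, and shade) and the soft sum-to-one weighting are illustrative choices, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import lsq_linear

def unmix_pixel(reflectance, endmembers, w=1e3):
    """Constrained least-squares unmixing of one coarse-resolution pixel.

    reflectance : (n_bands,) observed reflectance of the pixel
    endmembers  : (n_bands, n_classes) endmember reflectances (placeholders)
    w           : weight of the soft sum-to-one constraint
    Fractions are bounded to [0, 1]; sum-to-one is enforced softly by
    appending one heavily weighted extra equation.
    """
    n_classes = endmembers.shape[1]
    A = np.vstack([endmembers, w * np.ones((1, n_classes))])
    b = np.concatenate([reflectance, [w]])
    return lsq_linear(A, b, bounds=(0.0, 1.0)).x   # fraction of each endmember
```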
Chahl, J S
2014-01-20
This paper describes an application for arrays of narrow-field-of-view sensors with parallel optical axes. These devices exhibit some complementary characteristics with respect to conventional perspective projection or angular projection imaging devices. Conventional imaging devices measure rotational egomotion directly by measuring the angular velocity of the projected image. Translational egomotion cannot be measured directly by these devices because the induced image motion depends on the unknown range of the viewed object. On the other hand, a known translational motion generates image velocities which can be used to recover the ranges of objects and hence the three-dimensional (3D) structure of the environment. A new method is presented for computing egomotion and range using the properties of linear arrays of independent narrow-field-of-view optical sensors. An approximate parallel projection can be used to measure translational egomotion in terms of the velocity of the image. On the other hand, a known rotational motion of the paraxial sensor array generates image velocities, which can be used to recover the 3D structure of the environment. Results of tests of an experimental array confirm these properties.
The fusion of satellite and UAV data: simulation of high spatial resolution band
NASA Astrophysics Data System (ADS)
Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata
2017-10-01
Remote sensing techniques used in precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but quite low spectral resolution; therefore, the application of imagery data obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors proposed the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of multispectral images while preserving their spectral properties. In this research, the authors present the fusion of high-spatial-resolution RGB images obtained with sensors mounted on low-cost UAV platforms and multispectral satellite imagery from satellite sensors, i.e., Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, the panchromatic band was simulated from the RGB data as a linear combination of the spectral channels. Next, for the simulated bands and multispectral satellite images, the Gram-Schmidt pansharpening method was applied. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
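A minimal Python sketch of the panchromatic-band simulation step is given below; the band weights are placeholders and would in practice be chosen to match the satellite sensor's spectral response before the Gram-Schmidt fusion.

```python
import numpy as np

def simulate_pan(rgb, weights=(0.30, 0.45, 0.25)):
    """Simulate a high-resolution panchromatic band from UAV RGB data.

    rgb     : (H, W, 3) UAV image, band order R, G, B assumed
    weights : linear-combination coefficients (placeholder values)
    """
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                      # keep the simulated band radiometrically balanced
    return np.asarray(rgb, dtype=float) @ w
```

The simulated band then plays the role of the high-resolution panchromatic input to the Gram-Schmidt pansharpening of the Landsat 8 OLI multispectral bands.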
Xu, Han-qiu; Zhang, Tie-jun
2011-07-01
The present paper investigates the quantitative relationship between the NDVI and SAVI vegetation indices of the Landsat and ASTER sensors based on three tandem image pairs. The study examines how well ASTER sensor vegetation observations replicate ETM+ vegetation observations and, more importantly, the difference in the vegetation observations between the two sensors. The DN values of the three image pairs were first converted to at-sensor reflectance to reduce radiometric differences between the two sensors' images. The NDVI and SAVI vegetation indices of the two sensors were then calculated using the converted reflectance. The quantitative relationship was revealed through regression analysis on the scatter plots of the vegetation index values of the two sensors. The models for the conversion between the two sensors' vegetation indices were also obtained from the regression. The results show that a difference does exist between the two sensors' vegetation indices, though they have a very strong positive linear relationship. The study found that the red and near infrared measurements differ between the two sensors, with ASTER generally producing higher reflectance in the red band and lower reflectance in the near infrared band than the ETM+ sensor. This results in the ASTER sensor producing lower spectral vegetation index measurements, for the same target, than ETM+. The relative spectral response function differences in the red and near infrared bands between the two sensors are believed to be the main factor contributing to their differences in vegetation index measurements, because the red and near infrared relative spectral response features of the ASTER sensor overlap the vegetation "red edge" spectral region. The obtained conversion models have high accuracy, with an RMSE of less than 0.04 for both sensors' inter-conversion between corresponding vegetation indices.
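A short Python sketch of the index computation and the inter-sensor regression follows; the conversion is written here as the ETM+ index regressed on the ASTER index, with the regression direction and any further correction left as assumptions.

```python
import numpy as np

def ndvi(red, nir):
    """NDVI from at-sensor reflectance in the red and near-infrared bands."""
    red, nir = np.asarray(red, dtype=float), np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red)

def fit_conversion(vi_aster, vi_etm):
    """Linear conversion model VI_ETM+ = a * VI_ASTER + b from co-located pixels."""
    a, b = np.polyfit(vi_aster.ravel(), vi_etm.ravel(), 1)
    rmse = np.sqrt(np.mean((a * vi_aster + b - vi_etm) ** 2))
    return a, b, rmse
```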
Novel eye-safe line scanning 3D laser-radar
NASA Astrophysics Data System (ADS)
Eberle, B.; Kern, Tobias; Hammer, Marcus; Schwanke, Ullrich; Nowak, Heinrich
2014-10-01
Today, the civil market provides quite a number of different 3D sensors covering ranges up to 1 km. Typically, these sensors are based on single-element detectors, which suffer from limited spatial resolution at larger distances. Tasks demanding reliable object classification at long ranges can be fulfilled only by sensors consisting of detector arrays, which ensure sufficient frame rates and high spatial resolution. Worldwide, there are many efforts to develop 3D detectors based on two-dimensional arrays. This paper presents first results on the performance of a recently developed 3D imaging laser radar sensor working in the short wave infrared (SWIR) at 1.5 μm. It consists of a novel Cadmium Mercury Telluride (CMT) linear-array APD detector with 384×1 elements at a pitch of 25 μm, developed by AIM Infrarot Module GmbH. The APD elements are designed to work in the linear (non-Geiger) mode. Each pixel provides a time-of-flight measurement and, due to the linear detection mode, allows the detection of three successive echoes. The depth resolution is 15 cm, and the maximum repetition rate is 4 kHz. We discuss various sensor concepts regarding possible applications and their dependence on system parameters like field of view, frame rate, spatial resolution and range of operation.
On use of image quality metrics for perceptual blur modeling: image/video compression case
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn
2018-02-01
Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed from a nonlinear degradation process. Previous research relying on image quality metrics (IQM) methods, which heuristically estimate the perceived MTF, has supported the idea that an average perceived MTF can be used to model some types of degradation such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x.264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.
Optical and electrical characterization of a back-thinned CMOS active pixel sensor
NASA Astrophysics Data System (ADS)
Blue, Andrew; Clark, A.; Houston, S.; Laing, A.; Maneuski, D.; Prydderch, M.; Turchetta, R.; O'Shea, V.
2009-06-01
This work reports on the first characterization of a back-thinned Vanilla sensor, a 512×512 active pixel sensor (APS) with 25 μm square pixels. Characterization of the detectors was carried out through the analysis of photon transfer curves to yield measurements of full well capacity, noise levels, gain constants and linearity. Spectral characterization of the sensors was also performed in the visible and UV regions. A full comparison against non-back-thinned, front-illuminated Vanilla sensors is included. These measurements suggest that the Vanilla APS will be suitable for a wide range of applications, including particle physics and biomedical imaging.
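A brief Python sketch of photon-transfer-curve gain estimation from flat-field frame pairs is shown below, assuming uniform illumination and a shot-noise-limited region; the actual measurement protocol used for the Vanilla sensor may differ.

```python
import numpy as np

def photon_transfer_gain(flat_pairs):
    """Estimate conversion gain (e-/DN) from a photon transfer curve.

    flat_pairs : list of (img_a, img_b) flat-field frame pairs taken at
                 increasing exposure levels (uniform illumination assumed).
    The frame-difference method removes fixed-pattern noise.
    """
    means, variances = [], []
    for a, b in flat_pairs:
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        means.append(0.5 * (a.mean() + b.mean()))
        variances.append(np.var(a - b) / 2.0)    # temporal noise variance per frame

    # In the shot-noise-limited region, variance = mean / K + read-noise term,
    # so the slope of variance versus mean is 1/K.
    slope, _ = np.polyfit(means, variances, 1)
    return 1.0 / slope                            # conversion gain K in e-/DN
```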
High performance thermal imaging for the 21st century
NASA Astrophysics Data System (ADS)
Clarke, David J.; Knowles, Peter
2003-01-01
In recent years IR detector technology has developed from early short linear arrays. Such devices require high performance signal processing electronics to meet today's thermal imaging requirements for military and para-military applications. This paper describes BAE SYSTEMS Avionics Group's Sensor Integrated Modular Architecture thermal imager, which has been developed alongside the group's Eagle 640×512 arrays to provide high performance imaging capability. The electronics architecture also supports High Definition TV format 2D arrays for future growth capability.
NASA Astrophysics Data System (ADS)
Cheng, Mao-Hsun; Zhao, Chumin; Kanicki, Jerzy
2017-05-01
Current-mode active pixel sensor (C-APS) circuits based on amorphous indium-tin-zinc-oxide thin-film transistors (a-ITZO TFTs) are proposed for indirect X-ray imagers. The proposed C-APS circuits combine a hydrogenated amorphous silicon (a-Si:H) p+-i-n+ photodiode (PD) with a-ITZO TFTs. Source-output (SO) and drain-output (DO) C-APS are investigated and compared. Acceptable signal linearity and high gains are realized for the SO C-APS. APS circuit characteristics including voltage gain, charge gain, signal linearity, charge-to-current conversion gain, and electron-to-voltage conversion gain are evaluated. The impact of a-ITZO TFT threshold voltage shifts on the C-APS is also considered. A layout for a pixel pitch of 50 μm and an associated fabrication process are suggested. Data line loadings for 4k-resolution X-ray imagers are computed and their impact on circuit performance is taken into consideration. Noise analysis is performed, showing a total input-referred noise of 239 e-.
NASA Astrophysics Data System (ADS)
Zarubin, V.; Bychkov, A.; Simonova, V.; Zhigarkov, V.; Karabutov, A.; Cherepetskaya, E.
2018-05-01
In this paper, a technique for reflection mode immersion 2D laser-ultrasound tomography of solid objects with piecewise linear 2D surface profiles is presented. Pulsed laser radiation was used for generation of short ultrasonic probe pulses, providing high spatial resolution. A piezofilm sensor array was used for detection of the waves reflected by the surface and internal inhomogeneities of the object. The original ultrasonic image reconstruction algorithm accounting for refraction of acoustic waves at the liquid-solid interface provided longitudinal resolution better than 100 μm in the polymethyl methacrylate sample object.
Narrow-Band Organic Photodiodes for High-Resolution Imaging.
Han, Moon Gyu; Park, Kyung-Bae; Bulliard, Xavier; Lee, Gae Hwang; Yun, Sungyoung; Leem, Dong-Seok; Heo, Chul-Joon; Yagi, Tadao; Sakurai, Rie; Ro, Takkyun; Lim, Seon-Jeong; Sul, Sangchul; Na, Kyoungwon; Ahn, Jungchak; Jin, Yong Wan; Lee, Sangyoon
2016-10-05
There are growing opportunities and demands for image sensors that produce higher-resolution images, even in low-light conditions. Increasing the light input areas through 3D architecture within the same pixel size can be an effective solution to address this issue. Organic photodiodes (OPDs) that possess wavelength selectivity can allow for advancements in this regard. Here, we report on novel push-pull D-π-A dyes specially designed for Gaussian-shaped, narrow-band absorption and the high photoelectric conversion. These p-type organic dyes work both as a color filter and as a source of photocurrents with linear and fast light responses, high sensitivity, and excellent stability, when combined with C60 to form bulk heterojunctions (BHJs). The effectiveness of the OPD composed of the active color filter was demonstrated by obtaining a full-color image using a camera that contained an organic/Si hybrid complementary metal-oxide-semiconductor (CMOS) color image sensor.
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-10-16
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
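A minimal sketch of the polynomial FPN calibration idea described above is given below: each pixel's monotonic response over a set of uniform-illumination frames is fitted with a low-degree polynomial that maps it onto a common reference response, and correction then needs only arithmetic. The frame-mean reference, the calibration interface and the default degree are assumptions for illustration; the paper's fixed-point implementation and photometric spline are not reproduced here.

```python
import numpy as np

def calibrate_fpn(stack, degree=1):
    """Per-pixel low-degree polynomial mapping each pixel's response onto the
    frame-mean response, using uniform-illumination frames at several levels.

    stack: array of shape (n_frames, rows, cols).
    Returns coefficients, highest degree first, shape (degree+1, rows, cols).
    """
    n_frames, rows, cols = stack.shape
    target = stack.reshape(n_frames, -1).mean(axis=1)   # reference response per frame
    pixels = stack.reshape(n_frames, -1)
    coeffs = np.empty((degree + 1, rows * cols))
    for p in range(rows * cols):                        # independent least-squares fits
        coeffs[:, p] = np.polyfit(pixels[:, p], target, degree)
    return coeffs.reshape(degree + 1, rows, cols)

def correct_fpn(frame, coeffs):
    """Evaluate the per-pixel polynomial (arithmetic only) to suppress FPN."""
    corrected = np.zeros_like(frame, dtype=float)
    for c in coeffs:                                    # Horner's scheme
        corrected = corrected * frame + c
    return corrected
```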
Chen, Chi-Jim; Pai, Tun-Wen; Cheng, Mox
2015-01-01
A sweeping fingerprint sensor acquires fingerprints on a row-by-row basis and rebuilds them through image reconstruction techniques. However, the reconstructed fingerprint image might appear truncated and distorted when the finger is swept across the fingerprint sensor at a non-linear speed. If truncated fingerprint images were enrolled as reference targets and collected by an automated fingerprint identification system (AFIS), successful prediction rates for fingerprint matching applications would decrease significantly. In this paper, a novel and effective methodology with low computational time complexity was developed for detecting truncated fingerprints in real time. Several filtering rules were implemented to validate the existence of truncated fingerprints. In addition, a support vector machine (SVM), a machine learning method based on the principle of structural risk minimization, was applied to reject pseudo-truncated fingerprints that share similar characteristics with truly truncated ones. The experimental results show that an accuracy rate of 90.7% was achieved in identifying truncated fingerprint images from test images before AFIS enrollment procedures. The proposed effective and efficient methodology can be extensively applied to all existing fingerprint matching systems as a preliminary quality control prior to the construction of fingerprint templates. PMID:25835186
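The SVM rejection step might be sketched as below. The four-dimensional feature vectors, their meanings and the RBF kernel settings are purely illustrative assumptions; the paper does not specify this interface.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature vectors extracted from candidate truncated regions
# (e.g., ridge continuity, contour smoothness, area ratio, edge straightness);
# labels: 1 = truly truncated, 0 = pseudo-truncated.
rng = np.random.default_rng(1)
X_train = rng.random((200, 4))
y_train = rng.integers(0, 2, 200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

X_candidates = rng.random((5, 4))
print(clf.predict(X_candidates))   # keep only candidates classified as truly truncated
```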
Interior Temperature Measurement Using Curved Mercury Capillary Sensor Based on X-ray Radiography
NASA Astrophysics Data System (ADS)
Chen, Shuyue; Jiang, Xing; Lu, Guirong
2017-07-01
A method is presented for measuring the interior temperature of objects using a curved mercury capillary sensor based on X-ray radiography. The sensor is composed of a mercury bubble, a capillary and a fixed support. X-ray digital radiography was employed to capture images of the mercury column in the capillary, and a temperature control system was designed for the sensor calibration. We adopted livewire algorithms and mathematical morphology to calculate the mercury length. A measurement model relating mercury length to temperature was established, and the measurement uncertainty associated with the mercury column length and the linear model fitted by the least-squares method was analyzed. To verify the system, interior temperature measurements of an autoclave, which is totally closed, were taken from 29.53°C to 67.34°C. The experimental results show that the response of the system is approximately linear, with a maximum uncertainty of 0.79°C. This technique provides a new approach to measuring the interior temperature of objects.
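A least-squares calibration of the linear measurement model described above could look like the following. The length and temperature values are invented placeholders, not the paper's calibration data.

```python
import numpy as np

# Hypothetical calibration data: mercury column length (mm) extracted from the
# X-ray images at known temperatures (°C) set by the temperature control system.
length_mm = np.array([12.1, 18.4, 24.9, 31.2, 37.6, 44.0])
temp_c    = np.array([30.0, 37.5, 45.0, 52.5, 60.0, 67.5])

# Linear measurement model T = a * L + b fitted by least squares.
a, b = np.polyfit(length_mm, temp_c, 1)
residuals = temp_c - (a * length_mm + b)
print(f"T ~ {a:.3f} * L + {b:.2f}, max residual = {np.max(np.abs(residuals)):.2f} degC")
```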
NASA Astrophysics Data System (ADS)
Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan
2018-04-01
Owing to the limited spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels while ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, the linear subpixel features are pre-determined based on the hyperspectral characteristics, and the remaining mixed pixels containing linear subpixel features are detected by maximum linearization index analysis; the classes of the linear subpixels are determined using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments based on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.
NASA Astrophysics Data System (ADS)
Macedo, Milton P.; Correia, C. M. B. A.
2013-04-01
This work aims at showing the applicability of a scanning-stage bench microscope in bright-field reflection mode for wire-bonding inspection of integrated circuits (IC) as well as quality assurance of tracks in printed circuit boards (PCB). The main features of our laboratory prototype arise from the use of a linear image sensor, whose geometry is exploited to achieve a lower acquisition time than the traditional (pinhole) confocal approach. The use of a slit detector is normally associated with resolution degradation for details parallel to the sensor, but an improvement is obtained by exploiting the light distribution along the line of pixels of the sensor, which is a clear advantage over (pure) slit detectors. The versatility of this bench microscope provides excellent means to develop and test algorithms, both to improve lateral resolution isotropy and for image visualization and 3D mesh reconstruction under different setups, namely illumination modes. Based on the results of these tests, both wide-field illumination and parallel slit illumination-and-detection configurations were used in the two applications. Results from IC wire bonding show the ability of the system to extract 3D information. A comparison of auto-focus images and 3D profiles obtained using different 3D reconstruction algorithms, as well as a method for determining the diameter of the bond wire, are presented. Measurements of PCB track width and thickness were performed, and the comparison of results from longitudinal and transverse tracks highlights the limitations imposed by the lower spatial sampling rate resulting from the resolution of the object-stage positioners.
Karbasi, Salman; Arianpour, Ashkan; Motamedi, Nojan; Mellette, William M; Ford, Joseph E
2015-06-10
Imaging fiber bundles can map the curved image surface formed by some high-performance lenses onto flat focal plane detectors. The relative alignment between the focal plane array pixels and the quasi-periodic fiber-bundle cores can impose an undesirable space variant moiré pattern, but this effect may be greatly reduced by flat-field calibration, provided that the local responsivity is known. Here we demonstrate a stable metric for spatial analysis of the moiré pattern strength, and use it to quantify the effect of relative sensor and fiber-bundle pitch, and that of the Bayer color filter. We measure the thermal dependence of the moiré pattern, and the achievable improvement by flat-field calibration at different operating temperatures. We show that a flat-field calibration image at a desired operating temperature can be generated using linear interpolation between white images at several fixed temperatures, comparing the final image quality with an experimentally acquired image at the same temperature.
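The temperature-interpolated flat-field calibration described above can be sketched as follows; the function names, the clipping at the ends of the calibration range and the mean-normalized gain definition are assumptions for illustration.

```python
import numpy as np

def white_image_at(temp, cal_temps, cal_whites):
    """Linearly interpolate a flat-field (white) image at an arbitrary operating
    temperature from white images acquired at a few fixed temperatures."""
    cal_temps = np.asarray(cal_temps)
    idx = np.clip(np.searchsorted(cal_temps, temp), 1, len(cal_temps) - 1)
    t0, t1 = cal_temps[idx - 1], cal_temps[idx]
    w = (temp - t0) / (t1 - t0)
    return (1.0 - w) * cal_whites[idx - 1] + w * cal_whites[idx]

def flat_field_correct(raw, white, eps=1e-6):
    """Divide out the local responsivity (the moire pattern) captured in the white image."""
    gain = white.mean() / np.maximum(white, eps)
    return raw * gain
```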
NASA Astrophysics Data System (ADS)
Rahmes, Mark; Fagan, Dean; Lemieux, George
2017-03-01
The capability of a software algorithm to automatically align same-patient dental bitewing and panoramic x-rays over time is complicated by differences in collection perspectives. We successfully used image correlation with an affine transform for each pixel to discover common image borders, followed by a non-linear homography perspective adjustment to closely align the images. However, significant improvements in image registration could be realized if images were collected from the same perspective, thus facilitating change analysis. The perspective differences due to current dental image collection devices are so significant that straightforward change analysis is not possible. To address this, a new custom dental tray could be used to provide the standard reference needed for consistent positioning of a patient's mouth. Similar to sports mouth guards, the dental tray could be fabricated in standard sizes from plastic and use integrated electronics that have been miniaturized. In addition, the x-ray source needs to be consistently positioned in order to collect images with similar angles and scales. Solving this pose correction is similar to solving for collection angle in aerial imagery for change detection. A standard collection system would provide a method for consistent source positioning using real-time sensor position feedback from a digital x-ray image reference. Automated, robotic sensor positioning could replace manual adjustments. Given an image set from a standard collection, a disparity map between images can be created using parallax from overlapping viewpoints to enable change detection. This perspective data can be rectified and used to create a three-dimensional dental model reconstruction.
Automatic parquet block sorting using real-time spectral classification
NASA Astrophysics Data System (ADS)
Astrom, Anders; Astrand, Erik; Johansson, Magnus
1999-03-01
This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph which extracts the spectral information from a scene and projects the information on an image sensor, which is a method often referred to as Imaging Spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method have used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method, which exploits analogue features on the smart image sensor. The method reduces the enormous amount of data to be processed at an early stage, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards showing very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay and red decay. In addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the pixel defect percentage. The patented method makes it possible to run at very high speeds with a high spectral discrimination ability. Using a powerful illuminator, the system can run with a line frequency exceeding 2000 line/s. This opens up the possibility to maintain high production speed and still measure with good resolution.
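As a rough, purely digital sketch of line-wise spectral classification with linear class models (the actual system performs the comparison in the analogue domain on the smart sensor), the following assumes one mean spectrum per defect class plus a reference class for correct surface colour.

```python
import numpy as np

def classify_spectra(line_spectra, class_means):
    """Assign each pixel along the scanned line to the nearest class model
    (minimum Euclidean distance to the class mean spectra)."""
    # line_spectra: (n_pixels, n_bands); class_means: (n_classes, n_bands)
    d = np.linalg.norm(line_spectra[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(d, axis=1)            # class index per pixel

def defect_percentage(labels, reference_class=0):
    """Per-block statistical measure: fraction of pixels not matching the
    correct-surface-colour reference class."""
    return 100.0 * np.mean(labels != reference_class)
```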
NASA Technical Reports Server (NTRS)
Generazio, Ed
2017-01-01
The technology and methods for remote quantitative imaging of electrostatic potentials and electrostatic fields in and around objects and in free space are presented. Electric field imaging (EFI) technology may be applied to characterize intrinsic or existing electric potentials and electric fields, or an externally generated electrostatic field may be used for illuminating volumes to be inspected with EFI. The baseline sensor technology (e-Sensor) and its construction, optional electric field generation (quasi-static generator), and current e-Sensor enhancements (ephemeral e-Sensor) are discussed. Critical design elements of current linear and real-time two-dimensional (2D) measurement systems are highlighted, and the development of a three-dimensional (3D) EFI system is presented. Demonstrations for structural, electronic, human, and memory applications are shown. Recent work demonstrates that phonons may be used to create and annihilate electric dipoles within structures. Phonon-induced dipoles are ephemeral, and their polarization, strength, and location may be quantitatively characterized by EFI, providing a new subsurface Phonon-EFI imaging technology. Initial results from real-time imaging of combustion and ion flow, and their measurement complications, will be discussed. These new EFI capabilities are demonstrated to characterize electric charge distributions, creating a new field of study embracing areas of interest including electrostatic discharge (ESD) mitigation, crime scene forensics, design and materials selection for advanced sensors, combustion science, on-orbit space potential, container inspection, remote characterization of electronic circuits and level of activation, dielectric morphology of structures, tether integrity, organic molecular memory, atmospheric science, and medical diagnostic and treatment efficacy applications such as cardiac polarization wave propagation and electromyography imaging.
Radiometry simulation within the end-to-end simulation tool SENSOR
NASA Astrophysics Data System (ADS)
Wiest, Lorenz; Boerner, Anko
2001-02-01
An end-to-end simulation is a valuable tool for sensor system design, development, optimization, testing, and calibration. This contribution describes the radiometry module of the end-to-end simulation tool SENSOR. It features MODTRAN 4.0-based look-up tables in conjunction with a cache-based multilinear interpolation algorithm to speed up radiometry calculations. It employs a linear reflectance parameterization to reduce look-up table size, considers effects due to the topology of a digital elevation model (surface slope, sky view factor) and uses a reflectance class feature map to assign Lambertian and BRDF reflectance properties to the digital elevation model. The overall consistency of the radiometry part is demonstrated by good agreement between ATCOR 4-retrieved reflectance spectra of a simulated digital image cube and the original reflectance spectra used to simulate this image data cube.
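A minimal sketch of look-up-table radiometry with multilinear interpolation is shown below. The table axes (visibility, water vapour, reflectance), their grid values and the random radiances are invented placeholders; SENSOR's MODTRAN-based tables are higher-dimensional and cached, which is not reproduced here.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical 3-D look-up table of at-sensor radiance indexed by
# (visibility [km], water vapour [g/cm^2], surface reflectance).
vis = np.array([5.0, 10.0, 23.0, 40.0])
wv  = np.array([0.5, 1.0, 2.0, 3.0])
rho = np.array([0.0, 0.25, 0.5, 1.0])               # linear reflectance parameterization
lut = np.random.default_rng(2).random((len(vis), len(wv), len(rho)))  # placeholder radiances

interp = RegularGridInterpolator((vis, wv, rho), lut, method="linear")
radiance = interp([[15.0, 1.3, 0.31]])              # multilinear interpolation at a query point
print(radiance)
```

Because the at-sensor radiance is, to good approximation, linear in surface reflectance, a coarse reflectance axis suffices, which is the point of the linear reflectance parameterization mentioned above.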
LQG control of a deformable mirror adaptive optics system with time-delayed measurements
NASA Astrophysics Data System (ADS)
Anderson, David J.
1991-12-01
This thesis proposes a linear quadratic Gaussian (LQG) control law for a ground-based deformable mirror adaptive optics system. The incoming image wavefront is distorted, primarily in phase, due to the turbulent effects of the earth's atmosphere. The adaptive optics system attempts to compensate for the distortion with a deformable mirror. A Hartmann wavefront sensor measures the degree of distortion in the image wavefront. The measurements are input to a Kalman filter which estimates the system states. The state estimates are processed by a linear quadratic regulator which generates the appropriate control voltages to apply to the deformable mirror actuators. The dynamics model for the atmospheric phase distortion consists of 14 Zernike coefficient states, each modeled as a first-order linear time-invariant shaping filter driven by zero-mean white Gaussian noise. The dynamics of the deformable mirror are also modeled as 14 Zernike coefficients with first-order deterministic dynamics. A significant reduction in total wavefront phase distortion is achieved in the presence of time-delayed measurements. The wavefront sensor sampling rate is the major factor limiting system performance. The Multimode Simulation for Optimal Filter Evaluation (MSOFE) software is the performance evaluation tool of choice for this research.
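A minimal discrete Kalman filter over first-order Zernike-coefficient states, in the spirit of the estimator described above, is sketched below. The correlation time, noise covariances, sample period and the identity measurement model are illustrative assumptions; the thesis' LQG regulator and its handling of the measurement delay are not reproduced.

```python
import numpy as np

# Assumed parameters: 14 Zernike states, 1 kHz sampling, 5 ms correlation time.
n_z, dt, tau = 14, 1e-3, 5e-3
A = np.eye(n_z) * np.exp(-dt / tau)              # first-order shaping-filter dynamics
Q = np.eye(n_z) * (1 - np.exp(-2 * dt / tau))    # process noise (unit steady-state variance)
H = np.eye(n_z)                                  # assume sensor reconstructs Zernike modes
R = np.eye(n_z) * 0.05                           # measurement noise covariance

def kalman_step(x_hat, P, z):
    """One propagate/update cycle given a (possibly delayed) wavefront measurement z."""
    x_pred = A @ x_hat
    P_pred = A @ P @ A.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(n_z) - K @ H) @ P_pred
    return x_new, P_new

x_hat, P = np.zeros(n_z), np.eye(n_z)            # initial estimate and covariance
```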
Putting a finishing touch on GECIs.
Rose, Tobias; Goltstein, Pieter M; Portugues, Ruben; Griesbeck, Oliver
2014-01-01
More than a decade ago genetically encoded calcium indicators (GECIs) entered the stage as new promising tools to image calcium dynamics and neuronal activity in living tissues and designated cell types in vivo. From a variety of initial designs, two have emerged as promising prototypes for further optimization: FRET (Förster Resonance Energy Transfer)-based sensors and single fluorophore sensors of the GCaMP family. Recent efforts in structural analysis, engineering and screening have broken important performance thresholds in the latest generation of both classes. While these improvements have made GECIs a powerful means to perform physiology in living animals, a number of other aspects of sensor function deserve attention. These aspects include indicator linearity, toxicity and slow response kinetics. Furthermore, creating high performance sensors with optically more favorable emission in red or infrared wavelengths, as well as new stably or conditionally GECI-expressing animal lines, remains on the wish list. When the remaining issues are solved, imaging of GECIs will finally have crossed the last milestone, evolving from an initial promise into a fully matured technology.
The fast and accurate 3D-face scanning technology based on laser triangle sensors
NASA Astrophysics Data System (ADS)
Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Chen, Yang; Kong, Bin
2013-08-01
A laser triangulation scanning method and the structure of a 3D face measurement system are introduced. In the presented system, a line laser source was selected as the optical indicator in order to scan one line at a time. A CCD image sensor was used to capture the image of the laser line modulated by the human face. The system parameters were obtained by calibration: the lens parameters of the imaging part were calibrated with a machine vision method, and the triangulation structure parameters were calibrated with finely arranged parallel wires. The CCD imaging part and the line laser indicator were mounted on a linear motor carriage, which scans the laser line from the top of the head to the neck. Because the nose protrudes and the eyes are recessed, one CCD image sensor cannot capture the complete image of the laser line. In this system, two CCD image sensors were placed symmetrically on the two sides of the laser indicator; in effect, this structure includes two laser triangulation measurement units. As another novel design choice, three laser indicators were arranged in order to reduce the scanning time, since it is difficult for a person to remain still for a long time. The 3D data were calculated after scanning, and further data processing includes 3D coordinate refinement, mesh calculation and surface rendering. Experiments show that this system has a simple structure, high scanning speed and good accuracy. The scanning range covers the whole head of an adult, and the typical resolution is 0.5 mm.
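The core triangulation step, recovering 3D points by intersecting back-projected camera rays with the calibrated laser plane, might be sketched as follows. The pinhole/intrinsic-matrix formulation and the plane parameterization are generic assumptions, not the paper's exact calibration model.

```python
import numpy as np

def triangulate_line_points(pixels, K, plane_n, plane_d):
    """Intersect camera rays through laser-line pixels with the calibrated laser plane.

    pixels  : (N, 2) image coordinates (u, v) of the detected laser line
    K       : 3x3 camera intrinsic matrix (from lens calibration)
    plane_n, plane_d : laser plane n . X = d in camera coordinates (from calibration)
    Returns (N, 3) points in camera coordinates.
    """
    uv1 = np.column_stack([pixels, np.ones(len(pixels))])
    rays = (np.linalg.inv(K) @ uv1.T).T          # back-projected ray directions
    t = plane_d / (rays @ plane_n)               # ray-plane intersection parameter
    return rays * t[:, None]
```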
Use of anomalous thermal imaging effects for multi-mode systems control during crystal growth
NASA Technical Reports Server (NTRS)
Wargo, Michael J.
1989-01-01
Real time image processing techniques, combined with multitasking computational capabilities are used to establish thermal imaging as a multimode sensor for systems control during crystal growth. Whereas certain regions of the high temperature scene are presently unusable for quantitative determination of temperature, the anomalous information thus obtained is found to serve as a potentially low noise source of other important systems control output. Using this approach, the light emission/reflection characteristics of the crystal, meniscus and melt system are used to infer the crystal diameter and a linear regression algorithm is employed to determine the local diameter trend. This data is utilized as input for closed loop control of crystal shape. No performance penalty in thermal imaging speed is paid for this added functionality. Approach to secondary (diameter) sensor design and systems control structure is discussed. Preliminary experimental results are presented.
Performance Evaluation of a Biometric System Based on Acoustic Images
Izquierdo-Fuente, Alberto; del Val, Lara; Jiménez, María I.; Villacorta, Juan J.
2011-01-01
An acoustic electronic scanning array for acquiring images from a person using a biometric application is developed. Based on pulse-echo techniques, multifrequency acoustic images are obtained for a set of positions of a person (front, front with arms outstretched, back and side). Two Uniform Linear Arrays (ULA) with 15 λ/2-equispaced sensors have been employed, using different spatial apertures in order to reduce sidelobe levels. Working frequencies have been designed on the basis of the main lobe width, the grating lobe levels and the frequency responses of people and sensors. For a case-study with 10 people, the acoustic profiles, formed by all images acquired, are evaluated and compared in a mean square error sense. Finally, system performance, using False Match Rate (FMR)/False Non-Match Rate (FNMR) parameters and the Receiver Operating Characteristic (ROC) curve, is evaluated. On the basis of the obtained results, this system could be used for biometric applications. PMID:22163708
Nonlinear time dependence of dark current in charge-coupled devices
NASA Astrophysics Data System (ADS)
Dunlap, Justin C.; Bodegom, Erik; Widenhorn, Ralf
2011-03-01
It is generally assumed that charge-coupled device (CCD) imagers produce a linear response of dark current versus exposure time except near saturation. We found a large number of pixels with a nonlinear dark current response to exposure time in two scientific CCD imagers. These pixels exhibit behavior distinguishable from that of other, analogous pixels and can therefore be characterized in groups. Data from two Kodak CCD sensors are presented for exposure times from a few seconds up to two hours. Linear behavior is traditionally taken for granted when carrying out dark current correction; as a result, pixels with nonlinear behavior will be corrected inaccurately.
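One simple way to screen for such pixels is to fit each pixel's dark signal against exposure time with a quadratic and flag pixels whose quadratic term contributes noticeably at the longest exposure. The threshold criterion below is an illustrative assumption, not the characterization used in the paper.

```python
import numpy as np

def flag_nonlinear_dark_pixels(exposure_s, dark_stack, threshold=0.1):
    """Fit per-pixel dark signal D(t) ~ c2*t^2 + c1*t + c0 and flag pixels whose
    quadratic contribution at the longest exposure exceeds `threshold` times the
    linear contribution (an assumed, illustrative criterion).

    exposure_s : (n_frames,) exposure times; dark_stack : (n_frames, rows, cols).
    """
    n_frames, rows, cols = dark_stack.shape
    y = dark_stack.reshape(n_frames, -1).astype(float)
    c2, c1, c0 = np.polyfit(exposure_s, y, 2)          # per-pixel coefficients
    t_max = exposure_s.max()
    nonlinear = np.abs(c2) * t_max**2 > threshold * np.abs(c1) * t_max
    return nonlinear.reshape(rows, cols)
```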
Calibration Methods for a 3D Triangulation Based Camera
NASA Astrophysics Data System (ADS)
Schulz, Ulrike; Böhnke, Kay
A sensor in a camera acquires a gray-level image (1536 × 512 pixels) of light reflected by a reference body, which is illuminated by a linear laser line. This gray-level image can be used for a 3D calibration. The following paper describes how a calibration program calculates the calibration factors. The calibration factors serve to determine the size of an unknown reference body.
Reduction of Radiometric Miscalibration—Applications to Pushbroom Sensors
Rogaß, Christian; Spengler, Daniel; Bochow, Mathias; Segl, Karl; Lausch, Angela; Doktor, Daniel; Roessner, Sigrid; Behling, Robert; Wetzel, Hans-Ulrich; Kaufmann, Hermann
2011-01-01
The analysis of hyperspectral images is an important task in Remote Sensing. Foregoing radiometric calibration results in the assignment of incident electromagnetic radiation to digital numbers and reduces the striping caused by slightly different responses of the pixel detectors. However, due to uncertainties in the calibration some striping remains. This publication presents a new reduction framework that efficiently reduces linear and nonlinear miscalibrations by an image-driven, radiometric recalibration and rescaling. The proposed framework—Reduction Of Miscalibration Effects (ROME)—considering spectral and spatial probability distributions, is constrained by specific minimisation and maximisation principles and incorporates image processing techniques such as Minkowski metrics and convolution. To objectively evaluate the performance of the new approach, the technique was applied to a variety of commonly used image examples and to one simulated and miscalibrated EnMAP (Environmental Mapping and Analysis Program) scene. Other examples consist of miscalibrated AISA/Eagle VNIR (Visible and Near Infrared) and Hawk SWIR (Short Wave Infrared) scenes of rural areas of the region Fichtwald in Germany and Hyperion scenes of the Jalal-Abad district in Southern Kyrgyzstan. Recovery rates of approximately 97% for linear and approximately 94% for nonlinear miscalibrated data were achieved, clearly demonstrating the benefits of the new approach and its potential for broad applicability to miscalibrated pushbroom sensor data. PMID:22163960
Processing-optimised imaging of analog geological models by electrical capacitance tomography
NASA Astrophysics Data System (ADS)
Ortiz Alemán, C.; Espíndola-Carmona, A.; Hernández-Gómez, J. J.; Orozco Del Castillo, MG
2017-06-01
In this work, the electrical capacitance tomography (ECT) technique is applied to monitoring the internal deformation of geological analog models, which are used to study structural deformation mechanisms, in particular for simulating the migration and emplacement of allochthonous salt bodies. A rectangular ECT sensor was used for internal visualization of analog geologic deformation. The monitoring of analog models consists of the reconstruction of permittivity images from the capacitance measurements obtained by introducing the model inside the ECT sensor. A simulated annealing (SA) algorithm is used as the reconstruction method, and is optimized by taking full advantage of some special features in a linearized version of this inverse approach. In the second part of this work, our SA image reconstruction algorithm is applied to synthetic models, where its performance is evaluated in comparison to other commonly used algorithms such as linear back-projection and iterative Landweber methods. Finally, the SA method is applied to visualise two simple geological analog models. Encouraging results were obtained in terms of the quality of the reconstructed images, as the interfaces corresponding to the main geological units in the analog model were clearly distinguishable. We found these results quite useful for real-time, non-invasive monitoring of the internal deformation of analog geological models.
3-D readout-electronics packaging for high-bandwidth massively paralleled imager
Kwiatkowski, Kris; Lyke, James
2007-12-18
Dense, massively parallel signal processing electronics are co-packaged behind associated sensor pixels. Microchips containing a linear or bilinear arrangement of photo-sensors, together with associated complex electronics, are integrated into a simple 3-D structure (a "mirror cube"). An array of photo-sensitive cells is disposed on a stacked CMOS chip's surface at a 45 degree angle from light-reflecting mirror surfaces formed on a neighboring CMOS chip surface. Image processing electronics are held within the stacked CMOS chip layers. Electrical connections couple each of said stacked CMOS chip layers and a distribution grid, the connections distributing power and signals to components associated with each stacked CMOS chip layer.
Image sensor with high dynamic range linear output
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly (Inventor); Fossum, Eric R. (Inventor)
2007-01-01
Designs and operational methods to increase the dynamic range of image sensors, and APS devices in particular, by achieving more than one integration time for each pixel. An APS system with more than one column-parallel signal chain for readout is described for maintaining a high frame rate during readout. Each active pixel is sampled multiple times during a single frame readout, thus resulting in multiple integration times. The operation methods can also be used to obtain multiple integration times for each pixel with an APS design having a single column-parallel signal chain for readout. Furthermore, analog-to-digital conversion of high speed and high resolution can be implemented.
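Combining multiple integration times into one linear, high-dynamic-range value can be sketched as below: for each pixel, keep the longest unsaturated integration and rescale it to a common reference time. The saturation threshold and 12-bit full scale are illustrative assumptions, not parameters from the patent.

```python
import numpy as np

def combine_exposures(samples, t_int, full_scale=4095):
    """Merge per-pixel samples from several integration times into one linear value.

    samples : (n_times, rows, cols) raw values, t_int : (n_times,) seconds,
    both ordered from longest to shortest integration.
    """
    t_ref = t_int[0]
    hdr = np.full(samples.shape[1:], np.nan)
    for frame, t in zip(samples, t_int):               # longest integration first
        usable = np.isnan(hdr) & (frame < 0.95 * full_scale)
        hdr[usable] = frame[usable] * (t_ref / t)      # rescale to the reference time
    return hdr                                         # NaN = saturated at every time
```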
Digital Simulation Of Precise Sensor Degradations Including Non-Linearities And Shift Variance
NASA Astrophysics Data System (ADS)
Kornfeld, Gertrude H.
1987-09-01
Realistic atmospheric and Forward Looking Infrared Radiometer (FLIR) degradations were digitally simulated. Inputs to the routine are environmental observables and the FLIR specifications. It was possible to achieve realism in the thermal domain within acceptable computer time and random access memory (RAM) requirements because a shift variant recursive convolution algorithm that well describes thermal properties was invented and because each picture element (pixel) has radiative temperature, a materials parameter and range and altitude information. The computer generation steps start with the image synthesis of an undegraded scene. Atmospheric and sensor degradation follow. The final result is a realistic representation of an image seen on the display of a specific FLIR.
Adaptive fusion of infrared and visible images in dynamic scene
NASA Astrophysics Data System (ADS)
Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi
2011-11-01
Multiple-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques including PCA, wavelet, curvelet and HSV have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast and adaptive feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. At first, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are considered as potential target locations. After that, the region surrounding the target area is segmented as the background region. Then image fusion is locally applied to the selected target and background regions by computing different linear combinations of color components from registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. The variance ratio, which is based on Linear Discriminative Analysis (LDA), is employed to sort the feature set, and the most discriminative one is selected for the whole image fusion. As the feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments are conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that our proposed method achieved competitive performance compared with other fusion algorithms at a relatively low computational cost.
Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach
NASA Astrophysics Data System (ADS)
Jazaeri, Amin
High spectral and spatial resolution images have a significant impact in remote sensing applications. Because both spatial and spectral resolutions of spaceborne sensors are fixed by design and it is not possible to further increase the spatial or spectral resolution, techniques such as image fusion must be applied to achieve such goals. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (hyperspectral sensor) and Advanced Land Imager (multispectral sensor) were first co-registered and then fused using different wavelet algorithms. A regression-based fusion algorithm was also implemented for comparison purposes. The results show that the fused images using a combined bi-linear wavelet-regression algorithm have less error than other methods when compared to the ground truth. In addition, a combined regression-wavelet algorithm shows more immunity to misalignment of the pixels due to the lack of proper registration. The quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of hyperspectral images becomes eight times less than its corresponding multispectral image. Regardless of what method of fusion is utilized, the main challenge in image fusion is image registration, which is also a very time intensive process. Because the combined regression wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease computational overhead. However, the gain in faster computation was offset by the introduction of more error in the outcome. The secondary objective of this dissertation is to examine the feasibility and sensor requirements for image fusion for future NASA missions in order to be able to perform onboard image fusion. In this process, the main challenge of image registration was resolved by registering the input images using transformation matrices of previously acquired data. The composite image resulted from the fusion process remarkably matched the ground truth, indicating the possibility of real time onboard fusion processing.
Hybrid imaging: a quantum leap in scientific imaging
NASA Astrophysics Data System (ADS)
Atlas, Gene; Wadsworth, Mark V.
2004-01-01
ImagerLabs has advanced its patented next generation imaging technology called the Hybrid Imaging Technology (HIT) that offers scientific quality performance. The key to the HIT is the merging of the CCD and CMOS technologies through hybridization rather than process integration. HIT offers exceptional QE, fill factor, broad spectral response and very low noise properties of the CCD. In addition, it provides the very high-speed readout, low power, high linearity and high integration capability of CMOS sensors. In this work, we present the benefits, and update the latest advances in the performance of this exciting technology.
Development of real-time extensometer based on image processing
NASA Astrophysics Data System (ADS)
Adinanta, H.; Puranto, P.; Suryadi
2017-04-01
An extensometer system was developed using a high-definition web camera as the main sensor to track object position. The developed system applies digital image processing techniques to measure the change in object position. The position measurement is done in real time so that the system can directly show the actual position along both the x- and y-axes. In this research, the relation between pixel and object position changes was characterized. The system was tested by moving the target over a range of 20 cm in intervals of 1 mm. To verify the long-run performance, i.e. the stability and linearity of continuous measurements on both the x- and y-axes, measurements were conducted for 83 hours. The results show that this image-processing-based extensometer has both good stability and good linearity.
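A minimal position-tracking step for such a camera-based extensometer is sketched below using normalized cross-correlation template matching in OpenCV; the template, the per-axis millimetre-per-pixel factors and the grayscale interface are assumptions for illustration, not details given in the abstract.

```python
import cv2

def track_target(frame_gray, template, mm_per_px_x, mm_per_px_y):
    """Locate the target template in a webcam frame by normalized cross-correlation
    and convert the pixel position to millimetres using calibration factors."""
    result = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (px, py) = cv2.minMaxLoc(result)          # best-match pixel location
    return px * mm_per_px_x, py * mm_per_px_y

# The mm-per-pixel factors would come from a characterization step like the one
# described above: moving the target in known 1 mm steps and recording pixels.
```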
NASA Technical Reports Server (NTRS)
Sarrafzadeh-Khoee, Adel K. (Inventor)
2000-01-01
The invention provides a method of triple-beam and triple-sensor in a laser speckle strain/deformation measurement system. The triple-beam/triple-camera configuration combined with sequential timing of laser beam shutters is capable of providing indications of surface strain and structure deformations. The strain and deformation quantities, the four variables of surface strain, in-plane displacement, out-of-plane displacement and tilt, are determined in closed form solutions.
Sánchez-Durán, José A; Hidalgo-López, José A; Castellanos-Ramos, Julián; Oballe-Peinado, Óscar; Vidal-Verdú, Fernando
2015-08-19
Tactile sensors suffer from many types of interference and errors like crosstalk, non-linearity, drift or hysteresis, therefore calibration should be carried out to compensate for these deviations. However, this procedure is difficult in sensors mounted on artificial hands for robots or prosthetics for instance, where the sensor usually bends to cover a curved surface. Moreover, the calibration procedure should be repeated often because the correction parameters are easily altered by time and surrounding conditions. Furthermore, this intensive and complex calibration could be less determinant, or at least simpler. This is because manipulation algorithms do not commonly use the whole data set from the tactile image, but only a few parameters such as the moments of the tactile image. These parameters could be changed less by common errors and interferences, or at least their variations could be in the order of those caused by accepted limitations, like reduced spatial resolution. This paper shows results from experiments to support this idea. The experiments are carried out with a high performance commercial sensor as well as with a low-cost error-prone sensor built with a common procedure in robotics.
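The tactile-image moments mentioned above, the kind of compact parameters manipulation algorithms typically consume, can be computed as in the sketch below; treating the raw tactile readings directly as a pressure image is an assumption for illustration.

```python
import numpy as np

def tactile_moments(tactile_image):
    """Zeroth-, first- and second-order moments of a tactile image: a total-force
    proxy, the centre of pressure, and the shape/orientation of the contact."""
    img = np.asarray(tactile_image, dtype=float)
    rows, cols = np.indices(img.shape)
    m00 = img.sum()
    cx = (cols * img).sum() / m00
    cy = (rows * img).sum() / m00
    mu20 = ((cols - cx) ** 2 * img).sum() / m00
    mu02 = ((rows - cy) ** 2 * img).sum() / m00
    mu11 = ((cols - cx) * (rows - cy) * img).sum() / m00
    return m00, (cx, cy), (mu20, mu02, mu11)
```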
Multiparameter Estimation in Networked Quantum Sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.
We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.
Film cameras or digital sensors? The challenge ahead for aerial imaging
Light, D.L.
1996-01-01
Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at an 11 μm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid state charge coupled device linear and area arrays can yield quality resolution (7 to 12 μm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems shows that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system of the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
High dynamic range pixel architecture for advanced diagnostic medical x-ray imaging applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Izadi, Mohammad Hadi; Karim, Karim S.
2006-05-15
The most widely used architecture in large-area amorphous silicon (a-Si) flat panel imagers is the passive pixel sensor (PPS), which consists of a detector and a readout switch. While the PPS has the advantage of being compact and amenable to high-resolution imaging, small PPS output signals are swamped by external column charge amplifier and data line thermal noise, which reduce the minimum readable sensor input signal. In contrast to PPS circuits, on-pixel amplifiers in a-Si technology reduce readout noise to levels that can meet even the stringent requirements for low-noise digital x-ray fluoroscopy (<1000 noise electrons). However, larger voltages at the pixel input cause the output of the amplified pixel to become nonlinear, thus reducing the dynamic range. We reported a hybrid amplified pixel architecture based on a combination of PPS and amplified pixel designs that, in addition to low noise performance, also resulted in large-signal linearity and consequently higher dynamic range [K. S. Karim et al., Proc. SPIE 5368, 657 (2004)]. The additional benefit in large-signal linearity, however, came at the cost of an additional pixel transistor. We present an amplified pixel design that achieves the goals of low noise performance and large-signal linearity without the need for an additional pixel transistor. Theoretical calculations and simulation results for noise indicate the applicability of the amplified a-Si pixel architecture for high dynamic range, medical x-ray imaging applications that require switching between low-exposure, real-time fluoroscopy and high-exposure radiography.
NASA Astrophysics Data System (ADS)
Ye, Jiamin; Wang, Haigang; Yang, Wuqiang
2016-07-01
Electrical capacitance tomography (ECT) is based on capacitance measurements from electrode pairs mounted outside of a pipe or vessel. The structure of ECT sensors is vital to image quality. In this paper, issues with the number of electrodes and the electrode covering ratio for complex liquid-solids flows in a rotating device are investigated based on a new coupling simulation model. The number of electrodes is increased from 4 to 32 while the electrode covering ratio is changed from 0.1 to 0.9. Using the coupling simulation method, real permittivity distributions and the corresponding capacitance data at 0, 0.5, 1, 2, 3, 5, and 8 s with a rotation speed of 96 rotations per minute (rpm) are collected. Linear back projection (LBP) and Landweber iteration algorithms are used for image reconstruction. The quality of reconstructed images is evaluated by correlation coefficient compared with the real permittivity distributions obtained from the coupling simulation. The sensitivity for each sensor is analyzed and compared with the correlation coefficient. The capacitance data with a range of signal-to-noise ratios (SNRs) of 45, 50, 55 and 60 dB are generated to evaluate the effect of data noise on the performance of ECT sensors. Furthermore, the SNRs of experimental data are analyzed for a stationary pipe with permittivity distribution. Based on the coupling simulation, 16-electrode ECT sensors are recommended to achieve good image quality.
Shortwave infrared 512 x 2 line sensor for earth resources applications
NASA Astrophysics Data System (ADS)
Tower, J. R.; Pellon, L. E.; McCarthy, B. M.; Elabd, H.; Moldovan, A. G.; Kosonocky, W. F.; Kalshoven, J. E., Jr.; Tom, D.
1985-08-01
As part of the NASA remote-sensing Multispectral Linear Array Program, an edge-buttable 512 × 2 IRCCD line image sensor with 30-micron Pd2Si Schottky-barrier detectors is developed for operation with passive cooling at 120 K in the 1.1-2.5 micron short-wave infrared band. On-chip CCD multiplexers provide one video output for each 512-detector band. The monolithic silicon line imager performance at a 4-ms optical integration time includes a signal-to-noise ratio of 241 for an irradiance of 7.2 microwatts/sq cm at 1.65 microns wavelength, a dynamic range of 5000, a modulation transfer function greater than 60 percent at the Nyquist frequency, and an 18-milliwatt total imager chip power dissipation. Blemish-free images with three percent nonuniformity under illumination and a nonlinearity of 1.25 percent are obtained. A hybrid focal plane with five SWIR imagers was constructed, demonstrating the feasibility of arrays with only a two-detector loss at each joint.
Meyer, D.; Chander, G.
2006-01-01
Increasingly, data from multiple sensors are used to gain a more complete understanding of land surface processes at a variety of scales. Although higher-level products (e.g., vegetation cover, albedo, surface temperature) derived from different sensors can be validated independently, the degree to which these sensors and their products can be compared to one another is vastly improved if their relative spectroradiometric responses are known. Most often, sensors are directly calibrated to diffuse solar irradiation or vicariously to ground targets. However, space-based targets are not traceable to metrological standards, and vicarious calibrations are expensive and provide a poor sampling of a sensor's full dynamic range. Cross-calibration of two sensors can augment these methods if certain conditions can be met: (1) the spectral responses are similar, (2) the observations are reasonably concurrent (similar atmospheric and solar illumination conditions), (3) errors due to misregistration of inhomogeneous surfaces can be minimized (including scale differences), and (4) the viewing geometry is similar (or some reasonable knowledge of surface bi-directional reflectance distribution functions is available). This study explores the impacts of cross-calibrating sensors when such conditions are met to some degree but not perfectly. In order to constrain the range of conditions, the analysis is limited to sensors for which cross-calibration studies have been conducted (the Enhanced Thematic Mapper Plus (ETM+) on Landsat-7 (L7), and the Advanced Land Imager (ALI) and Hyperion on Earth Observing-1 (EO-1)), including systems having somewhat dissimilar geometry, spatial resolution and spectral response characteristics but that are still part of the so-called "A.M. constellation" (the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Terra platform). Measures for spectral response differences and methods for cross-calibrating such sensors are provided in this study. These instruments are cross-calibrated using the Railroad Valley playa in Nevada. Best-fit linear coefficients (slope and offset) are provided for ALI-to-MODIS and ETM+-to-MODIS cross-calibrations, and root-mean-squared errors (RMSEs) and correlation coefficients are provided to quantify the uncertainty in these relationships. In theory, the linear fits and uncertainties can be used to compare radiance and reflectance products derived from each instrument.
NASA Technical Reports Server (NTRS)
Tilton, James C.; Wolfe, Robert E.; Lin, Guoqing
2017-01-01
The visible infrared imaging radiometer suite (VIIRS) instrument was launched 28 October 2011 onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite. The VIIRS instrument is a whiskbroom system with 22 spectral and thermal bands split between 16 moderate resolution bands (M-bands), five imagery resolution bands (I-bands) and a day-night band. In this study we estimate the along-scan line spread function (LSF) of the I-bands and M-bands based on measurements performed on images of the Lake Pontchartrain Causeway Bridge. In doing so we develop a model for the LSF that closely matches the prelaunch laboratory measurements. We utilize VIIRS images co-geolocated with a Landsat TM image to precisely locate the bridge linear feature in the VIIRS images as a linear best fit to a straight line. We then utilize non-linear optimization to compute the best fit equation of the VIIRS image measurements in the vicinity of the bridge to the developed model equation. From the found parameterization of the model equation we derive the full-width at half-maximum (FWHM) as an approximation of the sensor field of view (FOV) for all bands, and compare these on-orbit measured values with prelaunch laboratory results.
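The fit-then-measure workflow described above, locating a linear feature, fitting an LSF model and deriving its FWHM, can be sketched as below. A Gaussian LSF is assumed purely for illustration; the paper develops its own LSF model matched to the prelaunch laboratory measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_lsf(x, amp, x0, sigma, offset):
    """Assumed Gaussian line-spread-function model (illustrative only)."""
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + offset

def fit_fwhm(scan_positions, bridge_profile):
    """Fit the LSF model to image samples across the bridge and return the FWHM."""
    p0 = [bridge_profile.max(),
          scan_positions[np.argmax(bridge_profile)],
          1.0,
          np.median(bridge_profile)]
    popt, _ = curve_fit(gaussian_lsf, scan_positions, bridge_profile, p0=p0)
    sigma = abs(popt[2])
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma   # FWHM of the fitted LSF
```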
Inferring the most probable maps of underground utilities using Bayesian mapping model
NASA Astrophysics Data System (ADS)
Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony
2018-03-01
Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing the social, environmental and economic consequences arising from the inability to locate buried underground utilities (such as pipes and cables) by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time with the use of automated data processing techniques and statutory records. The statutory records, even though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, the integration of information from multiple sensors (raw data) with these qualitative maps and their visualization is challenging and requires the implementation of robust machine learning/data fusion approaches. An approach for the automated creation of revised maps was developed as a Bayesian mapping model in this paper by integrating the knowledge extracted from sensor raw data and available statutory records. The combination of statutory records with the hypotheses from sensors was used for an initial estimation of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypothesis extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and various Bayesian classification techniques (segment recognition and the expectation maximization (EM) algorithm), provided robust performance on various simulated as well as real sites in terms of predicting linear/non-linear segments and constructing refined 2D/3D maps.
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.
2000-01-01
A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing, and arrival time of lightning radio emissions. Solutions for the plane (i.e., no earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated datasets, and the relative influence of bearing and arrival time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector and Lightning Imaging Sensor. A quadratic planar solution that is useful when only three arrival time measurements are available is also introduced. The algebra of the quadratic root results is examined in detail to clarify which portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary noncollinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated datasets, and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 deg.
A Low Power Digital Accumulation Technique for Digital-Domain CMOS TDI Image Sensor.
Yu, Changwei; Nie, Kaiming; Xu, Jiangtao; Gao, Jing
2016-09-23
In this paper, an accumulation technique suitable for digital-domain CMOS time delay integration (TDI) image sensors is proposed to reduce power consumption without degrading the imaging rate. Exploiting the slight variations of quantization codes among different pixel exposures toward the same object, the pixel array is divided into two groups: one for coarse quantization of the high bits only, and the other for fine quantization of the low bits. The complete quantization codes are then composed from the coarse and fine quantization results. This equivalent operation reduces the total number of bits required for quantization. In a 0.18 µm CMOS process, two versions of 16-stage digital-domain CMOS TDI image sensor chains based on a 10-bit successive approximation register (SAR) analog-to-digital converter (ADC), with and without the proposed technique, are designed. The simulation results show that the average power consumptions per line of the two versions are 6.47 × 10⁻⁸ J/line and 7.4 × 10⁻⁸ J/line, respectively. Meanwhile, the linearities of the two versions are 99.74% and 99.99%, respectively.
Polarization Imaging Apparatus with Auto-Calibration
NASA Technical Reports Server (NTRS)
Zou, Yingyin Kevin (Inventor); Zhao, Hongzhi (Inventor); Chen, Qiushui (Inventor)
2013-01-01
A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set, a first variable phase retarder (VPR) with its optical axis aligned at 22.5 deg, a second variable phase retarder with its optical axis aligned at 45 deg, a linear polarizer, an imaging sensor for sensing the intensity images of the sample, a controller and a computer. The two variable phase retarders are controlled independently by a computer through a controller unit which generates a sequence of voltages to control the phase retardations of the first and second variable phase retarders. An auto-calibration procedure is incorporated into the polarization imaging apparatus to correct the misalignment of the first and second VPRs, as well as the half-wave voltage of the VPRs. A set of four intensity images, I₀, I₁, I₂ and I₃, of the sample is captured by the imaging sensor when the phase retardations of the VPRs are set at (0,0), (π,0), (π,π) and (π/2,π), respectively. The four Stokes components of a Stokes image, S₀, S₁, S₂ and S₃, are then calculated using the four intensity images.
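A minimal sketch of the final step, recovering Stokes component images from the four intensity images, is given below. The 4×4 instrument matrix is a placeholder assumption; the real matrix depends on the retarder/polarizer geometry and on the auto-calibration results, which the abstract does not specify.

```python
# Hedged sketch: recovering the four Stokes component images from the four
# measured intensity images. The 4x4 instrument matrix A below is a
# placeholder assumption (it depends on the actual retarder/polarizer
# geometry and calibration); the abstract only states that S is computed
# from I0..I3, which a matrix inversion of the measurement model provides.
import numpy as np

def stokes_from_intensities(intensity_images, instrument_matrix):
    """intensity_images: array (4, H, W); returns Stokes images (4, H, W)."""
    a_inv = np.linalg.inv(instrument_matrix)
    flat = intensity_images.reshape(4, -1)          # stack pixels
    stokes = a_inv @ flat                           # S = A^-1 I per pixel
    return stokes.reshape(intensity_images.shape)

# Hypothetical instrument matrix and synthetic 2x2-pixel intensity images.
A = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, -0.5, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.5, 0.0, 0.0, 0.5]])
I = np.random.default_rng(2).uniform(0.0, 1.0, (4, 2, 2))
S = stokes_from_intensities(I, A)
print(S[:, 0, 0])   # Stokes vector of the first pixel
```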
Polarization imaging apparatus with auto-calibration
Zou, Yingyin Kevin; Zhao, Hongzhi; Chen, Qiushui
2013-08-20
A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set, a first variable phase retarder (VPR) with its optical axis aligned at 22.5°, a second variable phase retarder with its optical axis aligned at 45°, a linear polarizer, an imaging sensor for sensing the intensity images of the sample, a controller and a computer. The two variable phase retarders are controlled independently by a computer through a controller unit which generates a sequence of voltages to control the phase retardations of the first and second variable phase retarders. An auto-calibration procedure is incorporated into the polarization imaging apparatus to correct the misalignment of the first and second VPRs, as well as the half-wave voltage of the VPRs. A set of four intensity images, I₀, I₁, I₂ and I₃, of the sample is captured by the imaging sensor when the phase retardations of the VPRs are set at (0,0), (π,0), (π,π) and (π/2,π), respectively. The four Stokes components of a Stokes image, S₀, S₁, S₂ and S₃, are then calculated using the four intensity images.
Determining the 3-D structure and motion of objects using a scanning laser range sensor
NASA Technical Reports Server (NTRS)
Nandhakumar, N.; Smith, Philip W.
1993-01-01
In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single-beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted by the non-zero delay between sampling successive image pixels. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but this leads to the motion-structure paradox: most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.
NASA Astrophysics Data System (ADS)
Xu, Yang; Chen, Xi; Chai, Ran; Xing, Chengfen; Li, Huanrong; Yin, Xue-Bo
2016-07-01
A novel magnetic/fluorometric bimodal sensor was built from carbon dots (CDs) and MnO2. The resulting sensor was sensitive to glutathione (GSH), leading to apparent enhancement of magnetic resonance (MR) and fluorescence signals along with visual changes. The bimodal detection strategy is based on the decomposition of the CDs-MnO2 through a redox reaction between GSH and MnO2. This process causes the transformation from non-MR-active MnO2 to MR-active Mn2+, and is accompanied by fluorescence restoration of CDs. Compared with a range of other CDs, the polyethylenimine (PEI) passivated CDs (denoted as pCDs) were suitable for detection due to their positive surface potential. Cross-validation between MR and fluorescence provided detailed information regarding the MnO2 reduction process, and revealed the three distinct stages of the redox process. Thus, the design of a CD-based sensor for the magnetic/fluorometric bimodal detection of GSH was emphasized for the first time. This platform showed a detection limit of 0.6 μM with a linear range of 1-200 μM in the fluorescence mode, while the MR mode exhibited a linear range of 5-200 μM and a GSH detection limit of 2.8 μM with a visible change being observed rapidly at 1 μM in the MR images. Furthermore, the introduction of the MR mode allowed the biothiols to be easily identified. The integration of CD fluorescence with an MR response was demonstrated to be promising for providing detailed information and discriminating power, and therefore extends the application of CDs in sensing and imaging. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr03129c
An automated mapping satellite system ( Mapsat).
Colvocoresses, A.P.
1982-01-01
The favorable environment of space permits a satellite to orbit the Earth with very high stability as long as no local perturbing forces are involved. Solid-state linear-array sensors have no moving parts and create no perturbing force on the satellite. Digital data from highly stabilized stereo linear arrays are amenable to simplified processing to produce both planimetric imagery and elevation data. A satellite imaging system incorporating this concept, called Mapsat, has been proposed to produce data from which automated mapping in near real time can be accomplished. Image maps at scales as large as 1:50 000, with contours at intervals as close as 20 m, may be produced from Mapsat data. -from Author
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakubek, J.; Cejnarova, A.; Platkevic, M.
Single quantum counting pixel detectors of the Medipix type are starting to be used in various radiographic applications. Compared to standard devices for digital imaging (such as CCDs or CMOS sensors) they present significant advantages: direct conversion of radiation to an electric signal, energy sensitivity, noiseless image integration, unlimited dynamic range, and absolute linearity. In this article we describe the use of the pixel device TimePix for image accumulation gated by a late trigger signal. The technique is demonstrated on imaging coincidence instrumental neutron activation analysis (Imaging CINAA). This method allows one to determine the concentration and distribution of a certain preselected element in an inspected sample.
Microscopic resolution broadband dielectric spectroscopy
NASA Astrophysics Data System (ADS)
Mukherjee, S.; Watson, P.; Prance, R. J.
2011-08-01
Results are presented for a non-contact measurement system capable of micron level spatial resolution. It utilises the novel electric potential sensor (EPS) technology, invented at Sussex, to image the electric field above a simple composite dielectric material. EP sensors may be regarded as analogous to a magnetometer and require no adjustments or offsets during either setup or use. The sample consists of a standard glass/epoxy FR4 circuit board, with linear defects machined into the surface by a PCB milling machine. The sample is excited with an a.c. signal over a range of frequencies from 10 kHz to 10 MHz, from the reverse side, by placing it on a conducting sheet connected to the source. The single sensor is raster scanned over the surface at a constant working distance, consistent with the spatial resolution, in order to build up an image of the electric field, with respect to the reference potential. The results demonstrate that both the surface defects and the internal dielectric variations within the composite may be imaged in this way, with good contrast being observed between the glass mat and the epoxy resin.
Wan, Yuhang; Carlson, John A; Kesler, Benjamin A; Peng, Wang; Su, Patrick; Al-Mulla, Saoud A; Lim, Sung Jun; Smith, Andrew M; Dallesasse, John M; Cunningham, Brian T
2016-07-08
A compact analysis platform for detecting liquid absorption and emission spectra using a set of optical linear variable filters atop a CMOS image sensor is presented. The working spectral range of the analysis platform can be extended without a reduction in spectral resolution by utilizing multiple linear variable filters with different wavelength ranges on the same CMOS sensor. With optical setup reconfiguration, its capability to measure both absorption and fluorescence emission is demonstrated. Quantitative detection of fluorescence emission down to 0.28 nM for quantum dot dispersions and 32 ng/mL for near-infrared dyes has been demonstrated on a single platform over a wide spectral range, as well as an absorption-based water quality test, showing the versatility of the system across liquid solutions for different emission and absorption bands. Comparison with a commercially available portable spectrometer and an optical spectrum analyzer shows our system has an improved signal-to-noise ratio and acceptable spectral resolution for discrimination of emission spectra, and characterization of colored liquid's absorption characteristics generated by common biomolecular assays. This simple, compact, and versatile analysis platform demonstrates a path towards an integrated optical device that can be utilized for a wide variety of applications in point-of-use testing and point-of-care diagnostics.
Polarization imaging apparatus
NASA Technical Reports Server (NTRS)
Zou, Yingyin Kevin (Inventor); Chen, Qiushui (Inventor); Zhao, Hongzhi (Inventor)
2010-01-01
A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set 11, a linear polarizer 14 with its optical axis 18, a first variable phase retarder 12 with its optical axis 16 aligned 22.5° to axis 18, a second variable phase retarder 13 with its optical axis 17 aligned 45° to axis 18, an imaging sensor 15 for sensing the intensity images of the sample, a controller 101 and a computer 102. The two variable phase retarders 12 and 13 are controlled independently by the computer 102 through a controller unit 101 which generates a sequence of voltages to control the phase retardations of VPRs 12 and 13. A set of four intensity images, I₀, I₁, I₂ and I₃, of the sample is captured by imaging sensor 15 when the phase retardations of VPRs 12 and 13 are set at (0,0), (π,0), (π,π) and (π/2,π), respectively. The four Stokes components of a Stokes image, S₀, S₁, S₂ and S₃, are then calculated using the four intensity images.
Quantitative detection of the colloidal gold immunochromatographic strip in HSV color space
NASA Astrophysics Data System (ADS)
Wu, Yuanshu; Gao, Yueming; Du, Min
2014-09-01
In this paper, a fast, reliable and accurate quantitative detection method for the colloidal gold immunochromatographic strip (GICA) is presented. An image acquisition device, mainly composed of an annular LED source, a zoom lens, and a 10-bit CMOS image sensor with 54.5 dB SNR, is designed for the detection. First, the test line is extracted from the strip window by using the H-component peak points of the HSV space as the clustering centers in the Fuzzy C-Means (FCM) clustering method. Then, a two-dimensional feature composed of the hue (H) and saturation (S) of the HSV space is proposed to improve the accuracy of the quantitative detection. Finally, an experiment with human chorionic gonadotropin (HCG) in the concentration range 0-500 mIU/mL is carried out. The results show that the linear correlation coefficient between this method and the optical density (OD) values measured by a fiber-optic sensor reaches 96.74%. Meanwhile, the linearity of the fitting curve constructed against concentration is greater than 95.00%.
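The sketch below illustrates the two-dimensional hue/saturation feature extraction on a synthetic strip image. The FCM clustering used in the paper to locate the test line is replaced here by a fixed, assumed region of interest.

```python
# Hedged sketch: extracting the hue (H) and saturation (S) of a strip's test
# line after converting an RGB image to HSV, the two-dimensional feature the
# abstract proposes. The image here is synthetic; the FCM clustering step used
# in the paper to locate the test line is replaced by a fixed region of interest.
import numpy as np
from matplotlib.colors import rgb_to_hsv

# Synthetic RGB strip image (values in [0, 1]) with a reddish "test line" band.
strip = np.ones((40, 100, 3)) * 0.9
strip[15:25, :, :] = [0.8, 0.3, 0.35]

hsv = rgb_to_hsv(strip)
test_line = hsv[15:25, :, :]              # assumed region of the test line
h_mean = test_line[..., 0].mean()
s_mean = test_line[..., 1].mean()
print(f"feature (H, S) = ({h_mean:.3f}, {s_mean:.3f})")
```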
Sensing more modes with fewer sub-apertures: the LIFTed Shack-Hartmann wavefront sensor.
Meimon, Serge; Fusco, Thierry; Michau, Vincent; Plantet, Cédric
2014-05-15
We propose here a novel way to analyze Shack-Hartmann wavefront sensor images in order to retrieve more modes than the two centroid coordinates per sub-aperture. To do so, we use the linearized focal-plane technique (LIFT) phase retrieval method for each sub-aperture. We demonstrate that we can increase the number of modes sensed with the same computational burden per mode. For instance, we show the ability to control a 21×21 actuator deformable mirror using a 10×10 lenslet array.
Putting a finishing touch on GECIs
Rose, Tobias; Goltstein, Pieter M.; Portugues, Ruben; Griesbeck, Oliver
2014-01-01
More than a decade ago, genetically encoded calcium indicators (GECIs) entered the stage as promising new tools to image calcium dynamics and neuronal activity in living tissues and designated cell types in vivo. From a variety of initial designs, two have emerged as promising prototypes for further optimization: FRET (Förster Resonance Energy Transfer)-based sensors and single-fluorophore sensors of the GCaMP family. Recent efforts in structural analysis, engineering and screening have broken important performance thresholds in the latest generation of both classes. While these improvements have made GECIs a powerful means to perform physiology in living animals, a number of other aspects of sensor function deserve attention. These aspects include indicator linearity, toxicity and slow response kinetics. Furthermore, high-performance sensors with optically more favorable emission at red or infrared wavelengths, as well as new stably or conditionally GECI-expressing animal lines, remain on the wish list. When the remaining issues are solved, imaging of GECIs will finally have crossed the last milestone, evolving from an initial promise into a fully matured technology. PMID:25477779
NASA Astrophysics Data System (ADS)
Wei, Minsong; Xing, Fei; You, Zheng
2017-01-01
The advancing growth of micro- and nano-satellites requires miniaturized sun sensors that can be conveniently applied in the attitude determination subsystem. In this work, a highly accurate wireless digital sun sensor based on profile-detecting technology is proposed; it transforms a two-dimensional image into two linear profile outputs so that it can achieve a high update rate at very low power consumption. A multiple-spot recovery approach with an asymmetric mask pattern design principle is introduced to fit the multiplexing image detector method for accuracy improvement of the sun sensor within a large Field of View (FOV). A FOV determination principle based on the concept of the FOV region is also proposed to facilitate both sub-FOV analysis and determination of the whole FOV. An RF MCU, together with solar cells, is utilized to achieve wireless and self-powered functionality. The prototype of the sun sensor is approximately 10 times smaller in size and weight than a conventional digital sun sensor (DSS). Test results indicate that the accuracy of the prototype is 0.01° within a cone FOV of 100°. Such an autonomous DSS could be equipped flexibly on a micro- or nano-satellite, especially for highly accurate remote sensing applications.
Designing a practical system for spectral imaging of skylight.
López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier; Lee, Raymond L
2005-09-20
In earlier work [J. Opt. Soc. Am. A 21, 13-23 (2004)], we showed that a combination of linear models and optimum Gaussian sensors obtained by an exhaustive search can recover daylight spectra reliably from broadband sensor data. Thus our algorithm and sensors could be used to design an accurate, relatively inexpensive system for spectral imaging of daylight. Here we improve our simulation of the multispectral system by (1) considering the different kinds of noise inherent in electronic devices such as charge-coupled devices (CCDs) or complementary metal-oxide-semiconductor (CMOS) sensors and (2) extending our research to a different kind of natural illumination, skylight. Because exhaustive searches are computationally expensive, here we switch to a simulated annealing algorithm to define the optimum sensors for recovering skylight spectra. The annealing algorithm requires us to minimize a single cost function, and so we develop one that calculates both the spectral and colorimetric similarity of any pair of skylight spectra. We show that the simulated annealing algorithm yields results similar to the exhaustive search but with much less computational effort. Our technique lets us study the properties of optimum sensors in the presence of noise, one side effect of which is that adding more sensors may not improve the spectral recovery.
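A minimal simulated-annealing sketch in the spirit of the search described above is shown below. The quadratic toy cost and the Gaussian-sensor parameter layout (center wavelength, bandwidth triples) are assumptions standing in for the paper's combined spectral-plus-colorimetric cost function.

```python
# Hedged sketch: a generic simulated-annealing search over Gaussian sensor
# parameters (center wavelength, bandwidth), as the abstract describes,
# minimizing a placeholder cost. The real cost combines spectral and
# colorimetric similarity of recovered vs. measured skylight spectra; the
# quadratic toy cost below is an assumption used only to keep the sketch short.
import math
import random

random.seed(0)

def cost(params):
    # Placeholder: squared distance from a (hypothetical) known-good sensor set.
    target = [450.0, 40.0, 550.0, 40.0, 650.0, 40.0]
    return sum((p - t) ** 2 for p, t in zip(params, target))

def simulated_annealing(params, steps=20000, t_start=1e4, t_end=1e-2):
    current, best = list(params), list(params)
    for k in range(steps):
        temperature = t_start * (t_end / t_start) ** (k / steps)
        candidate = [p + random.gauss(0.0, 5.0) for p in current]
        delta = cost(candidate) - cost(current)
        # Metropolis acceptance: always accept improvements, sometimes accept worse.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            current = candidate
            if cost(current) < cost(best):
                best = list(current)
    return best

print(simulated_annealing([500.0, 60.0, 500.0, 60.0, 500.0, 60.0]))
```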
Multi-platform comparisons of MODIS and AVHRR normalized difference vegetation index data
Gallo, Kevin P.; Ji, Lei; Reed, Bradley C.; Eidenshink, Jeffery C.; Dwyer, John L.
2005-01-01
The relationship between AVHRR-derived normalized difference vegetation index (NDVI) values and those of future sensors is critical to continued long-term monitoring of land surface properties. The follow-on operational sensor to the AVHRR, the Visible/Infrared Imager/Radiometer Suite (VIIRS), will be very similar to the NASA Earth Observing System's Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. NDVI data derived from visible and near-infrared data acquired by the MODIS (Terra and Aqua platforms) and AVHRR (NOAA-16 and NOAA-17) sensors were compared over the same time periods and a variety of land cover classes within the conterminous United States. The results indicate that the 16-day composite NDVI values are quite similar over the composite intervals of 2002 and 2003, and linear relationships exist between the NDVI values from the various sensors. The composite AVHRR NDVI data included water and cloud masks and adjustments for water vapor as did the MODIS NDVI data. When analyzed over a variety of land cover types and composite intervals, the AVHRR derived NDVI data were associated with 89% or more of the variation in the MODIS NDVI values. The results suggest that it may be possible to successfully reprocess historical AVHRR data sets to provide continuity of NDVI products through future sensor systems.
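As a hedged illustration of this kind of cross-sensor NDVI comparison, the sketch below computes NDVI from red and near-infrared reflectance and fits a linear relationship between two sensors; all values are synthetic, not data from the study.

```python
# Hedged sketch: computing NDVI from red and near-infrared reflectance for
# two sensors and fitting the linear relationship between them, mirroring the
# kind of cross-sensor NDVI comparison the abstract reports. All arrays are
# hypothetical composite values.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

rng = np.random.default_rng(3)
red_avhrr = rng.uniform(0.03, 0.15, 200)
nir_avhrr = rng.uniform(0.20, 0.50, 200)
ndvi_avhrr = ndvi(nir_avhrr, red_avhrr)
# Assume MODIS NDVI tracks AVHRR NDVI with a small offset plus noise.
ndvi_modis = 0.95 * ndvi_avhrr + 0.02 + rng.normal(0.0, 0.02, 200)

slope, intercept = np.polyfit(ndvi_avhrr, ndvi_modis, 1)
r2 = np.corrcoef(ndvi_avhrr, ndvi_modis)[0, 1] ** 2
print(f"MODIS ≈ {slope:.2f} * AVHRR + {intercept:.2f}, R² = {r2:.2f}")
```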
Implementation of software-based sensor linearization algorithms on low-cost microcontrollers.
Erdem, Hamit
2010-10-01
Nonlinear sensors and microcontrollers are used in many embedded system designs. As the input-output characteristic of most sensors is nonlinear in nature, obtaining data from a nonlinear sensor by using an integer microcontroller has always been a design challenge. This paper discusses the implementation of six software-based sensor linearization algorithms for low-cost microcontrollers. The comparative study of the linearization algorithms is performed by using a nonlinear optical distance-measuring sensor. The performance of the algorithms is examined with respect to memory space usage, linearization accuracy and algorithm execution time. The implementation and comparison results can be used for selection of a linearization algorithm based on the sensor transfer function, expected linearization accuracy and microcontroller capacity. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
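One member of this family of software linearization methods, a calibration lookup table with piecewise-linear interpolation, is sketched below for a hypothetical optical distance sensor; the calibration points are invented and do not come from the paper.

```python
# Hedged sketch: one common software linearization approach from the family
# the abstract compares -- a lookup table with piecewise-linear interpolation --
# applied to a hypothetical nonlinear optical distance sensor whose output
# voltage falls roughly as 1/distance. The calibration points are invented.
import numpy as np

# Calibration table: measured output voltage (V) vs. known distance (cm).
cal_voltage = np.array([2.75, 2.00, 1.55, 1.25, 0.90, 0.70, 0.55, 0.45])
cal_distance = np.array([10.0, 15.0, 20.0, 25.0, 35.0, 45.0, 55.0, 65.0])

def linearize(voltage):
    """Piecewise-linear interpolation; np.interp needs ascending x values."""
    order = np.argsort(cal_voltage)
    return np.interp(voltage, cal_voltage[order], cal_distance[order])

for v in (2.3, 1.4, 0.6):
    print(f"{v:.2f} V -> {linearize(v):.1f} cm")
```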
NASA Astrophysics Data System (ADS)
Khan, Muazzam A.; Ahmad, Jawad; Javaid, Qaisar; Saqib, Nazar A.
2017-03-01
Wireless Sensor Networks (WSNs) are widely deployed for monitoring physical activity and/or environmental conditions. Data gathered from a WSN are transmitted via the network to a central location for further processing. Numerous applications of WSNs can be found in smart homes, intelligent buildings, health care, energy-efficient smart grids and industrial control systems. In recent years, computer scientists have focused on finding more applications of WSNs in multimedia technologies, i.e. audio, video and digital images. Due to the bulky nature of multimedia data, a WSN processes a large volume of multimedia data, which significantly increases computational complexity and hence reduces battery time. With respect to battery-life constraints, image compression together with secure transmission over a wide-ranged sensor network is an emerging and challenging task in Wireless Multimedia Sensor Networks. Due to the open nature of the Internet, transmission of data must be secured through a process known as encryption. As a result, there has long been an intense demand for schemes that are energy efficient as well as highly secure. In this paper, a discrete wavelet-based partial image encryption scheme using a hashing algorithm, chaotic maps and Hussain's S-box is reported. The plaintext image is compressed via the discrete wavelet transform, and then the image is shuffled column-wise and row-wise via a Piece-wise Linear Chaotic Map (PWLCM) and a Nonlinear Chaotic Algorithm, respectively. To achieve higher security, the initial conditions for the PWLCM are made dependent on a hash function. The permuted image is bitwise XORed with a random matrix generated from an Intertwining Logistic map. To enhance the security further, the final ciphertext is obtained after substituting all elements with Hussain's substitution box. Experimental and statistical results confirm the strength of the anticipated scheme.
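The sketch below shows one ingredient of such a scheme, using the commonly cited piece-wise linear chaotic map to generate a column permutation. The key schedule, hash coupling, S-box and XOR stages of the reported scheme are omitted, and the seed and control parameter are arbitrary.

```python
# Hedged sketch: using a piece-wise linear chaotic map (PWLCM) to generate a
# column permutation, one ingredient of the scheme described above. The map
# form below is the commonly cited PWLCM; the scheme's exact key schedule,
# hash coupling, S-box and XOR stages are omitted.
import numpy as np

def pwlcm(x, p):
    """One iteration of the piece-wise linear chaotic map, p in (0, 0.5)."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)      # symmetric upper half

def chaotic_permutation(length, x0=0.37, p=0.29, burn_in=200):
    x, sequence = x0, []
    for i in range(burn_in + length):
        x = pwlcm(x, p)
        if i >= burn_in:
            sequence.append(x)
    return np.argsort(sequence)   # ranking the chaotic values gives a permutation

perm = chaotic_permutation(8)
columns = np.arange(8 * 8).reshape(8, 8)
shuffled = columns[:, perm]       # column-wise shuffle of an 8x8 "image"
print(perm)
```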
NASA Astrophysics Data System (ADS)
Cheong, M. K.; Bahiki, M. R.; Azrad, S.
2016-10-01
The main goal of this study is to demonstrate an approach to collision avoidance on a Quadrotor Unmanned Aerial Vehicle (QUAV) using image sensors with a colour-based tracking method. A pair of high-definition (HD) stereo cameras was chosen as the stereo vision sensor to obtain depth data from flat object surfaces. A laser transmitter was utilized to project a high-contrast tracking spot for depth calculation using common triangulation. A stereo vision algorithm was developed to acquire the distance from the tracked point to the QUAV, and a control algorithm was designed to manipulate the QUAV's response based on the calculated depth. Attitude and position controllers were designed using the non-linear model with the help of an OptiTrack motion tracking system. A number of collision avoidance flight tests were carried out to validate the performance of the stereo vision and control algorithm based on image sensors. In the results, the UAV was able to hover with fairly good accuracy in both static and dynamic short-range collision avoidance. Collision avoidance performance of the UAV was better for obstacles with dull surfaces than for shiny surfaces. The minimum collision avoidance distance achievable was 0.4 m. The approach is suitable for short-range collision avoidance.
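A minimal sketch of the underlying stereo triangulation, depth from the disparity of a single tracked spot in a rectified pair, is given below with invented camera parameters.

```python
# Hedged sketch: depth from stereo triangulation of a single tracked laser
# spot, the basic geometry the abstract relies on. Camera parameters below
# (focal length in pixels, baseline in metres) are invented for illustration.
def depth_from_disparity(x_left_px, x_right_px, focal_px, baseline_m):
    """Z = f * B / d for a rectified stereo pair; d is the disparity in pixels."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity

# Example: assumed 700 px focal length, 0.12 m baseline, 35 px disparity.
print(f"distance ≈ {depth_from_disparity(412.0, 377.0, 700.0, 0.12):.2f} m")
```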
Linear variable narrow bandpass optical filters in the far infrared (Conference Presentation)
NASA Astrophysics Data System (ADS)
Rahmlow, Thomas D.
2017-06-01
We are currently developing linear variable filters (LVFs) with very high wavelength gradients. In the visible, these filters have a wavelength gradient of 50 to 100 nm/mm. In the infrared, the wavelength gradient covers the range of 500 to 900 microns/mm. Filter designs include band-pass, long-pass and ultra-high-performance anti-reflection coatings. The active area of the filters is on the order of 5 to 30 mm along the wavelength gradient and up to 30 mm in the orthogonal, constant-wavelength direction. Variation in performance along the constant direction is less than 1%. Repeatable performance from filter to filter, absolute placement of the filter relative to a substrate fiducial, and high in-band transmission across the full spectral band are demonstrated. Applications include order-sorting filters, direct replacement of the spectrometer, and hyperspectral imaging. Off-band rejection with an optical density of greater than 3 allows use of the filter as an order-sorting filter. The linear variable order-sorting filter replaces other filter types such as block filters. The disadvantage of block filters is the loss of pixels due to the transition between filter blocks. The LVF is a continuous gradient without a discrete transition between filter wavelength regions. If the LVF is designed as a narrow band-pass filter, it can be used in place of a spectrometer, thus reducing overall sensor weight and cost while improving the robustness of the sensor. By controlling the orthogonal performance (smile), the LVF can be sized to the dimensions of the detector. When imaging onto a two-dimensional array and operating the sensor in a push-broom configuration, the LVF spectrometer performs as a hyperspectral imager. This paper presents the performance of LVFs fabricated in the far infrared on substrates sized to available detectors. The impact of spot size, F-number and filter characterization is presented. Results are also compared to extended-visible LVF filters.
Efficient, nonlinear phase estimation with the nonmodulated pyramid wavefront sensor
NASA Astrophysics Data System (ADS)
Frazin, Richard A.
2018-04-01
The sensitivity of the pyramid wavefront sensor (PyWFS) has made it a popular choice for astronomical adaptive optics (AAO) systems, and it is at its most sensitive when it is used without modulation of the input beam. In non-modulated mode, the device is highly nonlinear. Hence, all PyWFS implementations on current AAO systems employ modulation to make the device more linear. The upcoming era of 30-m class telescopes and the demand for ultra-precise wavefront control stemming from science objectives that include direct imaging of exoplanets make using the PyWFS without modulation desirable. This article argues that nonlinear estimation based on Newton's method for nonlinear optimization can be useful for mitigating the effects of nonlinearity in the non-modulated PyWFS. The proposed approach requires all optical modeling to be pre-computed, which has the advantage of avoiding real-time simulations of beam propagation. Further, the required real-time calculations are amenable to massively parallel computation. Numerical experiments simulate a currently operational PyWFS. A singular value analysis shows that the common practice of calculating two "slope" images from the four PyWFS pupil images discards critical information and is unsuitable for the non-modulated PyWFS simulated here. Instead, this article advocates estimators that use the raw pixel values not only from the four geometrical images of the pupil, but from surrounding pixels as well. The simulations indicate that nonlinear estimation can be effective when the Strehl ratio of the input beam is greater than 0.3, and the improvement relative to linear estimation tends to increase at larger Strehl ratios. At Strehl ratios less than about 0.5, the performances of both the nonlinear and linear estimators are relatively insensitive to noise, since they are dominated by nonlinearity error.
Snapshot Imaging Spectrometry in the Visible and Long Wave Infrared
NASA Astrophysics Data System (ADS)
Maione, Bryan David
Imaging spectrometry is an optical technique in which the spectral content of an object is measured at each location in space. The main advantage of this modality is that it enables characterization beyond what is possible with a conventional camera, since spectral information is generally related to the chemical composition of the object. Due to this, imaging spectrometers are often capable of detecting targets that are either morphologically inconsistent or even under-resolved. A specific class of imaging spectrometer, known as a snapshot system, seeks to measure all spatial and spectral information simultaneously, thereby rectifying artifacts associated with scanning designs and enabling the measurement of temporally dynamic scenes. Snapshot designs are the focus of this dissertation. Three designs for snapshot imaging spectrometers are developed, each providing novel contributions to the field of imaging spectrometry. In chapter 2, the first spatially heterodyned snapshot imaging spectrometer is modeled and experimentally validated. Spatial heterodyning is a technique commonly implemented in non-imaging Fourier transform spectrometry. For Fourier transform imaging spectrometers, spatial heterodyning improves the spectral resolution trade space. Additionally, in this chapter a unique neural-network-based spectral calibration is developed and determined to be an improvement over Fourier- and linear-operator-based techniques. Leveraging spatial heterodyning as developed in chapter 2, in chapter 3 a high-spectral-resolution snapshot Fourier transform imaging spectrometer, based on a Savart plate interferometer, is developed and experimentally validated. The sensor presented in this chapter is the highest-spectral-resolution sensor in its class. High spectral resolution enables the sensor to discriminate narrowly spaced spectral lines. The capabilities of neural networks in imaging spectrometry are further explored in this chapter. Neural networks are used to perform single-target detection on raw instrument data, thereby eliminating the need for an explicit spectral calibration step. As an extension of the results in chapter 2, neural networks are once again demonstrated to be an improvement compared to linear-operator-based detection. In chapter 4 a non-interferometric design is developed for the long wave infrared (wavelengths spanning 8-12 microns). The imaging spectrometer developed in this chapter is a multi-aperture filtered microbolometer. Since the detector is uncooled, the presented design is ultra-compact and low power. Additionally, cost-effective polymer absorption filters are used in lieu of interference filters. Since each measurement of the system is spectrally multiplexed, an SNR advantage is realized. A theoretical model for the filtered design is developed, and the performance of the sensor for detecting liquid contaminants is investigated. Similar to past chapters, neural networks are used and achieve false detection rates of less than 1%. Lastly, this dissertation concludes with a discussion of future work and the potential impact of these devices.
Performance Evaluation of 98 CZT Sensors for Their Use in Gamma-Ray Imaging
NASA Astrophysics Data System (ADS)
Dedek, Nicolas; Speller, Robert D.; Spendley, Paul; Horrocks, Julie A.
2008-10-01
98 SPEAR sensors from eV Products have been evaluated for use in a portable Compton camera. The sensors have a 5 mm × 5 mm × 5 mm CdZnTe crystal and are provided together with a preamplifier. The energy resolution was studied in detail for all sensors and was found to be 6% on average at 59.5 keV and 3% on average at 662 keV. The standard deviations of the corresponding energy resolution distributions are remarkably small (0.6% at 59.5 keV, 0.7% at 662 keV) and reflect the uniformity of the sensor characteristics. For possible outdoor use, the temperature dependence of the sensor performance was investigated for temperatures between 15 and 45 °C. A linear shift in calibration with temperature was observed. The energy resolution at low energies (81 keV) was found to deteriorate exponentially with temperature, while it stayed constant at higher energies (356 keV). A Compton camera built from these sensors was simulated. To obtain realistic energy spectra, a suitable detector response function was implemented. To investigate the angular resolution of the camera, a 137Cs point source was simulated. Reconstructed images of the point source were compared for perfect and realistic energy and position resolutions. The angular resolution of the camera was found to be better than 10°.
Qiu, Lei; Liu, Bin; Yuan, Shenfang; Su, Zhongqing
2016-01-01
The spatial-wavenumber filtering technique is an effective approach to distinguishing the propagating direction and wave mode of Lamb waves in the spatial-wavenumber domain, and it has therefore been increasingly studied for damage evaluation in recent years. For on-line impact monitoring in practical applications, however, the main problem is how to realize spatial-wavenumber filtering of the impact signal when a high-spatial-resolution wavenumber cannot be measured or an accurate wavenumber curve cannot be modeled. In this paper, a new model-independent spatial-wavenumber filter based impact imaging method is proposed. In this method, a 2D cross-shaped array constructed from two linear piezoelectric (PZT) sensor arrays is used to acquire the impact signal on-line. The continuous complex Shannon wavelet transform is adopted to extract frequency narrowband signals from the frequency-wideband impact response signals of the PZT sensors. A model-independent spatial-wavenumber filter is designed based on the spatial-wavenumber filtering technique. Based on the designed filter, a wavenumber searching and best-match mechanism is proposed to implement the spatial-wavenumber filtering of the frequency narrowband signals without modeling, which can be used to obtain a wavenumber-time image of the impact relative to a linear PZT sensor array. By using the two wavenumber-time images of the 2D cross-shaped array, the impact direction can be estimated without a blind angle. The impact distance relative to the 2D cross-shaped array can be calculated using the difference in time-of-flight between the frequency narrowband signals at two different central frequencies and the corresponding group velocities. Validations performed on a carbon fiber composite laminate plate and an aircraft composite oil tank show good impact localization accuracy of the model-independent spatial-wavenumber filter based impact imaging method. Copyright © 2015 Elsevier B.V. All rights reserved.
A CMOS-based large-area high-resolution imaging system for high-energy x-ray applications
NASA Astrophysics Data System (ADS)
Rodricks, Brian; Fowler, Boyd; Liu, Chiao; Lowes, John; Haeffner, Dean; Lienert, Ulrich; Almer, John
2008-08-01
CCDs have been the primary sensor in imaging systems for x-ray diffraction and imaging applications in recent years. CCDs have met the fundamental requirements of low noise, high sensitivity, high dynamic range and spatial resolution necessary for these scientific applications. State-of-the-art CMOS image sensor (CIS) technology has experienced dramatic improvements recently, and its performance is rivaling or surpassing that of most CCDs. The advancement of CIS technology is at an ever-accelerating pace and is driven by the multi-billion dollar consumer market. There are several advantages of CIS over traditional CCDs and other solid-state imaging devices; they include low power, high-speed operation, system-on-chip integration and lower manufacturing costs. The combination of superior imaging performance and system advantages makes CIS a good candidate for high-sensitivity imaging system development. This paper describes a 1344 × 1212 CIS imaging system with a 19.5 μm pitch optimized for x-ray scattering studies at high energies. Fundamental metrics of linearity, dynamic range, spatial resolution, conversion gain and sensitivity are estimated. The Detective Quantum Efficiency (DQE) is also estimated. Representative x-ray diffraction images are presented. Diffraction images are compared against a CCD-based imaging system.
Reborn quadrant anode image sensor
NASA Astrophysics Data System (ADS)
Prokazov, Yury; Turbin, Evgeny; Vitali, Marco; Herzog, Andreas; Michaelis, Bernd; Zuschratter, Werner; Kemnitz, Klaus
2009-06-01
We describe a position-sensitive photon-counting microchannel-plate-based detector with an improved quadrant anode (QA) readout system. The technique relies on a combination of the four planar-element pattern and an additional fifth electrode. The charge cloud induced by a single detected particle is split between the electrodes. The measured charge values uniquely define the position of the initial event. The QA was first published in 1976 by Lampton and Malina. This anode configuration was undeservedly forgotten and its potential has long been underestimated. The presented approach extends the operating spatial range to the whole sensitive area of the microchannel plate surface and demonstrates good linearity over the field of view. The novel image sensor therefore achieves spatial resolution better than 50 μm and count rates up to one million events per second.
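For illustration, the classic four-electrode quadrant-anode position estimate is sketched below; the fifth electrode used in this detector to extend the linear range is not modeled.

```python
# Hedged sketch: the classic four-electrode quadrant-anode position estimate,
# in which the event coordinates follow from the charge split between the
# quadrants. The detector above adds a fifth electrode to extend the linear
# range; that refinement is not modeled here.
def quadrant_anode_position(q_a, q_b, q_c, q_d):
    """q_a..q_d: charges on the four quadrants (A, B top; C, D bottom, say)."""
    total = q_a + q_b + q_c + q_d
    x = ((q_b + q_d) - (q_a + q_c)) / total   # right minus left
    y = ((q_a + q_b) - (q_c + q_d)) / total   # top minus bottom
    return x, y   # normalized coordinates in roughly [-1, 1]

print(quadrant_anode_position(0.30, 0.28, 0.22, 0.20))
```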
Yan, Gang; Zhou, Li
2018-02-21
This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the view of image processing. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to consider the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimal criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that mostly approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method.
Zhou, Li
2018-01-01
This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the view of image processing. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to consider the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimal criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that mostly approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method. PMID:29466310
A look at motion in the frequency domain
NASA Technical Reports Server (NTRS)
Watson, A. B.; Ahumada, A. J., Jr.
1983-01-01
A moving image can be specified by a contrast distribution, c(x,y,t), over the dimensions of space x,y and time t. Alternatively, it can be specified by the distribution C(u,v,w) over spatial frequency u,v and temporal frequency w. The frequency representation of a moving image is shown to have a characteristic form. This permits two useful observations. The first is that the apparent smoothness of time-sampled moving images (apparent motion) can be explained by the filtering action of the human visual system. This leads to the following formula for the required update rate for time-sampled displays: w_c = w_l + r u_l, where w_c is the required update rate in Hz, w_l is the limit of human temporal resolution in Hz, r is the velocity of the moving image in degrees/s, and u_l is the limit of human spatial resolution in cycles/degree. The second observation is that it is possible to construct a linear sensor that responds to images moving in a particular direction. The sensor is derived and its properties are discussed.
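The update-rate formula can be evaluated directly, as in the sketch below; the resolution limits used are assumed example values, not figures from the paper.

```python
# Hedged sketch: evaluating the update-rate formula w_c = w_l + r * u_l from
# the abstract. The temporal and spatial resolution limits used below are
# assumed example values, not figures taken from the paper.
def required_update_rate(temporal_limit_hz, velocity_deg_per_s, spatial_limit_cpd):
    return temporal_limit_hz + velocity_deg_per_s * spatial_limit_cpd

# Example: an assumed 30 Hz temporal limit, 10 deg/s image motion and a
# 4 cycles/deg effective spatial limit give a required update rate of 70 Hz.
print(required_update_rate(30.0, 10.0, 4.0))
```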
Smoothing-Based Relative Navigation and Coded Aperture Imaging
NASA Technical Reports Server (NTRS)
Saenz-Otero, Alvar; Liebe, Carl Christian; Hunter, Roger C.; Baker, Christopher
2017-01-01
This project will develop an efficient smoothing software package for incremental estimation of the relative poses and velocities between multiple small spacecraft in a formation, and a small, long-range depth sensor based on coded aperture imaging that is capable of identifying other spacecraft in the formation. The smoothing algorithm will obtain the maximum a posteriori estimate of the relative poses between the spacecraft by using all available sensor information in the spacecraft formation. This algorithm will be portable between different satellite platforms that possess different sensor suites and computational capabilities, and will be adaptable in the case that one or more satellites in the formation become inoperable. It will obtain a solution that approaches an exact solution, as opposed to one with the linearization approximation that is typical of filtering algorithms. Thus, the algorithms developed and demonstrated as part of this program will enhance the applicability of small spacecraft to multi-platform operations, such as precisely aligned constellations and fractionated satellite systems.
NASA Technical Reports Server (NTRS)
Clapp, Brian R.
2005-01-01
For fifteen years, the science mission of the Hubble Space Telescope (HST) required using at least three rate gyros. Controlling with alternate sensors to replace failing gyros can extend the HST science mission. A two-gyro control law has been designed and implemented using magnetometers, star trackers, and Fine Guidance Sensors (FGSs) to control vehicle rate about the missing gyro axis. The three aforementioned sensors are used in succession to reduce HST boresight jitter to less than 7 milli-arcseconds rms prior to science imaging. The Magnetometer and 2-Gyro (M2G) control law is used for large-angle maneuvers and attitude control during Earth occultation of star trackers and FGSs. The Tracker and 2-Gyro (T2G) control law dampens M2G rates and controls attitude in preparation for guide star acquisition with the FGSs. The Fine Guidance Sensor and 2-Gyro (F2G) control law dampens T2G rates and controls HST attitude during science imaging. This paper describes the F2G control law. Details of F2G algorithms are presented, including computation of the FGS-measured star vector using non-linear equations, optimal estimation of HST body rate, design of the F2G control laws and gyro bias observer, SISO and MIMO linear stability analyses, and design of the F2G intramode transition and guide star acquisition logic. Results from an FGS flight-spare ground test are presented that define acceptable HST jitter levels for successful guide star acquisition under two-gyro control. HST-specific disturbance and noise models are described that are based upon flight telemetry; these models are used in HSTSIM, a high-fidelity non-linear time domain simulation, to predict HST on-orbit disturbance responses and FGS interferometer Loss of Lock (LOL) characteristics under F2G control. Additional HSTSIM results are presented predicting HST quiescent boresight jitter performance, science maneuver performance, and observer configuration performance during F2G operation. Simulation results are compared to on-orbit data from F2G flight tests performed in February 2005. Science images and point spread functions from the Advanced Camera for Surveys (ACS) High Resolution Camera (HRC) are presented that compare HST science performance under F2G versus three-gyro control. Images and flight telemetry show that HST boresight jitter with the new F2G control law is usually less than jitter using the three-gyro law, and HST boresight jitter during F2G operation is dependent upon guide star magnitude.
Estimating plant area index for monitoring crop growth dynamics using Landsat-8 and RapidEye images
NASA Astrophysics Data System (ADS)
Shang, Jiali; Liu, Jiangui; Huffman, Ted; Qian, Budong; Pattey, Elizabeth; Wang, Jinfei; Zhao, Ting; Geng, Xiaoyuan; Kroetsch, David; Dong, Taifeng; Lantz, Nicholas
2014-01-01
This study investigates the use of two different optical sensors, the multispectral imager (MSI) onboard the RapidEye satellites and the operational land imager (OLI) onboard the Landsat-8 for mapping within-field variability of crop growth conditions and tracking the seasonal growth dynamics. The study was carried out in southern Ontario, Canada, during the 2013 growing season for three annual crops, corn, soybeans, and winter wheat. Plant area index (PAI) was measured at different growth stages using digital hemispherical photography at two corn fields, two winter wheat fields, and two soybean fields. Comparison between several conventional vegetation indices derived from concurrently acquired image data by the two sensors showed a good agreement. The two-band enhanced vegetation index (EVI2) and the normalized difference vegetation index (NDVI) were derived from the surface reflectance of the two sensors. The study showed that EVI2 was more resistant to saturation at high biomass range than NDVI. A linear relationship could be used for crop green effective PAI estimation from EVI2, with a coefficient of determination (R2) of 0.85 and root-mean-square error of 0.53. The estimated multitemporal product of green PAI was found to be able to capture the seasonal dynamics of the three crops.
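A hedged sketch of the EVI2 computation and a linear PAI estimate follows; the regression coefficients are invented placeholders, since the abstract reports only the goodness of fit of the linear relationship.

```python
# Hedged sketch: the two-band enhanced vegetation index (EVI2) in its widely
# used form, followed by a linear PAI estimate. The regression coefficients
# below are invented placeholders; the abstract only reports that a linear
# EVI2-PAI relationship with R^2 = 0.85 was found, not its coefficients.
import numpy as np

def evi2(nir, red):
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

def pai_from_evi2(evi2_value, gain=6.0, offset=-0.4):
    """Hypothetical linear model PAI = gain * EVI2 + offset."""
    return gain * evi2_value + offset

red = np.array([0.05, 0.08, 0.04])
nir = np.array([0.35, 0.30, 0.45])
print(pai_from_evi2(evi2(nir, red)))
```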
Nakata, Toshihiko; Ninomiya, Takanori
2006-10-10
A general solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging is presented. Phase-modulated heterodyne interference light generated by a linear region of periodic displacement is captured by a charge-coupled device image sensor, in which the interference light is sampled at a rate lower than the Nyquist frequency. The frequencies of the components of the light, such as the sideband and carrier (which carry photodisplacement and topography information, respectively), are downconverted and sampled simultaneously based on the integration and sampling effects of the sensor. A general solution for frequency and amplitude in this downconversion is derived by Fourier analysis of the sampling procedure. The optimal frequency condition for the heterodyne beat signal, modulation signal and sensor gate pulse is derived such that undesirable components are eliminated and each information component is converted into an orthogonal function, allowing each to be discretely reproduced from the Fourier coefficients. The optimal frequency parameters that maximize the sideband-to-carrier amplitude ratio are determined, theoretically demonstrating high selectivity of over 80 dB. Preliminary experiments demonstrate that this technique is capable of simultaneous imaging of reflectivity, topography and photodisplacement for the detection of subsurface lattice defects at a speed corresponding to an acquisition time of only 0.26 s per 256 × 256 pixel area.
I-ImaS: intelligent imaging sensors
NASA Astrophysics Data System (ADS)
Griffiths, J.; Royle, G.; Esbrand, C.; Hall, G.; Turchetta, R.; Speller, R.
2010-08-01
Conventional x-radiography uniformly irradiates the relevant region of the patient. Across that region, however, there is likely to be significant variation in both the thickness and pathological composition of the tissues present, which means that the x-ray exposure conditions selected, and consequently the image quality achieved, are a compromise. The I-ImaS concept eliminates this compromise by intelligently scanning the patient to identify the important diagnostic features, which are then used to adaptively control the x-ray exposure conditions at each point in the patient. In this way optimal image quality is achieved throughout the region of interest whilst maintaining or reducing the dose. An I-ImaS system has been built under an EU Framework 6 project and has undergone pre-clinical testing. The system is based upon two rows of sensors controlled via an FPGA based DAQ board. Each row consists of a 160 mm × 1 mm linear array of ten scintillator coated 3T CMOS APS devices with 32 μm pixels and a readable array of 520 × 40 pixels. The first sensor row scans the patient using a fraction of the total radiation dose to produce a preview image, which is then interrogated to identify the optimal exposure conditions at each point in the image. A signal is then sent to control a beam filter mechanism to appropriately moderate x-ray beam intensity at the patient as the second row of sensors follows behind. Tests performed on breast tissue sections found that the contrast-to-noise ratio in over 70% of the images was increased by an average of 15% at an average dose reduction of 9%. The same technology is currently also being applied to baggage scanning for airport security.
Hakala, Teemu; Markelin, Lauri; Honkavaara, Eija; Scott, Barry; Theocharous, Theo; Nevalainen, Olli; Näsi, Roope; Suomalainen, Juha; Viljanen, Niko; Greenwell, Claire; Fox, Nigel
2018-05-03
Drone-based remote sensing has evolved rapidly in recent years. Miniaturized hyperspectral imaging sensors are becoming more common as they provide more abundant information about the object compared to traditional cameras. Reflectance is a physically defined object property and is therefore often the preferred output of remote sensing data capture for use in further processing. Absolute calibration of the sensor provides a possibility for physical modelling of the imaging process and enables efficient procedures for reflectance correction. Our objective is to develop a method for direct reflectance measurements for drone-based remote sensing. It is based on an imaging spectrometer and an irradiance spectrometer. This approach is highly attractive for many practical applications as it does not require in situ reflectance panels for converting the sensor radiance to ground reflectance factors. We performed SI-traceable spectral and radiance calibration of a tuneable Fabry-Pérot interferometer (FPI) based hyperspectral camera at the National Physical Laboratory NPL (Teddington, UK). The camera represents novel technology, collecting 2D-format hyperspectral image cubes using a time-sequential spectral scanning principle. The radiance accuracy of the different channels was within ±4% when evaluated using independent test data, and the linearity of the camera response was on average 0.9994. The spectral response calibration showed side peaks on several channels due to the multiple orders of interference of the FPI. The drone-based direct reflectance measurement system showed promising results with imagery collected over Wytham Forest (Oxford, UK).
Hakala, Teemu; Scott, Barry; Theocharous, Theo; Näsi, Roope; Suomalainen, Juha; Greenwell, Claire; Fox, Nigel
2018-01-01
Drone-based remote sensing has evolved rapidly in recent years. Miniaturized hyperspectral imaging sensors are becoming more common as they provide more abundant information about the object compared to traditional cameras. Reflectance is a physically defined object property and is therefore often the preferred output of remote sensing data capture for use in further processing. Absolute calibration of the sensor provides a possibility for physical modelling of the imaging process and enables efficient procedures for reflectance correction. Our objective is to develop a method for direct reflectance measurements for drone-based remote sensing. It is based on an imaging spectrometer and an irradiance spectrometer. This approach is highly attractive for many practical applications as it does not require in situ reflectance panels for converting the sensor radiance to ground reflectance factors. We performed SI-traceable spectral and radiance calibration of a tuneable Fabry-Pérot interferometer (FPI) based hyperspectral camera at the National Physical Laboratory NPL (Teddington, UK). The camera represents novel technology, collecting 2D-format hyperspectral image cubes using a time-sequential spectral scanning principle. The radiance accuracy of the different channels was within ±4% when evaluated using independent test data, and the linearity of the camera response was on average 0.9994. The spectral response calibration showed side peaks on several channels due to the multiple orders of interference of the FPI. The drone-based direct reflectance measurement system showed promising results with imagery collected over Wytham Forest (Oxford, UK). PMID:29751560
Fluorescence Intensity- and Lifetime-Based Glucose Sensing Using Glucose/Galactose-Binding Protein
Pickup, John C.; Khan, Faaizah; Zhi, Zheng-Liang; Coulter, Jonathan; Birch, David J. S.
2013-01-01
We review progress in our laboratories toward developing in vivo glucose sensors for diabetes that are based on fluorescence labeling of glucose/galactose-binding protein. Measurement strategies have included both monitoring glucose-induced changes in fluorescence resonance energy transfer (FRET) and labeling with the environmentally sensitive fluorophore badan. Measuring fluorescence lifetime rather than intensity has particular potential advantages for in vivo sensing. A prototype fiber-optic-based glucose sensor using this technology is being tested.

Fluorescence techniques are among the major candidate solutions for continuous, noninvasive glucose sensing in diabetes. In this article, a highly sensitive nanostructured sensor is developed to detect extremely small amounts of aqueous glucose by applying FRET. A one-pot method is applied to produce dextran-fluorescein isothiocyanate (FITC)-conjugated mesoporous silica nanoparticles (MSNs), which afterward interact with tetramethylrhodamine isothiocyanate (TRITC)-labeled concanavalin A (Con A) to form the FRET nanoparticles (FITC-dextran-Con A-TRITC@MSNs). The nanostructured glucose sensor is then formed via the self-assembly of the FRET nanoparticles on a transparent, flexible, and biocompatible substrate, e.g., poly(dimethylsiloxane). Our results indicate that the diameter of the MSNs is 60 ± 5 nm. The difference in the images before and after adding 20 μl of glucose (0.10 mmol/liter) to the FRET sensor can be detected in less than 2 min with a confocal laser scanning microscope. The correlation between the fluorescence intensity ratio, I(donor)/I(acceptor), of the FRET sensor and the concentration of aqueous glucose in the range of 0.04–4 mmol/liter has been investigated, and a linear relationship is found. Furthermore, the durability of the nanostructured FRET sensor is evaluated over 5 days. In addition, the recorded images can be converted to digital images by extracting the pixel matrix with Matlab image processing functions. We have also studied the in vitro cytotoxicity of the device. The nanostructured FRET sensor may provide an alternative method to help patients manage the disease continuously. PMID:23439161
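The linear relationship reported between the I(donor)/I(acceptor) ratio and glucose concentration lends itself to a simple inverse calibration, sketched below with invented calibration points; this is an illustration of the idea, not data or code from the study.

```python
import numpy as np

# Hypothetical calibration points: glucose (mmol/L) vs. measured
# FRET ratio I_donor / I_acceptor (illustrative values only).
conc = np.array([0.04, 0.5, 1.0, 2.0, 4.0])
ratio = np.array([0.52, 0.61, 0.70, 0.89, 1.27])

# Fit the linear model ratio = slope * conc + intercept over 0.04-4 mmol/L.
slope, intercept = np.polyfit(conc, ratio, 1)

def glucose_from_ratio(r, a=slope, b=intercept):
    """Invert the linear calibration to estimate glucose concentration."""
    return (r - b) / a

print(glucose_from_ratio(0.80))  # estimated mmol/L for a measured ratio
```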
NASA Astrophysics Data System (ADS)
Cho, Min-Seok; Kim, Tae-Ho; Kang, Seong-Hee; Kim, Dong-Su; Kim, Kyeong-Hyeon; Shin, Dong-Seok; Noh, Yu-Yun; Koo, Hyun-Jae; Cheon, Geum Seong; Suh, Tae Suk; Kim, Siyong
2016-03-01
Many studies have reported that a patient can move even when an immobilization device is used. Researchers have developed an immobilization-device quality-assurance (QA) system that evaluates the validity of immobilization devices. The QA system consists of force-sensing-resistor (FSR) sensor units, an electric circuit, a signal conditioning device, and a control personal computer (PC) with in-house software. The QA system is designed to measure the force between an immobilization device and a patient's skin by using the FSR sensor unit. This preliminary study aimed to evaluate the feasibility of using the QA system in radiation-exposure situations. When the FSR sensor unit was irradiated with a computed tomography (CT) beam and a treatment beam from a linear accelerator (LINAC), we tested the stability of the output signal, the artifact introduced into the CT image, and the change in the patient's dose. The results of this study demonstrate that the system is promising in that it performed within the error range (signal variation under the CT beam < 0.30 kPa, root-mean-square error (RMSE) between the two CT images acquired with and without the FSR sensor unit < 15 HU, signal variation under the treatment beam < 0.15 kPa, and dose difference between the presence and absence of the FSR sensor unit < 0.02%). Based on the obtained results, we will conduct volunteer tests to investigate the clinical feasibility of the QA system.
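The RMSE criterion used to compare CT images acquired with and without the FSR sensor unit in the beam can be computed as in the sketch below; the array names are placeholders and registration/masking details are omitted.

```python
import numpy as np

def ct_rmse(ct_with_sensor, ct_without_sensor):
    """Root-mean-square error (in HU) between two co-registered CT
    slices or volumes of identical shape."""
    a = np.asarray(ct_with_sensor, dtype=float)
    b = np.asarray(ct_without_sensor, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```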
Point- and line-based transformation models for high resolution satellite image rectification
NASA Astrophysics Data System (ADS)
Abd Elrahman, Ahmed Mohamed Shaker
Rigorous mathematical models with the aid of satellite ephemeris data can present the relationship between the satellite image space and the object space. With government funded satellites, access to calibration and ephemeris data has allowed the development and use of these models. However, for commercial high-resolution satellites, which have been recently launched, these data are withheld from users, and therefore alternative empirical models should be used. In general, the existing empirical models are based on the use of control points and involve linking points in the image space and the corresponding points in the object space. But the lack of control points in some remote areas and the questionable accuracy of the identified discrete conjugate points provide a catalyst for the development of algorithms based on features other than control points. This research, concerned with image rectification and 3D geo-positioning determination using High-Resolution Satellite Imagery (HRSI), has two major objectives. First, the effects of satellite sensor characteristics, number of ground control points (GCPs), and terrain elevation variations on the performance of several point based empirical models are studied. Second, a new mathematical model, using only linear features as control features, or linear features with a minimum number of GCPs, is developed. To meet the first objective, several experiments for different satellites such as Ikonos, QuickBird, and IRS-1D have been conducted using different point based empirical models. Various data sets covering different terrain types are presented and results from representative sets of the experiments are shown and analyzed. The results demonstrate the effectiveness and the superiority of these models under certain conditions. From the results obtained, several alternatives to circumvent the effects of the satellite sensor characteristics, the number of GCPs, and the terrain elevation variations are introduced. To meet the second objective, a new model named the Line Based Transformation Model (LBTM) is developed for HRSI rectification. The model has the flexibility to either solely use linear features or use linear features and a number of control points to define the image transformation parameters. Unlike point features, which must be explicitly defined, linear features have the advantage that they can be implicitly defined by any segment along the line. (Abstract shortened by UMI.)
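As a concrete example of the simplest class of point-based empirical models discussed above, the sketch below fits a 2D affine transformation between object space and image space from ground control points by linear least squares; it is a generic illustration, not the LBTM or any specific model from this work.

```python
import numpy as np

def fit_affine(ground_xy, image_rc):
    """Fit row = a0 + a1*X + a2*Y and col = b0 + b1*X + b2*Y from GCPs.
    ground_xy: (n, 2) object-space coordinates; image_rc: (n, 2) pixel
    coordinates. Returns a (3, 2) coefficient matrix."""
    g = np.asarray(ground_xy, dtype=float)
    A = np.column_stack([np.ones(len(g)), g[:, 0], g[:, 1]])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(image_rc, dtype=float), rcond=None)
    return coeffs

def apply_affine(coeffs, ground_xy):
    """Project object-space points into image space with fitted coefficients."""
    g = np.asarray(ground_xy, dtype=float)
    A = np.column_stack([np.ones(len(g)), g[:, 0], g[:, 1]])
    return A @ coeffs
```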
Electric Potential and Electric Field Imaging with Dynamic Applications & Extensions
NASA Technical Reports Server (NTRS)
Generazio, Ed
2017-01-01
The technology and methods for remote quantitative imaging of electrostatic potentials and electrostatic fields in and around objects and in free space are presented. Electric field imaging (EFI) technology may be applied to characterize intrinsic or existing electric potentials and electric fields, or an externally generated electrostatic field may be used for volumes to be inspected with EFI. The baseline sensor technology (e-Sensor) and its construction, optional electric field generation (quasi-static generator), and current e-Sensor enhancements (ephemeral e-Sensor) are discussed. Critical design elements of current linear and real-time two-dimensional (2D) measurement systems are highlighted, and the development of a three-dimensional (3D) EFI system is presented. Demonstrations for structural, electronic, human, and memory applications are shown. Recent work demonstrates that phonons may be used to create and annihilate electric dipoles within structures. Phonon-induced dipoles are ephemeral, and their polarization, strength, and location may be quantitatively characterized by EFI, providing a new subsurface phonon-EFI imaging technology. Results from real-time imaging of combustion and ion flow, and their measurement complications, will be discussed. Extensions to environmental, space, and subterranean applications will be presented, and initial results for quantitatively characterizing material properties are shown. A wearable EFI system has been developed using fundamental EFI concepts. These new EFI capabilities are demonstrated to characterize electric charge distribution, creating a new field of study embracing areas of interest including electrostatic discharge (ESD) mitigation, manufacturing quality control, crime scene forensics, design and materials selection for advanced sensors, combustion science, on-orbit space potential, container inspection, remote characterization of electronic circuits and level of activation, dielectric morphology of structures, tether integrity, organic molecular memory, atmospheric science, weather prediction, earthquake prediction, and medical diagnostic and treatment efficacy applications such as cardiac polarization wave propagation and electromyography imaging.
Low SWaP multispectral sensors using dichroic filter arrays
NASA Astrophysics Data System (ADS)
Dougherty, John; Varghese, Ron
2015-06-01
The benefits of multispectral imaging are well established in a variety of applications including remote sensing, authentication, satellite and aerial surveillance, machine vision, biomedical, and other scientific and industrial uses. However, many of the potential solutions require more compact, robust, and cost-effective cameras to realize these benefits. The next generation of multispectral sensors and cameras needs to deliver improvements in size, weight, power, portability, and spectral band customization to support widespread deployment for a variety of purpose-built aerial, unmanned, and scientific applications. A novel implementation uses micro-patterning of dichroic filters into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. Consistent with color image processing, individual spectral channels are de-mosaiced, with each channel providing an image of the field of view. This approach can be implemented across a variety of wavelength ranges and on a variety of detector types including linear, area, silicon, and InGaAs. The dichroic filter array approach can also reduce payloads and increase range for unmanned systems, with the capability to support both handheld and autonomous systems. Recent examples and results of 4-band RGB + NIR dichroic filter arrays in multispectral cameras are discussed. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches, including their passivity, spectral range, customization options, and scalable production.
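A minimal sketch of de-mosaicing a 2x2 RGB + NIR filter-array frame into four band planes is shown below; the band layout within the superpixel is assumed for illustration, and nearest-neighbor extraction stands in for a production-quality interpolation.

```python
import numpy as np

def demosaic_rgb_nir(raw):
    """Split a raw mosaic frame into R, G, B, NIR planes, assuming the
    2x2 superpixel layout [[R, G], [NIR, B]] (layout is illustrative).
    Each returned plane has half the resolution of the raw frame."""
    raw = np.asarray(raw)
    return {
        "R":   raw[0::2, 0::2],
        "G":   raw[0::2, 1::2],
        "NIR": raw[1::2, 0::2],
        "B":   raw[1::2, 1::2],
    }
```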
NASA Astrophysics Data System (ADS)
Torkildsen, H. E.; Hovland, H.; Opsahl, T.; Haavardsholm, T. V.; Nicolas, S.; Skauli, T.
2014-06-01
In some applications of multi- or hyperspectral imaging, it is important to have a compact sensor. The most compact spectral imaging sensors are based on spectral filtering in the focal plane. For hyperspectral imaging, it has been proposed to use a "linearly variable" bandpass filter in the focal plane, combined with scanning of the field of view. As the image of a given object in the scene moves across the field of view, it is observed through parts of the filter with varying center wavelength, and a complete spectrum can be assembled. However, if the radiance received from the object varies with viewing angle or with time, the reconstructed spectrum will be distorted. We describe a camera design where this hyperspectral functionality is traded for multispectral imaging with better spectral integrity. Spectral distortion is minimized by using a patterned filter with 6 bands arranged close together, so that a scene object is seen by each spectral band in rapid succession and with minimal change in viewing angle. The set of 6 bands is repeated 4 times so that the spectral data can be checked for internal consistency. Still, the total extent of the filter in the scan direction is small. Therefore the remainder of the image sensor can be used for conventional imaging, with potential for using motion tracking and 3D reconstruction to support the spectral imaging function. We show detailed characterization of the point spread function of the camera, demonstrating the importance of such characterization as a basis for image reconstruction. A simplified image reconstruction based on feature-based image coregistration is shown to yield reasonable results. Elimination of spectral artifacts due to scene motion is demonstrated.
Preliminary performances measured on a CMOS long linear array for space application
NASA Astrophysics Data System (ADS)
Renard, Christophe; Artinian, Armand; Dantes, Didier; Lepage, Gérald; Diels, Wim
2017-11-01
This paper presents the design and the preliminary performances of a CMOS linear array, resulting from collaboration between Alcatel Alenia Space and Cypress Semiconductor BVBA, which takes advantage of emerging potentialities of CMOS technologies. The design of the sensor is presented: it includes 8000 panchromatic pixels with up to 25 rows used in TDI mode, and 4 lines of 2000 pixels for multispectral imaging. Main system requirements and detector tradeoffs are recalled, and the preliminary test results obtained with a first generation prototype are summarized and compared with predicted performances.
Advanced optical position sensors for magnetically suspended wind tunnel models
NASA Technical Reports Server (NTRS)
Lafleur, S.
1985-01-01
A major concern to aerodynamicists has been the corruption of wind tunnel test data by model support structures, such as stings or struts. A technique for magnetically suspending wind tunnel models was considered by Tournier and Laurenceau (1957) in order to overcome this problem. This technique is now implemented with the aid of a Large Magnetic Suspension and Balance System (LMSBS) and advanced position sensors for measuring model attitude and position within the test section. Two different optical position sensors are discussed, taking into account a device based on the use of linear CCD arrays, and a device utilizing area CID cameras. Current techniques in image processing have been employed to develop target tracking algorithms capable of subpixel resolution for the sensors. The algorithms are discussed in detail, and some preliminary test results are reported.
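A common building block for such target-tracking algorithms is an intensity-weighted centroid around the brightest pixel of a linear-array profile, which yields subpixel positions; the sketch below is a generic illustration, not the specific algorithm developed for the LMSBS sensors.

```python
import numpy as np

def subpixel_centroid(profile, window=5):
    """Estimate a target position (in pixels, with subpixel precision)
    from a 1D intensity profile by an intensity-weighted centroid taken
    in a small window around the peak, after a crude background removal."""
    p = np.asarray(profile, dtype=float)
    peak = int(np.argmax(p))
    lo, hi = max(0, peak - window // 2), min(len(p), peak + window // 2 + 1)
    seg = p[lo:hi] - p.min()          # crude background removal
    idx = np.arange(lo, hi)
    return float(np.sum(idx * seg) / np.sum(seg))
```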
Chen, Yuncong; Zhu, Chengcheng; Cen, Jiajie; Bai, Yang; He, Weijiang; Guo, Zijian
2015-05-01
The homeostasis of mitochondrial pH (pHm) is crucial in cell physiology. Developing small-molecule fluorescent sensors for the ratiometric detection of pHm fluctuation is highly demanded yet challenging. A ratiometric pH sensor, Mito-pH, was constructed by integrating a pH-sensitive FITC fluorophore with a pH-insensitive hemicyanine group. The hemicyanine group also acts as the mitochondria-targeting group due to its lipophilic cationic nature. Besides its ability to target mitochondria, this sensor provides two ratiometric pH sensing modes, the dual-excitation/dual-emission (Dex/Dem) mode and the dual-excitation (Dex) mode, and its linear and reversible ratiometric response range from pH 6.15 to 8.38 makes it suitable for the practical tracking of pHm fluctuation in live cells. With this sensor, stimulated pHm fluctuation has been successfully tracked in a ratiometric manner via both fluorescence imaging and flow cytometry.
The influence of adhesive on fiber Bragg grating strain sensor
NASA Astrophysics Data System (ADS)
Chen, Jixuan; Gong, Huaping; Jin, Shangzhong; Li, Shuhua
2009-08-01
A fiber Bragg grating (FBG) sensor was fixed on a uniform-strength beam with three adhesives: modified acrylate, glass glue, and epoxy resin. The influence of the adhesive on the FBG strain sensor was investigated. The strain of the FBG sensor was varied by loading weights onto the uniform-strength beam. The wavelength shifts of the FBG sensor fixed by the three kinds of adhesive were measured for different weights at temperatures of 0°C, 10°C, 20°C, 30°C, and 40°C. The linearity, sensitivity, and their stability at different temperatures were analyzed for the FBG sensor fixed by each adhesive. The results show that the FBG sensor fixed by the modified acrylate has high linearity, with a linear correlation coefficient of 0.9996, and a high sensitivity of 0.251 nm/kg; both the linearity and the sensitivity are highly stable at different temperatures. The FBG sensor fixed by the glass glue also has high linearity, with a linear correlation coefficient of 0.9986, but a low sensitivity of only 0.041 nm/kg; its linearity and sensitivity are likewise highly stable at different temperatures. When the FBG sensor is fixed by epoxy resin, the sensitivity and linearity are affected significantly by temperature: as the temperature changes from 0°C to 40°C, the sensitivity decreases from 0.302 nm/kg to 0.058 nm/kg, and the linear correlation coefficient decreases from 0.9999 to 0.9961.
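The reported sensitivities (nm/kg) and linear correlation coefficients can be obtained from wavelength-shift-versus-load data with an ordinary least-squares fit, as in the sketch below; the sample data are invented for illustration.

```python
import numpy as np

# Hypothetical load (kg) and Bragg-wavelength shift (nm) measurements.
load_kg = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
shift_nm = np.array([0.000, 0.126, 0.250, 0.377, 0.502])

sensitivity, offset = np.polyfit(load_kg, shift_nm, 1)   # slope in nm/kg
r = np.corrcoef(load_kg, shift_nm)[0, 1]                 # linear correlation

print(f"sensitivity = {sensitivity:.3f} nm/kg, r = {r:.4f}")
```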
NASA Astrophysics Data System (ADS)
Di, K.; Liu, Y.; Liu, B.; Peng, M.
2012-07-01
Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinates of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points differ from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining the EOPs by correcting the attitude angle bias, and 2) refining the interior orientation model by calibrating the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1 and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high-precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) products are automatically generated.
NASA Astrophysics Data System (ADS)
Cha, B. K.; Kim, J. Y.; Kim, Y. J.; Yun, S.; Cho, G.; Kim, H. K.; Seo, C.-W.; Jeon, S.; Huh, Y.
2012-04-01
In digital X-ray imaging systems, X-ray imaging detectors based on scintillating screens with electronic devices such as charge-coupled devices (CCDs), thin-film transistors (TFT), complementary metal oxide semiconductor (CMOS) flat panel imagers have been introduced for general radiography, dental, mammography and non-destructive testing (NDT) applications. Recently, a large-area CMOS active-pixel sensor (APS) in combination with scintillation films has been widely used in a variety of digital X-ray imaging applications. We employed a scintillator-based CMOS APS image sensor for high-resolution mammography. In this work, both powder-type Gd2O2S:Tb and a columnar structured CsI:Tl scintillation screens with various thicknesses were fabricated and used as materials to convert X-ray into visible light. These scintillating screens were directly coupled to a CMOS flat panel imager with a 25 × 50 mm2 active area and a 48 μm pixel pitch for high spatial resolution acquisition. We used a W/Al mammographic X-ray source with a 30 kVp energy condition. The imaging characterization of the X-ray detector was measured and analyzed in terms of linearity in incident X-ray dose, modulation transfer function (MTF), noise-power spectrum (NPS) and detective quantum efficiency (DQE).
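A commonly used estimator ties the measured MTF and NPS together into the DQE; the sketch below assumes a linear, shift-invariant detector and the usual normalization (mean large-area signal squared times MTF squared, divided by photon fluence times NPS), which may differ in detail from the conventions used in the study.

```python
import numpy as np

def dqe(mtf, nps, mean_signal, photon_fluence):
    """Common DQE estimator for a linear, shift-invariant detector:
        DQE(f) = mean_signal**2 * MTF(f)**2 / (photon_fluence * NPS(f)),
    with NPS in (signal units)^2 * mm^2 and fluence in photons / mm^2.
    Exact normalization conventions vary between studies."""
    mtf = np.asarray(mtf, dtype=float)
    nps = np.asarray(nps, dtype=float)
    return (mean_signal ** 2) * mtf ** 2 / (photon_fluence * nps)
```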
Wan, Yuhang; Carlson, John A.; Kesler, Benjamin A.; Peng, Wang; Su, Patrick; Al-Mulla, Saoud A.; Lim, Sung Jun; Smith, Andrew M.; Dallesasse, John M.; Cunningham, Brian T.
2016-01-01
A compact analysis platform for detecting liquid absorption and emission spectra using a set of optical linear variable filters atop a CMOS image sensor is presented. The working spectral range of the analysis platform can be extended without a reduction in spectral resolution by utilizing multiple linear variable filters with different wavelength ranges on the same CMOS sensor. With optical setup reconfiguration, its capability to measure both absorption and fluorescence emission is demonstrated. Quantitative detection of fluorescence emission down to 0.28 nM for quantum dot dispersions and 32 ng/mL for near-infrared dyes has been demonstrated on a single platform over a wide spectral range, as well as an absorption-based water quality test, showing the versatility of the system across liquid solutions for different emission and absorption bands. Comparison with a commercially available portable spectrometer and an optical spectrum analyzer shows our system has an improved signal-to-noise ratio and acceptable spectral resolution for discrimination of emission spectra, and characterization of colored liquid’s absorption characteristics generated by common biomolecular assays. This simple, compact, and versatile analysis platform demonstrates a path towards an integrated optical device that can be utilized for a wide variety of applications in point-of-use testing and point-of-care diagnostics. PMID:27389070
Backside illuminated CMOS-TDI line scanner for space applications
NASA Astrophysics Data System (ADS)
Cohen, O.; Ben-Ari, N.; Nevo, I.; Shiloah, N.; Zohar, G.; Kahanov, E.; Brumer, M.; Gershon, G.; Ofer, O.
2017-09-01
A new multi-spectral line scanner CMOS image sensor is reported. The backside-illuminated (BSI) image sensor was designed for continuous-scanning Low Earth Orbit (LEO) space applications and includes custom high-quality CMOS active pixels, a time-delayed integration (TDI) mechanism that increases the SNR, a 2-phase exposure mechanism that increases the dynamic modulation transfer function (MTF), very low-power internal analog-to-digital converters (ADCs) with a resolution of 12 bits per pixel, and an on-chip controller. The sensor has 4 independent arrays of pixels, where each array is arranged in 2600 TDI columns with a controllable TDI depth from 8 up to 64 TDI levels. A multispectral optical filter with a specific spectral response per array is assembled at the package level. In this paper we briefly describe the sensor design and present recent electrical and electro-optical measurements of the first prototypes, including high quantum efficiency (QE), high MTF, a wide range of selectable full-well capacity (FWC), excellent linearity of approximately 1.3% over a signal range of 5-85% and approximately 1.75% over a signal range of 2-95% of the signal span, readout noise of approximately 95 electrons with 64 TDI levels, negligible dark current, and a total power consumption of less than 1.5 W for the 4-band sensor under all operating conditions.
Application and evaluation of ISVR method in QuickBird image fusion
NASA Astrophysics Data System (ADS)
Cheng, Bo; Song, Xiaolu
2014-05-01
QuickBird satellite images are widely used in many fields, and applications have placed high requirements on the integration of the spatial and spectral information of the imagery. A fusion method for high-resolution remote sensing images based on ISVR is identified in this study. The core principle of ISVR is to use radiometric conversion to remove the effects of the differing gains and errors of the satellite's sensors: after transformation from DN to radiance, the multi-spectral image's energy is used to simulate the panchromatic band. Linear regression analysis is carried out during the simulation process to find a new synthetic panchromatic image that is highly linearly correlated with the original panchromatic image. In order to evaluate, test, and compare the algorithm results, this paper used ISVR and two other fusion methods in a comparative study of spatial and spectral information, taking the average gradient and the correlation coefficient as indicators. Experiments showed that this method can significantly improve the quality of the fused image, especially in preserving spectral information, maximizing the spectral information of the original multispectral images while maintaining abundant spatial information.
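The regression step that produces the synthetic panchromatic band from the multispectral radiances can be sketched as below; band weighting by ordinary least squares is shown as a generic stand-in, not the exact ISVR formulation.

```python
import numpy as np

def simulate_pan(ms_radiance, pan):
    """Fit pan ~ w0 + sum_k w_k * band_k by least squares and return the
    synthetic panchromatic image. ms_radiance: (bands, rows, cols) radiance
    cube resampled to the pan grid; pan: (rows, cols) panchromatic image."""
    bands, rows, cols = ms_radiance.shape
    X = np.column_stack([np.ones(rows * cols),
                         ms_radiance.reshape(bands, -1).T])
    w, *_ = np.linalg.lstsq(X, pan.ravel(), rcond=None)
    return (X @ w).reshape(rows, cols)
```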
NASA Astrophysics Data System (ADS)
Cleary, Kevin R.; Banovac, Filip; Levy, Elliot; Tanaka, Daigo
2002-05-01
We have designed and constructed a liver respiratory motion simulator as a first step in demonstrating the feasibility of using a new magnetic tracking system to follow the movement of internal organs. The simulator consists of a dummy torso, a synthetic liver, a linear motion platform, a graphical user interface for image overlay, and a magnetic tracking system along with magnetically tracked instruments. While optical tracking systems are commonly used in commercial image-guided surgery systems for the brain and spine, they are limited to procedures in which a line of sight can be maintained between the tracking system and the instruments being tracked. Magnetic tracking systems have been proposed for image-guided surgery applications, but most currently available magnetically tracked sensors are too large to be embedded in the body. The magnetic tracking system employed here, the AURORA from Northern Digital, can use sensors as small as 0.9 mm in diameter by 8 mm in length. This makes it possible to embed these sensors in catheters and thin needles. The catheters can then be wedged in a vein in an internal organ of interest so that tracking the position of the catheter gives a good estimate of the position of the internal organ. Alternatively, a needle with an embedded sensor can be placed near the area of interest.
A novel dual gating approach using joint inertial sensors: implications for cardiac PET imaging
NASA Astrophysics Data System (ADS)
Jafari Tadi, Mojtaba; Teuho, Jarmo; Lehtonen, Eero; Saraste, Antti; Pänkäälä, Mikko; Koivisto, Tero; Teräs, Mika
2017-10-01
Positron emission tomography (PET) is a non-invasive imaging technique which may be considered the state of the art for the examination of cardiac inflammation due to atherosclerosis. A fundamental limitation of PET is that cardiac and respiratory motions reduce the quality of the achieved images. Current approaches for motion compensation involve gating the PET data based on the timing of quiescent periods of the cardiac and respiratory cycles. In this study, we present a novel gating method called microelectromechanical (MEMS) dual gating, which relies on joint non-electrical sensors, i.e. a tri-axial accelerometer and a gyroscope. This approach can be used for optimized selection of quiescent phases of the cardiac and respiratory cycles. Cardiomechanical activity according to echocardiography observations was investigated to confirm whether this dual-sensor solution can provide accurate trigger timings for cardiac gating. Additionally, longitudinal chest motions originating from breathing were measured by accelerometric- and gyroscopic-derived respiratory (ADR and GDR) tracking. The ADR and GDR signals were evaluated against Varian real-time position management (RPM) signals in terms of amplitude and phase. Accordingly, high linear correlation and agreement were achieved between the reference electrocardiography, RPM, and measured MEMS signals. We also performed a Ge-68 phantom study to evaluate possible metal artifacts caused by the integrated read-out electronics, including mechanical sensors and semiconductors. The reconstructed phantom images did not reveal any image artifacts. Thus, it was concluded that MEMS-driven dual gating can be used in PET studies without an effect on the quantitative or visual accuracy of the PET images. Finally, the applicability of MEMS dual gating for cardiac PET imaging was investigated with two atherosclerosis patients. Dual-gated PET images were successfully reconstructed using only MEMS signals, and both qualitative and quantitative assessments revealed encouraging results that warrant further investigation of this method.
NASA Astrophysics Data System (ADS)
Davis, P. A.; Cagney, L. E.; Kohl, K. A.; Gushue, T. M.; Fritzinger, C.; Bennett, G. E.; Hamill, J. F.; Melis, T. S.
2010-12-01
Periodically, the Grand Canyon Monitoring and Research Center of the U.S. Geological Survey collects and interprets high-resolution (20-cm), airborne multispectral imagery and digital surface models (DSMs) to monitor the effects of Glen Canyon Dam operations on natural and cultural resources of the Colorado River in Grand Canyon. We previously employed the first generation of the ADS40 in 2000 and the Zeiss-Imaging Digital Mapping Camera (DMC) in 2005. Data from both sensors displayed band-image misregistration owing to multiple sensor optics and image smearing along abrupt scarps due to errors in image rectification software, both of which increased post-processing time, cost, and errors from image classification. Also, the near-infrared gain on the early, 8-bit ADS40 was not properly set and its signal was saturated for the more chlorophyll-rich vegetation, which limited our vegetation mapping. Both sensors had stereo panchromatic capability for generating a DSM. The ADS40 performed to specifications; the DMC failed. In 2009, we employed the new ADS40 SH52 to acquire 11-bit multispectral data with a single lens (20-cm positional accuracy), as well as stereo panchromatic data that provided a 1-m cell DSM (40-cm root-mean-square vertical error at one sigma). Analyses of the multispectral data showed near-perfect registration of its four band images at our 20-cm resolution, a linear response to ground reflectance, and a large dynamic range and good sensitivity (except for the blue band). Data were acquired over a 10-day period for the 450-km-long river corridor in which acquisition time and atmospheric conditions varied considerably during inclement weather. We received 266 orthorectified flightlines for the corridor, choosing to calibrate and mosaic the data ourselves to ensure a flawless mosaic with consistent, realistic spectral information. A linear least-squares cross-calibration of overlapping flightlines for the corridor showed that the dominant factors in inter-flightline variability were solar zenith angle and atmospheric scattering, which respectively affect the slope and intercept of the calibration. The inter-flightline calibration slopes were consistently close to the square of the ratio of the cosines of the zenith angles of each pair of overlapping flightlines. Our results corroborate previous observations that the cosine of solar zenith angle is a good approximation for atmospheric transmission and the use of its square in radiometric calibrations may compensate for that effect and the effect of non-nadir sun angle on surface reflectance. It was more expedient to acquire imagery for each sub-linear river segment by collecting 5-6 parallel flightlines; river sinuosity caused us to use 2-3 flightlines for each segment. Surfaces near flightline edges were often smeared and replaced with adjacent, more nadir-viewed flightline data. Eliminating surface smearing was the most time-consuming aspect of creating a flawless image mosaic for the river corridor, but its removal will increase the efficiency and accuracy of image analyses of monitoring parameters of interest to river managers.
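The observation that inter-flightline calibration slopes track the squared ratio of the cosines of the solar zenith angles reduces to a one-line relation, evaluated below for assumed angles; this is merely an illustration of that relationship, not the authors' calibration code.

```python
import math

def expected_gain(zenith_deg_a, zenith_deg_b):
    """Expected radiometric gain between two overlapping flightlines,
    approximated as (cos(theta_a) / cos(theta_b))**2, where theta is the
    solar zenith angle at the acquisition time of each flightline."""
    ca = math.cos(math.radians(zenith_deg_a))
    cb = math.cos(math.radians(zenith_deg_b))
    return (ca / cb) ** 2

print(expected_gain(35.0, 42.0))  # illustrative angles
```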
Charge-Coupled Scanned IR Imaging Sensors
1974-07-15
Report date: 15 July 1974; 37 pages; unclassified. Approved for public release; distribution unlimited. [Recoverable abstract fragment:] ...overlapping layers of polysilicon and metal for improved shift-register performance and stability. This linear array should provide the information
THz impulse radar for biomedical sensing: nonlinear system behavior
NASA Astrophysics Data System (ADS)
Brown, E. R.; Sung, Shijun; Grundfest, W. S.; Taylor, Z. D.
2014-03-01
The THz impulse radar is an "RF-inspired" sensor system that has performed remarkably well since its initial development nearly six years ago. It was developed for ex vivo skin-burn imaging, and has since shown great promise in the sensitive detection of hydration levels in soft tissues of several types, such as in vivo corneal and burn samples. An intriguing aspect of the impulse radar is its hybrid architecture which combines the high-peak-power of photoconductive switches with the high-responsivity and -bandwidth (RF and video) of Schottky-diode rectifiers. The result is a very sensitive sensor system in which the post-detection signal-to-noise ratio depends super-linearly on average signal power up to a point where the diode is "turned on" in the forward direction, and then behaves quasi-linearly beyond that point. This paper reports the first nonlinear systems analysis done on the impulse radar using MATLAB.
Other remote sensing systems: Retrospect and outlook
NASA Technical Reports Server (NTRS)
1982-01-01
The history of remote sensing is reviewed and the scope and versatility of the several remote sensing systems already in orbit are discussed, especially those with sensors operating in other EM spectral modes. The multisensor approach is examined by interrelating LANDSAT observations with data from other satellite systems. The basic principles and practices underlying the use of thermal infrared and radar sensors are explored, and the types of observations and interpretations emanating from the Nimbus, Heat Capacity Mapping Mission, and SEASAT programs are examined. Approved or proposed Earth-resources-oriented missions previewed for the 1980s include LANDSAT D, Stereosat, Gravsat, the French satellite SPOT-1, and multimission modular spacecraft launched from the space shuttle. The pushbroom imager, the linear array pushbroom radiometer, the multispectral linear array, and the operational LANDSAT observing system, to be designated the LANDSAT-E series, are also envisioned for this decade.
de Sena, Rodrigo Caciano; Soares, Matheus; Pereira, Maria Luiza Oliveira; da Silva, Rogério Cruz Domingues; do Rosário, Francisca Ferreira; da Silva, Joao Francisco Cajaiba
2011-01-01
The development of a simple, rapid and low-cost method based on video image analysis and aimed at the detection of low concentrations of precipitated barium sulfate is described. The proposed system is basically composed of a webcam with a CCD sensor and a conventional dichroic lamp. For this purpose, software for processing and analyzing the digital images based on the RGB (Red, Green and Blue) color system was developed. The proposed method showed very good repeatability and linearity and also presented higher sensitivity than the standard turbidimetric method. The developed method is presented as a simple alternative for future applications in the study of the precipitation of inorganic salts and also for detecting the crystallization of organic compounds. PMID:22346607
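In its simplest form, the video-image analysis could reduce each frame (or a region of interest) to a mean channel intensity and calibrate it linearly against the precipitate concentration, as sketched below; the channel choice and numbers are assumptions for illustration.

```python
import numpy as np

def mean_channel(frame_rgb, channel=2):
    """Mean intensity of one RGB channel (0=R, 1=G, 2=B) over a frame
    or a pre-selected region of interest."""
    return float(np.asarray(frame_rgb, dtype=float)[..., channel].mean())

# Hypothetical calibration: mean blue intensity vs. BaSO4 concentration (mg/L).
conc = np.array([0.0, 2.0, 5.0, 10.0, 20.0])
intensity = np.array([18.0, 26.0, 38.5, 59.0, 101.0])
slope, intercept = np.polyfit(conc, intensity, 1)
print(f"sensitivity = {slope:.2f} intensity units per mg/L")
```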
All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement.
Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi
2016-01-30
This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of the time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensor. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented using 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must be used to achieve an acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was greatly reduced to support one-point calibration, reducing test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhanced circuit is proposed to linearize the curve and achieve a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGAs, using 118 slices per sensor in each FPGA, to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximal inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration over a range of -20 °C to 100 °C. The sensor consumed 95 μW at a sampling rate of 1 kSa/s. The proposed linearity enhancement technique significantly improves temperature-sensing accuracy, avoiding costly curvature compensation while being fully synthesizable for future Very Large Scale Integration (VLSI) systems.
Inductive Linear-Position Sensor/Limit-Sensor Units
NASA Technical Reports Server (NTRS)
Alhom, Dean; Howard, David; Smith, Dennis; Dutton, Kenneth
2007-01-01
A new sensor provides an absolute position measurement. In a typical application, a motorized linear-translation stage contains, at each end, an electronic unit that functions as both (1) a non-contact sensor that measures the absolute position of the stage and (2) a non-contact equivalent of a limit switch that is tripped when the stage reaches the nominal limit position. The need for such an absolute linear-position-sensor/limit-sensor unit arises in the case of a linear-translation stage that is part of a larger system in which the actual stopping position of the stage (relative to the nominal limit position) must be known. Because inertia inevitably causes the stage to run somewhat past the nominal limit position, tripping of a standard limit switch or other limit sensor does not provide the required indication of the actual stopping position. This innovative sensor unit operates on an electromagnetic-induction principle similar to that of linear variable differential transformers (LVDTs).
NASA Technical Reports Server (NTRS)
Ando, K. J.
1971-01-01
Description of the performance of the silicon diode array vidicon - an imaging sensor which possesses wide spectral response, high quantum efficiency, and linear response. These characteristics, in addition to its inherent ruggedness, simplicity, and long-term stability and operating life, make this device potentially of great usefulness for ground-based and spaceborne planetary and stellar imaging applications. However, integration and charge storage for periods greater than approximately five seconds are not possible at room temperature because of diode saturation from dark-current buildup. Since dark current can be reduced by cooling, measurements were made in the range from -65 to 25 C. Results are presented on the extension of integration, storage, and slow-scan capabilities achievable by cooling. Integration times in excess of 20 minutes were achieved at the lowest temperatures. The measured results are compared with results obtained with other types of sensors, and the advantages of the silicon diode array vidicon for imaging applications are discussed.
Large-area, flexible imaging arrays constructed by light-charge organic memories
Zhang, Lei; Wu, Ti; Guo, Yunlong; Zhao, Yan; Sun, Xiangnan; Wen, Yugeng; Yu, Gui; Liu, Yunqi
2013-01-01
Existing organic imaging circuits, which offer attractive benefits of light weight, low cost and flexibility, are exclusively based on phototransistor or photodiode arrays. One shortcoming of these photo-sensors is that the light signal must remain invariant throughout the whole pixel-addressing and reading process. As a feasible solution, we synthesized a new charge-storage molecule and embedded it into a device, which we call a light-charge organic memory (LCOM). In an LCOM, the functionalities of a photo-sensor and a non-volatile memory are integrated. Thanks to the deliberate engineering of the electronic structure and the self-organization process at the interface, 92% of the stored charges, which are linearly controlled by the quantity of light, are retained after 20,000 s. The stored charges can also be non-destructively read and erased by a simple voltage program. These results pave the way to large-area, flexible imaging circuits and demonstrate a bright future for small-molecule materials in non-volatile memory. PMID:23326636
Stereo Cloud Height and Wind Determination Using Measurements from a Single Focal Plane
NASA Astrophysics Data System (ADS)
Demajistre, R.; Kelly, M. A.
2014-12-01
We present here a method for extracting cloud heights and winds from an aircraft or orbital platform using measurements from a single focal plane, exploiting the motion of the platform to provide multiple views of the cloud tops. To illustrate this method we use data acquired during aircraft flight tests of a set of simple stereo imagers that are well suited to this purpose. Each of these imagers has three linear arrays on the focal plane, one looking forward, one looking aft, and one looking down. Push-broom images from each of these arrays are constructed, and then a spatial correlation analysis is used to deduce the delays and displacements required for wind and cloud height determination. We will present the algorithms necessary for the retrievals, as well as the methods used to determine the uncertainties of the derived cloud heights and winds. We will apply the retrievals and uncertainty determination to a number of image sets acquired by the airborne sensors. We then generalize these results to potential space based observations made by similar types of sensors.
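Under a flat-geometry, no-wind assumption, the cloud-top height follows from the delay between the forward-looking and nadir push-broom series, as sketched below; the geometry, parameter names, and brute-force lag search are illustrative simplifications of the correlation analysis described above.

```python
import numpy as np

def cloud_top_height(delay_s, platform_alt_m, ground_speed_ms, look_angle_deg):
    """Flat-geometry, no-wind estimate of cloud-top height from the time
    delay between the forward-looking and nadir linear arrays:
        h = H - v * dt / tan(theta)
    with H the platform altitude, v the ground speed, theta the forward
    look angle measured from nadir, and dt the measured delay."""
    theta = np.radians(look_angle_deg)
    return platform_alt_m - ground_speed_ms * delay_s / np.tan(theta)

def delay_between_series(forward_series, nadir_series, line_rate_hz, max_lag=500):
    """Delay (s) after which the nadir push-broom series best matches the
    forward-looking series, found by scanning positive lags (the forward
    array sees a given cloud feature before the nadir array does)."""
    f = np.asarray(forward_series, dtype=float)
    n = np.asarray(nadir_series, dtype=float)
    m = min(len(f), len(n))
    f, n = f[:m] - f[:m].mean(), n[:m] - n[:m].mean()
    best_lag, best_score = 1, -np.inf
    for lag in range(1, min(max_lag, m - 1)):
        score = np.dot(f[:m - lag], n[lag:m]) / (m - lag)  # mean product at this lag
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / line_rate_hz
```

Wind advection during the delay biases such a two-view estimate; using the forward, nadir, and aft arrays together, as described above, is what allows cloud height and wind to be separated.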
An Evaluation of Feature Learning Methods for High Resolution Image Classification
NASA Astrophysics Data System (ADS)
Tokarczyk, P.; Montoya, J.; Schindler, K.
2012-07-01
Automatic image classification is one of the fundamental problems of remote sensing research. The classification problem is even more challenging in high-resolution images of urban areas, where the objects are small and heterogeneous. Two questions arise, namely which features to extract from the raw sensor data to capture the local radiometry and image structure at each pixel or segment, and which classification method to apply to the feature vectors. While classifiers are nowadays well understood, selecting the right features remains a largely empirical process. Here we concentrate on the features. Several methods are evaluated which allow one to learn suitable features from unlabelled image data by analysing the image statistics. In a comparative study, we evaluate unsupervised feature learning with different linear and non-linear learning methods, including principal component analysis (PCA) and deep belief networks (DBN). We also compare these automatically learned features with popular choices of ad-hoc features including raw intensity values, standard combinations like the NDVI, a few PCA channels, and texture filters. The comparison is done in a unified framework using the same images, the target classes, reference data and a Random Forest classifier.
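Unsupervised feature learning with PCA on raw image patches, one of the baselines compared in such studies, can be sketched as follows; the patch size, stride, and single-band input are placeholders rather than the authors' exact pipeline.

```python
import numpy as np

def extract_patches(image, size=5, step=5):
    """Collect flattened size x size patches from a single-band image."""
    rows, cols = image.shape
    patches = [image[r:r + size, c:c + size].ravel()
               for r in range(0, rows - size + 1, step)
               for c in range(0, cols - size + 1, step)]
    return np.asarray(patches, dtype=float)

def pca_features(patches, n_components=8):
    """Learn a PCA basis from the patches and project them onto it."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]            # principal directions
    return centered @ basis.T, basis, mean
```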
Comparison of a CCD and an APS for soft X-ray diffraction
NASA Astrophysics Data System (ADS)
Stewart, Graeme; Bates, R.; Blue, A.; Clark, A.; Dhesi, S. S.; Maneuski, D.; Marchal, J.; Steadman, P.; Tartoni, N.; Turchetta, R.
2011-12-01
We compare a new CMOS Active Pixel Sensor (APS) to a Princeton Instruments PIXIS-XO: 2048B Charge Coupled Device (CCD) with soft X-rays tested in a synchrotron beam line at the Diamond Light Source (DLS). Despite CCDs being established in the field of scientific imaging, APS are an innovative technology that offers advantages over CCDs. These include faster readout, higher operational temperature, in-pixel electronics for advanced image processing and reduced manufacturing cost. The APS employed was the Vanilla sensor designed by the MI3 collaboration and funded by an RCUK Basic technology grant. This sensor has 520 x 520 square pixels, of size 25 μm on each side. The sensor can operate at a full frame readout of up to 20 Hz. The sensor had been back-thinned, to the epitaxial layer. This was the first time that a back-thinned APS had been demonstrated at a beam line at DLS. In the synchrotron experiment soft X-rays with an energy of approximately 708 eV were used to produce a diffraction pattern from a permalloy sample. The pattern was imaged at a range of integration times with both sensors. The CCD had to be operated at a temperature of -55°C whereas the Vanilla was operated over a temperature range from 20°C to -10°C. We show that the APS detector can operate with frame rates up to two hundred times faster than the CCD, without excessive degradation of image quality. The signal to noise of the APS is shown to be the same as that of the CCD at identical integration times and the response is shown to be linear, with no charge blooming effects. The experiment has allowed a direct comparison of back thinned APS and CCDs in a real soft x-ray synchrotron experiment.
Development of a distributed read-out imaging TES X-ray microcalorimeter
NASA Astrophysics Data System (ADS)
Trowell, S.; Holland, A. D.; Fraser, G. W.; Goldie, D.; Gu, E.
2002-02-01
We report on the development of a linear absorber detector for one-dimensional imaging spectroscopy, read-out by two Transition Edge Sensors (TESs). The TESs, based on a single layer of iridium, demonstrate stable and controllable superconducting-to-normal transitions in the region of 130 mK. Results from Monte Carlo simulations are presented indicating that the device configuration is capable of detecting photon positions to better than 200 μm, thereby meeting the resolution specification for missions such as XEUS of ~250 μm. .
Development of plenoptic infrared camera using low dimensional material based photodetectors
NASA Astrophysics Data System (ADS)
Chen, Liangliang
Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns and are widely used in military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rates, low resolution, temperature dependence, and high cost, while carbon nanotube (CNT) based low-dimensional material nanotechnology has made considerable progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed, and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nano science to nano engineering, especially for infrared sensing: not only for the fundamental understanding of CNT photoresponse-induced processes, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, a sandwich-structured sensor was fabricated between two polymer layers: the polyimide substrate isolates the sensor from background noise, and a top parylene packing blocks humidity and other environmental factors. At the same time, the fabrication process was optimized by real-time electrically monitored dielectrophoresis and multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized with digital microscopy and a precision linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to make a nano-sensor IR camera feasible. To explore more of the infrared light field, compressive sensing algorithms are employed for light-field sampling, 3-D imaging, and compressive video sensing. The redundancy of the whole light field, including angular images for the light field, binocular images for the 3-D camera, and temporal information of the video streams, is extracted and expressed in a compressive approach. Computational algorithms are then applied to reconstruct images beyond 2D static information. Super-resolution signal processing is then used to enhance and improve the spatial resolution of the images. The whole camera system provides deeply detailed content for infrared spectrum sensing.
Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor
NASA Astrophysics Data System (ADS)
Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick
This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called the "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of the object, we set up a long, straight line of very fine string inside the robot workspace and then allow the sensor, mounted on the robot, to measure the intersection point of the string line and the projected laser line. The data collected by changing the robot configuration and measuring the intersection points are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The resulting calibration method is simple and accurate and is also suitable for on-site calibration in an industrial environment. The method is implemented on a Hyundai VORG-35 robot to demonstrate its effectiveness.
A fiber-optic sensor based on no-core fiber and Faraday rotator mirror structure
NASA Astrophysics Data System (ADS)
Lu, Heng; Wang, Xu; Zhang, Songling; Wang, Fang; Liu, Yufang
2018-05-01
An optical fiber sensor based on the single-mode/no-core/single-mode (SNS) core-offset technology along with a Faraday rotator mirror structure has been proposed and experimentally demonstrated. A transverse optical field distribution of self-imaging has been simulated and experimental parameters have been selected under theoretical guidance. Results of the experiments demonstrate that the temperature sensitivity of the sensor is 0.0551 nm/°C for temperatures between 25 and 80 °C, and the correlation coefficient is 0.99582. The concentration sensitivity of the device for sucrose and glucose solutions was found to be as high as 12.5416 and 6.02248 nm/(g/ml), respectively. Curves demonstrating a linear fit between wavelength shift and solution concentration for three different heavy metal solutions have also been derived on the basis of experimental results. The proposed fiber-optic sensor design provides valuable guidance for the measurement of concentration and temperature.
Gamma/x-ray linear pushbroom stereo for 3D cargo inspection
NASA Astrophysics Data System (ADS)
Zhu, Zhigang; Hu, Yu-Chi
2006-05-01
For evaluating the contents of trucks, containers, cargo, and passenger vehicles by a non-intrusive gamma-ray or X-ray imaging system to determine the possible presence of contraband, three-dimensional (3D) measurements could provide more information than 2D measurements. In this paper, a linear pushbroom scanning model is built for such a commonly used gamma-ray or x-ray cargo inspection system. Accurate 3D measurements of the objects inside a cargo container can be obtained by using two such scanning systems with different scanning angles to construct a pushbroom stereo system. A simple but robust calibration method is proposed to find the important parameters of the linear pushbroom sensors. Then, a fast and automated stereo matching algorithm based on free-form deformable registration is developed to obtain 3D measurements of the objects under inspection. A user interface is designed for 3D visualization of the objects of interest. Experimental results of sensor calibration, stereo matching, 3D measurements, and visualization of a 3D cargo container and the objects inside are presented.
Garment Counting in a Textile Warehouse by Means of a Laser Imaging System
Martínez-Sala, Alejandro Santos; Sánchez-Aartnoutse, Juan Carlos; Egea-López, Esteban
2013-01-01
Textile logistic warehouses are highly automated mechanized places where control points are needed to count and validate the number of garments in each batch. This paper proposes and describes a low cost and small size automated system designed to count the number of garments by processing an image of the corresponding hanger hooks generated using an array of phototransistors sensors and a linear laser beam. The generated image is processed using computer vision techniques to infer the number of garment units. The system has been tested on two logistic warehouses with a mean error in the estimated number of hangers of 0.13%. PMID:23628760
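Reduced to its simplest form, the counting step could threshold a 1D profile along the rail and count contiguous runs of "hook present" samples, as sketched below; the threshold and minimum-run parameters are illustrative, and the actual system works on a 2D image with computer-vision techniques.

```python
import numpy as np

def count_hooks(profile, threshold, min_run=2):
    """Count hanger hooks in a 1D intensity profile: samples above the
    threshold are treated as 'hook present', and each contiguous run of
    at least min_run such samples counts as one hook."""
    above = np.asarray(profile) > threshold
    # Rising edges mark the start of each run; handle a run starting at index 0.
    edges = np.flatnonzero(np.diff(above.astype(int)) == 1)
    starts = ([0] if above[0] else []) + list(edges + 1)
    hooks = 0
    for s in starts:
        length = 0
        while s + length < len(above) and above[s + length]:
            length += 1
        if length >= min_run:
            hooks += 1
    return hooks
```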
Optimized ratiometric calcium sensors for functional in vivo imaging of neurons and T lymphocytes.
Thestrup, Thomas; Litzlbauer, Julia; Bartholomäus, Ingo; Mues, Marsilius; Russo, Luigi; Dana, Hod; Kovalchuk, Yuri; Liang, Yajie; Kalamakis, Georgios; Laukat, Yvonne; Becker, Stefan; Witte, Gregor; Geiger, Anselm; Allen, Taylor; Rome, Lawrence C; Chen, Tsai-Wen; Kim, Douglas S; Garaschuk, Olga; Griesinger, Christian; Griesbeck, Oliver
2014-02-01
The quality of genetically encoded calcium indicators (GECIs) has improved dramatically in recent years, but high-performing ratiometric indicators are still rare. Here we describe a series of fluorescence resonance energy transfer (FRET)-based calcium biosensors with a reduced number of calcium binding sites per sensor. These 'Twitch' sensors are based on the C-terminal domain of Opsanus troponin C. Their FRET responses were optimized by a large-scale functional screen in bacterial colonies and refined by a secondary screen in rat hippocampal neuron cultures. We tested the in vivo performance of the most sensitive variants in the brain and lymph nodes of mice. The sensitivity of the Twitch sensors matched that of synthetic calcium dyes and allowed visualization of tonic action potential firing in neurons and high-resolution functional tracking of T lymphocytes. Given their ratiometric readout, their brightness, large dynamic range and linear response properties, Twitch sensors represent versatile tools for neuroscience and immunology.
Onboard TDI stage estimation and calibration using SNR analysis
NASA Astrophysics Data System (ADS)
Haghshenas, Javad
2017-09-01
The electro-optical design of a push-broom space camera for a Low Earth Orbit (LEO) remote sensing satellite is performed based on the noise analysis of TDI sensors for very high GSDs and low-light-level missions. It is well demonstrated that the CCD TDI mode of operation provides increased photosensitivity relative to a linear CCD array without sacrificing spatial resolution. However, for satellite imaging, in order to exploit the advantages that the TDI mode of operation offers, attention should be given to the parameters that affect the image quality of TDI sensors, such as jitter, vibrations, and noise. A predefined number of TDI stages may not properly satisfy the image quality requirement of the satellite camera. Furthermore, in order to use the whole dynamic range of the sensor, the imager must be capable of setting the TDI stages for every shot based on the affecting parameters. This paper deals with the optimal estimation and setting of the stages based on tradeoffs among MTF, noise, and SNR. On-board SNR estimation is simulated using atmosphere analysis based on the MODTRAN algorithm in PcModWin software. Based on the noise models, we propose a formulation to estimate the TDI stages in such a way as to satisfy the system SNR requirement. On the other hand, the MTF requirement must be satisfied in the same manner. A proper combination of both parameters will guarantee full dynamic range usage along with high SNR and image quality.
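To make the stage-selection idea concrete, here is a minimal sketch that picks the smallest TDI stage count meeting an SNR target under a simple shot-noise model (signal and dark charge accumulate linearly with stages, noise terms add in quadrature, read noise is fixed). The per-stage electron counts, full-well limit, and function name are hypothetical and not taken from the paper.

```python
import numpy as np

def min_tdi_stages(signal_e, dark_e, read_noise_e, snr_req, full_well_e, n_max=128):
    """Smallest TDI stage count meeting an SNR requirement without saturating.
    Illustrative model only, not the paper's exact formulation."""
    for n in range(1, n_max + 1):
        s = n * signal_e                       # accumulated signal electrons
        if s > full_well_e:                    # saturation limits the usable stage count
            break
        snr = s / np.sqrt(s + n * dark_e + read_noise_e ** 2)
        if snr >= snr_req:
            return n
    return None  # requirement cannot be met within the stage/saturation limits

print(min_tdi_stages(signal_e=800, dark_e=50, read_noise_e=60, snr_req=40, full_well_e=90000))
```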
Using linear polarization for LWIR hyperspectral sensing of liquid contaminants
NASA Astrophysics Data System (ADS)
Thériault, Jean-Marc; Fortin, Gilles; Lacasse, Paul; Bouffard, François; Lavoie, Hugo
2013-09-01
We report and analyze recent results obtained with the MoDDIFS sensor (Multi-option Differential Detection and Imaging Fourier Spectrometer) for the passive polarization sensing of liquid contaminants in the long wave infrared (LWIR). Field measurements of polarized spectral radiance done on ethylene glycol and SF96 probed at distances of 6.5 and 450 meters, respectively, have been used to develop and test a GLRT-type detection algorithm adapted for liquid contaminants. The GLRT detection results serve to establish the potential and advantage of probing the vertical and horizontal linear hyperspectral polarization components for improving liquid contaminants detection.
Robust Models for Optic Flow Coding in Natural Scenes Inspired by Insect Biology
Brinkworth, Russell S. A.; O'Carroll, David C.
2009-01-01
The extraction of accurate self-motion information from the visual world is a difficult problem that has been solved very efficiently by biological organisms utilizing non-linear processing. Previous bio-inspired models for motion detection based on a correlation mechanism have been dogged by issues that arise from their sensitivity to undesired properties of the image, such as contrast, which vary widely between images. Here we present a model with multiple levels of non-linear dynamic adaptive components based directly on the known or suspected responses of neurons within the visual motion pathway of the fly brain. By testing the model under realistic high-dynamic-range conditions we show that the addition of these elements makes the motion detection model robust across a large variety of images, velocities and accelerations. Furthermore, the performance of the entire system is greater than the sum of the incremental improvements offered by the individual components, indicating beneficial non-linear interactions between processing stages. The algorithms underlying the model can be implemented in either digital or analog hardware, including neuromorphic analog VLSI, but defy an analytical solution due to their dynamic non-linear operation. This algorithm has applications in the development of miniature autonomous systems in defense and civilian roles, including robotics, miniature unmanned aerial vehicles and collision avoidance sensors. PMID:19893631
NASA Astrophysics Data System (ADS)
Leal-Junior, Arnaldo G.; Frizera, Anselmo; José Pontes, Maria
2018-03-01
Polymer optical fibers (POFs) are suitable for sensing applications such as curvature, strain, temperature, and liquid level, among others. However, for enhanced sensitivity, many intensity-variation-based POF curvature sensors require a lateral section. The lateral section length, depth, and surface roughness have a great influence on the sensor's sensitivity, hysteresis, and linearity. Moreover, the curvature radius increases the stress on the fiber, which leads to variation in the sensor's behavior. This paper presents an analysis relating the curvature radius and the lateral section length, depth, and surface roughness to the sensitivity, hysteresis, and linearity of a POF curvature sensor. Results show a strong correlation between these design parameters and the performance of sensors based on intensity variation. Furthermore, there is a trade-off among the sensitive zone length, depth, surface roughness, and curvature radius with respect to the desired performance parameters, namely minimum hysteresis, maximum sensitivity, and maximum linearity. Optimization of these parameters yields a sensor with a sensitivity of 20.9 mV/°, a linearity of 0.9992, and hysteresis below 1%, a better performance than that of the non-optimized sensor.
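The performance figures quoted above (sensitivity in mV/°, linearity as a correlation coefficient, hysteresis as a fraction of the span) can be computed from a simple up/down angle sweep. The sketch below does this on synthetic readings; all voltages and noise levels are hypothetical.

```python
import numpy as np

# Hypothetical up-sweep / down-sweep readings (mV) at the same curvature angles (deg)
angles = np.arange(0, 100, 10, dtype=float)
v_up   = 20.9 * angles + 50 + np.random.normal(0, 5, angles.size)
v_down = 20.9 * angles + 50 + np.random.normal(0, 5, angles.size) + 8.0  # small hysteresis offset

slope, intercept = np.polyfit(angles, v_up, 1)                     # sensitivity in mV/deg
r = np.corrcoef(angles, v_up)[0, 1]                                # linearity (correlation coefficient)
hyst = np.max(np.abs(v_up - v_down)) / (v_up.max() - v_up.min())   # hysteresis, fraction of span

print(f"sensitivity={slope:.1f} mV/deg, linearity R={r:.4f}, hysteresis={100 * hyst:.2f} %")
```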
Design and laboratory testing of a prototype linear temperature sensor
NASA Astrophysics Data System (ADS)
Dube, C. M.; Nielsen, C. M.
1982-07-01
This report discusses the basic theory, design, and laboratory testing of a prototype linear temperature sensor (or "line sensor"), which is an instrument for measuring internal waves in the ocean. The operating principle of the line sensor consists of measuring the average resistance change of a vertically suspended wire (or coil of wire) induced by the passage of an internal wave in a thermocline. The advantage of the line sensor over conventional internal wave measurement techniques is that it is insensitive to the thermal finestructure that contaminates point sensor measurements, and its output is approximately linearly proportional to the internal wave displacement. An approximately one-half-scale prototype line sensor module was tested in the laboratory. The line sensor signal was linearly related to the actual fluid displacement to within 10%. Furthermore, the absolute output was well predicted (within 25%) from the theoretical model and the sensor material properties alone. Comparisons of the line sensor and a point sensor in a wavefield with superimposed turbulence (finestructure) revealed negligible distortion in the line sensor signal, while the point sensor signal was swamped by "turbulent noise". The effects of internal wave strain were also found to be negligible.
High-performance mushroom plasmonic metamaterial absorbers for infrared polarimetric imaging
NASA Astrophysics Data System (ADS)
Ogawa, Shinpei; Fujisawa, Daisuke; Hata, Hisatoshi; Uetsuki, Mitsuharu; Kuboyama, Takafumi; Kimata, Masafumi
2017-02-01
Infrared (IR) polarimetric imaging is a promising approach to enhance object recognition with conventional IR imaging for applications such as distinguishing artificial objects from the natural environment and facial recognition. However, typical IR polarimetric imaging requires the attachment of polarizers to an IR camera or sensor, which leads to high cost and reduced performance caused by the polarizers' own IR radiation. We have developed asymmetric mushroom plasmonic metamaterial absorbers (A-MPMAs) to address this challenge. The A-MPMAs have an all-Al construction that consists of micropatches and a reflector layer connected by hollow rectangular posts. The asymmetric-shaped micropatches lead to strong polarization-selective IR absorption due to localized surface plasmon resonance at the micropatches. The operating wavelength region can be controlled mainly by the micropatch and hollow rectangular post size. A-MPMAs are complicated three-dimensional structures, the fabrication of which is challenging. Hollow rectangular post structures are introduced to enable simple fabrication using conventional surface micromachining techniques, such as sacrificial layer etching, with no degradation of the optical properties. The A-MPMAs have a smaller thermal mass than metal-insulator-metal-based metamaterials and are not affected by the strong non-linear dispersion relation of the insulator materials, which produces a gap in the wavelength region and additional absorption insensitive to polarization. A-MPMAs are therefore promising candidates for uncooled IR polarimetric image sensors in terms of both their optical properties and ease of fabrication. The results presented here are expected to contribute to the development of high-performance polarimetric uncooled IR image sensors that do not require polarizers.
MSTI-3 sensor package optical design
NASA Astrophysics Data System (ADS)
Horton, Richard F.; Baker, William G.; Griggs, Michael; Nguyen, Van; Baker, H. Vernon
1995-06-01
The MSTI-3 sensor package is a three-band imaging telescope for military and dual-use sensing missions. The MSTI-3 mission is one of the Air Force Phillips Laboratory's Pegasus-launched space missions, the third in a series of state-of-the-art lightweight sensors on low-cost satellites. The satellite is planned for launch into a 425 km orbit in late 1995. The MSTI-3 satellite is configured with a down-looking two-axis gimbal and gimbal mirror. The gimbal mirror is approximately 13 cm by 29 cm, which allows a field of regard of approximately 100 degrees by 180 degrees. The optical train uses several novel optical features to allow for compactness and light weight. A 105 mm Ritchey-Chretien Cassegrain imaging system with a CaF2 dome astigmatism corrector is followed by a CaF2 beamsplitter cube assembly at the system's first focus. The dichroic beamsplitter cube assembly separates the light into a visible channel and two IR channels of approximately 2.5 to 3.3 (SWIR) and 3.5 to 4.5 (MWIR) micron wavelength bands. The two IR imaging channels each consist of a unity-power re-imaging lens cluster, a cooled seven-position filter wheel, a cooled Lyot stop, and an Amber 256 × 256 InSb array camera. The visible channel uses a unity-power re-imaging system prior to a linear variable filter with a Sony CCD array, which allows for a multispectral imaging capability in the 0.5 to 0.8 micron region. The telescope field of view is 1.4 degrees square.
Efficient, nonlinear phase estimation with the nonmodulated pyramid wavefront sensor.
Frazin, Richard A
2018-04-01
The sensitivity of the pyramid wavefront sensor (PyWFS) has made it a popular choice for astronomical adaptive optics (AAO) systems. The PyWFS is at its most sensitive when it is used without modulation of the input beam. In nonmodulated mode, the device is highly nonlinear. Hence, all PyWFS implementations on current AAO systems employ modulation to make the device more linear. The upcoming era of 30-m class telescopes and the demand for ultra-precise wavefront control stemming from science objectives that include direct imaging of exoplanets make using the PyWFS without modulation desirable. This article argues that nonlinear estimation based on Newton's method for nonlinear optimization can be useful for mitigating the effects of nonlinearity in the nonmodulated PyWFS. The proposed approach requires all optical modeling to be pre-computed, which has the advantage of avoiding real-time simulations of beam propagation. Further, the required real-time calculations are amenable to massively parallel computation. Numerical experiments simulate a PyWFS with faces sloped 3.7° to the horizontal, operating at a wavelength of 0.85 μm, and with an index of refraction of 1.45. A singular value analysis shows that the common practice of calculating two "slope" images from the four PyWFS pupil images discards critical information and is unsuitable for the nonmodulated PyWFS simulated here. Instead, this article advocates estimators that use the raw pixel values not only from the four geometrical images of the pupil, but from surrounding pixels as well. The simulations indicate that nonlinear estimation can be effective when the Strehl ratio of the input beam is greater than 0.3, and the improvement relative to linear estimation tends to increase at larger Strehl ratios. At Strehl ratios less than about 0.5, the performances of both the nonlinear and linear estimators are relatively insensitive to noise since they are dominated by nonlinearity error.
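As a generic illustration of the Newton-style nonlinear estimation argued for here, the sketch below runs a damped Gauss-Newton iteration on a toy sensor model. The toy forward function, damping value, and starting point are hypothetical; a real PyWFS estimator would use a pre-computed optical model and Jacobian rather than finite differences.

```python
import numpy as np

def gauss_newton(forward, x0, y, n_iter=10, damping=1e-3, eps=1e-6):
    """Damped Gauss-Newton (Newton's method on the least-squares cost) for a
    nonlinear sensor model y = forward(x). The Jacobian is finite-differenced here;
    in a real PyWFS pipeline it would come from a pre-computed optical model."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = y - forward(x)
        J = np.empty((r.size, x.size))
        for j in range(x.size):                      # numerical Jacobian, column by column
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (forward(x + dx) - forward(x - dx)) / (2 * eps)
        step = np.linalg.solve(J.T @ J + damping * np.eye(x.size), J.T @ r)
        x = x + step
    return x

# Toy nonlinear "detector" model standing in for the pre-computed PyWFS response
toy = lambda x: np.array([np.sin(x[0]) + 0.3 * x[1] ** 2, x[0] * x[1], np.cos(x[1])])
x_true = np.array([0.4, -0.2])
print(gauss_newton(toy, x0=np.zeros(2), y=toy(x_true)))   # should approach x_true
```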
Optimal full motion video registration with rigorous error propagation
NASA Astrophysics Data System (ADS)
Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn
2014-06-01
Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.
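A stripped-down version of the batch Weighted Least Squares correction described here, with an a priori state and covariance folded in, might look like the following. The matrices and noise levels in the usage lines are hypothetical, and the real registration problem is nonlinear and far larger.

```python
import numpy as np

def wls_with_prior(A, y, R, x0, P0):
    """Batch weighted least squares correction to a priori sensor parameters x0
    with prior covariance P0, given measurements y = A x + noise (covariance R).
    Returns the posterior estimate and covariance (linearized sketch only)."""
    Ri = np.linalg.inv(R)
    P0i = np.linalg.inv(P0)
    P = np.linalg.inv(A.T @ Ri @ A + P0i)      # a posteriori covariance
    x = P @ (A.T @ Ri @ y + P0i @ x0)          # a posteriori state (e.g. attitude corrections)
    return x, P

# Hypothetical 3-measurement, 2-parameter example
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_hat, P_post = wls_with_prior(A, y=np.array([0.1, -0.05, 0.04]),
                               R=0.01 * np.eye(3), x0=np.zeros(2), P0=np.eye(2))
print(x_hat, np.sqrt(np.diag(P_post)))
```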
NASA Tech Briefs, October 2007
NASA Technical Reports Server (NTRS)
2007-01-01
Topics covered include: Wirelessly Interrogated Position or Displacement Sensors; Ka-Band Radar Terminal Descent Sensor; Metal/Metal Oxide Differential Electrode pH Sensors; Improved Sensing Coils for SQUIDs; Inductive Linear-Position Sensor/Limit-Sensor Units; Hilbert-Curve Fractal Antenna With Radiation-Pattern Diversity; Single-Camera Panoramic-Imaging Systems; Interface Electronic Circuitry for an Electronic Tongue; Inexpensive Clock for Displaying Planetary or Sidereal Time; Efficient Switching Arrangement for (N + 1)/N Redundancy; Lightweight Reflectarray Antenna for 7.115 and 32 GHz; Opto-Electronic Oscillator Using Suppressed Phase Modulation; Alternative Controller for a Fiber-Optic Switch; Strong, Lightweight, Porous Materials; Nanowicks; Lightweight Thermal Protection System for Atmospheric Entry; Rapid and Quiet Drill; Hydrogen Peroxide Concentrator; MMIC Amplifiers for 90 to 130 GHz; Robot Would Climb Steep Terrain; Measuring Dynamic Transfer Functions of Cavitating Pumps; Advanced Resistive Exercise Device; Rapid Engineering of Three-Dimensional, Multicellular Tissues With Polymeric Scaffolds; Resonant Tunneling Spin Pump; Enhancing Spin Filters by Use of Bulk Inversion Asymmetry; Optical Magnetometer Incorporating Photonic Crystals; WGM-Resonator/Tapered-Waveguide White-Light Sensor Optics; Raman-Suppressing Coupling for Optical Parametric Oscillator; CO2-Reduction Primary Cell for Use on Venus; Cold Atom Source Containing Multiple Magneto-Optical Traps; POD Model Reconstruction for Gray-Box Fault Detection; System for Estimating Horizontal Velocity During Descent; Software Framework for Peer Data-Management Services; Autogen Version 2.0; Tracking-Data-Conversion Tool; NASA Enterprise Visual Analysis; Advanced Reference Counting Pointers for Better Performance; C Namelist Facility; and Efficient Mosaicking of Spitzer Space Telescope Images.
Multispectral linear array visible and shortwave infrared sensors
NASA Astrophysics Data System (ADS)
Tower, J. R.; Warren, F. B.; Pellon, L. E.; Strong, R.; Elabd, H.; Cope, A. D.; Hoffmann, D. M.; Kramer, W. M.; Longsderff, R. W.
1984-08-01
All-solid state pushbroom sensors for multispectral linear array (MLA) instruments to replace mechanical scanners used on LANDSAT satellites are introduced. A buttable, four-spectral-band, linear-format charge coupled device (CCD) and a buttable, two-spectral-band, linear-format, shortwave infrared CCD are described. These silicon integrated circuits may be butted end to end to provide multispectral focal planes with thousands of contiguous, in-line photosites. The visible CCD integrated circuit is organized as four linear arrays of 1024 pixels each. Each array views the scene in a different spectral window, resulting in a four-band sensor. The shortwave infrared (SWIR) sensor is organized as 2 linear arrays of 512 detectors each. Each linear array is optimized for performance at a different wavelength in the SWIR band.
Comparisons between wave directional spectra from SAR and pressure sensor arrays
NASA Technical Reports Server (NTRS)
Pawka, S. S.; Inman, D. L.; Hsiao, S. V.; Shemdin, O. H.
1980-01-01
Simultaneous directional wave measurements were made at Torrey Pines Beach, California, by a synthetic aperture radar (SAR) and a linear array of pressure sensors. The measurements were conducted during the West Coast Experiment in March 1977. Quantitative comparisons of the normalized directional spectra from the two systems were made for wave periods of 6.9-17.0 s. The comparison results were variable but generally showed good agreement of the primary mode of the normalized directional energy. An attempt was made to quantify the physical criteria for good wave imaging in the SAR. A frequency band analysis of wave parameters such as band energy, slope, and orbital velocity did not show good correlation with the directional comparisons. It is noted that absolute values of the wave height spectrum cannot be derived from the SAR images yet and, consequently, no comparisons of absolute energy levels with corresponding array measurements were intended.
Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.
Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu
2015-08-01
This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.
Resolution Properties of a Calcium Tungstate (CaWO4) Screen Coupled to a CMOS Imaging Detector
NASA Astrophysics Data System (ADS)
Koukou, Vaia; Martini, Niki; Valais, Ioannis; Bakas, Athanasios; Kalyvas, Nektarios; Lavdas, Eleftherios; Fountos, George; Kandarakis, Ioannis; Michail, Christos
2017-11-01
The aim of the current work was to assess the resolution properties of a calcium tungstate (CaWO4) screen (screen coating thickness: 50.09 mg/cm2, actual thickness: 167.2 μm) coupled to a high-resolution complementary metal oxide semiconductor (CMOS) digital imaging sensor. A 2.7 × 3.6 cm2 CaWO4 sample was extracted from an Agfa Curix universal screen and was coupled directly to the active area of the active pixel sensor (APS) CMOS sensor. Experiments were performed following the new IEC 62220-1-1:2015 International Standard, using an RQA-5 beam quality. Resolution was assessed in terms of the Modulation Transfer Function (MTF), using the slanted-edge method. The CaWO4/CMOS detector configuration was found to have a linear response in the exposure range under investigation. The final MTF was obtained by averaging the oversampled edge spread function (ESF), using custom-made software developed by our team according to IEC 62220-1-1:2015. Considering the renewed interest in calcium tungstate for various applications, along with the resolution results of this work, CaWO4 could also be considered for use in X-ray imaging devices such as charge-coupled devices (CCD) and CMOS.
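The slanted-edge chain (oversampled ESF, differentiation to a line spread function, windowing, Fourier transform, normalization) can be summarized in a few lines. The sketch below runs it on a synthetic error-function edge; the oversampling factor, pixel pitch, and noise level are chosen arbitrarily rather than taken from the measurement, and this is not the authors' custom software.

```python
import numpy as np
from scipy.special import erf

def mtf_from_esf(esf, oversampling=4, pixel_pitch_mm=0.05):
    """MTF from an oversampled edge spread function (slanted-edge method, simplified):
    differentiate ESF -> LSF, apply a Hann window, FFT, normalize at zero frequency."""
    lsf = np.gradient(np.asarray(esf, dtype=float))
    lsf *= np.hanning(lsf.size)                       # suppress noise at the tails
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm / oversampling)  # cycles/mm
    return freqs, mtf

# Synthetic edge: error-function profile with light noise
x = np.linspace(-2, 2, 256)
esf = 0.5 * (1 + erf(x / 0.3)) + np.random.normal(0, 0.002, x.size)
freqs, mtf = mtf_from_esf(esf)
print(freqs[:5], mtf[:5])
```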
Temporal Noise Analysis of Charge-Domain Sampling Readout Circuits for CMOS Image Sensors.
Ge, Xiaoliang; Theuwissen, Albert J P
2018-02-27
This paper presents a temporal noise analysis of charge-domain sampling readout circuits for Complementary Metal-Oxide Semiconductor (CMOS) image sensors. In order to address the trade-off between low input-referred noise and high dynamic range, a Gm-cell-based pixel together with a charge-domain correlated double sampling (CDS) technique has been proposed to provide a way to efficiently embed a tunable conversion gain along the read-out path. Such a readout topology, however, operates in a non-stationary large-signal regime, and the statistical properties of its temporal noise are a function of time. Conventional noise analysis methods for CMOS image sensors are based on steady-state signal models and therefore cannot be readily applied to Gm-cell-based pixels. In this paper, we develop analysis models for both thermal noise and flicker noise in Gm-cell-based pixels by employing the time-domain linear analysis approach and non-stationary noise analysis theory, which help to quantitatively evaluate the temporal noise characteristics of Gm-cell-based pixels. Both models were numerically computed in MATLAB using design parameters of a prototype chip and compared with both simulation and experimental results. The good agreement between the theoretical and measurement results verifies the effectiveness of the proposed noise analysis models.
SU-E-I-92: Accuracy Evaluation of Depth Data in Microsoft Kinect.
Kozono, K; Aoki, M; Ono, M; Kamikawa, Y; Arimura, H; Toyofuku, F
2012-06-01
Microsoft Kinect has potential for use in real-time patient position monitoring in diagnostic radiology and radiotherapy. We evaluated the accuracy of depth image data and the device-to-device variation in various conditions simulating clinical applications in a hospital. The Kinect sensor consists of an infrared depth camera and an RGB camera. We developed a computer program using OpenNI and OpenCV for measuring quantitative distance data. The program displays the depth image obtained from the Kinect sensor on the screen, and the Cartesian coordinates at an arbitrary point selected by mouse-clicking can be measured. A rectangular box without luster (300 × 198 × 50 mm3) was used as the measuring object. The object was placed on the floor at various distances ranging from 0 to 400 cm in increments of 10 cm from the sensor, and depth data were measured for 10 points on the planar surface of the box. The measured distance data were calibrated using the least-squares method. The device-to-device variations were evaluated using five Kinect sensors. There was an almost linear relationship between true and measured values. The Kinect sensor was unable to measure at distances of less than 50 cm from the sensor. It was found that distance data calibration was necessary for each sensor. The device-to-device variation error for five Kinect sensors was within 0.46% over the distance range from 50 cm to 2 m from the sensor. The maximum deviation of the distance data after calibration was 1.1 mm at distances from 50 to 150 cm. The overall average error of the five Kinect sensors was 0.18 mm over the distance range of 50 to 150 cm. The Kinect sensor has a distance accuracy of about 1 mm if each device is properly calibrated. This sensor will be usable for positioning of patients in diagnostic radiology and radiotherapy.
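The per-device calibration step described above amounts to a linear least-squares fit of measured against true distances. A minimal sketch with hypothetical readings follows; the slope, offset, and noise values are made up for illustration.

```python
import numpy as np

# Hypothetical true vs. raw measured distances (cm) for one Kinect unit
true_cm = np.arange(50, 401, 10, dtype=float)
raw_cm  = 1.02 * true_cm - 1.5 + np.random.normal(0, 0.3, true_cm.size)

# Per-device linear calibration by least squares, since each sensor needs its own fit
a, b = np.polyfit(raw_cm, true_cm, 1)
calibrated = a * raw_cm + b

residual_mm = 10.0 * (calibrated - true_cm)          # cm -> mm
print(f"max |error| = {np.max(np.abs(residual_mm)):.2f} mm, "
      f"mean |error| = {np.mean(np.abs(residual_mm)):.2f} mm")
```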
Guimarães, Ricardo J P S; Freitas, Corina C; Dutra, Luciano V; Scholte, Ronaldo G C; Amaral, Ronaldo S; Drummond, Sandra C; Shimabukuro, Yosio E; Oliveira, Guilherme C; Carvalho, Omar S
2010-07-01
This paper analyses the associations between Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) on the prevalence of schistosomiasis and the presence of Biomphalaria glabrata in the state of Minas Gerais (MG), Brazil. Additionally, vegetation, soil and shade fraction images were created using a Linear Spectral Mixture Model (LSMM) from the blue, red and infrared channels of the Moderate Resolution Imaging Spectroradiometer spaceborne sensor and the relationship between these images and the prevalence of schistosomiasis and the presence of B. glabrata was analysed. First, we found a high correlation between the vegetation fraction image and EVI and second, a high correlation between soil fraction image and NDVI. The results also indicate that there was a positive correlation between prevalence and the vegetation fraction image (July 2002), a negative correlation between prevalence and the soil fraction image (July 2002) and a positive correlation between B. glabrata and the shade fraction image (July 2002). This paper demonstrates that the LSMM variables can be used as a substitute for the standard vegetation indices (EVI and NDVI) to determine and delimit risk areas for B. glabrata and schistosomiasis in MG, which can be used to improve the allocation of resources for disease control.
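A Linear Spectral Mixture Model of the kind used here expresses each pixel spectrum as a non-negative combination of endmember spectra (vegetation, soil, shade) and takes the fitted coefficients as the fraction images. The sketch below shows the idea on made-up three-band endmembers; it is not the actual MODIS data or the unmixing implementation used in the study.

```python
import numpy as np

def lsmm_fractions(pixels, endmembers):
    """Fraction images from a Linear Spectral Mixture Model: least-squares fit of
    endmember spectra to each pixel spectrum, clipped to be non-negative and
    renormalized (illustration only)."""
    # pixels: (n_pixels, n_bands); endmembers: (n_endmembers, n_bands)
    coeffs, *_ = np.linalg.lstsq(endmembers.T, pixels.T, rcond=None)
    frac = np.clip(coeffs.T, 0.0, None)
    return frac / (frac.sum(axis=1, keepdims=True) + 1e-12)

# Hypothetical blue/red/NIR endmember reflectances for vegetation, soil and shade
E = np.array([[0.03, 0.04, 0.45],     # vegetation
              [0.10, 0.20, 0.30],     # soil
              [0.01, 0.01, 0.02]])    # shade
pix = np.array([[0.05, 0.08, 0.35], [0.08, 0.15, 0.28]])
print(lsmm_fractions(pix, E))
```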
Guidance Of A Mobile Robot Using An Omnidirectional Vision Navigation System
NASA Astrophysics Data System (ADS)
Oh, Sung J.; Hall, Ernest L.
1987-01-01
Navigation and visual guidance are key topics in the design of a mobile robot. Omnidirectional vision using a very wide angle or fisheye lens provides a hemispherical view at a single instant that permits target location without mechanical scanning. The inherent image distortion with this view and the numerical errors accumulated from vision components can be corrected to provide accurate position determination for navigation and path control. The purpose of this paper is to present the experimental results and analyses of the imaging characteristics of the omnivision system including the design of robot-oriented experiments and the calibration of raw results. Errors less than one picture element on each axis were observed by testing the accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor. Similar results were obtained for four different locations using corrected results of the linearity test between zenith angle and image location. Angular error of less than one degree and radial error of less than one Y picture element were observed at moderate relative speed. The significance of this work is that the experimental information and the test of coordinated operation of the equipment provide a greater understanding of the dynamic omnivision system characteristics, as well as insight into the evaluation and improvement of the prototype sensor for a mobile robot. Also, the calibration of the sensor is important, since the results provide a cornerstone for future developments. This sensor system is currently being developed for a robot lawn mower.
A digital ISO expansion technique for digital cameras
NASA Astrophysics Data System (ADS)
Yoo, Youngjin; Lee, Kangeui; Choe, Wonhee; Park, SungChan; Lee, Seong-Deok; Kim, Chang-Yong
2010-01-01
Market demand for digital cameras with higher sensitivity under low-light conditions is increasing remarkably. The digital camera market is now a tough race to provide higher ISO capability. In this paper, we explore an approach for increasing the maximum ISO capability of digital cameras without changing any structure of the image sensor or CFA. Our method is applied directly to the raw Bayer-pattern CFA image to avoid the non-linearity and noise amplification that usually arise after the ISP (Image Signal Processor) of digital cameras. The proposed method fuses multiple short-exposure images which are noisy but less blurred. Our approach is designed to avoid the ghost artifacts caused by hand-shaking and object motion. In order to achieve the desired ISO image quality, both the low-frequency chromatic noise and the fine-grain noise that usually appear in high-ISO images are removed, and we then modify the different layers created by a two-scale non-linear decomposition of the image. Once our approach is performed on an input Bayer-pattern CFA image, the resultant Bayer image is further processed by the ISP to obtain a fully processed RGB image. The performance of our proposed approach is evaluated by comparing SNR (Signal to Noise Ratio), MTF50 (Modulation Transfer Function), color error ΔE*ab, and visual quality with reference images whose exposure times are properly extended to a variety of target sensitivities.
Myopic aberrations: Simulation based comparison of curvature and Hartmann Shack wavefront sensors
NASA Astrophysics Data System (ADS)
Basavaraju, Roopashree M.; Akondi, Vyas; Weddell, Stephen J.; Budihal, Raghavendra Prasad
2014-02-01
In comparison with a Hartmann Shack wavefront sensor, the curvature wavefront sensor is known for its higher sensitivity and greater dynamic range. The aim of this study is to numerically investigate the merits of using a curvature wavefront sensor, in comparison with a Hartmann Shack (HS) wavefront sensor, to analyze aberrations of the myopic eye. Aberrations were statistically generated using Zernike coefficient data of 41 myopic subjects obtained from the literature. The curvature sensor is relatively simple to implement, and the processing of extra- and intra-focal images was linearly resolved using the Radon transform to provide Zernike modes corresponding to statistically generated aberrations. Simulations of the HS wavefront sensor involve the evaluation of the focal spot pattern from simulated aberrations. Optical wavefronts were reconstructed using the slope geometry of Southwell. Monte Carlo simulation was used to find critical parameters for accurate wavefront sensing and to investigate the performance of HS and curvature sensors. The performance of the HS sensor is highly dependent on the number of subapertures and the curvature sensor is largely dependent on the number of Zernike modes used to represent the aberration and the effective propagation distance. It is shown that in order to achieve high wavefront sensing accuracy while measuring aberrations of the myopic eye, a simpler and cost effective curvature wavefront sensor is a reliable alternative to a high resolution HS wavefront sensor with a large number of subapertures.
Cell adhesion and guidance by micropost-array chemical sensors
NASA Astrophysics Data System (ADS)
Pantano, Paul; Quah, Soo-Kim; Danowski, Kristine L.
2002-06-01
An array of ~50,000 individual polymeric micropost sensors was patterned across a glass coverslip by a photoimprint lithographic technique. Individual micropost sensors were ~3-micrometers tall and ~8-micrometers wide. The O2-sensitive micropost array sensors (MPASs) comprised a ruthenium complex encapsulated in a gas permeable photopolymerizable siloxane. The pH-sensitive MPASs comprised a fluorescein conjugate encapsulated in a photocrosslinkable poly(vinyl alcohol)-based polymer. PO2 and pH were quantitated by acquiring MPAS luminescence images with an epifluorescence microscope/charge coupled device imaging system. O2-sensitive MPASs displayed linear Stern-Volmer quenching behavior with a maximum Io/I of ~8.6. pH-sensitive MPASs displayed sigmoidal calibration curves with a pKa of ~5.8. The adhesion of undifferentiated rat pheochromocytoma (PC12) cells across these two polymeric surface types was investigated. The greatest PC12 cell proliferation and adhesion occurred across the poly(vinyl alcohol)-based micropost arrays relative to planar poly(vinyl alcohol)-based surfaces and both patterned and planar siloxane surfaces. An additional advantage of the patterned MPAS layers relative to planar sensing layers was the ability to direct the growth of biological cells. Preliminary data is presented whereby nerve growth factor-differentiated PC12 cells grew neurite-like processes that extended along paths defined by the micropost architecture.
NASA Technical Reports Server (NTRS)
2007-01-01
Topics covered include: Miniature Intelligent Sensor Module; "Smart" Sensor Module; Portable Apparatus for Electrochemical Sensing of Ethylene; Increasing Linear Dynamic Range of a CMOS Image Sensor; Flight Qualified Micro Sun Sensor; Norbornene-Based Polymer Electrolytes for Lithium Cells; Making Single-Source Precursors of Ternary Semiconductors; Water-Free Proton-Conducting Membranes for Fuel Cells; Mo/Ti Diffusion Bonding for Making Thermoelectric Devices; Photodetectors on Coronagraph Mask for Pointing Control; High-Energy-Density, Low-Temperature Li/CFx Primary Cells; G4-FETs as Universal and Programmable Logic Gates; Fabrication of Buried Nanochannels From Nanowire Patterns; Diamond Smoothing Tools; Infrared Imaging System for Studying Brain Function; Rarefying Spectra of Whispering-Gallery-Mode Resonators; Large-Area Permanent-Magnet ECR Plasma Source; Slot-Antenna/Permanent-Magnet Device for Generating Plasma; Fiber-Optic Strain Gauge With High Resolution And Update Rate; Broadband Achromatic Telecentric Lens; Temperature-Corrected Model of Turbulence in Hot Jet Flows; Enhanced Elliptic Grid Generation; Automated Knowledge Discovery From Simulators; Electro-Optical Modulator Bias Control Using Bipolar Pulses; Generative Representations for Automated Design of Robots; Mars-Approach Navigation Using In Situ Orbiters; Efficient Optimization of Low-Thrust Spacecraft Trajectories; Cylindrical Asymmetrical Capacitors for Use in Outer Space; Protecting Against Faults in JPL Spacecraft; Algorithm Optimally Allocates Actuation of a Spacecraft; and Radar Interferometer for Topographic Mapping of Glaciers and Ice Sheets.
Near-infrared fluorescence imaging with a mobile phone (Conference Presentation)
NASA Astrophysics Data System (ADS)
Ghassemi, Pejhman; Wang, Bohan; Wang, Jianting; Wang, Quanzeng; Chen, Yu; Pfefer, T. Joshua
2017-03-01
Mobile phone cameras employ sensors with near-infrared (NIR) sensitivity, yet this capability has not been exploited for biomedical purposes. Removing the IR-blocking filter from a phone-based camera opens the door to a wide range of techniques and applications for inexpensive, point-of-care biophotonic imaging and sensing. This study provides proof of principle for one of these modalities: phone-based NIR fluorescence imaging. An imaging system was assembled using a 780 nm light source along with excitation and emission filters with 800 nm and 825 nm cut-off wavelengths, respectively. Indocyanine green (ICG) was used as an NIR fluorescence contrast agent in an ex vivo rodent model, a resolution test target, and a 3D-printed, tissue-simulating vascular phantom. Raw and processed images for the red, green, and blue pixel channels were analyzed for quantitative evaluation of fundamental performance characteristics including spectral sensitivity, detection linearity, and spatial resolution. Mobile phone results were compared with a scientific CCD. The spatial resolution of the CCD system was consistently superior to that of the phone, and the green phone camera pixels showed better resolution than the blue or red channels. The CCD exhibited sensitivity similar to that of the processed red and blue pixel channels, yet a greater degree of detection linearity. Raw phone pixel data showed lower sensitivity but greater linearity than processed data. Overall, both qualitative and quantitative results provided strong evidence of the potential of phone-based NIR imaging, which may lead to a wide range of applications from cancer detection to glucose sensing.
High-Speed Edge-Detecting Line Scan Smart Camera
NASA Technical Reports Server (NTRS)
Prokop, Norman F.
2012-01-01
A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in an inlet shock detection system developed at NASA Glenn Research Center. The inlet shock is detected by projecting a laser sheet through the airflow. The shock is the densest part of the airflow and refracts the laser sheet the most in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip, or negative peak, within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes, in real time, the linear image containing the shock shadowgraph and outputs the shock location. Previously, a high-speed camera and a personal computer performed the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock or negative edge location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and an onboard or external digital interface to provide serial data such as RS-232/485, USB, Ethernet, or CAN bus; parallel digital data; or an analog signal.
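Finding the shadowgraph dip in a line-scan profile reduces to locating a negative peak. The sketch below finds the minimum pixel and refines it with a three-point parabolic fit on a synthetic profile; the sub-pixel refinement and all numbers are illustrative, not the algorithm implemented in the camera's circuitry.

```python
import numpy as np

def shock_location(profile):
    """Locate the dark shadowgraph dip in a 1-D line-scan intensity profile.
    Coarse location is the minimum pixel; a 3-point parabolic fit refines it
    to sub-pixel precision (illustrative only)."""
    p = np.asarray(profile, dtype=float)
    i = int(np.argmin(p))
    if 0 < i < p.size - 1:
        denom = p[i - 1] - 2 * p[i] + p[i + 1]
        if denom != 0:
            return i + 0.5 * (p[i - 1] - p[i + 1]) / denom
    return float(i)

# Synthetic laser-sheet profile with a dip near pixel 412
x = np.arange(1024, dtype=float)
profile = 200 - 80 * np.exp(-0.5 * ((x - 412.3) / 4.0) ** 2) + np.random.normal(0, 1, x.size)
print(shock_location(profile))   # close to 412
```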
A Sensor-Based Method for Diagnostics of Machine Tool Linear Axes.
Vogl, Gregory W; Weiss, Brian A; Donmez, M Alkan
2015-01-01
A linear axis is a vital subsystem of machine tools, which are vital systems within many manufacturing operations. When installed and operating within a manufacturing facility, a machine tool needs to stay in good condition for parts production. All machine tools degrade during operations, yet knowledge of that degradation is elusive; specifically, accurately detecting degradation of linear axes is a manual and time-consuming process. Thus, manufacturers need automated and efficient methods to diagnose the condition of their machine tool linear axes without disruptions to production. The Prognostics and Health Management for Smart Manufacturing Systems (PHM4SMS) project at the National Institute of Standards and Technology (NIST) developed a sensor-based method to quickly estimate the performance degradation of linear axes. The multi-sensor-based method uses data collected from a 'sensor box' to identify changes in linear and angular errors due to axis degradation; the sensor box contains inclinometers, accelerometers, and rate gyroscopes to capture this data. The sensors are expected to be cost effective with respect to savings in production losses and scrapped parts for a machine tool. Numerical simulations, based on sensor bandwidth and noise specifications, show that changes in straightness and angular errors could be known with acceptable test uncertainty ratios. If a sensor box resides on a machine tool and data is collected periodically, then the degradation of the linear axes can be determined and used for diagnostics and prognostics to help optimize maintenance, production schedules, and ultimately part quality.
Characterization of Scintillating X-ray Optical Fiber Sensors
Sporea, Dan; Mihai, Laura; Vâţă, Ion; McCarthy, Denis; O'Keeffe, Sinead; Lewis, Elfed
2014-01-01
The paper presents a set of tests carried out to evaluate the design characteristics and the operating performance of a set of six X-ray extrinsic optical fiber sensors. The extrinsic sensor we developed is intended to be used as a low-energy X-ray detector for monitoring radiation levels in radiotherapy, industrial applications, and personnel dosimetry. The reproducibility of the manufacturing process and the characteristics of the sensors were assessed. The sensors' dynamic range, linearity, sensitivity, and reproducibility were evaluated through radioluminescence measurements, X-ray fluorescence, and X-ray imaging investigations. Their response to the operating conditions of the excitation source was estimated. The effect of the sensors' design and implementation on the collection efficiency of the radioluminescence signal was measured. The study indicated that the sensors are efficient only in the first 5 mm of the tip, and that a reflective coating can improve their response. Additional tests were done to investigate the concentricity of the sensor tip with respect to the core of the optical fiber guiding the optical signal. The influence of the active material concentration on the sensor response to X-rays was studied. The tests were carried out by measuring the radioluminescence signal with an optical fiber spectrometer and with a Multi-Pixel Photon Counter. PMID:24556676
NASA Astrophysics Data System (ADS)
El-Saba, Aed; Sakla, Wesam A.
2010-04-01
Recently, the use of imaging polarimetry has received considerable attention for automatic target recognition (ATR) applications. In military remote sensing applications, there is a great demand for sensors that are capable of discriminating between real targets and decoys. Accurate discrimination of decoys from real targets is a challenging task and often requires the fusion of various sensor modalities that operate simultaneously. In this paper, we use a simple linear fusion technique known as the high-boost fusion (HBF) method for effective discrimination of real targets in the presence of multiple decoys. The HBF assigns more weight to the polarization-based imagery in forming the final fused image that is used for detection. We have captured both intensity and polarization-based imagery from an experimental laboratory arrangement containing a mixture of sand/dirt, rocks, vegetation, and other objects for the purpose of simulating scenery that would be acquired in a remote sensing military application. A target object and three decoys that are identical in physical appearance (shape, surface structure, and color) but different in material composition have also been placed in the scene. We use the wavelet-filter joint transform correlation (WFJTC) technique to perform detection between the input scenery and the target object. Our results show that use of the HBF method increases the correlation performance metrics associated with the WFJTC-based detection process when compared to using either the traditional intensity or polarization-based images.
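A linear fusion that boosts the polarization channel can be sketched in a couple of lines. The weighting factor, normalization, and random stand-in images below are hypothetical, and the exact weighting used in the paper may differ.

```python
import numpy as np

def high_boost_fusion(intensity, polarization, boost=2.0):
    """Linear fusion that weights the polarization channel more heavily than the
    intensity channel, in the spirit of the high-boost fusion (HBF) idea.
    Illustrative sketch only."""
    fused = intensity + boost * polarization
    return (fused - fused.min()) / (np.ptp(fused) + 1e-12)   # rescale to [0, 1]

rng = np.random.default_rng(0)
I = rng.random((64, 64))          # stand-in intensity image
P = rng.random((64, 64))          # stand-in degree-of-linear-polarization image
F = high_boost_fusion(I, P)
print(F.min(), F.max())
```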
High-Resolution Gamma-Ray Imaging Measurements Using Externally Segmented Germanium Detectors
NASA Technical Reports Server (NTRS)
Callas, J.; Mahoney, W.; Skelton, R.; Varnell, L.; Wheaton, W.
1994-01-01
Fully two-dimensional gamma-ray imaging with simultaneous high-resolution spectroscopy has been demonstrated using an externally segmented germanium sensor. The system employs a single high-purity coaxial detector with its outer electrode segmented into 5 distinct charge collection regions and a lead coded aperture with a uniformly redundant array (URA) pattern. A series of one-dimensional responses was collected around 511 keV while the system was rotated in steps through 180 degrees. A non-negative, linear least-squares algorithm was then employed to reconstruct a 2-dimensional image. Corrections for multiple scattering in the detector, and the finite distance of source and detector are made in the reconstruction process.
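The reconstruction step, a non-negative linear least-squares inversion of the coded-aperture response, can be illustrated with SciPy's nnls on a toy system. The random 0/1 projection matrix and source values below are placeholders, not the detector's actual URA response.

```python
import numpy as np
from scipy.optimize import nnls

# Toy coded-aperture system: y = A s + noise, with a non-negative source image s
rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(64, 25)).astype(float)   # stand-in projection matrix
s_true = np.zeros(25)
s_true[12] = 50.0
s_true[7] = 20.0
y = A @ s_true + rng.normal(0, 1.0, 64)

s_hat, _ = nnls(A, y)            # non-negative linear least-squares reconstruction
print(np.round(s_hat.reshape(5, 5), 1))
```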
Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking
Xue, Ming; Yang, Hua; Zheng, Shibao; Zhou, Yi; Yu, Zhenghua
2014-01-01
To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative, and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are also trained to adapt to target appearance variation in a timely manner. Qualitative and quantitative evaluations on challenging image sequences, compared with state-of-the-art algorithms, demonstrate that the proposed tracking algorithm achieves a more favorable performance. We also illustrate its relay application in visual sensor networks. PMID:24549252
CMOS Image Sensor Using SOI-MOS/Photodiode Composite Photodetector Device
NASA Astrophysics Data System (ADS)
Uryu, Yuko; Asano, Tanemasa
2002-04-01
A new photodetector device composed of a lateral junction photodiode and a metal-oxide-semiconductor field-effect transistor (MOSFET), in which the output of the diode is fed through the body of the MOSFET, has been investigated. It is shown that the silicon-on-insulator (SOI) MOSFET amplifies the junction photodiode current due to lateral bipolar action. It is also shown that the presence of the electrically floating gate enhances the current amplification factor of the SOI-MOSFET. The output current of this composite device responds linearly over four orders of magnitude of illumination intensity. As an application of the composite device, a complementary metal-oxide-semiconductor (CMOS) line sensor incorporating the composite device is fabricated and its operation is demonstrated. The output signal of the line sensor using the composite device was two times larger than that using the lateral photodiode alone.
Optimal image alignment with random projections of manifolds: algorithm and geometric analysis.
Kokiopoulou, Effrosyni; Kressner, Daniel; Frossard, Pascal
2011-06-01
This paper addresses the problem of image alignment based on random measurements. Image alignment consists of estimating the relative transformation between a query image and a reference image. We consider the specific problem where the query image is provided in compressed form in terms of linear measurements captured by a vision sensor. We cast the alignment problem as a manifold distance minimization problem in the linear subspace defined by the measurements. The transformation manifold that represents synthesis of shift, rotation, and isotropic scaling of the reference image can be given in closed form when the reference pattern is sparsely represented over a parametric dictionary. We show that the objective function can then be decomposed as the difference of two convex functions (DC) in the particular case where the dictionary is built on Gaussian functions. Thus, the optimization problem becomes a DC program, which in turn can be solved globally by a cutting plane method. The quality of the solution is typically affected by the number of random measurements and the condition number of the manifold that describes the transformations of the reference image. We show that the curvature, which is closely related to the condition number, remains bounded in our image alignment problem, which means that the relative transformation between two images can be determined optimally in a reduced subspace.
Spectral Reconstruction for Obtaining Virtual Hyperspectral Images
NASA Astrophysics Data System (ADS)
Perez, G. J. P.; Castro, E. C.
2016-12-01
Hyperspectral sensors have demonstrated their capabilities in identifying materials and detecting processes in a satellite scene. However, the availability of hyperspectral images is limited due to the high development cost of these sensors. Currently, most of the readily available data come from multispectral instruments. Spectral reconstruction is an alternative method to address the need for hyperspectral information. The spectral reconstruction technique has been shown to provide quick and accurate detection of defects in an integrated circuit, to recover damaged parts of frescoes, and to aid in converting a microscope into an imaging spectrometer. By using several spectral bands together with a spectral library, a spectrum acquired by a sensor can be expressed as a linear superposition of elementary signals. In this study, spectral reconstruction is used to estimate the spectra of different surfaces imaged by Landsat 8. Four atmospherically corrected Landsat 8 surface reflectance bands, three visible (499 nm, 585 nm, 670 nm) and one near-infrared (872 nm), are used together with a spectral library of ground elements acquired from the United States Geological Survey (USGS). The spectral library is limited to the 420-1020 nm spectral range and is interpolated at one nanometer resolution. Singular Value Decomposition (SVD) is used to calculate the basis spectra, which are then applied to reconstruct the spectrum. The spectral reconstruction is applied to test cases within the library consisting of vegetation communities. This technique was successful in reconstructing a hyperspectral signal with an error of less than 12% for most of the test cases. Hence, this study demonstrates the potential of simulating information at any desired wavelength, creating a virtual hyperspectral sensor without the need for additional satellite bands.
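The reconstruction pipeline (SVD of the library to obtain basis spectra, least-squares fit of the basis to the four band samples, expansion back to the full wavelength grid) can be sketched as follows. The toy Gaussian "library" and the target spectrum are fabricated for illustration and are not the USGS library or Landsat 8 data.

```python
import numpy as np

def reconstruct_spectrum(band_samples, band_indices, library, n_basis=4):
    """Estimate a full-resolution spectrum from a few band measurements.
    Basis spectra come from an SVD of the spectral library; coefficients are
    fitted to the sampled bands by least squares (simplified sketch)."""
    # library: (n_materials, n_wavelengths) reflectance spectra
    _, _, vt = np.linalg.svd(library, full_matrices=False)
    basis = vt[:n_basis]                                   # (n_basis, n_wavelengths)
    coeffs, *_ = np.linalg.lstsq(basis[:, band_indices].T, band_samples, rcond=None)
    return coeffs @ basis                                  # reconstructed spectrum

# Hypothetical library of smooth spectra on a 420-1020 nm, 1 nm grid
wl = np.arange(420, 1021, 1.0)
lib = np.vstack([0.2 + 0.3 * np.exp(-((wl - c) / 120.0) ** 2) for c in (500, 650, 850, 950)])
bands = np.searchsorted(wl, [499, 585, 670, 872])          # band centres used in the study
target = 0.5 * lib[0] + 0.5 * lib[2]
print(np.max(np.abs(reconstruct_spectrum(target[bands], bands, lib) - target)))
```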
Burgués, Javier; Jiménez-Soto, Juan Manuel; Marco, Santiago
2018-07-12
The limit of detection (LOD) is a key figure of merit in chemical sensing. However, the estimation of this figure of merit is hindered by the non-linear calibration curves characteristic of semiconductor gas sensor technologies such as metal oxide (MOX) sensors, gasFETs, or thermoelectric sensors. Additionally, chemical sensors suffer from cross-sensitivities and temporal stability problems. The application of the International Union of Pure and Applied Chemistry (IUPAC) recommendations for univariate LOD estimation to non-linear semiconductor gas sensors is not straightforward due to the strong statistical requirements of the IUPAC methodology (linearity, homoscedasticity, normality). Here, we propose a methodological approach to LOD estimation through linearized calibration models. As an example, the methodology is applied to the detection of low concentrations of carbon monoxide using MOX gas sensors in a scenario where the main source of error is the presence of uncontrolled levels of humidity.
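One way to picture "LOD through a linearized calibration" is to log-transform a power-law MOX response so that a straight-line fit applies, then use an IUPAC-style 3.3·σ/slope criterion in the transformed domain. The sketch below does exactly that on synthetic data; the power-law model, noise level, and back-transformation are assumptions for illustration, not the procedure of the paper.

```python
import numpy as np

# Hypothetical MOX calibration: response follows a power law R = a * C**b,
# linearized as log(R) = log(a) + b*log(C) so a univariate straight-line fit applies.
conc = np.array([2.0, 5.0, 10.0, 20.0, 50.0])            # ppm CO
resp = 1500.0 * conc ** 0.6 * np.exp(np.random.normal(0, 0.02, conc.size))

b, log_a = np.polyfit(np.log(conc), np.log(resp), 1)      # linearized calibration
residuals = np.log(resp) - (log_a + b * np.log(conc))
sigma = residuals.std(ddof=2)                             # standard error of the fit

# IUPAC-style criterion applied in the linearized domain and mapped back to
# concentration units (illustrative only)
lod_ppm = np.exp((3.3 * sigma) / b)
print(f"estimated LOD ~ {lod_ppm:.2f} ppm")
```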
Source-space ICA for MEG source imaging.
Jonmohamadi, Yaqub; Jones, Richard D
2016-02-01
One of the most widely used approaches in electroencephalography (EEG)/magnetoencephalography (MEG) source imaging is the application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer high spatial resolution. However, in order to have both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources, sensor-space ICA + beamformer is not an ideal combination. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we compare source-space ICA with sensor-space ICA in both simulation and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally well from a temporal perspective. Real MEG from two healthy subjects with visual stimuli was also used to compare the performance of sensor-space ICA and source-space ICA. We also propose a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.
1997-01-01
A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures the field strength, magnetic bearing, and arrival time of lightning radio emissions, and solutions for the plane (i.e., no Earth curvature) are provided that incorporate all of these measurements. The accuracy of the retrieval method is tested using computer-simulated data sets, and the relative influence of bearing and arrival-time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector (OTD) and Lightning Imaging Sensor (LIS). We also introduce a quadratic planar solution that is useful when only three arrival-time measurements are available. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary noncollinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated data sets, and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 degrees.
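For the arrival-time part of such a retrieval, a standard linearization (differencing the squared-range equations against a reference station) reduces the problem to linear least squares; the sketch below is illustrative and is not the paper's exact ALDF formulation, and the optional bearing rows assume bearings measured clockwise from north.

import numpy as np

C = 2.998e8  # propagation speed (m/s)

def locate_planar(sites, toas, bearings=None):
    # sites: (n, 2) sensor coordinates (m); toas: (n,) arrival times (s)
    # unknowns: source coordinates x, y and source time ts
    (x0, y0), t0 = sites[0], toas[0]
    rows, rhs = [], []
    for (xi, yi), ti in zip(sites[1:], toas[1:]):
        # difference of squared-range equations against station 0 -> linear in (x, y, ts)
        rows.append([2 * (xi - x0), 2 * (yi - y0), -2 * C**2 * (ti - t0)])
        rhs.append((xi**2 + yi**2 - x0**2 - y0**2) - C**2 * (ti**2 - t0**2))
    if bearings is not None:
        for (xi, yi), th in zip(sites, bearings):
            # source lies on the ray from (xi, yi) along (sin th, cos th)
            rows.append([np.cos(th), -np.sin(th), 0.0])
            rhs.append(xi * np.cos(th) - yi * np.sin(th))
    sol, *_ = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float), rcond=None)
    return sol  # [x, y, ts]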
Holographic elements and curved slit used to enlarge field of view in rocket detection system
NASA Astrophysics Data System (ADS)
Breton, Mélanie; Fortin, Jean; Lessard, Roger A.; Châteauneuf, Marc
2006-09-01
Rocket detection over a wide field of view is an important issue in the protection of light armored vehicles. Traditionally, detection is performed in the UV band, but recent studies have shown the existence of significant emission peaks in the visible and near infrared at rocket launch. Using the visible region is attractive as a way to reduce the weight and cost of such systems. Current methods for detecting those specific peaks involve the use of interferometric filters; however, they fail to combine a wide angle with wavelength selectivity. A linear array of volume holographic elements combined with a curved exit slit is proposed for the development of a wide-field-of-view sensor for the detection of solid propellant motor launch flash. The sensor is envisaged to trigger an active protection system. On the basis of geometric theory, a system has been designed. It consists of a collector, a linear array of holographic elements, a curved slit and a detector. The collector is an off-axis parabolic mirror. The holographic elements are recorded by subdividing a hologram film into regions, each individually exposed with a different incidence angle. All regions have a common diffraction angle. The incident angle determines the instantaneous field of view of each element. The volume hologram performs the function of separating and focusing the diffracted beam on an image plane to achieve wavelength filtering. The conical diffraction property is used to enlarge the field of view in elevation. A curved slit was designed to correspond to the oblique incidence of the holographic linear array; it is situated at the image plane and filters the diffracted spectrum toward the sensor. The field of view of the design was calculated to be 34 degrees. This was validated by a prototype tested during a field trial. Results are presented and analyzed. The system succeeded in detecting the rocket launch flash at the desired fields of view.
Wedge imaging spectrometer: application to drug and pollution law enforcement
NASA Astrophysics Data System (ADS)
Elerding, George T.; Thunen, John G.; Woody, Loren M.
1991-08-01
The Wedge Imaging Spectrometer (WIS) represents a novel implementation of an imaging spectrometer sensor that is compact and rugged and, therefore, suitable for use in drug interdiction and pollution monitoring activities. With performance characteristics equal to those of comparable conventional imaging spectrometers, it would be capable of detecting and identifying primary and secondary indicators of drug activities and pollution events. In the design, a linear wedge filter is mated to an area array of detectors to achieve two-dimensional sampling of the combined spatial/spectral information passed by the filter. As a result, the need for complex and delicate fore optics is avoided, and the size and weight of the instrument are approximately 50% those of comparable sensors. Spectral bandwidths can be controlled to provide relatively narrow individual bandwidths over a broad spectrum, including all visible and infrared wavelengths. This sensor concept has been under development at the Hughes Aircraft Co. Santa Barbara Research Center (SBRC), and hardware exists in the form of a brassboard prototype. This prototype provides 64 spectral bands over the visible and near-infrared region (0.4 to 1.0 micrometers). Implementation issues have been examined, and plans have been formulated for packaging the sensor into a test-bed aircraft for demonstration of capabilities. Two specific areas of utility to the drug interdiction problem are isolated: (1) detection and classification of narcotic crop growth areas and (2) identification of coca processing sites, cued by the results of broad-area survey and collateral information. Vegetation stress and change-detection processing may also be useful in distinguishing active from dormant airfields. For pollution monitoring, a WIS sensor could provide data with fine spectral and spatial resolution over suspect areas. On-board or ground processing of the data would isolate the presence of polluting effluents, effects on vegetation caused by airborne or other pollutants, or anomalous ground conditions indicative of buried or dumped toxic materials.
NASA Astrophysics Data System (ADS)
Bisogni, Maria Giuseppina
2006-04-01
In this paper we report on the performance and the first imaging test results of a digital mammographic demonstrator based on GaAs pixel detectors. The heart of this prototype is the X-ray detection unit, a GaAs pixel sensor read out by the PCC/MEDIPIXI circuit. Since the active area of each sensor is 1 cm2, 18 detectors have been organized in two staggered rows of nine chips each. To cover the typical mammographic format (18 × 24 cm2), a linear scan is performed by means of a stepper motor. The system is integrated into a mammographic unit comprising the X-ray tube, the bias and data acquisition systems, and the PC-based control system. The prototype has been developed in the framework of the Integrated Mammographic Imaging (IMI) project, an industrial research activity aimed at developing innovative instrumentation for morphologic and functional imaging. The project has been supported by the Italian Ministry of Education, University and Research (MIUR) and by five Italian high-tech companies in collaboration with the universities of Ferrara, Roma “La Sapienza”, Pisa and the INFN.
Intelligent Network-Centric Sensors Development Program
2012-07-31
(Fragmentary report excerpt; only parts are recoverable.) The available text lists image sensor configurations, including 360-degree cone LWIR, MWIR, and SWIR imaging and video sensors, and a section on the reasoning process used to match sensor systems to algorithms, described in ontological terms. It also notes that aberrations introduce coherent-imaging effects and that the specular nature of active imaging contributes, together with those effects, to image nonuniformity.
Increasing Linear Dynamic Range of a CMOS Image Sensor
NASA Technical Reports Server (NTRS)
Pain, Bedabrata
2007-01-01
A generic design and a corresponding operating sequence have been developed for increasing the linear-response dynamic range of a complementary metal oxide/semiconductor (CMOS) image sensor. The design provides for linear calibrated dual-gain pixels that operate at high gain at a low signal level and at low gain at a signal level above a preset threshold. Unlike most prior designs for increasing the dynamic range of an image sensor, this design does not entail any increase in noise (including fixed-pattern noise), decrease in responsivity or linearity, or degradation of photometric calibration. The figure is a simplified schematic diagram showing the circuit of one pixel and pertinent parts of its column readout circuitry. The conventional part of the pixel circuit includes a photodiode having a small capacitance, CD. The unconventional part includes an additional larger capacitance, CL, that can be connected to the photodiode via a transfer gate controlled in part by a latch. In the high-gain mode, the signal labeled TSR in the figure is held low through the latch, which also helps to adapt the gain on a pixel-by-pixel basis. Light must be coupled to the pixel through a microlens or by back illumination in order to obtain a high effective fill factor; this is necessary to ensure high quantum efficiency, a loss of which would reduce the efficacy of the dynamic-range-enhancement scheme. Once the level of illumination of the pixel exceeds the threshold, TSR is turned on, causing the transfer gate to conduct, thereby adding CL to the pixel capacitance. The added capacitance reduces the conversion gain and increases the pixel electron-handling capacity, thereby providing an extension of the dynamic range. By use of an array of comparators at the bottom of each column, photocharge voltages on sampling capacitors in each column are compared with a reference voltage to determine whether it is necessary to switch from the high-gain to the low-gain mode. Depending upon the built-in offset in each pixel and in each comparator, the point at which the gain change occurs will be different, adding gain-dependent fixed-pattern noise in each pixel. The offset, and hence the fixed-pattern noise, is eliminated by sampling the pixel readout charge four times by use of four capacitors (instead of two such capacitors as in a conventional design) connected to the bottom of the column via electronic switches SHS1, SHR1, SHS2, and SHR2, respectively, corresponding to high and low values of the signals TSR and RST. The samples are combined in an appropriate fashion to cancel offset-induced errors and provide spurious-free imaging with extended dynamic range.
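A highly simplified behavioural model of such a dual-gain pixel is sketched below; the capacitance values, threshold and offset are assumed numbers, and the two-sample differencing stands in for the four-sample offset cancellation described above.

import numpy as np

Q_E = 1.602e-19  # electron charge (C)

def dual_gain_readout(n_electrons, cd=2e-15, cl=14e-15, v_thresh=0.8, offset=0.05):
    # cd: photodiode capacitance; cl: added capacitance switched in above threshold
    v_high = n_electrons * Q_E / cd                      # high-gain conversion (CD only)
    if v_high + offset < v_thresh:
        return (v_high + offset) - offset                # signal minus reset sample
    v_low = n_electrons * Q_E / (cd + cl)                # low-gain conversion (CD + CL)
    return ((v_low + offset) - offset) * (cd + cl) / cd  # rescale to the high-gain axis

# the ideal model stays linear across the gain switch
for ne in (1_000, 5_000, 20_000, 80_000):
    print(ne, round(dual_gain_readout(ne), 3))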
Development of a prototype sensor system for ultra-high-speed LDA-PIV
NASA Astrophysics Data System (ADS)
Griffiths, Jennifer A.; Royle, Gary J.; Bohndiek, Sarah E.; Turchetta, Renato; Chen, Daoyi
2008-04-01
Laser Doppler Anemometry (LDA) and Particle Image Velocimetry (PIV) are commonly used in the analysis of particulates in fluid flows. Despite the successes of these techniques, current instrumentation has placed limitations on the size and shape of the particles undergoing measurement, thus restricting the available data for the many industrial processes now utilising nano/micro particles. Data for spherical and irregularly shaped particles down to the order of 0.1 µm is now urgently required. Therefore, an ultra-fast LDA-PIV system is being constructed for the acquisition of this data. A key component of this instrument is the PIV optical detection system. Both the size and speed of the particles under investigation place challenging constraints on the system specifications: magnification is required within the system in order to visualise particles of the size of interest, but this restricts the corresponding field of view in a linearly inverse manner. Thus, for several images of a single particle in a fast fluid flow to be obtained, the image capture rate and sensitivity of the system must be sufficiently high. In order to fulfil the instrumentation criteria, the optical detection system chosen is a high-speed, lensed, digital imaging system based on state-of-the-art CMOS technology - the 'Vanilla' sensor developed by the UK based MI3 consortium. This novel Active Pixel Sensor is capable of high frame rates and sparse readout. When coupled with an image intensifier, it will have single photon detection capabilities. An FPGA based DAQ will allow real-time operation with minimal data transfer.
A 128 x 128 CMOS Active Pixel Image Sensor for Highly Integrated Imaging Systems
NASA Technical Reports Server (NTRS)
Mendis, Sunetra K.; Kemeny, Sabrina E.; Fossum, Eric R.
1993-01-01
A new CMOS-based image sensor that is intrinsically compatible with on-chip CMOS circuitry is reported. The new CMOS active pixel image sensor achieves low noise, high sensitivity, X-Y addressability, and has simple timing requirements. The image sensor was fabricated using a 2 micrometer p-well CMOS process, and consists of a 128 x 128 array of 40 micrometer x 40 micrometer pixels. The CMOS image sensor technology enables highly integrated smart image sensors, and makes the design, incorporation and fabrication of such sensors widely accessible to the integrated circuit community.
Micro-/nanoscale multi-field coupling in nonlinear photonic devices
NASA Astrophysics Data System (ADS)
Yang, Qing; Wang, Yubo; Tang, Mingwei; Xu, Pengfei; Xu, Yingke; Liu, Xu
2017-08-01
The coupling of mechanics/electronics/photonics may improve the performance of nanophotonic devices not only in the linear region but also in the nonlinear region. This review letter mainly presents the recent advances on multi-field coupling in nonlinear photonic devices. The nonlinear piezoelectric effect and piezo-phototronic effects in quantum wells and fibers show that large second-order nonlinear susceptibilities can be achieved, and second harmonic generation and electro-optic modulation can be enhanced and modulated. Strain engineering can tune the lattice structures and induce second order susceptibilities in central symmetry semiconductors. By combining the absorption-based photoacoustic effect and intensity-dependent photobleaching effect, subdiffraction imaging can be achieved. This review will also discuss possible future applications of these novel effects and the perspective of their research. The review can help us develop a deeper knowledge of the substance of photon-electron-phonon interaction in a micro-/nano- system. Moreover, it can benefit the design of nonlinear optical sensors and imaging devices with a faster response rate, higher efficiency, more sensitivity and higher spatial resolution which could be applied in environmental detection, bio-sensors, medical imaging and so on.
Teich, Sorin; Al-Rawi, Wisam; Heima, Masahiro; Faddoul, Fady F; Goldzweig, Gil; Gutmacher, Zvi; Aizenbud, Dror
2016-10-01
To evaluate the image quality generated by eight commercially available intraoral sensors. Eighteen clinicians ranked the quality of a bitewing acquired from one subject using eight different intraoral sensors. Analytical methods used to evaluate clinical image quality included the Visual Grading Characteristics method, which helps to quantify subjective opinions to make them suitable for analysis. The Dexis sensor was ranked significantly better than Sirona and Carestream-Kodak sensors; and the image captured using the Carestream-Kodak sensor was ranked significantly worse than those captured using Dexis, Schick and Cyber Medical Imaging sensors. The Image Works sensor image was rated the lowest by all clinicians. Other comparisons resulted in non-significant results. None of the sensors was considered to generate images of significantly better quality than the other sensors tested. Further research should be directed towards determining the clinical significance of the differences in image quality reported in this study. © 2016 FDI World Dental Federation.
NASA Astrophysics Data System (ADS)
Ikhsanti, Mila Izzatul; Bouzida, Rana; Wijaya, Sastra Kusuma; Rohmadi, Muttakin, Imamul; Taruno, Warsito P.
2017-02-01
This research explores the feasibility of a capacitance-to-digital converter and an impedance converter as the measurement module of an electrical capacitance tomography (ECT) system. The ECT sensor used was a cylindrical sensor with 8 electrodes. An absolute capacitance measurement system based on the Sigma-Delta capacitance-to-digital converter AD7746 has been shown to produce high-resolution measurements, whereas capacitance measurement over a wide frequency range is possible using the impedance converter AD5933. The measurement accuracy of both the AD7746 and the AD5933 was evaluated against an LCR meter as reference. Biological matter, represented by water and oil, was used as the imaged object, and images were reconstructed using the linear back projection (LBP) algorithm.
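The LBP step itself is a single matrix operation; a minimal sketch is given below, assuming a precomputed sensitivity matrix and two-point (low/high permittivity) calibration data, which are not part of the abstract.

import numpy as np

def lbp_reconstruct(cap_meas, cap_low, cap_high, sens):
    # cap_*: (m,) capacitances for the m electrode pairs (m = 28 for 8 electrodes)
    # sens:  (m, n_pixels) sensitivity matrix of the electrode pairs
    lam = (cap_meas - cap_low) / (cap_high - cap_low)          # normalized capacitances
    img = sens.T @ lam / (sens.T @ np.ones(len(lam)) + 1e-12)  # back-project and normalize
    return np.clip(img, 0.0, 1.0)                              # grey-level permittivity map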
NASA Astrophysics Data System (ADS)
Jannati, Mojtaba; Valadan Zoej, Mohammad Javad; Mokhtarzade, Mehdi
2018-03-01
This paper presents a novel approach to epipolar resampling of cross-track linear pushbroom imagery using the orbital parameters model (OPM). The backbone of the proposed method is the modification of the attitude parameters of the linear array stereo imagery in such a way as to parallelize the approximate conjugate epipolar lines (ACELs) with the instantaneous base line (IBL) of the conjugate image points (CIPs). Afterward, a complementary rotation is applied in order to parallelize all the ACELs throughout the stereo imagery. The new estimated attitude parameters are evaluated based on the direction of the IBL and the ACELs. Due to the spatial and temporal variability of the IBL (changes in the column and row numbers of the CIPs, respectively) and the nonparallel nature of the epipolar lines in stereo linear images, polynomials in both the column and row numbers of the CIPs are used to model the new attitude parameters. As the instantaneous positions of the sensors remain fixed, the digital elevation model (DEM) of the area of interest is not required in the resampling process. According to the experimental results obtained from two pairs of SPOT and RapidEye stereo imagery with high elevation relief, the average absolute values of the remaining vertical parallaxes of the CIPs in the normalized images were 0.19 and 0.28 pixels, respectively, which confirms the high accuracy and applicability of the proposed method.
Luo, Qiaohui; Yu, Neng; Shi, Chunfei; Wang, Xiaoping; Wu, Jianmin
2016-12-01
A surface plasmon resonance (SPR) sensor combined with a nanoscale molecularly imprinted polymer (MIP) film as the recognition element was developed for the selective detection of the antibiotic ciprofloxacin (CIP). The MIP film on the SPR sensor chip was prepared by an in situ photo-initiated polymerization method, which has the advantages of a short polymerization time, controllable thickness and good uniformity. The surface wettability and thickness of the MIP film on the SPR sensor chip were characterized by static contact angle measurement and a stylus profiler. The MIP-SPR sensor exhibited high selectivity, high sensitivity and good stability for ciprofloxacin. The imprinting factors of the MIP-SPR sensor for ciprofloxacin and its structural analogue ofloxacin were 2.63 and 3.80, respectively, much higher than those for azithromycin, dopamine and penicillin. The SPR response had a good linear relation with CIP concentration over the range 10^-11 to 10^-7 mol L^-1. The MIP-SPR sensor also showed good repeatability and stability during cyclic detections. On the basis of the photo-initiated polymerization method, a surface plasmon resonance imaging (SPRi) chip modified with three types of MIP sensing spots was fabricated. The MIPs-SPRi sensor shows different response patterns to ciprofloxacin and azithromycin, revealing the ability to recognize different antibiotic molecules. Copyright © 2016 Elsevier B.V. All rights reserved.
Cooperative Surveillance and Pursuit Using Unmanned Aerial Vehicles and Unattended Ground Sensors
Las Fargeas, Jonathan; Kabamba, Pierre; Girard, Anouck
2015-01-01
This paper considers the problem of path planning for a team of unmanned aerial vehicles performing surveillance near a friendly base. The unmanned aerial vehicles do not possess sensors with automated target recognition capability and, thus, rely on communicating with unattended ground sensors placed on roads to detect and image potential intruders. The problem is motivated by persistent intelligence, surveillance, reconnaissance and base defense missions. The problem is formulated and shown to be intractable. A heuristic algorithm to coordinate the unmanned aerial vehicles during surveillance and pursuit is presented. Revisit deadlines are used to schedule the vehicles' paths nominally. The algorithm uses detections from the sensors to predict intruders' locations and selects the vehicles' paths by minimizing a linear combination of missed deadlines and the probability of not intercepting intruders. An analysis of the algorithm's completeness and complexity is then provided. The effectiveness of the heuristic is illustrated through simulations in a variety of scenarios. PMID:25591168
Huang, Qi; Zhang, Qingyou; Wang, Enze; Zhou, Yanmei; Qiao, Han; Pang, Lanfang; Yu, Fang
2016-01-05
In this paper, a new fluorescent probe has been synthesized and applied as "off-on" sensor for the detection of Al(3+) with a high sensitivity and excellent selectivity in aqueous media. The sensor was easily prepared by one step reaction between rhodamine B hydrazide and pyridoxal hydrochloride named RBP. The structure of the sensor has been characterized by nuclear magnetic resonance and electron spray ionization-mass spectrometry. The fluorescence intensity and absorbance for the sensor showed a good linearity with the concentration of Al(3+) in the range of 0-12.5μM and 8-44μM, respectively, with detection limits of 0.23μM and 1.90μM. The sensor RBP was preliminarily applied to the determination of Al(3+) in water samples from the lake of Henan University and tap water with satisfying results. Moreover, it can be used as a bioimaging reagent for imaging of Al(3+) in living cells. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Yunfei; Liu, Zonglun; Yang, Kui; Zhang, Yi; Xu, Yongqian; Li, Hongjuan; Wang, Chaoxia; Lu, Aiping; Sun, Shiguo
2015-02-01
Copper ions play a vital role in a variety of fundamental physiological processes, not only in human beings and plants but also in a wide range of insects and microorganisms. In this paper, a novel water-soluble ruthenium(II) complex based on o-(phenylazo)aniline was designed and synthesized as a turn-on luminescent sensor for copper(II) ions. The azo group undergoes a specific oxidative cyclization reaction with copper(II) ions to form a highly luminescent benzotriazole, triggering a significant luminescence increase that is linear with the copper(II) concentration. The sensor is distinguished by its high sensitivity (over 80-fold luminescence switch-on response), good selectivity (changes in emission intensity in the presence of other metal ions or amino acids were negligible) and low detection limit (4.42 nM) in water. Moreover, the copper(II) luminescent sensor exhibited good photostability under light irradiation. Furthermore, the applicability of the proposed sensor to biological samples was studied, and copper(II) ions were successfully imaged in living pea aphids.
Multiparameter Estimation in Networked Quantum Sensors
NASA Astrophysics Data System (ADS)
Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.
2018-02-01
We introduce a general model for a network of quantum sensors, and we use this model to consider the following question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or nonlinear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.
Shape Tracking of a Dexterous Continuum Manipulator Utilizing Two Large Deflection Shape Sensors
Farvardin, Amirhossein; Grupp, Robert; Murphy, Ryan J.; Taylor, Russell H.; Iordachita, Iulian
2016-01-01
Dexterous continuum manipulators (DCMs) can greatly increase the reachable region and steerability for minimally and less invasive surgery. Many such procedures require the DCM to be capable of producing large deflections. Real-time control of the DCM shape requires sensors that accurately detect and report large deflections. We propose a novel, large-deflection shape sensor to track the shape of a 35 mm DCM designed for less invasive treatment of osteolysis. Two shape sensors, each with three fiber Bragg grating (FBG) sensing nodes, are embedded within the DCM, with the sensors' distal ends fixed to the DCM. The DCM centerline is computed using the centerlines of the two sensor curves. An experimental platform was built and different groups of experiments were carried out, including free bending and three cases of bending with obstacles. For each experiment, the DCM drive cable was pulled with a precise linear slide stage, the DCM centerline was calculated, and a 2D camera image was captured for verification. The reconstructed shape created with the shape sensors is compared with the ground truth generated by executing a 2D-3D registration between the camera image and the 3D DCM model. Results show that the distal tip tracking accuracy is 0.40 ± 0.30 mm for free bending and 0.61 ± 0.15 mm, 0.93 ± 0.05 mm and 0.23 ± 0.10 mm for the three cases of bending with obstacles. The data suggest that FBG arrays can accurately characterize the shape of large-deflection DCMs. PMID:27761103
Model-based estimation and control for off-axis parabolic mirror alignment
NASA Astrophysics Data System (ADS)
Fang, Joyce; Savransky, Dmitry
2018-02-01
This paper proposes a model-based estimation and control method for off-axis parabolic mirror (OAP) alignment. Current approaches to automated optical alignment typically require additional wavefront sensors. We propose a self-aligning method using only focal-plane images captured by the existing camera. Image processing methods and Karhunen-Loève (K-L) decomposition are used to extract measurements for the observer in the closed-loop control system. The system has linear dynamics in the state transition and a nonlinear mapping from the state to the measurement. An iterative extended Kalman filter (IEKF) is shown to accurately estimate the unknown states, and nonlinear observability is discussed. A linear-quadratic regulator (LQR) is applied to correct the misalignments. The method is validated experimentally on an optical bench with a commercial OAP. One hundred tests were conducted to demonstrate the consistency between runs.
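A generic iterated EKF measurement update of the kind referred to above can be sketched as follows; the observation model h and its Jacobian are placeholders, not the paper's K-L-coefficient measurement model.

import numpy as np

def iekf_update(x_prior, P, z, h, H_jac, R, n_iter=5):
    # x_prior, P: prior state and covariance; z: measurement; R: measurement noise
    x = x_prior.copy()
    for _ in range(n_iter):
        H = H_jac(x)                                   # relinearize about the iterate
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x_prior + K @ (z - h(x) - H @ (x_prior - x))
    P_post = (np.eye(len(x)) - K @ H) @ P
    return x, P_post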
A multifunctional force microscope for soft matter with in situ imaging
NASA Astrophysics Data System (ADS)
Roberts, Paul; Pilkington, Georgia A.; Wang, Yumo; Frechette, Joelle
2018-04-01
We present the multifunctional force microscope (MFM), a normal and lateral force-measuring instrument with in situ imaging. In the MFM, forces are calculated from the normal and lateral deflection of a cantilever as measured via fiber optic sensors. The motion of the cantilever is controlled normally by a linear micro-translation stage and a piezoelectric actuator, while the lateral motion of the sample is controlled by another linear micro-translation stage. The micro-translation stages allow for travel distances that span 25 mm with a minimum step size of 50 nm, while the piezo has a minimum step size of 0.2 nm, but a 100 μm maximum range. Custom-designed cantilevers allow for the forces to be measured over 4 orders of magnitude (from 50 μN to 1 N). We perform probe tack, friction, and hydrodynamic drainage experiments to demonstrate the sensitivity, versatility, and measurable force range of the instrument.
Comparison of SeaWinds Backscatter Imaging Algorithms
Long, David G.
2017-01-01
This paper compares the performance and tradeoffs of various backscatter imaging algorithms for the SeaWinds scatterometer when multiple passes over a target are available. Reconstruction methods are compared with conventional gridding algorithms. In particular, the performance and tradeoffs in conventional ‘drop in the bucket’ (DIB) gridding at the intrinsic sensor resolution are compared to high-spatial-resolution imaging algorithms such as fine-resolution DIB and the scatterometer image reconstruction (SIR) that generate enhanced-resolution backscatter images. Various options for each algorithm are explored, including considering both linear and dB computation. The effects of sampling density and reconstruction quality versus time are explored. Both simulated and actual data results are considered. The results demonstrate the effectiveness of high-resolution reconstruction using SIR as well as its limitations and the limitations of DIB and fDIB. PMID:28828143
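A minimal version of the 'drop in the bucket' baseline (averaging every measurement whose centre falls in a cell, in either linear or dB space) might look like the sketch below; it is illustrative only and not the operational SeaWinds processing.

import numpy as np

def dib_grid(lat, lon, sigma0_db, lat_edges, lon_edges, average_in_db=False):
    vals = sigma0_db if average_in_db else 10.0 ** (sigma0_db / 10.0)  # dB -> linear
    acc = np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
    cnt = np.zeros_like(acc)
    i = np.digitize(lat, lat_edges) - 1
    j = np.digitize(lon, lon_edges) - 1
    ok = (i >= 0) & (i < acc.shape[0]) & (j >= 0) & (j < acc.shape[1])
    np.add.at(acc, (i[ok], j[ok]), vals[ok])
    np.add.at(cnt, (i[ok], j[ok]), 1)
    with np.errstate(invalid="ignore", divide="ignore"):
        mean = acc / cnt                                # empty cells become NaN
    return mean if average_in_db else 10.0 * np.log10(mean)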
Chen, Longyi; Tse, Wai Hei; Chen, Yi; McDonald, Matthew W; Melling, James; Zhang, Jin
2017-05-15
In this paper, a nanostructured biosensor is developed to detect glucose in tears using a fluorescence resonance energy transfer (FRET) quenching mechanism. The designed FRET pair, comprising the donor, CdSe/ZnS quantum dots (QDs), and the acceptor, dextran-binding malachite green (MG-dextran), was conjugated to concanavalin A (Con A), a protein with specific affinity for glucose. In the presence of glucose, the emission of the QDs, quenched through the FRET mechanism, is restored by displacement of the dextran from Con A. To obtain a dual-modulation sensor for convenient and accurate detection, the nanostructured FRET sensors were assembled onto a patterned ZnO nanorod array deposited on a synthetic silicone hydrogel. Consequently, the concentration of glucose detected by the patterned sensor can be read out both as fluorescence spectra with a high signal-to-noise ratio and as calibrated image pixel values. The photoluminescence intensity of the patterned FRET sensor increases linearly with glucose concentration from 0.03 mmol/L to 3 mmol/L, which covers the range of tear glucose levels for both diabetic and healthy subjects. Meanwhile, the calibrated pixel intensities of the fluorescence images captured by a handheld fluorescence microscope increase with increasing glucose. Four male Sprague-Dawley rats with different blood glucose concentrations were used to demonstrate the quick response of the patterned FRET sensor to 2 µL tear samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Alsubaie, Naif M; Youssef, Ahmed A; El-Sheimy, Naser
2017-09-30
This paper introduces a new method which facilitates the use of smartphones as a handheld low-cost mobile mapping system (MMS). Smartphones are becoming more sophisticated and smarter and are quickly closing the gap between computers and portable tablet devices. The current generation of smartphones is equipped with low-cost GPS receivers, high-resolution digital cameras, and micro-electro-mechanical systems (MEMS)-based navigation sensors (e.g., accelerometers, gyroscopes, magnetic compasses, and barometers). These sensors are in fact the essential components of a MMS. However, smartphone navigation sensors suffer from the poor accuracy of global navigation satellite system (GNSS) positioning, accumulated drift, and high signal noise. These issues affect the accuracy of the initial Exterior Orientation Parameters (EOPs) that are input into the bundle adjustment algorithm, which then produces inaccurate 3D mapping solutions. This paper proposes new methodologies for increasing the accuracy of direct geo-referencing of smartphones using relative orientation and smartphone motion sensor measurements, as well as integrating geometric scene constraints into free-network bundle adjustment. The new methodologies fuse the relative orientations of the captured images and their corresponding motion sensor measurements to improve the initial EOPs. Then, the geometric features (e.g., horizontal and vertical lines) visible in each image are extracted and used as constraints in the bundle adjustment procedure, which corrects the relative position and orientation of the 3D mapping solution.
NASA Technical Reports Server (NTRS)
Russell, Richard; Wincheski, Russell; Jablonski, David; Washabaugh, Andy; Sheiretov, Yanko; Martin, Christopher; Goldfine, Neil
2011-01-01
Composite Overwrapped Pressure Vessels (COPVs) are used in essentially all NASA spacecraft, launch vehicles and payloads to contain high-pressure fluids for propulsion, life support systems and science experiments. Failure of any COPV, either in flight or during ground processing, would result in catastrophic damage to the spacecraft or payload and could lead to loss of life. Therefore, NASA continues to investigate new non-destructive evaluation (NDE) methods to inspect COPVs for structural anomalies and to provide a means for in-situ structural health monitoring (SHM) during operational service. Partnering with JENTEK Sensors, engineers at NASA Kennedy Space Center have successfully conducted a proof-of-concept study to develop Meandering Winding Magnetometer (MWM) eddy current sensors designed to make direct measurements of the stresses in the internal layers of a carbon-fiber-composite-wrapped COPV. During this study three different MWM sensors were tested at three orientations to demonstrate the ability of the technology to measure stresses at various fiber orientations and depths. These results showed good correlation with actual surface strain gage measurements. MWM-Array technology for scanning COPVs can reliably be used to image and detect mechanical damage. To validate this conclusion, several COPVs were scanned to obtain a baseline, and then each COPV was impacted at varying energy levels and rescanned. The baseline-subtracted images were used to demonstrate damage detection. These scans were performed with two different MWM-Arrays with different geometries for near-surface and deeper-penetration imaging, at multiple frequencies and in multiple orientations of the linear MWM drive. This presentation will include a review of micromechanical models that relate measured sensor responses to composite material constituent properties, validated by the proof-of-concept study, as the basis for SHM and NDE data analysis, as well as potential improvements, including design changes to miniaturize the sensors and make them durable in the vacuum of space.
Chen, Qin; Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R S
2016-09-01
The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulating techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements on nanophotonic image sensors are presented and analyzed including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors and nanostructured-based multispectral image sensors. This novel combination of cutting edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next generation image sensors beyond Moore's Law expectations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Ni, Guangming; Liu, Lin; Zhang, Jing; Liu, Juanxiu; Liu, Yong
2018-01-01
With the development of the liquid crystal display (LCD) module industry, LCD modules are becoming more precise and larger, which imposes demanding imaging requirements on automated optical inspection (AOI). Here, we report a high-resolution, clearly focused imaging optomechatronic system for precise LCD module bonding AOI. The system achieves high-resolution imaging using a line scan camera (LSC) triggered by a linear optical encoder, together with self-adaptive focusing over the whole large imaging region using the LSC and a laser displacement sensor, which relaxes the machining, assembly, and motion-control requirements of AOI devices. Results show that this system can directly achieve clearly focused imaging for AOI inspection of large LCD module bonding with 0.8 μm image resolution, a 2.65 mm scan imaging width, and no theoretical limit on the imaging width. All of these are significant for AOI inspection in the LCD module industry and other fields that require imaging large regions with high resolution.
NASA Astrophysics Data System (ADS)
Zhang, Wenzeng; Chen, Nian; Wang, Bin; Cao, Yipeng
2005-01-01
The rocket engine is a core part of aerospace transportation and thrust systems, and its research and development is very important in national defense, aviation and aerospace. A novel vision sensor is developed, which can be used for error detection in arc-length control and seam tracking during precise pulsed TIG welding of the extension of the rocket engine jet tube. The vision sensor has many advantages, such as high-quality imaging, compactness and multiple functions. The optics design, mechanism design and circuit design of the vision sensor are described in detail. Utilizing the mirror image of the tungsten electrode in the weld pool, a novel method is proposed to detect, from a single weld image, the arc length and the seam-tracking error of the tungsten electrode relative to the center line of the joint seam. A calculation model is derived from the geometric relation between the tungsten electrode, the weld pool, the mirror image of the electrode in the weld pool and the joint seam. New methodologies are given to detect the arc length and the seam-tracking error. Based on analysis of the experimental results, a systematic-error correction method based on a linear function is developed to improve the detection precision of the arc length and the seam-tracking error. Experimental results show that the final precision of the system reaches 0.1 mm in detecting both the arc length and the seam-tracking error of the tungsten electrode relative to the center line of the joint seam.
Typical effects of laser dazzling CCD camera
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Zhang, Jianmin; Shao, Bibo; Cheng, Deyan; Ye, Xisheng; Feng, Guobin
2015-05-01
In this article, an overview of laser dazzling effects on buried-channel CCD cameras is given. CCDs are sorted into staring and scanning types; the former includes the frame transfer and interline transfer types, while the latter includes linear and time delay integration types. All CCDs must perform four primary tasks in generating an image: charge generation, charge collection, charge transfer and charge measurement. In a camera, lenses are needed to deliver the optical signal to the CCD sensor, with techniques applied to suppress stray light, and electronic circuits are needed to process the CCD output signal using many electronic techniques. The dazzling effects are the combined result of light-distribution distortion and charge-distribution distortion, which arise in the lens and the sensor, respectively. Strictly speaking, the lens itself does not distort the light distribution; in general, lenses are so well designed and fabricated that their stray light can be neglected. A laser, however, is intense enough to make the stray light obvious. In the CCD image sensor, a laser can induce very large charge generation; charge-transfer inefficiency and blooming then distort the charge distribution. Commonly, the largest signal output by the CCD sensor is limited by the capacity of its collection well and cannot exceed the dynamic range within which the subsequent electronic circuits operate normally, so the signal is not further distorted in the post-processing circuits. However, some techniques in those circuits can cause the dazzling effects to appear as different phenomena in the final image.
Fiber-Optic Strain Sensors With Linear Characteristics
NASA Technical Reports Server (NTRS)
Egalon, Claudio O.; Rogowski, Robert S.
1993-01-01
Fiber-optic modal domain strain sensors having linear characteristics over wide range of strains proposed. Conceived in effort to improve older fiber-optic strain sensors. Linearity obtained by appropriate choice of design parameters. Pattern of light and dark areas at output end of optical fiber produced by interference between electromagnetic modes in which laser beam propagates in fiber. Photodetector monitors intensity at one point in pattern.
Intrinsic coincident linear polarimetry using stacked organic photovoltaics.
Roy, S Gupta; Awartani, O M; Sen, P; O'Connor, B T; Kudenov, M W
2016-06-27
Polarimetry has widespread applications within atmospheric sensing, telecommunications, biomedical imaging, and target detection. Several existing methods of imaging polarimetry trade off the sensor's spatial resolution for polarimetric resolution, and often have some form of spatial registration error. To mitigate these issues, we have developed a system using oriented polymer-based organic photovoltaics (OPVs) that can preferentially absorb linearly polarized light. Additionally, the OPV cells can be made semitransparent, enabling multiple detectors to be cascaded along the same optical axis. Since each device performs a partial polarization measurement of the same incident beam, high temporal resolution is maintained with the potential for inherent spatial registration. In this paper, a Mueller matrix model of the stacked OPV design is provided. Based on this model, a calibration technique is developed and presented. This calibration technique and model are validated with experimental data, taken with a cascaded three cell OPV Stokes polarimeter, capable of measuring incident linear polarization states. Our results indicate polarization measurement error of 1.2% RMS and an average absolute radiometric accuracy of 2.2% for the demonstrated polarimeter.
NASA Astrophysics Data System (ADS)
Guggenheim, James A.; Zhang, Edward Z.; Beard, Paul C.
2017-03-01
The planar Fabry-Pérot (FP) sensor provides high quality photoacoustic (PA) images but beam walk-off limits sensitivity and thus penetration depth to ≈1 cm. Planoconcave microresonator sensors eliminate beam walk-off enabling sensitivity to be increased by an order-of-magnitude whilst retaining the highly favourable frequency response and directional characteristics of the FP sensor. The first tomographic PA images obtained in a tissue-realistic phantom using the new sensors are described. These show that the microresonator sensors provide near identical image quality as the planar FP sensor but with significantly greater penetration depth (e.g. 2-3 cm) due to their higher sensitivity. This offers the prospect of whole body small animal imaging and clinical imaging to depths previously unattainable using the FP planar sensor.
Multispectral and polarimetric photodetection using a plasmonic metasurface
NASA Astrophysics Data System (ADS)
Pelzman, Charles; Cho, Sang-Yeon
2018-01-01
We present a metasurface-integrated Si 2-D CMOS sensor array for multispectral and polarimetric photodetection applications. The demonstrated sensor is based on the polarization selective extraordinary optical transmission from periodic subwavelength nanostructures, acting as artificial atoms, known as meta-atoms. The meta-atoms were created by patterning periodic rectangular apertures that support optical resonance at the designed spectral bands. By spatially separating meta-atom clusters with different lattice constants and orientations, the demonstrated metasurface can convert the polarization and spectral information of an optical input into a 2-D intensity pattern. As a proof-of-concept experiment, we measured the linear components of the Stokes parameters directly from captured images using a CMOS camera at four spectral bands. Compared to existing multispectral polarimetric sensors, the demonstrated metasurface-integrated CMOS system is compact and does not require any moving components, offering great potential for advanced photodetection applications.
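The linear Stokes components follow from the standard four-orientation relations; the per-pixel sketch below assumes already gain-calibrated intensity images at analyzer orientations of 0, 45, 90 and 135 degrees for one spectral band.

import numpy as np

def linear_stokes(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)        # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / s0        # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)           # angle of linear polarization (rad)
    return s0, s1, s2, dolp, aolp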
Capacitive touch sensing: signal and image processing algorithms
NASA Astrophysics Data System (ADS)
Baharav, Zachi; Kakarala, Ramakrishna
2011-03-01
Capacitive touch sensors have been in use for many years, and recently gained center stage with the ubiquitous use in smart-phones. In this work we will analyze the most common method of projected capacitive sensing, that of absolute capacitive sensing, together with the most common sensing pattern, that of diamond-shaped sensors. After a brief introduction to the problem, and the reasons behind its popularity, we will formulate the problem as a reconstruction from projections. We derive analytic solutions for two simple cases: circular finger on a wire grid, and square finger on a square grid. The solutions give insight into the ambiguities of finding finger location from sensor readings. The main contribution of our paper is the discussion of interpolation algorithms including simple linear interpolation, curve fitting (parabolic and Gaussian), filtering, general look-up-table, and combinations thereof. We conclude with observations on the limits of the present algorithmic methods, and point to possible future research.
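As a concrete example of the curve-fitting interpolation mentioned above, the sketch below estimates the finger position along one axis by three-point parabolic (or, in log space, Gaussian) peak interpolation; the electrode pitch and the sample readings are assumed values.

import numpy as np

def interpolate_touch(readings, pitch=5.0, method="parabolic"):
    # readings: 1-D array of baseline-subtracted sensor values; pitch in mm
    k = int(np.argmax(readings))
    if k == 0 or k == len(readings) - 1:
        return k * pitch                              # peak at the edge: no interpolation
    ym, y0, yp = readings[k - 1], readings[k], readings[k + 1]
    if method == "gaussian":                          # Gaussian fit = parabola in log space
        ym, y0, yp = np.log(ym), np.log(y0), np.log(yp)
    delta = 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)    # sub-electrode offset in [-0.5, 0.5]
    return (k + delta) * pitch

print(interpolate_touch(np.array([0.1, 0.4, 1.0, 0.7, 0.2])))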
Impact of LANDSAT MSS sensor differences on change detection analysis
NASA Technical Reports Server (NTRS)
Likens, W. C.; Wrigley, R. C.
1983-01-01
Several 512 by 512 pixel subwindows from simultaneously acquired scene pairs obtained by the LANDSAT 2, 3 and 4 multispectral scanners were coregistered, using the LANDSAT 4 scenes as the base to which the other images were registered. Scattergrams between the coregistered scenes (a form of contingency analysis) were used to radiometrically compare data from the various sensors. Mode values were derived and used to visually fit a linear regression. Root mean square errors of the registration varied between 0.1 and 1.5 pixels. There appear to be no major problems preventing the use of the LANDSAT 4 MSS with previous MSS sensors for change detection, provided the noise interference can be removed or minimized. Data normalizations for change detection should be based on the data rather than solely on calibration information. This allows simultaneous normalization of the atmosphere as well as the radiometry.
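In code, the scattergram-and-mode normalization could be approximated as below, with an ordinary least-squares fit standing in for the visual fit described above; the bin count and minimum-count threshold are assumptions.

import numpy as np

def scattergram_gain_offset(dn_other, dn_ref, n_levels=128, min_count=50):
    # 2-D histogram (scattergram) of coregistered DNs, other sensor vs. reference
    h, _, _ = np.histogram2d(dn_other, dn_ref, bins=n_levels,
                             range=[[0, n_levels], [0, n_levels]])
    levels = np.arange(n_levels)
    modes = h.argmax(axis=1)                 # modal reference DN for each input DN
    keep = h.max(axis=1) >= min_count        # ignore sparsely populated levels
    gain, offset = np.polyfit(levels[keep], modes[keep], 1)
    return gain, offset                      # dn_normalized = gain * dn_other + offset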
The feasibility of using Microsoft Kinect v2 sensors during radiotherapy delivery.
Edmunds, David M; Bashforth, Sophie E; Tahavori, Fatemeh; Wells, Kevin; Donovan, Ellen M
2016-11-08
Consumer-grade distance sensors, such as the Microsoft Kinect devices (v1 and v2), have been investigated for use as marker-free motion monitoring systems for radiotherapy. The radiotherapy delivery environment is challenging for such sensors because of the proximity to electromagnetic interference (EMI) from the pulse forming network which fires the magnetron and electron gun of a linear accelerator (linac) during radiation delivery, as well as the requirement to operate them from the control area. This work investigated whether using Kinect v2 sensors as motion monitors was feasible during radiation delivery. Three sensors were used, each with a 12 m USB 3.0 active cable that replaced the supplied 3 m USB 3.0 cable. Distance output data from the Kinect v2 sensors were recorded under four conditions of linac operation: (i) powered up only, (ii) pulse forming network operating with no radiation, (iii) pulse repetition frequency varied between 6 Hz and 400 Hz, (iv) dose rate varied between 50 and 1450 monitor units (MU) per minute. A solid water block was used as an object and imaged when static, when moved in a set of steps from 0.6 m to 2.0 m from the sensor, and when moving dynamically in two sinusoidal-like trajectories. Few additional image artifacts were observed and there was no impact on the tracking of the motion patterns (root mean squared accuracy of 1.4 and 1.1 mm, respectively). The sensors' distance accuracy varied by 2.0 to 3.8 mm (1.2 to 1.4 mm post distance calibration) across the range measured; the precision was 1 mm. There was minimal effect from the EMI on the distance calibration data: 0 mm or 1 mm reported distance change (2 mm maximum change at one position). Kinect v2 sensors operated with 12 m USB 3.0 active cables appear robust to the radiotherapy treatment environment. © 2016 The Authors.
Real time thermal imaging for analysis and control of crystal growth by the Czochralski technique
NASA Technical Reports Server (NTRS)
Wargo, M. J.; Witt, A. F.
1992-01-01
A real time thermal imaging system with temperature resolution better than +/- 0.5 C and spatial resolution of better than 0.5 mm has been developed. It has been applied to the analysis of melt surface thermal field distributions in both Czochralski and liquid encapsulated Czochralski growth configurations. The sensor can provide single/multiple point thermal information; a multi-pixel averaging algorithm has been developed which permits localized, low noise sensing and display of optical intensity variations at any location in the hot zone as a function of time. Temperature distributions are measured by extraction of data along a user selectable linear pixel array and are simultaneously displayed, as a graphic overlay, on the thermal image.
NASA Technical Reports Server (NTRS)
Rahman, Zia-ur
2005-01-01
The purpose of this research was to develop enhancement and multi-sensor fusion algorithms and techniques to make it safer for the pilot to fly in what would normally be considered Instrument Flight Rules (IFR) conditions, where pilot visibility is severely restricted due to fog, haze or other weather phenomena. We proposed to use the non-linear Multiscale Retinex (MSR) as the basic driver for developing an integrated enhancement and fusion engine. When we started this research, the MSR was being applied primarily to grayscale imagery such as medical images, or to three-band color imagery, such as that produced in consumer photography; it was not, however, being applied to other imagery such as that produced by infrared image sources. However, we felt that by using the MSR algorithm in conjunction with multiple imaging modalities such as long-wave infrared (LWIR), short-wave infrared (SWIR), and the visible spectrum (VIS), we could substantially improve on the then state-of-the-art enhancement algorithms, especially in poor visibility conditions. We proposed the following tasks: 1) investigating the effects of applying the MSR to LWIR and SWIR images, which consisted of optimizing the algorithm in terms of surround scales and weights for these spectral bands; 2) fusing the LWIR and SWIR images with the VIS images using the MSR framework to determine the best possible representation of the desired features; 3) evaluating different mixes of LWIR, SWIR and VIS bands for maximum fog and haze reduction, and low light level compensation; 4) modifying the existing algorithms to work with video sequences. Over the course of the 3-year research period, we were able to accomplish these tasks and report on them at various internal presentations at NASA Langley Research Center, and in presentations and publications elsewhere. A description of the work performed under the tasks is provided in Section 2. The complete list of relevant publications during the research period is provided in Section 5. This research also resulted in the generation of intellectual property.
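For reference, a single-band multiscale retinex of the kind used as the driver here can be written in a few lines; the surround scales and weights below are common defaults, not necessarily those chosen in this project.

import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(img, sigmas=(15, 80, 250), weights=None, eps=1.0):
    img = img.astype(np.float64) + eps                 # avoid log(0)
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)
    out = np.zeros_like(img)
    for w, s in zip(weights, sigmas):
        surround = gaussian_filter(img, s)             # Gaussian surround (illumination estimate)
        out += w * (np.log(img) - np.log(surround))    # single-scale retinex term
    return (out - out.min()) / (out.max() - out.min() + 1e-12)  # stretch for display/fusion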
Zhang, Xiaoyong; Qiu, Bensheng; Wei, Zijun; Yan, Fei; Shi, Caiyun; Su, Shi; Liu, Xin; Ji, Jim X; Xie, Guoxi
2017-01-01
To develop and assess a three-dimensional (3D) self-gated technique for the evaluation of myocardial infarction (MI) in a mouse model without the use of an external electrocardiogram (ECG) trigger or respiratory motion sensor on a 3T clinical MR system. A 3D T1-weighted GRE sequence with stack-of-stars sampling trajectories was developed and performed on six mice with MIs that were injected with a gadolinium-based contrast agent on a 3T clinical MR system. Respiratory and cardiac self-gating signals were derived from the Cartesian mapping of the k-space center along the partition encoding direction by bandpass filtering in the image domain. The data were then realigned according to the predetermined self-gating signals for subsequent image reconstruction. In order to accelerate the data acquisition, image reconstruction was based on compressed sensing (CS) theory by exploiting the temporal sparsity of the reconstructed images. In addition, images were also reconstructed from the same realigned data by a conventional regridding method to demonstrate the advantages of the proposed reconstruction method. Furthermore, the accuracy of detecting MI by the proposed method was assessed using histological analysis as the standard reference. Linear regression and Bland-Altman analysis were used to assess the agreement between the proposed method and the histological analysis. Compared to the conventional regridding method, the proposed CS method reconstructed images with much less streaking artifact, as well as a better contrast-to-noise ratio (CNR) between the blood and myocardium (4.1 ± 2.1 vs. 2.9 ± 1.1, p = 0.031). Linear regression and Bland-Altman analysis demonstrated that excellent correlation was obtained between infarct sizes derived from the proposed method and histology analysis. A 3D T1-weighted self-gating technique for mouse cardiac imaging was developed, which has potential for accurately evaluating MIs in mice on a 3T clinical MR system without the use of an external ECG trigger or respiratory motion sensor.
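The self-gating step (band-pass filtering the repeatedly sampled k-space centre to separate respiratory and cardiac waveforms) could be sketched as below; the band edges are typical values for mice and the sampling-rate argument is an assumption, not taken from the paper.

import numpy as np
from scipy.signal import butter, filtfilt

def self_gating_signals(kcenter, fs, resp_band=(0.5, 3.0), card_band=(5.0, 10.0)):
    # kcenter: complex k-space centre samples over time; fs: their sampling rate (Hz)
    mag = np.abs(kcenter) - np.mean(np.abs(kcenter))

    def bandpass(x, lo, hi):
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    return bandpass(mag, *resp_band), bandpass(mag, *card_band)  # (respiratory, cardiac)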
Pang, Yu; Zhang, Kunning; Yang, Zhen; Jiang, Song; Ju, Zhenyi; Li, Yuxing; Wang, Xuefeng; Wang, Danyang; Jian, Muqiang; Zhang, Yingying; Liang, Renrong; Tian, He; Yang, Yi; Ren, Tian-Ling
2018-03-27
Recently, wearable pressure sensors have attracted tremendous attention because of their potential applications in monitoring physiological signals for human healthcare. Sensitivity and linearity are the two most essential parameters for pressure sensors. Although various designed micro/nanostructure morphologies have been introduced, the trade-off between sensitivity and linearity has not been well balanced. Human skin, which contains force receptors in a reticular layer, has a high sensitivity even for large external stimuli. Herein, inspired by the skin epidermis with high-performance force sensing, we have proposed a special surface morphology with spinosum microstructure of random distribution via the combination of an abrasive paper template and reduced graphene oxide. The sensitivity of the graphene pressure sensor with random distribution spinosum (RDS) microstructure is as high as 25.1 kPa^-1 in a wide linearity range of 0-2.6 kPa. Our pressure sensor exhibits superior comprehensive properties compared with previous surface-modified pressure sensors. According to simulation and mechanism analyses, the spinosum microstructure and random distribution contribute to the high sensitivity and large linearity range, respectively. In addition, the pressure sensor shows promising potential in detecting human physiological signals, such as heartbeat, respiration, phonation, and human motions of a pushup, arm bending, and walking. The wearable pressure sensor array was further used to detect gait states of supination, neutral, and pronation. The RDS microstructure provides an alternative strategy to improve the performance of pressure sensors and extend their potential applications in monitoring human activities.
A Planar Two-Dimensional Superconducting Bolometer Array for the Green Bank Telescope
NASA Technical Reports Server (NTRS)
Benford, Dominic; Staguhn, Johannes G.; Chervenak, James A.; Chen, Tina C.; Moseley, S. Harvey; Wollack, Edward J.; Devlin, Mark J.; Dicker, Simon R.; Supanich, Mark
2004-01-01
In order to provide high sensitivity rapid imaging at 3.3 mm (90 GHz) for the Green Bank Telescope - the world's largest steerable aperture - a camera is being built by the University of Pennsylvania, NASA/GSFC, and NRAO. The heart of this camera is an 8×8 close-packed, Nyquist-sampled detector array. We have designed and are fabricating a functional superconducting bolometer array system using a monolithic planar architecture. Read out by SQUID multiplexers, the superconducting transition edge sensors will provide fast, linear, sensitive response for high performance imaging. This will provide the first ever superconducting bolometer array on a facility instrument.
A New Approach to Detect Mover Position in Linear Motors Using Magnetic Sensors
Paul, Sarbajit; Chang, Junghwan
2015-01-01
A new method to detect the mover position of a linear motor is proposed in this paper. The method employs a simple, low-cost Hall-effect-based magnetic sensor unit to detect the mover position of the linear motor. As the motor moves, Hall-effect sensor modules separated by 120° electrical, together with the three-phase balanced condition (va + vb + vc = 0), are used to produce three-phase signals. The sensor output voltage signals are normalized to unit amplitude to minimize amplitude errors. A three-phase to two-phase transformation is then applied to the unit-amplitude signals to suppress the triplen (third-multiple) harmonic components. The final output thus obtained is converted to position data using the arctangent function. The measurement accuracy of the new method is analyzed experimentally and compared with the conventional two-phase method. Using the same number of sensor modules as the conventional two-phase method, the proposed method gives more accurate position information than the conventional system, in which the sensors are separated by 90° electrical angles. PMID:26506348
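The three-phase to two-phase transformation and arctangent step described above can be sketched as follows; the conversion of electrical angle to mechanical position via a pole pitch is an assumed scaling, not a parameter taken from the paper.

import numpy as np

def mover_position(va, vb, vc, pole_pitch):
    # Normalise each Hall signal to unit amplitude, as described in the abstract.
    va, vb, vc = (v / np.max(np.abs(v)) for v in (va, vb, vc))
    # Three-phase to two-phase (Clarke) transformation.
    v_alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
    v_beta = (vb - vc) / np.sqrt(3.0)
    theta = np.unwrap(np.arctan2(v_beta, v_alpha))  # electrical angle
    return theta * pole_pitch / (2.0 * np.pi)       # assumed metres per electrical cycle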
NASA Astrophysics Data System (ADS)
Pagnutti, Mary; Ryan, Robert E.; Cazenavette, George; Gold, Maxwell; Harlan, Ryan; Leggett, Edward; Pagnutti, James
2017-01-01
A comprehensive radiometric characterization of raw-data format imagery acquired with the Raspberry Pi 3 and V2.1 camera module is presented. The Raspberry Pi is a high-performance single-board computer designed to educate and solve real-world problems. This small computer supports a camera module that uses a Sony IMX219 8 megapixel CMOS sensor. This paper shows that scientific and engineering-grade imagery can be produced with the Raspberry Pi 3 and its V2.1 camera module. Raw imagery is shown to be linear with exposure and gain (ISO), which is essential for scientific and engineering applications. Dark frame, noise, and exposure stability assessments along with flat fielding results, spectral response measurements, and absolute radiometric calibration results are described. This low-cost imaging sensor, when calibrated to produce scientific quality data, can be used in computer vision, biophotonics, remote sensing, astronomy, high dynamic range imaging, and security applications, to name a few.
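Verifying that the raw signal is linear with exposure, as reported above, reduces to a least-squares line fit of mean dark-subtracted frame values against exposure time. A minimal sketch (the input arrays are assumed to be prepared by the user):

import numpy as np

def linearity_check(mean_signals, exposures):
    # Fit mean raw signal (DN) versus exposure time (s) and report R^2.
    x, y = np.asarray(exposures, float), np.asarray(mean_signals, float)
    slope, intercept = np.polyfit(x, y, 1)
    y_fit = slope * x + intercept
    r2 = 1.0 - np.sum((y - y_fit) ** 2) / np.sum((y - y.mean()) ** 2)
    return slope, intercept, r2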
Selecting algorithms, sensors, and linear bases for optimum spectral recovery of skylight.
López-Alvarez, Miguel A; Hernández-Andrés, Javier; Valero, Eva M; Romero, Javier
2007-04-01
In a previous work [Appl. Opt. 44, 5688 (2005)] we found the optimum sensors for a planned multispectral system for measuring skylight in the presence of noise by adapting a linear spectral recovery algorithm proposed by Maloney and Wandell [J. Opt. Soc. Am. A 3, 29 (1986)]. Here we continue along these lines by simulating the responses of three to five Gaussian sensors and recovering spectral information from noise-affected sensor data by trying out four different estimation algorithms, three different sizes for the training set of spectra, and various linear bases. We attempt to find the optimum combination of sensors, recovery method, linear basis, and matrix size to recover the best skylight spectral power distributions from colorimetric and spectral (in the visible range) points of view. We show how all these parameters play an important role in the practical design of a real multispectral system and how to obtain several relevant conclusions from simulating the behavior of sensors in the presence of noise.
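The linear recovery framework referred to above expresses each spectrum in a low-dimensional basis and inverts the simulated sensor responses. A minimal least-squares sketch, with the sensor responsivity and basis matrices assumed given, is:

import numpy as np

def recover_spectra(responses, sensors, basis):
    # responses: (n_samples, n_sensors) measured data r ~ S s + noise
    # sensors:   (n_sensors, n_wavelengths) responsivities S
    # basis:     (n_wavelengths, n_basis) linear basis B, with s = B w
    M = sensors @ basis                              # maps basis weights to responses
    W, *_ = np.linalg.lstsq(M, responses.T, rcond=None)
    return (basis @ W).T                             # recovered spectra, one per row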
Robotic Vehicle Communications Interoperability
1988-08-01
[Table fragment from the report: vehicle control functions (engine starter/cold start, fire suppression, fording control, fuel control, fuel tank selector, garage toggle, gear selector, hazard warning, ...) and electro-optic sensor options (sensor switch; video, radar, IR thermal imaging system, image intensifier, laser ranger; video camera selector: forward, stereo, rear; sensor control).]
Full spectrum optical safeguard
Ackerman, Mark R.
2008-12-02
An optical safeguard device with two linear variable Fabry-Perot filters aligned relative to a light source with at least one of the filters having a nonlinear dielectric constant material such that, when a light source produces a sufficiently high intensity light, the light alters the characteristics of the nonlinear dielectric constant material to reduce the intensity of light impacting a connected optical sensor. The device can be incorporated into an imaging system on a moving platform, such as an aircraft or satellite.
Comparative analysis of multisensor satellite monitoring of Arctic sea-ice
Belchansky, G.I.; Mordvintsev, Ilia N.; Douglas, David C.
1999-01-01
This report presents a comparative analysis of nearly coincident Russian OKEAN-01 polar-orbiting satellite data, Special Sensor Microwave Imager (SSM/I) imagery, and Advanced Very High Resolution Radiometer (AVHRR) imagery. OKEAN-01 ice concentration algorithms utilize active and passive microwave measurements and a linear mixture model for measured values of the brightness temperature and the radar backscatter. SSM/I and AVHRR ice concentrations were computed with the NASA Team algorithm and with visible and thermal-infrared wavelength AVHRR data, respectively.
Process simulation in digital camera system
NASA Astrophysics Data System (ADS)
Toadere, Florin
2012-06-01
The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal and the color processing and rendering. We consider the image acquisition system to be linear shift invariant and axial. The light propagation is orthogonal to the system. We use a spectral image processing algorithm in order to simulate the radiometric properties of a digital camera. In the algorithm we take into consideration the transmittances of the light source, lenses, and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution between the different point spread functions of the optical components. We use a Cooke triplet, the aperture, the light fall-off and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion and JPG compression. We reconstruct the noisy blurred image by blending differently exposed images in order to reduce the photon shot noise; we also filter the fixed pattern noise and sharpen the image. Then we have the color processing blocks: white balancing, color correction, gamma correction, and conversion from XYZ color space to RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
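The radiometric chain described above (spectral weighting, PSF convolution, Bayer sampling) can be illustrated with a toy sketch; the spectral curves and PSF are placeholders, not the paper's measured data, and the noise, ADC, and color-processing stages are omitted.

import numpy as np
from scipy.signal import fftconvolve

def simulate_bayer_image(scene, source, optics_t, cfa_r, cfa_g, cfa_b, psf):
    # scene: (H, W, n_lambda) reflectances; the other spectral inputs are
    # (n_lambda,) curves (illustrative placeholders).
    def channel(cfa):
        radiance = scene * (source * optics_t * cfa)  # spectral weighting
        img = radiance.sum(axis=2)                    # integrate over wavelength
        return fftconvolve(img, psf, mode="same")     # combined optical PSF
    r, g, b = channel(cfa_r), channel(cfa_g), channel(cfa_b)
    mosaic = np.empty_like(r)                         # RGGB Bayer sampling
    mosaic[0::2, 0::2] = r[0::2, 0::2]
    mosaic[0::2, 1::2] = g[0::2, 1::2]
    mosaic[1::2, 0::2] = g[1::2, 0::2]
    mosaic[1::2, 1::2] = b[1::2, 1::2]
    return mosaic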
NASA Astrophysics Data System (ADS)
Chen, Chun-Jen; Wu, Wen-Hong; Huang, Kuo-Cheng
2009-08-01
A multi-function lens test instrument is reported in this paper. The system can evaluate the image resolution, image quality, depth of field, image distortion and light intensity distribution of the tested lens by changing the test patterns. The system consists of a tested lens, a CCD camera, a linear motorized stage, a system fixture, an observer LCD monitor, and a notebook for providing the patterns. The LCD monitor displays a series of specified test patterns sent by the notebook. Each displayed pattern then passes through the tested lens and is imaged onto the CCD camera sensor. Consequently, the system can evaluate the performance of the tested lens by analyzing the CCD camera image with specially designed software. The major advantage of this system is that it can complete the whole test quickly, without interruption due to part replacement, because the test patterns are statically displayed on the monitor and controlled by the notebook.
An infrared/video fusion system for military robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, A.W.; Roberts, R.S.
1997-08-05
Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information including robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser, can easily neutralize visual sensors. In order to provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information, and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images. They are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.
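One simple way to let the infrared image enhance the visible image rather than obscure it, in the spirit of the discussion above, is a local-contrast-weighted blend. This is only an illustrative sketch with assumed parameters, not the fusion scheme actually fielded in the reported system.

import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_vis_ir(visible, infrared, sigma=5.0, alpha=0.5):
    # Blend a co-registered IR image into the visible image where the IR
    # local contrast is high; sigma and alpha are illustrative choices.
    vis = visible.astype(np.float64)
    ir = infrared.astype(np.float64)
    ir_detail = np.abs(ir - gaussian_filter(ir, sigma))   # IR local contrast
    w = alpha * ir_detail / (ir_detail.max() + 1e-9)      # per-pixel IR weight
    return (1.0 - w) * vis + w * ir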
Texture- and deformability-based surface recognition by tactile image analysis.
Khasnobish, Anwesha; Pal, Monalisa; Tibarewala, D N; Konar, Amit; Pal, Kunal
2016-08-01
Deformability and texture are two unique object characteristics which are essential for appropriate surface recognition by tactile exploration. Tactile sensation is required to be incorporated in artificial arms for rehabilitative and other human-computer interface applications to achieve efficient and human-like manoeuvring. To accomplish this, surface recognition by tactile data analysis is one of the prerequisites. The aim of this work is to develop an effective technique for the identification of various surfaces based on deformability and texture by analysing tactile images which are obtained during dynamic exploration of the item by artificial arms whose gripper is fitted with tactile sensors. Tactile data have been acquired while human beings as well as a robot hand fitted with tactile sensors explored the objects. The tactile images are pre-processed, and relevant features are extracted from the tactile images. These features are provided as input to variants of the support vector machine (SVM), linear discriminant analysis, and the k-nearest neighbour (kNN) classifier. Based on deformability, six household surfaces are recognized from their corresponding tactile images. Moreover, five surfaces of daily use are classified based on texture. The method adopted in the former two cases has also been applied for deformability- and texture-based recognition of four biomembranes, i.e. membranes prepared from biomaterials which can be used for various applications such as drug delivery and implants. The linear SVM performed best for recognizing surface deformability with an accuracy of 83 % in 82.60 ms, whereas the kNN classifier recognizes surfaces of daily use having different textures with an accuracy of 89 % in 54.25 ms, and the SVM with radial basis function kernel recognizes biomembranes with an accuracy of 78 % in 53.35 ms. The classifiers are observed to generalize well on the unseen test datasets with very high performance to achieve efficient material recognition based on deformability and texture.
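The classification stage described above can be sketched with scikit-learn; the feature vectors are assumed to be already extracted from the pre-processed tactile images, and the classifier settings are illustrative rather than those tuned in the study.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def classify_surfaces(features, labels):
    # features: (n_samples, n_features) from tactile images; labels: surface IDs.
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.3, random_state=0, stratify=labels)
    models = {
        "linear SVM": make_pipeline(StandardScaler(), SVC(kernel="linear")),
        "RBF SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    }
    return {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}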
NASA Technical Reports Server (NTRS)
1984-01-01
Among the topics discussed are NASA's land remote sensing plans for the 1980s, the evolution of Landsat 4 and the performance of its sensors, the Landsat 4 thematic mapper image processing system radiometric and geometric characteristics, data quality, image data radiometric analysis and spectral/stratigraphic analysis, and thematic mapper agricultural, forest resource and geological applications. Also covered are geologic applications of side-looking airborne radar, digital image processing, the large format camera, the RADARSAT program, the SPOT 1 system's program status, distribution plans, and simulation program, Space Shuttle multispectral linear array studies of the optical and biological properties of terrestrial land cover, orbital surveys of solar-stimulated luminescence, the Space Shuttle imaging radar research facility, and Space Shuttle-based polar ice sounding altimetry.
Rabi cropped area forecasting of parts of Banaskatha District, Gujarat using MRS RISAT-1 SAR data
NASA Astrophysics Data System (ADS)
Parekh, R. A.; Mehta, R. L.; Vyas, A.
2016-10-01
Radar sensors can be used for large-scale vegetation mapping and monitoring using backscatter coefficients in different polarisations and wavelength bands. Due to cloud and haze interference, optical images are not always available at all phenological stages important for crop discrimination. Moreover, in cloud-prone areas, an exclusively SAR approach would provide an operational solution. This paper presents the results of classifying the cropped and non-cropped areas using multi-temporal SAR images. Dual-polarised C-band RISAT MRS (Medium Resolution ScanSAR mode) data were acquired on 9th Dec. 2012, 28th Jan. 2013 and 22nd Feb. 2013 at 18 m spatial resolution. Intensity images of two polarisations (HH, HV) were extracted and converted into backscattering coefficient images. Cross-polarisation ratio (CPR) images and Radar fractional vegetation density index (RFDI) images were created from the temporal data and integrated with the multi-temporal images. Signatures of cropped and uncropped areas were used for maximum likelihood supervised classification. Separability of the cropped and uncropped classes using different polarisation combinations and classification accuracy analysis were carried out. An FCC (False Color Composite) prepared using the best three SAR polarisations in the data set was compared with a LISS-III (Linear Imaging Self-Scanning System-III) image. The acreage under rabi crops was estimated. The methodology was developed for the rabi cropped area, owing to the availability of SAR data for the rabi season. However, the approach is more relevant for acreage estimation of kharif crops, when frequent cloud cover prevails during the monsoon season and optical sensors fail to deliver good quality images.
CMOS Image Sensors: Electronic Camera On A Chip
NASA Technical Reports Server (NTRS)
Fossum, E. R.
1995-01-01
Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On- chip analog to digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low cost uses.
Albion: the UK 3rd generation high-performance thermal imaging programme
NASA Astrophysics Data System (ADS)
McEwen, R. K.; Lupton, M.; Lawrence, M.; Knowles, P.; Wilson, M.; Dennis, P. N. J.; Gordon, N. T.; Lees, D. J.; Parsons, J. F.
2007-04-01
The first generation of high performance thermal imaging sensors in the UK was based on two axis opto-mechanical scanning systems and small (4-16 element) arrays of the SPRITE detector, developed during the 1970s. Almost two decades later, a 2nd Generation system, STAIRS C was introduced, based on single axis scanning and a long linear array of approximately 3000 elements. The UK has now begun the industrialisation of 3rd Generation High Performance Thermal Imaging under a programme known as "Albion". Three new high performance cadmium mercury telluride arrays are being manufactured. The CMT material is grown by MOVPE on low cost substrates and bump bonded to the silicon read out circuit (ROIC). To maintain low production costs, all three detectors are designed to fit with existing standard Integrated Detector Cooling Assemblies (IDCAs). The two largest focal planes are conventional devices operating in the MWIR and LWIR spectral bands. A smaller format LWIR device is also described which has a smart ROIC, enabling much longer stare times than are feasible with conventional pixel circuits, thus achieving very high sensitivity. A new reference surface technology for thermal imaging sensors is described, based on Negative Luminescence (NL), which offers several advantages over conventional peltier references, improving the quality of the Non-Uniformity Correction (NUC) algorithms.
Hattori, Toshiaki; Masaki, Yoshitomo; Atsumi, Kazuya; Kato, Ryo; Sawada, Kazuaki
2010-01-01
Two-dimensional real-time observation of potassium ion distributions was achieved using an ion imaging device based on charge-coupled device (CCD) and metal-oxide semiconductor technologies, and an ion selective membrane. The CCD potassium ion image sensor was equipped with an array of 32 × 32 pixels (1024 pixels). It could record five frames per second with an area of 4.16 × 4.16 mm^2. Potassium ion images were produced instantly. The leaching of potassium ion from a 3.3 M KCl Ag/AgCl reference electrode was dynamically monitored in aqueous solution. The potassium ion selective membrane on the semiconductor consisted of plasticized poly(vinyl chloride) (PVC) with bis(benzo-15-crown-5). The addition of a polyhedral oligomeric silsesquioxane to the plasticized PVC membrane greatly improved adhesion of the membrane onto the Si3N4 of the semiconductor surface, and the potential response was stabilized. The potential response was linear over the logarithmic potassium ion concentration range of 10^-2 to 10^-5 M. The selectivity coefficients were K^pot(K+,Li+) = 10^-2.85, K^pot(K+,Na+) = 10^-2.30, K^pot(K+,Rb+) = 10^-1.16, and K^pot(K+,Cs+) = 10^-2.05.
Single Photon Counting Large Format Imaging Sensors with High Spatial and Temporal Resolution
NASA Astrophysics Data System (ADS)
Siegmund, O. H. W.; Ertley, C.; Vallerga, J. V.; Cremer, T.; Craven, C. A.; Lyashenko, A.; Minot, M. J.
High time resolution astronomical and remote sensing applications have been addressed with microchannel plate based imaging, photon time tagging detector sealed tube schemes. These are being realized with the advent of cross strip readout techniques with high performance encoding electronics and atomic layer deposited (ALD) microchannel plate technologies. Sealed tube devices up to 20 cm square have now been successfully implemented with sub nanosecond timing and imaging. The objective is to provide sensors with large areas (25 cm2 to 400 cm2) with spatial resolutions of <20 μm FWHM and timing resolutions of <100 ps for dynamic imaging. New high efficiency photocathodes for the visible regime are discussed, which also allow response down to below 150 nm for UV sensing. Borosilicate MCPs are providing high performance, and when processed with ALD techniques are providing order of magnitude lifetime improvements and enhanced photocathode stability. New developments include UV/visible photocathodes, ALD MCPs, and high resolution cross strip anodes for 100 mm detectors. Tests with 50 mm format cross strip readouts suitable for Planacon devices show spatial resolutions better than 20 μm FWHM, with good image linearity while using low gain (~10^6). Current cross strip encoding electronics can accommodate event rates of >5 MHz and event timing accuracy of 100 ps. High-performance ASIC versions of these electronics are in development with better event rate, power and mass suitable for spaceflight instruments.
NASA Astrophysics Data System (ADS)
Leroux, Romain; Chatellier, Ludovic; David, Laurent
2018-01-01
This article is devoted to the estimation of time-resolved particle image velocimetry (TR-PIV) flow fields using time-resolved point measurements of a voltage signal obtained by hot-film anemometry. A multiple linear regression model is first defined to map the TR-PIV flow fields onto the voltage signal. Due to the high temporal resolution of the signal acquired by the hot-film sensor, the estimates of the TR-PIV flow fields are obtained with a multiple linear regression method called orthonormalized partial least squares regression (OPLSR). Subsequently, this model is incorporated as the observation equation in an ensemble Kalman filter (EnKF) applied on a proper orthogonal decomposition reduced-order model to stabilize it while reducing the effects of the hot-film sensor noise. This method is assessed for the reconstruction of the flow around a NACA0012 airfoil at a Reynolds number of 1000 and an angle of attack of 20°. Comparisons with multi-time-delay modified linear stochastic estimation show that both the OPLSR and the EnKF combined with OPLSR are more accurate, as they produce a much lower relative estimation error and provide a faithful reconstruction of the time evolution of the velocity flow fields.
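The regression step described above, mapping the hot-film signal to the coefficients of a reduced-order basis, can be approximated with ordinary partial least squares as a stand-in for OPLSR; the function below is a sketch under that assumption and omits the EnKF stage.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

def estimate_fields(hotfilm_train, pod_coeffs_train, hotfilm_new, pod_modes, mean_field):
    # hotfilm_*: (n_snapshots, n_delays) time-delayed sensor samples;
    # pod_coeffs_train: (n_snapshots, n_modes); pod_modes: (n_modes, n_points).
    pls = PLSRegression(n_components=min(10, pod_coeffs_train.shape[1]))
    pls.fit(hotfilm_train, pod_coeffs_train)
    coeffs = pls.predict(hotfilm_new)
    return mean_field + coeffs @ pod_modes   # estimated velocity fields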
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-07
... INTERNATIONAL TRADE COMMISSION [Docket No. 2895] Certain CMOS Image Sensors and Products.... International Trade Commission has received a complaint entitled Certain CMOS Image Sensors and Products... importation, and the sale within the United States after importation of certain CMOS image sensors and...
Imaging tristimulus colorimeter for the evaluation of color in printed textiles
NASA Astrophysics Data System (ADS)
Hunt, Martin A.; Goddard, James S., Jr.; Hylton, Kathy W.; Karnowski, Thomas P.; Richards, Roger K.; Simpson, Marc L.; Tobin, Kenneth W., Jr.; Treece, Dale A.
1999-03-01
The high-speed production of textiles with complicated printed patterns presents a difficult problem for a colorimetric measurement system. Accurate assessment of product quality requires a repeatable measurement using a standard color space, such as CIELAB, and the use of a perceptually based color difference formula, e.g., the ΔE(CMC) color difference formula. Image based color sensors used for on-line measurement are not colorimetric by nature and require a non-linear transformation of the component colors based on the spectral properties of the incident illumination, imaging sensor, and the actual textile color. This research and development effort describes a benchtop, proof-of-principle system that implements a projection onto convex sets (POCS) algorithm for mapping component color measurements to standard tristimulus values and incorporates structural and color based segmentation for improved precision and accuracy. The POCS algorithm consists of determining the closed convex sets that describe the constraints on the reconstruction of the true tristimulus values based on the measured imperfect values. We show that using a simulated D65 standard illuminant, commercial filters and a CCD camera, accurate (below perceptibility limits) per-region ΔE(CMC) values can be measured on real textile samples.
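POCS itself is just a cyclic application of projection operators onto the constraint sets. A generic sketch follows; the two example constraints (non-negativity and consistency with a linear measurement model) are illustrative stand-ins for the paper's specific sets.

import numpy as np

def pocs(x0, projections, n_iter=50):
    # Cyclically apply projection operators onto closed convex constraint sets.
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        for project in projections:
            x = project(x)
    return x

def nonneg(x):
    return np.clip(x, 0.0, None)

def consistency_projection(A, b):
    # Projection onto the affine set {x : A x = b} (A assumed full row rank).
    pinv = np.linalg.pinv(A)
    return lambda x: x - pinv @ (A @ x - b)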
Multidisciplinary study on Wyoming test sites
NASA Technical Reports Server (NTRS)
Houston, R. S. (Principal Investigator); Marrs, R. W.; Borgman, L. E.
1975-01-01
The author has identified the following significant results. Ten EREP data passes over the Wyoming test site provided excellent S190A and S190B coverage and some useful S192 imagery. These data were employed in an evaluation of the EREP imaging sensors in several earth resources applications. Boysen Reservoir and Hyattsville were test areas for band to band comparison of the S190 and S192 sensors and for evaluation of the image data for geologic mapping. Contrast measurements were made from the S192 image data for typical sequence of sedimentary rocks. Histograms compiled from these measurements show that near infrared S192 bands provide the greatest amount of contrast between geologic units. Comparison was also made between LANDSAT imagery and S190B and aerial photography for regional land use mapping. The S190B photography was found far superior to the color composite LANDSAT imagery and was almost as effective as the 1:120,000 scale aerial photography. A map of linear elements prepared from LANDSAT and EREP imagery of the southwestern Bighorn Mountains provided an important aid in defining the relationship between fracture and ground water movement through the Madison aquifer.
Shi, Bingfang; Su, Yubin; Zhang, Liangliang; Liu, Rongjun; Huang, Mengjiao; Zhao, Shulin
2016-08-15
A broad-range fluorescent pH sensor based on carbon nanoparticles with nitrogen-rich functional groups (N-CNs) was prepared by one-pot hydrothermal treatment of melamine and triethanolamine. The as-prepared N-CNs exhibited excellent photoluminescence properties with an absolute quantum yield (QY) of 11.0%. Furthermore, the N-CNs possessed a broad-range pH response. The linear pH response range was 3.0 to 12.0, which is much wider than that of previously reported fluorescent pH sensors. The possible mechanism for the pH-sensitive response of the N-CNs was ascribed to photoinduced electron transfer (PET). Cell toxicity experiments showed that the as-prepared N-CNs exhibited low cytotoxicity and excellent biocompatibility, with cell viabilities of more than 87%. The proposed N-CN-based pH sensor was used for pH monitoring of environmental water samples and pH fluorescence imaging of live T24 cells. The N-CNs are promising as a convenient and general fluorescent pH sensor for environmental monitoring and bioimaging applications.
NASA Astrophysics Data System (ADS)
Ha, W.; Gowda, P. H.; Oommen, T.; Howell, T. A.; Hernandez, J. E.
2010-12-01
High spatial resolution Land Surface Temperature (LST) images are required to estimate evapotranspiration (ET) at a field scale for irrigation scheduling purposes. Satellite sensors such as Landsat 5 Thematic Mapper (TM) and Moderate Resolution Imaging Spectroradiometer (MODIS) can offer images at several spectral bandwidths including visible, near-infrared (NIR), shortwave-infrared, and thermal-infrared (TIR). The TIR images usually have coarser spatial resolutions than those from non-thermal infrared bands. Due to this technical constraint of the satellite sensors on these platforms, image downscaling has been proposed in the field of ET remote sensing. This paper explores the potential of the Support Vector Machines (SVM) to perform downscaling of LST images derived from aircraft (4 m spatial resolution), TM (120 m), and MODIS (1000 m) using normalized difference vegetation index images derived from simultaneously acquired high resolution visible and NIR data (1 m for aircraft, 30 m for TM, and 250 m for MODIS). The SVM is a new generation machine learning algorithm that has found a wide application in the field of pattern recognition and time series analysis. The SVM would be ideally suited for downscaling problems due to its generalization ability in capturing non-linear regression relationship between the predictand and the multiple predictors. Remote sensing data acquired over the Texas High Plains during the 2008 summer growing season will be used in this study. Accuracy assessment of the downscaled 1, 30, and 250 m LST images will be made by comparing them with LST data measured with infrared thermometers at a small spatial scale, upscaled 30 m aircraft-based LST images, and upscaled 250 m TM-based LST images, respectively.
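The downscaling idea sketched above, i.e. learning the LST-NDVI relationship at the coarse scale and applying it at the fine scale, could be prototyped with support vector regression as follows; the kernel parameters are assumptions and the residual-sharpening step often used in such schemes is omitted.

import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def downscale_lst(ndvi_coarse, lst_coarse, ndvi_fine):
    # Fit LST = f(NDVI) at coarse resolution, then evaluate f on fine-resolution NDVI.
    X = ndvi_coarse.reshape(-1, 1)
    y = lst_coarse.ravel()
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
    model.fit(X, y)
    return model.predict(ndvi_fine.reshape(-1, 1)).reshape(ndvi_fine.shape)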
The Development of Methodologies for Determining Non-Linear Effects in Infrasound Sensors
2010-09-01
Hart, Darren M.; Parks, Harold V.; Rembold, Randy K.
Over the past year, four new infrasound sensor designs were evaluated for common performance characteristics, i.e., power consumption, response (amplitude and phase), noise, full scale, and dynamic range. In the process of evaluating a fifth infrasound sensor, which is an update of an original design
An Optical Sensor with Polyaniline-Gold Hybrid Nanostructures for Monitoring pH in Saliva.
Luo, Chongdai; Wang, Yangyang; Li, Xuemeng; Jiang, Xueqin; Gao, Panpan; Sun, Kang; Zhou, Jianhua; Zhang, Zhiguang; Jiang, Qing
2017-03-17
Saliva contains important personal physiological information that is related to some diseases, and it is a valuable source of biochemical information that can be collected rapidly, frequently, and without stress. In this article, we report a new and simple localized surface plasmon resonance (LSPR) substrate composed of polyaniline (PANI)-gold hybrid nanostructures as an optical sensor for monitoring the pH of saliva samples. The overall appearance and topography of the substrates, the composition, and the wettability of the LSPR surfaces were characterized by optical and scanning electron microscope (SEM) images, infrared spectra, and contact angle measurements, respectively. The PANI-gold hybrid substrate readily responded to pH. The response time was very short: 3.5 s when the pH switched from 2 to 7, and 4.5 s from 7 to 2. The changes in the visible-near-infrared (NIR) spectra of this sensor upon varying the pH in solution showed that, for the absorption at wavelengths of 665 nm and 785 nm, the sensitivities were 0.0299 a.u./pH (a.u. = arbitrary unit) with a linear range of pH 5-8 and 0.0234 a.u./pH with a linear range of pH 2-8, respectively. Using this new sensor, the pH of a real saliva sample was monitored and was consistent with parallel measurements made with a standard laboratory method. The results suggest that this novel LSPR sensor shows great potential in the field of mobile healthcare and home medical devices, and could also be modified with different sensitive materials to detect various molecules or ions in the future.
Lee, Hyung-Seok; Lee, Hwi Don; Kim, Hyo Jin; Cho, Jae Du; Jeong, Myung Yung; Kim, Chang-Seok
2014-01-01
A linearized wavelength-swept thermo-optic laser chip was applied to demonstrate a fiber Bragg grating (FBG) sensor interrogation system. A broad tuning range of 11.8 nm was periodically obtained from the laser chip for a sweep rate of 16 Hz. To measure the linear time response of the reflection signal from the FBG sensor, a programmed driving signal was directly applied to the wavelength-swept laser chip. The linear wavelength response of the applied strain was clearly extracted with an R-squared value of 0.99994. To test the feasibility of the system for dynamic measurements, the dynamic strain was successfully interrogated with a repetition rate of 0.2 Hz by using this FBG sensor interrogation system. PMID:25177803
Hahn, C; Weber, G; Märtin, R; Höfer, S; Kämpfer, T; Stöhlker, Th
2016-04-01
Single-photon spectroscopy of pulsed, high-intensity sources of hard X-rays - such as laser-generated plasmas - is often hampered by the pileup of several photons absorbed by the unsegmented, large-volume sensors routinely used for the detection of high-energy radiation. Detectors based on the Timepix chip, with a segmentation pitch of 55 μm and the possibility to be equipped with high-Z sensor chips, constitute an attractive alternative to commonly used passive solutions such as image plates. In this report, we present energy calibration and characterization measurements of such devices. The achievable energy resolution is comparable to that of scintillators for γ spectroscopy. Moreover, we also introduce a simple two-detector Compton polarimeter setup with a polarimeter quality of (98 ± 1)%. Finally, a proof-of-principle polarimetry experiment is discussed, where we studied the linear polarization of bremsstrahlung emitted by a laser-driven plasma and found an indication of the X-ray polarization direction depending on the polarization state of the incident laser pulse.
Vu, Cung; Nihei, Kurt T.; Schmitt, Denis P.; Skelt, Christopher; Johnson, Paul A.; Guyer, Robert; TenCate, James A.; Le Bas, Pierre-Yves
2013-01-01
In some aspects of the disclosure, a method for creating three-dimensional images of non-linear properties and the compressional to shear velocity ratio in a region remote from a borehole using a conveyed logging tool is disclosed. In some aspects, the method includes arranging a first source in the borehole and generating a steered beam of elastic energy at a first frequency; arranging a second source in the borehole and generating a steerable beam of elastic energy at a second frequency, such that the steerable beam at the first frequency and the steerable beam at the second frequency intercept at a location away from the borehole; receiving at the borehole by a sensor a third elastic wave, created by a three wave mixing process, with a frequency equal to a difference between the first and second frequencies and a direction of propagation towards the borehole; determining a location of a three wave mixing region based on the arrangement of the first and second sources and on properties of the third wave signal; and creating three-dimensional images of the non-linear properties using data recorded by repeating the generating, receiving and determining at a plurality of azimuths, inclinations and longitudinal locations within the borehole. The method is additionally used to generate three dimensional images of the ratio of compressional to shear acoustic velocity of the same volume surrounding the borehole.
Photon Counting Imaging with an Electron-Bombarded Pixel Image Sensor
Hirvonen, Liisa M.; Suhling, Klaus
2016-01-01
Electron-bombarded pixel image sensors, where a single photoelectron is accelerated directly into a CCD or CMOS sensor, allow wide-field imaging at extremely low light levels as they are sensitive enough to detect single photons. This technology allows the detection of up to hundreds or thousands of photon events per frame, depending on the sensor size, and photon event centroiding can be employed to recover resolution lost in the detection process. Unlike photon events from electron-multiplying sensors, the photon events from electron-bombarded sensors have a narrow, acceleration-voltage-dependent pulse height distribution. Thus a gain voltage sweep during exposure in an electron-bombarded sensor could allow photon arrival time determination from the pulse height with sub-frame exposure time resolution. We give a brief overview of our work with electron-bombarded pixel image sensor technology and recent developments in this field for single photon counting imaging, and examples of some applications. PMID:27136556
Poletti Papi, Maurício A; Caetano, Fabio R; Bergamini, Márcio F; Marcolino-Junior, Luiz H
2017-06-01
The present work describes the synthesis of a new conductive nanocomposite based on polypyrrole (PPy) and silver nanoparticles (PPy-AgNP) via a facile reverse microemulsion method, and its application as a non-enzymatic electrochemical sensor for glucose detection. Focusing on the best sensor performance, all experimental parameters used in the synthesis of the nanocomposite were optimized based on its electrochemical response to glucose. Characterization of the optimized material by FT-IR, cyclic voltammetry, and XRD measurements and TEM images showed good monodispersion of semispherical Ag nanoparticles capped by the PPy structure, with an average size of 12 ± 5 nm. Under the best analytical conditions, the proposed sensor exhibited a glucose response over a linear dynamic range of 25 to 2500 μmol L^-1, with a limit of detection of 3.6 μmol L^-1. Recovery studies with human saliva samples, ranging from 99 to 105%, demonstrated the accuracy and feasibility of this easily constructed, low-cost non-enzymatic electrochemical sensor for glucose determination.
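For a sensor with the linear response reported above, the working calibration is a straight-line fit of current against concentration, with the detection limit commonly taken as three times the blank standard deviation divided by the slope. A minimal sketch (illustrative, not the authors' data treatment):

import numpy as np

def calibrate(concentrations, currents):
    # Linear calibration over the assumed-linear range (e.g. 25-2500 umol/L).
    slope, intercept = np.polyfit(concentrations, currents, 1)
    return slope, intercept

def limit_of_detection(blank_currents, slope):
    # LOD = 3 * standard deviation of the blank / calibration slope.
    return 3.0 * np.std(blank_currents, ddof=1) / slope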
Calibration Of An Omnidirectional Vision Navigation System Using An Industrial Robot
NASA Astrophysics Data System (ADS)
Oh, Sung J.; Hall, Ernest L.
1989-09-01
The characteristics of an omnidirectional vision navigation system were studied to determine position accuracy for the navigation and path control of a mobile robot. Experiments for calibration and other parameters were performed using an industrial robot to conduct repetitive motions. The accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor provided errors of less than 1 pixel on each axis. Linearity between zenith angle and image location was tested at four different locations. Angular error of less than 1° and radial error of less than 1 pixel were observed at moderate speed variations. The experimental information and the test of coordinated operation of the equipment provide understanding of characteristics as well as insight into the evaluation and improvement of the prototype dynamic omnivision system. The calibration of the sensor is important since the accuracy of navigation influences the accuracy of robot motion. This sensor system is currently being developed for a robot lawn mower; however, wider applications are obvious. The significance of this work is that it adds to the knowledge of the omnivision sensor.
Ultrasensitive displacement sensor based on tunable horn-shaped resonators
NASA Astrophysics Data System (ADS)
Tian, Ying; Wu, Jiong; Yu, Le; Yang, Helin; Huang, Xiaojun
2018-04-01
In this paper, we propose a novel double-deck displacement sensor with high linearity based on tunable horn-shaped resonators. The designed sensor consists of two substrate layers etched with copper metallization in various shapes. When the upper strip-type resonator layer is displaced relative to the bottom horn-shaped resonator layer, the resonance frequency of the sensor is red-shifted. The sensitivity of the sensor is around 207.2 MHz mm^-1 over a 4 mm linear dynamic range. We fabricated a sample of the proposed displacement sensor, and the simulated results were verified by experiment. The proposed displacement sensor is suitable for further miniaturization using MEMS technology.
Performance test and image correction of CMOS image sensor in radiation environment
NASA Astrophysics Data System (ADS)
Wang, Congzheng; Hu, Song; Gao, Chunming; Feng, Chang
2016-09-01
CMOS image sensors rival CCDs in domains that include strong radiation resistance as well as simple drive signals, so they are widely applied in high-energy radiation environments, such as space optical imaging and video monitoring of nuclear power equipment. However, the silicon of CMOS image sensors suffers from the ionizing dose effect under high-energy rays, and indicators of the image sensor such as the signal-to-noise ratio (SNR), non-uniformity (NU), and bad points (BP) are degraded by the radiation. The radiation environment for the test experiments was generated by a 60Co γ-ray source. A camera module based on the CMV2000 image sensor from CMOSIS Inc. was chosen as the research object. The rays used for the experiments had a dose rate of 20 krad/h. In the test experiments, the output signals of the image sensor pixels were measured at different total doses. The data analysis showed that, with the accumulation of irradiation dose, the SNR of the image sensor decreased, the NU increased, and the number of bad points increased. Correction of these indicators is necessary, as they are the main factors affecting image quality. An image processing algorithm combining a local threshold method with NU correction based on the non-local means (NLM) method was applied to the experimental data in this work. The image processing results showed that the correction can effectively suppress the bad points, improve the SNR, and reduce the NU.
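A simplified reading of the correction chain described above, i.e. local-threshold bad-point replacement followed by a flat-field style non-uniformity correction, is sketched below; it is not the authors' NLM-based algorithm, and the threshold factor and window size are assumptions.

import numpy as np
from scipy.ndimage import median_filter

def correct_radiation_damage(frame, flat, k=5.0, win=3):
    # 1) Local-threshold bad-point detection against a median neighbourhood.
    med = median_filter(frame.astype(np.float64), size=win)
    resid = frame - med
    bad = np.abs(resid) > k * resid.std()
    fixed = np.where(bad, med, frame)            # replace bad points by local median
    # 2) Simple flat-field style non-uniformity correction.
    gain = flat.mean() / np.clip(flat, 1e-6, None)
    return fixed * gain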
High speed three-dimensional laser scanner with real time processing
NASA Technical Reports Server (NTRS)
Lavelle, Joseph P. (Inventor); Schuet, Stefan R. (Inventor)
2008-01-01
A laser scanner computes a range from a laser line to an imaging sensor. The laser line illuminates a detail within an area covered by the imaging sensor, the area having a first dimension and a second dimension. The detail has a dimension perpendicular to the area. A traverse moves a laser emitter coupled to the imaging sensor, at a height above the area. The laser emitter is positioned at an offset along the scan direction with respect to the imaging sensor, and is oriented at a depression angle with respect to the area. The laser emitter projects the laser line along the second dimension of the area at a position where an image frame is acquired. The imaging sensor is sensitive to laser reflections from the detail produced by the laser line. The imaging sensor images the laser reflections from the detail to generate the image frame. A computer having a pipeline structure is connected to the imaging sensor for reception of the image frame, and for computing the range to the detail using height, depression angle and/or offset. The computer displays the range to the area and detail thereon covered by the image frame.
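The range computation in such a scanner is a triangulation from the known geometry. The sketch below is a hedged simplification, not the patent's formulation: it assumes a camera looking straight down with the laser plane inclined at the depression angle, so that a detail of height z displaces the imaged laser line along the scan direction by z divided by the tangent of that angle.

import numpy as np

def detail_height(pixel_shift, depression_angle_deg, camera_height, focal_px):
    # Approximate ground sample distance for a nadir-looking camera.
    gsd = camera_height / focal_px               # metres per pixel (approximation)
    shift_world = pixel_shift * gsd              # laser-line displacement on the surface
    return shift_world * np.tan(np.radians(depression_angle_deg))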
Recognizing Banknote Fitness with a Visible Light One Dimensional Line Image Sensor
Pham, Tuyen Danh; Park, Young Ho; Kwon, Seung Yong; Nguyen, Dat Tien; Vokhidov, Husan; Park, Kang Ryoung; Jeong, Dae Sik; Yoon, Sungsoo
2015-01-01
In general, dirty banknotes that have creases or soiled surfaces should be replaced by new banknotes, whereas clean banknotes should be recirculated. Therefore, the accurate classification of banknote fitness when sorting paper currency is an important and challenging task. Most previous research has focused on sensors that used visible, infrared, and ultraviolet light. Furthermore, there was little previous research on the fitness classification for Indian paper currency. Therefore, we propose a new method for classifying the fitness of Indian banknotes, with a one-dimensional line image sensor that uses only visible light. The fitness of banknotes is usually determined by various factors such as soiling, creases, and tears, etc. although we just consider banknote soiling in our research. This research is novel in the following four ways: first, there has been little research conducted on fitness classification for the Indian Rupee using visible-light images. Second, the classification is conducted based on the features extracted from the regions of interest (ROIs), which contain little texture. Third, 1-level discrete wavelet transformation (DWT) is used to extract the features for discriminating between fit and unfit banknotes. Fourth, the optimal DWT features that represent the fitness and unfitness of banknotes are selected based on linear regression analysis with ground-truth data measured by densitometer. In addition, the selected features are used as the inputs to a support vector machine (SVM) for the final classification of banknote fitness. Experimental results showed that our method outperforms other methods. PMID:26343654
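The feature extraction and classification stages described above can be sketched as follows; the use of Haar wavelets and sub-band mean-absolute values is a simplification of the regression-selected DWT features in the paper, and the ROI extraction is assumed to be done beforehand.

import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(roi):
    # 1-level 2-D DWT of a banknote ROI; use sub-band energies as features.
    cA, (cH, cV, cD) = pywt.dwt2(roi.astype(np.float64), "haar")
    return np.array([np.mean(np.abs(c)) for c in (cA, cH, cV, cD)])

def train_fitness_classifier(rois, labels):
    # rois: list of 2-D arrays cropped from the visible-light line-scan images.
    X = np.vstack([dwt_features(r) for r in rois])
    return SVC(kernel="rbf").fit(X, labels)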
Materials Science and Engineering-1989 Publications (Naval Research Laboratory)
1991-03-29
[Garbled fragment of a two-column publications list: entries by D.G. Cory, J.B. Miller, A.N. Garroway, J.F. Giuiani, and T.M. Keller in Journal of Magnetic Resonance, 85, on topics including acousto-optic and linear electro-optic properties of organic polymers, refocused gradient imaging of solids, indirect detection of 14N overtone NMR transitions, and chemical vapor sensors.]
CMOS Active-Pixel Image Sensor With Intensity-Driven Readout
NASA Technical Reports Server (NTRS)
Langenbacher, Harry T.; Fossum, Eric R.; Kemeny, Sabrina
1996-01-01
Proposed complementary metal oxide/semiconductor (CMOS) integrated-circuit image sensor automatically provides readouts from pixels in order of decreasing illumination intensity. Sensor operated in integration mode. Particularly useful in number of image-sensing tasks, including diffractive laser range-finding, three-dimensional imaging, event-driven readout of sparse sensor arrays, and star tracking.
Is flat fielding safe for precision CCD astronomy?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baumer, Michael; Davis, Christopher P.; Roodman, Aaron
The ambitious goals of precision cosmology with wide-field optical surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST) demand precision CCD astronomy as their foundation. This in turn requires an understanding of previously uncharacterized sources of systematic error in CCD sensors, many of which manifest themselves as static effective variations in pixel area. Such variation renders a critical assumption behind the traditional procedure of flat fielding—that a sensor's pixels comprise a uniform grid—invalid. In this work, we present a method to infer a curl-free model of a sensor's underlying pixel grid from flat-field images, incorporating the superposition of all electrostatic sensor effects—both known and unknown—present in flat-field data. We use these pixel grid models to estimate the overall impact of sensor systematics on photometry, astrometry, and PSF shape measurements in a representative sensor from the Dark Energy Camera (DECam) and a prototype LSST sensor. Applying the method to DECam data recovers known significant sensor effects for which corrections are currently being developed within DES. For an LSST prototype CCD with pixel-response non-uniformity (PRNU) of 0.4%, we find the impact of "improper" flat fielding on these observables is negligible in nominal 0.7'' seeing conditions. Furthermore, these errors scale linearly with the PRNU, so for future LSST production sensors, which may have larger PRNU, our method provides a way to assess whether pixel-level calibration beyond flat fielding will be required.
Endoscopic add-on stiffness probe for real-time soft surface characterisation in MIS.
Faragasso, A; Stilli, A; Bimbo, J; Noh, Y; Liu, H; Nanayakkara, T; Dasgupta, P; Wurdemann, H A; Althoefer, K
2014-01-01
This paper explores a novel stiffness sensor which is mounted on the tip of a laparoscopic camera. The proposed device is able to compute stiffness when interacting with soft surfaces. The sensor can be used in Minimally Invasive Surgery, for instance, to localise tumor tissue which commonly has a higher stiffness when compared to healthy tissue. The purely mechanical sensor structure utilizes the functionality of an endoscopic camera to the maximum by visually analyzing the behavior of trackers within the field of view. Two pairs of spheres (used as easily identifiable features in the camera images) are connected to two springs with known but different spring constants. Four individual indenters attached to the spheres are used to palpate the surface. During palpation, the spheres move linearly towards the objective lens (i.e. the distance between lens and spheres is changing) resulting in variations of their diameters in the camera images. Relating the measured diameters to the different spring constants, a developed mathematical model is able to determine the surface stiffness in real-time. Tests were performed using a surgical endoscope to palpate silicon phantoms presenting different stiffness. Results show that the accuracy of the sensing system developed increases with the softness of the examined tissue.
NASA Astrophysics Data System (ADS)
Konstantinidis, A.; Anaxagoras, T.; Esposito, M.; Allinson, N.; Speller, R.
2012-03-01
X-ray diffraction studies are used to identify specific materials. Several laboratory-based x-ray diffraction studies were made for breast cancer diagnosis. Ideally a large area, low noise, linear and wide dynamic range digital x-ray detector is required to perform x-ray diffraction measurements. Recently, digital detectors based on Complementary Metal-Oxide-Semiconductor (CMOS) Active Pixel Sensor (APS) technology have been used in x-ray diffraction studies. Two APS detectors, namely Vanilla and Large Area Sensor (LAS), were developed by the Multidimensional Integrated Intelligent Imaging (MI-3) consortium to cover a range of scientific applications including x-ray diffraction. The MI-3 Plus consortium developed a novel large area APS, named Dynamically Adjustable Medical Imaging Technology (DynAMITe), to combine the key characteristics of Vanilla and LAS with a number of extra features. The active area (12.8 × 13.1 cm^2) of DynAMITe enables angle-dispersive x-ray diffraction (ADXRD). The current study demonstrates the feasibility of using DynAMITe for breast cancer diagnosis by identifying six breast-equivalent plastics. Further work will be done to optimize the system in order to perform ADXRD for identification of suspicious areas of breast tissue following a conventional mammogram taken with the same sensor.
Multiscale Morphological Filtering for Analysis of Noisy and Complex Images
NASA Technical Reports Server (NTRS)
Kher, A.; Mitra, S.
1993-01-01
Images acquired with passive sensing techniques suffer from illumination variations and poor local contrasts that create major difficulties in interpretation and identification tasks. On the other hand, images acquired with active sensing techniques based on monochromatic illumination are degraded with speckle noise. Mathematical morphology offers elegant techniques to handle a wide range of image degradation problems. Unlike linear filters, morphological filters do not blur the edges and hence maintain higher image resolution. Their rich mathematical framework facilitates the design and analysis of these filters as well as their hardware implementation. Morphological filters are easier to implement and are more cost effective and efficient than several conventional linear filters. Morphological filters that remove speckle noise while maintaining high resolution and preserving thin image regions, which are particularly vulnerable to speckle noise, were developed and applied to SAR imagery. These filters used a combination of linear (one-dimensional) structuring elements in different (typically four) orientations. Although this approach preserves more details than simple morphological filters using two-dimensional structuring elements, the limited orientations of the one-dimensional elements can only approximate the fine details of the region boundaries. A more robust filter designed recently overcomes the limitation of the fixed orientations. This filter uses a combination of concave and convex structuring elements. Morphological operators are also useful in extracting features from visible and infrared imagery. A multiresolution image pyramid obtained with successive filtering and a subsampling process aids in the removal of the illumination variations and enhances local contrasts. A morphology-based interpolation scheme was also introduced to reduce intensity discontinuities created in any morphological filtering task. The generality of morphological filtering techniques in extracting information from a wide variety of images obtained with active and passive sensing techniques is discussed. Such techniques are particularly useful in obtaining more information from the fusion of complex images acquired by different sensors such as SAR, visible, and infrared.
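The directional-element idea described in the abstract above can be sketched in a few lines. The following is a minimal illustration only: the element length, the opening/closing combination rule and the synthetic speckle image are assumptions, not the authors' filter.

```python
# Hedged sketch: speckle filtering with 1D structuring elements in four
# orientations. Max of openings preserves bright linear detail, min of
# closings then fills dark speckle; compact blobs smaller than the element
# are suppressed.
import numpy as np
from scipy import ndimage


def oriented_line_footprints(length):
    """Boolean footprints for 0, 45, 90 and 135 degree line segments."""
    horiz = np.ones((1, length), dtype=bool)
    vert = np.ones((length, 1), dtype=bool)
    diag = np.eye(length, dtype=bool)
    anti = np.fliplr(diag)
    return [horiz, vert, diag, anti]


def directional_open_close(image, length=7):
    footprints = oriented_line_footprints(length)
    opened = np.max(
        [ndimage.grey_opening(image, footprint=fp) for fp in footprints], axis=0)
    closed = np.min(
        [ndimage.grey_closing(opened, footprint=fp) for fp in footprints], axis=0)
    return closed


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sar_like = rng.gamma(shape=1.0, scale=100.0, size=(128, 128))  # speckle-like test image
    filtered = directional_open_close(sar_like)
    print(filtered.shape, float(sar_like.std()), float(filtered.std()))
```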
He, Haijun; Shao, Liyang; Qian, Heng; Zhang, Xinpu; Liang, Jiawei; Luo, Bin; Pan, Wei; Yan, Lianshan
2017-03-20
A novel demodulation method for Sagnac loop interferometer based sensors has been proposed and demonstrated, which unwraps the phase changes obtained by birefringence interrogation. A temperature sensor based on a Sagnac loop interferometer was used to verify the feasibility of the proposed method. Several tests over a 40 °C temperature range were carried out, showing a linearity of 0.9996 over the full range. The proposed scheme is universal for all Sagnac loop interferometer based sensors and offers an essentially unlimited linear measurement range, outperforming the conventional demodulation method based on peak/dip tracking. Furthermore, the influence of the wavelength sampling interval and wavelength span on the demodulation error is discussed in this work. The proposed interrogation method is of great significance for Sagnac loop interferometer sensors and may greatly enhance the practical applicability of this type of sensor.
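A toy numerical sketch of the underlying idea follows. It is not the authors' algorithm: the idealized transfer function, fiber parameters, temperature dependence of the birefringence and the Hilbert-transform phase extraction are all assumptions, and spectral edge effects are ignored.

```python
# Toy model: recover birefringence B from the spectral fringes of a Sagnac
# loop, T(lambda) = (1 - cos(2*pi*B*L/lambda))/2, via unwrapped phase rather
# than peak/dip tracking.
import numpy as np
from scipy.signal import hilbert

lam = np.linspace(1540e-9, 1560e-9, 4000)   # interrogated wavelength grid [m] (assumed)
L = 2.0                                     # birefringent fiber length in the loop [m] (assumed)
temps = np.linspace(20.0, 60.0, 50)         # simulated temperature sweep [deg C]

def birefringence(T):                       # toy linear B(T) with invented numbers
    return 4.0e-4 * (1.0 - 5.0e-4 * (T - 20.0))

estimates = []
for T in temps:
    phi = 2.0 * np.pi * birefringence(T) * L / lam
    spectrum = 0.5 * (1.0 - np.cos(phi))    # idealized Sagnac transmission fringes
    fringes = 2.0 * spectrum - 1.0          # zero-mean fringe carrier, = -cos(phi)
    unwrapped = np.unwrap(np.angle(hilbert(fringes)))
    # Total phase accumulated across the span is proportional to B:
    # |dphi| ~ 2*pi*B*L*(1/lam_min - 1/lam_max)
    dphi = abs(unwrapped[-1] - unwrapped[0])
    estimates.append(dphi / (2.0 * np.pi * L * (1.0 / lam.min() - 1.0 / lam.max())))

slope = np.polyfit(temps, estimates, 1)[0]
print(f"estimated dB/dT ~ {slope:.2e} per degC (toy ground truth -2.0e-07)")
```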
Evaluation of Sun Glint Correction Algorithms for High-Spatial Resolution Hyperspectral Imagery
2012-09-01
ACRONYMS AND ABBREVIATIONS: AISA Airborne Imaging Spectrometer for Applications; AVIRIS Airborne Visible/Infrared Imaging Spectrometer; BIL Band... sensor bracket mount combining Airborne Imaging Spectrometer for Applications (AISA) Eagle and Hawk sensors into a single imaging system (SpecTIR 2011)... The AISA Eagle is a VNIR sensor with a wavelength range of approximately 400-970 nm and the AISA Hawk sensor is a SWIR sensor with a wavelength...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hee Yoon; Department of Otolaryngology-Head and Neck Surgery, Stanford University, Stanford, California; Raphael, Patrick D.
Cochlear amplification has been most commonly investigated by measuring the vibrations of the basilar membrane in animal models. Several different techniques have been used for measuring these vibrations, such as laser Doppler vibrometry, miniature pressure sensors, low-coherence interferometry, and spectral-domain optical coherence tomography (SD-OCT). We have built a swept-source OCT (SS-OCT) system, which is similar to SD-OCT in that it is capable of performing both imaging and vibration measurements within the mouse cochlea in vivo without having to open the bone. In vivo 3D images of a mouse cochlea were obtained, and the basilar membrane, tectorial membrane, Reissner's membrane, tunnel of Corti, and reticular lamina could all be resolved. We measured the vibrations of multiple structures within the mouse cochlea in response to sound stimuli. As well, we measured the radial deflections of the reticular lamina and tectorial membrane to estimate the displacement of the outer hair cell stereocilia. These measurements have the potential to more clearly define the mechanisms underlying the linear and non-linear processes within the mammalian cochlea.
NASA Astrophysics Data System (ADS)
Zolfaghari, Abolfazl; Jeon, Seongkyul; Stepanick, Christopher K.; Lee, ChaBum
2017-06-01
This paper presents a novel method for measuring two-degree-of-freedom (DOF) motion of flexure-based nanopositioning systems based on optical knife-edge sensing (OKES) technology, which utilizes the interference of two superimposed waves: a geometrical wave from the primary source of light and a boundary diffraction wave from the secondary source. This technique allows for two-DOF motion measurement of the linear and pitch motions of nanopositioning systems. Two capacitive sensors (CSs) are used for a baseline comparison with the proposed sensor by simultaneously measuring the motions of the nanopositioning system. The experimental results show that the proposed sensor closely agrees with the fundamental linear motion of the CS. However, the two-DOF OKES technology was shown to be approximately three times more sensitive to the pitch motion than the CS. The discrepancy in the two sensor outputs is discussed in terms of measuring principle, linearity, bandwidth, control effectiveness, and resolution.
Smart sensors II; Proceedings of the Seminar, San Diego, CA, July 31, August 1, 1980
NASA Astrophysics Data System (ADS)
Barbe, D. F.
1980-01-01
Topics discussed include technology for smart sensors, smart sensors for tracking and surveillance, and techniques and algorithms for smart sensors. Papers are presented on the application of very large scale integrated circuits to smart sensors, imaging charge-coupled devices for deep-space surveillance, ultra-precise star tracking using charge coupled devices, and automatic target identification of blurred images with super-resolution features. Attention is also given to smart sensors for terminal homing, algorithms for estimating image position, and the computational efficiency of multiple image registration algorithms.
Fiber-Optic Linear Displacement Sensor Based On Matched Interference Filters
NASA Astrophysics Data System (ADS)
Fuhr, Peter L.; Feener, Heidi C.; Spillman, William B.
1990-02-01
A fiber optic linear displacement sensor has been developed in which a pair of matched interference filters is used to encode linear position on a broadband optical signal as relative intensity variations. As the filters are displaced, the optical beam illuminates varying amounts of each filter. Determination of the relative intensities at each filter pair's passband is based on measurements acquired with matching filters and photodetectors. Errors induced by source power variation are minimized by basing the determination of linear position on signal visibility. A theoretical prediction of the sensor's performance is developed and compared with experiments performed in the near-IR spectral region using large-core multimode optical fiber.
CMOS image sensor-based implantable glucose sensor using glucose-responsive fluorescent hydrogel.
Tokuda, Takashi; Takahashi, Masayuki; Uejima, Kazuhiro; Masuda, Keita; Kawamura, Toshikazu; Ohta, Yasumi; Motoyama, Mayumi; Noda, Toshihiko; Sasagawa, Kiyotaka; Okitsu, Teru; Takeuchi, Shoji; Ohta, Jun
2014-11-01
A CMOS image sensor-based implantable glucose sensor based on an optical-sensing scheme is proposed and experimentally verified. A glucose-responsive fluorescent hydrogel is used as the mediator in the measurement scheme. The wired implantable glucose sensor was realized by integrating a CMOS image sensor, hydrogel, UV light emitting diodes, and an optical filter on a flexible polyimide substrate. Feasibility of the glucose sensor was verified by both in vitro and in vivo experiments.
Estimation of forest biomass using remote sensing
NASA Astrophysics Data System (ADS)
Sarker, Md. Latifur Rahman
Forest biomass estimation is essential for greenhouse gas inventories, terrestrial carbon accounting and climate change modelling studies. The availability of new SAR (C-band RADARSAT-2 and L-band PALSAR) and optical sensors (SPOT-5 and AVNIR-2) has opened new possibilities for biomass estimation because these new SAR sensors can provide data with varying polarizations, incidence angles and fine spatial resolutions. Therefore, this study investigated the potential of two SAR sensors (RADARSAT-2 with C-band and PALSAR with L-band) and two optical sensors (SPOT-5 and AVNIR-2) for the estimation of biomass in Hong Kong. Three common major processing steps were used for data processing, namely (i) spectral reflectance/intensity, (ii) texture measurements and (iii) polarization or band ratios of texture parameters. Simple linear and stepwise multiple regression models were developed to establish a relationship between the image parameters and the biomass of field plots. The results demonstrate the ineffectiveness of raw data. However, significant improvements in performance (r2) (RADARSAT-2 = 0.78; PALSAR = 0.679; AVNIR-2 = 0.786; SPOT-5 = 0.854; AVNIR-2 + SPOT-5 = 0.911) were achieved using texture parameters of all sensors. The performances were further improved, and very promising performances (r2) were obtained, using the ratio of texture parameters (RADARSAT-2 = 0.91; PALSAR = 0.823; PALSAR two-date = 0.921; AVNIR-2 = 0.899; SPOT-5 = 0.916; AVNIR-2 + SPOT-5 = 0.939). These performances suggest four main contributions arising from this research, namely (i) biomass estimation can be significantly improved by using texture parameters, (ii) further improvements can be obtained using the ratio of texture parameters, (iii) multisensor texture parameters and their ratios have more potential than texture from a single sensor, and (iv) biomass can be accurately estimated far beyond the previously perceived saturation levels of SAR and optical data using texture parameters or the ratios of texture parameters. A further important contribution resulted from the fusion of SAR and optical images, which produced accuracies (r2) of 0.706 and 0.77 for the simple fusion and for texture processing of the fused image, respectively. Although these performances were not as attractive as those obtained from the other four processing steps, the wavelet fusion procedure improved the saturation level of the optical (AVNIR-2) image very significantly after fusion with the SAR image. Keywords: biomass, climate change, SAR, optical, multisensors, RADARSAT-2, PALSAR, AVNIR-2, SPOT-5, texture measurement, ratio of texture parameters, wavelets, fusion, saturation
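The texture-plus-regression step described above can be illustrated as follows. The window size, the grey-level co-occurrence (GLCM) properties and the synthetic plot data are assumptions standing in for the study's actual texture measures and field data.

```python
# Hedged sketch: GLCM texture features per field plot, regressed against
# plot biomass with a multiple linear model.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.linear_model import LinearRegression


def window_texture(window, prop):
    """GLCM texture of one 8-bit image window, averaged over 4 directions."""
    glcm = graycomatrix(window, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, prop).mean()


rng = np.random.default_rng(1)
# Stand-ins for co-registered sensor windows clipped to field-plot footprints.
plots = [rng.integers(0, 256, size=(25, 25), dtype=np.uint8) for _ in range(40)]
biomass = rng.uniform(20, 250, size=40)          # field-measured AGB [Mg/ha]

X = np.column_stack([
    [window_texture(p, "contrast") for p in plots],
    [window_texture(p, "homogeneity") for p in plots],
    [window_texture(p, "energy") for p in plots],
])
model = LinearRegression().fit(X, biomass)
print("r^2 on the (synthetic) training data:", model.score(X, biomass))
```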
Radiometric Normalization of Large Airborne Image Data Sets Acquired by Different Sensor Types
NASA Astrophysics Data System (ADS)
Gehrke, S.; Beshah, B. T.
2016-06-01
Generating seamless mosaics of aerial images is a particularly challenging task when the mosaic comprises a large number of images, collected over longer periods of time and with different sensors under varying imaging conditions. Such large mosaics typically consist of very heterogeneous image data, both spatially (different terrain types and atmosphere) and temporally (unstable atmospheric properties and even changes in land coverage). We present a new radiometric normalization or, respectively, radiometric aerial triangulation approach that takes advantage of our knowledge about each sensor's properties. The current implementation supports medium and large format airborne imaging sensors of the Leica Geosystems family, namely the ADS line-scanner as well as DMC and RCD frame sensors. A hierarchical modelling - with parameters for the overall mosaic, the sensor type, different flight sessions, strips and individual images - allows for adaptation to each sensor's geometric and radiometric properties. Additional parameters at different hierarchy levels can compensate radiometric differences of various origins, making up for shortcomings of the preceding radiometric sensor calibration as well as BRDF and atmospheric corrections. The final, relative normalization is based on radiometric tie points in overlapping images, absolute radiometric control points and image statistics. It is computed in a global least squares adjustment for the entire mosaic by altering each image's histogram using a location-dependent mathematical model. This model involves contrast and brightness corrections at radiometric fix points, with bilinear interpolation for corrections in-between. The distribution of the radiometric fix points is adaptive to each image and generally increases with image size, hence enabling optimal local adaptation even for very long image strips as typically captured by a line-scanner sensor. The normalization approach is implemented in HxMap software. It has been successfully applied to large sets of heterogeneous imagery, including the adjustment of original sensor images prior to quality control and further processing as well as radiometric adjustment for ortho-image mosaic generation.
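The core of a global radiometric adjustment of this kind can be sketched as a single least-squares system. The version below is deliberately flat (one gain and offset per image, synthetic tie points, soft constraints instead of absolute control points) and does not reproduce the hierarchical, location-dependent model described above.

```python
# Hedged sketch: estimate a gain/offset per image so that radiometric tie
# points agree in overlapping images, via one global least-squares solve.
import numpy as np

n_images = 4
# tie points: (image_i, value_i, image_j, value_j) for the same ground spot
ties = [
    (0, 120.0, 1, 132.0),
    (1, 140.0, 2, 150.0),
    (2, 90.0, 3, 84.0),
    (0, 200.0, 2, 214.0),
    (1, 60.0, 3, 55.0),
]

rows, rhs = [], []
for i, vi, j, vj in ties:
    row = np.zeros(2 * n_images)            # unknowns: [a_0..a_3, b_0..b_3]
    row[i], row[n_images + i] = vi, 1.0      # a_i * v_i + b_i
    row[j], row[n_images + j] = -vj, -1.0    # - (a_j * v_j + b_j)
    rows.append(row)
    rhs.append(0.0)

# Soft constraints pulling gains to 1 and offsets to 0 keep the system
# well-posed (a real adjustment would use absolute radiometric control points).
for k in range(n_images):
    g = np.zeros(2 * n_images); g[k] = 1.0
    rows.append(g); rhs.append(1.0)
    o = np.zeros(2 * n_images); o[n_images + k] = 0.01
    rows.append(o); rhs.append(0.0)

x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
gains, offsets = x[:n_images], x[n_images:]
print("gains:", np.round(gains, 3), "offsets:", np.round(offsets, 2))
```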
NASA Astrophysics Data System (ADS)
Ortiz, M.; Graber, H. C.; Wilkinson, J.; Nyman, L. M.; Lund, B.
2017-12-01
Much work has been done on determining changes in summer ice albedo and morphological properties of melt ponds, such as depth, shape and distribution, using in-situ measurements and satellite-based sensors. Although much pioneering work has been dedicated to this area, coverage at sufficient spatial and temporal scales is still lacking. We present a prototype algorithm using Linear Support Vector Machines (LSVMs) designed to quantify the evolution of melt pond fraction from a recently government-declassified high-resolution panchromatic optical dataset. The study area of interest lies within the Beaufort marginal ice zone (MIZ), where several in-situ instruments were deployed by the British Antarctic Survey jointly with the MIZ Program from April-September 2014. The LSVM uses four-dimensional feature data drawn from the intensity image itself and from various textures calculated with a modified first-order histogram technique using the probability density of occurrences. We explore both the temporal evolution of melt ponds and spatial statistics such as pond fraction, pond area and pond number density, to name a few. We also introduce a linear regression model that can potentially be used to estimate average pond area by ingesting several melt pond statistics and shape parameters.
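A minimal sketch of the classification step follows, assuming four per-pixel features (intensity plus three first-order histogram textures) and binary melt-pond/ice labels; the study's actual feature definitions and training data are not reproduced here.

```python
# Hedged sketch: linear SVM on per-pixel features, then pond fraction from
# the predicted labels.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 2000
# columns: intensity, local mean, local variance, local skewness (synthetic)
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, dual=False))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))

# Pond fraction of a classified scene is simply the share of pond pixels:
labels = clf.predict(X)
print("pond fraction:", labels.mean())
```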
Linearization of Positional Response Curve of a Fiber-optic Displacement Sensor
NASA Astrophysics Data System (ADS)
Babaev, O. G.; Matyunin, S. A.; Paranin, V. D.
2018-01-01
Currently, the creation of optical measuring instruments and sensors for measuring linear displacement is one of the most relevant problems in the area of instrumentation. Fiber-optic contactless sensors based on the magneto-optical effect are of special interest. They are essentially contactless, non-electrical and have a closed optical channel not subject to contamination. The main problem with this type of sensor is the non-linearity of its positional response curve, due to the hyperbolic variation of the magnetic field intensity induced by moving the magnetic source, mounted on the controlled object, relative to the sensing element. This paper discusses an algorithmic method of linearizing the positional response curve of fiber-optic displacement sensors in any selected range of the displacements to be measured. The method is divided into two stages: 1 - definition of the calibration function, 2 - measurement and linearization of the positional response curve (including its temperature stabilization). The algorithm under consideration significantly reduces the number of points of the calibration function, which is essential for calibrating the temperature dependence, by using points that deviate randomly from a uniformly spaced grid. Subsequent interpolation of the deviating points and piecewise linear-plane approximation of the calibration function reduce the microcontroller storage required for the calibration function and the time required to process the measurement results. The paper also presents experimental results of testing real samples of fiber-optic displacement sensors.
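The two-stage idea can be illustrated with a piecewise-linear calibration inverse. The hyperbolic response, the calibration points and the single-variable form (no temperature dimension) are invented for illustration.

```python
# Hedged sketch: (1) store a calibration function sampled at a handful of
# displacements, (2) linearize new readings by piecewise-linear interpolation
# of its inverse.
import numpy as np

# Stage 1: calibration - known displacements and the non-linear sensor output.
x_cal = np.array([0.5, 1.0, 2.0, 3.5, 5.0, 7.5, 10.0])   # displacement [mm]
u_cal = 1.0 / (0.2 + 0.15 * x_cal)                        # hyperbolic-like response

# Stage 2: linearization - invert the stored curve (np.interp needs an
# increasing abscissa, so sort by sensor output).
order = np.argsort(u_cal)

def linearize(u_meas):
    return np.interp(u_meas, u_cal[order], x_cal[order])

u_new = 1.0 / (0.2 + 0.15 * 4.2)      # reading at an "unknown" position
# The residual error reflects the coarse calibration grid on a convex curve.
print(f"estimated displacement: {linearize(u_new):.2f} mm (true 4.20 mm)")
```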
Development of an electromagnetic imaging system for well bore integrity inspection
NASA Astrophysics Data System (ADS)
Plotnikov, Yuri; Wheeler, Frederick W.; Mandal, Sudeep; Climent, Helene C.; Kasten, A. Matthias; Ross, William
2017-02-01
State-of-the-art imaging technologies for monitoring the integrity of oil and gas well bores are typically limited to the inspection of metal casings and cement bond interfaces close to the first casing region. The objective of this study is to develop and evaluate a novel well-integrity inspection system that is capable of providing enhanced information about the flaw structure and topology of hydrocarbon producing well bores. In order to achieve this, we propose the development of a multi-element electromagnetic (EM) inspection tool that can provide information about material loss in the first and second casing structure as well as information about eccentricity between multiple casing strings. Furthermore, the information gathered from the EM inspection tool will be combined with other imaging modalities (e.g. data from an x-ray backscatter imaging device). The independently acquired data are then fused to achieve a comprehensive assessment of integrity with greater accuracy. A test rig composed of several concentric metal casings with various defect structures was assembled and imaged. Initial test results were obtained with a scanning system design that includes a single transmitting coil and several receiving coils mounted on a single rod. A mechanical linear translation stage was used to move the EM sensors in the axial direction during data acquisition. For simplicity, a single receiving coil and repetitive scans were employed to simulate performance of the designed receiving sensor array system. The resulting electromagnetic images enable the detection of the metal defects in the steel pipes. Responses from several sensors were used to assess the location and amount of material loss in the first and second metal pipe as well as the relative eccentric position between these two pipes. The results from EM measurements and x-ray backscatter simulations demonstrate that data fusion from several sensing modalities can provide an enhanced assessment of flaw structures in producing well bores and potentially allow for early detection of anomalies that if undetected might lead to catastrophic failures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dinwiddie, Ralph Barton; Parris, Larkin S.; Lindal, John M.
This paper explores the temperature range extension of long-wavelength infrared (LWIR) cameras by placing an aperture in front of the lens. An aperture smaller than the lens will reduce the radiance to the sensor, allowing the camera to image targets much hotter than typically allowable. These higher temperatures were accurately determined after developing a correction factor which was applied to the built-in temperature calibration. The relationship between aperture diameter and temperature range is linear. The effect of pre-lens apertures on the image uniformity is a form of anti-vignetting, meaning the corners appear brighter (hotter) than the rest of the image. An example of using this technique to measure temperatures of high-melting-point polymers during 3D printing provides valuable information about the time required for the weld-line temperature to fall below the glass transition temperature.
Development of InSb charge-coupled infrared imaging devices: Linear imager
NASA Technical Reports Server (NTRS)
Phillips, J. D.
1976-01-01
The following results were accomplished in the development of charge coupled infrared imaging devices: (1) a four-phase overlapping gate with 9 transfers (2-bits) and 1.0-mil gate lengths was successfully operated, (2) the measured transfer efficiency of 0.975 for this device is in excellent agreement with predictions for the reduced gate length device, (3) mask revisions of the channel stop metal on the 8582 mask have been carried out with the result being a large increase in the dc yield of the tested devices, (4) partial optical sensitivity to chopped blackbody radiation was observed for an 8582 9-bit imager, (5) analytical consideration of the modulation transfer function degradation caused by transfer inefficiency in the CCD registers was presented, and (6) for larger array lengths or for the insertion of isolated bits between sensors, improvements in InSb fabrication technology with corresponding decrease in the interface state density are required.
Microfluidic in-channel multi-electrode platform for neurotransmitter sensing
NASA Astrophysics Data System (ADS)
Kara, A.; Mathault, J.; Reitz, A.; Boisvert, M.; Tessier, F.; Greener, J.; Miled, A.
2016-03-01
In this project we present a microfluidic platform with in-channel micro-electrodes for in situ screening of bio/chemical samples through a lab-on-chip system. We used a novel method to incorporate an electrochemical sensor array (16×20) connected to a PCB, which opens the way for imaging applications. A 200 μm-high microfluidic channel was bonded to the electrochemical sensors. The micro-channel contains 3 inlets used to introduce phosphate-buffered saline (PBS), ferrocyanide and neurotransmitters. The flow rate was controlled through automated micro-pumps. A multiplexer was used to scan electrodes and perform individual cyclic voltammograms with a custom potentiostat. The response of the system was linear in terms of current versus concentration. It was used to detect the neurotransmitters serotonin, dopamine and glutamate.
Crack Detection in Concrete Tunnels Using a Gabor Filter Invariant to Rotation.
Medina, Roberto; Llamas, José; Gómez-García-Bermejo, Jaime; Zalama, Eduardo; Segarra, Miguel José
2017-07-20
In this article, a system for the detection of cracks in concrete tunnel surfaces, based on image sensors, is presented. Both data acquisition and processing are covered. Linear cameras and proper lighting are used for data acquisition. The required resolution of the camera sensors and the number of cameras are discussed in terms of the crack size and the tunnel type. Data processing is done by applying a new method called the Gabor filter invariant to rotation, allowing the detection of cracks in any direction. The parameter values of this filter are set by using a modified genetic algorithm based on the Differential Evolution optimization method. Pixels belonging to cracks are detected with a balanced accuracy of 95.27%, thus improving on the results of previous approaches.
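The combination of a rotation-invariant Gabor response with evolutionary parameter tuning can be sketched as below. The parameter ranges, the fitness definition (balanced accuracy against a labelled mask) and the synthetic crack image are assumptions, and the filter here is a generic orientation-maximum, not the paper's exact formulation.

```python
# Hedged sketch: max of Gabor magnitudes over orientations, with frequency and
# threshold tuned by differential evolution on labelled crack pixels.
import numpy as np
from skimage.filters import gabor
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
image = rng.normal(0.5, 0.05, size=(64, 64))
image[32, 5:59] -= 0.3                      # a dark, roughly linear "crack"
crack_mask = np.zeros(image.shape, dtype=bool)
crack_mask[32, 5:59] = True


def rotation_invariant_response(img, frequency, n_orient=4):
    responses = [np.hypot(*gabor(img, frequency=frequency, theta=t))
                 for t in np.linspace(0, np.pi, n_orient, endpoint=False)]
    return np.max(responses, axis=0)        # max over orientations


def negative_balanced_accuracy(params):
    frequency, threshold = params
    pred = rotation_invariant_response(image, frequency) > threshold
    tpr = (pred & crack_mask).sum() / crack_mask.sum()
    tnr = (~pred & ~crack_mask).sum() / (~crack_mask).sum()
    return -(tpr + tnr) / 2.0


result = differential_evolution(negative_balanced_accuracy,
                                bounds=[(0.1, 0.45), (0.01, 0.5)],
                                popsize=6, maxiter=5, seed=0, polish=False)
print("best (frequency, threshold):", result.x, "balanced accuracy:", -result.fun)
```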
A Preliminary Investigation on Comparison and Transformation of Sentinel-2 MSI and Landsat 8 OLI
NASA Astrophysics Data System (ADS)
Chen, F.; Lou, S.; Fan, Q.; Li, J.; Wang, C.; Claverie, M.
2018-05-01
Timely and accurate earth observation with a short revisit interval is often necessary, especially for emergency response. Currently, several new-generation sensors with similar channel characteristics are operated onboard different satellite platforms, including Sentinel-2 and Landsat 8. Joint use of the observations from different sensors offers an opportunity to meet such emergency requirements. For example, through the combination of Landsat and Sentinel-2 data, the land can be observed every 2-3 days at medium spatial resolution. However, differences are expected in the radiometric values (e.g., channel reflectance) of corresponding channels between the two sensors. The spectral response function (SRF) is an important aspect of sensor settings. Accordingly, between-sensor differences due to SRF variation need to be quantified and compensated. A comparison of SRFs shows differences (to a greater or lesser extent) in channel settings between the Sentinel-2 Multi-Spectral Instrument (MSI) and the Landsat 8 Operational Land Imager (OLI). The effect of the difference in SRF on corresponding values between MSI and OLI was investigated, mainly in terms of channel reflectance and several derived spectral indices. Spectra samples from the ASTER Spectral Library Version 2.0 and Hyperion data archives were used to simulate channel reflectance for MSI and OLI. Preliminary results show that MSI and OLI are well comparable in several channels, with small relative discrepancy (< 5 %), including the Coastal Aerosol channel, a NIR (855-875 nm) channel, the SWIR channels, and the Cirrus channel. Meanwhile, for the channels covering Blue, Green, Red, and NIR (785-900 nm), significant between-sensor differences are present. Compared with the difference in reflectance of each individual channel, the difference in derived spectral indices is more significant. In addition, the effectiveness of a linear transformation model is not ensured when the target belongs to another spectra collection. If an improper transformation model is selected, the between-sensor discrepancy may even increase substantially. In conclusion, improving between-sensor consistency through linear transformation remains challenging when the model(s) are generated from other spectra collections.
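The per-channel linear transformation discussed above amounts to an ordinary least-squares fit of one sensor's simulated reflectance against the other's. The reflectance values below are synthetic stand-ins for library-derived simulations; only the fitting and evaluation pattern is illustrated.

```python
# Hedged sketch: fit msi ~ gain * oli + offset for one channel and report the
# mean relative discrepancy of the transformed values.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 500
oli_red = rng.uniform(0.02, 0.45, n_samples)                          # Landsat 8 OLI red (synthetic)
msi_red = 0.97 * oli_red + 0.004 + rng.normal(0, 0.003, n_samples)     # Sentinel-2 MSI red (synthetic)

gain, offset = np.polyfit(oli_red, msi_red, 1)
predicted = gain * oli_red + offset
rel_discrepancy = np.mean(np.abs(predicted - msi_red) / msi_red) * 100.0
print(f"gain={gain:.3f}, offset={offset:.4f}, mean relative discrepancy={rel_discrepancy:.2f}%")

# Applying such a model to reflectances from a different spectra collection can
# increase rather than decrease the discrepancy, which is the caveat raised above.
```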
Ultra-wideband and broad-angle linear polarization conversion metasurface
NASA Astrophysics Data System (ADS)
Sun, Hengyi; Gu, Changqing; Chen, Xinlei; Li, Zhuo; Liu, Liangliang; Martín, Ferran
2017-05-01
In this work, a metasurface acting as a linear polarization rotator, which can efficiently convert linearly polarized electromagnetic waves to cross-polarized waves within an ultra-wide frequency band and over a broad range of incident angles, is proposed. Based on the electric and magnetic resonant features of the unit cell, composed of a double-headed arrow, a cut wire, and two short V-shaped wire structures, three resonances are generated that broaden the bandwidth of the cross-polarized reflection. The simulation results show that an average polarization conversion ratio of 90% from 17.3 GHz to 42.2 GHz can be achieved. Furthermore, the designed metasurface exhibits polarization insensitivity within a broad range of incident angles, from 0° to 50°. The experiments conducted on the fabricated metasurface are in good agreement with the simulations. The proposed metasurface can find potential applications in reflector antennas, imaging systems, and remote sensors operating at microwave frequencies.
Design Method For Ultra-High Resolution Linear CCD Imagers
NASA Astrophysics Data System (ADS)
Sheu, Larry S.; Truong, Thanh; Yuzuki, Larry; Elhatem, Abdul; Kadekodi, Narayan
1984-11-01
This paper presents a design method to achieve ultra-high resolution linear imagers. This method utilizes advanced design rules and novel staggered bilinear photosensor arrays with quadrilinear shift registers. Design constraints in the detector arrays and shift registers are analyzed. An imager architecture to achieve ultra-high resolution is presented. The characteristics of MTF, aliasing, speed, transfer efficiency and the fine photolithography requirements associated with this architecture are also discussed. A CCD imager with an advanced 1.5 um minimum feature size was fabricated. It is intended as a test vehicle for the next generation small-sampling-pitch ultra-high resolution CCD imager. Standard double-poly, two-phase shift registers were fabricated at an 8 um pitch using the advanced design rules. A special process step that blocked the source-drain implant from the shift register area was invented. This guaranteed excellent performance of the shift registers regardless of the small poly overlaps. A charge transfer efficiency of better than 0.99995 and a maximum transfer speed of 8 MHz were achieved. The imager showed excellent performance. The dark current was less than 0.2 mV/ms, saturation 250 mV, adjacent photoresponse non-uniformity ±4% and responsivity 0.7 V/(μJ/cm2) for the 8 μm x 6 μm photosensor size. The MTF was 0.6 at 62.5 cycles/mm. These results confirm the feasibility of the next generation of ultra-high resolution CCD imagers.
Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors.
Dutton, Neale A W; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K
2016-07-20
SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed.
Wagner, M; Gondan, M; Zöllner, C; Wünscher, J J; Nickel, F; Albala, L; Groch, A; Suwelack, S; Speidel, S; Maier-Hein, L; Müller-Stich, B P; Kenngott, H G
2016-02-01
Laparoscopic resection is a minimally invasive treatment option for rectal cancer but requires highly experienced surgeons. Computer-aided technologies could help to improve safety and efficiency by visualizing risk structures during the procedure. The prerequisite for such an image guidance system is reliable intraoperative information on iatrogenic tissue shift. This could be achieved by intraoperative imaging, which is rarely available. Thus, the aim of the present study was to develop and validate a method for real-time deformation compensation using preoperative imaging and intraoperative electromagnetic tracking (EMT) of the rectum. Three models were compared and evaluated for the compensation of tissue deformation. For model A, no compensation was performed. Model B moved the corresponding points rigidly with the motion of the EMT sensor. Model C used five nested linear regressions of increasing complexity (C1-C5) to compute the deformation. For evaluation, 14 targets and an EMT organ sensor were fitted into a silicone-molded rectum of the OpenHELP phantom. Following computed tomography, the image guidance was initiated and the rectum was deformed in the same way as during surgery in a total of 14 experimental runs. The target registration error (TRE) was measured for all targets in different positions of the rectum. The mean TRE without correction (model A) was 32.8 ± 20.8 mm, with only 19.6% of the measurements below 10 mm (80.4% above 10 mm). With correction, the mean TRE could be reduced using the rigid correction (model B) to 6.8 ± 4.8 mm, with 78.7% of the measurements below 10 mm. Using the most complex linear regression correction (model C5), the error could be reduced to 2.9 ± 1.4 mm, with 99.8% below 10 mm. In laparoscopic rectal surgery, the combination of electromagnetic organ tracking and preoperative imaging is a promising approach to compensating for intraoperative tissue shift in real time.
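The regression-based compensation idea can be sketched as a learned mapping from the organ-mounted sensor's displacement to each target's displacement. The feature set and synthetic data below are invented, and the nested models C1-C5 are not reproduced.

```python
# Hedged sketch: fit a linear mapping from EMT sensor motion to target motion,
# then compare target registration error (TRE) with and without compensation.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_runs = 200
sensor_disp = rng.normal(scale=20.0, size=(n_runs, 3))        # EMT sensor motion [mm]
# Synthetic "true" target motion: a damped, slightly nonlinear function of it.
target_disp = (0.6 * sensor_disp + 0.002 * sensor_disp ** 2
               + rng.normal(scale=1.0, size=(n_runs, 3)))

model = LinearRegression().fit(sensor_disp, target_disp)

tre_uncorrected = np.linalg.norm(target_disp, axis=1)                       # model A: none
tre_corrected = np.linalg.norm(target_disp - model.predict(sensor_disp), axis=1)
print(f"mean TRE: {tre_uncorrected.mean():.1f} mm -> {tre_corrected.mean():.1f} mm")
```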
Microwave Sensors for Breast Cancer Detection.
Wang, Lulu
2018-02-23
Breast cancer is the leading cause of death among females; early diagnosis combined with suitable treatment improves 5-year survival rates significantly. Microwave breast imaging has been reported as having the greatest potential to become an alternative or additional tool to the current gold standard, X-ray mammography, for detecting breast cancer. Microwave breast image quality is affected by the microwave sensor, the sensor array, the number of sensors in the array and the size of the sensor. In fact, the microwave sensor and sensor array play an important role in the microwave breast imaging system. Numerous microwave biosensors have been developed for biomedical applications, with particular focus on breast tumor detection. Compared to conventional medical imaging and biosensor techniques, these microwave sensors not only enable better cancer detection and improve image resolution, but also provide attractive features such as label-free detection. This paper aims to provide an overview of recent important achievements in microwave sensors for biomedical imaging applications, with particular focus on breast cancer detection. The electrical properties of biological tissues in the microwave spectrum, microwave imaging approaches, microwave biosensors, and current challenges and future work are also discussed.
High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.
Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi
2010-12-15
A complementary metal oxide semiconductor (CMOS) image sensor was applied to high-content analysis of single cells which were assembled closely or directly onto the CMOS sensor surface. The direct assembly of cell groups on the CMOS sensor surface allows large-field imaging (6.66 mm × 5.32 mm, the entire active area of the CMOS sensor) within a second. Trypan blue-stained and non-stained cells in the same field area on the CMOS sensor were successfully distinguished as white- and blue-colored images under white LED light irradiation. Furthermore, the chemiluminescent signals of each cell were successfully visualized as blue-colored images on the CMOS sensor only when HeLa cells were placed directly on the micro-lens array of the CMOS sensor. Our proposed approach will be a promising technique for real-time and high-content analysis of single cells over a large field area based on color imaging.
Lee, Youngoh; Park, Jonghwa; Cho, Soowon; Shin, Young-Eun; Lee, Hochan; Kim, Jinyoung; Myoung, Jinyoung; Cho, Seungse; Kang, Saewon; Baig, Chunggi; Ko, Hyunhyub
2018-04-24
Flexible pressure sensors with a high sensitivity over a broad linear range can simplify wearable sensing systems without additional signal processing for the linear output, enabling device miniaturization and low power consumption. Here, we demonstrate a flexible ferroelectric sensor with ultrahigh pressure sensitivity and linear response over an exceptionally broad pressure range based on the material and structural design of ferroelectric composites with a multilayer interlocked microdome geometry. Due to the stress concentration between interlocked microdome arrays and increased contact area in the multilayer design, the flexible ferroelectric sensors could perceive static/dynamic pressure with high sensitivity (47.7 kPa-1, 1.3 Pa minimum detection). In addition, efficient stress distribution between stacked multilayers enables linear sensing over an exceptionally broad pressure range (0.0013-353 kPa) with fast response time (20 ms) and high reliability over 5000 repetitive cycles, even at an extremely high pressure of 272 kPa. Our sensor can be used to monitor diverse stimuli from a low to a high pressure range, including weak gas flow, acoustic sound, wrist pulse pressure, respiration, and foot pressure, with a single device.
NASA Technical Reports Server (NTRS)
Lindner, D. K.; Zvonar, G. A.; Baumann, W. T.; Delos, P. L.
1993-01-01
Recently, a modal domain optical fiber sensor has been demonstrated as a sensor in a control system for vibration suppression of a flexible cantilevered beam. This sensor responds to strain through a mechanical attachment to the structure. Because this sensor is of the interferometric type, the output of the sensor has a sinusoidal nonlinearity. For small levels of strain, the sensor can be operated in its linear region. For large levels of strain, the detection electronics can be configured to count fringes. In both of these configurations, the sensor nonlinearity imposes some restrictions on the performance of the control system. In this paper we investigate the effects of these sensor nonlinearities on the control system, and identify the region of linear operation in terms of the optical fiber sensor parameters.
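A short worked example of the nonlinearity described above, using an idealized sine-of-phase response with an invented scale factor, illustrates the two operating regimes (small-signal linear region versus fringe counting):

```python
# Toy model: interferometric output = sin(k * strain). Small strains stay in
# the linear region; large strains are handled by counting fringes.
import numpy as np

k = 2.0 * np.pi * 48.0                     # phase per unit strain [rad/strain] (assumed)
strain = np.linspace(1e-4, 0.2, 2000)
output = np.sin(k * strain)                # idealized interferometric output

# Linear region: sin(x) ~ x to within ~5 % up to roughly x = 0.55 rad
linear_limit = 0.55 / k
print(f"linear operation up to ~{linear_limit:.2e} strain")

# Fringe counting for large strain: each zero crossing marks pi radians of phase
crossings = np.count_nonzero(np.diff(np.sign(output)) != 0)
estimated_phase = crossings * np.pi
print(f"counted {crossings} half-fringes ~ {estimated_phase:.1f} rad "
      f"(true total phase {k * strain[-1]:.1f} rad)")
```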
Optical fibres in pre-detector signal processing
NASA Astrophysics Data System (ADS)
Flinn, A. R.
The basic form of conventional electro-optic sensors is described. The main drawback of these sensors is their inability to deal with the background radiation which usually accompanies the signal. This 'clutter' limits the sensor's performance long before other noise such as 'shot' noise. Pre-detector signal processing using the complex amplitude of the light is introduced as a means to discriminate between the signal and 'clutter'. Further improvements to predetector signal processors can be made by the inclusion of optical fibres allowing radiation to be used with greater efficiency and enabling certain signal processing tasks to be carried out with an ease unequalled by any other method. The theory of optical waveguides and their application in sensors, interferometers, and signal processors is reviewed. Geometrical aspects of the formation of linear and circular interference fringes are described along with temporal and spatial coherence theory and their relationship to Michelson's visibility function. The requirements for efficient coupling of a source into singlemode and multimode fibres are given. We describe interference experiments between beams of light emitted from a few metres of two or more, singlemode or multimode, optical fibres. Fresnel's equation is used to obtain expressions for Fresnel and Fraunhofer diffraction patterns which enable electro-optic (E-O) sensors to be analysed by Fourier optics. Image formation is considered when the aperture plane of an E-O sensor is illuminated with partially coherent light. This allows sensors to be designed using optical transfer functions which are sensitive to the spatial coherence of the illuminating light. Spatial coherence sensors which use gratings as aperture plane reticles are discussed. By using fibre arrays, spatial coherence processing enables E-O sensors to discriminate between a spatially coherent source and an incoherent background. The sensors enable the position and wavelength of the source to be determined. Experiments are described which use optical fibre arrays as masks for correlation with spatial distributions of light in image planes of E-O sensors. Correlations between laser light from different points in a scene are investigated by interfering the light emitted from an array of fibres, placed in the image plane of a sensor, with each other. Temporal signal processing experiments show that the visibility of interference fringes gives information about path differences in a scene or through an optical system. Most E-O sensors employ wavelength filtering of the detected radiation to improve their discrimination, and this is shown to be less selective than temporal coherence filtering, which is sensitive to spectral bandwidth. Experiments using fibre interferometers to discriminate between red and blue laser light by their bandwidths are described. In most cases the path difference need only be a few tens of centimetres. We consider spatial and temporal coherence in fibres. We show that high visibility interference fringes can be produced by red and blue laser light transmitted through over 100 metres of singlemode or multimode fibre. The effect of detector size, relative to speckle size, is considered for fringes produced by multimode fibres. The effect of dispersion on the coherence of the light emitted from fibres is considered in terms of correlation and interference between modes. We describe experiments using a spatial light modulator called SIGHT-MOD.
The device is used in various systems as a fibre optic switch and as a programmable aperture plane reticle. The contrast of the device is measured using red and green HeNe sources. Fourier transform images of patterns on the SIGHT-MOD are obtained and used to demonstrate the geometrical manipulation of images using 2D fibre arrays. Correlation of Fourier transform images of the SIGHT-MOD with 2D fibre arrays is demonstrated.
A New Pansharpening Method Based on Spatial and Spectral Sparsity Priors.
He, Xiyan; Condat, Laurent; Bioucas-Diaz, Jose; Chanussot, Jocelyn; Xia, Junshi
2014-06-27
The development of multisensor systems in recent years has led to great increase in the amount of available remote sensing data. Image fusion techniques aim at inferring high quality images of a given area from degraded versions of the same area obtained by multiple sensors. This paper focuses on pansharpening, which is the inference of a high spatial resolution multispectral image from two degraded versions with complementary spectral and spatial resolution characteristics: a) a low spatial resolution multispectral image; and b) a high spatial resolution panchromatic image. We introduce a new variational model based on spatial and spectral sparsity priors for the fusion. In the spectral domain we encourage low-rank structure, whereas in the spatial domain we promote sparsity on the local differences. Given the fact that both panchromatic and multispectral images are integrations of the underlying continuous spectra using different channel responses, we propose to exploit appropriate regularizations based on both spatial and spectral links between panchromatic and the fused multispectral images. A weighted version of the vector Total Variation (TV) norm of the data matrix is employed to align the spatial information of the fused image with that of the panchromatic image. With regard to spectral information, two different types of regularization are proposed to promote a soft constraint on the linear dependence between the panchromatic and the fused multispectral images. The first one estimates directly the linear coefficients from the observed panchromatic and low resolution multispectral images by Linear Regression (LR) while the second one employs the Principal Component Pursuit (PCP) to obtain a robust recovery of the underlying low-rank structure. We also show that the two regularizers are strongly related. The basic idea of both regularizers is that the fused image should have low-rank and preserve edge locations. We use a variation of the recently proposed Split Augmented Lagrangian Shrinkage (SALSA) algorithm to effectively solve the proposed variational formulations. Experimental results on simulated and real remote sensing images show the effectiveness of the proposed pansharpening method compared to the state-of-the-art.
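To make the structure of the model described above concrete, a schematic objective of this kind can be written as follows. This is a sketch for orientation only; the symbols and weights are illustrative and do not reproduce the paper's exact formulation:

\min_{X}\; \tfrac{1}{2}\,\lVert Y_m - \mathcal{D}\mathcal{B}X \rVert_F^2 \;+\; \lambda_{\mathrm{TV}}\, \mathrm{TV}_{W}(X) \;+\; \lambda_{\mathrm{sp}}\,\lVert y_p - X\alpha \rVert_2^2 ,

where X is the fused multispectral image, Y_m the observed low-resolution multispectral image, \mathcal{D}\mathcal{B} a blur-and-downsample operator, \mathrm{TV}_{W} the weighted vector total-variation term aligned with the panchromatic edges, y_p the panchromatic image and \alpha the linear coefficients estimated by regression (the PCP variant replaces the last term with a low-rank recovery penalty). A SALSA-type variable splitting can then handle the sum of these convex terms.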
Beam imaging sensor and method for using same
DOE Office of Scientific and Technical Information (OSTI.GOV)
McAninch, Michael D.; Root, Jeffrey J.
The present invention relates generally to the field of sensors for beam imaging and, in particular, to a new and useful beam imaging sensor for use in determining, for example, the power density distribution of a beam including, but not limited to, an electron beam or an ion beam. In one embodiment, the beam imaging sensor of the present invention comprises, among other items, a circumferential slit that is either circular, elliptical or polygonal in nature. In another embodiment, the beam imaging sensor of the present invention comprises, among other things, a discontinuous, partially circumferential slit. Also disclosed is a method for using the various beam sensor embodiments of the present invention.
Jeong, Y J; Oh, T I; Woo, E J; Kim, K J
2017-07-01
Recently, highly flexible and soft pressure distribution imaging sensors have been in great demand for tactile sensing, gait analysis, ubiquitous life-care based on activity recognition, and therapeutics. In this study, we integrate piezo-capacitive and piezo-electric nanowebs with conductive fabric sheets for detecting static and dynamic pressure distributions over a large sensing area. Electrical impedance tomography (EIT) and electric source imaging are applied to reconstruct pressure distribution images from current-voltage data measured on the boundary of the hybrid fabric sensor. We evaluated the piezo-capacitive nanoweb sensor, the piezo-electric nanoweb sensor, and the hybrid fabric sensor. The results show the feasibility of static and dynamic pressure distribution imaging from the boundary measurements of the fabric sensors.
A FPGA implementation for linearly unmixing a hyperspectral image using OpenCL
NASA Astrophysics Data System (ADS)
Guerra, Raúl; López, Sebastián.; Sarmiento, Roberto
2017-10-01
Hyperspectral imaging systems provide images in which single pixels contain information from across the electromagnetic spectrum of the scene under analysis. These systems divide the spectrum into many contiguous channels, which may even lie outside the visible part of the spectrum. The main advantage of hyperspectral imaging technology is that certain objects leave unique fingerprints in the electromagnetic spectrum, known as spectral signatures, which allow different materials that may look the same in a traditional RGB image to be distinguished. Accordingly, the most important hyperspectral imaging applications are related to distinguishing or identifying materials in a particular scene. In hyperspectral imaging applications under real-time constraints, the huge amount of information provided by hyperspectral sensors has to be rapidly processed and analysed. For this purpose, parallel hardware devices, such as Field Programmable Gate Arrays (FPGAs), are typically used. However, developing hardware applications typically requires expertise in the specific targeted device, as well as in the tools and methodologies which can be used to implement the desired algorithms on that device. In this scenario, the Open Computing Language (OpenCL) emerges as a very interesting solution, in which a single high-level synthesis design language can be used to efficiently develop applications on multiple, different hardware devices. In this work, the Fast Algorithm for Linearly Unmixing Hyperspectral Images (FUN) has been implemented on a Bitware Stratix V Altera FPGA using OpenCL. The obtained results demonstrate the suitability of OpenCL as a viable design methodology for quickly creating efficient FPGA designs for real-time hyperspectral imaging applications.
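For readers unfamiliar with the underlying computation, the linear unmixing itself (not the FUN algorithm or its OpenCL/FPGA kernel) can be sketched as a least-squares problem. The endmember matrix, noise level and constraint handling below are illustrative assumptions.

```python
# Hedged sketch: per-pixel abundance estimation by least squares under the
# linear mixing model Y = E @ A + noise, with a crude projection onto the
# usual non-negativity/sum-to-one constraints.
import numpy as np

rng = np.random.default_rng(0)
bands, endmembers, pixels = 100, 4, 1024
E = rng.uniform(0.0, 1.0, size=(bands, endmembers))            # endmember signatures
true_A = rng.dirichlet(np.ones(endmembers), size=pixels).T     # true abundances
Y = E @ true_A + rng.normal(scale=0.005, size=(bands, pixels)) # observed (flattened) cube

# Unconstrained least squares for all pixels at once.
A, *_ = np.linalg.lstsq(E, Y, rcond=None)

# Simple projection onto the abundance constraints (illustrative, not optimal).
A = np.clip(A, 0.0, None)
A /= A.sum(axis=0, keepdims=True)

print("mean abundance error:", np.abs(A - true_A).mean())
```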
A smart sensor architecture based on emergent computation in an array of outer-totalistic cells
NASA Astrophysics Data System (ADS)
Dogaru, Radu; Dogaru, Ioana; Glesner, Manfred
2005-06-01
A novel smart-sensor architecture is proposed, capable of segmenting and recognizing characters in a monochrome image. It provides a list of ASCII codes representing the characters recognized in the monochrome visual field. It can operate as an aid for the blind or in industrial applications. A bio-inspired cellular model with simple linear neurons was found to be the best at performing the nontrivial task of cropping isolated compact objects such as handwritten digits or characters. By attaching a simple outer-totalistic cell to each pixel sensor, emergent computation in the resulting cellular automaton lattice provides a straightforward and compact solution to the otherwise computationally intensive problem of character segmentation. A simple and robust recognition algorithm is built into a compact sequential controller accessing the array of cells, so that the integrated device can directly provide a list of codes of the recognized characters. Preliminary simulation tests indicate good performance and robustness to various distortions of the visual field.
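One way to picture the emergent segmentation is an outer-totalistic rule of the form "become marked if you are an ink pixel and at least one neighbour is marked", grown from a seed until it covers one isolated character. The rule and the tiny test image below are illustrative, not the paper's cell design.

```python
# Hedged sketch: an outer-totalistic cellular update grows a marked region over
# one connected character, whose bounding box can then be cropped.
import numpy as np
from scipy.ndimage import convolve

image = np.zeros((12, 20), dtype=bool)
image[3:9, 2:6] = True          # "character" 1 (a filled blob)
image[4:10, 12:17] = True       # "character" 2

kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])   # 8-neighbour outer sum

def crop_object(ink, seed):
    marked = np.zeros_like(ink)
    marked[seed] = ink[seed]
    while True:
        neighbour_sum = convolve(marked.astype(int), kernel, mode="constant")
        grown = ink & ((neighbour_sum > 0) | marked)    # outer-totalistic update
        if np.array_equal(grown, marked):
            break
        marked = grown
    rows, cols = np.nonzero(marked)
    return marked, (rows.min(), rows.max(), cols.min(), cols.max())

_, bbox = crop_object(image, seed=(5, 3))
print("bounding box of the first character:", bbox)
```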
Ultra-Sensitive Strain Sensor Based on Flexible Poly(vinylidene fluoride) Piezoelectric Film
NASA Astrophysics Data System (ADS)
Lu, Kai; Huang, Wen; Guo, Junxiong; Gong, Tianxun; Wei, Xiongbang; Lu, Bing-Wei; Liu, Si-Yi; Yu, Bin
2018-03-01
A flexible 4 × 4 sensor array with 16 micro-scale capacitive units has been demonstrated based on flexible piezoelectric poly(vinylidene fluoride) (PVDF) film. The piezoelectricity and surface morphology of the PVDF were examined by optical imaging and piezoresponse force microscopy (PFM). The PFM shows phase contrast, indicating a clear interface between the PVDF and the electrode. The electro-mechanical properties show that the sensor exhibits excellent output response and an ultra-high signal-to-noise ratio. The output voltage and the applied pressure exhibit a linear relationship with a slope of 12 mV/kPa. The hold-and-release output characteristics recover in less than 2.5 μs, demonstrating outstanding electro-mechanical response. Additionally, signal interference between adjacent array elements has been investigated via theoretical simulation. The results show that the interference decreases with decreasing pressure at a rate of 0.028 mV/kPa, is highly scalable with electrode size, and becomes insignificant for pressure levels under 178 kPa.
Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.
Zhang, Jiachao; Hirakawa, Keigo
2017-04-01
This paper describes a study aimed at comparing the real image sensor noise distribution to the noise models often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch in tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details such as edges and textures. Experiments with real sensor data verify that denoising of real image sensor data is indeed improved by this new technique.
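The tail argument can be illustrated numerically: a Poisson mixture with the same mean as a single Poisson can still have much heavier upper quantiles. The mixture weights and rates below are invented, not the paper's fitted model.

```python
# Hedged illustration: compare upper quantiles of a single Poisson and a
# two-component Poisson mixture with the same overall mean.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
mean_signal = 20.0

single = rng.poisson(mean_signal, size=n)

weights, rates = np.array([0.9, 0.1]), np.array([17.0, 47.0])
assert np.isclose(weights @ rates, mean_signal)      # same mean as the single model
component = rng.choice(2, size=n, p=weights)
mixture = rng.poisson(rates[component])

for q in (0.99, 0.999, 0.9999):
    print(f"q={q}: single Poisson {np.quantile(single, q):.0f}, "
          f"mixture {np.quantile(mixture, q):.0f}")
```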
Light field image denoising using a linear 4D frequency-hyperfan all-in-focus filter
NASA Astrophysics Data System (ADS)
Dansereau, Donald G.; Bongiorno, Daniel L.; Pizarro, Oscar; Williams, Stefan B.
2013-02-01
Imaging in low light is problematic as sensor noise can dominate imagery, and increasing illumination or aperture size is not always effective or practical. Computational photography offers a promising solution in the form of the light field camera, which by capturing redundant information offers an opportunity for elegant noise rejection. We show that the light field of a Lambertian scene has a 4D hyperfan-shaped frequency-domain region of support at the intersection of a dual-fan and a hypercone. By designing and implementing a filter with appropriately shaped passband we accomplish denoising with a single all-in-focus linear filter. Drawing examples from the Stanford Light Field Archive and images captured using a commercially available lenselet-based plenoptic camera, we demonstrate that the hyperfan outperforms competing methods including synthetic focus, fan-shaped antialiasing filters, and a range of modern nonlinear image and video denoising techniques. We show the hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering. We include results for different noise types and levels, over a variety of metrics, and in real-world scenarios. Finally, we show that the hyperfan's performance scales with aperture count.
Apparatus and method for a light direction sensor
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
2011-01-01
The present invention provides a light direction sensor for determining the direction of a light source. The system includes an image sensor, a spacer attached to the image sensor, and a pattern mask attached to said spacer. The pattern mask has a slit pattern such that, as light passes through the slit pattern, it casts a diffraction pattern onto the image sensor. The method operates by receiving a beam of light onto a patterned mask, wherein the patterned mask has a plurality of slit segments, then diffusing the beam of light onto an image sensor and determining the direction of the light source.
NASA Astrophysics Data System (ADS)
Fernández-Manso, O.; Fernández-Manso, A.; Quintano, C.
2014-09-01
Aboveground biomass (AGB) estimation from optical satellite data is usually based on regression models of original or synthetic bands. To overcome the poor relation between AGB and spectral bands caused by mixed pixels when a medium-spatial-resolution sensor is considered, we propose to base the AGB estimation on fraction images from Linear Spectral Mixture Analysis (LSMA). Our study area is a managed Mediterranean pine woodland (Pinus pinaster Ait.) in central Spain. A total of 1033 circular field plots were used to estimate AGB from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) optical data. We applied Pearson correlation statistics and stepwise multiple regression to identify suitable predictors from the set of variables comprising the original bands, fraction imagery, Normalized Difference Vegetation Index and Tasselled Cap components. Four linear models and one nonlinear model were tested. A linear combination of ASTER band 2 (red, 0.630-0.690 μm), band 8 (short wave infrared 5, 2.295-2.365 μm) and the green vegetation fraction (from LSMA) was the best AGB predictor (adjusted R2 = 0.632; cross-validated root-mean-square error of the estimated AGB of 13.3 Mg ha-1, or 37.7%), outperforming other combinations of the above-cited independent variables. The results indicate that using ASTER fraction images in regression models improves AGB estimation in Mediterranean pine forests. The spatial distribution of the estimated AGB, based on a multiple linear regression model, may be used as baseline information for forest managers in future studies, such as quantifying the regional carbon budget, fuel accumulation or monitoring of management practices.
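The final predictive step described above is a multiple linear regression of plot AGB on two ASTER bands and the green-vegetation fraction. The sketch below uses synthetic predictors and a leave-one-out cross-validated RMSE purely to show the evaluation pattern; it does not reproduce the study's data or coefficients.

```python
# Hedged sketch: multiple linear regression of AGB on band 2, band 8 and the
# LSMA green-vegetation fraction, with leave-one-out cross-validation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict, LeaveOneOut

rng = np.random.default_rng(0)
n_plots = 200
band2 = rng.uniform(0.05, 0.25, n_plots)       # red reflectance (synthetic)
band8 = rng.uniform(0.05, 0.35, n_plots)       # SWIR reflectance (synthetic)
gv_fraction = rng.uniform(0.0, 0.9, n_plots)   # green-vegetation fraction (synthetic)

agb = 10 + 45 * gv_fraction - 60 * band2 - 20 * band8 + rng.normal(0, 6, n_plots)

X = np.column_stack([band2, band8, gv_fraction])
pred = cross_val_predict(LinearRegression(), X, agb, cv=LeaveOneOut())
rmse = np.sqrt(np.mean((pred - agb) ** 2))
print(f"cross-validated RMSE: {rmse:.1f} Mg/ha")
```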
Study the performance of star sensor influenced by space radiation damage of image sensor
NASA Astrophysics Data System (ADS)
Feng, Jie; Li, Yudong; Wen, Lin; Guo, Qi; Zhang, Xingyao
2018-03-01
The star sensor is an essential component of spacecraft attitude control systems. Space radiation can degrade star sensor performance, cause abnormal operation, and reduce attitude measurement accuracy and reliability. Many studies have already been dedicated to radiation effects on Charge-Coupled Device (CCD) image sensors, but fewer focus on radiation effects at the star sensor level. The innovation of this paper is to study the radiation effects from the device level to the system level. The influence of the degradation of the radiation-sensitive parameters of the CCD image sensor on the performance parameters of the star sensor is studied in this paper. The correlation among the proton radiation effects, the non-uniformity noise of the CCD image sensor and the performance parameters of the star sensor is analyzed. This paper establishes a foundation for the study of error prediction and correction technology for star sensor on-orbit attitude measurement, and provides a theoretical basis for the design of high-performance star sensors.
Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors
Dutton, Neale A. W.; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K.
2016-01-01
SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed. PMID:27447643
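A minimal sketch of the peak-separation-and-width idea: fit neighbouring single-photon peaks in the photon counting histogram and express the shared peak width in units of the peak separation, i.e., read noise in electrons. The two-Gaussian model, initial guesses and synthetic histogram below are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_peaks(x, a0, a1, mu0, sep, sigma):
    """Two adjacent photon peaks sharing a width sigma, separated by 'sep' output units."""
    g = lambda mu: np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return a0 * g(mu0) + a1 * g(mu0 + sep)

def read_noise_electrons(bin_centers, counts, p0=(1000.0, 800.0, 0.0, 10.0, 3.0)):
    """Estimate read noise (in e-) as peak width / peak separation from a PCH."""
    (a0, a1, mu0, sep, sigma), _ = curve_fit(two_peaks, bin_centers, counts, p0=p0)
    return abs(sigma) / abs(sep)

# synthetic demonstration: peaks 10 DN apart, sigma = 2.5 DN -> read noise ~ 0.25 e-
x = np.linspace(-10, 30, 400)
y = two_peaks(x, 1000, 700, 0, 10, 2.5) + np.random.default_rng(1).normal(0, 5, x.size)
print(read_noise_electrons(x, y))
```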
Oxygen mapping: Probing a novel seeding strategy for bone tissue engineering.
Westphal, Ines; Jedelhauser, Claudia; Liebsch, Gregor; Wilhelmi, Arnd; Aszodi, Attila; Schieker, Matthias
2017-04-01
Bone tissue engineering (BTE) utilizing biomaterial scaffolds and human mesenchymal stem cells (hMSCs) is a promising approach for the treatment of bone defects. The quality of engineered tissue is crucially affected by numerous parameters, including cell density and oxygen supply. In this study, a novel oxygen-imaging sensor was introduced to monitor the oxygen distribution in three-dimensional (3D) scaffolds in order to analyze a new cell-seeding strategy. Immortalized hMSCs, pre-cultured in a monolayer to 30-40% or 70-80% confluence, were used to seed demineralized bone matrix (DBM) scaffolds. Real-time measurements of oxygen consumption in vitro were performed simultaneously with the novel planar sensor and a conventional needle-type sensor over 24 h. Oxygen maps recorded with the novel planar sensor revealed that scaffolds seeded with hMSCs harvested at lower densities (30-40% confluence) exhibited a rapid, exponential oxygen consumption profile. In contrast, harvesting cells at higher densities (70-80% confluence) resulted in a very slow, almost linear, oxygen decrease due to gradually reaching the stationary growth phase. In conclusion, it could be shown that not only the seeding density on a scaffold, but also the cell density at the time point of harvest is of major importance for BTE. The new strategy of seeding MSCs harvested at low density, during their log phase, could be useful for early in vivo implantation of cell-seeded scaffolds after a shorter in vitro culture period. Furthermore, the novel oxygen-imaging sensor enables continuous, two-dimensional, quick and convenient oxygen mapping for the development and optimization of tissue-engineered scaffolds. Biotechnol. Bioeng. 2017;114: 894-902. © 2016 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Ebert, D. H.; Eppes, T. A.; Thomas, D. J.
1973-01-01
The impact of a conical scan versus a linear scan multispectral scanner (MSS) instrument was studied in terms of: (1) design modifications required in framing and continuous image recording devices; and (2) changes in configurations of an all-digital precision image processor. A baseline system was defined to provide the framework for comparison, and included pertinent spacecraft parameters, a conical MSS, a linear MSS, an image recording system, and an all-digital precision processor. Lateral offset pointing of the sensors over a range of plus or minus 20 deg was considered. The study addressed the conical scan impact on geometric, radiometric, and aperture correction of MSS data in terms of hardware and software considerations, system complexity, quality of corrections, throughput, and cost of implementation. It was concluded that: (1) if the MSS data are to be only film recorded, then there is only a nominal conical scan impact on the ground data processing system; and (2) if digital data are to be provided to users on computer compatible tapes in rectilinear format, then there is a significant conical scan impact on the ground data processing system.
Improved linearity using harmonic error rejection in a full-field range imaging system
NASA Astrophysics Data System (ADS)
Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.
2008-02-01
Full field range imaging cameras are used to simultaneously measure the distance for every pixel in a given scene using an intensity modulated illumination source and a gain modulated receiver array. The light is reflected from an object in the scene, and the modulation envelope experiences a phase shift proportional to the target distance. Ideally the waveforms are sinusoidal, allowing the phase, and hence object range, to be determined from four measurements using an arctangent function. In practice these waveforms are often not perfectly sinusoidal, and in some cases square waveforms are instead used to simplify the electronic drive requirements. The waveforms therefore commonly contain odd harmonics which contribute a nonlinear error to the phase determination, and therefore an error in the range measurement. We have developed a unique sampling method to cancel the effect of these harmonics, with the results showing an order of magnitude improvement in the measurement linearity without the need for calibration or lookup tables, while the acquisition time remains unchanged. The technique can be applied to existing range imaging systems without having to change or modify the complex illumination or sensor systems, instead only requiring a change to the signal generation and timing electronics.
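The four-measurement arctangent step referred to above is commonly written as the standard four-bucket phase estimate; a hedged sketch is given below (sign and sample-ordering conventions vary between systems, so the indexing here is one common assumption rather than this paper's exact form).

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def range_from_four_samples(a0, a1, a2, a3, f_mod):
    """Estimate range from four samples of the correlation waveform taken at
    0, 90, 180 and 270 degrees of the modulation period (one common convention)."""
    phase = np.mod(np.arctan2(a3 - a1, a0 - a2), 2.0 * np.pi)
    return C * phase / (4.0 * np.pi * f_mod)   # phase shift -> one-way distance
```

Odd harmonics in non-sinusoidal waveforms bias this arctangent estimate, which is the nonlinearity the paper's sampling scheme is designed to cancel.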
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, Y. Ashok Kumar, E-mail: akreddy111@gmail.com; Shin, Young Bong; Kang, In-Ku
This study aims to investigate the influence of the sputtering pressure (PS) on Nb:TiO2-x films to enhance their bolometric properties. A decrease in the growth rate with increasing sputtering pressure was observed in amorphous Nb:TiO2-x films. The incorporation of oxygen with PS was confirmed by X-ray photoelectron spectroscopy analysis. The electrical resistivity increased with PS due to a decrease in the number of oxygen vacancies. The linear I-V characteristics confirmed the ohmic contact behavior between the Nb:TiO2-x layer and the electrode material. The present investigation finds that the sample with lower resistivity has good bolometric properties, with low noise and high universal bolometric parameters. Finally, the Nb:TiO2-x sample deposited at a sputtering pressure of 2 mTorr shows better bolometric properties than the other materials for infrared image sensor applications.
Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications
NASA Astrophysics Data System (ADS)
Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David
2017-10-01
The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors highly depends on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable to acquire illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding the display we have investigated the image intensity statistics over time, and regarding image analysis we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
The lucky image-motion prediction for simple scene observation based soft-sensor technology
NASA Astrophysics Data System (ADS)
Li, Yan; Su, Yun; Hu, Bin
2015-08-01
High resolution is important for Earth remote sensors, and vibration of the remote sensing platform is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes to apply soft-sensor technology to image-motion prediction and focuses on optimizing the prediction algorithm. Simulation results indicate that the improved lucky image-motion stabilization algorithm, which combines a back-propagation neural network (BP NN) and a support vector machine (SVM), is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the training and computation of the mathematical prediction model are fast enough for real-time image stabilization in aerial photography.
Accommodating multiple illumination sources in an imaging colorimetry environment
NASA Astrophysics Data System (ADS)
Tobin, Kenneth W., Jr.; Goddard, James S., Jr.; Hunt, Martin A.; Hylton, Kathy W.; Karnowski, Thomas P.; Simpson, Marc L.; Richards, Roger K.; Treece, Dale A.
2000-03-01
Researchers at the Oak Ridge National Laboratory have been developing a method for measuring color quality in textile products using a tri-stimulus color camera system. Initial results of the Imaging Tristimulus Colorimeter (ITC) were reported during 1999. These results showed that the projection onto convex sets (POCS) approach to color estimation could be applied to complex printed patterns on textile products with high accuracy and repeatability. Image-based color sensors used for on-line measurement are not colorimetric by nature and require a non-linear transformation of the component colors based on the spectral properties of the incident illumination, imaging sensor, and the actual textile color. Our earlier work reports these results for a broad-band, smoothly varying D65 standard illuminant. To move the measurement to the on-line environment with continuously manufactured textile webs, the illumination source becomes problematic. The spectral content of these light sources varies substantially from the D65 standard illuminant and can greatly impact the measurement performance of the POCS system. Although absolute color measurements are difficult to make under different illumination, referential measurements to monitor color drift provide a useful indication of product quality. Modifications to the ITC system have been implemented to enable the study of different light sources. These results and the subsequent analysis of relative color measurements will be reported for textile products.
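As a generic illustration of the POCS idea mentioned above (not the ITC implementation), alternating projections onto convex constraint sets converge toward a point in their intersection. The two sets below, a physical-value box and an affine measurement constraint, are hypothetical stand-ins; the affine projection assumes the measurement matrix has full row rank.

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """Projection onto the box constraint lo <= x <= hi (e.g., valid reflectances)."""
    return np.clip(x, lo, hi)

def project_affine(x, A, b):
    """Projection onto the affine set {x : A @ x = b} (e.g., camera measurements)."""
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x - b)

def pocs(A, b, n_iter=100):
    """Alternate projections between the two convex sets."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_box(project_affine(x, A, b))
    return x

# tiny demonstration with a hypothetical 2x4 measurement matrix
A = np.array([[1.0, 0.5, 0.0, 0.2], [0.0, 0.3, 1.0, 0.1]])
b = np.array([0.6, 0.9])
print(pocs(A, b))
```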
Fusion: ultra-high-speed and IR image sensors
NASA Astrophysics Data System (ADS)
Etoh, T. Goji; Dao, V. T. S.; Nguyen, Quang A.; Kimata, M.
2015-08-01
Most targets of ultra-high-speed video cameras operating at more than 1 Mfps, such as combustion, crack propagation, collision, plasma, spark discharge, an airbag in a car accident and a tire under sudden braking, generate sudden heat. Researchers in these fields require tools to measure the high-speed motion and the heat simultaneously. Ultra-high frame rate imaging is achieved by an in-situ storage image sensor. Each pixel of the sensor is equipped with multiple memory elements to record a series of image signals simultaneously at all pixels. Image signals stored in each pixel are read out after the image capturing operation. In 2002, we developed an in-situ storage image sensor operating at 1 Mfps 1). However, the fill factor of the sensor was only 15% due to a light shield covering the wide in-situ storage area. Therefore, in 2011, we developed a backside-illuminated (BSI) in-situ storage image sensor to increase the sensitivity, with a 100% fill factor and a very high quantum efficiency 2). The sensor also achieved a much higher frame rate, 16.7 Mfps, thanks to the greater wiring freedom on the front side 3). Another advantage of the BSI structure is that it presents fewer difficulties in attaching an additional layer, such as a scintillator, on the backside. This paper proposes the development of an ultra-high-speed IR image sensor that combines advanced nanotechnologies for IR imaging with the in-situ storage technology for ultra-high-speed imaging, and discusses issues in their integration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sword, Charles Keith
A scanner system and method for acquisition of position-based ultrasonic inspection data are described. The scanner system includes an inspection probe and a first non-contact linear encoder having a first sensor and a first scale to track inspection probe position. The first sensor is positioned to maintain a continuous non-contact interface between the first sensor and the first scale and to maintain a continuous alignment of the first sensor with the inspection probe. The scanner system may be used to acquire two-dimensional inspection probe position data by including a second non-contact linear encoder having a second sensor and a second scale, the second sensor positioned to maintain a continuous non-contact interface between the second sensor and the second scale and to maintain a continuous alignment of the second sensor with the first sensor.
Advanced sensor-simulation capability
NASA Astrophysics Data System (ADS)
Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.
1990-09-01
This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (VISIBLE/INFRARED SENSOR TRADES, ANALYSES, AND SIMULATIONS) combines classical image processing techniques with detailed sensor models to produce static and time dependent simulations of a variety of sensor systems including imaging, tracking, and point target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring 2-dimensional array sensors which can be used for either imaging or point source detection.
Flexible phosphor sensors: a digital supplement or option to rigid sensors.
Glazer, Howard S
2014-01-01
An increasing number of dental practices are upgrading from film radiography to digital radiography, for reasons that include faster image processing, easier image access, better patient education, enhanced data storage, and improved office productivity. Most practices that have converted to digital technology use rigid, or direct, sensors. Another digital option is flexible phosphor sensors, also called indirect sensors or phosphor storage plates (PSPs). Flexible phosphor sensors can be advantageous for use with certain patients who may be averse to direct sensors, and they can deliver a larger image area. Additionally, sensor cost for replacement PSPs is considerably lower than for hard sensors. As such, flexible phosphor sensors appear to be a viable supplement or option to direct sensors.
Spaceborne imaging radar research in the 90's
NASA Technical Reports Server (NTRS)
Elachi, Charles
1986-01-01
The imaging radar experiments on SEASAT and on the space shuttle (SIR-A and SIR-B) have led to a wide interest in the use of spaceborne imaging radars in Earth and planetary sciences. The radar sensors provide unique and complementary information to what is acquired with visible and infrared imagers. This includes subsurface imaging in arid regions, all-weather observation of ocean surface dynamic phenomena, structural mapping, soil moisture mapping, stereo imaging and resulting topographic mapping. However, experiments up to now have exploited only a very limited range of the generic capability of radar sensors. With planned sensor developments in the late 80's and early 90's, a quantum jump will be made in our ability to fully exploit the potential of these sensors. These developments include: multiparameter research sensors such as SIR-C and X-SAR, long-term and global monitoring sensors such as ERS-1, JERS-1, EOS, Radarsat, GLORI and the spaceborne sounder, planetary mapping sensors such as the Magellan and Cassini/Titan mappers, topographic three-dimensional imagers such as the scanning radar altimeter and three-dimensional rain mapping. These sensors and their associated research are briefly described.
Star centroiding error compensation for intensified star sensors.
Jiang, Jie; Xiong, Kun; Yu, Wenbo; Yan, Jinyun; Zhang, Guangjun
2016-12-26
A star sensor provides high-precision attitude information by capturing a stellar image; however, the traditional star sensor has poor dynamic performance, which is attributed to its low sensitivity. In the intensified star sensor, an image intensifier is utilized to improve the sensitivity, thereby improving the dynamic performance of the star sensor. However, the introduction of the image intensifier decreases the star centroiding accuracy, which in turn degrades the attitude measurement precision of the star sensor. A star centroiding error compensation method for intensified star sensors is proposed in this paper to reduce these influences. First, the imaging model of the intensified detector, which includes the deformation parameter of the optical fiber panel, is established based on orthographic projection through an analysis of the errors introduced by the image intensifier. Thereafter, the position errors at the target points based on the model are obtained using the Levenberg-Marquardt (LM) optimization method. Finally, the nearest trigonometric interpolation method is presented to compensate for the arbitrary centroiding error across the image plane. Laboratory calibration results and night-sky experiment results show that the compensation method effectively eliminates the error introduced by the image intensifier, thus remarkably improving the precision of intensified star sensors.
McAninch, Michael D.; Root, Jeffrey J.
2016-07-05
The present invention relates generally to the field of sensors for beam imaging and, in particular, to a new and useful beam imaging sensor for use in determining, for example, the power density distribution of a beam including, but not limited to, an electron beam or an ion beam. In one embodiment, the beam imaging sensor of the present invention comprises, among other items, a circumferential slit that is either circular, elliptical or polygonal in nature.
Theoretical and Experimental Study on Wide Range Optical Fiber Turbine Flow Sensor.
Du, Yuhuan; Guo, Yingqing
2016-07-15
In this paper, a novel fiber turbine flow sensor is proposed and demonstrated for liquid flow measurement with optical fiber, using light intensity modulation to measure the turbine rotational speed and convert it to flow rate. The double-circle-coaxial (DCC) fiber probe was introduced for frequency measurement for the first time. By taking the ratio of the light intensities of the two rings, interference in the light signal acquisition can be eliminated. To predict the relationship between the output frequency and the flow in the nonlinear range, a model of the turbine flow sensor was built. By analyzing the characteristics of the turbine flow sensor, piecewise linear equations were obtained to expand the flow measurement range. Furthermore, experimental verification was carried out. The results showed that the flow range ratio of the DN20 turbine flow sensor was improved 2.9-fold after applying piecewise linearization in the nonlinear range. Therefore, by combining the DCC fiber sensor and the piecewise linear method, a fiber turbine flowmeter with strong anti-electromagnetic interference (anti-EMI) capability and a wide measurement range can be developed.
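A minimal sketch of the piecewise-linear calibration idea: fit separate frequency-to-flow line segments for the linear and nonlinear regions and evaluate whichever segment a measured frequency falls into. The breakpoint location and the two-segment split are hypothetical choices for illustration, not the paper's calibration.

```python
import numpy as np

def fit_piecewise(freq, flow, breakpoint):
    """Fit two line segments (below/above a chosen frequency breakpoint)."""
    lo, hi = freq <= breakpoint, freq > breakpoint
    return np.polyfit(freq[lo], flow[lo], 1), np.polyfit(freq[hi], flow[hi], 1)

def flow_from_freq(f, breakpoint, seg_lo, seg_hi):
    """Evaluate the calibrated flow rate for a measured turbine frequency f."""
    return np.polyval(seg_lo if f <= breakpoint else seg_hi, f)
```

More segments (or a spline) could be used in the same way if the nonlinear region is wide.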
Optical and Electric Multifunctional CMOS Image Sensors for On-Chip Biosensing Applications.
Tokuda, Takashi; Noda, Toshihiko; Sasagawa, Kiyotaka; Ohta, Jun
2010-12-29
In this review, the concept, design, performance, and a functional demonstration of multifunctional complementary metal-oxide-semiconductor (CMOS) image sensors dedicated to on-chip biosensing applications are described. We developed a sensor architecture that allows flexible configuration of a sensing pixel array consisting of optical and electric sensing pixels, and designed multifunctional CMOS image sensors that can sense light intensity and electric potential or apply a voltage to an on-chip measurement target. We describe the sensors' architecture on the basis of the type of electric measurement or imaging functionalities.
A 100 Mfps image sensor for biological applications
NASA Astrophysics Data System (ADS)
Etoh, T. Goji; Shimonomura, Kazuhiro; Nguyen, Anh Quang; Takehara, Kosei; Kamakura, Yoshinari; Goetschalckx, Paul; Haspeslagh, Luc; De Moor, Piet; Dao, Vu Truong Son; Nguyen, Hoang Dung; Hayashi, Naoki; Mitsui, Yo; Inumaru, Hideo
2018-02-01
Two ultrahigh-speed CCD image sensors with different characteristics were fabricated for applications in advanced scientific measurement apparatuses. The sensors are BSI MCG (Backside-illuminated Multi-Collection-Gate) image sensors with multiple collection gates around the center of the front side of each pixel, placed like the petals of a flower. One has five collection gates and one drain gate at the center, and can capture five consecutive frames at 100 Mfps with a pixel count of about 600 kpixels (512 x 576 x 2 pixels). In-pixel signal accumulation is possible for repetitive image capture of reproducible events. The target application is FLIM. The other is equipped with four collection gates, each connected to an in-situ CCD memory with 305 elements, which enables capture of 1,220 (4 x 305) consecutive images at 50 Mfps. The CCD memory is folded and looped, with the first element connected to the last element, which also makes in-pixel signal accumulation possible. The sensor is a small test sensor with 32 x 32 pixels. The target applications are imaging TOF MS, pulsed neutron tomography and dynamic PSP. The paper also briefly explains an expression for the temporal resolution of silicon image sensors, theoretically derived by the authors in 2017. It is shown that the image sensor designed based on this theoretical analysis achieves imaging of consecutive frames at a frame interval of 50 ps.
Smart image sensors: an emerging key technology for advanced optical measurement and microsystems
NASA Astrophysics Data System (ADS)
Seitz, Peter
1996-08-01
Optical microsystems typically include photosensitive devices, analog preprocessing circuitry and digital signal processing electronics. The advances in semiconductor technology have made it possible today to integrate all photosensitive and electronic devices on one 'smart image sensor' or photo-ASIC (application-specific integrated circuits containing photosensitive elements). It is even possible to provide each 'smart pixel' with additional photoelectronic functionality, without compromising the fill factor substantially. This technological capability is the basis for advanced cameras and optical microsystems showing novel on-chip functionality: Single-chip cameras with on-chip analog-to-digital converters for less than $10 are advertised; image sensors have been developed including novel functionality such as real-time selectable pixel size and shape, the capability of performing arbitrary convolutions simultaneously with the exposure, as well as variable, programmable offset and sensitivity of the pixels leading to image sensors with a dynamic range exceeding 150 dB. Smart image sensors have been demonstrated offering synchronous detection and demodulation capabilities in each pixel (lock-in CCD), and conventional image sensors are combined with an on-chip digital processor for complete, single-chip image acquisition and processing systems. Technological problems of the monolithic integration of smart image sensors include offset non-uniformities, temperature variations of electronic properties, imperfect matching of circuit parameters, etc. These problems can often be overcome either by designing additional compensation circuitry or by providing digital correction routines. Where necessary for technological or economic reasons, smart image sensors can also be combined with or realized as hybrids, making use of commercially available electronic components. It is concluded that the possibilities offered by custom smart image sensors will influence the design and the performance of future electronic imaging systems in many disciplines, reaching from optical metrology to machine vision on the factory floor and in robotics applications.
Testing and evaluation of tactical electro-optical sensors
NASA Astrophysics Data System (ADS)
Middlebrook, Christopher T.; Smith, John G.
2002-07-01
As integrated electro-optical sensor payloads (multi- sensors) comprised of infrared imagers, visible imagers, and lasers advance in performance, the tests and testing methods must also advance in order to fully evaluate them. Future operational requirements will require integrated sensor payloads to perform missions at further ranges and with increased targeting accuracy. In order to meet these requirements sensors will require advanced imaging algorithms, advanced tracking capability, high-powered lasers, and high-resolution imagers. To meet the U.S. Navy's testing requirements of such multi-sensors, the test and evaluation group in the Night Vision and Chemical Biological Warfare Department at NAVSEA Crane is developing automated testing methods, and improved tests to evaluate imaging algorithms, and procuring advanced testing hardware to measure high resolution imagers and line of sight stabilization of targeting systems. This paper addresses: descriptions of the multi-sensor payloads tested, testing methods used and under development, and the different types of testing hardware and specific payload tests that are being developed and used at NAVSEA Crane.
Thermal Effects on Camera Focal Length in Messenger Star Calibration and Orbital Imaging
NASA Astrophysics Data System (ADS)
Burmeister, S.; Elgner, S.; Preusker, F.; Stark, A.; Oberst, J.
2018-04-01
We analyse images taken by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft for the camera's thermal response in the harsh thermal environment near Mercury. Specifically, we study thermally induced variations in the focal length of the Mercury Dual Imaging System (MDIS). Within the several hundred images of star fields, the Wide Angle Camera (WAC) typically captures up to 250 stars in one frame of the panchromatic channel. We measure star positions and relate these to the known star coordinates taken from the Tycho-2 catalogue. We solve for camera pointing, the focal length parameter and two non-symmetrical distortion parameters for each image. Using data from the temperature sensors on the camera focal plane, we model a linear focal length function of the form f(T) = A0 + A1 T. Next, we use images from MESSENGER's orbital mapping mission. We deal with large image blocks, typically used for the production of high-resolution digital terrain models (DTMs). We analysed images from the combined quadrangles H03 and H07, a selected region covered by approx. 10,600 images, in which we identified about 83,900 tiepoints. Using bundle block adjustments, we solved for the unknown coordinates of the control points, the pointing of the camera, as well as the camera's focal length. We then fit the above linear function with respect to the focal plane temperature. As a result, we find a complex response of the camera to the thermal conditions of the spacecraft. To first order, we see a linear increase of approx. 0.0107 mm per degree temperature for the Narrow-Angle Camera (NAC). This is in agreement with the observed thermal response seen in images of the panchromatic channel of the WAC. Unfortunately, further comparisons of results from the two methods, both of which use different portions of the available image data, are limited. If left uncorrected, these effects may pose significant difficulties in the photogrammetric analysis; specifically, they may be responsible for erroneous long-wavelength trends in topographic models.
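Given per-image focal-length solutions and the corresponding focal-plane temperatures, the linear model f(T) = A0 + A1 T quoted above can be recovered with an ordinary least-squares fit. The numbers below are synthetic placeholders chosen only so the slope is of the same order as the quoted ~0.01 mm per degree; they are not the mission's values.

```python
import numpy as np

temps = np.array([5.0, 10.0, 15.0, 20.0, 25.0])             # focal-plane temperature, deg C (hypothetical)
focal = np.array([549.95, 550.00, 550.06, 550.11, 550.16])  # solved focal lengths, mm (hypothetical)

A1, A0 = np.polyfit(temps, focal, 1)     # f(T) = A0 + A1 * T
print(f"A0 = {A0:.3f} mm, A1 = {A1:.4f} mm/degC")
```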
NASA Astrophysics Data System (ADS)
Mao, Mingxu; Ye, Jiamin; Wang, Haigang; Yang, Wuqiang
2016-09-01
The hydrodynamics of gas-solids flow in the bottom of a circulating fluidized bed (CFB) are complicated. Three-dimensional (3D) electrical capacitance tomography (ECT) has been used to investigate the hydrodynamics in risers of different shapes. Four different ECT sensors with 12 electrodes each were designed according to the dimensions of the risers, including two circular ECT sensors, a square ECT sensor and a rectangular ECT sensor. The electrodes are evenly arranged in three planes to obtain capacitance at different heights and to reconstruct 3D images by the linear back projection (LBP) algorithm. Experiments were carried out on the four risers using sand as the solids material. The capacitance and differential pressure were measured at gas superficial velocities from 0.6 m s-1 to 3.0 m s-1 in steps of 0.2 m s-1. The flow regime was investigated according to the solids concentration and differential pressure. The dynamic properties of bubbling flows were analyzed theoretically and the performance of the 3D ECT sensors was evaluated. The experimental results show that 3D ECT can be used in a CFB with different risers to predict the hydrodynamics of gas-solids bubbling flows.
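Linear back projection, as used for the image reconstruction above, reduces to a normalized matrix-vector product of the sensitivity matrix with the normalized capacitances. The sketch below assumes the usual low/high-permittivity calibration measurements; it is a generic LBP illustration, not this study's code.

```python
import numpy as np

def lbp_reconstruction(cap, cap_low, cap_high, sensitivity):
    """Linear back projection; sensitivity has shape (n_measurements, n_voxels)."""
    lam = (cap - cap_low) / (cap_high - cap_low)       # normalized capacitances
    g = sensitivity.T @ lam                            # back-projected image
    g /= sensitivity.T @ np.ones_like(lam)             # per-voxel normalization
    return np.clip(g, 0.0, 1.0)                        # normalized solids concentration
```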
Characterization of a hybrid energy-resolving photon-counting detector
NASA Astrophysics Data System (ADS)
Zang, A.; Pelzer, G.; Anton, G.; Ballabriga Sune, R.; Bisello, F.; Campbell, M.; Fauler, A.; Fiederle, M.; Llopart Cudie, X.; Ritter, I.; Tennert, F.; Wölfel, S.; Wong, W. S.; Michel, T.
2014-03-01
Photon-counting detectors in medical x-ray imaging provide a higher dose efficiency than integrating detectors. Even further possibilities for imaging applications arise, if the energy of each photon counted is measured, as for example K-edge-imaging or optimizing image quality by applying energy weighting factors. In this contribution, we show results of the characterization of the Dosepix detector. This hybrid photon- counting pixel detector allows energy resolved measurements with a novel concept of energy binning included in the pixel electronics. Based on ideas of the Medipix detector family, it provides three different modes of operation: An integration mode, a photon-counting mode, and an energy-binning mode. In energy-binning mode, it is possible to set 16 energy thresholds in each pixel individually to derive a binned energy spectrum in every pixel in one acquisition. The hybrid setup allows using different sensor materials. For the measurements 300 μm Si and 1 mm CdTe were used. The detector matrix consists of 16 x 16 square pixels for CdTe (16 x 12 for Si) with a pixel pitch of 220 μm. The Dosepix was originally intended for applications in the field of radiation measurement. Therefore it is not optimized towards medical imaging. The detector concept itself still promises potential as an imaging detector. We present spectra measured in one single pixel as well as in the whole pixel matrix in energy-binning mode with a conventional x-ray tube. In addition, results concerning the count rate linearity for the different sensor materials are shown as well as measurements regarding energy resolution.
NASA Astrophysics Data System (ADS)
Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith
2017-02-01
The"scientific" CMOS (sCMOS) camera architecture fundamentally differs from CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output is generally through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform for all pixels, although quantum efficiency may spatially vary. In CMOS cameras, the charge to voltage conversion is separate for each pixel and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor includes pixel-to-pixel variability in the read noise, electronic gain, offset and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of individual pixel offset, dark current, read noise, linearity, photoresponse non-uniformity and variance distributions of individual pixels for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination conditions, from dark conditions to multiple low light levels between 20 to 1,000 photons / pixel per frame to higher light conditions. We further show that using pixel variance for flat field correction leads to errors in cameras with good factory calibration.
Self-amplified CMOS image sensor using a current-mode readout circuit
NASA Astrophysics Data System (ADS)
Santos, Patrick M.; de Lima Monteiro, Davies W.; Pittet, Patrick
2014-05-01
The feature size of CMOS processes has decreased over the past few years, and problems such as reduced dynamic range have become more significant in voltage-mode pixels, even though the integration of more functionality inside the pixel has become easier. This work makes a contribution on both fronts: a high signal excursion range through current-mode circuits, together with added functionality through signal amplification inside the pixel. The classic 3T pixel architecture was rebuilt with small modifications to integrate a transconductance amplifier providing a current as the output. The matrix of these new pixels operates as one large transistor sourcing an amplified current that is used for signal processing. This current is controlled by the intensity of the light received by the matrix, modulated pixel by pixel. The output current can be controlled by the biasing circuits to achieve a very large range of output signal levels. It can also be controlled through the matrix size, which gives a very high degree of freedom in the signal level, subject to the current densities allowed inside the integrated circuit. In addition, the matrix can operate at very short integration times. Its applications would be those in which fast image processing and high signal amplification are required and low resolution is not a major problem, such as UV image sensors. Simulation results are presented to support the operation, control, design, signal excursion levels and linearity of a pixel matrix conceived using this new sensor concept.
Cloud-to-Ground Lightning Estimates Derived from SSMI Microwave Remote Sensing and NLDN
NASA Technical Reports Server (NTRS)
Winesett, Thomas; Magi, Brian; Cecil, Daniel
2015-01-01
Lightning observations are collected using ground-based and satellite-based sensors. The National Lightning Detection Network (NLDN) in the United States uses multiple ground sensors to triangulate the electromagnetic signals created when lightning strikes the Earth's surface. Satellite-based lightning observations have been made from 1998 to present using the Lightning Imaging Sensor (LIS) on the NASA Tropical Rainfall Measuring Mission (TRMM) satellite, and from 1995 to 2000 using the Optical Transient Detector (OTD) on the Microlab-1 satellite. Both LIS and OTD are staring imagers that detect lightning as momentary changes in an optical scene. Passive microwave remote sensing (85 and 37 GHz brightness temperatures) from the TRMM Microwave Imager (TMI) has also been used to quantify characteristics of thunderstorms related to lightning. Each lightning detection system has fundamental limitations. TRMM satellite coverage is limited to the tropics and subtropics between 38 deg N and 38 deg S, so lightning at the higher latitudes of the northern and southern hemispheres is not observed. The detection efficiency of NLDN sensors exceeds 95%, but the sensors are only located in the USA. Even if data from other ground-based lightning sensors (World Wide Lightning Location Network, the European Cooperation for Lightning Detection, and Canadian Lightning Detection Network) were combined with TRMM and NLDN, there would be enormous spatial gaps in present-day coverage of lightning. In addition, a globally-complete time history of observed lightning activity is currently not available either, with network coverage and detection efficiencies varying through the years. Previous research using the TRMM LIS and Microwave Imager (TMI) showed that there is a statistically significant correlation between lightning flash rates and passive microwave brightness temperatures. The physical basis for this correlation emerges because lightning in a thunderstorm occurs where ice is first present in the cloud and electric charge separation occurs. These ice particles efficiently scatter the microwave radiation at the 85 and 37 GHz frequencies, thus leading to large brightness temperature depressions. Lightning flash rate is related to the total amount of ice passing through the convective updraft regions of thunderstorms. Confirmation of this relationship using TRMM LIS and TMI data, however, remains constrained to TRMM observational limits of the tropics and subtropics. Satellites from the Defense Meteorological Satellite Program (DMSP) have global coverage and are equipped with passive microwave imagers that, like TMI, observe brightness temperatures at 85 and 37 GHz. Unlike the TRMM satellite, however, DMSP satellites do not have a lightning sensor, and the DMSP microwave data has never been used to derive global lightning. In this presentation, a relationship between DMSP Special Sensor Microwave Imager (SSMI) data and ground-based cloud-to-ground (CG) lightning data from NLDN is investigated to derive a spatially complete time history of CG lightning for the USA study area. This relationship is analogous to that established using TRMM LIS and TMI data. NLDN has the most spatially and temporally complete CG lightning data for the USA, and therefore provides the best opportunity to find geospatially coincident observations with SSMI sensors. The strongest thunderstorms generally have minimum 85 GHz Polarized Corrected brightness Temperatures (PCT) less than 150 K.
Archived radar data was used to resolve the spatial extent of the individual storms. NLDN data for that storm spatial extent defined by radar data was used to calculate the CG flash rate for the storm. Similar to results using TRMM sensors, a linear model best explained the relationship between storm-specific CG flash rates and minimum 85 GHz PCT. However, the results in this study apply only to CG lightning. To extend the results to weaker storms, the probability of CG lightning (instead of the flash rate) was calculated for storms having 85 GHz PCT greater than 150 K. NLDN data was used to determine if a CG strike occurred for a storm. This probability of CG lightning was plotted as a function of minimum 85 GHz PCT and minimum 37 GHz PCT. These probabilities were used in conjunction with the linear model to estimate the CG flash rate for weaker storms with minimum 85 GHz PCTs greater than 150 K. Results from the investigation of CG lightning and passive microwave radiation signals agree with the previous research investigating total lightning and brightness temperature. Future work will take the established relationships and apply them to the decades of available DMSP data for the USA to derive a map of CG lightning flash rates. Validation of this method and uncertainty analysis will be done by comparing the derived maps of CG lightning flash rates against existing NLDN maps of CG lightning flash rates.
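A hedged sketch of the two-regime estimate described above: a linear flash-rate model for cold storms (minimum 85 GHz PCT below 150 K) and a probability-weighted estimate for warmer storms. All coefficients and the probability curve here are hypothetical placeholders for illustration; the study's fitted values are not reproduced.

```python
# Hypothetical coefficients for illustration only (not the fitted values from the study).
A, B = 60.0, -0.35          # linear model: flashes/min ~ A + B * min_PCT85 (K)
PCT_SPLIT = 150.0           # threshold separating strong and weak storms

def prob_cg(min_pct85):
    """Hypothetical probability-of-CG curve for warmer storms (decreases with PCT)."""
    return max(0.0, min(1.0, (250.0 - min_pct85) / 100.0))

def cg_flash_rate(min_pct85):
    """Piecewise estimate of CG flash rate from the minimum 85 GHz PCT of a storm."""
    if min_pct85 < PCT_SPLIT:
        return max(0.0, A + B * min_pct85)                     # strong storms: linear model
    return prob_cg(min_pct85) * max(0.0, A + B * PCT_SPLIT)    # weak storms: probability-weighted
```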
NASA Astrophysics Data System (ADS)
Gao, Xiangdong; Chen, Yuquan; You, Deyong; Xiao, Zhenlin; Chen, Xiaohui
2017-02-01
An approach for seam tracking of micro-gap welds, whose width is less than 0.1 mm, based on the magneto-optical (MO) imaging technique during butt-joint laser welding of steel plates is investigated. Kalman filtering (KF) technology with a radial basis function (RBF) neural network for weld detection by an MO sensor was applied to track the weld center position. Because the laser welding process noises and the MO sensor measurement noises were colored, the estimation accuracy of the traditional KF for seam tracking was degraded: the system model has extreme nonlinearities and cannot be captured by a linear state-space model. Moreover, the statistical characteristics of the noises could not be accurately obtained in actual welding. Thus, an RBF neural network was combined with the KF technique to compensate for the weld tracking errors. The neural network can restrain filter divergence and improve system robustness. Compared with the traditional KF algorithm, the RBF-augmented KF not only improved the weld tracking accuracy more effectively but also reduced noise disturbance. Experimental results showed that the magneto-optical imaging technique can detect micro-gap welds accurately, which provides a novel approach for micro-gap seam tracking.
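To make the filtering step concrete, here is a minimal one-dimensional constant-velocity Kalman filter tracking the weld-center position, with a hook where a learned (e.g., RBF-network) residual correction could be added. The noise covariances and the correction placeholder are hypothetical stand-ins, not the authors' tuned values or trained network.

```python
import numpy as np

def track_weld_center(measurements, dt=1.0, q=1e-3, r=1e-2, correction=lambda innov: 0.0):
    """Constant-velocity Kalman filter over MO-sensor weld-center measurements.
    'correction' is a placeholder for a learned (e.g., RBF) compensation of the innovation."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                 # only position is measured
    Q = q * np.eye(2)                          # process noise covariance (assumed)
    R = np.array([[r]])                        # measurement noise covariance (assumed)
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    out = []
    for z in measurements:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        innov = z - H @ x                      # innovation (tracking error)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (innov + correction(innov))).ravel()   # update with optional correction
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```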
NASA Astrophysics Data System (ADS)
Hayami, Hajime; Takehara, Hiroaki; Nagata, Kengo; Haruta, Makito; Noda, Toshihiko; Sasagawa, Kiyotaka; Tokuda, Takashi; Ohta, Jun
2016-04-01
Intra-body communication technology allows the fabrication of more compact implantable biomedical sensors than RF wireless technology. In this paper, we report the fabrication of an implantable image sensor of 625 µm width and 830 µm length and the demonstration of wireless image-data transmission through the brain tissue of a living mouse. The sensor was designed to transmit output signals of pixel values by pulse width modulation (PWM). The PWM signals from the sensor, transmitted through the brain tissue, were detected by a receiver electrode. Wireless data transmission of a two-dimensional image was successfully demonstrated in a living mouse brain. The technique reported here is expected to provide useful methods of data transmission for micro-sized implantable biomedical sensors.
Laser designator protection filter for see-spot thermal imaging systems
NASA Astrophysics Data System (ADS)
Donval, Ariela; Fisher, Tali; Lipman, Ofir; Oron, Moshe
2012-06-01
In some cases the FLIR has an open window in the 1.06 micrometer wavelength range; this capability is called 'see spot' and allows seeing a laser designator spot using the FLIR. A problem arises when the returned laser energy is too high for the camera sensitivity and can therefore damage the sensor. We propose a non-linear, solid-state dynamic filter solution that passively protects against damage. Our filter blocks transmission only if the power exceeds a certain threshold, as opposed to spectral filters, which block a certain wavelength permanently. In this paper we introduce the Wideband Laser Protection Filter (WPF) solution for thermal imaging systems possessing the ability to see the laser spot.
Characterizing the scientific potential of satellite sensors. [San Francisco, California
NASA Technical Reports Server (NTRS)
1984-01-01
Eleven thematic mapper (TM) radiometric calibration programs were tested and evaluated in support of the task to characterize the potential of LANDSAT TM digital imagery for scientific investigations in the Earth sciences and terrestrial physics. Three software errors related to integer overflow, division by zero, and a nonexistent file group were found and solved. Raw, calibrated, and corrected image groups that were created and stored on the Barker2 disk are enumerated. Black and white pixel print files were created for various subscenes of a San Francisco scene (ID 40392-18152). The development of linear regression software is discussed. The output of the software and its function are described. Future work in TM radiometric calibration, image processing, and software development is outlined.
Guerreiro, Gabriela V; Zaitouna, Anita J; Lai, Rebecca Y
2014-01-31
Here we report the characterization of an electrochemical mercury (Hg(2+)) sensor constructed with a methylene blue (MB)-modified and thymine-containing linear DNA probe. Similar to the linear probe electrochemical DNA sensor, the resultant sensor behaved as a "signal-off" sensor in alternating current voltammetry and cyclic voltammetry. However, depending on the applied frequency or pulse width, the sensor can behave as either a "signal-off" or "signal-on" sensor in square wave voltammetry (SWV) and differential pulse voltammetry (DPV). In SWV, the sensor showed "signal-on" behavior at low frequencies and "signal-off" behavior at high frequencies. In DPV, the sensor showed "signal-off" behavior at short pulse widths and "signal-on" behavior at long pulse widths. Independent of the sensor interrogation technique, the limit of detection was found to be 10 nM, with a linear dynamic range between 10 nM and 500 nM. In addition, the sensor responded to Hg(2+) rather rapidly; the majority of the signal change occurred in <20 min. Overall, the sensor retains all the characteristics of this class of sensors; it is reagentless, reusable, sensitive, specific and selective. This study also highlights the feasibility of using an MB-modified probe for real-time sensing of Hg(2+), which has not been previously reported. More importantly, the observed "switching" behavior in SWV and DPV is potentially generalizable and should be applicable to most sensors in this class of dynamics-based electrochemical biosensors. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Azzam, R. M. A.; Howlader, M. M. K.; Georgiou, T. Y.
1995-08-01
A transparent or absorbing substrate can be coated with a transparent thin film to produce a linear reflectance-versus-angle-of-incidence response over a certain range of angles. Linearization at and near normal incidence is a special case that leads to a maximally flat response for p-polarized, s-polarized, or unpolarized light. For midrange and high-range linearization with moderate and high slopes, respectively, the best results are obtained when the incident light is s-polarized. Application to a Si substrate that is coated with a SiO2 film leads to novel passive and active reflection rotation sensors. Experimental results and an error analysis of this rotation sensor are presented.
Highly Sensitive and Stretchable Strain Sensor Based on Ag@CNTs.
Zhang, Qiang; Liu, Lihua; Zhao, Dong; Duan, Qianqian; Ji, Jianlong; Jian, Aoqun; Zhang, Wendong; Sang, Shengbo
2017-12-04
Due to the rapid development and superb performance of electronic skin, we propose a highly sensitive and stretchable temperature and strain sensor. Silver-nanoparticle-coated carbon nanotube (Ag@CNT) nanomaterials with different Ag concentrations were synthesized. After the morphology and components of the nanomaterials were characterized, sensors composed of polydimethylsiloxane (PDMS) and CNTs or Ag@CNTs were prepared via a simple template method. Then, the electronic properties and piezoresistive effects of the sensors were tested. The characterization results show excellent performance: the sensor with Ag@CNTs1 exhibited the highest gauge factor (GF), 137.6, in the linear region between 0 and 17.3% strain, while the sensor with Ag@CNTs2 exhibited nearly perfect linearity over strains of 0-54.8% with a GF of 14.9.
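The gauge factor quoted above is the slope of the relative resistance change versus strain in the linear region; a minimal sketch of extracting it from measured data is shown below. The resistance values are synthetic, generated only to reproduce a slope near the reported 137.6.

```python
import numpy as np

strain = np.linspace(0.0, 0.173, 20)                     # strain in the linear region (0-17.3%)
r0 = 100.0                                               # unstrained resistance, ohms (hypothetical)
resistance = r0 * (1.0 + 137.6 * strain)                 # synthetic response with GF ~ 137.6

dr_over_r0 = (resistance - r0) / r0
gauge_factor = np.polyfit(strain, dr_over_r0, 1)[0]      # GF = slope of (dR/R0) vs strain
print(f"gauge factor ~ {gauge_factor:.1f}")
```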
Proceedings of the Augmented VIsual Display (AVID) Research Workshop
NASA Technical Reports Server (NTRS)
Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)
1993-01-01
The papers, abstracts, and presentations were presented at a three day workshop focused on sensor modeling and simulation, and image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.
Integrated Spectral Low Noise Image Sensor with Nanowire Polarization Filters for Low Contrast Imaging
Gruev, Viktor
2015-11-05
AFRL report AFRL-AFOSR-VA-TR-2015-0359, covering the period 02/15/2011 to 08/15/2015. The report investigates alternative spectral imaging architectures and the development of nanowire polarization filters for low-contrast imaging.
NASA Astrophysics Data System (ADS)
El-Saba, A. M.; Alam, M. S.; Surpanani, A.
2006-05-01
Important aspects of automatic pattern recognition systems are their ability to efficiently discriminate and detect proper targets with low false alarm rates. In this paper we extend the application of passive imaging polarimetry to effectively discriminate and detect different-colored targets of identical shape using a color-blind imaging sensor. For this case study we demonstrate that traditional color-blind, polarization-insensitive imaging sensors that rely only on the spatial distribution of targets suffer from high false detection rates, especially in scenarios where multiple identically shaped targets are present. On the other hand, we show that color-blind, polarization-sensitive imaging sensors can successfully and efficiently discriminate and detect true targets based on their color alone. We highlight the main advantages of using our proposed polarization-encoded imaging sensor.
NASA Astrophysics Data System (ADS)
Takada, Shunji; Ihama, Mikio; Inuiya, Masafumi
2006-02-01
Digital still cameras overtook film cameras in the Japanese market in 2000 in terms of sales volume owing to their versatile functions. However, the image-capturing capabilities, such as sensitivity and latitude, of color films are still superior to those of digital image sensors. In this paper, we attribute the high performance of color films to their multi-layered structure, and propose solid-state image sensors with stacked organic photoconductive layers having narrow absorption bands on CMOS read-out circuits.
NASA Technical Reports Server (NTRS)
Mcgwire, K.; Friedl, M.; Estes, J. E.
1993-01-01
This article describes research related to sampling techniques for establishing linear relations between land surface parameters and remotely-sensed data. Predictive relations are estimated between percentage tree cover in a savanna environment and a normalized difference vegetation index (NDVI) derived from the Thematic Mapper sensor. Spatial autocorrelation in original measurements and regression residuals is examined using semi-variogram analysis at several spatial resolutions. Sampling schemes are then tested to examine the effects of autocorrelation on predictive linear models in cases of small sample sizes. Regression models between image and ground data are affected by the spatial resolution of analysis. Reducing the influence of spatial autocorrelation by enforcing minimum distances between samples may also improve empirical models which relate ground parameters to satellite data.
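Two small operations referenced above, sketched under stated assumptions: computing NDVI from red and near-infrared reflectance, and greedily thinning sample plots so that no two retained samples are closer than a minimum separation distance (one simple way of reducing the influence of spatial autocorrelation on the regression). The minimum distance and coordinate units are hypothetical.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from red and near-infrared reflectance."""
    return (nir - red) / (nir + red + 1e-12)

def thin_samples(coords, min_dist):
    """Greedily keep samples so that all retained points are at least min_dist apart.
    coords: (n, 2) array of sample locations in map units."""
    kept = []
    for i, p in enumerate(coords):
        if all(np.linalg.norm(p - coords[j]) >= min_dist for j in kept):
            kept.append(i)
    return np.array(kept)
```

In practice the minimum distance could be chosen from the semi-variogram range, so that retained samples are approximately uncorrelated.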
Ultrasensitive surveillance of sensors and processes
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
2001-01-01
A method and apparatus for monitoring a source of data for determining an operating state of a working system. The method includes determining a sensor (or source of data) arrangement associated with monitoring the source of data for a system; activating a first method for performing a sequential probability ratio test if the data source includes a single data (sensor) source; activating a second method for performing a regression sequential probability ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals which are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors; and utilizing at least one of the first, second and third methods to accumulate sensor signals and determine the operating state of the system.
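For the single-sensor case, the sequential probability ratio test referred to in the abstract can be illustrated with Wald's classical Gaussian mean-shift form. This is a generic textbook sketch, not the patent's method; the fault magnitude, error rates and noise level are hypothetical choices.

```python
import numpy as np

def sprt_mean_shift(samples, sigma, shift, alpha=0.01, beta=0.01):
    """Wald SPRT: H0 mean = 0 vs H1 mean = shift, for i.i.d. Gaussian noise of std sigma.
    Returns 'H1' (fault), 'H0' (normal), or 'continue' if neither threshold is crossed."""
    upper = np.log((1.0 - beta) / alpha)      # accept H1 above this
    lower = np.log(beta / (1.0 - alpha))      # accept H0 below this
    llr = 0.0
    for x in samples:
        llr += (shift / sigma**2) * (x - shift / 2.0)   # log-likelihood ratio increment
        if llr >= upper:
            return "H1"
        if llr <= lower:
            return "H0"
    return "continue"
```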
Ultrasensitive surveillance of sensors and processes
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
1999-01-01
A method and apparatus for monitoring a source of data for determining an operating state of a working system. The method includes determining a sensor (or source of data) arrangement associated with monitoring the source of data for a system; activating a first method for performing a sequential probability ratio test if the data source includes a single data (sensor) source; activating a second method for performing a regression sequential probability ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals which are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors; and utilizing at least one of the first, second and third methods to accumulate sensor signals and determine the operating state of the system.
Design and Laboratory Testing of a Prototype Linear Temperature Sensor
1982-07-01
Critical quantities such as the line sensor's voltage and vertical position, and occasionally a point sensor, were also monitored in real time on a computer. (Report contents include 5.1 Linearity - Comparison With Theory; 5.2 Response Time.) ... from some initial time t0 is more relevant to the measurement of internal waves, since the second term in the above equation is usually small.
Blur spot limitations in distal endoscope sensors
NASA Astrophysics Data System (ADS)
Yaron, Avi; Shechterman, Mark; Horesh, Nadav
2006-02-01
In years past, the picture quality of electronic video systems was limited by the image sensor. At present, the resolution of miniature image sensors, as used in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single-chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates for the blur phenomenon, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated with an array of microlenses (a plenoptic camera). The camera has the advantage of being compact, providing simultaneous acquisition of left and right images, and offering resolution comparable to a dual-chip stereoscopic camera with low- to medium-resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and contributes to surgeon performance. An additional advantage of single-chip stereo sensors is improved tolerance to electronic signal noise.
Multi-objects recognition for distributed intelligent sensor networks
NASA Astrophysics Data System (ADS)
He, Haibo; Chen, Sheng; Cao, Yuan; Desai, Sachi; Hohil, Myron E.
2008-04-01
This paper proposes an innovative approach to multi-object recognition in homeland security and defense intelligent sensor networks. Unlike conventional information analysis, data mining in such networks is typically characterized by high information ambiguity/uncertainty, data redundancy, high dimensionality and real-time constraints. Furthermore, since a typical military network normally includes multiple mobile sensor platforms, ground forces, fortified tanks, combat aircraft, and other resources, it is critical to develop intelligent data mining approaches that fuse different information sources to understand dynamic environments, support decision-making processes, and ultimately achieve the mission goals. This paper aims to address these issues with a focus on multi-object recognition. Instead of classifying a single object as in traditional image classification problems, the proposed method can automatically learn multiple objects simultaneously. Image segmentation techniques are used to identify the interesting regions in the field, which correspond to multiple objects such as soldiers or tanks. Since different objects come with different feature sizes, we propose a feature scaling method to represent each object with the same number of dimensions. This is achieved by linear/nonlinear scaling and sampling techniques. Finally, support vector machine (SVM) based learning algorithms are developed to learn and build the associations for different objects, and this knowledge is adaptively accumulated for object recognition in the testing stage. We test the effectiveness of the proposed method in different simulated military environments.
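The abstract does not spell out the scaling step, so the sketch below shows one plausible reading: variable-length region feature vectors are linearly resampled to a fixed dimension and fed to a scikit-learn SVM. All data, class labels, and the target dimension are synthetic assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def rescale_features(vec, target_dim=32):
    """Linearly resample a variable-length feature vector to a fixed dimension."""
    vec = np.asarray(vec, dtype=float)
    src = np.linspace(0.0, 1.0, num=len(vec))
    dst = np.linspace(0.0, 1.0, num=target_dim)
    return np.interp(dst, src, vec)

# Synthetic stand-ins for per-region feature vectors of differing lengths
rng = np.random.default_rng(1)
regions = [rng.normal(loc=c, scale=0.3, size=rng.integers(20, 80))
           for c in (0, 1, 2) for _ in range(20)]
labels = np.repeat([0, 1, 2], 20)   # e.g., background / soldier / tank

X = np.vstack([rescale_features(r) for r in regions])
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```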
Image Sensors Enhance Camera Technologies
NASA Technical Reports Server (NTRS)
2010-01-01
In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.
A design of driving circuit for star sensor imaging camera
NASA Astrophysics Data System (ADS)
Li, Da-wei; Yang, Xiao-xu; Han, Jun-feng; Liu, Zhao-hui
2016-01-01
The star sensor is a high-precision attitude measurement instrument that determines spacecraft attitude by detecting star positions on the celestial sphere. The imaging camera is an important part of a star sensor. The purpose of this study is to design a driving circuit based on a Kodak CCD sensor. The design of the driving circuit based on the Kodak KAI-04022 is discussed, and the timing of this CCD sensor is analyzed. Laboratory tests of the driving circuit and imaging experiments show that the driving circuit meets the requirements of the Kodak CCD sensor.
Multispectral image-fused head-tracked vision system (HTVS) for driving applications
NASA Astrophysics Data System (ADS)
Reese, Colin E.; Bender, Edward J.
2001-08-01
Current military thermal driver vision systems consist of a single Long Wave Infrared (LWIR) sensor mounted on a manually operated gimbal, which is normally locked forward during driving. The sensor video imagery is presented on a large area flat panel display for direct view. The Night Vision and Electronic Sensors Directorate and Kaiser Electronics are cooperatively working to develop a driver's Head Tracked Vision System (HTVS) which directs dual waveband sensors in a more natural head-slewed imaging mode. The HTVS consists of LWIR and image intensified sensors, a high-speed gimbal, a head mounted display, and a head tracker. The first prototype systems have been delivered and have undergone preliminary field trials to characterize the operational benefits of a head tracked sensor system for tactical military ground applications. This investigation will address the advantages of head tracked vs. fixed sensor systems regarding peripheral sightings of threats, road hazards, and nearby vehicles. An additional thrust will investigate the degree to which additive (A+B) fusion of LWIR and image intensified sensors enhances overall driving performance. Typically, LWIR sensors are better for detecting threats, while image intensified sensors provide more natural scene cues, such as shadows and texture. This investigation will examine the degree to which the fusion of these two sensors enhances the driver's overall situational awareness.
ESA DUE GlobTemperature project: Infrared-based LST Product
NASA Astrophysics Data System (ADS)
Ermida, Sofia; Pires, Ana; Ghent, Darren; Trigo, Isabel; DaCamara, Carlos; Remedios, John
2016-04-01
One of the purposes of the GlobTemperature project is to provide a product of global Land Surface Temperature (LST) based on Geostationary Earth Orbit (GEO) and Low Earth polar Orbit (LEO) satellite data. The objective is to use existing LST products, which are obtained from different sensors/platforms, and combine them into a harmonized product for a reference view angle. In a first approach, only infrared-based retrievals are considered, and LEO LSTs will be used as a common denominator among geostationary sensors. LST data are provided by a wide range of sensors to optimize spatial coverage, namely: (i) 2 LEO sensors - the Advanced Along Track Scanning Radiometer (AATSR) series of instruments on-board ESA's Envisat, and the Moderate Resolution Imaging Spectroradiometer (MODIS) on-board NASA's TERRA and AQUA; and (ii) 3 GEO sensors - the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on-board EUMETSAT's Meteosat Second Generation (MSG), the Japanese Meteorological Imager (JAMI) on-board the Japanese Meteorological Association (JMA) Multifunction Transport SATellite (MTSAT-2), and NASA's Geostationary Operational Environmental Satellites (GOES). The merged LST product is generated in two steps: 1) calibration between each LEO and each GEO, which consists of removing systematic differences (associated with sensor type and LST algorithms, including calibration, atmospheric and surface emissivity corrections, amongst others) represented by linear regressions; 2) angular correction, which consists of bringing all LST data to a reference (nadir) view. Angular effects on LST are estimated by means of a kernel model of the surface thermal emission, which describes the angular dependence of LST as a function of viewing and illumination geometry. The model is adjusted to MODIS and SEVIRI/MSG LST estimates and validated against LST retrievals from those sensors obtained for other years (not used in the calibration). It is shown that the model leads to a reduction of LST differences between the two sensors, indicating that it may be used to effectively estimate and correct the angular dependence of LST. A global set of kernel model parameters is finally obtained by adjusting the model to either a GEO and a LEO or the two LEOs (poles). A first version of the merged product will be released in 2016, available for download through the GlobTemperature portal. This version includes only the calibration process (step 1), incorporating LST data from SEVIRI, GOES, MTSAT and MODIS, with information on directional effects added as an extra layer. A second version of the dataset, with better incorporation of the angular correction, is currently in preparation.
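A minimal sketch of the step-1 style calibration described above, assuming hypothetical collocated GEO and LEO LST samples: a linear regression is fitted between the two and used to map the GEO product onto the LEO reference. The sensor names in the comments are placeholders, not the project's actual match-up data.

```python
import numpy as np

# Hypothetical collocated, near-simultaneous LST samples (kelvin)
lst_leo = np.array([285.1, 290.4, 295.2, 300.8, 305.6, 310.3])   # e.g., a MODIS-like LEO product
lst_geo = np.array([286.9, 292.5, 297.0, 303.1, 308.4, 313.5])   # e.g., a SEVIRI-like GEO product

# Linear regression capturing systematic GEO-vs-LEO differences
slope, intercept = np.polyfit(lst_geo, lst_leo, deg=1)

def calibrate_geo_to_leo(lst):
    """Map GEO LST onto the LEO reference using the fitted linear relation."""
    return slope * lst + intercept

print(f"LST_leo ≈ {slope:.3f} * LST_geo + {intercept:.2f}")
print("calibrated:", np.round(calibrate_geo_to_leo(lst_geo), 2))
```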
Leica ADS40 Sensor for Coastal Multispectral Imaging
NASA Technical Reports Server (NTRS)
Craig, John C.
2007-01-01
The Leica ADS40 Sensor as it is used for coastal multispectral imaging is presented. The contents include: 1) Project Area Overview; 2) Leica ADS40 Sensor; 3) Focal Plate Arrangements; 4) Trichroid Filter; 5) Gradient Correction; 6) Image Acquisition; 7) Remote Sensing and ADS40; 8) Band comparisons of Satellite and Airborne Sensors; 9) Impervious Surface Extraction; and 10) Impervious Surface Details.
Establishing imaging sensor specifications for digital still cameras
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2007-02-01
Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensor. There is a strong tendency for consumers to consider only the number of mega-pixels in a camera and not the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude from the physical nature of the imaging optics and the sensor characteristics (including pixel size, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full-well capacity in terms of electrons per square centimeter). Examples will be given for consumer, pro-consumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.
Wave analysis of a plenoptic system and its applications
NASA Astrophysics Data System (ADS)
Shroff, Sapna A.; Berkner, Kathrin
2013-03-01
Traditional imaging systems directly image a 2D object plane on to the sensor. Plenoptic imaging systems contain a lenslet array at the conventional image plane and a sensor at the back focal plane of the lenslet array. In this configuration the data captured at the sensor is not a direct image of the object. Each lenslet effectively images the aperture of the main imaging lens at the sensor. Therefore the sensor data retains angular light-field information which can be used for a posteriori digital computation of multi-angle images and axially refocused images. If a filter array, containing spectral filters or neutral density or polarization filters, is placed at the pupil aperture of the main imaging lens, then each lenslet images the filters on to the sensor. This enables the digital separation of multiple filter modalities giving single snapshot, multi-modal images. Due to the diversity of potential applications of plenoptic systems, their investigation is increasing. As the application space moves towards microscopes and other complex systems, and as pixel sizes become smaller, the consideration of diffraction effects in these systems becomes increasingly important. We discuss a plenoptic system and its wave propagation analysis for both coherent and incoherent imaging. We simulate a system response using our analysis and discuss various applications of the system response pertaining to plenoptic system design, implementation and calibration.
Detection of Obstacles in Monocular Image Sequences
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia
1997-01-01
The ability to detect and locate runways/taxiways and obstacles in images captured using on-board sensors is an essential first step in the automation of low-altitude flight, landing, takeoff, and taxiing phase of aircraft navigation. Automation of these functions under different weather and lighting situations, can be facilitated by using sensors of different modalities. An aircraft-based Synthetic Vision System (SVS), with sensors of different modalities mounted on-board, complements the current ground-based systems in functions such as detection and prevention of potential runway collisions, airport surface navigation, and landing and takeoff in all weather conditions. In this report, we address the problem of detection of objects in monocular image sequences obtained from two types of sensors, a Passive Millimeter Wave (PMMW) sensor and a video camera mounted on-board a landing aircraft. Since the sensors differ in their spatial resolution, and the quality of the images obtained using these sensors is not the same, different approaches are used for detecting obstacles depending on the sensor type. These approaches are described separately in two parts of this report. The goal of the first part of the report is to develop a method for detecting runways/taxiways and objects on the runway in a sequence of images obtained from a moving PMMW sensor. Since the sensor resolution is low and the image quality is very poor, we propose a model-based approach for detecting runways/taxiways. We use the approximate runway model and the position information of the camera provided by the Global Positioning System (GPS) to define regions of interest in the image plane to search for the image features corresponding to the runway markers. Once the runway region is identified, we use histogram-based thresholding to detect obstacles on the runway and regions outside the runway. This algorithm is tested using image sequences simulated from a single real PMMW image.
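The histogram-based thresholding step is not detailed above; as one plausible stand-in, the sketch below applies Otsu's classic between-class-variance criterion to a synthetic runway-like intensity image to separate bright obstacle pixels from the background.

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Return the intensity threshold maximizing between-class variance (Otsu)."""
    hist, edges = np.histogram(image, bins=n_bins)
    prob = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])

    w0 = np.cumsum(prob)                 # class-0 probability
    mu = np.cumsum(prob * centers)       # cumulative mean
    mu_total = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_total * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

# Synthetic "runway" image: dark background with a brighter obstacle patch
rng = np.random.default_rng(2)
img = rng.normal(60, 8, size=(64, 64))
img[25:35, 30:40] += 80.0

t = otsu_threshold(img)
obstacle_mask = img > t
print("threshold:", round(float(t), 1), "obstacle pixels:", int(obstacle_mask.sum()))
```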
Wireless acceleration sensor of moving elements for condition monitoring of mechanisms
NASA Astrophysics Data System (ADS)
Sinitsin, Vladimir V.; Shestakov, Aleksandr L.
2017-09-01
Comprehensive analysis of the angular and linear accelerations of moving elements (shafts, gears) allows an increase in the quality of the condition monitoring of mechanisms. However, existing tools and methods measure either linear or angular acceleration with postprocessing. This paper suggests a new construction design of an angular acceleration sensor for moving elements. The sensor is mounted on a moving element and, among other things, the data transfer and electric power supply are carried out wirelessly. In addition, the authors introduce a method for processing the received information which makes it possible to divide the measured acceleration into the angular and linear components. The design has been validated by the results of laboratory tests of an experimental model of the sensor. The study has shown that this method provides a definite separation of the measured acceleration into linear and angular components, even in noise. This research contributes an advance in the range of methods and tools for condition monitoring of mechanisms.
Micijevic, Esad; Morfitt, Ron
2010-01-01
Systematic characterization and calibration of the Landsat sensors and the assessment of image data quality are performed using the Image Assessment System (IAS). The IAS was first introduced as an element of the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) ground segment and recently extended to the Landsat 4 (L4) and 5 (L5) Thematic Mappers (TM) and the Multispectral Scanner (MSS) sensors on-board the Landsat 1-5 satellites. In preparation for the Landsat Data Continuity Mission (LDCM), the IAS was developed for the Earth Observing-1 (EO-1) Advanced Land Imager (ALI) with a capability to assess pushbroom sensors. This paper describes the LDCM version of the IAS and how it relates to unique calibration and validation attributes of its on-board imaging sensors. The LDCM IAS system will have to handle a significantly larger number of detectors and the associated database than the previous IAS versions. An additional challenge is that the LDCM IAS must handle data from two sensors, as the LDCM products will combine the Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) spectral bands.
A time-resolved image sensor for tubeless streak cameras
NASA Astrophysics Data System (ADS)
Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji
2014-03-01
This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, it requires high voltage and a bulky system owing to its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows us to create and provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel is designed and implemented using 0.11 μm CMOS image sensor technology. The image array has 30 (vertical) × 128 (memory length) pixels with a pixel pitch of 22.4 μm.
A Low-Power Wireless Image Sensor Node with Noise-Robust Moving Object Detection and a Region-of-Interest Based Rate Controller
Ko, Jong Hwan
2017-03-01
This paper presents a low-power wireless image sensor node with noise-robust moving object detection and a region-of-interest based rate controller.
Chen, Chia-Wei; Chow, Chi-Wai; Liu, Yang; Yeh, Chien-Hung
2017-10-02
Recently, even low-end mobile phones have been equipped with a high-resolution complementary-metal-oxide-semiconductor (CMOS) image sensor. This motivates using a CMOS image sensor for visible light communication (VLC). Here we propose and demonstrate an efficient demodulation scheme to synchronize and demodulate the rolling shutter pattern in image sensor based VLC. The implementation algorithm is discussed. The bit-error-rate (BER) performance and processing latency are evaluated and compared with other thresholding schemes.
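The paper's own synchronization and demodulation algorithm is only summarized here; the sketch below illustrates the underlying rolling-shutter principle with a simple baseline: average each sensor row into a stripe profile, threshold it, and majority-vote the rows belonging to each transmitted bit. The frame, stripe width, and bit pattern are synthetic assumptions.

```python
import numpy as np

def demodulate_rolling_shutter(frame, rows_per_bit):
    """Recover OOK bits from the bright/dark stripes of a rolling-shutter frame."""
    row_profile = frame.mean(axis=1)                  # one value per sensor row
    threshold = row_profile.mean()                    # simple global threshold
    binary_rows = (row_profile > threshold).astype(int)
    # Majority vote over the rows belonging to each transmitted bit
    n_bits = len(binary_rows) // rows_per_bit
    return [int(binary_rows[i * rows_per_bit:(i + 1) * rows_per_bit].mean() > 0.5)
            for i in range(n_bits)]

# Synthetic frame: 120 rows x 160 columns, 10 rows per bit, known bit pattern
true_bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
rows = np.repeat(true_bits, 10).astype(float) * 100 + 50     # bright = 150, dark = 50
frame = np.tile(rows[:, None], (1, 160)) + np.random.default_rng(3).normal(0, 5, (120, 160))

print("decoded:", demodulate_rolling_shutter(frame, rows_per_bit=10))
```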
Prasad, Dilip K; Rajan, Deepu; Rachmawati, Lily; Rajabally, Eshan; Quek, Chai
2016-12-01
This paper addresses the problem of horizon detection, a fundamental step in numerous object detection algorithms, in a maritime environment. The maritime environment is characterized by the absence of fixed features, the presence of numerous linear features amid dynamically changing objects and background, and constantly varying illumination, rendering the typically simple problem of detecting the horizon a challenging one. We present a novel method called multi-scale consistence of weighted edge Radon transform, abbreviated MuSCoWERT. It detects the long linear features consistent over multiple scales using multi-scale median filtering of the image, followed by a Radon transform on a weighted edge map and computation of the histogram of the detected linear features. We show that MuSCoWERT has excellent performance, better than seven other contemporary methods, on 84 challenging maritime videos containing over 33,000 frames, captured using visible-range and near-infrared sensors mounted onboard, onshore, or on floating buoys. It has a median error of about 2 pixels (less than 0.2%) from the center of the actual horizon and a median angular error of less than 0.4 deg. We are also sharing a new challenging horizon detection dataset of 65 videos from visible and infrared cameras with onshore and onboard ship camera placements.
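MuSCoWERT itself is multi-scale and uses a weighted edge map; the single-scale sketch below only illustrates the core Radon-transform idea on a synthetic sea/sky image: median filter, Sobel edge magnitude, Radon transform, and selection of the dominant line angle. Parameters and the toy image are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import median_filter, sobel
from skimage.transform import radon

def dominant_line_angle(image):
    """Return the projection angle (degrees) of the strongest linear feature."""
    smoothed = median_filter(image, size=5)           # suppress sea clutter
    edges = np.hypot(sobel(smoothed, axis=0), sobel(smoothed, axis=1))
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = radon(edges, theta=theta, circle=False)
    # The horizon shows up as the peak of the Radon transform of the edge map
    _, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
    return theta[angle_idx]

# Synthetic scene: darker "sky" above a brighter "sea", split by a horizontal horizon
img = np.zeros((128, 128))
img[64:, :] = 1.0
img += np.random.default_rng(4).normal(0, 0.05, img.shape)

print("estimated horizon angle (deg):", dominant_line_angle(img))
```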
NASA Astrophysics Data System (ADS)
Burk, Laurel M.; Lee, Yueh Z.; Wait, J. Matthew; Lu, Jianping; Zhou, Otto Z.
2012-09-01
A cone beam micro-CT has previously been utilized along with a pressure-tracking respiration sensor to acquire prospectively gated images of both wild-type mice and various adult murine disease models. While the pressure applied to the abdomen of the subject by this sensor is small and is generally without physiological effect, certain disease models of interest, as well as very young animals, are prone to atelectasis with added pressure, or they generate too weak a respiration signal with this method to achieve optimal prospective gating. In this work we present a new fibre-optic displacement sensor which monitors respiratory motion of a subject without requiring physical contact. The sensor outputs an analogue signal which can be used for prospective respiration gating in micro-CT imaging. The device was characterized and compared against a pneumatic air chamber pressure sensor for the imaging of adult wild-type mice. The resulting images were found to be of similar quality with respect to physiological motion blur; the quality of the respiration signal trace obtained using the non-contact sensor was comparable to that of the pressure sensor and was superior for gating purposes due to its better signal-to-noise ratio. The non-contact sensor was then used to acquire in-vivo micro-CT images of a murine model for congenital diaphragmatic hernia and of 11-day-old mouse pups. In both cases, quality CT images were successfully acquired using this new respiration sensor. Despite the presence of beam hardening artefacts arising from the presence of a fibre-optic cable in the imaging field, we believe this new technique for respiration monitoring and gating presents an opportunity for in-vivo imaging of disease models which were previously considered too delicate for established animal handling methods.
Magnetoelectric Current Sensors
Bichurin, Mirza; Petrov, Roman; Leontiev, Viktor; Semenov, Gennadiy; Sokolov, Oleg
2017-01-01
In this work, a magnetoelectric (ME) current sensor design based on the magnetoelectric effect is presented and discussed. Both resonant and non-resonant types of ME current sensors are considered. Theoretical calculations of the ME current sensors were conducted using the equivalent circuit method. The application of sensors exploiting new effects, such as the ME effect, is made possible by the development of new ME composites. The large number of studies conducted in the field of new composites has allowed us to obtain high magnetostrictive-piezoelectric laminate sensitivity, and an optimal ME structure composition was selected. Characterization of the non-resonant current sensor showed that, over the operating range up to 5 A, the sensor had a sensitivity of 0.34 V/A and non-linearity of less than 1%; for the resonant current sensor over the same range, the sensitivity was 0.53 V/A with non-linearity of less than 0.5%. PMID:28574486
NASA Astrophysics Data System (ADS)
Li, Shengbo Eben; Li, Guofa; Yu, Jiaying; Liu, Chang; Cheng, Bo; Wang, Jianqiang; Li, Keqiang
2018-01-01
Detection and tracking of objects in the side-near-field has attracted much attention for the development of advanced driver assistance systems. This paper presents a cost-effective approach to track moving objects around vehicles using linearly arrayed ultrasonic sensors. To understand the detection characteristics of a single sensor, an empirical detection model was developed considering the shapes and surface materials of various detected objects. Eight sensors were arrayed linearly to expand the detection range for further application in traffic environment recognition. Two types of tracking algorithms for the sensor array, an Extended Kalman filter (EKF) and an Unscented Kalman filter (UKF), were designed for dynamic object tracking. The ultrasonic sensor array was designed to support two types of firing sequences: mutual firing or serial firing. The effectiveness of the designed algorithms was verified in two typical driving scenarios: passing intersections with traffic sign poles or street lights, and overtaking another vehicle. Experimental results showed that both the EKF and the UKF produced more precise tracking positions and smaller RMSE (root mean square error) than a traditional triangular positioning method. These results also encourage the application of cost-effective ultrasonic sensors for near-field environment perception in autonomous driving systems.
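The paper's EKF/UKF designs are not reproduced here; the following minimal sketch shows the predict/update cycle such trackers build on, using a plain linear Kalman filter with a constant-velocity model and noisy one-dimensional range readings. The sampling period and noise levels are assumptions.

```python
import numpy as np

dt = 0.05                                  # firing period of the sensor array (s), assumed
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity state transition
H = np.array([[1.0, 0.0]])                 # ultrasonic sensor measures range only
Q = np.diag([1e-4, 1e-2])                  # process noise (assumed)
R = np.array([[4e-4]])                     # measurement noise, ~2 cm std (assumed)

x = np.array([[2.0], [0.0]])               # initial state: 2 m away, at rest
P = np.eye(2)

rng = np.random.default_rng(5)
true_pos, true_vel = 2.0, -0.5             # object approaching at 0.5 m/s
for _ in range(40):
    true_pos += true_vel * dt
    z = np.array([[true_pos + rng.normal(0, 0.02)]])   # noisy range reading

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("estimated range %.3f m, velocity %.3f m/s" % (x[0, 0], x[1, 0]))
```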
Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.
2015-12-01
Semi-global matching is a well-known stereo matching algorithm in the photogrammetry and computer vision communities. Epipolar images are assumed as input to this algorithm. The epipolar geometry of linear array scanners is not a straight line, as it is for frame cameras. Traditional epipolar resampling algorithms demand rational polynomial coefficients (RPCs), a physical sensor model, or ground control points. In this paper we propose a new epipolar resampling method which works without the need for this information. In the proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering the stereo pair, and the original images are divided into small tiles. By omitting the need for extra information in this way, the speed of the matching algorithm is increased and the memory requirements are decreased. Our experiments on a GeoEye-1 stereo pair captured over Qom city in Iran demonstrate that the epipolar images are generated with sub-pixel accuracy.
Development of new family of wide-angle anamorphic lens with controlled distortion profile
NASA Astrophysics Data System (ADS)
Gauvin, Jonny; Doucet, Michel; Wang, Min; Thibault, Simon; Blanc, Benjamin
2005-08-01
It is well known that a fish-eye lens produces a circular image of the scene with a particular distortion profile. When using a fish-eye lens with a standard sensor (e.g., 1/3", 1/4"), only part of the rectangular detector area is used, leaving many pixels unused. We propose a new approach to obtain enhanced resolution for panoramic imaging. In this paper, various arrangements of an innovative 180-degree anamorphic wide-angle lens design are considered. Their performance as well as lens manufacturability are also discussed. The concept of the design is to use anamorphic optics to produce an elliptical image that maximizes pixel resolution along both axes. Furthermore, a non-linear distortion profile is also introduced to enhance spatial resolution for specific field angles. Typical applications such as panoramic photography, video conferencing, and homeland/transportation security are also presented.
A 3D image sensor with adaptable charge subtraction scheme for background light suppression
NASA Astrophysics Data System (ADS)
Shin, Jungsoon; Kang, Byongmin; Lee, Keechang; Kim, James D. K.
2013-02-01
We present a 3D ToF (Time-of-Flight) image sensor with an adaptive charge subtraction scheme for background light suppression. The proposed sensor can alternately capture a high-resolution color image and a high-quality depth map in each frame. In depth mode, the sensor requires a sufficiently long integration time for accurate depth acquisition, but saturation will occur under strong background illumination. We propose to divide the integration time into N sub-integration times adaptively. In each sub-integration time, our sensor captures an image without saturation and subtracts the charge to prevent pixel saturation. The subtraction results are then accumulated N times, yielding a final image free of background illumination at the full integration time. Experimental results with our own ToF sensor show high background suppression performance. We also propose in-pixel storage and a column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.
Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.
Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian
2018-05-23
Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
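The constant-time, constant-memory algorithm proposed in the paper is not reproduced here; for readers new to the topic, the sketch below shows a simple greedy baseline that buffers samples and closes a segment whenever the best-fit line would exceed a per-segment maximum error.

```python
import numpy as np

def simple_pla(signal, max_error=0.05):
    """Greedy piecewise linear approximation with a per-segment max-error bound.

    Buffers samples (unlike the constant-memory approach discussed above) and
    closes a segment when the best-fit line can no longer stay within max_error.
    """
    segments = []              # list of (start_index, end_index, slope, intercept)
    start, n = 0, len(signal)
    while start < n - 1:
        end = start + 1
        slope, intercept = 0.0, float(signal[start])
        while end < n:
            xs = np.arange(start, end + 1)
            s, c = np.polyfit(xs, signal[start:end + 1], deg=1)
            if np.max(np.abs(s * xs + c - signal[start:end + 1])) > max_error:
                break
            slope, intercept, end = s, c, end + 1
        end -= 1                               # last index that satisfied the bound
        segments.append((start, end, slope, intercept))
        start = end                            # adjacent segments share endpoints
    return segments

t = np.linspace(0, 4 * np.pi, 200)
segs = simple_pla(np.sin(t), max_error=0.05)
print("number of segments:", len(segs))
```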
Image acquisition system using on sensor compressed sampling technique
NASA Astrophysics Data System (ADS)
Gupta, Pravir Singh; Choi, Gwan Seong
2018-01-01
Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
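The sensor-level CS implementation described above is hardware-specific; as a generic software illustration of the compressed sensing principle it relies on, the sketch below measures a sparse signal with random Bernoulli projections and recovers it with iterative soft-thresholding (ISTA). Signal length, measurement count, and sparsity are arbitrary choices.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(6)
n, m, k = 256, 96, 8                        # signal length, measurements, sparsity

# Sparse "scene" and random Bernoulli (+/-1) measurement operator
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = Phi @ x_true                            # compressed measurements

# ISTA iterations: x <- soft(x + (1/L) Phi^T (y - Phi x), lam / L)
L = np.linalg.norm(Phi, 2) ** 2             # Lipschitz constant of the gradient
lam = 0.01
x = np.zeros(n)
for _ in range(500):
    x = soft_threshold(x + (Phi.T @ (y - Phi @ x)) / L, lam / L)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative reconstruction error: %.3f" % rel_err)
```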
On computer vision in wireless sensor networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Nina M.; Ko, Teresa H.
Wireless sensor networks allow detailed sensing of otherwise unknown and inaccessible environments. While it would be beneficial to include cameras in a wireless sensor network because images are so rich in information, the power cost of transmitting an image across the wireless network can dramatically shorten the lifespan of the sensor nodes. This paper describes a new paradigm for the incorporation of imaging into wireless networks. Rather than focusing on transmitting images across the network, we show how an image can be processed locally for key features using simple detectors. Contrasted with traditional event detection systems that trigger an image capture, this enables a new class of sensors which uses a low power imaging sensor to detect a variety of visual cues. Sharing these features among relevant nodes cues specific actions to better provide information about the environment. We report on various existing techniques developed for traditional computer vision research which can aid in this work.
A linearization time-domain CMOS smart temperature sensor using a curvature compensation oscillator.
Chen, Chun-Chi; Chen, Hao-Wen
2013-08-28
This paper presents an area-efficient time-domain CMOS smart temperature sensor using a curvature compensation oscillator for linearity enhancement, operable over a -40 to 120 °C temperature range. Inverter-based smart temperature sensors can substantially reduce the cost and circuit complexity of integrated temperature sensors. However, a large curvature exists on the temperature-to-time transfer curve of the inverter-based delay line and results in poor linearity of the sensor output. For cost reduction and error improvement, a temperature-to-pulse generator composed of a ring oscillator and a time amplifier was used to generate a thermal sensing pulse of sufficient width, proportional to the absolute temperature (PTAT). A simple but effective on-chip curvature compensation oscillator is then proposed to simultaneously count and compensate the curved PTAT pulse for linearization. With such a simple structure, the proposed sensor occupies an extremely small area of 0.07 mm2 in a TSMC 0.35-μm CMOS 2P4M digital process. By using an oscillator-based scheme, the proposed sensor achieves a fine resolution of 0.045 °C without significantly increasing the circuit area. With the curvature compensation, an inaccuracy of -1.2 to 0.2 °C is achieved over an operating range of -40 to 120 °C after two-point calibration for 14 packaged chips. The power consumption is measured as 23 mW at a sample rate of 10 samples/s.
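As a side note on the two-point calibration mentioned at the end of the abstract, the arithmetic amounts to fixing a linear map from raw counts to temperature using readings at two reference temperatures; the sketch below uses invented numbers. The residual deviation from this line is the curvature that the compensation oscillator targets.

```python
# Two-point calibration: fit a linear map from raw sensor counts to temperature
# using readings taken at two known reference temperatures (values are hypothetical).
t_ref = (-40.0, 120.0)          # reference temperatures (deg C)
counts_ref = (1180.0, 8920.0)   # raw oscillator counts measured at those temperatures

gain = (t_ref[1] - t_ref[0]) / (counts_ref[1] - counts_ref[0])
offset = t_ref[0] - gain * counts_ref[0]

def counts_to_celsius(counts):
    """Convert a raw count into a calibrated temperature reading."""
    return gain * counts + offset

print(counts_to_celsius(5050.0))   # raw count from the sensor -> calibrated temperature
```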
NASA Technical Reports Server (NTRS)
Pain, Bedabrata; Yang, Guang; Ortiz, Monico; Wrigley, Christopher; Hancock, Bruce; Cunningham, Thomas
2000-01-01
Noise in photodiode-type CMOS active pixel sensors (APS) is primarily due to the reset (kTC) noise at the sense node, since it is difficult to implement in-pixel correlated double sampling for a 2-D array. The signal integrated on the photodiode sense node (SENSE) is calculated by measuring the difference between the voltage on the column bus (COL) before and after the reset (RST) is pulsed. Noise lower than kTC can be achieved with photodiode-type pixels by employing the "soft-reset" technique. Soft reset refers to resetting with both the drain and gate of the n-channel reset transistor kept at the same potential, causing the sense node to be reset by the sub-threshold MOSFET current. However, the lower noise is achieved only at the expense of higher image lag and low-light-level non-linearity. In this paper, we present an analysis to explain the noise behavior, show evidence of degraded performance at low light levels, and describe new pixels that eliminate non-linearity and lag without compromising noise.
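For reference, the standard expressions (not quoted from the paper) for the rms reset noise voltage on a sense node of capacitance C, and the equivalent noise charge in electrons, are

\sigma_{V,\mathrm{reset}} = \sqrt{\frac{k_B T}{C}}, \qquad \sigma_{e^-} = \frac{\sqrt{k_B T\,C}}{q},

so a smaller sense capacitance raises the noise voltage but lowers the noise charge; the soft-reset technique discussed above reduces the reset variance further, at the cost of the image lag and low-light non-linearity analyzed in the paper.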
Podoleanu, Adrian Gh; Bradu, Adrian
2013-08-12
Conventional spectral domain interferometry (SDI) methods suffer from the need of data linearization. When applied to optical coherence tomography (OCT), conventional SDI methods are limited in their 3D capability, as they cannot deliver direct en-face cuts. Here we introduce a novel SDI method, which eliminates these disadvantages. We denote this method as Master-Slave Interferometry (MSI), because a signal is acquired by a slave interferometer for an optical path difference (OPD) value determined by a master interferometer. The MSI method radically changes the main building block of an SDI sensor and of a spectral domain OCT set-up. The serially provided signal in conventional technology is replaced by multiple signals, a signal for each OPD point in the object investigated. This opens novel avenues in parallel sensing and in parallelization of signal processing in 3D-OCT, with applications in high-resolution medical imaging and microscopy investigation of biosamples. Eliminating the need of linearization leads to lower cost OCT systems and opens potential avenues in increasing the speed of production of en-face OCT images in comparison with conventional SDI.
Electric Field Controlled Magnetism in BiFeO3/Ferromagnet Films
NASA Astrophysics Data System (ADS)
Barry, M.; Lee, K.; Chu, Y. H.; Yang, P. L.; Martin, L. W.; Jenkins, C. A.; Ramesh, R.; Scholl, A.; Doran, A.
2007-03-01
BiFeO3 is the only single phase room temperature multiferroic that is currently known. Not only does it have applications as a lead-free replacement for ferroelectric memory cells and piezoelectric sensors, but its interactions with other materials are now attracting a great deal of attention. Its multiferroic nature has potential in the field of exchange bias, where it could allow electric-field control of the ferromagnetic (FM) magnetization. In order to understand this coupling, an understanding of the magnetization in BiFeO3 is necessary. X-ray linear and circular dichroism images were obtained using a high spatial resolution photoelectron emission microscope (PEEM), allowing elemental specificity and surface sensitivity. A piezoelectric force microscope (PFM) was used to map the ferroelectric state in micron-sized regions of the films, which were then probed using crystallographic measurements and temperature dependent PEEM measurements. Temperature dependent structural measurements allow decoupling of the two order parameters, ferroelectric and magnetic, contributing to the photoemission signal. Careful analysis of linear and circular dichroism images allows determination of magnetic directions in BiFeO3 and FM layers.
Image-Based Environmental Monitoring Sensor Application Using an Embedded Wireless Sensor Network
Paek, Jeongyeup; Hicks, John; Coe, Sharon; Govindan, Ramesh
2014-01-01
This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high rate sensing data, scalability and its flexible scripting language, which enables mote-side image compression and the ease of deployment. Our first deployment of a pitfall trap monitoring application at the James San Jacinto Mountain Reserve provided us with insights and lessons learned into the deployment of and compression schemes for these embedded wireless imaging systems. Our three month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images to be useful for obtaining data on answering their biological questions. PMID:25171121
Image-based environmental monitoring sensor application using an embedded wireless sensor network.
Paek, Jeongyeup; Hicks, John; Coe, Sharon; Govindan, Ramesh
2014-08-28
This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high rate sensing data, scalability and its flexible scripting language, which enables mote-side image compression and the ease of deployment. Our first deployment of a pitfall trap monitoring application at the James San Jacinto Mountain Reserve provided us with insights and lessons learned into the deployment of and compression schemes for these embedded wireless imaging systems. Our three month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images to be useful for obtaining data on answering their biological questions.
York, Timothy; Powell, Samuel B.; Gao, Shengkui; Kahan, Lindsey; Charanya, Tauseef; Saha, Debajit; Roberts, Nicholas W.; Cronin, Thomas W.; Marshall, Justin; Achilefu, Samuel; Lake, Spencer P.; Raman, Baranidharan; Gruev, Viktor
2015-01-01
In this paper, we present recent work on bioinspired polarization imaging sensors and their applications in biomedicine. In particular, we focus on three different aspects of these sensors. First, we describe the electro–optical challenges in realizing a bioinspired polarization imager, and in particular, we provide a detailed description of a recent low-power complementary metal–oxide–semiconductor (CMOS) polarization imager. Second, we focus on signal processing algorithms tailored for this new class of bioinspired polarization imaging sensors, such as calibration and interpolation. Third, the emergence of these sensors has enabled rapid progress in characterizing polarization signals and environmental parameters in nature, as well as several biomedical areas, such as label-free optical neural recording, dynamic tissue strength analysis, and early diagnosis of flat cancerous lesions in a murine colorectal tumor model. We highlight results obtained from these three areas and discuss future applications for these sensors. PMID:26538682
Shi, Jidong; Wang, Liu; Dai, Zhaohe; Zhao, Lingyu; Du, Mingde; Li, Hongbian; Fang, Ying
2018-05-30
Flexible piezoresistive pressure sensors have been attracting wide attention for applications in health monitoring and human-machine interfaces because of their simple device structure and easy-readout signals. For practical applications, flexible pressure sensors with both high sensitivity and wide linearity range are highly desirable. Herein, a simple and low-cost method for the fabrication of a flexible piezoresistive pressure sensor with a hierarchical structure over large areas is presented. The piezoresistive pressure sensor consists of arrays of microscale papillae with nanoscale roughness produced by replicating the lotus leaf's surface and spray-coating of graphene ink. Finite element analysis (FEA) shows that the hierarchical structure governs the deformation behavior and pressure distribution at the contact interface, leading to a quick and steady increase in contact area with loads. As a result, the piezoresistive pressure sensor demonstrates a high sensitivity of 1.2 kPa⁻¹ and a wide linearity range from 0 to 25 kPa. The flexible pressure sensor is applied for sensitive monitoring of small vibrations, including wrist pulse and acoustic waves. Moreover, a piezoresistive pressure sensor array is fabricated for mapping the spatial distribution of pressure. These results highlight the potential applications of the flexible piezoresistive pressure sensor for health monitoring and electronic skin.
Multi-Image Registration for an Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2002-01-01
An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
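A minimal sketch of the second, control-point-based approach described above: a 2-D affine transform is estimated by least squares from user-selected point pairs and its residuals are checked. The point coordinates below are hypothetical, and a full registration would additionally resample the image with the fitted transform.

```python
import numpy as np

# Hypothetical control points: (x, y) in one sensor's image and their matches in another
src = np.array([[10.0, 12.0], [200.0, 15.0], [190.0, 180.0], [20.0, 170.0], [105.0, 95.0]])
dst = np.array([[14.5, 20.1], [232.0, 18.9], [221.4, 205.3], [25.2, 198.7], [122.8, 110.6]])

# Affine model: [x', y'] = A @ [x, y, 1]; solve the 6 parameters by least squares
design = np.hstack([src, np.ones((len(src), 1))])       # N x 3
params, *_ = np.linalg.lstsq(design, dst, rcond=None)   # 3 x 2
A = params.T                                            # 2 x 3 affine matrix

def warp_points(points):
    """Apply the fitted affine transform to (x, y) points."""
    return np.hstack([points, np.ones((len(points), 1))]) @ A.T

residuals = dst - warp_points(src)
print("affine matrix:\n", np.round(A, 3))
print("rms registration error (px): %.2f" % np.sqrt((residuals ** 2).mean()))
```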
NASA Astrophysics Data System (ADS)
Goss, Tristan M.
2016-05-01
With 640x512 pixel format IR detector arrays having been on the market for the past decade, Standard Definition (SD) thermal imaging sensors have been developed and deployed across the world. Now with 1280x1024 pixel format IR detector arrays becoming readily available designers of thermal imager systems face new challenges as pixel sizes reduce and the demand and applications for High Definition (HD) thermal imaging sensors increases. In many instances the upgrading of existing under-sampled SD thermal imaging sensors into more optimally sampled or oversampled HD thermal imaging sensors provides a more cost effective and reduced time to market option than to design and develop a completely new sensor. This paper presents the analysis and rationale behind the selection of the best suited HD pixel format MWIR detector for the upgrade of an existing SD thermal imaging sensor to a higher performing HD thermal imaging sensor. Several commercially available and "soon to be" commercially available HD small pixel IR detector options are included as part of the analysis and are considered for this upgrade. The impact the proposed detectors have on the sensor's overall sensitivity, noise and resolution is analyzed, and the improved range performance is predicted. Furthermore with reduced dark currents due to the smaller pixel sizes, the candidate HD MWIR detectors are operated at higher temperatures when compared to their SD predecessors. Therefore, as an additional constraint and as a design goal, the feasibility of achieving upgraded performance without any increase in the size, weight and power consumption of the thermal imager is discussed herein.
NASA Astrophysics Data System (ADS)
Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.
2009-08-01
In this paper, the state least-squares linear estimation problem from correlated uncertain observations coming from multiple sensors is addressed. It is assumed that, at each sensor, the state is measured in the presence of additive white noise and that the uncertainty in the observations is characterized by a set of Bernoulli random variables which are only correlated at consecutive time instants. Assuming that the statistical properties of such variables are not necessarily the same for all the sensors, a recursive filtering algorithm is proposed, and the performance of the estimators is illustrated by a numerical simulation example wherein a signal is estimated from correlated uncertain observations coming from two sensors with different uncertainty characteristics.
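The correlated-Bernoulli model analyzed in the paper is not reproduced here; the sketch below shows the simpler baseline it generalizes: a scalar Kalman filter over two sensors in which each measurement arrives independently with some probability and the update is applied only to the readings that arrive. All model parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_steps, p_arrival = 100, 0.8         # probability that each sensor's reading is usable

a, q = 0.95, 0.1                      # scalar state model: x_k = a * x_{k-1} + w_k
h = np.array([1.0, 1.0])              # two sensors observe the state directly
r = np.array([0.2, 0.4])              # sensor-dependent measurement noise variances

x_true, x_est, p_est = 0.0, 0.0, 1.0
for _ in range(n_steps):
    x_true = a * x_true + rng.normal(0, np.sqrt(q))
    # Predict
    x_est, p_est = a * x_est, a * a * p_est + q
    # Update with whichever sensors actually delivered a measurement this step
    for i in range(2):
        if rng.random() < p_arrival:
            z = h[i] * x_true + rng.normal(0, np.sqrt(r[i]))
            k_gain = p_est * h[i] / (h[i] * p_est * h[i] + r[i])
            x_est = x_est + k_gain * (z - h[i] * x_est)
            p_est = (1.0 - k_gain * h[i]) * p_est

print("final estimate %.3f vs true %.3f" % (x_est, x_true))
```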
Zinc oxide inverse opal enzymatic biosensor
NASA Astrophysics Data System (ADS)
You, Xueqiu; Pikul, James H.; King, William P.; Pak, James J.
2013-06-01
We report ZnO inverse opal- and nanowire (NW)-based enzymatic glucose biosensors with extended linear detection ranges. The ZnO inverse opal sensors have 0.01-18 mM linear detection range, which is 2.5 times greater than that of ZnO NW sensors and 1.5 times greater than that of other reported ZnO sensors. This larger range is because of reduced glucose diffusivity through the inverse opal geometry. The ZnO inverse opal sensors have an average sensitivity of 22.5 μA/(mM cm2), which diminished by 10% after 35 days, are more stable than ZnO NW sensors whose sensitivity decreased by 10% after 7 days.
Optimal sensor placement for control of a supersonic mixed-compression inlet with variable geometry
NASA Astrophysics Data System (ADS)
Moore, Kenneth Thomas
A method of using fluid dynamics models for the generation of models that are useable for control design and analysis is investigated. The problem considered is the control of the normal shock location in the VDC inlet, which is a mixed-compression, supersonic, variable-geometry inlet of a jet engine. A quasi-one-dimensional set of fluid equations incorporating bleed and moving walls is developed. An object-oriented environment is developed for simulation of flow systems under closed-loop control. A public interface between the controller and fluid classes is defined. A linear model representing the dynamics of the VDC inlet is developed from the finite difference equations, and its eigenstructure is analyzed. The order of this model is reduced using the square root balanced model reduction method to produce a reduced-order linear model that is suitable for control design and analysis tasks. A modification to this method that improves the accuracy of the reduced-order linear model for the purpose of sensor placement is presented and analyzed. The reduced-order linear model is used to develop a sensor placement method that quantifies as a function of the sensor location the ability of a sensor to provide information on the variable of interest for control. This method is used to develop a sensor placement metric for the VDC inlet. The reduced-order linear model is also used to design a closed loop control system to control the shock position in the VDC inlet. The object-oriented simulation code is used to simulate the nonlinear fluid equations under closed-loop control.
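Not the quasi-one-dimensional inlet model itself, but a small illustration of the machinery behind square-root balanced reduction: solve the two Lyapunov equations for the controllability and observability gramians of a stable LTI system and inspect the Hankel singular values, whose decay indicates how many states a reduced-order model needs. The random system below stands in for the linearized inlet dynamics.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Small random stable LTI system standing in for the linearized inlet model
rng = np.random.default_rng(8)
n = 6
A = rng.normal(size=(n, n))
A -= (np.max(np.real(np.linalg.eigvals(A))) + 1.0) * np.eye(n)   # shift eigenvalues into the left half-plane
B = rng.normal(size=(n, 1))
C = rng.normal(size=(1, n))

# Gramians: A P + P A^T + B B^T = 0 (controllability), A^T Q + Q A + C^T C = 0 (observability)
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: sqrt of the eigenvalues of P Q; small values -> states safe to truncate
eig = np.clip(np.real(np.linalg.eigvals(P @ Q)), 0.0, None)
hsv = np.sqrt(np.sort(eig)[::-1])
print("Hankel singular values:", np.round(hsv, 4))
```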
Compressive Coded-Aperture Multimodal Imaging Systems
NASA Astrophysics Data System (ADS)
Rueda-Chacon, Hoover F.
Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, polarization, and others. For instance, spectral images are modeled as 3D cubes with two spatial and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes. Imaging cubes varying in time, spectrum or depth are referred to as 4D images. Nature itself spans different physical domains, so imaging our real world demands capturing information in at least 6 different domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to 3 spectral channels in real time through the use of color sensors. Capturing multiple spectral channels requires scanning methodologies, which demand long acquisition times. In general, to-date multimodal imaging requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene; different fusion techniques are then employed to merge all the individual information into a single image. Therefore, new ways to efficiently capture more than 3 spectral channels of 3D time-varying spatial information, with a single sensor or a few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures that follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization algorithm to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS). To date, the coding procedure in CSI has been realized through the use of "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures block or pass the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In the first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn allow the input image to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures is presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in the quality of the reconstructions compared with conventional block-unblock coded aperture-based CSI architectures.
On another front, due to the rich information contained in the infrared spectrum as well as the depth domain, this thesis aims to explore multimodal imaging by extending the range sensitivity of current CSI systems to a dual-band visible+near-infrared spectral domain, and also, it proposes, for the first time, a new imaging device that captures simultaneously 4D data cubes (2D spatial+1D spectral+depth imaging) with as few as a single snapshot. Due to the snapshot advantage of this camera, video sequences are possible, thus enabling the joint capture of 5D imagery. It aims to create super-human sensing that will enable the perception of our world in new and exciting ways. With this, we intend to advance in the state of the art in compressive sensing systems to extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer/robotic vision because they would allow an artificial intelligence to make informed decisions about not only the location of objects within a scene but also their material properties.
Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor
NASA Astrophysics Data System (ADS)
Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji
2006-02-01
We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe an experimental image acquisition system used to measure spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high quality single-chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the HDTV frequency band.
The challenge of sCMOS image sensor technology to EMCCD
NASA Astrophysics Data System (ADS)
Chang, Weijing; Dai, Fang; Na, Qiyue
2018-02-01
In the field of low-illumination image sensors, the noise of the latest scientific-grade CMOS (sCMOS) image sensors is close to that of EMCCDs, and the industry considers sCMOS to have the potential to compete with, and even replace, EMCCD. We therefore selected several typical sCMOS and EMCCD image sensors and cameras and compared their performance parameters. The results show that the signal-to-noise ratio of sCMOS is close to that of EMCCD, and the other parameters are superior. However, signal-to-noise ratio is critical for low-illumination imaging, and the actual imaging results of sCMOS are not yet ideal. EMCCD remains the first choice in high-performance application fields.
One dimensional wavefront distortion sensor comprising a lens array system
Neal, Daniel R.; Michie, Robert B.
1996-01-01
A 1-dimensional sensor for measuring wavefront distortion of a light beam as a function of time and spatial position includes a lens system incorporating a linear array of lenses, and a detector system incorporating a linear array of light detectors positioned relative to the lens system so that light passing through any of the lenses is focused on at least one of the light detectors. The 1-dimensional sensor determines the slope of the wavefront from the locations of the detectors illuminated by the light. The 1-dimensional sensor has much greater bandwidth than 2-dimensional systems.
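As an illustration of the slope-recovery principle behind such a lenslet-array sensor (not the patented device), the local wavefront slope follows from the displacement of each focused spot on the detector array divided by the lens focal length; the sketch below computes centroids of synthetic spots behind four lenslets. Geometry and numbers are invented.

```python
import numpy as np

focal_length_mm = 5.0                        # lenslet focal length (assumed)
pixel_pitch_mm = 0.01                        # detector pixel pitch (assumed)

def spot_centroid(intensity):
    """Centroid (in pixels) of the light distribution behind one lenslet."""
    idx = np.arange(len(intensity))
    return np.sum(idx * intensity) / np.sum(intensity)

# Synthetic detector segments behind 4 lenslets, each with a shifted focal spot
rng = np.random.default_rng(9)
true_shifts_px = [0.0, 1.5, -2.0, 3.0]
segments = []
for shift in true_shifts_px:
    x = np.arange(32, dtype=float)
    spot = np.exp(-0.5 * ((x - (15.5 + shift)) / 1.2) ** 2)
    segments.append(spot + rng.normal(0, 0.01, 32))

reference_px = 15.5                          # spot position for an unaberrated beam
slopes = [(spot_centroid(s) - reference_px) * pixel_pitch_mm / focal_length_mm
          for s in segments]
print("local wavefront slopes (rad):", np.round(slopes, 5))
```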
Linear FBG Temperature Sensor Interrogation with Fabry-Perot ITU Multi-wavelength Reference.
Park, Hyoung-Jun; Song, Minho
2008-10-29
The equidistantly spaced multi-passbands of a Fabry-Perot ITU filter are used as an efficient multi-wavelength reference for fiber Bragg grating sensor demodulation. To compensate for the nonlinear wavelength tuning effect in the FBG sensor demodulator, a polynomial fitting algorithm was applied to the temporal peaks of the wavelength-scanned ITU filter. The fitted wavelength values are assigned to the peak locations of the FBG sensor reflections, obtaining constant accuracy, regardless of the wavelength scan range and frequency. A linearity error of about 0.18% against a reference thermocouple thermometer was obtained with the suggested method.
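As a rough illustration of the polynomial-fitting step described above (the ITU peak positions and grid values below are made up, not the paper's data), the sketch fits a polynomial that maps the temporal peaks of the wavelength-scanned reference filter to their known grid wavelengths and then assigns wavelengths to FBG reflection peaks on the same time axis.

```python
# Hypothetical numbers: map scan time to wavelength via the ITU reference peaks,
# then interrogate FBG peak wavelengths with the fitted mapping.
import numpy as np

itu_peak_times = np.array([12.1, 18.4, 24.9, 31.8, 39.1, 46.9, 55.2])   # us, assumed
itu_wavelengths = 1550.12 + 0.8 * np.arange(itu_peak_times.size)        # nm, ~100 GHz grid

# a 3rd-order polynomial compensates the nonlinear (non-uniform) wavelength sweep
coeffs = np.polyfit(itu_peak_times, itu_wavelengths, deg=3)
time_to_wavelength = np.poly1d(coeffs)

fbg_peak_times = np.array([21.3, 42.7])          # detected FBG reflection peaks (us), assumed
fbg_wavelengths = time_to_wavelength(fbg_peak_times)
print("FBG Bragg wavelengths (nm):", np.round(fbg_wavelengths, 3))
```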
One dimensional wavefront distortion sensor comprising a lens array system
Neal, D.R.; Michie, R.B.
1996-02-20
A 1-dimensional sensor for measuring wavefront distortion of a light beam as a function of time and spatial position includes a lens system, which incorporates a linear array of lenses, and a detector system, which incorporates a linear array of light detectors positioned relative to the lens system so that light passing through any of the lenses is focused on at least one of the light detectors. The 1-dimensional sensor determines the slope of the wavefront from the locations of the detectors illuminated by the light. The 1-dimensional sensor has much greater bandwidth than 2-dimensional systems. 8 figs.
Evaluation of physical properties of different digital intraoral sensors.
Al-Rawi, Wisam; Teich, Sorin
2013-09-01
Digital technologies provide clinically acceptable results comparable to traditional films while offering other advantages, such as the ability to store and manipulate images, immediate evaluation of the diagnostic quality of the image, and a possible reduction in patient radiation exposure. The purpose of this paper is to present the results of the evaluation of the physical design of eight CMOS digital intraoral sensors. Sensors tested included: XDR (Cyber Medical Imaging, Los Angeles, CA, USA), RVG 6100 (Carestream Dental LLC, Atlanta, GA, USA), Platinum (DEXIS LLC., Hatfield, PA, USA), CDR Elite (Schick Technologies, Long Island City, NY, USA), ProSensor (Planmeca, Helsinki, Finland), EVA (ImageWorks, Elmsford, NY, USA), XIOS Plus (Sirona, Bensheim, Germany), and GXS-700 (Gendex Dental Systems, Hatfield, PA, USA). The sensors were evaluated for cable configuration, connectivity interface, presence of a back-scattering radiation shield, plate thickness, and active sensor area, and the active imaging area was compared with the outside casing and with conventional radiographic films. There were variations in the physical design of the different sensors, and for most parameters tested a lack of standardization exists in the industry. The results of this study revealed that these details are not always available in the material provided by the manufacturers and are often not advertised. For all sensor sizes, the active imaging area was smaller than that of conventional films. No sensor in the group had the best physical design overall. The data presented in this paper establish a benchmark for comparing the physical design of digital intraoral sensors.
Graphene-based ultrasonic detector for photoacoustic imaging
NASA Astrophysics Data System (ADS)
Yang, Fan; Song, Wei; Zhang, Chonglei; Fang, Hui; Min, Changjun; Yuan, Xiaocong
2018-03-01
Taking advantage of optical absorption as the imaging contrast, photoacoustic imaging technology is able to map the volumetric distribution of optical absorption properties within biological tissues. Unfortunately, the traditional piezoceramic transducers used in most photoacoustic imaging setups have inadequate frequency response, resulting in both poor depth resolution and inaccurate quantification of the optical absorption information. Instead of a piezoelectric ultrasonic transducer, we develop a graphene-based optical sensor for detecting photoacoustic pressure. The refractive index of the coupling medium is modulated by the photoacoustic pressure perturbation, which varies the polarization-sensitive optical absorption of the graphene. As a result, photoacoustic detection is realized by recording the reflectance intensity difference of polarized light. The graphene-based detector possesses an estimated noise-equivalent pressure (NEP) of 550 Pa over a 20-MHz bandwidth, with a nearly linear pressure response from 11.0 kPa to 53.0 kPa. Furthermore, a graphene-based photoacoustic microscope was built that non-invasively and label-freely reveals the microvascular anatomy of mouse ears.
Research of infrared laser based pavement imaging and crack detection
NASA Astrophysics Data System (ADS)
Hong, Hanyu; Wang, Shu; Zhang, Xiuhua; Jing, Genqiang
2013-08-01
Road crack detection is seriously affected by many factors in practical applications, such as shadows, road markings, oil stains, and high-frequency noise. Because of these factors, current crack detection methods cannot distinguish cracks in complex scenes. To solve this problem, a novel method based on infrared laser pavement imaging is proposed. First, a single-sensor laser pavement imaging system is used to obtain pavement images, with a high-power laser line projector employed to suppress the effect of shadows. Second, a crack extraction algorithm that intelligently merges multiple features is proposed to extract crack information. In this step, the non-negative feature and the contrast feature are used to extract the basic crack information, and a circular projection based on a linearity feature is applied to enhance the crack area and eliminate noise. A series of experiments performed to test the proposed method shows that the proposed automatic extraction method is effective and advanced.
Ionic pH and glucose sensors fabricated using hydrothermal ZnO nanostructures
NASA Astrophysics Data System (ADS)
Wang, Jyh-Liang; Yang, Po-Yu; Hsieh, Tsang-Yen; Juan, Pi-Chun
2016-01-01
Hydrothermally synthesized aluminum-doped ZnO (AZO) nanostructures have been adopted in extended-gate field-effect transistor (EGFET) sensors to demonstrate sensitive and stable pH and glucose sensing. The AZO-nanostructured EGFET sensors exhibited superior pH sensing characteristics: a high current sensitivity of 0.96 µA^(1/2)/pH, a high linearity of 0.9999, little distortion of the output waveforms, a small hysteresis width of 4.83 mV, good long-term repeatability, and a wide sensing range from pH 1 to 13. The glucose sensing characteristics of the AZO-nanostructured biosensors exhibited the desired sensitivity of 60.5 µA·cm^-2·mM^-1 and a linearity of 0.9996 up to 13.9 mM. The attractive characteristics of high sensitivity, high linearity, and repeatability indicate the potential of ionic AZO-nanostructured EGFET sensors for use as electrochemical and disposable biosensors.
NASA Astrophysics Data System (ADS)
Windl, Roman; Abert, Claas; Bruckner, Florian; Huber, Christian; Vogler, Christoph; Weitensfelder, Herbert; Suess, Dieter
2017-11-01
In this work, a passive and wireless magnetic sensor for monitoring linear displacements is proposed. We exploit recent advances in 3D printing to fabricate a polymer-bonded magnet with a spatially linear magnetic field component along the length of the magnet. Regulating the magnetic compound fraction during printing allows specific shaping of the magnetic field distribution. A giant magnetoresistance (GMR) magnetic field sensor is combined with a radio-frequency identification tag to passively monitor the field exerted by the printed magnet. Owing to the tailored magnetic field, a displacement of the magnet with respect to the sensor can be detected in the sub-mm regime. The sensor design provides good flexibility, because the 3D printing process can be controlled according to application needs. Absolute displacement detection using low-cost components, together with passive operation, long-term stability, and longevity, renders the proposed sensor system ideal for structural health monitoring applications.
Wavefront sensorless adaptive optics ophthalmoscopy in the human eye
Hofer, Heidi; Sredar, Nripun; Queener, Hope; Li, Chaohong; Porter, Jason
2011-01-01
Wavefront sensor noise and fidelity place a fundamental limit on achievable image quality in current adaptive optics ophthalmoscopes. Additionally, the wavefront sensor ‘beacon’ can interfere with visual experiments. We demonstrate real-time (25 Hz), wavefront sensorless adaptive optics imaging in the living human eye with image quality rivaling that of wavefront sensor based control in the same system. A stochastic parallel gradient descent algorithm directly optimized the mean intensity in retinal image frames acquired with a confocal adaptive optics scanning laser ophthalmoscope (AOSLO). When imaging through natural, undilated pupils, both control methods resulted in comparable mean image intensities. However, when imaging through dilated pupils, image intensity was generally higher following wavefront sensor-based control. Despite the typically reduced intensity, image contrast was higher, on average, with sensorless control. Wavefront sensorless control is a viable option for imaging the living human eye and future refinements of this technique may result in even greater optical gains. PMID:21934779
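For readers unfamiliar with the optimization used above, the sketch below illustrates a generic stochastic parallel gradient descent (SPGD) loop of the kind described: the image-sharpness metric and the actuator model here are simple stand-ins, not the AOSLO system or its real mirror.

```python
# Generic SPGD sketch with a surrogate quadratic sharpness metric (assumed model).
import numpy as np

rng = np.random.default_rng(1)
n_act = 12                                  # number of deformable-mirror actuators (toy)
target = rng.uniform(-1, 1, n_act)          # unknown command that maximizes the metric

def sharpness(cmd):
    # surrogate metric: peaks when cmd == target (mean image intensity in practice)
    return -np.sum((cmd - target) ** 2)

cmd = np.zeros(n_act)
gain, perturb = 0.3, 0.05
for _ in range(2000):
    delta = perturb * rng.choice([-1.0, 1.0], n_act)   # parallel random perturbation
    dJ = sharpness(cmd + delta) - sharpness(cmd - delta)
    cmd = cmd + gain * dJ * delta                      # stochastic gradient step

print("residual actuator error:", round(np.linalg.norm(cmd - target), 4))
```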
Photodiode area effect on performance of X-ray CMOS active pixel sensors
NASA Astrophysics Data System (ADS)
Kim, M. S.; Kim, Y.; Kim, G.; Lim, K. T.; Cho, G.; Kim, D.
2018-02-01
Compared with conventional TFT-based X-ray imaging devices, CMOS-based X-ray imaging sensors are considered next generation because they can be manufactured with very small pixel pitches and can acquire images at high speed. In addition, CMOS-based sensors have the advantage of integrating various functional circuits within the sensor, and image quality can be improved by the high fill factor achievable in large pixels. If the subject is small, however, the pixel size must be reduced accordingly, and the fill factor must also be reduced to accommodate the various functional circuits within the pixel. In this study, 3T active pixel sensors (APS) with photodiodes of four different sizes were fabricated and evaluated. It is well known that a larger photodiode leads to improved overall performance; nonetheless, once the photodiode area exceeds 1000 μm2, the rate at which sensor performance improves with photodiode size diminishes. As a result, considering the fill factor, a pixel pitch larger than 32 μm is not necessary to achieve high image quality. Conversely, poor image quality is to be expected unless special sensor-design techniques are applied for sensors with a pixel pitch of 25 μm or less.
Linear Covariance Analysis for a Lunar Lander
NASA Technical Reports Server (NTRS)
Jang, Jiann-Woei; Bhatt, Sagar; Fritz, Matthew; Woffinden, David; May, Darryl; Braden, Ellen; Hannan, Michael
2017-01-01
A next-generation lunar lander Guidance, Navigation, and Control (GNC) system, which includes a state-of-the-art optical sensor suite, is proposed in a concept design cycle. The design goal is to allow the lander to softly land within the prescribed landing precision. The achievement of this precision landing requirement depends on proper selection of the sensor suite. In this paper, a robust sensor selection procedure is demonstrated using a Linear Covariance (LinCov) analysis tool developed by Draper.
The Effect of the Thickness of the Sensitive Layer on the Performance of the Accumulating NOx Sensor
Groß, Andrea; Richter, Miriam; Kubinski, David J.; Visser, Jacobus H.; Moos, Ralf
2012-01-01
A novel and promising method to measure low levels of NOx utilizes the accumulating sensor principle. During an integration cycle, incoming NOx molecules are stored in a sensitive layer based on an automotive lean NOx trap (LNT) material that changes its electrical resistivity in proportion to the amount of stored NOx, making the sensor suitable for long-term detection of low levels of NOx. In this study, the influence of the thickness of the sensitive layer, prepared by multiple screen-printing, is investigated. All samples show good accumulating sensing properties for both NO and NO2. In accordance with a simplified model, the base resistance of the sensitive layer and the sensitivity to NOx decrease with increasing thickness, whereas the sensor response time increases. The linear measurement range of all samples ends at a sensor response of about 30%, so the linearly detectable amount increases with the thickness. Hence, varying the thickness of the sensitive layer is a powerful tool for adapting the linear measurement range (proportional to the thickness) as well as the sensitivity (proportional to the inverse thickness) to the application requirements. Calculations combining the sensor model with the measurement results indicate that, for operation in the linear range, about 3% of the LNT material is converted to nitrate.
A Closed-Form Error Model of Straight Lines for Improved Data Association and Sensor Fusing
2018-01-01
Linear regression is a basic tool in mobile robotics, since it enables accurate estimation of straight lines from range-bearing scans or in digital images, which is a prerequisite for reliable data association and sensor fusing in the context of feature-based SLAM. This paper discusses, extends and compares existing algorithms for line fitting that are applicable also in the case of strong covariances between the coordinates at each single data point, which must not be neglected if range-bearing sensors are used. In particular, the determination of the covariance matrix, which is required for stochastic modeling, is considered. The main contribution is a new closed-form error model of straight lines for calculating the covariance matrix quickly and reliably from just a few comprehensible and easily obtainable parameters. The model can be applied widely whenever a line is fitted from a number of distinct points, even without a priori knowledge of the specific measurement noise. By means of extensive simulations, the performance and robustness of the new model in comparison to existing approaches are shown. PMID:29673205
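For orientation, the sketch below shows a standard total-least-squares fit of a line in Hessian normal form (alpha, r), with the parameter covariance obtained by first-order propagation of an assumed per-point noise covariance through a numerical Jacobian. This is a generic baseline, not the closed-form model contributed by the paper.

```python
# Generic line fit in (alpha, r) form with numerically propagated covariance.
import numpy as np

def fit_line(pts):
    # pts: (N, 2) array of (x, y); minimizes sum of squared perpendicular distances
    xm, ym = pts.mean(axis=0)
    dx, dy = pts[:, 0] - xm, pts[:, 1] - ym
    alpha = 0.5 * np.arctan2(-2.0 * np.sum(dx * dy), np.sum(dy**2) - np.sum(dx**2))
    r = xm * np.cos(alpha) + ym * np.sin(alpha)
    return np.array([alpha, r])

def line_covariance(pts, pt_cov, eps=1e-6):
    # propagate an identical 2x2 per-point covariance through the fit numerically
    p0 = fit_line(pts)
    cov = np.zeros((2, 2))
    for i in range(pts.shape[0]):
        J = np.zeros((2, 2))                 # d(alpha, r) / d(x_i, y_i)
        for k in range(2):
            bumped = pts.copy()
            bumped[i, k] += eps
            J[:, k] = (fit_line(bumped) - p0) / eps
        cov += J @ pt_cov @ J.T
    return cov

rng = np.random.default_rng(3)
t = np.linspace(0, 5, 30)
pts = np.c_[t, 0.7 * t + 1.2] + rng.normal(0, 0.05, (30, 2))   # noisy straight line
print("alpha, r:", fit_line(pts))
print("covariance:\n", line_covariance(pts, np.diag([0.05**2, 0.05**2])))
```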
Analysis of Dark Current in BRITE Nanosatellite CCD Sensors
Popowicz, Adam
2018-01-01
The BRightest Target Explorer (BRITE) is the pioneering nanosatellite mission dedicated to photometric observations of the brightest stars in the sky. The BRITE charge-coupled device (CCD) sensors are poorly shielded against the extensive flux of energetic particles, which constantly induces defects in the silicon lattice. In this paper we investigate the temporal evolution of dark-current generation in the BRITE CCDs over almost four years after launch. Utilizing several steps of image processing and normalization of the results, it was possible to obtain useful information about the progress of thermal activity in the sensors. The outcomes show a clear and consistent linear increase of induced damage, despite the fact that only about 0.14% of the CCD pixels were probed. By analyzing the temperature dependence of the dark current, we identified the observed defects as phosphorus-vacancy (PV) pairs, which are common in proton-irradiated CCD matrices. Moreover, the Meyer-Neldel empirical rule was confirmed in our dark current data, yielding E_MN = 24.8 meV for proton-induced PV defects. PMID:29415471
High-Speed Binary-Output Image Sensor
NASA Technical Reports Server (NTRS)
Fossum, Eric; Panicacci, Roger A.; Kemeny, Sabrina E.; Jones, Peter D.
1996-01-01
Photodetector outputs digitized by circuitry on same integrated-circuit chip. Developmental special-purpose binary-output image sensor designed to capture up to 1,000 images per second, with resolution greater than 10^6 pixels per image. Lower-resolution but higher-frame-rate prototype of sensor contains 128 x 128 array of photodiodes on complementary metal oxide/semiconductor (CMOS) integrated-circuit chip. In application for which it is being developed, sensor used to examine helicopter oil to determine whether amount of metal and sand in oil sufficient to warrant replacement.
Radiographic endodontic working length estimation: comparison of three digital image receptors.
Athar, Anas; Angelopoulos, Christos; Katz, Jerald O; Williams, Karen B; Spencer, Paulette
2008-10-01
This in vitro study was conducted to evaluate the accuracy of the Schick wireless image receptor compared with 2 other types of digital image receptors for measuring the radiographic landmarks pertinent to endodontic treatment. Fourteen human cadaver mandibles with retained molars were selected. A fine endodontic file (#10) was introduced into the canal at random distances from the apex and at the apex of the tooth; images were made with 3 different #2-size image receptors: DenOptix storage phosphor plates, Gendex CCD sensor (wired), and Schick CDR sensor (wireless). Six raters viewed the images to identify the radiographic apex of the tooth and the tip of the fine (#10) endodontic file. Inter-rater reliability was also assessed. Repeated-measures analysis of variance revealed a significant main effect for the type of image receptor. Raters' error in identifying the structures of interest was significantly higher for the DenOptix storage phosphor plates, whereas the least error was noted with the Schick CDR sensor. A significant interaction effect was observed for rater and type of image receptor used, but this effect contributed only 6% (P < .01; η² = 0.06) toward the outcome of the results. The Schick CDR wireless sensor may be preferable to other solid-state sensors because there is no cable connecting the sensor to the computer. Further testing of this sensor for other diagnostic tasks is recommended, as well as evaluation of patient acceptance.
Precise calibration of pupil images in pyramid wavefront sensor.
Liu, Yong; Mu, Quanquan; Cao, Zhaoliang; Hu, Lifa; Yang, Chengliang; Xuan, Li
2017-04-20
The pyramid wavefront sensor (PWFS) is a novel wavefront sensor with several appealing advantages over the Shack-Hartmann wavefront sensor. The PWFS uses four pupil images to calculate the local tilt of the incoming wavefront. The pupil images are conjugated with the telescope pupil, so each pixel in a pupil image is diffraction-limited by the telescope pupil diameter; the sensing error of the PWFS is therefore much lower than that of the Shack-Hartmann sensor and is related to the extraction and alignment accuracy of the pupil images. However, precise extraction of these images is difficult in practice. Aiming to improve the sensing accuracy, we analyzed the physical model of PWFS calibration and put forward an extraction algorithm. The process was verified in a closed-loop correction experiment. The results showed that the sensing accuracy of the PWFS increased after applying the calibration and extraction method.
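For context, once the four pupil images have been extracted and co-registered, local wavefront slopes are commonly estimated from normalized intensity differences between opposing pupil pairs. The sketch below shows that standard slope estimate; the quadrant labeling and the toy tilt example are assumptions and it does not reproduce the calibration method of the paper.

```python
# PWFS slope estimate from four pupil images (quadrant convention assumed).
import numpy as np

def pwfs_slopes(i1, i2, i3, i4, eps=1e-12):
    """i1..i4: 2-D arrays, the four pupil images after extraction and alignment."""
    total = i1 + i2 + i3 + i4 + eps
    sx = ((i1 + i2) - (i3 + i4)) / total      # slope signal along x (assumed convention)
    sy = ((i1 + i3) - (i2 + i4)) / total      # slope signal along y (assumed convention)
    return sx, sy

# toy example: a pure tilt pushes light toward two of the four pupils
n = 16
flat = np.ones((n, n))
i1, i2 = 1.2 * flat, 1.2 * flat               # brighter pupil pair
i3, i4 = 0.8 * flat, 0.8 * flat               # dimmer pupil pair
sx, sy = pwfs_slopes(i1, i2, i3, i4)
print("mean x-slope signal:", round(sx.mean(), 3), "| mean y-slope signal:", round(sy.mean(), 3))
```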
Advances in multi-sensor data fusion: algorithms and applications.
Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying
2009-01-01
With the development of satellite and remote sensing techniques, more and more image data from airborne and satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. This paper presents an overview of recent advances in multi-sensor satellite image fusion. First, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering target tracking, are then described, and both the advantages and limitations of those applications are discussed. Recommendations are offered, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; and (3) establishment of an automatic quality assessment scheme.
Camera sensor arrangement for crop/weed detection accuracy in agronomic images.
Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo
2013-04-02
In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning on the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications the uncontrolled illumination of outdoor environments is also an important factor affecting image accuracy. This paper is focused exclusively on two main issues, always with the goal of achieving the highest image accuracy in Precision Agriculture applications, and makes the following two main contributions: (a) a camera sensor arrangement to adjust the extrinsic parameters, and (b) the design of strategies for controlling adverse illumination effects.
Geographical Topics Learning of Geo-Tagged Social Images.
Zhang, Xiaoming; Ji, Shufan; Wang, Senzhang; Li, Zhoujun; Lv, Xueqiang
2016-03-01
With the availability of cheap location sensors, geotagging of images in online social media is very popular. With a large amount of geo-tagged social images, it is interesting to study how these images are shared across geographical regions and how geographical language characteristics and vision patterns are distributed across different regions. Unlike a textual document, a geo-tagged social image contains multiple types of content, i.e., textual description, visual content, and geographical information. Existing approaches usually mine geographical characteristics using a subset of these content types or by combining them linearly, which ignores correlations between the different types of content and their geographical distributions. Therefore, in this paper, we propose a novel method to discover the geographical characteristics of geo-tagged social images using a geographical topic model of social images (GTMSI). GTMSI integrates multiple types of social image content as well as the geographical distributions, and image topics are modeled based on both vocabulary and visual features. In GTMSI, each region has its own topic distribution, and hence its own language model and vision pattern. Experimental results show that GTMSI can identify interesting topics and vision patterns, as well as provide location prediction and image tagging.
Commercial CMOS image sensors as X-ray imagers and particle beam monitors
NASA Astrophysics Data System (ADS)
Castoldi, A.; Guazzoni, C.; Maffessanti, S.; Montemurro, G. V.; Carraresi, L.
2015-01-01
CMOS image sensors are widely used in several applications such as mobile handsets, webcams, and digital cameras, among others. Furthermore, they are available across a wide range of resolutions with excellent spectral and chromatic responses. In order to fulfill the need for cheap beam monitors and high-resolution image sensors for scientific applications, we explored the possibility of using commercial CMOS image sensors as X-ray and proton detectors. Two different sensors have been mounted and tested. An Aptina MT9V034, featuring 752 × 480 pixels with a 6 μm × 6 μm pixel size, has been mounted and successfully tested as a two-dimensional beam-profile monitor, able to take pictures of the incoming proton bunches at the DeFEL beamline (1-6 MeV pulsed proton beam) of the LaBeC of INFN in Florence. The naked sensor is able to successfully detect the interactions of single protons. The sensor point-spread function (PSF) has been qualified with 1 MeV protons and is equal to one pixel (6 μm) r.m.s. in both directions. A second sensor, an MT9M032, featuring 1472 × 1096 pixels with a 2.2 μm × 2.2 μm pixel size, has been mounted on a dedicated board as a high-resolution imager to be used in X-ray imaging experiments with table-top generators. In order to ease and simplify the data transfer and the image acquisition, the system is controlled by a dedicated microprocessor board (DM3730 1 GHz SoC ARM Cortex-A8) on which a modified Linux kernel has been implemented. The paper presents the architecture of the sensor systems and the results of the experimental measurements.
Apparatus and method for imaging metallic objects using an array of giant magnetoresistive sensors
Chaiken, Alison
2000-01-01
A portable, low-power metallic object detector and method for providing an image of a detected metallic object. In one embodiment, the portable low-power metallic object detector comprises an array of giant magnetoresistive (GMR) sensors. The array of GMR sensors is adapted for detecting the presence of, and compiling image data of, a metallic object. In this embodiment, the array of GMR sensors is arranged in a checkerboard configuration such that the axes of sensitivity of alternate GMR sensors are orthogonally oriented. An electronics portion is coupled to the array of GMR sensors and is adapted to receive and process the image data of the metallic object compiled by the array. The embodiment also includes a display unit, coupled to the electronics portion, which is adapted to display a graphical representation of the metallic object detected by the array of GMR sensors. In so doing, a graphical representation of the detected metallic object is provided.
Hunt, Andrew P; Bach, Aaron J E; Borg, David N; Costello, Joseph T; Stewart, Ian B
2017-01-01
An accurate measure of core body temperature is critical for monitoring individuals, groups and teams undertaking physical activity in situations of high heat stress or prolonged cold exposure. This study examined the range in systematic bias of ingestible temperature sensors compared to a certified and traceable reference thermometer. A total of 119 ingestible temperature sensors were immersed in a circulated water bath at five water temperatures (TEMP A: 35.12 ± 0.60°C, TEMP B: 37.33 ± 0.56°C, TEMP C: 39.48 ± 0.73°C, TEMP D: 41.58 ± 0.97°C, and TEMP E: 43.47 ± 1.07°C) along with a certified traceable reference thermometer. Thirteen sensors (10.9%) demonstrated a systematic bias > ±0.1°C, of which 4 (3.3%) were > ±0.5°C. Limits of agreement (95%) indicated that systematic bias would likely fall in the range of -0.14 to 0.26°C, highlighting that it is possible for temperatures measured between sensors to differ by more than 0.4°C. The proportion of sensors with systematic bias > ±0.1°C (10.9%) confirms that ingestible temperature sensors require correction to ensure their accuracy. An individualized linear correction achieved a mean systematic bias of 0.00°C and limits of agreement (95%) of 0.00-0.00°C, with 100% of sensors achieving ±0.1°C accuracy. Alternatively, a generalized linear function (Corrected Temperature (°C) = 1.00375 × Sensor Temperature (°C) - 0.205549), produced as the average slope and intercept of a sub-set of 51 sensors and excluding sensors with accuracy outside ±0.5°C, reduced the systematic bias to < ±0.1°C in 98.4% of the remaining sensors (n = 64). In conclusion, these data show that using an uncalibrated ingestible temperature sensor may provide inaccurate data that still appears to be statistically, physiologically, and clinically meaningful. Correction of sensor temperature to a reference thermometer by linear function eliminates this systematic bias (individualized functions) or ensures systematic bias is within ±0.1°C in 98% of the sensors (generalized function).
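The generalized correction quoted above can be applied directly; the short example below does exactly that, using made-up raw readings (only the slope and intercept come from the abstract).

```python
# Apply the generalized linear correction reported above to example raw readings.
def correct_temperature(sensor_temp_c: float) -> float:
    """Corrected Temperature (°C) = 1.00375 * Sensor Temperature (°C) - 0.205549."""
    return 1.00375 * sensor_temp_c - 0.205549

for raw in (35.00, 37.33, 39.48, 41.58, 43.47):     # illustrative raw sensor readings (°C)
    print(f"raw {raw:.2f} °C -> corrected {correct_temperature(raw):.2f} °C")
```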
Advances in HgCdTe APDs and LADAR Receivers
NASA Technical Reports Server (NTRS)
Bailey, Steven; McKeag, William; Wang, Jinxue; Jack, Michael; Amzajerdian, Farzin
2010-01-01
Raytheon is developing NIR sensor chip assemblies (SCAs) for scanning and staring 3D LADAR systems. High sensitivity is obtained by integrating high-performance detectors with gain (i.e., APDs) with very-low-noise readout integrated circuits. Unique aspects of these designs include independent (non-gated) acquisition of pulse returns and multiple pulse returns with both time and intensity reported, enabling full 3D reconstruction of the image. A recent breakthrough in device design has resulted in HgCdTe APDs operating at 300 K with essentially no excess noise at gains in excess of 100, low NEP (<1 nW), and GHz bandwidths, and these devices have demonstrated linear-mode photon counting. SCAs utilizing these high-performance APDs have been integrated and have demonstrated excellent spatial and range resolution, enabling detailed 3D imagery at both short and long ranges. In this presentation we review progress in high-resolution scanning, staring, and ultra-high-sensitivity photon-counting LADAR sensors.
Investigation of the detection of shallow tunnels using electromagnetic and seismic waves
NASA Astrophysics Data System (ADS)
Counts, Tegan; Larson, Gregg; Gürbüz, Ali Cafer; McClellan, James H.; Scott, Waymond R., Jr.
2007-04-01
Multimodal detection of subsurface targets such as tunnels, pipes, reinforcement bars, and structures has been investigated using both ground-penetrating radar (GPR) and seismic sensors with signal processing techniques to enhance localization capabilities. Both systems have been tested in bi-static configurations but the GPR has been expanded to a multi-static configuration for improved performance. The use of two compatible sensors that sense different phenomena (GPR detects changes in electrical properties while the seismic system measures mechanical properties) increases the overall system's effectiveness in a wider range of soils and conditions. Two experimental scenarios have been investigated in a laboratory model with nearly homogeneous sand. Images formed from the raw data have been enhanced using beamforming inversion techniques and Hough Transform techniques to specifically address the detection of linear targets. The processed data clearly indicate the locations of the buried targets of various sizes at a range of depths.
Advanced Fibre Bragg Grating and Microfibre Bragg Grating Fabrication Techniques
NASA Astrophysics Data System (ADS)
Chung, Kit Man
Fibre Bragg gratings (FBGs) have become a very important technology for communication systems and fibre optic sensing. Typically, FBGs are less than 10 mm long and are fabricated using fused silica uniform phase masks, which become more expensive for longer lengths or non-uniform pitches. Generally, interfering UV laser beams are employed to make long or complex FBGs, and this technique introduces critical precision and control issues. In this work, we demonstrate an advanced FBG fabrication system that enables the writing of long and complex gratings in optical fibres with virtually any apodisation profile, local phase and Bragg wavelength, using a novel optical design in which the incident angles of two UV beams onto an optical fibre can be adjusted simultaneously by moving just one optical component, instead of the two optics employed in earlier configurations, to vary the grating pitch. The key advantage of the grating fabrication system is that complex gratings can be fabricated by controlling the linear movements of two translation stages. In addition to the study of advanced grating fabrication techniques, we also focus on the inscription of FBGs in optical fibres with cladding diameters of several tens of microns. Fabrication of microfibres was investigated using a sophisticated tapering method. We also propose a simple but practical technique to filter out the higher-order modes reflected from an FBG written in a microfibre via a linear taper region, while the fundamental mode re-couples to the core. By using this technique, reflection from the microfibre Bragg grating (MFBG) can be made effectively single-mode, simplifying the demultiplexing and demodulation processes. The MFBG exhibits high sensitivity to contact force, and an MFBG-based force sensor was constructed and tested to investigate its suitability for use as an invasive surgery device. Performance of the contact force sensor packaged in a conforming elastomer material compares favourably to one of the best-performing commercial contact force sensors in catheterization applications. The proposed sensor features extremely high sensitivity (down to 1.37 mN), a miniature size (2.4 mm) that meets standard specifications, excellent linearity, low hysteresis, and magnetic resonance imaging compatibility.
Fors, Octavi; Núñez, Jorge; Otazu, Xavier; Prades, Albert; Cardinal, Robert D.
2010-01-01
In this paper we show how the techniques of image deconvolution can increase the ability of image sensors, for example CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor used, or to increasing the effective telescope aperture by more than 30%, without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and monitor dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor. PMID:22294896
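The specific deconvolution algorithm is not given in the abstract; as a generic illustration of the idea, the sketch below applies Richardson-Lucy deconvolution (a standard choice for photon-limited astronomical images) to a synthetic star field blurred by an assumed Gaussian PSF. The scene, PSF and noise levels are all made up.

```python
# Richardson-Lucy deconvolution of a synthetic star field (illustrative only).
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=30):
    est = np.full(image.shape, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        conv = fftconvolve(est, psf, mode="same") + 1e-12
        est *= fftconvolve(image / conv, psf_mirror, mode="same")
    return est

rng = np.random.default_rng(7)
scene = np.zeros((64, 64))
for _ in range(5):                                   # a few faint "stars"
    y, x = rng.integers(8, 56, 2)
    scene[y, x] = rng.uniform(50, 200)

yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()

blurred = fftconvolve(scene, psf, mode="same")
noisy = rng.poisson(np.clip(blurred, 0, None) + 5.0).astype(float)   # photon + sky noise
restored = richardson_lucy(np.clip(noisy - 5.0, 0, None), psf)       # subtract sky, deconvolve
print("peak before/after deconvolution:", noisy.max(), round(restored.max(), 1))
```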
Jiang, Hao; Zhao, Dehua; Cai, Ying; An, Shuqing
2012-01-01
In previous attempts to identify aquatic vegetation from remotely sensed images using classification trees (CT), the images to which the CT models were applied at different times or locations had to originate from the same satellite sensor as the images used to develop the models, greatly limiting the application of CT. We have developed an effective normalization method to improve the robustness of CT models when applied to images originating from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from the 2009 ground-truth data and images from the Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for the different sensor images differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (the 0.1% index scaling method) normalized the SI images using tailored percentages of extreme pixel values. Using the images normalized by 0.1% index scaling, CT models for a particular sensor in which the thresholds were replaced by those from models developed for images from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies. Our results suggest that 0.1% index scaling provides a feasible way to apply CT models directly to images from sensors or time periods that differ from those of the images used to develop the original models.
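The exact 0.1% index scaling procedure is not spelled out in the abstract; the sketch below shows one plausible reading, in which each spectral-index image is linearly rescaled so that its 0.1th and 99.9th percentiles map to 0 and 1 before thresholds are transferred between sensors. All data and parameter choices are illustrative assumptions.

```python
# One plausible percentile-based index normalization (illustrative, not the paper's code).
import numpy as np

def normalize_index(si_image: np.ndarray, low_pct=0.1, high_pct=99.9) -> np.ndarray:
    lo, hi = np.nanpercentile(si_image, [low_pct, high_pct])
    scaled = (si_image - lo) / (hi - lo + 1e-12)
    return np.clip(scaled, 0.0, 1.0)

# toy NDVI-like index from two hypothetical bands of some sensor
rng = np.random.default_rng(2)
nir = rng.uniform(0.2, 0.6, (100, 100))
red = rng.uniform(0.05, 0.3, (100, 100))
ndvi = (nir - red) / (nir + red)
ndvi_norm = normalize_index(ndvi)
print("raw range:", round(ndvi.min(), 2), round(ndvi.max(), 2),
      "| normalized range:", ndvi_norm.min(), ndvi_norm.max())
```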
A mobile ferromagnetic shape detection sensor using a Hall sensor array and magnetic imaging.
Misron, Norhisam; Shin, Ng Wei; Shafie, Suhaidi; Marhaban, Mohd Hamiruce; Mailah, Nashiren Farzilah
2011-01-01
This paper presents a mobile Hall sensor array system for the shape detection of ferromagnetic materials that are embedded in walls or floors. The operation of the mobile Hall sensor array system is based on the principle of magnetic flux leakage to describe the shape of the ferromagnetic material. Two permanent magnets are used to generate the magnetic flux flow. The distribution of magnetic flux is perturbed as the ferromagnetic material is brought near the permanent magnets, and the changes in magnetic flux distribution are detected by the 1-D Hall sensor array setup. Magnetic imaging of the magnetic flux distribution is performed by a signal processing unit before the real-time images are displayed on a netbook. Signal processing application software was developed for 1-D Hall sensor array signal acquisition and processing to construct a 2-D array matrix. The processed 1-D Hall sensor array signals are then used to construct the magnetic image of the ferromagnetic material based on the voltage signal and the magnetic flux distribution. The experimental results illustrate how the shapes of specimens such as squares, circles and triangles are determined through magnetic images based on the voltage signal and magnetic flux distribution of the specimen. In addition, magnetic images of actual ferromagnetic objects are also presented to prove the functionality of the mobile Hall sensor array system for actual shape detection. The results prove that the mobile Hall sensor array system is able to perform magnetic imaging to identify various ferromagnetic materials.
A needle-type sensor for monitoring glucose in whole blood.
Yang, Q; Atanasov, P; Wilkins, E
1997-01-01
A new surface-process technology employing electrochemical fixation of bioactive substances (an enzyme and heparin) to a sensor electrode was developed to provide biocompatibility and functionality. The fabrication process includes electro-entrapment of glucose oxidase and heparin on a platinum electrode using 1,3-phenylenediamine co-deposition. Electrochemically grown 1,3-phenylenediamine was also used as the outer coating of the sensor's enzyme electrode in order to extend the linear range. The sensor shows a sensitivity of 3 nA/mM and a linear range from 40 to 400 mg/dL at 37 degrees C when tested in whole blood, and is characterized by a fast response. The sensor shows minimal change in its performance when stored inactive in buffer for 12 weeks. When tested at physiologic glucose levels, the sensor demonstrates satisfactorily low interference from common interfering substances. This technology seems promising for the preparation of implantable intravascular biosensors.
Differential Measurement Periodontal Structures Mapping System
NASA Technical Reports Server (NTRS)
Companion, John A. (Inventor)
1998-01-01
This invention relates to a periodontal structure mapping system employing a dental handpiece containing first and second acoustic sensors for locating the Cemento-Enamel Junction (CEJ) and measuring the differential depth between the CEJ and the bottom of the periodontal pocket. Measurements are taken at multiple locations on each tooth of a patient, observed, analyzed by an optical analysis subsystem, and archived by a data storage system for subsequent study and comparison with previous and subsequent measurements. Ultrasonic transducers for the first and second acoustic sensors are contained within the handpiece and in connection with a control computer. Pressurized water is provided for the depth measurement sensor and a linearly movable probe sensor serves as the sensor for the CEJ finder. The linear movement of the CEJ sensor is obtained by a control computer actuated by the prober. In an alternate embodiment, the CEJ probe is an optical fiber sensor with appropriate analysis structure provided therefor.
Design and calibration of a six-axis MEMS sensor array for use in scoliosis correction surgery
NASA Astrophysics Data System (ADS)
Benfield, David; Yue, Shichao; Lou, Edmond; Moussa, Walied A.
2014-08-01
A six-axis sensor array has been developed to quantify the 3D force and moment loads applied in scoliosis correction surgery. The device was initially developed to be used during scoliosis correction surgery, augmented onto existing surgical instrumentation; however, use as a general load sensor is also feasible. The development has included the design, microfabrication, deployment and calibration of the sensor array. The sensor array consists of four membrane devices, each containing piezoresistive sensing elements, generating a total of 16 differential voltage outputs. The calibration procedure makes use of a custom-built load application frame, which allows quantified forces and moments to be applied and compared with the outputs from the sensor array. Linear or non-linear calibration equations are generated to convert the voltage outputs from the sensor array back into 3D force and moment information for display or analysis.
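As a simple illustration of the linear case of such a calibration (the actual device model and data are not given here), the sketch below estimates a 16-to-6 calibration matrix by least squares from synthetic known loads and simulated voltage outputs.

```python
# Least-squares linear calibration: 16 voltages -> 6 load components (synthetic data).
import numpy as np

rng = np.random.default_rng(4)
S = rng.normal(0, 0.05, (6, 16))              # toy sensitivity: volts = loads @ S + noise

K = 60                                        # number of calibration load cases
loads = rng.uniform(-10, 10, (K, 6))          # known applied Fx, Fy, Fz, Mx, My, Mz
volts = loads @ S + rng.normal(0, 1e-3, (K, 16))

# calibration matrix C (16 x 6) so that estimated_loads = volts @ C
C, *_ = np.linalg.lstsq(volts, loads, rcond=None)

test_load = np.array([2.0, -1.5, 4.0, 0.3, -0.2, 0.1])
test_volts = test_load @ S
print("recovered load:", np.round(test_volts @ C, 3))
```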
Joint sparsity based heterogeneous data-level fusion for target detection and estimation
NASA Astrophysics Data System (ADS)
Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe
2017-05-01
Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.
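The abstract does not specify the recovery algorithm; as a rough illustration of the joint-sparsity idea, the toy sketch below has two heterogeneous sensors observe targets on a shared discretized grid and uses a greedy group-style selection so that both modalities share a common support. The dictionaries, sizes, and greedy rule are all assumptions, not the JSDLF algorithm itself.

```python
# Toy joint-sparse recovery: two sensors, one shared support on a discretized grid.
import numpy as np

rng = np.random.default_rng(5)
n_grid, n_targets = 50, 3
support = rng.choice(n_grid, n_targets, replace=False)

A1 = rng.normal(0, 1, (20, n_grid))           # sensor 1 dictionary (e.g., RF)
A2 = rng.normal(0, 1, (15, n_grid))           # sensor 2 dictionary (e.g., video)
x1 = np.zeros(n_grid); x1[support] = rng.uniform(1, 2, n_targets)
x2 = np.zeros(n_grid); x2[support] = rng.uniform(1, 2, n_targets)
y1, y2 = A1 @ x1, A2 @ x2                     # noiseless sensor measurements

est_support, r1, r2 = [], y1.copy(), y2.copy()
for _ in range(n_targets):                    # greedy joint selection (group-OMP style)
    score = (A1.T @ r1) ** 2 + (A2.T @ r2) ** 2
    score[est_support] = -np.inf
    est_support.append(int(np.argmax(score)))
    # refit both sensors on the shared support and update residuals
    S1, S2 = A1[:, est_support], A2[:, est_support]
    r1 = y1 - S1 @ np.linalg.lstsq(S1, y1, rcond=None)[0]
    r2 = y2 - S2 @ np.linalg.lstsq(S2, y2, rcond=None)[0]

print("true support:", sorted(support.tolist()), "| estimated:", sorted(est_support))
```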
Wavelength interrogation of fiber Bragg grating sensors based on crossed optical Gaussian filters.
Cheng, Rui; Xia, Li; Zhou, Jiaao; Liu, Deming
2015-04-15
Conventional intensity-modulated measurements need to be operated within the linear range of a filter or interferometric response to ensure linear detection. Here, we present a wavelength interrogation system for fiber Bragg grating sensors in which the linear transition is achieved with crossed Gaussian transmissions. This unique filtering characteristic makes the responses of the two branch detections follow Gaussian functions with the same parameters except for a delay. The subtraction of these two delayed Gaussian responses (in dB) ultimately leads to a linear behavior, which is exploited for determining the sensor wavelength. Besides its flexibility and inherent power insensitivity, the proposed approach also shows the potential for a much wider operational range. Interrogation of a strain-tuned grating was accomplished, with a wide sensitivity tuning range from 2.56 to 8.7 dB/nm achieved.
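A quick numerical check of the key property claimed above: for two Gaussian transmissions of equal width whose centers are offset, the difference of their responses in dB is exactly linear in wavelength. The filter parameters below are made up for illustration.

```python
# Difference of two offset Gaussian transmissions in dB is linear in wavelength.
import numpy as np

sigma = 0.5                                   # common Gaussian width (nm), assumed
lam1, lam2 = 1549.8, 1550.2                   # crossed filter centers (nm), assumed
lam = np.linspace(1549.0, 1551.0, 201)        # candidate Bragg wavelengths (nm)

t1 = np.exp(-(lam - lam1) ** 2 / (2 * sigma ** 2))
t2 = np.exp(-(lam - lam2) ** 2 / (2 * sigma ** 2))
diff_db = 10 * np.log10(t1) - 10 * np.log10(t2)

# analytic slope (dB/nm) of the linear response: (10/ln10) * (lam1 - lam2) / sigma^2
slope = 10 / np.log(10) * (lam1 - lam2) / sigma ** 2
fit = np.polyfit(lam, diff_db, 1)
print("numerical slope:", round(fit[0], 4), "dB/nm | analytic slope:", round(slope, 4))
```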
Depth map generation using a single image sensor with phase masks.
Jang, Jinbeum; Park, Sangwoo; Jo, Jieun; Paik, Joonki
2016-06-13
Conventional stereo matching systems generate a depth map using two or more digital imaging sensors, which makes them difficult to use in small camera systems because of their high cost and bulky size. In order to solve this problem, this paper presents a stereo matching system using a single image sensor with phase masks for phase-difference auto-focusing. A novel phase mask array pattern is proposed to simultaneously acquire two pairs of stereo images. Furthermore, a noise-invariant depth map is generated from the raw-format sensor output. The proposed method consists of four steps to compute the depth map: (i) acquisition of stereo images using the proposed mask array, (ii) variational segmentation using merging criteria to simplify the input image, (iii) disparity map generation using hierarchical block matching for disparity measurement, and (iv) image matting to fill holes and generate the dense depth map. The proposed system can be used in small digital cameras without additional lenses or sensors.
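As a bare-bones illustration of the block-matching idea behind step (iii) (not the paper's hierarchical matcher, and without the phase-mask optics), the sketch below estimates per-block disparity with a sum-of-absolute-differences search on a synthetic stereo pair.

```python
# Simple SAD block matching on a synthetic stereo pair (illustrative only).
import numpy as np

def block_matching_disparity(left, right, block=8, max_disp=16):
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            patch = left[y:y + block, x:x + block].astype(float)
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x) + 1):      # shift right image leftwards
                cand = right[y:y + block, x - d:x - d + block].astype(float)
                sad = np.abs(patch - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[by, bx] = best_d
    return disp

rng = np.random.default_rng(6)
right = rng.integers(0, 255, (64, 64))
left = np.roll(right, 4, axis=1)              # synthetic pair with uniform 4-pixel disparity
print("median estimated disparity:", int(np.median(block_matching_disparity(left, right))))
```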
CCD developments for particle colliders
NASA Astrophysics Data System (ADS)
Stefanov, Konstantin D.
2006-09-01
Charge Coupled Devices (CCDs) have been successfully used in several high-energy physics experiments over the last 20 years. Their small pixel size and excellent precision provide a superb tool for studying short-lived particles and understanding nature at a fundamental level. Over the last years the Linear Collider Flavour Identification (LCFI) collaboration has developed Column-Parallel CCDs (CPCCD) and CMOS readout chips to be used for the vertex detector at the International Linear Collider (ILC). The CPCCDs are very fast devices capable of satisfying the challenging requirements imposed by the beam structure of the superconducting accelerator. A first set of prototype devices has been designed, manufactured and successfully tested, with second-generation chips on the way. Another idea for a CCD-based device, the In-situ Storage Image Sensor (ISIS), is also under development and the first prototype is in production.
CCD-based vertex detector for ILC
NASA Astrophysics Data System (ADS)
Stefanov, Konstantin D.
2006-12-01
Charge Coupled Devices (CCDs) have been successfully used in several high-energy physics experiments over the last 20 years. Their small pixel size and excellent precision provide a superb tool for studying short-lived particles and understanding nature at a fundamental level. Over the last few years the Linear Collider Flavour Identification (LCFI) collaboration has developed Column-Parallel CCDs (CPCCD) and CMOS readout chips to be used for the vertex detector at the International Linear Collider (ILC). The CPCCDs are very fast devices capable of satisfying the challenging requirements imposed by the beam structure of the superconducting accelerator. The first set of prototype devices has been successfully designed, manufactured and tested, with second-generation chips on the way. Another idea for a CCD-based device, the In-situ Storage Image Sensor (ISIS), is also under development and the first prototype has been manufactured.
NASA Astrophysics Data System (ADS)
Bird, Alan; Anderson, Scott A.; Linne von Berg, Dale; Davidson, Morgan; Holt, Niel; Kruer, Melvin; Wilson, Michael L.
2010-04-01
EyePod is a compact survey and inspection day/night imaging sensor suite for small unmanned aircraft systems (UAS). EyePod generates georeferenced image products in real-time from visible near infrared (VNIR) and long wave infrared (LWIR) imaging sensors and was developed under the ONR funded FEATHAR (Fusion, Exploitation, Algorithms, and Targeting for High-Altitude Reconnaissance) program. FEATHAR is being directed and executed by the Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL) and FEATHAR's goal is to develop and test new tactical sensor systems specifically designed for small manned and unmanned platforms (payload weight < 50 lbs). The EyePod suite consists of two VNIR/LWIR (day/night) gimbaled sensors that, combined, provide broad area survey and focused inspection capabilities. Each EyePod sensor pairs an HD visible EO sensor with a LWIR bolometric imager providing precision geo-referenced and fully digital EO/IR NITFS output imagery. The LWIR sensor is mounted to a patent-pending jitter-reduction stage to correct for the high-frequency motion typically found on small aircraft and unmanned systems. Details will be presented on both the wide-area and inspection EyePod sensor systems, their modes of operation, and results from recent flight demonstrations.
PET and PVC separation with hyperspectral imagery.
Moroni, Monica; Mei, Alessandro; Leonardi, Alessandra; Lupo, Emanuela; Marca, Floriana La
2015-01-20
Traditional plants for separating plastics into homogeneous products rely on physical material properties (for instance, density). Because the properties of different polymers vary over only small intervals, the output quality may not be adequate. Sensing technologies based on hyperspectral imaging have been introduced in order to classify materials and to increase the quality of recycled products, which have to comply with specific standards determined by industrial applications. This paper presents the results of the characterization of two different plastic polymers, polyethylene terephthalate (PET) and polyvinyl chloride (PVC), in different phases of their life cycle (primary raw materials, urban and urban-assimilated waste, and secondary raw materials) to show the contribution of hyperspectral sensors to the field of material recycling. This is accomplished via near-infrared (900-1700 nm) reflectance spectra extracted from hyperspectral images acquired with a two-linear-spectrometer apparatus. The results show that a rapid and reliable identification of PET and PVC can be achieved by using a simple operator based on two near-infrared wavelengths coupled with an analysis of the reflectance spectra, which resulted in 100% classification accuracy. A sensor based on this identification method appears suitable and inexpensive to build, and provides the speed and performance required by the recycling industry.
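The abstract does not give the specific wavelengths or decision rule of the two-wavelength operator; the sketch below is a purely hypothetical illustration of how such a rule might look, thresholding the ratio of reflectances at two assumed NIR bands.

```python
# Hypothetical two-band NIR ratio classifier for PET vs PVC (bands and rule assumed).
BAND_A, BAND_B = 1660, 1716                    # assumed NIR wavelengths (nm)

def classify_polymer(reflectance_a: float, reflectance_b: float, threshold=1.0) -> str:
    """Label a flake/pixel from the reflectance ratio at the two bands (toy rule)."""
    return "PET" if reflectance_a / reflectance_b > threshold else "PVC"

# made-up reflectance pairs for a few flakes
samples = [(0.62, 0.48), (0.41, 0.55), (0.70, 0.52)]
for ra, rb in samples:
    print(f"R({BAND_A})={ra:.2f}, R({BAND_B})={rb:.2f} -> {classify_polymer(ra, rb)}")
```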
NASA Technical Reports Server (NTRS)
Roberts, J. Brent
2010-01-01
Detailed studies of the energy and water cycles require accurate estimation of the turbulent fluxes of moisture and heat across the atmosphere-ocean interface at regional to basin scales. Providing estimates of these latent and sensible heat fluxes over the global ocean necessitates the use of satellite- or reanalysis-based estimates of near-surface variables. Recent studies have shown that errors in the surface (10 meter) estimates of humidity and temperature are currently the largest sources of uncertainty in the production of turbulent fluxes from satellite observations. Therefore, emphasis has been placed on reducing the systematic errors in the retrieval of these parameters from microwave radiometers. This study discusses recent improvements in the retrieval of air temperature and humidity through the choice of algorithms (linear vs. nonlinear) and the choice of microwave sensors. Particular focus is placed on improvements using a neural network approach with a single sensor (the Special Sensor Microwave/Imager) and on the use of combined sensors from the NASA Aqua satellite platform. The latter algorithm utilizes the unique sampling available on Aqua from the Advanced Microwave Scanning Radiometer (AMSR-E) and the Advanced Microwave Sounding Unit (AMSU-A). Current estimates of uncertainty in near-surface humidity and temperature from the single- and multi-sensor approaches are discussed and used to estimate errors in the turbulent fluxes.
Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji
2016-02-22
In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
Low-Cost Linear Optical Sensors.
ERIC Educational Resources Information Center
Kinsey, Kenneth F.; Meisel, David D.
1994-01-01
Discusses the properties and applications of three light-to-voltage optical sensors. The sensors have been used to record diffraction patterns, to verify the inverse-square law, and as a fringe counter with an interferometer. (MVL)
CMOS Imaging of Pin-Printed Xerogel-Based Luminescent Sensor Microarrays.
Yao, Lei; Yung, Ka Yi; Khan, Rifat; Chodavarapu, Vamsy P; Bright, Frank V
2010-12-01
We present the design and implementation of a luminescence-based miniaturized multisensor system using pin-printed xerogel materials, which act as host media for chemical recognition elements. We developed a CMOS imager integrated circuit (IC) to image the luminescence response of the xerogel-based sensor array. The imager IC uses a 26 × 20 (520-element) array of active pixel sensors; each active pixel includes a high-gain phototransistor that converts the detected optical signal into an electrical current. The imager also includes a correlated double sampling circuit and a pixel address/digital control circuit; the image data are read out as a coded serial signal. The sensor system uses a light-emitting diode (LED) to excite the target-analyte-responsive luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 4 × 4 (16-element) array of oxygen (O2) sensors. Each group of 4 sensor elements in the array (arranged in a row) is designed to provide a different, specific sensitivity to the gaseous O2 concentration. This multiplicity of sensitivities is achieved by using a strategic mix of two oxygen-sensitive luminophores, [Ru(dpp)3]2+ and [Ru(bpy)3]2+, in each pin-printed xerogel sensor element. The CMOS imager consumes an average power of 8 mW when operating at a 1 kHz sampling frequency from a 5 V supply. The prototype demonstrates a low-cost, miniaturized luminescence multisensor system.
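Since the imager relies on correlated double sampling, a minimal sketch of CDS as a per-pixel subtraction of the reset level from the integrated signal level is given below. Only the 26 × 20 array size is taken from the abstract; the offset and noise model is an illustrative assumption, and the sketch does not describe the paper's circuit implementation.

```python
import numpy as np

# Minimal sketch of correlated double sampling (CDS): subtract each pixel's
# reset level from its integrated signal level so fixed offsets largely cancel.
rng = np.random.default_rng(2)
ROWS, COLS = 20, 26                                   # array size from the abstract

offset = rng.normal(0.5, 0.05, (ROWS, COLS))          # assumed per-pixel fixed offsets
luminescence = rng.random((ROWS, COLS))               # true optical signal (a.u.)

reset_frame = offset + rng.normal(0.0, 0.01, (ROWS, COLS))
signal_frame = offset + luminescence + rng.normal(0.0, 0.01, (ROWS, COLS))

cds_frame = signal_frame - reset_frame                # offsets cancel, signal remains
print(float(np.mean(np.abs(cds_frame - luminescence))))
```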
Fingerprint enhancement using a multispectral sensor
NASA Astrophysics Data System (ADS)
Rowe, Robert K.; Nixon, Kristin A.
2005-03-01
The level of performance of a biometric fingerprint sensor is critically dependent on the quality of the fingerprint images. One of the most common types of optical fingerprint sensors relies on the phenomenon of total internal reflectance (TIR) to generate an image. Under ideal conditions, a TIR fingerprint sensor can produce high-contrast fingerprint images with excellent feature definition. However, images produced by the same sensor under conditions that include dry skin, dirt on the skin, and marginal contact between the finger and the sensor are likely to be severely degraded. This paper discusses the use of multispectral sensing as a means to collect additional images with new information about the fingerprint that can significantly augment system performance under both normal and adverse sample conditions. In the context of this paper, "multispectral sensing" is used broadly to denote a collection of images taken under different illumination conditions: different polarizations, different illumination/detection configurations, and different illumination wavelengths. Results from three small studies using an early-stage prototype of the multispectral TIR (MTIR) sensor are presented along with results from the corresponding TIR data. The first experiment collected data from 9 people, 4 fingers per person, and 3 measurements per finger under "normal" conditions. The second experiment tested the relative performance of TIR and MTIR images acquired under extremely dry and dirty conditions. The third experiment examined the case where the area of contact between the finger and the sensor is greatly reduced.
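To make the idea of combining images acquired under different illumination conditions concrete, the sketch below fuses a registered stack of fingerprint images using a local-contrast weighting. The paper does not describe its enhancement or fusion algorithm, so the weighting rule, window size, and array shapes are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Minimal sketch of fusing registered fingerprint images taken under several
# illumination conditions. Planes with higher local contrast get higher weight;
# this rule is an assumption, not the MTIR sensor's actual processing.
def fuse_multispectral(stack):
    """stack: (n_conditions, H, W) float array of registered images."""
    weights = []
    for img in stack:
        local_mean = uniform_filter(img, size=9)
        local_var = uniform_filter(img ** 2, size=9) - local_mean ** 2
        weights.append(local_var + 1e-6)          # favor locally high-contrast planes
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * stack).sum(axis=0)

# Smoke test on random data standing in for four illumination conditions.
fused = fuse_multispectral(np.random.rand(4, 128, 128))
print(fused.shape)
```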
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-06
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-846] Certain CMOS Image Sensors and..., the sale for importation, and the sale within the United States after importation of certain CMOS image sensors and products containing same by reason of infringement of certain claims of U.S. Patent No...
NASA Astrophysics Data System (ADS)
Masuzawa, Tomoaki; Neo, Yoichiro; Mimura, Hidenori; Okamoto, Tamotsu; Nagao, Masayoshi; Akiyoshi, Masafumi; Sato, Nobuhiro; Takagi, Ikuji; Tsuji, Hiroshi; Gotoh, Yasuhito
2016-10-01
Demand for incident detection has grown since the Great East Japan Earthquake and the subsequent accidents at the Fukushima nuclear power plant in 2011. Radiation-tolerant image sensors are powerful tools for collecting crucial information in the initial stages of such incidents. However, semiconductor-based image sensors such as CMOS and CCD devices have limited tolerance to radiation exposure, while the image sensors currently used in nuclear facilities are conventional vacuum tubes with thermal cathodes, which are large and consume considerable power. In this study, we propose a compact image sensor composed of a CdTe-based photodiode and a matrix-driven, Spindt-type electron beam source called a field emitter array (FEA). The basic principle of the FEA-based image sensor is similar to that of conventional Vidicon-type camera tubes, but the thermal-cathode electron source is replaced by an FEA. The use of a field emitter as the electron source should enable a significant size reduction while maintaining high radiation tolerance. Current research on radiation-tolerant FEAs and the development of CdTe-based photoconductive films is presented.