Sample records for ccd image sensor

  1. A design of driving circuit for star sensor imaging camera

    NASA Astrophysics Data System (ADS)

    Li, Da-wei; Yang, Xiao-xu; Han, Jun-feng; Liu, Zhao-hui

    2016-01-01

    The star sensor is a high-precision attitude-sensitive measuring instrument, which determines spacecraft attitude by detecting stars at different positions on the celestial sphere. The imaging camera is an important part of the star sensor. The purpose of this study is to design a driving circuit based on a Kodak CCD sensor. The design of the driving circuit based on the Kodak KAI-04022 is discussed, and the timing of this CCD sensor is analyzed. Laboratory testing of the driving circuit and imaging experiments show that the driving circuit meets the requirements of the Kodak CCD sensor.

  2. High-resolution CCD imaging alternatives

    NASA Astrophysics Data System (ADS)

    Brown, D. L.; Acker, D. E.

    1992-08-01

    High resolution CCD color cameras have recently stimulated the interest of a large number of potential end-users for a wide range of practical applications. Real-time High Definition Television (HDTV) systems are now being used or considered for use in applications ranging from entertainment program origination through digital image storage to medical and scientific research. HDTV generation of electronic images offers significant cost and time-saving advantages over the use of film in such applications. Further, in still-image systems, electronic image capture is faster and more efficient than conventional image scanners. The CCD still camera can capture 3-dimensional objects into the computing environment directly, without having to shoot a picture on film, develop it, and then scan the image into a computer. Most standard production CCD sensor chips are made for broadcast-compatible systems. One popular CCD, and the basis for this discussion, offers arrays of roughly 750 x 580 picture elements (pixels), or a total array of approximately 435,000 pixels (see Fig. 1). FOR-A has developed a technique to increase the number of available pixels for a given image compared to that produced by the standard CCD itself. Using an interline CCD with an overall spatial structure several times larger than the photosensitive sensor areas, each of the CCD sensors is shifted in two dimensions in order to fill in the spatial gaps between adjacent sensors.
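    As a hedged illustration of the pixel-shift idea described above, four exposures taken at offset sensor positions can be interleaved into a denser sampling grid. The article only states that the sensor is shifted in two dimensions; the 2 x 2 half-pixel shift pattern and the function below are assumptions for illustration.

    ```python
    import numpy as np

    def interleave_pixel_shift(frames):
        """Combine four exposures taken at assumed half-pixel offsets
        (0, 0), (0, +), (+, 0), (+, +) into a grid with twice the sampling
        density in each axis (illustrative sketch only)."""
        f00, f01, f10, f11 = frames
        h, w = f00.shape
        out = np.zeros((2 * h, 2 * w), dtype=f00.dtype)
        out[0::2, 0::2] = f00   # reference position
        out[0::2, 1::2] = f01   # shifted horizontally
        out[1::2, 0::2] = f10   # shifted vertically
        out[1::2, 1::2] = f11   # shifted in both directions
        return out
    ```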

  3. Study the performance of star sensor influenced by space radiation damage of image sensor

    NASA Astrophysics Data System (ADS)

    Feng, Jie; Li, Yudong; Wen, Lin; Guo, Qi; Zhang, Xingyao

    2018-03-01

    The star sensor is an essential component of a spacecraft attitude control system. Space radiation can cause star sensor performance degradation, abnormal operation, and reduced attitude measurement accuracy and reliability. Many studies have been dedicated to radiation effects on Charge-Coupled Device (CCD) image sensors, but fewer studies focus on the radiation effects on the star sensor itself. The innovation of this paper is to study radiation effects from the device level up to the system level. The influence of the degradation of the radiation-sensitive parameters of the CCD image sensor on the performance parameters of the star sensor is studied in this paper. The correlation among the proton radiation effect, the non-uniformity noise of the CCD image sensor and the performance parameters of the star sensor is analyzed. This paper establishes a foundation for the study of error prediction and correction techniques for on-orbit star sensor attitude measurement, and provides a theoretical basis for the design of high-performance star sensors.

  4. CMOS Imaging Sensor Technology for Aerial Mapping Cameras

    NASA Astrophysics Data System (ADS)

    Neumann, Klaus; Welzenbach, Martin; Timm, Martin

    2016-06-01

    In June 2015 Leica Geosystems launched the first large format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the first-generation DMC was developed by Z/I Imaging; it was the first large format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II using the same system design, with one large monolithic PAN sensor and four multispectral camera heads for R, G, B and NIR. For the first time, a 391-megapixel CMOS sensor has been used as the panchromatic sensor, which is an industry record. CMOS technology brings a range of technical benefits: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.

  5. Characterization of Electrically Active Defects in Si Using CCD Image Sensors

    DTIC Science & Technology

    1978-02-01

    [OCR fragment of the report's list of figures and text: "Dislocation Segments in CCD Imager", "422 Reflection Topograph of Dislocation Loop", "422 Reflection Topograph of Scratch on CCD Imager", "Dark Current Display of a CCD Imager with 32 ms Integration Time"; a garbled passage notes that topographs were made of each slice using the developer described in Appendix D and that the imagers were then thinned using the procedure in an appendix.]

  6. A 100 Mfps image sensor for biological applications

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Shimonomura, Kazuhiro; Nguyen, Anh Quang; Takehara, Kosei; Kamakura, Yoshinari; Goetschalckx, Paul; Haspeslagh, Luc; De Moor, Piet; Dao, Vu Truong Son; Nguyen, Hoang Dung; Hayashi, Naoki; Mitsui, Yo; Inumaru, Hideo

    2018-02-01

    Two ultrahigh-speed CCD image sensors with different characteristics were fabricated for applications to advanced scientific measurement apparatuses. The sensors are BSI MCG (Backside-illuminated Multi-Collection-Gate) image sensors with multiple collection gates around the center of the front side of each pixel, placed like petals of a flower. One has five collection gates and one drain gate at the center, which can capture five consecutive frames at 100 Mfps with a pixel count of about 600 kpixels (512 x 576 x 2 pixels). In-pixel signal accumulation is possible for repetitive image capture of reproducible events. The target application is FLIM. The other is equipped with four collection gates, each connected to an in-situ CCD memory with 305 elements, which enables capture of 1,220 (4 x 305) consecutive images at 50 Mfps. The CCD memory is folded and looped with the first element connected to the last element, which also makes in-pixel signal accumulation possible. The sensor is a small test sensor with 32 x 32 pixels. The target applications are imaging TOF MS, pulse neutron tomography and dynamic PSP. The paper also briefly explains an expression for the temporal resolution of silicon image sensors theoretically derived by the authors in 2017. It is shown that an image sensor designed based on the theoretical analysis achieves imaging of consecutive frames at a frame interval of 50 ps.

  7. CCD imaging sensors

    NASA Technical Reports Server (NTRS)

    Janesick, James R. (Inventor); Elliott, Stythe T. (Inventor)

    1989-01-01

    A method for promoting quantum efficiency (QE) of a CCD imaging sensor for UV, far UV and low energy x-ray wavelengths by overthinning the back side beyond the interface between the substrate and the photosensitive semiconductor material, and flooding the back side with UV prior to using the sensor for imaging. This UV flooding promotes an accumulation layer of positive states in the oxide film over the thinned sensor to greatly increase QE for either frontside or backside illumination. A permanent or semipermanent image (analog information) may be stored in a frontside SiO2 layer over the photosensitive semiconductor material using implanted ions for a permanent storage and intense photon radiation for a semipermanent storage. To read out this stored information, the gate potential of the CCD is biased more negative than that used for normal imaging, and excess charge current thus produced through the oxide is integrated in the pixel wells for subsequent readout by charge transfer from well to well in the usual manner.

  8. CCD image sensor induced error in PIV applications

    NASA Astrophysics Data System (ADS)

    Legrand, M.; Nogueira, J.; Vargas, A. A.; Ventas, R.; Rodríguez-Hidalgo, M. C.

    2014-06-01

    The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (~0.1 pixels), which is the order of magnitude that other typical PIV errors, such as peak-locking, may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a model for the magnitude of the CCD readout bias error. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, which can then be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.

  9. Improving the Ability of Image Sensors to Detect Faint Stars and Moving Objects Using Image Deconvolution Techniques

    PubMed Central

    Fors, Octavi; Núñez, Jorge; Otazu, Xavier; Prades, Albert; Cardinal, Robert D.

    2010-01-01

    In this paper we show how the techniques of image deconvolution can increase the ability of image sensors such as CCD imagers to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor used, or to increasing the effective telescope aperture by more than 30%, without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and control dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor. PMID:22294896
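    For readers who want to experiment with the idea, the sketch below applies Richardson-Lucy deconvolution, one standard image deconvolution technique, to a frame with a known point-spread function. It is a generic illustration, not necessarily the specific algorithm used by the authors; the image and PSF arrays are assumed inputs.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, iterations=30):
        """Generic Richardson-Lucy deconvolution of `image` blurred by `psf`.
        Both inputs are 2-D arrays; the PSF should be normalized to sum to 1."""
        estimate = np.full(image.shape, image.mean(), dtype=float)
        psf_mirror = psf[::-1, ::-1]
        for _ in range(iterations):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / np.maximum(blurred, 1e-12)   # avoid division by zero
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate
    ```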

  10. Improving the ability of image sensors to detect faint stars and moving objects using image deconvolution techniques.

    PubMed

    Fors, Octavi; Núñez, Jorge; Otazu, Xavier; Prades, Albert; Cardinal, Robert D

    2010-01-01

    In this paper we show how the techniques of image deconvolution can increase the ability of image sensors such as CCD imagers to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor used, or to increasing the effective telescope aperture by more than 30%, without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and control dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor.

  11. Typical effects of laser dazzling CCD camera

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Zhang, Jianmin; Shao, Bibo; Cheng, Deyan; Ye, Xisheng; Feng, Guobin

    2015-05-01

    In this article, an overview of laser dazzling effects on buried-channel CCD cameras is given. CCDs are sorted into staring and scanning types: the former includes frame-transfer and interline-transfer types, the latter linear and time-delay-integration types. All CCDs must perform four primary tasks in generating an image, namely charge generation, charge collection, charge transfer and charge measurement. In a camera, lenses are needed to deliver the optical signal to the CCD sensor, with techniques for suppressing stray light, and electronic circuits are needed to process the CCD output signal, using many electronic techniques. The dazzling effects are the joint result of light distribution distortion and charge distribution distortion, which derive from the lens and the sensor, respectively. Strictly speaking, the lens does not distort the light distribution; in general, lenses are so well designed and fabricated that their stray light can be neglected, but a laser is intense enough to make its stray light obvious. In the CCD image sensor, the laser can induce very large electron generation, and charge transfer inefficiency and charge blooming then distort the charge distribution. Commonly, the largest signal output from the CCD sensor is restricted by the capacity of the CCD collection well and cannot exceed the dynamic range within which the subsequent electronic circuits maintain normal operation, so the signal is not distorted in the post-processing circuits. However, some techniques in these circuits can make the dazzling effects appear as different phenomena in the final image.

  12. Timing generator of scientific grade CCD camera and its implementation based on FPGA technology

    NASA Astrophysics Data System (ADS)

    Si, Guoliang; Li, Yunfei; Guo, Yongfei

    2010-10-01

    The functions of the timing generator of a scientific-grade CCD camera are briefly presented: it generates the various pulse sequences for the TDI-CCD, the video processor and the imaging data output, acting as the timing coordinator of the CCD imaging unit. The IL-E2 TDI-CCD sensor produced by DALSA Co. Ltd. is used in the scientific-grade CCD camera. The driving schedules of the IL-E2 TDI-CCD sensor have been examined in detail, and the timing generator has been designed for the scientific-grade CCD camera. An FPGA is chosen as the hardware design platform, and the schedule generator is described in VHDL. The designed generator has successfully passed functional simulation with EDA software and has been fitted into an XC2VP20-FF1152 (an FPGA product made by XILINX). The experiments indicate that the new method improves the integration level of the system; high reliability, stability and low power consumption of the scientific-grade CCD camera system are achieved, and the design and experiment period is sharply shortened.

  13. The fast and accurate 3D-face scanning technology based on laser triangle sensors

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Chen, Yang; Kong, Bin

    2013-08-01

    A laser triangulation scanning method and the structure of a 3D face measurement system are introduced. In the presented system, a line laser source is selected as the optical probe so that one line is scanned at a time, and a CCD image sensor is used to capture the image of the laser line modulated by the human face. The system parameters were obtained by calibration: the lens parameters of the imaging part were calibrated with a machine-vision imaging method, and the triangulation structure parameters were calibrated with finely arranged parallel wires. The CCD imaging part and the line laser indicator are mounted on a linear motor carriage, which scans the laser line from the top of the head to the neck. Because the nose protrudes and the eyes are recessed, one CCD image sensor cannot obtain a complete image of the laser line, so in this system two CCD image sensors are set symmetrically on the two sides of the laser indicator; in effect, this structure comprises two laser triangulation measuring units. Another novel feature is that three laser indicators are arranged in order to reduce the scanning time, since it is difficult for a person to remain still for a long time. The 3D data are calculated after scanning, and further data processing includes 3D coordinate refinement, mesh calculation and surface rendering. Experiments show that this system has a simple structure, high scanning speed and good accuracy. The scanning range covers the whole head of an adult, and the typical resolution is 0.5 mm.
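    A minimal sketch of the range equation behind such a laser-triangulation scanner, assuming the simplest parallel-axis geometry; the real system instead uses the calibrated lens and triangulation-structure parameters described above, so the function and its parameters are illustrative assumptions.

    ```python
    def triangulation_depth(spot_offset_px, pixel_size_mm, focal_length_mm, baseline_mm):
        """Depth of a laser spot from its lateral offset on the CCD, using the
        idealized relation depth = baseline * focal_length / lateral_offset."""
        offset_mm = spot_offset_px * pixel_size_mm
        return baseline_mm * focal_length_mm / offset_mm
    ```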

  14. Active pixel sensors: the sensor of choice for future space applications?

    NASA Astrophysics Data System (ADS)

    Leijtens, Johan; Theuwissen, Albert; Rao, Padmakumar R.; Wang, Xinyang; Xie, Ning

    2007-10-01

    It is generally known that active pixel sensors (APS) have a number of advantages over CCD detectors when it comes to cost for mass production, power consumption and ease of integration. Nevertheless, most space applications still use CCD detectors because they tend to give better performance and have a successful heritage. In this respect a change may be at hand with the advent of deep sub-micron processed APS imagers (< 0.25-micron feature size). Measurements performed on test structures at the University of Delft have shown that the imagers are very radiation tolerant even if made in a standard process without the use of special design rules. Furthermore, it was shown that the 1/f noise associated with deep sub-micron imagers is reduced compared to previous generations of APS imagers, due to the improved quality of the gate oxides. Considering that end-of-life performance will have to be guaranteed, that only a limited budget for adding shielding metal will be available for most applications, and that low-power operation is always seen as a positive characteristic in space applications, deep sub-micron APS imagers seem to have a number of advantages over CCDs that will probably cause them to replace CCDs in those applications where radiation tolerance and low-power operation are important.

  15. Development of CCD imaging sensors for space applications, phase 1

    NASA Technical Reports Server (NTRS)

    Antcliffe, G. A.

    1975-01-01

    The results of an experimental investigation to develop a large area charge coupled device (CCD) imager for space photography applications are described. Details of the design and processing required to achieve 400 X 400 imagers are presented together with a discussion of the optical characterization techniques developed for this program. A discussion of several aspects of large CCD performance is given with detailed test reports. The areas covered include dark current, uniformity of optical response, square wave amplitude response, spectral responsivity and dynamic range.

  16. Comparison of a CCD and an APS for soft X-ray diffraction

    NASA Astrophysics Data System (ADS)

    Stewart, Graeme; Bates, R.; Blue, A.; Clark, A.; Dhesi, S. S.; Maneuski, D.; Marchal, J.; Steadman, P.; Tartoni, N.; Turchetta, R.

    2011-12-01

    We compare a new CMOS Active Pixel Sensor (APS) to a Princeton Instruments PIXIS-XO: 2048B Charge Coupled Device (CCD) with soft X-rays tested in a synchrotron beam line at the Diamond Light Source (DLS). Despite CCDs being established in the field of scientific imaging, APS are an innovative technology that offers advantages over CCDs. These include faster readout, higher operational temperature, in-pixel electronics for advanced image processing and reduced manufacturing cost. The APS employed was the Vanilla sensor designed by the MI3 collaboration and funded by an RCUK Basic Technology grant. This sensor has 520 x 520 square pixels, of size 25 μm on each side. The sensor can operate at a full frame readout of up to 20 Hz. The sensor had been back-thinned to the epitaxial layer. This was the first time that a back-thinned APS had been demonstrated at a beam line at DLS. In the synchrotron experiment, soft X-rays with an energy of approximately 708 eV were used to produce a diffraction pattern from a permalloy sample. The pattern was imaged at a range of integration times with both sensors. The CCD had to be operated at a temperature of -55°C whereas the Vanilla was operated over a temperature range from 20°C to -10°C. We show that the APS detector can operate with frame rates up to two hundred times faster than the CCD, without excessive degradation of image quality. The signal to noise of the APS is shown to be the same as that of the CCD at identical integration times, and the response is shown to be linear, with no charge blooming effects. The experiment has allowed a direct comparison of back-thinned APS and CCDs in a real soft X-ray synchrotron experiment.

  17. Periodicity analysis on cat-eye reflected beam profiles of optical detectors

    NASA Astrophysics Data System (ADS)

    Gong, Mali; He, Sifeng

    2017-05-01

    The cat-eye effect reflected beam profiles of most optical detectors have a certain periodicity, which is caused by the array arrangement of sensors at their optical focal planes. For the first time, it is found and proven that the reflected beam profile becomes several periodic spots at a reflected propagation distance corresponding to half the imaging distance of a CCD camera. Furthermore, the spatial period of these spots is approximately constant, independent of the CCD camera's imaging distance, and is related only to the focal length and pixel size of the CCD sensor. Thus, we can obtain the imaging distance and intrinsic parameters of the optical detector by analyzing its cat-eye reflected beam profiles. This conclusion can be applied in the field of non-cooperative cat-eye target recognition.

  18. Establishing imaging sensor specifications for digital still cameras

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2007-02-01

    Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor or the newer Foveon buried-photodiode sensor. There is a strong tendency by consumers to consider only the number of mega-pixels in a camera and not to consider the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics, sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full well capacity in terms of electrons per square centimeter). Examples will be given for consumer, pro-consumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.

  19. Design and DSP implementation of star image acquisition and star point fast acquiring and tracking

    NASA Astrophysics Data System (ADS)

    Zhou, Guohui; Wang, Xiaodong; Hao, Zhihang

    2006-02-01

    The star sensor is a special high-accuracy photoelectric sensor, and attitude acquisition time is an important performance index of a star sensor. In this paper, the design target is a dynamic performance of 10 samples per second. On the basis of analyzing the CCD signal timing and star image processing, a new design with a special parallel architecture for accelerating star image processing is presented. In the design, the operation of moving the data in expanded windows containing stars to the on-chip memory of the DSP is arranged in the invalid period of the CCD frame signal; while the CCD image is being saved to memory, the DSP processes the data already in on-chip memory. This parallelism greatly improves processing efficiency, and the scheme results in enormous savings of the memory normally required. In the scheme, the DSP HOLD mode and CPLD technology are used to implement a memory shared between the CCD and the DSP. The processing efficiency is evaluated in numerical tests: the five brightest stars are acquired in only 3.5 ms in the star acquisition stage; in 43 us the data in five expanded windows containing stars are moved into the internal memory of the DSP, and five star coordinates are obtained in 1.6 ms in the star tracking stage.
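    The abstract does not state how the star coordinates are computed from the expanded windows; a common choice is an intensity-weighted centroid, sketched below as an assumption.

    ```python
    import numpy as np

    def star_centroid(window, window_origin):
        """Intensity-weighted centroid of a star inside an expanded window.
        `window` is the 2-D pixel patch, `window_origin` its (row, col) offset
        in the full frame."""
        patch = window.astype(float)
        rows, cols = np.indices(patch.shape)
        total = patch.sum()
        return (window_origin[0] + (rows * patch).sum() / total,
                window_origin[1] + (cols * patch).sum() / total)
    ```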

  20. High-speed high-resolution epifluorescence imaging system using CCD sensor and digital storage for neurobiological research

    NASA Astrophysics Data System (ADS)

    Takashima, Ichiro; Kajiwara, Riichi; Murano, Kiyo; Iijima, Toshio; Morinaka, Yasuhiro; Komobuchi, Hiroyoshi

    2001-04-01

    We have designed and built a high-speed CCD imaging system for monitoring neural activity in an exposed animal cortex stained with a voltage-sensitive dye. Two types of custom-made CCD sensors were developed for this system. The type I chip has a resolution of 2664 (H) X 1200 (V) pixels and a wide imaging area of 28.1 X 13.8 mm, while the type II chip has 1776 X 1626 pixels and an active imaging area of 20.4 X 18.7 mm. The CCD arrays were constructed with multiple output amplifiers in order to accelerate the readout rate. The two chips were divided into either 24 (I) or 16 (II) distinct areas that were driven in parallel. The parallel CCD outputs were digitized by 12-bit A/D converters and then stored in the frame memory. The frame memory was constructed with synchronous DRAM modules, which provided a capacity of 128 MB per channel. On-chip and on-memory binning methods were incorporated into the system, e.g., this enabled us to capture 444 X 200 pixel-images for periods of 36 seconds at a rate of 500 frames/second. This system was successfully used to visualize neural activity in the cortices of rats, guinea pigs, and monkeys.
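    The on-memory binning mentioned above can be emulated in software; a minimal numpy sketch follows, with the binning factors as hypothetical parameters.

    ```python
    import numpy as np

    def bin_frame(frame, by, bx):
        """Sum-bin a 2-D frame by factors (by, bx), trading spatial resolution
        for signal level and readout/processing speed, as in on-memory binning."""
        h, w = frame.shape
        trimmed = frame[:h - h % by, :w - w % bx].astype(np.int64)
        return trimmed.reshape(h // by, by, w // bx, bx).sum(axis=(1, 3))
    ```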

  1. Optical cell monitoring system for underwater targets

    NASA Astrophysics Data System (ADS)

    Moon, SangJun; Manzur, Fahim; Manzur, Tariq; Demirci, Utkan

    2008-10-01

    We demonstrate a cell based detection system that could be used for monitoring an underwater target volume and environment using a microfluidic chip and charge-coupled-device (CCD). This technique allows us to capture specific cells and enumerate these cells on a large area on a microchip. The microfluidic chip and a lens-less imaging platform were then merged to monitor cell populations and morphologies as a system that may find use in distributed sensor networks. The chip, featuring surface chemistry and automatic cell imaging, was fabricated from a cover glass slide, double sided adhesive film and a transparent polymethylmethacrylate (PMMA) slab. The optically clear chip allows detecting cells with a CCD sensor. These chips were fabricated with a laser cutter without the use of photolithography. We utilized CD4+ cells that are captured on the floor of a microfluidic chip due to the ability to address specific target cells using antibody-antigen binding. Captured CD4+ cells were imaged with a fluorescence microscope to verify the chip specificity and efficiency. We achieved 70.2 +/- 6.5% capturing efficiency and 88.8 +/- 5.4% specificity for CD4+ T lymphocytes (n = 9 devices). Bright field images of the captured cells in the 24 mm × 4 mm × 50 μm microfluidic chip were obtained with the CCD sensor in one second. We achieved an inexpensive system that rapidly captures cells and images them using a lens-less CCD system. This microfluidic device can be modified for use in single cell detection utilizing a cheap light-emitting diode (LED) chip instead of a wide range CCD system.

  2. Ionizing doses and displacement damage testing of COTS CMOS imagers

    NASA Astrophysics Data System (ADS)

    Bernard, Frédéric; Petit, Sophie; Courtade, Sophie

    2017-11-01

    CMOS sensors are becoming a credible alternative to CCD sensors in some space missions. However, the technology evolution of CMOS sensors is much faster than that of CCDs, so continuous technology evaluation is needed for CMOS imagers. Many commercial COTS (Components Off The Shelf) CMOS sensors use organic filters, micro-lenses and non-rad-hard technologies. An evaluation of the possibilities offered by such technologies is worthwhile before any custom development and can be obtained by testing commercial COTS imagers. This article presents the evolution of the electro-optical performance of off-the-shelf CMOS imagers after total ionizing dose tests up to 50 krad(Si) and displacement damage tests (up to 10¹¹ p/cm² at 50 MeV). Dark current level and non-uniformity evolutions are compared and discussed. Relative spectral response measurements and their evolution with irradiation are also presented and discussed. The tests have been performed on CNES detection benches.

  3. Producing CCD imaging sensor with flashed backside metal film

    NASA Technical Reports Server (NTRS)

    Janesick, James R. (Inventor)

    1988-01-01

    A backside illuminated CCD imaging sensor for reading out image charges from wells of the array of pixels is significantly improved for blue, UV, far UV and low energy x-ray wavelengths (1-5000 Å) by so overthinning the backside as to place the depletion edge at the surface and depositing a thin transparent metal film of about 10 Å on a native-quality oxide film of less than about 30 Å grown on the thinned backside. The metal is selected to have a higher work function than that of the semiconductor to so bend the energy bands (at the interface of the semiconductor material and the oxide film) as to eliminate wells that would otherwise trap minority carriers. A bias voltage may be applied to extend the frontside depletion edge to the interface of the semiconductor material with the oxide film in the event there is not sufficient thinning. This metal film (flash gate), which improves and stabilizes the quantum efficiency of a CCD imaging sensor, will also improve the QE of any p-n junction photodetector.

  4. CCD imaging sensor with flashed backside metal film

    NASA Technical Reports Server (NTRS)

    Janesick, James R. (Inventor)

    1991-01-01

    A backside illuminated CCD imaging sensor for reading out image charges from wells of the array of pixels is significantly improved for blue, UV, far UV and low energy x-ray wavelengths (1-5000 Å) by so overthinning the backside as to place the depletion edge at the surface and depositing a thin transparent metal film of about 10 Å on a native-quality oxide film of less than about 30 Å grown on the thinned backside. The metal is selected to have a higher work function than that of the semiconductor to so bend the energy bands (at the interface of the semiconductor material and the oxide film) as to eliminate wells that would otherwise trap minority carriers. A bias voltage may be applied to extend the frontside depletion edge to the interface of the semiconductor material with the oxide film in the event there is not sufficient thinning. This metal film (flash gate), which improves and stabilizes the quantum efficiency of a CCD imaging sensor, will also improve the QE of any p-n junction photodetector.

  5. Radioactive Quality Evaluation and Cross Validation of Data from the HJ-1A/B Satellites' CCD Sensors

    PubMed Central

    Zhang, Xin; Zhao, Xiang; Liu, Guodong; Kang, Qian; Wu, Donghai

    2013-01-01

    Data from multiple sensors are frequently used in Earth science to gain a more complete understanding of spatial information changes. Higher quality and mutual consistency are prerequisites when multiple sensors are used jointly. The HJ-1A/B satellites were successfully launched on 6 September 2008. There are four charge-coupled device (CCD) sensors with uniform spatial resolution and spectral range onboard the HJ-1A/B satellites. Whether these data maintain consistency is a major issue before they are used. This research aims to evaluate the data consistency and radioactive quality of the four CCDs. First, images of urban, desert, lake and ocean scenes are chosen as the objects of evaluation. Second, objective evaluation variables, such as mean, variance and angular second moment, are used to characterize image performance. Finally, a cross-validation method is used to assess the correlation of the data from the four HJ-1A/B CCDs with data gathered from the Moderate Resolution Imaging Spectroradiometer (MODIS). The results show that the image quality of the HJ-1A/B CCDs is stable, and the digital number distribution of the CCD data is relatively low. In cross validation with MODIS, the root mean square errors of bands 1, 2 and 3 range from 0.055 to 0.065, and for band 4 it is 0.101. The data from the HJ-1A/B CCDs show good consistency. PMID:23881127
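    A minimal sketch of the cross-validation statistic quoted above, the band-wise root mean square error between co-located HJ-1A/B CCD and MODIS values; the array names are assumptions.

    ```python
    import numpy as np

    def band_rmse(ccd_band, modis_band):
        """Root mean square error between co-located pixel values of one spectral
        band from two sensors, as used for the CCD-versus-MODIS cross validation."""
        diff = np.asarray(ccd_band, dtype=float) - np.asarray(modis_band, dtype=float)
        return float(np.sqrt(np.mean(diff ** 2)))
    ```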

  6. Radioactive quality evaluation and cross validation of data from the HJ-1A/B satellites' CCD sensors.

    PubMed

    Zhang, Xin; Zhao, Xiang; Liu, Guodong; Kang, Qian; Wu, Donghai

    2013-07-05

    Data from multiple sensors are frequently used in Earth science to gain a more complete understanding of spatial information changes. Higher quality and mutual consistency are prerequisites when multiple sensors are used jointly. The HJ-1A/B satellites were successfully launched on 6 September 2008. There are four charge-coupled device (CCD) sensors with uniform spatial resolution and spectral range onboard the HJ-1A/B satellites. Whether these data maintain consistency is a major issue before they are used. This research aims to evaluate the data consistency and radioactive quality of the four CCDs. First, images of urban, desert, lake and ocean scenes are chosen as the objects of evaluation. Second, objective evaluation variables, such as mean, variance and angular second moment, are used to characterize image performance. Finally, a cross-validation method is used to assess the correlation of the data from the four HJ-1A/B CCDs with data gathered from the Moderate Resolution Imaging Spectroradiometer (MODIS). The results show that the image quality of the HJ-1A/B CCDs is stable, and the digital number distribution of the CCD data is relatively low. In cross validation with MODIS, the root mean square errors of bands 1, 2 and 3 range from 0.055 to 0.065, and for band 4 it is 0.101. The data from the HJ-1A/B CCDs show good consistency.

  7. Dosimetry of heavy ions by use of CCD detectors

    NASA Technical Reports Server (NTRS)

    Schott, J. U.

    1994-01-01

    The design and the atomic composition of Charge Coupled Devices (CCDs) make them unique for investigations of single energetic particle events. As a detector system for ionizing particles, they detect single particles with local resolution and near-real-time particle tracking. In combination with their properties as optical sensors, traversals of single particles can be correlated to any objects attached to the light-sensitive surface of the sensor by simple imaging of their shadow and subsequent image analysis of both the optical image and the particle effects observed in affected pixels. With biological objects, it is possible for the first time to investigate effects of single heavy ions in tissue or organs of metabolizing (i.e. moving) systems with a local resolution better than 15 microns. Calibration data for particle detection in CCDs are presented for low-energy protons and heavy ions.

  8. Fixed mount wavefront sensor

    DOEpatents

    Neal, Daniel R.

    2000-01-01

    A rigid mount and method of mounting for a wavefront sensor. A wavefront dissector, such as a lenslet array, is rigidly mounted at a fixed distance relative to an imager, such as a CCD camera, without need for a relay imaging lens therebetween.

  9. An Overview of the CBERS-2 Satellite and Comparison of the CBERS-2 CCD Data with the L5 TM Data

    NASA Technical Reports Server (NTRS)

    Chandler, Gyanesh

    2007-01-01

    The CBERS satellite carries onboard a multi-sensor payload with different spatial resolutions and collection frequencies: the HRCCD (High Resolution CCD Camera), the IRMSS (Infrared Multispectral Scanner), and the WFI (Wide-Field Imager). The CCD and WFI cameras operate in the VNIR region, while the IRMSS operates in the SWIR and thermal regions. In addition to the imaging payload, the satellite carries a Data Collection System (DCS) and a Space Environment Monitor (SEM).

  10. CCD TV focal plane guider development and comparison to SIRTF applications

    NASA Technical Reports Server (NTRS)

    Rank, David M.

    1989-01-01

    It is expected that the SIRTF payload will use a CCD TV focal plane fine guidance sensor to provide acquisition of sources and tracking stability of the telescope. CCD TV cameras and guiders have been developed at Lick Observatory for several years, producing state-of-the-art CCD TV systems for internal use. NASA decided to provide additional support so that the limits of this technology could be established and a comparison between SIRTF requirements and practical systems could be put on a more quantitative basis. The results of work carried out at Lick Observatory, designed to characterize present CCD autoguiding technology and relate it to SIRTF applications, are presented. Two different types of CCD cameras were constructed, using virtual phase and buried channel CCD sensors. A simple autoguider was built and used on the KAO, Mt. Lemmon and Mt. Hamilton telescopes. A video image processing system was also constructed in order to characterize the performance of the autoguider and CCD cameras.

  11. Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples

    PubMed Central

    Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.

    2014-01-01

    Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

  12. Design of area array CCD image acquisition and display system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming

    2014-09-01

    With the development of science and technology, the CCD (charge-coupled device) has been widely applied in various fields and plays an important role in modern sensing systems; therefore, researching a real-time image acquisition and display scheme based on a CCD device has great significance. This paper introduces an image data acquisition and display system for an area array CCD based on an FPGA. Several key technical challenges and problems of the system are analyzed and solutions are put forward. The FPGA works as the core processing unit of the system and controls the overall timing sequence. The ICX285AL area array CCD image sensor produced by SONY Corporation is used in the system. The FPGA drives the area array CCD, and the analog front end (AFE) then processes the CCD image signal, including amplification, filtering, noise elimination and correlated double sampling (CDS); an AD9945 produced by ADI Corporation converts the analog signal to a digital signal. A Camera Link high-speed data transmission circuit was developed, the PC-side image acquisition software was designed, and real-time display of images was realized. Practical testing indicates that the system is stable and reliable in image acquisition and control, and its indicators meet the actual project requirements.
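    A minimal sketch of the correlated double sampling (CDS) step performed by the analog front end, shown here digitally for clarity; the variable names are assumptions.

    ```python
    import numpy as np

    def correlated_double_sampling(reset_samples, signal_samples):
        """CDS subtracts the reset (reference) level from the video level of each
        pixel; kTC reset noise and slow offset drift are common to both samples
        and largely cancel in the difference."""
        return np.asarray(signal_samples, dtype=float) - np.asarray(reset_samples, dtype=float)
    ```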

  13. Photon-counting image sensors for the ultraviolet

    NASA Technical Reports Server (NTRS)

    Jenkins, E. B.

    1985-01-01

    An investigation of specific performance details of photon-counting ultraviolet image sensors having 2-dimensional formats is reviewed. In one study, controlled experiments were performed which compare the quantum efficiencies, in pulse counting mode, of CsI photocathodes deposited on: (1) the front surface of a microchannel plate (MCP), (2) a solid surface in front of an MCP, and (3) an intensified CCD image sensor (ICCD) where a CCD is directly bombarded by accelerated photoelectrons. Tests indicated that the detection efficiency of the CsI-coated MCP at 1026 Å is lower by a factor of 2.5 than that of the MCP with a separate, opaque CsI photocathode, and the detection efficiency ratio increases substantially at longer wavelengths (the ratio is 5 at 1216 Å and 20 at 1608 Å).

  14. Toward one Giga frames per second--evolution of in situ storage image sensors.

    PubMed

    Etoh, Takeharu G; Son, Dao V T; Yamada, Tetsuo; Charbon, Edoardo

    2013-04-08

    The ISIS is an ultra-fast image sensor with in-pixel storage. The evolution of the ISIS in the past and in the near future is reviewed and forecast. To cover the storage area with a light shield, the conventional frontside-illuminated ISIS has a limited fill factor. To achieve higher sensitivity, a BSI ISIS was developed. To avoid direct intrusion of light and migration of signal electrons to the storage area on the frontside, a cross-sectional sensor structure with thick pnpn layers was developed, named the "Tetratified structure". By folding and looping the in-pixel storage CCDs, an image signal accumulation sensor, ISAS, is proposed. The ISAS has a new function, in-pixel signal accumulation, in addition to ultra-high-speed imaging. To achieve a much higher frame rate, a multi-collection-gate (MCG) BSI image sensor architecture is proposed, in which the photoreceptive area forms a honeycomb-like shape. The performance of a hexagonal CCD-type MCG BSI sensor is examined by simulations. The highest frame rate is theoretically more than 1 Gfps. For the near future, a stacked hybrid CCD/CMOS MCG image sensor seems most promising. The associated problems are discussed. A fine TSV process is the key technology to realize the structure.

  15. Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion

    NASA Astrophysics Data System (ADS)

    Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei

    2018-06-01

    Infrared and visible light image fusion technology has been a hot spot in multi-sensor fusion research in recent years. Existing infrared and visible light fusion systems require image registration before fusion because they use two separate cameras, and the performance of the registration step still needs improvement. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: by using a beam splitter prism, the coaxial light entering a single lens is projected onto an infrared charge coupled device (CCD) and a visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied along with the signal acquisition and fusion process. A simulation experiment, covering the entire process of the optical system, signal acquisition and signal fusion, is constructed based on an imaging effect model, and a quality evaluation index is adopted to analyze the simulation results. The experimental results demonstrate that the proposed sensor device is effective and feasible.

  16. Quadrilinear CCD sensors for the multispectral channel of spaceborne imagers

    NASA Astrophysics Data System (ADS)

    Materne, Alex; Gili, Bruno; Laubier, David; Gimenez, Thierry

    2001-12-01

    The PLEIADES-HR Earth Observation satellites will combine a high resolution panchromatic channel -- 0.7 m at nadir -- and a multispectral channel allowing a 2.8 m resolution. This paper presents the main specifications, design and performance of a 52 µm pitch quadrilinear CCD sensor developed by ATMEL under CNES contract for the multispectral channel of the PLEIADES-HR instrument. The monolithic CCD device includes four lines of 1500 pixels, each line dedicated to a narrow spectral band within the blue to near-infrared spectrum. The design of the photodiodes and CCD registers, larger than those developed up to now for CNES spaceborne imagers, needed some specific structures to break up the large equipotential areas where charge does not flow properly. Results are presented on the options that were tested to improve sensitivity, maintain transfer efficiency and reduce power dissipation. The four spectral bands are achieved by four stripe filters made by SAGEM-REOSC PRODUCTS on a glass substrate, to be assembled on the sensor window. Line-to-line spacing on the silicon die takes into account the results of straylight analysis. A mineral layer with high optical absorption is deposited between the photosensitive lines to further reduce straylight.

  17. Smear correction of highly variable, frame-transfer CCD images with application to polarimetry.

    PubMed

    Iglesias, Francisco A; Feller, Alex; Nagaraju, Krishnappa

    2015-07-01

    Image smear, produced by the shutterless operation of frame-transfer CCD detectors, can be detrimental for many imaging applications. Existing algorithms used to numerically remove smear do not contemplate cases where intensity levels change considerably between consecutive frame exposures. In this report, we reformulate the smearing model to include specific variations of the sensor illumination. The corresponding desmearing expression and its noise properties are also presented and demonstrated in the context of fast imaging polarimetry.
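    For context, the conventional constant-illumination smear model that the paper generalizes reduces to a simple column correction. The sketch below assumes constant illumination during the frame transfer and is not the variable-illumination formulation proposed by the authors.

    ```python
    import numpy as np

    def desmear_constant_illumination(frame, t_exposure, t_transfer):
        """Classical smear removal for a frame-transfer CCD: during the shutterless
        transfer each pixel spends t_transfer / n_rows under every other row of its
        column, so the smear is approximately proportional to the column sum."""
        n_rows = frame.shape[0]
        smear_fraction = (t_transfer / n_rows) / t_exposure
        column_sum = frame.sum(axis=0, keepdims=True).astype(float)
        return frame - smear_fraction * column_sum
    ```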

  18. 3D space positioning and image feature extraction for workpiece

    NASA Astrophysics Data System (ADS)

    Ye, Bing; Hu, Yi

    2008-03-01

    An optical system for measuring the 3D parameters of a specific area of a workpiece is presented and discussed in this paper. A number of CCD image sensors are employed to construct the 3D coordinate system for the measured area. The CCD image sensor monitoring the target is used to lock onto the measured workpiece when it enters the field of view; the other sensors, placed symmetrically with respect to the laser beam scanners, measure the appearance of the workpiece and its characteristic parameters. The paper establishes a target image segmentation and image feature extraction algorithm to lock onto the target; based on the geometric similarity of object characteristics, rapid locking of the target can be realized. As the line laser beam scans the tested workpiece, images are extracted at equal time intervals and the overlapping images are processed to complete image reconstruction and obtain the 3D image information. From the 3D coordinate reconstruction model, the 3D characteristic parameters of the tested workpiece are obtained. Experimental results are provided in the paper.

  19. Development of CCD Cameras for Soft X-ray Imaging at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teruya, A. T.; Palmer, N. E.; Schneider, M. B.

    2013-09-01

    The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the “soft” channel and 3 – 5 keV for the “hard” channels. The original cameras utilize a large format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.

  20. Tests of commercial colour CMOS cameras for astronomical applications

    NASA Astrophysics Data System (ADS)

    Pokhvala, S. M.; Reshetnyk, V. M.; Zhilyaev, B. E.

    2013-12-01

    We present some results of testing commercial colour CMOS cameras for astronomical applications. Colour CMOS sensors allow photometry to be performed in three filters simultaneously, which gives a great advantage compared with monochrome CCD detectors. The Bayer BGR colour system realized in colour CMOS sensors is close to the astronomical Johnson BVR system. The basic camera characteristics: read noise (e^{-}/pix), thermal noise (e^{-}/pix/sec) and electronic gain (e^{-}/ADU), are presented for the commercial digital camera Canon 5D Mark III. We give the same characteristics for the scientific high performance cooled CCD camera system ALTA E47. Comparing the test results of the Canon 5D Mark III and the CCD ALTA E47 shows that present-day commercial colour CMOS cameras can seriously compete with scientific CCD cameras in deep astronomical imaging.
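    The electronic gain quoted above (e^{-}/ADU) is commonly estimated with the photon-transfer method; the sketch below is a hedged illustration using two nominally identical flat-field frames, with the frame and bias values as assumed inputs.

    ```python
    import numpy as np

    def photon_transfer_gain(flat_a, flat_b, bias_level):
        """Estimate electronic gain in e-/ADU from two nominally identical flats.
        The shot-noise variance is taken from their difference so fixed-pattern
        noise cancels; gain = mean signal / shot-noise variance (both in ADU)."""
        a = np.asarray(flat_a, dtype=float)
        b = np.asarray(flat_b, dtype=float)
        mean_signal = 0.5 * (a.mean() + b.mean()) - bias_level
        shot_variance = np.var(a - b) / 2.0
        return mean_signal / shot_variance
    ```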

  1. Solid state television camera (CCD-buried channel)

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The development of an all solid state television camera, which uses a buried channel charge coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array is utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control (i.e., ALC and AGC) techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

  2. Solid state television camera (CCD-buried channel), revision 1

    NASA Technical Reports Server (NTRS)

    1977-01-01

    An all solid state television camera was designed which uses a buried channel charge coupled device (CCD) as the image sensor. A 380 x 488 element CCD array is utilized to ensure compatibility with 525-line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (1) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (2) techniques for the elimination or suppression of CCD blemish effects, and (3) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

  3. Solid state, CCD-buried channel, television camera study and design

    NASA Technical Reports Server (NTRS)

    Hoagland, K. A.; Balopole, H.

    1976-01-01

    An investigation of an all solid state television camera design, which uses a buried channel charge-coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array was utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a design which addresses the program requirements for a deliverable solid state TV camera.

  4. Turbulent Mixing and Combustion for High-Speed Air-Breathing Propulsion Application

    DTIC Science & Technology

    2007-08-12

    deficit (the velocity of the wake relative to the free-stream velocity) decays rapidly with downstream distance, so that the streamwise velocity is... switched laser with double-pulse option) and a new imaging system (high-resolution 4008 x 2672 pixel, low-noise (cooled) Cooke PCO-4000 CCD camera). The... was designed in-house for high-speed, low-noise image acquisition. The KFS CCD image sensor was designed by Mark Wadsworth of JPL and has a resolution

  5. High-speed line-scan camera with digital time delay integration

    NASA Astrophysics Data System (ADS)

    Bodenstorfer, Ernst; Fürtler, Johannes; Brodersen, Jörg; Mayer, Konrad J.; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert

    2007-02-01

    In high-speed image acquisition and processing systems, the speed of operation is often limited by the amount of available light, due to short exposure times. Therefore, high-speed applications often use line-scan cameras based on charge-coupled device (CCD) sensors with time delay integration (TDI). Synchronous shift and accumulation of photoelectric charges on the CCD chip, according to the objects' movement, results in a longer effective exposure time without introducing additional motion blur. This paper presents a high-speed color line-scan camera based on a commercial complementary metal oxide semiconductor (CMOS) area image sensor with a Bayer filter matrix and a field programmable gate array (FPGA). The camera implements a digital equivalent of the TDI effect exploited in CCD cameras. The proposed design benefits from the high frame rates of CMOS sensors and from the possibility of arbitrarily addressing the rows of the sensor's pixel array. For the digital TDI, just a small number of rows are read out from the area sensor, which are then shifted and accumulated according to the movement of the inspected objects. This paper gives a detailed description of the digital TDI algorithm implemented on the FPGA. Relevant aspects for practical application are discussed and key features of the camera are listed.
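    A minimal sketch of the digital TDI accumulation described above, assuming the inspected object moves by exactly one sensor row between consecutive frames; the frame list and stage count are illustrative assumptions, not the FPGA implementation.

    ```python
    import numpy as np

    def digital_tdi(frames, n_stages):
        """Digital time delay integration: each output line is the sum of n_stages
        exposures of the same object line, taken from successive frames with the
        row index advanced by one per frame to follow the object's motion."""
        output = []
        for t in range(len(frames) - n_stages + 1):
            line = sum(frames[t + s][s].astype(np.int64) for s in range(n_stages))
            output.append(line)
        return np.array(output)
    ```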

  6. Design of multi-mode compatible image acquisition system for HD area array CCD

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Sui, Xiubao

    2014-11-01

    In line with the current trends in video surveillance toward digitization and high definition, a multi-mode-compatible image acquisition system for an HD area array CCD is designed. The hardware and software designs of the colour video capture system for the HD area array CCD KAI-02150 from Truesense Imaging are analyzed, and the structural parameters of the HD area array CCD and the colour video acquisition principle of the system are introduced. The CCD control sequence and the timing logic of the whole capture system are then realized. The noise of the video signal (kTC noise and 1/f noise) is filtered using correlated double sampling (CDS) to enhance the signal-to-noise ratio of the system. Compatible hardware and software designs for two other image sensors of the same series, the KAI-04050 and KAI-08050, are put forward; the effective pixel counts of these two HD image sensors are four million and eight million, respectively. A Field Programmable Gate Array (FPGA) is adopted as the key controller of the system to perform a top-down modular design, which implements the hardware design in software and improves development efficiency. Finally, the required driving timing is simulated accurately using the Quartus II 12.1 development platform together with VHDL. The simulation results indicate that the driving circuit features a simple architecture, low power consumption and strong anti-interference ability, meeting current demands for miniaturization and high definition.

  7. Research-grade CMOS image sensors for remote sensing applications

    NASA Astrophysics Data System (ADS)

    Saint-Pe, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Martin-Gonthier, Philippe; Corbiere, Franck; Belliot, Pierre; Estribeau, Magali

    2004-11-01

    Imaging detectors are key elements for optical instruments and sensors on board space missions dedicated to Earth observation (high-resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market has long been dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA and ESA). Throughout the 90s, thanks to their steadily improving performance, CIS began to be used successfully for more and more demanding space applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made it possible to manufacture research-grade CIS able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this paper presents the existing and foreseen ways to reach high-level electro-optical performance for CIS. The developments and performance of CIS prototypes built using an imaging CMOS process are presented in the corresponding section.

  8. Hybrid imaging: a quantum leap in scientific imaging

    NASA Astrophysics Data System (ADS)

    Atlas, Gene; Wadsworth, Mark V.

    2004-01-01

    ImagerLabs has advanced its patented next-generation imaging technology, called Hybrid Imaging Technology (HIT), which offers scientific-quality performance. The key to HIT is the merging of CCD and CMOS technologies through hybridization rather than process integration. HIT offers the exceptional QE, fill factor, broad spectral response and very low noise of the CCD. In addition, it provides the very high-speed readout, low power, high linearity and high integration capability of CMOS sensors. In this work, we present the benefits of this exciting technology and report the latest advances in its performance.

  9. Is flat fielding safe for precision CCD astronomy?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumer, Michael; Davis, Christopher P.; Roodman, Aaron

    The ambitious goals of precision cosmology with wide-field optical surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST) demand precision CCD astronomy as their foundation. This in turn requires an understanding of previously uncharacterized sources of systematic error in CCD sensors, many of which manifest themselves as static effective variations in pixel area. Such variation renders a critical assumption behind the traditional procedure of flat fielding—that a sensor's pixels comprise a uniform grid—invalid. In this work, we present a method to infer a curl-free model of a sensor's underlying pixel grid from flat-field images, incorporating the superposition of all electrostatic sensor effects—both known and unknown—present in flat-field data. We use these pixel grid models to estimate the overall impact of sensor systematics on photometry, astrometry, and PSF shape measurements in a representative sensor from the Dark Energy Camera (DECam) and a prototype LSST sensor. Applying the method to DECam data recovers known significant sensor effects for which corrections are currently being developed within DES. For an LSST prototype CCD with pixel-response non-uniformity (PRNU) of 0.4%, we find the impact of "improper" flat fielding on these observables is negligible in nominal 0.7'' seeing conditions. Furthermore, these errors scale linearly with the PRNU, so for future LSST production sensors, which may have larger PRNU, our method provides a way to assess whether pixel-level calibration beyond flat fielding will be required.
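
    For reference, the conventional flat-fielding procedure whose safety the paper examines amounts to dividing a science frame by a normalized flat, implicitly treating every pixel as an identical unit of area. A minimal sketch (with an optional, assumed bias frame) is given below; it is generic calibration code, not the authors' pipeline.

    ```python
    import numpy as np

    def flat_field_correct(science, flat, bias=None):
        """Conventional flat-field correction.

        Dividing by a median-normalized flat assumes every pixel subtends
        the same area on the sky; static pixel-area variations then
        masquerade as sensitivity variations and are "corrected" wrongly,
        which is the effect the paper quantifies.
        """
        science = np.asarray(science, dtype=np.float64)
        flat = np.asarray(flat, dtype=np.float64)
        if bias is not None:
            science = science - bias
            flat = flat - bias
        norm_flat = flat / np.median(flat)   # unit-median flat
        return science / norm_flat
    ```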

  10. Is flat fielding safe for precision CCD astronomy?

    DOE PAGES

    Baumer, Michael; Davis, Christopher P.; Roodman, Aaron

    2017-07-06

    The ambitious goals of precision cosmology with wide-field optical surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST) demand precision CCD astronomy as their foundation. This in turn requires an understanding of previously uncharacterized sources of systematic error in CCD sensors, many of which manifest themselves as static effective variations in pixel area. Such variation renders a critical assumption behind the traditional procedure of flat fielding—that a sensor's pixels comprise a uniform grid—invalid. In this work, we present a method to infer a curl-free model of a sensor's underlying pixel grid from flat-field images, incorporating the superposition of all electrostatic sensor effects—both known and unknown—present in flat-field data. We use these pixel grid models to estimate the overall impact of sensor systematics on photometry, astrometry, and PSF shape measurements in a representative sensor from the Dark Energy Camera (DECam) and a prototype LSST sensor. Applying the method to DECam data recovers known significant sensor effects for which corrections are currently being developed within DES. For an LSST prototype CCD with pixel-response non-uniformity (PRNU) of 0.4%, we find the impact of "improper" flat fielding on these observables is negligible in nominal 0.7'' seeing conditions. Furthermore, these errors scale linearly with the PRNU, so for future LSST production sensors, which may have larger PRNU, our method provides a way to assess whether pixel-level calibration beyond flat fielding will be required.

  11. Very-large-area CCD image sensors: concept and cost-effective research

    NASA Astrophysics Data System (ADS)

    Bogaart, E. W.; Peters, I. M.; Kleimann, A. C.; Manoury, E. J. P.; Klaassens, W.; de Laat, W. T. F. M.; Draijer, C.; Frost, R.; Bosiers, J. T.

    2009-01-01

    A new-generation full-frame 36x48 mm² 48Mp CCD image sensor with vertical anti-blooming for professional digital still camera applications has been developed by means of the so-called building-block concept. The 48Mp devices are formed by stitching 1kx1k building blocks with 6.0 µm pixel pitch in a 6x8 (h x v) format. This concept allows us to design four large-area (48Mp) and sixty-two basic (1Mp) devices per 6" wafer. The basic image sensor is kept relatively small in order to obtain data from many devices: evaluation of basic parameters such as the image pixel and the on-chip amplifier provides statistical data from a limited number of wafers, whereas the large-area devices are evaluated for aspects specific to large-sensor operation and performance, such as charge transport efficiency. Combined with the use of multi-layer reticles, this makes the sensor development cost-effective for prototyping. Optimisation of the sensor design and technology has resulted in a pixel charge capacity of 58 ke- and significantly reduced readout noise (12 electrons at 25 MHz pixel rate, after CDS). Hence, a dynamic range of 73 dB is obtained. Microlens and stack optimisation resulted in an excellent angular response that meets the demands of wide-angle photography.
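
    The quoted 73 dB follows directly from the ratio of full-well capacity to read noise, assuming the usual definition DR = 20 * log10(full well / read noise); a one-line check:

    ```python
    import math

    full_well_e = 58_000   # pixel charge capacity, electrons
    read_noise_e = 12      # readout noise after CDS, electrons rms

    dynamic_range_db = 20 * math.log10(full_well_e / read_noise_e)
    print(f"{dynamic_range_db:.1f} dB")   # ~73.7 dB, consistent with the quoted 73 dB
    ```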

  12. Failure Analysis of CCD Image Sensors Using SQUID and GMR Magnetic Current Imaging

    NASA Technical Reports Server (NTRS)

    Felt, Frederick S.

    2005-01-01

    During electrical testing of a Full Field CCD Image Sensor, electrical shorts were detected on three of six devices. These failures occurred after the parts were soldered to the PCB. Failure analysis was performed to determine the cause and location of the failures on the devices. After removing the fiber-optic faceplate, the CCDs were optically inspected to understand the design and package layout. Optical inspection revealed that the device had a light shield ringing the CCD array, a structure that complicated the failure analysis. Alternative methods of analysis were considered, including liquid crystal, light and thermal emission, LT/A, TT/A, SQUID, and MP. Of these, the SQUID and MP techniques were pursued for further analysis. Magnetoresistive current imaging technology is also discussed and compared to SQUID.

  13. Nonlinear time dependence of dark current in charge-coupled devices

    NASA Astrophysics Data System (ADS)

    Dunlap, Justin C.; Bodegom, Erik; Widenhorn, Ralf

    2011-03-01

    It is generally assumed that charge-coupled device (CCD) imagers produce a linear dark-current response versus exposure time except near saturation. We found a large number of pixels with a nonlinear dark-current response to exposure time in two scientific CCD imagers. These pixels exhibit behavior that is distinguishable from that of otherwise similar pixels and can therefore be characterized in groups. Data from two Kodak CCD sensors are presented for exposure times from a few seconds up to two hours. Linear behavior is traditionally taken for granted when carrying out dark-current correction; as a result, pixels with nonlinear behavior will be corrected inaccurately.
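
    The linearity assumption at issue is the one built into standard dark-frame correction, where a master dark is simply scaled by the ratio of exposure times before subtraction. The sketch below shows that scaling; any pixel whose dark current is not linear in time is over- or under-corrected by it. This is a generic illustration, not the authors' analysis code.

    ```python
    import numpy as np

    def scaled_dark_correction(raw, master_dark, t_exp, t_dark):
        """Conventional dark-current correction.

        Presumes dark charge accumulates linearly with exposure time: a
        master dark taken at exposure t_dark is scaled by t_exp / t_dark
        and subtracted. Pixels with a nonlinear dark-current response, as
        reported in the paper, are mis-corrected by this linear scaling.
        """
        scale = t_exp / t_dark
        return np.asarray(raw, dtype=np.float64) - scale * np.asarray(master_dark, dtype=np.float64)
    ```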

  14. Cameras for digital microscopy.

    PubMed

    Spring, Kenneth R

    2013-01-01

    This chapter reviews the fundamental characteristics of charge-coupled devices (CCDs) and related detectors, outlines the relevant parameters for their use in microscopy, and considers promising recent developments in detector technology. Electronic imaging with a CCD involves three stages: interaction of a photon with the photosensitive surface, storage of the liberated charge, and readout or measurement of the stored charge. The most demanding applications in fluorescence microscopy may require as much as four orders of magnitude greater sensitivity. The image in the present-day light microscope is usually acquired with a CCD camera. The CCD is composed of a large matrix of photosensitive elements (often referred to as "pixels," shorthand for picture elements) that simultaneously capture an image over the entire detector surface. The light-intensity information for each pixel is stored as electronic charge and is converted to an analog voltage by a readout amplifier. This analog voltage is subsequently converted to a numerical value by a digitizer situated on the CCD chip, or very close to it. In complementary metal oxide semiconductor (CMOS) sensors, several (three to six) amplifiers are required for each pixel, and, to date, uniform images with a homogeneous background have been a problem because of the inherent difficulty of balancing the gain in all of the amplifiers. CMOS sensors also exhibit relatively high noise associated with the requisite high-speed switching. Both of these deficiencies are being addressed, and sensor performance is nearing that required for scientific imaging. Copyright © 1998 Elsevier Inc. All rights reserved.

  15. Research-grade CMOS image sensors for demanding space applications

    NASA Astrophysics Data System (ADS)

    Saint-Pé, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Belliot, Pierre

    2004-06-01

    Imaging detectors are key elements for optical instruments and sensors on board space missions dedicated to Earth observation (high-resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market has long been dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in more and more consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA, and ESA). Throughout the 90s, thanks to their steadily improving performance, CIS began to be used successfully for more and more demanding applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made it possible to manufacture research-grade CIS able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this talk will present the existing and foreseen ways to reach high-level electro-optical performance for CIS. The developments of CIS prototypes built using an imaging CMOS process, and of devices based on improved designs, will be presented.

  16. Research-grade CMOS image sensors for demanding space applications

    NASA Astrophysics Data System (ADS)

    Saint-Pé, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Belliot, Pierre

    2017-11-01

    Imaging detectors are key elements for optical instruments and sensors on board space missions dedicated to Earth observation (high-resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market has long been dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in more and more consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA, and ESA). Throughout the 90s, thanks to their steadily improving performance, CIS began to be used successfully for more and more demanding applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made it possible to manufacture research-grade CIS able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this talk will present the existing and foreseen ways to reach high-level electro-optical performance for CIS. The developments of CIS prototypes built using an imaging CMOS process, and of devices based on improved designs, will be presented.

  17. Active Pixel Sensors: Are CCD's Dinosaurs?

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.

    1993-01-01

    Charge-coupled devices (CCDs) are presently the technology of choice for most imaging applications. In the 23 years since their invention in 1970, they have evolved to a sophisticated level of performance. However, as with all technologies, we can be certain that they will be supplanted someday. In this paper, Active Pixel Sensor (APS) technology is explored as a possible successor to the CCD. An active pixel is defined as a detector array technology that has at least one active transistor within the pixel unit cell. The APS eliminates the need for nearly perfect charge transfer, the Achilles' heel of CCDs. It is this requirement for nearly perfect charge transfer that makes CCDs radiation 'soft', difficult to use under low-light conditions, difficult to manufacture in large array sizes, difficult to integrate with on-chip electronics, difficult to use at low temperatures, difficult to use at high frame rates, and difficult to manufacture in non-silicon materials that would extend the wavelength response.

  18. General Model of Photon-Pair Detection with an Image Sensor

    NASA Astrophysics Data System (ADS)

    Defienne, Hugo; Reichert, Matthew; Fleischer, Jason W.

    2018-05-01

    We develop an analytic model that relates intensity correlation measurements performed by an image sensor to the properties of photon pairs illuminating it. Experiments using an effective single-photon counting camera, a linear electron-multiplying charge-coupled device camera, and a standard CCD camera confirm the model. The results open the field of quantum optical sensing using conventional detectors.
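
    A schematic version of the intensity-correlation measurement such a model describes is to estimate, from a stack of frames, the covariance between every pair of pixels; photon pairs appear as excess positive correlations between the pixels illuminated by the two photons of a pair. The estimator below is a generic sketch, not the authors' analytic model, and assumes the frames have been flattened to a modest number of pixels.

    ```python
    import numpy as np

    def intensity_correlation(frames):
        """Estimate Gamma_ij = <I_i I_j> - <I_i><I_j> from a frame stack.

        `frames` has shape (n_frames, n_pixels), i.e. each frame flattened
        to a 1D pixel vector (keep n_pixels small, e.g. a region of
        interest, since the result is an n_pixels x n_pixels matrix).
        """
        frames = np.asarray(frames, dtype=np.float64)
        mean_i = frames.mean(axis=0)                       # <I_i>
        mean_ij = frames.T @ frames / frames.shape[0]      # <I_i I_j>
        return mean_ij - np.outer(mean_i, mean_i)
    ```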

  19. Design considerations for imaging charge-coupled device

    NASA Astrophysics Data System (ADS)

    1981-04-01

    The image dissector tube, which was formerly used as the detector in star trackers, will be replaced by solid-state imaging devices. Advances in charge transfer devices, such as the charge-coupled device (CCD) and the charge-injection device (CID), have made their application to star trackers an immediate reality. In 1979 the Air Force funded an American aerospace company to develop an imaging CCD (ICCD) star sensor for the Multimission Attitude Determination and Autonomous Navigation (MADAN) system. The MADAN system is a technology development for a strapdown attitude and navigation system that can be used on all Air Force 3-axis stabilized satellites. The system will be autonomous and will provide real-time satellite attitude and position information. The star sensor accuracy provides an overall MADAN attitude accuracy of 2 arcsec for star rates up to 300 arcsec/sec. The ICCD is basically an integrating device. Its pixel resolution is not yet satisfactory for precision applications.

  20. Flame Imaging System

    NASA Technical Reports Server (NTRS)

    Barnes, Heidi L. (Inventor); Smith, Harvey S. (Inventor)

    1998-01-01

    A system for imaging a flame and the background scene is discussed. The flame imaging system consists of two charge-coupled device (CCD) cameras. One camera uses an 800 nm long-pass filter, which during overcast conditions blocks enough background light that the hydrogen flame appears brighter than the background; the second CCD camera uses an 1100 nm long-pass filter, which blocks the solar background in full-sunshine conditions so that the hydrogen flame is brighter than the solar background. Two electronic viewfinders convert the signals from the cameras into visible images. The operator can select the appropriately filtered camera depending on the current light conditions. In addition, a narrow-band-pass-filtered InGaAs sensor at 1360 nm triggers an audible alarm and a flashing LED if it detects a flame, providing additional flame detection so the operator does not overlook a small flame.

  1. Development of CMOS Active Pixel Image Sensors for Low Cost Commercial Applications

    NASA Technical Reports Server (NTRS)

    Gee, R.; Kemeny, S.; Kim, Q.; Mendis, S.; Nakamura, J.; Nixon, R.; Ortiz, M.; Pain, B.; Staller, C.; Zhou, Z.

    1994-01-01

    JPL, under sponsorship from the NASA Office of Advanced Concepts and Technology, has been developing a second-generation solid-state image sensor technology. Charge-coupled devices (CCD) are a well-established first generation image sensor technology. For both commercial and NASA applications, CCDs have numerous shortcomings. In response, the active pixel sensor (APS) technology has been under research. The major advantages of APS technology are the ability to integrate on-chip timing, control, signal-processing and analog-to-digital converter functions, reduced sensitivity to radiation effects, low power operation, and random access readout.

  2. Can direct electron detectors outperform phosphor-CCD systems for TEM?

    NASA Astrophysics Data System (ADS)

    Moldovan, G.; Li, X.; Kirkland, A.

    2008-08-01

    A new generation of imaging detectors is being considered for application in TEM, but which device architectures can provide the best images? Monte Carlo simulations of the electron-sensor interaction are used here to calculate the expected modulation transfer of monolithic active pixel sensors (MAPS), hybrid active pixel sensors (HAPS) and double-sided silicon strip detectors (DSSDs), showing that ideal and nearly ideal transfer can be obtained with DSSD and MAPS sensors. These results strongly support replacing current phosphor-screen and charge-coupled device imaging systems with such directly exposed position-sensitive electron detectors.

  3. Analysis of Dark Current in BRITE Nanosatellite CCD Sensors †

    PubMed Central

    Popowicz, Adam

    2018-01-01

    The BRightest Target Explorer (BRITE) is a pioneering nanosatellite mission dedicated to photometric observations of the brightest stars in the sky. The BRITE charge-coupled device (CCD) sensors are poorly shielded against the extensive flux of energetic particles, which constantly induces defects in the silicon lattice. In this paper we investigate the temporal evolution of dark-current generation in the BRITE CCDs over almost four years after launch. Using several image-processing steps and normalization of the results, it was possible to obtain useful information about the progress of thermal activity in the sensors. The outcomes show a clear and consistent linear increase of induced damage despite the fact that only about 0.14% of CCD pixels were probed. By analyzing the temperature dependence of the dark current, we identified the observed defects as phosphorus-vacancy (PV) pairs, which are common in proton-irradiated CCD matrices. Moreover, the Meyer-Neldel empirical rule was confirmed in our dark-current data, yielding E_MN = 24.8 meV for proton-induced PV defects. PMID:29415471
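
    The temperature-dependence analysis mentioned above is, in essence, an Arrhenius fit: the dark current is modelled as D = D0 * exp(-Ea / kT), and the fitted activation energy is compared with known defect levels such as the phosphorus-vacancy centre. The sketch below shows such a fit; it is a generic illustration, not the paper's full processing pipeline.

    ```python
    import numpy as np

    K_B_EV = 8.617e-5   # Boltzmann constant, eV/K

    def activation_energy(temps_K, dark_current):
        """Fit ln(D) = ln(D0) - Ea / (k_B * T) and return (Ea, D0).

        `temps_K` are the sensor temperatures in kelvin and `dark_current`
        the corresponding mean dark-current values (arbitrary units).
        Comparing the fitted Ea with known defect levels is the usual way
        of identifying the dominant generation centre.
        """
        x = 1.0 / (K_B_EV * np.asarray(temps_K, dtype=np.float64))
        y = np.log(np.asarray(dark_current, dtype=np.float64))
        slope, intercept = np.polyfit(x, y, 1)
        return -slope, np.exp(intercept)    # Ea in eV, prefactor D0
    ```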

  4. Multi-image acquisition-based distance sensor using agile laser spot beam.

    PubMed

    Riza, Nabeel A; Amin, M Junaid

    2014-09-01

    We present a novel laser-based distance measurement technique that uses multiple-image spatial processing to enable distance measurements. Compared with the first-generation distance sensor using spatial processing, the modified sensor is no longer hindered by the classic Rayleigh axial resolution limit for the propagating laser beam at its minimum beam-waist location. The proposed high-resolution distance sensor uses an electronically controlled variable focus lens (ECVFL) in combination with an optical imaging device, such as a charge-coupled device (CCD), to produce and capture laser spot images on a target whose sizes differ from the minimal spot size possible at that target distance. By exploiting the unique relationship between the target-located spot sizes and the varying ECVFL focal length for each target distance, the proposed sensor can compute the target distance with a resolution better than the axial resolution given by the Rayleigh criterion. Using a 30 mW 633 nm He-Ne laser coupled with an electromagnetically actuated liquid ECVFL, a 20 cm focal-length bias lens, and five spot images captured per target position by a CCD-based Nikon camera, a proof-of-concept distance sensor is successfully implemented in the laboratory over target ranges from 10 to 100 cm with a demonstrated sub-cm axial resolution, which is better than the axial Rayleigh resolution limit at these distances. Applications for the proposed, potentially cost-effective distance sensor are diverse and include industrial inspection and measurement as well as 3D object shape mapping and imaging.

  5. Photon Counting Imaging with an Electron-Bombarded Pixel Image Sensor

    PubMed Central

    Hirvonen, Liisa M.; Suhling, Klaus

    2016-01-01

    Electron-bombarded pixel image sensors, where a single photoelectron is accelerated directly into a CCD or CMOS sensor, allow wide-field imaging at extremely low light levels as they are sensitive enough to detect single photons. This technology allows the detection of up to hundreds or thousands of photon events per frame, depending on the sensor size, and photon event centroiding can be employed to recover resolution lost in the detection process. Unlike photon events from electron-multiplying sensors, the photon events from electron-bombarded sensors have a narrow, acceleration-voltage-dependent pulse height distribution. Thus a gain voltage sweep during exposure in an electron-bombarded sensor could allow photon arrival time determination from the pulse height with sub-frame exposure time resolution. We give a brief overview of our work with electron-bombarded pixel image sensor technology and recent developments in this field for single photon counting imaging, and examples of some applications. PMID:27136556
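
    Photon-event centroiding of the kind referred to above can be sketched as thresholding a sparse frame, grouping above-threshold pixels into connected blobs, and taking each blob's intensity-weighted centre of mass as the photon position. The example below uses SciPy's labelling routines and is a generic sketch rather than the authors' specific centroiding algorithm.

    ```python
    import numpy as np
    from scipy import ndimage

    def centroid_photon_events(frame, threshold):
        """Locate photon events in a sparse frame and return sub-pixel centroids.

        Pixels above `threshold` are grouped into connected blobs and each
        blob's intensity-weighted centre of mass is taken as the photon
        position, recovering resolution lost to the spread of the detected
        charge cloud.
        """
        frame = np.asarray(frame, dtype=np.float64)
        labels, n_events = ndimage.label(frame > threshold)
        if n_events == 0:
            return np.empty((0, 2))
        centroids = ndimage.center_of_mass(frame, labels, range(1, n_events + 1))
        return np.asarray(centroids)        # (row, col) per photon event
    ```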

  6. The development of a multifunction lens test instrument by using computer aided variable test patterns

    NASA Astrophysics Data System (ADS)

    Chen, Chun-Jen; Wu, Wen-Hong; Huang, Kuo-Cheng

    2009-08-01

    A multi-function lens test instrument is reported in this paper. The system can evaluate the image resolution, image quality, depth of field, image distortion and light intensity distribution of the tested lens by changing the test patterns. The system consists of the tested lens, a CCD camera, a linear motorized stage, a system fixture, an LCD monitor for observation, and a notebook computer that provides the patterns. The LCD monitor displays a series of specified test patterns sent by the notebook. Each displayed pattern then passes through the tested lens and is imaged on the CCD camera sensor. Consequently, the system can evaluate the performance of the tested lens by analyzing the CCD camera image with specially designed software. The major advantage of this system is that it can complete the whole test quickly and without interruption for part replacement, because the test patterns are displayed statically on the monitor and controlled by the notebook.

  7. LED characterization for development of on-board calibration unit of CCD-based advanced wide-field sensor camera of Resourcesat-2A

    NASA Astrophysics Data System (ADS)

    Chatterjee, Abhijit; Verma, Anurag

    2016-05-01

    The Advanced Wide Field Sensor (AWiFS) camera caters to the high temporal resolution requirement of the Resourcesat-2A mission, with a repeat cycle of 5 days. The AWiFS camera has four spectral bands, three in the visible and near-IR and one in the shortwave infrared. The imaging concept in the VNIR bands is based on push-broom scanning using a linear-array silicon charge coupled device (CCD) based Focal Plane Array (FPA). The On-Board Calibration unit for these CCD-based FPAs is used to monitor any degradation of the FPA during the entire mission life. Four LEDs are operated in constant-current mode, and 16 different light intensity levels are generated by electronically changing the CCD exposure throughout the calibration cycle. This paper describes the experimental setup and characterization results of various flight-model visible LEDs (λP = 650 nm) for the development of the On-Board Calibration unit of the Advanced Wide Field Sensor (AWiFS) camera of RESOURCESAT-2A. Various LED configurations have been studied to cover the dynamic range of the 6000-pixel silicon CCD based focal plane array from 20% to 60% of saturation during the night pass of the satellite, in order to identify degradation of detector elements. The paper also compares simulation and experimental results of the CCD output profile for different LED combinations in constant-current mode.

  8. Electron-bombarded CCD detectors for ultraviolet atmospheric remote sensing

    NASA Technical Reports Server (NTRS)

    Carruthers, G. R.; Opal, C. B.

    1983-01-01

    Electronic image sensors based on charge-coupled devices operated in electron-bombarded mode, yielding real-time, remote-readout, photon-limited UV imaging capability, are being developed. The sensors also incorporate fast-focal-ratio Schmidt optics and opaque photocathodes, giving nearly the ultimate possible diffuse-source sensitivity. They can be used for direct imagery of atmospheric emission phenomena and for imaging spectrography with moderate spatial and spectral resolution. The current state of instrument development, laboratory results, planned future developments, and proposed applications of the sensors in spaceflight instrumentation are described.

  9. Systems approach to the design of the CCD sensors and camera electronics for the AIA and HMI instruments on solar dynamics observatory

    NASA Astrophysics Data System (ADS)

    Waltham, N.; Beardsley, S.; Clapp, M.; Lang, J.; Jerram, P.; Pool, P.; Auker, G.; Morris, D.; Duncan, D.

    2017-11-01

    Solar Dynamics Observatory (SDO) is imaging the Sun in many wavelengths near simultaneously and with a resolution ten times higher than the average high-definition television. In this paper we describe our innovative systems approach to the design of the CCD cameras for two of SDO's remote sensing instruments, the Atmospheric Imaging Assembly (AIA) and the Helioseismic and Magnetic Imager (HMI). Both instruments share use of a custom-designed 16 million pixel science-grade CCD and common camera readout electronics. A prime requirement was for the CCD to operate with significantly lower drive voltages than before, motivated by our wish to simplify the design of the camera readout electronics. Here, the challenge lies in the design of circuitry to drive the CCD's highly capacitive electrodes and to digitize its analogue video output signal with low noise and to high precision. The challenge is greatly exacerbated when forced to work with only fully space-qualified, radiation-tolerant components. We describe our systems approach to the design of the AIA and HMI CCD and camera electronics, and the engineering solutions that enabled us to comply with both mission and instrument science requirements.

  10. Hyperspectral CMOS imager

    NASA Astrophysics Data System (ADS)

    Jerram, P. A.; Fryer, M.; Pratlong, J.; Pike, A.; Walker, A.; Dierickx, B.; Dupont, B.; Defernez, A.

    2017-11-01

    CCDs have been used for many years in hyperspectral imaging missions and have been extremely successful. These include the Medium Resolution Imaging Spectrometer (MERIS) [1] on Envisat, the Compact High Resolution Imaging Spectrometer (CHRIS) on Proba and the Ozone Monitoring Instrument operating in the UV spectral region. ESA is also planning a number of further missions that are likely to use CCD technology (Sentinel 3, 4 and 5). However, CMOS sensors have a number of advantages which mean that they will probably be used for hyperspectral applications in the longer term. There are two main advantages of CMOS sensors. First, a hyperspectral image consists of spectral lines with large differences in intensity; in a frame-transfer CCD the faint spectral lines have to be transferred through the part of the imager illuminated by intense lines. This can lead to cross-talk, and whilst the problem can be reduced by the use of split frame transfer and faster line rates, CMOS sensors do not require a frame transfer and hence inherently do not suffer from it. Second, with a CMOS sensor the intense spectral lines can be read multiple times within a frame to give a significant increase in dynamic range. We describe the design and initial tests of a CMOS sensor for use in hyperspectral applications. The device has been designed to give as high a dynamic range as possible with minimum cross-talk. The sensor has been manufactured on high-resistivity epitaxial silicon wafers and is back-thinned but left relatively thick in order to obtain the maximum quantum efficiency across the entire spectral range.

  11. 3D digital image correlation using single color camera pseudo-stereo system

    NASA Astrophysics Data System (ADS)

    Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang

    2017-10-01

    Three-dimensional digital image correlation (3D-DIC) has been widely used by industry to measure 3D contours and whole-field displacement/strain. In this paper, a novel single-color-camera 3D-DIC setup using a reflection-based pseudo-stereo system is proposed. Compared to the conventional single-camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system captures both views with the whole CCD chip, without reducing the spatial resolution. In addition, as in a conventional 3D-DIC system, the centers of the two views lie at the center of the CCD chip, which minimizes image distortion relative to the conventional pseudo-stereo system. The two views overlapped on the CCD are separated in the color domain, and the standard 3D-DIC algorithm can be applied directly for the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
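
    Because the two overlapped views are separated in the color domain, extracting them reduces to selecting two color channels of the demosaiced frame. The sketch below assumes, purely for illustration, that one optical path is filtered red and the other blue; the real channel assignment depends on the filters placed in the two mirror paths.

    ```python
    import numpy as np

    def split_views_by_color(rgb_frame, left_channel=0, right_channel=2):
        """Separate the two overlapped views of a color pseudo-stereo system.

        If the two optical paths are filtered red and blue, the red channel
        carries one view and the blue channel the other, each at the full
        sensor resolution; the channel indices here are illustrative.
        """
        rgb_frame = np.asarray(rgb_frame)
        view_left = rgb_frame[..., left_channel]
        view_right = rgb_frame[..., right_channel]
        return view_left, view_right
    ```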

  12. Novel low-cost vision-sensing technology with controllable exposure time for welding

    NASA Astrophysics Data System (ADS)

    Zhang, Wenzeng; Wang, Bin; Chen, Nian; Cao, Yipeng

    2005-02-01

    In robot welding, the position of the welding seam and the shape of the weld pool are detected by a CCD camera for quality control and real-time seam tracking. With some welding methods, such as TIG welding, it is difficult to always obtain a clear welding image. A novel idea is proposed in which the exposure time of the CCD camera is automatically controlled by the arc voltage or the arc luminance in order to obtain clear welding images. A set of special devices and circuits is added to a common industrial CCD camera so that the start and end of exposure can be flexibly controlled through the internal clearing signal of the accumulated charge. Two special vision sensors based on this idea have been developed; their image capture can be triggered by the arc voltage and by the variation of the arc luminance, respectively. Two prototypes have been designed and manufactured. Experiments show that they can stably capture clear welding images at the appointed moment, which provides a basis for feedback control of automatic welding.

  13. Soft x-ray imager (SXI) onboard the NeXT satellite

    NASA Astrophysics Data System (ADS)

    Tsuru, Takeshi Go; Takagi, Shin-Ichiro; Matsumoto, Hironori; Inui, Tatsuya; Ozawa, Midori; Koyama, Katsuji; Tsunemi, Hiroshi; Hayashida, Kiyoshi; Miyata, Emi; Ozawa, Hideki; Touhiguchi, Masakuni; Matsuura, Daisuke; Dotani, Tadayasu; Ozaki, Masanobu; Murakami, Hiroshi; Kohmura, Takayoshi; Kitamoto, Shunji; Awaki, Hisamitsu

    2006-06-01

    We give an overview and the current status of the development of the Soft X-ray Imager (SXI) onboard the NeXT satellite. SXI is an X-ray CCD camera placed at the focal plane of the Soft X-ray Telescopes for Imaging (SXT-I) onboard NeXT. The pixel size and format of the CCD are 24 x 24 μm (IA) and 2048 x 2048 x 2 (IA+FS). Currently, we have been developing two types of CCD as candidates for SXI in parallel. One is a front-illuminated CCD with a moderately thick depletion layer (70-100 μm), as the baseline plan. The other is the goal plan, in which we develop a back-illuminated CCD with a thick depletion layer (200-300 μm). For the baseline plan, we successfully developed the prototype model 'CCD-NeXT1' with a pixel size of 12 μm x 12 μm and a CCD size of 24 mm x 48 mm. The depletion layer of this CCD reaches 75-85 μm. The goal plan is realized by introducing a new type of CCD, the 'P-channel CCD', which collects holes instead of the electrons collected in the common 'N-channel CCD'. By processing a test model of the P-channel CCD we have confirmed high quantum efficiency above 10 keV with an equivalent depletion layer of 300 μm. A back-illuminated P-channel CCD with a 200 μm depletion layer and an aluminum coating for optical blocking has also been successfully developed. We have also been developing a thermo-electric cooler (TEC) that mechanically supports the CCD wafer without standoff insulators, in order to reduce the thermal input to the CCD through the standoff insulators. We are also designing the sensor housing and the onboard electronics for CCD clocking, readout and digital processing of the frame data.

  14. Technical guidance for the development of a solid state image sensor for human low vision image warping

    NASA Technical Reports Server (NTRS)

    Vanderspiegel, Jan

    1994-01-01

    This report surveys different technologies and approaches to realize sensors for image warping. The goal is to study the feasibility, technical aspects, and limitations of making an electronic camera with special geometries which implements certain transformations for image warping. This work was inspired by the research done by Dr. Juday at NASA Johnson Space Center on image warping. The study has looked into different solid-state technologies to fabricate image sensors. It is found that among the available technologies, CMOS is preferred over CCD technology. CMOS provides more flexibility to design different functions into the sensor, is more widely available, and is a lower cost solution. By using an architecture with row and column decoders one has the added flexibility of addressing the pixels at random, or read out only part of the image.

  15. Camera-on-a-Chip

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Jet Propulsion Laboratory's research on a second generation, solid-state image sensor technology has resulted in the Complementary Metal- Oxide Semiconductor Active Pixel Sensor (CMOS), establishing an alternative to the Charged Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products of their own based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS "active-pixel" digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years. The CMOS sensors draw less energy, they use the same manufacturing platform as most microprocessors and memory chips, and they allow on-chip programming of frame size, exposure, and other parameters.

  16. Event-based Sensing for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Cohen, G.; Afshar, S.; van Schaik, A.; Wabnitz, A.; Bessell, T.; Rutten, M.; Morreale, B.

    A revolutionary type of imaging device, known as a silicon retina or event-based sensor, has recently been developed and is gaining in popularity in the field of artificial vision systems. These devices are inspired by a biological retina and operate in a significantly different way to traditional CCD-based imaging sensors. While a CCD produces frames of pixel intensities, an event-based sensor produces a continuous stream of events, each of which is generated when a pixel detects a change in log light intensity. These pixels operate asynchronously and independently, producing an event-based output with high temporal resolution. There are also no fixed exposure times, allowing these devices to offer a very high dynamic range independently for each pixel. Additionally, these devices offer high-speed, low-power operation and a sparse spatiotemporal output. As a consequence, the data from these sensors must be interpreted in a significantly different way to data from traditional imaging sensors, and this paper explores the advantages this technology provides for space imaging. The applicability and capabilities of event-based sensors for SSA applications are demonstrated through telescope field trials. Trial results have confirmed that the devices are capable of observing resident space objects from LEO through to GEO orbital regimes. Significantly, observations of RSOs were made during both day-time and night-time (terminator) conditions without modification to the camera or optics. The event-based sensor's ability to image stars and satellites during day-time hours offers a dramatic capability increase for terrestrial optical sensors. This paper shows the field testing and validation of two different architectures of event-based imaging sensors. An event-based sensor's asynchronous output has an intrinsically low data rate. In addition to low-bandwidth communications requirements, the low weight, low power and high speed make these devices ideally suited to the demanding challenges of space-based SSA systems. Results from these experiments and the systems developed highlight the applicability of event-based sensors to ground- and space-based SSA tasks.
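
    For readers used to frame-based sensors, the event stream can be summarized by accumulating events into a 2D histogram over a chosen time window. The sketch below assumes a generic (x, y, timestamp, polarity) event format, which is an assumption rather than the specific camera interface used in the trials; it only illustrates how such data can be brought back into a frame-like form, at the cost of the fine temporal structure.

    ```python
    import numpy as np

    def accumulate_events(events, width, height, t_start, t_end):
        """Accumulate an event stream into a 2D histogram ("event frame").

        `events` is an iterable of (x, y, timestamp, polarity) tuples;
        only events with t_start <= timestamp < t_end are counted, and
        polarity is ignored (ON and OFF events are counted alike).
        """
        frame = np.zeros((height, width), dtype=np.int32)
        for x, y, t, p in events:
            if t_start <= t < t_end:
                frame[y, x] += 1
        return frame
    ```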

  17. Method of orthogonally splitting imaging pose measurement

    NASA Astrophysics Data System (ADS)

    Zhao, Na; Sun, Changku; Wang, Peng; Yang, Qian; Liu, Xintong

    2018-01-01

    In order to meet the needs of aviation and machinery manufacturing for pose measurement with high precision, fast speed and wide measurement range, and to resolve the contradiction between the measurement range and resolution of a vision sensor, this paper proposes an orthogonally splitting imaging pose measurement method. An orthogonally splitting imaging vision sensor is designed and realized, and a pose measurement system is established. The vision sensor consists of an imaging lens, a beam splitter prism, cylindrical lenses and dual linear CCDs. The dual linear CCDs each acquire one-dimensional image coordinate data of the target point, and the two data sets together restore the two-dimensional image coordinates of the target point. According to the characteristics of the imaging system, a nonlinear distortion model is established to correct distortion. Based on cross-ratio invariance, a polynomial equation is established and solved by the least-squares fitting method. After distortion correction, the measurement model of the vision sensor is established and the intrinsic parameters are determined for calibration. An array of feature points for calibration is built by placing a planar target in several different positions. An iterative optimization method is presented to solve the parameters of the model. The experimental results show that the field angle is 52°, the focus distance is 27.40 mm, the image resolution is 5185×5117 pixels, the displacement measurement error is less than 0.1 mm, and the rotation angle measurement error is less than 0.15°. The method of orthogonally splitting imaging pose measurement can satisfy pose measurement requirements of high precision, fast speed and wide measurement range.

  18. Camera sensor arrangement for crop/weed detection accuracy in agronomic images.

    PubMed

    Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo

    2013-04-02

    In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning in the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination, existing in outdoor environments, is also an important factor affecting the image accuracy. This paper is exclusively focused on two main issues, always with the goal to achieve the highest image accuracy in Precision Agriculture applications, making the following two main contributions: (a) camera sensor arrangement, to adjust extrinsic parameters and (b) design of strategies for controlling the adverse illumination effects.

  19. A safety monitoring system for taxi based on CMOS imager

    NASA Astrophysics Data System (ADS)

    Liu, Zhi

    2005-01-01

    CMOS image sensors are now increasingly competitive with their CCD counterparts, while adding advantages such as no blooming, simpler driving requirements and the potential for on-chip integration of the sensor, analogue circuitry, and digital processing functions. A safety monitoring system for taxis based on a CMOS imager, which can record the scene when unusual circumstances occur, is described in this paper. The monitoring system is based on a CMOS imager (OV7120) that outputs digital image data through a parallel pixel data port. The system consists of a CMOS image sensor, a large-capacity NAND FLASH ROM, a USB interface chip and a microcontroller (AT90S8515). The structure of the whole system and the test data are discussed and analyzed in detail.

  20. Coded aperture detector: an image sensor with sub 20-nm pixel resolution.

    PubMed

    Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick

    2014-08-11

    We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths.
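
    The digital reconstruction step is, in essence, a circular cross-correlation of the measured intensities with the URA's matched decoding pattern, which works because a URA's periodic autocorrelation is (up to a constant) a delta function. A minimal FFT-based sketch is shown below; the pre-computed decoding array is an assumed input, and this is a schematic of the convolution step described in the abstract rather than the authors' exact software.

    ```python
    import numpy as np

    def ura_reconstruct(measured, decoding):
        """Reconstruct an image from coded-aperture measurements.

        `measured` holds the intensities recorded while the URA mask is
        scanned over the image plane; `decoding` is the URA's matched
        decoding pattern (same shape as `measured`). Circular
        cross-correlation is computed via the FFT.
        """
        M = np.fft.fft2(np.asarray(measured, dtype=np.float64))
        D = np.fft.fft2(np.asarray(decoding, dtype=np.float64))
        return np.real(np.fft.ifft2(M * np.conj(D)))
    ```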

  1. Fourier transform digital holographic adaptive optics imaging system

    PubMed Central

    Liu, Changgeng; Yu, Xiao; Kim, Myung K.

    2013-01-01

    A Fourier transform digital holographic adaptive optics imaging system and its basic principles are proposed. The CCD is placed at the exact Fourier transform plane of the pupil of the eye lens. The spherical curvature introduced by all optics except the eye lens itself is eliminated. The CCD is also at the image plane of the target. The point-spread function of the system is directly recorded, making it easier to determine the correct guide-star hologram. Also, the light signal is stronger at the CCD, especially for phase-aberration sensing. Numerical propagation is avoided. The sensor aperture does not limit the resolution, and the possibility of using low-coherence or incoherent illumination is opened up. The system becomes more efficient and flexible. Although it is intended for ophthalmic use, it also shows potential application in microscopy. The robustness and feasibility of this compact system are demonstrated by simulations and experiments using scattering objects. PMID:23262541

  2. A Simple Method Based on the Application of a CCD Camera as a Sensor to Detect Low Concentrations of Barium Sulfate in Suspension

    PubMed Central

    de Sena, Rodrigo Caciano; Soares, Matheus; Pereira, Maria Luiza Oliveira; da Silva, Rogério Cruz Domingues; do Rosário, Francisca Ferreira; da Silva, Joao Francisco Cajaiba

    2011-01-01

    The development of a simple, rapid and low-cost method based on video image analysis and aimed at the detection of low concentrations of precipitated barium sulfate is described. The proposed system is basically composed of a webcam with a CCD sensor and a conventional dichroic lamp. For this purpose, software for processing and analyzing the digital images, based on the RGB (Red, Green and Blue) color system, was developed. The proposed method showed very good repeatability and linearity and also presented higher sensitivity than the standard turbidimetric method. The developed method is presented as a simple alternative for future applications in the study of the precipitation of inorganic salts and also for detecting the crystallization of organic compounds. PMID:22346607
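
    An illustrative reduction of such video data is to average the R, G and B intensities over a region of interest and relate the result (or its change with respect to a blank) to concentration through a separately measured calibration curve. The sketch below assumes a simple rectangular ROI and is not the authors' exact processing software.

    ```python
    import numpy as np

    def mean_rgb(frames, roi):
        """Average R, G and B intensities over a region of interest.

        `frames` is an array of shape (n_frames, height, width, 3) from a
        webcam; `roi` is (row_min, row_max, col_min, col_max). The mean
        channel values would then be mapped to precipitate concentration
        through a calibration curve measured with known suspensions.
        """
        r0, r1, c0, c1 = roi
        stack = np.asarray(frames, dtype=np.float64)[:, r0:r1, c0:c1, :]
        return stack.mean(axis=(0, 1, 2))    # (R, G, B) averages
    ```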

  3. Fully depleted back illuminated CCD

    DOEpatents

    Holland, Stephen Edward

    2001-01-01

    A backside illuminated charge coupled device (CCD) is formed of a relatively thick high resistivity photon sensitive silicon substrate, with frontside electronic circuitry, and an optically transparent backside ohmic contact for applying a backside voltage which is at least sufficient to substantially fully deplete the substrate. A greater bias voltage which overdepletes the substrate may also be applied. One way of applying the bias voltage to the substrate is by physically connecting the voltage source to the ohmic contact. An alternate way of applying the bias voltage to the substrate is to physically connect the voltage source to the frontside of the substrate, at a point outside the depletion region. Thus both frontside and backside contacts can be used for backside biasing to fully deplete the substrate. Also, high resistivity gaps around the CCD channels and electrically floating channel stop regions can be provided in the CCD array around the CCD channels. The CCD array forms an imaging sensor useful in astronomy.

  4. Chromatic Modulator for High Resolution CCD or APS Devices

    NASA Technical Reports Server (NTRS)

    Hartley, Frank T. (Inventor); Hull, Anthony B. (Inventor)

    2003-01-01

    A system for providing high-resolution color separation in electronic imaging. Comb drives controllably oscillate a red-green-blue (RGB) color strip filter system (or otherwise) over an electronic imaging system such as a charge-coupled device (CCD) or active pixel sensor (APS). The color filter is modulated over the imaging array at a rate three or more times the frame rate of the imaging array. In so doing, the underlying active imaging elements are then able to detect separate color-separated images, which are then combined to provide a color-accurate frame which is then recorded as the representation of the recorded image. High pixel resolution is maintained. Registration is obtained between the color strip filter and the underlying imaging array through the use of electrostatic comb drives in conjunction with a spring suspension system.

  5. Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.

    PubMed

    Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio

    2009-01-01

    3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and are reported in this paper. In particular, two main aspects are treated. The first is the calibration of the distance measurements of the SR-4000 camera, which deals with the evaluation of the camera warm-up time, the distance measurement error, and the influence on distance measurements of the camera orientation with respect to the observed object. The second aspect concerns the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high-contrast targets.

  6. Composite x-ray image assembly for large-field digital mammography with one- and two-dimensional positioning of a focal plane array

    NASA Technical Reports Server (NTRS)

    Halama, G.; McAdoo, J.; Liu, H.

    1998-01-01

    To demonstrate the feasibility of a novel large-field digital mammography technique, a 1024 x 1024 pixel Loral charge-coupled device (CCD) focal plane array (FPA) was positioned in a mammographic field with one- and two-dimensional scan sequences to obtain 950 x 1800 pixel and 3600 x 3600 pixel composite images, respectively. These experiments verify that precise positioning of FPAs produced seamless composites and that the CCD mosaic concept has potential for high-resolution, large-field imaging. The proposed CCD mosaic concept resembles a checkerboard pattern with spacing left between the CCDs for the driver and readout electronics. To obtain a complete x-ray image, the mosaic must be repositioned four times, with an x-ray exposure at each position. To reduce the patient dose, a lead shield with appropriately patterned holes is placed between the x-ray source and the patient. The high-precision motorized translation stages and the fiber-coupled-scintillating-screen-CCD sensor assembly were placed in the position usually occupied by the film cassette. Because of the high mechanical precision, seamless composites were constructed from the subimages. This paper discusses the positioning, image alignment procedure, and composite image results. The paper only addresses the formation of a seamless composite image from subimages and will not consider the effects of the lead shield, multiple CCDs, or the speed of motion.

  7. Flat Field Anomalies in an X-ray CCD Camera Measured Using a Manson X-ray Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. J. Haugh and M. B. Schneider

    2008-10-31

    The Static X-ray Imager (SXI) is a diagnostic used at the National Ignition Facility (NIF) to measure the position of the X-rays produced by lasers hitting a gold foil target. The intensity distribution taken by the SXI camera during a NIF shot is used to determine how accurately NIF can aim laser beams. This is critical to proper NIF operation. Imagers are located at the top and the bottom of the NIF target chamber. The CCD chip is an X-ray sensitive silicon sensor, with a large format array (2k x 2k), 24 μm square pixels, and 15 μm thick. A multi-anode Manson X-ray source, operating up to 10 kV and 10 W, was used to characterize and calibrate the imagers. The output beam is heavily filtered to narrow the spectral beam width, giving a typical resolution E/ΔE≈10. The X-ray beam intensity was measured using an absolute photodiode that has accuracy better than 1% up to the Si K edge and better than 5% at higher energies. The X-ray beam provides full CCD illumination and is flat, within ±1% maximum to minimum. The spectral efficiency was measured at 10 energy bands ranging from 930 eV to 8470 eV. We observed an energy dependent pixel sensitivity variation that showed continuous change over a large portion of the CCD. The maximum sensitivity variation occurred at 8470 eV. The geometric pattern did not change at lower energies, but the maximum contrast decreased and was not observable below 4 keV. We were also able to observe debris, damage, and surface defects on the CCD chip. The Manson source is a powerful tool for characterizing the imaging errors of an X-ray CCD imager. These errors are quite different from those found in a visible CCD imager.

  8. Large Format CMOS-based Detectors for Diffraction Studies

    NASA Astrophysics Data System (ADS)

    Thompson, A. C.; Nix, J. C.; Achterkirchen, T. G.; Westbrook, E. M.

    2013-03-01

    Complementary Metal Oxide Semiconductor (CMOS) devices are rapidly replacing CCD devices in many commercial and medical applications. Recent developments in CMOS fabrication have improved their radiation hardness, device linearity, readout noise and thermal noise, making them suitable for x-ray crystallography detectors. Large-format (e.g. 10 cm × 15 cm) CMOS devices with a pixel size of 100 μm × 100 μm are now becoming available that can be butted together on three sides, so that very large-area detectors can be made with no dead regions. Like CCD systems, our CMOS systems use a GdOS:Tb scintillator plate to convert stopping x-rays into visible light, which is then transferred by a fiber-optic plate to the sensitive surface of the CMOS sensor. The amount of light per x-ray on the sensor is much higher in the CMOS system than in a CCD system because the fiber-optic plate is only 3 mm thick, whereas on a CCD system it is highly tapered and much longer. A CMOS sensor is an active pixel matrix in which every pixel is controlled and read out independently of all other pixels. This allows these devices to be read out while the sensor is collecting charge in all the other pixels. For x-ray diffraction detectors this is a major advantage, since image frames can be collected continuously at up to 20 Hz while the crystal is rotated. A complete diffraction dataset can be collected over five times faster than with CCD systems, with lower radiation exposure to the crystal. In addition, since the data are taken in fine-phi-slice mode, the 3D angular positions of diffraction peaks are better determined. We have developed a cooled 6-sensor CMOS detector with an active area of 28.2 × 29.5 cm, 100 μm × 100 μm pixels and a readout rate of 20 Hz. The detective quantum efficiency exceeds 60% over the range 8-12 keV. One-, two- and twelve-sensor systems are also being developed for a variety of scientific applications. Since the sensors are buttable on three sides, even larger systems could be built at reasonable cost.

  9. Design and performance of 4 x 5120-element visible and 2 x 2560-element shortwave infrared multispectral focal planes

    NASA Astrophysics Data System (ADS)

    Tower, J. R.; Cope, A. D.; Pellon, L. E.; McCarthy, B. M.; Strong, R. T.

    1986-06-01

    Two solid-state sensors for use in remote sensing instruments operating in the pushbroom mode are examined. The design and characteristics of the visible/near-infrared (VIS/NIR) device and the short-wavelength infrared (SWIR) device are described. The VIS/NIR is a CCD imager with four parallel sensor lines, each 1024 pixels long; the chip design and filter system of the VIS/NIR are studied. The performance of the VIS/NIR sensor with mask and its system performance are measured. The SWIR is a dual-band line imager consisting of palladium silicide Schottky-barrier detectors coupled to CCD multiplexers; the performance of the device is analyzed. The substrate materials and layout designs used to assemble the 4 x 5120-element VIS/NIR array and the 2 x 2560-element SWIR array are discussed, and the planarity of the butted arrays is verified using a profilometer. The optical and electrical characteristics, and the placement and butting accuracy of the arrays, are evaluated. It is noted that the arrays met or exceeded their expected performance.

  10. A novel imaging method for photonic crystal fiber fusion splicer

    NASA Astrophysics Data System (ADS)

    Bi, Weihong; Fu, Guangwei; Guo, Xuan

    2007-01-01

    Because the structure of photonic crystal fiber (PCF) is very complex, it is very difficult for a traditional fiber fusion splicer to obtain the optical axis information of a PCF. A new optical imaging method is therefore needed to obtain cross-section information of the fiber. Based on the complex character of PCF, a novel high-precision optical imaging system is presented in this article. The system uses a thinned electron-bombarded CCD (EBCCD) image sensor as the imaging element. The thinned EBCCD offers low-light-level performance superior to conventional image-intensifier-coupled CCD approaches, and this high-performance device can provide high contrast and high resolution in low-light-level surveillance imaging. In order to achieve precise focusing of the image, an ultra-high-precision stepping motor is used to adjust the position of the imaging lens. In this way, a clear image of the PCF cross section can be obtained. Further analysis of the PCF cross-section information can then be carried out with digital image processing techniques. Using this cross-section information, different sorts of PCF can be distinguished and parameters such as the size of the PCF air holes and the cladding structure can be computed, providing the necessary analysis data for the PCF fixing, adjustment, alignment, fusion and cleaving systems.

  11. Image sensor for testing refractive error of eyes

    NASA Astrophysics Data System (ADS)

    Li, Xiangning; Chen, Jiabi; Xu, Longyun

    2000-05-01

    It is difficult to detect ametropia and anisometropia in children. An image sensor for testing the refractive error of eyes does not need the cooperation of children and can be used for general surveys of ametropia and anisometropia in children. In our study, photographs are recorded by a CCD element in a digital form that can be directly processed by a computer. In order to process the image accurately by digital techniques, a formula that takes into account the effect of an extended light source and the size of the lens aperture has been deduced, which is more reliable in practice. A computer simulation of the image sensing is carried out to verify the accuracy of the results.

  12. Space infrared telescope pointing control system. Infrared telescope tracking in the presence of target motion

    NASA Technical Reports Server (NTRS)

    Powell, J. D.; Schneider, J. B.

    1986-01-01

    The use of charge-coupled devices, or CCD's, has been documented by a number of sources as an effective means of providing a measurement of spacecraft attitude with respect to the stars. A method exists of defocusing a star image and interpolating its resulting shape over a small subsection of a large CCD array. This yields an increase in the accuracy of the device by better than an order of magnitude over the case when the star image is focused upon a single CCD pixel. This research examines the effect that image motion has upon the overall precision of this star sensor when applied to an orbiting infrared observatory. While CCD's collect energy within the visible spectrum of light, the targets of scientific interest may well have no appreciable visible emissions. Image motion has the effect of smearing the image of the star in the direction of motion during a particular sampling interval. The presence of image motion is incorporated into a Kalman filter for the system, and it is shown that the addition of a gyro command term is adequate to compensate for the effect of image motion in the measurement. The updated gyro model is included in this analysis, but has natural frequencies faster than the projected star tracker sample rate for dim stars. The system state equations are therefore reduced by modeling gyro drift as a white noise process. There exists a tradeoff in the selected star tracker sample time between the CCD, which has improved noise characteristics as sample time increases, and the gyro, which will potentially drift further between long attitude updates. A sample time which minimizes pointing estimation error exists for the random drift gyro model as well as for a random walk gyro model.

  13. A novel method to increase LinLog CMOS sensors' performance in high dynamic range scenarios.

    PubMed

    Martínez-Sánchez, Antonio; Fernández, Carlos; Navarro, Pedro J; Iborra, Andrés

    2011-01-01

    Images from high dynamic range (HDR) scenes must be obtained with minimum loss of information. For this purpose it is necessary to take full advantage of the quantification levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors satisfy the above demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose to use an Adaptive Proportional-Integral-Derivative controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor's maximum dynamic range (120 dB) can be used to acquire good quality images from HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with a sensor response adjustment of less than eight frames when working in real time video mode. At least 67% of the scene entropy can be retained with this method.
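
    A minimal sketch of the kind of control loop described above, in Python: a plain PID term drives the exposure time toward a target saturation level, and the image entropy is computed as the metric that the paper additionally monitors. The gains, limits and helper names here are illustrative assumptions, not the authors' implementation, and the adaptive gain tuning and LinLog response-curve adjustment are not reproduced.

    ```python
    import numpy as np

    def entropy(img, bins=256):
        """Shannon entropy of the image histogram, in bits (metric used alongside saturation)."""
        hist, _ = np.histogram(img, bins=bins, range=(0, 255), density=True)
        p = hist[hist > 0]
        return float(-np.sum(p * np.log2(p)))

    def saturation_level(img, threshold=250):
        """Fraction of pixels at or above the near-saturation threshold."""
        return float(np.mean(img >= threshold))

    class PIDExposureController:
        """Plain PID loop driving exposure time toward a target saturation level.

        Simplified stand-in for the adaptive PID controller described in the
        abstract; the gains below are arbitrary illustration values.
        """
        def __init__(self, kp=0.5, ki=0.05, kd=0.1, target_saturation=0.02):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.target = target_saturation
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, img, exposure_ms):
            error = self.target - saturation_level(img)   # positive -> underexposed
            self.integral += error
            derivative = error - self.prev_error
            self.prev_error = error
            correction = self.kp * error + self.ki * self.integral + self.kd * derivative
            # Scale exposure multiplicatively and keep it within (assumed) sensor limits.
            return float(np.clip(exposure_ms * (1.0 + correction), 0.01, 100.0))

    # One control step per frame:
    # new_exposure_ms = controller.update(latest_frame, current_exposure_ms)
    ```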

  14. Research of optical coherence tomography microscope based on CCD detector

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Xu, Zhongbao; Zhang, Shuomo

    2008-12-01

    The reference wave phase was modulated with a sinusoidal vibrating mirror attached to a piezoelectric transducer (PZT), the integration was performed by a CCD, and the charge storage period of the CCD image sensor was one-quarter of the period of the sinusoidal phase modulation. With the frequency-synchronous detection technique, four images (four frames of the interference pattern) were recorded during one period of the phase modulation. In order to obtain the optimum modulation parameters, the amplitude and phase of the sinusoidal phase modulation were determined by considering the measurement error caused by the additive noise contained in the detected values. The PZT oscillation was controlled by a closed-loop control system based on a PID controller. An ideal discrete digital sine function at 50 Hz with adjustable amplitude was used to drive the vibration of the PZT, and a digital phase-shift technique was used to adjust the vibration phase of the PZT so that the phase of the modulation could reach its optimum value. The CCD detector was triggered by software at 200 Hz. Based on the work above, a small coherent signal masked by the predominant incoherent background was obtained with the CCD detector.
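
    As a rough illustration of combining four frames recorded per modulation period, the sketch below applies the generic four-step quadrature (phase-shifting) formulas, under the simplifying assumption that the four integrated frames behave like interferograms stepped by π/2. The actual algorithm for sinusoidally modulated, CCD-integrated frames involves Bessel-function weights and the optimized modulation amplitude and phase chosen by the authors, which are not reproduced here.

    ```python
    import numpy as np

    def demodulate_four_frames(i1, i2, i3, i4):
        """Generic four-step quadrature demodulation of interference frames.

        Assumes the four frames approximate interferograms with pi/2 phase steps;
        returns the interference phase and a quantity proportional to the coherent
        (fringe) amplitude, with the incoherent background removed.
        """
        i1, i2, i3, i4 = (np.asarray(x, dtype=float) for x in (i1, i2, i3, i4))
        quad = i4 - i2          # ~ 2*B*sin(phi)
        in_phase = i1 - i3      # ~ 2*B*cos(phi)
        phase = np.arctan2(quad, in_phase)
        amplitude = 0.5 * np.sqrt(quad ** 2 + in_phase ** 2)
        return phase, amplitude
    ```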

  15. Development of a driving method suitable for ultrahigh-speed shooting in a 2M-fps 300k-pixel single-chip color camera

    NASA Astrophysics Data System (ADS)

    Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji

    2012-03-01

    We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames each. For every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rate) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that will enable much faster shooting speeds than what is currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.

  16. Radiation tolerant compact image sensor using CdTe photodiode and field emitter array (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Masuzawa, Tomoaki; Neo, Yoichiro; Mimura, Hidenori; Okamoto, Tamotsu; Nagao, Masayoshi; Akiyoshi, Masafumi; Sato, Nobuhiro; Takagi, Ikuji; Tsuji, Hiroshi; Gotoh, Yasuhito

    2016-10-01

    A growing demand for incident detection has been recognized since the Great East Japan Earthquake and the successive accidents at the Fukushima nuclear power plant in 2011. Radiation-tolerant image sensors are powerful tools for collecting crucial information in the initial stages of such incidents. However, semiconductor-based image sensors such as CMOS and CCD have limited tolerance to radiation exposure. Image sensors used in nuclear facilities are conventional vacuum tubes using thermal cathodes, which have large size and high power consumption. In this study, we propose a compact image sensor composed of a CdTe-based photodiode and a matrix-driven Spindt-type electron beam source called a field emitter array (FEA). The basic principle of an FEA-based image sensor is similar to that of conventional Vidicon-type camera tubes, but the electron source is replaced by an FEA instead of a thermal cathode. The use of a field emitter as an electron source should enable significant size reduction while maintaining high radiation tolerance. Current research on radiation-tolerant FEAs and the development of CdTe-based photoconductive films will be presented.

  17. Design Method For Ultra-High Resolution Linear CCD Imagers

    NASA Astrophysics Data System (ADS)

    Sheu, Larry S.; Truong, Thanh; Yuzuki, Larry; Elhatem, Abdul; Kadekodi, Narayan

    1984-11-01

    This paper presents a design method to achieve ultra-high resolution linear imagers. The method utilizes advanced design rules and novel staggered bilinear photosensor arrays with quadrilinear shift registers. Design constraints in the detector arrays and shift registers are analyzed. An imager architecture to achieve ultra-high resolution is presented. The characteristics of MTF, aliasing, speed, transfer efficiency and the fine photolithography requirements associated with this architecture are also discussed. A CCD imager with an advanced 1.5 μm minimum feature size was fabricated. It is intended as a test vehicle for the next generation of small-sampling-pitch, ultra-high resolution CCD imagers. Standard double-poly, two-phase shift registers were fabricated at an 8 μm pitch using the advanced design rules. A special process step that blocked the source-drain implant from the shift register area was invented. This guaranteed excellent performance of the shift registers regardless of the small poly overlaps. A charge transfer efficiency of better than 0.99995 and a maximum transfer speed of 8 MHz were achieved. The imager showed excellent performance. The dark current was less than 0.2 mV/ms, saturation 250 mV, adjacent photoresponse non-uniformity ±4% and responsivity 0.7 V/(μJ/cm2) for the 8 μm x 6 μm photosensor size. The MTF was 0.6 at 62.5 cycles/mm. These results confirm the feasibility of the next generation of ultra-high resolution CCD imagers.

  18. Adjustment of multi-CCD-chip-color-camera heads

    NASA Astrophysics Data System (ADS)

    Guyenot, Volker; Tittelbach, Guenther; Palme, Martin

    1999-09-01

    The principle of beam-splitter multi-chip cameras consists in splitting an image into multiple images of different spectral ranges and in distributing these onto separate black-and-white CCD sensors. The resulting electrical signals from the chips are recombined to produce a high quality color picture on the monitor. Because this principle guarantees higher resolution and sensitivity in comparison to conventional single-chip camera heads, the greater effort is acceptable. Furthermore, multi-chip cameras obtain the complete spectral information for each individual object point, while single-chip systems must rely on interpolation. In a joint project, Fraunhofer IOF and STRACON GmbH, and in future COBRA electronic GmbH, are developing methods for designing the optics and dichroic mirror system of such prism color beam splitter devices. Additionally, techniques and equipment for the alignment and assembly of color-beam-splitter multi-CCD devices on the basis of gluing with UV-curable adhesives have been developed.

  19. Automatic Welding System of Aluminum Pipe by Monitoring Backside Image of Molten Pool Using Vision Sensor

    NASA Astrophysics Data System (ADS)

    Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo

    An automatic welding system using Tungsten Inert Gas (TIG) welding with a vision sensor for the welding of aluminum pipe was constructed. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5 in a fixed position with a moving welding torch and an AC welding machine. The monitoring system consists of a vision sensor using a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image was processed to recognize the edge of the molten pool by an image processing algorithm. A neural network model for welding speed control was constructed to perform the process automatically. The experimental results show the effectiveness of the control system, confirmed by good detection of the molten pool and sound welds.

  20. Rigorous Photogrammetric Processing of CHANG'E-1 and CHANG'E-2 Stereo Imagery for Lunar Topographic Mapping

    NASA Astrophysics Data System (ADS)

    Di, K.; Liu, Y.; Liu, B.; Peng, M.

    2012-07-01

    Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of the landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinates of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points are different from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining the EOPs by correcting the attitude angle bias, and 2) refining the interior orientation model by calibrating the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1 and that the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) products are automatically generated.

  1. Accuracy of Conventional and Digital Radiography in Detecting External Root Resorption

    PubMed Central

    Mesgarani, Abbas; Haghanifar, Sina; Ehsani, Maryam; Yaghub, Samereh Dokhte; Bijani, Ali

    2014-01-01

    Introduction: External root resorption (ERR) is associated with physiological and pathological dissolution of mineralized tissues by clastic cells, and radiography is one of the most important methods for its diagnosis. The aim of this experimental study was to evaluate the accuracy of conventional intraoral radiography (CR) in comparison with digital radiographic techniques, i.e. charge-coupled device (CCD) and photo-stimulable phosphor (PSP) sensors, in the detection of ERR. Methods and Materials: This study was performed on 80 extracted human mandibular premolars. After taking separate initial periapical radiographs with the CR technique, CCD and PSP sensors, artificial defects resembling ERR with variable sizes were created in the apical half of the mesial, distal and buccal surfaces of the teeth. Ten teeth were used as control samples without any resorption. The radiographs were then repeated with 2 different exposure times and the images were observed by 3 observers. Data were analyzed using SPSS version 17 and chi-squared and Cohen's Kappa tests with a 95% confidence interval (CI=95%). Results: The CCD had the highest percentage of correct assessment compared to the CR and PSP sensors, although the difference was not significant (P=0.39). It was shown that a higher dosage of radiation increases the accuracy of diagnosis; however, this was only significant for the CCD sensor (P=0.02). Also, the accuracy of diagnosis increased with the increase in the size of the lesion (P=0.001). Conclusion: No statistically significant difference was observed between conventional and digital radiographic techniques for the accurate detection of ERR. PMID:25386202
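
    Inter-observer agreement of the kind analyzed in this study (Cohen's kappa) can be computed directly from two observers' ratings; below is a minimal sketch with made-up example data (the study's actual SPSS analysis and data are not reproduced).

    ```python
    import numpy as np

    def cohens_kappa(ratings_a, ratings_b):
        """Cohen's kappa for two raters giving categorical ratings to the same items."""
        a = np.asarray(ratings_a)
        b = np.asarray(ratings_b)
        categories = np.union1d(a, b)
        n = len(a)
        # Build the agreement (confusion) matrix between the two raters.
        confusion = np.zeros((len(categories), len(categories)))
        for x, y in zip(a, b):
            confusion[np.searchsorted(categories, x), np.searchsorted(categories, y)] += 1
        observed = np.trace(confusion) / n
        expected = np.sum(confusion.sum(axis=0) * confusion.sum(axis=1)) / n ** 2
        return (observed - expected) / (1.0 - expected)

    # Hypothetical example: 1 = "resorption present", 0 = "absent" for ten surfaces.
    rater1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    rater2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
    print(round(cohens_kappa(rater1, rater2), 3))  # 0.8: agreement beyond chance
    ```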

  2. Multi-Image Registration for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2002-01-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
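
    The second registration method (user-selected control points plus regression) amounts to fitting a spatial transform between corresponding points by least squares. The sketch below fits an affine transform with NumPy; the affine model, the control-point values and the band names are illustrative assumptions, since the abstract does not state the transform order used.

    ```python
    import numpy as np

    def fit_affine(src_pts, dst_pts):
        """Least-squares affine transform mapping src control points to dst.

        src_pts, dst_pts: (N, 2) arrays of matching (x, y) control points, N >= 3.
        Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1].
        """
        src = np.asarray(src_pts, dtype=float)
        dst = np.asarray(dst_pts, dtype=float)
        design = np.hstack([src, np.ones((len(src), 1))])        # (N, 3)
        coeffs, *_ = np.linalg.lstsq(design, dst, rcond=None)    # (3, 2)
        return coeffs.T                                          # (2, 3)

    def warp_points(affine, pts):
        pts = np.asarray(pts, dtype=float)
        return np.hstack([pts, np.ones((len(pts), 1))]) @ affine.T

    # Hypothetical control points picked in the LWIR and visible-band images.
    lwir = [(10, 12), (200, 15), (15, 180), (210, 190)]
    vis = [(32, 40), (415, 44), (40, 372), (430, 392)]
    A = fit_affine(lwir, vis)
    print(warp_points(A, [(100, 100)]))  # visible-band location of an LWIR pixel
    ```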

  3. Optical design of a novel instrument that uses the Hartmann-Shack sensor and Zernike polynomials to measure and simulate customized refraction correction surgery outcomes and patient satisfaction

    NASA Astrophysics Data System (ADS)

    Yasuoka, Fatima M. M.; Matos, Luciana; Cremasco, Antonio; Numajiri, Mirian; Marcato, Rafael; Oliveira, Otavio G.; Sabino, Luis G.; Castro N., Jarbas C.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.

    2016-03-01

    An optical system that conjugates the patient's pupil to the plane of a Hartmann-Shack (HS) wavefront sensor has been simulated using optical design software, and an optical bench prototype has been mounted using a mechanical eye device, beam splitter, illumination system, lenses, mirrors, mirrored prism, movable mirror, wavefront sensor and CCD camera. The mechanical eye device is used to simulate aberrations of the eye. From this device the rays are emitted and travel through the beam splitter to the optical system. Some rays fall on the CCD camera and others pass through the optical system and finally reach the sensor. Eye models based on typical in vivo eye aberrations are constructed using the optical design software Zemax. The simulated HS images for each case are acquired, and these images are processed using customized techniques. The simulated and real images for low-order aberrations are compared using centroid coordinates to assure that the optical system is constructed precisely enough to match the simulated system. Afterwards, a simulated version of the retinal image is constructed to show how these typical eyes would perceive an optotype positioned 20 ft away. Certain personalized corrections are allowed by eye doctors based on different Zernike polynomial values, and the optical images are rendered with the new parameters. Optical images of how the eye would see with or without correction of certain aberrations are generated in order to determine which aberrations can be corrected and to what degree. The patient can then "personalize" the correction to their own satisfaction. This new approach to wavefront sensing is a promising change of paradigm towards the betterment of the patient-physician relationship.

  4. Rolling Shutter Effect aberration compensation in Digital Holographic Microscopy

    NASA Astrophysics Data System (ADS)

    Monaldi, Andrea C.; Romero, Gladis G.; Cabrera, Carlos M.; Blanc, Adriana V.; Alanís, Elvio E.

    2016-05-01

    Due to the sequential-readout nature of most CMOS sensors, each row of the sensor array is exposed at a different time, resulting in the so-called rolling shutter effect that induces geometric distortion in the image if the video camera or the object moves during image acquisition. Particularly in digital hologram recording, while the sensor progressively captures each row of the hologram, the interferometric fringes can oscillate due to external vibrations and/or noise even when the object under study remains motionless. The sensor records each hologram row at a different instant of these disturbances. As a final effect, the phase information is corrupted, degrading the quality of the reconstructed holograms. We present a fast and simple method for compensating this effect based on image processing tools. The method is exemplified with holograms of static microscopic biological objects. The results encourage incorporating CMOS sensors instead of CCDs in Digital Holographic Microscopy, due to their better resolution and lower cost.

  5. Planar and finger-shaped optical tactile sensors for robotic applications

    NASA Technical Reports Server (NTRS)

    Begej, Stefan

    1988-01-01

    Progress is described regarding the development of optical tactile sensors specifically designed for application to dexterous robotics. These sensors operate on optical principles involving the frustration of total internal reflection at a waveguide/elastomer interface and produce a grey-scale tactile image that represents the normal (vertical) forces of contact. The first tactile sensor discussed is a compact, 32 x 32 planar sensor array intended for mounting on a parallel-jaw gripper. Optical fibers were employed to convey the tactile image to a CCD camera and microprocessor-based image analysis system. The second sensor had the shape and size of a human fingertip and was designed for a dexterous robotic hand. It contained 256 sensing sites (taxels) distributed in a dual-density pattern that included a tactile fovea near the tip measuring 13 x 13 mm and containing 169 taxels. The design and construction details of these tactile sensors are presented, in addition to photographs of tactile imprints.

  6. Flat field anomalies in an x-ray charge coupled device camera measured using a Manson x-ray source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haugh, M. J.; Schneider, M. B.

    2008-10-15

    The static x-ray imager (SXI) is a diagnostic used at the National Ignition Facility (NIF) to measure the position of the x rays produced by lasers hitting a gold foil target. The intensity distribution taken by the SXI camera during a NIF shot is used to determine how accurately NIF can aim laser beams. This is critical to proper NIF operation. Imagers are located at the top and the bottom of the NIF target chamber. The charge coupled device (CCD) chip is an x-ray sensitive silicon sensor, with a large format array (2k x 2k), 24 μm square pixels, and 15 μm thick. A multianode Manson x-ray source, operating up to 10 kV and 10 W, was used to characterize and calibrate the imagers. The output beam is heavily filtered to narrow the spectral beam width, giving a typical resolution E/ΔE ≈ 10. The x-ray beam intensity was measured using an x-ray photodiode that has an accuracy better than 1% up to the Si K edge and better than 5% at higher energies. The x-ray beam provides full CCD illumination and is flat, within ±1% maximum to minimum. The spectral efficiency was measured at ten energy bands ranging from 930 to 8470 eV. We observed an energy dependent pixel sensitivity variation that showed continuous change over a large portion of the CCD. The maximum sensitivity variation occurred at 8470 eV. The geometric pattern did not change at lower energies, but the maximum contrast decreased and was not observable below 4 keV. We were also able to observe debris, damage, and surface defects on the CCD chip. The Manson source is a powerful tool for characterizing the imaging errors of an x-ray CCD imager. These errors are quite different from those found in a visible CCD imager.

  7. Onboard TDI stage estimation and calibration using SNR analysis

    NASA Astrophysics Data System (ADS)

    Haghshenas, Javad

    2017-09-01

    The electro-optical design of a push-broom space camera for a Low Earth Orbit (LEO) remote sensing satellite is performed based on a noise analysis of TDI sensors for very high GSD and low-light-level missions. It is well demonstrated that the CCD TDI mode of operation provides increased photosensitivity relative to a linear CCD array, without sacrificing spatial resolution. However, for satellite imaging, in order to utilize the advantages which the TDI mode of operation offers, attention should be given to the parameters which affect the image quality of TDI sensors, such as jitter, vibration and noise. A predefined number of TDI stages may not properly satisfy the image quality requirements of the satellite camera. Furthermore, in order to use the whole dynamic range of the sensor, the imager must be capable of setting the TDI stages for every shot based on the affecting parameters. This paper deals with optimally estimating and setting the stages based on tradeoffs among MTF, noise and SNR. On-board SNR estimation is simulated using an atmospheric analysis based on the MODTRAN algorithm in the PcModWin software. Based on the noise models, we have proposed a formulation to estimate the number of TDI stages in such a way as to satisfy the system SNR requirement. The MTF requirement must be satisfied in the same manner. A proper combination of both parameters guarantees full use of the dynamic range along with high SNR and image quality.
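
    A minimal sketch of the kind of on-board stage selection described above, under simple assumptions: only shot, dark and read noise are modeled, the per-stage signal and noise figures are hypothetical, and the MODTRAN-based radiance estimation and the MTF constraint are not included.

    ```python
    import math

    def snr_tdi(n_stages, signal_e, dark_e, read_noise_e):
        """SNR of a TDI line with n_stages; per-stage signal and dark charge in electrons."""
        total_signal = n_stages * signal_e
        noise = math.sqrt(total_signal + n_stages * dark_e + read_noise_e ** 2)
        return total_signal / noise

    def choose_tdi_stages(signal_e, snr_required, dark_e=50.0, read_noise_e=60.0,
                          full_well_e=120000.0, max_stages=96):
        """Smallest number of stages meeting the SNR requirement without saturating."""
        for n in range(1, max_stages + 1):
            if n * signal_e > full_well_e:      # dynamic-range (saturation) limit
                break
            if snr_tdi(n, signal_e, dark_e, read_noise_e) >= snr_required:
                return n
        return None  # requirement cannot be met for this scene radiance

    # Example: a dim scene generating ~800 e-/stage, target SNR of 100 -> 14 stages.
    print(choose_tdi_stages(signal_e=800.0, snr_required=100.0))
    ```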

  8. MOSES: a modular sensor electronics system for space science and commercial applications

    NASA Astrophysics Data System (ADS)

    Michaelis, Harald; Behnke, Thomas; Tschentscher, Matthias; Mottola, Stefano; Neukum, Gerhard

    1999-10-01

    The camera group of the DLR Institute of Space Sensor Technology and Planetary Exploration develops imaging instruments for scientific and space applications. One example is the ROLIS imaging system of the ESA scientific space mission 'Rosetta', which consists of a descent/down-looking imager and a close-up imager. Both are parts of the Rosetta Lander payload and will operate in the extreme environment of a cometary nucleus. The Rosetta Lander Imaging System (ROLIS) introduces a new concept for the sensor electronics, which is referred to as MOSES (Modular Sensor Electronics System). MOSES is a 3D-miniaturized CCD sensor electronics system based on individual modules. Each of the modules has some flexibility and enables simple adaptation to specific application requirements. MOSES is mainly designed for space applications where high performance and high reliability are required. This concept, however, can also be used in other scientific or commercial applications. This paper describes the concept of MOSES, its characteristics, performance and applications.

  9. Consequences of CCD imperfections for cosmology determined by weak lensing surveys: from laboratory measurements to cosmological parameter bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okura, Yuki; Petri, Andrea; May, Morgan

    Weak gravitational lensing causes subtle changes in the apparent shapes of galaxies due to the bending of light by the gravity of foreground masses. By measuring the shapes of large numbers of galaxies (millions in recent surveys, up to tens of billions in future surveys) we can infer the parameters that determine cosmology. Imperfections in the detectors used to record images of the sky can introduce changes in the apparent shape of galaxies, which in turn can bias the inferred cosmological parameters. Here in this paper we consider the effect of two widely discussed sensor imperfections: tree-rings, due to impurity gradients which cause transverse electric fields in the Charge-Coupled Devices (CCD), and pixel-size variation, due to periodic CCD fabrication errors. These imperfections can be observed when the detectors are subject to uniform illumination (flat field images). We develop methods to determine the spurious shear and convergence (due to the imperfections) from the flat-field images. We calculate how the spurious shear when added to the lensing shear will bias the determination of cosmological parameters. We apply our methods to candidate sensors of the Large Synoptic Survey Telescope (LSST) as a timely and important example, analyzing flat field images recorded with LSST prototype CCDs in the laboratory. In conclusion, we find that tree-rings and periodic pixel-size variation present in the LSST CCDs will introduce negligible bias to cosmological parameters determined from the lensing power spectrum, specifically w, Ωm and σ8.

  10. Consequences of CCD imperfections for cosmology determined by weak lensing surveys: from laboratory measurements to cosmological parameter bias

    DOE PAGES

    Okura, Yuki; Petri, Andrea; May, Morgan; ...

    2016-06-27

    Weak gravitational lensing causes subtle changes in the apparent shapes of galaxies due to the bending of light by the gravity of foreground masses. By measuring the shapes of large numbers of galaxies (millions in recent surveys, up to tens of billions in future surveys) we can infer the parameters that determine cosmology. Imperfections in the detectors used to record images of the sky can introduce changes in the apparent shape of galaxies, which in turn can bias the inferred cosmological parameters. Here in this paper we consider the effect of two widely discussed sensor imperfections: tree-rings, due to impurity gradients which cause transverse electric fields in the Charge-Coupled Devices (CCD), and pixel-size variation, due to periodic CCD fabrication errors. These imperfections can be observed when the detectors are subject to uniform illumination (flat field images). We develop methods to determine the spurious shear and convergence (due to the imperfections) from the flat-field images. We calculate how the spurious shear when added to the lensing shear will bias the determination of cosmological parameters. We apply our methods to candidate sensors of the Large Synoptic Survey Telescope (LSST) as a timely and important example, analyzing flat field images recorded with LSST prototype CCDs in the laboratory. In conclusion, we find that tree-rings and periodic pixel-size variation present in the LSST CCDs will introduce negligible bias to cosmological parameters determined from the lensing power spectrum, specifically w, Ωm and σ8.

  11. Intelligent imaging systems for automotive applications

    NASA Astrophysics Data System (ADS)

    Thompson, Chris; Huang, Yingping; Fu, Shan

    2004-03-01

    In common with many other application areas, visual signals are becoming an increasingly important information source for many automotive applications. For several years CCD cameras have been used as research tools for a range of automotive applications. Infrared cameras, RADAR and LIDAR are other types of imaging sensors that have also been widely investigated for use in cars. This paper describes work in this field performed in C2VIP over the last decade, starting with Night Vision Systems and looking at various other Advanced Driver Assistance Systems. Emerging from this experience, we make the following observations, which are crucial for "intelligent" imaging systems: 1. careful arrangement of the sensor array; 2. dynamic self-calibration; 3. networking and processing; 4. fusion with other imaging sensors, both at the image level and the feature level, which provides much more flexibility and reliability in complex situations. We will discuss how these problems can be addressed and what the outstanding issues are.

  12. Degradation of optical components in space

    NASA Technical Reports Server (NTRS)

    Blue, M. D.

    1993-01-01

    This report concerns two types of optical components: multilayer filters and mirrors, and self-scanned imaging arrays using charge coupled device (CCD) readouts. For the filters and mirrors, contamination produces a strong reduction in transmittance in the ultraviolet spectral region, but has little or no effect in the visible and infrared spectral regions. Soft substrates containing halides are unsatisfactory as windows or substrates. Materials choice for dielectric layers should also reflect such considerations. Best performance is also found for the harder materials. Compaction of the layers and interlayer diffusion causes a blue shift in center wavelength and loss of throughput. For sensors using CCD's, shifts in gate voltage and reductions in transfer efficiency occur. Such effects in CCD's are in accord with expectations of the effects of the radiation dose on the device. Except for optical fiber, degradation of CCD's represents the only ionizing-radiation induced effect on the Long Duration Exposure Facility (LDEF) optical systems components that has been observed.

  13. Ultrafast Imaging using Spectral Resonance Modulation

    NASA Astrophysics Data System (ADS)

    Huang, Eric; Ma, Qian; Liu, Zhaowei

    2016-04-01

    CCD cameras are ubiquitous in research labs, industry, and hospitals for a huge variety of applications, but there are many dynamic processes in nature that unfold too quickly to be captured. Although tradeoffs can be made between exposure time, sensitivity, and area of interest, ultimately the speed limit of a CCD camera is constrained by the electronic readout rate of the sensors. One potential way to improve the imaging speed is with compressive sensing (CS), a technique that allows for a reduction in the number of measurements needed to record an image. However, most CS imaging methods require spatial light modulators (SLMs), which are subject to mechanical speed limitations. Here, we demonstrate an etalon array based SLM without any moving elements that is unconstrained by either mechanical or electronic speed limitations. This novel spectral resonance modulator (SRM) shows great potential in an ultrafast compressive single pixel camera.

  14. A Novel Method to Increase LinLog CMOS Sensors’ Performance in High Dynamic Range Scenarios

    PubMed Central

    Martínez-Sánchez, Antonio; Fernández, Carlos; Navarro, Pedro J.; Iborra, Andrés

    2011-01-01

    Images from high dynamic range (HDR) scenes must be obtained with minimum loss of information. For this purpose it is necessary to take full advantage of the quantification levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors satisfy the above demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose to use an Adaptive Proportional-Integral-Derivative controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor’s maximum dynamic range (120 dB) can be used to acquire good quality images from HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with a sensor response adjustment of less than eight frames when working in real time video mode. At least 67% of the scene entropy can be retained with this method. PMID:22164083

  15. An LOD with improved breakdown voltage in full-frame CCD devices

    NASA Astrophysics Data System (ADS)

    Banghart, Edmund K.; Stevens, Eric G.; Doan, Hung Q.; Shepherd, John P.; Meisenzahl, Eric J.

    2005-02-01

    In full-frame image sensors, lateral overflow drain (LOD) structures are typically formed along the vertical CCD shift registers to provide a means for preventing charge blooming in the imager pixels. In a conventional LOD structure, the n-type LOD implant is made through the thin gate dielectric stack in the device active area and adjacent to the thick field oxidation that isolates the vertical CCD columns of the imager. In this paper, a novel LOD structure is described in which the n-type LOD impurities are placed directly under the field oxidation and are, therefore, electrically isolated from the gate electrodes. By reducing the electrical fields that cause breakdown at the silicon surface, this new structure permits a larger amount of n-type impurities to be implanted for the purpose of increasing the LOD conductivity. As a consequence of the improved conductance, the LOD width can be significantly reduced, enabling the design of higher resolution imaging arrays without sacrificing charge capacity in the pixels. Numerical simulations with MEDICI of the LOD leakage current are presented that identify the breakdown mechanism, while three-dimensional solutions to Poisson's equation are used to determine the charge capacity as a function of pixel dimension.

  16. Scanning Microscopes Using X Rays and Microchannels

    NASA Technical Reports Server (NTRS)

    Wang, Yu

    2003-01-01

    Scanning microscopes that would be based on microchannel filters and advanced electronic image sensors and that utilize x-ray illumination have been proposed. Because the finest resolution attainable in a microscope is determined by the wavelength of the illumination, the x-ray illumination in the proposed microscopes would make it possible, in principle, to achieve resolutions of the order of nanometers, about a thousand times as fine as the resolution of a visible-light microscope. Heretofore, it has been necessary to use scanning electron microscopes to obtain such fine resolution. In comparison with scanning electron microscopes, the proposed microscopes would likely be smaller, less massive, and less expensive. Moreover, unlike in scanning electron microscopes, it would not be necessary to place specimens under vacuum. The proposed microscopes are closely related to the ones described in several prior NASA Tech Briefs articles, namely, Miniature Microscope Without Lenses (NPO-20218), NASA Tech Briefs, Vol. 22, No. 8 (August 1998), page 43; and Reflective Variants of Miniature Microscope Without Lenses (NPO-20610), NASA Tech Briefs, Vol. 26, No. 9 (September 2002), page 6a. In all of these microscopes, the basic principle of design and operation is the same: the focusing optics of a conventional visible-light microscope are replaced by a combination of a microchannel filter and a charge-coupled-device (CCD) image detector. A microchannel plate containing parallel, microscopic-cross-section holes much longer than they are wide is placed between a specimen and an image sensor, which is typically a CCD. The microchannel plate must be made of a material that absorbs the illuminating radiation reflected or scattered from the specimen. The microchannels must be positioned and dimensioned so that each one is registered with a pixel on the image sensor. Because most of the radiation incident on the microchannel walls becomes absorbed, the radiation that reaches the image sensor consists predominantly of radiation that was launched along the longitudinal direction of the microchannels. Therefore, most of the radiation arriving at each pixel on the sensor must have traveled along a straight line from a corresponding location on the specimen. Thus, there is a one-to-one mapping from a point on the specimen to a pixel in the image sensor, so that the output of the image sensor contains image information equivalent to that from a microscope.

  17. Diffraction-based optical sensor detection system for capture-restricted environments

    NASA Astrophysics Data System (ADS)

    Khandekar, Rahul M.; Nikulin, Vladimir V.

    2008-04-01

    The use of digital cameras and camcorders in prohibited areas presents a growing problem. Piracy in movie theaters results in huge revenue losses to the motion picture industry every year, but still image and video capture may present an even bigger threat if performed in high-security locations. While several attempts are being made to address this issue, an effective solution is yet to be found. We propose to approach this problem using a very commonly observed optical phenomenon. Cameras and camcorders use CCD and CMOS sensors, which include a number of photosensitive elements/pixels arranged in a certain fashion: photosites in CCD sensors and semiconductor elements in CMOS sensors. These are known to reflect a small fraction of incident light, but can also act as a diffraction grating, resulting in an optical response that can be utilized to identify the presence of such a sensor. A laser-based detection system is proposed that accounts for the elements in the optical train of the camera, as well as the eye-safety of the people who could be exposed to the optical beam radiation. This paper presents preliminary experimental data, as well as proof-of-concept simulation results.

  18. Radiographic endodontic working length estimation: comparison of three digital image receptors.

    PubMed

    Athar, Anas; Angelopoulos, Christos; Katz, Jerald O; Williams, Karen B; Spencer, Paulette

    2008-10-01

    This in vitro study was conducted to evaluate the accuracy of the Schick wireless image receptor compared with 2 other types of digital image receptors for measuring the radiographic landmarks pertinent to endodontic treatment. Fourteen human cadaver mandibles with retained molars were selected. A fine endodontic file (#10) was introduced into the canal at random distances from the apex and at the apex of the tooth; images were made with 3 different #2-size image receptors: DenOptix storage phosphor plates, Gendex CCD sensor (wired), and Schick CDR sensor (wireless). Six raters viewed the images for identification of the radiographic apex of the tooth and the tip of a fine (#10) endodontic file. Inter-rater reliability was also assessed. Repeated-measures analysis of variance revealed a significant main effect for the type of image receptor. Raters' error in identifying the structures of interest was significantly higher for the DenOptix storage phosphor plates, whereas the least error was noted with the Schick CDR sensor. A significant interaction effect was observed for rater and type of image receptor used, but this effect contributed only 6% (P < .01; η² = 0.06) toward the outcome of the results. The Schick CDR wireless sensor may be preferable to other solid-state sensors, because there is no cable connecting the sensor to the computer. Further testing of this sensor for other diagnostic tasks is recommended, as well as evaluation of patient acceptance.

  19. Proximal caries detection: Sirona Sidexis versus Kodak Ektaspeed Plus.

    PubMed

    Khan, Emad A; Tyndall, Donald A; Ludlow, John B; Caplan, Daniel

    2005-01-01

    This study compared the accuracy of intraoral film and a charge-coupled device (CCD) receptor for proximal caries detection. Four observers evaluated images of the proximal surfaces of 40 extracted posterior teeth. The presence or absence of caries was scored using a five-point confidence scale. The actual status of each surface was determined from ground-section histology. Responses were evaluated by means of receiver operating characteristic (ROC) analysis. Areas under the ROC curves (Az) were assessed with a paired t-test. The performance of the CCD-based intraoral sensor was not statistically different from that of Ektaspeed Plus film in detecting proximal caries.

  20. Detection of Spatially Unresolved (Nominally Sub-Pixel) Submerged and Surface Targets Using Hyperspectral Data

    DTIC Science & Technology

    2012-09-01

    Feasibility (MT Modeling ) a. Continuum of mixture distributions interpolated b. Mixture infeasibilities calculated for each pixel c. Valid detections...Visible/Infrared Imaging Spectrometer BRDF Bidirectional Reflectance Distribution Function CASI Compact Airborne Spectrographic Imager CCD...filtering (MTMF), and was designed by Healey and Slater (1999) to use “a physical model to generate the set of sensor spectra for a target that will be

  1. Real-time two-dimensional imaging of potassium ion distribution using an ion semiconductor sensor with charged coupled device technology.

    PubMed

    Hattori, Toshiaki; Masaki, Yoshitomo; Atsumi, Kazuya; Kato, Ryo; Sawada, Kazuaki

    2010-01-01

    Two-dimensional real-time observation of potassium ion distributions was achieved using an ion imaging device based on charge-coupled device (CCD) and metal-oxide semiconductor technologies and an ion selective membrane. The CCD potassium ion image sensor was equipped with an array of 32 × 32 pixels (1024 pixels). It could record five frames per second over an area of 4.16 × 4.16 mm². Potassium ion images were produced instantly. The leaching of potassium ion from a 3.3 M KCl Ag/AgCl reference electrode was dynamically monitored in aqueous solution. The potassium ion selective membrane on the semiconductor consisted of plasticized poly(vinyl chloride) (PVC) with bis(benzo-15-crown-5). The addition of a polyhedral oligomeric silsesquioxane to the plasticized PVC membrane greatly improved adhesion of the membrane onto the Si3N4 of the semiconductor surface, and the potential response was stabilized. The potential response was linear with the logarithm of potassium ion concentration from 10^-2 to 10^-5 M. The selectivity coefficients were K(K+,Li+)pot = 10^-2.85, K(K+,Na+)pot = 10^-2.30, K(K+,Rb+)pot = 10^-1.16, and K(K+,Cs+)pot = 10^-2.05.
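
    The selectivity coefficients quoted above are Nikolsky-Eisenman coefficients; the short sketch below shows how they enter the electrode response for monovalent ions, assuming a Nernstian slope at 25 °C (the membrane and device details are of course not modeled, and the ion activities in the example are hypothetical).

    ```python
    import math

    # Nikolsky-Eisenman response of a K+-selective membrane (monovalent ions):
    # E = E0 + S * log10( a_K + sum_j K_pot(K, j) * a_j )
    S = 0.0592  # V/decade, Nernstian slope for z = 1 at 25 degC

    # Selectivity coefficients reported in the abstract (10^-2.85, etc.).
    k_pot = {"Li": 10 ** -2.85, "Na": 10 ** -2.30, "Rb": 10 ** -1.16, "Cs": 10 ** -2.05}

    def electrode_potential(a_k, interferents, e0=0.0):
        """Membrane potential (V) for K+ activity a_k with interfering ion activities."""
        effective = a_k + sum(k_pot[ion] * a for ion, a in interferents.items())
        return e0 + S * math.log10(effective)

    # 1 mM K+ alone vs. 1 mM K+ with a 100 mM Na+ background:
    print(electrode_potential(1e-3, {}))
    print(electrode_potential(1e-3, {"Na": 0.1}))  # ~10 mV positive shift from Na+
    ```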

  2. Exploiting Satellite Focal Plane Geometry for Automatic Extraction of Traffic Flow from Single Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Krauß, T.

    2014-11-01

    The focal plane assembly of most pushbroom scanner satellites is built in such a way that the different multispectral bands, or the multispectral and panchromatic bands, are not all acquired at exactly the same time. This effect is due to offsets of some millimeters between the CCD lines in the focal plane. Exploiting this special configuration allows the detection of objects that move during this small time span. In this paper we present a method for the automatic detection and extraction of moving objects - mainly traffic - from single very high resolution optical satellite images of different sensors. The sensors investigated are WorldView-2, RapidEye, Pléiades and also the new SkyBox satellites. Different sensors require different approaches for detecting moving objects. Since the objects are mapped to different positions only in different spectral bands, the change of spectral properties also has to be taken into account. In the case where the main offset in the focal plane is between the multispectral and the panchromatic CCD lines, as for Pléiades, an approach using weighted integration to obtain nearly identical images is investigated. Other approaches for RapidEye and WorldView-2 are also shown. From these intermediate bands, difference images are calculated, and a method for detecting the moving objects from these difference images is proposed. Based on the presented methods, images from different sensors are processed and the results are assessed for detection quality - how many moving objects can be detected and how many are missed - and for accuracy - how accurate the derived speed and size of the objects are. Finally, the results are discussed and an outlook on possible improvements towards operational processing is presented.
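
    The core detection step can be sketched as follows: two already co-registered, radiometrically matched band images acquired a small time Δt apart are differenced, strong residuals are thresholded as moving-object candidates, and the on-ground speed follows from the pixel displacement, the ground sampling distance and the band-to-band time lag. The threshold, displacement and timing values below are hypothetical, and the measurement of the displacement itself (blob association, template matching) is not shown.

    ```python
    import numpy as np

    def detect_motion_candidates(band_a, band_b, threshold=25.0):
        """Mask of pixels whose band-to-band difference exceeds the threshold."""
        diff = np.abs(band_a.astype(float) - band_b.astype(float))
        return diff > threshold

    def ground_speed(pixel_displacement, gsd_m, dt_s):
        """Speed in m/s and km/h from displacement (pixels), GSD (m) and time lag (s)."""
        v = pixel_displacement * gsd_m / dt_s
        return v, v * 3.6

    # Synthetic example: a bright object present in one band but shifted out of the other.
    rng = np.random.default_rng(0)
    band_a = rng.normal(100, 5, (200, 200))
    band_b = band_a.copy()
    band_b[80:84, 50:54] += 60
    mask = detect_motion_candidates(band_a, band_b)
    print(mask.sum(), "candidate pixels")

    # A vehicle displaced by 4.2 px between bands 0.25 s apart on a 0.5 m GSD image.
    v_ms, v_kmh = ground_speed(4.2, gsd_m=0.5, dt_s=0.25)
    print(round(v_ms, 1), "m/s ~", round(v_kmh, 1), "km/h")
    ```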

  3. A multi-characteristic based algorithm for classifying vegetation in a plateau area: Qinghai Lake watershed, northwestern China

    NASA Astrophysics Data System (ADS)

    Ma, Weiwei; Gong, Cailan; Hu, Yong; Li, Long; Meng, Peng

    2015-10-01

    Remote sensing technology has been broadly recognized for its convenience and efficiency in mapping vegetation, particularly in high-altitude and inaccessible areas where in-situ observations are lacking. In this study, Landsat Thematic Mapper (TM) images and Chinese environmental mitigation satellite CCD sensor (HJ-1 CCD) images, both at 30 m spatial resolution, were employed for identifying and monitoring vegetation types in an area of western China, the Qinghai Lake Watershed (QHLW). A decision classification tree (DCT) algorithm using multiple characteristics, including seasonal TM/HJ-1 CCD time-series data combined with a digital elevation model (DEM) dataset, and a supervised maximum likelihood classification (MLC) algorithm using a single-date TM image were applied to vegetation classification. The accuracy of the two algorithms was assessed using field observation data. Based on the produced vegetation classification maps, it was found that the DCT using multi-season data and geomorphologic parameters was superior to the MLC algorithm using a single-date image, improving the overall accuracy by 11.86% at the second class level and significantly reducing the "salt and pepper" noise. The DCT algorithm applied to TM/HJ-1 CCD time-series data and geomorphologic parameters appeared to be a valuable and reliable tool for monitoring vegetation at the first class level (5 vegetation classes) and the second class level (8 vegetation subclasses). The DCT algorithm using multiple characteristics may provide a theoretical basis and a general approach to the automatic extraction of vegetation types from remote sensing imagery over plateau areas.
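
    As an illustration of a decision-tree classifier driven by multi-seasonal spectral values plus terrain attributes, the sketch below trains a scikit-learn tree on made-up samples. Whether the paper's DCT was expert-defined or learned from training data is not restated here; the feature set, class names and values are hypothetical.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    # Hypothetical training samples: [NDVI_spring, NDVI_summer, elevation_m, slope_deg]
    # with labels 0 = alpine meadow, 1 = shrubland, 2 = bare land (illustrative classes).
    X = np.vstack([
        rng.normal([0.30, 0.65, 3300, 5], [0.05, 0.05, 100, 2], size=(50, 4)),
        rng.normal([0.25, 0.45, 3600, 15], [0.05, 0.05, 100, 3], size=(50, 4)),
        rng.normal([0.08, 0.10, 3900, 20], [0.03, 0.03, 150, 5], size=(50, 4)),
    ])
    y = np.repeat([0, 1, 2], 50)

    clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

    # Classify one pixel's multi-season + terrain feature vector.
    print(clf.predict([[0.28, 0.60, 3350, 6]]))  # expected: alpine meadow (0)
    ```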

  4. Intra-cavity upconversion to 631 nm of images illuminated by an eye-safe ASE source at 1550 nm.

    PubMed

    Torregrosa, A J; Maestre, H; Capmany, J

    2015-11-15

    We report an image wavelength upconversion system. The system mixes an incoming image at around 1550 nm (eye-safe region), illuminated by an amplified spontaneous emission (ASE) fiber source, with a Gaussian beam at 1064 nm generated in a continuous-wave diode-pumped Nd3+:GdVO4 laser. Mixing takes place in a periodically poled lithium niobate (PPLN) crystal placed intra-cavity. The upconverted image obtained by sum-frequency mixing falls around the 631 nm red spectral region, well within the spectral response of standard two-dimensional silicon focal-plane-array sensors, commonly used in charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) video cameras, and of most image intensifiers. The use of ASE illumination provides a noticeable increase in the field of view (FOV) that can be upconverted, compared with coherent laser illumination. The upconverted power allows us to capture real-time video with a standard non-intensified CCD camera.

  5. A Real-Time Ultraviolet Radiation Imaging System Using an Organic Photoconductive Image Sensor†

    PubMed Central

    Okino, Toru; Yamahira, Seiji; Yamada, Shota; Hirose, Yutaka; Odagawa, Akihiro; Kato, Yoshihisa; Tanaka, Tsuyoshi

    2018-01-01

    We have developed a real-time ultraviolet (UV) imaging system that can visualize both invisible UV light and a visible (VIS) background scene in an outdoor environment. As the UV/VIS image sensor, an organic photoconductive film (OPF) imager is employed. The OPF has an intrinsically higher sensitivity in the UV wavelength region than conventional consumer Complementary Metal Oxide Semiconductor (CMOS) image sensors (CIS) or Charge Coupled Devices (CCD). As particular examples, imaging of a hydrogen flame and of a corona discharge is demonstrated. UV images overlaid on background scenes are produced simply by on-board background subtraction. The system is capable of imaging UV signals four orders of magnitude weaker than the VIS background. It is applicable not only to future hydrogen supply stations but also to other UV/VIS monitoring systems requiring UV sensitivity in strong visible radiation environments, such as power supply substations. PMID:29361742

  6. A fast double shutter for CCD-based metrology

    NASA Astrophysics Data System (ADS)

    Geisler, R.

    2017-02-01

    Image-based metrology such as Particle Image Velocimetry (PIV) depends on the comparison of two images of an object taken in fast succession. Cameras for these applications provide the so-called 'double shutter' mode: one frame is captured with a short exposure time and, in direct succession, a second frame with a long exposure time can be recorded. The difference in the exposure times is typically no problem, since illumination is provided by a pulsed light source such as a laser and the measurements are performed in a darkened environment to prevent ambient light from accumulating during the long second exposure time. However, measurements of self-luminous processes (e.g. plasma, combustion ...) as well as experiments in ambient light are difficult to perform and require special equipment (external shutters, high-speed image sensors, multi-sensor systems ...). Unfortunately, all these methods incorporate different drawbacks such as reduced resolution, degraded image quality, decreased light sensitivity or increased susceptibility to decalibration. In the solution presented here, off-the-shelf CCD sensors are used with a special timing to combine neighbouring pixels in a binning-like way. As a result, two frames of short exposure time can be captured in fast succession. They are stored in the on-chip vertical register in a line-interleaved pattern, read out in the common way and separated again by software. The two resultant frames are completely congruent; they exhibit no insensitive lines or line shifts and thus enable sub-pixel accurate measurements. A third frame can be captured at the full resolution, analogous to the double-shutter technique. Image-based measurement techniques such as PIV can benefit from this mode when applied in bright environments. The third frame is useful, e.g., for acceleration measurements or for particle tracking applications.
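
    A sketch of the software separation step: if the raw readout contains the two exposures interleaved line by line, splitting even and odd rows recovers the two frames. The even/odd assumption is illustrative; the actual interleaving pattern and the binning-like charge combination are sensor- and timing-specific.

    ```python
    import numpy as np

    def split_interleaved_frame(raw):
        """Separate a line-interleaved raw CCD readout into two frames.

        Assumes exposure 1 occupies the even rows and exposure 2 the odd rows;
        each returned frame has half the vertical resolution of the raw readout.
        """
        raw = np.asarray(raw)
        frame1 = raw[0::2, :]
        frame2 = raw[1::2, :]
        return frame1, frame2

    # Example with a dummy 8x6 raw readout.
    raw = np.arange(48).reshape(8, 6)
    f1, f2 = split_interleaved_frame(raw)
    print(f1.shape, f2.shape)  # (4, 6) (4, 6)
    ```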

  7. Method for eliminating artifacts in CCD imagers

    DOEpatents

    Turko, B.T.; Yates, G.J.

    1992-06-09

    An electronic method for eliminating artifacts in a video camera employing a charge coupled device (CCD) as an image sensor is disclosed. The method comprises the step of initializing the camera prior to normal read out and includes a first dump cycle period for transferring radiation-generated charge into the horizontal register while the decaying image on the phosphor being imaged is being integrated in the photosites, and a second dump cycle period, occurring after the phosphor image has decayed, for rapidly dumping unwanted smear charge which has been generated in the vertical registers. Image charge is then transferred from the photosites to the vertical registers and read out in conventional fashion. The inventive method allows the video camera to be used in environments having high ionizing radiation content, and to capture images of events of very short duration occurring either within or outside the normal visual wavelength spectrum. Resultant images are free from ghost and smear phenomena caused by insufficient opacity of the registers, and are also free from random damage caused by ionization charges which exceed the charge capacity of the photosites. 3 figs.

  8. Isolation Mounting for Charge-Coupled Devices

    NASA Technical Reports Server (NTRS)

    Goss, W. C.; Salomon, P. M.

    1985-01-01

    CCD's suspended by wires under tension. Remote thermoelectric cooling of the charge coupled device allows vibration-isolating mounting of the CCD assembly alone, without having to suspend the entire mass and bulk of the thermoelectric module. Mounting hardware is simple and light. Developed for charge-coupled devices (CCD's) in an infrared telescope; the approach is adaptable to sensors in a variety of environments, e.g., sensors in nuclear reactors, engine exhausts and plasma chambers.

  9. Superresolution with the focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew

    2011-03-01

    Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.
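
    The channel-separate treatment of the raw mosaic can be pictured with a short sketch that splits a Bayer-pattern image into its R, G1, G2 and B sub-images before any per-channel plenoptic rendering. The RGGB layout assumed here is only one common Bayer arrangement and is not taken from the paper.

        import numpy as np

        def split_bayer_rggb(raw):
            """Split a raw RGGB Bayer mosaic into its four colour sub-images.

            Assumes an RGGB layout (illustrative assumption); GRBG, GBRG or BGGR
            sensors would need the row/column offsets adjusted.
            """
            r  = raw[0::2, 0::2]
            g1 = raw[0::2, 1::2]
            g2 = raw[1::2, 0::2]
            b  = raw[1::2, 1::2]
            return r, g1, g2, b

        # Each sub-image can then be rendered separately by the plenoptic pipeline and
        # recombined into colour only at the end, preserving the sub-pixel sampling.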

  10. Hologram production and representation for corrected image

    NASA Astrophysics Data System (ADS)

    Jiao, Gui Chao; Zhang, Rui; Su, Xue Mei

    2015-12-01

    In this paper, a CCD sensor is used to record distorted images of a homemade grid taken by a wide-angle camera. The distorted images are corrected using position calibration and gray-level correction implemented with VC++ 6.0 and the OpenCV library. Holograms of the corrected pictures are then produced. Clear reconstructed images are obtained by using the Fresnel algorithm in the reconstruction, with the object and reference contributions from the Fresnel diffraction subtracted so as to remove the zero-order part of the reconstructed images. The investigation is useful for optical information processing and image-encryption transmission.
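
    Distortion correction of a wide-angle image from a calibrated camera is commonly performed with OpenCV; the sketch below uses the Python bindings rather than the VC++ 6.0 environment mentioned above, and the file name, camera matrix and distortion coefficients are made-up placeholders, with real values normally obtained from cv2.calibrateCamera() run on images of a known grid.

        import cv2
        import numpy as np

        # Placeholder intrinsics and distortion coefficients (k1, k2, p1, p2, k3).
        K = np.array([[800.0,   0.0, 640.0],
                      [  0.0, 800.0, 360.0],
                      [  0.0,   0.0,   1.0]])
        dist = np.array([-0.30, 0.08, 0.0, 0.0, 0.0])

        img = cv2.imread("grid.png")               # distorted grid image from the CCD
        undistorted = cv2.undistort(img, K, dist)  # remove the wide-angle lens distortion
        cv2.imwrite("grid_corrected.png", undistorted)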

  11. Photonic-crystal membranes for optical detection of single nano-particles, designed for biosensor application.

    PubMed

    Grepstad, Jon Olav; Kaspar, Peter; Solgaard, Olav; Johansen, Ib-Rune; Sudbø, Aasmund S

    2012-03-26

    A sensor designed to detect bio-molecules is presented. The sensor exploits a planar 2D photonic crystal (PC) membrane with sub-micron thickness and through holes, to induce high optical fields that allow detection of nano-particles smaller than the diffraction limit of an optical microscope. We report on our design and fabrication of a PC membrane with a nano-particle trapped inside. We have also designed and built an imaging system where an optical microscope and a CCD camera are used to take images of the PC membrane. Results show how the trapped nano-particle appears as a bright spot in the image. In a first experimental realization of the imaging system, single particles with a radius of 75 nm can be detected.

  12. Effects of space-radiation damage and temperature on CCD noise for the Lyman FUSE mission

    NASA Astrophysics Data System (ADS)

    Murowinski, Richard G.; Gao, Linzhuang; Deen, Mohamed J.

    1993-09-01

    Charge coupled device (CCD) imaging arrays are becoming more frequently used in space vehicles and equipment, especially space-based astronomical telescopes. It is important to understand the effects of radiation on a CCD so that its performance degradation during mission lifetime can be predicted, and so that methods to prevent unacceptable performance degradation can be found. Much recent work by various groups has focused on the problems surrounding the loss of charge transfer efficiency and the increase in dark current and dark current spikes in CCDs. The use of a CCD as the fine error sensor in the Lyman Far Ultraviolet Spectroscopic Explorer (FUSE) is limited by its noise performance. In this work we attempt to understand some of the factors surrounding the noise degradation due to radiation in a space environment. Later, we demonstrate how low frequency noise can be used as a characterization tool for studying proton radiation damage in CCDs.

  13. CCD sensors in synchrotron X-ray detectors

    NASA Astrophysics Data System (ADS)

    Strauss, M. G.; Naday, I.; Sherman, I. S.; Kraimer, M. R.; Westbrook, E. M.; Zaluzec, N. J.

    1988-04-01

    The intense photon flux from advanced synchrotron light sources, such as the 7-GeV synchrotron being designed at Argonne, requires integrating-type detectors. Charge-coupled devices (CCDs) are well suited as synchrotron X-ray detectors. When irradiated indirectly via a phosphor followed by reducing optics, diffraction patterns of 100 cm^2 can be imaged on a 2 cm^2 CCD. With a conversion efficiency of ~1 CCD electron/X-ray photon, a peak saturation capacity of >10^6 X-rays can be obtained. A programmable CCD controller operating at a clock frequency of 20 MHz has been developed. The readout rate is 5 × 10^6 pixels/s and the shift rate in the parallel registers is 10^6 lines/s. The test detector was evaluated in two experiments. In protein crystallography, diffraction patterns were obtained from a lysozyme crystal using a conventional rotating-anode X-ray generator. Based on these results we expect to obtain diffraction images at a synchrotron at a rate of ~1 frame/s, or a complete 3-dimensional data set from a single crystal in ~2 min. In electron energy-loss spectroscopy (EELS), the CCD was used in a parallel detection mode similar to the way array detectors are used in dispersive EXAFS. With a beam current corresponding to 3 × 10^9 electrons/s on the detector, a series of 64 spectra was recorded on the CCD in a continuous sequence without interruption due to readout. The frame-to-frame pixel signal fluctuations had σ = 0.4%, from which DQE = 0.4 was obtained, where the detector conversion efficiency was 2.6 CCD electrons/X-ray photon. These multiple-frame series also showed the time-resolved modulation of the electron microscope optics by stray magnetic fields.
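
    As a reminder, the detective quantum efficiency can be related to the measured frame-to-frame fluctuations through the textbook definition below; this is the standard relation, not necessarily the exact estimator used by the authors.

        \mathrm{DQE} = \frac{(S/N)_{\mathrm{out}}^{2}}{(S/N)_{\mathrm{in}}^{2}}
                     = \frac{1/\sigma_{\mathrm{rel}}^{2}}{N}
        \quad\Longrightarrow\quad
        N = \frac{1}{\mathrm{DQE}\,\sigma_{\mathrm{rel}}^{2}}

    Under this simple Poisson model, σ_rel = 0.4% and DQE = 0.4 would correspond to roughly 1.6 × 10^5 incident X-rays per pixel per frame.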

  14. The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.

    2017-02-01

    Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits, including increasingly high pixel counts and shrinking pixel sizes; nevertheless, they are hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full-well capacity, signal-to-noise ratio, sensitivity, spectral flexibility and, in some cases, imager response time. The recently invented Coded Access Optical Sensor (CAOS) camera platform works in unison with current PDA technology to counter fundamental limitations of PDA-based imagers while still providing high imaging spatial resolution and pixel counts. Engineering the CAOS camera platform with, for example, the Texas Instruments (TI) Digital Micromirror Device (DMD) ushers in a paradigm change in advanced imager design, particularly for extreme-dynamic-range applications.

  15. Restoration of out-of-focus images based on circle of confusion estimate

    NASA Astrophysics Data System (ADS)

    Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto

    2002-11-01

    In this paper a new method for fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (Color Filter Array) images acquired by a typical CCD/CMOS sensor. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored by applying a new inverse filtering technique. This algorithm gives sharp images while reducing ringing and crisping artifacts over a wider region of frequencies. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques found in the literature.

  16. Smart image sensors: an emerging key technology for advanced optical measurement and microsystems

    NASA Astrophysics Data System (ADS)

    Seitz, Peter

    1996-08-01

    Optical microsystems typically include photosensitive devices, analog preprocessing circuitry and digital signal processing electronics. The advances in semiconductor technology have made it possible today to integrate all photosensitive and electronic devices on one 'smart image sensor' or photo-ASIC (application-specific integrated circuit containing photosensitive elements). It is even possible to provide each 'smart pixel' with additional photoelectronic functionality, without compromising the fill factor substantially. This technological capability is the basis for advanced cameras and optical microsystems showing novel on-chip functionality: single-chip cameras with on-chip analog-to-digital converters for less than $10 are advertised; image sensors have been developed including novel functionality such as real-time selectable pixel size and shape, the capability of performing arbitrary convolutions simultaneously with the exposure, as well as variable, programmable offset and sensitivity of the pixels, leading to image sensors with a dynamic range exceeding 150 dB. Smart image sensors have been demonstrated offering synchronous detection and demodulation capabilities in each pixel (lock-in CCD), and conventional image sensors are combined with an on-chip digital processor for complete, single-chip image acquisition and processing systems. Technological problems of the monolithic integration of smart image sensors include offset non-uniformities, temperature variations of electronic properties, imperfect matching of circuit parameters, etc. These problems can often be overcome either by designing additional compensation circuitry or by providing digital correction routines. Where necessary for technological or economic reasons, smart image sensors can also be combined with or realized as hybrids, making use of commercially available electronic components. It is concluded that the possibilities offered by custom smart image sensors will influence the design and the performance of future electronic imaging systems in many disciplines, ranging from optical metrology to machine vision on the factory floor and in robotics applications.

  17. Imaging of transient surface acoustic waves by full-field photorefractive interferometry.

    PubMed

    Xiong, Jichuan; Xu, Xiaodong; Glorieux, Christ; Matsuda, Osamu; Cheng, Liping

    2015-05-01

    A stroboscopic full-field imaging technique based on photorefractive interferometry for the visualization of rapidly changing surface displacement fields using a standard charge-coupled device (CCD) camera is presented. The photorefractive buildup of the space-charge field during and after the probe laser pulses is simulated numerically. The resulting anisotropic diffraction from the refractive-index grating and the interference between the polarization-rotated diffracted reference beam and the transmitted signal beam are modeled theoretically. The method is experimentally demonstrated by full-field imaging of the propagation of photoacoustically generated surface acoustic waves with a temporal resolution of nanoseconds. The surface acoustic wave propagation in a 23 mm × 17 mm area on an aluminum plate was visualized with 520 × 696 pixels of the CCD sensor, yielding a spatial resolution of 33 μm. The short pulse duration (8 ns) of the probe laser enables imaging of SAWs with frequencies up to 60 MHz.
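
    The quoted spatial resolution is consistent with simply dividing the imaged field by the number of CCD pixels along each axis:

        \frac{23\ \text{mm}}{696\ \text{pixels}} \approx 33\ \mu\text{m/pixel},
        \qquad
        \frac{17\ \text{mm}}{520\ \text{pixels}} \approx 33\ \mu\text{m/pixel}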

  18. Fringing in MonoCam Y4 filter images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brooks, J.; Fisher-Levine, M.; Nomerotski, A.

    Here, we study the fringing patterns observed in MonoCam, a camera with a single Large Synoptic Survey Telescope (LSST) CCD sensor. Images were taken at the U.S. Naval Observatory in Flagstaff, Arizona (NOFS) employing its 1.3 m telescope and an LSST y4 filter. Fringing occurs due to the reflection of infrared light (700 nm or longer) from the bottom surface of the CCD, which constructively or destructively interferes with the incident light to produce a net "fringe" pattern superimposed on all images taken. Emission lines from the atmosphere, dominated by hydroxyl (OH) spectra, can change in their relative intensities as the night goes on, producing different fringe patterns in the images taken. We found through several methods that the general shape of the fringe patterns remained constant, though with slight changes in the amplitude and phase of the fringes. Lastly, we also found that a superposition of fringes from two monochromatic lines taken in the lab offered a reasonable description of the sky data.

  19. Fringing in MonoCam Y4 filter images

    DOE PAGES

    Brooks, J.; Fisher-Levine, M.; Nomerotski, A.

    2017-05-05

    Here, we study the fringing patterns observed in MonoCam, a camera with a single Large Synoptic Survey Telescope (LSST) CCD sensor. Images were taken at the U.S. Naval Observatory in Flagstaff, Arizona (NOFS) employing its 1.3 m telescope and an LSST y4 filter. Fringing occurs due to the reflection of infrared light (700 nm or longer) from the bottom surface of the CCD, which constructively or destructively interferes with the incident light to produce a net "fringe" pattern superimposed on all images taken. Emission lines from the atmosphere, dominated by hydroxyl (OH) spectra, can change in their relative intensities as the night goes on, producing different fringe patterns in the images taken. We found through several methods that the general shape of the fringe patterns remained constant, though with slight changes in the amplitude and phase of the fringes. Lastly, we also found that a superposition of fringes from two monochromatic lines taken in the lab offered a reasonable description of the sky data.

  20. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD'), collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to ensure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  1. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD'), collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to ensure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  2. Attitude determination for high-accuracy submicroradian jitter pointing on space-based platforms

    NASA Astrophysics Data System (ADS)

    Gupta, Avanindra A.; van Houten, Charles N.; Germann, Lawrence M.

    1990-10-01

    A description of the requirement definition process is given for a new wideband attitude determination subsystem (ADS) for image motion compensation (IMC) systems. The subsystem uses either lateral accelerometers functioning in differential pairs or gas-bearing gyros as high-frequency sensors, together with CCD-based star trackers as low-frequency sensors. To minimize error, the sensor signals are combined through a mixing filter designed to avoid phase distortion. The two ADS models are introduced in an IMC simulation to predict measurement error, correction capability, and residual image jitter for a variety of system parameters. The IMC three-axis testbed is utilized to simulate an incoming beam in inertial space. Results demonstrate that both mechanical and electronic IMC meet the requirements of image stabilization for space-based observation at submicroradian-jitter levels. Currently available technology may be employed to implement IMC systems.

  3. Honeywell's Compact, Wide-angle Uv-visible Imaging Sensor

    NASA Technical Reports Server (NTRS)

    Pledger, D.; Billing-Ross, J.

    1993-01-01

    Honeywell is currently developing the Earth Reference Attitude Determination System (ERADS). ERADS determines attitude by imaging the entire Earth's limb and a ring of the adjacent star field in the 2800-3000 A band of the ultraviolet. This is achieved through the use of a highly nonconventional optical system, an intensifier tube, and a mega-element CCD array. The optics image a 30 degree region in the center of the field, and an outer region typically from 128 to 148 degrees, which can be adjusted up to 180 degrees. Because of the design employed, the illumination at the outer edge of the field is only some 15 percent below that at the center, in contrast to the drastic rolloffs encountered in conventional wide-angle sensors. The outer diameter of the sensor is only 3 in; the volume and weight of the entire system, including processor, are 1000 cc and 6 kg, respectively.

  4. Miss-distance indicator for tank main gun systems

    NASA Astrophysics Data System (ADS)

    Bornstein, Jonathan A.; Hillis, David B.

    1994-07-01

    The initial development of a passive, automated system to track bullet trajectories near a target to determine the `miss distance,' and the corresponding correction necessary to bring the following round `on target' is discussed. The system consists of a visible wavelength CCD sensor, long focal length optics, and a separate IR sensor to detect the muzzle flash of the firing event; this is coupled to a `PC' based image processing and automatic tracking system designed to follow the projectile trajectory by intelligently comparing frame to frame variation of the projectile tracer image. An error analysis indicates that the device is particularly sensitive to variation of the projectile time of flight to the target, and requires development of algorithms to estimate this value from the 2D images employed by the sensor to monitor the projectile trajectory. Initial results obtained by using a brassboard prototype to track training ammunition are promising.

  5. Vision communications based on LED array and imaging sensor

    NASA Astrophysics Data System (ADS)

    Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    In this paper, we propose a new communication concept called "vision communication", based on an LED array and an image sensor. The system consists of an LED array as the transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. To transmit data, the proposed scheme simultaneously uses digital image processing and optical wireless communication techniques; cognitive communication thus becomes possible with the help of the recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED in the array can emit a multi-spectral optical signal (visible, infrared or ultraviolet light), the data rate can be increased in a manner similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of Sync. data and information data. The Sync. data is used to detect the transmitter area and to calibrate the distorted image snapshots obtained by the image sensor. By matching the optical signaling rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data contained in each image snapshot using image processing and optical wireless communication techniques. Through experiments on a practical test-bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.
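
    A minimal sketch of the per-frame decoding step is shown below, assuming the transmitter region has already been located from the Sync. data and that each LED maps to a known pixel neighbourhood; the patch size, threshold and LED coordinates are illustrative assumptions, not details from the paper.

        import numpy as np

        def decode_frame(gray_frame, led_centers, patch=5, threshold=128):
            """Decode one bit per LED from a single grayscale camera frame.

            led_centers: list of (row, col) pixel coordinates of each LED, assumed
            known from the Sync.-data detection and calibration step.
            """
            half = patch // 2
            bits = []
            for r, c in led_centers:
                roi = gray_frame[r - half:r + half + 1, c - half:c + half + 1]
                bits.append(1 if roi.mean() > threshold else 0)
            return bits

        # Example: a synthetic 100 x 100 frame with one of four LEDs switched on.
        frame = np.zeros((100, 100), dtype=np.uint8)
        frame[18:23, 18:23] = 255
        print(decode_frame(frame, [(20, 20), (20, 80), (80, 20), (80, 80)]))  # [1, 0, 0, 0]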

  6. Undersampled digital holographic interferometry

    NASA Astrophysics Data System (ADS)

    Halaq, H.; Demoli, N.; Sović, I.; Šariri, K.; Torzynski, M.; Vukičević, D.

    2008-04-01

    In digital holography, primary holographic fringes are recorded using a matrix (array) CCD sensor. Because of the low spatial resolution of currently available CCD arrays, the angle between the reference and object beams must be limited to a few degrees. Due to the digitization involved, the Shannon criterion imposes that the sampling frequency be at least twice the highest signal frequency (the Nyquist condition). This means that, when an interference fringe pattern is recorded by a CCD sensor, the inter-fringe distance must be larger than twice the pixel period, which in turn limits the angle between the object and the reference beams. If this angle cannot be limited to the required value in a practical holographic interferometry measuring setup, aliasing will occur in the reconstructed image. In this work, we demonstrate that the low-spatial-frequency metrology data can nevertheless be efficiently extracted by careful choice of twofold, and even threefold, undersampling of the object field. By combining time-averaged recording with the subtraction digital holography method, we present results for a loudspeaker membrane interferometric study obtained under strong aliasing conditions. High-contrast fringes, reflecting the vibration modes of the membrane, are obtained.
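
    The angle limitation follows from the fringe-spacing condition above: for wavelength λ, pixel pitch Δ and angle θ between the object and reference beams,

        \Lambda = \frac{\lambda}{2\sin(\theta/2)} \ge 2\Delta
        \quad\Longrightarrow\quad
        \theta \le 2\arcsin\!\left(\frac{\lambda}{4\Delta}\right) \approx \frac{\lambda}{2\Delta}

    As an illustrative example (values not taken from this work), λ = 632.8 nm and a pixel pitch of 7 μm give a maximum angle of about 2.6 degrees, consistent with the "few degrees" stated above.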

  7. Imaging Sensor Development for Scattering Atmospheres.

    DTIC Science & Technology

    1983-03-01

    subtracted output from a CCD imaging detector for a single frame can be written as a combination of the signal and the shot-noise, thermal-noise and dark-current shot-noise contributions (equation (2-22) of the report). ... In addition, the spectral responses of current devices are limited to the visible region and their sensitivities are not very high. Solid state detectors ... are generally much more sensitive than spatial light modulators, and some (e.g., HgCdTe detectors) can respond up to the 10 um region. Several

  8. An Underwater Target Detection System for Electro-Optical Imagery Data

    DTIC Science & Technology

    2010-06-01

    detection and segmentation of underwater mine-like objects in the EO images captured with a CCD-based image sensor. The main focus of this research is to ... develop a robust detection algorithm that can be used to detect low-contrast and partial underwater objects from the EO imagery with a low false alarm rate. ... Automatic detection and recognition of underwater objects from EO imagery poses a serious challenge due to poor

  9. A four-lens based plenoptic camera for depth measurements

    NASA Astrophysics Data System (ADS)

    Riou, Cécile; Deng, Zhiyuan; Colicchio, Bruno; Lauffenburger, Jean-Philippe; Kohler, Sophie; Haeberlé, Olivier; Cudel, Christophe

    2015-04-01

    In previous works, we extended the principles of "variable homography", defined by Zhang and Greenspan, to measuring the height of emergent fibers on glass and non-woven fabrics. The method was defined for fabric samples progressing on a conveyor belt, and triggered acquisition of two successive images was needed to perform the 3D measurement. In this work, we retain the advantages of variable homography for measurements along the Z axis, but reduce the number of acquisitions to a single one by developing an acquisition device with four lenses placed in front of a single image sensor. The idea is to obtain four projected sub-images on a single CCD sensor. The device thus becomes a plenoptic or light-field camera, capturing multiple views on the same image sensor. We have adapted the variable homography formulation for this device, and we propose a new formulation to calculate depth with plenoptic cameras. With these results, we have transformed our plenoptic camera into a depth camera, and the first results are very promising.

  10. Defect detection in slab surface: a novel dual Charge-coupled Device imaging-based fuzzy connectedness strategy.

    PubMed

    Zhao, Liming; Ouyang, Qi; Chen, Dengfu; Udupa, Jayaram K; Wang, Huiqian; Zeng, Yuebin

    2014-11-01

    To provide an accurate surface-defect inspection system and bring automated, robust image segmentation to the routine production line, a general approach is presented for continuous casting slab (CC-slab) surface defect extraction and delineation. The applicability of the system is not tied to CC-slabs exclusively. We combined line-array CCD (Charge-coupled Device) traditional scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging) strategies in designing the system, with the aim of offsetting the limitations of each imaging subsystem. In the system, the images acquired from the two CCD sensors are carefully aligned in space and in time by a full-fledged registration scheme based on maximum mutual information. Subsequently, the image information from the two subsystems is fused, combining the unbroken 2D information from LS-imaging with the 3D depression information from AL-imaging. Finally, on the basis of the established dual scanning imaging system, region-of-interest (ROI) localization by seed specification was designed, and ROI delineation by the iterative relative fuzzy connectedness (IRFC) algorithm was used to obtain a precise inspection result. Our method takes into account the complementary advantages of the two common machine vision (MV) systems, and it performs competitively with the state of the art, as seen from the comparison of experimental results. For the first time, a joint imaging scanning strategy is proposed for CC-slab surface defect inspection that allows powerful ROI delineation strategies to be applied to the MV inspection field. Multi-ROI delineation using IRFC in this research field may further improve the results.

  11. Performance of the STIS CCD Dark Rate Temperature Correction

    NASA Astrophysics Data System (ADS)

    Branton, Doug; STScI STIS Team

    2018-06-01

    Since July 2001, the Space Telescope Imaging Spectrograph (STIS) onboard Hubble has operated on its Side-2 electronics due to a failure in the primary Side-1 electronics. While nearly identical, Side-2 lacks a functioning temperature sensor for the CCD, introducing a variability in the CCD operating temperature. Previous analysis utilized the CCD housing temperature telemetry to characterize the relationship between the housing temperature and the dark rate. It was found that a first-order 7%/°C uniform dark correction demonstrated a considerable improvement in the quality of dark subtraction on Side-2 era CCD data, and that value has been used on all Side-2 CCD darks since. In this report, we show how this temperature correction has performed historically. We compare the current 7%/°C value against the ideal first-order correction at a given time (which can vary between ~6%/°C and ~10%/°C) as well as against a more complex second-order correction that applies a unique slope to each pixel as a function of dark rate and time. At worst, the current correction has performed ~1% worse than the second-order correction. Additionally, we present initial evidence suggesting that the variability in pixel temperature-sensitivity is significant enough to warrant a temperature correction that considers pixels individually rather than correcting them uniformly.
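
    A uniform first-order correction of this kind scales the measured dark rate by a fixed fraction per degree of housing-temperature offset from a reference temperature. The sketch below is a schematic illustration with an assumed reference temperature, not the actual STIS calibration pipeline.

        import numpy as np

        def correct_dark_rate(dark_frame, housing_temp_c, ref_temp_c=18.0, slope=0.07):
            """Scale a dark frame to a reference temperature using a uniform
            first-order correction of `slope` (fractional change per degree C).

            ref_temp_c is an illustrative placeholder, not the actual STIS reference.
            """
            return dark_frame / (1.0 + slope * (housing_temp_c - ref_temp_c))

        dark = np.full((1024, 1024), 0.02)                  # synthetic dark frame, e-/s
        corrected = correct_dark_rate(dark, housing_temp_c=20.5)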

  12. Near-infrared fluorescence imaging with a mobile phone (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ghassemi, Pejhman; Wang, Bohan; Wang, Jianting; Wang, Quanzeng; Chen, Yu; Pfefer, T. Joshua

    2017-03-01

    Mobile phone cameras employ sensors with near-infrared (NIR) sensitivity, yet this capability has not been exploited for biomedical purposes. Removing the IR-blocking filter from a phone-based camera opens the door to a wide range of techniques and applications for inexpensive, point-of-care biophotonic imaging and sensing. This study provides proof of principle for one of these modalities - phone-based NIR fluorescence imaging. An imaging system was assembled using a 780 nm light source along with excitation and emission filters with 800 nm and 825 nm cut-off wavelengths, respectively. Indocyanine green (ICG) was used as an NIR fluorescence contrast agent in an ex vivo rodent model, a resolution test target and a 3D-printed, tissue-simulating vascular phantom. Raw and processed images for the red, green and blue pixel channels were analyzed for quantitative evaluation of fundamental performance characteristics including spectral sensitivity, detection linearity and spatial resolution. Mobile phone results were compared with a scientific CCD. The spatial resolution of the CCD system was consistently superior to that of the phone, and the green phone camera pixels showed better resolution than the other channels. The CCD exhibited sensitivity similar to that of the processed red and blue pixel channels, yet a greater degree of detection linearity. Raw phone pixel data showed lower sensitivity but greater linearity than processed data. Overall, both qualitative and quantitative results provided strong evidence of the potential of phone-based NIR imaging, which may lead to a wide range of applications from cancer detection to glucose sensing.

  13. Developing handheld real time multispectral imager to clinically detect erythema in darkly pigmented skin

    NASA Astrophysics Data System (ADS)

    Kong, Linghua; Sprigle, Stephen; Yi, Dingrong; Wang, Fengtao; Wang, Chao; Liu, Fuhan

    2010-02-01

    Pressure ulcers have been identified as a public health concern by the US government through the Healthy People 2010 initiative and the National Quality Forum (NQF). Currently, no tools are available to assist clinicians in erythema detection, i.e., early-stage pressure ulcer detection. The results from our previous research (supported by an NIH grant) indicate that erythema in different skin tones can be identified using the set of wavelengths 540, 577, 650 and 970 nm. This paper reports our recent work in developing a handheld, point-of-care, clinically viable and affordable, real-time multispectral imager to detect erythema in persons with darkly pigmented skin. Instead of using traditional filters, e.g., filter wheels, generalized Lyot filters, electrically tunable filters, or light-dispersing methods, e.g., acousto-optic crystals, a novel custom filter mosaic has been successfully designed and fabricated using lithography and vacuum multilayer film technologies. The filter has been integrated with CMOS and CCD sensors. The filter incorporates four or more different wavelengths within the visible to near-infrared range, each having a narrow bandwidth of 30 nm or less. Each single-wavelength area is 20.8 μm × 20.8 μm. The filter can be deposited on regular optical glass as a substrate or directly on a CMOS or CCD imaging sensor. This design permits a multispectral image to be acquired in a single exposure, providing great convenience in multispectral image acquisition.

  14. Method for eliminating artifacts in CCD imagers

    DOEpatents

    Turko, Bojan T.; Yates, George J.

    1992-01-01

    An electronic method for eliminating artifacts in a video camera (10) employing a charge coupled device (CCD) (12) as an image sensor. The method comprises the step of initializing the camera (10) prior to normal read out and includes a first dump cycle period (76) for transferring radiation-generated charge into the horizontal register (28) while the decaying image on the phosphor (39) being imaged is being integrated in the photosites, and a second dump cycle period (78), occurring after the phosphor (39) image has decayed, for rapidly dumping unwanted smear charge which has been generated in the vertical registers (32). Image charge is then transferred from the photosites (36) and (38) to the vertical registers (32) and read out in conventional fashion. The inventive method allows the video camera (10) to be used in environments having high ionizing radiation content, and to capture images of events of very short duration occurring either within or outside the normal visual wavelength spectrum. Resultant images are free from ghost and smear phenomena caused by insufficient opacity of the registers (28) and (32), and are also free from random damage caused by ionization charges which exceed the charge capacity of the photosites (36) and (37).

  15. Video semaphore decoding for free-space optical communication

    NASA Astrophysics Data System (ADS)

    Last, Matthew; Fisher, Brian; Ezekwe, Chinwuba; Hubert, Sean M.; Patel, Sheetal; Hollar, Seth; Leibowitz, Brian S.; Pister, Kristofer S. J.

    2001-04-01

    Using real-time image processing we have demonstrated a low bit-rate free-space optical communication system at a range of more than 20 km with an average optical transmission power of less than 2 mW. The transmitter is an autonomous one-cubic-inch microprocessor-controlled sensor node with a laser diode output. The receiver is a standard CCD camera with a 1-inch aperture lens, and both hardware and software implementations of the video semaphore decoding algorithm. With this system sensor data can be reliably transmitted over the 21 km from San Francisco to Berkeley.

  16. Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Messinger, David W.; Dube, Roger R.; Ientilucci, Emmett J.

    2017-05-01

    Filtered multispectral imaging might be a potential method for crime scene documentation and evidence detection due to its abundant spectral information as well as its non-contact and non-destructive nature. A low-cost, portable multispectral crime scene imaging device would be highly useful and efficient. The second-generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass Interference Filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, and the major cause is Fixed Pattern Noise (FPN). IFs suffer from a "blue shift" effect and introduce spatially and spectrally correlated errors. Therefore, FPN correction is critical to enhance crime scene image quality and is also helpful for spatial-spectral noise de-correlation. In this paper, a pixel-wise linear radiance-to-Digital Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. The pixel-wise conversion gain G(i,j) and Dark Signal Non-Uniformity (DSNU) Z(i,j) are calculated. The conversion gain is further divided into four components: an FPN row component, an FPN column component, a defects component and the effective photo-response signal component. The conversion gain is then corrected by averaging out the FPN row and column components and the defects component, so that the sensor conversion gain is uniform. Based on the corrected conversion gain and the image incident radiance estimated by inverting the pixel-wise linear radiance-to-DC model, the spatial uniformity of the corrected image can be enhanced to 7 times that of the raw image, and the larger the image DC value within its dynamic range, the better the enhancement.
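
    A pixel-wise linear model of this kind can be written as DC(i,j) = G(i,j) * L + Z(i,j) for incident radiance L. The sketch below illustrates the correction principle with synthetic gain and offset maps standing in for the calibrated quantities; it is a schematic of the idea, not the calibration procedure of the actual instrument.

        import numpy as np

        rng = np.random.default_rng(0)
        H, W = 480, 640

        # Synthetic per-pixel gain and dark-signal non-uniformity (stand-ins for
        # the calibrated G(i,j) and Z(i,j) maps, not real sensor data).
        G = 1.0 + 0.05 * rng.standard_normal((H, W))   # DN per radiance unit
        Z = 10.0 + 2.0 * rng.standard_normal((H, W))   # DN offset

        true_radiance = 50.0                            # uniform scene
        raw_dc = G * true_radiance + Z                  # what the CMOS sensor records

        # Invert the pixel-wise linear model to estimate radiance, then re-apply the
        # mean gain and offset to obtain a flat, FPN-corrected image.
        est_radiance = (raw_dc - Z) / G
        corrected_dc = G.mean() * est_radiance + Z.mean()

        print(raw_dc.std(), corrected_dc.std())         # corrected image is far more uniform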

  17. Filtered Rayleigh Scattering Measurements in a Buoyant Flowfield

    DTIC Science & Technology

    2007-03-01

    common filter used in FRS applications. Iodine is more attractive than mercury to use in a filter due to its broader range of blocking and transmission ... is a 4032x2688 pixel camera with a monochrome or colored CCD imaging sensor. The binning range of the camera is (HxV) 1x1 to 2x8. The manufacturer ... center position of the jet of the time-averaged image. The z center position is chosen so that it is the average z value bounding helium

  18. An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories

    NASA Astrophysics Data System (ADS)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji

    2008-11-01

    We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frames in memory. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting CCD storages, which record video images, to the photodiodes of individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used during imaging experiments and for some television programs. To increase ultrahigh-speed capture times, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of its two outputs. A CCD driving unit was developed to drive the two CCDs separately, and the recording period of the two CCDs was switched sequentially. This increased the recording capacity to 288 images, a factor-of-two increase over that of the conventional ultrahigh-speed camera. A drawback was that the beam splitter reduced the incident light on each CCD by a factor of two. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by approximately a factor of two. By using a beam splitter in conjunction with the microlens array, it was possible to make an ultrahigh-speed color video camera that has 288 frame memories without decreasing the camera's light sensitivity.

  19. A TV Camera System Which Extracts Feature Points For Non-Contact Eye Movement Detection

    NASA Astrophysics Data System (ADS)

    Tomono, Akira; Iida, Muneo; Kobayashi, Yukio

    1990-04-01

    This paper proposes a highly efficient camera system which extracts, irrespective of background, feature points such as the pupil, the corneal reflection image and dot-marks pasted on a human face, in order to detect human eye movement by image processing. Two eye movement detection methods are suggested: one utilizing face orientation as well as pupil position, the other utilizing the pupil and corneal reflection images. A method of extracting these feature points using LEDs as illumination devices and a new TV camera system designed to record eye movement are proposed. Two kinds of infra-red LEDs are used. These LEDs are set up a short distance apart and emit polarized light of different wavelengths. One light source beams from near the optical axis of the lens and the other is some distance from the optical axis. The LEDs are operated in synchronization with the camera. The camera includes 3 CCD image pick-up sensors and a prism system with 2 boundary layers. Incident rays are separated into 2 wavelengths by the first boundary layer of the prism. One set of rays forms an image on CCD-3. The other set is split by the half-mirror layer of the prism and forms one image that includes the regularly reflected component (through a polarizing filter placed in front of CCD-1) and another image that does not include that component (no polarizing filter in front of CCD-2). Thus, three images with different reflection characteristics are obtained by the three CCDs. Experiments show that two kinds of subtraction operations between the three images output from the CCDs accentuate the three kinds of feature points: the pupil and corneal reflection images and the dot-marks. Since the S/N ratio of the subtracted image is extremely high, the thresholding process is simple and allows the intensity of the infra-red illumination to be reduced. A high-speed image-processing apparatus using this camera system is described. Real-time processing of the subtraction, thresholding and centroid calculation of the feature points is possible.
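
    The subtraction-thresholding-centroid chain described above can be sketched in a few lines; the fixed threshold and the use of NumPy/SciPy are illustrative choices, not details of the original hardware implementation.

        import numpy as np
        from scipy import ndimage

        def feature_centroids(img_a, img_b, threshold=50):
            """Subtract two differently illuminated images, threshold the result and
            return the centroid (centre of gravity) of each bright feature blob.

            The fixed threshold value is an illustrative assumption.
            """
            diff = img_a.astype(np.int32) - img_b.astype(np.int32)
            mask = diff > threshold
            labels, n = ndimage.label(mask)
            return ndimage.center_of_mass(diff, labels, range(1, n + 1))

        # Example: a bright pupil-like blob appears only in img_a.
        a = np.zeros((64, 64), dtype=np.uint8)
        a[30:34, 40:44] = 200
        b = np.zeros_like(a)
        print(feature_centroids(a, b))   # approximately [(31.5, 41.5)]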

  20. A multimodal image sensor system for identifying water stress in grapevines

    NASA Astrophysics Data System (ADS)

    Zhao, Yong; Zhang, Qin; Li, Minzan; Shao, Yongni; Zhou, Jianfeng; Sun, Hong

    2012-11-01

    Water stress is one of the most common limitations on fruit growth, and water is the most limiting resource for crop growth. In grapevines, as well as in other fruit crops, fruit quality benefits from a certain level of water deficit, which helps balance vegetative and reproductive growth and the flow of carbohydrates to reproductive structures. In this paper, a multi-modal sensor system was designed to measure the reflectance signature of grape plant surfaces and identify different water stress levels. The multi-modal sensor system was equipped with one 3CCD camera (three channels: R, G, and IR). The multi-modal sensor can capture and analyze the grape canopy from its reflectance features and identify different water stress levels; this research aims at solving the aforementioned problems. The core technology of this multi-modal sensor system could further be used in a decision support system that combines multi-modal sensory data to improve plant stress detection and identify the causes of stress. The images were taken by the multi-modal sensor, which outputs images in near-infrared, green and red spectral bands. Based on the analysis of the acquired images, color features based on color spaces and reflectance features based on image processing were calculated. The results showed that these parameters have potential as water stress indicators. More experiments and analysis are needed to validate this conclusion.

  1. Planetary exploration with optical imaging systems review: what is the best sensor for future missions

    NASA Astrophysics Data System (ADS)

    Michaelis, H.; Behnke, T.; Bredthauer, R.; Holland, A.; Janesick, J.; Jaumann, R.; Keller, H. U.; Magrin, D.; Greggio, D.; Mottola, Stefano; Thomas, N.; Smith, P.

    2017-11-01

    When we talk about planetary exploration missions, most people spontaneously think of fascinating images from other planets or close-up pictures of small planetary bodies such as asteroids and comets. Such images come in most cases from VIS/NIR imaging systems, simply called `cameras', which were typically built by institutes in collaboration with industry. Until now, they have nearly all been based on silicon CCD sensors, they have filter wheels, and they often have power-hungry electronics. The question is what the challenges for future missions are and what can be done to improve performance and scientific output. The exploration of Mars is ongoing. NASA and ESA are planning future missions to the outer planets, such as to the icy Jovian moons. Exploration of asteroids and comets is the focus of several recent and future missions. Furthermore, the detection and characterization of exo-planets will keep us busy for generations to come. The paper discusses the challenges and visions of imaging sensors for future planetary exploration missions. The focus is on monolithic VIS/NIR detectors.

  2. Quantitative evaluation of the accuracy and variance of individual pixels in a scientific CMOS (sCMOS) camera for computational imaging

    NASA Astrophysics Data System (ADS)

    Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith

    2017-02-01

    The"scientific" CMOS (sCMOS) camera architecture fundamentally differs from CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output is generally through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform for all pixels, although quantum efficiency may spatially vary. In CMOS cameras, the charge to voltage conversion is separate for each pixel and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor includes pixel-to-pixel variability in the read noise, electronic gain, offset and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of individual pixel offset, dark current, read noise, linearity, photoresponse non-uniformity and variance distributions of individual pixels for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination conditions, from dark conditions to multiple low light levels between 20 to 1,000 photons / pixel per frame to higher light conditions. We further show that using pixel variance for flat field correction leads to errors in cameras with good factory calibration.

  3. X-ray imaging using digital cameras

    NASA Astrophysics Data System (ADS)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.

  4. A curved surface micro-moiré method and its application in evaluating curved surface residual stress

    NASA Astrophysics Data System (ADS)

    Zhang, Hongye; Wu, Chenlong; Liu, Zhanwei; Xie, Huimin

    2014-09-01

    The moiré method is typically applied to the measurement of deformations of a flat surface while, for a curved surface, this method is rarely used other than for projection moiré or moiré interferometry. Here, a novel colour charge-coupled device (CCD) micro-moiré method has been developed, based on which a curved surface micro-moiré (CSMM) method is proposed with a colour CCD and optical microscope (OM). In the CSMM method, no additional reference grating is needed as a Bayer colour filter array (CFA) installed on the OM in front of the colour CCD image sensor performs this role. Micro-moiré fringes with high contrast are directly observed with the OM through the Bayer CFA under the special condition of observing a curved specimen grating. The principle of the CSMM method based on a colour CCD micro-moiré method and its application range and error analysis are all described in detail. In an experiment, the curved surface residual stress near a welded seam on a stainless steel tube was investigated using the CSMM method.

  5. Multispectral linear array visible and shortwave infrared sensors

    NASA Astrophysics Data System (ADS)

    Tower, J. R.; Warren, F. B.; Pellon, L. E.; Strong, R.; Elabd, H.; Cope, A. D.; Hoffmann, D. M.; Kramer, W. M.; Longsderff, R. W.

    1984-08-01

    All-solid state pushbroom sensors for multispectral linear array (MLA) instruments to replace mechanical scanners used on LANDSAT satellites are introduced. A buttable, four-spectral-band, linear-format charge coupled device (CCD) and a buttable, two-spectral-band, linear-format, shortwave infrared CCD are described. These silicon integrated circuits may be butted end to end to provide multispectral focal planes with thousands of contiguous, in-line photosites. The visible CCD integrated circuit is organized as four linear arrays of 1024 pixels each. Each array views the scene in a different spectral window, resulting in a four-band sensor. The shortwave infrared (SWIR) sensor is organized as 2 linear arrays of 512 detectors each. Each linear array is optimized for performance at a different wavelength in the SWIR band.

  6. Using a trichromatic CCD camera for spectral skylight estimation.

    PubMed

    López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier; Olmo, F J; Cazorla, A; Alados-Arboledas, L

    2008-12-01

    In a previous work [J. Opt. Soc. Am. A 24, 942-956 (2007)] we showed how to design an optimum multispectral system aimed at spectral recovery of skylight. Since high-resolution multispectral images of skylight could be of interest to many scientific disciplines, here we also propose a non-optimum but much cheaper and faster approach to achieve this goal by using a trichromatic RGB charge-coupled device (CCD) digital camera. The camera is attached to a fish-eye lens, permitting us to obtain a spectrum for every point of the skydome corresponding to each pixel of the image. In this work we show how to apply multispectral techniques to the sensor responses of a common trichromatic camera in order to obtain skylight spectra from them. This spectral information is accurate enough to estimate experimental values of some climate parameters or to be used in algorithms for automatic cloud detection, among many other possible scientific applications.
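
    Spectral-recovery techniques of this kind typically learn a linear mapping from the three camera responses to a sampled spectrum using a training set of measured skylight spectra. The least-squares sketch below illustrates the idea with made-up dimensions and random stand-in data; it is not the estimator used in the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        n_train, n_bands = 200, 61                       # e.g. 61 bands over 400-700 nm (illustrative)

        train_spectra = rng.random((n_train, n_bands))   # measured skylight spectra (stand-in)
        sensitivities = rng.random((n_bands, 3))         # RGB spectral sensitivities (stand-in)
        train_rgb = train_spectra @ sensitivities        # simulated camera responses

        # Least-squares linear estimator W mapping RGB responses to spectra.
        W, *_ = np.linalg.lstsq(train_rgb, train_spectra, rcond=None)

        new_rgb = train_rgb[:5]          # responses of five sky pixels
        recovered = new_rgb @ W          # estimated spectra, shape (5, n_bands)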

  7. In situ microscopy for on-line determination of biomass.

    PubMed

    Bittner, C; Wehnert, G; Scheper, T

    1998-10-05

    A sensor is presented, which allows on-line microscopic observation of microorganisms during fermentations in bioreactors. This sensor, an In Situ Microscope (ISM), consists of a direct-light microscope with a measuring chamber, integrated in a 25 mm stainless steel tube, two CCD-cameras, and two frame-grabbers. The data obtained are processed by an automatic image analysis system. The ISM is connected with the bioreactor via a standard port, and it is immersed directly in the culture liquid, in our case Saccharomyces cerevisiae in a synthetic medium. The microscopic examination of the liquid is performed in the measuring chamber, which is situated near the front end of the sensor head. The measuring chamber is opened and closed periodically. In the open state, the liquid in the bioreactor flows unrestricted through the chamber. In closing, a defined volume of 2.2 x 10^-8 mL of the liquid becomes enclosed. After a few seconds, when the movement of the cells in the enclosed culture has stopped, they are examined with the microscope. The microscopic images of the cells are registered with the CCD-cameras and are visualized on a monitor, allowing a direct view of the cell population. After detection, the measuring chamber reopens, and the enclosed liquid is released. The images obtained are evaluated as to cell concentration, cell size, cell volume, biomass, and other relevant parameters simultaneously by automatic image analysis. With a PC (486/33 MHz), image processing takes about 15 s per image. The detection range tested when measuring cells of S. cerevisiae is about 10^6 to 10^9 cells/mL (equivalent to a biomass of 0.01 g/L to 12 g/L). The calculated biomass values correlate very well with those obtained using dry weight analysis. Furthermore, histograms can be calculated, which are comparable to those obtained by flow cytometry. Copyright 1998 John Wiley & Sons, Inc.

  8. Single photon detection imaging of Cherenkov light emitted during radiation therapy

    NASA Astrophysics Data System (ADS)

    Adamson, Philip M.; Andreozzi, Jacqueline M.; LaRochelle, Ethan; Gladstone, David J.; Pogue, Brian W.

    2018-03-01

    Cherenkov imaging during radiation therapy has been developed as a tool for dosimetry, with potential applications in patient delivery verification and in regular quality audit. The cameras used are intensified imaging sensors, either ICCD or ICMOS cameras, whose key features include (1) nanosecond time gating and (2) amplification by 10³-10⁴, which together allow (1) real-time capture at 10-30 frames per second, (2) sensitivity at the single-photon-event level, and (3) the ability to suppress background light from the ambient room. However, the capability to achieve single photon imaging has not been fully analyzed to date, and as such was the focus of this study. How a single photon event appears in amplified camera imaging was quantitatively characterized from the Cherenkov images using image processing. At normal gain levels, the signal appears as a blur of about 90 counts in the CCD detector after passing through the chain of photocathode detection, amplification through a microchannel plate, excitation of a phosphor screen, and imaging onto the CCD. The analysis of single photon events requires careful interpretation of the fixed pattern noise, the statistical quantum noise distributions, and the spatial spread of each pulse through the ICCD.
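
    The abstract reports that a single photon event appears as a blur of roughly 90 counts after the intensifier chain. A minimal sketch of one way to isolate such events (dark-frame subtraction, thresholding, and connected-component labeling) is shown below; the threshold, noise levels, and synthetic event are assumptions, and the authors' actual processing may differ.

```python
import numpy as np
from scipy import ndimage

def single_photon_events(frame, dark_frame, threshold=6.0):
    """Locate isolated single-photon splashes in an intensified-camera frame.

    frame, dark_frame : 2-D arrays of raw CCD counts.
    threshold         : counts above the dark level treated as signal (assumed value).
    Returns a list of (row, col, integrated_counts) for each detected event.
    """
    signal = frame.astype(float) - dark_frame              # remove dark/fixed-pattern offset
    labels, n_events = ndimage.label(signal > threshold)   # connected clusters of bright pixels
    idx = range(1, n_events + 1)
    centroids = ndimage.center_of_mass(signal, labels, idx)
    totals = ndimage.sum(signal, labels, idx)
    return [(r, c, s) for (r, c), s in zip(centroids, totals)]

# Synthetic test: one event of roughly 90 integrated counts on a quiet background.
rng = np.random.default_rng(1)
dark = np.full((64, 64), 100.0)
frame = dark + rng.normal(0.0, 1.5, (64, 64))
frame[30:33, 40:43] += 10.0          # ~90 counts spread over a 3x3 "blur"
print(single_photon_events(frame, dark))
```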

  9. Design and Calibration of a Novel Bio-Inspired Pixelated Polarized Light Compass.

    PubMed

    Han, Guoliang; Hu, Xiaoping; Lian, Junxiang; He, Xiaofeng; Zhang, Lilian; Wang, Yujie; Dong, Fengliang

    2017-11-14

    Animals, such as Savannah sparrows and North American monarch butterflies, are able to obtain compass information from skylight polarization patterns to help them navigate effectively and robustly. Inspired by the excellent navigation ability of animals, this paper proposes a novel image-based polarized light compass, which has the advantages of small size and light weight. Firstly, the polarized light compass, which is composed of a Charge Coupled Device (CCD) camera, a pixelated polarizer array and a wide-angle lens, is introduced. Secondly, the measurement method for the skylight polarization pattern and the orientation method based on a single-scattering Rayleigh model are presented. Thirdly, the error model of the sensor, mainly including the response error of the CCD pixels and the installation error of the pixelated polarizer, is established, and a calibration method based on iterative least squares estimation is proposed. In the outdoor environment, the skylight polarization pattern can be measured in real time by our sensor. The orientation accuracy of the sensor increases as the solar elevation angle decreases, and the standard deviation of the orientation error is 0.15° at sunset. Results of outdoor experiments show that the proposed polarization navigation sensor can be used for outdoor autonomous navigation.
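
    The abstract does not detail how the pixelated polarizer array is laid out. A common arrangement is a 2×2 super-pixel with micropolarizers at 0°, 45°, 90°, and 135°, from which the linear Stokes parameters and the angle of polarization follow directly; the sketch below assumes that layout and ideal, calibrated pixels.

```python
import numpy as np

def stokes_from_superpixel(i0, i45, i90, i135):
    """Linear Stokes parameters from intensities behind 0/45/90/135-degree micropolarizers."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)      # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    aop = 0.5 * np.arctan2(s2, s1)          # angle of polarization (radians)
    dolp = np.sqrt(s1**2 + s2**2) / s0      # degree of linear polarization
    return aop, dolp

# Example: fully linearly polarized light oriented at 30 degrees (Malus's law intensities).
theta = np.deg2rad(30.0)
i0, i45, i90, i135 = (np.cos(theta - np.deg2rad(a))**2 for a in (0, 45, 90, 135))
aop, dolp = stokes_from_superpixel(i0, i45, i90, i135)
print(np.rad2deg(aop), dolp)                # ~30.0, ~1.0
```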

  10. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    The resolution of cameras has improved drastically in response to the demand for high-quality digital images; a digital still camera, for example, has several megapixels. A video camera offers a higher frame rate, but its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera that can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  11. Design and Calibration of a Novel Bio-Inspired Pixelated Polarized Light Compass

    PubMed Central

    Hu, Xiaoping; Lian, Junxiang; He, Xiaofeng; Zhang, Lilian; Wang, Yujie; Dong, Fengliang

    2017-01-01

    Animals, such as Savannah sparrows and North American monarch butterflies, are able to obtain compass information from skylight polarization patterns to help them navigate effectively and robustly. Inspired by the excellent navigation ability of animals, this paper proposes a novel image-based polarized light compass, which has the advantages of small size and light weight. Firstly, the polarized light compass, which is composed of a Charge Coupled Device (CCD) camera, a pixelated polarizer array and a wide-angle lens, is introduced. Secondly, the measurement method for the skylight polarization pattern and the orientation method based on a single-scattering Rayleigh model are presented. Thirdly, the error model of the sensor, mainly including the response error of the CCD pixels and the installation error of the pixelated polarizer, is established, and a calibration method based on iterative least squares estimation is proposed. In the outdoor environment, the skylight polarization pattern can be measured in real time by our sensor. The orientation accuracy of the sensor increases as the solar elevation angle decreases, and the standard deviation of the orientation error is 0.15° at sunset. Results of outdoor experiments show that the proposed polarization navigation sensor can be used for outdoor autonomous navigation. PMID:29135927

  12. The research of digital circuit system for high accuracy CCD of portable Raman spectrometer

    NASA Astrophysics Data System (ADS)

    Yin, Yu; Cui, Yongsheng; Zhang, Xiuda; Yan, Huimin

    2013-08-01

    Raman spectroscopy is widely used because it can identify many types of molecular structures and materials. The portable Raman spectrometer has become a hot topic in spectrometer development because of its convenience for handheld operation and real-time detection, in which it is superior to traditional Raman spectrometers with their heavy weight and bulky size. There is, however, still a gap in measurement sensitivity between portable and traditional devices. A portable Raman spectrometer with Shell-Isolated Nanoparticle-Enhanced Raman Spectroscopy (SHINERS) technology can enhance the Raman signal by several orders of magnitude, offering both measurement sensitivity and mobility. This paper proposes the design and implementation of the driver and digital circuit for a high-accuracy CCD sensor, the core of the portable spectrometer. The main goal of the design is to reduce the dark-current generation rate and increase signal sensitivity during long integration times in weak-signal environments. We use a back-thinned CCD image sensor from Hamamatsu Corporation with high sensitivity, low noise, and large dynamic range. To maximize this CCD sensor's performance while minimizing the overall size of the device, we carefully designed its peripheral circuitry, composed mainly of a multi-voltage supply circuit, a timing-generation circuit, a driving circuit, and an A/D conversion stage. In the multi-voltage supply, 12 independent voltages are derived from reference power-supply ICs and set to their specified values by amplifiers configured as low-pass filters, giving highly stable, accurate voltages with low noise. To make the design easy to debug, a CPLD is used to generate the timing signals. The A/D converter chip consists of a correlated double sampler, a digitally controlled variable-gain amplifier, and a 16-bit A/D converter, which together improve data quality. The acquired digital signals are transmitted to a computer via a USB 2.0 port. Our spectrometer with SHINERS technology can acquire Raman spectra efficiently with long integration times in weak-signal environments, and its size is well suited to portable applications.
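
    The A/D chain described above includes a correlated double sampler. CDS takes two samples per pixel, the reset (reference) level and the video level, and uses their difference to suppress reset noise and common offset drift. A minimal numeric sketch of that difference step (sample values are illustrative, not from the paper's firmware):

```python
import numpy as np

# Two samples are taken per pixel during CCD readout: the reset (reference)
# level and the video (signal) level; their difference is the CDS output.
reset_level = np.array([1012.0, 1008.5, 1010.2, 1011.7])   # ADC counts (illustrative)
video_level = np.array([ 812.0,  905.1,  710.4,  995.3])

pixel_signal = reset_level - video_level    # common offsets and low-frequency drift cancel
print(pixel_signal)                         # net photo-signal per pixel, in ADC counts
```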

  13. Collaborative Point Paper on Border Surveillance Technology

    DTIC Science & Technology

    2007-06-01

    Systems PLC LORHIS (Long Range Hyperspectral Imaging System) can be configured for either manned or unmanned aircraft to automatically detect and... Airships, and/or Aerostats (RF, Electro-Optical, Infrared, Video) • Land-based Sensor Systems (Attended/Mobile and Unattended: e.g., CCD, Motion, Acoustic)... electronic surveillance technologies for intrusion detection and warning. These ground-based systems are primarily short-range, up to around 500 meters.

  14. Backthinned TDI CCD image sensor design and performance for the Pleiades high resolution Earth observation satellites

    NASA Astrophysics Data System (ADS)

    Materne, A.; Bardoux, A.; Geoffray, H.; Tournier, T.; Kubik, P.; Morris, D.; Wallace, I.; Renard, C.

    2017-11-01

    The PLEIADES-HR Earth observing satellites, under CNES development, combine a 0.7 m resolution panchromatic channel and a multispectral channel with 2.8 m resolution in four spectral bands. The two satellites will be placed in a sun-synchronous orbit at an altitude of 695 km. The camera operates in push-broom mode, providing images across a 20 km swath. This paper focuses on the specifications, design, and performance of the TDI detectors developed by e2v technologies under CNES contract for the panchromatic channel. Design drivers derived from the mission and satellite requirements, the architecture of the sensor, and measurement results for key performances of the first prototypes are presented.

  15. A real-time monitoring system for night glare protection

    NASA Astrophysics Data System (ADS)

    Ma, Jun; Ni, Xuxiang

    2010-11-01

    When capturing a dark scene that contains a very bright object, a monitoring camera saturates in some regions, and detail is lost in and near these saturated regions because of glare. This work aims at developing a real-time night monitoring system that reduces the influence of glare and recovers more detail from an ordinary camera when exposing a high-contrast scene, such as a car with its headlights on at night. The system consists of a spatial light modulator (liquid crystal on silicon: LCoS), an image sensor (CCD), an imaging lens, and a DSP. The LCoS, a reflective liquid-crystal device, can digitally modulate the intensity of the reflected light at every pixel. Through the modulation function of the LCoS, the CCD is exposed region by region. Under DSP control, the light intensity is reduced to a minimum in the glare regions, while in the other regions it is regulated by negative feedback based on PID control. In this way more detail of the object is imaged on the CCD and glare protection of the monitoring system is achieved. In the experiments, the feedback is controlled by an embedded system based on a TI DM642. Experiments show that this feedback modulation method not only reduces glare to improve image quality, but also enhances the dynamic range of the image. High-quality, high-dynamic-range images are captured in real time at 30 Hz. The modulation depth of the LCoS determines how much glare can be removed.
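
    The abstract describes PID-type negative feedback that regulates most regions toward a target exposure while forcing glare regions to minimum transmittance. The sketch below is a simplified proportional-integral version of such a per-region loop; the gains, target level, and glare threshold are assumptions rather than the paper's DSP parameters.

```python
import numpy as np

class RegionExposureController:
    """PI feedback on per-region mean CCD level via LCoS transmittance (0..1)."""

    def __init__(self, n_regions, target=0.6, kp=0.4, ki=0.1, glare_level=0.95):
        self.t = np.ones(n_regions)          # current transmittance per region
        self.integral = np.zeros(n_regions)
        self.target, self.kp, self.ki, self.glare_level = target, kp, ki, glare_level

    def update(self, region_means):
        """region_means: normalized mean CCD level (0..1) measured in each region."""
        means = np.asarray(region_means, dtype=float)
        error = self.target - means
        self.integral += error
        self.t += self.kp * error + self.ki * self.integral   # negative-feedback update
        self.t[means >= self.glare_level] = 0.0               # glare regions -> minimum light
        self.t = np.clip(self.t, 0.0, 1.0)
        return self.t

ctrl = RegionExposureController(n_regions=4)
print(ctrl.update([0.98, 0.70, 0.55, 0.30]))   # glare region driven to 0, others re-regulated
```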

  16. Preliminary study of the reliability of imaging charge coupled devices

    NASA Technical Reports Server (NTRS)

    Beall, J. R.; Borenstein, M. D.; Homan, R. A.; Johnson, D. L.; Wilson, D. D.; Young, V. F.

    1978-01-01

    Imaging CCDs are capable of low light level response and high signal-to-noise ratios. In space applications they offer the user the ability to achieve extremely high resolution imaging with minimum circuitry in the photo sensor array. This work relates the CCD121H Fairchild device to the fundamentals of CCDs and the representative technologies. Several failure modes are described, construction is analyzed and test results are reported. In addition, the relationship of the device reliability to packaging principles is analyzed and test data presented. Finally, a test program is defined for more general reliability evaluation of CCDs.

  17. [Nitrogen stress measurement of canola based on multi-spectral charged coupled device imaging sensor].

    PubMed

    Feng, Lei; Fang, Hui; Zhou, Wei-Jun; Huang, Min; He, Yong

    2006-09-01

    Site-specific variable nitrogen application is one of the major precision crop production management operations. Obtaining sufficient crop nitrogen stress information is essential for achieving effective site-specific nitrogen applications. The present paper describes the development of a multi-spectral nitrogen deficiency sensor, which uses three channels (green, red, near-infrared) of crop images to determine the nitrogen level of canola. The sensor assesses nitrogen stress by estimating the SPAD value of the canola from the canopy reflectance sensed in the three channels of the multi-spectral camera. The core of this investigation is the calibration between the multi-spectral measurements and the nitrogen levels in the crop measured using a SPAD 502 chlorophyll meter. Based on the results obtained from this study, it can be concluded that a multi-spectral CCD camera can provide sufficient information to perform reasonable SPAD value estimation during field operations.
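
    The abstract does not give the calibration model linking the three-channel reflectance to SPAD. As an illustration only, the sketch below computes a common two-band index (NDVI) from the red and NIR channels and fits a simple linear SPAD model; the index choice and all numbers are assumptions.

```python
import numpy as np

# Hypothetical per-plot channel means from the multispectral CCD camera (reflectance, 0..1)
red  = np.array([0.12, 0.10, 0.08, 0.06, 0.05])
nir  = np.array([0.40, 0.45, 0.50, 0.55, 0.58])
spad = np.array([28.0, 33.0, 38.0, 43.0, 46.0])   # paired SPAD-502 readings (assumed)

ndvi = (nir - red) / (nir + red)                  # normalized difference vegetation index

# Calibrate a linear SPAD model, then apply it to a new measurement.
slope, intercept = np.polyfit(ndvi, spad, 1)
print(slope * 0.78 + intercept)                   # estimated SPAD for NDVI = 0.78
```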

  18. Direct Detection Electron Energy-Loss Spectroscopy: A Method to Push the Limits of Resolution and Sensitivity.

    PubMed

    Hart, James L; Lang, Andrew C; Leff, Asher C; Longo, Paolo; Trevor, Colin; Twesten, Ray D; Taheri, Mitra L

    2017-08-15

    In many cases, electron counting with direct detection sensors offers improved resolution, lower noise, and higher pixel density compared to conventional, indirect detection sensors for electron microscopy applications. Direct detection technology has previously been utilized, with great success, for imaging and diffraction, but potential advantages for spectroscopy remain unexplored. Here we compare the performance of a direct detection sensor operated in counting mode and an indirect detection sensor (scintillator/fiber-optic/CCD) for electron energy-loss spectroscopy. Clear improvements in measured detective quantum efficiency and combined energy resolution/energy field-of-view are offered by counting mode direct detection, showing promise for efficient spectrum imaging, low-dose mapping of beam-sensitive specimens, trace element analysis, and time-resolved spectroscopy. Despite the limited counting rate imposed by the readout electronics, we show that both core-loss and low-loss spectral acquisition are practical. These developments will benefit biologists, chemists, physicists, and materials scientists alike.

  19. Flexible distributed architecture for semiconductor process control and experimentation

    NASA Astrophysics Data System (ADS)

    Gower, Aaron E.; Boning, Duane S.; McIlrath, Michael B.

    1997-01-01

    Semiconductor fabrication requires an increasingly expensive and integrated set of tightly controlled processes, driving the need for a fabrication facility with fully computerized, networked processing equipment. We describe an integrated, open system architecture enabling distributed experimentation and process control for plasma etching. The system was developed at MIT's Microsystems Technology Laboratories and employs in-situ CCD-interferometry-based analysis in the sensor-feedback control of an Applied Materials Precision 5000 Plasma Etcher (AME5000). Our system supports accelerated, advanced research involving feedback control algorithms, and includes a distributed interface that utilizes the internet to make these fabrication capabilities available to remote users. The system architecture is both distributed and modular: specific implementation of any one task does not restrict the implementation of another. The low-level architectural components include a host controller that communicates with the AME5000 equipment via SECS-II, and a host controller for the acquisition and analysis of the CCD sensor images. A cell controller (CC) manages communications between these equipment and sensor controllers. The CC is also responsible for process control decisions; algorithmic controllers may be integrated locally or via remote communications. Finally, a system server handles connections from internet/intranet (web) based clients and uses a direct link with the CC to access the system. Each component communicates via a predefined set of TCP/IP socket-based messages. This flexible architecture makes integration easier and more robust, and enables separate software components to run on the same or different computers independent of hardware or software platform.

  20. Performance of PHOTONIS' low light level CMOS imaging sensor for long range observation

    NASA Astrophysics Data System (ADS)

    Bourree, Loig E.

    2014-05-01

    Identification of potential threats in low-light conditions through imaging is commonly achieved with closed-circuit television (CCTV) and surveillance cameras by combining the extended near-infrared (NIR) response (800-10000 nm wavelengths) of the imaging sensor with NIR LED or laser illuminators. Consequently, camera systems typically used for long-range observation often require high-power lasers in order to generate sufficient photons on targets to acquire detailed images at night. While these systems may adequately identify targets at long range, the NIR illumination needed to achieve such functionality can easily be detected and therefore may not be suitable for covert applications. In order to reduce dependency on supplemental illumination in low-light conditions, the frame rate of the imaging sensor may be reduced to increase the photon integration time and thus improve the signal-to-noise ratio of the image. However, this may hinder the camera's ability to image moving objects with high fidelity. In order to address these particular drawbacks, PHOTONIS has developed a CMOS imaging sensor (CIS) with a pixel architecture and geometry designed specifically to overcome these issues in low-light-level imaging. By combining this CIS with field programmable gate array (FPGA)-based image processing electronics, PHOTONIS has achieved low-read-noise imaging with enhanced signal-to-noise ratio at quarter-moon illumination, all at standard video frame rates. The performance of this CIS is discussed herein and compared to other commercially available CMOS and CCD sensors for long-range observation applications.

  1. Intelligent error correction method applied on an active pixel sensor based star tracker

    NASA Astrophysics Data System (ADS)

    Schmidt, Uwe

    2005-10-01

    Star trackers are opto-electronic sensors used on board satellites for autonomous inertial attitude determination. In recent years star trackers have become increasingly important among the attitude and orbit control system (AOCS) sensors. High-performance star trackers have to date been based on charge coupled device (CCD) optical camera heads. The active pixel sensor (APS) technology, introduced in the early 1990s, now allows the beneficial replacement of CCD detectors by APS detectors with respect to performance, reliability, power, mass and cost. The company's heritage in star tracker design started in the early 1980s with the launch of the world's first fully autonomous star tracker system, ASTRO1, to the Russian MIR space station. Jena-Optronik recently developed an active-pixel-sensor-based autonomous star tracker, "ASTRO APS", as successor to the CCD-based star tracker product series ASTRO1, ASTRO5, ASTRO10 and ASTRO15. Key features of the APS detector technology are true xy-address random access, multiple-window readout, and on-chip signal processing including analogue-to-digital conversion. These features can be used for robust star tracking at high slew rates and under adverse conditions such as stray light and solar-flare-induced single event upsets. A special algorithm has been developed to manage the typical APS detector error contributors such as fixed pattern noise (FPN), dark signal non-uniformity (DSNU) and white spots. The algorithm works fully autonomously and adapts automatically to, for example, increasing DSNU and newly appearing white spots, without ground maintenance or re-calibration. In contrast to conventional correction methods, the described algorithm does not need memory for calibration data such as full-image-sized calibration data sets. The presented algorithm for managing the typical APS detector error contributors is a key element in the design of star trackers for long-term satellite applications such as geostationary telecom platforms.

  2. Performance measurement of commercial electronic still picture cameras

    NASA Astrophysics Data System (ADS)

    Hsu, Wei-Feng; Tseng, Shinn-Yih; Chiang, Hwang-Cheng; Cheng, Jui-His; Liu, Yuan-Te

    1998-06-01

    Commercial electronic still-picture cameras need a low-cost, systematic method for evaluating their performance. In this paper, we present a measurement method for evaluating dynamic range and sensitivity by constructing the opto-electronic conversion function (OECF), fixed pattern noise by the peak S/N ratio (PSNR) and the image shading function (ISF), and spatial resolution by the modulation transfer function (MTF). The evaluation results for the individual color components and the luminance signal of a PC camera using a SONY interlaced CCD array as the image sensor are then presented.
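
    One common way to estimate fixed-pattern noise for a PSNR figure is to average many flat-field frames so temporal noise cancels, and then compare the residual spatial variation to full scale. The sketch below implements that formulation, which is a plausible reading of the paper's PSNR metric rather than the authors' exact definition.

```python
import numpy as np

def fixed_pattern_psnr(frames, full_scale=255.0):
    """PSNR of fixed-pattern noise from a stack of flat-field frames, shape (N, H, W)."""
    mean_frame = frames.mean(axis=0)     # temporal averaging leaves mostly the FPN
    fpn_rms = mean_frame.std()           # spatial RMS deviation across pixels
    return 20.0 * np.log10(full_scale / fpn_rms)

rng = np.random.default_rng(2)
fpn = rng.normal(0.0, 1.5, (480, 640))                       # simulated per-pixel offsets
frames = 128.0 + fpn + rng.normal(0.0, 4.0, (32, 480, 640))  # 32 flat-field captures
print(f"{fixed_pattern_psnr(frames):.1f} dB")
```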

  3. Diffraction mode terahertz tomography

    DOEpatents

    Ferguson, Bradley; Wang, Shaohong; Zhang, Xi-Cheng

    2006-10-31

    A method of obtaining a series of images of a three-dimensional object. The method includes the steps of transmitting pulsed terahertz (THz) radiation through the entire object from a plurality of angles, optically detecting changes in the transmitted THz radiation using pulsed laser radiation, and constructing a plurality of imaged slices of the three-dimensional object using the detected changes in the transmitted THz radiation. The THz radiation is transmitted through the object as a two-dimensional array of parallel rays. The optical detection is an array of detectors such as a CCD sensor.

  4. Research on coding and decoding method for digital levels.

    PubMed

    Tu, Li-fen; Zhong, Si-dong

    2011-01-20

    A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in the digital image signal, the trade-off by which field of view and image resolution restrict each other in digital level measurement is overcome, and geodetic leveling becomes easier. The experimental results demonstrate that the measurement uncertainty is 1 mm over a measuring range of 2 m to 100 m, which meets practical needs.

  5. A CMOS high speed imaging system design based on FPGA

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Wang, Huawei; Cao, Jianzhong; Qiao, Mingrui

    2015-10-01

    CMOS sensors have several advantages over traditional CCD sensors, and CMOS-based imaging systems have become a focus of research and development. To achieve real-time data acquisition and high-speed transmission, we designed a high-speed CMOS imaging system based on an FPGA. The core control chip of the system is the XC6SL75T, and a CameraLink interface and an AM41V4 CMOS image sensor are used to transmit and acquire image data. The AM41V4 is a 4-megapixel, 500 frames-per-second CMOS image sensor with a global shutter and a 4/3" optical format; it uses column-parallel A/D converters to digitize the images. The CameraLink interface adopts the DS90CR287, which converts 28 bits of LVCMOS/LVTTL data into four LVDS data streams. Light reflected from objects is captured by the CMOS detector, which converts it to electronic signals and sends them to the FPGA. The FPGA processes the received data and transmits it through the CameraLink interface, configured in full mode, to a host computer equipped with acquisition cards, where the images are stored, visualized, and processed. This paper explains the structure and principle of the system and presents its hardware and software design. The FPGA provides the drive clock for the CMOS sensor, and the sensor data are converted to LVDS signals and transmitted to the data acquisition cards. After simulation, the paper presents the row-transfer timing sequence of the CMOS sensor. The system achieves real-time image acquisition and external control.

  6. The placido wavefront sensor and preliminary measurement on a mechanical eye.

    PubMed

    Carvalho, Luis Alberto; Castro, Jarbas C

    2006-02-01

    The hardware and software of a novel wavefront sensor were developed (the sensor presented here is patent pending). It has the same principle as the Hartmann-Shack (HS) and other sensors that are based on slope information for recovery of the wavefront surface, but a different symmetry, and it does not use individual microlenses. This polar symmetry might offer differences during practical measurements that may add value to current and well-established "gold standard" techniques. The sensor consists of a set of concentric "half-donut" surfaces (longitudinally sectioned toroids) molded on an acrylic surface, with a CCD located at the focal plane. When illuminated with a plane wavefront, it focuses a symmetric pattern of concentric discs on the CCD plane; for a distorted wavefront, a nonsymmetric disc pattern is formed (similar to images from a placido-based videokeratographer). From detection of shifts in the radial direction, radial slopes are computed for a maximum of 2880 points, and the traditional least-squares procedure is used to fit these partial derivatives to a set of 15 conventional OSA-VSIA Zernike polynomials. Theoretical computations for several synthetic surfaces containing low-order aberrations (LOA) and high-order aberrations (HOA) were implemented for both the HS sensor and the new sensor. The root mean square error (RMSE) in microns, with the theoretical data taken as control, was 0.02 for the HS sensor and 0.00003 for the new sensor for LOA (defocus, astigmatism), and 0.07 and 0.06, respectively, for HOA (coma, spherical, and higher terms). After this, practical preliminary measurements on a mechanical eye with a 5-mm pupil and 10 different defocus aberrations ranging from -5 D to 5 D, in steps of 1 D, were compared between sensors. The RMSE of the difference in measurements between the HS sensor and the new sensor for sphere, cylinder, and axis was 0.13 D, 0.07 D, and 11, respectively. Measurements were taken only on defocus aberrations. Qualitative images for astigmatism are shown. Although practical in vivo tests were not conducted in this first study, we also discuss possible alignment differences that may arise as a result of the different symmetry of the new sensor. To draw any conclusions regarding the accuracy and/or precision of this new sensor compared with other well-established sensors, statistically significant in vivo measurements will need to be conducted.
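
    The reconstruction step described above is a standard modal least-squares fit of measured slopes to Zernike derivatives. The sketch below shows that step for a hand-written three-term basis (defocus and the two primary astigmatisms) with Cartesian slopes; the real sensor fits up to 2880 radial slopes to 15 OSA-VSIA terms, so this is an illustrative subset, not the paper's implementation.

```python
import numpy as np

# Random sample points on the unit pupil (the real sensor samples up to 2880 points).
rng = np.random.default_rng(3)
n = 500
r = np.sqrt(rng.random(n))
th = 2.0 * np.pi * rng.random(n)
x, y = r * np.cos(th), r * np.sin(th)

# Analytic x/y derivatives of three low-order Zernike terms used as the fitting basis:
# defocus sqrt(3)*(2r^2 - 1), astigmatism-0 sqrt(6)*(x^2 - y^2), astigmatism-45 sqrt(6)*2xy.
dZdx = np.column_stack([4*np.sqrt(3)*x,  2*np.sqrt(6)*x, 2*np.sqrt(6)*y])
dZdy = np.column_stack([4*np.sqrt(3)*y, -2*np.sqrt(6)*y, 2*np.sqrt(6)*x])

# Simulate noisy slope measurements for known coefficients, then recover them.
c_true = np.array([0.5, -0.2, 0.1])                    # arbitrary "wavefront" coefficients
slopes = np.concatenate([dZdx @ c_true, dZdy @ c_true])
slopes += rng.normal(0.0, 0.01, slopes.size)

A = np.vstack([dZdx, dZdy])                            # design matrix: slopes vs. coefficients
c_fit, *_ = np.linalg.lstsq(A, slopes, rcond=None)
print(c_fit)                                           # ~[0.5, -0.2, 0.1]
```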

  7. UV-sensitive scientific CCD image sensors

    NASA Astrophysics Data System (ADS)

    Vishnevsky, Grigory I.; Kossov, Vladimir G.; Iblyaminova, A. F.; Lazovsky, Leonid Y.; Vydrevitch, Michail G.

    1997-06-01

    The investigation of the interaction of probe laser irradiation with substances contained in the environment has long been a recognized technique for contamination detection and identification. For this purpose, near- and mid-range-IR laser irradiation is traditionally used. However, as many works presented at recent ecology-monitoring conferences show, systems using laser irradiation in the near-UV range (250-500 nm) are growing rapidly alongside the traditional ones. The availability of CCD imagers is one of the prerequisites for this, allowing the development of multi-channel, computer-based spectral research systems. To identify and analyze contaminating impurities in the environment, methods such as laser fluorescence analysis, UV absorption and differential spectroscopy, and Raman scattering are commonly used. These methods are used to identify a large number of impurities (petrol, toluene, xylene isomers, SO2, acetone, methanol), to detect and identify food pathogens in real time, to measure concentrations of NH3, SO2 and NO in combustion outbursts, to detect oil products in water, to analyze contamination in ground waters, to determine the ozone distribution in the atmospheric profile, and to monitor various chemical processes including radioactive materials manufacturing, heterogeneous catalytic reactions, and polymer production. A multi-element image sensor with enhanced UV sensitivity, low optical non-uniformity, low intrinsic noise, and high dynamic range is a key element of all of the above systems. Thus, so-called Virtual Phase (VP) CCDs, which possess all these features, seem promising for ecology-monitoring spectral measuring systems. Presently, a family of VP CCDs with different architectures and numbers of pixels has been developed and is being manufactured. All CCDs from this family are supported by a precise slow-scan digital image acquisition system that can be used in various image processing systems in astronomy, biology, medicine, ecology, etc. The image is displayed directly on a PC monitor through software support.

  8. Continuous non-invasive blood glucose monitoring by spectral image differencing method

    NASA Astrophysics Data System (ADS)

    Huang, Hao; Liao, Ningfang; Cheng, Haobo; Liang, Jing

    2018-01-01

    Currently, implantable enzyme-electrode sensors are the main method for continuous blood glucose monitoring. However, the effects of electrochemical reactions and the significant drift caused by bioelectricity in the body reduce the accuracy of the glucose measurements, so enzyme-based glucose sensors need to be calibrated several times each day by finger-prick blood corrections, which increases the patient's pain. In this paper, we propose a method for continuous non-invasive blood glucose monitoring by spectral image differencing in the near-infrared band. The method uses a high-precision CCD detector and switches filters within a very short period of time to obtain the spectral images. Morphological processing is then used to obtain the spectral image differences, and the dynamic change of blood glucose is reflected in the image difference data. Experiments show that this method can, to a certain extent, be used to monitor blood glucose dynamically.

  9. Shortwave infrared 512 x 2 line sensor for earth resources applications

    NASA Astrophysics Data System (ADS)

    Tower, J. R.; Pellon, L. E.; McCarthy, B. M.; Elabd, H.; Moldovan, A. G.; Kosonocky, W. F.; Kalshoven, J. E., Jr.; Tom, D.

    1985-08-01

    As part of the NASA remote-sensing Multispectral Linear Array Program, an edge-buttable 512 x 2 IRCCD line image sensor with 30-micron Pd2Si Schottky-barrier detectors was developed for operation with passive cooling at 120 K in the 1.1-2.5 micron shortwave infrared band. On-chip CCD multiplexers provide one video output for each 512-detector band. The performance of the monolithic silicon line imager at a 4-ms optical integration time includes a signal-to-noise ratio of 241 for an irradiance of 7.2 microwatts/sq cm at 1.65 microns wavelength, a dynamic range of 5000, a modulation transfer function greater than 60 percent at the Nyquist frequency, and a total imager-chip power dissipation of 18 milliwatts. Blemish-free images with three percent nonuniformity under illumination and nonlinearity of 1.25 percent were obtained. A five-imager SWIR hybrid focal plane was constructed, demonstrating the feasibility of arrays with only a two-detector loss at each joint.

  10. Pc-based car license plate reading

    NASA Astrophysics Data System (ADS)

    Tanabe, Katsuyoshi; Marubayashi, Eisaku; Kawashima, Harumi; Nakanishi, Tadashi; Shio, Akio

    1994-03-01

    A PC-based car license plate recognition system has been developed. The system recognizes Chinese characters and Japanese phonetic hiragana characters as well as six digits on Japanese license plates. The system consists of a CCD camera, vehicle sensors, a strobe unit, a monitoring center, and an i486-based PC. The PC includes in its extension slots a vehicle detector board, a strobe emitter board, and an image grabber board. When a passing vehicle is detected by the vehicle sensors, the strobe emits a pulse of light, synchronized with the moment the vehicle image is frozen on the image grabber board. The recognition process is composed of three steps: image thresholding, character region extraction, and matching-based character recognition. The recognition software can handle obscured characters. Experimental results for hundreds of outdoor images showed high recognition performance within relatively short processing times. The results confirmed that the system is applicable to a wide variety of applications such as automatic vehicle identification and travel time measurement.

  11. Laser pulse detection method and apparatus

    NASA Technical Reports Server (NTRS)

    Goss, W.; Janesick, J. R. (Inventor)

    1984-01-01

    A sensor is described for detecting the difference in phase of a pair of returned light pulse components, such as the two components of a light pulse in an optical gyro. In an optical gyro, the two light components have passed in opposite directions through a coil of optical fiber, and the difference in phase of the returned light components determines the intensity of the light shining on the sensor. The sensor includes a CCD (charge coupled device) that receives the pair of returned light components and generates a charge proportional to the number of photons in the received light; the amount of charge represents the phase difference between the two light components. At a time after the transmission of the light pulse and before the expected time of arrival of the interfering light components, charge accumulating in the CCD as a result of reflections from components in the system is repeatedly removed by transferring the charges out of the CCD and dumping them.

  12. Scientific CCD technology at JPL

    NASA Technical Reports Server (NTRS)

    Janesick, J.; Collins, S. A.; Fossum, E. R.

    1991-01-01

    Charge-coupled devices (CCD's) were recognized for their potential as an imaging technology almost immediately following their conception in 1970. Twenty years later, they are firmly established as the technology of choice for visible imaging. While consumer applications of CCD's, especially the emerging home video camera market, dominated manufacturing activity, the scientific market for CCD imagers has become significant. Activity of the Jet Propulsion Laboratory and its industrial partners in the area of CCD imagers for space scientific instruments is described. Requirements for scientific imagers are significantly different from those needed for home video cameras, and are described. An imager for an instrument on the CRAF/Cassini mission is described in detail to highlight achieved levels of performance.

  13. Applications of iQID cameras

    NASA Astrophysics Data System (ADS)

    Han, Ling; Miller, Brian W.; Barrett, Harrison H.; Barber, H. Bradford; Furenlid, Lars R.

    2017-09-01

    iQID is an intensified quantum imaging detector developed in the Center for Gamma-Ray Imaging (CGRI). Originally called BazookaSPECT, iQID was designed for high-resolution gamma-ray imaging and preclinical gamma-ray single-photon emission computed tomography (SPECT). With the use of a columnar scintillator, an image intensifier, and modern CCD/CMOS sensors, iQID cameras feature outstanding intrinsic spatial resolution. In recent years, many advances have been achieved that greatly boost the performance of iQID, broadening its applications to cover nuclear and particle imaging for preclinical, clinical, and homeland security settings. This paper presents an overview of the recent advances in iQID technology and its applications in preclinical and clinical scintigraphy, preclinical SPECT, particle imaging (alpha, neutron, beta, and fission fragment), and digital autoradiography.

  14. A Method for Application of Classification Tree Models to Map Aquatic Vegetation Using Remotely Sensed Images from Different Sensors and Dates

    PubMed Central

    Jiang, Hao; Zhao, Dehua; Cai, Ying; An, Shuqing

    2012-01-01

    In previous attempts to identify aquatic vegetation from remotely-sensed images using classification trees (CT), the images used to apply CT models to different times or locations necessarily originated from the same satellite sensor as that from which the original images used in model development came, greatly limiting the application of CT. We have developed an effective normalization method to improve the robustness of CT models when applied to images originating from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from 2009 ground-truth data and images from Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for different sensor images differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (Method of 0.1% index scaling) normalized the SI images using tailored percentages of extreme pixel values. Using the images normalized by Method of 0.1% index scaling, CT models for a particular sensor in which thresholds were replaced by those from the models developed for images originating from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies. Our results suggest that Method of 0.1% index scaling provides a feasible way to apply CT models directly to images from sensors or time periods that differ from those of the images used to develop the original models.
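
    The best-performing normalization ("Method of 0.1% index scaling") rescales each spectral-index image using tailored percentages of its extreme pixel values. One plausible reading is a stretch between the 0.1th and 99.9th percentiles, sketched below; the details may differ from the authors' exact procedure.

```python
import numpy as np

def normalize_index_image(si, lower_pct=0.1, upper_pct=99.9):
    """Rescale a spectral-index image to [0, 1] between its 0.1% extreme percentiles."""
    lo, hi = np.percentile(si, [lower_pct, upper_pct])
    return np.clip((si - lo) / (hi - lo), 0.0, 1.0)

rng = np.random.default_rng(4)
index_image = rng.normal(0.3, 0.2, (400, 400))    # stand-in spectral-index (SI) image
normalized = normalize_index_image(index_image)
print(normalized.min(), normalized.max())         # 0.0 1.0
```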

  15. High-frame rate multiport CCD imager and camera

    NASA Astrophysics Data System (ADS)

    Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.

    1993-01-01

    A high frame rate visible CCD camera capable of operation up to 200 frames per second is described. The camera produces a 256 X 256 pixel image by using one quadrant of a 512 X 512 16-port, back illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct, 256 X 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.
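
    The reformatting step, turning four contiguous port readouts into one correct 256 x 256 image, is essentially a rearrangement of sub-arrays. The sketch below assumes four side-by-side 64-column strips with one port read out mirrored, purely to illustrate the operation; the actual port geometry of the 16-port imager is not given in this summary.

```python
import numpy as np

def reformat_quadrant(port_streams, reversed_ports=()):
    """Rebuild a 256x256 image from four 256x64 port readouts (assumed geometry).

    port_streams   : list of four arrays, each 256 rows x 64 columns.
    reversed_ports : indices of ports whose columns arrive mirrored.
    """
    strips = [s[:, ::-1] if i in reversed_ports else s for i, s in enumerate(port_streams)]
    return np.hstack(strips)                                  # (256, 256)

# Round-trip test with a synthetic image.
rng = np.random.default_rng(5)
truth = rng.integers(0, 4096, (256, 256))
ports = [truth[:, i*64:(i+1)*64].copy() for i in range(4)]
ports[1] = ports[1][:, ::-1]                                  # pretend port 1 reads mirrored
print(np.array_equal(reformat_quadrant(ports, reversed_ports=(1,)), truth))   # True
```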

  16. Sensor performance and weather effects modeling for intelligent transportation systems (ITS) applications

    NASA Astrophysics Data System (ADS)

    Everson, Jeffrey H.; Kopala, Edward W.; Lazofson, Laurence E.; Choe, Howard C.; Pomerleau, Dean A.

    1995-01-01

    Optical sensors are used for several ITS applications, including lateral control of vehicles, traffic sign recognition, car following, autonomous vehicle navigation, and obstacle detection. This paper treats the performance assessment of a sensor/image processor used as part of an on-board countermeasure system to prevent single-vehicle roadway-departure crashes. Sufficient image contrast between objects of interest and backgrounds is an essential factor influencing overall system performance. Contrast is determined by material properties affecting reflected/radiated intensities, as well as weather and visibility conditions. This paper discusses the modeling of these parameters and characterizes the contrast effects of reduced visibility. The analysis process first involves generation of inherent road/off-road contrasts, followed by weather effects applied as a contrast modification. The sensor is modeled as a charge coupled device (CCD) with variable parameters. The results of the sensor/weather modeling are used to predict the performance of an in-vehicle warning system under various levels of adverse weather. Software employed in this effort was previously developed for the U.S. Air Force Wright Laboratory to determine target/background detection and recognition ranges for different sensor systems operating under various mission scenarios.
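
    The paper treats weather as a modification of the inherent road/off-road contrast but its exact model is not reproduced here. A standard first-order stand-in is Koschmieder's law, in which apparent contrast decays exponentially with range scaled by meteorological visibility; the sketch below uses that relation and illustrative numbers.

```python
import numpy as np

def apparent_contrast(c_inherent, range_m, visibility_m):
    """Koschmieder-type attenuation: C(R) = C0 * exp(-3.912 * R / V)."""
    return c_inherent * np.exp(-3.912 * range_m / visibility_m)

c0 = 0.6                                  # inherent road/off-road contrast (illustrative)
for vis_m in (10000.0, 2000.0, 500.0):    # clear air, haze, fog (visibility in meters)
    print(vis_m, apparent_contrast(c0, range_m=100.0, visibility_m=vis_m))
```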

  17. Design on the x-ray oral digital image display card

    NASA Astrophysics Data System (ADS)

    Wang, Liping; Gu, Guohua; Chen, Qian

    2009-10-01

    In accordance with the main characteristics of X-ray imaging, an X-ray display card for oral imaging was designed and debugged using the principle of correlated double sampling (CDS) combined with embedded computer technology. The CCD sensor drive circuit and the corresponding firmware were designed, along with the filtering and sample-and-hold circuits, and data exchange over the PC104 bus was implemented. A complex programmable logic device provides the gating and timing logic and handles counting, reading CPU control instructions, triggering the exposure, and controlling the sample-and-hold. The circuit components were adjusted according to the resulting image quality and noise analysis, and high-quality images were obtained.

  18. Vulnerability of CMOS image sensors in Megajoule Class Laser harsh environment.

    PubMed

    Goiffon, V; Girard, S; Chabane, A; Paillet, P; Magnan, P; Cervantes, P; Martin-Gonthier, P; Baggio, J; Estribeau, M; Bourgade, J-L; Darbon, S; Rousseau, A; Glebov, V Yu; Pien, G; Sangster, T C

    2012-08-27

    CMOS image sensors (CIS) are promising candidates as part of optical imagers for the plasma diagnostics devoted to the study of fusion by inertial confinement. However, the harsh radiative environment of Megajoule Class Lasers threatens the performances of these optical sensors. In this paper, the vulnerability of CIS to the transient and mixed pulsed radiation environment associated with such facilities is investigated during an experiment at the OMEGA facility at the Laboratory for Laser Energetics (LLE), Rochester, NY, USA. The transient and permanent effects of the 14 MeV neutron pulse on CIS are presented. The behavior of the tested CIS shows that active pixel sensors (APS) exhibit a better hardness to this harsh environment than a CCD. A first order extrapolation of the reported results to the higher level of radiation expected for Megajoule Class Laser facilities (Laser Megajoule in France or National Ignition Facility in the USA) shows that temporarily saturated pixels due to transient neutron-induced single event effects will be the major issue for the development of radiation-tolerant plasma diagnostic instruments whereas the permanent degradation of the CIS related to displacement damage or total ionizing dose effects could be reduced by applying well known mitigation techniques.

  19. Performance of 4x5120 Element Visible and 2x2560 Element Shortwave Infrared Multispectral Focal Planes

    NASA Astrophysics Data System (ADS)

    Tower, J. R.; Cope, A. D.; Pellion, L. E.; McCarthy, B. M.; Strong, R. T.; Kinnard, K. F.; Moldovan, A. G.; Levine, P. A.; Elabd, H.; Hoffman, D. M.

    1985-12-01

    Performance measurements of two Multispectral Linear Array focal planes are presented. Both pushbroom sensors have been developed for application in remote sensing instruments. A buttable, four-spectral-band, linear-format charge coupled device (CCD) and a buttable, two-spectral-band, linear-format, shortwave infrared charge coupled device (IRCCD) have been developed under NASA funding. These silicon integrated circuits may be butted end to end to provide very-high-resolution multispectral focal planes. The visible CCD is organized as four sensor lines of 1024 pixels each. Each line views the scene in a different spectral window defined by integral optical bandpass filters. A prototype focal plane with five devices, providing 4x5120-pixel resolution, has been demonstrated. The high quantum efficiency of the backside-illuminated CCD technology provides excellent signal-to-noise performance and unusually high MTF across the entire visible and near-IR spectrum. The shortwave infrared (SWIR) sensor is organized as two line sensors of 512 detectors each. The SWIR (1-2.5 μm) spectral windows may be defined by bandpass filters placed in close proximity to the devices. The dual-band sensor consists of Schottky barrier detectors read out by CCD multiplexers. This monolithic sensor operates at 125 K with radiometric performance. A prototype five-device focal plane providing 2x2560 detectors has been demonstrated. The devices provide very high uniformity and excellent MTF across the SWIR band.

  20. Detection of cavitated or non-cavitated approximal enamel caries lesions using CMOS and CCD digital X-ray sensors and conventional D and F-speed films at different exposure conditions.

    PubMed

    Bottenberg, Peter; Jacquet, Wolfgang; Stachniss, Vitus; Wellnitz, Johann; Schulte, Andreas G

    2011-04-01

    To determine the ability of digital sensors (CMOS and CCD sensors) and D and F-speed films to detect cavitated and non-cavitated enamel caries lesions at different exposure conditions compared to a gold standard. 100 extracted human molars and premolars were selected and mounted in a block between two neighboring teeth. Sensors or films were exposed with voltages of 60 or 70 kVp at varying times. Three observers assessed each approximal site independently. Lesion depth was rated according to an anatomical five-point scale (0 = no lesion to 4 = lesion reaching inner half of dentin). Serial sections of resin-embedded teeth were prepared. Gold-standard scores were established by consensus based on histological sectioning. A carious lesion was present at scores of 1 and higher. Statistical evaluation (sensitivity, specificity and receiver-operating curves) was based on caries-free surfaces and those presenting enamel caries (n=116). The ROC curves had "area under the curve" values (Az) from 0.50 (F-speed, 70 kVp, 0.20 seconds) to 0.58 (CCD 60 kVp, 0.08 seconds). The detection percentage of cavitated lesions was generally higher (0-52%, depending on technique and observer) than that of non-cavitated lesions (3-32%). The CMOS sensor showed Az values comparable to the CCD sensors but required higher exposure times. There was no significant difference between 60 and 70 kVp.

  1. Practical Method to Identify Orbital Anomaly as Breakup Event in the Geostationary Region

    DTIC Science & Technology

    2015-01-14

    Table 4 summarizes the results of the origin identifications; one object, labeled x15300, was... Table 4 lists, for each of the seven detected objects, the object name, the parent object, the inclination vector, the pinch point, and the geocentric distance at the pinch point. X-Y, X'-Y', and R.A.-Dec. represent the image coordinates before rotating the CCD sensor, the coordinates after rotation, and the Geocentric Inertial coordinates of the object.

  2. Characterization and Processing of Non-Uniformities in Back-Illuminated CCDs

    NASA Astrophysics Data System (ADS)

    Lemm, Alia D.; Della-Rose, Devin J.; Maddocks, Sally

    2018-01-01

    In astronomical photometry, Charge-Coupled Device (CCD) detectors are used to achieve high-precision photometry and must be properly calibrated to correct for noise and pixel non-uniformities. Uncalibrated images may contain bias offset, dark current, bias structure, and uneven illumination. In addition, standard data reduction is often not sufficient to “normalize” imagery to single-digit millimagnitude (mmag) precision. We are investigating an apparent non-uniformity, or interference pattern, in a back-illuminated sensor, the Alta U-47, attached to a DFM Engineering 41-cm Ritchey-Chrétien f/8 telescope. Based on the amplitude of this effect, we estimate that instrumental magnitude peak-to-valley deviations of 50 mmag or more may result. Our initial testing strongly suggests that reflected skylight from high-pressure sodium city lights may be the cause of this interference pattern. Our research goals are twofold: to fully characterize this non-uniformity and to determine the best method to remove this interference pattern from our reduced CCD images.
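
    Before the interference pattern can be characterized, frames go through the standard reduction chain: bias subtraction, dark scaling, and flat-fielding. The sketch below shows that conventional chain with synthetic frames; the frame values and exposure times are assumptions, not data from the Alta U-47.

```python
import numpy as np

def calibrate(raw, bias, dark, flat, exposure_s, dark_exposure_s):
    """Bias-subtract, dark-scale, and flat-field a raw CCD frame."""
    dark_scaled = (dark - bias) * (exposure_s / dark_exposure_s)  # dark current scales with time
    science = raw - bias - dark_scaled
    flat_field = flat - bias
    flat_field /= flat_field.mean()                               # normalize flat to unity mean
    return science / flat_field

rng = np.random.default_rng(6)
shape = (128, 128)
bias = np.full(shape, 300.0)                                      # bias (zero-exposure) frame
dark = bias + 5.0                                                 # 60 s dark frame
flat = bias + 20000.0 * (0.9 + 0.2 * rng.random(shape))           # uneven illumination pattern
raw  = bias + 5.0 + 1000.0 * (0.9 + 0.2 * rng.random(shape))      # 60 s science exposure
print(calibrate(raw, bias, dark, flat, exposure_s=60.0, dark_exposure_s=60.0).mean())
```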

  3. Panoramic thermal imaging: challenges and tradeoffs

    NASA Astrophysics Data System (ADS)

    Aburmad, Shimon

    2014-06-01

    Over the past decade, we have witnessed a growing demand for electro-optical systems that can provide continuous 360° coverage. Applications such as perimeter security, autonomous vehicles, and military warning systems are a few of the most common applications for panoramic imaging. There are several different technological approaches to achieving panoramic imaging. Solutions based on rotating elements do not provide continuous coverage, as there is a time lag between updates. Continuous panoramic solutions either use "stitched" images from multiple adjacent sensors or sophisticated optical designs that warp a panoramic view onto a single sensor. When dealing with panoramic imaging in the visible spectrum, high-volume production and the advancement of semiconductor technology have enabled the use of CMOS/CCD image sensors with huge pixel counts, small pixel dimensions, and low cost. In the infrared spectrum, however, the growth of detector pixel counts, pixel-size reduction, and cost reduction are taking place at a slower rate because of the complexity of the technology and limitations imposed by the laws of physics. In this work, we explore the challenges involved in achieving 360° panoramic thermal imaging and analyze aspects such as spatial resolution, FOV, data complexity, FPA utilization, system complexity, coverage, and cost of the different solutions. We provide illustrations, calculations, and tradeoffs between three solutions evaluated by Opgal: a unique 360° lens design using an LWIR XGA detector, stitching of three adjacent LWIR sensors equipped with low-distortion 120° lenses, and a fisheye lens with an HFOV of 180° and an XGA sensor.

  4. Computer simulation and discussion of high-accuracy laser direction finding in real time

    NASA Astrophysics Data System (ADS)

    Chen, Wenyi; Chen, Yongzhi

    1997-12-01

    Assuming a CCD is used as the sensor, there are at least five methods that can be used to realize high-accuracy laser direction finding: the image matching method, the radiation center method, the geometric center method, the center-of-rectangle-envelope method, and the center-of-maximum-run-length method. The first three can achieve the highest accuracy, but they are too complicated to realize in real time and their cost is very high. The other two can also achieve high accuracy and are not difficult to realize in real time. Using a single-chip microcomputer and an ordinary CCD camera, a very simple system can obtain the position information of a laser beam at a rate of 50 measurements per second.
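
    Of the methods listed, the geometric-center (centroid) approach is among the simplest to run in real time: threshold the CCD frame and take the intensity-weighted mean position of the laser spot. A minimal sketch (the threshold and the synthetic spot are illustrative):

```python
import numpy as np

def spot_centroid(frame, threshold):
    """Intensity-weighted centroid of pixels above threshold (sub-pixel position)."""
    weights = np.where(frame > threshold, frame.astype(float), 0.0)
    rows, cols = np.indices(frame.shape)
    total = weights.sum()
    return (rows * weights).sum() / total, (cols * weights).sum() / total

# Synthetic laser spot centered near row 120.3, column 200.7 on a 256x256 frame.
rr, cc = np.indices((256, 256))
frame = 1000.0 * np.exp(-((rr - 120.3)**2 + (cc - 200.7)**2) / (2.0 * 3.0**2))
print(spot_centroid(frame, threshold=50.0))    # ~(120.3, 200.7)
```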

  5. Advanced optical position sensors for magnetically suspended wind tunnel models

    NASA Technical Reports Server (NTRS)

    Lafleur, S.

    1985-01-01

    A major concern to aerodynamicists has been the corruption of wind tunnel test data by model support structures, such as stings or struts. A technique for magnetically suspending wind tunnel models was considered by Tournier and Laurenceau (1957) in order to overcome this problem. This technique is now implemented with the aid of a Large Magnetic Suspension and Balance System (LMSBS) and advanced position sensors for measuring model attitude and position within the test section. Two different optical position sensors are discussed, taking into account a device based on the use of linear CCD arrays, and a device utilizing area CID cameras. Current techniques in image processing have been employed to develop target tracking algorithms capable of subpixel resolution for the sensors. The algorithms are discussed in detail, and some preliminary test results are reported.

  6. Design and Fabrication of High-Efficiency CMOS/CCD Imagers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata

    2007-01-01

    An architecture for back-illuminated complementary metal oxide/semiconductor (CMOS) and charge-coupled-device (CCD) ultraviolet/visible/near-infrared-light image sensors, and a method of fabrication to implement the architecture, are undergoing development. The architecture and method are expected to enable realization of the full potential of back-illuminated CMOS/CCD imagers to perform with high efficiency, high sensitivity, excellent angular response, and in-pixel signal processing. The architecture and method are compatible with next-generation CMOS dielectric-forming and metallization techniques, and the process flow of the method is compatible with process flows typical of the manufacture of very-large-scale integrated (VLSI) circuits. The architecture and method overcome all obstacles that have hitherto prevented high-yield, low-cost fabrication of back-illuminated CMOS/CCD imagers by use of standard VLSI fabrication tools and techniques. It is not possible to discuss the obstacles in detail within the space available for this article. Briefly, the obstacles are posed by the problems of generating light-absorbing layers having desired uniform and accurate thicknesses, passivation of surfaces, forming structures for efficient collection of charge carriers, and wafer-scale thinning (in contradistinction to die-scale thinning). A basic element of the present architecture and method - the element that, more than any other, makes it possible to overcome the obstacles - is the use of an alternative starting material: Instead of starting with a conventional bulk-CMOS wafer that consists of a p-doped epitaxial silicon layer grown on a heavily-p-doped silicon substrate, one starts with a special silicon-on-insulator (SOI) wafer that consists of a thermal oxide buried between a lightly p- or n-doped, thick silicon layer and a device silicon layer of appropriate thickness and doping. The thick silicon layer is used as a handle: that is, as a mechanical support for the device silicon layer during micro-fabrication.

  7. Ultrahigh sensitivity endoscopic camera using a new CMOS image sensor: providing with clear images under low illumination in addition to fluorescent images.

    PubMed

    Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio

    2014-11-01

    We developed a new ultrahigh-sensitivity CMOS camera using a specific sensor that has a wide range of spectral sensitivity characteristics. The objective of this study is to present our updated endoscopic technology, which successfully integrates two innovative functions: ultrasensitive imaging and advanced fluorescence viewing. Two different experiments were conducted. One was carried out to evaluate the function of the ultrahigh-sensitivity camera; the other tested the availability of the newly developed sensor and its performance as a fluorescence endoscope. In both studies, the distance from the endoscopic tip to the target was varied, and the endoscopic images in each setting were taken for comparison. In the first experiment, the 3-CCD camera failed to display clear images under low illumination, and the target was hardly visible. In contrast, the CMOS camera was able to display the targets regardless of the camera-target distance under low illumination. Under high illumination, the imaging quality of the two cameras was quite similar. In the second experiment, as a fluorescence endoscope, the CMOS camera was capable of clearly showing the fluorescence-activated organs. The ultrahigh-sensitivity CMOS HD endoscopic camera is expected to provide clear images under low illumination in addition to fluorescence images under high illumination in the field of laparoscopic surgery.

  8. Fiber-MZI-based FBG sensor interrogation: comparative study with a CCD spectrometer.

    PubMed

    Das, Bhargab; Chandra, Vikash

    2016-10-10

    We present an experimental comparative study of the two most commonly used fiber Bragg grating (FBG) sensor interrogation techniques: a charge-coupled device (CCD) spectrometer and a fiber Mach-Zehnder interferometer (F-MZI). Although the interferometric interrogation technique is historically known to offer the highest-sensitivity measurements, very little information exists regarding how it compares with current commercially available spectral-characteristics-based interrogation systems. It is experimentally established here that the performance of a modern-day CCD spectrometer interrogator is very close to that of an F-MZI interrogator, with the capability of measuring Bragg wavelength shifts with sub-picometer-level accuracy. The results presented in this research study can further be used as a guideline for choosing between the two FBG sensor interrogator types for small-amplitude dynamic perturbation measurements down to nanostrain-level strains.
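
    For readers weighing the two interrogator types, the arithmetic linking Bragg-wavelength resolution to strain resolution is simple. The sketch below is a minimal illustration, assuming the commonly quoted strain sensitivity of roughly 1.2 pm per microstrain for an FBG near 1550 nm; this value is an assumption for illustration, not a figure taken from this study.

        # Convert a measured Bragg wavelength shift into strain, assuming the
        # often-quoted sensitivity of ~1.2 pm per microstrain near 1550 nm.
        def wavelength_shift_to_strain(shift_pm, sensitivity_pm_per_ue=1.2):
            """Return strain in microstrain for a Bragg shift given in picometres."""
            return shift_pm / sensitivity_pm_per_ue

        # Example: a 0.1 pm (sub-picometre) resolvable shift corresponds to
        # roughly 0.083 microstrain, i.e. tens of nanostrain.
        print(wavelength_shift_to_strain(0.1))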

  9. Imaging using a supercontinuum laser to assess tumors in patients with breast carcinoma

    NASA Astrophysics Data System (ADS)

    Sordillo, Laura A.; Sordillo, Peter P.; Alfano, R. R.

    2016-03-01

    The supercontinuum laser light source has many advantages over other light sources, including broad spectral range. Transmission images of paired normal and malignant breast tissue samples from two patients were obtained using a Leukos supercontinuum (SC) laser light source with wavelengths in the second and third NIR optical windows and an IR CCD InGaAs camera detector (Goodrich Sensors Inc. high-response camera SU320KTSW-1.7RT with spectral response between 900 nm and 1,700 nm). Optical attenuation measurements at the four NIR optical windows were obtained from the samples.

  10. CMOS Image Sensors for High Speed Applications.

    PubMed

    El-Desouki, Munir; Deen, M Jamal; Fang, Qiyin; Liu, Louis; Tse, Frances; Armstrong, David

    2009-01-01

    Recent advances in deep submicron CMOS technologies and improved pixel designs have enabled CMOS-based imagers to surpass charge-coupled devices (CCD) imaging technology for mainstream applications. The parallel outputs that CMOS imagers can offer, in addition to complete camera-on-a-chip solutions due to being fabricated in standard CMOS technologies, result in compelling advantages in speed and system throughput. Since there is a practical limit on the minimum pixel size (4∼5 μm) due to limitations in the optics, CMOS technology scaling can allow for an increased number of transistors to be integrated into the pixel to improve both detection and signal processing. Such smart pixels truly show the potential of CMOS technology for imaging applications allowing CMOS imagers to achieve the image quality and global shuttering performance necessary to meet the demands of ultrahigh-speed applications. In this paper, a review of CMOS-based high-speed imager design is presented and the various implementations that target ultrahigh-speed imaging are described. This work also discusses the design, layout and simulation results of an ultrahigh acquisition rate CMOS active-pixel sensor imager that can take 8 frames at a rate of more than a billion frames per second (fps).

  11. The design and development of low- and high-voltage ASICs for space-borne CCD cameras

    NASA Astrophysics Data System (ADS)

    Waltham, N.; Morrissey, Q.; Clapp, M.; Bell, S.; Jones, L.; Torbet, M.

    2017-12-01

    The CCD remains the pre-eminent visible and UV wavelength image sensor in space science, Earth and planetary remote sensing. However, the design of space-qualified CCD readout electronics is a significant challenge with requirements for low-volume, low-mass, low-power, high-reliability and tolerance to space radiation. Space-qualified components are frequently unavailable and up-screened commercial components seldom meet project or international space agency requirements. In this paper, we describe an alternative approach of designing and space-qualifying a series of low- and high-voltage mixed-signal application-specific integrated circuits (ASICs), the ongoing development of two low-voltage ASICs with successful flight heritage, and two new high-voltage designs. A challenging sub-system of any CCD camera is the video processing and digitisation electronics. We describe recent developments to improve performance and tolerance to radiation-induced single event latchup of a CCD video processing ASIC originally developed for NASA's Solar Terrestrial Relations Observatory and Solar Dynamics Observatory. We also describe a programme to develop two high-voltage ASICs to address the challenges presented with generating a CCD's bias voltages and drive clocks. A 0.35 μm, 50 V tolerant, CMOS process has been used to combine standard low-voltage 3.3 V transistors with high-voltage 50 V diffused MOSFET transistors that enable output buffers to drive CCD bias drains, gates and clock electrodes directly. We describe a CCD bias voltage generator ASIC that provides 24 independent and programmable 0-32 V outputs. Each channel incorporates a 10-bit digital-to-analogue converter, provides current drive of up to 20 mA into loads of 10 μF, and includes current-limiting and short-circuit protection. An on-chip telemetry system with a 12-bit analogue-to-digital converter enables the outputs and multiple off-chip camera voltages to be monitored. The ASIC can drive one or more CCDs and replaces the many discrete components required in current cameras. We also describe a CCD clock driver ASIC that provides six independent and programmable drivers with high-current capacity. The device enables various CCD clock parameters to be programmed independently, for example the clock-low and clock-high voltage levels, and the clock-rise and clock-fall times, allowing configuration for serial clock frequencies in the range 0.1-2 MHz and image clock frequencies in the range 10-100 kHz. Finally, we demonstrate the impact and importance of this technology for the development of compact, high-performance and low-power integrated focal plane electronics.
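
    As a rough illustration of the bias-generator arithmetic described above, the hypothetical helper below maps a requested bias voltage onto a 10-bit DAC code for a 0-32 V programmable output range. The rounding and clamping behaviour are assumptions made purely for illustration and do not describe the actual ASIC's register interface.

        # Hypothetical sketch: choose the 10-bit DAC code for a requested CCD bias
        # voltage on a programmable 0-32 V channel (1023 full-scale codes assumed).
        def bias_dac_code(v_request, v_full_scale=32.0, bits=10):
            """Return (code, actual_voltage) for the nearest achievable bias setting."""
            max_code = (1 << bits) - 1                      # 1023 for a 10-bit DAC
            v_request = min(max(v_request, 0.0), v_full_scale)
            code = round(v_request / v_full_scale * max_code)
            return code, code * v_full_scale / max_code

        print(bias_dac_code(29.5))   # e.g. code 943, ~29.49 V with a ~31 mV LSB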

  12. Subframe Burst Gating for Raman Spectroscopy in Combustion

    NASA Technical Reports Server (NTRS)

    Kojima, Jun; Fischer, David; Nguyen, Quang-Viet

    2010-01-01

    We describe an architecture for spontaneous Raman scattering utilizing a frame-transfer CCD sensor operating in a subframe burst-gating mode to realize time-resolved combustion diagnostics. The technique permits all-electronic optical gating with microsecond shutter speeds (<5 μs) without compromising optical throughput or image fidelity. When used in conjunction with a pair of orthogonally polarized excitation lasers, the technique measures single-shot vibrational Raman scattering that is minimally contaminated by problematic optical background noise.

  13. CCD Detects Two Images In Quick Succession

    NASA Technical Reports Server (NTRS)

    Janesick, James R.; Collins, Andy

    1996-01-01

    Prototype special-purpose charge-coupled device (CCD) designed to detect two 1,024 x 1,024-pixel images in rapid succession. Readout performed slowly to minimize noise. CCD operated in synchronism with pulsed laser, stroboscope, or other pulsed source of light to form pairs of images of rapidly moving objects.

  14. Silicon sample holder for molecular beam epitaxy on pre-fabricated integrated circuits

    NASA Technical Reports Server (NTRS)

    Hoenk, Michael E. (Inventor); Grunthaner, Paula J. (Inventor); Grunthaner, Frank J. (Inventor)

    1994-01-01

    The sample holder of the invention is formed of the same semiconductor crystal as the integrated circuit on which the molecular beam epitaxial process is to be performed. In the preferred embodiment, the sample holder comprises three stacked micro-machined silicon wafers: a silicon base wafer having a square micro-machined center opening corresponding in size and shape to the active area of a CCD imager chip, a silicon center wafer micro-machined as an annulus having radially inwardly pointing fingers whose ends abut the edges of and center the CCD imager chip within the annulus, and a silicon top wafer micro-machined as an annulus having cantilevered membranes which extend over the top of the CCD imager chip. The micro-machined silicon wafers are stacked in the order given above with the CCD imager chip centered in the center wafer and sandwiched between the base and top wafers. The thickness of the center wafer is about 20% less than the thickness of the CCD imager chip. Preferably, four titanium wires, each grasping the edges of the top and base wafers, compress all three wafers together, flexing the cantilever fingers of the top wafer to accommodate the thickness of the CCD imager chip, acting as a spring holding the CCD imager chip in place.

  15. Test of CCD Precision Limits for Differential Photometry

    NASA Technical Reports Server (NTRS)

    Borucki, W. J.; Dunham, E. W.; Wei, M. Z.; Robinson, L. B.; Ford, C. H.; Granados, A. F.

    1995-01-01

    Results of tests to demonstrate the very high differential-photometric stability of CCD light sensors are presented. The measurements reported here demonstrate that in a controlled laboratory environment, a front-illuminated CCD can provide differential-photometric measurements with reproducible precision approaching one part in 10^5. Practical limitations to the precision of differential-photometric measurements with CCDs and implications for spaceborne applications are discussed.

  16. Study of optimal wavefront sensing with elongated laser guide stars

    NASA Astrophysics Data System (ADS)

    Thomas, S. J.; Adkins, S.; Gavel, D.; Fusco, T.; Michau, V.

    2008-06-01

    Over the past decade, adaptive optics (AO) has become an established method for overcoming the effects of atmospheric turbulence on both astronomical imaging and spectroscopic observations. These systems are now beginning to make extensive use of laser guide star (LGS) techniques to improve performance and provide increased sky coverage. Sodium LGS AO employs one or more lasers at 589-nm wavelength to produce an artificial guide star through excitation of sodium atoms in the mesosphere (90 km altitude). Because of its dependence on the abundance and distribution of sodium atoms in the mesosphere, this approach has its own unique set of difficulties not seen with natural stars. The sodium layer exhibits time-dependent variations in density and altitude, and since it is at a finite range, the LGS images become elongated due to the thickness of the layer and the offset between the laser projection point and the subapertures of a Shack-Hartmann wavefront sensor (SHWFS). Elongation causes the LGS image to be spread out resulting in a decrease in the signal-to-noise ratio which, in turn, leads to an increase in SHWFS measurement error and therefore an increased error in wavefront phase reconstruction. To address the problem of elongation, and also to provide a higher level of readout performance and reduced readout noise, a new type of charge-coupled device (CCD) is now under development for Shack-Hartmann wavefront sensing called the polar coordinate CCD. In this device, discrete imaging arrays are provided in each SHWFS subaperture and the size, shape and orientation of each discrete imaging array are adjusted to optimally sample the LGS image. The device is referred to as the polar coordinate CCD because the location of each imager is defined by a polar coordinate system centred on the laser guide star projection point. This concept is especially suited to Extremely Large Telescopes (ELTs) where the effect of perspective elongation is a significant factor. In this paper, we evaluate the performance of centroiders based on this CCD geometry by evaluating the centroid error variance and also the linearity issues associated with LGS image sampling and truncation. We also describe how we will extend this work to address the problems presented by the time variability of the sodium layer and how this will impact SHWFS performance in LGS AO systems.
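
    The geometry behind the perspective elongation discussed above can be illustrated with a small calculation. The sketch below uses the common small-angle approximation for the elongation of a sodium LGS spot seen by a subaperture at distance d from the laser launch point, given a sodium layer of thickness ΔH centred at altitude H (zenith pointing assumed). The numerical values are illustrative assumptions, not parameters taken from this paper.

        import math

        # Small-angle estimate of sodium-LGS spot elongation for a subaperture at
        # distance d from the laser launch point (zenith pointing assumed).
        def lgs_elongation_arcsec(d_m, layer_thickness_m=10e3, layer_altitude_m=90e3):
            theta_rad = d_m * layer_thickness_m / layer_altitude_m**2
            return math.degrees(theta_rad) * 3600.0

        # Illustrative values: a subaperture 20 m from the launch point on an
        # ELT-class aperture sees an elongation of several arcseconds.
        print(lgs_elongation_arcsec(20.0))   # ~5 arcsec for a 10 km layer at 90 km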

  17. Fundamental performance differences between CMOS and CCD imagers: Part II

    NASA Astrophysics Data System (ADS)

    Janesick, James; Andrews, James; Tower, John; Grygon, Mark; Elliott, Tom; Cheng, John; Lesser, Michael; Pinter, Jeff

    2007-09-01

    A new class of CMOS imagers that compete with scientific CCDs is presented. The sensors are based on deep-depletion backside-illuminated technology to achieve high near-infrared quantum efficiency and low pixel cross-talk. The imagers deliver very low read noise, suitable for single-photon-counting and Fano-noise-limited soft x-ray applications. The digital correlated double sampling signal processing necessary to achieve low read noise performance is analyzed and demonstrated for CMOS use. Detailed experimental data products generated by different pixel architectures (notably 3TPPD, 5TPPD and 6TPG designs) are presented, including read noise, charge capacity, dynamic range, quantum efficiency, charge collection and transfer efficiency, and dark current generation. Radiation damage data taken for the imagers are also reported.
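
    Digital correlated double sampling, mentioned above, in essence differences a reset-level sample and a signal-level sample for each pixel so that reset (kTC) noise and slow offset drifts common to both samples cancel. A minimal sketch of that operation on digitised frames is shown below; the array sizes and noise levels are assumptions for illustration only.

        import numpy as np

        def digital_cds(reset_frame, signal_frame):
            """Digital correlated double sampling: subtract the reset-level sample
            from the signal-level sample, pixel by pixel, cancelling reset noise
            and slow offsets common to both samples."""
            return signal_frame.astype(np.int32) - reset_frame.astype(np.int32)

        # Illustrative 16-bit frames from an assumed ADC
        reset = np.random.normal(1000, 5, (4, 4)).astype(np.uint16)
        signal = (reset + 200).astype(np.uint16)          # 200 DN of photo-signal
        print(digital_cds(reset, signal))                 # ~200 DN everywhere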

  18. Pixel pitch and particle energy influence on the dark current distribution of neutron irradiated CMOS image sensors.

    PubMed

    Belloir, Jean-Marc; Goiffon, Vincent; Virmontois, Cédric; Raine, Mélanie; Paillet, Philippe; Duhamel, Olivier; Gaillardin, Marc; Molina, Romain; Magnan, Pierre; Gilard, Olivier

    2016-02-22

    The dark current produced by neutron irradiation in CMOS Image Sensors (CIS) is investigated. Several CIS with different photodiode types and pixel pitches are irradiated with various neutron energies and fluences to study the influence of each of these optical detector and irradiation parameters on the dark current distribution. An empirical model is tested on the experimental data and validated on all the irradiated optical imagers. This model is able to describe all the presented dark current distributions with no parameter variation for neutron energies of 14 MeV or higher, regardless of the optical detector and irradiation characteristics. For energies below 1 MeV, it is shown that a single parameter has to be adjusted because of the lower mean damage energy per nuclear interaction. This model and these conclusions can be transposed to any silicon-based solid-state optical imager such as CIS or Charge-Coupled Devices (CCD). This work can also be used when designing an optical imager instrument, to anticipate the dark current increase or to choose a mitigation technique.

  19. Advanced microlens and color filter process technology for the high-efficiency CMOS and CCD image sensors

    NASA Astrophysics Data System (ADS)

    Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu

    2000-12-01

    New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems and computer peripherals for document capture. One-chip imaging systems, in which the image sensor has a full digital interface, bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of pixel mosaics or wide stripes, makes images more realistic and colorful; we can say that the color filter makes life more colorful. A color filter transmits only light whose wavelength matches the filter's own transmittance band and blocks the rest. The color filter process consists of coating and patterning green, red and blue (or cyan, magenta and yellow) mosaic resists onto the matched pixels of the image sensing array. From the signal captured at each pixel, the scene image can then be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes color filter technology increasingly important; although challenging, developing the color filter process is well worthwhile. We provide short cycle time, excellent color quality, and high and stable yield. The key issues of the advanced color process that have to be solved and implemented are planarization and micro-lens technology. Other key points of color filter process technology that must be considered are also described in this paper.
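
    To make the mosaic idea concrete, the sketch below shows how pixel signals behind a conventional Bayer-type RGGB mosaic are separated into colour planes before interpolation. The RGGB layout is assumed here purely for illustration; the paper itself discusses both RGB and CMY stripe or mosaic layouts.

        import numpy as np

        def split_bayer_rggb(raw):
            """Separate a raw mosaic image (RGGB pattern assumed) into colour planes.
            Each plane is quarter-resolution; demosaicing would interpolate the gaps."""
            r  = raw[0::2, 0::2]            # red sites
            g1 = raw[0::2, 1::2]            # green sites on red rows
            g2 = raw[1::2, 0::2]            # green sites on blue rows
            b  = raw[1::2, 1::2]            # blue sites
            return r, (g1 + g2) / 2.0, b

        raw = np.arange(16, dtype=float).reshape(4, 4)    # toy 4x4 mosaic frame
        r, g, b = split_bayer_rggb(raw)
        print(r.shape, g.shape, b.shape)                  # (2, 2) each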

  20. Swap intensified WDR CMOS module for I2/LWIR fusion

    NASA Astrophysics Data System (ADS)

    Ni, Yang; Noguier, Vincent

    2015-05-01

    The combination of a high-resolution visible/near-infrared low-light sensor and a moderate-resolution uncooled thermal sensor provides an efficient way to perform multi-task night vision. Tremendous progress has been made on uncooled thermal sensors (a-Si, VOx, etc.), and it is now possible to build a miniature uncooled thermal camera module in a tiny 1 cm³ cube with <1 W power consumption. Silicon-based solid-state low-light CCD/CMOS sensors have also shown constant progress in readout noise, dark current, resolution and frame rate. In contrast to thermal sensing, which is intrinsically day-and-night operational, silicon-based solid-state sensors are not yet capable of the night-vision performance required by defense and critical surveillance applications. Readout noise and dark current are two major obstacles. The low dynamic range of silicon sensors in high-sensitivity mode is also an important limiting factor, leading to recognition failures due to local or global saturation and blooming. In this context, the image-intensifier-based solution remains attractive for the following reasons: 1) high gain and ultra-low dark current; 2) wide dynamic range; and 3) ultra-low power consumption. With the high electron gain and ultra-low dark current of the image intensifier, the only requirements on the silicon image pickup device are resolution, dynamic range and power consumption. In this paper, we present a SWAP intensified wide-dynamic-range CMOS module for night vision applications, especially for I2/LWIR fusion. This module is based on a dedicated CMOS image sensor using a solar-cell-mode photodiode logarithmic pixel design that covers a huge dynamic range (>140 dB) without saturation or blooming. The ultra-wide dynamic range image from this new-generation logarithmic sensor can be used directly, without any image processing, and provides instant light accommodation. The complete module is slightly bigger than a simple ANVIS-format I2 tube, with <500 mW power consumption.
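
    The wide dynamic range quoted above follows from the logarithmic photodiode response: in solar-cell (open-circuit) mode the pixel voltage grows only logarithmically with photocurrent, so seven or more decades of illumination compress into a few hundred millivolts of signal. The sketch below illustrates this with the textbook open-circuit diode relation; the saturation current, ideality factor and photocurrent range are illustrative assumptions, not parameters of the sensor described here.

        import numpy as np

        KT_OVER_Q = 0.0259        # thermal voltage at ~300 K, in volts

        def solar_cell_pixel_voltage(i_photo, i_sat=1e-15, n=1.0):
            """Open-circuit photodiode voltage: V = n*(kT/q)*ln(1 + Iph/Isat)."""
            return n * KT_OVER_Q * np.log1p(i_photo / i_sat)

        # Seven decades of photocurrent (a 140 dB intensity range)...
        i_photo = np.logspace(-14, -7, 8)
        v = solar_cell_pixel_voltage(i_photo)
        print(v.max() - v.min())                         # ...span only ~0.4 V of output
        print(20 * np.log10(i_photo[-1] / i_photo[0]))   # 140 dB input range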

  1. Environmental performance evaluation of an advanced-design solid-state television camera

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The development of an advanced-design black-and-white solid-state television camera which can survive exposure to space environmental conditions was undertaken. A 380 x 488 element buried-channel CCD is utilized as the image sensor to ensure compatibility with 525-line transmission and display equipment. Specific camera design approaches selected for study and analysis included: (1) component and circuit sensitivity to temperature; (2) circuit board thermal and mechanical design; and (3) CCD temperature control. Preferred approaches were determined and integrated into the final design for two deliverable solid-state TV cameras. One of these cameras was subjected to environmental tests to determine stress limits for exposure to vibration, shock, acceleration, and temperature-vacuum conditions. These tests indicate performance at the design goal limits can be achieved for most of the specified conditions.

  2. CCD developments for particle colliders

    NASA Astrophysics Data System (ADS)

    Stefanov, Konstantin D.

    2006-09-01

    Charge Coupled Devices (CCDs) have been successfully used in several high-energy physics experiments over the last 20 years. Their small pixel size and excellent precision provide a superb tool for studying short-lived particles and understanding nature at a fundamental level. Over the last years the Linear Collider Flavour Identification (LCFI) collaboration has developed Column-Parallel CCDs (CPCCD) and CMOS readout chips to be used for the vertex detector at the International Linear Collider (ILC). The CPCCDs are very fast devices capable of satisfying the challenging requirements imposed by the beam structure of the superconducting accelerator. The first set of prototype devices has been designed, manufactured and successfully tested, with second-generation chips on the way. Another CCD-based device, the In-situ Storage Image Sensor (ISIS), is also under development, and the first prototype is in production.

  3. CCD-based vertex detector for ILC

    NASA Astrophysics Data System (ADS)

    Stefanov, Konstantin D.

    2006-12-01

    Charge Coupled Devices (CCDs) have been successfully used in several high-energy physics experiments over the last 20 years. Their small pixel size and excellent precision provide a superb tool for studying short-lived particles and understanding nature at a fundamental level. Over the last few years the Linear Collider Flavour Identification (LCFI) collaboration has developed Column-Parallel CCDs (CPCCD) and CMOS readout chips, to be used for the vertex detector at the International Linear Collider (ILC). The CPCCDs are very fast devices capable of satisfying the challenging requirements imposed by the beam structure of the superconducting accelerator. The first set of prototype devices has been successfully designed, manufactured and tested, with second-generation chips on the way. Another CCD-based device, the In-situ Storage Image Sensor (ISIS), is also under development, and the first prototype has been manufactured.

  4. Design and laboratory calibration of the compact pushbroom hyperspectral imaging system

    NASA Astrophysics Data System (ADS)

    Zhou, Jiankang; Ji, Yiqun; Chen, Yuheng; Chen, Xinhua; Shen, Weimin

    2009-11-01

    The designed hyperspectral imaging system is composed of three main parts: an optical subsystem, an electronic subsystem and a capturing subsystem. A three-dimensional "image cube" is obtained through push-broom scanning. The fore-optics is commercial-off-the-shelf, with high speed and three continuous zoom ratios. Since the dispersive imaging part is based on an Offner relay configuration with an aberration-corrected convex grating, high light-collection power and a variable field of view are obtained. The holographic recording parameters of the convex grating are optimized, and the aberrations of the Offner dispersive system are balanced. The electronic subsystem adopts a modular design, which minimizes size, mass and power consumption. A frame-transfer area-array CCD is chosen as the image sensor, and the spectral lines can be binned to achieve better SNR and sensitivity without any deterioration in spatial resolution. The capturing subsystem, based on a computer, sets the capturing parameters, calibrates the spectrometer, and processes and displays the spectral imaging data. Laboratory calibration is a prerequisite for using precise spectral data. The spatial and spectral calibrations minimize the smile and keystone distortions caused by the optical system and assembly, and fix the positions of the spatial and spectral lines on the frame area-array CCD. A gas excitation lamp is used for smile calibration, and the keystone calibration is carried out with point sources at different field positions created by a series of narrow slits. The laboratory and field imaging results show that this pushbroom hyperspectral imaging system can acquire high-quality spectral images.
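
    The SNR benefit of binning spectral lines on the frame-transfer CCD can be illustrated with a simple noise budget: summing N rows on-chip before readout adds the signal and the shot-noise variances, but the read noise is paid only once per binned sample. The sketch below is a generic estimate under assumed signal and noise levels, not a model of this particular spectrometer.

        import math

        def binned_snr(signal_per_row_e, n_rows, read_noise_e):
            """SNR of an on-chip binned spectral sample: shot noise scales with the
            summed signal, while read noise is incurred once per binned readout."""
            total_signal = signal_per_row_e * n_rows
            noise = math.sqrt(total_signal + read_noise_e**2)
            return total_signal / noise

        # Illustrative numbers: 200 e-/row signal, 10 e- read noise
        print(binned_snr(200, 1, 10))    # ~11.5 unbinned
        print(binned_snr(200, 4, 10))    # ~26.7 with 4-row on-chip binning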

  5. The iQID Camera: An Ionizing-Radiation Quantum Imaging Detector

    DOE PAGES

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; ...

    2014-06-11

    We have developed and tested a novel ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The detector's response to a broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. Individual particles are identified, and their spatial position (to sub-pixel accuracy) and energy are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, high sensitivity, and high spatial resolution (tens of microns). Although modest, the iQID's energy resolution is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is single-particle, real-time digital autoradiography. In conclusion, we present the latest results and discuss potential applications.
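
    The event-by-event processing described above can be sketched generically: threshold the frame, isolate each particle flash, and estimate its position from an intensity-weighted centroid, which is what yields sub-pixel accuracy. The minimal example below uses scipy's connected-component labelling and is an illustration of the general approach rather than the iQID's actual pipeline.

        import numpy as np
        from scipy import ndimage

        def detect_events(frame, threshold):
            """Locate individual scintillation events and return their
            intensity-weighted (sub-pixel) centroids and summed signals."""
            mask = frame > threshold
            labels, n = ndimage.label(mask)                      # connected components
            centroids = ndimage.center_of_mass(frame, labels, range(1, n + 1))
            energies = ndimage.sum(frame, labels, range(1, n + 1))
            return list(zip(centroids, energies))

        frame = np.zeros((64, 64))
        frame[10:13, 20:23] = [[1, 3, 1], [3, 9, 3], [1, 3, 1]]  # one synthetic event
        print(detect_events(frame, 0.5))   # centroid near (11.0, 21.0), energy 25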

  6. The AOLI Non-Linear Curvature Wavefront Sensor: High sensitivity reconstruction for low-order AO

    NASA Astrophysics Data System (ADS)

    Crass, Jonathan; King, David; Mackay, Craig

    2013-12-01

    Many adaptive optics (AO) systems in use today require bright reference objects to determine the effects of atmospheric distortions on incoming wavefronts. This requirement is because Shack Hartmann wavefront sensors (SHWFS) distribute incoming light from reference objects into a large number of sub-apertures. Bright natural reference objects occur infrequently across the sky leading to the use of laser guide stars which add complexity to wavefront measurement systems. The non-linear curvature wavefront sensor as described by Guyon et al. has been shown to offer a significant increase in sensitivity when compared to a SHWFS. This facilitates much greater sky coverage using natural guide stars alone. This paper describes the current status of the non-linear curvature wavefront sensor being developed as part of an adaptive optics system for the Adaptive Optics Lucky Imager (AOLI) project. The sensor comprises two photon-counting EMCCD detectors from E2V Technologies, recording intensity at four near-pupil planes. These images are used with a reconstruction algorithm to determine the phase correction to be applied by an ALPAO 241-element deformable mirror. The overall system is intended to provide low-order correction for a Lucky Imaging based multi CCD imaging camera. We present the current optical design of the instrument including methods to minimise inherent optical effects, principally chromaticity. Wavefront reconstruction methods are discussed and strategies for their optimisation to run at the required real-time speeds are introduced. Finally, we discuss laboratory work with a demonstrator setup of the system.

  7. Linear dependence between the wavefront gradient and the masked intensity for the point source with a CCD sensor

    NASA Astrophysics Data System (ADS)

    Yang, Huizhen; Ma, Liang; Wang, Bin

    2018-01-01

    In contrast to the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need a WFS to measure the wavefront aberrations. It is simpler than conventional AO in system architecture and can be applied to complex conditions. The model-based WFSless system has great potential for real-time correction applications because of its fast convergence. The control algorithm of the model-based WFSless system is based on an important theoretical result: the linear relation between the Mean-Square Gradient (MSG) magnitude of the wavefront aberration and the second moment of the masked intensity distribution in the focal plane (also called the Masked Detector Signal, MDS). The linear dependence between MSG and MDS for point-source imaging with a CCD sensor is discussed from theory and simulation in this paper. The theoretical relationship between MSG and MDS is given based on our previous work. To verify the linear relation for a point source, we set up an imaging model under atmospheric turbulence. Additionally, the value of MDS will deviate from the theoretical value because of detector noise, and this deviation will affect the correction performance. The theoretical results under noise are obtained through derivation, and the linear relation between MSG and MDS under noise is then examined with the imaging model. Results show that the linear relation between MSG and MDS under noise is also maintained well, which provides theoretical support for applications of the model-based WFSless system.
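
    To make the two quantities concrete, the sketch below computes the statistics whose linear relation underpins the model-based WFSless control law: the mean-square gradient (MSG) of a phase map, and a second moment of the focal-plane intensity restricted to a mask (MDS). The exact mask and normalisation used by the authors are not given in this record, so the definitions below are illustrative assumptions only.

        import numpy as np

        def mean_square_gradient(phase):
            """MSG (sketch): spatial average of |grad(phase)|^2 over the aperture."""
            gy, gx = np.gradient(phase)
            return np.mean(gx**2 + gy**2)

        def masked_detector_signal(focal_intensity, mask_radius_px):
            """MDS (assumed form): radially weighted second moment of the focal-plane
            intensity inside a circular mask of the given radius."""
            ny, nx = focal_intensity.shape
            y, x = np.indices((ny, nx))
            r2 = (x - nx / 2)**2 + (y - ny / 2)**2
            mask = r2 <= mask_radius_px**2
            return np.sum(focal_intensity[mask] * r2[mask]) / np.sum(focal_intensity[mask])

        # Toy usage: random small-aberration phase screen and its far-field PSF
        phase = 0.2 * np.random.randn(64, 64)
        pupil = np.ones((64, 64))
        field = pupil * np.exp(1j * phase)
        psf = np.abs(np.fft.fftshift(np.fft.fft2(field, s=(256, 256))))**2
        print(mean_square_gradient(phase), masked_detector_signal(psf, 40))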

  8. Evaluation on Radiometric Capability of Chinese Optical Satellite Sensors.

    PubMed

    Yang, Aixia; Zhong, Bo; Wu, Shanlong; Liu, Qinhuo

    2017-01-22

    The radiometric capability of on-orbit sensors should be updated regularly because of changes induced by space environmental factors and instrument aging. Some sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), have onboard calibrators, which enable real-time calibration. However, most Chinese remote sensing satellite sensors lack onboard calibrators. Their radiometric calibrations have been updated only once a year based on a vicarious calibration procedure, which has affected the applications of the data. Therefore, a full evaluation of the sensors' radiometric capabilities is essential before quantitative applications can be made. In this study, a comprehensive procedure for evaluating the radiometric capability of several Chinese optical satellite sensors is proposed. In this procedure, long-term radiometric stability and radiometric accuracy are the two major indicators for radiometric evaluation. The radiometric temporal stability is analyzed through the trend of long-term top-of-atmosphere (TOA) reflectance variation; the radiometric accuracy is determined by comparison with the TOA reflectance from MODIS after spectral matching. Three Chinese sensors, including the Charge-Coupled Device (CCD) camera onboard the Huan Jing 1 satellite (HJ-1), as well as the Visible and Infrared Radiometer (VIRR) and Medium-Resolution Spectral Imager (MERSI) onboard the Feng Yun 3 satellite (FY-3), are evaluated in the reflective bands using this procedure. The results are reasonable and thus provide a reliable reference for the sensors' application, which will promote the application of Chinese satellite data.
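
    A common step in this kind of cross-comparison is converting a sensor's calibrated radiance to top-of-atmosphere (TOA) reflectance before matching bands. The sketch below uses the standard conversion; the numerical values in the example are placeholders, not HJ-1, FY-3 or MODIS calibration constants.

        import math

        def toa_reflectance(radiance, esun, solar_zenith_deg, earth_sun_dist_au=1.0):
            """Standard TOA reflectance: rho = pi * L * d^2 / (ESUN * cos(theta_s))."""
            return (math.pi * radiance * earth_sun_dist_au**2 /
                    (esun * math.cos(math.radians(solar_zenith_deg))))

        # Placeholder numbers: 80 W m-2 sr-1 um-1 radiance, ESUN 1550 W m-2 um-1,
        # 30 degree solar zenith angle
        print(toa_reflectance(80.0, 1550.0, 30.0))   # ~0.19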

  9. Test of CCD Precision Limits for Differential Photometry

    NASA Technical Reports Server (NTRS)

    Robinson, L. B.; Wei, M. Z.; Borucki, W. J.; Dunham, E. W.; Ford, C. H.; Granados, A. F.

    1995-01-01

    Results of tests to demonstrate the very high differential-photometric stability of CCD light sensors are presented. The measurements reported here demonstrate that in a controlled laboratory environment, a front-illuminated CCD can provide differential-photometric measurements with reproducible precision approaching one part in 10^5. Practical limitations to the precision of differential-photometric measurements with CCDs and implications for spaceborne applications are discussed.

  10. Charge shielding in the In-situ Storage Image Sensor for a vertex detector at the ILC

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Stefanov, K. D.; Bailey, D.; Banda, Y.; Buttar, C.; Cheplakov, A.; Cussans, D.; Damerell, C.; Devetak, E.; Fopma, J.; Foster, B.; Gao, R.; Gillman, A.; Goldstein, J.; Greenshaw, T.; Grimes, M.; Halsall, R.; Harder, K.; Hawes, B.; Hayrapetyan, K.; Heath, H.; Hillert, S.; Jackson, D.; Pinto Jayawardena, T.; Jeffery, B.; John, J.; Johnson, E.; Kundu, N.; Laing, A.; Lastovicka, T.; Lau, W.; Li, Y.; Lintern, A.; Lynch, C.; Mandry, S.; Martin, V.; Murray, P.; Nichols, A.; Nomerotski, A.; Page, R.; Parkes, C.; Perry, C.; O'Shea, V.; Sopczak, A.; Tabassam, H.; Thomas, S.; Tikkanen, T.; Velthuis, J.; Walsh, R.; Woolliscroft, T.; Worm, S.

    2009-08-01

    The Linear Collider Flavour Identification (LCFI) collaboration has successfully developed the first prototype of a novel particle detector, the In-situ Storage Image Sensor (ISIS). This device ideally suits the challenging requirements for the vertex detector at the future International Linear Collider (ILC), combining the charge-storing capabilities of Charge-Coupled Devices (CCDs) with the readout commonly used in CMOS imagers. The ISIS avoids the need for high-speed readout and offers low-power operation combined with low noise, high immunity to electromagnetic interference and increased radiation hardness compared to typical CCDs. The ISIS is one of the most promising detector technologies for vertexing at the ILC. In this paper we describe the measurements of the charge-shielding properties of the p-well, which is used to protect the storage register from parasitic charge collection and is at the core of the device's operation. We show that the p-well can suppress the parasitic charge collection by almost two orders of magnitude, satisfying the requirements for the application.

  11. Novel instrumentation of multispectral imaging technology for detecting tissue abnormity

    NASA Astrophysics Data System (ADS)

    Yi, Dingrong; Kong, Linghua

    2012-10-01

    Multispectral imaging is becoming a powerful tool in a wide range of biological and clinical studies by adding spectral, spatial and temporal dimensions to visualize tissue abnormalities and the underlying biological processes. A conventional spectral imaging system includes two physically separated major components: a band-pass selection device (such as a liquid crystal tunable filter or a diffraction grating) and a scientific-grade monochromatic camera, and is expensive and bulky. Recently, a micro-arrayed narrow-band optical mosaic filter was invented and successfully fabricated to reduce the size and cost of multispectral imaging devices in order to meet the clinical requirements for medical diagnostic imaging applications. However, the challenging issue of how to integrate and place the micro filter mosaic chip on the target focal plane, i.e., the imaging sensor, of an off-the-shelf CMOS/CCD camera has not been reported anywhere. This paper presents the methods and results of integrating such a miniaturized filter with off-the-shelf CMOS imaging sensors to produce handheld real-time multispectral imaging devices for the application of early stage pressure ulcer (ESPU) detection. Unlike conventional multispectral imaging devices, which are bulky and expensive, the resulting handheld real-time multispectral ESPU detector can produce multiple images at different center wavelengths with a single shot, thereby eliminating the image registration procedure required by traditional multispectral imaging technologies.

  12. Interference-free optical detection for Raman spectroscopy

    NASA Technical Reports Server (NTRS)

    Fischer, David G (Inventor); Kojima, Jun (Inventor); Nguyen, Quang-Viet (Inventor)

    2012-01-01

    An architecture for spontaneous Raman scattering (SRS) that utilizes a frame-transfer charge-coupled device (CCD) sensor operating in a subframe burst gating mode to realize time-resolved combustion diagnostics is disclosed. The technique permits all-electronic optical gating with microsecond shutter speeds (<5 μs), without compromising optical throughput or image fidelity. When used in conjunction with a pair of orthogonally-polarized excitation lasers, the technique measures time-resolved vibrational Raman scattering that is minimally contaminated by problematic optical background noise.

  13. Accurate measurement of chest compression depth using impulse-radio ultra-wideband sensor on a mattress

    PubMed Central

    Kim, Yeomyung

    2017-01-01

    Objective We developed a new chest compression depth (CCD) measuring technology using radar and impulse-radio ultra-wideband (IR-UWB) sensor. This study was performed to determine its accuracy on a soft surface. Methods Four trials, trial 1: chest compressions on the floor using an accelerometer device; trial 2: chest compressions on the floor using an IR-UWB sensor; trial 3: chest compressions on a foam mattress using an accelerometer device; trial 4: chest compressions on a foam mattress using an IR-UWB sensor, were performed in a random order. In all the trials, a cardiopulmonary resuscitation provider delivered 50 uninterrupted chest compressions to a manikin. Results The CCD measured by the manikin and the device were as follows: 57.42 ± 2.23 and 53.92 ± 2.92 mm, respectively in trial 1 (p < 0.001); 56.29 ± 1.96 and 54.16 ± 3.90 mm, respectively in trial 2 (p < 0.001); 55.61 ± 1.57 and 103.48 ± 10.48 mm, respectively in trial 3 (p < 0.001); 57.14 ± 3.99 and 55.51 ± 3.39 mm, respectively in trial 4 (p = 0.012). The gaps between the CCD measured by the manikin and the devices (accelerometer device vs. IR-UWB sensor) on the floor were not different (3.50 ± 2.08 mm vs. 3.15 ± 2.27 mm, respectively, p = 0.136). However, the gaps were significantly different on the foam mattress (48.53 ± 5.65 mm vs. 4.10 ± 2.47 mm, p < 0.001). Conclusion The IR-UWB sensor could measure the CCD accurately both on the floor and on the foam mattress. PMID:28854262

  14. Accurate measurement of chest compression depth using impulse-radio ultra-wideband sensor on a mattress.

    PubMed

    Yu, Byung Gyu; Oh, Je Hyeok; Kim, Yeomyung; Kim, Tae Wook

    2017-01-01

    We developed a new chest compression depth (CCD) measuring technology using radar and impulse-radio ultra-wideband (IR-UWB) sensor. This study was performed to determine its accuracy on a soft surface. Four trials, trial 1: chest compressions on the floor using an accelerometer device; trial 2: chest compressions on the floor using an IR-UWB sensor; trial 3: chest compressions on a foam mattress using an accelerometer device; trial 4: chest compressions on a foam mattress using an IR-UWB sensor, were performed in a random order. In all the trials, a cardiopulmonary resuscitation provider delivered 50 uninterrupted chest compressions to a manikin. The CCD measured by the manikin and the device were as follows: 57.42 ± 2.23 and 53.92 ± 2.92 mm, respectively in trial 1 (p < 0.001); 56.29 ± 1.96 and 54.16 ± 3.90 mm, respectively in trial 2 (p < 0.001); 55.61 ± 1.57 and 103.48 ± 10.48 mm, respectively in trial 3 (p < 0.001); 57.14 ± 3.99 and 55.51 ± 3.39 mm, respectively in trial 4 (p = 0.012). The gaps between the CCD measured by the manikin and the devices (accelerometer device vs. IR-UWB sensor) on the floor were not different (3.50 ± 2.08 mm vs. 3.15 ± 2.27 mm, respectively, p = 0.136). However, the gaps were significantly different on the foam mattress (48.53 ± 5.65 mm vs. 4.10 ± 2.47 mm, p < 0.001). The IR-UWB sensor could measure the CCD accurately both on the floor and on the foam mattress.

  15. Environmental Recognition and Guidance Control for Autonomous Vehicles using Dual Vision Sensor and Applications

    NASA Astrophysics Data System (ADS)

    Moriwaki, Katsumi; Koike, Issei; Sano, Tsuyoshi; Fukunaga, Tetsuya; Tanaka, Katsuyuki

    We propose a new method of environmental recognition around an autonomous vehicle using a dual vision sensor and navigation control based on binocular images. As an application of these techniques, we consider the development of a guide robot that can play the role of a guide dog as an aid to people such as the visually impaired or the aged. This paper presents a recognition algorithm which detects the line of a series of Braille blocks and the boundary line between a sidewalk and a roadway where a difference in level exists, using binocular images obtained from a pair of parallel-arrayed CCD cameras. This paper also presents a tracking algorithm with which the guide robot traces along a series of Braille blocks and avoids obstacles and unsafe areas in the path of a person accompanied by the guide robot.

  16. A CMOS-based large-area high-resolution imaging system for high-energy x-ray applications

    NASA Astrophysics Data System (ADS)

    Rodricks, Brian; Fowler, Boyd; Liu, Chiao; Lowes, John; Haeffner, Dean; Lienert, Ulrich; Almer, John

    2008-08-01

    CCDs have been the primary sensor in imaging systems for x-ray diffraction and imaging applications in recent years. CCDs have met the fundamental requirements of low noise, high sensitivity, high dynamic range and spatial resolution necessary for these scientific applications. State-of-the-art CMOS image sensor (CIS) technology has experienced dramatic improvements recently, and its performance is rivaling or surpassing that of most CCDs. The advancement of CIS technology is at an ever-accelerating pace and is driven by the multi-billion dollar consumer market. There are several advantages of CIS over traditional CCDs and other solid-state imaging devices; they include low power, high-speed operation, system-on-chip integration and lower manufacturing costs. The combination of superior imaging performance and system advantages makes CIS a good candidate for high-sensitivity imaging system development. This paper describes a 1344 x 1212 CIS imaging system with a 19.5 μm pitch optimized for x-ray scattering studies at high energies. Fundamental metrics of linearity, dynamic range, spatial resolution, conversion gain and sensitivity are estimated. The Detective Quantum Efficiency (DQE) is also estimated. Representative x-ray diffraction images are presented, and diffraction images are compared against a CCD-based imaging system.
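
    One standard way to estimate the conversion gain listed among the metrics above is the mean-variance (photon-transfer) method: under shot-noise-limited illumination the variance of the signal grows linearly with its mean, and the slope gives the gain in DN per electron. The sketch below illustrates the fit; it is a generic procedure with synthetic numbers, not the calibration actually performed for this system.

        import numpy as np

        def conversion_gain_dn_per_e(mean_levels_dn, variances_dn2):
            """Photon-transfer estimate: the slope of variance vs. mean under shot-
            noise-limited illumination equals the conversion gain in DN per electron."""
            slope, _offset = np.polyfit(mean_levels_dn, variances_dn2, 1)
            return slope

        # Synthetic flat-field series with an assumed true gain of 0.5 DN/e-
        true_gain = 0.5
        means_e = np.array([1e3, 5e3, 1e4, 2e4, 4e4])          # electrons
        means_dn = means_e * true_gain
        vars_dn2 = means_e * true_gain**2                       # shot noise only
        print(conversion_gain_dn_per_e(means_dn, vars_dn2))     # ~0.5 DN/e-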

  17. CNES developments of key detection technologies to prepare next generation focal planes for high resolution Earth observation

    NASA Astrophysics Data System (ADS)

    Materne, A.; Virmontois, C.; Bardoux, A.; Gimenez, T.; Biffi, J. M.; Laubier, D.; Delvit, J. M.

    2014-10-01

    This paper describes the activities managed by CNES (the French National Space Agency) for the development of focal planes for the next generation of optical high-resolution Earth observation satellites in low sun-synchronous orbit. CNES has launched a new programme named OTOS to increase the technology readiness level (TRL) of several key technologies for high-resolution Earth observation satellites. The OTOS programme includes several actions in the field of detection and focal planes: a new generation of CCD and CMOS image sensors, updated analog front-end electronics, and analog-to-digital converters. The main features that must be achieved on focal planes for high-resolution Earth observation are readout speed, signal-to-noise ratio at low light level, anti-blooming efficiency, geometric stability, MTF and line-of-sight stability. The next steps targeted are presented in comparison to the in-flight measured performance of the PLEIADES satellites launched in 2011 and 2012. The high-resolution panchromatic channel is still based upon backside-illuminated (BSI) CCDs operated in Time Delay Integration (TDI). For the multispectral channel, the main evolution consists in moving to TDI mode, and the competition is open with the concurrent development of a CCD solution versus a CMOS solution. New CCDs will be based upon several process blocks under evaluation on the e2v 6-inch BSI wafer manufacturing line. The OTOS strategy for CMOS image sensors investigates, on one hand, custom TDI solutions within an approach similar to CCDs and, on the other hand, ways to take advantage of the existing performance of off-the-shelf 2D-array CMOS image sensors. We present the characterization results obtained from test vehicles designed for custom TDI operation on several CIS technologies, and results obtained before and after irradiation on snapshot 2D arrays from the CMOSIS CMV family.

  18. Improved Space Object Orbit Determination Using CMOS Detectors

    NASA Astrophysics Data System (ADS)

    Schildknecht, T.; Peltonen, J.; Sännti, T.; Silha, J.; Flohrer, T.

    2014-09-01

    CMOS sensors, or in general Active Pixel Sensors (APS), are rapidly replacing CCDs in the consumer camera market. Due to significant technological advances during the past years these devices are starting to compete with CCDs also for demanding scientific imaging applications, in particular in the astronomy community. CMOS detectors offer a series of inherent advantages compared to CCDs, due to the structure of their basic pixel cells, each of which contains its own amplifier and readout electronics. The most prominent advantages for space object observations are the extremely fast and flexible readout capabilities, the feasibility of electronic shuttering and precise epoch registration, and the potential to perform image processing operations on-chip and in real time. The major challenges and design drivers for ground-based and space-based optical observation strategies have been analyzed. CMOS detector characteristics were critically evaluated and compared with the established CCD technology, especially with respect to the above-mentioned observations. Similarly, the desirable on-chip processing functionalities which would further enhance object detection and image segmentation were identified. Finally, we simulated several observation scenarios for ground- and space-based sensors by assuming different observation and sensor properties. We introduce the analyzed end-to-end simulations of the ground- and space-based strategies in order to investigate the orbit determination accuracy and its sensitivity to different values of the frame rate, pixel scale, and astrometric and epoch registration accuracies. Two cases were simulated: a survey using a ground-based sensor to observe objects in LEO for surveillance applications, and a statistical survey with a space-based sensor orbiting in LEO observing small-size debris in LEO. The ground-based LEO survey uses a dynamical fence close to the Earth shadow a few hours after sunset. For the space-based scenario, a sensor in a sun-synchronous LEO orbit, always pointing in the anti-sun direction to achieve optimum illumination conditions for small LEO debris, was simulated. For the space-based scenario the simulations showed a 20-130% improvement in the accuracy of all orbital parameters when varying the frame rate from 1/3 fps, which is the fastest rate for a typical CCD detector, to 50 fps, which represents the highest rate of scientific CMOS cameras. Changing the epoch registration accuracy from a typical 20.0 ms for a mechanical shutter to 0.025 ms, the theoretical value for the electronic shutter of a CMOS camera, improved the orbit accuracy by 4 to 190%. The ground-based scenario also benefits from the specific CMOS characteristics, but to a lesser extent.

  19. [Techniques for pixel response nonuniformity correction of CCD in interferential imaging spectrometer].

    PubMed

    Yao, Tao; Yin, Shi-Min; Xiangli, Bin; Lü, Qun-Bo

    2010-06-01

    Based on an in-depth analysis of relative radiometric calibration theory and the acquired calibration data for pixel response nonuniformity correction of the CCD (charge-coupled device) in a spaceborne visible interferential imaging spectrometer, a CCD pixel response nonuniformity correction method adapted to visible and infrared interferential imaging spectrometer systems was developed, which effectively resolves the engineering problem of nonuniformity correction in detector arrays for interferential imaging spectrometer systems. The quantitative impact of CCD nonuniformity on interferogram correction and recovered-spectrum accuracy is also given. Furthermore, an improved method is proposed in which calibration and nonuniformity correction are performed after the instrument has been assembled. The method saves time and manpower. It can correct nonuniformity caused by sources in the spectrometer system other than the CCD itself, can acquire recalibration data when the working environment changes, and can also more effectively improve the nonuniformity calibration accuracy of the interferential imaging spectrometer.
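
    At its core, pixel-response nonuniformity correction is a per-pixel two-point (dark and flat-field) normalisation applied to every raw frame. The sketch below shows the generic form of that correction; it is a simplified illustration and not the specific scheme developed in the paper for interferential imaging spectrometers.

        import numpy as np

        def nonuniformity_correct(raw, dark, flat):
            """Two-point correction: subtract the per-pixel dark level, then divide by
            the normalised flat-field response so every pixel shares the same gain."""
            response = flat - dark
            response /= np.mean(response)              # unit-mean relative response
            return (raw - dark) / response

        # Toy example with a 5% pixel-to-pixel gain spread
        rng = np.random.default_rng(0)
        gain = 1.0 + 0.05 * rng.standard_normal((8, 8))
        dark = np.full((8, 8), 100.0)
        flat = dark + 1000.0 * gain
        raw = dark + 400.0 * gain
        print(np.std(nonuniformity_correct(raw, dark, flat)))   # ~0 after correction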

  20. High resolution biomedical imaging system with direct detection of x-rays via a charge coupled device

    DOEpatents

    Atac, M.; McKay, T.A.

    1998-04-21

    An imaging system is provided for direct detection of x-rays from an irradiated biological tissue. The imaging system includes an energy source for emitting x-rays toward the biological tissue and a charge coupled device (CCD) located immediately adjacent the biological tissue and arranged transverse to the direction of irradiation along which the x-rays travel. The CCD directly receives and detects the x-rays after passing through the biological tissue. The CCD is divided into a matrix of cells, each of which individually stores a count of x-rays directly detected by the cell. The imaging system further includes a pattern generator electrically coupled to the CCD for reading a count from each cell. A display device is provided for displaying an image representative of the count read by the pattern generator from the cells of the CCD. 13 figs.

  1. High resolution biomedical imaging system with direct detection of x-rays via a charge coupled device

    DOEpatents

    Atac, Muzaffer; McKay, Timothy A.

    1998-01-01

    An imaging system is provided for direct detection of x-rays from an irradiated biological tissue. The imaging system includes an energy source for emitting x-rays toward the biological tissue and a charge coupled device (CCD) located immediately adjacent the biological tissue and arranged transverse to the direction of irradiation along which the x-rays travel. The CCD directly receives and detects the x-rays after passing through the biological tissue. The CCD is divided into a matrix of cells, each of which individually stores a count of x-rays directly detected by the cell. The imaging system further includes a pattern generator electrically coupled to the CCD for reading a count from each cell. A display device is provided for displaying an image representative of the count read by the pattern generator from the cells of the CCD.

  2. Three-channel imaging fabry-perot interferometer for measurement of mid-latitude airglow.

    PubMed

    Shiokawa, K; Kadota, T; Ejiri, M K; Otsuka, Y; Katoh, Y; Satoh, M; Ogawa, T

    2001-08-20

    We have developed a three-channel imaging Fabry-Perot interferometer with which to measure atmospheric wind and temperature in the mesosphere and thermosphere through nocturnal airglow emissions. The interferometer measures two-dimensional wind and temperature for wavelengths of 630.0 nm (OI, altitude, 200-300 km), 557.7 nm (OI, 96 km), and 839.9 nm (OH, 86 km) simultaneously with a time resolution of 20 min, using three cooled CCD detectors with liquid-N(2) Dewars. Because we found that the CCD sensor moves as a result of changes in the level of liquid N(2) in the Dewars, the cooling system has been replaced by thermoelectric coolers. The fringe drift that is due to changes in temperature of the etalon is monitored with a frequency-stabilized He-Ne laser. We also describe a data-reduction scheme for calculating wind and temperature from the observed fringes. The system is fully automated and has been in operation since June 1999 at the Shigaraki Observatory (34.8N, 136.1E), Shiga, Japan.

  3. Cryogenic irradiation of an EMCCD for the WFIRST coronagraph: preliminary performance analysis

    NASA Astrophysics Data System (ADS)

    Bush, Nathan; Hall, David; Holland, Andrew; Burgon, Ross; Murray, Neil; Gow, Jason; Jordan, Douglas; Demers, Richard; Harding, Leon K.; Nemati, Bijan; Hoenk, Michael; Michaels, Darren; Peddada, Pavani

    2016-08-01

    The Wide Field Infra-Red Survey Telescope (WFIRST) is a NASA observatory scheduled to launch in the next decade that will settle essential questions in exoplanet science. The Wide Field Instrument (WFI) offers Hubble-quality imaging over a 0.28 square degree field of view and will gather NIR statistical data on exoplanets through gravitational microlensing. An on-board coronagraph will for the first time perform direct imaging and spectroscopic analysis of exoplanets with properties analogous to those within our own solar system, including cold Jupiters, mini Neptunes and potentially super Earths. The Coronagraph Instrument (CGI) will be required to operate with low signal flux for long integration times, demanding that all noise sources be kept to a minimum. The Electron Multiplication (EM)-CCD has been baselined for both the imaging and spectrograph cameras due to its ability to operate with sub-electron effective read noise values at an appropriate multiplication gain setting. The presence of other noise sources, however, such as thermal dark signal and Clock Induced Charge (CIC), needs to be characterized and mitigated. In addition, operation within a space environment will subject the device to radiation damage that will degrade the Charge Transfer Efficiency (CTE) of the device throughout the mission lifetime. Irradiation at the nominal instrument operating temperature has the potential to provide the best estimate of performance degradation that will be experienced in flight, since the final population of silicon defects has been shown to be dependent upon the temperature at which the sensor is irradiated. Here we present initial findings from pre- and post-cryogenic irradiation testing of the e2v CCD201-20 BI EMCCD sensor, baselined for the WFIRST coronagraph instrument. The motivation for irradiation at cryogenic temperatures is discussed with reference to previous investigations of a similar nature. The results are presented in context with those from a previous room temperature irradiation investigation that was performed on a CCD201-20 operated under the same conditions. A key conclusion is that the measured performance degradation for a given proton fluence is seen to measurably differ for the cryogenic case compared to the room temperature equivalent for the conditions of this study.

  4. Toward a digital camera to rival the human eye

    NASA Astrophysics Data System (ADS)

    Skorka, Orit; Joseph, Dileepan

    2011-07-01

    All things considered, electronic imaging systems do not rival the human visual system despite notable progress over 40 years since the invention of the CCD. This work presents a method that allows design engineers to evaluate the performance gap between a digital camera and the human eye. The method identifies limiting factors of the electronic systems by benchmarking against the human system. It considers power consumption, visual field, spatial resolution, temporal resolution, and properties related to signal and noise power. A figure of merit is defined as the performance gap of the weakest parameter. Experimental work done with observers and cadavers is reviewed to assess the parameters of the human eye, and assessment techniques are also covered for digital cameras. The method is applied to 24 modern image sensors of various types, where an ideal lens is assumed to complete a digital camera. Results indicate that dynamic range and dark limit are the most limiting factors. The substantial functional gap, from 1.6 to 4.5 orders of magnitude, between the human eye and digital cameras may arise from architectural differences between the human retina, arranged in a multiple-layer structure, and image sensors, mostly fabricated in planar technologies. Functionality of image sensors may be significantly improved by exploiting technologies that allow vertical stacking of active tiers.
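
    The figure of merit described above, i.e. the gap of the weakest parameter, can be expressed compactly. The sketch below assumes each parameter is compared as a ratio of the human-eye value to the camera value on a log scale; the parameter names and numbers are hypothetical, chosen purely for illustration.

        import math

        def weakest_parameter_gap(eye_values, camera_values):
            """Figure of merit (sketch): largest per-parameter gap, in orders of
            magnitude, between the human eye and the digital camera."""
            gaps = {name: math.log10(eye / camera_values[name])
                    for name, eye in eye_values.items()}
            worst = max(gaps, key=gaps.get)
            return worst, gaps[worst]

        # Hypothetical parameter values (arbitrary units, eye vs. camera)
        eye = {"dynamic_range": 1e8, "dark_limit": 1e6, "spatial_resolution": 1.0}
        cam = {"dynamic_range": 1e4, "dark_limit": 1e3, "spatial_resolution": 1.0}
        print(weakest_parameter_gap(eye, cam))   # ('dynamic_range', 4.0)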

  5. Comparison of two methods of digital imaging technology for small diameter K-file length determination.

    PubMed

    Maryam, Ehsani; Farida, Abesi; Farhad, Akbarzade; Soraya, Khafri

    2013-11-01

    Obtaining the proper working length in endodontic treatment is essential. The aim of this study was to compare the working length (WL) assessment of small-diameter K-files using two different digital imaging methods. The samples for this in-vitro experimental study consisted of 40 extracted single-rooted premolars. After access cavity preparation, ISO no. 6, 8, and 10 stainless steel K-files were inserted in the canals at three different lengths, evaluated in a blinded manner: at the level of the apical foramen (actual), 1 mm short of the apical foramen, and 2 mm short of the apical foramen. A digital caliper was used to measure the length of the files, which was considered the gold standard. Five observers (two oral and maxillofacial radiologists and three endodontists) examined the digital radiographs, which were obtained using PSP and CCD digital imaging sensors. The collected data were analyzed with SPSS 17 and repeated-measures paired t-tests. In WL assessment of small-diameter K-files, a statistically significant relationship was seen among the observers of the two digital imaging techniques (P<0.001). However, no significant difference was observed between the two digital techniques in WL assessment of small-diameter K-files (P<0.05). PSP and CCD digital imaging techniques were similar in WL assessment of canals using no. 6, 8, and 10 K-files.

  6. Comparative analysis of data quality and applications in vegetation of HJ-1A CCD images

    NASA Astrophysics Data System (ADS)

    Wei, Hongwei; Tian, Qingjiu; Huang, Yan; Wang, Yan

    2014-05-01

    To study data quality and to find differences in vegetation monitoring applications, HJ-1A CCD1 data acquired on April 1, 2012 and HJ-1A CCD2 data acquired on March 31, 2012 over the same region (Lai'an, Chuzhou) were comparatively analyzed using an objective image quality assessment method based on more than five spectral image evaluation parameters: radiometric precision (mean, variance, skewness, kurtosis), information entropy, signal-to-noise ratio, sharpness, contrast, and the normalized difference vegetation index (NDVI). The results show that, by objective evaluation of data quality, there is little difference between HJ-1A CCD1 and CCD2 except in radiometric precision, which conforms to their design specifications, so the difference can usually be neglected. Next, field-measured spectral data and GPS data from Lai'an were combined, and NDVI was selected to evaluate the differences between CCD1 and CCD2 in vegetation monitoring applications. The study area was divided into nine small plots according to the GPS data, the spectral data and image data were matched one-to-one for these plots, NDVI values were calculated for each plot, and the measured spectra were also resampled with the CCD1 and CCD2 spectral response functions to calculate NDVI. The results again show little difference between CCD1 and CCD2; the remaining differences in wheat reflectance and NDVI are mainly due to the calibration coefficients of CCD1 and CCD2 and to differences in solar elevation angle and atmospheric conditions at the time of image acquisition. Therefore, when using HJ-1A CCD data to monitor surface dynamic changes, both the performance indicators and the acquisition conditions of CCD1 and CCD2 must be considered, and normalization techniques should be applied before comparative analysis. Finally, to study the influence of the spectral response function, a spectral response function impact factor was proposed; the measured spectra were resampled with the HJ-1A CCD spectral response functions and the equivalent reflectance was calculated according to its formula, quantifying the impact of the spectral response function and providing theoretical and technical support for spectral normalization. The objective evaluation of the data quality differences between HJ-1A CCD1 and CCD2 is important for promoting the wider application of Chinese remote sensing satellite data. Future research should also take calibration coefficients, solar elevation angle, atmospheric conditions and image scanning angle into account and carry out the corresponding quantitative normalization, which is of great significance for time-series applications in monitoring the ecological environment in China.
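
    As a small, hedged illustration of the plot-wise NDVI comparison described above (the band reflectances and the calibration offsets below are synthetic, not HJ-1A measurements):

      import numpy as np

      def ndvi(red, nir):
          """Normalized Difference Vegetation Index, computed element-wise."""
          red = np.asarray(red, dtype=float)
          nir = np.asarray(nir, dtype=float)
          return (nir - red) / (nir + red + 1e-12)

      # Hypothetical plot-mean reflectances for the two cameras over the same nine plots.
      red_ccd1 = np.array([0.08, 0.10, 0.09, 0.11, 0.07, 0.12, 0.10, 0.09, 0.08])
      nir_ccd1 = np.array([0.38, 0.35, 0.40, 0.33, 0.42, 0.31, 0.36, 0.39, 0.41])
      red_ccd2 = red_ccd1 * 1.03   # small radiometric offset standing in for calibration differences
      nir_ccd2 = nir_ccd1 * 0.98

      diff = ndvi(red_ccd1, nir_ccd1) - ndvi(red_ccd2, nir_ccd2)
      print("mean NDVI difference CCD1-CCD2:", diff.mean())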

  7. Registering coherent change detection products associated with large image sets and long capture intervals

    DOEpatents

    Perkins, David Nikolaus; Gonzales, Antonio I

    2014-04-08

    A set of co-registered coherent change detection (CCD) products is produced from a set of temporally separated synthetic aperture radar (SAR) images of a target scene. A plurality of transformations are determined, which transformations are respectively for transforming a plurality of the SAR images to a predetermined image coordinate system. The transformations are used to create, from a set of CCD products produced from the set of SAR images, a corresponding set of co-registered CCD products.

  8. Mosaic CCD method: A new technique for observing dynamics of cometary magnetospheres

    NASA Technical Reports Server (NTRS)

    Saito, T.; Takeuchi, H.; Kozuba, Y.; Okamura, S.; Konno, I.; Hamabe, M.; Aoki, T.; Minami, S.; Isobe, S.

    1992-01-01

    On April 29, 1990, the plasma tail of Comet Austin was observed with a CCD camera on the 105-cm Schmidt telescope at the Kiso Observatory of the University of Tokyo. The area of the CCD used in this observation is only about 1 sq cm. When this CCD is used on the 105-cm Schmidt telescope at the Kiso Observatory, the area corresponds to a narrow square field of view of about 12 × 12 arcminutes. By comparison with the photograph of Comet Austin taken by Numazawa (personal communication) on the same night, we see that only a small part of the plasma tail can be photographed at one time with the CCD. However, by shifting the view on the CCD after each exposure, we succeeded in imaging the entire length of the cometary magnetosphere of 1.6 × 10^6 km. This new technique is called 'the mosaic CCD method'. In order to study the dynamics of cometary plasma tails, seven frames of the comet from the head to the tail region were imaged twice with the mosaic CCD method, yielding two sets of images. Six microstructures, including arcade structures, were identified in both image sets. Sketches of the plasma tail including microstructures are included.

  9. CCD-Based Skinning Injury Recognition on Potato Tubers (Solanum tuberosum L.): A Comparison between Visible and Biospeckle Imaging

    PubMed Central

    Gao, Yingwang; Geng, Jinfeng; Rao, Xiuqin; Ying, Yibin

    2016-01-01

    Skinning injury on potato tubers is a kind of superficial wound that is generally inflicted by mechanical forces during harvest and postharvest handling operations. Though skinning injury is pervasive and obstructive, its detection is very limited. This study attempted to identify injured skin using two CCD (Charge Coupled Device) sensor-based machine vision technologies, i.e., visible imaging and biospeckle imaging. The identification of skinning injury was realized by exploiting features extracted from various ROIs (Regions of Interest). The features extracted from visible images were pixel-wise color and texture features, while region-wise BA (Biospeckle Activity) was calculated from biospeckle imaging. In addition, calculations of BA using different numbers of speckle patterns were compared. Finally, the extracted features were fed into LS-SVM (Least Square Support Vector Machine) and BLR (Binary Logistic Regression) classifiers, respectively. Results showed that color features performed better than texture features in classifying sound skin and injured skin, especially for injured skin stored no less than 1 day, with an average classification accuracy of 90%. Image capture and processing in biospeckle imaging can be sped up by reducing the 512 captured frames to 125 frames. Classification results based on the BA feature were acceptable for early skinning injury stored within 1 day, with an accuracy of 88.10%. It is concluded that skinning injury can be recognized by visible and biospeckle imaging during different stages. Visible imaging is suited to recognizing stale skinning injury, while fresh injury can be discriminated by biospeckle imaging. PMID:27763555

  10. CCD-Based Skinning Injury Recognition on Potato Tubers (Solanum tuberosum L.): A Comparison between Visible and Biospeckle Imaging.

    PubMed

    Gao, Yingwang; Geng, Jinfeng; Rao, Xiuqin; Ying, Yibin

    2016-10-18

    Skinning injury on potato tubers is a kind of superficial wound that is generally inflicted by mechanical forces during harvest and postharvest handling operations. Though skinning injury is pervasive and obstructive, its detection is very limited. This study attempted to identify injured skin using two CCD (Charge Coupled Device) sensor-based machine vision technologies, i.e., visible imaging and biospeckle imaging. The identification of skinning injury was realized by exploiting features extracted from various ROIs (Regions of Interest). The features extracted from visible images were pixel-wise color and texture features, while region-wise BA (Biospeckle Activity) was calculated from biospeckle imaging. In addition, calculations of BA using different numbers of speckle patterns were compared. Finally, the extracted features were fed into LS-SVM (Least Square Support Vector Machine) and BLR (Binary Logistic Regression) classifiers, respectively. Results showed that color features performed better than texture features in classifying sound skin and injured skin, especially for injured skin stored no less than 1 day, with an average classification accuracy of 90%. Image capture and processing in biospeckle imaging can be sped up by reducing the 512 captured frames to 125 frames. Classification results based on the BA feature were acceptable for early skinning injury stored within 1 day, with an accuracy of 88.10%. It is concluded that skinning injury can be recognized by visible and biospeckle imaging during different stages. Visible imaging is suited to recognizing stale skinning injury, while fresh injury can be discriminated by biospeckle imaging.
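
    A rough sketch of the binary logistic regression (BLR) classification step, with synthetic colour features standing in for the paper's pixel-wise features and scikit-learn assumed available; it is not the authors' trained model:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(0)

      # Synthetic stand-ins for ROI-mean colour features (R, G, B); injured skin is
      # assumed here to be slightly darker than sound skin, purely for illustration.
      sound   = rng.normal(loc=[0.62, 0.50, 0.35], scale=0.05, size=(200, 3))
      injured = rng.normal(loc=[0.55, 0.42, 0.33], scale=0.05, size=(200, 3))
      X = np.vstack([sound, injured])
      y = np.r_[np.zeros(200), np.ones(200)]

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = LogisticRegression().fit(X_tr, y_tr)    # binary logistic regression (BLR)
      print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))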

  11. Research of BRDF effects on remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Nina, Peng; Kun, Wang; Tao, Li; Yang, Pan

    2011-08-01

    The gray-level distribution and contrast of optical satellite remote sensing imagery over the same kind of ground surface can differ considerably; they depend not only on the satellite's observation geometry and the solar incidence direction but also on the structural and optical properties of the surface. The objectives of this research are therefore to analyze the different BRDF characteristics of soil, vegetation, water and urban surfaces, and their BRDF effects on satellite image quality, using the 6S radiative transfer model. Furthermore, the cause of CCD blooming and charge spilling due to high ground reflectance is discussed using QUICKBIRD image data and the corresponding ground image data. General conclusions on BRDF effects on remote sensing imagery are proposed.

  12. High frame rate imaging systems developed in Northwest Institute of Nuclear Technology

    NASA Astrophysics Data System (ADS)

    Li, Binkang; Wang, Kuilu; Guo, Mingan; Ruan, Linbo; Zhang, Haibing; Yang, Shaohua; Feng, Bing; Sun, Fengrong; Chen, Yanli

    2007-01-01

    This paper presents high frame rate imaging systems developed at the Northwest Institute of Nuclear Technology in recent years. Three types of imaging systems are included. The first type utilizes the EG&G RETICON photodiode array (PDA) RA100A as the image sensor, which can operate at up to 1000 frames per second (fps). Besides working continuously, the PDA system is also designed to switch to a mode for capturing transient flash events; a specific timing sequence was designed to meet this requirement. The camera image data can be transmitted to a remote area by coaxial or optical fiber cable and then stored. The second type of imaging system utilizes the PHOTOBIT Complementary Metal Oxide Semiconductor (CMOS) PB-MV13 as the image sensor, which has a high resolution of 1280 (H) × 1024 (V) pixels per frame. The CMOS system can operate at up to 500 fps in full frame and 4000 fps in partial readout. The prototype scheme of the system is presented. The third type of imaging system adopts charge coupled devices (CCDs) as the imagers; MINTRON MTV-1881EX, DALSA CA-D1 and CA-D6 camera heads are used in the system development. A comparison of the features of the RA100A-, PB-MV13- and CA-D6-based systems is given at the end.

  13. Generative technique for dynamic infrared image sequences

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Cao, Zhiguo; Zhang, Tianxu

    2001-09-01

    The generation of dynamic infrared image sequences is discussed in this paper. Because an infrared sensor differs from a CCD camera in imaging mechanism, it forms an image by receiving the infrared radiation of the scene (including target and background). Infrared imaging is strongly affected by atmospheric radiation, environmental radiation and the attenuation of radiation during atmospheric transfer. Therefore, the influence of these radiation sources on imaging is first analyzed and the corresponding radiance calculation formulas are provided; the passive scene and the active scene are analyzed separately. The calculation methods for the passive scene are then given, and the roles of the scene model, the atmospheric transmission model and the material physical attribute databases are explained. Secondly, based on the infrared imaging model, the design concept, implementation approach and software framework of the infrared image sequence simulation software on an SGI workstation are introduced. Guided by these ideas, an example of simulated infrared image sequences is presented in the third part of the paper, using sea and sky as background, a warship as target and an aircraft as the viewpoint. Finally, the simulation is evaluated comprehensively and an improvement scheme is presented.

  14. Development of integrated semiconductor optical sensors for functional brain imaging

    NASA Astrophysics Data System (ADS)

    Lee, Thomas T.

    Optical imaging of neural activity is a widely accepted technique for imaging brain function in the field of neuroscience research, and has been used to study the cerebral cortex in vivo for over two decades. Maps of brain activity are obtained by monitoring intensity changes in back-scattered light, called Intrinsic Optical Signals (IOS), that correspond to fluctuations in blood oxygenation and volume associated with neural activity. Current imaging systems typically employ bench-top equipment including lamps and CCD cameras to study animals using visible light. Such systems require the use of anesthetized or immobilized subjects with craniotomies, which imposes limitations on the behavioral range and duration of studies. The ultimate goal of this work is to overcome these limitations by developing a single-chip semiconductor sensor using arrays of sources and detectors operating at near-infrared (NIR) wavelengths. A single-chip implementation, combined with wireless telemetry, will eliminate the need for immobilization or anesthesia of subjects and allow in vivo studies of free behavior. NIR light offers additional advantages because it experiences less absorption in animal tissue than visible light, which allows for imaging through superficial tissues. This, in turn, reduces or eliminates the need for traumatic surgery and enables long-term brain-mapping studies in freely-behaving animals. This dissertation concentrates on key engineering challenges of implementing the sensor. This work shows the feasibility of using a GaAs-based array of vertical-cavity surface emitting lasers (VCSELs) and PIN photodiodes for IOS imaging. I begin with in-vivo studies of IOS imaging through the skull in mice, and use these results along with computer simulations to establish minimum performance requirements for light sources and detectors. I also evaluate the performance of a current commercial VCSEL for IOS imaging, and conclude with a proposed prototype sensor.

  15. Modelling and testing the x-ray performance of CCD and CMOS APS detectors using numerical finite element simulations

    NASA Astrophysics Data System (ADS)

    Weatherill, Daniel P.; Stefanov, Konstantin D.; Greig, Thomas A.; Holland, Andrew D.

    2014-07-01

    Pixellated monolithic silicon detectors operated in a photon-counting regime are useful in spectroscopic imaging applications. Since a high energy incident photon may produce many excess free carriers upon absorption, both energy and spatial information can be recovered by resolving each interaction event. The performance of these devices in terms of both the energy and spatial resolution is in large part determined by the amount of diffusion which occurs during the collection of the charge cloud by the pixels. Past efforts to predict the X-ray performance of imaging sensors have used either analytical solutions to the diffusion equation or simplified Monte Carlo electron transport models. These methods are computationally attractive and highly useful but may be complemented using more physically detailed models based on TCAD simulations of the devices. Here we present initial results from a model which employs a full transient numerical solution of the classical semiconductor equations to model charge collection in device pixels under stimulation from initially Gaussian photogenerated charge clouds, using commercial TCAD software. Realistic device geometries and doping are included. By mapping the pixel response to different initial interaction positions and charge cloud sizes, the charge splitting behaviour of the model sensor under various illuminations and operating conditions is investigated. Experimental validation of the model is presented from an e2v CCD30-11 device under varying substrate bias, illuminated using an Fe-55 source.

  16. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

    A conventional binocular vision imaging system has a small field of view and cannot reconstruct the 3-D shape of a dynamic object. We built a linear array CCD binocular stereo vision imaging system, which uses different calibration and reconstruction methods. Compared with a conventional binocular vision imaging system, the linear array CCD binocular vision imaging system has a wider field of view and can reconstruct the 3-D morphology of objects in continuous motion with accurate results. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including calibration, capture, matching and reconstruction. The system consists of two linear array cameras placed in a special arrangement and a horizontally moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, and the results are matched and reconstructed in 3-D. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, which is of great significance for measuring the 3-D morphology of moving objects.

  17. Detection systems for mass spectrometry imaging: a perspective on novel developments with a focus on active pixel detectors.

    PubMed

    Jungmann, Julia H; Heeren, Ron M A

    2013-01-15

    Instrumental developments for imaging and individual particle detection for biomolecular mass spectrometry (imaging) and fundamental atomic and molecular physics studies are reviewed. Ion-counting detectors, array detection systems and high mass detectors for mass spectrometry (imaging) are treated. State-of-the-art detection systems for multi-dimensional ion, electron and photon detection are highlighted. Their application and performance in three different imaging modes--integrated, selected and spectral image detection--are described. Electro-optical and microchannel-plate-based systems are contrasted. The analytical capabilities of solid-state pixel detectors--both charge coupled device (CCD) and complementary metal oxide semiconductor (CMOS) chips--are introduced. The Medipix/Timepix detector family is described as an example of a CMOS hybrid active pixel sensor. Alternative imaging methods for particle detection and their potential for future applications are investigated. Copyright © 2012 John Wiley & Sons, Ltd.

  18. Automatic calibration system for analog instruments based on DSP and CCD sensor

    NASA Astrophysics Data System (ADS)

    Lan, Jinhui; Wei, Xiangqin; Bai, Zhenlong

    2008-12-01

    Currently, the calibration of analog measurement instruments is mainly performed manually, and many problems remain to be solved. In this paper, an automatic calibration system (ACS) based on a Digital Signal Processor (DSP) and a Charge Coupled Device (CCD) sensor is developed and a real-time calibration algorithm is presented. In the ACS, a TI DM643 DSP processes the data received from the CCD sensor and the result is displayed on a Liquid Crystal Display (LCD) screen. In the algorithm, the pointer region is first extracted to improve calibration speed. A mathematical model of the pointer is then built to thin the pointer and determine the instrument's reading. In numerous experiments, a single reading took no more than 20 milliseconds, compared with several seconds when done manually, and the reading error satisfied the accuracy requirements of the instruments. It is shown that the automatic calibration system can effectively accomplish the calibration of analog measurement instruments.
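
    A minimal sketch, assuming a hypothetical linear scale, of how a detected pointer angle could be converted into an instrument reading once the pointer line has been extracted and thinned (the angles and scale range below are illustrative, not from the paper):

      def pointer_reading(theta_deg, theta_min_deg, theta_max_deg, value_min, value_max):
          """Convert a detected pointer angle into an instrument reading.

          Assumes a linear scale between the angles of the minimum and maximum
          graduations; the angle itself would come from the thinned pointer line
          fitted in the CCD image.
          """
          frac = (theta_deg - theta_min_deg) / (theta_max_deg - theta_min_deg)
          return value_min + frac * (value_max - value_min)

      # Example: scale spans 225 degrees and reads 0 to 10 units; pointer found at 120 degrees.
      print(pointer_reading(120.0, 0.0, 225.0, 0.0, 10.0))   # ~5.33 units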

  19. Optical system design of CCD star sensor with large aperture and wide field of view

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Jiang, Lun; Li, Ying-chao; Liu, Zhuang

    2017-10-01

    The star sensor is one of the sensors used to determine the spatial attitude of a space vehicle. An optical system for a star sensor with large aperture and wide field of view was designed in this paper. The effective focal length of the optics is 16 mm, the F-number is 1.2, and the field of view of the optical system is 20°. The working spectrum is 500 to 800 nm. The lens system adopts a complicated Petzval-like structure and a special glass combination, and achieves high imaging quality over the whole spectral range. For each field-of-view point, the value of the modulation transfer function at 50 cycles/mm is higher than 0.3. On the detector plane, the encircled energy within a circle of 14 μm diameter reaches 80% of the total energy. Over the whole field of view, the spot diameter in the image plane is no larger than 13 μm. The full-field distortion is less than 0.1%, which helps to obtain the accurate location of the reference star from the image captured by the star sensor. The lateral chromatic aberration is less than 2 μm over the whole spectral range.
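
    A short worked example of the first-order quantities implied by the stated design values (the 13 μm pixel pitch is an assumption for illustration, not a parameter given in the paper):

      import math

      f_mm = 16.0          # effective focal length from the design
      f_number = 1.2
      fov_deg = 20.0

      aperture_mm = f_mm / f_number                        # entrance pupil diameter, ~13.3 mm
      pixel_pitch_um = 13.0                                # assumed detector pitch, not from the paper
      plate_scale_deg_per_mm = math.degrees(1.0 / f_mm)    # small-angle approximation
      ifov_arcsec = math.degrees(pixel_pitch_um * 1e-3 / f_mm) * 3600.0

      print(f"aperture      = {aperture_mm:.1f} mm")
      print(f"plate scale   = {plate_scale_deg_per_mm:.2f} deg/mm")
      print(f"per-pixel IFOV ~ {ifov_arcsec:.0f} arcsec")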

  20. Accurate attitude determination of the LACE satellite

    NASA Technical Reports Server (NTRS)

    Miglin, M. F.; Campion, R. E.; Lemos, P. J.; Tran, T.

    1993-01-01

    The Low-power Atmospheric Compensation Experiment (LACE) satellite, launched in February 1990 by the Naval Research Laboratory, uses a magnetic damper on a gravity gradient boom and a momentum wheel with its axis perpendicular to the plane of the orbit to stabilize and maintain its attitude. Satellite attitude is determined using three types of sensors: a conical Earth scanner, a set of sun sensors, and a magnetometer. The Ultraviolet Plume Instrument (UVPI), on board LACE, consists of two intensified CCD cameras and a gimbaled pointing mirror. The primary purpose of the UVPI is to image rocket plumes from space at ultraviolet and visible wavelengths. Secondary objectives include imaging stars, atmospheric phenomena, and ground targets. The problem facing the UVPI experimenters is that the sensitivity of the LACE satellite attitude sensors is not always adequate to correctly point the UVPI cameras. Our solution is to point the UVPI cameras at known targets and use the information thus gained to improve attitude measurements. This paper describes the three methods developed to determine improved attitude values using the UVPI for both real-time operations and post-observation analysis.

  1. Portal imaging with flat-panel detector and CCD camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Wai; Dallas, William J.

    1997-07-01

    This paper provides a comparison of imaging parameters of two portal imaging systems at 6 MV: a flat panel detector and a CCD-camera based portal imaging system. Measurements were made of the signal and noise and consequently of signal-to-noise per pixel as a function of the exposure. Both systems have a linear response with respect to exposure, and the noise is proportional to the square-root of the exposure, indicating photon-noise limitation. The flat-panel detector has a signal-to-noise ratio which is higher than that observed with the CCD-camera based portal imaging system. This is expected because most portal imaging systems using optical coupling with a lens exhibit severe quantum-sinks. The paper also presents data on the screen's photon gain (the number of light-photons per interacting x-ray photon), as well as on the magnitude of the Swank noise (which describes fluctuation in the screen's photon gain). Images of a Las Vegas-type aluminum contrast detail phantom, located at the ISO-Center, were generated at an exposure of 1 MU. The CCD-camera based system permits detection of aluminum holes of 0.01194 cm diameter and 0.228 mm depth while the flat-panel detector permits detection of aluminum holes of 0.01194 cm diameter and 0.1626 mm depth, indicating a better signal-to-noise ratio. Rank order filtering was applied to the raw images from the CCD-based system in order to remove the direct hits. These are camera responses to scattered x-ray photons which interact directly with the CCD of the CCD-camera and generate 'salt and pepper type noise,' which interferes severely with attempts to determine accurate estimates of the image noise.
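
    As a quick, hedged illustration of the photon-noise-limitation check described above (the noise-versus-exposure numbers below are synthetic, not the measured data), the data can be fit on log-log axes; a slope near 0.5 indicates noise proportional to the square root of exposure:

      import numpy as np

      # Hypothetical noise measurements versus exposure (monitor units).
      exposure = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
      noise    = np.array([10.1, 14.0, 20.3, 28.5, 40.2])   # ADU rms, synthetic

      slope, intercept = np.polyfit(np.log(exposure), np.log(noise), 1)
      print(f"log-log slope = {slope:.2f}  (0.5 indicates photon-noise limitation)")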

  2. Modeling the impact of preflushing on CTE in proton irradiated CCD-based detectors

    NASA Astrophysics Data System (ADS)

    Philbrick, R. H.

    2002-04-01

    A software model is described that performs a "real world" simulation of the operation of several types of charge-coupled device (CCD)-based detectors in order to accurately predict the impact that high-energy proton radiation has on image distortion and modulation transfer function (MTF). The model was written primarily to predict the effectiveness of vertical preflushing on the custom full frame CCD-based detectors intended for use on the proposed Kepler Discovery mission, but it is capable of simulating many other types of CCD detectors and operating modes as well. The model keeps track of the occupancy of all phosphorus-vacancy (P-V), divacancy (V-V) and oxygen-vacancy (O-V) defect centers under every CCD electrode over the entire detector area. The integrated image is read out by simulating every electrode-to-electrode charge transfer in both the vertical and horizontal CCD registers. A signal level dependency on the capture and emission of signal is included and the current state of each electrode (e.g., barrier or storage) is considered when distributing integrated and emitted signal. Options for performing preflushing, preflashing, and including mini-channels are available on both the vertical and horizontal CCD registers. In addition, dark signal generation and image transfer smear can be selectively enabled or disabled. A comparison of the charge transfer efficiency (CTE) data measured on the Hubble Space Telescope Imaging Spectrograph (STIS) CCD with the CTE extracted from model simulations of the STIS CCD shows good agreement.
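
    A toy, hedged sketch of the kind of trap capture/emission bookkeeping such a model performs, reduced to a single trap species and one line of pixels; the parameters and probabilities are illustrative and are not the paper's P-V/V-V/O-V model:

      import numpy as np

      rng = np.random.default_rng(1)

      def transfer_with_traps(line, n_traps_per_pixel=2, p_capture=0.3,
                              t_transfer=1e-4, tau_emit=5e-4):
          """Read one line of charge packets through a register with bulk traps.

          Each pixel hosts a few traps; a trap may capture an electron from the
          packet passing over it and re-emit it into a later packet, deferring
          charge into trailing pixels.
          """
          n = len(line)
          out = np.zeros(n)
          trapped = np.zeros(n, dtype=int)               # electrons held under each pixel
          p_emit = 1.0 - np.exp(-t_transfer / tau_emit)  # emission probability per transfer
          for i in range(n):                             # packet nearest the output is read first
              packet = float(line[i])
              for j in range(i, -1, -1):                 # pixels the packet crosses on its way out
                  released = rng.binomial(trapped[j], p_emit)
                  trapped[j] -= released
                  packet += released
                  empty = n_traps_per_pixel - trapped[j]
                  available = int(min(empty, packet))
                  captured = rng.binomial(available, p_capture) if available > 0 else 0
                  trapped[j] += captured
                  packet -= captured
              out[i] = packet
          return out

      line = np.zeros(50)
      line[10] = 200                                # a single bright pixel in an empty line
      print(transfer_with_traps(line)[8:16])        # charge is deferred into trailing pixels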

  3. Cross delay line sensor characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, Israel J; Remelius, Dennis K; Tiee, Joe J

    There exists a wealth of information in the scientific literature on the physical properties and device characterization procedures for complementary metal oxide semiconductor (CMOS), charge coupled device (CCD) and avalanche photodiode (APD) format detectors. Numerous papers and books have also treated photocathode operation in the context of photomultiplier tube (PMT) operation for either non-imaging applications or limited night vision capability. However, much less information has been reported in the literature about the characterization procedures and properties of photocathode detectors with novel cross delay line (XDL) anode structures. These allow one to detect single photons and create images by recording space and time coordinate (X, Y & T) information. In this paper, we report on the physical characteristics and performance of a cross delay line anode sensor with an enhanced near infrared wavelength response photocathode and a high dynamic range microchannel plate (MCP) gain (> 10^6) multiplier stage. Measurement procedures and results are presented, including the device dark event rate (DER), pulse height distribution, quantum and electronic device efficiency (QE & DQE) and spatial resolution per effective pixel region in a 25 mm sensor array. The overall knowledge and information obtained from XDL sensor characterization allow us to optimize device performance and assess capability. These performance properties and capabilities make XDL detectors ideal for remote sensing field applications that require single photon detection, imaging, sub-nanosecond timing response, high spatial resolution (tens of microns) and large effective image format.

  4. Evaluation on Radiometric Capability of Chinese Optical Satellite Sensors

    PubMed Central

    Yang, Aixia; Zhong, Bo; Wu, Shanlong; Liu, Qinhuo

    2017-01-01

    The radiometric capability of on-orbit sensors should be updated on time due to changes induced by space environmental factors and instrument aging. Some sensors, such as Moderate Resolution Imaging Spectroradiometer (MODIS), have onboard calibrators, which enable real-time calibration. However, most Chinese remote sensing satellite sensors lack onboard calibrators. Their radiometric calibrations have been updated once a year based on a vicarious calibration procedure, which has affected the applications of the data. Therefore, a full evaluation of the sensors’ radiometric capabilities is essential before quantitative applications can be made. In this study, a comprehensive procedure for evaluating the radiometric capability of several Chinese optical satellite sensors is proposed. In this procedure, long-term radiometric stability and radiometric accuracy are the two major indicators for radiometric evaluation. The radiometric temporal stability is analyzed by the tendency of long-term top-of-atmosphere (TOA) reflectance variation; the radiometric accuracy is determined by comparison with the TOA reflectance from MODIS after spectrally matching. Three Chinese sensors including the Charge-Coupled Device (CCD) camera onboard Huan Jing 1 satellite (HJ-1), as well as the Visible and Infrared Radiometer (VIRR) and Medium-Resolution Spectral Imager (MERSI) onboard the Feng Yun 3 satellite (FY-3) are evaluated in reflective bands based on this procedure. The results are reasonable, and thus can provide reliable reference for the sensors’ application, and as such will promote the development of Chinese satellite data. PMID:28117745

  5. Flat-panel detector, CCD cameras, and electron-beam-tube-based video for use in portal imaging

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Way; Dallas, William J.

    1998-07-01

    This paper provides a comparison of some imaging parameters of four portal imaging systems at 6 MV: a flat panel detector, two CCD cameras and an electron beam tube based video camera. Measurements were made of signal and noise and consequently of signal-to-noise per pixel as a function of the exposure. All systems have a linear response with respect to exposure, and with the exception of the electron beam tube based video camera, the noise is proportional to the square-root of the exposure, indicating photon-noise limitation. The flat-panel detector has a signal-to-noise ratio which is higher than that observed with both CCD cameras or with the electron beam tube based video camera. This is expected because most portal imaging systems using optical coupling with a lens exhibit severe quantum-sinks. The measurements of signal and noise were complemented by images of a Las Vegas-type aluminum contrast detail phantom, located at the ISO-Center. These images were generated at an exposure of 1 MU. The flat-panel detector permits detection of aluminum holes of 1.2 mm diameter and 1.6 mm depth, indicating the best signal-to-noise ratio. The CCD cameras rank second and third in signal-to-noise ratio, permitting detection of aluminum holes of 1.2 mm diameter and 2.2 mm depth (CCD_1) and of 1.2 mm diameter and 3.2 mm depth (CCD_2) respectively, while the electron beam tube based video camera permits detection of only a hole of 1.2 mm diameter and 4.6 mm depth. Rank order filtering was applied to the raw images from the CCD-based systems in order to remove the direct hits. These are camera responses to scattered x-ray photons which interact directly with the CCD of the CCD camera and generate 'salt and pepper type noise,' which interferes severely with attempts to determine accurate estimates of the image noise. The paper also presents data on the metal-phosphor's photon gain (the number of light-photons per interacting x-ray photon).

  6. Improved Space Object Observation Techniques Using CMOS Detectors

    NASA Astrophysics Data System (ADS)

    Schildknecht, T.; Hinze, A.; Schlatter, P.; Silha, J.; Peltonen, J.; Santti, T.; Flohrer, T.

    2013-08-01

    CMOS sensors, or in general Active Pixel Sensors (APS), are rapidly replacing CCDs in the consumer camera market. Due to significant technological advances during the past years these devices have started to compete with CCDs also for demanding scientific imaging applications, in particular in the astronomy community. CMOS detectors offer a series of inherent advantages compared to CCDs, due to the structure of their basic pixel cells, which each contain their own amplifier and readout electronics. The most prominent advantages for space object observations are the extremely fast and flexible readout capabilities, feasibility for electronic shuttering and precise epoch registration, and the potential to perform image processing operations on-chip and in real-time. Currently applied and proposed optical observation strategies for space debris surveys and space surveillance applications were analyzed. The major design drivers were identified and potential benefits from using available and future CMOS sensors were assessed. The major challenges and design drivers for ground-based and space-based optical observation strategies have been analyzed. CMOS detector characteristics were critically evaluated and compared with the established CCD technology, especially with respect to the above mentioned observations. Similarly, the desirable on-chip processing functionalities which would further enhance object detection and image segmentation were identified. Finally, the characteristics of a particular CMOS sensor available at the Zimmerwald observatory were analyzed by performing laboratory test measurements.

  7. Improved sensitivity high-definition interline CCD using the KODAK TRUESENSE color filter pattern

    NASA Astrophysics Data System (ADS)

    DiBella, James; Andreghetti, Marco; Enge, Amy; Chen, William; Stanka, Timothy; Kaser, Robert

    2010-01-01

    The KODAK TRUESENSE Color Filter Pattern is applied here for the first time to a commercially available interline CCD. This 2/3" true-HD sensor will be described along with its performance attributes, including the sensitivity improvement as compared to the Bayer CFA version of the same sensor. In addition, an overview of the system developed for demonstration and evaluation will be provided. Examples of the benefits of the new technology in specific applications, including surveillance and intelligent traffic systems, will be discussed.

  8. 1920x1080 pixel color camera with progressive scan at 50 to 60 frames per second

    NASA Astrophysics Data System (ADS)

    Glenn, William E.; Marcinka, John W.

    1998-09-01

    For over a decade, the broadcast industry, the film industry and the computer industry have had a long-range objective to originate high definition images with progressive scan. This produces images with better vertical resolution and far fewer artifacts than interlaced scan. Computers almost universally use progressive scan. The broadcast industry has resisted switching from interlace to progressive because no cameras were available in that format with the 1920 x 1080 resolution that had obtained international acceptance for high definition program production. The camera described in this paper produces an output in that format derived from two 1920 x 1080 CCD sensors produced by Eastman Kodak.

  9. Photon counting image sensor development for astronomical applications

    NASA Technical Reports Server (NTRS)

    Jenkins, Edward B.

    1987-01-01

    Specially built intensified CCD (ICCD) detector tubes were purchased and the performance of the electron bombardment process was investigated. In addition to studying the signal characteristics of the photoevents, there was interest in demonstrating that back-illuminated chips were not susceptible to radiation damage to their clocking electrodes. How to perform a centroid analysis for a 2-dimensional Gaussian distribution of charge is described, and measurement of the projection (along columns or rows) of the average charge spread profile is discussed. The development and flight of the Interstellar Medium Absorption Profile Spectrograph (IMAPS) are also described.
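
    A minimal sketch of an intensity-weighted centroid for a single photoevent modelled as a 2-D Gaussian charge cloud; the synthetic event and the background handling below are illustrative, not the IMAPS processing:

      import numpy as np

      def centroid(patch):
          """Intensity-weighted centroid of a small patch containing one photoevent.

          A median background estimate is subtracted first so the dark level does
          not bias the result; returns (row, col) in patch coordinates.
          """
          patch = np.asarray(patch, dtype=float)
          patch = patch - np.median(patch)
          patch[patch < 0] = 0.0
          total = patch.sum()
          rows, cols = np.indices(patch.shape)
          return (rows * patch).sum() / total, (cols * patch).sum() / total

      # Synthetic photoevent: a 2-D Gaussian charge cloud centred at (3.3, 4.7).
      y, x = np.mgrid[0:9, 0:9]
      event = 100.0 * np.exp(-((y - 3.3) ** 2 + (x - 4.7) ** 2) / (2 * 1.2 ** 2)) + 5.0
      print(centroid(event))      # approximately (3.3, 4.7)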

  10. Manipulating Digital Holograms to Modify Phase of Reconstructed Wavefronts

    NASA Astrophysics Data System (ADS)

    Ferraro, Pietro; Paturzo, Melania; Memmolo, Pasquale; Finizio, Andrea

    2010-04-01

    We show that through an adaptive deformation of digital holograms it is possible to manage the depth of focus in the numerical reconstruction. The deformation is applied to the original hologram with the aim of bringing simultaneously into focus, in one reconstructed image plane, different objects lying at different distances from the hologram plane (i.e. the CCD sensor) but within the same field of view. In the same way it is possible to extend the depth of field for a 3-D object, bringing a tilted object entirely into focus.

  11. A CMOS In-Pixel CTIA High Sensitivity Fluorescence Imager.

    PubMed

    Murari, Kartikeya; Etienne-Cummings, Ralph; Thakor, Nitish; Cauwenberghs, Gert

    2011-10-01

    Traditionally, charge coupled device (CCD) based image sensors have held sway over the field of biomedical imaging. Complementary metal oxide semiconductor (CMOS) based imagers so far lack sensitivity leading to poor low-light imaging. Certain applications including our work on animal-mountable systems for imaging in awake and unrestrained rodents require the high sensitivity and image quality of CCDs and the low power consumption, flexibility and compactness of CMOS imagers. We present a 132×124 high sensitivity imager array with a 20.1 μm pixel pitch fabricated in a standard 0.5 μm CMOS process. The chip incorporates n-well/p-sub photodiodes, capacitive transimpedance amplifier (CTIA) based in-pixel amplification, pixel scanners and delta differencing circuits. The 5-transistor all-nMOS pixel interfaces with peripheral pMOS transistors for column-parallel CTIA. At 70 fps, the array has a minimum detectable signal of 4 nW/cm(2) at a wavelength of 450 nm while consuming 718 μA from a 3.3 V supply. Peak signal to noise ratio (SNR) was 44 dB at an incident intensity of 1 μW/cm(2). Implementing 4×4 binning allowed the frame rate to be increased to 675 fps. Alternatively, sensitivity could be increased to detect about 0.8 nW/cm(2) while maintaining 70 fps. The chip was used to image single cell fluorescence at 28 fps with an average SNR of 32 dB. For comparison, a cooled CCD camera imaged the same cell at 20 fps with an average SNR of 33.2 dB under the same illumination while consuming over a watt.

  12. A CMOS In-Pixel CTIA High Sensitivity Fluorescence Imager

    PubMed Central

    Murari, Kartikeya; Etienne-Cummings, Ralph; Thakor, Nitish; Cauwenberghs, Gert

    2012-01-01

    Traditionally, charge coupled device (CCD) based image sensors have held sway over the field of biomedical imaging. Complementary metal oxide semiconductor (CMOS) based imagers so far lack sensitivity leading to poor low-light imaging. Certain applications including our work on animal-mountable systems for imaging in awake and unrestrained rodents require the high sensitivity and image quality of CCDs and the low power consumption, flexibility and compactness of CMOS imagers. We present a 132×124 high sensitivity imager array with a 20.1 μm pixel pitch fabricated in a standard 0.5 μm CMOS process. The chip incorporates n-well/p-sub photodiodes, capacitive transimpedance amplifier (CTIA) based in-pixel amplification, pixel scanners and delta differencing circuits. The 5-transistor all-nMOS pixel interfaces with peripheral pMOS transistors for column-parallel CTIA. At 70 fps, the array has a minimum detectable signal of 4 nW/cm2 at a wavelength of 450 nm while consuming 718 μA from a 3.3 V supply. Peak signal to noise ratio (SNR) was 44 dB at an incident intensity of 1 μW/cm2. Implementing 4×4 binning allowed the frame rate to be increased to 675 fps. Alternatively, sensitivity could be increased to detect about 0.8 nW/cm2 while maintaining 70 fps. The chip was used to image single cell fluorescence at 28 fps with an average SNR of 32 dB. For comparison, a cooled CCD camera imaged the same cell at 20 fps with an average SNR of 33.2 dB under the same illumination while consuming over a watt. PMID:23136624

  13. The impact of radiation damage on photon counting with an EMCCD for the WFIRST-AFTA coronagraph

    NASA Astrophysics Data System (ADS)

    Bush, Nathan; Hall, David; Holland, Andrew; Burgon, Ross; Murray, Neil; Gow, Jason; Soman, Matthew; Jordan, Douglas; Demers, Richard; Harding, Leon; Hoenk, Michael; Michaels, Darren; Nemati, Bijan; Peddada, Pavani

    2015-09-01

    WFIRST-AFTA is a 2.4m class NASA observatory designed to address a wide range of science objectives using two complementary scientific payloads. The Wide Field Instrument (WFI) offers Hubble quality imaging over a 0.28 square degree field of view, and will gather NIR statistical data on exoplanets through gravitational microlensing. The second instrument is a high contrast coronagraph that will carry out the direct imaging and spectroscopic analysis of exoplanets, providing a means to probe the structure and composition of planetary systems. The coronagraph instrument is expected to operate at low photon flux for long integration times, meaning all noise sources must be kept to a minimum. In order to satisfy the low noise requirements, the Electron Multiplication (EM)-CCD has been baselined for both the imaging and spectrograph cameras. The EMCCD was selected over other candidates because it achieves sub-electron effective read noise at appropriate multiplication gain settings. The presence of other noise sources, however, such as thermal dark signal and Clock Induced Charge (CIC), needs to be characterised and mitigated. In addition, operation within a space environment will subject the device to radiation damage that will degrade the Charge Transfer Efficiency (CTE) of the device throughout the mission lifetime. Here we present our latest results from pre- and post-irradiation testing of the e2v CCD201-20 BI EMCCD sensor, baselined for the WFIRST-AFTA coronagraph instrument. A description of the detector technology is presented, alongside considerations for operation within a space environment. The results from a room temperature irradiation are discussed in context with the nominal operating requirements of AFTA-C, and future work, which entails a cryogenic irradiation of the CCD201-20, is presented.

  14. Radiation imaging with a new scintillator and a CMOS camera

    NASA Astrophysics Data System (ADS)

    Kurosawa, S.; Shoji, Y.; Pejchal, J.; Yokota, Y.; Yoshikawa, A.

    2014-07-01

    A new imaging system consisting of a high-sensitivity complementary metal-oxide semiconductor (CMOS) sensor, a microscope and a new scintillator, Ce-doped Gd3(Al,Ga)5O12 (Ce:GAGG) grown by the Czochralski process, has been developed. The noise, the dark current and the sensitivity of the CMOS camera (ORCA-Flash4.0, Hamamatsu) were evaluated and compared with those of a conventional CMOS sensor, whose sensitivity is at the same level as that of a charge coupled device (CCD) camera. Without the scintillator, this system had a good position resolution of 2.1 ± 0.4 μm, and we succeeded in obtaining alpha-ray images using a 1-mm thick Ce:GAGG crystal. This system can be applied, for example, as a high-energy X-ray beam profile monitor.

  15. Subelectron readout noise focal plane arrays for space imaging

    NASA Astrophysics Data System (ADS)

    Atlas, Gene; Wadsworth, Mark

    2004-01-01

    Readout noise levels of under 1 electron have long been a goal for the FPA community. In the quest to enhance FPA sensitivity, various approaches have been attempted, ranging from photomultiplier tubes, image intensifier tubes and avalanche photodiodes to the more recent on-chip avalanche charge amplification technologies from CCD manufacturers. While these techniques reduce the readout noise, each offers a set of compromises that negatively affect the overall performance of the sensor in parameters such as power dissipation, dynamic range, uniformity or system complexity. In this work, we overview the benefits and tradeoffs of each approach, and introduce a new technique based on ImagerLabs' exclusive HIT technology, which promises sub-electron read noise and other benefits without the tradeoffs of the other noise reduction techniques.

  16. Defect inspection in hot slab surface: multi-source CCD imaging based fuzzy-rough sets method

    NASA Astrophysics Data System (ADS)

    Zhao, Liming; Zhang, Yi; Xu, Xiaodong; Xiao, Hong; Huang, Chao

    2016-09-01

    To provide an accurate surface defect inspection method and to automate a robust image region-of-interest (ROI) delineation strategy in the production line, a multi-source CCD imaging based fuzzy-rough sets method is proposed for hot slab surface quality assessment. The applicability of the presented method and the devised system is mainly tied to surface quality inspection for strip, billet, slab surfaces and the like. In this work we take advantage of the complementary strengths of two common machine vision (MV) systems: line array CCD traditional scanning imaging (LS-imaging) and area array CCD laser three-dimensional (3D) scanning imaging (AL-imaging). By establishing a fuzzy-rough sets model in the detection system, the seeds for relative fuzzy connectedness (RFC) delineation of the ROI can be placed adaptively; the model introduces upper and lower approximation sets for ROI definition, and the boundary region is delineated by the RFC region-competitive classification mechanism. For the first time, a multi-source CCD imaging based fuzzy-rough sets strategy is attempted for continuous-casting slab surface defect inspection, allowing AI algorithms and powerful ROI delineation strategies to be applied automatically in the MV inspection field.

  17. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-04-22

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
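
    As a hedged illustration of subpixel motion extraction by cross-correlation with parabolic peak refinement (a generic alternative for comparison, not the paper's modified Taylor approximation or localization refinement algorithms):

      import numpy as np

      def subpixel_shift_1d(ref, cur):
          """Estimate the shift of `cur` relative to `ref` with sub-sample precision.

          Integer shift from the cross-correlation peak, refined by fitting a
          parabola through the peak and its two neighbours.
          """
          ref = ref - ref.mean()
          cur = cur - cur.mean()
          corr = np.correlate(cur, ref, mode="full")
          k = int(np.argmax(corr))
          if 0 < k < len(corr) - 1:
              y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
              delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)   # parabolic vertex offset
          else:
              delta = 0.0
          return (k - (len(ref) - 1)) + delta

      # Synthetic edge profile shifted by 2.3 samples between two frames.
      x = np.arange(200, dtype=float)
      ref = np.tanh((x - 100.0) / 5.0)
      cur = np.tanh((x - 102.3) / 5.0)
      print(subpixel_shift_1d(ref, cur))      # close to +2.3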

  18. Resolution Properties of a Calcium Tungstate (CaWO4) Screen Coupled to a CMOS Imaging Detector

    NASA Astrophysics Data System (ADS)

    Koukou, Vaia; Martini, Niki; Valais, Ioannis; Bakas, Athanasios; Kalyvas, Nektarios; Lavdas, Eleftherios; Fountos, George; Kandarakis, Ioannis; Michail, Christos

    2017-11-01

    The aim of the current work was to assess the resolution properties of a calcium tungstate (CaWO4) screen (screen coating thickness: 50.09 mg/cm2, actual thickness: 167.2 μm) coupled to a high resolution complementary metal oxide semiconductor (CMOS) digital imaging sensor. A 2.7 × 3.6 cm2 CaWO4 sample was extracted from an Agfa Curix universal screen and was coupled directly to the active area of the active pixel sensor (APS) CMOS sensor. Experiments were performed following the new IEC 62220-1-1:2015 International Standard, using an RQA-5 beam quality. Resolution was assessed in terms of the Modulation Transfer Function (MTF), using the slanted-edge method. The CaWO4/CMOS detector configuration was found to have a linear response in the exposure range under investigation. The final MTF was obtained by averaging the oversampled edge spread function (ESF), using custom-made software developed by our team, according to IEC 62220-1-1:2015. Considering the renewed interest in calcium tungstate for various applications, along with the resolution results of this work, CaWO4 could also be considered for use in X-ray imaging devices such as charge-coupled devices (CCDs) and CMOS sensors.
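
    A compact sketch of the slanted-edge idea reduced to one dimension: differentiate an (already oversampled) ESF to get the line spread function, window it, and take the Fourier transform magnitude. The synthetic edge and the 4x oversampling are assumptions for illustration, not the measured data:

      import numpy as np

      def mtf_from_esf(esf, oversample=4):
          """MTF from an oversampled edge spread function.

          The LSF is the derivative of the ESF; a Hanning window suppresses noise
          at the tails. Frequencies are returned in cycles per native pixel,
          assuming `oversample` samples per pixel.
          """
          lsf = np.gradient(np.asarray(esf, dtype=float))
          lsf *= np.hanning(len(lsf))
          mtf = np.abs(np.fft.rfft(lsf))
          mtf /= mtf[0]
          freq = np.fft.rfftfreq(len(lsf), d=1.0 / oversample)
          return freq, mtf

      # Synthetic ESF: a smooth edge with a width of ~1.5 native pixels, 4x oversampled.
      x = np.arange(-8, 8, 0.25)
      esf = 0.5 * (1 + np.tanh(x / 1.5))
      freq, mtf = mtf_from_esf(esf, oversample=4)
      print(freq[:5], mtf[:5])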

  19. Experimental research on thermal conductive fillers for CCD module in space borne optical remote sensor

    NASA Astrophysics Data System (ADS)

    Zeng, Yi; Han, Xue-bing; Yang, Dong-shang; Gui, Li-jia; Zhao, Xiao-xiang; Si, Fu-qi

    2016-03-01

    A space-borne differential optical absorption spectrometer is a high precision aerospace optical remote sensor. It obtains hyper-spectral, high spatial resolution radiation information by using a spectrometer with CCD (Charge Coupled Device) array detectors. Since several CCDs are used as the key detectors, the performance of the entire instrument is greatly affected by the working condition of the CCDs. The temperature of the CCD modules has a great impact on measurement accuracy and requires strict temperature control. The selection of the thermal conductive filler joining the CCD to the radiator is important in the CCD thermal design. In addition, because of the complex and compact structure, the anti-contamination requirements of the optical system must be taken into account, which places high demands on the selection of the conductive filler. In this paper, according to the structural characteristics of the CCD modules and the distribution of heat consumption, the thermal analysis tool I-DEAS/TMG is used to compute and simulate the temperature level of the CCD modules when filled with thermal grease and with a thermal pad, respectively. The temperature distribution of CCD heat dissipation in typical operating conditions is obtained. In addition, a heat balance test was carried out for the two kinds of thermal conductive fillers. The thermal control of the CCD was tested under various conditions, and the results were compared with the thermal analysis. The results show that there are some differences in thermal performance between the two thermal conductive fillers. Although both can meet the thermal performance requirements of the instrument, the choice should also take into account other conditions and requirements such as anti-contamination and insulation. The content and results of this paper provide a good reference for the thermal design of CCDs in aerospace optical payloads.

  20. Computation of dark frames in digital imagers

    NASA Astrophysics Data System (ADS)

    Widenhorn, Ralf; Rest, Armin; Blouke, Morley M.; Berry, Richard L.; Bodegom, Erik

    2007-02-01

    Dark current is caused by electrons that are thermally excited into the conduction band. These electrons are collected by the wells of the CCD and add a false signal to the chip. We present an algorithm that automatically corrects for dark current. It uses a calibration protocol to characterize the image sensor at different temperatures. For a given exposure time, the dark current of every pixel is characteristic of a specific temperature; the dark current of every pixel can therefore be used as an indicator of the temperature. Hot pixels have the highest signal-to-noise ratio and are the best temperature sensors. We use the dark current of several hundred hot pixels to sense the chip temperature and predict the dark current of all pixels on the chip. Dark current computation is not a new concept, but our approach is unique. Advantages of our method include applicability to poorly temperature-controlled camera systems and the possibility of ex post facto dark current correction.
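
    A simplified sketch of the hot-pixel idea described above, reduced to a single scale factor estimated from synthetic dark-current maps; the authors' calibration protocol is more detailed than this:

      import numpy as np

      rng = np.random.default_rng(2)

      # Calibration: per-pixel dark rate (e-/s) at a reference condition, represented
      # here by a synthetic map with a population of hot pixels.
      dark_rate_ref = rng.gamma(shape=0.5, scale=0.2, size=(256, 256))
      hot = np.argpartition(dark_rate_ref, -300, axis=None)[-300:]   # hottest 300 pixels

      def predict_dark_frame(science_dark_hot, dark_rate_ref, hot_idx, exptime):
          """Predict a full dark frame from the hot pixels of the frame to be corrected.

          The hot pixels act as on-chip thermometers: the median ratio of their
          observed dark signal to the calibrated rate gives one scale factor
          (absorbing the unknown temperature), applied to every pixel.
          """
          scale = np.median(science_dark_hot / (dark_rate_ref.ravel()[hot_idx] * exptime))
          return scale * dark_rate_ref * exptime

      # Simulate a warmer acquisition: true dark is 1.7x the reference rate, plus shot noise.
      exptime = 30.0
      true_dark = 1.7 * dark_rate_ref * exptime
      observed_hot = rng.poisson(true_dark.ravel()[hot]).astype(float)
      predicted = predict_dark_frame(observed_hot, dark_rate_ref, hot, exptime)
      print("recovered scale ~", np.median(predicted / (dark_rate_ref * exptime + 1e-12)))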

  1. Autonomous Mobile Platform for Research in Cooperative Robotics

    NASA Technical Reports Server (NTRS)

    Daemi, Ali; Pena, Edward; Ferguson, Paul

    1998-01-01

    This paper describes the design and development of a platform for research in cooperative mobile robotics. The structure and mechanics of the vehicles are based on R/C cars. The vehicle is rendered mobile by a DC motor and servo motor. The perception of the robot's environment is achieved using IR sensors and a central vision system. A laptop computer processes images from a CCD camera located above the testing area to determine the position of objects in sight. This information is sent to each robot via RF modem. Each robot is operated by a Motorola 68HC11E micro-controller, and all actions of the robots are realized through the connections of IR sensors, modem, and motors. The intelligent behavior of each robot is based on a hierarchical fuzzy-rule based approach.

  2. Sun glitter imaging analysis of submarine sand waves in HJ-1A/B satellite CCD images

    NASA Astrophysics Data System (ADS)

    Zhang, Huaguo; He, Xiekai; Yang, Kang; Fu, Bin; Guan, Weibing

    2014-11-01

    Submarine sand waves are a widespread bed-form in tidal environments. They induce current convergence and divergence that affect sea surface roughness and thus become visible in sun glitter images. Such sun glitter images have been employed for mapping sand wave topography. However, many factors affect sun glitter imaging of submarine sand waves, such as the imaging geometry and the dynamic environmental conditions. In this paper, several sun glitter images from HJ-1A/B over the Taiwan Banks are selected. These satellite sun glitter images are used to discuss sun glitter imaging characteristics under different sensor parameters and dynamic environmental conditions. To interpret the imaging characteristics, the sun glitter radiance is calculated and the spatial characteristics of the sand waves in different images are analyzed. A simulation model based on sun glitter radiative transfer is adopted to further verify the imaging analysis. Several conclusions are drawn from the study. First, the sun glitter radiance is mainly determined by the sensor view angle. Second, the current is another key factor for the sun glitter: an opposite current direction causes bright and dark stripes to exchange. Third, brightness reversal occurs at the critical angle. Therefore, when using sun glitter images for depth inversion, one is advised to take advantage of the image properties of sand waves and to pay attention to the key dynamic environmental conditions and to brightness reversal.

  3. The Mapping X-ray Fluorescence Spectrometer (MapX)

    NASA Astrophysics Data System (ADS)

    Sarrazin, P.; Blake, D. F.; Marchis, F.; Bristow, T.; Thompson, K.

    2017-12-01

    Many planetary surface processes leave traces of their actions as features in the size range 10s to 100s of microns. The Mapping X-ray Fluorescence Spectrometer (MapX) will provide elemental imaging at 100 micron spatial resolution, yielding elemental chemistry at a scale where many relict physical, chemical, or biological features can be imaged and interpreted in ancient rocks on planetary bodies and planetesimals. MapX is an arm-based instrument positioned on a rock or regolith with touch sensors. During an analysis, an X-ray source (tube or radioisotope) bombards the sample with X-rays or alpha-particles / gamma-rays, resulting in sample X-ray Fluorescence (XRF). X-rays emitted in the direction of an X-ray sensitive CCD imager pass through a 1:1 focusing lens (X-ray micro-pore Optic (MPO)) that projects a spatially resolved image of the X-rays onto the CCD. The CCD is operated in single photon counting mode so that the energies and positions of individual X-ray photons are recorded. In a single analysis, several thousand frames are both stored and processed in real-time. Higher level data products include single-element maps with a lateral spatial resolution of 100 microns and quantitative XRF spectra from ground- or instrument- selected Regions of Interest (ROI). XRF spectra from ROI are compared with known rock and mineral compositions to extrapolate the data to rock types and putative mineralogies. When applied to airless bodies and implemented with an appropriate radioisotope source for alpha-particle excitation, MapX will be able to analyze biogenic elements C, N, O, P, S, in addition to the cations of the rock-forming elements >Na, accessible with either X-ray or gamma-ray excitation. The MapX concept has been demonstrated with a series of lab-based prototypes and is currently under refinement and TRL maturation.
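
    To make the single-photon-counting step concrete, here is a hedged sketch (not the MapX flight software) of how isolated X-ray events could be extracted from bias-subtracted CCD frames and accumulated into per-element count maps; the threshold and the energy windows are hypothetical placeholders.

    ```python
    import numpy as np

    def extract_events(frame, threshold):
        """Find isolated single-pixel X-ray events in one CCD frame (photon-counting mode).
        Returns (row, col, energy_adu) for above-threshold pixels that are local maxima."""
        ys, xs = np.where(frame > threshold)
        events = []
        for y, x in zip(ys, xs):
            patch = frame[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            if frame[y, x] == patch.max():          # crude isolation test
                events.append((y, x, frame[y, x]))
        return events

    def accumulate_element_maps(frames, threshold, energy_windows, shape):
        """energy_windows: dict element -> (lo_adu, hi_adu). Returns one count map per element."""
        maps = {el: np.zeros(shape, dtype=np.int32) for el in energy_windows}
        for frame in frames:
            for y, x, e in extract_events(frame, threshold):
                for el, (lo, hi) in energy_windows.items():
                    if lo <= e < hi:
                        maps[el][y, x] += 1
        return maps
    ```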

  4. Phase shifting white light interferometry using colour CCD for optical metrology and bio-imaging applications

    NASA Astrophysics Data System (ADS)

    Upputuri, Paul Kumar; Pramanik, Manojit

    2018-02-01

    Phase shifting white light interferometry (PSWLI) has been widely used for optical metrology because of its precision, reliability, and versatility. White light interferometry using a monochrome CCD makes the measurement process slow for metrology applications. WLI integrated with a Red-Green-Blue (RGB) CCD camera is finding applications in both optical metrology and bio-imaging. Wavelength-dependent refractive index profiles of biological samples have been computed from colour white light interferograms. In recent years, whole-field refractive index profiles of red blood cells (RBCs), onion skin, fish cornea, etc. have been measured from RGB interferograms. In this paper, we discuss the bio-imaging applications of colour-CCD-based white light interferometry. The approach makes the measurement faster, easier, cost-effective, and even dynamic by using single-fringe analysis methods, which is attractive for industrial applications.
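
    For reference, the standard four-step phase-shifting relation is shown below as a minimal sketch, applied per colour channel of the RGB interferograms; the single-fringe analysis methods discussed in the paper are different and more elaborate.

    ```python
    import numpy as np

    def four_step_phase(i1, i2, i3, i4):
        """Wrapped phase from four interferograms with pi/2 phase steps.
        For I_k = A + B*cos(phi + k*pi/2), tan(phi) = (I4 - I2) / (I1 - I3)."""
        return np.arctan2(i4 - i2, i1 - i3)   # result still requires phase unwrapping
    ```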

  5. A High Fidelity Approach to Data Simulation for Space Situational Awareness Missions

    NASA Astrophysics Data System (ADS)

    Hagerty, S.; Ellis, H., Jr.

    2016-09-01

    Space Situational Awareness (SSA) is vital to maintaining our Space Superiority. A high fidelity, time-based simulation tool, PROXOR™ (Proximity Operations and Rendering), supports SSA by generating realistic mission scenarios including sensor frame data with corresponding truth. This is a unique and critical tool for supporting mission architecture studies, new capability (algorithm) development, current/future capability performance analysis, and mission performance prediction. PROXOR™ provides a flexible architecture for sensor and resident space object (RSO) orbital motion and attitude control that simulates SSA, rendezvous and proximity operations scenarios. The major elements of interest are based on the ability to accurately simulate all aspects of the RSO model, viewing geometry, imaging optics, sensor detector, and environmental conditions. These capabilities enhance the realism of mission scenario models and generated mission image data. As an input, PROXOR™ uses a library of 3-D satellite models containing 10+ satellites, including low-earth orbit (e.g., DMSP) and geostationary (e.g., Intelsat) spacecraft, where the spacecraft surface properties are those of actual materials and include Phong and Maxwell-Beard bidirectional reflectance distribution function (BRDF) coefficients for accurate radiometric modeling. We calculate the inertial attitude, the changing solar and Earth illumination angles of the satellite, and the viewing angles from the sensor as we propagate the RSO in its orbit. The synthetic satellite image is rendered at high resolution and aggregated to the focal plane resolution resulting in accurate radiometry even when the RSO is a point source. The sensor model includes optical effects from the imaging system [point spread function (PSF) includes aberrations, obscurations, support structures, defocus], detector effects (CCD blooming, left/right bias, fixed pattern noise, image persistence, shot noise, read noise, and quantization noise), and environmental effects (radiation hits with selectable angular distributions and 4-layer atmospheric turbulence model for ground based sensors). We have developed an accurate flash Light Detection and Ranging (LIDAR) model that supports reconstruction of 3-dimensional information on the RSO. PROXOR™ contains many important imaging effects such as intra-frame smear, realized by oversampling the image in time and capturing target motion and jitter during the integration time.
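
    As a generic illustration of the detector-effects stage (not PROXOR's proprietary model), the sketch below applies a simple CCD detection chain to an ideal photon image: quantum efficiency, shot noise, full-well clipping, read noise, and quantization. All parameter values are hypothetical, and effects such as blooming, bias, fixed-pattern noise, and persistence are omitted.

    ```python
    import numpy as np

    def detect(photon_image, qe=0.9, read_noise_e=10.0, full_well=100_000,
               gain_e_per_adu=2.0, bits=14, rng=np.random.default_rng(0)):
        """Apply a simplified CCD detection chain to an ideal photon image (photons/pixel)."""
        electrons = rng.poisson(photon_image * qe).astype(float)          # QE + shot noise
        electrons = np.minimum(electrons, full_well)                      # saturation (no blooming)
        electrons += rng.normal(0.0, read_noise_e, electrons.shape)       # read noise
        adu = np.clip(np.round(electrons / gain_e_per_adu), 0, 2**bits - 1)  # quantization
        return adu
    ```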

  6. Profile fitting in crowded astronomical images

    NASA Astrophysics Data System (ADS)

    Manish, Raja

    Around 18,000 known objects currently populate near-Earth space. These comprise active space assets as well as space debris objects. The tracking and cataloging of such objects relies on observations, most of which are ground based. Because of the great distance to the objects, only non-resolved object images can be obtained from the observations. Optical systems consist of telescope optics and a detector; nowadays, usually CCD detectors are used. The information to be extracted from the frames is each object's astrometric position. To obtain it, the center of the object's image on the CCD frame has to be found. However, the observation frames read out of the detector are subject to noise from three different sources: celestial background sources, the object signal itself, and the sensor noise. The noise statistics are usually modeled as Gaussian or Poisson distributed, or their combined distribution. To achieve near-real-time processing, computationally fast and reliable methods for this so-called centroiding are desired; analytical methods are preferred over numerical ones of comparable accuracy. In this work, an analytic method for centroiding is investigated and compared to numerical methods. Although the work focuses mainly on astronomical images, the same principle could be applied to non-celestial images containing similar data. The method is based on minimizing the weighted least-squares (LS) error between the observed data and a theoretical model of point sources in a novel yet simple way. Synthetic image frames have been simulated, and the newly developed method is tested in both crowded and non-crowded fields, where the former needs additional image-handling procedures to separate closely packed objects. Subsequent analysis of real celestial images corroborates the effectiveness of the approach.
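
    For context, the simplest analytic centroid estimator is the intensity-weighted first moment of a background-subtracted cutout, sketched below; the thesis's weighted least-squares formulation against a point-source model is more elaborate, so this is illustrative only.

    ```python
    import numpy as np

    def moment_centroid(cutout, background=None):
        """Intensity-weighted first-moment centroid of a star cutout.
        Returns (row_c, col_c) in pixel coordinates of the cutout."""
        data = cutout.astype(float)
        if background is None:
            background = np.median(data)           # crude local background estimate
        w = np.clip(data - background, 0.0, None)  # weights: background-subtracted flux
        total = w.sum()
        if total <= 0:
            raise ValueError("no flux above background in cutout")
        rows, cols = np.indices(w.shape)
        return (rows * w).sum() / total, (cols * w).sum() / total
    ```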

  7. ManPortable and UGV LIVAR: advances in sensor suite integration bring improvements to target observation and identification for the electronic battlefield

    NASA Astrophysics Data System (ADS)

    Lynam, Jeff R.

    2001-09-01

    A more highly integrated, electro-optical sensor suite using Laser Illuminated Viewing and Ranging (LIVAR) techniques is being developed under the Army Advanced Concept Technology II (ACT-II) program for enhanced manportable target surveillance and identification. The ManPortable LIVAR system currently in development employs a wide array of sensor technologies that provide the foot-bound soldier and UGV significant advantages in lightweight, fieldable target location, ranging, and imaging. The unit incorporates a wide field-of-view, 5° x 3°, uncooled LWIR passive sensor for primary target location. Laser range finding and active illumination are performed with a triggered, flash-lamp-pumped, eyesafe micro-laser operating in the 1.5 micron region, used in conjunction with a range-gated, electron-bombarded CCD (EBCCD) digital camera to image the target in a narrower 0.3° field of view. Target range is acquired with the integrated LRF, and the target position is calculated using data from other onboard devices providing GPS coordinates, tilt, bank, and corrected magnetic azimuth. Range-gate timing and coordinated receiver-optics focus control allow target imaging operations to be optimized. The onboard control electronics provide power-efficient system operation for extended field use from the internal rechargeable battery packs. Image data storage, transmission, and processing capabilities are also being incorporated to provide the best all-around support for the electronic battlefield in this type of system. The paper describes the flash laser illumination technology, the EBCCD camera technology with the flash laser detection system, and image resolution improvement through frame averaging.

  8. Super-resolution for scanning light stimulation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bitzer, L. A.; Neumann, K.; Benson, N., E-mail: niels.benson@uni-due.de

    Super-resolution (SR) is a technique used in digital image processing to overcome the resolution limitation of imaging systems. In this process, a single high resolution image is reconstructed from multiple low resolution images. SR is commonly used for CCD and CMOS (Complementary Metal-Oxide-Semiconductor) sensor images, as well as for medical applications, e.g., magnetic resonance imaging. Here, we demonstrate that super-resolution can be applied with scanning light stimulation (LS) systems, which are commonly used to obtain space-resolved electro-optical parameters of a sample. For our purposes, the Projection Onto Convex Sets (POCS) algorithm was chosen and modified to suit the needs of LS systems. To demonstrate the SR adaption, an Optical Beam Induced Current (OBIC) LS system was used. The POCS algorithm was optimized by means of OBIC short circuit current measurements on a multicrystalline solar cell, resulting in a mean square error reduction of up to 61% and improved image quality.
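
    As a rough illustration of multi-frame SR (not the authors' modified POCS), the sketch below implements a simple iterative back-projection scheme that fuses several shifted low-resolution frames into one high-resolution estimate, assuming known integer shifts in high-resolution pixel units.

    ```python
    import numpy as np

    def downsample(hr, factor):
        h, w = hr.shape
        return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def super_resolve(lr_images, shifts, factor, n_iter=30, step=1.0):
        """Iterative back-projection SR sketch.
        lr_images: list of low-res frames; shifts: integer (dy, dx) offsets per frame."""
        hr = np.kron(lr_images[0].astype(float), np.ones((factor, factor)))  # initial guess
        for _ in range(n_iter):
            for lr, (dy, dx) in zip(lr_images, shifts):
                sim = downsample(np.roll(hr, (-dy, -dx), axis=(0, 1)), factor)
                err = lr - sim                                   # residual in LR space
                upd = np.kron(err, np.ones((factor, factor))) / len(lr_images)
                hr += step * np.roll(upd, (dy, dx), axis=(0, 1)) # back-project residual
        return hr
    ```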

  9. Stereo imaging velocimetry for microgravity applications

    NASA Technical Reports Server (NTRS)

    Miller, Brian B.; Meyer, Maryjo B.; Bethea, Mark D.

    1994-01-01

    Stereo imaging velocimetry is the quantitative measurement of three-dimensional flow fields using two sensors recording data from different vantage points. The system described in this paper, under development at NASA Lewis Research Center in Cleveland, Ohio, uses two CCD cameras placed perpendicular to one another, laser disk recorders, an image processing substation, and a 586-based computer to record data at standard NTSC video rates (30 Hertz) and reduce it offline. The flow itself is marked with seed particles, hence the fluid must be transparent. The velocimeter tracks the motion of the particles, and from these we deduce a multipoint (500 or more), quantitative map of the flow. Conceptually, the software portion of the velocimeter can be divided into distinct modules. These modules are: camera calibration, particle finding (image segmentation) and centroid location, particle overlap decomposition, particle tracking, and stereo matching. We discuss our approach to each module, and give our currently achieved speed and accuracy for each where available.
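
    As a toy illustration of how two perpendicular, calibrated views can be combined into 3-D particle positions (a stand-in for the stereo-matching module, not the NASA Lewis implementation), the sketch below matches particles on their shared coordinate; real systems use full camera calibration and tracking.

    ```python
    def combine_views(view_a, view_b, tol=0.5):
        """view_a: list of (x, y) particle positions from the camera looking along z.
        view_b: list of (y, z) positions from the perpendicular camera looking along x.
        Particles are matched on their common y coordinate within tol (same units)."""
        points, used = [], set()
        for x, ya in view_a:
            best, best_d = None, tol
            for j, (yb, z) in enumerate(view_b):
                d = abs(ya - yb)
                if j not in used and d < best_d:
                    best, best_d = j, d
            if best is not None:
                used.add(best)
                yb, z = view_b[best]
                points.append((x, 0.5 * (ya + yb), z))   # average the shared coordinate
        return points
    ```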

  10. Development of InSb charge-coupled infrared imaging devices: Linear imager

    NASA Technical Reports Server (NTRS)

    Phillips, J. D.

    1976-01-01

    The following results were accomplished in the development of charge coupled infrared imaging devices: (1) a four-phase overlapping gate with 9 transfers (2-bits) and 1.0-mil gate lengths was successfully operated, (2) the measured transfer efficiency of 0.975 for this device is in excellent agreement with predictions for the reduced gate length device, (3) mask revisions of the channel stop metal on the 8582 mask have been carried out with the result being a large increase in the dc yield of the tested devices, (4) partial optical sensitivity to chopped blackbody radiation was observed for an 8582 9-bit imager, (5) analytical consideration of the modulation transfer function degradation caused by transfer inefficiency in the CCD registers was presented, and (6) for larger array lengths or for the insertion of isolated bits between sensors, improvements in InSb fabrication technology with corresponding decrease in the interface state density are required.
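
    For item (5), the commonly quoted textbook expression for MTF degradation due to charge-transfer inefficiency is reproduced below for reference; it is not necessarily the exact analysis presented in this report.

    ```latex
    % MTF loss after N charge transfers with per-transfer inefficiency \varepsilon,
    % at spatial frequency f, where f_N is the Nyquist frequency of the register:
    \mathrm{MTF}_{\mathrm{CTI}}(f) = \exp\!\left\{-N\,\varepsilon\left[1-\cos\!\left(\frac{\pi f}{f_N}\right)\right]\right\}
    % At Nyquist (f = f_N) this reduces to \exp(-2 N \varepsilon).
    ```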

  11. Hyperspectral imaging with deformable gratings fabricated with metal-elastomer nanocomposites

    NASA Astrophysics Data System (ADS)

    Potenza, Marco A. C.; Nazzari, Daniele; Cremonesi, Llorenç; Denti, Ilaria; Milani, Paolo

    2017-11-01

    We report the fabrication and characterization of a simple and compact hyperspectral imaging setup based on a stretchable diffraction grating made with a metal-polymer nanocomposite. The nanocomposite is produced by implanting Ag clusters in a poly(dimethylsiloxane) film by supersonic cluster beam implantation. The deformable grating has curved grooves and is laid on a concave cylindrical surface, thus providing optical power in two orthogonal directions. Both diffractive and optical powers are obtained by reflection, thus realizing a diffractive-catoptric optical device. This makes it easier to minimize aberrations. We prove that, despite the extended spectral range and the simplified optical scheme, it is actually possible to work with a traditional CCD sensor and achieve good spectral and spatial resolution.

  12. Driving techniques for high frame rate CCD camera

    NASA Astrophysics Data System (ADS)

    Guo, Weiqiang; Jin, Longxu; Xiong, Jingwu

    2008-03-01

    This paper describes a high-frame-rate CCD camera capable of operating at 100 frames/s. The camera utilizes the Kodak KAI-0340, an interline-transfer CCD with 640 (vertical) x 480 (horizontal) pixels. Two output ports are used to read out the CCD data at pixel rates approaching 30 MHz. Because the vertical charge-transfer registers of an interline-transfer CCD are not perfectly opaque, undesired image artifacts such as random white spots and smear can be generated in the registers. To increase the frame rate, a speed-up structure is incorporated inside the KAI-0340, which makes it vulnerable to a vertical-stripe artifact. These phenomena can severely impair image quality. To solve these problems, several electronic methods of eliminating the artifacts are adopted. A special clocking mode dumps the unwanted charge quickly, and fast readout of the smear-free image follows immediately. An amplifier is used to sense and correct the delay mismatch between the dual-phase vertical clock pulses, bringing the transition edges close to coincidence so that the vertical stripes disappear. Results obtained with the CCD camera are shown.

  13. Back-illuminated large area frame transfer CCDs for space-based hyper-spectral imaging applications

    NASA Astrophysics Data System (ADS)

    Philbrick, Robert H.; Gilmore, Angelo S.; Schrein, Ronald J.

    2016-07-01

    Standard offerings of large-area, back-illuminated full-frame CCD sensors are available from multiple suppliers, and they continue to be commonly deployed in ground- and space-based applications. By comparison, the availability of large-area frame-transfer CCDs is sparse, with the accompanying 2x increase in die area no doubt being a contributing factor. Modern back-illuminated CCDs yield very high quantum efficiency in the 290 to 400 nm band, a wavelength region of great interest for space-based instruments studying atmospheric phenomena. In fast-framing (e.g., 10-20 Hz) space-based applications such as hyper-spectral imaging, the use of a mechanical shutter to block incident photons during readout can prove costly and lower instrument reliability. The emergence of large-area, all-digital visible CMOS sensors with integrate-while-read functionality offers an alternative to CCDs; but, even after factoring in the reduced complexity and cost of support electronics, the present cost of implementing such novel sensors is prohibitive for cost-constrained missions. Hence, there continues to be a niche set of applications where large-area, back-illuminated frame-transfer CCDs with high UV quantum efficiency, high frame rate, high full well, and low noise provide an advantageous solution. To address this need, a family of large-area frame-transfer CCDs has been developed that includes 2048 (columns) x 256 (rows) (FT4), 2048 x 512 (FT5), and 2048 x 1024 (FT6) full-frame-transfer CCDs, and a 2048 x 1024 (FT7) split-frame-transfer CCD. Each wafer contains 4 FT4, 2 FT5, 2 FT6, and 2 FT7 die. The designs have undergone radiation and accelerated-life qualification, and the electro-optical performance of these CCDs over the wavelength range 290 to 900 nm is discussed.

  14. Stare and chase of space debris targets using real-time derived pointing data

    NASA Astrophysics Data System (ADS)

    Steindorfer, Michael A.; Kirchner, Georg; Koidl, Franz; Wang, Peiyuan; Antón, Alfredo; Fernández Sánchez, Jaime; Merz, Klaus

    2017-09-01

    We successfully demonstrate Stare & Chase: Space debris laser ranging to uncooperative targets has been achieved without a priori knowledge of any orbital information. An analog astronomy CCD with a standard objective, piggyback mounted on our 50 cm Graz SLR receive telescope, 'stares' into the sky in a fixed direction. The CCD records the stellar background within a field of view of approx. 7°. From the stellar X/Y positions on the sensor a plate solving algorithm determines the pointing data of the image center with an accuracy of approx. 15 arc seconds. If a sunlit target passes through this field of view, its equatorial coordinates are calculated, stored and a Consolidated Prediction Format (CPF) file is created in near real time. The derived CPF data is used to start laser ranging ('chase' the object) within the same pass to retrieve highly accurate distance information. A comparison of Stare & Chase CPFs with standard TLE predictions shows the possibilities and limits of this method.
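
    To illustrate the final step of plate solving (not the Graz station's software), the sketch below inverts the gnomonic (tangent-plane) projection to convert measured standard coordinates back to equatorial coordinates about the solved image centre; obtaining the standard coordinates themselves requires the linear plate solution against the star catalogue.

    ```python
    import numpy as np

    def standard_to_radec(xi, eta, ra0, dec0):
        """Invert the gnomonic (tangent-plane) projection.
        xi, eta: standard coordinates in radians; ra0, dec0: tangent point (image centre)."""
        denom = np.cos(dec0) - eta * np.sin(dec0)
        ra = ra0 + np.arctan2(xi, denom)
        dec = np.arctan2(np.sin(dec0) + eta * np.cos(dec0),
                         np.sqrt(xi**2 + denom**2))
        return ra, dec
    ```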

  15. A study of astrometric distortions due to “tree rings” in CCD sensors using LSST Photon Simulator

    DOE PAGES

    Beamer, Benjamin; Nomerotski, Andrei; Tsybychev, Dmitri

    2015-05-22

    Imperfections in the production process of thick CCDs lead to circularly symmetric dopant concentration variations, which in turn produce electric fields transverse to the surface of the fully depleted CCD that displace the photogenerated charges. We use PhoSim, a Monte Carlo photon simulator, to explore and examine the likely impacts these dopant concentration variations will have on astrometric measurements in LSST. The scale and behavior of both the astrometric shifts imparted to point sources and the intensity variations in flat field images that result from these doping imperfections are similar to those previously observed in Dark Energy Camera CCDs, giving initial confirmation of PhoSim's model for these effects. In addition, organized shape distortions were observed as a result of the symmetric nature of these dopant variations, causing nominally round sources to be imparted with a measurable ellipticity either aligned with or transverse to the radial direction of this dopant variation pattern.

  16. MSTI-3 sensor package optical design

    NASA Astrophysics Data System (ADS)

    Horton, Richard F.; Baker, William G.; Griggs, Michael; Nguyen, Van; Baker, H. Vernon

    1995-06-01

    The MSTI-3 sensor package is a three-band imaging telescope for military and dual-use sensing missions. The MSTI-3 mission is one of the Air Force Phillips Laboratory's Pegasus-launched space missions, the third in the series of state-of-the-art lightweight sensors on low-cost satellites. The satellite is planned for launch into a 425 km orbit in late 1995. The MSTI-3 satellite is configured with a down-looking two-axis gimbal and gimbal mirror. The gimbal mirror is approximately 13 cm by 29 cm and allows a field of regard of approximately 100 degrees by 180 degrees. The optical train uses several novel optical features to allow for compactness and light weight. A 105 mm Ritchey-Chretien Cassegrain imaging system with a CaF2 dome astigmatism corrector is followed by a CaF2 beamsplitter cube assembly at the system's first focus. The dichroic beamsplitter cube assembly separates the light into a visible channel and two IR channels of approximately 2.5 to 3.3 (SWIR) and 3.5 to 4.5 (MWIR) micron wavelength bands. The two IR imaging channels each consist of a unity-power re-imaging lens cluster, a cooled seven-position filter wheel, a cooled Lyot stop, and an Amber 256 x 256 InSb array camera. The visible channel uses a unity-power re-imaging system prior to a linear variable filter with a Sony CCD array, which allows multispectral imaging capability in the 0.5 to 0.8 micron region. The telescope field of view is 1.4 degrees square.

  17. Attitude measurement: Principles and sensors

    NASA Technical Reports Server (NTRS)

    Duchon, P.; Vermande, M. P.

    1981-01-01

    Tools used in the measurement of satellite attitude are described. Attention is given to the elements that characterize an attitude sensor, the references employed (stars, moon, Sun, Earth, magnetic fields, etc.), and the detectors (optical, magnetic, and inertial). Several examples of attitude sensors are described, including sun sensors, star sensors, earth sensors, triaxial magnetometers, and gyrometers. Finally, sensor combinations that make it possible to determine a complete attitude are considered; the SPOT attitude measurement system and a combined CCD star sensor-gyrometer system are discussed.

  18. Automatic Generation of Wide Dynamic Range Image without Pseudo-Edge Using Integration of Multi-Steps Exposure Images

    NASA Astrophysics Data System (ADS)

    Migiyama, Go; Sugimura, Atsuhiko; Osa, Atsushi; Miike, Hidetoshi

    Digital cameras have been advancing rapidly in recent years. However, the captured image differs from the scene as perceived by the naked eye. Images of scenes with a wide dynamic range suffer from blown-out highlights and crushed blacks, problems that rarely occur in human vision; this is a major cause of the difference between the captured image and the perceived scene. Blown-out highlights and crushed blacks are caused by the difference in dynamic range between the image sensor installed in a digital camera, such as a CCD or CMOS sensor, and the human visual system: the dynamic range of the captured image is narrower than that of the perceived scene. To solve this problem, we propose an automatic method that decides an effective exposure range from the superposition of edges and integrates multi-step exposure images accordingly. In addition, we suppress pseudo-edges using a process that blends exposure values. As a result, we obtain a pseudo wide dynamic range image automatically.
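
    As a generic illustration of blending multi-step exposures (not the authors' edge-superposition method), the sketch below fuses a grayscale exposure stack with per-pixel "well-exposedness" weights that favour mid-tones; the weighting function and sigma value are assumptions.

    ```python
    import numpy as np

    def fuse_exposures(images, sigma=0.2):
        """Blend multi-step exposure images (grayscale, values in [0, 1]) into a single
        pseudo wide-dynamic-range image using per-pixel well-exposedness weights."""
        stack = np.stack([im.astype(float) for im in images])          # (n, H, W)
        weights = np.exp(-((stack - 0.5) ** 2) / (2.0 * sigma ** 2))   # favour mid-tones
        weights /= weights.sum(axis=0) + 1e-12                         # normalize per pixel
        return (weights * stack).sum(axis=0)
    ```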

  19. Modular Scanning Confocal Microscope with Digital Image Processing.

    PubMed

    Ye, Xianjun; McCluskey, Matthew D

    2016-01-01

    In conventional confocal microscopy, a physical pinhole is placed at the image plane prior to the detector to limit the observation volume. In this work, we present a modular design of a scanning confocal microscope which uses a CCD camera to replace the physical pinhole for materials science applications. Experimental scans were performed on a microscope resolution target, a semiconductor chip carrier, and a piece of etched silicon wafer. The data collected by the CCD were processed to yield images of the specimen. By selecting effective pixels in the recorded CCD images, a virtual pinhole is created. By analyzing the image moments of the imaging data, a lateral resolution enhancement is achieved by using a 20 × / NA = 0.4 microscope objective at 532 nm laser wavelength.
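
    To illustrate the virtual-pinhole idea (a sketch under simple assumptions, not the authors' exact pipeline), the code below sums only the CCD pixels within a chosen radius of the spot centroid to form the confocal signal for one scan point; the radius plays the role of the pinhole diameter.

    ```python
    import numpy as np

    def virtual_pinhole_signal(ccd_frame, radius=3):
        """Confocal 'virtual pinhole': keep only CCD pixels within `radius` of the spot
        centroid and sum them to form one scan-point intensity value."""
        frame = ccd_frame.astype(float)
        total = frame.sum()
        if total <= 0:
            return 0.0
        rows, cols = np.indices(frame.shape)
        rc = (rows * frame).sum() / total               # spot centroid (row)
        cc = (cols * frame).sum() / total               # spot centroid (col)
        mask = (rows - rc) ** 2 + (cols - cc) ** 2 <= radius ** 2
        return frame[mask].sum()
    ```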

  20. An improved arterial pulsation measurement system based on optical triangulation and its application in the traditional Chinese medicine

    NASA Astrophysics Data System (ADS)

    Wu, Jih-Huah; Lee, Wen-Li; Lee, Yun-Parn; Lin, Ching-Huang; Chiou, Ji-Yi; Tai, Chuan-Fu; Jiang, Joe-Air

    2011-08-01

    An improved arterial pulsation measurement (APM) system that uses three LED light sources and a CCD image sensor to measure arterial pulse waveforms is presented. The relative variations of the pulses at three measurement points near the wrist joint can be determined by the APM system simultaneously. The height of the arterial pulsations measured by the APM system is resolved to better than 2 μm. These pulsations contain useful information that can be used as diagnostic references in traditional Chinese medicine (TCM) in the future.

  1. Systems and methods for optically measuring properties of hydrocarbon fuel gases

    DOEpatents

    Adler-Golden, S.; Bernstein, L.S.; Bien, F.; Gersh, M.E.; Goldstein, N.

    1998-10-13

    A system and method for optical interrogation and measurement of a hydrocarbon fuel gas includes a light source generating light at near-visible wavelengths. A cell containing the gas is optically coupled to the light source, and the light is partially transmitted by the gas sample. A spectrometer disperses the transmitted light and captures an image thereof. The image is captured by a low-cost silicon-based two-dimensional CCD array. The captured spectral image is processed by electronics to determine the energy or BTU content and the composition of the gas. The innovative optical approach provides a relatively inexpensive, durable, maintenance-free sensor and method which is reliable in the field and relatively simple to calibrate. In view of the above, accurate monitoring is possible at a plurality of locations along the distribution chain, leading to more efficient distribution. 14 figs.

  2. Systems and methods for optically measuring properties of hydrocarbon fuel gases

    DOEpatents

    Adler-Golden, Steven; Bernstein, Lawrence S.; Bien, Fritz; Gersh, Michael E.; Goldstein, Neil

    1998-10-13

    A system and method for optical interrogation and measurement of a hydrocarbon fuel gas includes a light source generating light at near-visible wavelengths. A cell containing the gas is optically coupled to the light source, and the light is partially transmitted by the gas sample. A spectrometer disperses the transmitted light and captures an image thereof. The image is captured by a low-cost silicon-based two-dimensional CCD array. The captured spectral image is processed by electronics to determine the energy or BTU content and the composition of the gas. The innovative optical approach provides a relatively inexpensive, durable, maintenance-free sensor and method which is reliable in the field and relatively simple to calibrate. In view of the above, accurate monitoring is possible at a plurality of locations along the distribution chain, leading to more efficient distribution.

  3. Athena Microscopic Imager investigation

    NASA Astrophysics Data System (ADS)

    Herkenhoff, K. E.; Squyres, S. W.; Bell, J. F.; Maki, J. N.; Arneson, H. M.; Bertelsen, P.; Brown, D. I.; Collins, S. A.; Dingizian, A.; Elliott, S. T.; Goetz, W.; Hagerott, E. C.; Hayes, A. G.; Johnson, M. J.; Kirk, R. L.; McLennan, S.; Morris, R. V.; Scherr, L. M.; Schwochert, M. A.; Shiraishi, L. R.; Smith, G. H.; Soderblom, L. A.; Sohl-Dickstein, J. N.; Wadsworth, M. V.

    2003-11-01

    The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI). The MI is a fixed-focus camera mounted on the end of an extendable instrument arm, the Instrument Deployment Device (IDD). The MI was designed to acquire images at a spatial resolution of 30 microns/pixel over a broad spectral range (400-700 nm). The MI uses the same electronics design as the other MER cameras but has optics that yield a field of view of 31 × 31 mm across a 1024 × 1024 pixel CCD image. The MI acquires images using only solar or skylight illumination of the target surface. A contact sensor is used to place the MI slightly closer to the target surface than its best focus distance (about 66 mm), allowing concave surfaces to be imaged in good focus. Coarse focusing (~2 mm precision) is achieved by moving the IDD away from a rock target after the contact sensor has been activated. The MI optics are protected from the Martian environment by a retractable dust cover. The dust cover includes a Kapton window that is tinted orange to restrict the spectral bandpass to 500-700 nm, allowing color information to be obtained by taking images with the dust cover open and closed. MI data will be used to place other MER instrument data in context and to aid in petrologic and geologic interpretations of rocks and soils on Mars.

  4. CFCCD Manual | CTIO

    Science.gov Websites

    Website record: CTIO manual for CCD imaging; sections cover CCD gain and readout, CCD scales at the various telescope foci, and an appendix on filters for CCD imaging.

  5. A configurable distributed high-performance computing framework for satellite's TDI-CCD imaging simulation

    NASA Astrophysics Data System (ADS)

    Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang

    2010-11-01

    This paper presents a configurable distributed high-performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to adapt multiple algorithms, helping to decrease the simulation time at low expense. Imaging simulation for a TDI-CCD mounted on a satellite contains four processes: 1) atmosphere-induced degradation, 2) optical-system-induced degradation, 3) degradation and re-sampling in the TDI-CCD electronics, and 4) data integration. Processes 1) to 3) use diverse data-intensive algorithms such as FFT, convolution, and Lagrange interpolation, which require powerful CPUs. Even with an Intel Xeon X5550 processor, the regular serial processing method takes more than 30 hours for a simulation whose resulting image size is 1500 x 1462. A literature study found no mature distributed HPC framework in this field. Here we developed a distributed computing framework for TDI-CCD imaging simulation, which is based on WCF[1], uses a Client/Server (C/S) architecture, and invokes the free CPU resources in the LAN. The server pushes the tasks of processes 1) to 3) to the free computing capacity, ultimately providing HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, this framework reduced the simulation time by about 74%. Adding more asymmetric nodes to the computing network decreases the time further. In conclusion, this framework can provide essentially unlimited computation capacity provided that the network and the task-management server are affordable, and it offers a new HPC solution for TDI-CCD imaging simulation and similar applications.
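
    The paper's framework is built on WCF over a LAN; purely as an illustration of the split-dispatch-reassemble pattern, the sketch below uses Python multiprocessing on one machine, with a placeholder box-blur standing in for the per-tile degradation chain (strip boundaries are not padded, so edges would need extra handling in a real pipeline).

    ```python
    from multiprocessing import Pool
    import numpy as np

    def degrade_tile(tile):
        """Placeholder for the per-tile simulation chain (atmosphere, optics, TDI-CCD).
        Here a 3x3 box blur stands in for the PSF convolution."""
        k = 3
        padded = np.pad(tile, k // 2, mode="edge")
        out = np.zeros_like(tile, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + tile.shape[0], dx:dx + tile.shape[1]]
        return out / (k * k)

    def simulate_distributed(image, n_workers=4):
        """Split the scene into horizontal strips, process them in parallel, reassemble."""
        strips = np.array_split(image, n_workers, axis=0)
        with Pool(n_workers) as pool:              # call from under `if __name__ == "__main__":`
            results = pool.map(degrade_tile, strips)
        return np.vstack(results)
    ```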

  6. Parallel Group and Sunspot Counts from SDO/HMI and AAVSO Visual Observers (Abstract)

    NASA Astrophysics Data System (ADS)

    Howe, R.; Alvestad, J.

    2015-06-01

    (Abstract only) Creating group and sunspot counts from the SDO/HMI detector on the Solar Dynamics Observatory (SDO) satellite requires software that calculates sunspots from a “white light” intensity-gram (CCD image) and group counts from a filtered CCD magneto-gram. Images from the satellite are obtained from http://jsoc.stanford.edu/data/hmi/images/latest/ Together these two sets of images can be used to estimate the Wolf number as W = (10g + s), which is used to calculate the American Relative index. AAVSO now has approximately two years of group and sunspot counts in the SunEntry database, submitted as SDOH observer Jan Alvestad. It is important that we compare these satellite CCD image data with our visual observers' daily submissions to determine whether the SDO/HMI data should be included in calculating the American Relative index. These satellite data are continuous observations with excellent seeing, in contrast to “snapshot” Earth-based observations with mixed seeing. The SDO/HMI group and sunspot counts could be considered unbiased, except that they show a non-normal statistical distribution when compared to the overall visual observations, which show a Poisson distribution. One challenge that should be addressed by AAVSO using these SDO/HMI data is the splitting of groups and the derivation of group properties from the magneto-grams. The filtered CCD detector that creates the magneto-grams is not something our visual observers can relate to, unless they were to take CCD images in H-alpha and/or the calcium spectrum line. So, questions remain as to how these satellite CCD image counts can be integrated into the overall American Relative index.

  7. Development of a CCD array as an imaging detector for advanced X-ray astrophysics facilities

    NASA Technical Reports Server (NTRS)

    Schwartz, D. A.

    1981-01-01

    The development of a charge coupled device (CCD) X-ray imager for a large aperture, high angular resolution X-ray telescope is discussed. Existing CCDs were surveyed and three candidate concepts were identified. An electronic camera control and computer interface, including software to drive a Fairchild 211 CCD, is described. In addition a vacuum mounting and cooling system is discussed. Performance data for the various components are given.

  8. Method for implementation of back-illuminated CMOS or CCD imagers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor)

    2008-01-01

    A method for implementation of back-illuminated CMOS or CCD imagers. An oxide layer buried between silicon wafer and device silicon is provided. The oxide layer forms a passivation layer in the imaging structure. A device layer and interlayer dielectric are formed, and the silicon wafer is removed to expose the oxide layer.

  9. A Star Image Extractor for Small Satellites

    NASA Astrophysics Data System (ADS)

    Yamada, Yoshiyuki; Yamauchi, Masahiro; Gouda, Naoteru; Kobayashi, Yukiyasu; Tsujimoto, Takuji; Yano, Taihei; Suganuma, Masahiro; Nakasuka, Shinichi; Sako, Nobutada; Inamori, Takaya

    We have developed a Star Image Extractor (SIE) which works as an on-board real-time image processor. It is a logic circuit written on an FPGA (Field Programmable Gate Array) device. It detects and extracts only object data from the raw image data. The SIE will be used on the Nano-JASMINE 1) satellite. Nano-JASMINE is a small astrometry satellite that will observe objects in our galaxy. It will be launched in 2010 with a two-year mission period. Nano-JASMINE observes objects in the TDI (Time Delayed Integration) observation mode. TDI is one of the operation modes of a CCD detector: data are obtained by rotating the imaging system, including the CCD, at a rate synchronized with the vertical charge transfer of the CCD. The obtained image data are sent through the SIE to the mission controller.

  10. Modular Scanning Confocal Microscope with Digital Image Processing

    PubMed Central

    McCluskey, Matthew D.

    2016-01-01

    In conventional confocal microscopy, a physical pinhole is placed at the image plane prior to the detector to limit the observation volume. In this work, we present a modular design of a scanning confocal microscope which uses a CCD camera to replace the physical pinhole for materials science applications. Experimental scans were performed on a microscope resolution target, a semiconductor chip carrier, and a piece of etched silicon wafer. The data collected by the CCD were processed to yield images of the specimen. By selecting effective pixels in the recorded CCD images, a virtual pinhole is created. By analyzing the image moments of the imaging data, a lateral resolution enhancement is achieved by using a 20 × / NA = 0.4 microscope objective at 532 nm laser wavelength. PMID:27829052

  11. Curved CCD detector devices and arrays for multispectral astrophysical applications and terrestrial stereo panoramic cameras

    NASA Astrophysics Data System (ADS)

    Swain, Pradyumna; Mark, David

    2004-09-01

    The emergence of curved CCD detectors as individual devices or as contoured mosaics assembled to match the curved focal planes of astronomical telescopes and terrestrial stereo panoramic cameras represents a major optical design advancement that greatly enhances the scientific potential of such instruments. In altering the primary detection surface within the telescope's optical instrumentation system from flat to curved, and conforming the applied CCD's shape precisely to the contour of the telescope's curved focal plane, a major increase in the amount of transmittable light at various wavelengths through the system is achieved. This in turn enables multi-spectral, ultra-sensitive imaging with the much greater spatial resolution necessary for large and very large telescope applications, including those involving infrared image acquisition and spectroscopy, conducted over very wide fields of view. For earth-based and space-borne optical telescopes, the advent of curved CCDs as the principal detectors provides a simplification of the telescope's adjoining optics, reducing the number of optical elements and the occurrence of optical aberrations associated with large corrective optics used to conform to flat detectors. New astronomical experiments may be devised using curved CCDs, in conjunction with large format cameras and curved mosaics, including three-dimensional imaging spectroscopy conducted over multiple wavelengths simultaneously, wide-field real-time stereoscopic tracking of remote objects within the solar system at high resolution, and deep-field survey mapping of distant objects such as galaxies with much greater multi-band spatial precision over larger sky regions. Terrestrial stereo panoramic cameras equipped with arrays of curved CCDs joined with associative wide-field optics will require less optical glass and no mechanically moving parts to maintain continuous proper stereo convergence over wider perspective viewing fields than their flat-CCD counterparts, lightening the cameras and enabling faster scanning and 3D integration of objects moving within a planetary terrain environment. Preliminary experiments conducted at the Sarnoff Corporation indicate the feasibility of curved CCD imagers with acceptable electro-optic integrity. Currently, we are in the process of evaluating the electro-optic performance of a curved wafer-scale CCD imager. Detailed ray-trace modeling and experimental electro-optical performance data obtained from the curved imager will be presented at the conference.

  12. CTK-II & RTK: The CCD-cameras operated at the auxiliary telescopes of the University Observatory Jena

    NASA Astrophysics Data System (ADS)

    Mugrauer, M.

    2016-03-01

    The Cassegrain-Teleskop-Kamera (CTK-II) and the Refraktor-Teleskop-Kamera (RTK) are two CCD-imagers which are operated at the 25 cm Cassegrain and 20 cm refractor auxiliary telescopes of the University Observatory Jena. This article describes the main characteristics of these instruments. The properties of the CCD-detectors, the astrometry, the image quality, and the detection limits of both CCD-cameras, as well as some results of ongoing observing projects, carried out with these instruments, are presented. Based on observations obtained with telescopes of the University Observatory Jena, which is operated by the Astrophysical Institute of the Friedrich-Schiller-University.

  13. Delta-doped CCD's as low-energy particle detectors and imagers

    NASA Technical Reports Server (NTRS)

    Nikzad, Shouleh (Inventor); Hoenk, Michael E. (Inventor); Hecht, Michael H. (Inventor)

    2002-01-01

    The back surface of a thinned charged-coupled device (CCD) is treated to eliminate the backside potential well that appears in a conventional thinned CCD during backside illumination. The backside of the CCD includes a delta layer of high-concentration dopant confined to less than one monolayer of the crystal semiconductor. The thinned, delta-doped CCD is used to detect very low-energy particles that penetrate less than 1.0 nm into the CCD, including electrons having energies less than 1000 eV and protons having energies less than 10 keV.

  14. Imaging tristimulus colorimeter for the evaluation of color in printed textiles

    NASA Astrophysics Data System (ADS)

    Hunt, Martin A.; Goddard, James S., Jr.; Hylton, Kathy W.; Karnowski, Thomas P.; Richards, Roger K.; Simpson, Marc L.; Tobin, Kenneth W., Jr.; Treece, Dale A.

    1999-03-01

    The high-speed production of textiles with complicated printed patterns presents a difficult problem for a colorimetric measurement system. Accurate assessment of product quality requires a repeatable measurement using a standard color space, such as CIELAB, and the use of a perceptually based color difference formula, e.g., the ΔECMC color difference formula. Image-based color sensors used for on-line measurement are not colorimetric by nature and require a non-linear transformation of the component colors based on the spectral properties of the incident illumination, imaging sensor, and the actual textile color. This research and development effort describes a benchtop, proof-of-principle system that implements a projection onto convex sets (POCS) algorithm for mapping component color measurements to standard tristimulus values and incorporates structural and color based segmentation for improved precision and accuracy. The POCS algorithm consists of determining the closed convex sets that describe the constraints on the reconstruction of the true tristimulus values based on the measured imperfect values. We show that using a simulated D65 standard illuminant, commercial filters and a CCD camera, accurate (under perceptibility limits) per-region ΔECMC values can be measured on real textile samples.
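
    For background, the sketch below shows the standard mapping from linear sRGB component values to CIE XYZ and then CIELAB under a D65 white point; it is illustrative only, and the simple CIE76 ΔE*ab shown here is not the ΔECMC formula or the POCS-based mapping used in the paper.

    ```python
    import numpy as np

    # Linear sRGB -> XYZ matrix and white point for the D65 illuminant (IEC 61966-2-1)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    WHITE_D65 = np.array([0.95047, 1.0, 1.08883])

    def rgb_to_lab(rgb_linear):
        """rgb_linear: 3 linear RGB values in [0, 1]. Returns CIELAB (L*, a*, b*)."""
        xyz = M @ np.asarray(rgb_linear, dtype=float)
        t = xyz / WHITE_D65
        f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
        L = 116 * f[1] - 16
        a = 500 * (f[0] - f[1])
        b = 200 * (f[1] - f[2])
        return np.array([L, a, b])

    def delta_e_ab(lab1, lab2):
        """Simple CIE76 colour difference (not the CMC formula used in the paper)."""
        return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))
    ```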

  15. MagAO: status and science

    NASA Astrophysics Data System (ADS)

    Morzinski, Katie M.; Close, Laird M.; Males, Jared R.; Hinz, Phil M.; Esposito, Simone; Riccardi, Armando; Briguglio, Runa; Follette, Katherine B.; Pinna, Enrico; Puglisi, Alfio; Vezilj, Jennifer; Xompero, Marco; Wu, Ya-Lin

    2016-07-01

    "MagAO" is the adaptive optics instrument at the Magellan Clay telescope at Las Campanas Observatory, Chile. MagAO has a 585-actuator adaptive secondary mirror and 1000-Hz pyramid wavefront sensor, operating on natural guide stars from R-magnitudes of -1 to 15. MagAO has been in on-sky operation for 166 nights since installation in 2012. MagAO's unique capabilities are simultaneous imaging in the visible and infrared with VisAO and Clio, excellent performance at an excellent site, and a lean operations model. Science results from MagAO include the first ground-based CCD image of an exoplanet, demonstration of the first accreting protoplanets, discovery of a new wide-orbit exoplanet, and the first empirical bolometric luminosity of an exoplanet. We describe the status, report the AO performance, and summarize the science results. New developments reported here include color corrections on red guide stars for the wavefront sensor; a new field stop stage to facilitate VisAO imaging of extended sources; and eyepiece observing at the visible-light diffraction limit of a 6.5-m telescope. We also discuss a recent hose failure that led to a glycol coolant leak, and the recovery of the adaptive secondary mirror (ASM) after this recent (Feb. 2016) incident.

  16. The imaging system design of three-line LMCCD mapping camera

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da

    2011-08-01

    In this paper, the authors first introduce the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. Next, several key designs of the imaging system are introduced, such as the design of the focal plane module, the video signal processing, the controller of the imaging system, and the synchronous photography of the forward, nadir, and backward cameras and of the nadir camera's line-matrix CCD. Finally, the test results of the LMCCD mapping camera imaging system are presented. The results are as follows: the precision of synchronous photography among the forward, nadir, and backward cameras is better than 4 ns, as is that of the nadir camera's line-matrix CCD; the photography interval of the nadir camera's line-matrix CCD satisfies the buffer requirements of the LMCCD focal plane module; the SNR tested in the laboratory is better than 95 under typical working conditions (solar incidence angle of 30°, earth-surface reflectivity of 0.3) for each CCD image; and the temperature of the focal plane module is kept below 30 °C over a working period of 15 minutes. These results satisfy the requirements for synchronous photography, focal plane module temperature control, and SNR, guaranteeing the precision of satellite photogrammetry.

  17. CTK: A new CCD Camera at the University Observatory Jena

    NASA Astrophysics Data System (ADS)

    Mugrauer, M.

    2009-05-01

    The Cassegrain-Teleskop-Kamera (CTK) is a new CCD imager which has been operated at the University Observatory Jena since the beginning of 2006. This article describes the main characteristics of the new camera. The properties of the CCD detector, the CTK image quality, as well as its detection limits for all filters are presented. Based on observations obtained with telescopes of the University Observatory Jena, which is operated by the Astrophysical Institute of the Friedrich-Schiller-University.

  18. Astronomical Archive at Tartu Observatory

    NASA Astrophysics Data System (ADS)

    Annuk, K.

    2007-10-01

    Archiving astronomical data is an important task not only at large observatories but also at small observatories. Here we describe the astronomical archive at Tartu Observatory. The archive consists of old photographic plate images, photographic spectrograms, CCD direct images, and CCD spectroscopic data. The photographic plate digitizing project was started in 2005. An on-line database (based on MySQL) was created; it includes CCD data as well as photographic data. A PHP-MySQL interface was written to provide access to all data.

  19. High Resolution Airborne Digital Imagery for Precision Agriculture

    NASA Technical Reports Server (NTRS)

    Herwitz, Stanley R.

    1998-01-01

    The Environmental Research Aircraft and Sensor Technology (ERAST) program is a NASA initiative that seeks to demonstrate the application of cost-effective aircraft and sensor technology to private commercial ventures. In 1997-98, a series of flight demonstrations and image acquisition efforts were conducted over the Hawaiian Islands using a remotely-piloted solar-powered platform (Pathfinder) and a fixed-wing piloted aircraft (Navajo) equipped with a Kodak DCS450 CIR (color infrared) digital camera. As an ERAST Science Team Member, I defined a set of flight lines over the largest coffee plantation in Hawaii: the Kauai Coffee Company's 4,000 acre Koloa Estate. Past studies have demonstrated the applications of airborne digital imaging to agricultural management. Few studies have examined the usefulness of high resolution airborne multispectral imagery with 10 cm pixel sizes. The Kodak digital camera was integrated with ERAST's Airborne Real Time Imaging System (ARTIS), which generated multiband CCD images consisting of 6 × 10^6 pixel elements. At the designated flight altitude of 1,000 feet over the coffee plantation, the pixel size was 10 cm. The study involved the analysis of imagery acquired on 5 March 1998 for the detection of anomalous reflectance values and for the definition of spectral signatures as indicators of tree vigor and treatment effectiveness (e.g., drip irrigation; fertilizer application).

  20. Early flame development image comparison of low calorific value syngas and CNG in DI SI gas engine

    NASA Astrophysics Data System (ADS)

    Ftwi Yohaness Hagos, A. Rashid A.; Sulaiman, Shaharin A.

    2013-06-01

    The early flame development stages of syngas and CNG are analysed and compared from flame images taken over 20° CA from the start of ignition. A simulated syngas with a composition of 19.2% H2, 29.6% CO, 5.3% CH4 and the balance nitrogen and carbon dioxide, which resembles the typical product of wood biomass gasification, was used in the study. A CCD camera, triggered externally by signals from the camshaft and crank angle sensors, was used to capture the images. Optical access to the engine was provided through an endoscope, with self-illumination from inside the chamber. The results of the image analysis are further compared with the mass fraction burn curves of both syngas and CNG derived from the pressure data. The flame-image analysis of syngas confirms the double rapid-burning stage of the syngas mass fraction burn derived from the in-cylinder pressure data.

  1. The Art of Astrophotography

    NASA Astrophysics Data System (ADS)

    Morison, Ian

    2017-02-01

    1. Imaging star trails; 2. Imaging a constellation with a DSLR and tripod; 3. Imaging the Milky Way with a DSLR and tracking mount; 4. Imaging the Moon with a compact camera or smartphone; 5. Imaging the Moon with a DSLR; 6. Imaging the Pleiades Cluster with a DSLR and small refractor; 7. Imaging the Orion Nebula, M42, with a modified Canon DSLR; 8. Telescopes and their accessories for use in astroimaging; 9. Towards stellar excellence; 10. Cooling a DSLR camera to reduce sensor noise; 11. Imaging the North American and Pelican Nebulae; 12. Combating light pollution - the bane of astrophotographers; 13. Imaging planets with an astronomical video camera or Canon DSLR; 14. Video imaging the Moon with a webcam or DSLR; 15. Imaging the Sun in white light; 16. Imaging the Sun in the light of its H-alpha emission; 17. Imaging meteors; 18. Imaging comets; 19. Using a cooled 'one shot colour' camera; 20. Using a cooled monochrome CCD camera; 21. LRGB colour imaging; 22. Narrow band colour imaging; Appendix A. Telescopes for imaging; Appendix B. Telescope mounts; Appendix C. The effects of the atmosphere; Appendix D. Auto guiding; Appendix E. Image calibration; Appendix F. Practical aspects of astroimaging.

  2. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    NASA Astrophysics Data System (ADS)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

    The objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and a wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal planes with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.

  3. Neural network-based system for pattern recognition through a fiber optic bundle

    NASA Astrophysics Data System (ADS)

    Gamo-Aranda, Javier; Rodriguez-Horche, Paloma; Merchan-Palacios, Miguel; Rosales-Herrera, Pablo; Rodriguez, M.

    2001-04-01

    A neural network based system to identify images transmitted through a Coherent Fiber-optic Bundle (CFB) is presented. Patterns are generated in a computer, displayed on a Spatial Light Modulator, imaged onto the input face of the CFB, and recovered optically by a CCD sensor array for further processing. Input and output optical subsystems were designed and used to that end. The recognition step of the transmitted patterns is made by a powerful, widely-used, neural network simulator running on the control PC. A complete PC-based interface was developed to control the different tasks involved in the system. An optical analysis of the system capabilities was carried out prior to performing the recognition step. Several neural network topologies were tested, and the corresponding numerical results are also presented and discussed.

  4. Linear CCD attitude measurement system based on the identification of the auxiliary array CCD

    NASA Astrophysics Data System (ADS)

    Hu, Yinghui; Yuan, Feng; Li, Kai; Wang, Yan

    2015-10-01

    To address the problem of high-precision attitude measurement of a flying target over a large space and a large field of view, and after comparing existing measurement methods, we propose a measurement system in which two array CCDs assist in identification for three linear CCDs observing multiple cooperative targets. This avoids the nonlinear system errors, the excessive number of calibration parameters, and the overly complicated constraints among camera positions of the existing nine-linear-CCD spectroscopic test system. The mathematical models of the binocular vision system and the three-linear-CCD test system are established. Three red LED point sources form a triangle of cooperative targets whose coordinates are given in advance by a coordinate measuring machine. Three blue LED points are added along the sides of the triangle as an aid, so that the array CCDs can more easily identify the three red LED points, while the linear CCD cameras are fitted with red filters to reject the blue LED points and reduce stray light. The array CCDs measure the spots, identify them, and compute the spatial coordinates of the red LED points, while the linear CCDs measure the three red LED spots to solve the linear CCD test system, which yields 27 solutions. Using the coordinates measured by the array CCDs to assist the linear CCDs achieves spot identification and solves the difficult problem of multi-target identification with linear CCDs. In combination with the imaging characteristics of linear CCDs, a special cylindrical lens system with a telecentric optical design was developed, so that the position of the energy center of the spot changes little in the direction perpendicular to the optical axis over the depth-of-convergence range, ensuring high-precision image quality; the entire test system thus improves the speed and precision of spatial attitude measurement.

  5. Design and build a compact Raman sensor for identification of chemical composition

    NASA Astrophysics Data System (ADS)

    Garcia, Christopher S.; Abedin, M. Nurul; Ismail, Syed; Sharma, Shiv K.; Misra, Anupam K.; Sandford, Stephen P.; Elsayed-Ali, Hani

    2008-04-01

    A compact remote Raman sensor system was developed at NASA Langley Research Center. This sensor is an improvement over the previously reported system, which consisted of a 532 nm pulsed laser, a 4-inch telescope, a spectrograph, and an intensified CCD camera. One of the attractive features of the previous system was its portability, thereby making it suitable for applications such as planetary surface explorations, homeland security and defense applications where a compact portable instrument is important. The new system was made more compact by replacing bulky components with smaller and lighter components. The new compact system uses a smaller spectrograph measuring 9 x 4 x 4 in. and a smaller intensified CCD camera measuring 5 in. long and 2 in. in diameter. The previous system was used to obtain the Raman spectra of several materials that are important to defense and security applications. Furthermore, the new compact Raman sensor system is used to obtain the Raman spectra of a diverse set of materials to demonstrate the sensor system's potential use in the identification of unknown materials.

  6. Design and Build a Compact Raman Sensor for Identification of Chemical Composition

    NASA Technical Reports Server (NTRS)

    Garcia, Christopher S.; Abedin, M. Nurul; Ismail, Syed; Sharma, Shiv K.; Misra, Anupam K.; Sandford, Stephen P.; Elsayed-Ali, Hani

    2008-01-01

    A compact remote Raman sensor system was developed at NASA Langley Research Center. This sensor is an improvement over the previously reported system, which consisted of a 532 nm pulsed laser, a 4-inch telescope, a spectrograph, and an intensified charge-coupled devices (CCD) camera. One of the attractive features of the previous system was its portability, thereby making it suitable for applications such as planetary surface explorations, homeland security and defense applications where a compact portable instrument is important. The new system was made more compact by replacing bulky components with smaller and lighter components. The new compact system uses a smaller spectrograph measuring 9 x 4 x 4 in. and a smaller intensified CCD camera measuring 5 in. long and 2 in. in diameter. The previous system was used to obtain the Raman spectra of several materials that are important to defense and security applications. Furthermore, the new compact Raman sensor system is used to obtain the Raman spectra of a diverse set of materials to demonstrate the sensor system's potential use in the identification of unknown materials.

  7. Development of a Robust star identification technique for use in attitude determination of the ACE spacecraft

    NASA Technical Reports Server (NTRS)

    Woodard, Mark; Rohrbaugh, Dave

    1995-01-01

    The Advanced Composition Explorer (ACE) spacecraft is designed to fly in a spin-stabilized attitude. The spacecraft will carry two attitude sensors - a digital fine Sun sensor and a charge coupled device (CCD) star tracker - to allow ground-based determination of the spacecraft attitude and spin rate. Part of the processing that must be performed on the CCD star tracker data is the star identification. Star data received from the spacecraft must be matched with star information in the SKYMAP catalog to determine exactly which stars the sensor is tracking. This information, along with the Sun vector measured by the Sun sensor, is used to determine the spacecraft attitude. Several existing star identification (star ID) systems were examined to determine whether they could be modified for use on the ACE mission. Star ID systems which exist for three-axis stabilized spacecraft tend to be complex in nature and many require fairly good knowledge of the spacecraft attitude, making their use for ACE excessive. Star ID systems used for spinners carrying traditional slit star sensors would have to be modified to model the CCD star tracker. The ACE star ID algorithm must also be robust, in that it will be able to correctly identify stars even though the attitude is not known to a high degree of accuracy, and must be very efficient to allow real-time star identification. The paper presents the star ID algorithm that was developed for ACE. Results from prototype testing are also presented to demonstrate the efficiency, accuracy, and robustness of the algorithm.
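
    The abstract does not spell out the matching scheme beyond its robustness and efficiency requirements, so the sketch below shows only the generic idea of catalog matching by pairwise angular separation; the catalog vectors and tolerance are made up, and this is not the ACE algorithm itself.

```python
# Generic pairwise angular-separation matching (illustrative only; not the ACE
# algorithm).  The catalog vectors and tolerance below are made up.
import numpy as np
from itertools import combinations

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def ang_sep_deg(a, b):
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

def match_pairs(observed, catalog, tol_deg=0.02):
    """Return (obs_i, obs_j, cat_m, cat_n) tuples whose separations agree within tol."""
    matches = []
    for (i, oi), (j, oj) in combinations(enumerate(observed), 2):
        sep = ang_sep_deg(oi, oj)
        for (m, cm), (n, cn) in combinations(enumerate(catalog), 2):
            if abs(ang_sep_deg(cm, cn) - sep) < tol_deg:
                matches.append((i, j, m, n))
    return matches

catalog = [unit([1, 0.00, 0.00]), unit([1, 0.05, 0.00]), unit([1, 0.00, 0.08])]
observed = [unit([1, 0.00, 0.00]), unit([1, 0.05, 0.00])]       # tracker unit vectors
print(match_pairs(observed, catalog))                            # -> [(0, 1, 0, 1)]
```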

  8. Autonomous star tracker based on active pixel sensors (APS)

    NASA Astrophysics Data System (ADS)

    Schmidt, U.

    2017-11-01

    Star trackers are opto-electronic sensors used on board satellites for autonomous inertial attitude determination. In recent years, star trackers have become more and more important among attitude and orbit control system (AOCS) sensors. High-performance star trackers are to date based on charge-coupled device (CCD) optical camera heads. Jena-Optronik GmbH has been active in the field of opto-electronic sensors such as star trackers since the early 1980s. Today, with the product family ASTRO5, ASTRO10 and ASTRO15, all market segments such as earth observation, scientific applications and geo-telecom are supplied to European and overseas customers. A new generation of star trackers can be designed based on the technical features of APS detectors. The measurement performance of current CCD-based star trackers can be maintained, while star tracker functionality, reliability and robustness are increased and unit costs are reduced.

  9. Research on the liquid crystal adaptive optics system for human retinal imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Tong, Shoufeng; Song, Yansong; Zhao, Xin

    2013-12-01

    The retina is the only place where human blood vessels can be observed directly. Many diseases whose early symptoms are not otherwise obvious can be diagnosed by observing changes in the distal micro blood vessels. In order to obtain high-resolution images of the human retina, an adaptive optics system for correcting the aberration of the human eye was designed using a Shack-Hartmann wavefront sensor and a Liquid Crystal Spatial Light Modulator (LCSLM). For a subject eye with 8 m^-1 (8 D) myopia, the wavefront error is reduced to 0.084 λ PV and 0.12 λ RMS after adaptive optics (AO) correction, which reaches the diffraction limit. The results show that the LCSLM-based AO system can correct the aberration of the human eye efficiently and bring the otherwise blurred photoreceptor cells into clear focus on a CCD camera.

  10. Performance characteristics of CCDs for the ACIS experiment. [Advanced X-ray Astrophysics Facility CCD Imaging Spectrometer

    NASA Technical Reports Server (NTRS)

    Garmire, Gordon P.; Nousek, John; Burrows, David; Ricker, George; Bautz, Mark; Doty, John; Collins, Stewart; Janesick, James

    1988-01-01

    The search for the optimum CCD to be used at the focal surface of the Advanced X-ray Astrophysics Facility (AXAF) is described. The physics of the interaction of X-rays in silicon through the photoelectric effect is reviewed. CCD technology at the beginning of the AXAF definition phase is summarized, and the results of the CCD enhancement program are discussed. Other sources of optimum CCDs are examined, and CCD enhancements made at MIT Lincoln Laboratory are addressed.

  11. CCD research. [design, fabrication, and applications

    NASA Technical Reports Server (NTRS)

    Gassaway, J. D.

    1976-01-01

    The fundamental problems encountered in designing, fabricating, and applying CCD's are reviewed. Investigations are described and results and conclusions are given for the following: (1) the development of design analyses employing computer aided techniques and their application to the design of a grapped structure; (2) the role of CCD's in applications to electronic functions, in particular, signal processing; (3) extending the CCD to silicon films on sapphire (SOS); and (4) all aluminum transfer structure with low noise input-output circuits. Related work on CCD imaging devices is summarized.

  12. A programmable CCD driver circuit for multiphase CCD operation

    NASA Technical Reports Server (NTRS)

    Ewin, Audrey J.; Reed, Kenneth V.

    1989-01-01

    A programmable CCD (charge-coupled device) driver circuit was designed to drive CCDs in multiphased modes. The purpose of the drive electronics is to operate developmental CCD imaging arrays for NASA's tiltable moderate resolution imaging spectrometer (MODIS-T). Five objectives for the driver were considered during its design: (1) the circuit drives CCD electrode voltages between 0 V and +30 V to produce reasonable potential wells, (2) the driving sequence is started with one input signal, (3) the driving sequence is started with one input signal, (4) the circuit allows programming of frame sequences required by arrays of any size, (5) it produces interfacing signals for the CCD and the DTF (detector test facility). Simulation of the driver verified its function with the master clock running up to 10 MHz. This suggests a maximum rate of 400,000 pixels/s. Timing and packaging parameters were verified. The design uses 54 TTL (transistor-transistor logic) chips. Two versions of hardware were fabricated: wirewrap and printed circuit board. Both were verified functionally with a logic analyzer.

  13. Digital Image Sensor-Based Assessment of the Status of Oat (Avena sativa L.) Crops after Frost Damage

    PubMed Central

    Macedo-Cruz, Antonia; Pajares, Gonzalo; Santos, Matilde; Villegas-Romero, Isidro

    2011-01-01

    The aim of this paper is to classify the land covered with oat crops and to quantify frost damage on oats while the plants are still in the flowering stage. The images are taken by a digital colour camera with a CCD-based sensor. Unsupervised classification methods are applied because the plants present different spectral signatures, depending on two main factors: illumination and the affected state. The colour space used in this application is CIELab, based on the decomposition of the colour into three channels, because it is the closest to human colour perception. The histogram of each channel is successively split into regions by thresholding. The best threshold to be applied is automatically obtained as a combination of three thresholding strategies: (a) Otsu’s method, (b) the Isodata algorithm, and (c) fuzzy thresholding. The fusion of these automatic thresholding techniques and the design of the classification strategy are among the main findings of the paper, and they allow an estimation of the damage and a prediction of oat production. PMID:22163940

  14. Digital image sensor-based assessment of the status of oat (Avena sativa L.) crops after frost damage.

    PubMed

    Macedo-Cruz, Antonia; Pajares, Gonzalo; Santos, Matilde; Villegas-Romero, Isidro

    2011-01-01

    The aim of this paper is to classify the land covered with oat crops and to quantify frost damage on oats while the plants are still in the flowering stage. The images are taken by a digital colour camera with a CCD-based sensor. Unsupervised classification methods are applied because the plants present different spectral signatures, depending on two main factors: illumination and the affected state. The colour space used in this application is CIELab, based on the decomposition of the colour into three channels, because it is the closest to human colour perception. The histogram of each channel is successively split into regions by thresholding. The best threshold to be applied is automatically obtained as a combination of three thresholding strategies: (a) Otsu's method, (b) the Isodata algorithm, and (c) fuzzy thresholding. The fusion of these automatic thresholding techniques and the design of the classification strategy are among the main findings of the paper, and they allow an estimation of the damage and a prediction of oat production.
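
    As a rough illustration of the thresholding ideas in the two records above, the sketch below computes Otsu and Isodata thresholds on a single channel and fuses them by simple averaging. The fusion rule, the synthetic channel, and the omission of the fuzzy-thresholding component are all simplifications for brevity, not the authors' published strategy.

```python
# Rough illustration only: fuse Otsu and Isodata thresholds on one channel by
# averaging (the fuzzy-thresholding component and the authors' actual fusion
# rule are omitted; the "channel" is synthetic).
import numpy as np

def otsu_threshold(values, bins=256):
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist.astype(float) / hist.sum()
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0
        m1 = (p[k:] * centers[k:]).sum() / w1
        between = w0 * w1 * (m0 - m1) ** 2        # between-class variance
        if between > best_var:
            best_var, best_t = between, centers[k]
    return best_t

def isodata_threshold(values, eps=0.5):
    t = float(values.mean())
    while True:                                    # iterate the inter-means rule
        lo, hi = values[values <= t], values[values > t]
        if len(lo) == 0 or len(hi) == 0:
            return t
        new_t = 0.5 * (lo.mean() + hi.mean())
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

rng = np.random.default_rng(0)
channel = np.concatenate([rng.normal(90, 10, 30000), rng.normal(160, 12, 20000)])
fused = 0.5 * (otsu_threshold(channel) + isodata_threshold(channel))
print(f"fused threshold = {fused:.1f}; class split = {(channel > fused).mean():.2f}")
```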

  15. VIS-NIR multispectral synchronous imaging pyrometer for high-temperature measurements.

    PubMed

    Fu, Tairan; Liu, Jiangfan; Tian, Jibin

    2017-06-01

    A visible-infrared multispectral synchronous imaging pyrometer was developed for simultaneous, multispectral, two-dimensional high temperature measurements. The multispectral image pyrometer uses prism separation construction in the spectrum range of 650-950 nm and multi-sensor fusion of three CCD sensors for high-temperature measurements. The pyrometer had 650-750 nm, 750-850 nm, and 850-950 nm channels all with the same optical path. The wavelength choice for each channel is flexible with three center wavelengths (700 nm, 810 nm, and 920 nm) with a full width at half maximum of the spectrum of 3 nm used here. The three image sensors were precisely aligned to avoid spectrum artifacts by micro-mechanical adjustments of the sensors relative to each other to position them within a quarter pixel of each other. The pyrometer was calibrated with the standard blackbody source, and the temperature measurement uncertainty was within 0.21 °C-0.99 °C in the temperatures of 600 °C-1800 °C for the blackbody measurements. The pyrometer was then used to measure the leading edge temperatures of a ceramics model exposed to high-enthalpy plasma aerodynamic heating environment to verify the system applicability. The measured temperature ranges are 701-991 °C, 701-1134 °C, and 701-834 °C at the heating transient, steady state, and cooling transient times. A significant temperature gradient (170 °C/mm) was observed away from the leading edge facing the plasma jet during the steady state heating time. The temperature non-uniformity on the surface occurs during the entire aerodynamic heating process. However, the temperature distribution becomes more uniform after the heater is shut down and the experimental model is naturally cooled. This result shows that the multispectral simultaneous image measurement mode provides a wider temperature range for one imaging measurement of high spatial temperature gradients in transient applications.
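
    The underlying principle of a multispectral pyrometer can be illustrated with two-colour ratio pyrometry in the Wien limit. The sketch below assumes a greybody surface and uses two of the instrument's channel centre wavelengths; it is not the instrument's actual blackbody calibration procedure.

```python
# Illustrative sketch (greybody assumption, Wien limit; not the instrument's
# blackbody calibration): two-colour ratio pyrometry using the 700 nm and
# 920 nm channel centre wavelengths.
import numpy as np

C2 = 1.438777e-2                     # second radiation constant, m*K

def wien_intensity(lam, T, eps=1.0):
    """Greybody spectral intensity (arbitrary scale) in the Wien limit."""
    return eps * lam ** -5 * np.exp(-C2 / (lam * T))

def ratio_temperature(I1, I2, lam1, lam2):
    """Temperature from the two-channel intensity ratio (equal emissivities assumed)."""
    return C2 * (1 / lam1 - 1 / lam2) / (5 * np.log(lam2 / lam1) - np.log(I1 / I2))

lam1, lam2 = 700e-9, 920e-9
T_true = 1500.0 + 273.15             # a 1500 degC surface
I1, I2 = wien_intensity(lam1, T_true), wien_intensity(lam2, T_true)
print(f"recovered T = {ratio_temperature(I1, I2, lam1, lam2) - 273.15:.1f} degC")
```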

  16. Fifty Years of Lightning Observations from Space

    NASA Astrophysics Data System (ADS)

    Christian, H. J., Jr.

    2017-12-01

    Some of the earliest satellites, starting with OSO (1965), ARIEL (1967), and RAE (1968), detected lightning using either optical or RF sensors, although that was not their intent. One of the earliest instruments designed to detect lightning was the PBE (1977). The use of space to study lightning activity has exploded since these early days. The advent of focal-plane imaging arrays made it possible to develop high performance optical lightning sensors. Prior to the use of charge-coupled devices (CCDs), most space-based lightning sensors used only a few photo-diodes, which limited the location accuracy and detection efficiency (DE) of the instruments. With CCDs, one can limit the field of view of each detector (pixel), and thus improve the signal to noise ratio over single-detector instruments that summed the light reflected from many clouds with the lightning produced by a single cloud. This pixelization enabled daytime DE to increase from a few percent to close to 90%. The OTD (1995) and the LIS (1997) were the first lightning sensors to utilize focal-plane arrays. Together they detected global lightning activity for more than twenty years, providing the first detailed information on the distribution of global lightning and its variability. The FORTE satellite was launched shortly after LIS, and became the first dedicated satellite to simultaneously measure RF and optical lightning emissions. It too used a CCD focal plane to detect and locate lightning. In November 2016, the GLM became the first lightning instrument in geostationary orbit. Shortly thereafter, China placed its GLI in orbit. Lightning sensors in geostationary orbit significantly increase the value of space-based observations. For the first time, lightning activity can be monitored continuously, over large areas of the Earth with high, uniform DE and location accuracy. In addition to observing standard lightning, a number of sensors have been placed in orbit to detect transient luminous events and tropospheric gamma-ray flashes. A lineal history of space-based lightning observations will be presented as well as a discussion of the scientific contributions made possible by these instruments. In addition, relative merits of space versus ground measurements will be addressed, as well as an effort to demonstrate the complementary nature of the two approaches.

  17. Development of a CCD based solar speckle imaging system

    NASA Astrophysics Data System (ADS)

    Nisenson, Peter; Stachnik, Robert V.; Noyes, Robert W.

    1986-02-01

    A program to develop software and hardware for the purpose of obtaining high angular resolution images of the solar surface is described. The program included the procurement of a Charge Coupled Device (CCD) imaging system; extensive laboratory and remote-site testing of the camera system; the development of a software package for speckle image reconstruction, which was eventually installed and tested at the Sacramento Peak Observatory; and experiments with the CCD system (coupled to an image intensifier) for low-light-level, narrow-spectral-band solar imaging.

  18. Direct measurement and calibration of the Kepler CCD Pixel Response Function for improved photometry and astrometry

    NASA Astrophysics Data System (ADS)

    Ninkov, Zoran

    Stellar images taken with telescopes and detectors in space are usually undersampled, and to correct for this, an accurate pixel response function is required. The standard approach for HST and KEPLER has been to measure the telescope PSF combined ("convolved") with the actual pixel response function, super-sampled by taking into account dithered or offset observed images of many stars (Lauer [1999]). This combined response function has been called the "PRF" (Bryson et al. [2011]). However, using such results has not allowed astrometry from KEPLER to reach its full potential (Monet et al. [2010], [2014]). Given the precision of KEPLER photometry, it should be feasible to use a pre-determined detector pixel response function (PRF) and an optical point spread function (PSF) as separable quantities to more accurately correct photometry and astrometry for undersampling. Wavelength (i.e. stellar color) and instrumental temperature should be affecting each of these differently. Discussion of the PRF in the "KEPLER Instrument Handbook" is limited to an ad-hoc extension of earlier measurements on a quite different CCD. It is known that the KEPLER PSF typically has a sharp spike in the middle, and the main bulk of the PSF is still small enough to be undersampled, so that any substructure in the pixel may interact significantly with the optical PSF. Both the PSF and PRF are probably asymmetric. We propose to measure the PRF for an example of the CCD sensors used on KEPLER at sufficient sampling resolution to allow significant improvement of KEPLER photometry and astrometry, in particular allowing PSF fitting techniques to be used on the data archive.

  19. Vacuum compatible miniature CCD camera head

    DOEpatents

    Conder, Alan D.

    2000-01-01

    A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.

  20. Test technology on divergence angle of laser range finder based on CCD imaging fusion

    NASA Astrophysics Data System (ADS)

    Shi, Sheng-bing; Chen, Zhen-xing; Lv, Yao

    2016-09-01

    Laser range finders are fitted to many kinds of weapon platforms, such as tanks, ships, and aircraft, and are an important component of fire control systems. The divergence angle is an important performance parameter, a direct expression of a range finder's horizontal resolving power, and a mandatory item in appraisal testing. In this paper, to achieve a high-accuracy measurement of the divergence angle of a laser range finder, a divergence angle test system is designed based on CCD imaging. The divergence angle is obtained by fusing images acquired with different attenuations, which solves the problem of the CCD response characteristics limiting the accuracy of the divergence angle measurement.
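
    One common way to obtain a divergence angle from a CCD spot image is to compute the second-moment (D4-sigma) width of the far-field spot formed in the focal plane of a long-focal-length lens and divide it by the focal length. The sketch below uses that approach with a synthetic Gaussian spot and a hypothetical pixel pitch and focal length; the paper's multi-attenuation image fusion is not reproduced.

```python
# Illustrative only (not the paper's multi-attenuation fusion): divergence from
# the D4-sigma width of the far-field spot formed in the focal plane of a lens.
# The pixel pitch, focal length and Gaussian spot below are hypothetical.
import numpy as np

def d4sigma_width(img, pixel_pitch):
    """Second-moment (D4-sigma) beam diameters (x, y) in metres."""
    img = np.clip(img.astype(float) - np.median(img), 0.0, None)   # crude background removal
    y, x = np.indices(img.shape)
    total = img.sum()
    cx, cy = (img * x).sum() / total, (img * y).sum() / total
    sx = np.sqrt((img * (x - cx) ** 2).sum() / total)
    sy = np.sqrt((img * (y - cy) ** 2).sum() / total)
    return 4 * sx * pixel_pitch, 4 * sy * pixel_pitch

pitch, focal = 6.7e-6, 2.0                     # 6.7 um pixels, 2 m collimator focal length
yy, xx = np.indices((256, 256))
spot = np.exp(-((xx - 128.0) ** 2 + (yy - 128.0) ** 2) / (2 * 8.0 ** 2))   # sigma = 8 px
dx, dy = d4sigma_width(spot, pitch)
print(f"full-angle divergence ~ {dx / focal * 1e3:.3f} mrad (x), {dy / focal * 1e3:.3f} mrad (y)")
```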

  1. Evaluation of a New Prototype Geodetic Astrolabe for Measuring Deflections of the Vertical

    NASA Astrophysics Data System (ADS)

    Slater, J. A.; Thompson, N.; Angell, L. E.; Belenkii, M. S.; Bruns, D. G.; Johnson, D. O.

    2009-12-01

    During the last three years, the National Geospatial-Intelligence Agency (NGA), with assistance from the U.S. Naval Observatory (USNO), sponsored the development of a new electronic geodetic astrolabe for measuring deflections of the vertical (DoV). NGA’s current operational astrolabes, built in 1995, have a number of undesirable features including the need for a pool of liquid mercury as a reflecting surface. The new state-of-the-art prototype instrument, completed by Trex Enterprises in early 2009, was designed to meet a 0.2 arcsec accuracy requirement. It reduces the weight, eliminates the mercury, and dramatically reduces observation times. The new astrolabe consists of a 101 mm aperture telescope with a 1.5° field of view and an inclinometer mounted inside a 92-cm high, 30-cm diameter tube, an external GPS receiver for timing, and a laptop computer that controls and monitors the instrument and performs the computations. Star images are recorded by an astronomical-grade camera with a 2,048 x 2,048 pixel CCD sensor that is externally triggered by time pulses from the GPS receiver. The prototype was designed for nighttime observation of visible stars equal to or brighter than magnitude 10.0. The inclinometer is a system of two orthogonal pendula that define the local gravitational vertical, each consisting of a brass plumb bob suspended from an aluminized polymer ribbon set between two electrodes. An internal reference collimator is rigidly tied to the inclinometer and projects an array of reference points of light onto the CCD sensor. After the astrolabe is coarsely leveled to within 20 arcsec, voice coil actuators automatically adjust and maintain the inclinometer vertical to within 0.02 arcsec. Independent images are collected at 6 second intervals using a 200 msec exposure time. The CCD coordinates are determined for each star and a collimator reference point on each image. Stars are identified by referencing a customized star catalog produced by USNO. A plate model is fitted to the topocentric coordinates of the stars, and then used to solve for the astronomical latitude and longitude of the vertical reference point on the CCD. The average of 100-150 individual image solutions (10-15 minutes) defines the astronomical position for the observation session. In order to remove an azimuthal orientation bias, the astrolabe is rotated 180°, a new observation session solution is produced for that orientation and then averaged with the first solution to get the final astronomical position of the site. By combining these coordinates with GPS-derived geodetic latitude and longitude, one obtains the DoV. Initial testing of the prototype at a known astronomic position has been completed. The tests evaluated the session-to-session and day-to-day repeatability of the solutions, the number of observations required for a solution, the accuracy with respect to the known position, and the operational robustness of the hardware and software. Based on the field tests, Trex will make improvements to the prototype hardware and software and then produce operational units for use by NGA.
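
    The plate-model step described above can be illustrated with a simple six-parameter linear fit mapping measured CCD coordinates to the tangent-plane (standard) coordinates of catalogued stars; the astrolabe's operational model may carry additional terms, and all numbers below are synthetic.

```python
# Minimal sketch (a 6-parameter linear plate model with synthetic data; the
# astrolabe's operational model may include more terms): least-squares mapping
# from measured CCD coordinates (x, y) to tangent-plane coordinates (xi, eta).
import numpy as np

def fit_plate_model(xy, xieta):
    """Solve xi = a*x + b*y + c and eta = d*x + e*y + f in the least-squares sense."""
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
    coef_xi, *_ = np.linalg.lstsq(A, xieta[:, 0], rcond=None)
    coef_eta, *_ = np.linalg.lstsq(A, xieta[:, 1], rcond=None)
    return coef_xi, coef_eta

def apply_plate_model(coef_xi, coef_eta, xy):
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
    return A @ coef_xi, A @ coef_eta

rng = np.random.default_rng(1)
xy = rng.uniform(0, 2048, size=(120, 2))                    # measured star centroids (pixels)
theta, scale = np.radians(0.3), 7e-4                        # hypothetical rotation, deg/pixel scale
R = scale * np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
xieta = xy @ R.T + np.array([0.01, -0.02]) + rng.normal(0, 1.4e-5, size=xy.shape)
cx, ce = fit_plate_model(xy, xieta)
xi_fit, eta_fit = apply_plate_model(cx, ce, xy)
rms = np.sqrt(np.mean((xi_fit - xieta[:, 0]) ** 2 + (eta_fit - xieta[:, 1]) ** 2))
print(f"plate-model rms residual: {rms * 3600:.3f} arcsec")   # xi, eta treated as degrees
```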

  2. A comparison of imaging methods for use in an array biosensor

    NASA Technical Reports Server (NTRS)

    Golden, Joel P.; Ligler, Frances S.

    2002-01-01

    An array biosensor has been developed which uses an actively-cooled, charge-coupled device (CCD) imager. In an effort to save money and space, a complementary metal-oxide semiconductor (CMOS) camera and photodiode were tested as replacements for the cooled CCD imager. Different concentrations of CY5 fluorescent dye in glycerol were imaged using the three different detection systems with the same imaging optics. Signal discrimination above noise was compared for each of the three systems.

  3. CCDs in the Mechanics Lab--A Competitive Alternative? (Part I).

    ERIC Educational Resources Information Center

    Pinto, Fabrizio

    1995-01-01

    Reports on the implementation of a relatively low-cost, versatile, and intuitive system to teach basic mechanics based on the use of a Charge-Coupled Device (CCD) camera and inexpensive image-processing and analysis software. Discusses strengths and limitations of CCD imaging technologies. (JRH)

  4. Athena microscopic Imager investigation

    USGS Publications Warehouse

    Herkenhoff, K. E.; Squyres, S. W.; Bell, J.F.; Maki, J.N.; Arneson, H.M.; Bertelsen, P.; Brown, D.I.; Collins, S.A.; Dingizian, A.; Elliott, S.T.; Goetz, W.; Hagerott, E.C.; Hayes, A.G.; Johnson, M.J.; Kirk, R.L.; McLennan, S.; Morris, R.V.; Scherr, L.M.; Schwochert, M.A.; Shiraishi, L.R.; Smith, G.H.; Soderblom, L.A.; Sohl-Dickstein, J. N.; Wadsworth, M.V.

    2003-01-01

    The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI). The MI is a fixed-focus camera mounted on the end of an extendable instrument arm, the Instrument Deployment Device (IDD). The MI was designed to acquire images at a spatial resolution of 30 microns/pixel over a broad spectral range (400-700 nm). The MI uses the same electronics design as the other MER cameras but has optics that yield a field of view of 31 x 31 mm across a 1024 x 1024 pixel CCD image. The MI acquires images using only solar or skylight illumination of the target surface. A contact sensor is used to place the MI slightly closer to the target surface than its best focus distance (about 66 mm), allowing concave surfaces to be imaged in good focus. Coarse focusing (±2 mm precision) is achieved by moving the IDD away from a rock target after the contact sensor has been activated. The MI optics are protected from the Martian environment by a retractable dust cover. The dust cover includes a Kapton window that is tinted orange to restrict the spectral bandpass to 500-700 nm, allowing color information to be obtained by taking images with the dust cover open and closed. MI data will be used to place other MER instrument data in context and to aid in petrologic and geologic interpretations of rocks and soils on Mars. Copyright 2003 by the American Geophysical Union.

  5. Restoration of non-uniform exposure motion blurred image

    NASA Astrophysics Data System (ADS)

    Luo, Yuanhong; Xu, Tingfa; Wang, Ningming; Liu, Feng

    2014-11-01

    Restoring motion-blurred images is a key technology in opto-electronic detection systems. Imaging sensors such as CCDs and infrared imagers mounted on fast-moving platforms move together with the platform, so the acquired images become blurred. This degradation causes serious problems for subsequent tasks such as object detection, target recognition, and tracking, so motion-blurred images must be restored before moving targets can be detected in later frames. Driven by the demands of real weapon-system tasks and the need to handle targets in complex backgrounds, this dissertation applies recent theory from image processing and computer vision to the study of motion deblurring and motion detection. The principal contents are as follows: 1) When prior knowledge of the degradation function is unknown, uniformly motion-blurred images are restored. First, the blur parameters, namely the extent and direction of the motion-blur PSF (point spread function), are estimated individually in the logarithmic frequency domain. The PSF direction is calculated by extracting the central bright line of the spectrum, and the extent is computed by minimizing the correlation between the Fourier spectrum of the blurred image and a detecting function. A windowing technique is employed to suppress striping artifacts in the deblurred image, making the result visibly cleaner. 2) Based on the principle of non-uniform exposure in infrared imaging, a new restoration model for infrared blurred images is developed. The non-uniform exposure curve of the infrared image is fitted from experimental data, and the blurred images are restored using the fitted curve.
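
    The restoration step for the uniform-blur case can be sketched as a linear motion PSF inverted with a Wiener filter. In the sketch below the blur length and angle are assumed already estimated, and the spectrum-based parameter estimation and windowing described above are not reproduced.

```python
# Restoration step only (the blur length and angle are assumed already
# estimated from the spectrum, and the windowing step described above is
# omitted): a linear motion PSF inverted with a Wiener filter.
import numpy as np

def motion_psf(shape, length, angle_deg):
    """Linear motion-blur PSF of a given length (pixels) and direction."""
    psf = np.zeros(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    a = np.radians(angle_deg)
    for t in np.linspace(-length / 2, length / 2, 4 * length):
        yy, xx = int(round(cy + t * np.sin(a))), int(round(cx + t * np.cos(a)))
        if 0 <= yy < shape[0] and 0 <= xx < shape[1]:
            psf[yy, xx] = 1.0
    return psf / psf.sum()

def wiener_deblur(blurred, psf, k=1e-3):
    """Wiener filter with a constant noise-to-signal ratio k."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + k) * G))

rng = np.random.default_rng(0)
scene = rng.random((256, 256))
psf = motion_psf(scene.shape, length=15, angle_deg=30)
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deblur(blurred, psf)
print("rms error after restoration:", np.sqrt(np.mean((restored - scene) ** 2)))
```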

  6. Flagging and Correction of Pattern Noise in the Kepler Focal Plane Array

    NASA Technical Reports Server (NTRS)

    Kolodziejczak, Jeffery J.; Caldwell, Douglas A.; VanCleve, Jeffrey E.; Clarke, Bruce D.; Jenkins, Jon M.; Cote, Miles T.; Klaus, Todd C.; Argabright, Vic S.

    2010-01-01

    In order for Kepler to achieve its required photometric precision of less than 20 PPM for magnitude 12 and brighter stars, instrument-induced variations in the CCD readout bias pattern (our "2D black image"), which are either fixed or slowly varying in time, must be identified and the corresponding pixels either corrected or removed from further data processing. The two principal sources of these readout bias variations are crosstalk between the 84 science CCDs and the 4 fine guidance sensor (FGS) CCDs and a high frequency amplifier oscillation on less than 40% of the CCD readout channels. The crosstalk produces a synchronous pattern in the 2D black image with time-variation observed in less than 10% of individual pixel bias histories. We will describe a method of removing the crosstalk signal using continuously-collected data from masked and over-clocked image regions (our "collateral data"), and occasionally-collected full-frame images and reverse-clocked readout signals. We use this same set to detect regions affected by the oscillating amplifiers. The oscillations manifest as time-varying moiré patterns and rolling bands in the affected channels. Because this effect reduces the performance in only a small fraction of the array at any given time, we have developed an approach for flagging suspect data. The flags will provide the necessary means to resolve any potential ambiguity between instrument-induced variations and real photometric variations in a target time series. We will also evaluate the effectiveness of these techniques using flight data from background and selected target pixels.

  7. Biomechanical and mathematical analysis of human movement in medical rehabilitation science using time-series data from two video cameras and force-plate sensor

    NASA Astrophysics Data System (ADS)

    Tsuruoka, Masako; Shibasaki, Ryosuke; Box, Elgene O.; Murai, Shunji; Mori, Eiji; Wada, Takao; Kurita, Masahiro; Iritani, Makoto; Kuroki, Yoshikatsu

    1994-08-01

    In medical rehabilitation science, quantitative understanding of patient movement in 3-D space is very important. A patient with any joint disorder will experience its influence on other body parts in daily movement, and the alignment of the joints in movement can improve during the course of therapy. In this study, a newly developed system composed of two non-metric CCD video cameras and a force-plate sensor is controlled simultaneously by a personal computer. The system provides time-series digital data from 3-D image photogrammetry together with each foot's pressure and its centre position, which supply efficient information for the biomechanical and mathematical analysis of human movement. Specific and common points can be indicated in any patient's movement. This study suggests a more varied, quantitative understanding in medical rehabilitation science.

  8. Efficient single-pixel multispectral imaging via non-mechanical spatio-spectral modulation.

    PubMed

    Li, Ziwei; Suo, Jinli; Hu, Xuemei; Deng, Chao; Fan, Jingtao; Dai, Qionghai

    2017-01-27

    Combining spectral imaging with compressive sensing (CS) enables efficient data acquisition by fully utilizing the intrinsic redundancies in natural images. Current compressive multispectral imagers, which are mostly based on array sensors (e.g., CCD or CMOS), suffer from limited spectral range and relatively low photon efficiency. To address these issues, this paper reports a multispectral imaging scheme with a single-pixel detector. Inspired by the spatial resolution redundancy of current spatial light modulators (SLMs) relative to the target reconstruction, we design an all-optical spectral splitting device to spatially split the light emitted from the object into several counterparts with different spectra. Separated spectral channels are spatially modulated simultaneously with individual codes by an SLM. This no-moving-part modulation ensures a stable and fast system, and the spatial multiplexing ensures an efficient acquisition. A proof-of-concept setup is built and validated for 8-channel multispectral imaging within the 420~720 nm wavelength range on both macro and micro objects, showing its potential as an efficient multispectral imager for macroscopic and biomedical applications.
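
    The measurement model behind a single-pixel imager is y = Ax, where each row of A is one SLM pattern and each entry of y is one bucket-detector reading. The sketch below treats a single spectral channel and substitutes a plain Tikhonov-regularized least-squares solve for the paper's compressive-sensing reconstruction; all values are synthetic.

```python
# Minimal sketch (one spectral channel, and a plain Tikhonov-regularized
# least-squares solve instead of the paper's compressive-sensing solver):
# single-pixel measurements with random binary SLM patterns.
import numpy as np

rng = np.random.default_rng(0)
n_side = 16                                    # reconstruct a 16x16 scene
n_pix = n_side * n_side
n_meas = 200                                   # fewer measurements than pixels

scene = np.zeros((n_side, n_side))
scene[4:9, 5:12] = 1.0                         # a simple bright rectangle
x_true = scene.ravel()

A = rng.integers(0, 2, size=(n_meas, n_pix)).astype(float)   # binary SLM patterns
y = A @ x_true + rng.normal(0, 0.01, n_meas)                 # bucket-detector readings

lam = 1.0                                       # x_hat = argmin ||Ax - y||^2 + lam*||x||^2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_pix), A.T @ y)
print("reconstruction rms error:", np.sqrt(np.mean((x_hat - x_true) ** 2)))
```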

  9. Beauty and Astrophysics

    NASA Astrophysics Data System (ADS)

    Bessell, Michael S.

    2000-08-01

    Spectacular colour images have been made by combining CCD images in three different passbands using Adobe Photoshop. These beautiful images highlight a variety of astrophysical phenomena and should be a valuable resource for science education and public awareness of science. The wide field images were obtained at the Siding Spring Observatory (SSO) by mounting a Hasselblad or Nikkor telephoto lens in front of a 2K × 2K CCD. Options of more than 30 degrees or 6 degrees square coverage are produced in a single exposure in this way. Narrow band or broad band filters were placed between lens and CCD enabling deep, linear images in a variety of passbands to be obtained. We have mapped the LMC and SMC and are mapping the Galactic Plane for comparison with the Molonglo Radio Survey. Higher resolution images have also been made with the 40 inch telescope of galaxies and star forming regions in the Milky Way.

  10. Digital radiography and caries diagnosis.

    PubMed

    Wenzel, A

    1998-01-01

    Direct digital acquisition of intra-oral radiographs has been possible only in the last decade. Several studies have shown that, theoretically, there are a number of advantages of direct digital radiography compared with conventional film. Laboratory as well as controlled clinical studies are needed to determine whether new digital imaging systems alter diagnosis, treatment and prognosis compared with conventional methods. Most studies so far have evaluated their diagnostic performance only in laboratory settings. This review concentrates on what evidence we have for the diagnostic efficacy of digital systems for caries detection. Digital systems are compared with film and those studies which have evaluated the effects on diagnostic accuracy of contrast and edge enhancement, image size, variations in radiation dose and image compression are reviewed together with the use of automated image analysis for caries diagnosis. Digital intra-oral radiographic systems seem to be as accurate as the currently available dental films for the detection of caries. Sensitivities are relatively high (0.6-0.8) for detection of occlusal lesions into dentine with false positive fractions of 5-10%. A radiolucency in dentine is recognised as a good predictor for demineralisation. Radiography is of no value for the detection of initial (enamel) occlusal lesions. For detection of approximal dentinal lesions, sensitivities, specificities as well as the predictive values are fair, but are very poor for lesions known to be confined to enamel. Very little documented information exists, however, on the utilization of digital systems in the clinic. It is not known whether dose is actually reduced with the storage phosphor system, or whether collimator size is adjusted to fit sensor size in the CCD-based systems. There is no evidence that the number of retakes have been reduced. It is not known how many images are needed with the various CCD systems when compared with a conventional bitewing, nor how stable these systems are in the daily clinical use or whether proper cross-infection control can be maintained in relation to scanning the storage phosphor plates and the sensors and the cable. There is only sparse evidence that the enhancement facilities are used when interpreting images, and none that this has changed working practices or treatment decisions. The economic consequences for the patient, dentist and society require examination.

  11. Line scanning system for direct digital chemiluminescence imaging of DNA sequencing blots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karger, A.E.; Weiss, R.; Gesteland, R.F.

    A cryogenically cooled charge-coupled device (CCD) camera equipped with an area CCD array is used in a line scanning system for low-light-level imaging of chemiluminescent DNA sequencing blots. Operating the CCD camera in time-delayed integration (TDI) mode results in continuous data acquisition independent of the length of the CCD array. Scanning is possible with a resolution of 1.4 line pairs/mm at the 50% level of the modulation transfer function. High-sensitivity, low-light-level scanning of chemiluminescent direct-transfer electrophoresis (DTE) DNA sequencing blots is shown. The detection of DNA fragments on the blot involves DNA-DNA hybridization with oligonucleotide-alkaline phosphatase conjugate and 1,2-dioxetane-based chemiluminescence. The width of the scan allows the recording of up to four sequencing reactions (16 lanes) on one scan. The scan speed of 52 cm/h used for the sequencing blots corresponds to a data acquisition rate of 384 pixels/s. The chemiluminescence detection limit on the scanned images is 3.9 x 10^-18 mol of plasmid DNA. A conditional median filter is described to remove spikes caused by cosmic ray events from the CCD images.

  12. Development and use of an L3CCD high-cadence imaging system for Optical Astronomy

    NASA Astrophysics Data System (ADS)

    Sheehan, Brendan J.; Butler, Raymond F.

    2008-02-01

    A high cadence imaging system, based on a Low Light Level CCD (L3CCD) camera, has been developed for photometric and polarimetric applications. The camera system is an iXon DV-887 from Andor Technology, which uses a CCD97 L3CCD detector from E2V technologies. This is a back illuminated device, giving it an extended blue response, and has an active area of 512×512 pixels. The camera system allows frame-rates ranging from 30 fps (full frame) to 425 fps (windowed & binned frame). We outline the system design, concentrating on the calibration and control of the L3CCD camera. The L3CCD detector can be either triggered directly by a GPS timeserver/frequency generator or be internally triggered. A central PC remotely controls the camera computer system and timeserver. The data is saved as standard `FITS' files. The large data loads associated with high frame rates lead to issues with gathering and storing the data effectively. To overcome such problems, a specific data management approach is used, and a Python/PYRAF data reduction pipeline was written for the Linux environment. This uses calibration data collected either on-site or from lab-based measurements, and enables a fast and reliable method for reducing images. To date, the system has been used twice on the 1.5 m Cassini Telescope in Loiano (Italy); we present the reduction methods and the observations made.
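
    The record mentions a Python/PYRAF reduction pipeline; the sketch below shows only a generic calibration core (median-combined master bias and flat, then bias subtraction and flat-fielding), with arrays standing in for FITS frames. The L3CCD gain calibration and file handling of the actual pipeline are not reproduced.

```python
# Generic calibration core only (the real pipeline also handles FITS I/O and
# L3CCD gain calibration): median-combined master bias and flat, then bias
# subtraction and flat-fielding.  The arrays below stand in for frames on disk.
import numpy as np

def make_master(frames):
    """Median-combine a stack of calibration frames."""
    return np.median(np.stack(frames), axis=0)

def calibrate(science, master_bias, master_flat):
    """Bias-subtract and flat-field one science frame."""
    flat = master_flat - master_bias
    flat = flat / flat.mean()                   # normalize the flat to unit mean
    return (science - master_bias) / flat

rng = np.random.default_rng(2)
sens = 0.9 + 0.2 * rng.random((512, 512))       # pixel-to-pixel sensitivity pattern
bias_frames = [100 + rng.normal(0, 2, (512, 512)) for _ in range(10)]
flat_frames = [100 + 5000 * sens + rng.normal(0, 10, (512, 512)) for _ in range(10)]
science = 100 + 300 * sens                      # a uniform source seen through sens

reduced = calibrate(science, make_master(bias_frames), make_master(flat_frames))
print("residual non-uniformity after flat-fielding:", reduced.std() / reduced.mean())
```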

  13. Miniature Spatial Heterodyne Raman Spectrometer with a Cell Phone Camera Detector.

    PubMed

    Barnett, Patrick D; Angel, S Michael

    2017-05-01

    A spatial heterodyne Raman spectrometer (SHRS) with millimeter-sized optics has been coupled with a standard cell phone camera as a detector for Raman measurements. The SHRS is a dispersive-based interferometer with no moving parts and the design is amenable to miniaturization while maintaining high resolution and large spectral range. In this paper, a SHRS with 2.5 mm diffraction gratings has been developed with 17.5 cm^-1 theoretical spectral resolution. The footprint of the SHRS is orders of magnitude smaller than the footprint of charge-coupled device (CCD) detectors typically employed in Raman spectrometers, thus smaller detectors are being explored to shrink the entire spectrometer package. This paper describes the performance of a SHRS with 2.5 mm wide diffraction gratings and a cell phone camera detector, using only the cell phone's built-in optics to couple the output of the SHRS to the sensor. Raman spectra of a variety of samples measured with the cell phone are compared to measurements made using the same miniature SHRS with high-quality imaging optics and a high-quality, scientific-grade, thermoelectrically cooled CCD.
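
    In a spatial heterodyne spectrometer, each spectral line produces fringes whose spatial frequency scales with its offset from the Littrow wavenumber, so a one-dimensional Fourier transform of a fringe row recovers the spectrum. The sketch below uses arbitrary line positions and an arbitrary fringe-frequency scale, and it ignores the sign ambiguity about Littrow and the instrument calibration.

```python
# Illustrative sketch (arbitrary line positions and fringe-frequency scale; the
# sign ambiguity about the Littrow wavenumber and instrument calibration are
# ignored): recovering a spectrum from a spatial heterodyne fringe pattern.
import numpy as np

n = 1024                                     # detector columns in one fringe row
x = np.linspace(-0.5, 0.5, n)                # normalized position across the sensor
littrow = 18797.0                            # Littrow wavenumber in cm^-1 (532 nm laser)
scale = 0.2                                  # fringe cycles per cm^-1 of offset (arbitrary)
lines = [(littrow - 1001.0, 1.0), (littrow - 1085.0, 0.6)]   # two hypothetical Raman lines

fringes = sum(a * (1 + np.cos(2 * np.pi * scale * abs(s - littrow) * x)) for s, a in lines)
spectrum = np.abs(np.fft.rfft(fringes - fringes.mean()))

recovered = np.sort(np.argsort(spectrum)[-2:])
expected = sorted(round(scale * abs(s - littrow)) for s, _ in lines)
print("recovered fringe frequencies:", recovered, "expected:", expected)
```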

  14. Fundamental performance differences of CMOS and CCD imagers: part V

    NASA Astrophysics Data System (ADS)

    Janesick, James R.; Elliott, Tom; Andrews, James; Tower, John; Pinter, Jeff

    2013-02-01

    Previous papers delivered over the last decade have documented developmental progress made on large pixel scientific CMOS imagers that match or surpass CCD performance. New data and discussions presented in this paper include: 1) a new buried channel CCD fabricated on a CMOS process line, 2) new data products generated by high performance custom scientific CMOS 4T/5T/6T PPD pixel imagers, 3) ultimate CTE and speed limits for large pixel CMOS imagers, 4) fabrication and test results of a flight 4k x 4k CMOS imager for NRL's SoloHi Solar Orbiter Mission, 5) a progress report on ultra large stitched Mk x Nk CMOS imager, 6) data generated by on-chip sub-electron CDS signal chain circuitry used in our imagers, 7) CMOS and CMOSCCD proton and electron radiation damage data for dose levels up to 10 Mrd, 8) discussions and data for a new class of PMOS pixel CMOS imagers and 9) future CMOS development work planned.

  15. CMOS Active Pixel Sensor Star Tracker with Regional Electronic Shutter

    NASA Technical Reports Server (NTRS)

    Yadid-Pecht, Orly; Pain, Bedabrata; Staller, Craig; Clark, Christopher; Fossum, Eric

    1996-01-01

    The guidance system in a spacecraft determines spacecraft attitude by matching an observed star field to a star catalog. ... An APS (active pixel sensor)-based system can reduce mass, power consumption, and radiation effects compared to a CCD (charge-coupled device)-based system. ... This paper reports an APS with locally variable integration times, achieved through individual pixel reset (IPR).

  16. CMOS cassette for digital upgrade of film-based mammography systems

    NASA Astrophysics Data System (ADS)

    Baysal, Mehmet A.; Toker, Emre

    2006-03-01

    While full-field digital mammography (FFDM) technology is gaining clinical acceptance, the overwhelming majority (96%) of the installed base of mammography systems are conventional film-screen (FSM) systems. A high-performance, economical, digital cassette-based product to conveniently upgrade FSM systems to FFDM would accelerate the adoption of FFDM, and make the clinical and technical advantages of FFDM available to a larger population of women. The planned FFDM cassette is based on our commercial Digital Radiography (DR) cassette for 10 cm x 10 cm field-of-view spot imaging and specimen radiography, utilizing a 150 micron columnar CsI(Tl) scintillator and 48 micron active-pixel CMOS sensor modules. Unlike a Computed Radiography (CR) cassette, which requires an external digitizer, our DR cassette transfers acquired images to a display workstation within approximately 5 seconds of exposure, greatly enhancing patient flow. We will present the physical performance of our prototype system against other FFDM systems in clinical use today, using established objective criteria such as the Modulation Transfer Function (MTF), Detective Quantum Efficiency (DQE), and subjective criteria, such as a contrast-detail (CD-MAM) observer performance study. Driven by the strong demand from the computer industry, CMOS technology is one of the lowest-cost and most readily accessible technologies available for FFDM today. Recent popular use of CMOS imagers in high-end consumer cameras has also resulted in significant advances in the imaging performance of CMOS sensors against rivaling CCD sensors. This study promises to take advantage of these unique features to develop the first CMOS based FFDM upgrade cassette.

  17. Optomechanical System Development of the AWARE Gigapixel Scale Camera

    NASA Astrophysics Data System (ADS)

    Son, Hui S.

    Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.

  18. Earth elevation map production and high resolution sensing camera imaging analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai

    2010-11-01

    A digital elevation map of the Earth intended for space camera imaging has been produced, and its effect on imaging has been analysed. Based on the image-motion velocity matching error allowed by the number of TDI CCD integration stages, a statistical experimental technique (the Monte Carlo method) is used to calculate the distribution histogram of the Earth's elevation within an image-motion compensation model that includes changes in satellite attitude, orbital angular rate, latitude, longitude and orbital inclination. Elevation information for the Earth's surface is then read from SRTM data, and the Earth elevation map produced for aerospace electronic cameras is compressed and spliced so that elevation data can be retrieved from flash memory according to the latitude and longitude of the imaging point. When the required elevation falls between two stored samples, linear interpolation is used, which follows the variations of rugged mountains and hills well. Finally, a deviation framework and the camera controller are used to test the behaviour of deviation angle errors, and a TDI CCD camera simulation system based on an object-point-to-image-point correspondence model is used to analyse the imaging MTF and a cross-correlation similarity measure; the simulation system accumulates the TDI CCD imaging with the corresponding horizontal and vertical pixel offsets to simulate camera imaging when the stability of the satellite attitude changes. The process is practical: it effectively controls the camera memory requirements and allows a TDI CCD camera to match the image-motion velocity and image with very good precision.
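
    The elevation look-up described above reduces to interpolating a gridded DEM at the imaging point's latitude and longitude. The sketch below shows bilinear interpolation (the 2-D analogue of the linear interpolation mentioned) on a made-up grid standing in for the compressed SRTM-derived map.

```python
# Illustrative only: bilinear interpolation (the 2-D analogue of the linear
# interpolation mentioned above) of a gridded elevation map at an arbitrary
# latitude/longitude.  The grid is a made-up stand-in for the SRTM-derived map.
import numpy as np

def elevation_at(dem, lat0, lon0, dlat, dlon, lat, lon):
    """Bilinearly interpolate dem[i, j], where i indexes latitude and j longitude."""
    fi, fj = (lat - lat0) / dlat, (lon - lon0) / dlon
    i, j = int(np.floor(fi)), int(np.floor(fj))
    ti, tj = fi - i, fj - j
    return ((1 - ti) * (1 - tj) * dem[i, j] + (1 - ti) * tj * dem[i, j + 1]
            + ti * (1 - tj) * dem[i + 1, j] + ti * tj * dem[i + 1, j + 1])

# A 1-degree grid covering 30-40 deg N, 100-110 deg E with synthetic terrain.
lat0, lon0, dlat, dlon = 30.0, 100.0, 1.0, 1.0
lats = lat0 + dlat * np.arange(11)
lons = lon0 + dlon * np.arange(11)
dem = 1000 + 500 * np.outer(np.sin(np.radians(lats)), np.cos(np.radians(lons)))

print("elevation at 34.37 N, 103.81 E:", elevation_at(dem, lat0, lon0, dlat, dlon, 34.37, 103.81))
```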

  19. Cloud detection method for Chinese moderate high resolution satellite imagery (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhong, Bo; Chen, Wuhan; Wu, Shanlong; Liu, Qinhuo

    2016-10-01

    Cloud detection in satellite imagery is very important for quantitative remote sensing research and remote sensing applications. However, many satellite sensors do not have enough bands for quick, accurate, and simple detection of clouds. In particular, the newly launched moderate to high spatial resolution satellite sensors of China, such as the charge-coupled device on board the Chinese Huan Jing 1 (HJ-1/CCD) and the wide field of view (WFV) sensor on board the Gao Fen 1 (GF-1), only have four available bands (blue, green, red, and near infrared), which falls far short of the requirements of most cloud detection methods. In order to solve this problem, an improved and automated cloud detection method for Chinese satellite sensors called OCM (Object oriented Cloud and cloud-shadow Matching method) is presented in this paper. It first modifies the Automatic Cloud Cover Assessment (ACCA) method, which was developed for Landsat-7 data, to obtain initial cloud maps. The modified ACCA method is mainly threshold-based, and different threshold settings produce different cloud maps: a strict threshold produces a cloud map with high confidence but a large amount of cloud omission, while a loose threshold produces a cloud map with low confidence and a large amount of commission. Secondly, a corresponding cloud-shadow map is produced by thresholding the near-infrared band. Thirdly, the cloud maps and the cloud-shadow map are converted to cloud objects and cloud-shadow objects. Clouds and cloud shadows usually come in pairs; consequently, the final cloud and cloud-shadow maps are made based on the relationship between cloud and cloud-shadow objects. The OCM method was tested using almost 200 HJ-1/CCD images across China, and the overall accuracy of cloud detection is close to 90%.
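
    The dual-threshold idea can be sketched as a hysteresis-style fusion: keep loose-threshold cloud objects only where they contain high-confidence pixels, and derive a shadow mask from the near-infrared band. The thresholds, bands, and synthetic scene below are illustrative and are not the OCM parameters; the geometric cloud-shadow pairing is not reproduced.

```python
# Simplified sketch (thresholds, bands and the pairing rule are illustrative,
# not the OCM parameters): dual-confidence cloud masks fused by hysteresis,
# plus a cloud-shadow mask from the near-infrared band.
import numpy as np
from scipy import ndimage

def cloud_and_shadow(blue, nir, strict_t=0.45, loose_t=0.30, shadow_t=0.10):
    strict = blue > strict_t                 # high confidence, misses thin cloud
    loose = blue > loose_t                   # low confidence, over-commits
    labels, _ = ndimage.label(loose)
    keep = np.unique(labels[strict & (labels > 0)])   # loose objects containing strict pixels
    cloud = np.isin(labels, keep)
    shadow = nir < shadow_t                  # dark in the near infrared
    return cloud, shadow

rng = np.random.default_rng(3)
blue = 0.20 + 0.05 * rng.random((200, 200))
nir = 0.30 + 0.05 * rng.random((200, 200))
blue[50:80, 60:100] += 0.40                  # synthetic bright cloud
nir[90:120, 60:100] -= 0.26                  # its displaced shadow
cloud, shadow = cloud_and_shadow(blue, nir)
print(f"cloud fraction {cloud.mean():.3f}, shadow fraction {shadow.mean():.3f}")
```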

  20. CCD centroiding analysis for Nano-JASMINE observation data

    NASA Astrophysics Data System (ADS)

    Niwa, Yoshito; Yano, Taihei; Araki, Hiroshi; Gouda, Naoteru; Kobayashi, Yukiyasu; Yamada, Yoshiyuki; Tazawa, Seiichi; Hanada, Hideo

    2010-07-01

    Nano-JASMINE is a very small satellite mission for global space astrometry with milli-arcsecond accuracy, which will be launched in 2011. In this mission, centroids of stars in CCD image frames are estimated with sub-pixel accuracy. In order to realize such high-precision centroiding, an algorithm utilizing a least-squares method is employed. One of its advantages is that centroids can be calculated without an explicit assumption about the point spread functions of the stars. A CCD centroiding experiment has been performed to investigate whether this data analysis is feasible, and the centroids of artificial star images on a CCD are determined with a precision of better than 0.001 pixel. This result indicates that parallaxes of stars within 300 pc of the Sun can be observed with Nano-JASMINE.
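
    For illustration only, the sketch below performs sub-pixel centroiding by least-squares fitting of a 2-D Gaussian to a noisy synthetic star image. Note that the Nano-JASMINE algorithm is notable precisely for avoiding an explicit PSF model; the Gaussian here is a stand-in chosen for brevity.

```python
# Illustration only: sub-pixel centroiding by least-squares fitting of a 2-D
# Gaussian to a noisy synthetic star image.  (The Nano-JASMINE algorithm is
# notable for avoiding an explicit PSF model; the Gaussian here is a stand-in.)
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    g = amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset
    return g.ravel()

rng = np.random.default_rng(4)
yy, xx = np.mgrid[0:21, 0:21]
true_x, true_y = 10.312, 9.847
star = gauss2d((xx, yy), 500.0, true_x, true_y, 1.8, 20.0).reshape(21, 21)
star = star + rng.normal(0, 3, star.shape)           # readout-like noise

p0 = (star.max(), 10.0, 10.0, 2.0, float(np.median(star)))
popt, _ = curve_fit(gauss2d, (xx, yy), star.ravel(), p0=p0)
print(f"centroid error: dx = {popt[1] - true_x:+.4f} px, dy = {popt[2] - true_y:+.4f} px")
```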

  1. Printed circuit board for a CCD camera head

    DOEpatents

    Conder, Alan D.

    2002-01-01

    A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.

  2. A compact bio-inspired visible/NIR imager for image-guided surgery (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Gao, Shengkui; Garcia, Missael; Edmiston, Chris; York, Timothy; Marinov, Radoslav; Mondal, Suman B.; Zhu, Nan; Sudlow, Gail P.; Akers, Walter J.; Margenthaler, Julie A.; Liang, Rongguang; Pepino, Marta; Achilefu, Samuel; Gruev, Viktor

    2016-03-01

    Inspired by the visual system of the morpho butterfly, we have designed, fabricated, tested and clinically translated an ultra-sensitive, lightweight and compact imaging sensor capable of simultaneously capturing near infrared (NIR) and visible spectrum information. The visual system of the morpho butterfly combines photosensitive cells with spectral filters at the receptor level. The spectral filters are realized by alternating layers of high and low dielectric constant, such as air and cytoplasm. We have successfully mimicked this concept by integrating pixelated spectral filters, realized by alternating silicon dioxide and silicon nitride layers, with an array of CCD detectors. There are four different types of pixelated spectral filters in the imaging plane: red, green, blue and NIR. The high optical density (OD) of all spectral filters (OD>4) allows for efficient rejection of photons from unwanted bands. The single imaging chip weighs 20 grams and has a form factor of 5 mm by 5 mm. The imaging camera is integrated with a goggle display system. A tumor targeted agent, LS301, is used to identify all spontaneous tumors in a transgenic PyMT murine model of breast cancer. The imaging system achieved sensitivity of 98% and selectivity of 95%. We also used our imaging sensor to locate sentinel lymph nodes (SLNs) in patients with breast cancer using an indocyanine green tracer. The surgeon was able to identify 100% of SLNs when using our bio-inspired imaging system, compared to 93% when using information from the lymphotropic dye and 96% when using information from the radioactive tracer.

  3. A Wide Dynamic Range Tapped Linear Array Image Sensor

    NASA Astrophysics Data System (ADS)

    Washkurak, William D.; Chamberlain, Savvas G.; Prince, N. Daryl

    1988-08-01

    Detectors for acousto-optic signal processing applications require fast transient response as well as wide dynamic range. There are two major choices of detectors: conductive or integration mode. Conductive mode detectors have an initial transient period before they reach their equilibrium state. The duration of this period is dependent on light level as well as detector capacitance. At low light levels a conductive mode detector is very slow; response time is typically on the order of milliseconds. Generally, to obtain fast transient response an integrating mode detector is preferred. With integrating mode detectors, the dynamic range is determined by the charge storage capability of the transport shift registers and the noise level of the image sensor. The conventional method used to improve dynamic range is to increase the shift register charge storage capability. To achieve a dynamic range of fifty thousand, assuming two hundred noise-equivalent electrons, a charge storage capability of ten million electrons would be required. In order to accommodate this amount of charge, unrealistic shift register widths would be required. Therefore, with an integrating mode detector it is difficult to achieve a dynamic range of over four orders of magnitude of input light intensity. Another alternative is to solve the problem at the photodetector and not the shift register. DALSA's wide dynamic range detector utilizes an optimized, ion-implant-doped, profiled MOSFET photodetector specifically designed for wide dynamic range. When this new detector operates at high speed and at low light levels, the photons are collected and stored in an integrating fashion. However, at bright light levels where transient periods are short, the detector switches into a conductive mode. The light intensity is logarithmically compressed into small charge packets, easily carried by the CCD shift register. As a result of the logarithmic conversion, dynamic ranges of over six orders of magnitude are obtained. To achieve the short integration times necessary in acousto-optic applications, the wide dynamic range detector has been implemented in a tapped array architecture with eight outputs and 256 photoelements. Operation of each output at 16 MHz yields detector integration times of 2 microseconds. Buried-channel two-phase CCD shift register technology is utilized to minimize image sensor noise, improve video output rates, and increase ease of operation.

  4. Development of a pyramidal wavefront sensor test-bench at INO

    NASA Astrophysics Data System (ADS)

    Turbide, Simon; Wang, Min; Gauvin, Jonny; Martin, Olivier; Savard, Maxime; Bourqui, Pascal; Veran, Jean-Pierre; Deschenes, William; Anctil, Genevieve; Chateauneuf, François

    2013-12-01

    The key technical element of adaptive optics in astronomy is wavefront sensing (WFS). One of the advantages of the pyramid wavefront sensor (P-WFS) over the widely used Shack-Hartmann wavefront sensor seems to be the increased sensitivity in closed-loop applications. A high-sensitivity and large dynamic-range WFS, such as P-WFS technology, still needs to be further investigated for proper justification in future Extremely Large Telescope applications. At INO, we have recently carried out the optical design, testing and performance evaluation of a P-WFS bench setup. The optical design of the bench setup mainly consists of the super-LED fiber source, source collimator, spatial light modulator (SLM), relay lenses, tip-tilt mirror, Fourier-transforming lens, and a four-faceted glass pyramid with a large vertex angle as well as pupil re-imaging optics. The phase-only SLM has been introduced in the bench setup to generate atmospheric turbulence with a maximum phase shift of more than 2π at each pixel (256 grey levels). Like a modified Foucault knife-edge test, the refractive pyramid element is used to produce four images of the entrance pupil on a CCD camera. The Fourier-transforming lens, which is used before the pyramid prism, is designed for telecentric output to allow dynamic modulation (rotation of the beam around the pyramid-prism center) from a tip-tilt mirror. Furthermore, a P-WFS diffraction-based model has been developed. This model includes most of the system limitations such as the SLM discrete voltage steps and the CCD pixel pitch. The pyramid effects (edges and tip) are considered as well. The modal wavefront reconstruction algorithm relies on the construction of an interaction matrix (one for each modulation amplitude). Each column of the interaction matrix represents the combination of the four pupil images for a given wavefront aberration. The close agreement between the data and the model suggests that the limitation of the system is not the P-WFS itself, but rather its environment, such as source intensity fluctuation and vibration of the optical bench. Finally, the phase-reconstruction errors of the P-WFS have been compared to those of a Shack-Hartmann, showing the regions of interest of the former system. The bench setup will address astronomy applications as well as commercial applications, such as biomedical applications.
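
    The modal reconstruction step can be sketched as calibrating an interaction matrix by poking each mode and then applying its pseudo-inverse to a measured signal. The linear "sensor" below is a generic random response standing in for the P-WFS pupil-image signals; it is not the diffraction model described above.

```python
# Minimal sketch of modal reconstruction from an interaction matrix.  The
# black-box sensor below is a generic random linear response standing in for
# the P-WFS pupil-image signals, not the paper's diffraction model.
import numpy as np

rng = np.random.default_rng(5)
n_signals, n_modes = 400, 20                      # pupil-image pixels, modal basis size
_response = rng.normal(size=(n_signals, n_modes)) # hidden linear response of the bench

def measure(coeffs, noise=0.0):
    """Black-box wavefront sensor: returns the stacked pupil-image signal."""
    return _response @ coeffs + rng.normal(scale=noise, size=n_signals)

# Calibration: apply each mode with unit amplitude on the SLM and record the signal.
IM = np.column_stack([measure(np.eye(n_modes)[:, k]) for k in range(n_modes)])
R = np.linalg.pinv(IM)                            # least-squares reconstructor

true_coeffs = rng.normal(scale=0.3, size=n_modes) # an unknown aberration
est_coeffs = R @ measure(true_coeffs, noise=0.01)
print("rms modal error:", np.sqrt(np.mean((est_coeffs - true_coeffs) ** 2)))
```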

  5. Quasi-Speckle Measurements of Close Double Stars With a CCD Camera

    NASA Astrophysics Data System (ADS)

    Harshaw, Richard

    2017-01-01

    CCD measurements of visual double stars have been an active area of amateur observing for several years now. However, most CCD measurements rely on “lucky imaging” (selecting a very small percentage of the best frames of a larger frame set so as to get the best “frozen” atmosphere for the image), a technique that has limitations with regard to how close the stars can be and still be cleanly resolved in the lucky image. In this paper, the author reports how using deconvolution stars in the analysis of close double stars can greatly enhance the quality of the autocorrelogram, leading to a more precise solution using speckle reduction software rather than lucky imaging.

  6. Compression of CCD raw images for digital still cameras

    NASA Astrophysics Data System (ADS)

    Sriram, Parthasarathy; Sudharsanan, Subramania

    2005-03-01

    Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.
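
    To make the idea of block-adaptive lossless prediction concrete, here is a minimal sketch (not the authors' scheme) that, for each block of a Bayer-pattern raw image, picks whichever same-colour neighbour predictor gives the smaller residual energy; the residuals and per-block mode flags would then be entropy coded. The names, block size, and the two candidate predictors are illustrative assumptions.

        import numpy as np

        # Hedged sketch of block-adaptive prediction for CFA (Bayer) raw data.
        # Predictors use the neighbour two pixels away so they stay within one
        # colour plane of the mosaic. Edge wrap-around from np.roll is ignored
        # here for brevity; a real coder would handle borders explicitly.
        def block_adaptive_residuals(raw, block=16):
            h, w = raw.shape
            residuals = np.zeros((h, w), dtype=np.int32)
            modes = []
            for y in range(0, h, block):
                for x in range(0, w, block):
                    tile = raw[y:y + block, x:x + block].astype(np.int32)
                    left = tile - np.roll(tile, 2, axis=1)   # same-colour left neighbour
                    up = tile - np.roll(tile, 2, axis=0)     # same-colour upper neighbour
                    use_left = np.abs(left).sum() <= np.abs(up).sum()
                    residuals[y:y + block, x:x + block] = left if use_left else up
                    modes.append(0 if use_left else 1)
            return residuals, modes   # to be entropy coded (e.g. Golomb-Rice)

        raw = np.random.randint(0, 4096, size=(64, 64), dtype=np.uint16)  # stand-in 12-bit raw
        res, modes = block_adaptive_residuals(raw)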

  7. Real-Time Label-Free Detection of Suspicious Powders Using Noncontact Optical Methods

    DTIC Science & Technology

    2013-11-05

    energy in a small, 1 pound, low power consumption package; and 2) new technology resistive gate linear CCD array detectors developed by Hamamatsu Corp...as a wide range of possible interferent or confusant organic materials such as powdered sugar, granulated sugar, fruit pectin, flour, corn starch ...resolution, room temperature, resistive gate linear CCD array, the BRANE sensor SWAP decreases along with a decrease in sensitivity, but the information

  8. Benchtop and Animal Validation of a Projective Imaging System for Potential Use in Intraoperative Surgical Guidance

    PubMed Central

    Gan, Qi; Wang, Dong; Ye, Jian; Zhang, Zeshu; Wang, Xinrui; Hu, Chuanzhen; Shao, Pengfei; Xu, Ronald X.

    2016-01-01

    We propose a projective navigation system for fluorescence imaging and image display in a natural mode of visual perception. The system consists of an excitation light source, a monochromatic charge-coupled device (CCD) camera, a host computer, a projector, a proximity sensor and a complementary metal-oxide-semiconductor (CMOS) camera. With perspective transformation and calibration, our surgical navigation system is able to achieve an overall imaging speed higher than 60 frames per second, with a latency of 330 ms, a spatial sensitivity better than 0.5 mm in both vertical and horizontal directions, and a projection bias of less than 1 mm. The technical feasibility of image-guided surgery is demonstrated in both agar-agar gel phantoms and an ex vivo chicken breast model embedding indocyanine green (ICG). The biological utility of the system is demonstrated in vivo in a classic model of ICG hepatic metabolism. Our benchtop, ex vivo and in vivo experiments demonstrate the clinical potential for intraoperative delineation of disease margins and image-guided resection surgery. PMID:27391764
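
    The perspective transformation mentioned above can be summarised as a 3x3 homography mapping camera pixels into the projector frame. The sketch below shows only this generic mapping step; the matrix values are placeholders, and in the real system the matrix would be estimated from calibration targets seen by both devices.

        import numpy as np

        # Hedged sketch: apply a 3x3 homography H to map camera pixel coordinates
        # into projector coordinates (homogeneous normalisation included).
        def apply_homography(H, points):
            """points: (n, 2) camera pixels -> (n, 2) projector pixels."""
            homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
            mapped = homogeneous @ H.T
            return mapped[:, :2] / mapped[:, 2:3]

        H = np.array([[1.02, 0.01,  5.0],     # placeholder calibration result
                      [0.00, 0.98, -3.0],
                      [1e-5, 0.00,  1.0]])
        print(apply_homography(H, np.array([[100.0, 200.0]])))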

  9. UV-visible sensors based on polymorphous silicon

    NASA Astrophysics Data System (ADS)

    Guedj, Cyril S.; Cabarrocas, Pere R. i.; Massoni, Nicolas; Moussy, Norbert; Morel, Damien; Tchakarov, Svetoslav; Bonnassieux, Yvan

    2003-09-01

    UV-based imaging systems can be used for low-altitude rocket detection or biological agent identification (for instance, weapons containing anthrax). Compared to conventional CCD technology, CMOS-based active pixel sensors provide several advantages, including excellent electro-optical performance, high integration, low-voltage operation, low power consumption, low cost, long lifetime, and robustness against the environment. The monolithic integration of UV, visible and infrared detectors on the same uncooled CMOS smart system would therefore represent a major advance in the combat field, for the characterization and representation of targets and backgrounds. In this approach, we have recently developed a novel technology using polymorphous silicon. This new material, fully compatible with above-IC silicon technology, is made of nanometric-size ordered domains embedded in an amorphous matrix. The typical quantum efficiency of detectors made of this nano-material reaches up to 80% at 550 nm and 30% in the UV range, depending on the design and the growth parameters. Furthermore, a record dark current of 20 pA/cm2 at -3 V has been reached. In addition, this new generation of sensors is significantly faster and more stable than its amorphous silicon counterparts. In this paper, we present the relationship between the sensor technology and the overall performance.

  10. Performance evaluation of low-cost airglow cameras for mesospheric gravity wave measurements

    NASA Astrophysics Data System (ADS)

    Suzuki, S.; Shiokawa, K.

    2016-12-01

    Atmospheric gravity waves contribute significantly to the wind/thermal balances in the mesosphere and lower thermosphere (MLT) through their vertical transport of horizontal momentum. It has been reported that the gravity wave momentum flux is preferentially associated with the scale of the waves; the momentum fluxes of waves with a horizontal scale of 10-100 km are particularly significant. Airglow imaging is a useful technique for observing the two-dimensional structure of small-scale (<100 km) gravity waves in the MLT region and has been used to investigate the global behaviour of the waves. Recent studies with simultaneous/multiple airglow cameras have derived the spatial extent of the MLT waves. Such network imaging observations are advantageous for a better understanding of the coupling between the lower and upper atmosphere via gravity waves. In this study, we developed new low-cost airglow cameras to enlarge the airglow imaging network. Each camera has a fish-eye lens with a 185-deg field of view and is equipped with a CCD video camera (WATEC WAT-910HX); the camera is small (W35.5 x H36.0 x D63.5 mm) and much less expensive than the airglow cameras used for the existing ground-based network (Optical Mesosphere Thermosphere Imagers (OMTI), operated by the Solar-Terrestrial Environmental Laboratory, Nagoya University), and has a CCD sensor with 768 x 494 pixels that is sensitive enough to detect mesospheric OH airglow emission perturbations. In this presentation, we report results of the performance evaluation of this camera made at Shigaraki (35-deg N, 136-deg E), Japan, which is one of the OMTI stations. By summing 15 images (i.e., a 1-min composite of the images) we recognised clear gravity wave patterns with quality comparable to the OMTI images. Outreach and educational activities based on this research will also be reported.

  11. Adaptive Optics at Lawrence Livermore National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavel, D T

    2003-03-10

    Adaptive optics enables high-resolution imaging through the atmosphere by correcting for the turbulent air's aberrations to the light waves passing through it. The Lawrence Livermore National Laboratory has for a number of years been at the forefront of applying adaptive optics technology to astronomy on the world's largest astronomical telescopes, in particular at the Keck 10-meter telescope on Mauna Kea, Hawaii. The technology includes the development of high-speed electrically driven deformable mirrors, high-speed low-noise CCD sensors, and real-time wavefront reconstruction and control hardware. Adaptive optics finds applications in many other areas where light beams pass through aberrating media and must be corrected to maintain diffraction-limited performance. We describe systems and results in astronomy, medicine (vision science), and horizontal-path imaging, all active programs in our group.

  12. A matter of collection and detection for intraoperative and noninvasive near-infrared fluorescence molecular imaging: To see or not to see?

    PubMed Central

    Zhu, Banghe; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2014-01-01

    Purpose: Although fluorescence molecular imaging is rapidly evolving as a new combinational drug/device technology platform for molecularly guided surgery and noninvasive imaging, there remain no performance standards for efficient translation of “first-in-humans” fluorescent imaging agents using these devices. Methods: The authors employed a stable, solid phantom designed to exaggerate the confounding effects of tissue light scattering and to mimic the low concentrations (nM–pM) of near-infrared fluorescent dyes expected clinically for molecular imaging, in order to evaluate and compare the charge coupled device (CCD) camera systems commonly employed in preclinical studies and in human investigational studies. Results: The results show that intensified CCD systems offer greater contrast with larger signal-to-noise ratios than their unintensified CCD counterparts when operated at clinically reasonable, subsecond acquisition times. Conclusions: Camera imaging performance could impact the success of future “first-in-humans” near-infrared fluorescence imaging agent studies. PMID:24506637

  13. A matter of collection and detection for intraoperative and noninvasive near-infrared fluorescence molecular imaging: To see or not to see?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Banghe; Rasmussen, John C.; Sevick-Muraca, Eva M., E-mail: Eva.Sevick@uth.tmc.edu

    2014-02-15

    Purpose: Although fluorescence molecular imaging is rapidly evolving as a new combinational drug/device technology platform for molecularly guided surgery and noninvasive imaging, there remain no performance standards for efficient translation of “first-in-humans” fluorescent imaging agents using these devices. Methods: The authors employed a stable, solid phantom designed to exaggerate the confounding effects of tissue light scattering and to mimic the low concentrations (nM–pM) of near-infrared fluorescent dyes expected clinically for molecular imaging, in order to evaluate and compare the charge coupled device (CCD) camera systems commonly employed in preclinical studies and in human investigational studies. Results: The results show that intensified CCD systems offer greater contrast with larger signal-to-noise ratios than their unintensified CCD counterparts when operated at clinically reasonable, subsecond acquisition times. Conclusions: Camera imaging performance could impact the success of future “first-in-humans” near-infrared fluorescence imaging agent studies.

  14. A New Serial-direction Trail Effect in CCD Images of the Lunar-based Ultraviolet Telescope

    NASA Astrophysics Data System (ADS)

    Wu, C.; Deng, J. S.; Guyonnet, A.; Antilogus, P.; Cao, L.; Cai, H. B.; Meng, X. M.; Han, X. H.; Qiu, Y. L.; Wang, J.; Wang, S.; Wei, J. Y.; Xin, L. P.; Li, G. W.

    2016-10-01

    Unexpected trails have been seen following relatively bright sources in astronomical images taken with the CCD camera of the Lunar-based Ultraviolet Telescope (LUT) since its first light on the Moon's surface. The trails appear only in the serial direction of CCD readout, distinguishing them from the image trails of radiation-damaged space-borne CCDs, which usually appear in the parallel-readout direction. After analyzing the same trail defects following warm pixels (WPs) in dark frames, we found that the relative intensity profile of the LUT CCD trails can be expressed as an exponential function of the distance i (in number of pixels) of the trailing pixel from the original source (or WP), i.e., exp(αi + β). The parameters α and β appear to be independent of the CCD temperature, the intensity of the source (or WP), and its position in the CCD frame. The main trail characteristics evolve at an increase rate of ~(7.3 ± 3.6) × 10^-4 over the first two years of operation. The trails affect the consistency of the profiles of sources of different brightness, so that smaller-aperture photometry carries a larger extra systematic error. The astrometric uncertainty caused by the trails is small enough to be acceptable given the LUT requirements for astrometric accuracy. Based on the empirical profile model, a correction method has been developed for LUT images that works well for restoring the fluxes of astronomical sources lost to trailing pixels.
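
    Under the empirical profile above, each pixel of intensity I leaves a trail of I·exp(αi + β) at serial offset i. A hedged sketch of a correction along the lines described (remove the predicted trail from downstream pixels and return that charge to the source) could look like the following; α, β and the trail length are placeholders to be fitted from warm-pixel profiles, and this is not the authors' exact implementation.

        import numpy as np

        # Hedged sketch of a serial-trail correction based on the empirical model
        # trail(i) = I * exp(alpha*i + beta). Rows are assumed to run along the
        # serial readout direction; alpha, beta and max_len are placeholder values.
        def correct_serial_trails(image, alpha=-1.0, beta=-3.0, max_len=20):
            img = image.astype(np.float64).copy()
            kernel = np.exp(alpha * np.arange(1, max_len + 1) + beta)  # relative trail profile
            for row in img:
                for x in range(row.size - 1):
                    trail = row[x] * kernel[: row.size - x - 1]
                    row[x + 1 : x + 1 + trail.size] -= trail   # remove predicted trail downstream
                    row[x] += trail.sum()                       # return the lost charge to the source
            return img

        frame = np.random.poisson(100.0, size=(32, 64)).astype(float)  # stand-in image
        corrected = correct_serial_trails(frame)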

  15. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a realtime, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in High-Frame-Rate CCD Camera Having Subwindow Capability (NPO- 30564) NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/ transistor-logic (TTL)-level signals from a field programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).

  16. Laser-induced damage threshold of camera sensors and micro-opto-electro-mechanical systems

    NASA Astrophysics Data System (ADS)

    Schwarz, Bastian; Ritt, Gunnar; Körber, Michael; Eberle, Bernd

    2016-10-01

    The continuous development of laser systems towards more compact and efficient devices constitutes an increasing threat to electro-optical imaging sensors such as complementary metal-oxide-semiconductors (CMOS) and charge-coupled devices (CCD). These types of electronic sensors are used in day-to-day life but also in military or civil security applications. In camera systems dedicated to specific tasks, also micro-opto-electro-mechanical systems (MOEMS) like a digital micromirror device (DMD) are part of the optical setup. In such systems, the DMD can be located at an intermediate focal plane of the optics and it is also susceptible to laser damage. The goal of our work is to enhance the knowledge of damaging effects on such devices exposed to laser light. The experimental setup for the investigation of laser-induced damage is described in detail. As laser sources both pulsed lasers and continuous-wave (CW) lasers are used. The laser-induced damage threshold (LIDT) is determined by the single-shot method by increasing the pulse energy from pulse to pulse or in the case of CW-lasers, by increasing the laser power. Furthermore, we investigate the morphology of laser-induced damage patterns and the dependence of the number of destructed device elements on the laser pulse energy or laser power. In addition to the destruction of single pixels, we observe aftereffects like persisting dead columns or rows of pixels in the sensor image.

  17. The astro-geodetic use of CCD for gravity field refinement

    NASA Astrophysics Data System (ADS)

    Gerstbach, G.

    1996-07-01

    The paper starts with a review of geoid projects, where vertical deflections are more effective than gravimetry. In alpine regions the economy of astrogeoids is at least 10 times higher, but many countries do not make use of this fact - presumably because the measurements are not fully automated up to now. Based upon the experiences of astrometry of high satellites and own tests the author analyses the use of CCD for astro-geodetic measurements. Automation and speeding up will be possible in a few years, the latter depending on the observation scheme. Sensor characteristics, cooling and reading out of the devices should be harmonized. Using line sensors in small prism astrolabes, the CCD accuracy will reach the visual one (±0.2″) within 5-10 years. Astrogeoids can be combined ideally with geological data, because vertical variation of rock densities does not cause systematic effects (contrary to gravimetry). So a geoid of ±5 cm accuracy (achieved in Austria and other alpine countries by 5-10 points per 1000 km 2) can be improved to ±2 cm without additional observations and border effects.

  18. Genetically expressed voltage sensor ArcLight for imaging large scale cortical activity in the anesthetized and awake mouse

    PubMed Central

    Borden, Peter Y.; Ortiz, Alex D.; Waiblinger, Christian; Sederberg, Audrey J.; Morrissette, Arthur E.; Forest, Craig R.; Jaeger, Dieter; Stanley, Garrett B.

    2017-01-01

    Abstract. With the recent breakthrough in genetically expressed voltage indicators (GEVIs), there has been a tremendous demand to determine the capabilities of these sensors in vivo. Novel voltage sensitive fluorescent proteins allow for direct measurement of neuron membrane potential changes through changes in fluorescence. Here, we utilized ArcLight, a recently developed GEVI, and examined the functional characteristics in the widely used mouse somatosensory whisker pathway. We measured the resulting evoked fluorescence using a wide-field microscope and a CCD camera at 200 Hz, which enabled voltage recordings over the entire cortical region with high temporal resolution. We found that ArcLight produced a fluorescent response in the S1 barrel cortex during sensory stimulation at single whisker resolution. During wide-field cortical imaging, we encountered substantial hemodynamic noise that required additional post hoc processing through noise subtraction techniques. Over a period of 28 days, we found clear and consistent ArcLight fluorescence responses to a simple sensory input. Finally, we demonstrated the use of ArcLight to resolve cortical S1 sensory responses in the awake mouse. Taken together, our results demonstrate the feasibility of ArcLight as a measurement tool for mesoscopic, chronic imaging. PMID:28491905

  19. Multipurpose active pixel sensor (APS)-based microtracker

    NASA Astrophysics Data System (ADS)

    Eisenman, Allan R.; Liebe, Carl C.; Zhu, David Q.

    1998-12-01

    A new, photon-sensitive, imaging array, the active pixel sensor (APS) has emerged as a competitor to the CCD imager for use in star and target trackers. The Jet Propulsion Laboratory (JPL) has undertaken a program to develop a new generation, highly integrated, APS-based, multipurpose tracker: the Programmable Intelligent Microtracker (PIM). The supporting hardware used in the PIM has been carefully selected to enhance the inherent advantages of the APS. Adequate computation power is included to perform star identification, star tracking, attitude determination, space docking, feature tracking, descent imaging for landing control, and target tracking capabilities. Its first version uses a JPL developed 256 X 256-pixel APS and an advanced 32-bit RISC microcontroller. By taking advantage of the unique features of the APS/microcontroller combination, the microtracker will achieve about an order-of-magnitude reduction in mass and power consumption compared to present state-of-the-art star trackers. It will also add the advantage of programmability to enable it to perform a variety of star, other celestial body, and target tracking tasks. The PIM is already proving the usefulness of its design concept for space applications. It is demonstrating the effectiveness of taking such an integrated approach in building a new generation of high performance, general purpose, tracking instruments to be applied to a large variety of future space missions.

  20. Design, development, and testing of the DCT Cassegrain instrument support assembly

    NASA Astrophysics Data System (ADS)

    Bida, Thomas A.; Dunham, Edward W.; Nye, Ralph A.; Chylek, Tomas; Oliver, Richard C.

    2012-09-01

    The 4.3m Discovery Channel Telescope delivers an f/6.1 unvignetted 0.5° field to its RC focal plane. In order to support guiding, wavefront sensing, and instrument installations, a Cassegrain instrument support assembly has been developed which includes a facility guider and wavefront sensor package (GWAVES) and multiple interfaces for instrumentation. A 2-element, all-spherical, fused-silica corrector compensates for field curvature and astigmatism over the 0.5° FOV, while reducing ghost pupil reflections to minimal levels. Dual roving GWAVES camera probes pick off stars in the outer annulus of the corrected field, providing simultaneous guiding and wavefront sensing for telescope operations. The instrument cube supports 5 co-mounted instruments with rapid feed selection via deployable fold mirrors. The corrected beam passes through a dual filter wheel before imaging with the 6K x 6K single CCD of the Large Monolithic Imager (LMI). We describe key development strategies for the DCT Cassegrain instrument assembly and GWAVES, including construction of a prime focus test assembly with wavefront sensor utilized in fall 2011 to begin characterization of the DCT primary mirror support. We also report on 2012 on-sky test results of wavefront sensing, guiding, and imaging with the integrated Cassegrain cube.

  1. Colorimetric Sensor Array for White Wine Tasting.

    PubMed

    Chung, Soo; Park, Tu San; Park, Soo Hyun; Kim, Joon Yong; Park, Seongmin; Son, Daesik; Bae, Young Min; Cho, Seong In

    2015-07-24

    A colorimetric sensor array was developed to characterize and quantify the taste of white wines. A charge-coupled device (CCD) camera captured images of the sensor array from 23 different white wine samples, and the changes in the R, G, B color components relative to the control were analyzed by principal component analysis. Additionally, high performance liquid chromatography (HPLC) was used to analyze the chemical components of each wine sample responsible for its taste. A two-dimensional score plot was created with 23 data points. It revealed clusters created from the same type of grape, and trends of sweetness, sourness, and astringency were mapped. An artificial neural network model was developed to predict the degree of sweetness, sourness, and astringency of the white wines. The coefficients of determination (R2) for the HPLC results and the sweetness, sourness, and astringency were 0.96, 0.95, and 0.83, respectively. This research could provide a simple, low-cost, yet sensitive taste prediction system and, by aiding consumer selection, could have a positive effect on the wine industry.
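
    A minimal sketch of the first analysis step described above (stacking the per-spot RGB changes into one feature vector per wine and projecting onto two principal components) is shown below. The number of sensor spots and the random stand-in data are assumptions for illustration only.

        import numpy as np

        # Hedged sketch: the RGB change of each sensor spot relative to the control
        # image becomes a feature vector per wine sample; a 2-D PCA score plot
        # is obtained by projecting onto the first two principal components.
        def color_difference_features(sample_rgb, control_rgb):
            """sample_rgb: (n_samples, n_spots, 3); control_rgb: (n_spots, 3)."""
            return (sample_rgb - control_rgb).reshape(sample_rgb.shape[0], -1)

        def pca_scores(features, n_components=2):
            centered = features - features.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            return centered @ vt[:n_components].T   # coordinates for the score plot

        wines = np.random.rand(23, 16, 3)            # 23 wines, 16 spots (stand-in data)
        control = np.random.rand(16, 3)
        scores = pca_scores(color_difference_features(wines, control))
        print(scores.shape)                          # -> (23, 2)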

  2. Colorimetric Sensor Array for White Wine Tasting

    PubMed Central

    Chung, Soo; Park, Tu San; Park, Soo Hyun; Kim, Joon Yong; Park, Seongmin; Son, Daesik; Bae, Young Min; Cho, Seong In

    2015-01-01

    A colorimetric sensor array was developed to characterize and quantify the taste of white wines. A charge-coupled device (CCD) camera captured images of the sensor array from 23 different white wine samples, and the changes in the R, G, B color components relative to the control were analyzed by principal component analysis. Additionally, high performance liquid chromatography (HPLC) was used to analyze the chemical components of each wine sample responsible for its taste. A two-dimensional score plot was created with 23 data points. It revealed clusters created from the same type of grape, and trends of sweetness, sourness, and astringency were mapped. An artificial neural network model was developed to predict the degree of sweetness, sourness, and astringency of the white wines. The coefficients of determination (R2) for the HPLC results and the sweetness, sourness, and astringency were 0.96, 0.95, and 0.83, respectively. This research could provide a simple, low-cost, yet sensitive taste prediction system and, by aiding consumer selection, could have a positive effect on the wine industry. PMID:26213946

  3. Solution processed integrated pixel element for an imaging device

    NASA Astrophysics Data System (ADS)

    Swathi, K.; Narayan, K. S.

    2016-09-01

    We demonstrate the implementation of a solid-state circuit/structure comprising a high-performance polymer field-effect transistor (PFET), which uses an oxide layer in conjunction with a self-assembled monolayer (SAM) as the dielectric, and a bulk-heterostructure-based organic photodiode, acting as a CMOS-like pixel element for an imaging sensor. Practical usage of functional organic photon detectors requires on-chip components for image capture and signal transfer, as in the CMOS/CCD architecture, rather than simple photodiode arrays, in order to increase the speed and sensitivity of the sensor. The availability of high-performance PFETs with low operating voltage and photodiodes with high sensitivity provides the necessary prerequisite to implement a CMOS-type image sensing device structure based on organic electronic devices. Solution-processing routes in organic electronics offer relatively facile procedures to integrate these components, combined with the unique features of large area, form factor and multiple optical attributes. We utilize the inherent property of a binary mixture in a blend to phase-separate vertically and create a graded junction for an effective photocurrent response. The implemented design enables photocharge generation along with on-chip charge-to-voltage conversion, with performance parameters comparable to traditional counterparts. Charge integration analysis for the passive pixel element using 2D TCAD simulations is also presented to evaluate the different processes that take place in the monolithic structure.

  4. The application of infrared speckle interferometry to the imaging of remote galaxies and AGN

    NASA Technical Reports Server (NTRS)

    Olivares, Robert O.

    1995-01-01

    A 1.5 meter reflector, used for both infrared and optical astronomy, is also being used for infrared speckle interferometry and CCD imaging. The application of these imaging techniques to remote galaxies and active galactic nuclei is discussed. A simple model for the origin of speckle in coherent imaging systems is presented. Very careful photometry of the continuum of the galaxy M31 is underway using CCD images. It involves extremely intensive data reduction because the object itself is very large and has low surface brightness.

  5. New Optical Sensing Materials for Application in Marine Research

    NASA Astrophysics Data System (ADS)

    Borisov, S.; Klimant, I.

    2012-04-01

    Optical chemosensors are versatile analytical tools which find application in numerous fields of science and technology. They have proved to be a promising alternative to electrochemical methods and are applied increasingly often in marine research. However, not all state-of-the-art optical chemosensors are suitable for these demanding applications, since they do not fully fulfil the requirements of high luminescence brightness and high chemical and photochemical stability, or their spectral properties are not adequate. Therefore, the development of new advanced sensing materials is still of utmost importance. Here we present a set of novel optical sensing materials recently developed in the Institute of Analytical Chemistry and Food Chemistry which are optimized for marine applications. In particular, we present new NIR indicators and sensors for oxygen and pH which feature high brightness and a low level of autofluorescence. The oxygen sensors rely on highly photostable metal complexes of benzoporphyrins and azabenzoporphyrins and enable several important applications such as simultaneous monitoring of oxygen and chlorophyll or ultra-fast oxygen monitoring (eddy correlation). We also developed ultra-sensitive oxygen optodes which enable monitoring in the nM range and are primarily designed for the investigation of oxygen minimum zones. The dynamic range of our new NIR pH indicators based on aza-BODIPY dyes is optimized for the marine environment. A highly sensitive NIR luminescent phosphor (chromium(III)-doped yttrium aluminium borate) can be used for non-invasive temperature measurements. Notably, the oxygen, pH and temperature sensors are fully compatible with the commercially available fiber-optic readers (Firesting from PyroScience). An optical CO2 sensor for marine applications employs novel diketopyrrolopyrrole indicators and enables ratiometric imaging using a CCD camera. Oxygen, pH and temperature sensors suitable for lifetime and ratiometric imaging of analyte distributions are also realized. To enable versatility of applications we also obtained a range of nano- and microparticles suitable for intra- and extracellular imaging of the above analytes. Bright ratiometric 2-photon-excitable probes were also developed. Magnetic microparticles are demonstrated to be very promising tools for imaging of oxygen, temperature and other parameters in biofilms, corals, etc., since they combine the sensing function with the possibility of external manipulation.

  6. Vision Sensors and Cameras

    NASA Astrophysics Data System (ADS)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access, global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer Mega-pixel resolution, and many have very high sensitivities equivalent to ASA 12.800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies both because of their size as well as their higher speed.

  7. On a photon-counting array using the Fairchild CCD-201

    NASA Technical Reports Server (NTRS)

    Currie, D. G.

    1975-01-01

    The evaluation of certain performance parameters of the Fairchild CCD 201 and the proposed method of operation of an electron bombarded charge coupled device are described. Work in progress on the evaluation of the parameters relevant to remote, low noise operation is reported. These tests have been conducted using light input. The video data from the CCD are amplified, digitized, stored in a minicomputer memory, and then recorded on magnetic tape for analyzing. The device will be used in an array of sensors in the aperture plane of a telescope to discriminate between photoelectron events, and in the focal plane operating at single photoelectron sensitivity at a minimum of blooming and lag.

  8. Optics design of laser spotter camera for ex-CCD sensor

    NASA Astrophysics Data System (ADS)

    Nautiyal, R. P.; Mishra, V. K.; Sharma, P. K.

    2015-06-01

    The development of laser-based instruments such as the laser range finder and laser designator has gained prominence in modern military applications. Aiming the laser at the target is done with the help of a boresighted graticule, as the human eye cannot see the laser beam directly. To view the laser spot, two types of detectors are available, the InGaAs detector and the Ex-CCD detector, the latter being a cost-effective solution. In this paper, the optics design for an Ex-CCD-based camera is discussed. The designed system is lightweight and compact and is able to see the 1064 nm pulsed laser spot up to a range of 5 km.

  9. Optical sample-position sensing for electrostatic levitation

    NASA Technical Reports Server (NTRS)

    Sridharan, G.; Chung, S.; Elleman, D.; Whim, W. K.

    1989-01-01

    A comparative study is conducted of optical position-sensing techniques applicable to sample-levitation systems under micro-g conditions. CCD sensors are compared with one- and two-dimensional position detectors used in electrostatic particle levitation. In principle, the CCD camera method can be improved from current resolution levels of 200 microns through the incorporation of a higher-pixel-count device and a more complex digital signal processor interface. Nevertheless, the one-dimensional position detectors exhibited superior, better-than-one-micron resolution.

  10. Digital holographic diagnostics of near-injector region

    NASA Astrophysics Data System (ADS)

    Lee, Jaiho

    The study of the primary breakup of liquid jets is motivated by applications to gas turbine fuel injectors, diesel fuel injectors, industrial cleaning and washing machines, medical sprays, and inkjet printers, among others. In a well-designed injector, the liquid jet has to disintegrate into a fine spray in the near-injector region during primary breakup. However, the dense spray region near the injector is optically obscured for phase Doppler interferometers such as Phase Doppler Particle Analyzers (PDPA). Holography can provide a three-dimensional image of the dense spray and eliminate the problem of the small depth of focus associated with shadowgraphs. The traditional film-based holographic technique has long been used for three-dimensional measurements in particle fields, but it is time consuming, expensive, and chemically hazardous. With the development of the CCD sensor, holograms can be recorded and reconstructed digitally. Digital microscopic holography (DMH) is similar to digital inline holography (DIH) except that no lens is used to collimate the object beam. The laser beams are expanded with an objective lens and a spatial filter. This eliminates two lenses from the typical optical path used for in-line holography, which results in a much cleaner hologram recording. DMH was used for drop size and velocity measurements of the breakup of aerated liquid jets because it is unaffected by the non-spherical droplets that are encountered very close to the injector exit, which would otherwise cause problems for techniques such as the Phase Doppler Particle Analyzer. A large field of view was obtained by patching several high-resolution holograms. Droplet velocities in three dimensions were measured by tracking their displacements in the streamwise and cross-stream directions and by tracking the change in the plane of focus in the spanwise direction. The uncertainty in spanwise droplet location and velocity measurements using single-view DMH was large, at least 33%. This large uncertainty in the spanwise direction, however, can be reduced to 2% by employing double-view DMH. Double-view DMH successfully tracked the three-dimensional bending trajectories of polymer jets during electrospinning. The uncertainty in the spatial growth measurements of the bending instability was reduced using orthogonal double-view DMH. Moreover, a commercial-grade CCD was successfully used for single- and double-pulsed DMH of micro liquid jet breakup. Using a commercial-grade CCD for DMH, the cost of the CCD sensor needed for recording holograms can be reduced.
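
    For readers unfamiliar with digital hologram reconstruction, the sketch below shows a generic angular-spectrum refocusing step of the kind commonly used in digital inline/microscopic holography to bring a droplet plane into focus numerically; the wavelength, pixel pitch and propagation distance are placeholder values, and this is not necessarily the reconstruction used in this work.

        import numpy as np

        # Hedged sketch: angular-spectrum propagation of a recorded hologram by a
        # distance z, a standard way to refocus planes numerically in digital
        # inline holography. Evanescent spatial frequencies are simply dropped.
        def angular_spectrum_propagate(field, wavelength, pixel_pitch, z):
            ny, nx = field.shape
            fx = np.fft.fftfreq(nx, d=pixel_pitch)
            fy = np.fft.fftfreq(ny, d=pixel_pitch)
            FX, FY = np.meshgrid(fx, fy)
            arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
            kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
            return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

        hologram = np.random.rand(512, 512)                 # stand-in recorded hologram
        refocused = np.abs(angular_spectrum_propagate(hologram, 532e-9, 3.45e-6, 0.02))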

  11. STARL -- a Program to Correct CCD Image Defects

    NASA Astrophysics Data System (ADS)

    Narbutis, D.; Vanagas, R.; Vansevičius, V.

    We present a program tool, STARL, designed for the automatic detection and correction of various defects in CCD images. It uses a genetic algorithm for deblending and restoring overlapping saturated stars in crowded stellar fields. Using Subaru Telescope Suprime-Cam images, we demonstrate that the program can be incorporated into wide-field survey data processing pipelines for the production of high-quality color mosaics. The source code and examples are available at the STARL website.

  12. Scintillation imaging of tritium radioactivity distribution during tritiated thymidine uptake by PC12 cells using a melt-on scintillator.

    PubMed

    Irikura, Namiko; Miyoshi, Hirokazu; Shinohara, Yasuo

    2017-02-01

    A scintillation image of tritium fixed in a melt-on scintillator was obtained using a charge-coupled device (CCD) imager, and a linear relationship was observed between the intensity of the scintillation image and the radioactivity of the tritium. In a [3H]thymidine uptake experiment, a linear correlation between the intensity of the CCD image and the dilution ratio of the cells was confirmed. Scintillation imaging has the potential for use in the direct observation of tritium radioactivity distributions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. The Speckle Toolbox: A Powerful Data Reduction Tool for CCD Astrometry

    NASA Astrophysics Data System (ADS)

    Harshaw, Richard; Rowe, David; Genet, Russell

    2017-01-01

    Recent advances in high-speed, low-noise CCD and CMOS cameras, coupled with breakthroughs in data reduction software that runs on desktop PCs, have opened the domain of speckle interferometry and high-accuracy CCD measurements of double stars to amateurs, allowing them to do useful science of high quality. This paper describes how to use a speckle interferometry reduction program, the Speckle Tool Box (STB), to achieve this level of result. For over a year the author (Harshaw) has been using STB (and its predecessor, Plate Solve 3) to obtain measurements of double stars based on CCD camera technology for pairs that are either too wide (the stars not sharing the same isoplanatic patch, roughly 5 arc-seconds in diameter) or too faint to image in the coherence time required for speckle (usually under 40 ms). This same approach - using speckle reduction software to measure CCD pairs with greater accuracy than is possible with lucky imaging - has been used, it turns out, for several years by the U.S. Naval Observatory.
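
    The core reduction step behind such tools is the classical Labeyrie approach: average the power spectra of many short exposures and inverse-transform to obtain the autocorrelogram, in which a binary appears as a pair of secondary peaks. The sketch below illustrates that generic step only; it is not the internal algorithm of STB, and the frame cube is stand-in data.

        import numpy as np

        # Hedged sketch of speckle-interferometry reduction (Labeyrie method):
        # the mean power spectrum of the frame cube is inverse-transformed to
        # give the autocorrelogram used for double-star measurement.
        def mean_power_spectrum(frames):
            """frames: (n_frames, ny, nx) stack of short exposures."""
            return (np.abs(np.fft.fft2(frames, axes=(-2, -1))) ** 2).mean(axis=0)

        def autocorrelogram(frames):
            acf = np.fft.ifft2(mean_power_spectrum(frames)).real
            return np.fft.fftshift(acf)          # put the zero-lag peak at the centre

        cube = np.random.rand(500, 128, 128)     # stand-in for a 500-frame speckle cube
        acf = autocorrelogram(cube)
        print(acf.shape)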

  14. An imaging colorimeter for noncontact tissue color mapping.

    PubMed

    Balas, C

    1997-06-01

    There has been a considerable effort in several medical fields toward objective color analysis and characterization of biological tissues. Conventional colorimeters have proved inadequate for this purpose, since they do not provide spatial color information and because the measuring procedure randomly affects the color of the tissue. In this paper an imaging colorimeter is presented, in which the nonimaging optical photodetector of conventional colorimeters is replaced with the charge-coupled device (CCD) sensor of a color video camera, enabling the independent capture of the color information for any spatial point within its field of view. Combining imaging and colorimetry methods, the acquired image is calibrated and corrected under several ambient light conditions, providing noncontact, reproducible color measurements and mapping, free of the errors and limitations present in conventional colorimeters. This system was used for monitoring blood-supply changes in psoriatic plaques that had undergone psoralen and ultraviolet-A radiation (PUVA) therapy, where reproducible and reliable measurements were demonstrated. These features highlight the potential of imaging colorimeters as clinical and research tools for the standardization of clinical diagnosis and for the objective evaluation of treatment effectiveness.

  15. Fluorescence laminar optical tomography for brain imaging: system implementation and performance evaluation.

    PubMed

    Azimipour, Mehdi; Sheikhzadeh, Mahya; Baumgartner, Ryan; Cullen, Patrick K; Helmstetter, Fred J; Chang, Woo-Jin; Pashaie, Ramin

    2017-01-01

    We present our effort in implementing a fluorescence laminar optical tomography scanner which is specifically designed for noninvasive three-dimensional imaging of fluorescence proteins in the brains of small rodents. A laser beam, after passing through a cylindrical lens, scans the brain tissue from the surface while the emission signal is captured by the epi-fluorescence optics and is recorded using an electron multiplication CCD sensor. Image reconstruction algorithms are developed based on Monte Carlo simulation to model light–tissue interaction and generate the sensitivity matrices. To solve the inverse problem, we used the iterative simultaneous algebraic reconstruction technique. The performance of the developed system was evaluated by imaging microfabricated silicon microchannels embedded inside a substrate with optical properties close to the brain as a tissue phantom and ultimately by scanning brain tissue in vivo. Details of the hardware design and reconstruction algorithms are discussed and several experimental results are presented. The developed system can specifically facilitate neuroscience experiments where fluorescence imaging and molecular genetic methods are used to study the dynamics of the brain circuitries.
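
    As context for the reconstruction step mentioned above, the following is a hedged sketch of one possible SART loop for a linear model y = A x, where A is the Monte Carlo-derived sensitivity matrix, y the surface measurements and x the fluorophore distribution. The relaxation factor, iteration count and stand-in matrices are illustrative assumptions, not the parameters used in this work.

        import numpy as np

        # Hedged sketch of the simultaneous algebraic reconstruction technique (SART)
        # for y = A x with a non-negativity constraint on the reconstructed values.
        def sart(A, y, n_iters=50, relax=0.25):
            x = np.zeros(A.shape[1])
            row_sums = A.sum(axis=1) + 1e-12      # per-measurement normalisation
            col_sums = A.sum(axis=0) + 1e-12      # per-voxel normalisation
            for _ in range(n_iters):
                residual = (y - A @ x) / row_sums
                x += relax * (A.T @ residual) / col_sums
                np.clip(x, 0.0, None, out=x)      # fluorescence cannot be negative
            return x

        A = np.abs(np.random.rand(200, 400))      # stand-in sensitivity matrix
        x_true = np.abs(np.random.rand(400))
        x_est = sart(A, A @ x_true)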

  16. Plane development of lateral surfaces for inspection systems

    NASA Astrophysics Data System (ADS)

    Francini, F.; Fontani, D.; Jafrancesco, D.; Mercatelli, L.; Sansoni, P.

    2006-08-01

    The problem of developing the lateral surfaces of a 3D object can arise in item inspection using automated imaging systems. In an industrial environment, these control systems typically work at a high rate and have to assure reliable inspection of each single item. For compactness reasons it is not convenient to use three or four CCD cameras to inspect all the lateral surfaces of an object. Moreover, it is impossible to mount optical components near the object if it is placed on a conveyor belt. The paper presents a system that integrates the images of both the frontal surface and the lateral surface of an object on a single CCD picture. It consists of a freeform lens mounted in front of a CCD camera with a commercial lens. The aim is to have a good magnification of the lateral surface, while maintaining a low aberration level so that the pictures can be exploited in image processing software. The freeform lens, made of plastic, redirects the light coming from the object to the camera lens. The final result is to obtain on the CCD: (1) the frontal and lateral surface images, with a selected magnification (possibly with two different values for the two images); and (2) a gap between these two images, so that an automatic method to analyse the images can easily be applied. A simple method to design the freeform lens is illustrated. The procedure also allows the imaging system to be obtained by modifying a current inspection system, reducing the cost.

  17. Improving the Sensitivity and Functionality of Mobile Webcam-Based Fluorescence Detectors for Point-of-Care Diagnostics in Global Health.

    PubMed

    Rasooly, Reuven; Bruck, Hugh Alan; Balsam, Joshua; Prickril, Ben; Ossandon, Miguel; Rasooly, Avraham

    2016-05-17

    Resource-poor countries and regions require effective, low-cost diagnostic devices for accurate identification and diagnosis of health conditions. Optical detection technologies used for many types of biological and clinical analysis can play a significant role in addressing this need, but must be sufficiently affordable and portable for use in global health settings. Most current clinical optical imaging technologies are accurate and sensitive, but also expensive and difficult to adapt for use in these settings. These challenges can be mitigated by taking advantage of affordable consumer electronics mobile devices such as webcams, mobile phones, charge-coupled device (CCD) cameras, lasers, and LEDs. Low-cost, portable multi-wavelength fluorescence plate readers have been developed for many applications including detection of microbial toxins such as C. botulinum A neurotoxin, Shiga toxin, and S. aureus enterotoxin B (SEB), and flow cytometry has been used to detect very low cell concentrations. However, the relatively low sensitivities of these devices limit their clinical utility. We have developed several approaches to improve their sensitivity, presented here for webcam-based fluorescence detectors, including (1) image stacking to improve signal-to-noise ratios; (2) lasers to enable fluorescence excitation for flow cytometry; and (3) streak imaging to capture the trajectory of a single cell, enabling imaging sensors with high noise levels to detect rare cell events. These approaches can also help to overcome some of the limitations of other low-cost optical detection technologies such as CCD or phone-based detectors (such as high noise levels or low sensitivities), and provide for their use in low-cost medical diagnostics in resource-poor settings.
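
    Point (1) above relies on the fact that averaging N aligned frames leaves the signal unchanged while uncorrelated read noise drops roughly as the square root of N. A minimal sketch with synthetic data (all values are stand-ins) is shown below.

        import numpy as np

        # Hedged sketch of image stacking: averaging aligned frames improves the
        # signal-to-noise ratio by roughly sqrt(N) for uncorrelated noise.
        def stack_frames(frames):
            """frames: (n, h, w) aligned exposures -> averaged image."""
            return np.mean(frames.astype(np.float64), axis=0)

        signal = np.full((64, 64), 10.0)                       # synthetic fluorescence signal
        frames = signal + 5.0 * np.random.randn(100, 64, 64)   # per-frame SNR of about 2
        stacked = stack_frames(frames)
        print(signal.mean() / frames.std(axis=0).mean())       # ~2 (single frame)
        print(signal.mean() / (stacked - signal).std())        # ~20 (after stacking 100 frames)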

  18. Improving the Sensitivity and Functionality of Mobile Webcam-Based Fluorescence Detectors for Point-of-Care Diagnostics in Global Health

    PubMed Central

    Rasooly, Reuven; Bruck, Hugh Alan; Balsam, Joshua; Prickril, Ben; Ossandon, Miguel; Rasooly, Avraham

    2016-01-01

    Resource-poor countries and regions require effective, low-cost diagnostic devices for accurate identification and diagnosis of health conditions. Optical detection technologies used for many types of biological and clinical analysis can play a significant role in addressing this need, but must be sufficiently affordable and portable for use in global health settings. Most current clinical optical imaging technologies are accurate and sensitive, but also expensive and difficult to adapt for use in these settings. These challenges can be mitigated by taking advantage of affordable consumer electronics mobile devices such as webcams, mobile phones, charge-coupled device (CCD) cameras, lasers, and LEDs. Low-cost, portable multi-wavelength fluorescence plate readers have been developed for many applications including detection of microbial toxins such as C. botulinum A neurotoxin, Shiga toxin, and S. aureus enterotoxin B (SEB), and flow cytometry has been used to detect very low cell concentrations. However, the relatively low sensitivities of these devices limit their clinical utility. We have developed several approaches to improve their sensitivity, presented here for webcam-based fluorescence detectors, including (1) image stacking to improve signal-to-noise ratios; (2) lasers to enable fluorescence excitation for flow cytometry; and (3) streak imaging to capture the trajectory of a single cell, enabling imaging sensors with high noise levels to detect rare cell events. These approaches can also help to overcome some of the limitations of other low-cost optical detection technologies such as CCD or phone-based detectors (such as high noise levels or low sensitivities), and provide for their use in low-cost medical diagnostics in resource-poor settings. PMID:27196933

  19. The Mapping X-Ray Fluorescence Spectrometer (MAPX)

    NASA Technical Reports Server (NTRS)

    Blake, David; Sarrazin, Philippe; Bristow, Thomas; Downs, Robert; Gailhanou, Marc; Marchis, Franck; Ming, Douglas; Morris, Richard; Sole, Vincente Armando; Thompson, Kathleen

    2016-01-01

    MapX will provide elemental imaging at ≤100 micron spatial resolution over 2.5 x 2.5 centimeter areas, yielding elemental chemistry at or below the scale length where many relict physical, chemical, and biological features can be imaged and interpreted in ancient rocks. MapX is a full-frame spectroscopic imager positioned on soil or regolith with touch sensors. During an analysis, an X-ray source (tube or radioisotope) bombards the sample surface with X-rays or alpha-particles / gamma rays, resulting in sample X-ray fluorescence (XRF). Fluoresced X-rays pass through an X-ray lens (X-ray µ-Pore Optic, "MPO") that projects a spatially resolved image of the X-rays onto a CCD. The CCD is operated in single photon counting mode so that the positions and energies of individual photons are retained. In a single analysis, several thousand frames are stored and processed. A MapX experiment provides elemental maps having a spatial resolution of ≤100 micron and quantitative XRF spectra from Regions of Interest (ROI) ranging from 100 microns to 2 centimeters in size. ROI are compared with known rock and mineral compositions to extrapolate the data to rock types and putative mineralogies. The MapX geometry is being refined with ray-tracing simulations and with synchrotron experiments at SLAC. Source requirements are being determined through Monte Carlo modeling and experiment using XMIMSIM [1], GEANT4 [2] and PyMca [3] and a dedicated XRF test fixture. A flow-down of requirements for both tube and radioisotope sources is being developed from these experiments. In addition to Mars lander and rover missions, MapX could be used for landed science on other airless bodies (Phobos/Deimos, comet nuclei, asteroids, the Earth's moon, and the icy satellites of the outer planets, including Europa).

  20. Modified modular imaging system designed for a sounding rocket experiment

    NASA Astrophysics Data System (ADS)

    Veach, Todd J.; Scowen, Paul A.; Beasley, Matthew; Nikzad, Shouleh

    2012-09-01

    We present the design and system calibration results from the fabrication of a charge-coupled device (CCD) based imaging system designed around a modified modular imager cell (MIC) and used in an ultraviolet sounding rocket mission. The heart of the imaging system is the MIC, which provides the video pre-amplifier circuitry and CCD clock level filtering. The MIC is designed with a standard four-layer FR4 printed circuit board (PCB) with surface-mount and through-hole components for ease of testing and lower fabrication cost. The imager is a 3.5k by 3.5k LBNL p-channel CCD with enhanced quantum efficiency response in the UV, achieved using delta-doping technology at JPL. The recently released PCIe/104 Small-Cam CCD controller from Astronomical Research Cameras, Inc. (ARC) performs readout of the detector. The PCIe/104 Small-Cam system has the same capabilities as its larger PCI brethren, but in a smaller form factor, which makes it ideally suited for sub-orbital ballistic missions. Overall control is accomplished using a PCIe/104 computer from RTD Embedded Technologies, Inc. The design, fabrication, and testing were done at the Laboratory for Astronomical and Space Instrumentation (LASI) at Arizona State University. Integration and flight calibration are to be completed at the University of Colorado Boulder before integration into CHESS.

  1. FlyEyes: A CCD-Based Wavefront Sensor for PUEO, the CFHT Curvature AO System

    DTIC Science & Technology

    2010-09-28

    Charles Cuillandre, Kevin K.Y. Ho, Marc Baril, Tom Benedict, Jeff Ward, Jim Thomas, Derrick Salmon, Chueh-Jen Lin, Shiang-Yu Wang, Gerry Luppino...sensor for PUEO, the CFHT curvature AO system, Olivier Lai, Jean-Charles Cuillandre, Kevin K.Y. Ho, Marc Baril, Tom Benedict, Jeff Ward, Jim Thomas

  2. NASA Tech Briefs, January 2004

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Topics covered include: Multisensor Instrument for Real-Time Biological Monitoring; Sensor for Monitoring Nanodevice-Fabrication Plasmas; Backed Bending Actuator; Compact Optoelectronic Compass; Micro Sun Sensor for Spacecraft; Passive IFF: Autonomous Nonintrusive Rapid Identification of Friendly Assets; Finned-Ladder Slow-Wave Circuit for a TWT; Directional Radio-Frequency Identification Tag Reader; Integrated Solar-Energy-Harvesting and -Storage Device; Event-Driven Random-Access-Windowing CCD Imaging System; Stroboscope Controller for Imaging Helicopter Rotors; Software for Checking State-charts; Program Predicts Broadband Noise from a Turbofan Engine; Protocol for a Delay-Tolerant Data-Communication Network; Software Implements a Space-Mission File-Transfer Protocol; Making Carbon-Nanotube Arrays Using Block Copolymers: Part 2; Modular Rake of Pitot Probes; Preloading To Accelerate Slow-Crack-Growth Testing; Miniature Blimps for Surveillance and Collection of Samples; Hybrid Automotive Engine Using Ethanol-Burning Miller Cycle; Fabricating Blazed Diffraction Gratings by X-Ray Lithography; Freeze-Tolerant Condensers; The StarLight Space Interferometer; Champagne Heat Pump; Controllable Sonar Lenses and Prisms Based on ERFs; Measuring Gravitation Using Polarization Spectroscopy; Serial-Turbo-Trellis-Coded Modulation with Rate-1 Inner Code; Enhanced Software for Scheduling Space-Shuttle Processing; Bayesian-Augmented Identification of Stars in a Narrow View; Spacecraft Orbits for Earth/Mars-Lander Radio Relay; and Self-Inflatable/Self-Rigidizable Reflectarray Antenna.

  3. Optical Observation, Image-processing, and Detection of Space Debris in Geosynchronous Earth Orbit

    NASA Astrophysics Data System (ADS)

    Oda, H.; Yanagisawa, T.; Kurosaki, H.; Tagawa, M.

    2014-09-01

    We report on optical observations and an efficient detection method for space debris in the geosynchronous Earth orbit (GEO). We operate our new Australia Remote Observatory (ARO), where an 18 cm optical telescope with a charge-coupled device (CCD) camera covering a 3.14-degree field of view is used for GEO debris surveys, and analyse datasets of successive CCD images using the line detection method (Yanagisawa and Nakajima 2005). In our operation, the exposure time of each CCD image is set to 3 seconds (or 5 seconds), and the interval between CCD shutter openings is about 4.7 seconds (or 6.7 seconds). In the line detection method, a sufficient number of sample objects are taken from each image based on their shape and intensity, which includes not only faint signals but also background noise (we take 500 sample objects from each image in this paper). We then search for a sequence of sample objects aligned in a straight line across the successive images to exclude the noise samples. We succeed in detecting faint signals (down to about 1.8 sigma of the background noise) by applying the line detection method to 18 CCD images. As a result, we detected about 300 GEO objects down to magnitude 15.5 over 5 nights of data. We also calculate the orbits of the detected objects using the Simplified General Perturbations Satellite Orbit Model 4 (SGP4), and identify the objects listed in the two-line-element (TLE) catalogue publicly provided by the U.S. Strategic Command (USSTRATCOM). We found that a certain fraction of our detections are new objects that are not contained in the catalogue. We conclude that our ARO and detection method possess high detection efficiency for GEO objects despite the use of a comparatively inexpensive observation and analysis system. We also describe the image processing specialized for the detection of GEO objects (rather than usual astronomical objects like stars) in this paper.
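
    The alignment search at the heart of the line detection method can be illustrated with a short sketch: a GEO object moves at a nearly constant rate against the star background, so its detections in successive frames should fall on an evenly spaced straight line. The code below seeds a candidate track from the first two frames and checks the extrapolated positions in the remaining frames; the tolerance, the requirement that the object appear in every frame, and the random stand-in detections are illustrative assumptions, not the published algorithm's exact criteria.

        import numpy as np
        from itertools import product

        # Hedged sketch of the line-detection idea: seed a constant-velocity track
        # from a detection pair in the first two frames, then require a detection
        # near the extrapolated position in every later frame.
        def find_linear_tracks(detections, tol=2.0):
            """detections: list (one per frame) of (n_i, 2) arrays of candidate x, y."""
            tracks = []
            for p0, p1 in product(detections[0], detections[1]):
                velocity = p1 - p0
                track = [p0, p1]
                for k, frame in enumerate(detections[2:], start=2):
                    predicted = p0 + k * velocity
                    dists = np.linalg.norm(frame - predicted, axis=1)
                    if dists.size and dists.min() < tol:
                        track.append(frame[dists.argmin()])
                if len(track) == len(detections):   # candidate seen in every frame
                    tracks.append(np.array(track))
            return tracks

        # Small stand-in data set (the paper takes 500 samples per image; fewer here for speed).
        frames = [np.random.rand(30, 2) * 1000 for _ in range(6)]
        print(len(find_linear_tracks(frames)))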

  4. Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lu, Jian

    Technology for capturing panoramic (360-degree) three-dimensional information in a real environment has many applications in fields such as virtual reality, security, robot navigation, and so forth. In this study, we examine an acquisition device constructed of a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between the two sensors: a dense spatio-temporal volume from the camera and distance information from the laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama whose spatial resolution is equal to that of the original images acquired by the regular camera, and also estimate a dense panoramic depth map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selecting camera. Moreover, for determining distance information robustly, we propose a data fusion algorithm embedded in an energy minimization framework that incorporates active depth measurements from the 2D laser range scanner and passive geometry reconstruction from the image sequence obtained with the CCD camera. Thereby, measurement precision and robustness can be improved beyond those available from conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner alone. Experimental results using both synthetic and actual images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.

  5. CCD radiation damage in ESA Cosmic Visions missions: assessment and mitigation

    NASA Astrophysics Data System (ADS)

    Lumb, David H.

    2009-08-01

    Charge Coupled Device (CCD) imagers have been widely used in space-borne astronomical instruments. A frequent concern has been the effect of radiation damage on the CCD charge transfer properties. We review some methods for assessing the Charge Transfer Inefficiency (CTI) in CCDs. Techniques to minimise degradation using background charge injection and p-channel CCD architectures are discussed. A critical review of the claims for p-channel architectures is presented. The performance advantage of p-channel CCDs is shown to be smaller than previously claimed. Finally, we present some projections for the performance in the context of some future ESA missions.
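
    The practical impact of a given CTI is easy to quantify: after N transfers, a fraction (1 - CTI)^N of the original charge packet survives. The toy calculation below (illustrative CTI values and device size, not mission numbers) shows why CTI in the 1e-5 to 1e-4 range matters for large-format devices.

```python
# Fractional charge remaining after N transfers for a given charge transfer
# inefficiency (CTI).  Values are illustrative, not mission requirements.
def remaining_fraction(cti: float, n_transfers: int) -> float:
    return (1.0 - cti) ** n_transfers

# Example: a 2048-row device read out through 2048 parallel transfers.
for cti in (1e-6, 1e-5, 1e-4):
    print(f"CTI={cti:.0e}: {remaining_fraction(cti, 2048):.4f} of the charge survives")
```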

  6. SU-E-T-161: SOBP Beam Analysis Using Light Output of Scintillation Plate Acquired by CCD Camera.

    PubMed

    Cho, S; Lee, S; Shin, J; Min, B; Chung, K; Shin, D; Lim, Y; Park, S

    2012-06-01

    To analyze the Bragg-peak beams in an SOBP (spread-out Bragg peak) beam using a CCD (charge-coupled device) camera - scintillation screen system. We separated each Bragg-peak beam using the light output of a high-sensitivity scintillation material acquired by the CCD camera and compared the results with Bragg-peak beams calculated by Monte Carlo simulation. In this study, the CCD camera - scintillation screen system consisted of a high-sensitivity scintillation plate (Gd2O2S:Tb), a right-angled prismatic PMMA phantom, and a Marlin F-201B IEEE-1394 CCD camera. The SOBP beam delivered in the double-scattering mode of the PROTEUS 235 proton therapy machine at NCC has an 8 cm width and a 13 g/cm2 range. The gain, dose rate and current of this beam were 50, 2 Gy/min and 70 nA, respectively. We also simulated the light output of the scintillation plate for the SOBP beam using the Geant4 toolkit. We evaluated the light output of the high-sensitivity scintillation plate as a function of integration time (0.1 - 1.0 sec). The CCD camera images at the shortest integration time (0.1 sec) were acquired automatically and randomly. The Bragg-peak beams in the SOBP beam were analyzed from the acquired images. The SOBP beam used in this study was then calculated with the Geant4 toolkit, and the Bragg-peak beams in the SOBP beam were obtained with the ROOT program. The SOBP beam consists of 13 Bragg-peak beams. The experimental results were compared with those of the simulation. We analyzed the Bragg-peak beams in the SOBP beam using the light output of the scintillation plate acquired by the CCD camera and compared them with the Geant4 simulation. We plan to study SOBP beam analysis using more effective image acquisition techniques. © 2012 American Association of Physicists in Medicine.
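
    An SOBP is, by construction, a weighted sum of pristine Bragg peaks, so decomposing a measured profile into its 13 constituents is naturally posed as a non-negative least-squares fit. The sketch below shows that generic idea on synthetic stand-in curves; it is not the authors' analysis, and the peak shapes, ranges and weights are invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls

depth = np.linspace(0, 15, 300)                  # depth in g/cm^2 (illustrative grid)

def pristine_peak(depth, r):
    """Crude stand-in for a pristine Bragg curve with range r (not a physics model)."""
    plateau = 0.3 * (depth < r)
    peak = np.exp(-0.5 * ((depth - r) / 0.25) ** 2)
    return plateau + peak

ranges = np.linspace(5.0, 13.0, 13)              # 13 constituent peaks, as in the abstract
basis = np.stack([pristine_peak(depth, r) for r in ranges], axis=1)

# A measured (or simulated) SOBP profile would replace this synthetic one.
true_w = np.linspace(0.3, 1.0, 13)
sobp = basis @ true_w + 0.01 * np.random.default_rng(0).normal(size=depth.size)

weights, residual = nnls(basis, sobp)            # non-negative weights of each Bragg peak
print(np.round(weights, 2))
```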

  7. DQE analysis for CCD imaging arrays

    NASA Astrophysics Data System (ADS)

    Shaw, Rodney

    1997-05-01

    By consideration of the statistical interaction between exposure quanta and the mechanisms of image detection, the signal-to-noise limitations of a variety of image acquisition technologies are now well understood. However, in spite of the growing fields of application for CCD imaging arrays and the obvious advantages of their multi-level mode of quantum detection, only limited and largely empirical approaches have been made to quantify these advantages on an absolute basis. Here an extension is made of a previous model for noise-free sequential photon counting to the more general case involving both count noise and arbitrary separation functions between count levels. This allows a basic model to be developed for the DQE associated with devices which approximate to the CCD mode of operation, and conclusions to be drawn concerning the roles of the separation function and count noise in defining the departure from the ideal photon counter.
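
    For orientation, DQE is the ratio of squared output to squared input signal-to-noise. The sketch below evaluates it for a much simpler model than the separation-function analysis of the paper: Poisson photon input, a quantum efficiency, and additive read noise. The parameter values are assumptions for illustration only.

```python
import numpy as np

def dqe(exposure_q, eta=0.6, read_noise_e=5.0):
    """DQE = SNR_out^2 / SNR_in^2 for a simple detector model: Poisson photon
    input, quantum efficiency eta, additive Gaussian read noise (electrons).
    Illustrative model, not the count-level analysis of the paper."""
    signal = eta * exposure_q
    snr_out_sq = signal**2 / (signal + read_noise_e**2)
    snr_in_sq = exposure_q                      # Poisson input: SNR^2 equals the mean
    return snr_out_sq / snr_in_sq

q = np.logspace(0, 4, 5)                        # photons per pixel
print(np.round(dqe(q), 3))                      # approaches eta at high exposure
```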

  8. Experimental teaching and training system based on volume holographic storage

    NASA Astrophysics Data System (ADS)

    Jiang, Zhuqing; Wang, Zhe; Sun, Chan; Cui, Yutong; Wan, Yuhong; Zou, Rufei

    2017-08-01

    An experiment on volume holographic storage for teaching and for training the practical ability of senior students in Applied Physics is introduced. Through this experiment the students can learn to use advanced optoelectronic devices and automatic control methods, and further understand the theoretical knowledge of optical information processing and photonics that they have studied in their courses. In the experiment, multiplexed holographic recording and readout are based on the Bragg selectivity of a volume holographic grating, in which the Bragg diffraction angle depends on the grating-recording angle. By using different interference angles between the reference and object beams, holograms can be recorded in a photorefractive crystal, and the object images can then be read out from these holograms via angular addressing using the original reference beam. In this system, the experimental data acquisition and the control of the optoelectronic devices, such as shutter on/off, the image loaded on the SLM and image acquisition from a CCD sensor, are automatically realized using LabVIEW programming.
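
    The arithmetic behind the angular addressing is the Bragg condition. The small sketch below computes the fringe spacing written by two beams crossing inside the crystal and the readout angle that satisfies Bragg matching; the wavelength, refractive index and beam angles are assumptions, not the parameters of the teaching setup.

```python
import numpy as np

lam = 532e-9          # recording wavelength in vacuum [m] (assumed)
n = 2.2               # refractive index of the photorefractive crystal (assumed)

def fringe_spacing(half_angle_deg):
    """Grating period written by two beams crossing at 2*half_angle inside the medium."""
    theta = np.radians(half_angle_deg)
    return lam / (2 * n * np.sin(theta))

def bragg_angle(spacing):
    """Readout half-angle (in the medium) satisfying 2*Lambda*sin(theta) = lam/n."""
    return np.degrees(np.arcsin(lam / (2 * n * spacing)))

for half_angle in (10.0, 12.0, 14.0):
    L = fringe_spacing(half_angle)
    print(f"{half_angle:4.1f} deg -> period {L*1e9:6.1f} nm, Bragg readout angle {bragg_angle(L):.1f} deg")
```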

  9. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera

    PubMed Central

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-01-01

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera. PMID:26959023

  10. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera.

    PubMed

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-03-04

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera.
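
    The sampling model behind per-pixel coded exposure is simple to state: during one slow readout, each pixel integrates the fast-varying scene only in the sub-frame slots where its DMD mirror is "on". The sketch below simulates just that forward model on synthetic data (sizes and codes are invented, and the reconstruction of the sub-frames, which is the paper's contribution, is not attempted here).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model of per-pixel coded exposure: a fast scene I(x, y, t) with
# T sub-frames is integrated by a slow sensor, but the DMD opens each pixel
# only during its per-pixel binary code C(x, y, t).
H, W, T = 64, 64, 8
scene = rng.random((H, W, T))                 # fast-varying scene (stand-in data)
code = rng.integers(0, 2, size=(H, W, T))     # per-pixel exposure code from the DMD

coded_frame = (scene * code).sum(axis=2)      # single slow-sensor readout
plain_frame = scene.sum(axis=2)               # ordinary full exposure, for comparison

# Recovering the T sub-frames from coded_frame is the inverse problem the
# paper addresses; this sketch only illustrates the sampling model.
print(coded_frame.shape, plain_frame.shape)
```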

  11. Automatic target detection using binary template matching

    NASA Astrophysics Data System (ADS)

    Jun, Dong-San; Sun, Sun-Gu; Park, HyunWook

    2005-03-01

    This paper presents a new automatic target detection (ATD) algorithm to detect targets such as battle tanks and armored personnel carriers in ground-to-ground scenarios. Whereas most ATD algorithms were developed for forward-looking infrared (FLIR) images, we have developed an ATD algorithm for charge-coupled device (CCD) images, which have superior quality to FLIR images in daylight. The proposed algorithm uses fast binary template matching with an adaptive binarization, which is robust to various lighting conditions in CCD images and saves computation time. Experimental results show that the proposed method has good detection performance.
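
    A minimal sketch of the general idea (adaptive binarization followed by template matching on the binary images) is shown below. It is not the authors' algorithm; the file names, threshold block size and match threshold are placeholders.

```python
import cv2
import numpy as np

image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)        # placeholder file names
template = cv2.imread("target_template.png", cv2.IMREAD_GRAYSCALE)

def binarize(img):
    # A local (adaptive) threshold is more robust to uneven daylight
    # illumination than a single global threshold.
    return cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 31, 5)

scene_bin = binarize(image)
templ_bin = binarize(template)

# Correlate the binary template with the binary scene and keep strong peaks.
response = cv2.matchTemplate(scene_bin, templ_bin, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(response > 0.6)
h, w = templ_bin.shape
for x, y in zip(xs, ys):
    cv2.rectangle(image, (int(x), int(y)), (int(x) + w, int(y) + h), 255, 1)
cv2.imwrite("detections.png", image)
```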

  12. Breast Biopsy System

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Charge Coupled Devices (CCDs) are high-technology silicon chips that convert light directly into electronic or digital images, which can be manipulated or enhanced by computers. When Goddard Space Flight Center (GSFC) scientists realized that existing CCD technology could not meet the scientific requirements for the Hubble Space Telescope Imaging Spectrograph, GSFC contracted with Scientific Imaging Technologies, Inc. (SITe) to develop an advanced CCD. SITe then applied many of the NASA-driven enhancements to the manufacture of CCDs for digital mammography. The resulting device images breast tissue more clearly and efficiently. The LORAD Stereo Guide Breast Biopsy system incorporates SITe's CCD as part of a digital camera system that is replacing surgical biopsy in many cases. Known as stereotactic needle biopsy, it is performed under local anesthesia with a needle and saves women time, pain, scarring, radiation exposure and money.

  13. New feature of the neutron color image intensifier

    NASA Astrophysics Data System (ADS)

    Nittoh, Koichi; Konagai, Chikara; Noji, Takashi; Miyabe, Keisuke

    2009-06-01

    We developed prototype neutron color image intensifiers with high sensitivity, wide dynamic range and long-life characteristics. In the prototype intensifier (Gd-Type 1), terbium-activated Gd2O2S is used as the input-screen phosphor. In the upgraded model (Gd-Type 2), Gd2O3 and CsI:Na are vacuum deposited to form the phosphor layer, which improved the sensitivity and the spatial uniformity. A europium-activated Y2O2S multi-color scintillator, emitting red, green and blue photons with different intensities, is utilized as the output screen of the intensifier. By combining this image intensifier with a suitably tuned high-sensitivity color CCD camera, higher sensitivity and a wider dynamic range could be attained simultaneously than with the conventional P20-phosphor-type image intensifier. The results of experiments at the JRR-3M neutron radiography irradiation port (flux: 1.5×10^8 n/cm^2/s) showed that these neutron color image intensifiers can clearly image dynamic phenomena in 30 frame/s video. It is expected that the color image intensifier will be used as a new two-dimensional neutron sensor in new application fields.

  14. Digital solar edge tracker for the Halogen Occultation Experiment

    NASA Technical Reports Server (NTRS)

    Mauldin, L. E., III; Moore, A. S.; Stump, C. W.; Mayo, L. S.

    1987-01-01

    The optical and electronic design of the Halogen Occultation Experiment (Haloe) elevation sun sensor is described. The Haloe instrument is a gas-correlation radiometer now being developed at NASA Langley for the Upper Atmosphere Research Satellite. The system uses a Galilean telescope to form a solar image on a linear silicon photodiode array. The array is a self-scanned monolithic CCD. The addresses of both solar edges imaged on the array are used by the control/pointing system to scan the Haloe science instantaneous field of view (IFOV) across the vertical solar diameter during instrument calibration and then to maintain the science IFOV 4 arcmin below the top edge during the science data occultation event. Vertical resolution of 16 arcsec and a radiometric dynamic range of 100 are achieved at the 700-nm operating wavelength. The design provides for loss of individual photodiode elements without loss of angular tracking capability.
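
    The edge-address idea lends itself to a very small sketch: on a linear array, the two solar limbs can be located as the first and last pixels whose signal exceeds a fraction of the peak. The code below is a generic illustration of that scheme, not the HALOE flight algorithm, and the threshold and array size are assumptions.

```python
import numpy as np

def solar_edges(profile, threshold_fraction=0.5):
    """Return the pixel addresses of the two solar limbs on a linear array.

    profile: 1-D array of photodiode readings across the solar image.  An edge
    is taken where the signal crosses a fixed fraction of the peak."""
    level = threshold_fraction * profile.max()
    indices = np.flatnonzero(profile > level)
    if indices.size == 0:
        return None
    return int(indices[0]), int(indices[-1])      # leading and trailing edges

# Synthetic example: a solar image spanning pixels 40-90 of a 128-element array.
x = np.arange(128)
profile = np.where((x >= 40) & (x <= 90), 1000.0, 20.0)
print(solar_edges(profile))                        # -> (40, 90)
```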

  15. A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogeneous Dual-Core Embedded System Architecture

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. To this end, this study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956

  16. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogeneous dual-core embedded system architecture.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. To this end, this study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system.

  17. The Development of the Spanish Fireball Network Using a New All-Sky CCD System

    NASA Astrophysics Data System (ADS)

    Trigo-Rodríguez, J. M.; Castro-Tirado, A. J.; Llorca, J.; Fabregat, J.; Martínez, V. J.; Reglero, V.; Jelínek, M.; Kubánek, P.; Mateo, T.; Postigo, A. De Ugarte

    2004-12-01

    We have developed an all-sky charge-coupled device (CCD) automatic system for detecting meteors and fireballs that will be operative in four stations in Spain during 2005. The cameras were developed following the BOOTES-1 prototype installed at the El Arenosillo Observatory in 2002, which is based on a CCD detector of 4096 × 4096 pixels with a fish-eye lens that provides an all-sky image with enough resolution to make accurate astrometric measurements. Since late 2004, a couple of cameras at two of the four stations operate with 30 s alternate exposures, allowing 100% time coverage. The stellar limiting magnitude of the images is +10 at the zenith, and +8 below ~65° of zenithal angle. As a result, the images provide enough comparison stars to make astrometric measurements of faint meteors and fireballs with an accuracy of ~2 arcminutes. Using this prototype, four automatic all-sky CCD stations have been developed, two in Andalusia and two in the Valencian Community, to start full operation of the Spanish Fireball Network. In addition to all-sky coverage, we are developing a fireball spectroscopy program using medium-field lenses with additional CCD cameras. Here we present the first images obtained from the El Arenosillo and La Mayora stations in Andalusia during their first months of activity. The detection of the Jan 27, 2003 superbolide of -17 ± 1 absolute magnitude that overflew Algeria and Morocco is an example of the detection capability of our prototype.

  18. Intense acoustic bursts as a signal-enhancement mechanism in ultrasound-modulated optical tomography.

    PubMed

    Kim, Chulhong; Zemp, Roger J; Wang, Lihong V

    2006-08-15

    Biophotonic imaging with ultrasound-modulated optical tomography (UOT) promises ultrasonically resolved imaging in biological tissues. A key challenge in this imaging technique is a low signal-to-noise ratio (SNR). We show significant UOT signal enhancement by using intense time-gated acoustic bursts. A CCD camera captured the speckle pattern from a laser-illuminated tissue phantom. Differences in speckle contrast were observed when ultrasonic bursts were applied, compared with when no ultrasound was applied. When CCD triggering was synchronized with burst initiation, acoustic-radiation-force-induced displacements were detected. To avoid mechanical contrast in UOT images, the CCD camera acquisition was delayed several milliseconds until transient effects of acoustic radiation force attenuated to a satisfactory level. The SNR of our system was sufficiently high to provide an image pixel per acoustic burst without signal averaging. Because of the substantially improved SNR, the use of intense acoustic bursts is a promising signal enhancement strategy for UOT.
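
    The quantity compared with and without the acoustic burst is the speckle contrast, the local ratio of standard deviation to mean intensity. The sketch below computes it with a sliding window on a synthetic speckle frame; it is a generic contrast estimator, not the authors' processing chain, and the window size is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(speckle_image, window=7):
    """Local speckle contrast C = sigma / mean over a sliding window.
    A drop in C when the ultrasound burst is on is the UOT signal."""
    img = speckle_image.astype(float)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img**2, window)
    sigma = np.sqrt(np.clip(mean_sq - mean**2, 0, None))
    return sigma / np.maximum(mean, 1e-12)

rng = np.random.default_rng(0)
frame = rng.exponential(scale=100.0, size=(256, 256))   # fully developed speckle intensity
print(speckle_contrast(frame).mean())                   # close to 1 for ideal speckle
```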

  19. Biometric iris image acquisition system with wavefront coding technology

    NASA Astrophysics Data System (ADS)

    Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao

    2013-09-01

    Biometric signatures for identity recognition have been practiced for centuries. Basically, the personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry, iris texture and so on; the other depends on individual behavioral attributes, such as signature, keystroke, voice and gait style. Among these features, iris recognition is one of the most attractive approaches due to its randomness, texture stability over a lifetime, high entropy density and non-invasive acquisition. While the performance of iris recognition on high-quality images is well investigated, not many studies have addressed how iris recognition performs on non-ideal image data, especially when the data are acquired in challenging conditions, such as long working distance, dynamic movement of subjects, uncontrolled illumination conditions and so on. There are three main contributions in this paper. Firstly, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Secondly, the irradiance constraints were derived from the optical conservation theorem. Through the relationship between the subject and the detector, we could estimate the working-distance limit once the camera lens and CCD sensor were known. The working distance is set to 3 m in our system, with a pupil diameter of 86 mm and a CCD irradiance of 0.3 mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-and-tilt system, wavefront coding technology, filter optimization and post-signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over the 3 m working distance with 400 mm focal length and aperture F/6.3 optics. The simulation results as well as experiments validate the proposed wavefront-coded imaging system, whose imaging volume is extended 2.57 times over that of traditional optics while maintaining sufficient recognition accuracy.

  20. Experimental research on femto-second laser damaging array CCD cameras

    NASA Astrophysics Data System (ADS)

    Shao, Junfeng; Guo, Jin; Wang, Ting-feng; Wang, Ming

    2013-05-01

    Charge-coupled devices (CCDs) are widely used in military and security applications, such as airborne and ship-based surveillance, satellite reconnaissance and so on. Homeland security requires effective means to negate these advanced overseeing systems. Research shows that CCD-based EO systems can be significantly dazzled or even damaged by high-repetition-rate pulsed lasers. Here, we report on femtosecond laser interaction with a CCD camera, which is probably of great importance in the future. Femtosecond lasers are a comparatively new class of lasers with unique characteristics, such as extremely short pulse width (1 fs = 10^-15 s), extremely high peak power (1 TW = 10^12 W), and, especially, distinctive behaviour when interacting with matter. Research on femtosecond laser interaction with materials (metals, dielectrics) clearly indicates that non-thermal effects dominate the process, in great contrast to the interaction of long pulses with matter. First, damage threshold tests were performed with the femtosecond laser acting on the CCD camera. An 800 nm, 500 μJ, 100 fs laser pulse was used to irradiate an interline CCD solid-state image sensor in the experiment. In order to focus the laser energy onto the tiny CCD active cells, an F/5.6 optical system was used. Sony production CCDs were chosen as typical targets. The damage threshold was evaluated from multiple test data. Point damage, line damage and full-array damage were observed as the irradiated pulse energy was continuously increased during the experiment. The point damage threshold was found to be 151.2 mJ/cm2, the line damage threshold 508.2 mJ/cm2, and the full-array damage threshold 5.91 J/cm2. Although the phenomena are almost the same as those of nanosecond laser interaction with CCDs, these damage thresholds are substantially lower than the data obtained from nanosecond laser interaction with CCDs. The electrical characteristics after different degrees of damage were then tested with a multimeter by measuring the resistance values between clock signal lines. Comparing the resistance values of the CCD before and after damage, it was found that the resistance decreases significantly between the vertical transfer clock signal lines; the same result was found between a vertical transfer clock signal line and the ground electrode. Finally, the damage locations and the damage mechanism were analyzed using the above results and SEM morphological experiments. Point damage results from the laser destroying material and shows no macroscopic electrical influence. Line damage is quite different from point damage, showing a deeper material-corroding effect; more importantly, short circuits are found between vertical clock lines. Full-array damage appears even more severe than line damage under SEM, although no electrical features obviously different from those of line damage are found. Further research into the mechanism of femtosecond-laser-induced CCD damage with more advanced tools is anticipated. This research is valuable for EO countermeasure and/or laser shielding applications.
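
    A useful back-of-the-envelope check on the quoted thresholds is the spot size at which a single 500 μJ pulse just reaches each fluence. The arithmetic below is purely illustrative; the actual focal spot of the F/5.6 system is not given in the abstract.

```python
import numpy as np

pulse_energy_J = 500e-6
thresholds = {"point": 0.1512, "line": 0.5082, "full array": 5.91}   # J/cm^2

for name, fluence in thresholds.items():
    area_cm2 = pulse_energy_J / fluence           # area over which the pulse just reaches the threshold
    diameter_mm = 2.0 * np.sqrt(area_cm2 / np.pi) * 10.0
    print(f"{name:>10} damage: reached for spot diameters below about {diameter_mm:.2f} mm")
```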

  1. An open architecture for hybrid force-visual servo control of robotic manipulators in unstructured environments

    NASA Astrophysics Data System (ADS)

    Hassanzadeh, Iraj; Janabi-Sharifi, Farrokh

    2005-12-01

    In this paper, a new open architecture for visual servo control tasks is illustrated. A Puma-560 robotic manipulator is used to prove the concept. This design enables hybrid force/visual servo control in an unstructured environment in different modes. It can also be controlled over the Internet in teleoperation mode using a haptic device. Our proposed structure includes two major parts, hardware and software. In terms of hardware, it consists of a master (host) computer, a slave (target) computer, a Puma 560 manipulator, a CCD camera, a force sensor and a haptic device. There are five DAQ cards interfacing the Puma 560 with the slave computer. An open architecture package is developed using Matlab (R), Simulink (R) and the xPC Target toolbox. This package has the Hardware-In-the-Loop (HIL) property, i.e., it enables one to readily implement different configurations of force, visual or hybrid control in real time. The implementation includes the following stages. First of all, retrofitting of the Puma was carried out. Then a modular joint controller for the Puma 560 was realized using Simulink (R). The force sensor driver and force control implementation were written using S-function blocks of Simulink (R). Visual images were captured through the Image Acquisition Toolbox of Matlab (R) and processed using the Image Processing Toolbox. A haptic device interface was also written in Simulink (R). Thus, this setup can be readily reconfigured to accommodate any other robotic manipulator and/or other sensors without the external issues relevant to control, interface and software, while providing flexibility in component modification.

  2. Structure for implementation of back-illuminated CMOS or CCD imagers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Cunningham, Thomas J. (Inventor)

    2009-01-01

    A structure for implementation of back-illuminated CMOS or CCD imagers. An epitaxial silicon layer is connected with a passivation layer, acting as a junction anode. The epitaxial silicon layer converts light passing through the passivation layer and collected by the imaging structure to photoelectrons. A semiconductor well is also provided, located opposite the passivation layer with respect to the epitaxial silicon layer, acting as a junction cathode. Prior to detection, light does not pass through a dielectric separating interconnection metal layers.

  3. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  4. Establishing Information Security Systems via Optical Imaging

    DTIC Science & Technology

    2015-08-11

    Only figure-caption fragments are preserved in this record: a schematic setup for computational ghost imaging and a schematic setup for the proposed holography-based method showing the interference between the reference and object beams (SLM, spatial light modulator; BSC, non-polarizing beam splitter cube; CCD, charge-coupled device).
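
    For context on the computational ghost imaging mentioned in the captions, the standard reconstruction correlates a single-pixel "bucket" signal with the known SLM patterns. The sketch below demonstrates that generic technique on a synthetic object; it is not the security scheme proposed in the report, and all sizes and patterns are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, M = 32, 32, 4000
obj = np.zeros((H, W)); obj[10:22, 12:20] = 1.0            # synthetic binary object

patterns = rng.random((M, H, W))                            # known SLM illumination patterns
bucket = (patterns * obj).sum(axis=(1, 2))                  # single-pixel "bucket" detector signal

# Ghost image: G(x, y) = <B * P(x, y)> - <B><P(x, y)>
ghost = (bucket[:, None, None] * patterns).mean(axis=0) - bucket.mean() * patterns.mean(axis=0)
print(np.corrcoef(ghost.ravel(), obj.ravel())[0, 1])        # correlation with the true object
```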

  5. Double and Multiple Star Measurements at the Southern Sky with a 50cm-Cassegrain and a Fast CCD Camera in 2008

    NASA Astrophysics Data System (ADS)

    Anton, Rainer

    2011-04-01

    Using a 50cm Cassegrain in Namibia, recordings of double and multiple stars were made with a fast CCD camera and a notebook computer. From superpositions of "lucky images", measurements of 149 systems were obtained and compared with literature data. B/W and color images of some remarkable systems are also presented.

  6. Double and Multiple Star Measurements in the Northern Sky with a 10" Newtonian and a Fast CCD Camera in 2006 through 2009

    NASA Astrophysics Data System (ADS)

    Anton, Rainer

    2010-07-01

    Using a 10" Newtonian and a fast CCD camera, recordings of double and multiple stars were made at high frame rates with a notebook computer. From superpositions of "lucky images", measurements of 139 systems were obtained and compared with literature data. B/w and color images of some noteworthy systems are also presented.

  7. Enhancement of sun-tracking with optoelectronic devices

    NASA Astrophysics Data System (ADS)

    Wu, Jiunn-Chi

    2015-09-01

    Sun-tracking is one of the most challenging tasks in implementing CPV. In order to justify the additional complexity of sun-tracking, careful assessment of CPV performance by monitoring the sun-tracking performance is vital. Measurement of the sun-tracking accuracy is one of the important tasks in an outdoor test. This study examines techniques based on three optoelectronic devices: a position sensitive device (PSD), a CCD and a webcam. Outdoor measurements indicated that during sunny days (global horizontal insolation (GHI) > 700 W/m2), the three devices recorded comparable tracking accuracies of 0.16-0.3°. The method using a PSD has the fastest sampling rate and is able to detect the sun's position without additional image processing, yet it cannot identify the sunlight effectively during low insolation. The techniques using a CCD and a webcam enhance the accuracy of the sunlight centroid via optical lenses and image processing. The image quality acquired using a webcam and a CCD is comparable, but the webcam is more affordable than the CCD because it can be assembled from consumer-grade products.
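
    The image-processing step common to the CCD and webcam methods is an intensity-weighted centroid of the solar spot, converted to an angular tracking error through the optics. The sketch below shows that generic step; the threshold, pixel pitch and focal length are assumptions, not the parameters of the instruments compared in the study.

```python
import numpy as np

def sun_centroid(frame, threshold_fraction=0.8):
    """Intensity-weighted centroid (x, y) of the solar spot in a CCD/webcam frame."""
    img = frame.astype(float)
    mask = img > threshold_fraction * img.max()
    ys, xs = np.nonzero(mask)
    w = img[mask]
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

def pixel_offset_to_degrees(offset_px, pixel_pitch_mm=0.0056, focal_mm=12.0):
    """Convert a centroid offset from the optical axis (in pixels) to a tracking error."""
    return np.degrees(np.arctan(offset_px * pixel_pitch_mm / focal_mm))

# Synthetic frame with a bright solar spot.
frame = np.zeros((480, 640)); frame[200:220, 300:330] = 250.0
print(sun_centroid(frame))                       # -> roughly (314.5, 209.5)
```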

  8. The Soft X-ray Imager (SXI) for the ASTRO-H Mission

    NASA Astrophysics Data System (ADS)

    Tanaka, Takaaki; Tsunemi, Hiroshi; Hayashida, Kiyoshi; Tsuru, Takeshi G.; Dotani, Tadayasu; Nakajima, Hiroshi; Anabuki, Naohisa; Nagino, Ryo; Uchida, Hiroyuki; Nobukawa, Masayoshi; Ozaki, Masanobu; Natsukari, Chikara; Tomida, Hiroshi; Ueda, Shutaro; Kimura, Masashi; Hiraga, Junko S.; Kohmura, Takayoshi; Murakami, Hiroshi; Mori, Koji; Yamauchi, Makoto; Hatsukade, Isamu; Nishioka, Yusuke; Bamba, Aya; Doty, John P.

    2015-09-01

    The Soft X-ray Imager (SXI) is an X-ray CCD camera onboard the ASTRO-H X-ray observatory. The CCD chip used is a P-channel back-illuminated type and has a 200-µm thick depletion layer, with which the SXI covers the energy range between 0.4 keV and 12 keV. Its imaging area has a size of 31 mm x 31 mm. We arrange four of the CCD chips in a 2 x 2 grid so that we can cover a large field of view of 38' x 38'. We cool the CCDs to -120 °C with a single-stage Stirling cooler. As was done for the XIS, the CCD camera of the Suzaku satellite, artificial charge is injected into selected rows in order to mitigate the charge transfer inefficiency due to radiation damage caused by in-orbit cosmic rays. We completed fabrication of the flight models of the SXI and installed them into the satellite. We verified the performance of the SXI in a series of satellite tests. On-ground calibrations were also carried out and detailed studies are ongoing.

  9. Advances in detector technologies for visible and infrared wavefront sensing

    NASA Astrophysics Data System (ADS)

    Feautrier, Philippe; Gach, Jean-Luc; Downing, Mark; Jorden, Paul; Kolb, Johann; Rothman, Johan; Fusco, Thierry; Balard, Philippe; Stadler, Eric; Guillaume, Christian; Boutolleau, David; Destefanis, Gérard; Lhermet, Nicolas; Pacaud, Olivier; Vuillermet, Michel; Kerlain, Alexandre; Hubin, Norbert; Reyes, Javier; Kasper, Markus; Ivert, Olaf; Suske, Wolfgang; Walker, Andrew; Skegg, Michael; Derelle, Sophie; Deschamps, Joel; Robert, Clélia; Vedrenne, Nicolas; Chazalet, Frédéric; Tanchon, Julien; Trollier, Thierry; Ravex, Alain; Zins, Gérard; Kern, Pierre; Moulin, Thibaut; Preis, Olivier

    2012-07-01

    The purpose of this paper is to give an overview of the state-of-the-art wavefront sensor detector developments carried out in Europe over the last decade. The success of the next generation of instruments for 8 to 40-m class telescopes will depend on the ability of Adaptive Optics (AO) systems to provide excellent image quality and stability. This will be achieved by increasing the sampling, wavelength range and correction quality of the wavefront error in both the spatial and time domains. The modern generation of AO wavefront sensor detector development started in the late nineties with the CCD50 detector fabricated by e2v technologies under ESO contract for the ESO NACO AO system. With a 128x128-pixel format, this 8-output CCD offered a 500 Hz frame rate with a readout noise of 7 e-. A major breakthrough has been achieved with the recent development by e2v technologies of the CCD220. This 240x240-pixel, 8-output EMCCD (CCD with internal multiplication) has been jointly funded by ESO and Europe under the FP6 programme. The CCD220 and the OCAM2 camera that operates the detector are now the most sensitive system in the world for advanced adaptive optics systems, offering less than 0.2 e- readout noise at a frame rate of 1500 Hz with negligible dark current. Extremely easy to operate, OCAM2 only needs a 24 V power supply and a modest water cooling circuit. This system, commercialized by First Light Imaging, is extensively described in this paper. An upgrade of OCAM2 is foreseen to boost its frame rate to 2 kHz, opening the window of XAO wavefront sensing for the ELT using 4 synchronized cameras and pyramid wavefront sensing. Since this major success, new developments have started in Europe. One is fully dedicated to Natural and Laser Guide Star AO for the E-ELT with ESO involvement. The spot elongation of an LGS Shack-Hartmann wavefront sensor necessitates an increase of the pixel format. Two detectors are currently being developed by e2v. The NGSD will be an 880x840-pixel CMOS detector with a readout noise of 3 e- (goal 1 e-) at a 700 Hz frame rate. The LGSD is a scaled-up version of the NGSD with 1760x1680 pixels and 3 e- readout noise (goal 1 e-) at a 700 Hz (goal 1000 Hz) frame rate. New technologies will be developed for that purpose: advanced CMOS pixel architecture, a back-thinned and back-illuminated CMOS device for very high QE, and fully digital outputs with signal digitization on chip. In addition, the CMOS technology is extremely robust in a telescope environment. Both detectors will be used on the European ELT but are also of potential interest to all giant telescopes under development. Additional developments have also started for wavefront sensing in the infrared, based on a new technological breakthrough using ultra-low-noise Avalanche Photodiode (APD) arrays within the RAPID project. Developed by the manufacturers SOFRADIR and CEA/LETI, this development will offer a 320x240, 8-output, 30-micron-pixel IR array, sensitive from 0.4 to 3.2 microns, with 2 e- readout noise at a 1500 Hz frame rate. The high QE response is almost flat over this wavelength range. Advanced packaging with a miniature cryostat using liquid-nitrogen-free pulse tube cryocoolers is currently being developed for this programme in order to allow use of this detector in any type of environment. First results of this project are detailed here. These programmes are carried out with several partners, among them the French astronomical laboratories (LAM, OHP, IPAG), the detector manufacturers (e2v technologies, Sofradir, CEA/LETI) and other partners (ESO, ONERA, IAC, GTC).
Funding: the Opticon FP6 and FP7 programmes of the European Commission, ESO, CNRS and Université de Provence, Sofradir, ONERA, CEA/LETI and the French FUI (DGCIS).

  10. Front-end multiplexing—applied to SQUID multiplexing: Athena X-IFU and QUBIC experiments

    NASA Astrophysics Data System (ADS)

    Prele, D.

    2015-08-01

    As seen in the digital camera market, where sensor resolution has increased to "megapixels", all scientific and high-tech imagers (whatever the wavelength, from the radio to the X-ray range) also tend to increase their pixel count continually. The constraints on front-end signal transmission therefore increase too. An almost unavoidable solution for simplifying the integration of large arrays of pixels is front-end multiplexing. Moreover, "simple" and "efficient" techniques allow integration of read-out multiplexers in the focal plane itself. For instance, CCD (Charge Coupled Device) technology has boosted the number of pixels in digital cameras: it is precisely a planar technology that integrates both the sensors and a front-end multiplexed readout. In this context, front-end multiplexing techniques will be discussed for a better understanding of their advantages and their limits. Finally, the cases of astronomical instruments in the millimetre and X-ray ranges using SQUIDs (Superconducting QUantum Interference Devices) will be described.

  11. VizieR Online Data Catalog: Mission Accessible Near-Earth Objects Survey (Thirouin+, 2016)

    NASA Astrophysics Data System (ADS)

    Thirouin, A.; Moskovitz, N.; Binzel, R. P.; Christensen, E.; DeMeo, F. E.; Person, M. J.; Polishook, D.; Thomas, C. A.; Trilling, D.; Willman, M.; Hinkle, M.; Burt, B.; Avner, D.; Aceituno, F. J.

    2017-06-01

    The data were obtained with the 4.3m Lowell Discovery Channel Telescope (DCT), the 4.1m Southern Astrophysical Research (SOAR) telescope, the 4m Nicholas U. Mayall Telescope, the 2.1m at Kitt Peak Observatory, the 1.8m Perkins telescope, the 1.5m Sierra Nevada Observatory (OSN), and the 1.3m SMARTS telescope between 2013 August and 2015 October. The DCT is forty miles southeast of Flagstaff at the Happy Jack site (Arizona, USA). Images were obtained using the Large Monolithic Imager (LMI), which is a 6144*6160 CCD. The total field of view is 12.5'*12.5' with a plate scale of 0.12''/pixel (unbinned). Images were obtained using the 3*3 or 2*2 binning modes. Observations were carried out in situ. The SOAR telescope is located on Cerro Pachon, Chile. Images were obtained using the Goodman High Throughput Spectrograph (Goodman-HTS) instrument in its imaging mode. The instrument consists of a 4096*4096 Fairchild CCD, with a 7.2' diameter field of view (circular field of view) and a plate scale of 0.15''/pixel. Images were obtained using the 2*2 binning mode. Observations were conducted remotely. The Mayall telescope is a 4m telescope located at the Kitt Peak National Observatory (Tucson, Arizona, USA). The National Optical Astronomy Observatory (NOAO) CCD Mosaic-1.1 is a wide-field imager composed of an array of eight CCD chips. The field of view is 36'*36', and the plate scale is 0.26''/pixel. Observations were performed remotely. The 2.1m at Kitt Peak Observatory was operated with the STA3 2k*4k CCD, which has a plate scale of 0.305''/pixel and a field of view of 10.2'*6.6'. The instrument was binned 2*2 and the observations were conducted in situ. The Perkins 72'' telescope is located at the Anderson Mesa station of Lowell Observatory (Flagstaff, Arizona, USA). We used the Perkins ReImaging SysteM (PRISM) instrument, a 2k*2k Fairchild CCD. The PRISM plate scale is 0.39''/pixel for a field of view of 13'*13'. Observations were performed in situ. The 1.5m telescope located at the OSN at Loma de Dilar in the National Park of Sierra Nevada (Granada, Spain) was operated in situ. Observations were carried out with a 2k*2k CCD, with a total field of view of 7.8'*7.8'. We used the 2*2 binning mode, resulting in an effective plate scale of 0.46''/pixel. The 1.3m SMARTS telescope is located at the Cerro Tololo Inter-American Observatory (Coquimbo region, Chile). This telescope is equipped with a camera called ANDICAM (A Novel Dual Imaging CAMera). ANDICAM is a Fairchild 2048*2048 CCD. The pixel scale is 0.371''/pixel, and the field of view is 6'*6'. Observations were carried out in queue mode. (2 data files).

  12. Final Report, January 1991 - July 1992

    NASA Astrophysics Data System (ADS)

    Ferrara, Jon

    1992-07-01

    This report covers final schedules, expenses and billings, monthly reports, testing, and deliveries for this contract. The goal of the detector development program for the Solar and Heliospheric Observatory (SOHO) EUV Imaging Telescope (EIT) is an Extreme UltraViolet (EUV) CCD (Charge-Coupled Device) camera. As part of the CCD screening effort, the quantum efficiency (QE) of a prototype CCD has been measured in the NRL EUV laboratory over the wavelength range of 256 to 735 Angstroms. A simplified model has been applied to these QE measurements to illustrate the relevant physical processes that determine the performance of the detector. The charge transfer efficiency (CTE) characteristics of the Tektronix 1024 X 1024 CCD being developed for STIS/SOHO space imaging applications have been characterized at different signal levels, operating conditions, and temperatures using a variety of test methods. A number of CCDs have been manufactured using processing techniques developed to improve CTE, and test results on these devices will be used in determining the final chip design. In this paper, we discuss the CTE test methods used and present the results and conclusions of these tests.

  13. The future scientific CCD

    NASA Technical Reports Server (NTRS)

    Janesick, J. R.; Elliott, T.; Collins, S.; Marsh, H.; Blouke, M. M.

    1984-01-01

    Since the first introduction of charge-coupled devices (CCDs) in 1970, CCDs have been considered for applications related to memories, logic circuits, and the detection of visible radiation. It is pointed out, however, that the mass-market orientation of CCD development has left largely untapped the enormous potential of these devices for advanced scientific instrumentation. The present paper therefore aims to introduce CCD characteristics to the scientific community, taking into account prospects for further improvement. Attention is given to evaluation criteria, a summary of current CCDs, CCD performance characteristics, absolute calibration tools, quantum efficiency, aspects of charge collection, charge transfer efficiency, read noise, and predictions regarding the characteristics of the next generation of silicon scientific CCD imagers.
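
    Several of the performance quantities listed above (gain, read noise) are conventionally measured with the photon transfer method. The sketch below is the textbook two-flat, two-bias recipe, not the calibration procedure of the paper, and the toy frame statistics are invented.

```python
import numpy as np

def gain_and_read_noise(flat1, flat2, bias1, bias2):
    """Estimate system gain (e-/ADU) and read noise (e-) from a pair of
    flat-field frames and a pair of bias frames (classic photon transfer)."""
    flat_diff = flat1.astype(float) - flat2.astype(float)
    bias_diff = bias1.astype(float) - bias2.astype(float)
    signal = 0.5 * (flat1.mean() + flat2.mean()) - 0.5 * (bias1.mean() + bias2.mean())
    var_flat = flat_diff.var() / 2.0              # per-frame variance from the difference
    var_bias = bias_diff.var() / 2.0
    gain = signal / (var_flat - var_bias)         # e-/ADU
    read_noise = gain * np.sqrt(var_bias)         # in electrons
    return gain, read_noise

# Toy frames: ~20000 e- flats at a gain of 4 e-/ADU, 5 ADU read noise, 500 ADU offset.
rng = np.random.default_rng(0)
bias = rng.normal(500, 5, (2, 256, 256))
flat = rng.poisson(20000, (2, 256, 256)) / 4.0 + rng.normal(500, 5, (2, 256, 256))
print(gain_and_read_noise(flat[0], flat[1], bias[0], bias[1]))   # roughly (4.0, 20.0)
```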

  14. Advances in CCD detector technology for x-ray diffraction applications

    NASA Astrophysics Data System (ADS)

    Thorson, Timothy A.; Durst, Roger D.; Frankel, Dan; Bordwell, Rex L.; Camara, Jose R.; Leon-Guerrero, Edward; Onishi, Steven K.; Pang, Francis; Vu, Paul; Westbrook, Edwin M.

    2004-01-01

    Phosphor-coupled CCDs are established as one of the most successful technologies for x-ray diffraction. This application demands that the CCD simultaneously achieve both the highest possible sensitivity and high readout speeds. Recently, wafer-scale, back illuminated devices have become available which offer significantly higher quantum efficiency than conventional devices (the Fairchild Imaging CCD 486 BI). However, since back thinning significantly changes the electrical properties of the CCD the high speed operation of wafer-scale, back-illuminated devices is not well understood. Here we describe the operating characteristics (including noise, linearity, full well capacity and CTE) of the back-illuminated CCD 486 at readout speeds up to 4 MHz.

  15. [Development of a Surgical Navigation System with Beam Split and Fusion of the Visible and Near-Infrared Fluorescence].

    PubMed

    Yang, Xiaofeng; Wu, Wei; Wang, Guoan

    2015-04-01

    This paper presents a surgical optical navigation system with non-invasive, real-time positioning characteristics for open surgical procedures. The design is based on the principle of near-infrared fluorescence molecular imaging. In vivo fluorescence excitation technology, multi-channel spectral camera technology and image fusion software technology were used. A visible and near-infrared ring-LED excitation source, multi-channel band-pass filters, two-CCD spectral camera sensor technology and computer systems were integrated and, as a result, a new surgical optical navigation system was successfully developed. When the near-infrared fluorescent agent is injected, the system can display anatomical images of the tissue surface and near-infrared fluorescent functional images of the surgical field simultaneously. The system can identify lymphatic vessels, lymph nodes and tumor margins that the surgeon cannot find with the naked eye intraoperatively. Our research will effectively guide the surgeon in removing tumor tissue and significantly improve the success rate of surgery. The technologies have obtained a national patent, with patent No. ZI. 2011 1 0292374. 1.

  16. Occultation Predictions Using CCD Strip-Scanning Astrometry

    NASA Technical Reports Server (NTRS)

    Dunham, Edward W.; Ford, C. H.; Stone, R. P. S.; McDonald, S. W.; Olkin, C. B.; Elliot, J. L.; Witteborn, Fred C. (Technical Monitor)

    1994-01-01

    We are developing the method of CCD strip-scanning astrometry for the purpose of deriving reliable advance predictions for occultations involving small objects in the outer solar system. We are using a camera system based on a Ford/Loral 2Kx2K CCD with the Crossley telescope at Lick Observatory for this work. The columns of the CCD are aligned East-West, the telescope drive is stopped, and the CCD is clocked at the same rate that the stars drift across it. In this way we obtain arbitrary-length strip images 20 arcmin wide with 0.58" pixels. Since planets move mainly in RA, it is possible to obtain images of the planet and the star to be occulted on the same strip well before the occultation occurs. The strip-to-strip precision (i.e. reproducibility) of positions is limited by atmospheric image motion to about 0.1" rms per strip. However, for objects that are nearby in RA, the image motion is highly correlated and their relative positions are good to 0.02" rms per strip. We will show that the effects of atmospheric image motion on a given strip can be removed if a sufficient number of strips of a given area have been obtained. Thus, it is possible to reach an rms precision of 0.02" per strip, corresponding to about 0.3 of Pluto's or Triton's angular radius. The ultimate accuracy of a prediction based on strip-scanning astrometry is currently limited by the accuracy of the positions of the stars in the astrometric network used and by systematic errors most likely due to the optical system. We will show the results of the prediction of some recent occultations as examples of the current capabilities and limitations of this technique.

  17. On the development of new SPMN diurnal video systems for daylight fireball monitoring

    NASA Astrophysics Data System (ADS)

    Madiedo, J. M.; Trigo-Rodríguez, J. M.; Castro-Tirado, A. J.

    2008-09-01

    Daylight fireball video monitoring. High-sensitivity video devices are commonly used for the study of the activity of meteor streams during the night. These provide useful data for the determination, for instance, of radiant, orbital and photometric parameters ([1] to [7]). With this aim, during 2006 three automated video stations supported by Universidad de Huelva were set up in Andalusia within the framework of the SPanish Meteor Network (SPMN). These are endowed with 8-9 high-sensitivity wide-field video cameras that achieve a meteor limiting magnitude of about +3. These stations have increased the coverage performed by the low-scan all-sky CCD systems operated by the SPMN and, besides, achieve a time accuracy of about 0.01 s for determining the appearance of meteor and fireball events. Despite these nocturnal monitoring efforts, we realised the need to set up stations for daylight fireball detection. Such an effort was also motivated by the appearance of the two recent meteorite-dropping events of Villalbeto de la Peña [8,9] and Puerto Lápice [10]. Although the Villalbeto de la Peña event was casually videotaped and photographed, no direct pictures or videos were obtained for the Puerto Lápice event. Consequently, in order to perform a continuous recording of daylight fireball events, we set up new automated systems based on CCD video cameras. However, the development of these video stations involves several issues with respect to nocturnal systems that must be properly solved in order to achieve optimal operation. The first of these video stations, also supported by the University of Huelva, was set up in Sevilla (Andalusia) during May 2007. Of course, fireball association is unequivocal only in those cases when two or more stations record the fireball and the geocentric radiant is consequently determined accurately. With this aim, a second diurnal video station is being set up in Andalusia in the facilities of the Centro Internacional de Estudios y Convenciones Ecológicas y Medioambientales (CIECEM, University of Huelva), in the environment of Doñana Natural Park (Huelva province). In this way, both stations, which are separated by a distance of 75 km, will work as a double video station system in order to provide trajectory and orbit information of major bolides and, thus, increase the chance of meteorite recovery in the Iberian Peninsula. The new diurnal SPMN video stations are endowed with different models of Mintron cameras (Mintron Enterprise Co., LTD). These are high-sensitivity devices that employ a colour 1/2" Sony interline-transfer CCD image sensor. Aspherical lenses are attached to the video cameras in order to maximize image quality. However, the use of fast lenses is not a priority here: while most of our nocturnal cameras use f0.8 or f1.0 lenses in order to detect meteors as faint as magnitude +3, the diurnal systems employ in most cases f1.4 to f2.0 lenses. Their focal length ranges from 3.8 to 12 mm to cover different atmospheric volumes. The cameras are arranged in such a way that the whole sky is monitored from every observing station. (Figure 1 caption: a daylight event recorded from Sevilla on May 26, 2008 at 4h30m05.4 ± 0.1 s UT.) The way our diurnal video cameras work is similar to the operation of our nocturnal systems [1]. Thus, diurnal stations are automatically switched on and off at sunrise and sunset, respectively. The images, taken at 25 fps with a resolution of 720x576 pixels, are continuously sent to PC computers through a video capture device.
The computers run software (UFOCapture, by SonotaCo, Japan) that automatically registers meteor trails and stores the corresponding video frames on hard disk. Besides, before the signal from the cameras reaches the computers, a video time inserter that employs a GPS device (KIWI-OSD, by PFD Systems) inserts time information on every video frame. This allows us to measure time precisely (to about 0.01 sec) along the whole fireball path. However, one of the issues with respect to nocturnal observing stations is the high number of false detections as a consequence of several factors: higher activity of birds and insects, reflection of sunlight on planes and helicopters, etc. Sometimes these false events follow a pattern very similar to fireball trails, which makes the use of a second station absolutely necessary in order to discriminate between them. Another key issue is related to the passage of the Sun across the field of view of some of the cameras. In fact, special care is necessary here to avoid any damage to the CCD sensor. Besides, depending on atmospheric conditions (dust or moisture, for instance), the Sun may saturate most of the video frame. To solve this, our automated system determines which camera is pointing towards the Sun at a given moment and disconnects it. As the cameras are endowed with auto-iris lenses, disconnection means that the optics is fully closed and, so, the CCD sensor is protected. This, of course, means that while this happens the atmospheric volume covered by the corresponding camera is not monitored. It must also be taken into account that, in general, operating temperatures are higher for diurnal cameras. This results in higher thermal noise and so poses some difficulties for the detection software. To minimize this effect, it is necessary to employ CCD video cameras with a proper signal-to-noise ratio. Refrigeration of the CCD sensor with, for instance, a Peltier system can also be considered. The astrometric reduction procedure is also somewhat different for daytime events: it requires that reference objects are located within the field of view of every camera in order to calibrate the corresponding images. This is done by allowing every camera to capture distant buildings that, by means of said calibration, allow us to obtain the equatorial coordinates of the fireball along its path by measuring its corresponding X and Y positions on every video frame. Such a calibration can be performed from star positions measured on nocturnal images taken with the same cameras. Once made, if the cameras are not moved, it is possible to estimate the equatorial coordinates of any future fireball event. We do not use any software for automatic astrometry of the images: this crucial step is made via direct measurement of the pixel positions, as in all our previous work. Then, from these astrometric measurements, our software estimates the atmospheric trajectory and radiant for each fireball ([10] to [13]). During 2007 and 2008 the SPMN has also set up other diurnal stations based on 1/3" progressive-scan CMOS sensors attached to modified wide-field lenses covering a 120x80 degree FOV. They are placed in Andalusia: El Arenosillo (Huelva), La Mayora (Málaga) and Murtas (Granada).
They also have night sensitivity thanks to an infrared cut filter (ICR), which enables the cameras to perform well in both high- and low-light conditions in colour, as well as to provide IR-sensitive black/white video at night. Conclusions: the first detections of daylight fireballs by CCD video cameras are being achieved within the SPMN framework. Future expansion and the set-up of new observing stations are currently being planned. The establishment of additional diurnal SPMN stations will allow an increase in the number of daytime fireballs detected. This will also increase our chances of meteorite recovery.

  18. A fringe projector-based study of the Brighter-Fatter Effect in LSST CCDs

    DOE PAGES

    Gilbertson, W.; Nomerotski, A.; Takacs, P.

    2017-09-07

    Achieving the Dark Energy science goals of the Large Synoptic Survey Telescope requires a detailed understanding of CCD sensor effects. One such sensor effect is the Point Spread Function (PSF) increasing with flux, alternatively called the "Brighter-Fatter Effect." Here a novel approach to performing PSF measurements in the context of the Brighter-Fatter Effect was tested, employing a Michelson interferometer to project a sinusoidal fringe pattern onto the CCD. The Brighter-Fatter Effect predicts that the recorded fringe pattern should become asymmetric in intensity as the brighter peaks, corresponding to a larger flux, are smeared by a larger PSF. By fitting the data with a model that allows for a changing PSF, the strength of the Brighter-Fatter Effect can be evaluated.

  19. A fringe projector-based study of the Brighter-Fatter Effect in LSST CCDs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbertson, W.; Nomerotski, A.; Takacs, P.

Achieving the Dark Energy science goals of the Large Synoptic Survey Telescope requires a detailed understanding of CCD sensor effects. One such sensor effect is the Point Spread Function (PSF) increasing with flux, alternatively called the 'Brighter-Fatter Effect.' Here a novel approach was tested to perform PSF measurements in the context of the Brighter-Fatter Effect, employing a Michelson interferometer to project a sinusoidal fringe pattern onto the CCD. The Brighter-Fatter effect predicts that the fringe intensity pattern should become asymmetric, as the brighter peaks corresponding to a larger flux are smeared by a larger PSF. By fitting the data with a model that allows for a changing PSF, the strength of the Brighter-Fatter effect can be evaluated.

  20. Method for 3D noncontact measurements of cut trees package area

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Vizilter, Yuri V.

    2001-02-01

Progress in imaging sensors and computers creates the background for numerous 3D imaging applications in a wide variety of manufacturing activities. Many demands for automated precise measurement come from the wood industry. One of them is the accurate determination of the volume of cut trees carried on a truck. The key point for volume estimation is determination of the front area of the cut-tree package. To replace the slow and inaccurate manual measurements currently in practice, an experimental system for automated non-contact wood measurement has been developed. The system includes two non-metric CCD video cameras, a PC as the central processing unit, frame grabbers and original software for image processing and 3D measurement. The proposed measurement method is based on capturing a stereo pair of the front of the tree package and performing an orthotransformation of the image into the front plane. This technique allows the transformed image to be processed for recognition of circular shapes and calculation of their area. The metric characteristics of the system are provided by a special camera calibration procedure. The paper presents the developed method of 3D measurement, describes the hardware used for image acquisition and the software implementing the developed algorithms, and gives the productivity and precision characteristics of the system.
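The circle-recognition step described above (detecting log ends in the orthorectified front-plane image and summing their areas) could be prototyped, for example, with OpenCV's Hough circle transform. This is only a sketch under assumed file names, scale, and detector parameters, not the authors' software.

```python
import cv2
import numpy as np

MM_PER_PIXEL = 2.0  # scale of the rectified front-plane image (assumed calibration)

# Orthorectified front view of the log package (hypothetical file name).
img = cv2.imread("front_plane.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "rectified image not found"
img = cv2.medianBlur(img, 5)  # suppress bark texture before circle detection

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                           param1=100, param2=40, minRadius=10, maxRadius=120)

total_area_mm2 = 0.0
if circles is not None:
    for _x, _y, r in circles[0]:
        total_area_mm2 += np.pi * (r * MM_PER_PIXEL) ** 2  # area of one log end

print(f"estimated front area of the package: {total_area_mm2 / 1e6:.3f} m^2")
```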

  1. Image-based spectroscopy for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Bachmakov, Eduard; Molina, Carolyn; Wynne, Rosalind

    2014-03-01

An image-processing algorithm for use with a nano-featured spectrometer configuration for chemical agent detection is presented. The spectrometer chip, acquired from Nano-Optic Devices™, can reduce the size of the spectrometer to that of a coin. The nanospectrometer chip was aligned with a 635 nm laser source, objective lenses, and a CCD camera. The images from the nanospectrometer chip were collected and compared to reference spectra. Random background noise contributions were isolated and removed from the diffraction pattern image analysis via a threshold filter. Results are provided for the image-based detection of the diffraction pattern produced by the nanospectrometer. The featured PCF spectrometer has the potential to measure optical absorption spectra in order to detect trace amounts of contaminants. MATLAB tools allow for the implementation of intelligent, automatic detection of the relevant sub-patterns in the diffraction patterns and subsequent extraction of the parameters using region-detection algorithms such as the generalized Hough transform, which detects specific shapes within the image. This transform is a method for detecting curves by exploiting the duality between points on a curve and the parameters of that curve. By employing this image-processing technique, future sensor systems will benefit from new applications such as unsupervised environmental monitoring of air or water quality.
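The threshold-filter and region-detection steps mentioned in the abstract can be illustrated with a small stand-in (here in Python/SciPy rather than the MATLAB tools the authors used): suppress background below a threshold and return the centroids of the surviving diffraction-pattern regions. The synthetic frame and threshold value are assumptions; a generalized Hough transform would then match specific shapes to these regions.

```python
import numpy as np
from scipy import ndimage

def detect_diffraction_spots(image, threshold):
    """Remove random background below a threshold and label the remaining
    diffraction-pattern regions (a stand-in for the full shape-matching step)."""
    filtered = np.where(image >= threshold, image, 0.0)   # threshold filter
    labels, n = ndimage.label(filtered > 0)
    centroids = ndimage.center_of_mass(filtered, labels, range(1, n + 1))
    return filtered, centroids

rng = np.random.default_rng(1)
frame = rng.normal(10.0, 2.0, (128, 128))       # background noise
frame[40:44, 60:64] += 80.0                     # two synthetic diffraction spots
frame[90:94, 30:34] += 60.0

_, spots = detect_diffraction_spots(frame, threshold=30.0)
print("spot centroids (row, col):", [tuple(round(v, 1) for v in c) for c in spots])
```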

  2. Monitoring Powdery Mildew of Winter Wheat by Using Moderate Resolution Multi-Temporal Satellite Imagery

    PubMed Central

    Zhang, Jingcheng; Pu, Ruiliang; Yuan, Lin; Wang, Jihua; Huang, Wenjiang; Yang, Guijun

    2014-01-01

Powdery mildew is one of the most serious diseases that have a significant impact on the production of winter wheat. As an effective alternative to traditional sampling methods, remote sensing can be a useful tool in disease detection. This study attempted to use multi-temporal moderate resolution satellite-based data of surface reflectances in blue (B), green (G), red (R) and near infrared (NIR) bands from HJ-CCD (the CCD sensor on the Huanjing satellite) to monitor the disease at a regional scale. In a suburban area in Beijing, China, an extensive field campaign for disease intensity survey was conducted at key growth stages of winter wheat in 2010. Meanwhile, a corresponding time series of HJ-CCD images was acquired over the study area. In this study, a number of single-stage and multi-stage spectral features, which were sensitive to powdery mildew, were selected by using an independent t-test. With the selected spectral features, four advanced methods: Mahalanobis distance, maximum likelihood classifier, partial least squares regression (PLSR) and mixture tuned matched filtering (MTMF) were tested and evaluated for their performance in disease mapping. The experimental results showed that all four algorithms could generate disease maps with a generally correct distribution pattern of powdery mildew at the grain filling stage (Zadoks 72). However, by comparing these disease maps with ground survey data (validation samples), all four algorithms also produced a variable degree of error in estimating disease occurrence and severity. Further, we found that the integration of the MTMF and PLSR algorithms could result in a significant accuracy improvement in identifying and determining disease intensity (overall accuracy increased from 72% to 78% and kappa coefficient from 0.49 to 0.59). The experimental results also demonstrated that multi-temporal satellite images have great potential for crop disease mapping at a regional scale. PMID:24691435
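The feature-selection step described above, an independent t-test applied to each candidate spectral feature, can be sketched as follows; the feature names, sample values, and significance threshold below are illustrative, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical spectral features for diseased vs. healthy field samples:
# rows = samples, columns = candidate features (e.g. band ratios, vegetation indices).
rng = np.random.default_rng(2)
diseased = rng.normal([0.42, 0.18, 0.65], 0.05, size=(30, 3))
healthy  = rng.normal([0.55, 0.17, 0.50], 0.05, size=(30, 3))
names = ["NDVI", "green/red ratio", "NIR reflectance"]

# Independent two-sample t-test per feature; keep those with p < 0.05.
for j, name in enumerate(names):
    t, p = stats.ttest_ind(diseased[:, j], healthy[:, j], equal_var=False)
    flag = "selected" if p < 0.05 else "rejected"
    print(f"{name:18s} t = {t:6.2f}  p = {p:.3g}  -> {flag}")
```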

  3. Monitoring powdery mildew of winter wheat by using moderate resolution multi-temporal satellite imagery.

    PubMed

    Zhang, Jingcheng; Pu, Ruiliang; Yuan, Lin; Wang, Jihua; Huang, Wenjiang; Yang, Guijun

    2014-01-01

Powdery mildew is one of the most serious diseases that have a significant impact on the production of winter wheat. As an effective alternative to traditional sampling methods, remote sensing can be a useful tool in disease detection. This study attempted to use multi-temporal moderate resolution satellite-based data of surface reflectances in blue (B), green (G), red (R) and near infrared (NIR) bands from HJ-CCD (the CCD sensor on the Huanjing satellite) to monitor the disease at a regional scale. In a suburban area in Beijing, China, an extensive field campaign for disease intensity survey was conducted at key growth stages of winter wheat in 2010. Meanwhile, a corresponding time series of HJ-CCD images was acquired over the study area. In this study, a number of single-stage and multi-stage spectral features, which were sensitive to powdery mildew, were selected by using an independent t-test. With the selected spectral features, four advanced methods: Mahalanobis distance, maximum likelihood classifier, partial least squares regression (PLSR) and mixture tuned matched filtering (MTMF) were tested and evaluated for their performance in disease mapping. The experimental results showed that all four algorithms could generate disease maps with a generally correct distribution pattern of powdery mildew at the grain filling stage (Zadoks 72). However, by comparing these disease maps with ground survey data (validation samples), all four algorithms also produced a variable degree of error in estimating disease occurrence and severity. Further, we found that the integration of the MTMF and PLSR algorithms could result in a significant accuracy improvement in identifying and determining disease intensity (overall accuracy increased from 72% to 78% and kappa coefficient from 0.49 to 0.59). The experimental results also demonstrated that multi-temporal satellite images have great potential for crop disease mapping at a regional scale.

  4. Ultrasound-modulated optical tomography with intense acoustic bursts.

    PubMed

    Zemp, Roger J; Kim, Chulhong; Wang, Lihong V

    2007-04-01

    Ultrasound-modulated optical tomography (UOT) detects ultrasonically modulated light to spatially localize multiply scattered photons in turbid media with the ultimate goal of imaging the optical properties in living subjects. A principal challenge of the technique is weak modulated signal strength. We discuss ways to push the limits of signal enhancement with intense acoustic bursts while conforming to optical and ultrasonic safety standards. A CCD-based speckle-contrast detection scheme is used to detect acoustically modulated light by measuring changes in speckle statistics between ultrasound-on and ultrasound-off states. The CCD image capture is synchronized with the ultrasound burst pulse sequence. Transient acoustic radiation force, a consequence of bursts, is seen to produce slight signal enhancement over pure ultrasonic-modulation mechanisms for bursts and CCD exposure times of the order of milliseconds. However, acoustic radiation-force-induced shear waves are launched away from the acoustic sample volume, which degrade UOT spatial resolution. By time gating the CCD camera to capture modulated light before radiation force has an opportunity to accumulate significant tissue displacement, we reduce the effects of shear-wave image degradation, while enabling very high signal-to-noise ratios. Additionally, we maintain high-resolution images representative of optical and not mechanical contrast. Signal-to-noise levels are sufficiently high so as to enable acquisition of 2D images of phantoms with one acoustic burst per pixel.
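A minimal sketch of the CCD speckle-contrast measurement underlying this record (not the authors' acquisition code): speckle contrast is the ratio of the standard deviation to the mean of the speckle image, and ultrasound modulation during the exposure lowers it. The synthetic frames below simply blur a fully developed speckle field to mimic that reduction; the frame sizes and statistics are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def speckle_contrast(img):
    """Speckle contrast C = sigma / mean of a CCD speckle frame."""
    img = np.asarray(img, dtype=float)
    return img.std() / img.mean()

# Hypothetical frames: ultrasound burst off (fully developed speckle, C close to 1)
# and ultrasound burst on (modulation during the exposure washes the speckle out).
rng = np.random.default_rng(3)
frame_off = rng.exponential(100.0, (256, 256))
frame_on = uniform_filter1d(frame_off, size=5, axis=1)  # crude stand-in for modulation blur

c_off, c_on = speckle_contrast(frame_off), speckle_contrast(frame_on)
print(f"C_off = {c_off:.3f}, C_on = {c_on:.3f}, modulation depth = {c_off - c_on:.3f}")
```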

  5. Noninvasive imaging of protein-protein interactions from live cells and living subjects using bioluminescence resonance energy transfer.

    PubMed

    De, Abhijit; Gambhir, Sanjiv Sam

    2005-12-01

    This study demonstrates a significant advancement of imaging of a distance-dependent physical process, known as the bioluminescent resonance energy transfer (BRET2) signal in living subjects, by using a cooled charge-coupled device (CCD) camera. A CCD camera-based spectral imaging strategy enables simultaneous visualization and quantitation of BRET signal from live cells and cells implanted in living mice. We used the BRET2 system, which utilizes Renilla luciferase (hRluc) protein and its substrate DeepBlueC (DBC) as an energy donor and a mutant green fluorescent protein (GFP2) as the acceptor. To accomplish this objective in this proof-of-principle study, the donor and acceptor proteins were fused to FKBP12 and FRB, respectively, which are known to interact only in the presence of the small molecule mediator rapamycin. Mammalian cells expressing these fusion constructs were imaged using a cooled-CCD camera either directly from culture dishes or by implanting them into mice. By comparing the emission photon yields in the presence and absence of rapamycin, the specific BRET signal was determined. The CCD imaging approach of BRET signal is particularly appealing due to its capacity to seamlessly bridge the gap between in vitro and in vivo studies. This work validates BRET as a powerful tool for interrogating and observing protein-protein interactions directly at limited depths in living mice.
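The quantitation step described above compares acceptor and donor emission photon yields with and without rapamycin. A minimal, hypothetical sketch of that ratio calculation is shown below; the counts, background values, and filter wavelengths are invented, and this is not the instrument software.

```python
def bret_ratio(acceptor_counts, donor_counts, acceptor_bg=0.0, donor_bg=0.0):
    """Background-corrected BRET ratio: GFP2 (acceptor) emission over Rluc (donor) emission."""
    return (acceptor_counts - acceptor_bg) / (donor_counts - donor_bg)

# Hypothetical CCD photon counts through acceptor (~510 nm) and donor (~400 nm) filters.
with_rapamycin    = bret_ratio(acceptor_counts=5.2e4, donor_counts=9.8e4,
                               acceptor_bg=3.0e3, donor_bg=3.0e3)
without_rapamycin = bret_ratio(acceptor_counts=1.1e4, donor_counts=9.5e4,
                               acceptor_bg=3.0e3, donor_bg=3.0e3)

print(f"BRET ratio +rapamycin: {with_rapamycin:.3f}, -rapamycin: {without_rapamycin:.3f}")
print(f"specific (induced) signal: {with_rapamycin - without_rapamycin:.3f}")
```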

  6. A Pipeline Tool for CCD Image Processing

    NASA Astrophysics Data System (ADS)

    Bell, Jon F.; Young, Peter J.; Roberts, William H.; Sebo, Kim M.

MSSSO is part of a collaboration developing a wide field imaging CCD mosaic (WFI). As part of this project, we have developed a GUI-based pipeline tool that is an integrated part of MSSSO's CICADA data acquisition environment and processes CCD FITS images as they are acquired. The tool is also designed to run as a stand-alone program to process previously acquired data. IRAF tasks are used as the central engine, including the new NOAO mscred package for processing multi-extension FITS files. The STScI OPUS pipeline environment may be used to manage data and process scheduling. The Motif GUI was developed using SUN Visual Workshop. C++ classes were written to facilitate launching of IRAF and OPUS tasks. While this first version implements calibration processing up to and including flat field corrections, there is scope to extend it to other processing.

  7. Matching CCD images to a stellar catalog using locality-sensitive hashing

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Yu, Jia-Zong; Peng, Qing-Yu

    2018-02-01

The use of a subset of observed stars in a CCD image to find their corresponding matched stars in a stellar catalog is an important issue in astronomical research. Subgraph isomorphism-based algorithms are the most widely used methods in star catalog matching. When more subgraph features are provided, the CCD images are recognized more reliably. However, when the navigation feature database is large, the method requires more time to match the observed model. To solve this problem, this study investigates and improves subgraph isomorphic matching algorithms. We present an algorithm based on a locality-sensitive hashing technique, which allocates quadrilateral models in the navigation feature database into different hash buckets and reduces the search range to the bucket in which the observed quadrilateral model is located. Experimental results indicate the effectiveness of our method.
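As an illustration of the bucketing idea (not the authors' implementation), the sketch below quantizes quadrilateral shape descriptors into hash buckets so that a query only searches the bucket its own descriptor falls into. The descriptor values and star identifiers are invented; a practical version would also probe neighbouring cells and use the paper's actual quad invariants.

```python
from collections import defaultdict
import numpy as np

class QuadHash:
    """Bucket quadrilateral descriptors by quantization (a simple locality-sensitive
    hash); querying only searches the bucket matching the observed descriptor."""

    def __init__(self, cell=0.05):
        self.cell = cell
        self.buckets = defaultdict(list)

    def _key(self, descriptor):
        return tuple(np.floor(np.asarray(descriptor) / self.cell).astype(int))

    def add(self, descriptor, star_ids):
        self.buckets[self._key(descriptor)].append((np.asarray(descriptor), star_ids))

    def query(self, descriptor, tol=0.02):
        d = np.asarray(descriptor)
        best, best_dist = None, np.inf
        for cand, ids in self.buckets.get(self._key(d), []):
            dist = np.linalg.norm(cand - d)
            if dist < best_dist and dist <= tol:
                best, best_dist = ids, dist
        return best

# Catalogue quads described by normalized invariants (values are illustrative).
index = QuadHash()
index.add([0.31, 0.62, 0.44, 0.71], ("HIP 100", "HIP 101", "HIP 102", "HIP 103"))
index.add([0.18, 0.55, 0.80, 0.27], ("HIP 200", "HIP 201", "HIP 202", "HIP 203"))

print(index.query([0.305, 0.618, 0.442, 0.708]))
```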

  8. High dynamic spectroscopy using a digital micromirror device and periodic shadowing.

    PubMed

    Kristensson, Elias; Ehn, Andreas; Berrocal, Edouard

    2017-01-09

    We present an optical solution called DMD-PS to boost the dynamic range of 2D imaging spectroscopic measurements up to 22 bits by incorporating a digital micromirror device (DMD) prior to detection in combination with the periodic shadowing (PS) approach. In contrast to high dynamic range (HDR), where the dynamic range is increased by recording several images at different exposure times, the current approach has the potential of improving the dynamic range from a single exposure and without saturation of the CCD sensor. In the procedure, the spectrum is imaged onto the DMD that selectively reduces the reflection from the intense spectral lines, allowing the signal from the weaker lines to be increased by a factor of 28 via longer exposure times, higher camera gains or increased laser power. This manipulation of the spectrum can either be based on a priori knowledge of the spectrum or by first performing a calibration measurement to sense the intensity distribution. The resulting benefits in detection sensitivity come, however, at the cost of strong generation of interfering stray light. To solve this issue the Periodic Shadowing technique, which is based on spatial light modulation, is also employed. In this proof-of-concept article we describe the full methodology of DMD-PS and demonstrate - using the calibration-based concept - an improvement in dynamic range by a factor of ~100 over conventional imaging spectroscopy. The dynamic range of the presented approach will directly benefit from future technological development of DMDs and camera sensors.

  9. Application of PLZT electro-optical shutter to diaphragm of visible and mid-infrared cameras

    NASA Astrophysics Data System (ADS)

    Fukuyama, Yoshiyuki; Nishioka, Shunji; Chonan, Takao; Sugii, Masakatsu; Shirahata, Hiromichi

    1997-04-01

Pb0.91La0.09(Zr0.65,Ti0.35)0.9775O3 (PLZT 9/65/35), commonly used as an electro-optical shutter, exhibits large phase retardation at low applied voltage. This shutter has the following features: (1) high shutter speed, (2) wide optical transmittance, and (3) high optical density in the 'OFF' state. If the shutter is applied as a diaphragm of a video camera, it could protect the sensor from intense light. We have tested the basic characteristics of the PLZT electro-optical shutter and the resolving power of imaging. The ratio of optical transmittance between the 'ON' and 'OFF' states was 1.1 × 10^3. The response time of the PLZT shutter from the 'ON' state to the 'OFF' state was 10 microseconds. The MTF reduction when placing the PLZT shutter in front of the visible video-camera lens was only 12 percent at a spatial frequency of 38 cycles/mm, which is the sensor resolution of the video camera. Moreover, we took visible images with the Si-CCD video camera. The He-Ne laser ghost image was observed in the 'ON' state; on the contrary, the ghost image was totally shut out in the 'OFF' state. From these tests, it has been found that the PLZT shutter is useful as a diaphragm for a visible video camera. The measured optical transmittance of a PLZT wafer with no antireflection coating was 78 percent over the range from 2 to 6 microns.

  10. An Investigation into the Spectral Imaging of Hall Thruster Plumes

    DTIC Science & Technology

    2015-07-01

imaging experiment. It employs a Kodak KAF-3200E 3 megapixel CCD (2184×1472 with 6.8 µm pixels; image area 14.9 x 10.0 mm) in an SBIG ST-series camera body. The camera was designed for astronomical imaging and thus long exposure ...

  11. Model-based frequency response characterization of a digital-image analysis system for epifluorescence microscopy

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Viles, Charles L.; Park, Stephen K.; Reichenbach, Stephen E.; Sieracki, Michael E.

    1992-01-01

    Consideration is given to a model-based method for estimating the spatial frequency response of a digital-imaging system (e.g., a CCD camera) that is modeled as a linear, shift-invariant image acquisition subsystem that is cascaded with a linear, shift-variant sampling subsystem. The method characterizes the 2D frequency response of the image acquisition subsystem to beyond the Nyquist frequency by accounting explicitly for insufficient sampling and the sample-scene phase. Results for simulated systems and a real CCD-based epifluorescence microscopy system are presented to demonstrate the accuracy of the method.

  12. Double Star Measurements at the Southern Sky with 50 cm Reflectors and Fast CCD Cameras in 2012

    NASA Astrophysics Data System (ADS)

    Anton, Rainer

    2014-07-01

    A Cassegrain and a Ritchey-Chrétien reflector, both with 50 cm aperture, were used in Namibia for recordings of double stars with fast CCD cameras and a notebook computer. From superposition of "lucky images", measurements of 39 double and multiple systems were obtained and compared with literature data. Occasional deviations are discussed. Images of some remarkable systems are also presented.

  13. Performance evaluation of integrating detectors for near-infrared fluorescence molecular imaging

    NASA Astrophysics Data System (ADS)

    Zhu, Banghe; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2014-05-01

Although a plethora of devices has been advanced for clinical translation, there are no standards for comparing optical devices for fluorescence molecular imaging and determining the most suitable one. In this work, we compare different CCD configurations using a solid phantom developed to mimic pM - fM concentrations of near-infrared fluorescent dyes in tissues. Our results show that intensified CCD systems (ICCDs) offer greater contrast at larger signal-to-noise ratios (SNRs) in comparison to un-intensified CCD systems operated at clinically reasonable, sub-second acquisition times. Furthermore, we compared our investigational ICCD device to the commercial NOVADAQ SPY system, demonstrating different performance in both SNR and contrast.

  14. Visualization of permanent marks in progressive addition lenses by digital in-line holography

    NASA Astrophysics Data System (ADS)

    Perucho, Beatriz; Micó, Vicente

    2013-04-01

A critical issue in the production of ophthalmic lenses is to guarantee correct centering and alignment throughout the manufacturing and mounting processes. To that end, progressive addition lenses (PALs) incorporate permanent marks at standardized locations on the lens. Those marks are engraved on the surface and provide the model identification and addition power of the PAL, as well as serving as locator marks to re-ink the removable marks if necessary. Although the permanent marks should be visible by simple visual inspection, they are often faint and of low contrast on new lenses, obscured by scratches on older lenses, and partially occluded and difficult to recognize on tinted or anti-reflection coated lenses. In this contribution, we present an extremely simple visualization system for permanent marks in PALs based on digital in-line holography. Light emitted by a superluminescent diode (SLD) is used to illuminate the PAL, which is placed just before a digital (CCD) sensor. Thus, the CCD records an in-line hologram arising from the wavefront diffracted by the PAL. As a result, it is possible to recover an in-focus image of the inspected region of the PAL by means of classical holographic tools applied in the digital domain. This numerical process involves digital recording of the in-line hologram, numerical back-propagation to the PAL plane, and some digital processing to reduce noise and present a high-quality final image. Preliminary experimental results are provided showing the applicability of the proposed method.

  15. Development and Application of a Structural Health Monitoring System Based on Wireless Smart Aggregates

    PubMed Central

    Ma, Haoyan; Li, Peng; Song, Gangbing; Wu, Jianxin

    2017-01-01

Structural health monitoring (SHM) systems can improve the safety and reliability of structures, reduce maintenance costs, and extend service life. Research on concrete SHM using piezoelectric-based smart aggregates has achieved significant progress. However, the newly developed techniques have not been widely applied in practical engineering, largely due to the wiring problems associated with large-scale structural health monitoring. The cumbersome wiring requires considerable material and labor and, more importantly, the associated maintenance burden is also heavy. Targeting a practical large-scale concrete crack detection (CCD) application, a smart-aggregate-based wireless sensor network system is proposed. The developed CCD system uses Zigbee 802.15.4 protocols and is able to perform dynamic stress monitoring, structural impact capturing, and internal crack detection. The system has been experimentally validated, and the experimental results demonstrated the effectiveness of the proposed system. This work provides important support for practical CCD applications using wireless smart aggregates. PMID:28714927

  16. Development and Application of a Structural Health Monitoring System Based on Wireless Smart Aggregates.

    PubMed

    Yan, Shi; Ma, Haoyan; Li, Peng; Song, Gangbing; Wu, Jianxin

    2017-07-17

Structural health monitoring (SHM) systems can improve the safety and reliability of structures, reduce maintenance costs, and extend service life. Research on concrete SHM using piezoelectric-based smart aggregates has achieved significant progress. However, the newly developed techniques have not been widely applied in practical engineering, largely due to the wiring problems associated with large-scale structural health monitoring. The cumbersome wiring requires considerable material and labor and, more importantly, the associated maintenance burden is also heavy. Targeting a practical large-scale concrete crack detection (CCD) application, a smart-aggregate-based wireless sensor network system is proposed. The developed CCD system uses Zigbee 802.15.4 protocols and is able to perform dynamic stress monitoring, structural impact capturing, and internal crack detection. The system has been experimentally validated, and the experimental results demonstrated the effectiveness of the proposed system. This work provides important support for practical CCD applications using wireless smart aggregates.

  17. Toolkit for testing scientific CCD cameras

    NASA Astrophysics Data System (ADS)

    Uzycki, Janusz; Mankiewicz, Lech; Molak, Marcin; Wrochna, Grzegorz

    2006-03-01

The CCD Toolkit (1) is a software tool for testing CCD cameras which allows important characteristics of a camera to be measured, such as readout noise, total gain, dark current, 'hot' pixels, useful area, etc. The application performs a statistical analysis of images saved in the FITS format commonly used in astronomy. The graphical interface is based on the ROOT package, which offers high functionality and flexibility. The program was developed to ensure compatibility with different operating systems: Windows and Linux. The CCD Toolkit was created for the "Pi of the Sky" project collaboration (2).
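As an example of the kind of statistic such a toolkit computes (a generic photon-transfer estimate, not necessarily the CCD Toolkit's own algorithm), gain and read noise can be derived from a pair of flat-field frames and a pair of bias frames. In practice the frames would be read from FITS files (for example with astropy.io.fits); here synthetic arrays with assumed parameters stand in for them.

```python
import numpy as np

def gain_and_read_noise(flat1, flat2, bias1, bias2):
    """Photon-transfer estimates of gain (e-/ADU) and read noise (e- rms)
    from two matched flat-field frames and two bias frames."""
    flat_diff_var = np.var(flat1.astype(float) - flat2.astype(float))
    bias_diff_var = np.var(bias1.astype(float) - bias2.astype(float))
    signal = (flat1.mean() + flat2.mean()) - (bias1.mean() + bias2.mean())
    gain = signal / (flat_diff_var - bias_diff_var)
    read_noise = gain * np.sqrt(bias_diff_var / 2.0)
    return gain, read_noise

# Synthetic frames standing in for FITS images (true gain 2 e-/ADU, read noise 10 e-).
rng = np.random.default_rng(4)
shape, bias_level, true_gain, rn_adu = (256, 256), 500.0, 2.0, 5.0

def make_flat():
    # ~20,000 e- of signal per pixel, converted to ADU, plus bias and read noise
    return rng.poisson(20000, shape) / true_gain + bias_level + rng.normal(0, rn_adu, shape)

def make_bias():
    return bias_level + rng.normal(0, rn_adu, shape)

g, rn = gain_and_read_noise(make_flat(), make_flat(), make_bias(), make_bias())
print(f"gain ~ {g:.2f} e-/ADU, read noise ~ {rn:.1f} e-")
```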

  18. Low temperature multi-alkali photocathode processing technique for sealed intensified CCD tubes

    NASA Technical Reports Server (NTRS)

    Doliber, D. L.; Dozier, E. E.; Wenzel, H.; Beaver, E. A.; Hier, R. G.

    1989-01-01

A low temperature photocathode process has been used to fabricate an intensified CCD visual photocathode image tube, by incorporating a thinned, backside-illuminated CCD as the target anode of a digicon tube of Hubble Space Telescope (HST) design. The CCD digicon tube employs the HST's sodium bialkali photocathode and MgF2 substrate, thereby allowing a direct photocathode quantum efficiency comparison between photocathodes produced by the presently employed low temperature process and those of the conventional high temperature process. Attention is given to the processing chamber used, as well as the details of gas desorption and photocathode processing.

  19. Improved Scanners for Microscopic Hyperspectral Imaging

    NASA Technical Reports Server (NTRS)

    Mao, Chengye

    2009-01-01

Improved scanners to be incorporated into hyperspectral microscope-based imaging systems have been invented. Heretofore, in microscopic imaging, including spectral imaging, it has been customary to either move the specimen relative to the optical assembly that includes the microscope or else move the entire assembly relative to the specimen. It becomes extremely difficult to control such scanning when submicron translation increments are required, because the high magnification of the microscope enlarges all movements in the specimen image on the focal plane. To overcome this difficulty, in a system based on this invention, no attempt would be made to move either the specimen or the optical assembly. Instead, an objective lens would be moved within the assembly so as to cause translation of the image at the focal plane: the effect would be equivalent to scanning in the focal plane. The upper part of the figure depicts a generic proposed microscope-based hyperspectral imaging system incorporating the invention. The optical assembly of this system would include an objective lens (normally, a microscope objective lens) and a charge-coupled-device (CCD) camera. The objective lens would be mounted on a servomotor-driven translation stage, which would be capable of moving the lens in precisely controlled increments, relative to the camera, parallel to the focal-plane scan axis. The output of the CCD camera would be digitized and fed to a frame grabber in a computer. The computer would store the frame-grabber output for subsequent viewing and/or processing of images. The computer would contain a position-control interface board, through which it would control the servomotor. There are several versions of the invention. An essential feature common to all versions is that the stationary optical subassembly containing the camera would also contain a spatial window, at the focal plane of the objective lens, that would pass only a selected portion of the image. In one version, the window would be a slit, the CCD would contain a one-dimensional array of pixels, and the objective lens would be moved along an axis perpendicular to the slit to spatially scan the image of the specimen in pushbroom fashion. The image built up by scanning in this case would be an ordinary (non-spectral) image. In another version, the optics of which are depicted in the lower part of the figure, the spatial window would be a slit, the CCD would contain a two-dimensional array of pixels, the slit image would be refocused onto the CCD by a relay-lens pair consisting of a collimating and a focusing lens, and a prism-grating-prism optical spectrometer would be placed between the collimating and focusing lenses. Consequently, the image on the CCD would be spatially resolved along the slit axis and spectrally resolved along the axis perpendicular to the slit. As in the first-mentioned version, the objective lens would be moved along an axis perpendicular to the slit to spatially scan the image of the specimen in pushbroom fashion.

  20. Automated micromanipulation desktop station based on mobile piezoelectric microrobots

    NASA Astrophysics Data System (ADS)

    Fatikow, Sergej

    1996-12-01

One of the main problems of present-day research on microsystem technology (MST) is to assemble a whole microsystem from different microcomponents. This paper presents a new concept of an automated micromanipulation desktop station including piezoelectrically driven microrobots placed on a high-precision x-y stage of a light microscope, a CCD camera as a local sensor subsystem, a laser sensor unit as a global sensor subsystem, a parallel computer system with C167 microcontrollers, and a Pentium PC equipped additionally with an optical grabber. The microrobots can perform high-precision manipulations (with an accuracy of up to 10 nm) and nondestructive transport (at a speed of about 3 cm/sec) of very small objects under the microscope. To control the desktop station automatically, an advanced control system that includes a task planning level and a real-time execution level is being developed. The main function of the task planning subsystem is to interpret the implicit action plan and to generate a sequence of explicit operations which are sent to the execution level of the control system. The main functions of the execution control level are object recognition, image processing and feedback position control of the microrobot and the microscope stage.

  1. A high-sensitivity EM-CCD camera for the open port telescope cavity of SOFIA

    NASA Astrophysics Data System (ADS)

    Wiedemann, Manuel; Wolf, Jürgen; McGrotty, Paul; Edwards, Chris; Krabbe, Alfred

    2016-08-01

The Stratospheric Observatory for Infrared Astronomy (SOFIA) has three target acquisition and tracking cameras. All three imagers originally used the same cameras, which did not meet the sensitivity requirements due to low quantum efficiency and high dark current. The Focal Plane Imager (FPI) suffered the most from high dark current, since it operated in the aircraft cabin at room temperature without active cooling. In early 2013 the FPI was upgraded with an iXon3 888 from Andor Technology. Compared to the original cameras, the iXon3 has a factor of five higher QE, thanks to its back-illuminated sensor, and orders of magnitude lower dark current, due to a thermo-electric cooler and "inverted mode operation." This leads to an increase in sensitivity of about five stellar magnitudes. The Wide Field Imager (WFI) and Fine Field Imager (FFI) shall now be upgraded with equally sensitive cameras. However, they are exposed to stratospheric conditions in flight (typical conditions: T ≈ -40 °C, p ≈ 0.1 atm) and there are no off-the-shelf CCD cameras with the performance of an iXon3 suited for these conditions. Therefore, Andor Technology and the Deutsches SOFIA Institut (DSI) are jointly developing and qualifying a camera for these conditions, based on the iXon3 888. The modifications include replacement of electrical components with MIL-SPEC or industrial grade components and various system optimizations, a new data interface that allows image data transmission over 30 m of cable from the camera to the controller, a new power converter in the camera to generate all necessary operating voltages locally, and a new housing that fulfills airworthiness requirements. A prototype of this camera has been built and tested in an environmental test chamber at temperatures down to T = -62 °C and pressure equivalent to 50,000 ft altitude. In this paper, we report on the development of the camera and present results from the environmental testing.

  2. Solid State Research

    DTIC Science & Technology

    1998-05-15

[List-of-figures extract from the report: bioaerosol fluorescence sensor concept; bioaerosol fluorescence sensor detection geometry (signal collection, side view); flat-field illumination of an improved laser-annealed CCD at -90°C with 410-nm wavelength light and strength of the output signal along a vertical line trace; brick-wall pattern revealed by chemical etchant.]

  3. Design and development of a fiber optic TDI CCD-based slot-scan digital mammography system

    NASA Astrophysics Data System (ADS)

    Toker, Emre; Piccaro, Michele F.

    1993-12-01

We previously reported on the development, design, and clinical evaluation of a CCD-based, high performance, filmless imaging system for stereotactic needle biopsy procedures in mammography. The MammoVision system has a limited imaging area of 50 mm X 50 mm, since it is designed specifically for breast biopsy applications. We are currently developing a new filmless imaging system designed to cover the 18 cm X 24 cm imaging area required for screening and diagnostic mammography. The diagnostic mammography system is based on four 1100 X 330 pixel format, full-frame, scientific grade, front illuminated, MPP mode CCDs, with 24 micrometer X 24 micrometer square pixels. Each CCD is coupled to an x-ray intensifying screen via a 1.7:1 fiber optic reducer. The detector assembly (180 mm long and 13.5 mm wide) is scanned across the patient's breast synchronously with the x-ray source, with the CCDs operated in time-delay integration (TDI) mode. The total scan time is 4.0 seconds.

  4. Comparison of lens- and fiber-coupled CCD detectors for X-ray computed tomography

    PubMed Central

    Uesugi, K.; Hoshino, M.; Yagi, N.

    2011-01-01

    X-ray imaging detectors with an identical phosphor and a CCD chip but employing lens- and fiber-coupling between them have been compared. These are designed for X-ray imaging experiments, especially computed tomography, at the medium-length beamline at the SPring-8 synchrotron radiation facility. It was found that the transmittance of light to the CCD is about four times higher in the fiber-coupled detector. The uniformity of response in the lens-coupled detector has a global shading of up to 40%, while pixel-to-pixel variation owing to a chicken-wire pattern was dominant in the fiber-coupled detector. Apart from the higher transmittance, the fiber-coupled detector has a few characteristics that require attention when it is used for computed tomography, which are browning of the fiber, discontinuity in the image, image distortion, and dark spots in the chicken-wire pattern. Thus, it is most suitable for high-speed tomography of samples that tend to deform, for example biological and soft materials. PMID:21335908

  5. CCD Astrometry with Robotic Telescopes

    NASA Astrophysics Data System (ADS)

AlZaben, Faisal; Li, Dewei; Li, Yongyao; Dennis, Aren; Fene, Michael; Boyce, Grady; Boyce, Pat

    2016-01-01

    CCD images were acquired of three binary star systems: WDS06145+1148, WDS06206+1803, and WDS06224+2640. The astrometric solution, position angle, and separation of each system were calculated with MaximDL v6 and Mira Pro x64 software suites. The results were consistent with historical measurements in the Washington Double Star Catalog. Our analysis found some differences in measurements between single-shot color CCD cameras and traditional monochrome CCDs using a filter wheel.

  6. Chromatic Modulator for a High-Resolution CCD or APS

    NASA Technical Reports Server (NTRS)

    Hartley, Frank; Hull, Anthony

    2008-01-01

    A chromatic modulator has been proposed to enable the separate detection of the red, green, and blue (RGB) color components of the same scene by a single charge-coupled device (CCD), active-pixel sensor (APS), or similar electronic image detector. Traditionally, the RGB color-separation problem in an electronic camera has been solved by use of either (1) fixed color filters over three separate image detectors; (2) a filter wheel that repeatedly imposes a red, then a green, then a blue filter over a single image detector; or (3) different fixed color filters over adjacent pixels. The use of separate image detectors necessitates precise registration of the detectors and the use of complicated optics; filter wheels are expensive and add considerably to the bulk of the camera; and fixed pixelated color filters reduce spatial resolution and introduce color-aliasing effects. The proposed chromatic modulator would not exhibit any of these shortcomings. The proposed chromatic modulator would be an electromechanical device fabricated by micromachining. It would include a filter having a spatially periodic pattern of RGB strips at a pitch equal to that of the pixels of the image detector. The filter would be placed in front of the image detector, supported at its periphery by a spring suspension and electrostatic comb drive. The spring suspension would bias the filter toward a middle position in which each filter strip would be registered with a row of pixels of the image detector. Hard stops would limit the excursion of the spring suspension to precisely one pixel row above and one pixel row below the middle position. In operation, the electrostatic comb drive would be actuated to repeatedly snap the filter to the upper extreme, middle, and lower extreme positions. This action would repeatedly place a succession of the differently colored filter strips in front of each pixel of the image detector. To simplify the processing, it would be desirable to encode information on the color of the filter strip over each row (or at least over some representative rows) of pixels at a given instant of time in synchronism with the pixel output at that instant.
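As a sketch of the bookkeeping implied by this proposal (an assumed row/colour mapping, not a specification of the device), the function below reassembles a full-colour image from three exposures taken with the striped filter snapped to its lower, middle, and upper positions: in the exposure with offset o, row r sits behind the colour strip (r + o) mod 3.

```python
import numpy as np

def assemble_rgb(frames, offsets=(-1, 0, 1)):
    """Combine three monochrome exposures into an RGB image.

    frames  : three 2-D arrays captured with the filter at its lower, middle,
              and upper stops (hypothetical ordering).
    offsets : filter shift, in pixel rows, for each exposure.
    Strip order 0 = red, 1 = green, 2 = blue is assumed.
    """
    h, w = frames[0].shape
    rgb = np.zeros((h, w, 3), dtype=float)
    for k, frame in enumerate(frames):
        for r in range(h):
            channel = (r + offsets[k]) % 3   # colour strip covering row r in exposure k
            rgb[r, :, channel] = frame[r, :]
    return rgb

# Toy usage with a flat grey scene: all three channels should come out equal.
scene = np.full((6, 8), 100.0)
frames = [scene.copy() for _ in range(3)]   # one exposure per filter position
print(assemble_rgb(frames)[0, 0])           # -> [100. 100. 100.]
```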

  7. Fiber optic, Fabry-Perot high temperature sensor

    NASA Technical Reports Server (NTRS)

    James, K.; Quick, B.

    1984-01-01

A digital, fiber optic temperature sensor using a variable Fabry-Perot cavity as the sensor element was analyzed, designed, fabricated, and tested. The fiber-transmitted cavity reflection spectrum is dispersed and then converted from an optical signal to electrical information by a charge-coupled device (CCD). A microprocessor-based color demodulation system converts the wavelength information to temperature. This general sensor concept not only utilizes an all-optical means of parameter sensing and transmission, but also exploits microprocessor technology for automated control, calibration, and enhanced performance. The complete temperature sensor system was evaluated in the laboratory. Results show that the Fabry-Perot temperature sensor has good resolution (0.5% of full scale), high accuracy, and potential for high temperature (1000 C) applications.

  8. Improving depth estimation from a plenoptic camera by patterned illumination

    NASA Astrophysics Data System (ADS)

    Marshall, Richard J.; Meah, Chris J.; Turola, Massimo; Claridge, Ela; Robinson, Alex; Bongs, Kai; Gruppetta, Steve; Styles, Iain B.

    2015-05-01

Plenoptic (light-field) imaging is a technique that allows a simple CCD-based imaging device to acquire both spatially and angularly resolved information about the "light-field" from a scene. It requires a microlens array to be placed between the objective lens and the sensor of the imaging device [1], and the images under each microlens (which typically span many pixels) can be computationally post-processed to shift perspective, digitally refocus, extend the depth of field, manipulate the aperture synthetically and generate a depth map from a single image. Some of these capabilities are rigid functions that do not depend upon the scene and work by manipulating and combining a well-defined set of pixels in the raw image. However, depth mapping requires specific features in the scene to be identified and registered between consecutive microimages. This process requires that the image has sufficient features for the registration, and in the absence of such features the algorithms become less reliable and incorrect depths are generated. The aim of this study is to investigate the generation of depth maps from light-field images of scenes with insufficient features for accurate registration, using projected patterns to impose a texture on the scene that provides sufficient landmarks for the registration methods.

  9. Design and fabrication of an angle-scanning based platform for the construction of surface plasmon resonance biosensor

    NASA Astrophysics Data System (ADS)

    Hu, Jiandong; Cao, Baiqiong; Wang, Shun; Li, Jianwei; Wei, Wensong; Zhao, Yuanyuan; Hu, Xinran; Zhu, Juanhua; Jiang, Min; Sun, Xiaohui; Chen, Ruipeng; Ma, Liuzheng

    2016-03-01

A sensing system for an angle-scanning optical surface plasmon resonance (SPR) biosensor has been designed in which a laser line generator with an embedded P polarizer is used as the excitation source for producing the surface plasmon wave. In this system, the beam emitted from the laser line generator is swept in angle using a variable-speed direct current (DC) motor. The light beam reflected from the prism, coated with a 50 nm Au film, is then captured by an area CCD array controlled by a personal computer (PC) via a universal serial bus (USB) interface. The photoelectric signals from the high-speed digital camera (an area CCD array) are converted by a 16-bit A/D converter before being transferred to the PC. One advantage of this SPR biosensing platform is that it allows label-free, real-time biomolecular analysis without moving the area CCD array to follow the laser line generator. It could also provide a low-cost surface plasmon resonance platform with an improved detection range for the measurement of bioanalytes. The SPR curve displayed on the PC screen is formed from the effective data in the image on the area CCD array, and the sensing response of the platform to bulk refractive index was calibrated using various concentrations of ethanol solution. Ethanol concentrations with volume fractions of 5%, 10%, 15%, 20%, and 25% were tested to validate the performance of the angle-scanning optical SPR biosensing platform. As a result, the SPR sensor was able to detect changes in the refractive index of the ethanol solution with relatively high linearity (correlation coefficient of 0.9842). This enhanced detection range is obtained from the positional relationship between the laser line generator and the right-angle prism, allowing direct quantification of samples over a wide range of concentrations.
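The reported linearity (correlation coefficient 0.9842) comes from calibrating the sensor response against bulk refractive index. A minimal sketch of such a calibration is shown below, with invented resonance positions and refractive-index values rather than the paper's data.

```python
import numpy as np
from scipy import stats

# Hypothetical calibration: resonance position on the CCD array vs. the refractive
# index of the ethanol solutions (volume fractions 5-25%).
refractive_index = np.array([1.3350, 1.3385, 1.3420, 1.3455, 1.3490])
resonance_pixel  = np.array([412.0, 431.5, 450.8, 470.6, 489.9])

fit = stats.linregress(refractive_index, resonance_pixel)
print(f"sensitivity: {fit.slope:.1f} pixels per RIU, r = {fit.rvalue:.4f}")

def index_from_pixel(p):
    """Invert the linear calibration to report the sample refractive index."""
    return (p - fit.intercept) / fit.slope

print(f"pixel 460 -> n = {index_from_pixel(460.0):.4f}")
```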

  10. Double Star Measurements at the Southern Sky with a 50 cm Reflector and a Fast CCD Camera in 2014

    NASA Astrophysics Data System (ADS)

    Anton, Rainer

    2015-04-01

    A Ritchey-Chrétien reflector with 50 cm aperture was used in Namibia for recordings of double stars with a fast CCD camera and a notebook computer. From superposition of "lucky images", measurements of 91 pairings in 79 double and multiple systems were obtained and compared with literature data. Occasional deviations are discussed. Some images of noteworthy systems are also presented.

  11. Inexpensive Neutron Imaging Cameras Using CCDs for Astronomy

    NASA Astrophysics Data System (ADS)

    Hewat, A. W.

We have developed inexpensive neutron imaging cameras using CCDs originally designed for amateur astronomical observation. The low-light, high resolution requirements of such CCDs are similar to those for neutron imaging, except that noise as well as cost is reduced by using slower read-out electronics. For example, we use the same 2048x2048 pixel Kodak KAI-4022 CCD as used in the high performance PCO-2000 CCD camera, but our electronics requires ∼5 sec for full-frame read-out, ten times slower than the PCO-2000. Since neutron exposures also require several seconds, this is not seen as a serious disadvantage for many applications. If higher frame rates are needed, the CCD unit on our camera can be easily swapped for a faster readout detector with similar chip size and resolution, such as the PCO-2000 or the sCMOS PCO.edge 4.2.

  12. Aerosol mobility imaging for rapid size distribution measurements

    DOEpatents

    Wang, Jian; Hering, Susanne Vera; Spielman, Steven Russel; Kuang, Chongai

    2016-07-19

A parallel plate dimensional electrical mobility separator and laminar flow water condensation provide rapid, mobility-based particle sizing at concentrations typical of the remote atmosphere. Particles are separated spatially within the electrical mobility separator, enlarged through water condensation, and imaged onto a CCD array. The mobility separation distributes particles in accordance with their size. The condensation enlarges size-separated particles by water condensation while they are still within the gap of the mobility drift tube. Once enlarged, the particles are illuminated by a laser. At a pre-selected frequency, typically 10 Hz, the positions of all of the individual particles illuminated by the laser are captured by a CCD camera. This instantly records the particle number concentration at each position. Because the position is directly related to the particle size (or mobility), the particle size spectrum is derived from the images recorded by the CCD.
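A simplified sketch of the last step as described in this record (not the actual instrument firmware): particle positions detected in each CCD frame are histogrammed across the separation gap, and the bin centres are mapped to particle size through an assumed position-to-diameter calibration.

```python
import numpy as np

def size_spectrum(x_positions_mm, edges_mm, mm_to_diameter_nm):
    """Histogram particle positions across the gap and convert bin centres to
    diameters using a calibration function supplied by the instrument (assumed)."""
    counts, edges = np.histogram(x_positions_mm, bins=edges_mm)
    centres_mm = 0.5 * (edges[:-1] + edges[1:])
    diameters_nm = mm_to_diameter_nm(centres_mm)
    return diameters_nm, counts

# Hypothetical detections from one frame and an invented position-to-size mapping.
rng = np.random.default_rng(5)
positions = rng.normal(12.0, 3.0, 2000)          # detected particle positions (mm)
edges = np.linspace(0.0, 25.0, 26)
calib = lambda x_mm: 10.0 * np.exp(x_mm / 8.0)   # assumed diameter (nm) vs. position

d, n = size_spectrum(positions, edges, calib)
print(np.column_stack([d.round(1), n])[:5])      # first few (diameter, count) rows
```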

  13. NGS2: a focal plane array upgrade for the GeMS multiple tip-tilt wavefront sensor

    NASA Astrophysics Data System (ADS)

    Rigaut, François; Price, Ian; d'Orgeville, Céline; Bennet, Francis; Herrald, Nick; Paulin, Nicolas; Uhlendorf, Kristina; Garrel, Vincent; Sivo, Gaetano; Montes, Vanessa; Trujillo, Chad

    2016-07-01

    NGS2 is an upgrade for the multi-natural guide star tip-tilt & plate scale wavefront sensor for GeMS (Gemini Multi-Conjugate Adaptive Optics system). It uses a single Nüvü HNü-512 Electron-Multiplied CCD array that spans the entire GeMS wavefront sensor focal plane. Multiple small regions-of-interest are used to enable frame rates up to 800Hz. This set up will improve the optical throughput with respect to the current wavefront sensor, as well as streamline acquisition and allow for distortion compensation.

  14. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    NASA Astrophysics Data System (ADS)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. This described method provides much better data coverage and accuracy in feature areas with sharp features or details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

  15. Effects of the source, surface, and sensor couplings and colorimetric of laser speckle pattern on the performance of optical imaging system

    NASA Astrophysics Data System (ADS)

    Darwiesh, M.; El-Sherif, Ashraf F.; El-Ghandour, Hatem; Aly, Hussein A.; Mokhtar, A. M.

    2011-03-01

Optical imaging systems are widely used in different applications, including tracking for portable scanners; input pointing devices for laptop computers, cell phones, and cameras; fingerprint-identification scanners; optical navigation for target tracking; and optical computer mice. We present experimental work to measure and analyze the laser speckle pattern (LSP) produced by different optical sources (various color LEDs, a 3 mW diode laser, and a 10 mW He-Ne laser) on different operating surfaces (Gabor hologram diffusers), and how these affect the performance of optical imaging systems in terms of speckle size and signal-to-noise ratio (the signal being the patches of the speckle pattern that carry information, and the noise the remaining part of the selected image). Theoretical and experimental studies of colorimetry for the optical sources used are also presented, relating the signal-to-noise ratio to the different diffusers for each light source. Color correction is applied to the color images captured by the optical imaging system to produce realistic images by selecting a suitable gray scale that preserves most of the informative data; this is done by calculating accurate red-green-blue (RGB) color components using the measured source spectra and the color matching functions of the International Telecommunication Union (ITU-R709) for CRT phosphors (Trinitron, SONY model). The source-surface coupling is discussed, and it is concluded that the performance of the optical imaging system for a given source ranges from worst to best depending on the operating surface. The sensor-surface coupling is studied for the He-Ne laser case: the speckle size ranges from 4.59 to 4.62 μm, i.e., approximately the same for all of the diffusers produced (consistent with the fact that the speckle size is independent of the illuminating surface), whereas the calculated signal-to-noise ratio takes values ranging from 0.71 to 0.92 for the different diffusers. This means that the surface texture affects the performance of the optical sensor, since all images were captured under the same conditions: the same source (He-Ne laser), the same experimental set-up distances, and the same sensor (CCD camera).

  16. Combining Charge Couple Devices and Rate Sensors for the Feedforward Control System of a Charge Coupled Device Tracking Loop.

    PubMed

    Tang, Tao; Tian, Jing; Zhong, Daijun; Fu, Chengyu

    2016-06-25

A rate feedforward control-based sensor fusion method is proposed to improve the closed-loop performance of a charge-coupled device (CCD) tracking loop. The target trajectory is recovered by combining line of sight (LOS) errors from the CCD and the angular rate from a fiber-optic gyroscope (FOG). A Kalman filter based on the Singer acceleration model uses the reconstructed target trajectory to estimate the target velocity. Different from classical feedforward control, additional feedback loops are inevitably added to the original control loops due to the fact that some closed-loop information is used. The transfer function of the Kalman filter in the frequency domain is built for analyzing the closed-loop stability. The bandwidth of the Kalman filter is the major factor affecting the control stability and closed-loop performance. Both simulations and experiments are provided to demonstrate the benefits of the proposed algorithm.
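The sketch below illustrates the signal flow described in the abstract — reconstruct the target angle from the integrated FOG rate plus the CCD LOS error, then estimate target velocity with a Kalman filter for the feed-forward branch — using a simple constant-velocity state model in place of the Singer acceleration model, with invented sample rates and noise levels.

```python
import numpy as np

dt = 1.0 / 100.0   # CCD frame period (assumed)

def reconstruct_target(los_error, fog_rate, theta0=0.0):
    """Target angle = gimbal angle (integrated FOG rate) + CCD LOS error."""
    gimbal = theta0 + np.cumsum(fog_rate) * dt
    return gimbal + los_error

# Constant-velocity Kalman filter (a simplified stand-in for the Singer model).
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-6, 1e-4])   # process noise (assumed)
R = np.array([[1e-4]])      # measurement noise (assumed)

def estimate_velocity(measurements):
    x, P = np.zeros(2), np.eye(2)
    velocities = []
    for z in measurements:
        x = F @ x                         # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                     # update with the reconstructed target angle
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        velocities.append(x[1])           # estimated target angular velocity
    return np.array(velocities)

# Toy scenario: sinusoidal target motion, gimbal held still, noisy CCD LOS error.
t = np.arange(0.0, 2.0, dt)
true_target = 0.05 * np.sin(2 * np.pi * 0.5 * t)                   # rad
los_error = true_target + np.random.default_rng(6).normal(0, 1e-3, t.size)
fog_rate = np.zeros(t.size)
vel = estimate_velocity(reconstruct_target(los_error, fog_rate))
print("final velocity estimate (rad/s):", vel[-1])
```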

  17. ARGOS wavefront sensing: from detection to correction

    NASA Astrophysics Data System (ADS)

    Orban de Xivry, Gilles; Bonaglia, M.; Borelli, J.; Busoni, L.; Connot, C.; Esposito, S.; Gaessler, W.; Kulas, M.; Mazzoni, T.; Puglisi, A.; Rabien, S.; Storm, J.; Ziegleder, J.

    2014-08-01

Argos is the ground-layer adaptive optics system for the Large Binocular Telescope. In order to perform its wide-field correction, Argos uses three laser guide stars which sample the atmospheric turbulence. To perform the correction, Argos has at its disposal three different wavefront sensing measurements: its three laser guide stars, an NGS tip-tilt sensor, and a third wavefront sensor. We present the wavefront sensing architecture and its individual components, in particular: the finalized Argos pnCCD camera, which detects the 3 laser guide stars at 1 kHz with high quantum efficiency and 4 e- noise; the Argos tip-tilt sensor, based on quad-cell avalanche photodiodes; and the Argos wavefront computer. Being in the middle of commissioning, we present the first wavefront sensing configurations and operations performed at LBT, and discuss further improvements in the measurement of the slopes of the 3 laser guide stars as detected by the pnCCD.

  18. Preparation of New Scintillation Imaging Material Composed of Scintillator-Silica Fine Powders and its Imaging of Tritium.

    PubMed

    Miyoshi, Hirokazu; Hiroura, Mitsunori; Tsujimoto, Kazunori; Irikura, Namiko; Otani, Tamaki; Shinohara, Yasuo

    2017-05-01

A new scintillation imaging material [scintillator-silica fine powder (FP)] was prepared using silica FPs and scintillator-encapsulating silica nanoparticles (NPs) (scintillator-silica NPs). The wt% values of scintillator-silica NPs on the scintillator-silica FPs were 38, 43, 36 and 44%. Scintillation images of 3H, 63Ni, 35S, 33P, 204Tl, 89Sr and 32P dropped on the scintillator-silica FPs were obtained at about 37 kBq per 0.1-10 µl with a charge-coupled device (CCD) imager for a 5 min exposure. In particular, high-intensity CCD images of 35S were selectively obtained using the 2.25, 4.77 and 10 µm silica FPs with scintillator-silica NPs owing to the residual S of dimethyl sulfoxide in the preparation. Scintillation images of 3H at 1670 ± 9 Bq/0.5 µl and 347 ± 6 Bq/0.5 µl dropped in a 2 mm hole on the scintillator-silica FPs (6.78 and 10 µm) were also obtained using the CCD imager for a 2 h exposure.

  19. Pattern-Recognition Processor Using Holographic Photopolymer

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Cammack, Kevin

    2006-01-01

A proposed joint-transform optical correlator (JTOC) would be capable of operating as a real-time pattern-recognition processor. The key correlation-filter reading/writing medium of this JTOC would be an updateable holographic photopolymer. The high-resolution, high-speed characteristics of this photopolymer would enable pattern-recognition processing to occur at a speed three orders of magnitude greater than that of state-of-the-art digital pattern-recognition processors. There are many potential applications in biometric personal identification (e.g., using images of fingerprints and faces) and nondestructive industrial inspection. In order to appreciate the advantages of the proposed JTOC, it is necessary to understand the principle of operation of a conventional JTOC. In a conventional JTOC (shown in the upper part of the figure), a collimated laser beam passes through two side-by-side spatial light modulators (SLMs). One SLM displays a real-time input image to be recognized. The other SLM displays a reference image from a digital memory. A Fourier-transform lens is placed at its focal distance from the SLM plane, and a charge-coupled device (CCD) image detector is placed at the back focal plane of the lens for use as a square-law recorder. Processing takes place in two stages. In the first stage, the CCD records the interference pattern between the Fourier transforms of the input and reference images, and the pattern is then digitized and saved in a buffer memory. In the second stage, the reference SLM is turned off and the interference pattern is fed back to the input SLM. The interference pattern thus becomes Fourier-transformed, yielding at the CCD an image representing the joint-transform correlation between the input and reference images. This image contains a sharp correlation peak when the input and reference images are matched. The drawbacks of a conventional JTOC are the following: The CCD has low spatial resolution and is not an ideal square-law detector for the purpose of holographic recording of interference fringes. A typical state-of-the-art CCD has a pixel-pitch-limited resolution of about 100 lines/mm. In contrast, the holographic photopolymer to be used in the proposed JTOC offers a resolution > 2,000 lines/mm. In addition to being disadvantageous in itself, the low resolution of the CCD causes overlap of a DC term and the desired correlation term in the output image. This overlap severely limits the correlation signal-to-noise ratio. The two-stage nature of the process limits the achievable throughput rate. A further limit is imposed by the low frame rate (typical video rates) of low- and medium-cost commercial CCDs.

  20. CAMEO-SIM: a physics-based broadband scene simulation tool for assessment of camouflage, concealment, and deception methodologies

    NASA Astrophysics Data System (ADS)

    Moorhead, Ian R.; Gilmore, Marilyn A.; Houlbrook, Alexander W.; Oxford, David E.; Filbee, David R.; Stroud, Colin A.; Hutchings, G.; Kirk, Albert

    2001-09-01

    Assessment of camouflage, concealment, and deception (CCD) methodologies is not a trivial problem; conventionally the only method has been to carry out field trials, which are both expensive and subject to the vagaries of the weather. In recent years computing power has increased such that there are now many research programs using synthetic environments for CCD assessments. Such an approach is attractive; the user has complete control over the environmental parameters and many more scenarios can be investigated. The UK Ministry of Defence is currently developing a synthetic scene generation tool for assessing the effectiveness of air vehicle camouflage schemes. The software is sufficiently flexible to allow it to be used in a broader range of applications, including full CCD assessment. The synthetic scene simulation system (CAMEO-SIM) has been developed, as an extensible system, to provide imagery within the 0.4 to 14 micrometer spectral band with as high a physical fidelity as possible. It consists of a scene design tool, an image generator that incorporates both radiosity and ray-tracing processes, and an experimental trials tool. The scene design tool allows the user to develop a 3D representation of the scenario of interest from a fixed viewpoint. Target(s) of interest can be placed anywhere within this 3D representation and may be either static or moving. Different illumination conditions and effects of the atmosphere can be modeled together with directional reflectance effects. The user has complete control over the level of fidelity of the final image. The output from the rendering tool is a sequence of radiance maps, which may be used by sensor models or for experimental trials in which observers carry out target acquisition tasks. The software also maintains an audit trail of all data selected to generate a particular image, both in terms of the material properties used and the rendering options chosen. A range of verification tests has shown that the software computes the correct values for analytically tractable scenarios. Validation tests using simple scenes have also been undertaken. More complex validation tests using observer trials are planned. The current version of CAMEO-SIM and how its images are used for camouflage assessment are described, and the verification and validation tests undertaken are discussed. In addition, example images are used to demonstrate the significance of different effects, such as spectral rendering and shadows. Planned developments of CAMEO-SIM are also outlined.

  1. MMW/THz imaging using upconversion to visible, based on glow discharge detector array and CCD camera

    NASA Astrophysics Data System (ADS)

    Aharon, Avihai; Rozban, Daniel; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, Natan S.

    2017-10-01

    An inexpensive upconverting MMW/THz imaging method is suggested here. The method is based on a glow discharge detector (GDD) and a silicon photodiode or a simple CCD/CMOS camera. The GDD was previously found to be an excellent room-temperature MMW radiation detector when its electrical current is measured. The GDD is very inexpensive and is advantageous for its wide dynamic range, broad spectral range, room-temperature operation, immunity to high-power radiation, and more. An upconversion method is demonstrated here that is based on measuring the visible light emitted by the GDD rather than its electrical current. The experimental setup simulates a system composed of a GDD array, an MMW source, and a basic CCD/CMOS camera. The visible light emitted by the GDD array is directed to the CCD/CMOS camera, and the change in the GDD light is measured using image processing algorithms. The combination of a CMOS camera and GDD focal plane arrays can yield a faster, more sensitive, and very inexpensive MMW/THz camera, eliminating the complexity of the electronic circuits and the internal electronic noise of the GDD. Furthermore, three-dimensional imaging systems based on scanning have prohibited real-time operation; this is easily and economically solved with a GDD array, which enables information on distance and magnitude to be acquired from all the GDD pixels simultaneously. The 3D image can be obtained using methods such as frequency-modulated continuous wave (FMCW) direct chirp modulation or time-of-flight (TOF) measurement.
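    As a concrete illustration of the readout idea, the sketch below extracts the brightness change of each GDD glow region from two camera frames, one without and one with MMW illumination. It is a minimal sketch only: the frames, the list of GDD centres, and the window size are assumed inputs, not values from the paper.

      import numpy as np

      def gdd_intensity_change(frame_off, frame_on, centres, half=5):
          # Mean brightness change of a small window around each GDD pixel
          # between a frame without MMW illumination and a frame with it.
          changes = []
          for r, c in centres:
              roi_off = frame_off[r - half:r + half + 1, c - half:c + half + 1]
              roi_on = frame_on[r - half:r + half + 1, c - half:c + half + 1]
              changes.append(float(roi_on.mean()) - float(roi_off.mean()))
          return np.array(changes)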

  2. Demonstration of plant fluorescence by imaging technique and Intelligent FluoroSensor

    NASA Astrophysics Data System (ADS)

    Lenk, Sándor; Gádoros, Patrik; Kocsányi, László; Barócsi, Attila

    2015-10-01

    Photosynthesis is a process that converts carbon dioxide into organic compounds, especially sugars, using the energy of sunlight. The absorbed light energy is used mainly for photosynthesis, initiated at the reaction centers of chlorophyll-protein complexes, but part of it is lost as heat and chlorophyll fluorescence. Therefore, the measurement of the latter can be used to estimate photosynthetic activity. In the basic method, intact leaves are illuminated with strong light after a dark adaptation of at least 20 minutes, producing a transient change in the fluorescence emission of the fluorophore chlorophyll-a known as the `Kautsky effect'; this is demonstrated by an imaging setup. The experimental kit includes a high-radiance blue LED and a CCD camera (or a human eye) equipped with a red transmittance filter to detect the changing fluorescence radiation. However, for the measurement of several fluorescence parameters describing the plant physiological processes in detail, a variety of excitation light sources and an adequate detection method are needed. Several fluorescence induction protocols (e.g., traditional Kautsky, pulse amplitude modulated and excitation kinetic) are realized in the Intelligent FluoroSensor instrument. Using it, students are able to measure different plant fluorescence induction curves, quantitatively determine characteristic parameters and qualitatively interpret the measured signals.

  3. Denoising Algorithm for CFA Image Sensors Considering Inter-Channel Correlation.

    PubMed

    Lee, Min Seok; Park, Sang Wook; Kang, Moon Gi

    2017-05-28

    In this paper, a spatio-spectral-temporal filter considering inter-channel correlation is proposed for the denoising of a color filter array (CFA) sequence acquired by CCD/CMOS image sensors. Owing to the alternating under-sampled grid of the CFA pattern, the inter-channel correlation must be considered in the direct denoising process. The proposed filter is applied in the spatial, spectral, and temporal domains, considering the spatio-temporal-spectral correlation. First, nonlocal means (NLM) spatial filtering with patch-based difference (PBD) refinement is performed, considering both the intra-channel and inter-channel correlation, to overcome the spatial resolution degradation caused by the alternating under-sampled pattern. Second, a motion-compensated temporal filter that employs inter-channel correlated motion estimation and compensation is proposed to remove noise in the temporal domain. A motion-adaptive detection value then controls the ratio of the spatial and temporal filter outputs, so that the denoised CFA sequence is obtained without motion artifacts. Experimental results for both simulated and real CFA sequences are presented with visual and numerical comparisons to several state-of-the-art denoising methods combined with a demosaicing method. The results confirm that the proposed framework outperforms the other techniques in terms of objective criteria and subjective visual perception for CFA sequences.
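    The final motion-adaptive blending step lends itself to a compact illustration. The sketch below assumes the spatially denoised frame, the motion-compensated temporally denoised frame, and a per-pixel motion-detection map in [0, 1] are already available; the variable names are illustrative, and the NLM and motion-estimation stages themselves are not reproduced.

      import numpy as np

      def blend_spatial_temporal(spatial_denoised, temporal_denoised, motion_map):
          # Where the scene is static (motion_map near 0) trust the temporal filter;
          # where motion is detected (motion_map near 1) fall back to the spatial
          # filter, which avoids ghosting and other motion artifacts.
          alpha = np.clip(motion_map, 0.0, 1.0)
          return alpha * spatial_denoised + (1.0 - alpha) * temporal_denoised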

  4. Characterization of a 512x512-pixel 8-output full-frame CCD for high-speed imaging

    NASA Astrophysics Data System (ADS)

    Graeve, Thorsten; Dereniak, Eustace L.

    1993-01-01

    The characterization of a 512 by 512 pixel, eight-output full-frame CCD manufactured by English Electric Valve under part number CCD13 is discussed. This device is a high-resolution silicon-based array designed for visible imaging applications at readout periods as low as two milliseconds. The characterization of the device includes mean-variance analysis to determine read noise and dynamic range, as well as charge transfer efficiency, MTF, and quantum efficiency measurements. Dark current and non-uniformity issues on a pixel-to-pixel basis and between individual outputs are also examined. The characterization of the device is restricted by hardware limitations to a one-MHz pixel rate, corresponding to a 40 ms readout time. However, subsections of the device have been operated at up to an equivalent 100 frames per second. To maximize the frame rate, the CCD is illuminated by a synchronized strobe flash in between frame readouts. The effects of the strobe illumination on the imagery obtained from the device are discussed.
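    Mean-variance (photon transfer) analysis of the kind mentioned above can be sketched in a few lines. The following assumes pairs of flat-field frames taken at a series of exposure levels; the function and variable names are illustrative, not taken from the paper.

      import numpy as np

      def photon_transfer(frame_pairs):
          # frame_pairs: list of (frame_a, frame_b) flat fields at the same exposure.
          # Differencing each pair removes fixed-pattern noise; the remaining temporal
          # variance grows linearly with signal, with slope 1/gain and an offset set
          # by the read noise.
          means, variances = [], []
          for a, b in frame_pairs:
              a = a.astype(np.float64)
              b = b.astype(np.float64)
              means.append(0.5 * (a.mean() + b.mean()))
              variances.append(np.var(a - b) / 2.0)  # var of the difference is twice the per-frame variance
          slope, intercept = np.polyfit(means, variances, 1)
          gain_e_per_adu = 1.0 / slope               # conversion gain, electrons per ADU
          read_noise_adu = np.sqrt(max(intercept, 0.0))
          return gain_e_per_adu, read_noise_adu

    Dynamic range then follows as the saturation (full-well) signal divided by the read noise.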

  5. The development of large-aperture test system of infrared camera and visible CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Dual-band imaging systems combining an infrared camera and a visible CCD camera are widely used in many types of equipment and applications. If such a system is tested with a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed. The large-aperture test system for infrared cameras and visible CCD cameras described here uses a common large-aperture reflective collimator, target wheel, frame grabber and computer, which reduces the cost and the time spent on installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator focal position with changing environmental temperature, which also improves the image quality of the wide-field collimator and the test accuracy. Its performance matches that of comparable foreign systems at a much lower cost, and it is expected to find a good market.

  6. Performance of a day time star sensor for a stabilized balloon platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rossi, E.; DiCocco, G.; Donati, A.

    1989-02-01

    A modified version of a CCD star tracker originally designed for use on the ROSAT X-ray astronomy satellite has been built for use on a three-axis stabilized balloon platform. The first flight of this star sensor was planned for May 1988 from the NASA balloon base at Palestine, Texas. The expected performance of this instrument is described along with the preflight results.

  7. Image stacking approach to increase sensitivity of fluorescence detection using a low cost complementary metal-oxide-semiconductor (CMOS) webcam.

    PubMed

    Balsam, Joshua; Bruck, Hugh Alan; Kostov, Yordan; Rasooly, Avraham

    2012-01-01

    Optical technologies are important for biological analysis. Current biomedical optical analyses rely on high-cost, high-sensitivity optical detectors such as photomultipliers, avalanche photodiodes or cooled CCD cameras. In contrast, webcams, mobile phones and other popular consumer electronics use lower-sensitivity, lower-cost optical components such as photodiodes or CMOS sensors. In order for consumer electronics devices, such as webcams, to be useful for biomedical analysis, they must have increased sensitivity. We combined two strategies to increase the sensitivity of a CMOS-based fluorescence detector. We captured hundreds of low-sensitivity images using a webcam in video mode, instead of the single image typically used with cooled CCD devices. We then used a computational approach consisting of an image stacking algorithm that removes noise by combining all of the images into a single image. While video mode is widely used for dynamic scene imaging (e.g., movies or time-lapse photography), it is not normally used to capture a single static image; stacking the video frames removes noise and increases sensitivity by more than thirtyfold. The portable, battery-operated webcam-based fluorometer system developed here consists of five modules: (1) a low-cost CMOS webcam to monitor light emission, (2) a plate to perform assays, (3) filters and a multi-wavelength LED illuminator for fluorophore excitation, (4) a portable computer to acquire and analyze images, and (5) image stacking software for image enhancement. The samples consisted of various concentrations of fluorescein, ranging from 30 μM to 1000 μM, in a 36-well miniature plate. In the single-frame mode, the fluorometer's limit of detection (LOD) for fluorescein is ∼1000 μM, which is relatively insensitive. However, when used in video mode combined with image stacking enhancement, the LOD is dramatically reduced to 30 μM, a sensitivity similar to that of state-of-the-art ELISA plate photomultiplier-based readers. Numerous medical diagnostics assays rely on optical and fluorescence readers. Our novel combination of detection technologies, which is new to biodetection, may enable the development of new low-cost optical detectors based on an inexpensive webcam (<$10). It has the potential to form the basis for high-sensitivity, low-cost medical diagnostics in resource-poor settings.
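    The stacking step itself is simple enough to show directly. The following is a minimal sketch assuming the captured video is already available as a list of co-registered grayscale frames (2D numpy arrays); frame grabbing from the webcam and any dark-frame correction are omitted.

      import numpy as np

      def stack_frames(frames):
          # Averaging N frames leaves the static fluorescence signal unchanged while
          # uncorrelated sensor noise falls roughly as 1/sqrt(N), which is what turns
          # a noisy video clip into one usable low-light image.
          stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
          return stack.mean(axis=0)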

  8. Image stacking approach to increase sensitivity of fluorescence detection using a low cost complementary metal-oxide-semiconductor (CMOS) webcam

    PubMed Central

    Balsam, Joshua; Bruck, Hugh Alan; Kostov, Yordan; Rasooly, Avraham

    2013-01-01

    Optical technologies are important for biological analysis. Current biomedical optical analyses rely on high-cost, high-sensitivity optical detectors such as photomultipliers, avalanche photodiodes or cooled CCD cameras. In contrast, webcams, mobile phones and other popular consumer electronics use lower-sensitivity, lower-cost optical components such as photodiodes or CMOS sensors. In order for consumer electronics devices, such as webcams, to be useful for biomedical analysis, they must have increased sensitivity. We combined two strategies to increase the sensitivity of a CMOS-based fluorescence detector. We captured hundreds of low-sensitivity images using a webcam in video mode, instead of the single image typically used with cooled CCD devices. We then used a computational approach consisting of an image stacking algorithm that removes noise by combining all of the images into a single image. While video mode is widely used for dynamic scene imaging (e.g., movies or time-lapse photography), it is not normally used to capture a single static image; stacking the video frames removes noise and increases sensitivity by more than thirtyfold. The portable, battery-operated webcam-based fluorometer system developed here consists of five modules: (1) a low-cost CMOS webcam to monitor light emission, (2) a plate to perform assays, (3) filters and a multi-wavelength LED illuminator for fluorophore excitation, (4) a portable computer to acquire and analyze images, and (5) image stacking software for image enhancement. The samples consisted of various concentrations of fluorescein, ranging from 30 μM to 1000 μM, in a 36-well miniature plate. In the single-frame mode, the fluorometer's limit of detection (LOD) for fluorescein is ∼1000 μM, which is relatively insensitive. However, when used in video mode combined with image stacking enhancement, the LOD is dramatically reduced to 30 μM, a sensitivity similar to that of state-of-the-art ELISA plate photomultiplier-based readers. Numerous medical diagnostics assays rely on optical and fluorescence readers. Our novel combination of detection technologies, which is new to biodetection, may enable the development of new low-cost optical detectors based on an inexpensive webcam (<$10). It has the potential to form the basis for high-sensitivity, low-cost medical diagnostics in resource-poor settings. PMID:23990697

  9. Io's Sodium Cloud On-Chip Format (Clear and Green-Yellow Filters Superimposed)

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This image of Jupiter's moon Io and its surrounding sky is shown in false color. The solid state imaging (CCD) system on NASA's Galileo spacecraft originally took two images of this scene, one through a clear filter and one through a green-yellow filter. [Versions of these images have been released over the past 3 days.] This picture was created by: (i) adding green color to the image taken through the green-yellow filter, and red color to the image taken through the clear filter; (ii) superimposing the two resulting images. Thus features in this picture which are purely green (or purely red) originally appeared only in the green-yellow (or clear) filter image of this scene. Features which are yellowish appeared in both filters. North is at the top, and east is to the right.

    This image reveals several new things about this scene. For example:

    (1) The reddish emission south of Io came dominantly through the clear filter. It therefore probably represents scattered light from Io's lit crescent and Prometheus' plume, rather than emission from Io's Sodium Cloud (which came through both filters).

    (2) The roundish red spot in Io's southern hemisphere contains a small yellow spot. This means that some thermal emission from the volcano Pele was detected by the green-yellow filter (as well as by the clear filter).

    (3) The sky contains several concentrated yellowish spots which were thus seen at the same location on the sky through both filters (one such spot appears in the picture's northeast corner). These spots are almost certainly stars. By contrast, the eastern half of this image contains a number of green spots whose emission was thus detected by the green-yellow filter only. Since any star visible through the green-yellow filter would also be visible through the clear filter, these green spots are probably artifacts (e.g., cosmic ray hits on the CCD sensor).

    The Jet Propulsion Laboratory, Pasadena, CA, manages the mission for NASA's Office of Space Science, Washington, DC.

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov.

  10. French Meteor Network for High Precision Orbits of Meteoroids

    NASA Technical Reports Server (NTRS)

    Atreya, P.; Vaubaillon, J.; Colas, F.; Bouley, S.; Gaillard, B.; Sauli, I.; Kwon, M. K.

    2011-01-01

    There is a lack of precise meteoroid orbits from video observations, as most meteor stations use off-the-shelf CCD cameras. Few meteoroid orbits with precise semi-major axes are available, and those were obtained with the film photographic method. Precise orbits are necessary to compute the dust flux in the Earth's vicinity, and to estimate the ejection time of the meteoroids accurately by comparing them with theoretical evolution models. We investigate the use of large CCD sensors to observe multi-station meteors and to compute precise orbits for these meteoroids. The spatial and temporal resolution needed to reach an accuracy similar to that of photographic plates is discussed. Various problems arising from the use of large CCDs, such as increasing the spatial and temporal resolution at the same time and the computational cost of finding the meteor position, are illustrated.

  11. Calibration of a shock wave position sensor using artificial neural networks

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Weiland, Kenneth E.

    1993-01-01

    This report discusses the calibration of a shock wave position sensor. The position sensor works by using artificial neural networks to map cropped CCD frames of the shock wave's shadow to the value of the shock wave position. The project was done as a tutorial demonstration of the method and its feasibility, using a laboratory shadowgraph, a nozzle, and a commercial neural network package. The results were quite good, indicating that artificial neural networks can be used efficiently to automate semi-quantitative applications of flow visualization.
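    In spirit, the mapping is a small image-to-scalar regression. The sketch below shows one way to train such a network on cropped shadowgraph frames with known shock positions; the network size, preprocessing and library choice are assumptions for illustration, not the report's actual setup.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def train_position_network(frames, positions):
          # frames: array of shape (n_samples, rows, cols); positions: shape (n_samples,)
          X = frames.reshape(len(frames), -1) / 255.0   # flatten and roughly normalise pixels
          model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
          model.fit(X, positions)
          return model

      # Predicting on new cropped frames:
      # shock_positions = model.predict(new_frames.reshape(len(new_frames), -1) / 255.0)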

  12. Sensory Interactive Teleoperator Robotic Grasping

    NASA Technical Reports Server (NTRS)

    Alark, Keli; Lumia, Ron

    1997-01-01

    As the technological world strives for efficiency, the need for economical equipment that increases operator proficiency in minimal time is fundamental. This system links a CCD camera, a controller and a robotic arm to a computer vision system to provide an alternative method of image analysis. The machine vision system employed possesses software tools for acquiring and analyzing images received through a CCD camera. After feature extraction is performed on the object in the image, information about the object's location, orientation and distance from the robotic gripper is sent to the robot controller so that the robot can manipulate the object.
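    A minimal sketch of the feature-extraction step might look like the following, assuming a grayscale frame in which the object is brighter than the background; the thresholding choice and the hand-off of the result to the robot controller are illustrative assumptions, not details from the paper.

      import cv2

      def locate_object(gray_frame):
          # Segment the object, keep the largest blob, and report its image-plane
          # centroid and in-plane orientation for the robot controller.
          _, mask = cv2.threshold(gray_frame, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          largest = max(contours, key=cv2.contourArea)
          (cx, cy), (w, h), angle = cv2.minAreaRect(largest)
          return (cx, cy), angle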

  13. Design of a CCD Camera for Space Surveillance

    DTIC Science & Technology

    2016-03-05

    Laboratory fabricated CCID-51M, a 2048x1024-pixel charge-coupled device (CCD) imager. [1] The mission objective is to observe and detect satellites in... phased to transfer the charge to the outputs. An electronic shutter is created by having an equal area of pixels covered by an opaque metal mask. The... [Figure 4: CDS Timing Diagram] By design the CCD readout rate is 400 kHz. This rate was chosen so reading the 2E6 pixels from one output is less than

  14. Characterising CCDs with cosmic rays

    DOE PAGES

    Fisher-Levine, M.; Nomerotski, A.

    2015-08-06

    The properties of cosmic ray muons make them a useful probe for measuring the properties of thick, fully depleted CCD sensors. The known energy deposition per unit length allows measurement of the gain of the sensor's amplifiers, whilst the straightness of the tracks allows for a crude assessment of the static lateral electric fields at the sensor's edges. The small volume in which the muons deposit their energy allows measurement of the contribution to the PSF from the diffusion of charge as it drifts across the sensor. In this work we present a validation of the cosmic ray gain measurement technique by comparing with radioisotope gain measurements, and calculate the charge diffusion coefficient for prototype LSST sensors.
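    The gain estimate exploits the fact that a minimum-ionising muon creates a roughly known number of electron-hole pairs per unit path length in silicon (on the order of 80 per micron, a textbook figure rather than a number from this paper). A minimal sketch, assuming the charge and path length of each reconstructed track are already measured:

      import numpy as np

      # Approximate e-h pairs created per micron of silicon by a minimum-ionising particle.
      EH_PAIRS_PER_UM = 80.0

      def gain_from_muons(track_charge_adu, track_length_um):
          # Ratio of expected ionisation per unit length to measured ADU per unit
          # length gives the conversion gain in electrons per ADU; the median
          # suppresses outliers from delta rays and misreconstructed tracks.
          adu_per_um = np.asarray(track_charge_adu, float) / np.asarray(track_length_um, float)
          return EH_PAIRS_PER_UM / np.median(adu_per_um)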

  15. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  16. Fabrication of Robust, Flat, Thinned, UV-Imaging CCDs

    NASA Technical Reports Server (NTRS)

    Grunthaner, Paula; Elliott, Stythe; Jones, Todd; Nikzad, Shouleh

    2004-01-01

    An improved process that includes a high-temperature bonding subprocess has been developed to enable the fabrication of robust, flat, silicon-based charge-coupled devices (CCDs) for imaging in ultraviolet (UV) light and/or for detecting low-energy charged particles. The CCDs in question are devices on which CCD circuitry has already been formed and which have been thinned for back-surface illumination. These CCDs may be delta doped, and aspects of this type of CCD have been described in several prior articles in NASA Tech Briefs. Unlike prior low-temperature bonding subprocesses based on the use of epoxies or waxes, the high-temperature bonding subprocess is compatible with the delta-doping process as well as with other CCD-fabrication processes. The present improved process and its bonding, thinning, and delta-doping subprocesses are characterized as post-fabrication processes because they are undertaken after the fabrication of CCD circuitry on the front side of a full-thickness silicon substrate. In a typical case, it is necessary to reduce the thickness of the CCD to between 10 and 20 µm in order to take advantage of back-side illumination and in order to perform delta doping and/or other back-side treatment to enhance the quantum efficiency. In the prior approach to the fabrication of back-side-illuminated CCDs, the thinning subprocess turned each CCD into a free-standing membrane that was fragile and tended to become wrinkled. In the present improved process, prior to thinning and delta doping, a CCD is bonded on its front side to a silicon substrate that has been prefabricated to include cutouts to accommodate subsequent electrical connections to bonding pads on the CCD circuitry. The substrate provides structural support to increase ruggedness and maintain flatness. At the beginning of this process, the back side of a CCD as fabricated on a full-thickness substrate is polished. Silicon nitride is deposited on the back side, opposite the bonding pads on the front side, in order to define a relatively thick frame. The portion of the CCD not covered by the frame is the portion to be thinned by etching.

  17. Dynamic autofocus for continuous-scanning time-delay-and-integration image acquisition in automated microscopy.

    PubMed

    Bravo-Zanoguera, Miguel E; Laris, Casey A; Nguyen, Lam K; Oliva, Mike; Price, Jeffrey H

    2007-01-01

    Efficient image cytometry of a conventional microscope slide means rapid acquisition and analysis of 20 gigapixels of image data (at 0.3-µm sampling). The voluminous data motivate increased acquisition speed to enable many biomedical applications. Continuous-motion time-delay-and-integrate (TDI) scanning has the potential to speed image acquisition while retaining sensitivity, but the challenge of implementing high-resolution autofocus operating simultaneously with acquisition has limited its adoption. We develop a dynamic autofocus system for this need using: 1. a "volume camera," consisting of nine fiber optic imaging conduits to charge-coupled device (CCD) sensors, that acquires images in parallel from different focal planes, 2. an array of mixed analog-digital processing circuits that measure the high spatial frequencies of the multiple image streams to create focus indices, and 3. a software system that reads and analyzes the focus data streams and calculates best focus for closed feedback loop control. Our system updates autofocus at 56 Hz (or once every 21 µm of stage travel) to collect sharply focused images sampled at 0.3 x 0.3 µm²/pixel at a stage speed of 2.3 mm/s. The system, tested by focusing in phase contrast and imaging long fluorescence strips, achieves high-performance closed-loop image-content-based autofocus in continuous scanning for the first time.
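    A software analogue of the focus-index idea is easy to sketch: score each focal-plane image by its high-spatial-frequency content and treat the best-scoring plane as the focus target. The hardware described above does this with mixed analog-digital circuits; the version below is a purely illustrative software stand-in with assumed inputs.

      import numpy as np

      def focus_index(img):
          # High-frequency content measured as the variance of a discrete Laplacian;
          # sharply focused images score higher than defocused ones.
          img = img.astype(np.float64)
          lap = (-4.0 * img
                 + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
                 + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
          return lap.var()

      def best_focus(plane_images, plane_positions_um):
          # Pick the focal plane whose image is sharpest; a feedback loop would then
          # drive the stage or objective toward that position.
          scores = [focus_index(im) for im in plane_images]
          return plane_positions_um[int(np.argmax(scores))]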

  18. Images of the laser entrance hole from the static x-ray imager at NIF.

    PubMed

    Schneider, M B; Jones, O S; Meezan, N B; Milovich, J L; Town, R P; Alvarez, S S; Beeler, R G; Bradley, D K; Celeste, J R; Dixit, S N; Edwards, M J; Haugh, M J; Kalantar, D H; Kline, J L; Kyrala, G A; Landen, O L; MacGowan, B J; Michel, P; Moody, J D; Oberhelman, S K; Piston, K W; Pivovaroff, M J; Suter, L J; Teruya, A T; Thomas, C A; Vernon, S P; Warrick, A L; Widmann, K; Wood, R D; Young, B K

    2010-10-01

    The static x-ray imager at the National Ignition Facility is a pinhole camera using a CCD detector to obtain images of Hohlraum wall x-ray drive illumination patterns seen through the laser entrance hole (LEH). Carefully chosen filters, combined with the CCD response, allow recording images in the x-ray range of 3-5 keV with 60 μm spatial resolution. The routines used to obtain the apparent size of the backlit LEH and the location and intensity of beam spots are discussed and compared to predictions. A new soft x-ray channel centered at 870 eV (near the x-ray peak of a 300 eV temperature ignition Hohlraum) is discussed.

  19. Astrometrica: Astrometric data reduction of CCD images

    NASA Astrophysics Data System (ADS)

    Raab, Herbert

    2012-03-01

    Astrometrica is an interactive software tool for scientific-grade astrometric data reduction of CCD images. The current version of the software is for the 32-bit Windows operating system family. Astrometrica reads FITS (8, 16 and 32 bit integer) and SBIG image files. The size of the images is limited only by available memory. It also offers automatic image calibration (dark frame and flat field correction), automatic reference star identification, automatic moving object detection and identification, and access to new-generation star catalogs (PPMXL, UCAC 3 and CMC-14), in addition to online help and other features. Astrometrica is shareware, available for use free of charge for a limited period of time (100 days); special arrangements can be made for educational projects.
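    At the core of any such astrometric reduction is a plate solution that maps measured pixel coordinates onto catalogue positions. The sketch below shows the classical six-constant linear fit using reference stars; it is a simplified stand-in, not Astrometrica's actual reduction, which also handles catalogue access, higher-order terms and moving-object logic.

      import numpy as np

      def fit_plate_constants(x, y, xi, eta):
          # Least-squares fit of the linear plate model
          #   xi  = a*x + b*y + c
          #   eta = d*x + e*y + f
          # where (x, y) are measured pixel positions of reference stars and
          # (xi, eta) are their standard (tangent-plane) coordinates from a catalogue.
          x = np.asarray(x, float)
          y = np.asarray(y, float)
          A = np.column_stack([x, y, np.ones_like(x)])
          abc, *_ = np.linalg.lstsq(A, np.asarray(xi, float), rcond=None)
          def_, *_ = np.linalg.lstsq(A, np.asarray(eta, float), rcond=None)
          return abc, def_

      def apply_plate_constants(abc, def_, x, y):
          # Map the measured pixel position of a target (e.g. a moving object)
          # to standard coordinates, which can then be converted to RA/Dec.
          x = np.atleast_1d(np.asarray(x, float))
          y = np.atleast_1d(np.asarray(y, float))
          A = np.column_stack([x, y, np.ones_like(x)])
          return A @ abc, A @ def_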

  20. Array biosensor: recent developments

    NASA Astrophysics Data System (ADS)

    Golden, Joel P.; Rowe-Taitt, Chris A.; Feldstein, Mark J.; Ligler, Frances S.

    1999-05-01

    A fluorescence-based immunosensor has been developed for simultaneous analyses of multiple samples for 1 to 6 different antigens. A patterned array of recognition antibodies immobilized on the surface of a planar waveguide is used to 'capture' analyte present in samples. Bound analyte is then quantified by means of fluorescent detector molecules. Upon excitation of the fluorescent label by a small diode laser, a CCD camera detects the pattern of fluorescent antigen:antibody complexes on the sensor surface. Image analysis software correlates the position of fluorescent signals with the identity of the analyte. A new design for a fluidics distribution system is shown, as well as results from assays for physiologically relevant concentrations of staphylococcal enterotoxin B (SEB), F1 antigen from Yersinia pestis, and D-dimer, a marker of sepsis and thrombotic disorders.
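    The quantification step, reading out background-corrected fluorescence for each capture spot in the CCD image, can be sketched as follows; the spot radius, background annulus and array layout are assumed inputs, not values from the paper.

      import numpy as np

      def spot_signal(image, centre, radius=6, bg_width=3):
          # Background-corrected mean fluorescence of one antibody capture spot:
          # mean inside the spot minus mean of a surrounding annulus.
          yy, xx = np.indices(image.shape)
          r = np.hypot(yy - centre[0], xx - centre[1])
          spot = image[r <= radius]
          background = image[(r > radius) & (r <= radius + bg_width)]
          return float(spot.mean()) - float(background.mean())

      # Mapping spot positions to analyte identities (hypothetical layout dict):
      # signals = {analyte: spot_signal(ccd_image, pos) for analyte, pos in layout.items()}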
