Sample records for device emccd camera

  1. The darkest EMCCD ever

    NASA Astrophysics Data System (ADS)

    Daigle, Olivier; Quirion, Pierre-Olivier; Lessard, Simon

    2010-07-01

    EMCCDs are devices capable of sub-electron read-out noise at high pixel rates, together with high quantum efficiency (QE). However, they are plagued by an excess noise factor (ENF), which has the same effect on photometric measurements as if the QE were halved. To get rid of the ENF, photon counting (PC) operation is mandatory, with the drawback of counting at most one photon per pixel per frame. The high frame rate capability of EMCCDs comes to the rescue, at the price of increased clock-induced charges (CIC), which dominate the noise budget of the EMCCD. The CIC can be greatly reduced with appropriate clocking, which renders the PC operation of the EMCCD very efficient for faint-flux photometry or spectroscopy, adaptive optics, ultrafast imaging and Lucky Imaging. This clocking is achievable with a new EMCCD controller: CCCP, the CCD Controller for Counting Photons. This new controller, now commercialized by Nüvü Cameras Inc., was integrated into an EMCCD camera and tested at the Observatoire du Mont-Mégantic. The results are presented in this paper.
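    The ENF penalty described above can be made concrete with a short sketch (the signal level is a hypothetical round number): in analog EM mode the ENF of √2 doubles the shot-noise variance, giving the same SNR as an ideal detector that detected only half the photons.

```python
import math

def snr_analog_em(signal_e, enf=math.sqrt(2.0)):
    """SNR of an EMCCD in analog mode: the excess noise factor (ENF)
    multiplies the shot noise, so SNR = S / (ENF * sqrt(S))."""
    return signal_e / (enf * math.sqrt(signal_e))

def snr_ideal(signal_e):
    """Shot-noise-limited SNR of an ideal (noiseless, ENF-free) detector."""
    return signal_e / math.sqrt(signal_e)

s = 100.0  # hypothetical signal, photo-electrons per pixel
# An ENF of sqrt(2) degrades SNR exactly as if only half the photons
# had been detected: SNR_analog(s) equals SNR_ideal(s/2).
print(snr_analog_em(s))   # ~7.07
print(snr_ideal(s / 2))   # ~7.07
print(snr_ideal(s))       # 10.0
```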

  2. Maturing CCD Photon-Counting Technology for Space Flight

    NASA Technical Reports Server (NTRS)

    Mallik, Udayan; Lyon, Richard; Petrone, Peter; McElwain, Michael; Benford, Dominic; Clampin, Mark; Hicks, Brian

    2015-01-01

    This paper discusses charge blooming and starlight saturation, two potential technical problems, when using an Electron Multiplying Charge Coupled Device (EMCCD) type detector in a high-contrast instrument for imaging exoplanets. These problems especially affect interferometric-type coronagraphs, i.e., coronagraphs that do not use a mask to physically block starlight in the science channel of the instrument. The problems are illustrated using images taken with a commercial Princeton Instruments EMCCD camera in the Goddard Space Flight Center's (GSFC) Interferometric Coronagraph facility, and techniques to overcome them are discussed. This paper also discusses the development and architecture of a Field Programmable Gate Array and Digital-to-Analog Converter based shaped-clock controller for a photon-counting EMCCD camera. The discussion contained here will inform high-contrast imaging groups in their work with EMCCD detectors.

  3. Graphical user interface for a dual-module EMCCD x-ray detector array

    NASA Astrophysics Data System (ADS)

    Wang, Weiyuan; Ionita, Ciprian; Kuhls-Gilcrist, Andrew; Huang, Ying; Qu, Bin; Gupta, Sandesh K.; Bednarek, Daniel R.; Rudin, Stephen

    2011-03-01

    A new Graphical User Interface (GUI) was developed using Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW) for a high-resolution, high-sensitivity Solid State X-ray Image Intensifier (SSXII), which is a new x-ray detector for radiographic and fluoroscopic imaging, consisting of an array of Electron-Multiplying CCDs (EMCCDs) each having a variable on-chip electron-multiplication gain of up to 2000× to reduce the effect of readout noise. To enlarge the field-of-view (FOV), each EMCCD sensor is coupled to an x-ray phosphor through a fiberoptic taper. Two EMCCD camera modules are used in our prototype to form a computer-controlled array; however, larger arrays are under development. The new GUI provides patient registration, EMCCD module control, image acquisition, and patient image review. Images from the array are stitched into a 2k×1k pixel image that can be acquired and saved at a rate of 17 Hz (faster with pixel binning). When reviewing the patient's data, the operator can select images from the patient's directory tree listed by the GUI and cycle through the images using a slider bar. Commonly used camera parameters including exposure time, trigger mode, and individual EMCCD gain can be easily adjusted using the GUI. The GUI is designed to accommodate expansion of the EMCCD array to even larger FOVs with more modules. The high-resolution, high-sensitivity EMCCD modular-array SSXII imager with the new user-friendly GUI should enable angiographers and interventionalists to visualize smaller vessels and endovascular devices, helping them to make more accurate diagnoses and to perform more precise image-guided interventions.

  4. Graphical User Interface for a Dual-Module EMCCD X-ray Detector Array.

    PubMed

    Wang, Weiyuan; Ionita, Ciprian; Kuhls-Gilcrist, Andrew; Huang, Ying; Qu, Bin; Gupta, Sandesh K; Bednarek, Daniel R; Rudin, Stephen

    2011-03-16

    A new Graphical User Interface (GUI) was developed using Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW) for a high-resolution, high-sensitivity Solid State X-ray Image Intensifier (SSXII), which is a new x-ray detector for radiographic and fluoroscopic imaging, consisting of an array of Electron-Multiplying CCDs (EMCCDs) each having a variable on-chip electron-multiplication gain of up to 2000× to reduce the effect of readout noise. To enlarge the field-of-view (FOV), each EMCCD sensor is coupled to an x-ray phosphor through a fiberoptic taper. Two EMCCD camera modules are used in our prototype to form a computer-controlled array; however, larger arrays are under development. The new GUI provides patient registration, EMCCD module control, image acquisition, and patient image review. Images from the array are stitched into a 2k×1k pixel image that can be acquired and saved at a rate of 17 Hz (faster with pixel binning). When reviewing the patient's data, the operator can select images from the patient's directory tree listed by the GUI and cycle through the images using a slider bar. Commonly used camera parameters including exposure time, trigger mode, and individual EMCCD gain can be easily adjusted using the GUI. The GUI is designed to accommodate expansion of the EMCCD array to even larger FOVs with more modules. The high-resolution, high-sensitivity EMCCD modular-array SSXII imager with the new user-friendly GUI should enable angiographers and interventionalists to visualize smaller vessels and endovascular devices, helping them to make more accurate diagnoses and to perform more precise image-guided interventions.

  5. Experimental comparison of high-density scintillators for EMCCD-based gamma ray imaging

    NASA Astrophysics Data System (ADS)

    Heemskerk, Jan W. T.; Kreuger, Rob; Goorden, Marlies C.; Korevaar, Marc A. N.; Salvador, Samuel; Seeley, Zachary M.; Cherepy, Nerine J.; van der Kolk, Erik; Payne, Stephen A.; Dorenbos, Pieter; Beekman, Freek J.

    2012-07-01

    Detection of x-rays and gamma rays with high spatial resolution can be achieved with scintillators that are optically coupled to electron-multiplying charge-coupled devices (EMCCDs). These can be operated at typical frame rates of 50 Hz with low noise. In such a set-up, scintillation light within each frame is integrated after which the frame is analyzed for the presence of scintillation events. This method allows for the use of scintillator materials with relatively long decay times of a few milliseconds, not previously considered for use in photon-counting gamma cameras, opening up an unexplored range of dense scintillators. In this paper, we test CdWO4 and transparent polycrystalline ceramics of Lu2O3:Eu and (Gd,Lu)2O3:Eu as alternatives to currently used CsI:Tl in order to improve the performance of EMCCD-based gamma cameras. The tested scintillators were selected for their significantly larger cross-sections at 140 keV (99mTc) compared to CsI:Tl combined with moderate to good light yield. A performance comparison based on gamma camera spatial and energy resolution was done with all tested scintillators having equal (66%) interaction probability at 140 keV. CdWO4, Lu2O3:Eu and (Gd,Lu)2O3:Eu all result in a significantly improved spatial resolution over CsI:Tl, albeit at the cost of reduced energy resolution. Lu2O3:Eu transparent ceramic gives the best spatial resolution: 65 µm full-width-at-half-maximum (FWHM) compared to 147 µm FWHM for CsI:Tl. In conclusion, these ‘slow’ dense scintillators open up new possibilities for improving the spatial resolution of EMCCD-based scintillation cameras.
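    The frame-based scheme described above, integrating scintillation light within each 50 Hz frame and then searching the frame for events, can be sketched as follows; the frame size, flash amplitudes, and threshold are hypothetical, and a simple 3×3 local-maximum search stands in for the authors' event analysis.

```python
import numpy as np

def find_scintillation_events(frame, threshold):
    """Locate scintillation flashes in one integrated EMCCD frame as
    local maxima above a noise threshold (3x3 neighbourhood)."""
    events = []
    h, w = frame.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = frame[y - 1:y + 2, x - 1:x + 2]
            if frame[y, x] >= threshold and frame[y, x] == patch.max():
                # Event energy estimated from the summed light in the patch.
                events.append((y, x, float(patch.sum())))
    return events

rng = np.random.default_rng(0)
frame = rng.normal(0.0, 1.0, (64, 64))   # read-noise background
frame[20, 30] += 50.0                    # one synthetic flash
frame[40, 10] += 80.0                    # another flash
events = sorted(find_scintillation_events(frame, threshold=10.0))
print([(y, x) for y, x, _ in events])    # [(20, 30), (40, 10)]
```

    The summed patch energy is what an energy window for 140 keV events would be applied to; a slow scintillator only requires that most of the decay fits within one integration frame.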

  6. Extreme Faint Flux Imaging with an EMCCD

    NASA Astrophysics Data System (ADS)

    Daigle, Olivier; Carignan, Claude; Gach, Jean-Luc; Guillaume, Christian; Lessard, Simon; Fortin, Charles-Anthony; Blais-Ouellette, Sébastien

    2009-08-01

    An EMCCD camera, designed from the ground up for extreme faint flux imaging, is presented. CCCP, the CCD Controller for Counting Photons, has been integrated with a CCD97 EMCCD from e2v technologies into a scientific camera at the Laboratoire d’Astrophysique Expérimentale (LAE), Université de Montréal. This new camera achieves subelectron readout noise and very low clock-induced charge (CIC) levels, which are mandatory for extreme faint flux imaging. It has been characterized in the laboratory and used on the Observatoire du Mont Mégantic 1.6 m telescope. The performance of the camera is discussed, and experimental data, together with the first scientific data, are presented.

  7. Optimizing low-light microscopy with back-illuminated electron multiplying charge-coupled device: enhanced sensitivity, speed, and resolution.

    PubMed

    Coates, Colin G; Denvir, Donal J; McHale, Noel G; Thornbury, Keith D; Hollywood, Mark A

    2004-01-01

    The back-illuminated electron multiplying charge-coupled device (EMCCD) camera is having a profound influence on the field of low-light dynamic cellular microscopy, combining the highest possible photon collection efficiency with the ability to virtually eliminate the readout noise detection limit. We report here the use of this camera, in 512 × 512 frame-transfer chip format at 10-MHz pixel readout speed, in optimizing a demanding ultra-low-light intracellular calcium flux microscopy setup. The arrangement employed includes a spinning confocal Nipkow disk, which, while meeting the need both to generate images at very rapid frame rates and to minimize background photons, yields very weak signals. The challenge for the camera lies not just in detecting as many of these scarce photons as possible, but also in operating at a frame rate that meets the temporal resolution requirements of many low-light microscopy approaches, a particular demand of smooth muscle calcium flux microscopy. Results presented illustrate both the significant sensitivity improvement offered by this technology over the previous standard in ultra-low-light CCD detection, the GenIII+ intensified charge-coupled device (ICCD), and also portray the advanced temporal and spatial resolution capabilities of the EMCCD. Copyright 2004 Society of Photo-Optical Instrumentation Engineers.

  8. Circuit design of an EMCCD camera

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Song, Qian; Jin, Jianhui; He, Chun

    2012-07-01

    EMCCDs have been used in astronomical observations in many ways. Recently we developed a camera using a TX285 EMCCD. The CCD chip is cooled to -100°C in an LN2 dewar. The camera controller consists of a driving board, a control board and a temperature control board. Power supplies and driving clocks of the CCD are provided by the driving board; the timing generator is located in the control board. The timing generator and an embedded Nios II CPU are implemented in an FPGA. Moreover, the ADC and the data transfer circuit are also on the control board and are controlled by the FPGA. Data transfer between the image workstation and the camera is done through a Camera Link frame grabber. The image acquisition software is built using VC++ and Sapera LT. This paper describes the camera structure, the main components, and the circuit design of the video signal processing channel, clock driver, FPGA and Camera Link interfaces, and the temperature metering and control system. Some testing results are presented.

  9. The challenge of sCMOS image sensor technology to EMCCD

    NASA Astrophysics Data System (ADS)

    Chang, Weijing; Dai, Fang; Na, Qiyue

    2018-02-01

    In the field of low-illumination image sensors, the noise of the latest scientific-grade CMOS (sCMOS) image sensors is close to that of EMCCDs, and the industry considers them to have the potential to compete with, and even replace, EMCCDs. We therefore selected several typical sCMOS and EMCCD image sensors and cameras and compared their performance parameters. The results show that the signal-to-noise ratio of sCMOS is close to that of EMCCD, and its other parameters are superior. But the signal-to-noise ratio is very important for low-illumination imaging, and the actual imaging results of sCMOS are not ideal. EMCCD is still the first choice in the high-performance application field.
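    The trade-off summarized above can be illustrated with a simple per-pixel SNR model; the read-noise, gain, and excess-noise values below are hypothetical round numbers, not measurements of the sensors compared in the paper.

```python
import math

def snr_emccd(signal_e, read_noise=10.0, gain=1000.0, enf2=2.0):
    """EMCCD: EM gain divides the read noise, but the excess noise
    factor (ENF^2 ~ 2) multiplies the shot-noise variance."""
    noise = math.sqrt(enf2 * signal_e + (read_noise / gain) ** 2)
    return signal_e / noise

def snr_scmos(signal_e, read_noise=1.0):
    """sCMOS: low but non-negligible read noise, no excess noise."""
    return signal_e / math.sqrt(signal_e + read_noise ** 2)

for s in (0.1, 1.0, 100.0):  # hypothetical signals, e-/pixel
    print(s, round(snr_emccd(s), 3), round(snr_scmos(s), 3))
```

    With these placeholder numbers the EMCCD wins below roughly one photo-electron per pixel, where its negligible effective read noise outweighs the excess-noise penalty, while the sCMOS overtakes it at higher signal levels, consistent with the conclusion that SNR at low illumination still favours the EMCCD.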

  10. Low-flux optical spectro-imaging and a comparison of the Halpha and HI kinematics of a sample of nearby galaxies

    NASA Astrophysics Data System (ADS)

    Daigle, Olivier

    A new EMCCD (Electron Multiplying Charge Coupled Device) controller is presented. It allows the EMCCD to be used for photon counting by drastically reducing its dominant source of noise: clock-induced charges. A new EMCCD camera was built using this controller. It has been characterized in the laboratory and tested at the Observatoire du Mont Mégantic. Compared to the previous generation of photon counting cameras based on intensifier tubes, this new camera makes the observation of galaxy kinematics with an integral field spectrometer with a Fabry-Perot interferometer in Halpha light much faster, and allows fainter galaxies to be observed. The integration time required to reach a given signal-to-noise ratio is about 4 times less than with the intensifier tubes. Many applications could benefit from such a camera: fast, faint-flux photometry; high spectral and temporal resolution spectroscopy; earth-based diffraction-limited imaging (lucky imaging); etc. Technically, the camera is shot-noise dominated for fluxes higher than 0.002 photon/pixel/image. The 21 cm emission line of neutral hydrogen (HI) is often used to map galaxy kinematics. The extent of the distribution of neutral hydrogen in galaxies, which goes well beyond the optical disk, is one of the reasons this line is used so often. However, the spatial resolution of such observations is limited when compared to their optical equivalents. When comparing the HI data to higher-resolution data, some differences were simply attributed to the beam smearing of the HI caused by its lower resolution. The THINGS (The HI Nearby Galaxy Survey) project observed many galaxies of the SINGS (Spitzer Infrared Nearby Galaxies Survey) project. The kinematics of THINGS will be compared to the kinematic data of the galaxies obtained in Halpha light. The comparison will try to determine whether beam smearing alone is responsible for the observed differences. 
    The results show that intrinsic dissimilarities between the kinematical tracers used are responsible for some of the observed disagreements. Understanding these differences is of high importance, as the dark matter distribution, inferred from the rotation of galaxies, is a test of some cosmological models. Keywords: Astronomical instrumentation - Photon counting - EMCCD - Clock Induced Charges - Galaxies - Kinematics - Dark matter - Fabry-Perot interferometry - 3D spectroscopy - SINGS.

  11. The impact of radiation damage on photon counting with an EMCCD for the WFIRST-AFTA coronagraph

    NASA Astrophysics Data System (ADS)

    Bush, Nathan; Hall, David; Holland, Andrew; Burgon, Ross; Murray, Neil; Gow, Jason; Soman, Matthew; Jordan, Douglas; Demers, Richard; Harding, Leon; Hoenk, Michael; Michaels, Darren; Nemati, Bijan; Peddada, Pavani

    2015-09-01

    WFIRST-AFTA is a 2.4m class NASA observatory designed to address a wide range of science objectives using two complementary scientific payloads. The Wide Field Instrument (WFI) offers Hubble-quality imaging over a 0.28 square degree field of view, and will gather NIR statistical data on exoplanets through gravitational microlensing. The second instrument is a high-contrast coronagraph that will carry out the direct imaging and spectroscopic analysis of exoplanets, providing a means to probe the structure and composition of planetary systems. The coronagraph instrument is expected to operate at low photon flux for long integration times, meaning all noise sources must be kept to a minimum. In order to satisfy the low-noise requirements, the Electron Multiplication (EM)-CCD has been baselined for both the imaging and spectrograph cameras. The EMCCD was selected over other candidates because its effective electronic read noise falls to sub-electron values at an appropriate multiplication gain setting. Other noise sources, however, such as thermal dark signal and Clock Induced Charge (CIC), need to be characterised and mitigated. In addition, operation within a space environment will subject the device to radiation damage that will degrade the Charge Transfer Efficiency (CTE) of the device throughout the mission lifetime. Here we present our latest results from pre- and post-irradiation testing of the e2v CCD201-20 BI EMCCD sensor, baselined for the WFIRST-AFTA coronagraph instrument. A description of the detector technology is presented, alongside considerations for operation within a space environment. The results from a room temperature irradiation are discussed in context with the nominal operating requirements of AFTA-C, and future work, which entails a cryogenic irradiation of the CCD201-20, is presented.

  12. The simulated spectrum of the OGRE X-ray EM-CCD camera system

    NASA Astrophysics Data System (ADS)

    Lewis, M.; Soman, M.; Holland, A.; Lumb, D.; Tutt, J.; McEntaffer, R.; Schultz, T.; Holland, K.

    2017-12-01

    The X-ray astronomical telescopes in use today, such as Chandra and XMM-Newton, use X-ray grating spectrometers to probe the high energy physics of the Universe. These instruments typically use reflective optics for focussing onto gratings that disperse incident X-rays across a detector, often a Charge-Coupled Device (CCD). The X-ray energy is determined from the position at which it was detected on the CCD. Improved technology for the next generation of X-ray grating spectrometers has been developed and will be tested on a sounding rocket experiment known as the Off-plane Grating Rocket Experiment (OGRE). OGRE aims to capture the highest resolution soft X-ray spectrum of Capella, a well-known astronomical X-ray source, during an observation period lasting between 3 and 6 minutes, whilst proving the performance and suitability of three key components. These three components consist of a telescope made from silicon mirrors, gold-coated silicon X-ray diffraction gratings, and a camera that comprises four Electron-Multiplying (EM)-CCDs arranged to observe the soft X-rays dispersed by the gratings. EM-CCDs have an architecture similar to standard CCDs, with the addition of an EM gain register where the electron signal is amplified so that the effective signal-to-noise ratio of the imager is improved. The devices also have highly favourable Quantum Efficiency values for detecting soft X-ray photons. On OGRE, this improved detector performance allows for easier identification of low energy X-rays and fast readouts, since the amplified signal charge makes readout noise almost negligible. A simulation that applies the OGRE instrument performance to the Capella soft X-ray spectrum has been developed, allowing the distribution of X-rays onto the EM-CCDs to be predicted. 
    A proposed optical model is also discussed, which would give the mission's minimum-success photon-count requirement a high chance of being met with the shortest possible observation time. These results are compared to a Chandra observation to show the overall effectiveness of the new technologies. The current optical module is shown to narrowly meet the minimum success conditions, whilst the proposed model comfortably demonstrates the effectiveness of the technologies if a larger effective area is provided.

  13. Performance Evaluation of 18F Radioluminescence Microscopy Using Computational Simulation

    PubMed Central

    Wang, Qian; Sengupta, Debanti; Kim, Tae Jin; Pratx, Guillem

    2017-01-01

    Purpose: Radioluminescence microscopy can visualize the distribution of beta-emitting radiotracers in live single cells with high resolution. Here, we perform a computational simulation of 18F positron imaging using this modality to better understand how radioluminescence signals are formed and to assist in optimizing the experimental setup and image processing. Methods: First, the transport of charged particles through the cell and scintillator and the resulting scintillation is modeled using the GEANT4 Monte-Carlo simulation. Then, the propagation of the scintillation light through the microscope is modeled by a convolution with a depth-dependent point-spread function, which models the microscope response. Finally, the physical measurement of the scintillation light using an electron-multiplying charge-coupled device (EMCCD) camera is modeled using a stochastic numerical photosensor model, which accounts for various sources of noise. The simulated output of the EMCCD camera is further processed using our ORBIT image reconstruction methodology to evaluate the endpoint images. Results: The EMCCD camera model was validated against experimentally acquired images, and the simulated noise, as measured by the standard deviation of a blank image, was found to be accurate within 2% of the actual detection. Furthermore, point-source simulations found that a reconstructed spatial resolution of 18.5 μm can be achieved near the scintillator. As the source is moved away from the scintillator, spatial resolution degrades at a rate of 3.5 μm per μm of distance. These results agree well with the experimentally measured spatial resolution of 30–40 μm (live cells). The simulation also shows that the system sensitivity is 26.5%, which is also consistent with our previous experiments. Finally, an image of a simulated sparse set of single cells is visually similar to the measured cell image. 
    Conclusions: Our simulation methodology agrees with experimental measurements taken with radioluminescence microscopy. This in silico approach can be used to guide further instrumentation developments and to provide a framework for improving image reconstruction. PMID:28273348
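    The stochastic photosensor model mentioned in the Methods can be sketched as a chain of random processes; the QE, gain, noise, and conversion values here are hypothetical placeholders, and the gamma-distributed EM multiplication is a standard approximation, not necessarily the model the authors used.

```python
import numpy as np

def simulate_emccd(photon_mean, shape, qe=0.9, em_gain=300.0,
                   read_noise=30.0, adu_per_e=0.1, bias=100.0, seed=0):
    """Simulate one EMCCD exposure: Poisson photo-electrons, stochastic
    EM multiplication (gamma approximation), Gaussian read noise, then
    conversion to digital numbers with a fixed bias offset."""
    rng = np.random.default_rng(seed)
    electrons = rng.poisson(qe * photon_mean, shape)
    # Gamma(k=n, theta=gain) approximates the EM cascade for n input e-.
    amplified = rng.gamma(np.maximum(electrons, 1e-12), em_gain)
    amplified[electrons == 0] = 0.0          # empty pixels stay empty
    signal = amplified + rng.normal(0.0, read_noise, shape)
    return bias + adu_per_e * signal

img = simulate_emccd(photon_mean=5.0, shape=(256, 256))
# Mean output ~ bias + adu_per_e * qe * photon_mean * em_gain = 235 ADU
print(round(img.mean(), 1))
```

    A model like this is what gets compared against blank-frame statistics when validating simulated noise against a real camera.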

  14. A Flight Photon Counting Camera for the WFIRST Coronagraph

    NASA Astrophysics Data System (ADS)

    Morrissey, Patrick

    2018-01-01

    A photon counting camera based on the Teledyne-e2v CCD201-20 electron multiplying CCD (EMCCD) is being developed for the NASA WFIRST coronagraph, an exoplanet imaging technology development of the Jet Propulsion Laboratory (Pasadena, CA) that is scheduled to launch in 2026. The coronagraph is designed to directly image planets around nearby stars, and to characterize their spectra. The planets are exceedingly faint, providing signals similar to the detector dark current, and require the use of photon counting detectors. Red sensitivity (600-980nm) is preferred to capture spectral features of interest. Since radiation in space affects the ability of the EMCCD to transfer the required single electron signals, care has been taken to develop appropriate shielding that will protect the cameras during a five year mission. In this poster, consideration of the effects of space radiation on photon counting observations will be described with the mitigating features of the camera design. An overview of the current camera flight system electronics requirements and design will also be described.

  15. Diffuse optical tomography for breast cancer imaging guided by computed tomography: A feasibility study.

    PubMed

    Baikejiang, Reheman; Zhang, Wei; Li, Changqing

    2017-01-01

    Diffuse optical tomography (DOT) has attracted attention over the last two decades due to its intrinsic sensitivity in imaging tissue chromophores such as hemoglobin, water, and lipid. However, DOT has not yet been clinically accepted due to its low spatial resolution, caused by strong optical scattering in tissues. Structural guidance provided by an anatomical imaging modality enhances DOT imaging substantially. Here, we propose a computed tomography (CT) guided multispectral DOT imaging system for breast cancer imaging. To validate its feasibility, we have built a prototype DOT imaging system consisting of a laser at a wavelength of 650 nm and an electron multiplying charge coupled device (EMCCD) camera. We have validated the CT-guided DOT reconstruction algorithms with numerical simulations and phantom experiments, in which different imaging setup parameters, such as the number of projection measurements and the width of the measurement patch, have been investigated. Our results indicate that an air-cooled EMCCD camera is good enough for transmission-mode DOT imaging. We have also found that measurements at six angular projections are sufficient for DOT to reconstruct optical targets with 2 and 4 times absorption contrast when CT guidance is applied. Finally, we describe our future research plan for integrating a multispectral DOT imaging system into a breast CT scanner.

  16. Experimental study of heavy-ion computed tomography using a scintillation screen and an electron-multiplying charged coupled device camera for human head imaging

    NASA Astrophysics Data System (ADS)

    Muraishi, Hiroshi; Hara, Hidetake; Abe, Shinji; Yokose, Mamoru; Watanabe, Takara; Takeda, Tohoru; Koba, Yusuke; Fukuda, Shigekazu

    2016-03-01

    We have developed a heavy-ion computed tomography (IonCT) system using a scintillation screen and an electron-multiplying charge-coupled device (EMCCD) camera that can measure a large object such as a human head. In this study, our objective in developing the system was to investigate the possibility of applying it to heavy-ion treatment planning, from the point of view of the spatial resolution of the reconstructed image. Experiments were carried out on a rotation phantom using 12C accelerated up to 430 MeV/u by the Heavy-Ion Medical Accelerator in Chiba (HIMAC) at the National Institute of Radiological Sciences (NIRS). We demonstrated that a reconstructed image of an object with a water equivalent thickness (WET) of approximately 18 cm was successfully achieved with a spatial resolution of 1 mm, which would make this IonCT system worth applying to heavy-ion treatment planning for head and neck cancers.

  17. Ageing and proton irradiation damage of a low voltage EMCCD in a CMOS process

    NASA Astrophysics Data System (ADS)

    Dunford, A.; Stefanov, K.; Holland, A.

    2018-02-01

    Electron Multiplying Charge Coupled Devices (EMCCDs) have revolutionised low light level imaging, providing highly sensitive detection capabilities. Implementing Electron Multiplication (EM) in Charge Coupled Devices (CCDs) can increase the Signal to Noise Ratio (SNR) and lead to further developments in low light level applications such as improvements in image contrast and single photon imaging. Demand has grown for EMCCD devices with properties traditionally restricted to Complementary Metal-Oxide-Semiconductor (CMOS) image sensors, such as lower power consumption and higher radiation tolerance. However, EMCCDs are known to experience an ageing effect, such that the gain gradually decreases with time. This paper presents results detailing EM ageing in an Electron Multiplying Complementary Metal-Oxide-Semiconductor (EMCMOS) device and its effect on several device characteristics such as Charge Transfer Inefficiency (CTI) and thermal dark signal. When operated at room temperature, an average decrease in gain of over 20% after an operational period of 175 hours was detected. With many image sensors deployed in harsh radiation environments, the radiation hardness of the device following proton irradiation was also tested. This paper presents the results of a proton irradiation completed at the Paul Scherrer Institut (PSI) at a 10 MeV equivalent fluence of 4.15×10¹⁰ protons/cm². The pre-irradiation characterisation, irradiation methodology and post-irradiation results are detailed, demonstrating an increase in dark current and a decrease in its activation energy. Finally, this paper presents a comparison of the damage caused by EM gain ageing and proton irradiation.

  18. High frame rate imaging based photometry. Photometric reduction of data from electron-multiplying charge coupled devices (EMCCDs)

    NASA Astrophysics Data System (ADS)

    Harpsøe, K. B. W.; Jørgensen, U. G.; Andersen, M. I.; Grundahl, F.

    2012-06-01

    Context: The EMCCD is a type of CCD that delivers fast readout times and negligible readout noise, making it an ideal detector for high frame rate applications that improve resolution, like lucky imaging or shift-and-add. This improvement in resolution can potentially improve the photometry of faint stars in extremely crowded fields significantly by alleviating crowding, which is a prerequisite for observing gravitational microlensing in main sequence stars towards the galactic bulge. However, the photometric stability of this device has not been assessed. The EMCCD has sources of noise not found in conventional CCDs, and new methods for handling these must be developed. Aims: We aim to investigate how the normal photometric reduction steps for conventional CCDs should be adjusted to be applicable to EMCCD data. One complication is that a bias frame cannot be obtained conventionally, as the output from an EMCCD is not normally distributed. Also, the readout process generates spurious charges in any CCD, but in EMCCD data these charges are visible, as opposed to in conventional CCD data. Furthermore, we aim to eliminate the photon waste associated with lucky imaging by combining this method with shift-and-add. Methods: A simple probabilistic model for the dark output of an EMCCD is developed. Fitting this model with the expectation-maximization algorithm allows us to estimate the bias, readout noise, amplification, and spurious charge rate per pixel, and thus correct for these phenomena. To investigate the stability of the photometry, corrected frames of a crowded field are reduced with a point spread function (PSF) fitting photometry package, where a lucky image is used as a reference. Results: We find that it is possible to develop an algorithm that elegantly reduces EMCCD data and produces stable photometry at the 1% level in an extremely crowded field. Based on observations with the Danish 1.54 m telescope at ESO La Silla Observatory.
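    The probabilistic dark model in the Methods can be sketched in simplified form; the bias, noise, gain, and spurious-charge values are hypothetical, and a simple median/threshold estimator stands in for the expectation-maximization fit used in the paper.

```python
import numpy as np

def simulate_dark_frames(n_frames, n_pix, bias=500.0, read_noise=10.0,
                         em_gain=300.0, spurious_rate=0.02, seed=1):
    """Dark EMCCD output: bias plus Gaussian read noise, plus (rarely)
    a spurious charge amplified by the EM register, modelled here as an
    exponential with mean em_gain (single-electron EM output)."""
    rng = np.random.default_rng(seed)
    out = bias + rng.normal(0.0, read_noise, (n_frames, n_pix))
    sc = rng.random((n_frames, n_pix)) < spurious_rate
    out[sc] += rng.exponential(em_gain, sc.sum())
    return out

frames = simulate_dark_frames(2000, 1000)
bias_est = np.median(frames)                 # robust against rare events
# Count pixels well above the read-noise floor as spurious-charge events
# (5-sigma threshold; misses events with small EM output).
rate_est = np.mean(frames > bias_est + 5 * 10.0)
print(round(float(bias_est), 1), round(float(rate_est), 3))
```

    The non-Gaussian tail of this mixture is exactly why a conventional bias frame fails: averaging the frames would be biased upward by the spurious-charge events, while the median stays near the true offset.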

  19. TH-EF-207A-03: Photon Counting Implementation Challenges Using An Electron Multiplying Charged-Coupled Device Based Micro-CT System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Podgorsak, A; Bednarek, D; Rudin, S

    2016-06-15

    Purpose: To successfully implement and operate a photon counting scheme on an electron multiplying charged-coupled device (EMCCD) based micro-CT system. Methods: We built an EMCCD based micro-CT system and implemented a photon counting scheme. EMCCD detectors use avalanche transfer registries to multiply the input signal far above the readout noise floor. Due to intrinsic differences in the pixel array, using a global threshold for photon counting is not optimal. To address this shortcoming, we generated a threshold array based on sixty dark fields (no x-ray exposure). We calculated an average matrix and a variance matrix of the dark field sequence. The average matrix was used for the offset correction, while the variance matrix was used to set individual pixel thresholds for the photon counting scheme. Three hundred photon counting frames were added for each projection, and 360 projections were acquired for each object. The system was used to scan various objects, followed by reconstruction using an FDK algorithm. Results: Examination of the projection images and reconstructed slices of the objects indicated clear interior detail free of beam hardening artifacts. This suggests successful implementation of the photon counting scheme on our EMCCD based micro-CT system. Conclusion: This work indicates that it is possible to implement and operate a photon counting scheme on an EMCCD based micro-CT system, suggesting that these devices might be able to operate at very low x-ray exposures in a photon counting mode. Such devices could have future implications for clinical CT protocols. NIH Grant R01EB002873; Toshiba Medical Systems Corp.
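    The thresholding scheme in the Methods, an average matrix for offset correction and a variance matrix for individual pixel thresholds, can be sketched as follows; the array size, the 5σ threshold rule, and the event amplitude are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (128, 128)
pixel_sigma = rng.uniform(5.0, 20.0, shape)    # pixel-to-pixel noise spread
pixel_offset = rng.uniform(90.0, 110.0, shape)

def dark_field():
    """One dark frame: per-pixel offset plus per-pixel Gaussian noise."""
    return pixel_offset + rng.normal(0.0, pixel_sigma)

def exposure(event_prob, event_amp=500.0):
    """One x-ray frame: dark background plus sparse amplified events."""
    hits = rng.random(shape) < event_prob
    return dark_field() + hits * event_amp

# 1) Average and variance matrices from 60 dark fields (no x-ray exposure).
darks = np.stack([dark_field() for _ in range(60)])
avg_matrix = darks.mean(axis=0)                # per-pixel offset correction
var_matrix = darks.var(axis=0)                 # per-pixel noise estimate
threshold = 5.0 * np.sqrt(var_matrix)          # individual pixel thresholds

# 2) Photon counting: add binary hit maps over 300 frames per projection.
counts = np.zeros(shape)
for _ in range(300):
    counts += (exposure(0.01) - avg_matrix) > threshold
print(round(float(counts.mean()), 2))          # ~300 * 0.01 = ~3 per pixel
```

    A global threshold set for the noisiest pixels would either miss events on quiet pixels or flood noisy ones with false counts; the per-pixel array sidesteps both.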

  20. New light-amplifier-based detector designs for high spatial resolution and high sensitivity CBCT mammography and fluoroscopy

    PubMed Central

    Rudin, Stephen; Kuhls, Andrew T.; Yadava, Girijesh K.; Josan, Gaurav C.; Wu, Ye; Chityala, Ravishankar N.; Rangwala, Hussain S.; Ciprian Ionita, N.; Hoffmann, Kenneth R.; Bednarek, Daniel R.

    2011-01-01

    New cone-beam computed tomographic (CBCT) mammography system designs are presented in which the detectors provide high spatial resolution, high sensitivity, low noise, wide dynamic range, negligible lag and high frame rates, similar to the features required of high performance fluoroscopy detectors. The x-ray detectors consist of a phosphor coupled by a fiber-optic taper either to a high gain image light amplifier (LA) followed by a CCD camera, or to an electron multiplying CCD. When a square array of such detectors is used, a field-of-view (FOV) of up to 20 × 20 cm can be obtained with images having a pixel resolution of 100 µm or better. To achieve practical CBCT mammography scan times, 30 fps may be acquired with quantum limited (noise free) performance below 0.2 µR detector exposure per frame. Because of the flexible voltage-controlled gain of the LAs and EMCCDs, large detector dynamic range is also achievable. Features of such detector systems with arrays of either generation 2 (Gen 2) or 3 (Gen 3) LAs optically coupled to CCD cameras, or arrays of directly coupled EMCCDs, are compared. Quantum accounting analysis is done for a variety of such designs, where either the lowest number of information carriers off the LA photo-cathode or of electrons released in the EMCCDs per x-ray absorbed in the phosphor is large enough to imply no quantum sink for the design. These new LA- or EMCCD-based systems could lead to vastly improved CBCT mammography, ROI-CT, or fluoroscopy performance compared to systems using flat panels. PMID:21297904

  2. Quantitative evaluation of the accuracy and variance of individual pixels in a scientific CMOS (sCMOS) camera for computational imaging

    NASA Astrophysics Data System (ADS)

    Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith

    2017-02-01

    The "scientific" CMOS (sCMOS) camera architecture fundamentally differs from that of CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output generally passes through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform across pixels, although quantum efficiency may vary spatially. In CMOS cameras, the charge-to-voltage conversion is separate for each pixel and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor therefore includes pixel-to-pixel variability in read noise, electronic gain, offset and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensor to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of offset, dark current, read noise, linearity, photoresponse non-uniformity and variance for individual pixels of standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination conditions, from dark conditions through multiple low light levels between 20 and 1,000 photons/pixel per frame to higher light conditions. We further show that using pixel variance for flat-field correction leads to errors in cameras with good factory calibration.
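    The pixel-wise variability described here is commonly quantified with a photon-transfer calibration, in which the variance-versus-mean slope of each pixel recovers its conversion gain. A minimal sketch under assumed sensor parameters (not the manufacturer's published values):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    H, W, NFRAMES = 32, 32, 400  # assumed sensor crop and frame count

    # Per-pixel conversion gain, offset and read noise (illustrative values
    # standing in for the pixel-to-pixel variability the abstract measures).
    gain = rng.normal(0.45, 0.03, (H, W))             # ADU per photoelectron
    offset = rng.normal(100.0, 5.0, (H, W))           # ADU
    read_sd = rng.normal(2.0, 0.3, (H, W)).clip(0.5)  # ADU

    def expose(mean_e):
        """A stack of frames: shot noise plus per-pixel gain/offset/read noise."""
        e = rng.poisson(mean_e, (NFRAMES, H, W))
        return offset + gain * e + rng.normal(0.0, read_sd, (NFRAMES, H, W))

    # Photon transfer: per pixel, var = read_sd^2 + gain * (mean - offset),
    # so the slope of variance against mean recovers each pixel's gain.
    levels = [20, 100, 400, 1000]
    stacks = [expose(m) for m in levels]
    means = np.stack([s.mean(axis=0) for s in stacks])
    varis = np.stack([s.var(axis=0) for s in stacks])
    dm = means - means.mean(axis=0)
    slope = (dm * (varis - varis.mean(axis=0))).sum(axis=0) / (dm ** 2).sum(axis=0)
    gain_map = slope  # per-pixel gain estimate in ADU/e-
    ```

    The median of `gain_map` lands near the assumed 0.45 ADU/e-; the per-pixel scatter of the estimate illustrates why many frames per light level are needed for maps of individual pixels.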

  3. Single-photon sensitive fast ebCMOS camera system for multiple-target tracking of single fluorophores: application to nano-biophotonics

    NASA Astrophysics Data System (ADS)

    Cajgfinger, Thomas; Chabanat, Eric; Dominjon, Agnes; Doan, Quang T.; Guerin, Cyrille; Houles, Julien; Barbier, Remi

    2011-03-01

    Nano-biophotonics applications will benefit from new fluorescent microscopy methods based essentially on super-resolution techniques (beyond the diffraction limit) applied to large biological structures (membranes) at fast frame rates (1000 Hz). This trend pushes photon detectors toward the single-photon counting regime and camera acquisition systems toward real-time dynamic multiple-target tracing. The LUSIPHER prototype presented in this paper takes a different approach from Electron Multiplied CCD (EMCCD) technology and attempts to answer the stringent demands of the new nano-biophotonics imaging techniques. The electron bombarded CMOS (ebCMOS) device has the potential to meet this challenge, thanks to the linear gain provided by the accelerating high voltage of the photo-cathode, the possible ultra-fast frame rate of CMOS sensors and single-photon sensitivity. We produced a camera system based on a 640 kPixel ebCMOS together with its acquisition system. The proof of concept of single-photon-based tracking of multiple single emitters is the main result of this paper.

  4. Red to far-red multispectral fluorescence image fusion for detection of fecal contamination on apples

    USDA-ARS's Scientific Manuscript database

    This research developed a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet/blue LED excitation for detection of fecal contamination on Golden Delicious apples. Using a hyperspectral line-scan imaging system consisting of an EMCCD camera, spectrograph, an...

  5. Optical Demonstration of a Medical Imaging System with an EMCCD-Sensor Array for Use in a High Resolution Dynamic X-ray Imager

    PubMed Central

    Qu, Bin; Huang, Ying; Wang, Weiyuan; Sharma, Prateek; Kuhls-Gilcrist, Andrew T.; Cartwright, Alexander N.; Titus, Albert H.; Bednarek, Daniel R.; Rudin, Stephen

    2011-01-01

    Use of an extensible array of Electron Multiplying CCDs (EMCCDs) in medical x-ray imager applications was demonstrated for the first time. The large variable electronic gain (up to 2000) and small pixel size of EMCCDs provide effective suppression of readout noise relative to signal, as well as high resolution, enabling the development of an x-ray detector with far superior performance compared to conventional x-ray image intensifiers and flat panel detectors. We are developing arrays of EMCCDs to overcome their limited field of view (FOV). In this work we report on an array of two EMCCD sensors running simultaneously at a high frame rate and optically focused on a mammogram film showing calcified ducts. The work was conducted on an optical table with a pulsed LED bar used to provide uniform diffuse light onto the film to simulate x-ray projection images. The system can be set to run at up to 17.5 frames per second, or at even higher frame rates with binning. The integration time of the sensors can be adjusted from 1 ms to 1000 ms. Twelve-bit correlated double sampling AD converters were used to digitize the images, which were acquired by a National Instruments dual-channel Camera Link PC board in real time. A user-friendly interface was programmed in LabVIEW to save and display 2K × 1K pixel matrix digital images. The demonstration tiles a 2 × 1 array to acquire increased-FOV stationary images taken at different gains, and fluoroscopic-like videos recorded by scanning the mammogram simultaneously with both sensors. The results show high resolution and high dynamic range images stitched together with minimal adjustments needed. The EMCCD array design allows for expansion to an M × N array for an arbitrarily larger FOV while maintaining high resolution and large dynamic range. PMID:23505330

  6. 24/7 security system: 60-FPS color EMCCD camera with integral human recognition

    NASA Astrophysics Data System (ADS)

    Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.

    2007-04-01

    An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and in monochrome from quarter-moonlight down to overcast starlight illumination. Sixty frame per second operation and progressive scanning minimize motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms to detect, localize, and track targets and to reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is also achieved. For face and vehicle targets, the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.

  7. Cryogenic irradiation of an EMCCD for the WFIRST coronagraph: preliminary performance analysis

    NASA Astrophysics Data System (ADS)

    Bush, Nathan; Hall, David; Holland, Andrew; Burgon, Ross; Murray, Neil; Gow, Jason; Jordan, Douglas; Demers, Richard; Harding, Leon K.; Nemati, Bijan; Hoenk, Michael; Michaels, Darren; Peddada, Pavani

    2016-08-01

    The Wide Field Infra-Red Survey Telescope (WFIRST) is a NASA observatory scheduled to launch in the next decade that will settle essential questions in exoplanet science. The Wide Field Instrument (WFI) offers Hubble-quality imaging over a 0.28 square degree field of view and will gather NIR statistical data on exoplanets through gravitational microlensing. An on-board coronagraph will for the first time perform direct imaging and spectroscopic analysis of exoplanets with properties analogous to those within our own solar system, including cold Jupiters, mini Neptunes and potentially super Earths. The Coronagraph Instrument (CGI) will be required to operate with low signal flux for long integration times, demanding that all noise sources be kept to a minimum. The Electron Multiplication (EM)-CCD has been baselined for both the imaging and spectrograph cameras due to its ability to operate with sub-electron effective read noise at an appropriate multiplication gain setting. The presence of other noise sources, however, such as thermal dark signal and Clock Induced Charge (CIC), needs to be characterized and mitigated. In addition, operation within a space environment will subject the device to radiation damage that will degrade the Charge Transfer Efficiency (CTE) of the device throughout the mission lifetime. Irradiation at the nominal instrument operating temperature has the potential to provide the best estimate of the performance degradation that will be experienced in flight, since the final population of silicon defects has been shown to depend on the temperature at which the sensor is irradiated. Here we present initial findings from pre- and post-cryogenic-irradiation testing of the e2v CCD201-20 BI EMCCD sensor, baselined for the WFIRST coronagraph instrument. The motivation for irradiation at cryogenic temperatures is discussed with reference to previous investigations of a similar nature. The results are presented in context with those from a previous room temperature irradiation investigation performed on a CCD201-20 operated under the same conditions. A key conclusion is that the performance degradation measured for a given proton fluence differs measurably between the cryogenic case and the room temperature equivalent under the conditions of this study.

  8. Fast volumetric imaging with patterned illumination via digital micro-mirror device-based temporal focusing multiphoton microscopy.

    PubMed

    Chang, Chia-Yuan; Hu, Yvonne Yuling; Lin, Chun-Yu; Lin, Cheng-Han; Chang, Hsin-Yu; Tsai, Sheng-Feng; Lin, Tzu-Wei; Chen, Shean-Jen

    2016-05-01

    Temporal focusing multiphoton microscopy (TFMPM) has the advantage of area excitation in an axial confinement of only a few microns; hence, it can offer fast three-dimensional (3D) multiphoton imaging. Herein, fast volumetric imaging via a developed digital micromirror device (DMD)-based TFMPM has been realized through the synchronization of an electron multiplying charge-coupled device (EMCCD) with a dynamic piezoelectric stage for axial scanning. The volumetric imaging rate can achieve 30 volumes per second according to the EMCCD frame rate of more than 400 frames per second, which allows for the 3D Brownian motion of one-micron fluorescent beads to be spatially observed. Furthermore, it is demonstrated that the dynamic HiLo structural multiphoton microscope can reject background noise by way of the fast volumetric imaging with high-speed DMD patterned illumination.

  9. Optical registration of spaceborne low light remote sensing camera

    NASA Astrophysics Data System (ADS)

    Li, Chong-yang; Hao, Yan-hui; Xu, Peng-mei; Wang, Dong-jie; Ma, Li-na; Zhao, Ying-long

    2018-02-01

    To meet the high precision requirements of optical registration for a spaceborne low light remote sensing camera, dual-channel optical registration of a CCD and an EMCCD is achieved with a high magnification optical registration system. A system-integration optical registration scheme, together with an accuracy analysis, is proposed in this paper for a spaceborne low light remote sensing camera with short focal depth and wide field of view, including an analysis of the parallel misalignment of the CCD and of the registration accuracy. Actual registration results show clear imaging; the MTF and the registration accuracy meet the requirements, providing an important guarantee for obtaining high quality image data in orbit.

  10. Two approximations for the geometric model of signal amplification in an electron-multiplying charge-coupled device detector

    PubMed Central

    Chao, Jerry; Ram, Sripad; Ward, E. Sally; Ober, Raimund J.

    2014-01-01

    The extraction of information from images acquired under low light conditions represents a common task in diverse disciplines. In single molecule microscopy, for example, techniques for superresolution image reconstruction depend on the accurate estimation of the locations of individual particles from generally low light images. In order to estimate a quantity of interest with high accuracy, however, an appropriate model for the image data is needed. To this end, we previously introduced a data model for an image that is acquired using the electron-multiplying charge-coupled device (EMCCD) detector, a technology of choice for low light imaging due to its ability to amplify weak signals significantly above its readout noise floor. Specifically, we proposed the use of a geometrically multiplied branching process to model the EMCCD detector’s stochastic signal amplification. Geometric multiplication, however, can be computationally expensive and challenging to work with analytically. We therefore describe here two approximations for geometric multiplication that can be used instead. The high gain approximation is appropriate when a high level of signal amplification is used, a scenario which corresponds to the typical usage of an EMCCD detector. It is an accurate approximation that is computationally more efficient, and can be used to perform maximum likelihood estimation on EMCCD image data. In contrast, the Gaussian approximation is applicable at all levels of signal amplification, but is only accurate when the initial signal to be amplified is relatively large. As we demonstrate, it can importantly facilitate the analysis of an information-theoretic quantity called the noise coefficient. PMID:25075263
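    The geometric branching model and its high-gain approximation can be illustrated numerically. In the sketch below, the stage count and per-stage duplication probability are assumed, typical-looking values (not taken from the paper); the high-gain approximation for n initial electrons is drawn as a Gamma distribution with shape n and scale equal to the mean gain:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def em_register(n_in, stages=536, p=0.0145, trials=20000):
        """Geometrically multiplied branching process: every electron
        entering a stage is duplicated with probability p."""
        n = np.full(trials, n_in, dtype=np.int64)
        for _ in range(stages):
            n = n + rng.binomial(n, p)
        return n

    n_in = 5
    mean_gain = (1 + 0.0145) ** 536   # roughly 2.2e3 for these assumptions
    exact = em_register(n_in)
    # High-gain approximation: output for n_in initial electrons is
    # approximately Gamma(shape=n_in, scale=mean_gain).
    approx = rng.gamma(n_in, mean_gain, 20000)
    mean_ratio = exact.mean() / (n_in * mean_gain)
    std_ratio = exact.std() / approx.std()
    ```

    Both ratios come out close to one: the branching process and the Gamma approximation share the same mean and, at high gain, essentially the same excess-noise variance, which is what makes the approximation useful for maximum likelihood estimation.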

  11. Diffraction-limited lucky imaging with a 12" commercial telescope

    NASA Astrophysics Data System (ADS)

    Baptista, Brian J.

    2014-08-01

    Here we demonstrate a novel lucky imaging camera which is designed to produce diffraction-limited imaging using small telescopes similar to ones used by many academic institutions for outreach and/or student training. We present a design that uses a Meade 12" SCT paired with an Andor iXon fast readout EMCCD. The PSF of the telescope is matched to the pixel size of the EMCCD by adding a simple, custom-fabricated, intervening optical system. We demonstrate performance of the system by observing both astronomical and terrestrial targets. The astronomical application requires simpler data reconstruction techniques as compared to the terrestrial case. We compare different lucky imaging registration and reconstruction algorithms for use with this imager for both astronomical and terrestrial targets. We also demonstrate how this type of instrument would be useful for both undergraduate and graduate student training. As an instructional aide, the instrument can provide a hands-on approach for teaching instrument design, standard data reduction techniques, lucky imaging data processing, and high resolution imaging concepts.
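    A minimal version of lucky-imaging frame selection plus shift-and-add might look like the sketch below; ranking frames by peak brightness and registering on the brightest pixel are only the crudest of the algorithms such a comparison would cover, and all parameters are illustrative:

    ```python
    import numpy as np

    def lucky_stack(frames, keep=0.1):
        """Keep the sharpest fraction of frames (ranked here by peak
        brightness, a crude sharpness proxy) and shift-and-add them so
        each frame's brightest pixel lands at the array centre."""
        frames = np.asarray(frames, dtype=float)
        n_keep = max(1, int(keep * len(frames)))
        best = np.argsort(frames.max(axis=(1, 2)))[-n_keep:]
        h, w = frames.shape[1:]
        out = np.zeros((h, w))
        for i in best:
            py, px = np.unravel_index(np.argmax(frames[i]), (h, w))
            out += np.roll(frames[i], (h // 2 - py, w // 2 - px), axis=(0, 1))
        return out / n_keep

    # Demo: 50 synthetic short exposures of one star whose PSF wanders.
    rng = np.random.default_rng(4)
    yy, xx = np.mgrid[:32, :32]
    frames = []
    for _ in range(50):
        cy, cx = rng.integers(8, 24, size=2)
        amp = rng.uniform(5.0, 10.0)
        frames.append(amp * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)
                      + rng.normal(0.0, 0.2, (32, 32)))
    img = lucky_stack(frames, keep=0.2)  # re-centred stack, peak at (16, 16)
    ```

    Note that np.roll wraps pixels around the array edges, which is acceptable for a demonstration but not for science frames, where padding or cropping would be used instead.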

  12. On-sky performance of the tip-tilt correction system for GLAS using an EMCCD camera

    NASA Astrophysics Data System (ADS)

    Skvarč, Jure; Tulloch, Simon

    2008-07-01

    Adaptive optics systems based on laser guide stars still need a natural guide star (NGS) to correct for the image motion caused by the atmosphere and by imperfect telescope tracking. The ability to properly compensate for this motion using a faint NGS is critical to achieving large sky coverage. For the laser guide star system (GLAS) on the 4.2 m William Herschel Telescope we designed and tested, in the laboratory and on-sky, a tip-tilt correction system based on a PC running Linux and an EMCCD technology camera. The control software allows selection of different centroiding algorithms and loop control methods, as well as of the control parameters. Parameter analysis was performed using tip-tilt-only correction before the laser commissioning, and the selected sets of parameters were then used during commissioning of the laser guide star system. We have established the SNR of the guide star as a function of magnitude, depending on the image sampling frequency and on the dichroic used in the optical system, achieving a measurable improvement using full AO correction with NGSs in the magnitude range R = 16.5 to R = 18. A minimum SNR of about 10 was established to be necessary for a useful correction. The system was used to produce 0.16 arcsecond images in the H band using a bright NGS and laser correction during the GLAS commissioning runs.
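    A thresholded centre of gravity is among the simplest centroiding algorithms such a control loop can select; the sketch below illustrates the idea, with the spot parameters and reference position chosen purely for demonstration:

    ```python
    import numpy as np

    def centroid(img, threshold=0.0):
        """Thresholded centre-of-gravity position of the guide-star spot."""
        w = np.clip(img - threshold, 0.0, None)
        yy, xx = np.indices(img.shape)
        return (yy * w).sum() / w.sum(), (xx * w).sum() / w.sum()

    def tip_tilt_error(img, ref):
        """Error signal the loop would drive to zero: centroid minus the
        reference position of the guide star on the sensor."""
        cy, cx = centroid(img)
        return cy - ref[0], cx - ref[1]

    # Demo: a Gaussian spot displaced from the (16, 16) reference position.
    yy, xx = np.indices((32, 32))
    spot = np.exp(-((yy - 10.25) ** 2 + (xx - 20.5) ** 2) / 6.0)
    err = tip_tilt_error(spot, ref=(16.0, 16.0))
    ```

    In a real loop the error would be scaled by a loop gain and integrated into the tip-tilt mirror command; thresholding matters at low SNR, where background and read noise bias the plain centre of gravity.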

  13. Multiple-target tracking implementation in the ebCMOS camera system: the LUSIPHER prototype

    NASA Astrophysics Data System (ADS)

    Doan, Quang Tuyen; Barbier, Remi; Dominjon, Agnes; Cajgfinger, Thomas; Guerin, Cyrille

    2012-06-01

    The domain of low light imaging systems is progressing very fast, thanks to the evolution of detection and electron multiplication technologies such as the emCCD (electron multiplying CCD) and the ebCMOS (electron bombarded CMOS). We present an ebCMOS camera system that is able to track, every 2 ms, more than 2000 targets with a mean number of photons per target lower than two. The point light sources (targets) are spots generated by a microlens array (Shack-Hartmann) used in adaptive optics. The multiple-target tracking system designed and implemented on a rugged workstation is described. The results and the performance of the system on identification and tracking are presented and discussed.

  14. Towards the evidence of a purely spatial Einstein-Podolsky-Rosen paradox in images: measurement scheme and first experimental results

    NASA Astrophysics Data System (ADS)

    Devaux, F.; Mougin-Sisini, J.; Moreau, P. A.; Lantz, E.

    2012-07-01

    We propose a scheme to evidence the Einstein-Podolsky-Rosen (EPR) paradox for photons produced by spontaneous down-conversion, from measurements of purely spatial correlations of photon positions in both the near-field and the far-field. Experimentally, quantum correlations have been measured in the far-field of parametric fluorescence created in a type II BBO crystal. Imaging is performed in the photon counting regime with an electron-multiplying CCD (EMCCD) camera.

  15. Precision analysis for standard deviation measurements of immobile single fluorescent molecule images.

    PubMed

    DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M

    2010-03-29

    Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image as a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision over a large range of experimental parameters.
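    The 1/sqrt(N) dependence that such a precision expression implies can be checked with a quick Monte Carlo. The sketch below samples photon positions directly from a Gaussian PSF and ignores pixelation and background, both simplifications relative to the paper's analytical expression:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def std_measurement_scatter(n_photons, n_images=2000, psf_sigma=1.3):
        """Spread of the measured PSF standard deviation across repeated
        single-molecule images of n_photons detected photons each."""
        x = rng.normal(0.0, psf_sigma, (n_images, n_photons))
        return x.std(axis=1, ddof=1).std()

    lo, hi = std_measurement_scatter(100), std_measurement_scatter(10000)
    # Raising the photon count by 100x shrinks the scatter by roughly 10x.
    ```

    For Gaussian samples the scatter of the sample standard deviation is approximately psf_sigma / sqrt(2(N - 1)), which the two simulated values reproduce.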

  16. AO corrected satellite imaging from Mount Stromlo

    NASA Astrophysics Data System (ADS)

    Bennet, F.; Rigaut, F.; Price, I.; Herrald, N.; Ritchie, I.; Smith, C.

    2016-07-01

    The Research School of Astronomy and Astrophysics has been developing adaptive optics systems for space situational awareness. As part of this program we have developed satellite imaging using compact adaptive optics systems on small (1-2 m) telescopes such as those operated by Electro Optic Systems (EOS) from the Mount Stromlo Observatory. We have focused on making compact, simple, high performance AO systems using modern high-stroke, high-speed deformable mirrors and EMCCD cameras. We are able to track satellites down to magnitude 10 with a Strehl ratio in excess of 20% in median seeing.

  17. Improved sensitivity to fluorescence for cancer detection in wide-field image-guided neurosurgery

    PubMed Central

    Jermyn, Michael; Gosselin, Yoann; Valdes, Pablo A.; Sibai, Mira; Kolste, Kolbein; Mercier, Jeanne; Angulo, Leticia; Roberts, David W.; Paulsen, Keith D.; Petrecca, Kevin; Daigle, Olivier; Wilson, Brian C.; Leblond, Frederic

    2015-01-01

    In glioma surgery, Protoporphyrin IX (PpIX) fluorescence may identify residual tumor that could be resected while minimizing damage to normal brain. We demonstrate that improved sensitivity for wide-field spectroscopic fluorescence imaging is achieved with minimal disruption to the neurosurgical workflow using an electron-multiplying charge-coupled device (EMCCD) relative to a state-of-the-art CMOS system. In phantom experiments the EMCCD system can detect at least two orders-of-magnitude lower PpIX. Ex vivo tissue imaging on a rat glioma model demonstrates improved fluorescence contrast compared with neurosurgical fluorescence microscope technology, and the fluorescence detection is confirmed with measurements from a clinically-validated spectroscopic probe. Greater PpIX sensitivity in wide-field fluorescence imaging may improve the residual tumor detection during surgery with consequent impact on survival. PMID:26713218

  18. TU-F-CAMPUS-I-05: Investigation of An EMCCD Detector with Variable Gain in a Micro-CT System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnakumar, S Bysani; Ionita, C; Rudin, S

    Purpose: To investigate the performance of a newly built Electron Multiplying Charge-Coupled Device (EMCCD) based micro-CT system, with variable detector gain, using a phantom containing contrast agent at different concentrations. Methods: We built a micro-CT system with an EMCCD having 8 micron pixels and on-chip variable gain. We tested the system using a phantom containing five tubes filled with different iodine contrast solutions (30% to 70%). First, we scanned the phantom using various x-ray exposure values at 40 kVp and constant detector gain. Next, for the same tube currents, the detector gain was increased to keep the air value of the projection image constant. A standard FDK algorithm was used to reconstruct the data. Performance was analyzed by comparing the signal-to-noise ratio (SNR) measurements for increased gain with those for the low constant gain at each exposure. Results: The SNR of the data reconstructed at high detector gain was always greater than that of the low gain data for all x-ray settings and all iodine features. The largest increases were observed for low contrast features, at 30% iodine concentration, where the SNR improvement approached 2. Conclusion: One of the first implementations of an EMCCD based micro-CT system was presented and used to image a phantom with various iodine solution concentrations. The analysis of the reconstructed volumes showed a significant improvement in SNR, especially for low contrast features. The unique on-chip gain feature is a substantial benefit, allowing the use of the system at very low x-ray exposures per frame. Partial support: NIH grant R01EB002873 and Toshiba Medical Systems Corp.

  19. Single Photon Counting Detectors for Low Light Level Imaging Applications

    NASA Astrophysics Data System (ADS)

    Kolb, Kimberly

    2015-10-01

    This dissertation presents the current state-of-the-art of semiconductor-based photon counting detector technologies. HgCdTe linear-mode avalanche photodiodes (LM-APDs), silicon Geiger-mode avalanche photodiodes (GM-APDs), and electron-multiplying CCDs (EMCCDs) are compared via their present and projected future performance in various astronomy applications. LM-APDs are studied in theory, based on work done at the University of Hawaii. EMCCDs are studied in theory and experimentally, with a device at NASA's Jet Propulsion Laboratory. The emphasis of the research is on GM-APD imaging arrays, developed at MIT Lincoln Laboratory and tested at the RIT Center for Detectors. The GM-APD research includes a theoretical analysis of SNR and various performance metrics, including dark count rate, afterpulsing, photon detection efficiency, and intrapixel sensitivity. The effects of radiation damage on the GM-APD were also characterized by introducing a cumulative dose of 50 krad(Si) via 60 MeV protons. Extensive development of Monte Carlo simulations and practical observation simulations was completed, including simulated astronomical imaging and adaptive optics wavefront sensing. Based on theoretical models and experimental testing, both the current state-of-the-art performance and the projected future performance of each detector are compared for various applications. LM-APD performance is currently not competitive with the other photon counting technologies, so LM-APDs are left out of the application-based comparisons. In the current state-of-the-art, EMCCDs in photon counting mode out-perform GM-APDs for long exposure scenarios, though GM-APDs are better for short exposure scenarios (fast readout) due to clock-induced charge (CIC) in EMCCDs. In the long term, small improvements in GM-APD dark current will make them superior in both long and short exposure scenarios at extremely low flux. The efficiency of GM-APDs will likely always be less than that of EMCCDs, however, which is particularly disadvantageous at moderate to high flux rates where dark noise and CIC are insignificant noise sources. Research into decreasing the dark count rate of GM-APDs will lead to the development of imaging arrays that are competitive for low light level imaging and spectroscopy applications in the near future.

  20. Luminescence evolution from alumina ceramic surface before flashover under direct and alternating current voltage in vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Guo-Qiang; Wang, Yi-Bo; Song, Bai-Peng

    2016-06-15

    The luminescence evolution phenomena from an alumina ceramic surface in vacuum under high direct and alternating current voltage are reported, with the voltage covering a large range from far below to close to the flashover voltage. The time-resolved and spatially distributed behaviors are examined by a photon counting system and an electron-multiplying charge-coupled device (EMCCD) together with a digital camera, respectively. The luminescence before flashover exhibits two stages as the voltage increases: under a relatively low voltage (Stage A), the luminescence is ascribed to radiative recombination of hetero-charges injected into the sample surface layer by the Schottky effect; under a higher voltage (Stage B), a stable secondary electron emission process, resulting from Fowler-Nordheim emission at the cathode triple junction (CTJ), is responsible for the luminescence. Spectrum analysis implies that inner secondary electrons generated within the surface layer of the alumina during the SSEE process also participate in the luminescence of Stage B. A comprehensive interpretation of the flashover process is formulated, which might promote a better understanding of the flashover issue in vacuum.

  1. Luminescence evolution from alumina ceramic surface before flashover under direct and alternating current voltage in vacuum

    NASA Astrophysics Data System (ADS)

    Su, Guo-Qiang; Wang, Yi-Bo; Song, Bai-Peng; Mu, Hai-Bao; Zhang, Guan-Jun; Li, Feng; Wang, Meng

    2016-06-01

The luminescence evolution phenomena from an alumina ceramic surface in vacuum under high direct- and alternating-current voltage are reported, with the voltage covering a large range from far below to close to the flashover voltage. The time-resolved and spatially distributed behaviors are examined by a photon-counting system and an electron-multiplying charge-coupled device (EMCCD) together with a digital camera, respectively. The luminescence before flashover exhibits two stages as the voltage increases: under a relatively low voltage (Stage A), the luminescence is ascribed to radiative recombination of hetero-charges injected into the sample surface layer by the Schottky effect; under a higher voltage (Stage B), a stable secondary electron emission (SSEE) process, resulting from Fowler-Nordheim emission at the cathode triple junction (CTJ), is responsible for the luminescence. Spectrum analysis implies that inner secondary electrons generated within the surface layer of the alumina during the SSEE process also participate in the luminescence of Stage B. A comprehensive interpretation of the flashover process is formulated, which might promote a better understanding of the flashover issue in vacuum.

  2. High sensitivity optical molecular imaging system

    NASA Astrophysics Data System (ADS)

    An, Yu; Yuan, Gao; Huang, Chao; Jiang, Shixin; Zhang, Peng; Wang, Kun; Tian, Jie

    2018-02-01

Optical Molecular Imaging (OMI) has the advantages of high sensitivity, low cost and ease of use. By labeling the regions of interest with fluorescent or bioluminescent probes, OMI can noninvasively obtain the distribution of the probes in vivo, which plays a key role in cancer research, pharmacokinetics and other biological studies. In preclinical and clinical applications, imaging depth, resolution and sensitivity are the key factors for researchers using OMI. In this paper, we report a high-sensitivity optical molecular imaging system developed by our group, which improves the imaging depth in phantom to nearly 5 cm while maintaining high resolution at 2 cm depth and high image sensitivity. To validate the performance of the system, specially designed phantom experiments and a weak-light detection experiment were implemented. The results show that, by combining a high-performance electron-multiplying charge-coupled device (EMCCD) camera, a precisely designed light-path system and efficient imaging techniques, our OMI system can simultaneously collect the light signals generated by fluorescence molecular imaging, bioluminescence imaging, Cherenkov luminescence and other optical imaging modalities, and observe the internal distribution of light-emitting agents quickly and accurately.

  3. Fisher information matrix for branching processes with application to electron-multiplying charge-coupled devices

    PubMed Central

    Chao, Jerry; Ward, E. Sally; Ober, Raimund J.

    2012-01-01

    The high quantum efficiency of the charge-coupled device (CCD) has rendered it the imaging technology of choice in diverse applications. However, under extremely low light conditions where few photons are detected from the imaged object, the CCD becomes unsuitable as its readout noise can easily overwhelm the weak signal. An intended solution to this problem is the electron-multiplying charge-coupled device (EMCCD), which stochastically amplifies the acquired signal to drown out the readout noise. Here, we develop the theory for calculating the Fisher information content of the amplified signal, which is modeled as the output of a branching process. Specifically, Fisher information expressions are obtained for a general and a geometric model of amplification, as well as for two approximations of the amplified signal. All expressions pertain to the important scenario of a Poisson-distributed initial signal, which is characteristic of physical processes such as photon detection. To facilitate the investigation of different data models, a “noise coefficient” is introduced which allows the analysis and comparison of Fisher information via a scalar quantity. We apply our results to the problem of estimating the location of a point source from its image, as observed through an optical microscope and detected by an EMCCD. PMID:23049166
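The stochastic amplification analyzed in this abstract admits a quick numerical check. Below is a minimal Monte Carlo sketch, my own illustration rather than the paper's model, assuming the standard high-gain approximation in which n input electrons produce a Gamma(n, gain)-distributed output; the excess noise factor of about 2 that it recovers is why a high-gain EMCCD behaves as if its quantum efficiency were halved.

```python
import numpy as np

rng = np.random.default_rng(0)
nu, gain = 5.0, 1000.0          # mean input signal (e-) and EM gain (illustrative)
n = rng.poisson(nu, 200_000)    # Poisson-distributed initial signal

# High-gain approximation: n input electrons -> Gamma(shape=n, scale=gain) output.
out = np.where(n > 0, rng.gamma(np.maximum(n, 1), gain), 0.0)

# Excess noise factor F^2 = var(out) / (gain^2 * var(n)); theory gives F^2 -> 2.
F2 = out.var() / (gain**2 * n.var())
print(round(F2, 2))   # close to 2
```

The factor of 2 in the output variance is exactly the information penalty the paper's "noise coefficient" quantifies for the geometric amplification model.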

  4. MAPLE: reflected light from exoplanets with a 50-cm diameter stratospheric balloon telescope

    NASA Astrophysics Data System (ADS)

Marois, Christian; Bradley, Colin; Pazder, John; Nash, Reston; Metchev, Stanimir; Grandmont, Frédéric; Maire, Anne-Lise; Belikov, Ruslan; Macintosh, Bruce; Currie, Thayne; Galicher, Raphaël; Marchis, Franck; Mawet, Dimitri; Serabyn, Eugene; Steinbring, Eric

    2014-08-01

Detecting light reflected from exoplanets by direct imaging is the next major milestone in the search for, and characterization of, an Earth twin. Due to the high risk and cost associated with satellites and the limitations imposed by the atmosphere on ground-based instruments, we propose a bottom-up approach to reach that ultimate goal with an endeavor named MAPLE. MAPLE's first project is a stratospheric balloon experiment called MAPLE-50. MAPLE-50 consists of a 50 cm diameter off-axis telescope working in the near-UV. The advantages of the near-UV are a small inner working angle and an improved contrast for blue planets. Along with a sophisticated tracking system to mitigate balloon pointing errors, MAPLE-50 will have a deformable mirror, a vortex coronagraph, and a self-coherent camera as a focal-plane wavefront sensor, which employs an Electron Multiplying CCD (EMCCD) as the science detector. The EMCCD will allow photon counting at kHz rates, thereby closely tracking telescope and instrument-bench-induced aberrations as they evolve with time. In addition, the EMCCD will acquire the science data with almost no read-noise penalty. To mitigate risk and lower costs, MAPLE-50 will at first have a single optical channel with a minimum of moving parts. The goal is to reach a few times 10⁹ contrast in 25 h worth of flying time, allowing direct detection of Jovians around the nearest stars. Once the 50 cm infrastructure has been validated, the telescope diameter will be increased to 1.5 m (MAPLE-150) to reach 10¹⁰ contrast and have the capability to image another Earth.

  5. Rainbow correlation imaging with macroscopic twin beam

    NASA Astrophysics Data System (ADS)

    Allevi, Alessia; Bondani, Maria

    2017-06-01

    We present the implementation of a correlation-imaging protocol that exploits both the spatial and spectral correlations of macroscopic twin-beam states generated by parametric downconversion. In particular, the spectral resolution of an imaging spectrometer coupled to an EMCCD camera is used in a proof-of-principle experiment to encrypt and decrypt a simple code to be transmitted between two parties. In order to optimize the trade-off between visibility and resolution, we provide the characterization of the correlation images as a function of the spatio-spectral properties of twin beams generated at different pump power values.

  6. Soft X-ray radiation damage in EM-CCDs used for Resonant Inelastic X-ray Scattering

    NASA Astrophysics Data System (ADS)

    Gopinath, D.; Soman, M.; Holland, A.; Keelan, J.; Hall, D.; Holland, K.; Colebrook, D.

    2018-02-01

Advancements in synchrotron and free-electron laser facilities mean that X-ray beams with higher intensity than ever before are being created. The high brilliance of the X-ray beam, as well as the ability to use a range of X-ray energies, means that they can be used in a wide range of applications. One such application is Resonant Inelastic X-ray Scattering (RIXS). RIXS uses the intense and tuneable X-ray beams to investigate the electronic structure of materials. The photons are focused onto a sample material and the scattered X-ray beam is diffracted off a high-resolution grating to disperse the X-ray energies onto a position-sensitive detector. Whilst several factors affect the total system energy resolution, the performance of RIXS experiments can be limited by the spatial resolution of the detector used. Electron-Multiplying CCDs (EM-CCDs) at high gain, in combination with centroiding of the photon charge cloud across several detector pixels, can reach sub-pixel spatial resolution of 2-3 μm. X-ray radiation can damage CCDs through ionisation, resulting in increased dark current and/or a shift in flat-band voltage. Understanding the effect of radiation damage on EM-CCDs is important in order to predict lifetime as well as the change in performance over time. Two CCD97s were taken to PTB at BESSY II and irradiated with large doses of soft X-rays in order to probe the front and back surfaces of the device. The dark current was shown to decay over time with two distinct exponential components. This paper will discuss the use of EM-CCDs for readout of RIXS spectrometers and limitations on spatial resolution, together with any limitations on instrument use which may arise from X-ray-induced radiation damage.
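The sub-pixel centroiding mentioned above can be illustrated with a short sketch. This is a hypothetical, simplified version (a plain centre-of-mass over a small patch around one photon event, with a crude local background subtraction); a production RIXS pipeline would use more careful event selection and background modelling.

```python
import numpy as np

def event_centroid(patch):
    """Centre of mass of a small pixel patch (e.g. 3x3) around one X-ray event."""
    patch = np.asarray(patch, dtype=float)
    patch = patch - patch.min()          # crude local background removal
    ys, xs = np.indices(patch.shape)
    total = patch.sum()
    return float((patch * ys).sum() / total), float((patch * xs).sum() / total)

# A symmetric charge cloud centres on the middle pixel (row 1, column 1):
print(event_centroid([[0, 1, 0],
                      [1, 4, 1],
                      [0, 1, 0]]))   # (1.0, 1.0)
```

Because the charge cloud from a soft X-ray spreads over several pixels, the weighted mean locates the event to a fraction of a pixel, which is what pushes the effective spatial resolution down to a few microns.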

  7. Quantum Error Correction with a Globally-Coupled Array of Neutral Atom Qubits

    DTIC Science & Technology

    2013-02-01

…magneto-optical trap) located at the center of the science cell. Fluorescence… Bottle beam trap; GBA: Gaussian beam array; EMCCD: electron-multiplying charge-coupled device; microsec.: microsecond; MOT: magneto-optical trap; QEC: quantum error correction; qubit: quantum bit… developed and implemented an array of neutral atom qubits in optical traps for studies of quantum error correction. At the end of the three-year

  8. Measurement of the deuterium Balmer series line emission on EAST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, C. R.; Xu, Z.; Jin, Z.

Volume recombination plays an important role in plasma detachment for magnetically confined fusion devices. High-quantum-number states of the Balmer series of deuterium are used to study recombination. On EAST (Experimental Advanced Superconducting Tokamak), two visible spectroscopic measurements are applied to the upper and lower divertors, with 13 channels each. Both systems are coupled with a Princeton Instruments ProEM 1024B EMCCD camera: one is mounted on an Acton SP2750 spectrometer, which has a high spectral resolution of ∼0.0049 nm with a 2400 gr/mm grating to measure the Dα (Hα) spectral line and a 1200 gr/mm grating to measure deuterium molecular Fulcher band emissions, and the other is mounted on an IsoPlane SCT320 using a 600 gr/mm grating to measure high-n Balmer series emission lines, allowing us to study volume recombination on EAST and to obtain the related line-averaged plasma parameters (Te, ne) during EAST detached phases. This paper will present the details of the measurements and the characteristics of deuterium Balmer series line emissions during density ramp-up L-mode USN plasma on EAST.

  9. Effect of time discretization of the imaging process on the accuracy of trajectory estimation in fluorescence microscopy

    PubMed Central

    Wong, Yau; Chao, Jerry; Lin, Zhiping; Ober, Raimund J.

    2014-01-01

    In fluorescence microscopy, high-speed imaging is often necessary for the proper visualization and analysis of fast subcellular dynamics. Here, we examine how the speed of image acquisition affects the accuracy with which parameters such as the starting position and speed of a microscopic non-stationary fluorescent object can be estimated from the resulting image sequence. Specifically, we use a Fisher information-based performance bound to investigate the detector-dependent effect of frame rate on the accuracy of parameter estimation. We demonstrate that when a charge-coupled device detector is used, the estimation accuracy deteriorates as the frame rate increases beyond a point where the detector’s readout noise begins to overwhelm the low number of photons detected in each frame. In contrast, we show that when an electron-multiplying charge-coupled device (EMCCD) detector is used, the estimation accuracy improves with increasing frame rate. In fact, at high frame rates where the low number of photons detected in each frame renders the fluorescent object difficult to detect visually, imaging with an EMCCD detector represents a natural implementation of the Ultrahigh Accuracy Imaging Modality, and enables estimation with an accuracy approaching that which is attainable only when a hypothetical noiseless detector is used. PMID:25321248
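The detector-dependent trade-off described in this abstract can be sketched with a toy signal-to-noise model (illustrative numbers of my own; the paper itself uses a Fisher-information bound rather than this simple SNR): splitting a fixed photon budget N over K frames adds K read-noise penalties for a conventional CCD, whereas a high-gain EMCCD pays a roughly constant factor-of-two excess-noise penalty instead.

```python
import math

def snr_ccd(N, K, sigma_r=6.0):
    """Total SNR for N photons split over K frames, read noise sigma_r e- per frame."""
    return N / math.sqrt(N + K * sigma_r**2)

def snr_emccd(N, K):
    """EM gain makes read noise negligible but doubles the shot-noise variance."""
    return N / math.sqrt(2 * N)

N = 1000
# CCD SNR degrades as the frame rate (K) grows; EMCCD SNR stays flat.
print([round(snr_ccd(N, K), 1) for K in (1, 10, 100)])
print(round(snr_emccd(N, 100), 1))
```

At low K the CCD wins (no excess noise); once the per-frame signal drops toward the read-noise floor the EMCCD takes over, which mirrors the paper's conclusion that EMCCD estimation accuracy keeps improving with frame rate.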

  10. EMCCD calibration for astronomical imaging: Wide FastCam at the Telescopio Carlos Sánchez

    NASA Astrophysics Data System (ADS)

    Velasco, S.; Oscoz, A.; López, R. L.; Puga, M.; Pérez-Garrido, A.; Pallé, E.; Ricci, D.; Ayuso, I.; Hernández-Sánchez, M.; Vázquez-Martín, S.; Protasio, C.; Béjar, V.; Truant, N.

    2017-03-01

The evident benefits of Electron Multiplying CCDs (EMCCDs), namely speed, high sensitivity, low noise and the capability of detecting single-photon events whilst maintaining high quantum efficiency, are bringing these kinds of detectors to many state-of-the-art astronomical instruments (Velasco et al. 2016; Oscoz et al. 2008). EMCCDs are the perfect answer to the need for great sensitivity levels as they are not limited by the readout noise of the output amplifier, whereas conventional CCDs are, even when operated at high readout frame rates. Here we present a quantitative on-sky method to calibrate EMCCD detectors dedicated to astronomical imaging, developed during the commissioning process (Velasco et al. 2016) and first observations (Ricci et al. 2016, in prep.) with Wide FastCam (Marga et al. 2014) at the Telescopio Carlos Sánchez (TCS) at the Observatorio del Teide.
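One quantitative ingredient of any such calibration can be sketched in closed form. In photon-counting mode a threshold is set a few read-noise sigmas above the bias, and under the commonly assumed exponential single-electron gain distribution the fraction of real events surviving the cut follows directly. The numbers and the 5-sigma choice below are illustrative assumptions, not values from the paper.

```python
import math

def pc_threshold(sigma_read, em_gain, n_sigma=5.0):
    """Return the counting threshold (e-) and the fraction of single-photon
    events detected, assuming an exponential output PDF with mean em_gain."""
    threshold = n_sigma * sigma_read
    detected = math.exp(-threshold / em_gain)
    return threshold, detected

t, frac = pc_threshold(sigma_read=50.0, em_gain=1000.0)
print(t, round(frac, 3))   # threshold in electrons, fraction of events kept
```

Raising the gain-to-read-noise ratio pushes the detected fraction toward unity, which is why high EM gain is essential for efficient photon counting.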

  11. Development and flight testing of UV optimized Photon Counting CCDs

    NASA Astrophysics Data System (ADS)

    Hamden, Erika T.

    2018-06-01

I will discuss the latest results from the Hamden UV/Vis Detector Lab and our ongoing work using a UV-optimized EMCCD in flight. Our lab is currently testing the efficiency and performance of delta-doped, anti-reflection-coated EMCCDs, in collaboration with JPL. The lab has been set up to test quantum efficiency, dark current, clock-induced charge, and read noise. I will describe our improvements to our circuit boards for lower noise, updates from a new, more flexible Nüvü controller, and the integration of an EMCCD in the FIREBall-2 UV spectrograph. I will also briefly describe future plans to conduct radiation testing on delta-doped EMCCDs (in both warm, unbiased and cold, biased configurations) this summer, and longer-term plans for testing newer photon-counting CCDs as I move the HUVD Lab to the University of Arizona in the fall of 2018.

  12. Measurements of Aitken Visual Binary Stars: 2017 Report

    NASA Astrophysics Data System (ADS)

    Sérot, J.

    2018-07-01

This paper is a continuation of that published in [1]. It presents measurements of 136 visual binary stars discovered by R. G. Aitken and listed in the WDS catalog. These measurements were obtained between January and December 2017 with an 11" reflector telescope and two types of cameras: an ASI 290MM CMOS-based camera and a Raptor Kite EM-CCD. Binaries with a secondary component up to magnitude 15 and separations between 0.4 and 5 arcsec have been measured. The selection also includes pairs exhibiting a large difference in magnitude between components (up to Δm = 6). Measurements were mostly obtained using the auto-correlation technique described in [1] but also, and this is an innovative aspect of the paper, using the so-called bispectrum reduction technique supported by the latest version of the SpeckleToolBox software. As in [1], a significant part of the observed pairs had not been observed in the previous decades and show significant motion compared to their last measurement.

  13. The faint intergalactic-medium red-shifted emission balloon: future UV observations with EMCCDs

    NASA Astrophysics Data System (ADS)

    Kyne, Gillian; Hamden, Erika T.; Lingner, Nicole; Morrissey, Patrick; Nikzad, Shouleh; Martin, D. Christopher

    2016-08-01

We present the latest developments in our joint NASA/CNES suborbital project. This project is a balloon-borne UV multi-object spectrograph, designed to detect faint emission from the circumgalactic medium (CGM) around low-redshift galaxies. One major change from FIREBall-1 has been the use of a delta-doped Electron Multiplying CCD (EMCCD). EMCCDs can be used in photon-counting (PC) mode to achieve extremely low readout noise (< 1 e-). Our testing initially focused on reducing clock-induced charge (CIC) through wave shaping and well-depth optimisation with the CCD Controller for Counting Photons (CCCP) from Nüvü. This optimisation also includes methods for reducing dark current, via cooling and substrate voltage adjustment. We present results of laboratory noise measurements, including dark current. Furthermore, we briefly present some initial results from our first set of on-sky observations using a delta-doped EMCCD on the 200-inch telescope at Palomar with the Palomar Cosmic Web Imager (PCWI).

  14. Experimental observation of spatial quantum noise reduction below the standard quantum limit with bright twin beams of light

    NASA Astrophysics Data System (ADS)

    Kumar, Ashok; Nunley, Hayden; Marino, Alberto

    2016-05-01

Quantum noise reduction (QNR) below the standard quantum limit (SQL) has been a subject of interest for the past two to three decades due to its wide range of applications in quantum metrology and quantum information processing. To date, most of the attention has focused on the study of QNR in the temporal domain. However, many areas in quantum optics, specifically in quantum imaging, could benefit from QNR not only in the temporal domain but also in the spatial domain. With the use of a high-quantum-efficiency electron-multiplying charge-coupled device (EMCCD) camera, we have observed spatial QNR below the SQL in bright narrowband twin light beams generated through a four-wave mixing (FWM) process in hot rubidium atoms. Owing to momentum conservation in this process, the twin beams are momentum-correlated. This leads to spatial quantum correlations and spatial QNR. Our preliminary results show a spatial QNR of over 2 dB with respect to the SQL. Unlike previous results on spatial QNR with faint and broadband photon pairs from parametric down-conversion (PDC), we demonstrate spatial QNR with spectrally and spatially narrowband bright light beams. The results obtained will be useful for atom-light-interaction-based quantum protocols and quantum imaging. Work supported by the W.M. Keck Foundation.
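The quoted squeezing figure can be related to measured variances with a one-line conversion. A minimal sketch with illustrative numbers (not the experiment's data): for ideal twin beams detected with overall efficiency η, the intensity-difference variance is reduced to (1 − η) of the standard quantum limit, which converts to dB as below.

```python
import math

def qnr_db(var_measured, var_sql):
    """Quantum noise reduction in dB relative to the standard quantum limit."""
    return -10.0 * math.log10(var_measured / var_sql)

eta = 0.6                               # illustrative overall detection efficiency
print(round(qnr_db(1.0 - eta, 1.0), 2))  # dB of squeezing for this eta
```

The conversion makes clear why detection efficiency (including the EMCCD's QE) directly caps the observable squeezing.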

  15. VizieR Online Data Catalog: The Hα rotation curve of M33 (Kam+, 2015)

    NASA Astrophysics Data System (ADS)

    Kam, Z. S.; Carignan, C.; Chemin, L.; Amram, P.; Epinat, B.

    2017-11-01

This study presents Fabry-Perot mapping of M33 obtained at the Observatoire du mont Mégantic (OMM). Relano et al. (2013, J/A+A/552/A140) have studied the spectral energy distribution of the H II regions of M33, and the star formation rate and star formation efficiency have been investigated by Gardan et al. (2007A&A...473...91G) and Kramer et al. (2011EAS....52..107K). More than 1000 H II regions can be resolved by the Hα observations; Courtes et al. (1987A&A...174...28C) gave a catalogue of 748 H II regions. Observing those regions allows us to derive the ionized-gas (optical) kinematics of M33. Determination of the M33 kinematics with a spatial resolution ≲3 arcsec (~12 pc) using the Hα velocity field and the derivation of an accurate RC in the inner parts are the main goals of this paper. The observations took place at the 1.6-m telescope of the OMM, Quebec, in 2012 September. A scanning FP etalon interferometer was used during the observations with the iXon 888, a commercial Andor EM-CCD camera of 1024 × 1024 pixels. (1 data file).

  16. Development of new photon-counting detectors for single-molecule fluorescence microscopy.

    PubMed

    Michalet, X; Colyer, R A; Scalia, G; Ingargiola, A; Lin, R; Millaud, J E; Weiss, S; Siegmund, Oswald H W; Tremsin, Anton S; Vallerga, John V; Cheng, A; Levi, M; Aharoni, D; Arisaka, K; Villa, F; Guerrieri, F; Panzeri, F; Rech, I; Gulinatti, A; Zappa, F; Ghioni, M; Cova, S

    2013-02-05

Two optical configurations are commonly used in single-molecule fluorescence microscopy: point-like excitation and detection to study freely diffusing molecules, and wide field illumination and detection to study surface immobilized or slowly diffusing molecules. Both approaches have common features, but also differ in significant aspects. In particular, they use different detectors, which share some requirements but also have major technical differences. Currently, two types of detectors best fulfil the needs of each approach: single-photon-counting avalanche diodes (SPADs) for point-like detection, and electron-multiplying charge-coupled devices (EMCCDs) for wide field detection. However, there is room for improvements in both cases. The first configuration suffers from low throughput owing to the analysis of data from a single location. The second, on the other hand, is limited to relatively low frame rates and loses the benefit of single-photon-counting approaches. During the past few years, new developments in point-like and wide field detectors have started addressing some of these issues. Here, we describe our recent progress towards increasing the throughput of single-molecule fluorescence spectroscopy in solution using parallel arrays of SPADs. We also discuss our development of large area photon-counting cameras achieving subnanosecond resolution for fluorescence lifetime imaging applications at the single-molecule level.

  17. Non-contact continuous-wave diffuse optical tomographic system to capture vascular dynamics in the foot

    NASA Astrophysics Data System (ADS)

    Hoi, Jennifer W.; Kim, Hyun K.; Khalil, Michael A.; Fong, Christopher J.; Marone, Alessandro; Shrikhande, Gautam; Hielscher, Andreas H.

    2015-03-01

Dynamic optical tomographic imaging has shown promise in diagnosing and monitoring peripheral arterial disease (PAD), which affects 8 to 12 million people in the United States. PAD is the narrowing of the arteries that supply blood to the lower extremities. Prolonged reduced blood flow to the foot leads to ulcers and gangrene, which makes placement of optical fibers for contact-based optical tomography systems difficult and cumbersome. Since many diabetic PAD patients have foot wounds, a non-contact interface is highly desirable. We present a novel non-contact dynamic continuous-wave optical tomographic imaging system that images the vasculature in the foot for evaluating PAD. The system images at up to 1 Hz by delivering two wavelengths of light to the top of the foot at up to 20 source positions through collimated source fibers. Transmitted light is collected with an electron-multiplying charge-coupled device (EMCCD) camera. We demonstrate that the system can resolve absorbers at various locations in a phantom study and show the system's first clinical 3D images of total hemoglobin changes in the foot during venous occlusion at the thigh. Our initial results indicate that this system is effective in capturing the vascular dynamics within the foot and can be used to diagnose and monitor treatment of PAD in diabetic patients.

  18. Development of new photon-counting detectors for single-molecule fluorescence microscopy

    PubMed Central

    Michalet, X.; Colyer, R. A.; Scalia, G.; Ingargiola, A.; Lin, R.; Millaud, J. E.; Weiss, S.; Siegmund, Oswald H. W.; Tremsin, Anton S.; Vallerga, John V.; Cheng, A.; Levi, M.; Aharoni, D.; Arisaka, K.; Villa, F.; Guerrieri, F.; Panzeri, F.; Rech, I.; Gulinatti, A.; Zappa, F.; Ghioni, M.; Cova, S.

    2013-01-01

Two optical configurations are commonly used in single-molecule fluorescence microscopy: point-like excitation and detection to study freely diffusing molecules, and wide field illumination and detection to study surface immobilized or slowly diffusing molecules. Both approaches have common features, but also differ in significant aspects. In particular, they use different detectors, which share some requirements but also have major technical differences. Currently, two types of detectors best fulfil the needs of each approach: single-photon-counting avalanche diodes (SPADs) for point-like detection, and electron-multiplying charge-coupled devices (EMCCDs) for wide field detection. However, there is room for improvements in both cases. The first configuration suffers from low throughput owing to the analysis of data from a single location. The second, on the other hand, is limited to relatively low frame rates and loses the benefit of single-photon-counting approaches. During the past few years, new developments in point-like and wide field detectors have started addressing some of these issues. Here, we describe our recent progress towards increasing the throughput of single-molecule fluorescence spectroscopy in solution using parallel arrays of SPADs. We also discuss our development of large area photon-counting cameras achieving subnanosecond resolution for fluorescence lifetime imaging applications at the single-molecule level. PMID:23267185

  19. Development of online line-scan imaging system for chicken inspection and differentiation

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Chan, Diane E.; Chao, Kuanglin; Chen, Yud-Ren; Kim, Moon S.

    2006-10-01

    An online line-scan imaging system was developed for differentiation of wholesome and systemically diseased chickens. The hyperspectral imaging system used in this research can be directly converted to multispectral operation and would provide the ideal implementation of essential features for data-efficient high-speed multispectral classification algorithms. The imaging system consisted of an electron-multiplying charge-coupled-device (EMCCD) camera and an imaging spectrograph for line-scan images. The system scanned the surfaces of chicken carcasses on an eviscerating line at a poultry processing plant in December 2005. A method was created to recognize birds entering and exiting the field of view, and to locate a Region of Interest on the chicken images from which useful spectra were extracted for analysis. From analysis of the difference spectra between wholesome and systemically diseased chickens, four wavelengths of 468 nm, 501 nm, 582 nm and 629 nm were selected as key wavelengths for differentiation. The method of locating the Region of Interest will also have practical application in multispectral operation of the line-scan imaging system for online chicken inspection. This line-scan imaging system makes possible the implementation of multispectral inspection using the key wavelengths determined in this study with minimal software adaptations and without the need for cross-system calibration.
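The wavelength-selection step described above can be sketched in a few lines. This is a hypothetical simplification of my own (a plain arg-sort of the absolute mean difference spectrum; the study's actual selection also weighed practical multispectral constraints):

```python
import numpy as np

def key_wavelengths(wavelengths, spectra_a, spectra_b, n_keys=4):
    """Pick the n_keys wavelengths where the mean difference spectrum
    between two classes (e.g. wholesome vs. diseased) is largest."""
    diff = np.abs(np.mean(spectra_a, axis=0) - np.mean(spectra_b, axis=0))
    idx = np.argsort(diff)[-n_keys:]          # bands with the largest separation
    return np.sort(np.asarray(wavelengths)[idx])
```

A naive arg-sort can pick adjacent, highly correlated bands; a real pipeline would enforce a minimum spacing between the selected wavelengths.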

  20. A high-sensitivity EM-CCD camera for the open port telescope cavity of SOFIA

    NASA Astrophysics Data System (ADS)

    Wiedemann, Manuel; Wolf, Jürgen; McGrotty, Paul; Edwards, Chris; Krabbe, Alfred

    2016-08-01

The Stratospheric Observatory for Infrared Astronomy (SOFIA) has three target acquisition and tracking cameras. All three imagers originally used the same cameras, which did not meet the sensitivity requirements due to low quantum efficiency and high dark current. The Focal Plane Imager (FPI) suffered the most from high dark current, since it operated in the aircraft cabin at room temperature without active cooling. In early 2013 the FPI was upgraded with an iXon3 888 from Andor Technology. Compared to the original cameras, the iXon3 has a factor of five higher QE, thanks to its back-illuminated sensor, and orders of magnitude lower dark current, due to a thermo-electric cooler and "inverted mode operation." This leads to an increase in sensitivity of about five stellar magnitudes. The Wide Field Imager (WFI) and Fine Field Imager (FFI) shall now be upgraded with equally sensitive cameras. However, they are exposed to stratospheric conditions in flight (typically T ≈ -40 °C, p ≈ 0.1 atm), and there are no off-the-shelf CCD cameras with the performance of an iXon3 suited for these conditions. Therefore, Andor Technology and the Deutsches SOFIA Institut (DSI) are jointly developing and qualifying a camera for these conditions, based on the iXon3 888. The changes include replacement of electrical components with MIL-SPEC or industrial-grade components and various system optimizations: a new data interface that allows image data transmission over 30 m of cable from the camera to the controller, a new power converter in the camera to generate all necessary operating voltages locally, and a new housing that fulfills airworthiness requirements. A prototype of this camera has been built and tested in an environmental test chamber at temperatures down to T = -62 °C and a pressure equivalent to 50,000 ft altitude. In this paper, we report on the development of the camera and present results from the environmental testing.

  1. Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera.

    PubMed

    Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G; Nagarkar, Vivek V

    2011-06-01

Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity is required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is that increased magnification causes some rays to travel through the detection scintillator at steep angles, introducing parallax errors due to variable depth of interaction in the scintillator material, especially towards the edges of the detector field of view. These parallax errors ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher-energy isotopes that can easily penetrate through millimeters of scintillator material. A pixellated, focused-cut (FC) scintillator, with its pixels laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus improve the system resolution. We performed the first experimental evaluation of a newly developed focused-cut scintillator. We scanned a Tc-99m source across the field of view of a pinhole gamma camera with a continuous scintillator, a conventional "straight-cut" (SC) pixellated scintillator, and a focused-cut scintillator, each coupled to an electron-multiplying charge-coupled device (EMCCD) detector by a fiber-optic taper, and compared the measured full-width half-maximum (FWHM) values. We show that the FWHMs of the focused-cut scintillator projections are comparable to the FWHMs of the thinner SC scintillator, indicating the effectiveness of the focused-cut scintillator in compensating parallax errors.
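The FWHM comparison described above relies on extracting the width of a measured line-spread profile. A minimal sketch of such a measurement (a generic implementation, not the authors' code), using linear interpolation at the half-maximum crossings; it assumes a well-sampled, single-peaked profile whose endpoints lie below half maximum.

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked profile y(x)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # Linearly interpolate the two half-maximum crossings.
    x_left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    x_right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return x_right - x_left

# Sanity check: a Gaussian with sigma = 2 has FWHM = 2*sqrt(2*ln 2)*sigma ~ 4.71.
x = np.linspace(-10.0, 10.0, 401)
print(round(float(fwhm(x, np.exp(-x**2 / 8.0))), 2))
```

Comparing such FWHM values across the continuous, straight-cut, and focused-cut scintillators is exactly the figure of merit the abstract reports.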

  2. Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera

    PubMed Central

    Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G.; Nagarkar, Vivek V.

    2011-01-01

Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity is required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is that increased magnification causes some rays to travel through the detection scintillator at steep angles, introducing parallax errors due to variable depth of interaction in the scintillator material, especially towards the edges of the detector field of view. These parallax errors ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher-energy isotopes that can easily penetrate through millimeters of scintillator material. A pixellated, focused-cut (FC) scintillator, with its pixels laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus improve the system resolution. We performed the first experimental evaluation of a newly developed focused-cut scintillator. We scanned a Tc-99m source across the field of view of a pinhole gamma camera with a continuous scintillator, a conventional “straight-cut” (SC) pixellated scintillator, and a focused-cut scintillator, each coupled to an electron-multiplying charge-coupled device (EMCCD) detector by a fiber-optic taper, and compared the measured full-width half-maximum (FWHM) values. We show that the FWHMs of the focused-cut scintillator projections are comparable to the FWHMs of the thinner SC scintillator, indicating the effectiveness of the focused-cut scintillator in compensating parallax errors. PMID:21731108

  3. Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera

    NASA Astrophysics Data System (ADS)

    Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G.; Nagarkar, Vivek V.

    2011-06-01

    Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity is required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is that increased magnification causes some rays to travel through the detection scintillator at steep angles, introducing parallax errors due to variable depth-of-interaction in the scintillator material, especially towards the edges of the detector field of view. These parallax errors ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher-energy isotopes that can easily penetrate through millimeters of scintillator material. A pixellated, focused-cut (FC) scintillator, with its pixels laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus improve the system resolution. We performed the first experimental evaluation of a newly developed focused-cut scintillator. We scanned a Tc-99m source across the field of view of a pinhole gamma camera with a continuous scintillator, a conventional “straight-cut” (SC) pixellated scintillator, and a focused-cut scintillator, each coupled to an electron-multiplying charge coupled device (EMCCD) detector by a fiber-optic taper, and compared the measured full-width half-maximum (FWHM) values. We show that the FWHMs of the focused-cut scintillator projections are comparable to the FWHMs of the thinner SC scintillator, indicating the effectiveness of the focused-cut scintillator in compensating for parallax errors.
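
    The FWHM comparison above reduces, per projection, to measuring the width of a line-spread profile at half its peak. The following is a minimal NumPy sketch (our own function names, not the authors' analysis pipeline), using linear interpolation at the half-maximum crossings:

```python
import numpy as np

def fwhm(profile, pixel_pitch=1.0):
    """FWHM of a 1D peak, interpolating the half-maximum crossings."""
    y = np.asarray(profile, dtype=float)
    y = y - y.min()                      # crude baseline removal
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    left, right = above[0], above[-1]

    def cross(i0, i1):
        # linear interpolation of the half-max crossing between samples i0 and i1
        return i0 + (half - y[i0]) / (y[i1] - y[i0]) * (i1 - i0)

    x_left = cross(left - 1, left) if left > 0 else float(left)
    x_right = cross(right + 1, right) if right < len(y) - 1 else float(right)
    return (x_right - x_left) * pixel_pitch

# Synthetic Gaussian line profile, sigma = 2 px;
# analytic FWHM = 2*sqrt(2 ln 2)*sigma ~= 4.71 px
x = np.arange(64)
profile = np.exp(-0.5 * ((x - 32) / 2.0) ** 2)
print(round(fwhm(profile), 2))
```

    On sampled data the linear interpolation slightly overestimates the analytic value; fitting a model peak is preferable when the profile shape is known.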

  4. A sniffer-camera for imaging of ethanol vaporization from wine: the effect of wine glass shape.

    PubMed

    Arakawa, Takahiro; Iitani, Kenta; Wang, Xin; Kajiro, Takumi; Toma, Koji; Yano, Kazuyoshi; Mitsubayashi, Kohji

    2015-04-21

    A two-dimensional imaging system (Sniffer-camera) for visualizing the concentration distribution of ethanol vapor emitted from wine in a wine glass has been developed. This system provides image information on ethanol vapor concentration using chemiluminescence (CL) from an enzyme-immobilized mesh. It measures ethanol vapor concentration as CL intensities from luminol reactions induced by alcohol oxidase and a horseradish peroxidase (HRP)-luminol-hydrogen peroxide system. Conversion of ethanol distribution and concentration to two-dimensional CL was conducted using an enzyme-immobilized mesh containing alcohol oxidase, horseradish peroxidase, and luminol solution. The temporal changes in CL were detected using an electron-multiplying CCD (EM-CCD) camera and analyzed. We selected three types of glasses (a wine glass, a cocktail glass, and a straight glass) to determine the differences in ethanol emission caused by the shape of the glass. The emission of ethanol vapor from wine in each glass was successfully visualized, with pixel intensity reflecting ethanol concentration. Of note, a characteristic ring shape attributed to high alcohol concentration appeared near the rim of the wine glass containing 13 °C wine; the alcohol concentration in the center of the wine glass was thus comparatively lower. The Sniffer-camera was demonstrated to be sufficiently useful for non-destructive ethanol measurement in the assessment of food characteristics.

  5. C-RED One and C-RED2: SWIR high-performance cameras using Saphira e-APD and Snake InGaAs detectors

    NASA Astrophysics Data System (ADS)

    Gach, Jean-Luc; Feautrier, Philippe; Stadler, Eric; Clop, Fabien; Lemarchand, Stephane; Carmignani, Thomas; Wanwanscappel, Yann; Boutolleau, David

    2018-02-01

    After the development of the OCAM2 EMCCD fast visible camera dedicated to advanced adaptive optics wavefront sensing, First Light Imaging moved to SWIR fast cameras with the development of the C-RED One and the C-RED 2 cameras. First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with sub-electron readout noise and very low background. C-RED One is based on the latest version of the SAPHIRA detector developed by Leonardo UK. This breakthrough has been made possible thanks to the use of an e-APD infrared focal plane array, a truly disruptive technology in imaging. C-RED One is an autonomous system with an integrated cooling system and a vacuum regeneration system. It operates its sensor with a wide variety of readout techniques and processes video on-board thanks to an FPGA. We will show its performance and expose its main features. In addition to this project, First Light Imaging developed an InGaAs 640x512 fast camera with unprecedented performance in terms of noise, dark signal and readout speed, based on the SNAKE SWIR detector from Sofradir; this camera is called the C-RED 2. The C-RED 2 characteristics and performance will be described. The C-RED One project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944. The C-RED 2 development is supported by the "Investments for the future" program and the Provence Alpes Côte d'Azur Region, in the frame of the CPER.

  6. Space and Time Resolved Detection of Platelet Activation and von Willebrand Factor Conformational Changes in Deep Suspensions.

    PubMed

    Biasetti, Jacopo; Sampath, Kaushik; Cortez, Angel; Azhir, Alaleh; Gilad, Assaf A; Kickler, Thomas S; Obser, Tobias; Ruggeri, Zaverio M; Katz, Joseph

    2017-01-01

    Tracking phenotypic changes of cells and proteins in deep suspensions is critical for the direct imaging of blood-related phenomena in in vitro replicas of cardiovascular systems and blood-handling devices. This paper introduces fluorescence imaging techniques for space- and time-resolved detection of platelet activation, von Willebrand factor (VWF) conformational changes, and VWF-platelet interaction in deep suspensions. Labeled VWF, platelets, and VWF-platelet strands are suspended in deep cuvettes, illuminated, and imaged with a high-sensitivity EM-CCD camera, allowing detection using an exposure time of 1 ms. In-house postprocessing algorithms identify and track the moving signals. Recombinant VWF-eGFP (rVWF-eGFP) and VWF labeled with an FITC-conjugated polyclonal antibody are employed. Anti-P-Selectin FITC-conjugated antibodies and the calcium-sensitive probe Indo-1 are used to detect activated platelets. A positive correlation between the mean number of platelets detected per image and the percentage of activated platelets determined through flow cytometry is obtained, validating the technique. An increase in the number of rVWF-eGFP signals upon exposure to shear stress demonstrates the technique's ability to detect breakup of self-aggregates. VWF globular and unfolded conformations and self-aggregation are also observed. The ability to track the size and shape of VWF-platelet strands in space and time provides a means to detect pro- and antithrombotic processes.

  7. Diffuse reflectance imaging: a tool for guided biopsy

    NASA Astrophysics Data System (ADS)

    Jayanthi, Jayaraj L.; Subhash, Narayanan; Manju, Stephen; Nisha, Unni G.; Beena, Valappil T.

    2012-01-01

    Accurate diagnosis of premalignant or malignant oral lesions depends on the quality of the biopsy, adequate clinical information and correct interpretation of the biopsy results. The major clinical challenge is to precisely locate the biopsy site in a clinically suspicious lesion. Dips due to oxygenated hemoglobin absorption have been observed at 545 and 575 nm in the diffusely reflected white light spectra of oral mucosa, and the intensity ratio R545/R575 has been found suitable for early detection of oral pre-cancers. A multi-spectral diffuse reflectance (DR) imaging system, consisting of an electron multiplying charge coupled device (EMCCD) camera and a liquid crystal tunable filter, has been developed for guiding the clinician to an optimal biopsy site. Towards this end, DR images were recorded at 545 and 575 nm from 27 patients with potentially malignant lesions on their tongue (dorsal, lateral and ventral sides) and from 44 healthy controls. The false-colored ratio image R545/R575 of the lesion provides a visual discerning capability that helps in locating the most malignant site for biopsy. Histopathology of the guided biopsies showed that of the 27 patients, 16 were cancers, 9 pre-cancers and 2 lichen planus. In this clinical trial DR imaging correctly guided 25 biopsy sites, yielding a sensitivity of 93% and a specificity of 98%, thereby establishing the potential of DR imaging as a tool for guided biopsy.
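
    The R545/R575 ratio image described above is a pixel-wise division of two narrow-band frames. A minimal sketch (function name and the eps guard are ours, assuming co-registered frames):

```python
import numpy as np

def ratio_image(r545, r575, eps=1e-6):
    """Pixel-wise diffuse-reflectance ratio R545/R575 of two co-registered
    frames; eps guards against division by zero in dark pixels."""
    r545 = np.asarray(r545, dtype=float)
    r575 = np.asarray(r575, dtype=float)
    return r545 / np.maximum(r575, eps)

# Toy 2x2 frames standing in for EMCCD images at 545 nm and 575 nm
a = np.array([[100.0, 80.0], [60.0, 0.0]])
b = np.array([[50.0, 80.0], [30.0, 0.0]])
print(ratio_image(a, b))
```

    A false-color display would then map this ratio through a lookup table.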

  8. Continuous Microfluidics (Ecology-on-a-Chip) Experiments for Long Term Observation of Bacteria at Liquid-Liquid Interfaces

    NASA Astrophysics Data System (ADS)

    Miranda, Michael; White, Andrew; Jalali, Maryam; Sheng, Jian

    2017-11-01

    A microfluidic bioassay incorporating a peristaltic pump and a chemostat capable of continuously culturing a bacterial suspension through a microchannel for an extended period of time relevant to ecological processes is presented. A single crude oil droplet is dispensed on-chip and subsequently pinned to the top and bottom surfaces of the microchannel to establish a vertical curved oil-water interface for observing bacteria without boundary interference. The accumulation of extracellular polymeric substances (EPS), microbial film formation, and aggregation are observed by DIC microscopy with an EMCCD camera at an interval of 30 sec. Cell-interface interactions, such as cell translational and angular motilities as well as encounters with, attachment to, and detachment from the interface, are obtained by a high-speed camera at 1000 fps with a sampling interval of 10 min. Experiments on Pseudomonas sp. (P62) and isolated EPS suspensions from Sagittula stellata and Roseobacter show rapid formation of bacterial aggregates, including EPS streamers stretching tens of drop diameters long. These results provide crucial insights into environmentally relevant processes such as the initiation of marine oil snow, an alternative mode of biodegradation to conventional bioconsumption. Funded by GoMRI, NSF, ARO.

  9. An algorithm for automated detection, localization and measurement of local calcium signals from camera-based imaging.

    PubMed

    Ellefsen, Kyle L; Settle, Brett; Parker, Ian; Smith, Ian F

    2014-09-01

    Local Ca(2+) transients such as puffs and sparks form the building blocks of cellular Ca(2+) signaling in numerous cell types. They have traditionally been studied by linescan confocal microscopy, but advances in TIRF microscopy together with improved electron-multiplying CCD (EMCCD) cameras now enable rapid (>500 frames s(-1)) imaging of subcellular Ca(2+) signals with high spatial resolution in two dimensions. This approach yields vastly more information (ca. 1 Gb min(-1)) than linescan imaging, rendering visual identification and analysis of the imaged local events both laborious and subject to user bias. Here we describe a routine to rapidly automate the identification and analysis of local Ca(2+) events. It features an intuitive graphical user interface and runs under Matlab and open-source Python. The underlying algorithm features spatial and temporal noise filtering to reliably detect even small events in the presence of noisy and fluctuating baselines; localizes sites of Ca(2+) release with sub-pixel resolution; facilitates user review and editing of data; and outputs time-sequences of fluorescence ratio signals for identified event sites along with Excel-compatible tables listing amplitudes and kinetics of events. Copyright © 2014 Elsevier Ltd. All rights reserved.
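
    A deliberately simplified sketch of the detection idea (per-pixel baseline, robust noise normalization, thresholding, sub-pixel centroid); the published algorithm's spatial and temporal filtering is considerably more elaborate, and all names here are ours:

```python
import numpy as np

def detect_event(stack, thresh=5.0):
    """Toy detector for a movie of shape (T, Y, X): per-pixel median
    baseline, robust (MAD-based) noise normalization, global threshold,
    then a sub-pixel centre of mass around the brightest sample."""
    f0 = np.median(stack, axis=0)                       # per-pixel baseline
    dff = stack - f0
    noise = 1.4826 * np.median(np.abs(dff), axis=0) + 1e-9
    snr = dff / noise                                   # z-score movie
    t, y, x = np.unravel_index(np.argmax(snr), snr.shape)
    if snr[t, y, x] < thresh:
        return None
    y0, x0 = max(y - 2, 0), max(x - 2, 0)               # 5x5 patch at the peak
    win = np.clip(dff[t, y0:y0 + 5, x0:x0 + 5], 0.0, None)
    ys, xs = np.mgrid[0:win.shape[0], 0:win.shape[1]]
    return t, (ys * win).sum() / win.sum() + y0, (xs * win).sum() / win.sum() + x0

# Synthetic movie: Gaussian noise plus one blob in frame 10 centred at (12.4, 20.6)
rng = np.random.default_rng(0)
stack = rng.normal(0.0, 1.0, (30, 32, 32))
yy, xx = np.mgrid[0:32, 0:32]
stack[10] += 20.0 * np.exp(-((yy - 12.4) ** 2 + (xx - 20.6) ** 2) / (2 * 1.5 ** 2))
print(detect_event(stack))
```

    The MAD-based noise estimate keeps the rare bright event frames from inflating the per-pixel noise level, which a plain standard deviation would.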

  10. High resolution Cerenkov light imaging of induced positron distribution in proton therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Fujii, Kento; Morishita, Yuki

    2014-11-01

    Purpose: In proton therapy, imaging of the positron distribution produced by fragmentation during or soon after proton irradiation is a useful method to monitor the proton range. Although positron emission tomography (PET) is typically used for this imaging, its spatial resolution is limited. Cerenkov light imaging is a new molecular imaging technology that detects the visible photons produced by high-speed electrons using a high-sensitivity optical camera. Because its inherent spatial resolution is much higher than that of PET, the authors can measure more precise information about the proton-induced positron distribution with Cerenkov light imaging technology. For this purpose, they conducted Cerenkov light imaging of induced positron distribution in proton therapy. Methods: First, the authors evaluated the spatial resolution of their Cerenkov light imaging system with a ²²Na point source for the actual imaging setup. Then transparent acrylic phantoms (100 × 100 × 100 mm³) were irradiated with two different proton energies using a spot-scanning proton therapy system. Cerenkov light imaging of each phantom was conducted using a high-sensitivity electron-multiplying charge coupled device (EM-CCD) camera. Results: The Cerenkov light's spatial resolution for the setup was 0.76 ± 0.6 mm FWHM. They obtained high resolution Cerenkov light images of the positron distributions in the phantoms for two different proton energies and made fused images of the reference images and the Cerenkov light images. The depths of the positron distribution in the phantoms from the Cerenkov light images were almost identical to the simulation results. The decay curves derived from the regions-of-interest (ROIs) set on the Cerenkov light images revealed that Cerenkov light images can be used for estimating the half-life of the radionuclide components of the positrons. Conclusions: High resolution Cerenkov light imaging of proton-induced positron distribution was possible. The authors conclude that Cerenkov light imaging of proton-induced positrons is promising for proton therapy.
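
    The half-life estimation from the ROI decay curves can be sketched as a log-linear least-squares fit, under the assumption of a single dominant radionuclide (the 20.4 min figure below is the known C-11 half-life, used here only to generate test data):

```python
import numpy as np

def half_life(t, counts):
    """Half-life from an ROI decay curve via a log-linear least-squares
    fit of counts(t) = A * exp(-lambda * t)."""
    slope, _ = np.polyfit(t, np.log(counts), 1)   # slope = -lambda
    return np.log(2) / -slope

# Synthetic single-component decay curve using the known C-11 half-life
# (20.4 min), one of the positron emitters produced by proton irradiation
t = np.arange(0.0, 60.0, 2.0)                     # minutes
counts = 1e4 * np.exp(-np.log(2) * t / 20.4)
print(round(half_life(t, counts), 1))             # → 20.4
```

    Real ROI curves mix several emitters (e.g. C-11 and O-15), so a multi-exponential fit would be needed to separate the components.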

  11. Sound field measurement in a double layer cavitation cluster by rugged miniature needle hydrophones.

    PubMed

    Koch, Christian

    2016-03-01

    During multi-bubble cavitation the bubbles tend to organize themselves into clusters, and thus understanding the properties and dynamics of clustering is essential for controlling technical applications of cavitation. Sound field measurements are a potential technique for providing valuable experimental information about the status of cavitation clouds. Using purpose-made, rugged, wide-band, and small-sized needle hydrophones, sound field measurements in bubble clusters were performed, and time-dependent sound pressure waveforms were acquired and analyzed in the frequency domain up to 20 MHz. The cavitation clusters were synchronously observed by an electron-multiplying charge-coupled device (EMCCD) camera and the relation between the sound field measurements and cluster behaviour was investigated. Depending on the driving power, three ranges could be identified and characteristic properties assigned. At low power settings, no transient and little or no stable cavitation activity can be observed. The medium range is characterized by strong pressure peaks and various bubble cluster forms. At high power, a stable double layer was observed, which grew with further increasing power and became quite dynamic; the sound field was irregular and the fundamental at the driving frequency decreased. Between the bubble clouds, completely different sound field properties were found compared to those within the clouds, where the cavitation activity is high: in between, the sound field pressure amplitude was quite small and no collapses were detected. Copyright © 2015. Published by Elsevier B.V.

  12. Wide-field fluorescence diffuse optical tomography with epi-illumination of sinusoidal pattern

    NASA Astrophysics Data System (ADS)

    Li, Tongxin; Gao, Feng; Chen, Weiting; Qi, Caixia; Yan, Panpan; Zhao, Huijuan

    2017-02-01

    We present a wide-field fluorescence tomography scheme with epi-illumination of sinusoidal patterns. In this scheme, a DMD projector is employed as a spatial light modulator to independently generate wide-field sinusoidal illumination patterns at varying spatial frequencies on a sample, and the photons emitted at the sample surface are captured with an EM-CCD camera. This method significantly reduces the number of optical field measurements compared to point-source-scanning approaches and thereby achieves the fast data acquisition desired for dynamic imaging applications. Fluorescence yield images are reconstructed using the normalized-Born formulated inversion of the diffusion model. Experimental reconstructions are presented on a phantom embedding fluorescent targets and compared for combinations of multiple frequencies. The results validate the ability of the method to determine the relative depth and quantification of the targets with increasing accuracy.
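
    Generating the DMD illumination amounts to sampling a cosine at a chosen spatial frequency and phase. A minimal sketch (the 8-bit quantization and the three-phase set are our assumptions, not details from the abstract):

```python
import numpy as np

def sinusoid_pattern(shape, fx, phase=0.0):
    """8-bit sinusoidal fringe pattern for a DMD projector: spatial
    frequency fx in cycles per pixel along x, with a phase offset."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pattern = 0.5 * (1.0 + np.cos(2 * np.pi * fx * xs + phase))
    return np.round(255 * pattern).astype(np.uint8)

# Three phase-shifted patterns at one spatial frequency, as commonly used
# for demodulation in structured-illumination schemes (0, 2pi/3, 4pi/3)
pats = [sinusoid_pattern((4, 12), 0.1, p) for p in (0, 2 * np.pi / 3, 4 * np.pi / 3)]
print(pats[0][0, :5])
```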

  13. Automated telescope for variability studies

    NASA Astrophysics Data System (ADS)

    Ganesh, S.; Baliyan, K. S.; Chandra, S.; Joshi, U. C.; Kalyaan, A.; Mathur, S. N.

    PRL has installed a 50 cm telescope at Mt Abu, Gurushikhar. The backend instrument consists of a 1K × 1K EMCCD camera with standard UBVRI filters and also has polarization measurement capability using a second filter wheel with polaroid sheets oriented at different position angles. This 50 cm telescope observatory is operated in a robotic mode with several methods of scheduling the objects to be observed, including batch, fully manual, and fully autonomous modes of operation. Linux-based command-line and GUI software are used throughout the observatory. This talk will present the details of the telescope, the associated instruments, and the auxiliary facilities for weather monitoring that were developed in-house to ensure the safe and reliable operation of the telescope. The facility has been in use for a couple of years now and various objects have been observed. Some of the interesting results will also be presented.

  14. Image Geometric Corrections for a New EMCCD-based Dual Modular X-ray Imager

    PubMed Central

    Qu, Bin; Huang, Ying; Wang, Weiyuan; Cartwright, Alexander N.; Titus, Albert H.; Bednarek, Daniel R.; Rudin, Stephen

    2012-01-01

    An EMCCD-based dual modular x-ray imager was recently designed and developed from the component level, providing a high dynamic range of 53 dB and an effective pixel size of 26 μm for angiography and fluoroscopy. The unique 2×1 array design efficiently increased the clinical field of view and can also be readily expanded to an M×N array implementation. Due to the alignment mismatches between the EMCCD sensors and the fiber optic tapers in each module, the output images or video sequences result in a misaligned 2048×1024 digital display if uncorrected. In this paper, we present a method for correcting display registration using a custom-designed two-layer printed circuit board. This board was designed with grid lines to serve as the calibration pattern, and provides an accurate reference and sufficient contrast to enable proper display registration. Results show an accurate and fine stitching of the two outputs from the two modules. PMID:22254882

  15. Development of an EMCCD for LIDAR applications

    NASA Astrophysics Data System (ADS)

    De Monte, B.; Bell, R. T.

    2017-11-01

    A novel detector incorporating e2v's EMCCD (L3Vision™) [1] technology for use in LIDAR (Light Detection And Ranging) applications has been designed, manufactured and characterised. The most critical performance aspect was the requirement to collect charge from a 120 μm square detection area for a 667 ns temporal sampling window, with low crosstalk between successive samples, followed by signal readout with sub-electron effective noise. Additional requirements included low dark signal, high quantum efficiency at the 355 nm laser wavelength, and the ability to handle bright laser echoes without corruption of the much fainter useful signals. The detector architecture used high-speed charge binning to combine the signal from each sampling window into a single charge packet. This was then passed through a multiplication register (EMCCD) operating with a typical gain of 100X to a conventional charge detection circuit. The detector achieved a typical quantum efficiency of 80% and a total noise in darkness of < 0.5 electrons rms. Development of the detector was supported by ESA.

  16. Developing a CCD camera with high spatial resolution for RIXS in the soft X-ray range

    NASA Astrophysics Data System (ADS)

    Soman, M. R.; Hall, D. J.; Tutt, J. H.; Murray, N. J.; Holland, A. D.; Schmitt, T.; Raabe, J.; Schmitt, B.

    2013-12-01

    The Super Advanced X-ray Emission Spectrometer (SAXES) at the Swiss Light Source contains a high resolution Charge-Coupled Device (CCD) camera used for Resonant Inelastic X-ray Scattering (RIXS). Using the current CCD-based camera system, the energy-dispersive spectrometer has an energy resolution (E/ΔE) of approximately 12,000 at 930 eV. A recent study predicted that through an upgrade to the grating and camera system, the energy resolution could be improved by a factor of 2. In order to achieve this goal in the spectral domain, the spatial resolution of the CCD must be improved to better than 5 μm from the current 24 μm spatial resolution (FWHM). The 400 eV-1600 eV X-rays detected by this spectrometer primarily interact within the field-free region of the CCD, producing electron clouds which diffuse isotropically until they reach the depleted region and buried channel. This diffusion of the charge leads to events which are split across several pixels. Through the analysis of the charge distribution across the pixels, various centroiding techniques can be used to pinpoint the spatial location of the X-ray interaction to the sub-pixel level, greatly improving the spatial resolution achieved. Using the PolLux soft X-ray microspectroscopy endstation at the Swiss Light Source, a beam of X-rays of energies from 200 eV to 1400 eV can be focused down to a spot size of approximately 20 nm. Scanning this spot across the 16 μm square pixels allows the sub-pixel response to be investigated. Previous work has demonstrated the potential improvement in spatial resolution achievable by centroiding events in a standard CCD. An Electron-Multiplying CCD (EM-CCD) has been used to improve the achieved signal-to-effective-readout-noise ratio, resulting in worst-case spatial resolution measurements of 4.5±0.2 μm and 3.9±0.1 μm at 530 eV and 680 eV, respectively. A method is described that allows the contribution of the X-ray spot size to be deconvolved from these worst-case resolution measurements, estimating the spatial resolution to be approximately 3.5 μm and 3.0 μm at 530 eV and 680 eV, respectively, well below the 5 μm resolution limit required to improve the spectral resolution by a factor of 2.
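
    The centroiding step above can be sketched as a center-of-mass computation over the charge of a split event (a minimal illustration with our own names; the study compares several centroiding techniques, and this is only the simplest):

```python
import numpy as np

def centroid_um(event, pixel_pitch=16.0):
    """Centre-of-mass centroid of a split X-ray event.

    event: charge collected in a small pixel neighbourhood (ADU);
    returns (y, x) in microns from the corner of the patch, taking the
    16 um pixel pitch quoted in the abstract as the default."""
    q = np.clip(np.asarray(event, dtype=float), 0.0, None)  # ignore negative noise
    ys, xs = np.mgrid[0:q.shape[0], 0:q.shape[1]]
    cy = (ys * q).sum() / q.sum()
    cx = (xs * q).sum() / q.sum()
    # pixel centres sit at (index + 0.5) * pitch
    return (cy + 0.5) * pixel_pitch, (cx + 0.5) * pixel_pitch

# A 3x3 event split unevenly across pixels: the heavier right-hand
# column pulls the centroid right of the patch centre
event = [[10, 20, 30],
         [20, 40, 60],
         [10, 20, 30]]
print(centroid_um(event))
```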

  17. An intraoperative spectroscopic imaging system for quantification of Protoporphyrin IX during glioma surgery (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Angulo-Rodríguez, Leticia M.; Laurence, Audrey; Jermyn, Michael; Sheehy, Guillaume; Sibai, Mira; Petrecca, Kevin; Roberts, David W.; Paulsen, Keith D.; Wilson, Brian C.; Leblond, Frédéric

    2016-03-01

    Cancer tissue often remains after brain tumor resection due to the inability to detect the full extent of cancer during surgery, particularly near tumor boundaries. Commercial systems are available for intraoperative real-time aminolevulinic acid (ALA)-induced protoporphyrin IX (PpIX) fluorescence imaging. These are standard white-light neurosurgical microscopes adapted with optical components for fluorescence excitation and detection. However, these instruments lack sensitivity and specificity, which limits the ability to detect low levels of PpIX and distinguish it from tissue auto-fluorescence. Current systems also cannot provide repeatable and unbiased quantitative fluorophore concentration values because of the unknown and highly variable light attenuation by tissue. We present a highly sensitive spectroscopic fluorescence imaging system that is seamlessly integrated onto a neurosurgical microscope. Hardware and software were developed to achieve through-microscope spatially-modulated illumination for 3D profilometry and to use this information to extract tissue optical properties, correcting for the effects of tissue light attenuation. This gives pixel-by-pixel quantified fluorescence values and improves detection of low PpIX concentrations. This is achieved using a high-sensitivity Electron Multiplying Charge Coupled Device (EMCCD) with a Liquid Crystal Tunable Filter (LCTF), whereby spectral bands are acquired sequentially, while a snapshot camera system with simultaneous acquisition of all bands is used for profilometry and optical-property recovery. Sensitivity and specificity to PpIX are demonstrated using brain tissue phantoms and intraoperative human data acquired in an on-going clinical study using PpIX fluorescence to guide glioma resection.

  18. 77 FR 20417 - Certain Cameras and Mobile Devices, Related Software and Firmware, and Components Thereof and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-04

    ... INTERNATIONAL TRADE COMMISSION [DN 2891] Certain Cameras and Mobile Devices, Related Software and... complaint entitled Certain Cameras and Mobile Devices, Related Software and Firmware, and Components Thereof... cameras and mobile devices, related software and firmware, and components thereof and products containing...

  19. 21 CFR 886.1120 - Opthalmic camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Opthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding area...

  20. The Campaign for the Occultation of UCAC4-347-165728 (R=12m2) by Pluto on June 29th, 2015

    NASA Astrophysics Data System (ADS)

    Beisker, W.; Sicardy, B.; Berard, D.; Meza, E.; Herald, D.; Gault, D.; Talbot, J.; Bode, H.-J.; Braga-Ribas, F.; Barry, T.; Broughton, J.; Hanna, W.; Bradshaw, J.; Kerr, S.; Pavlov, H.

    2015-10-01

    The occultation of UCAC4-347-165728 (R=12m2) by Pluto on the 29th of June 2015 is the last important occultation by Pluto before the New Horizons flyby 15 days later. It is therefore a great opportunity to measure details of Pluto's atmosphere from Earth at the same time as the "on-site" determination. Observations from mobile stations and from certain fixed-site observatories are planned in an international campaign in Australia and New Zealand. The telescopes will be equipped with EMCCD or CCD cameras to record a frame sequence linked to exact timing by GPS. With high resolution astrometry in the months and weeks before the event, we intend to define the central line of the occultation so accurately that positioning of instruments in close proximity to the central line is possible. First results of the campaign will be presented in this report.

  1. 21 CFR 892.1620 - Cine or spot fluorographic x-ray camera.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Cine or spot fluorographic x-ray camera. 892.1620... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1620 Cine or spot fluorographic x-ray camera. (a) Identification. A cine or spot fluorographic x-ray camera is a device intended to photograph...

  2. 21 CFR 892.1620 - Cine or spot fluorographic x-ray camera.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Cine or spot fluorographic x-ray camera. 892.1620... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1620 Cine or spot fluorographic x-ray camera. (a) Identification. A cine or spot fluorographic x-ray camera is a device intended to photograph...

  3. 21 CFR 892.1620 - Cine or spot fluorographic x-ray camera.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Cine or spot fluorographic x-ray camera. 892.1620... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1620 Cine or spot fluorographic x-ray camera. (a) Identification. A cine or spot fluorographic x-ray camera is a device intended to photograph...

  4. 21 CFR 892.1620 - Cine or spot fluorographic x-ray camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Cine or spot fluorographic x-ray camera. 892.1620... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1620 Cine or spot fluorographic x-ray camera. (a) Identification. A cine or spot fluorographic x-ray camera is a device intended to photograph...

  5. 21 CFR 892.1620 - Cine or spot fluorographic x-ray camera.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Cine or spot fluorographic x-ray camera. 892.1620... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1620 Cine or spot fluorographic x-ray camera. (a) Identification. A cine or spot fluorographic x-ray camera is a device intended to photograph...

  6. Development of a novel multi-point plastic scintillation detector with a single optical transmission line for radiation dose measurement*

    PubMed Central

    Therriault-Proulx, François; Archambault, Louis; Beaulieu, Luc; Beddar, Sam

    2013-01-01

    Purpose: The goal of this study was to develop a novel multi-point plastic scintillation detector (mPSD) capable of measuring the dose accurately at multiple positions simultaneously using a single optical transmission line. Methods: A 2-point mPSD used a band-pass approach that included splitters, color filters, and an EMCCD camera. The 3-point mPSD was based on a new full-spectrum approach, in which a spectrograph was coupled to a CCD camera. Irradiations of the mPSDs and of an ion chamber were performed with a 6-MV photon beam at various depths and lateral positions in a water tank. Results: For the 2-point mPSD, the average relative differences between mPSD and ion chamber measurements for the depth-dose were 2.4±1.6% and 1.3±0.8% for BCF-60 and BCF-12, respectively. For the 3-point mPSD, the average relative differences over all conditions were 2.3±1.1%, 1.6±0.4%, and 0.32±0.19% for BCF-60, BCF-12, and BCF-10, respectively. Conclusions: This study demonstrates the practical feasibility of mPSDs. This type of detector could be very useful for pre-treatment quality assurance applications as well as an accurate tool for real-time in vivo dosimetry. PMID:23060069
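
    The full-spectrum approach can be illustrated as linear unmixing: the recorded spectrum is modeled as a superposition of the scintillators' emission spectra and solved by least squares. The basis spectra below are made up purely for illustration; a real system would measure them during calibration:

```python
import numpy as np

# Hypothetical normalized emission spectra (one column per scintillator,
# e.g. BCF-10 / BCF-12 / BCF-60) sampled at four wavelength bins
basis = np.array([[0.9, 0.1, 0.0],
                  [0.4, 0.8, 0.1],
                  [0.1, 0.5, 0.6],
                  [0.0, 0.2, 0.9]])
true_doses = np.array([2.0, 1.0, 3.0])   # per-scintillator contributions
spectrum = basis @ true_doses            # what the spectrograph would record

# Least-squares unmixing recovers the contributions from the mixed spectrum
doses, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
print(np.round(doses, 6))
```

    With noisy measurements the same least-squares solve still applies; the error then depends on how distinct the basis spectra are.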

  7. X-ray luminescence imaging of water, air, and tissue phantoms

    NASA Astrophysics Data System (ADS)

    Lun, Michael C.; Li, Changqing

    2018-02-01

    X-ray luminescence computed tomography (XLCT) is an emerging hybrid molecular imaging modality. In XLCT, high energy x-ray photons excite phosphors emitting optical photons for tomographic image reconstruction. During XLCT, the optical signal obtained is thought to only originate from the embedded phosphor particles. However, numerous studies have reported other sources of optical photons, such as in air, water, and tissue, that are generated from ionization. These sources of optical photons provide background noise and limit the molecular sensitivity of XLCT imaging. In this study, using a water-cooled electron multiplying charge-coupled device (EMCCD) camera, we performed luminescence imaging of water, air, and several tissue-mimicking phantoms, including one embedded with a target containing 0.01 mg/mL of europium-doped gadolinium oxysulfide (GOS:Eu3+) particles, during x-ray irradiation using a focused x-ray beam with energy less than the Cerenkov radiation threshold. In addition, a spectrograph was used to measure the x-ray luminescence spectrum. The phantom embedded with the GOS:Eu3+ target displayed the greatest luminescence intensity, followed by the tissue phantom, and finally the water phantom. Our results indicate that the x-ray luminescence intensity from a background phantom is equivalent to a GOS:Eu3+ concentration of 0.8 μg/mL. We also found a 3-fold difference in the radioluminescence intensity between liquid water and air. From the measurements of the emission spectra, we found that water produced a broad spectrum and that a tissue-mimicking phantom made from Intralipid had a different x-ray emission spectrum than one made with TiO2 and India ink. The measured spectra suggest that it is better to use Intralipid instead of TiO2 as an optical scatterer for future XLCT imaging.

  8. Rapid Raman spectroscopy of musculoskeletal tissue using a visible laser and an electron-multiplying CCD (EMCCD) detector

    NASA Astrophysics Data System (ADS)

    Golcuk, Kurtulus; Mandair, Gurjit S.; Callender, Andrew F.; Finney, William F.; Sahar, Nadder; Kohn, David H.; Morris, Michael D.

    2006-02-01

    Background fluorescence can often complicate the use of Raman microspectroscopy in the study of musculoskeletal tissues. Such fluorescence interferences are undesirable as the Raman spectra of matrix and mineral phases can be used to differentiate between normal and pathological or microdamaged bone. Photobleaching with the excitation laser provides a non-invasive method for reducing background fluorescence, enabling 532 nm Raman hyperspectral imaging of bone tissue. The signal acquisition time for a 400-point Raman line image is reduced to 1-4 seconds using an electron-multiplying CCD (EMCCD) detector, enabling acquisition of Raman images in less than 10 minutes. Rapid photobleaching depends upon multiple scattering effects in the tissue specimen and is applicable to some, but not all, experimental situations.

  9. Implementation of material decomposition using an EMCCD and CMOS-based micro-CT system.

    PubMed

    Podgorsak, Alexander R; Nagesh, Sv Setlur; Bednarek, Daniel R; Rudin, Stephen; Ionita, Ciprian N

    2017-02-11

    This project assessed the effectiveness of using two different detectors to obtain dual-energy (DE) micro-CT data for carrying out material decomposition. A micro-CT system coupled to either a complementary metal-oxide semiconductor (CMOS) or an electron-multiplying CCD (EMCCD) detector was used to acquire image data of a 3D-printed phantom with channels filled with different materials. For each acquisition, materials such as iohexol contrast agent, water, and platinum were selected to make up the scanned object. DE micro-CT data were acquired, and slices of the scanned object were differentiated by material makeup. The success of the decomposition was assessed quantitatively by computing the percentage normalized root-mean-square error (%NRMSE). Our results indicate a successful decomposition of iohexol for both detectors (%NRMSE values of 1.8 for EMCCD, 2.4 for CMOS), as well as platinum (%NRMSE value of 4.7). The CMOS detector performed material decomposition on air and water with, on average, seven times the %NRMSE, possibly due to the lower sensitivity of the CMOS system. Material decomposition showed the potential to differentiate between materials such as iohexol and platinum, perhaps opening the door for its use in the neurovascular anatomical region. Work supported by Toshiba America Medical Systems, and partially supported by NIH grant 2R01EB002873.
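
    %NRMSE here is the root-mean-square error between decomposed and reference material maps, normalized and expressed in percent. A sketch normalizing by the reference dynamic range, which is one common convention (the abstract does not specify which normalization the authors used):

```python
from math import sqrt

def percent_nrmse(estimated, reference):
    """Root-mean-square error between an estimated and a reference material
    map, normalized by the reference dynamic range and expressed in percent."""
    mse = sum((e - r) ** 2 for e, r in zip(estimated, reference)) / len(reference)
    span = max(reference) - min(reference)
    return 100.0 * sqrt(mse) / span
```

    A lower %NRMSE means the decomposed map more closely matches the known material layout of the phantom; the 1.8 vs 2.4 comparison above is exactly this kind of figure of merit.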

  10. Implementation of material decomposition using an EMCCD and CMOS-based micro-CT system

    NASA Astrophysics Data System (ADS)

    Podgorsak, Alexander R.; Nagesh, S. V. Setlur; Bednarek, Daniel R.; Rudin, Stephen; Ionita, Ciprian N.

    2017-03-01

    This project assessed the effectiveness of using two different detectors to obtain dual-energy (DE) micro-CT data for carrying out material decomposition. A micro-CT system coupled to either a complementary metal-oxide semiconductor (CMOS) or an electron-multiplying CCD (EMCCD) detector was used to acquire image data of a 3D-printed phantom with channels filled with different materials. For each acquisition, materials such as iohexol contrast agent, water, and platinum were selected to make up the scanned object. DE micro-CT data were acquired, and slices of the scanned object were differentiated by material makeup. The success of the decomposition was assessed quantitatively by computing the percentage normalized root-mean-square error (%NRMSE). Our results indicate a successful decomposition of iohexol for both detectors (%NRMSE values of 1.8 for EMCCD, 2.4 for CMOS), as well as platinum (%NRMSE value of 4.7). The CMOS detector performed material decomposition on air and water with, on average, seven times the %NRMSE, possibly due to the lower sensitivity of the CMOS system. Material decomposition showed the potential to differentiate between materials such as iohexol and platinum, perhaps opening the door for its use in the neurovascular anatomical region. Work supported by Toshiba America Medical Systems, and partially supported by NIH grant 2R01EB002873.

  11. Fireball multi object spectrograph: as-built optic performances

    NASA Astrophysics Data System (ADS)

    Grange, R.; Milliard, B.; Lemaitre, G.; Quiret, S.; Pascal, S.; Origné, A.; Hamden, E.; Schiminovich, D.

    2016-07-01

    Fireball (Faint Intergalactic Redshifted Emission Balloon) is a NASA/CNES balloon-borne experiment to study the faint diffuse circumgalactic medium via line emissions in the ultraviolet (200 nm) from a flight altitude above 37 km. Fireball relies on a Multi Object Spectrograph (MOS) that takes full advantage of the new high-QE, low-noise, 13 μm pixel UV EMCCD. The MOS is fed by a 1-meter-diameter parabola with an extended field (1000 arcmin2) obtained using a highly aspherized two-mirror corrector. The entire optical train works at F/2.5 to maintain a high signal-to-noise ratio. The spectrograph (R ~ 2200, 1.5 arcsec FWHM) is based on two identical Schmidt systems, acting as collimator and camera, sharing a 2400 g/mm aspherized reflective Schmidt grating. This grating is manufactured by active optics methods, using double replication of a metal deformable matrix whose active clear aperture is built into a rigid elliptical contour. The payload and gondola are presently under integration at LAM. We will present the alignment procedure and the as-built optical performance of the Fireball instrument.

  12. VizieR Online Data Catalog: Near-infrared observations of 84 KOI systems (Atkinson+, 2017)

    NASA Astrophysics Data System (ADS)

    Atkinson, D.; Baranec, C.; Ziegler, C.; Law, N.; Riddle, R.; Morton, T.

    2017-06-01

    The initial observations identifying companion candidates are from multiple Robo-AO observing runs on the Palomar Observatory 1.5m telescope, spanning 2012 July to September, 2013 April to October, 2014 June to September, and 2015 June. Observations were in either Sloan-i or a long-pass 600nm (LP600) filter, the latter being similar to the Kepler bandpass when combined with the EMCCD's quantum efficiency curve for red/cool stars. The near-infrared observations are from the Near-InfraRed Camera 2 (NIRC2) instrument on the 10m Keck II telescope, conducted on 2013 June 24, August 24 and 25, 2014 August 17, 2015 July 25, and August 4 in the J, H, K, and/or Kp filters in the narrow mode of NIRC2 (9.952mas/pixel). The relative positions and raw contrast measurements of imaged companions are presented in Table 4. The reduced apparent magnitudes, which use both measured contrasts and canonical apparent magnitudes of combined systems in the Kepler literature, are presented in Table 5. (5 data files).

  13. SOUL: the Single conjugated adaptive Optics Upgrade for LBT

    NASA Astrophysics Data System (ADS)

    Pinna, E.; Esposito, S.; Hinz, P.; Agapito, G.; Bonaglia, M.; Puglisi, A.; Xompero, M.; Riccardi, A.; Briguglio, R.; Arcidiacono, C.; Carbonaro, L.; Fini, L.; Montoya, M.; Durney, O.

    2016-07-01

    We present here SOUL: the Single conjugated adaptive Optics Upgrade for LBT. SOUL will upgrade the wavefront sensors, replacing the existing CCD detector with an EMCCD camera, and the rest of the system, in order to enable closed-loop operation at a faster cycle rate and with a higher number of slopes. Thanks to the reduced noise and the higher pixel count and frame rate, we expect a gain (for a given SR) of around 1.5-2 magnitudes at all wavelengths in the range 7.5 70% in I-band and 0.6 arcsec seeing), and the sky coverage will be multiplied by a factor of 5 at all galactic latitudes. By upgrading the SCAO systems at all four focal stations, SOUL will provide these benefits in 2017 to the LBTI interferometer and in 2018 to the two LUCI NIR spectro-imagers. In the same year the SOUL correction will also be exploited by the new generation of LBT instruments: V-SHARK, SHARK-NIR and iLocater.

  14. KAPAO-Alpha: An On-The-Sky Testbed for Adaptive Optics on Small Aperture Telescopes

    NASA Astrophysics Data System (ADS)

    Morrison, Will; Choi, P. I.; Severson, S. A.; Spjut, E.; Contreras, D. S.; Gilbreth, B. N.; McGonigle, L. P.; Rudy, A. R.; Xue, A.; Baranec, C.; Riddle, R.

    2012-05-01

    We present initial in-lab and on-sky results of a natural guide star adaptive optics instrument, KAPAO-Alpha, being deployed on Pomona College’s 1-meter telescope at Table Mountain Observatory. The instrument is an engineering prototype designed to help us identify and solve design and integration issues before building KAPAO, a low-cost, dual-band, natural guide star AO system currently in active development and scheduled for first light in 2013. The Alpha system operates at visible wavelengths, employs Shack-Hartmann wavefront sensing, and is assembled entirely from commercially available components that include: off-the-shelf optics, a 140-actuator BMC deformable mirror, a high speed SciMeasure Lil’ Joe camera, and an EMCCD for science image acquisition. Wavefront reconstruction operating at 1-kHz speeds is handled with a consumer-grade computer running custom software adopted from the Robo-AO project. The assembly and integration of the Alpha instrument has been undertaken as a Pomona College undergraduate thesis. As part of the larger KAPAO project, it is supported by the National Science Foundation under Grant No. 0960343.

  15. The performance of the MROI fast tip-tilt correction system

    NASA Astrophysics Data System (ADS)

    Young, John; Buscher, David; Fisher, Martin; Haniff, Christopher; Rea, Alexander; Seneta, Eugene; Sun, Xiaowei; Wilson, Donald; Farris, Allen; Olivares, Andres

    2014-07-01

    The fast tip-tilt (FTT) correction system for the Magdalena Ridge Observatory Interferometer (MROI) is being developed by the University of Cambridge. The design incorporates an EMCCD camera protected by a thermal enclosure, optical mounts with passive thermal compensation, and control software running under Xenomai real-time Linux. The complete FTT system is now undergoing laboratory testing prior to being installed on the first MROI unit telescope in the fall of 2014. We are following a twin-track approach to testing the closed-loop performance: tracking tip-tilt perturbations introduced by an actuated flat mirror in the laboratory, and undertaking end-to-end simulations that incorporate realistic higher-order atmospheric perturbations. We report test results that demonstrate (a) the high stability of the entire opto-mechanical system, realized with a completely passive design; and (b) the fast tip-tilt correction performance and limiting sensitivity. Our preliminary results in both areas are close to those needed to realize the ambitious stability and sensitivity goals of the MROI, which aims to match the performance of current natural guide star adaptive optics systems.

  16. 21 CFR 878.4160 - Surgical camera and accessories.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Surgical camera and accessories. 878.4160 Section 878.4160 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES GENERAL AND PLASTIC SURGERY DEVICES Surgical Devices § 878.4160 Surgical camera...

  17. 21 CFR 878.4160 - Surgical camera and accessories.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Surgical camera and accessories. 878.4160 Section 878.4160 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES GENERAL AND PLASTIC SURGERY DEVICES Surgical Devices § 878.4160 Surgical camera...

  18. 21 CFR 878.4160 - Surgical camera and accessories.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Surgical camera and accessories. 878.4160 Section 878.4160 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES GENERAL AND PLASTIC SURGERY DEVICES Surgical Devices § 878.4160 Surgical camera...

  19. 21 CFR 878.4160 - Surgical camera and accessories.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Surgical camera and accessories. 878.4160 Section 878.4160 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES GENERAL AND PLASTIC SURGERY DEVICES Surgical Devices § 878.4160 Surgical camera...

  20. 21 CFR 878.4160 - Surgical camera and accessories.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Surgical camera and accessories. 878.4160 Section 878.4160 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES GENERAL AND PLASTIC SURGERY DEVICES Surgical Devices § 878.4160 Surgical camera...

  1. A multichannel visible spectroscopy system for the ITER-like W divertor on EAST.

    PubMed

    Mao, Hongmin; Ding, Fang; Luo, Guang-Nan; Hu, Zhenhua; Chen, Xiahua; Xu, Feng; Yang, Zhongshi; Chen, Jingbo; Wang, Liang; Ding, Rui; Zhang, Ling; Gao, Wei; Xu, Jichan; Wu, Chengrui

    2017-04-01

    To facilitate long-pulse, high-power operation, an ITER-like actively cooled tungsten (W) divertor was installed in the Experimental Advanced Superconducting Tokamak (EAST) in 2014, replacing the original upper graphite divertor. A dedicated multichannel visible spectroscopic diagnostic system has accordingly been developed for the characterization of the plasma and impurities in the W divertor. An array of 22 lines of sight (LOSs) provides a profile measurement of the light emitted from the plasma along the upper outer divertor, and another 17 vertical LOSs view the upper inner divertor, achieving a 13 mm poloidal resolution in both regions. The light emitted from the plasma is collected by a specially designed optical lens assembly and transferred to a Czerny-Turner spectrometer via 40 m quartz fibers. The spectra dispersed by the spectrometer are recorded with an Electron-Multiplying Charge Coupled Device (EMCCD). The optical throughput and quantum efficiency of the system are optimized for the wavelength range 350-700 nm. The spectral resolution/coverage can be adjusted from 0.01 nm/3 nm to 0.41 nm/140 nm by switching to a grating with a suitable groove density. The frame rate depends on the number of LOSs read out by the EMCCD and can reach nearly 2 kHz for single-LOS detection. The light collected by the front optical lens can also be divided and partly transferred to a photomultiplier-tube array with specified bandpass filters, which provides faster sampling rates of up to 200 kHz. The spectroscopic diagnostic is routinely operated in EAST discharges, with absolute optical calibrations applied before and after each campaign, monitoring photon fluxes from impurities and H recycling in the upper divertor. This paper presents the technical details of the diagnostic and typical measurements during EAST discharges.
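
    The dependence of frame rate on the number of lines of sight follows from the readout time scaling with the number of detector rows digitized per frame. A rough sketch of that scaling, with placeholder timing figures that are assumptions, not values from the EAST system:

```python
def max_frame_rate(n_los, rows_per_los, row_readout_us, fixed_overhead_us):
    """Rough upper bound on EMCCD frame rate when only the rows covering the
    selected lines of sight are read out. All timing figures are placeholders:
    per-row readout time and per-frame overhead depend on the actual camera."""
    frame_time_us = n_los * rows_per_los * row_readout_us + fixed_overhead_us
    return 1e6 / frame_time_us  # convert microseconds per frame to frames/s
```

    With fewer LOS rows to digitize the frame time shrinks, which is why a single-LOS readout can run near the camera's maximum rate while a full 22-LOS profile runs much slower.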

  2. A multichannel visible spectroscopy system for the ITER-like W divertor on EAST

    NASA Astrophysics Data System (ADS)

    Mao, Hongmin; Ding, Fang; Luo, Guang-Nan; Hu, Zhenhua; Chen, Xiahua; Xu, Feng; Yang, Zhongshi; Chen, Jingbo; Wang, Liang; Ding, Rui; Zhang, Ling; Gao, Wei; Xu, Jichan; Wu, Chengrui

    2017-04-01

    To facilitate long-pulse, high-power operation, an ITER-like actively cooled tungsten (W) divertor was installed in the Experimental Advanced Superconducting Tokamak (EAST) in 2014, replacing the original upper graphite divertor. A dedicated multichannel visible spectroscopic diagnostic system has accordingly been developed for the characterization of the plasma and impurities in the W divertor. An array of 22 lines of sight (LOSs) provides a profile measurement of the light emitted from the plasma along the upper outer divertor, and another 17 vertical LOSs view the upper inner divertor, achieving a 13 mm poloidal resolution in both regions. The light emitted from the plasma is collected by a specially designed optical lens assembly and transferred to a Czerny-Turner spectrometer via 40 m quartz fibers. The spectra dispersed by the spectrometer are recorded with an Electron-Multiplying Charge Coupled Device (EMCCD). The optical throughput and quantum efficiency of the system are optimized for the wavelength range 350-700 nm. The spectral resolution/coverage can be adjusted from 0.01 nm/3 nm to 0.41 nm/140 nm by switching to a grating with a suitable groove density. The frame rate depends on the number of LOSs read out by the EMCCD and can reach nearly 2 kHz for single-LOS detection. The light collected by the front optical lens can also be divided and partly transferred to a photomultiplier-tube array with specified bandpass filters, which provides faster sampling rates of up to 200 kHz. The spectroscopic diagnostic is routinely operated in EAST discharges, with absolute optical calibrations applied before and after each campaign, monitoring photon fluxes from impurities and H recycling in the upper divertor. This paper presents the technical details of the diagnostic and typical measurements during EAST discharges.

  3. Development of the FPI+ as facility science instrument for SOFIA cycle four observations

    NASA Astrophysics Data System (ADS)

    Pfüller, Enrico; Wiedemann, Manuel; Wolf, Jürgen; Krabbe, Alfred

    2016-08-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a heavily modified Boeing 747SP aircraft accommodating a 2.5 m infrared telescope. This airborne observation platform takes astronomers to flight altitudes of up to 13.7 km (45,000 ft) and therefore allows an unobstructed view of the infrared universe at wavelengths between 0.3 μm and 1600 μm. SOFIA is currently completing its fourth cycle of observations and utilizes eight different imaging and spectroscopic science instruments. New instruments for SOFIA's cycle 4 observations are the High-resolution Airborne Wideband Camera-plus (HAWC+) and the Focal Plane Imager (FPI+). The latter is an integral part of the telescope assembly and is used on every SOFIA flight to ensure precise tracking of the desired targets. The FPI+ serves as a visual-light photometer in its role as facility science instrument. Since the upgrade of the FPI camera and electronics in 2013, it uses a thermo-electrically cooled, science-grade EM-CCD sensor inside a commercial off-the-shelf Andor camera. The back-illuminated sensor has a peak quantum efficiency of 95% and a dark current as low as 0.01 e-/pix/sec. With this new hardware the telescope has successfully tracked on 16th magnitude stars, and the sky coverage, i.e., the fraction of the sky with suitable tracking stars, has increased to 99%. Before its use as an integrated tracking imager, the same type of camera was used as a standalone diagnostic tool to analyze the telescope pointing stability at frequencies up to 200 Hz (imaging at 400 fps). These measurements help to improve the telescope pointing control algorithms and thereby reduce image jitter in the focal plane. Science instruments benefit from this improvement through smaller image sizes for longer exposure times. The FPI has also been used to support astronomical observations such as stellar occultations by the dwarf planet Pluto and a number of exoplanet transits. The occultation observations in particular benefit from the camera's high sensitivity, fast readout capability, and low read noise, which made it possible to achieve high time resolution in the photometric light curves. This paper gives an overview of the development from standalone diagnostic camera, to upgraded guiding/tracking camera fully integrated into the telescope while still offering the diagnostic capabilities, and finally to facility science instrument on SOFIA.

  4. Fast sub-electron detectors review for interferometry

    NASA Astrophysics Data System (ADS)

    Feautrier, Philippe; Gach, Jean-Luc; Bério, Philippe

    2016-08-01

    New disruptive technologies are now emerging for detectors dedicated to interferometry. The detectors needed for this kind of application require conflicting characteristics: the detector noise must be very low, especially when the signal is dispersed, but the detector must at the same time sample the fast temporal characteristics of the signal. This paper describes the new fast, low-noise technologies that have been recently developed for interferometry and adaptive optics. The first technology is the Avalanche PhotoDiode (APD) infrared array made of HgCdTe. This paper presents the two programs that have been developed in that field: the Selex Saphira 320x256 [1] and the 320x255 RAPID detectors developed by Sofradir/CEA LETI in France [2], [3], [4]. Status of these two programs and future developments are presented. Sub-electron noise can now be achieved in the infrared using this technology. The exceptional characteristics of HgCdTe APDs are due to nearly exclusive impact ionization of the electrons, which is why these devices have been called "electron avalanche photodiodes" or e-APDs. These characteristics have inspired a large effort in developing focal plane arrays using HgCdTe APDs for low-photon-number applications such as active imaging in gated mode (2D) and/or with direct time-of-flight detection (3D imaging) and, more recently, passive imaging for infrared wavefront correction and fringe tracking in astronomical observations. In addition, a commercial camera solution called C-RED, based on the Selex Saphira and commercialized by First Light Imaging [5], is presented here. Some groups are also working with instruments in the visible. In that case, another disruptive technology is showing outstanding performance: the Electron Multiplying CCDs (EMCCDs) developed mainly by e2v technologies in the UK. 
    The OCAM2 camera, commercialized by First Light Imaging [5], uses the 240x240 EMCCD from e2v and is successfully implemented on the VEGA instrument of the CHARA interferometer (US) by the Lagrange laboratory of the Observatoire de la Côte d'Azur. By operating the detector at gain 1000, the readout noise is as low as 0.1 e-, and data can be analyzed with better contrast in photon-counting mode.
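
    The benefit of high EM gain can be sketched with the standard EMCCD noise model: the gain divides the effective read noise (100 e- of amplifier noise at gain 1000 appears as 0.1 e-), while the stochastic multiplication adds an excess noise factor F ≈ √2 that inflates the shot noise unless the camera is operated in photon counting. A minimal sketch of both regimes:

```python
from math import sqrt

def snr_em_analog(signal_e, read_noise_e, em_gain, enf=sqrt(2.0)):
    """SNR per pixel in analog (proportional) EM mode: the excess noise
    factor F ~ sqrt(2) multiplies the shot noise, while the effective read
    noise is the amplifier noise divided by the EM gain."""
    return signal_e / sqrt(enf ** 2 * signal_e + (read_noise_e / em_gain) ** 2)

def snr_shot_limited(signal_e):
    """Ideal shot-noise-limited SNR, approached in photon-counting operation
    (at most one photon counted per pixel per frame, no excess noise)."""
    return signal_e / sqrt(signal_e)
```

    The factor-of-F loss in analog mode is equivalent to halving the quantum efficiency, which is the usual motivation for photon-counting operation at faint flux.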

  5. Multi-color pyrometry imaging system and method of operating the same

    DOEpatents

    Estevadeordal, Jordi; Nirmalan, Nirm Velumylum; Tralshawala, Nilesh; Bailey, Jeremy Clyde

    2017-03-21

    A multi-color pyrometry imaging system for a high-temperature asset includes at least one viewing port in optical communication with at least one high-temperature component of the high-temperature asset. The system also includes at least one camera device in optical communication with the at least one viewing port. The at least one camera device includes a camera enclosure and at least one camera aperture defined in the camera enclosure. The at least one camera aperture is in optical communication with the at least one viewing port. The at least one camera device also includes a multi-color filtering mechanism coupled to the enclosure. The multi-color filtering mechanism is configured to sequentially transmit photons within a first predetermined wavelength band and transmit photons within a second predetermined wavelength band that is different than the first predetermined wavelength band.
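
    Multi-color (ratio) pyrometry recovers temperature from the intensity ratio measured in the two wavelength bands; for a graybody in the Wien approximation the ratio can be inverted in closed form. A sketch of that standard two-color relation (this is the general technique, not the patent's specific implementation):

```python
from math import exp, log

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_ratio(T, lam1, lam2):
    """Intensity ratio I(lam1)/I(lam2) of a graybody in the Wien
    approximation, assuming equal emissivity in both bands."""
    return (lam2 / lam1) ** 5 * exp(-(C2 / T) * (1.0 / lam1 - 1.0 / lam2))

def temperature_from_ratio(R, lam1, lam2):
    """Invert the Wien ratio for temperature (wavelengths in meters)."""
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (5.0 * log(lam2 / lam1) - log(R))
```

    Because the emissivity cancels in the ratio, sequential filtering at two bands yields a temperature map without needing the absolute radiance calibration that single-band pyrometry requires.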

  6. An automatic camera device for measuring waterfowl use

    USGS Publications Warehouse

    Cowardin, L.M.; Ashe, J.E.

    1965-01-01

    A Yashica Sequelle camera was modified and equipped with a timing device so that it would take pictures automatically at 15-minute intervals. Several of these cameras were used to photograph randomly selected quadrats located in different marsh habitats. The number of birds photographed in the different areas was used as an index of waterfowl use.

  7. Measuring the Human Ultra-Weak Photon Emission Distribution Using an Electron-Multiplying, Charge-Coupled Device as a Sensor.

    PubMed

    Ortega-Ojeda, Fernando; Calcerrada, Matías; Ferrero, Alejandro; Campos, Joaquín; Garcia-Ruiz, Carmen

    2018-04-10

    Ultra-weak photon emission (UPE) is the spontaneous emission from living systems, mainly attributed to oxidation reactions in which reactive oxygen species (ROS) may play a major role. Given the capability of next-generation electron-multiplying CCD (EMCCD) sensors and the ease of use of liquid crystal tunable filters (LCTF), the aim of this work was to explore the potential of a simple UPE spectrometer to measure the UPE from a human hand. Thus, a simple setup was configured based on a dark box into which the subject's hand was inserted, an LCTF as a monochromator, and an EMCCD sensor working in full vertical binning (FVB) mode as a spectral detector. Under controlled conditions, both dark signals and left-hand UPE were acquired by registering the UPE intensity at selected wavelengths (400, 450, 500, 550, 600, 650, and 700 nm) over a period of 10 min each. Spurious signals were then filtered out by ignoring pixels whose values were clearly outside the Gaussian distribution, and the dark signal was subtracted from the subject's hand signal. The stepped spectrum, with a peak of approximately 880 photons at 500 nm, had a shape broadly consistent with previous UPE research, which reported emission from 420 to 570 nm, or 260 to 800 nm, at rates of 1 to 1000 photons s^-1 cm^-2. Obtaining the spectral distribution instead of the total intensity of the UPE represents a step forward in this field, as it may provide extra information about a subject's personal state and relationship with ROS. A new generation of CCD sensors with lower dark signals, and spectrographs with a more uniform spectral transmittance, will open up new possibilities for configuring measuring systems in portable formats.
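
    The processing described, rejecting pixels outside the Gaussian distribution and subtracting the dark signal, can be sketched as a sigma-clipping step followed by dark subtraction. The threshold k is an assumption; the abstract does not give the cutoff used:

```python
from statistics import mean, stdev

def clean_and_subtract(signal_px, dark_px, k=3.0):
    """Discard pixels more than k sigma from the mean of the signal frame
    (e.g. cosmic-ray spikes), then subtract the mean dark level. k=3 is an
    assumed threshold, not the value used in the paper."""
    m, s = mean(signal_px), stdev(signal_px)
    kept = [p for p in signal_px if abs(p - m) <= k * s]
    return mean(kept) - mean(dark_px)
```

    At photon rates this low a single cosmic-ray hit can dwarf the real signal, so outlier rejection before dark subtraction is essential rather than cosmetic.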

  8. MOOSE: A Multi-Spectral Observatory Of Sensitive EMCCDs for innovative research in space physics and aeronomy

    NASA Astrophysics Data System (ADS)

    Samara, M.; Michell, R. G.; Hampton, D. L.; Trondsen, T.

    2012-12-01

    The Multi-Spectral Observatory Of Sensitive EMCCDs (MOOSE) consists of 5 imaging systems and is the result of an NSF-funded Major Research Instrumentation project. The main objective of MOOSE is to provide a resource to all members of the scientific community that have interests in imaging low-light-level phenomena, such as aurora, airglow, and meteors. Each imager consists of an Andor DU-888 Electron Multiplying CCD (EMCCD), combined with a telecentric optics section, made by Keo Scientific Ltd., with a selection of available angular fields of view. During the northern hemisphere winter the system is typically based and operated at Poker Flat Research Range in Alaska, but any or all imagers can be shipped anywhere in individual stand-alone cases. We will discuss the main components of the MOOSE project, including the imagers, optics, lenses and filters, as well as the Linux-based control software that enables remote operation. We will also discuss the calibration of the imagers along with the initial deployments and testing done. We are requesting community input regarding operational modes, such as filter and field of view combinations, frame rates, and potentially moving some imagers to other locations, either for tomography or for larger spatial coverage. In addition, given the large volume of auroral image data already available, we are encouraging collaborations for which we will freely distribute the data and any analysis tools already developed. Most significantly, initial science highlights relating to aurora, airglow and meteors will be discussed in the context of the creative and innovative ways that the MOOSE observatory can be used in order to address a new realm of science topics, previously unachievable with traditional single imager systems.

  9. A mobile device-based imaging spectrometer for environmental monitoring by attaching a lightweight small module to a commercial digital camera.

    PubMed

    Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing

    2017-11-15

    Spatially-explicit data are essential for remote sensing of ecological phenomena. Recent innovations in mobile device platforms have led to an upsurge in on-site rapid detection. For instance, CMOS chips in smart phones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and equipped on a Single Lens Reflex camera. Utilizing this lightweight module, as well as commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. Based on the experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial cameras and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.

  10. A simple and inexpensive pulsing device for data-recording cameras

    Treesearch

    David L. Sonderman

    1973-01-01

    In some areas of forestry and wood utilization research, use of automatic data recording equipment has become commonplace. This research note describes the basic electronic components needed to modify an existing intervalometer into a simplified pulsing device for controlling an automatic data recording camera. The pulsing device is easily assembled and inexpensive,...

  11. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera

    PubMed Central

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-01-01

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal-oxide semiconductor) cameras cannot effectively capture such rapid phenomena at high speed and high resolution. In this paper, we take into account the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a temporal resolution of 100 fps (frames per second) using a 25 fps camera. PMID:26959023
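
    The abstract names a "three-element median quicksort" without further detail. As a sketch of the median-of-three primitive that such methods are built on (this is a generic illustration, not the authors' reconstruction algorithm):

```python
def median3(a, b, c):
    """Median of three values without a full sort -- the classic
    median-of-three selection step used to pick quicksort pivots."""
    if (a - b) * (a - c) <= 0:  # a lies between b and c
        return a
    if (b - a) * (b - c) <= 0:  # b lies between a and c
        return b
    return c

def quicksort(xs):
    """Quicksort with median-of-three pivot selection, which avoids the
    worst-case behavior of naive first-element pivoting on sorted input."""
    if len(xs) <= 1:
        return list(xs)
    pivot = median3(xs[0], xs[len(xs) // 2], xs[-1])
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    return quicksort(lo) + eq + quicksort(hi)
```

    In a coded-exposure pipeline an ordering step of this kind could serve to rank per-pixel temporal samples cheaply, but how the paper applies it is not specified in the abstract.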

  12. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera.

    PubMed

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-03-04

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal-oxide semiconductor) cameras cannot effectively capture such rapid phenomena at high speed and high resolution. In this paper, we take into account the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a temporal resolution of 100 fps (frames per second) using a 25 fps camera.

  13. Optimization of a miniature short-wavelength infrared objective optics of a short-wavelength infrared to visible upconversion layer attached to a mobile-devices visible camera

    NASA Astrophysics Data System (ADS)

    Kadosh, Itai; Sarusi, Gabby

    2017-10-01

    The use of dual cameras in parallax to detect and create 3-D images in mobile devices has been increasing over the last few years. We propose a concept in which the second camera operates in the short-wavelength infrared (SWIR, 1300 to 1800 nm) and thus has night-vision capability, while preserving most of the other advantages of dual cameras in terms of depth and 3-D capabilities. In order to maintain commonality of the two cameras, we propose to attach to one of the cameras a SWIR-to-visible upconversion layer that converts the SWIR image into a visible image. For this purpose, the fore optics (the objective lenses) should be redesigned for the SWIR spectral range and for the additional upconversion layer, whose thickness is <1 μm. Such a layer should be attached in close proximity to the mobile device's visible-range camera sensor (the CMOS sensor). This paper presents such a SWIR objective optical design and optimization, which fits mechanically within the visible objective design but uses different lenses, in order to maintain commonality and serve as a proof of concept. Such a SWIR objective design is very challenging, since it requires mimicking the original visible mobile camera lenses' sizes and mechanical housing so as to adhere to the visible optical and mechanical design. We present an in-depth feasibility study and the overall optical system performance of such a SWIR mobile-device camera fore-optics design.

  14. 3-dimensional telepresence system for a robotic environment

    DOEpatents

    Anderson, Matthew O.; McKay, Mark D.

    2000-01-01

    A telepresence system includes a camera pair remotely controlled by a control module affixed to an operator. The camera pair provides for three-dimensional viewing, and the control module, affixed to the operator, affords hands-free operation of the camera pair. In one embodiment, the control module is affixed to the head of the operator and an initial position is established. A triangulating device is provided to track the head movement of the operator relative to the initial position. A processor module receives input from the triangulating device to determine where the operator has moved relative to the initial position and moves the camera pair in response thereto. The movement of the camera pair is predetermined by a software map having a plurality of operation zones. Each zone therein corresponds to unique camera movement parameters such as speed of movement. Speed parameters include constant, increasing, or decreasing speed. Other parameters include panning, tilting, sliding, raising, or lowering of the cameras. Other user interface devices are provided to improve the three-dimensional control capabilities of an operator in a local operating environment. Such other devices include a pair of visual display glasses, a microphone, and a remote actuator. The pair of visual display glasses is provided to facilitate three-dimensional viewing, hence depth perception. The microphone affords hands-free camera movement by utilizing voice commands. The actuator allows the operator to remotely control various robotic mechanisms in the remote operating environment.

  15. A novel fully integrated handheld gamma camera

    NASA Astrophysics Data System (ADS)

    Massari, R.; Ucci, A.; Campisi, C.; Scopinaro, F.; Soluri, A.

    2016-10-01

    In this paper, we present an innovative, fully integrated handheld gamma camera, designed to combine in the same device the gamma-ray detector, the display, and the embedded computing system. The low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle, since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact, and lightweight gamma camera for fast radiopharmaceutical imaging. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The proposed prototype consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we have developed very-low-power readout electronics and a dedicated analog-to-digital conversion system. One of the most critical aspects we faced in designing the prototype was the low power consumption, which is mandatory for a battery-operated device. We have applied this detection device to the lymphoscintigraphy technique (sentinel lymph node mapping), comparing the results obtained with those of a commercial gamma camera (Philips SKYLight). The results confirm a rapid response of the device and an adequate spatial resolution for use in scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. This device is designed for radioguided surgery and small-organ imaging, but it could easily be integrated into surgical navigation systems.

  16. Computer vision camera with embedded FPGA processing

    NASA Astrophysics Data System (ADS)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks, where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium-size one, equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (such as VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.

  17. Cost Effective Paper-Based Colorimetric Microfluidic Devices and Mobile Phone Camera Readers for the Classroom

    ERIC Educational Resources Information Center

    Koesdjojo, Myra T.; Pengpumkiat, Sumate; Wu, Yuanyuan; Boonloed, Anukul; Huynh, Daniel; Remcho, Thomas P.; Remcho, Vincent T.

    2015-01-01

    We have developed a simple and direct method to fabricate paper-based microfluidic devices that can be used for a wide range of colorimetric assay applications. With these devices, assays can be performed within minutes to allow for quantitative colorimetric analysis by use of a widely accessible iPhone camera and an RGB color reader application…

  18. Temperature measurement with industrial color camera devices

    NASA Astrophysics Data System (ADS)

    Schmidradler, Dieter J.; Berndorfer, Thomas; van Dyck, Walter; Pretschuh, Juergen

    1999-05-01

    This paper discusses color-camera-based temperature measurement. Usually, visual imaging and infrared image sensing are treated as two separate disciplines. We show that a well-selected color camera device can be a cheaper, more robust, and more sophisticated solution for optical temperature measurement in several cases. Herein, only implementation fragments and important restrictions for the sensing element are discussed. Our aim is to draw the reader's attention to the use of visual image sensors for measuring thermal radiation and temperature, and to give reasons for the need for improved technologies for infrared camera devices. With AVL List, our industry partner, we successfully used the proposed sensor to perform temperature measurement of flames inside the combustion chamber of diesel engines, which finally led to the presented insights.
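
    The paper's algorithm is not disclosed in the abstract; the standard principle behind measuring flame temperature with two color channels is two-color (ratio) pyrometry under the Wien approximation, in which graybody emissivity cancels. A minimal sketch (channel wavelengths and intensities hypothetical):

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def ratio_temperature(i1, i2, lam1, lam2):
    """Two-color (ratio) pyrometry under the Wien approximation.

    i1, i2: intensities in two camera channels centred at wavelengths
    lam1, lam2 (metres).  From I ~ eps * lam^-5 * exp(-C2/(lam*T)),
    the emissivity eps cancels in the ratio for a graybody, giving
    T = C2*(1/lam2 - 1/lam1) / (ln(i1/i2) - 5*ln(lam2/lam1))."""
    r = i1 / i2
    return C2 * (1.0 / lam2 - 1.0 / lam1) / (math.log(r) - 5.0 * math.log(lam2 / lam1))
```

    This shows the principle only; a real implementation must also handle the camera's spectral response and nonlinearity.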

  19. Experimental setup for investigation of nanoclusters at cryogenic temperatures by electron spin resonance and optical spectroscopies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, S., E-mail: maoshunghost@tamu.edu; Meraki, A.; McColgan, P. T.

    2014-07-15

    We present the design and performance of an experimental setup for simultaneous electron spin resonance (ESR) and optical studies of nanoclusters with stabilized free radicals at cryogenic temperatures. A gas mixture of impurities and helium, after passing through an RF discharge for dissociation of molecules, is directed onto the surface of superfluid helium to form the nanoclusters of impurities. A specially designed ESR cavity operated in the TE₀₁₁ mode allows optical access to the sample. The cavity is incorporated into a homemade insert which is placed inside a variable temperature insert of a Janis ⁴He cryostat. The temperature range for sample investigation is 1.25–300 K. A Bruker EPR 300E and an Andor 500i optical spectrograph incorporated with a Newton EMCCD camera are used for ESR and optical registration, respectively. The current experimental system makes it possible to study the ESR and optical spectra of impurity-helium condensates simultaneously. The setup allows a broad range of research at low temperatures, including optically detected magnetic resonance, studies of chemical processes of the active species produced by photolysis in solid matrices, and investigations of nanoclusters produced by laser ablation in superfluid helium.

  20. Development and experimental testing of an optical micro-spectroscopic technique incorporating true line-scan excitation.

    PubMed

    Biener, Gabriel; Stoneman, Michael R; Acbas, Gheorghe; Holz, Jessica D; Orlova, Marianna; Komarova, Liudmila; Kuchin, Sergei; Raicu, Valerică

    2013-12-27

    Multiphoton micro-spectroscopy, employing diffraction optics and electron-multiplying CCD (EMCCD) cameras, is a suitable method for determining protein complex stoichiometry, quaternary structure, and spatial distribution in living cells using Förster resonance energy transfer (FRET) imaging. The method provides highly resolved spectra of molecules or molecular complexes at each image pixel, and it does so on a timescale shorter than that of molecular diffusion, which scrambles the spectral information. Acquisition of an entire spectrally resolved image, however, is slower than that of broad-bandwidth microscopes because it takes longer to collect the same number of photons at each emission wavelength as in a broad bandwidth. Here, we demonstrate an optical micro-spectroscopic scheme that employs a laser beam shaped into a line to excite multiple sample voxels in parallel. The method presents dramatically increased sensitivity and/or acquisition speed and, at the same time, has excellent spatial and spectral resolution, similar to point-scan configurations. When applied to FRET imaging using an oligomeric FRET construct expressed in living cells and consisting of a FRET acceptor linked to three donors, the technique based on line-shaped excitation provides higher accuracy compared to the point-scan approach, and it reduces artifacts caused by photobleaching and other undesired photophysical effects.

  1. Common fluorescent proteins for single-molecule localization microscopy

    NASA Astrophysics Data System (ADS)

    Klementieva, Natalia V.; Bozhanova, Nina G.; Mishina, Natalie M.; Zagaynova, Elena V.; Lukyanov, Konstantin A.; Mishin, Alexander S.

    2015-07-01

    Super-resolution techniques for breaking the diffraction barrier are now widespread across many studies. Single-molecule localization microscopy methods such as PALM, STORM, and GSDIM make it possible to obtain super-resolved images of cell ultrastructure by precisely localizing individual fluorescent molecules via their temporal isolation. However, these methods presuppose the use of fluorescent dyes and proteins with special characteristics (photoactivation/photoconversion). At the same time, there is a need to retain high photostability of fluorophores during long-term acquisition. Here, we first show the potential of common red fluorescent proteins for single-molecule localization microscopy based on spontaneous intrinsic blinking. We also assessed the effect of different imaging media on photobleaching of these fluorescent proteins. Monomeric orange and red fluorescent proteins were examined for stochastic switching from a dark state to a bright fluorescent state. We studied fusions with cytoskeletal proteins in NIH/3T3 and HeLa cells. Imaging was performed on a Nikon N-STORM system equipped with an EMCCD camera. To define the optimal imaging conditions, we tested several types of cell culture media and buffers. As a result, high-resolution images of cytoskeleton structure were obtained. Essentially, low-intensity light was sufficient to initiate the switching of the tested red fluorescent proteins, reducing phototoxicity and enabling long-term live-cell imaging.

  2. NESSI and `Alopeke: Two new dual-channel speckle imaging instruments

    NASA Astrophysics Data System (ADS)

    Scott, Nicholas J.

    2018-01-01

    NESSI and `Alopeke are two new speckle imagers built at NASA's Ames Research Center for community use at the WIYN and Gemini telescopes, respectively. The two instruments are functionally similar and include the capability for wide-field imaging in addition to speckle interferometry. The diffraction-limited imaging available through speckle effectively eliminates distortions due to the presence of Earth's atmosphere by 'freezing out' atmospheric changes: extremely short exposures are taken and the resultant speckles are combined in Fourier space. This technique enables angular resolutions equal to the theoretical best possible for a given telescope, effectively giving space-based resolution from the ground. Our instruments provide the highest spatial resolution available today on any single-aperture telescope. A primary role of these instruments is exoplanet validation for the Kepler, K2, TESS, and many RV programs. Contrast ratios of 6 or more magnitudes are easily obtained. Each instrument uses two EMCCD cameras, providing simultaneous dual-color observations that help characterize detected companions. High-resolution imaging enables the identification of blended binaries that contaminate many exoplanet detections and lead to incorrectly measured radii. In this way small, rocky systems, such as Kepler-186b and the TRAPPIST-1 planet family, may be validated and the detected planets' radii correctly measured.

  3. Achievable Rate Estimation of IEEE 802.11ad Visual Big-Data Uplink Access in Cloud-Enabled Surveillance Applications.

    PubMed

    Kim, Joongheon; Kim, Jong-Kook

    2016-01-01

    This paper addresses the computation procedures for estimating the impact of interference in 60 GHz IEEE 802.11ad uplink access in order to construct a visual big-data database from randomly deployed surveillance camera sensing devices. The acquired large-scale massive visual information from surveillance camera devices will be used to organize the big-data database, i.e., this estimation is essential for constructing a centralized cloud-enabled surveillance database. This performance estimation study captures interference impacts on the target cloud access points from multiple interference components generated by 60 GHz wireless transmissions from nearby surveillance camera devices to their associated cloud access points. Under this uplink interference scenario, the interference impacts on the main wireless transmission from a target surveillance camera device to its associated target cloud access point are measured and estimated for a number of settings, taking into consideration 60 GHz radiation characteristics and antenna radiation pattern models.
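
    The aggregation step such a study relies on (summing interference contributions from neighboring uplinks before taking the signal ratio) can be sketched as follows; this is a generic SINR computation, not the paper's specific 802.11ad model, and all parameter values are hypothetical:

```python
import math

def sinr_db(p_signal_w, interferer_powers_w, noise_w):
    """Signal-to-interference-plus-noise ratio in dB.

    Interference powers from neighbouring camera uplinks are summed in the
    linear (watt) domain before the ratio is taken and converted to dB."""
    return 10.0 * math.log10(p_signal_w / (noise_w + sum(interferer_powers_w)))
```

    A full model would derive each interferer's received power from path loss and the 60 GHz antenna radiation pattern toward the target access point.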

  4. Feasibility Study of Utilization of Action Camera, GoPro Hero 4, Google Glass, and Panasonic HX-A100 in Spine Surgery.

    PubMed

    Lee, Chang Kyu; Kim, Youngjun; Lee, Nam; Kim, Byeongwoo; Kim, Doyoung; Yi, Seong

    2017-02-15

    This is a study of the feasibility of commercially available action cameras for recording video of spine surgery. Recent innovations in wearable action cameras with high-definition video recording enable surgeons to use a camera during an operation with ease and without high costs. The purpose of this study is to compare the feasibility, safety, and efficacy of commercially available action cameras in recording video of spine surgery. There are early reports of medical professionals using Google Glass throughout the hospital, the Panasonic HX-A100 action camera, and GoPro cameras; this study is the first report for spine surgery. Three commercially available cameras were tested: the GoPro Hero 4 Silver, Google Glass, and the Panasonic HX-A100 action camera. A typical spine surgery, posterior lumbar laminectomy and fusion, was selected for video recording. The three cameras were used by one surgeon, and video was recorded throughout the operation. The comparison was made from the perspectives of human factors, specifications, and video quality. The most convenient and lightweight device to wear and hold throughout the long operation time was Google Glass. Regarding image quality, all devices except Google Glass supported HD format, and the GoPro uniquely offers 2.7K or 4K resolution; video resolution was best on the GoPro. Regarding field of view, the GoPro can adjust the point of interest and field of view according to the surgery, and its narrow-FOV option was the best for recording video clips to share. Google Glass has further potential through its application programs. Connectivity such as Wi-Fi and Bluetooth enables video streaming for an audience, but only Google Glass has an in-device two-way communication feature. Action cameras have the potential to improve patient safety, operator comfort, and procedure efficiency in the field of spinal surgery, and to broadcast surgery, as the devices and their application programs develop in the future.

  5. A four-lens based plenoptic camera for depth measurements

    NASA Astrophysics Data System (ADS)

    Riou, Cécile; Deng, Zhiyuan; Colicchio, Bruno; Lauffenburger, Jean-Philippe; Kohler, Sophie; Haeberlé, Olivier; Cudel, Christophe

    2015-04-01

    In previous works, we extended the principles of "variable homography", defined by Zhang and Greenspan, to measure the height of emergent fibers on glass and non-woven fabrics. This method was defined for fabric samples progressing on a conveyor belt, and triggered acquisition of two successive images was needed to perform the 3D measurement. In this work, we have retained the advantages of variable homography for measurements along the Z axis, but we have reduced the number of acquisitions to a single one by developing an acquisition device with four lenses placed in front of a single image sensor. The idea is to obtain four projected sub-images on a single CCD sensor. The device thus becomes a plenoptic or light-field camera, capturing multiple views on the same image sensor. We have adapted the variable homography formulation for this device and propose a new formulation to calculate depth with plenoptic cameras. With these results, we have transformed our plenoptic camera into a depth camera, and the first results are very promising.
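
    The variable-homography formulation itself is not reproduced in the abstract; in the simplest pinhole case, though, depth from two of the four sub-images reduces to the classical disparity relation. A minimal sketch (all parameter values hypothetical):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole-model depth of a point seen in two sub-images of a multi-lens
    camera: Z = f * B / d, with f the focal length in pixels, B the distance
    between lens centres in metres, and d the pixel shift of the point
    between the two sub-images."""
    return focal_px * baseline_m / disparity_px
```

    With four sub-images, each of the six lens pairs gives such an estimate, which can be averaged for robustness.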

  6. Applications of digital image acquisition in anthropometry

    NASA Technical Reports Server (NTRS)

    Woolford, B.; Lewis, J. L.

    1981-01-01

    A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.
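
    As an illustration of the camera geometry described above (assuming idealized orthographic views and unit pixel scale, which the paper does not state), the fusion of the 'Y' camera view of the X-Z plane with one 'X' camera view of the Y-Z plane can be sketched as:

```python
def combine_views(y_cam_px, x_cam_px):
    """Merge pixel coordinates of the bright spot from the 'Y' camera
    (which sees the X-Z plane) and one 'X' camera (which sees the Y-Z
    plane) into a 3-D point.  Each argument is (horizontal_px,
    vertical_px); both cameras see the spot's height, so Z is averaged."""
    x, z_from_y = y_cam_px
    y, z_from_x = x_cam_px
    return (x, y, (z_from_y + z_from_x) / 2.0)
```

    A real system would first convert pixels to physical units via camera calibration.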

  7. On the performances of computer vision algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.

    2012-01-01

    Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we consider different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests were done to compare the performances of the mobile platforms involved: the Nokia N900, LG Optimus One, and Samsung Galaxy SII.

  8. SHOK—The First Russian Wide-Field Optical Camera in Space

    NASA Astrophysics Data System (ADS)

    Lipunov, V. M.; Gorbovskoy, E. S.; Kornilov, V. G.; Panasyuk, M. I.; Amelushkin, A. M.; Petrov, V. L.; Yashin, I. V.; Svertilov, S. I.; Vedenkin, N. N.

    2018-02-01

    Two fast, fixed, very wide-field SHOK cameras are installed onboard the Lomonosov spacecraft. The main goal of this experiment is the observation of GRB optical emission before, synchronously with, and after the gamma-ray emission. The field of view of each camera is placed in the gamma-ray burst detection area of the other devices located onboard the Lomonosov spacecraft. SHOK provides measurements of optical emissions with a magnitude limit of ˜9-10m on a single frame with an exposure of 0.2 seconds. The device is designed for continuous sky monitoring at optical wavelengths over a very wide field of view (1000 square degrees per camera), and for detection and localization of fast time-varying (transient) optical sources on the celestial sphere, including provisional and synchronous time recording of optical emissions from the gamma-ray burst error boxes detected by the BDRG device, initiated by a control signal (alert trigger) from the BDRG. The Lomonosov spacecraft carries two identical devices, SHOK1 and SHOK2. The core of each SHOK device is a fast 11-megapixel CCD. Each SHOK device is a monoblock consisting of an optical-emission observation node, an electronics node, elements of the mechanical construction, and the body.

  9. 77 FR 26041 - Certain Cameras and Mobile Devices, Related Software and Firmware, and Components Thereof and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-02

    ... Software and Firmware, and Components Thereof and Products Containing the Same; Institution of..., related software and firmware, and components thereof and products containing the same by reason of... after importation of certain cameras and mobile devices, related software and firmware, and components...

  10. Comparison between different cost devices for digital capture of X-ray films: an image characteristics detection approach.

    PubMed

    Salazar, Antonio José; Camacho, Juan Camilo; Aguirre, Diego Andrés

    2012-02-01

    A common teleradiology practice is digitizing films. The costs of specialized digitizers are very high, which is why there is a trend to use conventional scanners and digital cameras instead. Statistical clinical studies, which are very difficult to carry out, would be required to determine the accuracy of these devices. The purpose of this study was to compare three capture devices in terms of their capacity to detect several image characteristics. Spatial resolution, contrast, gray levels, and geometric deformation were compared for a specialized digitizer, the ICR (US$ 15,000); a conventional scanner, the UMAX (US$ 1,800); and a digital camera, the LUMIX (US$ 450, but requiring an additional support system and a light box for about US$ 400). Test patterns printed on films were used. The results showed gray levels lower than real values for all three devices, and acceptable contrast and low geometric deformation for all three. All three devices are appropriate solutions, but a digital camera requires more operator training and more settings.

  11. Spectral colors capture and reproduction based on digital camera

    NASA Astrophysics Data System (ADS)

    Chen, Defen; Huang, Qingmei; Li, Wei; Lu, Yang

    2018-01-01

    The purpose of this work is to develop a method for the accurate reproduction of spectral colors captured by a digital camera. The spectral colors, being the purest colors of any hue, are difficult to reproduce without distortion on digital devices. In this paper, we attempt to achieve accurate hue reproduction of the spectral colors by focusing on two steps of color correction: the capture of the spectral colors and the color characterization of the digital camera. This determines the relationship among the spectral color wavelength, the RGB color space of the digital camera device, and the CIEXYZ color space. The study also provides a basis for further studies related to spectral color reproduction on digital devices. Methods such as wavelength calibration of the spectral colors and digital camera characterization were utilized. The spectrum was obtained through a grating spectroscopy system. A photo of a clear and reliable primary spectrum was taken by adjusting the relevant parameters of the digital camera, from which the RGB values of the color spectrum were extracted at 1040 equally divided locations. Two wavelength values were obtained for each location: one calculated using the grating equation and one measured by a spectrophotometer. The polynomial fitting method for camera characterization was used to achieve color correction. After wavelength calibration, the maximum error between the two sets of wavelengths is 4.38 nm. Using the polynomial fitting method, the average color difference of the test samples is 3.76. This satisfies the application needs of the spectral colors in digital devices such as displays and transmission.
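
    The grating equation referred to above is the standard diffraction relation; a sketch of computing the wavelength at a given spectrum location (groove spacing and angles are hypothetical illustrative values, not the paper's):

```python
import math

def grating_wavelength(d_m, theta_i_deg, theta_m_deg, order=1):
    """Diffraction-grating equation  m*lambda = d*(sin(theta_m) - sin(theta_i)):
    wavelength from the groove spacing d (metres), the incidence angle
    theta_i, and the diffraction angle theta_m of order m."""
    return d_m * (math.sin(math.radians(theta_m_deg))
                  - math.sin(math.radians(theta_i_deg))) / order
```

    Mapping each of the 1040 spectrum locations to a diffraction angle then yields the wavelength axis that is compared against the spectrophotometer measurements.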

  12. Solid state television camera

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The design, fabrication, and tests of a solid state television camera using a new charge-coupled imaging device are reported. An RCA charge-coupled device arranged in a 512 by 320 format and directly compatible with EIA format standards was the sensor selected. This is a three-phase, sealed surface-channel array that has 163,840 sensor elements and employs a vertical frame transfer system for image readout. Included are test results of the complete camera system, a circuit description with the changes made to circuits as a result of integration and test, a maintenance and operation section, recommendations to improve the camera system, and a complete set of electrical and mechanical drawing sketches.

  13. Calculation for simulation of archery goal value using a web camera and ultrasonic sensor

    NASA Astrophysics Data System (ADS)

    Rusjdi, Darma; Abdurrasyid, Wulandari, Dewi Arianti

    2017-08-01

    Development of an embedded-systems-based digital indoor archery simulator is a solution to the limited availability of adequate fields or open space, especially in big cities. Developing the device requires simulations that calculate the target score, based on a parabolic-motion model parameterized by the initial velocity and direction of the arrow as it travels toward the target. The simulator device should be complemented with an initial-velocity measuring device using ultrasonic sensors and a device measuring direction toward the target using a digital camera. The methodology uses research and development of application software with a modeling and simulation approach. The research objective is to create simulation applications that calculate the arrow's target score, as a preliminary stage for development of the archery simulator device. Implementing the score calculation in an application program yields an archery simulation game that can be used as a reference for development of a digital indoor archery simulator with embedded systems, ultrasonic sensors, and web cameras. The application was developed with the simulation calculation compared against the outer radius of the target circle as captured by a camera from a distance of three meters.
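
    The parabolic-motion score calculation described can be illustrated as follows (drag is ignored and all parameter values, including the ring scoring, are hypothetical, not the paper's):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def arrow_height(v0, theta_deg, x, launch_h=1.5):
    """Arrow height after horizontal distance x, from the parabolic-motion
    model: y = h0 + x*tan(theta) - g*x^2 / (2*(v0*cos(theta))^2)."""
    th = math.radians(theta_deg)
    return launch_h + x * math.tan(th) - G * x * x / (2.0 * (v0 * math.cos(th)) ** 2)

def ring_score(miss_m, target_radius_m=0.61, rings=10):
    """Archery-style score: 10 for the centre ring, 0 outside the target face."""
    if miss_m >= target_radius_m:
        return 0
    return rings - int(miss_m / (target_radius_m / rings))
```

    The simulator would obtain v0 from the ultrasonic sensor, the aim direction from the camera, evaluate the impact point at the target distance, and convert the radial miss distance into a score.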

  14. Protective laser beam viewing device

    DOEpatents

    Neil, George R.; Jordan, Kevin Carl

    2012-12-18

    A protective laser beam viewing system or device including a camera selectively sensitive to laser light wavelengths and a viewing screen receiving images from the laser sensitive camera. According to a preferred embodiment of the invention, the camera is worn on the head of the user or incorporated into a goggle-type viewing display so that it is always aimed at the area of viewing interest to the user and the viewing screen is incorporated into a video display worn as goggles over the eyes of the user.

  15. Video sensor with range measurement capability

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Broderick, David J. (Inventor)

    2008-01-01

    A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser spaced a predetermined distance from the camera for, when activated, producing a laser beam. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and angles determined from the light spots on the video images produced by the camera.
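
    The range computation from the known camera-laser spacing is a similar-triangles triangulation; a minimal sketch (function names and values hypothetical), averaging over the multiple spots produced by the diffractive optic:

```python
def spot_range(baseline_m, focal_px, spot_offset_px):
    """Similar-triangles range to one laser spot: R = B * f / offset, where
    the spot appears offset_px pixels from the image centre, B is the
    camera-laser baseline, and f is the focal length in pixels."""
    return baseline_m * focal_px / spot_offset_px

def mean_range(baseline_m, focal_px, spot_offsets_px):
    """Average the ranges triangulated from several projected spots."""
    ranges = [spot_range(baseline_m, focal_px, o) for o in spot_offsets_px]
    return sum(ranges) / len(ranges)
```

    The patent's processor would additionally account for the known angles of the diffracted beams; that refinement is omitted here.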

  16. Flow visualization by mobile phone cameras

    NASA Astrophysics Data System (ADS)

    Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.

    2016-06-01

    Mobile smartphones have completely changed people's communication within the last ten years. However, these devices do not only offer communication through different channels but also applications for fun and recreation. In this respect, mobile phone cameras now include relatively fast (up to 240 Hz) modes to capture high-speed videos of sporting events or other fast processes. This article therefore explores the possibility of making use of this development, and of the widespread availability of these cameras, for velocity measurements in industrial or technical applications and for fluid dynamics education in high schools and at universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment on a free water jet was used to prove the concept, shed some light on the achievable quality, and determine bottlenecks by comparing the results obtained with a mobile phone camera against data taken by a high-speed camera suited for scientific experiments.
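
    The core operation of any PIV system is finding the particle displacement that maximizes the cross-correlation between two frames. A minimal 1-D sketch of that idea (a real PIV code correlates 2-D interrogation windows and normalizes the correlation):

```python
def best_shift(row_a, row_b, max_shift):
    """Displacement (in pixels) that best aligns row_b with row_a, found by
    maximising the direct cross-correlation -- the core operation of PIV."""
    best_s, best_score = 0, float("-inf")
    n = len(row_a)
    for s in range(-max_shift, max_shift + 1):
        score = sum(row_a[i] * row_b[i + s]
                    for i in range(n) if 0 <= i + s < n)
        if score > best_score:
            best_s, best_score = s, score
    return best_s
```

    Dividing the displacement by the inter-frame time (e.g. 1/240 s for a 240 Hz phone camera) and the pixel scale gives the velocity.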

  17. Remote media vision-based computer input device

    NASA Astrophysics Data System (ADS)

    Arabnia, Hamid R.; Chen, Ching-Yi

    1991-11-01

In this paper, we introduce a vision-based computer input device which has been built at the University of Georgia. The user of this system gives commands to the computer without touching any physical device. The system receives input through a CCD camera; it is PC-based and is built on top of the DOS operating system. The major components of the input device are a monitor, an image-capturing board, a CCD camera, and software (developed by us). These are interfaced with a standard PC running under the DOS operating system.

  18. Spacecraft 3D Augmented Reality Mobile App

    NASA Technical Reports Server (NTRS)

    Hussey, Kevin J.; Doronila, Paul R.; Kumanchik, Brian E.; Chan, Evan G.; Ellison, Douglas J.; Boeck, Andrea; Moore, Justin M.

    2013-01-01

    The Spacecraft 3D application allows users to learn about and interact with iconic NASA missions in a new and immersive way using common mobile devices. Using Augmented Reality (AR) techniques to project 3D renditions of the mission spacecraft into real-world surroundings, users can interact with and learn about Curiosity, GRAIL, Cassini, and Voyager. Additional updates on future missions, animations, and information will be ongoing. Using a printed AR Target and camera on a mobile device, users can get up close with these robotic explorers, see how some move, and learn about these engineering feats, which are used to expand knowledge and understanding about space. The software receives input from the mobile device's camera to recognize the presence of an AR marker in the camera's field of view. It then displays a 3D rendition of the selected spacecraft in the user's physical surroundings, on the mobile device's screen, while it tracks the device's movement in relation to the physical position of the spacecraft's 3D image on the AR marker.

  19. Single chip camera device having double sampling operation

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Nixon, Robert (Inventor)

    2002-01-01

A single-chip camera device is formed on a single substrate, including an image acquisition portion, a control portion, and a timing circuit formed on the substrate. The timing circuit also controls the photoreceptors in a double-sampling mode in which a reset level is first read and then, after an integration time, a charge level is read.
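The double-sampling read can be illustrated numerically: subtracting each pixel's reset-level read from its post-integration read cancels the fixed reset offset, leaving only the photo-generated signal. The sample values below are illustrative, not from the patent.

```python
# Sketch of the double-sampling (correlated double sampling) principle:
# read the reset level first, then the charge level after integration,
# and difference the two per pixel.

reset_level = [512, 498, 505]    # ADC counts at reset, per pixel
charge_level = [812, 640, 505]   # ADC counts after integration

signal = [c - r for c, r in zip(charge_level, reset_level)]
# the third pixel received no light, so its signal is zero
```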

  20. LAMOST CCD camera-control system based on RTS2

    NASA Astrophysics Data System (ADS)

    Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng

    2018-05-01

The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and it provides a reference solution for fully introducing RTS2 into the LAMOST observatory control system.
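The master-slave fan-out described above can be sketched as a master module dispatching one command to a worker per real camera and collecting their statuses. The camera names and the stand-in `expose()` function are illustrative; the actual system issues commands to 32 CCD controllers through the RTS2 framework.

```python
from concurrent.futures import ThreadPoolExecutor

CAMERAS = [f"ccd{n:02d}" for n in range(32)]   # illustrative names

def expose(camera_id, seconds):
    # stand-in for a real per-camera RTS2 command
    return (camera_id, "EXPOSING", seconds)

def master_expose(seconds):
    """Fan one exposure command out to all slave cameras, keeping order."""
    with ThreadPoolExecutor(max_workers=len(CAMERAS)) as pool:
        return list(pool.map(lambda cam: expose(cam, seconds), CAMERAS))

statuses = master_expose(30.0)   # one status tuple per camera
```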

  1. A solid state lightning propagation speed sensor

    NASA Technical Reports Server (NTRS)

    Mach, Douglas M.; Rust, W. David

    1989-01-01

A device to measure the propagation speeds of cloud-to-ground lightning has been developed. The lightning propagation speed (LPS) device consists of eight solid-state silicon photodetectors mounted behind precision horizontal slits in the focal plane of a 50-mm lens on a 35-mm camera. Although the LPS device produces results similar to those obtained from a streaking camera, the LPS device has the advantages of smaller size, lower cost, mobile use, and easier data collection and analysis. The maximum accuracy of the LPS is 0.2 microseconds, compared with about 0.8 microseconds for the streaking camera. It is found that the return-stroke propagation speed for triggered lightning differs from that for natural lightning if measurements are taken over channel segments less than 500 m. It is suggested that there are no significant differences between the propagation speeds of positive and negative flashes. Also, differences between natural and triggered dart leaders are discussed.
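The speed calculation implied by the slit geometry can be sketched as follows: each photodetector views a known height interval on the lightning channel, so the speed over a segment is that interval divided by the time lag between detectors. The segment length and arrival times below are made up for the example.

```python
# Illustrative propagation-speed calculation for a slit-detector array:
# speed over each channel segment = segment length / detector time lag.

slit_spacing_m = 500.0                 # channel segment seen between slits (assumed)
arrival_times_us = [0.0, 4.0, 9.0]     # light arrival at successive detectors

speeds = [slit_spacing_m / ((t2 - t1) * 1e-6)
          for t1, t2 in zip(arrival_times_us, arrival_times_us[1:])]
# e.g. 500 m in 4 us -> 1.25e8 m/s, a plausible return-stroke speed
```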

  2. A computational approach to real-time image processing for serial time-encoded amplified microscopy

    NASA Astrophysics Data System (ADS)

    Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi

    2016-03-01

High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable capturing images with a frame rate 1,000 times faster than conventional methods such as CCD (charge-coupled device) cameras. Applying this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system including a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 Gsamples/s and 8-bit resolution to capture the output voltage signal that carries grayscale images from the STEAM camera; the direct data output from the STEAM camera therefore generates 7.0 GB/s continuously. We provided a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices to accelerate the identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device, and a GPU device, and evaluated its performance in real-time identification of small particles (beads), serving as virtual biological cells, flowing through a microfluidic channel.
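The image-reconstruction step the FPGA performs can be sketched simply: the camera output is one continuous digitized stream, and slicing it into fixed-length lines (one per laser sweep) rebuilds 2-D frames. The 8-sample line length and the tiny stream below are illustrative; the data-rate arithmetic uses the figures quoted in the abstract (7.0 Gsamples/s at 8 bits, i.e. one byte per sample).

```python
import numpy as np

# Reshape a serialized STEAM voltage stream into a 2-D frame, and check
# the sustained data rate implied by the quoted ADC figures.

samples_per_line = 8                            # assumed sweep length
stream = np.arange(32, dtype=np.uint8)          # stand-in for ADC output
frame = stream.reshape(-1, samples_per_line)    # one row per laser sweep

data_rate_bytes = int(7.0e9) * 1                # 8-bit samples -> 7.0 GB/s
```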

  3. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    NASA Technical Reports Server (NTRS)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

  4. Camera-on-a-Chip

    NASA Technical Reports Server (NTRS)

    1999-01-01

Jet Propulsion Laboratory's research on a second-generation, solid-state image sensor technology has resulted in the Complementary Metal-Oxide Semiconductor (CMOS) Active Pixel Sensor, establishing an alternative to the Charge-Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products of its own based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS "active-pixel" digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years. The CMOS sensors draw less energy, they use the same manufacturing platform as most microprocessors and memory chips, and they allow on-chip programming of frame size, exposure, and other parameters.

  5. Next-generation digital camera integration and software development issues

    NASA Astrophysics Data System (ADS)

    Venkataraman, Shyam; Peters, Ken; Hecht, Richard

    1998-04-01

This paper investigates the complexities associated with the development of next-generation digital cameras arising from requirements in connectivity and interoperability. Each successive generation of digital camera improves drastically in cost, performance, resolution, image quality, and interoperability features, driven by advancements in a number of areas: research, silicon, standards, etc. As the capabilities of these cameras increase, so do the requirements for both hardware and software. Today, there are two single-chip camera solutions on the market, including the Motorola MPC 823 and the LSI DCAM-101. Real-time constraints for a digital camera may be defined by the maximum time allowable between captures of images. Constraints in the design of an embedded digital camera include processor architecture, memory, processing speed, and the real-time operating system. This paper will present the LSI DCAM-101, a single-chip digital camera solution. It will present an overview of the architecture and the challenges in hardware and software for supporting streaming video in such a complex device. Issues presented include the development of the data-flow software architecture, testing, and integration on this complex silicon device. The strategy for optimizing performance on the architecture will also be presented.

  6. An Integrated System for Wildlife Sensing

    DTIC Science & Technology

    2014-08-14

    design requirement. “Sensor Controller” software. A custom Sensor Controller application was developed for the Android device in order to collect...and log readings from that device’s sensors. “Camera Controller” software. A custom Camera Controller application was developed for the Android device...into 2 separate Android applications (Figure 4). The Sensor Controller logs readings periodically from the Android device’s organic sensors, and

  7. B-1 AFT Nacelle Flow Visualization Study

    NASA Technical Reports Server (NTRS)

    Celniker, Robert

    1975-01-01

    A 2-month program was conducted to perform engineering evaluation and design tasks to prepare for visualization and photography of the airflow along the aft portion of the B-1 nacelles and nozzles during flight test. Several methods of visualizing the flow were investigated and compared with respect to cost, impact of the device on the flow patterns, suitability for use in the flight environment, and operability throughout the flight. Data were based on a literature search and discussions with the test personnel. Tufts were selected as the flow visualization device in preference to several other devices studied. A tuft installation pattern has been prepared for the right-hand aft nacelle area of B-1 air vehicle No.2. Flight research programs to develop flow visualization devices other than tufts for use in future testing are recommended. A design study was conducted to select a suitable motion picture camera, to select the camera location, and to prepare engineering drawings sufficient to permit installation of the camera. Ten locations on the air vehicle were evaluated before the selection of the location in the horizontal stabilizer actuator fairing. The considerations included cost, camera angle, available volume, environmental control, flutter impact, and interference with antennas or other instrumentation.

  8. Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.

    PubMed

    Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua

    2017-05-01

    In this paper, we overcome the limited dynamic range of the conventional digital camera, and propose a method of realizing high dynamic range imaging (HDRI) from a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed new method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, and it enables the camera pixels always to have reasonable exposure intensity by DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the different light intensity to recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement the HDRI on different objects.
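The per-pixel recovery idea can be illustrated in a few lines: if each pixel's effective exposure is scaled by its DMD mirror's duty cycle, dividing the measurement by that weight recovers relative scene radiance. The arrays below are illustrative, and the paper's adaptive light intensity control algorithm is not reproduced here.

```python
import numpy as np

# Sketch of per-pixel coded-exposure HDR recovery: the DMD attenuates each
# pixel by a known on-fraction (duty cycle), keeping the sensor in range;
# dividing by the duty cycle undoes the attenuation.

measured = np.array([200.0, 120.0, 250.0])   # sensor values after modulation
duty = np.array([1.0, 0.25, 0.05])           # DMD on-fraction per pixel (assumed)

radiance = measured / duty                   # relative scene radiance
# the brightest pixel was attenuated 20x, so its recovered value is large
```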

  9. 48 CFR 1552.208-70 - Printing.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...” (or “camera-ready copy”) is a final document suitable for printing/duplication. “Desktop Publishing... from desktop publishing is being sent to a typesetting device (i.e., Linotronic) with camera copy being... preparing related illustrative material to a final document (camera-ready copy) using desktop publishing. (2...

  10. Development of an EMCCD for lidar applications

    NASA Astrophysics Data System (ADS)

    De Monte, B.; Bell, R. T.

    2017-11-01

A novel detector incorporating e2v's L3 CCD (L3Vision™) [1] technology for use in LIDAR (Light Detection And Ranging) applications has been designed, manufactured and characterised. The most critical performance aspect was the requirement to collect charge from a 120 μm square detection area in a 667 ns temporal sampling window, with low crosstalk between successive samples, followed by signal readout with sub-electron effective noise. Additional requirements included low dark signal, high quantum efficiency at the 355 nm laser wavelength, and the ability to handle bright laser echoes without corruption of the much fainter useful signals. The detector architecture used high-speed charge binning to combine the signal from each sampling window into a single charge packet. This was then passed through a multiplication register (Electron Multiplying Charge Coupled Device) operating with a typical gain of 100× to a conventional charge detection circuit. The detector achieved a typical quantum efficiency of 80% and a total noise in darkness of < 0.5 electrons rms. Development of the detector was supported by ESA (European Space Agency).
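The sub-electron noise figure follows from the standard EM-gain arithmetic: gain applied before the output amplifier divides that amplifier's read noise when referred back to the input. The 50 e- amplifier noise below is an assumed value chosen for illustration; the record quotes only the typical gain (~100×) and the < 0.5 e- rms result.

```python
# Input-referred read noise of an EMCCD: the multiplication register's gain
# divides the output amplifier's noise contribution.

read_noise_e = 50.0    # output amplifier noise, electrons rms (assumed)
em_gain = 100.0        # typical multiplication gain from the record

effective_noise_e = read_noise_e / em_gain   # input-referred, electrons rms
```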

  11. Radiation camera motion correction system

    DOEpatents

    Hoffer, P.B.

    1973-12-18

    The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)

  12. Design and fabrication of a CCD camera for use with relay optics in solar X-ray astronomy

    NASA Technical Reports Server (NTRS)

    1984-01-01

Configured as a subsystem of a sounding rocket experiment, a camera system was designed to record and transmit an X-ray image focused on a charge-coupled device. The camera consists of an X-ray sensitive detector and the electronics for processing and transmitting image data. The design and operation of the camera are described. Schematics are included.

  13. Data-driven local-scale modeling of ionospheric responses to auroral forcing using incoherent scatter radar and ground-based imaging measurements

    NASA Astrophysics Data System (ADS)

    Grubbs, G. A., II; Zettergren, M. D.; Samara, M.; Michell, R.; Hampton, D. L.; Lynch, K. A.; Varney, R. H.; Reimer, A.; Burleigh, M.

    2017-12-01

The aurora encapsulates a wide range of spatial and temporal scale sizes, particularly during active events such as those that exist during substorm expansion. Of interest to the present work are ionospheric responses to magnetospheric forcing at relatively small scales (0.5-20 km), including formation of structured auroral arc current systems, ion frictional heating, upflow, and density cavity formation, among other processes. Even for carefully arranged experiments, it is often difficult to fully assess physical details (time evolution, causality, unobservable parameters) associated with these types of responses, thus highlighting the general need for high-resolution modeling efforts to support the observations. In this work, we develop and test a local-scale model to describe effects of precipitating electrons and electric fields on the ionospheric plasma responses using available remote sensing data (e.g. from ISRs and filtered cameras). Our model is based on a 3D multi-fluid/electrostatic ionospheric model, GEMINI (Zettergren et al., 2015), coupled to a two-stream electron transport code, GLobal airglOW (GLOW; Solomon, 2017), which produces auroral intensities, impact ionization, and thermal electron heating. GEMINI-GLOW thus describes both thermal and suprathermal effects on the ionosphere and is driven by boundary conditions consisting of topside ionospheric field-aligned currents and suprathermal electrons. These boundary conditions are constrained using time- and space-dependent electric field and precipitation estimates from recent sounding rocket campaigns, ISINGLASS (02 March 2017) and GREECE (03 March 2014), derived from Poker Flat incoherent scatter radar (PFISR) drifts and filtered EMCCD cameras, respectively. Results from these data-driven case studies are compared to plasma parameter responses (i.e. density and temperature) independently estimated by PFISR and from the sounding rockets.
These studies are intended as a first step towards a local-scale assimilative modeling approach where data-derived information will be fed back into the model to update the system state.

  14. Extracting information of fixational eye movements through pupil tracking

    NASA Astrophysics Data System (ADS)

    Xiao, JiangWei; Qiu, Jian; Luo, Kaiqin; Peng, Li; Han, Peng

    2018-01-01

Human eyes are never completely static, even when they are fixating on a stationary point. These irregular, small movements, which consist of micro-tremors, micro-saccades and drifts, prevent the fading of the images that enter our eyes. The importance of researching fixational eye movements has been experimentally demonstrated recently. However, the characteristics of fixational eye movements and their roles in the visual process have not been explained clearly, because these signals can hardly be completely extracted at present. In this paper, we developed a new eye movement detection device with a high-speed camera. This device includes a beam splitter mirror, an infrared light source, and a high-speed digital video camera with a frame rate of 200 Hz. To avoid the influence of head shaking, we made the device wearable by fixing the camera on a safety helmet. Using this device, experiments in pupil tracking were conducted. By localizing the pupil center and applying spectrum analysis, the envelope frequency spectra of micro-saccades, micro-tremors and drifts are shown clearly. The experimental results show that the device is feasible and effective, so that it can be applied in further characteristic analysis.

  15. Portable telepathology: methods and tools.

    PubMed

    Alfaro, Luis; Roca, Ma José

    2008-07-15

Telepathology is becoming easier to implement in most pathology departments; in fact, e-mail image transmission can be done by almost any pathologist as a simple telepathology system. We tried to develop a way to improve communication among pathologists with the idea that the system should be affordable for everybody. We took the premise that any pathology department would have microscopes and computers with an Internet connection, and selected a few elements to convert them into a telepathology station. Needs were reduced to a camera to collect images, a universal microscope adapter for the camera, a device to connect the camera to the computer, and software for remote image transmission. We found a microscope adapter (MaxView Plus) that allowed us to connect almost any consumer digital camera to any microscope. The video-out signal from the camera was sent to the computer through an Aver Media USB connector. Finally, we selected a group of portable applications that were assembled onto a USB memory device. Portable applications are computer programs that can be carried generally on USB flash drives, but also on any other portable device, and used on any (Windows) computer without installation; moreover, when the device is unplugged, no personal data are left behind. We selected open-source applications, and based the pathology image transmission on VLC Media Player because of its functionality as a streaming server, its portability, and its ease of use and configuration. Audio transmission was usually done through normal phone lines. We also employed alternative videoconferencing software, SightSpeed, for bi-directional image transmission from microscopes, and conventional cameras allowing visual communication and also image transmission from gross pathology specimens. All these elements allowed us to install and use a telepathology system in a few minutes, fully prepared for real-time image broadcast.

  16. Portable telepathology: methods and tools

    PubMed Central

    Alfaro, Luis; Roca, Ma José

    2008-01-01

Telepathology is becoming easier to implement in most pathology departments; in fact, e-mail image transmission can be done by almost any pathologist as a simple telepathology system. We tried to develop a way to improve communication among pathologists with the idea that the system should be affordable for everybody. We took the premise that any pathology department would have microscopes and computers with an Internet connection, and selected a few elements to convert them into a telepathology station. Needs were reduced to a camera to collect images, a universal microscope adapter for the camera, a device to connect the camera to the computer, and software for remote image transmission. We found a microscope adapter (MaxView Plus) that allowed us to connect almost any consumer digital camera to any microscope. The video-out signal from the camera was sent to the computer through an Aver Media USB connector. Finally, we selected a group of portable applications that were assembled onto a USB memory device. Portable applications are computer programs that can be carried generally on USB flash drives, but also on any other portable device, and used on any (Windows) computer without installation; moreover, when the device is unplugged, no personal data are left behind. We selected open-source applications, and based the pathology image transmission on VLC Media Player because of its functionality as a streaming server, its portability, and its ease of use and configuration. Audio transmission was usually done through normal phone lines. We also employed alternative videoconferencing software, SightSpeed, for bi-directional image transmission from microscopes, and conventional cameras allowing visual communication and also image transmission from gross pathology specimens. All these elements allowed us to install and use a telepathology system in a few minutes, fully prepared for real-time image broadcast. PMID:18673507

  17. A goggle navigation system for cancer resection surgery

    NASA Astrophysics Data System (ADS)

    Xu, Junbin; Shao, Pengfei; Yue, Ting; Zhang, Shiwu; Ding, Houzhu; Wang, Jinkun; Xu, Ronald

    2014-02-01

We describe a portable fluorescence goggle navigation system for cancer margin assessment during oncologic surgeries. The system consists of a computer, a head-mounted display (HMD) device, a near-infrared (NIR) CCD camera, a miniature CMOS camera, and a 780 nm laser diode excitation light source. The fluorescence and background images of the surgical scene are acquired by the CCD camera and the CMOS camera respectively, co-registered, and displayed on the HMD device in real time. The spatial resolution and the co-registration deviation of the goggle navigation system are evaluated quantitatively. The technical feasibility of the proposed goggle system is tested in an ex vivo tumor model. Our experiments demonstrate the feasibility of using a goggle navigation system for intraoperative margin detection and surgical guidance.

  18. Development of Measurement Device of Working Radius of Crane Based on Single CCD Camera and Laser Range Finder

    NASA Astrophysics Data System (ADS)

    Nara, Shunsuke; Takahashi, Satoru

In this paper, we develop an observation device to measure the working radius of a crane truck. The device has a single CCD camera, a laser range finder, and two AC servo motors. First, in order to measure the working radius, we need an algorithm for crane hook recognition. We therefore attach a cross mark to the crane hook and, instead of the hook itself, try to recognize the cross mark. Further, for the observation device, we construct a PI control system with an extended Kalman filter to track the moving cross mark. Through experiments, we show the usefulness of our device, including the new mark-tracking control system.
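The PI tracking loop can be sketched as follows: the pixel error between the detected cross mark and the image center drives the pan/tilt motor command. The gains, sample time, and error values are illustrative, and the extended Kalman filter prediction step used in the paper is omitted here.

```python
# Minimal discrete PI controller for mark tracking: command = Kp*e + Ki*integral(e).

def make_pi(kp, ki, dt):
    state = {"integral": 0.0}
    def step(error):
        state["integral"] += error * dt          # accumulate the error
        return kp * error + ki * state["integral"]
    return step

pi = make_pi(kp=0.5, ki=0.1, dt=0.01)
u1 = pi(10.0)   # first sample: 0.5*10 + 0.1*(10*0.01) = 5.01
u2 = pi(10.0)   # integral has grown, so the command creeps upward
```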

  19. Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design

    DTIC Science & Technology

    2015-10-01

    the study. This equipment has included a modified GoPro head-mounted camera and a Vicon 13-camera optical motion capture system, which was not part...also completed for relevant members of the study team. 4. The head-mounted camera setup has been established (a modified GoPro Hero 3 with external

  20. The Role of Counterintelligence in the European Theater of Operations During World War II

    DTIC Science & Technology

    1993-06-04

revolvers, Minox cameras, portable typewriters, 48 fingerprint cameras, latent fingerprint kits, handcuffs, and listening and recording devices.13 This...Comments from the detachments indicated that the fingerprint equipment, and listening and recording devices were of little use. However, the revolvers...

  1. Measurement of the Shape of the Optical-IR Spectrum of Prompt Emission from Gamma-Ray Bursts

    NASA Astrophysics Data System (ADS)

    Grossan, Bruce; Kistaubayev, M.; Smoot, G.; Scherr, L.

    2017-06-01

While the afterglow phase of gamma-ray bursts (GRBs) has been extensively measured, detections of prompt emission (i.e., during the bright X-gamma emission) are more limited. Some prompt optical measurements are regularly made, but these are typically in a single wide band, with limited time resolution and no measurement of spectral shape. Some models predict a synchrotron self-absorption spectral break somewhere in the IR-optical region. Measurement of the absorption frequency would give extensive information on each burst, including the electron Lorentz factor, the radius of emission, and more (Shen & Zhang 2008). Thus far the best prompt observations have been explained by invoking a variety of models, but often with a non-unique interpretation. To understand this apparently heterogeneous behavior, and to reduce the number of possible models, it is critical to add data on the optical-IR spectral shape. Long GRB prompt X-gamma emission typically lasts ~40-80 s. The Swift BAT instrument rapidly measures GRB positions to within a few arc minutes and communicates them via the internet within a few seconds. We have measured the time for a fast-moving D=700 mm telescope to point and settle to be less than 9 s anywhere on the observable sky. Therefore, the majority of prompt optical-IR emission can be measured by responding to BAT positions with this telescope. In this presentation, we describe our observing and science programs, and give our design for the Burst Simultaneous Three-channel Instrument (BSTI), which uses dichroics to send separate bands to three cameras. Two EMCCD cameras give high time resolution in B and V; a third camera with a HgCdTe sensor covers H band, allowing us to study extinguished bursts. For a total exposure time of 10 s, we find a 5-sigma sensitivity of 21.3 and 20.3 mag in B and R for 1" seeing and Kitt Peak sky brightness, much fainter than typical previous prompt detections. We estimate the 5-sigma H-band sensitivity for an IR-optimized telescope to be ~16.9 mag in 20 s. With three channels broadly separated in wavelength, two separate spectral slopes would be measured, or, if the break lies between our bands, the absorption frequency would be determined, a brand-new window into GRB physics.

  2. Intraocular and extraocular cameras for retinal prostheses: Effects of foveation by means of visual prosthesis simulation

    NASA Astrophysics Data System (ADS)

    McIntosh, Benjamin Patrick

    Blindness due to Age-Related Macular Degeneration and Retinitis Pigmentosa is unfortunately both widespread and largely incurable. Advances in visual prostheses that can restore functional vision in those afflicted by these diseases have evolved rapidly from new areas of research in ophthalmology and biomedical engineering. This thesis is focused on further advancing the state-of-the-art of both visual prostheses and implantable biomedical devices. A novel real-time system with a high performance head-mounted display is described that enables enhanced realistic simulation of intraocular retinal prostheses. A set of visual psychophysics experiments is presented using the visual prosthesis simulator that quantify, in several ways, the benefit of foveation afforded by an eye-pointed camera (such as an eye-tracked extraocular camera or an implantable intraocular camera) as compared with a head-pointed camera. A visual search experiment demonstrates a significant improvement in the time to locate a target on a screen when using an eye-pointed camera. A reach and grasp experiment demonstrates a 20% to 70% improvement in time to grasp an object when using an eye-pointed camera, with the improvement maximized when the percept is blurred. A navigation and mobility experiment shows a 10% faster walking speed and a 50% better ability to avoid obstacles when using an eye-pointed camera. Improvements to implantable biomedical devices are also described, including the design and testing of VLSI-integrable positive mobile ion contamination sensors and humidity sensors that can validate the hermeticity of biomedical device packages encapsulated by hermetic coatings, and can provide early warning of leaks or contamination that may jeopardize the implant. The positive mobile ion contamination sensors are shown to be sensitive to externally applied contamination. A model is proposed to describe sensitivity as a function of device geometry, and verified experimentally. Guidelines are provided on the use of spare CMOS oxide and metal layers to maximize the hermeticity of an implantable microchip. In addition, results are presented on the design and testing of small form factor, very low power, integrated CMOS clock generation circuits that are stable enough to drive commercial image sensor arrays, and therefore can be incorporated in an intraocular camera for retinal prostheses.

  3. Charge Diffusion Variations in Pan-STARRS1 CCDs

    NASA Astrophysics Data System (ADS)

    Magnier, Eugene A.; Tonry, J. L.; Finkbeiner, D.; Schlafly, E.; Burgett, W. S.; Chambers, K. C.; Flewelling, H. A.; Hodapp, K. W.; Kaiser, N.; Kudritzki, R.-P.; Metcalfe, N.; Wainscoat, R. J.; Waters, C. Z.

    2018-06-01

    Thick back-illuminated deep-depletion CCDs have superior quantum efficiency over previous generations of thinned and traditional thick CCDs. As a result, they are being used for wide-field imaging cameras in several major projects. We use observations from the Pan-STARRS 3π survey to characterize the behavior of the deep-depletion devices used in the Pan-STARRS 1 Gigapixel Camera. We have identified systematic spatial variations in the photometric measurements and stellar profiles that are similar in pattern to the so-called “tree rings” identified in devices used by other wide-field cameras (e.g., DECam and Hyper Suprime-Cam). The tree-ring features identified in these other cameras result from lateral electric fields that displace the electrons as they are transported in the silicon to the pixel location. In contrast, we show that the photometric and morphological modifications observed in the GPC1 detectors are caused by variations in the vertical charge transportation rate and resulting charge diffusion variations.

  4. Heart Imaging System

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Johnson Space Center's device to test astronauts' heart function in microgravity has led to the MultiWire Gamma Camera, which images heart conditions six times faster than conventional devices. Dr. Jeffrey Lacy, who developed the technology as a NASA researcher, later formed Proportional Technologies, Inc. to develop a commercially viable process that would enable use of Tantalum-178 (Ta-178), a radiopharmaceutical. His company supplies the generator for the radioactive Ta-178 to Xenos Medical Systems, which markets the camera. Ta-178 can only be optimally imaged with this camera. Because the body is subjected to it for only nine minutes, the radiation dose is significantly reduced and the technique can be used more frequently. Ta-178 also enables the camera to be used on pediatric patients, who are rarely studied with conventional isotopes because of the high radiation dosage.

  5. Analysis of crystalline lens coloration using a black and white charge-coupled device camera.

    PubMed

    Sakamoto, Y; Sasaki, K; Kojima, M

    1994-01-01

    To analyze lens coloration in vivo, we used a new type of Scheimpflug camera that is a black and white type of charge-coupled device (CCD) camera. A new methodology was proposed. Scheimpflug images of the lens were taken three times through red (R), green (G), and blue (B) filters, respectively. Three images corresponding with the R, G, and B channels were combined into one image on the cathode-ray tube (CRT) display. The spectral transmittance of the tricolor filters and the spectral sensitivity of the CCD camera were used to correct the scattering-light intensity of each image. Coloration of the lens was expressed on a CIE standard chromaticity diagram. The lens coloration of seven eyes analyzed by this method showed values almost the same as those obtained by the previous method using color film.
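The three-filter combination step above reduces to mapping corrected R/G/B scattering intensities onto the CIE chromaticity diagram. The sketch below illustrates this with an assumed linear sRGB-to-XYZ matrix; the study instead derives the conversion from the measured filter transmittances and CCD spectral sensitivity, and the `chromaticity` helper is hypothetical.

```python
import numpy as np

# Assumed linear sRGB -> CIE XYZ (D65) matrix; the paper's calibrated
# filter/sensor data would replace this in a real analysis.
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def chromaticity(r, g, b):
    """Map scatter-corrected R/G/B intensities to CIE 1931 (x, y)."""
    X, Y, Z = RGB_TO_XYZ @ np.array([r, g, b], dtype=float)
    s = X + Y + Z
    return X / s, Y / s

# Equal R, G, B intensities land on the D65 white point (x~0.313, y~0.329)
x, y = chromaticity(1.0, 1.0, 1.0)
```

A yellowed (brunescent) lens scatters relatively more red than blue, so its (x, y) point drifts away from the white point toward the yellow region of the diagram.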

  6. Illumination-compensated non-contact imaging photoplethysmography via dual-mode temporally coded illumination

    NASA Astrophysics Data System (ADS)

    Amelard, Robert; Scharfenberger, Christian; Wong, Alexander; Clausi, David A.

    2015-03-01

    Non-contact camera-based imaging photoplethysmography (iPPG) is useful for measuring heart rate in conditions where contact devices are problematic due to issues such as mobility, comfort, and sanitation. Existing iPPG methods analyse the light-tissue interaction of either active or passive (ambient) illumination. Many active iPPG methods assume the incident ambient light is negligible relative to the active illumination, resulting in high power requirements, while many passive iPPG methods assume near-constant ambient conditions. These assumptions can only be met in environments with controlled illumination and thus constrain the use of such devices. To widen the range of possible applications of iPPG devices, we propose a dual-mode active iPPG system that is robust to ambient illumination variations. Our system uses a temporally coded illumination sequence that is synchronized with the camera to measure both active and ambient illumination interaction for determining heart rate. By subtracting the ambient contribution, the remaining illumination data can be attributed to the controlled illuminant. Our device comprises a camera and an LED illuminant controlled by a microcontroller. The microcontroller drives the temporal code by synchronizing the frame captures and illumination timing at the hardware level. By simulating changes in ambient light conditions, experimental results show our device is able to assess heart rate accurately in challenging lighting conditions. By varying the temporal code, we demonstrate the trade-off between camera frame rate and ambient light compensation for optimal blood pulse detection.
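The ambient-subtraction idea can be sketched as follows. This is an illustrative toy, not the authors' implementation: the simple on/off alternating code, the frame shapes, and the `active_component` helper are all assumptions.

```python
import numpy as np

def active_component(frames, code):
    """Recover the active-illuminant signal from a temporally coded sequence.

    frames: (N, H, W) array of camera frames
    code:   length-N 0/1 array, 1 = LED on (active + ambient), 0 = ambient only
    Assumes ambient light varies slowly relative to the coding rate.
    """
    frames = np.asarray(frames, dtype=float)
    code = np.asarray(code, dtype=bool)
    on_mean = frames[code].mean(axis=0)    # active + ambient contribution
    off_mean = frames[~code].mean(axis=0)  # ambient-only contribution
    return on_mean - off_mean              # illumination-compensated signal

# Toy example: constant ambient level of 10, active LED adding 5 on coded frames
code = np.array([1, 0, 1, 0, 1, 0, 1, 0])
ambient = 10.0 * np.ones((8, 4, 4))
frames = ambient + 5.0 * code[:, None, None]
sig = active_component(frames, code)  # recovers the active level of 5.0
```

The paper's trade-off is visible here: a denser code tracks faster ambient changes but leaves fewer frames per second for the blood-pulse signal itself.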

  7. Agreement and reading time for differently-priced devices for the digital capture of X-ray films.

    PubMed

    Salazar, Antonio José; Camacho, Juan Camilo; Aguirre, Diego Andrés

    2012-03-01

    We assessed the reliability of three digital capture devices: a film digitizer (which cost US $15,000), a flat-bed scanner (US $1800) and a digital camera (US $450). Reliability was measured as the agreement between six observers when reading images acquired from a single device, and also in terms of the pair-device agreement. The images were 136 chest X-ray cases. The variables measured were the interstitial opacities distribution, interstitial patterns, nodule size and percentage pneumothorax size. The agreement between the six readers when reading images acquired from a single device was similar for the three devices. The pair-device agreements were moderate for all variables. There were significant differences in reading time between devices: the mean reading time was 93 s for the film digitizer, 59 s for the flat-bed scanner and 70 s for the digital camera. Despite the differences in their cost, there were no substantial differences in the performance of the three devices.

  8. Stereo optical guidance system for control of industrial robots

    NASA Technical Reports Server (NTRS)

    Powell, Bradley W. (Inventor); Rodgers, Mike H. (Inventor)

    1992-01-01

    A device for the generation of basic electrical signals which are supplied to a computerized processing complex for the operation of industrial robots. The system includes a stereo mirror arrangement for the projection of views from opposite sides of a visible indicia formed on a workpiece. The views are projected onto independent halves of the retina of a single camera. The camera retina is of the CCD (charge-coupled-device) type and is therefore capable of providing signals in response to the image projected thereupon. These signals are then processed for control of industrial robots or similar devices.

  9. Soft X-ray and XUV imaging with a charge-coupled device /CCD/-based detector

    NASA Technical Reports Server (NTRS)

    Loter, N. G.; Burstein, P.; Krieger, A.; Ross, D.; Harrison, D.; Michels, D. J.

    1981-01-01

    A soft X-ray/XUV imaging camera which uses a thinned, back-illuminated, all-buried channel RCA CCD for radiation sensing has been built and tested. The camera is a slow-scan device which makes possible frame integration if necessary. The detection characteristics of the device have been tested over the 15-1500 eV range. The response was linear with exposure up to 0.2-0.4 erg/sq cm; saturation occurred at greater exposures. Attention is given to attempts to resolve single photons with energies of 1.5 keV.

  10. Creating and Using a Camera Obscura

    ERIC Educational Resources Information Center

    Quinnell, Justin

    2012-01-01

    The camera obscura (Latin for "darkened room") is the earliest optical device and goes back over 2500 years. The small pinhole or lens at the front of the room allows light to enter and this is then "projected" onto a screen inside the room. This differs from a camera, which projects its image onto light-sensitive material.…

  11. The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.

    2017-02-01

    Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits, including increasingly high pixel counts and shrinking pixel sizes; nevertheless, they are also hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full well capacity, signal-to-noise ratio, sensitivity, spectral flexibility, and in some cases, imager response time. The recently invented Coded Access Optical Sensor (CAOS) camera platform works in unison with current PDA technology to counter fundamental limitations of PDA-based imagers while providing sufficiently high imaging spatial resolution and pixel counts. Engineering the CAOS camera platform with, for example, the Texas Instruments (TI) Digital Micromirror Device (DMD) ushers in a paradigm change in advanced imager design, particularly for extreme dynamic range applications.

  12. Differentiating Biological Colours with Few and Many Sensors: Spectral Reconstruction with RGB and Hyperspectral Cameras

    PubMed Central

    Garcia, Jair E.; Girard, Madeline B.; Kasumovic, Michael; Petersen, Phred; Wilksch, Philip A.; Dyer, Adrian G.

    2015-01-01

    Background The ability to discriminate between two similar or progressively dissimilar colours is important for many animals as it allows for accurately interpreting visual signals produced by key target stimuli or distractor information. Spectrophotometry objectively measures the spectral characteristics of these signals, but is often limited to point samples that could underestimate spectral variability within a single sample. Algorithms for RGB images and digital imaging devices with many more than three channels, hyperspectral cameras, have been recently developed to produce image spectrophotometers to recover reflectance spectra at individual pixel locations. We compare a linearised RGB and a hyperspectral camera in terms of their individual capacities to discriminate between colour targets of varying perceptual similarity for a human observer. Main Findings (1) The colour discrimination power of the RGB device is dependent on colour similarity between the samples whilst the hyperspectral device enables the reconstruction of a unique spectrum for each sampled pixel location independently from their chromatic appearance. (2) Uncertainty associated with spectral reconstruction from RGB responses results from the joint effect of metamerism and spectral variability within a single sample. Conclusion (1) RGB devices give a valuable insight into the limitations of colour discrimination with a low number of photoreceptors, as the principles involved in the interpretation of photoreceptor signals in trichromatic animals also apply to RGB camera responses. (2) The hyperspectral camera architecture provides means to explore other important aspects of colour vision like the perception of certain types of camouflage and colour constancy where multiple, narrow-band sensors increase resolution. PMID:25965264

  13. Wide-Field-of-View, High-Resolution, Stereoscopic Imager

    NASA Technical Reports Server (NTRS)

    Prechtl, Eric F.; Sedwick, Raymond J.

    2010-01-01

    A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head-tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. The use of head-mounted displays is one likely implementation; the use of 3D projection technologies is another option under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.

  14. 5 CFR 1201.52 - Public hearings.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    .... Any objections to the order will be made a part of the record. (b) Electronic devices. Absent express... room; all cell phones, text devices, and all other two-way communications devices shall be powered off in the hearing room. Further, no cameras, recording devices, and/or transmitting devices may be...

  15. 5 CFR 1201.52 - Public hearings.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    .... Any objections to the order will be made a part of the record. (b) Electronic devices. Absent express... room; all cell phones, text devices, and all other two-way communications devices shall be powered off in the hearing room. Further, no cameras, recording devices, and/or transmitting devices may be...

  16. Investigation of the influence of spatial degrees of freedom on thermal infrared measurement

    NASA Astrophysics Data System (ADS)

    Fleuret, Julien R.; Yousefi, Bardia; Lei, Lei; Djupkep Dizeu, Frank Billy; Zhang, Hai; Sfarra, Stefano; Ouellet, Denis; Maldague, Xavier P. V.

    2017-05-01

    Long Wavelength Infrared (LWIR) cameras can provide a representation of a part of the light spectrum that is sensitive to temperature. These cameras, also named Thermal Infrared (TIR) cameras, are powerful tools to detect features that cannot be seen by other imaging technologies. For instance, they enable the detection of defects in materials, fever and anxiety in mammals, and many other features for numerous applications. However, the accuracy of thermal cameras can be affected by many parameters; the most critical involves the relative position of the camera with respect to the object of interest. Several models have been proposed in order to minimize the influence of some of these parameters, but they are mostly related to specific applications. Because such models are based on prior information related to context, their applicability to other contexts cannot be easily assessed. The few remaining models are mostly associated with a specific device. In this paper the authors study the influence of the camera position on measurement accuracy. Modeling the position of the camera relative to the object of interest depends on many parameters. In order to propose a study that is as accurate as possible, the position of the camera is represented as a five-dimensional model. The aim of this study is to investigate and attempt to introduce a model that is as independent from the device as possible.

  17. Adjustment of multi-CCD-chip-color-camera heads

    NASA Astrophysics Data System (ADS)

    Guyenot, Volker; Tittelbach, Guenther; Palme, Martin

    1999-09-01

    The principle of beam-splitter multi-chip cameras consists in splitting an image into multiple images of different spectral ranges and distributing these onto separate black-and-white CCD sensors. The resulting electrical signals from the chips are recombined to produce a high quality color picture on the monitor. Because this principle guarantees higher resolution and sensitivity in comparison to conventional single-chip camera heads, the greater effort is acceptable. Furthermore, multi-chip cameras obtain the complete spectral information for each individual object point, while single-chip systems must rely on interpolation. In a joint project, Fraunhofer IOF and STRACON GmbH (and, in the future, COBRA electronic GmbH) are developing methods for designing the optics and dichroic mirror system of such prism color beam splitter devices. Additionally, techniques and equipment for the alignment and assembly of color-beam-splitter multi-CCD devices on the basis of gluing with UV-curable adhesives have been developed.

  18. A Normal Incidence X-ray Telescope (NIXT) sounding rocket payload

    NASA Technical Reports Server (NTRS)

    Golub, Leon

    1989-01-01

    Work on the High Resolution X-ray (HRX) Detector Program is described. In the laboratory and flight programs, multiple copies of a general purpose set of electronics which control the camera, signal processing and data acquisition, were constructed. A typical system consists of a phosphor convertor, image intensifier, a fiber optics coupler, a charge coupled device (CCD) readout, and a set of camera, signal processing and memory electronics. An initial rocket detector prototype camera was tested in flight and performed perfectly. An advanced prototype detector system was incorporated on another rocket flight, in which a high resolution heterojunction vidicon tube was used as the readout device for the H(alpha) telescope. The camera electronics for this tube were built in-house and included in the flight electronics. Performance of this detector system was 100 percent satisfactory. The laboratory X-ray system for operation on the ground is also described.

  19. An electronic pan/tilt/zoom camera system

    NASA Technical Reports Server (NTRS)

    Zimmermann, Steve; Martin, H. Lee

    1991-01-01

    A camera system for omnidirectional image viewing applications that provides pan, tilt, zoom, and rotational orientation within a hemispherical field of view (FOV) using no moving parts was developed. The imaging device is based on the principle that the image from a fisheye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. An incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical FOV without the need for any mechanical mechanisms. A programmable transformation processor provides flexible control over viewing situations. Multiple images, each with different image magnifications and pan, tilt, and rotation parameters, can be obtained from a single camera. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment.
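The core of such an electronic pan/tilt/zoom is a per-pixel lookup from the desired virtual view back into the fisheye image. The sketch below assumes an ideal equidistant (r = f·θ) fisheye covering a full hemisphere; the patented device implements an equivalent transformation in dedicated high-speed hardware, and `fisheye_lookup` is a hypothetical helper, not the patent's formulation.

```python
import math

def fisheye_lookup(u, v, pan, tilt, zoom, cx, cy, radius):
    """Map a virtual-view pixel (u, v) to fisheye image coordinates.

    (u, v) are normalized view coordinates in [-1, 1]; pan/tilt in radians;
    zoom scales the virtual focal length. Assumes an ideal equidistant
    fisheye (r = f * theta) whose hemisphere edge lies at `radius` pixels
    from the image centre (cx, cy).
    """
    # Ray through the virtual pinhole camera
    x, y, z = u / zoom, v / zoom, 1.0
    # Rotate the ray by tilt (about the x-axis), then pan (about the y-axis)
    y, z = (y * math.cos(tilt) - z * math.sin(tilt),
            y * math.sin(tilt) + z * math.cos(tilt))
    x, z = (x * math.cos(pan) + z * math.sin(pan),
            -x * math.sin(pan) + z * math.cos(pan))
    # Equidistant projection: angle from the optical axis -> radial distance
    theta = math.acos(z / math.sqrt(x * x + y * y + z * z))
    phi = math.atan2(y, x)
    r = radius * theta / (math.pi / 2)
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

# Looking straight ahead, the view centre maps to the fisheye image centre
centre = fisheye_lookup(0.0, 0.0, 0.0, 0.0, 1.0, 256, 256, 256)  # (256.0, 256.0)
```

Running this lookup for every output pixel (with interpolation between source pixels) yields the dewarped view; panning by 90° correctly lands on the rim of the circular fisheye image.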

  20. Visual Enhancement of Laparoscopic Partial Nephrectomy With 3-Charge Coupled Device Camera: Assessing Intraoperative Tissue Perfusion and Vascular Anatomy by Visible Hemoglobin Spectral Response

    DTIC Science & Technology

    2010-10-01

    open nephron sparing surgery: a single institution experience. J Urol 2005; 174: 855. 21. Bhayani SB, Rha KH, Pinto PA et al. Laparoscopic partial...noninvasively assess laparoscopic intraoperative changes in renal tissue perfusion during and after warm ischemia. Materials and Methods: We analyzed select...TITLE AND SUBTITLE Visual Enhancement of Laparoscopic Partial Nephrectomy With 3-Charge Coupled Device Camera: Assessing Intraoperative Tissue

  1. General Model of Photon-Pair Detection with an Image Sensor

    NASA Astrophysics Data System (ADS)

    Defienne, Hugo; Reichert, Matthew; Fleischer, Jason W.

    2018-05-01

    We develop an analytic model that relates intensity correlation measurements performed by an image sensor to the properties of photon pairs illuminating it. Experiments using an effective single-photon counting camera, a linear electron-multiplying charge-coupled device camera, and a standard CCD camera confirm the model. The results open the field of quantum optical sensing using conventional detectors.
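The measurement underlying such models is the frame-to-frame intensity correlation: photon pairs produce an excess covariance ⟨I_a I_b⟩ − ⟨I_a⟩⟨I_b⟩ between the pixels they illuminate, while uncorrelated background cancels on average. The sketch below is a generic illustration of that estimator, not the paper's full analytic model; the toy data and the `intensity_covariance` helper are assumptions.

```python
import numpy as np

def intensity_covariance(frames):
    """Pixel-pair covariance across a stack of frames, shape (N, P).

    Returns cov[a, b] = <I_a I_b> - <I_a><I_b>, estimated over N frames.
    Correlated pixel pairs (e.g. illuminated by photon pairs) stand out;
    statistically independent pixels average toward zero.
    """
    frames = np.asarray(frames, dtype=float)
    d = frames - frames.mean(axis=0)
    return d.T @ d / len(frames)

# Toy data: pixels 0 and 3 share correlated "pair" counts on top of
# independent Poisson background; all other pixels are independent.
rng = np.random.default_rng(1)
n = 20000
pairs = rng.poisson(0.2, size=n)
frames = rng.poisson(1.0, size=(n, 6)).astype(float)
frames[:, 0] += pairs
frames[:, 3] += pairs
cov = intensity_covariance(frames)  # cov[0, 3] is clearly positive
```

With many frames, cov[0, 3] converges to the variance of the shared pair counts while off-pair entries shrink toward zero, which is why conventional cameras can sense pair correlations at all.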

  2. Pipe inspection and repair system

    NASA Technical Reports Server (NTRS)

    Schempf, Hagen (Inventor); Mutschler, Edward (Inventor); Chemel, Brian (Inventor); Boehmke, Scott (Inventor); Crowley, William (Inventor)

    2004-01-01

    A multi-module pipe inspection and repair device. The device includes a base module, a camera module, a sensor module, an MFL module, a brush module, a patch set/test module, and a marker module. Each of the modules may be interconnected to construct one of an inspection device, a preparation device, a marking device, and a repair device.

  3. Comparison of parameters of modern cooled and uncooled thermal cameras

    NASA Astrophysics Data System (ADS)

    Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał

    2017-10-01

    During the design of a system employing thermal cameras, one always faces the problem of choosing the camera types best suited for the task. In many cases such a choice is far from optimal, and there are several reasons for that. System designers often favor the tried and tested solutions they are used to. They do not follow the latest developments in the field of infrared technology, and sometimes their choices are based on prejudice and not on facts. The paper presents the results of measurements of basic parameters of MWIR and LWIR thermal cameras, carried out in a specialized testing laboratory. The measured parameters are decisive in terms of the image quality generated by thermal cameras. All measurements were conducted according to current procedures and standards. However, the camera settings were not optimized for specific test conditions or parameter measurements. Instead, the real settings used in normal camera operation were applied to obtain realistic camera performance figures. For example, there were significant differences between measured values of noise parameters and catalogue data provided by manufacturers, due to the application of edge detection filters to increase detection and recognition ranges. The purpose of this paper is to provide help in choosing the optimal thermal camera for a particular application, answering the question of whether to opt for a cheaper microbolometer device or a slightly better (in terms of specifications) yet more expensive cooled unit. Measurements and analysis were performed by qualified personnel with several dozen years of experience in both designing and testing of thermal camera systems with both cooled and uncooled focal plane arrays. Cameras of similar array sizes and optics were compared, and for each tested group the best performing devices were selected.

  4. Remote hardware-reconfigurable robotic camera

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.

  5. A single pixel camera video ophthalmoscope

    NASA Astrophysics Data System (ADS)

    Lochocki, B.; Gambin, A.; Manzanera, S.; Irles, E.; Tajahuerce, E.; Lancis, J.; Artal, P.

    2017-02-01

    There are several ophthalmic devices to image the retina, from fundus cameras capable to image the whole fundus to scanning ophthalmoscopes with photoreceptor resolution. Unfortunately, these devices are prone to a variety of ocular conditions like defocus and media opacities, which usually degrade the quality of the image. Here, we demonstrate a novel approach to image the retina in real-time using a single pixel camera, which has the potential to circumvent those optical restrictions. The imaging procedure is as follows: a set of spatially coded patterns is projected rapidly onto the retina using a digital micro mirror device. At the same time, the inner product's intensity is measured for each pattern with a photomultiplier module. Subsequently, an image of the retina is reconstructed computationally. Obtained image resolution is up to 128 x 128 px with a varying real-time video framerate up to 11 fps. Experimental results obtained in an artificial eye confirm the tolerance against defocus compared to a conventional multi-pixel array based system. Furthermore, the use of a multiplexed illumination offers a SNR improvement leading to a lower illumination of the eye and hence an increase in patient's comfort. In addition, the proposed system could enable imaging in wavelength ranges where cameras are not available.
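The projection-and-reconstruction loop described above can be illustrated with orthogonal Hadamard patterns, a common choice for single-pixel imaging. This is a generic sketch under that assumption, not the authors' instrument: the `hadamard_matrix` and `single_pixel_image` helpers are hypothetical, and a real system would display binary-shifted patterns on the micromirror device and record photomultiplier readings instead of computing inner products.

```python
import numpy as np

def hadamard_matrix(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def single_pixel_image(scene, H):
    """Simulate single-pixel imaging of a flattened scene of length n.

    Each row of H is one pattern projected onto the retina; the detector
    records a single inner product m_k = <pattern_k, scene> per pattern.
    Since H^T H = n I, the scene is recovered as H^T m / n.
    """
    measurements = H @ scene                 # one detector reading per pattern
    return H.T @ measurements / len(scene)   # computational reconstruction

scene = np.arange(16.0)            # toy 4x4 "retina", flattened
H = hadamard_matrix(16)
recon = single_pixel_image(scene, H)  # recovers the scene exactly
```

Because every pattern illuminates roughly half the scene at once, each measurement aggregates light from many points, which is the multiplexing advantage behind the SNR and defocus-tolerance claims.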

  6. Colorimetric detection for paper-based biosensing applications

    NASA Astrophysics Data System (ADS)

    Brink, C.; Joubert, T.-H.

    2016-02-01

    Research on affordable point-of-care health diagnostics is rapidly advancing [1]. Colorimetric biosensor applications are typically qualitative, but recently the focus has shifted to quantitative measurements [2,3]. Although numerous qualitative point-of-care (POC) health diagnostic devices are available, the challenge exists of developing a quantitative colorimetric array reader system that complies with the ASSURED (Affordable, Sensitive, Specific, User-friendly, Rapid and Robust, Equipment-free, Deliverable to end-users) principles of the World Health Organization [4]. This paper presents a battery powered 8-bit tonal resolution colorimetric sensor circuit for paper microfluidic assays using low cost photo-detection circuitry and a low-power LED light source. A colorimetric 3×3-pixel array reader was developed for rural environments where resources and personnel are limited. The device sports an ultralow-power E-ink paper display. The colorimetric device includes integrated GPS functionality and EEPROM memory to log measurements with geo-tags for possible analysis of regional trends. The device competes with colour intensity measurement techniques using smartphone cameras, but proves to be a cheaper solution, compensating for the typical performance variations between cameras of different brands of smartphones. Inexpensive methods for quantifying bacterial assays have been shown using desktop scanners, which are not portable, and cameras, which suffer severely from changes in ambient light in different environments. Promising colorimetric detection results have been demonstrated using devices such as video cameras [5], digital colour analysers [6], flatbed scanners [7] or custom portable readers [8]. The major drawback of most of these methods is the need for specialized instrumentation and for image analysis on a computer.

  7. Non-invasive detection of periodontal disease using diffuse reflectance spectroscopy: a clinical study

    NASA Astrophysics Data System (ADS)

    Prasanth, Chandra Sekhar; Betsy, Joseph; Subhash, Narayanan; Jayanthi, Jayaraj L.; Prasanthila, Janam

    2012-03-01

    In clinical diagnostic procedures, gingival inflammation is considered as the initial stage of periodontal breakdown. This is often detected clinically by bleeding on probing as it is an objective measure of inflammation. Since conventional diagnostic procedures have several inherent drawbacks, development of novel non-invasive diagnostic techniques assumes significance. This clinical study was carried out in 15 healthy volunteers and 25 patients to demonstrate the applicability of diffuse reflectance (DR) spectroscopy for quantification and discrimination of various stages of inflammatory conditions in periodontal disease. The DR spectra of diseased lesions recorded using a point monitoring system consisting of a tungsten halogen lamp and a fiber-optic spectrometer showed oxygenated hemoglobin absorption dips at 545 and 575 nm. Mean DR spectra on normalization shows marked differences between healthy and different stages of gingival inflammation. Among the various DR intensity ratios investigated, involving oxy Hb absorption peaks, the R620/R575 ratio was found to be a good parameter of gingival inflammation. In order to screen the entire diseased area and its surroundings instantaneously, DR images were recorded with an EMCCD camera at 620 and 575 nm. We have observed that using the DR image intensity ratio R620/R575 mild inflammatory tissues could be discriminated from healthy with a sensitivity of 92% and specificity of 93%, and from moderate with a sensitivity of 83% and specificity of 96%. The sensitivity and specificity obtained between moderate and severe inflammation are 82% and 76% respectively.
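The ratio-imaging step above amounts to dividing the two co-registered EMCCD frames pixel by pixel and thresholding the R620/R575 map. The sketch below is illustrative only: the `inflammation_map` helper and its threshold are assumptions, not the clinically derived cut-off from the study.

```python
import numpy as np

def inflammation_map(img620, img575, threshold=1.0):
    """Classify tissue from the R620/R575 diffuse-reflectance ratio.

    img620, img575: co-registered reflectance images at 620 and 575 nm.
    `threshold` is a placeholder, not the study's clinical cut-off.
    Returns the ratio image and a boolean map (True = flagged as inflamed).
    """
    r620 = np.asarray(img620, dtype=float)
    r575 = np.maximum(np.asarray(img575, dtype=float), 1e-9)  # avoid /0
    ratio = r620 / r575
    return ratio, ratio > threshold

# Inflamed tissue carries more hemoglobin, deepening the 575 nm absorption
# dip and raising R620/R575 relative to healthy tissue.
img620 = np.array([[0.80, 0.60]])
img575 = np.array([[0.50, 0.70]])
ratio, flagged = inflammation_map(img620, img575)  # flags the first pixel
```

Mapping the ratio over the whole field of view is what lets the imaging approach screen an entire lesion at once, rather than the point samples of the fiber-optic probe.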

  8. First lunar occultation results from the 2.4 m Thai national telescope equipped with ULTRASPEC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richichi, A.; Irawati, P.; Soonthornthum, B.

    2014-11-01

    The recently inaugurated 2.4 m Thai National Telescope (TNT) is equipped with, among other instruments, the ULTRASPEC low-noise, frame-transfer EMCCD camera. At the end of its first official observing season, we report on the use of this facility to record high time resolution imaging using small detector subarrays with sampling as fast as several hundred Hz. In particular, we have recorded lunar occultations of several stars that represent the first contribution to this area of research made from Southeast Asia with a telescope of this class. Among the results, we discuss an accurate measurement of α Cnc, which has been reported previously as a suspected close binary. Attempts by several authors to resolve this star have so far met with a lack of unambiguous confirmation. With our observation we are able to place stringent limits on the projected angular separation (<0.003 arcsec) and brightness (Δm > 5) of a putative companion. We also present a measurement of the binary HR 7072, which extends considerably the time coverage available for its yet undetermined orbit. We discuss our precise determination of the flux ratio and projected separation in the context of other available data. We conclude by providing an estimate of the performance of ULTRASPEC at TNT for lunar occultation work. This facility can help to extend the lunar occultation technique in a geographical area where no comparable resources were available until now.

  9. The AOLI Non-Linear Curvature Wavefront Sensor: High sensitivity reconstruction for low-order AO

    NASA Astrophysics Data System (ADS)

    Crass, Jonathan; King, David; Mackay, Craig

    2013-12-01

    Many adaptive optics (AO) systems in use today require bright reference objects to determine the effects of atmospheric distortions on incoming wavefronts. This requirement arises because Shack-Hartmann wavefront sensors (SHWFS) distribute incoming light from reference objects into a large number of sub-apertures. Bright natural reference objects occur infrequently across the sky, leading to the use of laser guide stars, which add complexity to wavefront measurement systems. The non-linear curvature wavefront sensor as described by Guyon et al. has been shown to offer a significant increase in sensitivity when compared to a SHWFS. This facilitates much greater sky coverage using natural guide stars alone. This paper describes the current status of the non-linear curvature wavefront sensor being developed as part of an adaptive optics system for the Adaptive Optics Lucky Imager (AOLI) project. The sensor comprises two photon-counting EMCCD detectors from E2V Technologies, recording intensity at four near-pupil planes. These images are used with a reconstruction algorithm to determine the phase correction to be applied by an ALPAO 241-element deformable mirror. The overall system is intended to provide low-order correction for a Lucky Imaging based multi-CCD imaging camera. We present the current optical design of the instrument including methods to minimise inherent optical effects, principally chromaticity. Wavefront reconstruction methods are discussed and strategies for their optimisation to run at the required real-time speeds are introduced. Finally, we discuss laboratory work with a demonstrator setup of the system.

  10. Experimental Comparison of Knife-Edge and Multi-Parallel Slit Collimators for Prompt Gamma Imaging of Proton Pencil Beams.

    PubMed

    Smeets, Julien; Roellinghoff, Frauke; Janssens, Guillaume; Perali, Irene; Celani, Andrea; Fiorini, Carlo; Freud, Nicolas; Testa, Etienne; Prieels, Damien

    2016-01-01

    More and more camera concepts are being investigated to seize the opportunity, offered by the prompt gammas emitted along proton tracks, of instantaneous range verification of proton therapy treatments. Focusing on one-dimensional imaging with a passive collimator, the present study experimentally compared, in combination with the first clinically compatible dedicated camera device, the performance of the two main options: a knife-edge slit (KES) and a multi-parallel slit (MPS) design. These two options were assessed experimentally in this specific context because previous analytical and numerical studies had shown them to offer similar performance, in a general context, in terms of Bragg peak retrieval precision and spatial resolution. Both collimators were prototyped according to the conclusions of Monte Carlo optimization studies under constraints of equal weight (40 mm tungsten alloy equivalent thickness) and of the specificities of the camera device under consideration (in particular, 4 mm segmentation along the beam axis and no time-of-flight discrimination, both of which are less favorable to the MPS than to the KES). Acquisitions of proton pencil beams of 100, 160, and 230 MeV in a PMMA target revealed that, to reach a given level of statistical precision on Bragg peak depth retrieval, the KES collimator requires only half the dose the present MPS collimator needs, making the KES collimator the preferred option for a compact camera device aimed at imaging only the Bragg peak position. On the other hand, the present MPS collimator proves more effective at retrieving the entrance of the beam into the target, in the context of an extended camera device aimed at imaging the whole proton track within the patient.

  11. Absolute calibration of a charge-coupled device camera with twin beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meda, A.; Ruo-Berchera, I., E-mail: i.ruoberchera@inrim.it; Degiovanni, I. P.

    2014-09-08

    We report on the absolute calibration of a charge-coupled device (CCD) camera by exploiting quantum correlations. The method exploits a number of spatial pairwise quantum-correlated modes produced by spontaneous parametric down-conversion. We develop a measurement model accounting for all the uncertainty contributions, and we reach a relative uncertainty of 0.3% in the low-photon-flux regime. This represents a significant step forward for the characterization of (scientific) CCDs used in the mesoscopic light regime.

  12. Face recognition system for set-top box-based intelligent TV.

    PubMed

    Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Park, Kang Ryoung

    2014-11-18

    Despite the prevalence of smart TVs, many consumers continue to use conventional TVs with supplementary set-top boxes (STBs) because of the high cost of smart TVs. However, because the processing power of an STB is quite low, the smart TV functionalities that can be implemented on an STB are very limited. For this reason, little research has been conducted on face recognition for conventional TVs with supplementary STBs, even though many such studies have been conducted for smart TVs. In terms of camera sensors, previous face recognition systems have used high-resolution cameras, cameras with high-magnification zoom lenses, or camera systems with panning and tilting devices that allow face recognition from various positions. However, these cameras and devices cannot be used in intelligent TV environments because of size and cost limitations, and only small, low-cost web-cameras can be used. The resulting face recognition performance is degraded because of the limited resolution and quality of the images. We therefore propose a new face recognition system for intelligent TVs that overcomes the limitations of low-resource STBs and low-cost web-cameras. We implement the face recognition system as a software algorithm that does not require special devices or cameras. 
Our research has the following four novelties: first, candidate face regions are detected in an image captured by a camera connected to the STB via low-complexity background subtraction and face-color filtering; second, the detected candidate face regions are transmitted to a server with high processing power, which detects the face regions accurately; third, in-plane rotations of the face regions are compensated based on similarities between the left and right half sub-regions of the face regions; fourth, various poses of the viewer's face region are identified using five templates obtained during the initial user registration stage and multi-level local binary pattern matching. Experimental results indicate that the recall, precision, and genuine acceptance rate were about 95.7%, 96.2%, and 90.2%, respectively.
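
The first step of the four-step pipeline above (background subtraction plus face-color filtering on the low-power STB) might be sketched as follows; the thresholds and the crude RGB skin rule are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def candidate_face_mask(frame_rgb, background_rgb, diff_thresh=30):
    """Sketch of STB-side candidate detection: background subtraction
    followed by a simple face-color (skin-tone) filter.
    All thresholds here are illustrative, not the paper's values."""
    # Background subtraction: per-pixel sum of absolute channel differences.
    diff = np.abs(frame_rgb.astype(int) - background_rgb.astype(int)).sum(axis=2)
    moving = diff > diff_thresh
    # Crude RGB skin rule: red channel dominant over green and blue.
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)
    skin = (r > 95) & (r > g + 15) & (r > b + 15)
    return moving & skin

# Toy 1x2 frame: first pixel is a moving skin-tone pixel, second is unchanged.
bg = np.zeros((1, 2, 3), dtype=np.uint8)
frame = np.zeros((1, 2, 3), dtype=np.uint8)
frame[0, 0] = (200, 80, 60)
print(candidate_face_mask(frame, bg))  # [[ True False]]
```

In the full system, only the pixels inside this mask would be cropped and sent to the server for accurate face detection.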

  13. Quantization error of CCD cameras and their influence on phase calculation in fringe pattern analysis.

    PubMed

    Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R

    2003-09-10

    We present an investigation into the phase errors that occur in fringe pattern analysis as a result of quantization effects. When acquisition devices with a limited camera bit depth are used, only a limited number of quantization levels are available to record the signal. This may adversely affect the recorded signal and adds a potential source of instrumental error to the measurement system. Quantization effects also bound the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique; however, the principles apply equally well to other phase-measuring techniques to yield the phase error distribution caused by the camera bit depth.
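
The effect described here can be reproduced in a small numerical sketch: a sinusoidal fringe carrying a known phase bump is quantized to different bit depths and then demodulated by 1-D Fourier fringe analysis. The carrier frequency, test phase, and band-pass choice are invented for illustration and are not the paper's parameters:

```python
import numpy as np

def quantize(signal, bits):
    """Simulate a camera with a given bit depth: map [0, 1] onto 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(signal * levels) / levels

def fourier_phase(fringe, carrier):
    """1-D Fourier fringe analysis: band-pass the positive carrier lobe,
    take the analytic signal, unwrap its phase, and remove the carrier ramp."""
    N = fringe.size
    F = np.fft.fft(fringe)
    H = np.zeros(N, dtype=complex)
    H[carrier // 2: 3 * carrier // 2] = F[carrier // 2: 3 * carrier // 2]
    analytic = np.fft.ifft(H)
    phase = np.unwrap(np.angle(analytic))
    return phase - 2 * np.pi * carrier * np.arange(N) / N

N, carrier = 1024, 64
x = np.arange(N)
phi = 1.5 * np.exp(-((x - N / 2) / (N / 8)) ** 2)       # known test phase bump
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * carrier * x / N + phi)

mid = slice(N // 4, 3 * N // 4)                          # interior, away from edges
for bits in (3, 8, 12):
    err = fourier_phase(quantize(fringe, bits), carrier)[mid] - phi[mid]
    print(f"{bits:2d} bits: RMS phase error {np.sqrt(np.mean(err ** 2)):.2e} rad")
```

Running this shows the recovered phase error shrinking as the bit depth grows, which is the trend the paper quantifies.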

  14. Optical frequency comb profilometry using a single-pixel camera composed of digital micromirror devices.

    PubMed

    Pham, Quang Duc; Hayasaki, Yoshio

    2015-01-01

    We demonstrate an optical frequency comb profilometer with a single-pixel camera that measures the position and profile of an object's surface over a range far exceeding the optical wavelength, without 2π phase ambiguity. The present configuration of the single-pixel camera can perform profilometry with an axial resolution of 3.4 μm at 1 GHz operation, corresponding to a wavelength of 30 cm. The axial dynamic range was thereby increased to 0.87×10⁵. Experiments and computer simulations showed that the improvement derived from the higher modulation contrast of the digital micromirror devices. The frame rate was also increased to 20 Hz.

  15. Cataloger's Camera Chaos

    ERIC Educational Resources Information Center

    Farris, Robert C.

    1974-01-01

    A "cataloger's camera" to facilitate the reproduction of catalog card sets has been sought actively by librarians for more than two decades, never with complete success. Several of the latest library and commercial developments in the continuing quest for this elusive device are described briefly. (Author)

  16. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    PubMed Central

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-01-01

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering an over 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the focusing effect of the uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images, so that panoramas reflect the objective luminance more faithfully. This compensates for the limitation of stitching approaches that produce realistic-looking images only through smoothing. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that together cover a large field of view; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857

  17. Improved calibration-based non-uniformity correction method for uncooled infrared camera

    NASA Astrophysics Data System (ADS)

    Liu, Chengwei; Sui, Xiubao

    2017-08-01

    With the latest improvements of microbolometer focal plane arrays (FPA), uncooled infrared (IR) cameras have become the most widely used devices in thermography, especially in handheld devices. However, the influence of changing ambient conditions and the non-uniform response of the sensors makes it difficult to correct the non-uniformity of an uncooled infrared camera. In this paper, based on the infrared radiation characteristics of a TEC-less uncooled infrared camera, a novel model is proposed for calibration-based non-uniformity correction (NUC). In this model, we introduce the FPA temperature, together with the responses of the microbolometers under different ambient temperatures, to calculate the correction parameters. Based on the proposed model, the correction parameters can be worked out from calibration measurements under controlled ambient conditions with a uniform blackbody. All correction parameters are determined during the calibration process and then used to correct the non-uniformity of the infrared camera in real time. This paper presents the details of the compensation procedure and the performance of the proposed calibration-based non-uniformity correction method. Our method was evaluated on realistic IR images obtained by a 384×288 pixel uncooled long-wave infrared (LWIR) camera operated under changing ambient conditions. The results show that our method can exclude the influence of the changing ambient conditions and ensure that the infrared camera performs stably.
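
A minimal sketch of the calibration idea, reduced to the classic two-point (gain/offset) correction with two uniform blackbody frames. Note that the paper's model additionally incorporates the FPA temperature, which this sketch omits, and the simulated per-pixel responses below are fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated linear per-pixel response: raw = gain_px * T + offset_px.
true_gain = rng.uniform(0.9, 1.1, size=(4, 4))
true_offset = rng.uniform(-5.0, 5.0, size=(4, 4))
respond = lambda T: true_gain * T + true_offset

def two_point_nuc(raw_cold, raw_hot, t_cold, t_hot):
    """Per-pixel correction gain/offset from two uniform blackbody frames
    taken at known temperatures (classic two-point NUC)."""
    gain = (t_hot - t_cold) / (raw_hot - raw_cold)
    offset = t_cold - gain * raw_cold
    return gain, offset

# Calibrate at 20 °C and 40 °C, then correct an uncalibrated 30 °C scene.
gain, offset = two_point_nuc(respond(20.0), respond(40.0), 20.0, 40.0)
corrected = gain * respond(30.0) + offset
print(np.allclose(corrected, 30.0))  # True: the fixed-pattern non-uniformity is removed
```

Under the linear-response assumption this correction is exact; the paper's contribution is extending the model so the parameters remain valid as the ambient (and hence FPA) temperature drifts.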

  18. [Intraoperative augmented reality visualization. Current state of development and initial experiences with the CamC].

    PubMed

    Weidert, S; Wang, L; von der Heide, A; Navab, N; Euler, E

    2012-03-01

    The intraoperative application of augmented reality (AR) has so far mainly taken place in the field of endoscopy. Here, the camera image of the endoscope was augmented by computer graphics derived mostly from preoperative imaging. Due to the complex setup and operation of the devices, they have not yet become part of routine clinical practice. The Camera Augmented Mobile C-arm (CamC) that extends a classic C-arm by a video camera and mirror construction is characterized by its uncomplicated handling. It combines its video live stream geometrically correct with the acquired X-ray. The clinical application of the device in 43 cases showed the strengths of the device in positioning for X-ray acquisition, incision placement, K-wire placement, and instrument guidance. With its new function and the easy integration into the OR workflow of any procedure that requires X-ray imaging, the CamC has the potential to become the first widely used AR technology for orthopedic and trauma surgery.

  19. Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples

    PubMed Central

    Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.

    2014-01-01

    Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

  20. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body parts, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real time. The data are acquired in motion and thus provide multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement, which can vary due to factors such as the angle of incidence, the distance between the device and the subject, environmental sensor data, or other factors influencing the confidence in the thermal-infrared data at capture time. Finally, several case studies are presented to support the usability and performance of the proposed system.
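
The reliability-weighted combination of overlapping measurements could be sketched as below; the particular confidence function (cosine of the incidence angle with an inverse-square distance falloff) is an illustrative assumption, not the paper's formulation:

```python
import numpy as np

def reliability(angle_deg, distance_m):
    """Toy confidence for one measurement: favor head-on views
    (cosine of incidence angle) and nearby captures (inverse-square falloff).
    This weighting scheme is an assumption for illustration."""
    return np.cos(np.radians(angle_deg)) / (1.0 + distance_m ** 2)

def fuse(measurements, weights):
    """Reliability-weighted average of repeated temperature measurements
    of the same surface point."""
    w = np.asarray(weights, dtype=float)
    m = np.asarray(measurements, dtype=float)
    return float(np.sum(w * m) / np.sum(w))

# Two views of the same point: a close, head-on reading and an oblique, distant one.
temps = [36.5, 37.1]
w = [reliability(0.0, 0.5), reliability(60.0, 1.5)]
print(fuse(temps, w))  # dominated by the more reliable first reading
```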

  1. Fringe projection profilometry with portable consumer devices

    NASA Astrophysics Data System (ADS)

    Liu, Danji; Pan, Zhipeng; Wu, Yuxiang; Yue, Huimin

    2018-01-01

    A fringe projection profilometry (FPP) system using portable consumer devices is attractive because it can bring optical three-dimensional (3D) measurement to ordinary consumers in their daily lives. We demonstrate an FPP system using the camera of a smart mobile phone and a digital consumer mini projector. In our experiments testing the smartphone (iPhone 7) camera performance, the rear-facing camera of the iPhone 7 gives the FPP a fringe contrast ratio of 0.546, a nonlinear carrier phase aberration of 0.6 rad, a nonlinear phase error of 0.08 rad, and an RMS random phase error of 0.033 rad. In contrast, the FPP using an industrial camera has a fringe contrast ratio of 0.715, a nonlinear carrier phase aberration of 0.5 rad, a nonlinear phase error of 0.05 rad, and an RMS random phase error of 0.011 rad. Good performance is achieved by the FPP composed of an iPhone 7 and a mini projector. 3D information of an adult-sized facemask is also measured using the FPP built from portable consumer devices. After system calibration, the 3D absolute information of the facemask is obtained. The measured results are in good agreement with those obtained in a traditional way. Our results show that it is possible to use portable consumer devices to construct a good FPP system, which is useful for ordinary people to obtain 3D information in their daily lives.

  2. Broadly available imaging devices enable high-quality low-cost photometry.

    PubMed

    Christodouleas, Dionysios C; Nemiroski, Alex; Kumar, Ashok A; Whitesides, George M

    2015-09-15

    This paper demonstrates that, for applications in resource-limited environments, expensive microplate spectrophotometers that are used in many central laboratories for parallel measurement of absorbance of samples can be replaced by photometers based on inexpensive and ubiquitous, consumer electronic devices (e.g., scanners and cell-phone cameras). Two devices, (i) a flatbed scanner operating in transmittance mode and (ii) a camera-based photometer (constructed from a cell phone camera, a planar light source, and a cardboard box), demonstrate the concept. These devices illuminate samples in microtiter plates from one side and use the RGB-based imaging sensors of the scanner/camera to measure the light transmitted to the other side. The broadband absorbance of samples (RGB-resolved absorbance) can be calculated using the RGB color values of only three pixels per microwell. Rigorous theoretical analysis establishes a well-defined relationship between the absorbance spectrum of a sample and its corresponding RGB-resolved absorbance. The linearity and precision of measurements performed with these low-cost photometers on different dyes, which absorb across the range of the visible spectrum, and chromogenic products of assays (e.g., enzymatic, ELISA) demonstrate that these low-cost photometers can be used reliably in a broad range of chemical and biochemical analyses. The ability to perform accurate measurements of absorbance on liquid samples, in parallel and at low cost, would enable testing, typically reserved for well-equipped clinics and laboratories, to be performed in circumstances where resources and expertise are limited.
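
The RGB-resolved absorbance computation described above reduces to a per-channel ratio against a blank well, in the usual -log10 form; a minimal sketch (the function name and inputs are illustrative):

```python
import numpy as np

def rgb_absorbance(sample_rgb, blank_rgb):
    """RGB-resolved absorbance per channel, A_c = -log10(I_sample,c / I_blank,c),
    computed from mean RGB values of a few pixels per microwell."""
    s = np.asarray(sample_rgb, dtype=float)
    b = np.asarray(blank_rgb, dtype=float)
    return -np.log10(s / b)

# A well that transmits 10% red, 50% green, and 100% blue relative to the blank:
print(rgb_absorbance([10, 50, 100], [100, 100, 100]))  # ≈ [1.0, 0.301, 0.0]
```

In practice the sample and blank intensities would come from averaging a handful of pixels inside each microwell of the scanned or photographed plate.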

  3. Solid State Television Camera (CID)

    NASA Technical Reports Server (NTRS)

    Steele, D. W.; Green, W. T.

    1976-01-01

    The design, development, and testing are described of a charge injection device (CID) camera using a 244×248 element array. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low-light-level performance, high S/N ratio, antiblooming, low geometric distortion, sequential scanning, and AGC.

  4. Suitability of digital camcorders for virtual reality image data capture

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola; Maas, Hans-Gerd

    1998-12-01

    Today's consumer-market digital camcorders offer features which make them appear quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine vision type CCD camera and discusses the suitability of these three cameras for virtual reality applications. Besides the discussion of technical features of the cameras, this includes a detailed accuracy test in order to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the lack of a means to synchronize multiple devices, limiting their suitability for 3-D motion data capture. Moreover, the standard video format uses interlacing, which is also undesirable for all applications dealing with moving objects or moving cameras. Further disadvantages are computer interfaces whose functionality is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and will make potential users turn back to machine-vision-like equipment), this functionality could probably be included by the manufacturers at almost zero cost.

  5. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.

    PubMed

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-11-07

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.

  6. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    PubMed

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at the joint calibration of multiple sensors (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method achieves satisfactory performance in a practical real-time system and that its accuracy is higher than the manufacturer's calibration.

  7. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras

    PubMed Central

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-01

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at the joint calibration of multiple sensors (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method achieves satisfactory performance in a practical real-time system and that its accuracy is higher than the manufacturer’s calibration. PMID:28672823

  8. Design of a high-numerical-aperture digital micromirror device camera with high dynamic range.

    PubMed

    Qiao, Yang; Xu, Xiping; Liu, Tao; Pan, Yue

    2015-01-01

    A high-NA imaging system with high dynamic range is presented based on a digital micromirror device (DMD). The DMD camera consists of an objective imaging system and a relay imaging system, connected by a DMD chip. With the introduction of a total internal reflection prism system, the objective imaging system is designed with a working F/# of 1.97, breaking through the F/2.45 limitation of conventional DMD projection lenses. As for the relay imaging system, an off-axis design that corrects the off-axis aberrations of the tilted relay imaging system is developed. This structure has the advantage of increasing the NA of the imaging system while maintaining a compact size. Investigation revealed that the dynamic range of a DMD camera could be greatly increased, by 2.41 times. We built one prototype DMD camera with a working F/# of 1.23, and field experiments proved the validity and reliability of our work.

  9. Design of Belief Propagation Based on FPGA for the Multistereo CAFADIS Camera

    PubMed Central

    Magdaleno, Eduardo; Lüke, Jonás Philipp; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel

    2010-01-01

    In this paper we describe a fast, specialized hardware implementation of the belief propagation algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel and pipelined architecture to implement the algorithm without external memory. Although the BRAM resources of the device increase considerably, we can maintain real-time restrictions by using extremely high-performance signal processing capability through parallelism and by accessing several memories simultaneously. Quantitative results with 16-bit precision show that performance is very close to that of the original Matlab implementation of the algorithm. PMID:22163404

  10. Design of belief propagation based on FPGA for the multistereo CAFADIS camera.

    PubMed

    Magdaleno, Eduardo; Lüke, Jonás Philipp; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel

    2010-01-01

    In this paper we describe a fast, specialized hardware implementation of the belief propagation algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel and pipelined architecture to implement the algorithm without external memory. Although the BRAM resources of the device increase considerably, we can maintain real-time restrictions by using extremely high-performance signal processing capability through parallelism and by accessing several memories simultaneously. Quantitative results with 16-bit precision show that performance is very close to that of the original Matlab implementation of the algorithm.

  11. Printed products for digital cameras and mobile devices

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Schmidt-Sacht, Wulf

    2005-01-01

    Digital photography is no longer simply a successor to film. The digital market is now driven by additional devices such as mobile phones with camera and video functions (camphones) as well as innovative products derived from digital files. A large number of consumers do not print their images and non-printing has become the major enemy of wholesale printers, home printing suppliers and retailers. This paper addresses the challenge facing our industry, namely how to encourage the consumer to print images easily and conveniently from all types of digital media.

  12. Deployable Wireless Camera Penetrators

    NASA Technical Reports Server (NTRS)

    Badescu, Mircea; Jones, Jack; Sherrit, Stewart; Wu, Jiunn Jeng

    2008-01-01

    A lightweight, low-power camera dart has been designed and tested for context imaging of sampling sites and ground surveys from an aerobot or an orbiting spacecraft in a microgravity environment. The camera penetrators also can be used to image any line-of-sight surface, such as cliff walls, that is difficult to access. Tethered cameras to inspect the surfaces of planetary bodies use both power and signal transmission lines to operate. A tether adds the possibility of inadvertently anchoring the aerobot, and requires some form of station-keeping capability of the aerobot if extended examination time is required. The new camera penetrators are deployed without a tether, weigh less than 30 grams, and are disposable. They are designed to drop from any altitude with the boost in transmitting power currently demonstrated at approximately 100-m line-of-sight. The penetrators also can be deployed to monitor lander or rover operations from a distance, and can be used for surface surveys or for context information gathering from a touch-and-go sampling site. Thanks to wireless operation, the complexity of the sampling or survey mechanisms may be reduced. The penetrators may be battery powered for short-duration missions, or have solar panels for longer or intermittent duration missions. The imaging device is embedded in the penetrator, which is dropped or projected at the surface of a study site at 90 to the surface. Mirrors can be used in the design to image the ground or the horizon. Some of the camera features were tested using commercial "nanny" or "spy" camera components with the charge-coupled device (CCD) looking at a direction parallel to the ground. Figure 1 shows components of one camera that weighs less than 8 g and occupies a volume of 11 cm3. This camera could transmit a standard television signal, including sound, up to 100 m. Figure 2 shows the CAD models of a version of the penetrator. 
A low-volume array of such penetrator cameras could be deployed from an aerobot or a spacecraft onto a comet or asteroid. A system of 20 of these penetrators could be designed and built in a 1- to 2-kg mass envelope. Possible future modifications of the camera penetrators, such as the addition of a chemical spray device, would allow the study of simple chemical reactions by spraying reagents at the landing site and observing the color changes. Zoom lenses also could be added for future use.

  13. Work zone speed reduction utilizing dynamic speed signs

    DOT National Transportation Integrated Search

    2011-08-30

Vast quantities of transportation data are automatically recorded by intelligent transportation infrastructure, such as inductive loop detectors, video cameras, and side-fire radar devices. Such devices are typically deployed by traffic management c...

  14. Application of infrared uncooled cameras in surveillance systems

    NASA Astrophysics Data System (ADS)

Dulski, R.; Bareła, J.; Trzaskawka, P.; Piątkowski, T.

    2013-10-01

The recent necessity to protect military bases, convoys and patrols has given serious impetus to the development of multisensor security systems for perimeter protection. One of the most important devices used in such systems is the IR camera. The paper discusses the technical possibilities and limitations of using an uncooled IR camera in a multi-sensor surveillance system for perimeter protection. Effective detection ranges depend on the class of the sensor used and the observed scene itself. Application of an IR camera increases the probability of intruder detection regardless of the time of day or weather conditions, while simultaneously decreasing the false alarm rate produced by the surveillance system. The role of IR cameras in the system is discussed, as well as the technical possibilities for detecting a human being. A comparison of commercially available IR cameras capable of achieving the desired ranges was made. The required spatial resolution for detection, recognition and identification was calculated. The simulation of detection ranges was done using a new model for predicting target acquisition performance which uses the Targeting Task Performance (TTP) metric. Like its predecessor, the Johnson criteria, the new model bounds range performance with image quality. The scope of the presented analysis is limited to the estimation of detection, recognition and identification ranges for typical thermal cameras with uncooled microbolometer focal plane arrays. This type of camera is most widely used in security systems because of its competitive price-to-performance ratio. Detection, recognition and identification range calculations were made, and the results for devices with selected technical specifications were compared and discussed.
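The Johnson-criteria side of the range estimate mentioned above reduces to counting resolvable cycles across a target's critical dimension. The full TTP-metric model is considerably more involved, so the sketch below only illustrates the classic Johnson-style calculation, with cycle criteria and example numbers that are common textbook values rather than figures from the paper:

```python
# Commonly quoted Johnson criteria (cycles across the target's critical
# dimension for ~50% task probability); exact values vary between sources.
CYCLES_50 = {"detection": 1.0, "recognition": 4.0, "identification": 8.0}

def johnson_range_m(critical_dim_m: float,
                    resolvable_freq_cyc_per_mrad: float,
                    task: str) -> float:
    """Range at which the sensor resolves the required number of cycles.

    cycles on target = angular size [mrad] * resolvable frequency [cyc/mrad]
                     = (critical_dim / range) * 1000 * f
    Solving for range with cycles = CYCLES_50[task].
    """
    n = CYCLES_50[task]
    return critical_dim_m * 1000.0 * resolvable_freq_cyc_per_mrad / n

# Example with assumed numbers: a 0.5 m critical target dimension and a
# camera resolving 2 cyc/mrad at the available scene contrast.
r_det = johnson_range_m(0.5, 2.0, "detection")       # 1000 m
r_rec = johnson_range_m(0.5, 2.0, "recognition")     # 250 m
r_id = johnson_range_m(0.5, 2.0, "identification")   # 125 m
print(r_det, r_rec, r_id)
```

The TTP metric replaces the single cycle criterion with an integral over the contrast-dependent resolvable frequency band, but the range scaling above is the same basic mechanism.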

  15. Visual enhancement of laparoscopic partial nephrectomy with 3-charge coupled device camera: assessing intraoperative tissue perfusion and vascular anatomy by visible hemoglobin spectral response.

    PubMed

    Crane, Nicole J; Gillern, Suzanne M; Tajkarimi, Kambiz; Levin, Ira W; Pinto, Peter A; Elster, Eric A

    2010-10-01

We report the novel use of 3-charge coupled device camera technology to infer tissue oxygenation. The technique can aid surgeons to reliably differentiate vascular structures and noninvasively assess laparoscopic intraoperative changes in renal tissue perfusion during and after warm ischemia. We analyzed select digital video images from 10 laparoscopic partial nephrectomies for their individual 3-charge coupled device response. We enhanced surgical images by subtracting the red charge coupled device response from the blue response and overlaying the calculated image on the original image. Mean intensity values for regions of interest were compared and used to differentiate arterial and venous vasculature, and ischemic and nonischemic renal parenchyma. The 3-charge coupled device enhanced images clearly delineated the vessels in all cases. Arteries were indicated by an intense red color while veins were shown in blue. Differences in mean region of interest intensity values for arteries and veins were statistically significant (p <0.0001). Three-charge coupled device analysis of pre-clamp and post-clamp renal images revealed visible, dramatic color enhancement for ischemic vs nonischemic kidneys. Differences in the mean region of interest intensity values were also significant (p <0.05). We present a simple use of conventional 3-charge coupled device camera technology in a way that may provide urological surgeons with the ability to reliably distinguish vascular structures during hilar dissection, and detect and monitor changes in renal tissue perfusion during and after warm ischemia. Copyright © 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
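The enhancement described (subtracting the red CCD response from the blue response and overlaying the result on the original image) can be sketched in a few lines of NumPy. The overlay weighting and the red/blue pseudocolour mapping below are illustrative assumptions; the abstract does not specify them:

```python
import numpy as np

def enhance_3ccd(rgb: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Overlay the (blue - red) channel difference on the original image.

    rgb: H x W x 3 uint8 image in (R, G, B) channel order.
    The blue-minus-red difference tracks the visible hemoglobin spectral
    response described in the abstract; the overlay weight `alpha` and the
    pseudocolour mapping are illustrative assumptions.
    """
    img = rgb.astype(np.int16)
    diff = img[..., 2] - img[..., 0]        # blue response minus red response
    out = img.astype(np.float64)
    out[..., 0] += alpha * np.clip(-diff, 0, 255)  # boost red where red dominates
    out[..., 2] += alpha * np.clip(diff, 0, 255)   # boost blue where blue dominates
    return np.clip(out, 0, 255).astype(np.uint8)

def roi_mean_diff(rgb: np.ndarray, mask: np.ndarray) -> float:
    """Mean (blue - red) intensity over a region of interest, the quantity
    the abstract compares between vessel types and perfusion states."""
    diff = rgb[..., 2].astype(np.int16) - rgb[..., 0].astype(np.int16)
    return float(diff[mask].mean())
```

Comparing `roi_mean_diff` over arterial versus venous regions then mirrors the statistical comparison reported in the abstract.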

  16. The future of consumer cameras

    NASA Astrophysics Data System (ADS)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

In the last two decades multimedia devices, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have diffused dramatically. Moreover, their increasing computational performance, combined with higher storage capacity, allows them to process large amounts of data. In this paper an overview of the current trends of the consumer camera market and technology will be given, also providing some details about the recent past (from the Digital Still Camera up to today) and forthcoming key issues.

  17. Pettit runs a drill while looking through a camera mounted on the Nadir window in the U.S. Lab

    NASA Image and Video Library

    2003-04-05

    ISS006-E-44305 (5 April 2003) --- Astronaut Donald R. Pettit, Expedition Six NASA ISS science officer, runs a drill while looking through a camera mounted on the nadir window in the Destiny laboratory on the International Space Station (ISS). The device is called a “barn door tracker”. The drill turns the screw, which moves the camera and its spotting scope.

  18. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL.... This generic type of device may include signal analysis and display equipment, patient and equipment...

  19. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL.... This generic type of device may include signal analysis and display equipment, patient and equipment...

  20. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL.... This generic type of device may include signal analysis and display equipment, patient and equipment...

  1. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL.... This generic type of device may include signal analysis and display equipment, patient and equipment...

  2. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL.... This generic type of device may include signal analysis and display equipment, patient and equipment...

  3. Towards Kilo-Hertz 6-DoF Visual Tracking Using an Egocentric Cluster of Rolling Shutter Cameras.

    PubMed

    Bapat, Akash; Dunn, Enrique; Frahm, Jan-Michael

    2016-11-01

    To maintain a reliable registration of the virtual world with the real world, augmented reality (AR) applications require highly accurate, low-latency tracking of the device. In this paper, we propose a novel method for performing this fast 6-DOF head pose tracking using a cluster of rolling shutter cameras. The key idea is that a rolling shutter camera works by capturing the rows of an image in rapid succession, essentially acting as a high-frequency 1D image sensor. By integrating multiple rolling shutter cameras on the AR device, our tracker is able to perform 6-DOF markerless tracking in a static indoor environment with minimal latency. Compared to state-of-the-art tracking systems, this tracking approach performs at significantly higher frequency, and it works in generalized environments. To demonstrate the feasibility of our system, we present thorough evaluations on synthetically generated data with tracking frequencies reaching 56.7 kHz. We further validate the method's accuracy on real-world images collected from a prototype of our tracking system against ground truth data using standard commodity GoPro cameras capturing at 120 Hz frame rate.
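The core observation, that a rolling shutter reads rows sequentially and therefore delivers 1D samples at a much higher rate than the frame rate, can be illustrated by computing per-row timestamps. The frame geometry and readout time below are assumed example values, not figures from the paper:

```python
def row_timestamps(frame_time_s: float, readout_time_s: float, n_rows: int):
    """Capture time of each row in a rolling-shutter frame.

    Rows are exposed sequentially over the readout window, so each row is
    effectively a 1D sample at its own timestamp - the property the tracker
    in the abstract exploits for high-frequency pose updates.
    """
    dt = readout_time_s / n_rows
    return [frame_time_s + r * dt for r in range(n_rows)]

# Assumed example: a 1080-row sensor read out over 8 ms yields row samples
# at roughly 135 kHz, far above the nominal frame rate.
ts = row_timestamps(0.0, 0.008, 1080)
rate_hz = 1.0 / (ts[1] - ts[0])
print(round(rate_hz))
```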

  4. Determining the Performance of Fluorescence Molecular Imaging Devices using Traceable Working Standards with SI Units of Radiance

    PubMed Central

    Zhu, Banghe; Rasmussen, John C.; Litorja, Maritoni

    2017-01-01

To date, no emerging preclinical or clinical near-infrared fluorescence (NIRF) imaging devices for non-invasive and/or surgical guidance have had their performance validated on working standards with SI units of radiance that enable comparison or quantitative quality assurance. In this work, we developed and deployed a methodology to calibrate a stable, solid phantom for emission radiance with units of mW · sr−1 · cm−2 for use in characterizing the measurement sensitivity, signal-to-noise ratio, and contrast of ICCD and IsCMOS detection. In addition, at calibrated radiances, we assess the transverse and lateral resolution of the ICCD and IsCMOS camera systems. The methodology demonstrated the superior SNR of the ICCD over the IsCMOS camera system and the superior resolution of the IsCMOS over the ICCD camera system. Contrast depended upon the camera settings (binning and integration time) and the gain of the intensifier. Finally, because the architectures of CMOS and CCD camera systems result in vastly different performance, we comment on the utility of these systems for small animal imaging as well as clinical applications for non-invasive and surgical guidance. PMID:26552078

  5. Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task.

    PubMed

    Bott, Nicholas T; Lange, Alex; Rentz, Dorene; Buffalo, Elizabeth; Clopton, Paul; Zola, Stuart

    2017-01-01

Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive "window on the brain," and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and 3 FPS built-in web camera at each of the three visits (r = 0.88-0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81-0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88-0.94). Significantly fewer data quality issues were encountered using the built-in web camera.
Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high frame rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as fixation points, built-in web cameras are a standard feature of most smart devices (e.g., laptops, tablets, smart phones) and can be effectively employed to track eye movements on decisional tasks with high accuracy and minimal cost.
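The two statistics the study relies on, Pearson correlation between scoring methods and a kappa coefficient for inter-rater agreement, are straightforward to compute. The sketch below uses plain Cohen-style kappa as a stand-in for the Siegel and Castellan formulation cited in the abstract, and the score values are invented for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length score sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cohen_kappa(r1, r2):
    """Chance-corrected agreement between two raters' categorical labels
    (Cohen's form; used here as a stand-in for Siegel and Castellan's)."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    cats = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance
    return (po - pe) / (1 - pe)

# Hypothetical novelty-preference scores from the 60 FPS eye tracker and
# the 3 FPS web camera for a handful of subjects:
tracker = [0.62, 0.70, 0.55, 0.68, 0.73]
webcam = [0.60, 0.72, 0.57, 0.66, 0.75]
print(round(pearson_r(tracker, webcam), 3))
```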

  6. Utilizing Commercial Hardware and Open Source Computer Vision Software to Perform Motion Capture for Reduced Gravity Flight

    NASA Technical Reports Server (NTRS)

    Humphreys, Brad; Bellisario, Brian; Gallo, Christopher; Thompson, William K.; Lewandowski, Beth

    2016-01-01

Long duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, the Exercise Physiology and Countermeasures (ExPC) project and National Space Biomedical Research Institute (NSBRI) funded researchers by developing computational models of exercising with these new advanced exercise device concepts. To perform validation of these models and to support the Advanced Exercise Concepts Project, several candidate devices have been flown onboard NASA's Reduced Gravity Aircraft. In terrestrial laboratories, researchers typically have motion capture systems available for the measurement of subject kinematics. Onboard the parabolic flight aircraft it is not practical to utilize traditional motion capture systems due to the large working volume they require and their relatively high replacement cost if damaged. To support measuring kinematics on board parabolic aircraft, a motion capture system is being developed utilizing open source computer vision code with commercial off-the-shelf (COTS) video camera hardware. While the system's accuracy is lower than that of laboratory setups, it provides a means to produce quantitative comparative motion capture kinematic data. Additionally, data such as the required exercise volume for small spaces such as the Orion capsule can be determined.
METHODS: OpenCV is an open source computer vision library that provides the ability to perform multi-camera 3-dimensional reconstruction. Utilizing OpenCV, via the Python programming language, a set of tools has been developed to perform motion capture in confined spaces using commercial cameras. Four Sony video cameras were intrinsically calibrated prior to flight. Intrinsic calibration provides a set of camera-specific parameters to remove the geometric distortion of the lens and sensor (specific to each individual camera). A set of high-contrast markers was placed on the exercising subject (safety also necessitated that they be soft in case they became detached during parabolic flight); small yarn balls were used. Extrinsic calibration, the determination of camera location and orientation parameters, is performed using fixed landmark markers shared by the camera scenes. Additionally, a wand calibration, the sweeping of the camera scenes simultaneously, was also performed. Techniques have been developed to perform intrinsic calibration, extrinsic calibration, isolation of the markers in the scene, calculation of marker 2D centroids, and 3D reconstruction from multiple cameras. These methods have been tested in the laboratory in side-by-side comparison with a traditional motion capture system and also on a parabolic flight.
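The final step of the pipeline described above, reconstructing a 3D marker position from its 2D centroids in several calibrated cameras, is typically done by linear (DLT) triangulation. The sketch below implements that step directly in NumPy rather than through OpenCV's own API; the camera matrices and test point are synthetic:

```python
import numpy as np

def triangulate(proj_mats, points_2d):
    """Linear (DLT) triangulation of one marker from several cameras.

    proj_mats: list of 3x4 projection matrices P = K [R | t], one per camera
               (the product of intrinsic and extrinsic calibration).
    points_2d: list of (u, v) marker centroid coordinates in each view.
    Returns the 3D point minimizing the algebraic reprojection residual.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                      # null-space direction in homogeneous coords
    return X[:3] / X[3]

# Synthetic check: two cameras with identical intrinsics, one translated
# 1 m sideways, observing a known point.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.2, -0.1, 3.0])
X_est = triangulate([P1, P2], [project(P1, X_true), project(P2, X_true)])
print(X_est)  # recovers X_true up to numerical precision
```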

  7. Vacuum compatible miniature CCD camera head

    DOEpatents

    Conder, Alan D.

    2000-01-01

A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.

  8. The canopy camera

    Treesearch

    Harry E. Brown

    1962-01-01

    The canopy camera is a device of new design that takes wide-angle, overhead photographs of vegetation canopies, cloud cover, topographic horizons, and similar subjects. Since the entire hemisphere is photographed in a single exposure, the resulting photograph is circular, with the horizon forming the perimeter and the zenith the center. Photographs of this type provide...

  9. 21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...

  10. 21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...

  11. 21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...

  12. 21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...

  13. Towards automated assistance for operating home medical devices.

    PubMed

    Gao, Zan; Detyniecki, Marcin; Chen, Ming-Yu; Wu, Wen; Hauptmann, Alexander G; Wactlar, Howard D

    2010-01-01

To detect errors when subjects operate a home medical device, we observe them with multiple cameras. We then perform action recognition with a robust approach based on explicitly encoding motion information: the algorithm detects interest points and encodes not only their local appearance but also explicitly models local motion. Our goal is to recognize individual human actions in the operation of a home medical device to see if the patient has correctly performed the required actions in the prescribed sequence. Using a specific infusion pump as a test case, requiring 22 operation steps from 6 action classes, our best classifier selects high-likelihood action estimates from the 4 available cameras, obtaining an average class recognition rate of 69%.

  14. Hand-to-hand coupling and strategies to minimize unintentional energy transfer during laparoscopic surgery.

    PubMed

    Overbey, Douglas M; Hilton, Sarah A; Chapman, Brandon C; Townsend, Nicole T; Barnett, Carlton C; Robinson, Thomas N; Jones, Edward L

    2017-11-01

Energy-based devices are used in nearly every laparoscopic operation. Radiofrequency energy can transfer to nearby instruments via antenna and capacitive coupling without direct contact. Previous studies have described inadvertent energy transfer through bundled cords and nonelectrically active wires. The purpose of this study was to describe a new mechanism of stray energy transfer from the monopolar instrument through the operating surgeon to the laparoscopic telescope and propose practical measures to decrease the risk of injury. Radiofrequency energy was delivered to a laparoscopic L-hook (monopolar "bovie"), an advanced bipolar device, and an ultrasonic device in a laparoscopic simulator. The tip of a 10-mm telescope was placed adjacent to but not touching bovine liver in a standard four-port laparoscopic cholecystectomy setup. Temperature increase was measured as the change from baseline in tissue temperature nearest the tip of the telescope, which was never in contact with the energy-based device, after a 5-s open-air activation. The monopolar L-hook increased tissue temperature adjacent to the camera/telescope tip by 47 ± 8°C from baseline (P < 0.001). By having an assistant surgeon hold the camera/telescope (rather than one surgeon holding both the active electrode and the camera/telescope), temperature change was reduced to 26 ± 7°C (P < 0.001). Alternative energy devices significantly reduced temperature change in comparison to the monopolar instrument (47 ± 8°C) for both the advanced bipolar (1.2 ± 0.5°C; P < 0.001) and ultrasonic (0.6 ± 0.3°C; P < 0.001) devices. Stray energy transfers from the monopolar "bovie" instrument through the operating surgeon to standard electrically inactive laparoscopic instruments. Hand-to-hand coupling describes a new form of capacitive coupling where the surgeon's body acts as an electrical conductor to transmit energy.
Strategies to reduce stray energy transfer include avoiding the same surgeon holding the active electrode and laparoscopic camera or using alternative energy devices. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Astronomy and the camera obscura

    NASA Astrophysics Data System (ADS)

    Feist, M.

    2000-02-01

The camera obscura (from the Latin for "darkened chamber") is a simple optical device with a long history. In the form considered here, it can be traced back to 1550. It had its heyday during the Victorian era, when it was to be found at the seaside as a tourist attraction or sideshow. It was also used as an artist's drawing aid and, in 1620, the famous astronomer-mathematician Johannes Kepler used a small tent camera obscura to trace the scenery.

  16. Use of a color CMOS camera as a colorimeter

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans; Redford, Gary R.

    2006-08-01

    In radiology diagnosis, film is being quickly replaced by computer monitors as the display medium for all imaging modalities. Increasingly, these monitors are color instead of monochrome. It is important to have instruments available to characterize the display devices in order to guarantee reproducible presentation of image material. We are developing an imaging colorimeter based on a commercially available color digital camera. The camera uses a sensor that has co-located pixels in all three primary colors.

  17. Weather and atmosphere observation with the ATOM all-sky camera

    NASA Astrophysics Data System (ADS)

    Jankowsky, Felix; Wagner, Stefan

    2015-03-01

The Automatic Telescope for Optical Monitoring (ATOM) for H.E.S.S. is a 75 cm optical telescope which operates fully automatically. As there is no observer present during observation, an auxiliary all-sky camera serves as the weather monitoring system. This device takes an image of the whole sky every three minutes. The gathered data then undergo live analysis: an astrometric comparison with a theoretical night-sky model interprets the absence of stars as cloud coverage. The sky monitor also serves as a tool for a meteorological analysis of the observation site of the upcoming Cherenkov Telescope Array. This overview covers the design and benefits of the all-sky camera and additionally gives an introduction to current efforts to integrate the device into the atmosphere analysis programme of H.E.S.S.
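The live analysis described, comparing the image against a night-sky model and reading missing stars as cloud, can be sketched as a simple catalogue-matching step. The matching radius and coordinates below are illustrative assumptions, not the H.E.S.S. implementation:

```python
def cloud_coverage(expected_stars, detected_stars, match_radius_px=3.0):
    """Fraction of catalogue stars with no detection nearby.

    expected_stars: predicted (x, y) pixel positions from the night-sky model
                    after astrometric calibration of the all-sky image.
    detected_stars: (x, y) positions actually found in the image.
    A catalogue star with no detection within match_radius_px is counted as
    obscured; the returned fraction is a crude cloud-coverage estimate.
    """
    if not expected_stars:
        return 0.0
    r2 = match_radius_px ** 2
    missing = 0
    for ex, ey in expected_stars:
        if not any((ex - dx) ** 2 + (ey - dy) ** 2 <= r2
                   for dx, dy in detected_stars):
            missing += 1
    return missing / len(expected_stars)

expected = [(10, 10), (50, 50), (90, 20), (30, 70)]
detected = [(10.5, 9.8), (50.2, 50.1)]        # two stars hidden by cloud
print(cloud_coverage(expected, detected))     # → 0.5
```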

  18. Data transmission protocol for Pi-of-the-Sky cameras

    NASA Astrophysics Data System (ADS)

    Uzycki, J.; Kasprowicz, G.; Mankiewicz, M.; Nawrocki, K.; Sitek, P.; Sokolowski, M.; Sulej, R.; Tlaczala, W.

    2006-10-01

The large amount of data collected by automatic astronomical cameras has to be transferred to fast computers in a reliable way. The method chosen should ensure data streaming in both directions, but in a nonsymmetrical way. The Ethernet interface is a very good choice because of its popularity and proven performance. However, it requires a TCP/IP stack implementation in devices such as cameras for full compliance with existing networks and operating systems. This paper describes the NUDP protocol, which was created as a supplement to the standard UDP protocol and can be used as a simple network protocol. NUDP does not need a TCP protocol implementation and makes it possible to run the Ethernet network with simple devices based on microcontroller and/or FPGA chips. The data transmission scheme was created especially for the "Pi of the Sky" project.
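The abstract does not give the NUDP wire format, but a "simple network protocol" of this kind typically amounts to a small fixed header over UDP datagrams. The framing below is therefore a hypothetical sketch in that spirit, not the actual NUDP layout:

```python
import struct

# Hypothetical framing layered on UDP: a fixed header carrying a sequence
# number, payload length and flags lets a simple FPGA/microcontroller peer
# detect loss and reordering without a full TCP/IP stack. The real NUDP
# wire format is not specified in the abstract.
HEADER = struct.Struct("!IHH")   # seq (uint32), payload length, flags

def pack_frame(seq: int, payload: bytes, flags: int = 0) -> bytes:
    """Prepend the header to a payload, ready for socket.sendto()."""
    return HEADER.pack(seq, len(payload), flags) + payload

def unpack_frame(frame: bytes):
    """Split a received datagram back into (seq, flags, payload)."""
    seq, length, flags = HEADER.unpack_from(frame)
    payload = frame[HEADER.size:HEADER.size + length]
    if len(payload) != length:
        raise ValueError("truncated frame")
    return seq, flags, payload

seq, flags, data = unpack_frame(pack_frame(7, b"pixels"))
print(seq, data)   # → 7 b'pixels'
```

A receiver tracking the last sequence number seen can then request retransmission of gaps, which is the kind of reliability a camera-side implementation can afford without TCP.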

  19. TES development for a frequency selective bolometer camera.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Datesman, A. M.; Downes, T. P.; Perera, T. A.

    2009-06-01

We discuss the development, at Argonne National Laboratory (ANL), of a four-pixel camera with four spectral channels centered at 150, 220, 270, and 360 GHz. The scientific motivation involves photometry of distant dusty galaxies located by Spitzer and SCUBA, as well as the study of other millimeter-wave sources such as ultra-luminous infrared galaxies, the Sunyaev-Zeldovich effect in clusters, and galactic dust. The camera incorporates Frequency Selective Bolometer (FSB) and superconducting Transition-Edge Sensor (TES) technology. The current generation of TES devices we examine utilizes proximity-effect superconducting bilayers of Mo/Au, Ti, or Ti/Au as TESs, located along with frequency-selective absorbing structures on silicon nitride membranes. The detector incorporates lithographically patterned structures designed to address both TES device stability and detector thermal transport concerns. The membrane is not perforated, resulting in a detector which is comparatively robust mechanically. In this paper, we report on the development of the superconducting bilayer TES technology, the design and testing of the detector thermal transport and device stability control structures, optical and thermal test results, and the use of new materials.

  20. Characterization of SWIR cameras by MRC measurements

    NASA Astrophysics Data System (ADS)

    Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.

    2014-05-01

Cameras for the SWIR wavelength range are becoming more and more important because of the better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera, or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device, the achievable observation range can be calculated for every combination of target size, illumination level and weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines of MRC measurements for a visual camera. Typically, measurements are performed with a set of resolution targets (e.g. the USAF 1951 target) manufactured with different contrast values from 50% down to less than 1%. For a given illumination level, the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast that is necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first the irradiation parameters have to be given in radiometric instead of photometric units, which are limited in their use to the visible range; to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source is necessary with appropriate emission in the SWIR range (e.g. an incandescent lamp), and the irradiance has to be measured in W/m² instead of lux (lumen/m²). Third, the contrast values of the targets have to be recalibrated for the SWIR range because they typically differ from the values determined for the visual range.
Measured MRC values of three cameras are compared to the specified performance data of the devices, and the results of a multi-band in-house designed Vis-SWIR camera system are discussed.
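The MRC procedure above boils down to two computable pieces: the (radiometric) contrast of a target and a lookup of the measured minimum-resolvable-contrast curve at a given spatial frequency. The curve values below are invented for illustration:

```python
def michelson_contrast(l_max: float, l_min: float) -> float:
    """Contrast of a bar target from its bright and dark radiances
    (any consistent radiometric unit works, e.g. W·m⁻²·sr⁻¹ for SWIR)."""
    return (l_max - l_min) / (l_max + l_min)

def resolvable(mrc_curve, target_contrast: float, spatial_freq: float) -> bool:
    """True if a target of the given contrast is resolved at spatial_freq.

    mrc_curve: measured (spatial_frequency, minimum_resolvable_contrast)
    pairs, assumed sorted by frequency; linear interpolation in between.
    """
    freqs = [f for f, _ in mrc_curve]
    mrcs = [c for _, c in mrc_curve]
    if spatial_freq <= freqs[0]:
        needed = mrcs[0]
    elif spatial_freq >= freqs[-1]:
        needed = mrcs[-1]
    else:
        for (f0, c0), (f1, c1) in zip(mrc_curve, mrc_curve[1:]):
            if f0 <= spatial_freq <= f1:
                t = (spatial_freq - f0) / (f1 - f0)
                needed = c0 + t * (c1 - c0)
                break
    return target_contrast >= needed

# Hypothetical measured MRC curve: (cycles/mrad, minimum contrast).
curve = [(0.5, 0.01), (1.0, 0.03), (2.0, 0.10), (3.0, 0.50)]
print(michelson_contrast(1.5, 0.5))   # → 0.5
print(resolvable(curve, 0.05, 1.5))   # 5% contrast target at 1.5 cyc/mrad
```

Sweeping the target's angular frequency with range then yields the achievable observation range for a given contrast, which is the calculation the abstract refers to.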

  1. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

    PubMed Central

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-01-01

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method. PMID:29112143
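The headline accuracy figure above, the translation root-mean-square error of the keyframe trajectory, is simple to compute once estimated and ground-truth poses are associated. The sketch below omits the SE(3)/Sim(3) alignment a full benchmark would perform first, and the coordinates are invented:

```python
import math

def translation_rmse(estimated, ground_truth):
    """Root-mean-square translation error between associated keyframe
    trajectories (each a sequence of (x, y, z) positions). A complete
    evaluation would first align the trajectories; that step is omitted."""
    assert len(estimated) == len(ground_truth)
    sq_errs = [
        sum((e - g) ** 2 for e, g in zip(est, gt))
        for est, gt in zip(estimated, ground_truth)
    ]
    return math.sqrt(sum(sq_errs) / len(sq_errs))

# Invented example trajectories:
est = [(0.0, 0.0, 0.0), (1.0, 0.1, 0.0), (2.0, 0.0, 0.1)]
gt = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(round(translation_rmse(est, gt), 4))
```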

  2. 15 CFR 740.19 - Consumer Communications Devices (CCD).

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...; (11) Memory devices classified under ECCN 5A992 or designated EAR99; (12) “Information security... 5D992 or designated EAR99; (13) Digital cameras and memory cards classified under ECCN 5A992 or...

  3. Image acquisition device of inspection robot based on adaptive rotation regulation of polarizer

    NASA Astrophysics Data System (ADS)

    Dong, Maoqi; Wang, Xingguang; Liang, Tao; Yang, Guoqing; Zhang, Chuangyou; Gao, Faqin

    2017-12-01

An image acquisition device for an inspection robot with adaptive polarizer rotation is proposed. The device comprises the inspection robot body, an image acquisition mechanism, a polarizer, and an automatic polarizer actuating device. The image acquisition mechanism is mounted at the front of the robot body to collect images of equipment in the substation. The polarizer is fixed on the automatic actuating device and installed in front of the image acquisition mechanism, so that the optical axis of the camera passes perpendicularly through the polarizer and the polarizer rotates about the optical axis of the visible-light camera. Simulation results show that the system resolves image degradation caused by glare, reflections and shadow, allowing the robot to observe details of the operating status of electrical equipment. Full coverage of the inspection robot's observation targets in the substation is achieved, which helps ensure the safe operation of the substation equipment.

  4. Natural Environment Illumination: Coherent Interactive Augmented Reality for Mobile and Non-Mobile Devices.

    PubMed

    Rohmer, Kai; Jendersie, Johannes; Grosch, Thorsten

    2017-11-01

    Augmented Reality offers many applications today, especially on mobile devices. Due to the lack of mobile hardware for illumination measurements, photorealistic rendering with consistent appearance of virtual objects is still an area of active research. In this paper, we present a full two-stage pipeline for environment acquisition and augmentation of live camera images using a mobile device with a depth sensor. We show how to directly work on a recorded 3D point cloud of the real environment containing high dynamic range color values. For unknown and automatically changing camera settings, a color compensation method is introduced. Based on this, we show photorealistic augmentations using variants of differential light simulation techniques. The presented methods are tailored for mobile devices and run at interactive frame rates. However, our methods are scalable to trade performance for quality and can produce quality renderings on desktop hardware.

  5. Algorithm design for automated transportation photo enforcement camera image and video quality diagnostic check modules

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Saha, Bhaskar

    2013-03-01

    Photo enforcement devices for traffic rules such as red lights, toll, stops, and speed limits are increasingly being deployed in cities and counties around the world to ensure smooth traffic flow and public safety. These are typically unattended fielded systems, and so it is important to periodically check them for potential image/video quality problems that might interfere with their intended functionality. There is interest in automating such checks to reduce the operational overhead and human error involved in manually checking large camera device fleets. Examples of problems affecting such camera devices include exposure issues, focus drifts, obstructions, misalignment, download errors, and motion blur. Furthermore, in some cases, in addition to the sub-algorithms for individual problems, one also has to carefully design the overall algorithm and logic to check for and accurately classifying these individual problems. Some of these issues can occur in tandem or have the potential to be confused for each other by automated algorithms. Examples include camera misalignment that can cause some scene elements to go out of focus for wide-area scenes or download errors that can be misinterpreted as an obstruction. Therefore, the sequence in which the sub-algorithms are utilized is also important. This paper presents an overview of these problems along with no-reference and reduced reference image and video quality solutions to detect and classify such faults.
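The ordering concern above (for example, a download error being misread as an obstruction) can be illustrated with a toy check sequence. This is a hedged sketch, not the paper's algorithm; the metrics, function names, and thresholds are invented for illustration:

```python
import numpy as np

def focus_metric(gray):
    # Variance of a finite-difference Laplacian: low values suggest blur/defocus.
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
           np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
    return lap.var()

def diagnose(gray):
    """Run checks in an order that avoids misclassification: download errors
    first (they can mimic obstructions), then obstruction, then focus drift."""
    if gray.size == 0 or not np.isfinite(gray).all():
        return "download_error"
    if gray.std() < 2.0:            # near-uniform frame: likely obstruction
        return "obstructed"
    if focus_metric(gray) < 10.0:   # little high-frequency content: defocus
        return "defocused"
    return "ok"
```

Running the cheap, unambiguous checks first keeps the later, more fragile sub-algorithms from seeing inputs they were not designed for.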

  6. Comparison Between RGB and Rgb-D Cameras for Supporting Low-Cost Gnss Urban Navigation

    NASA Astrophysics Data System (ADS)

    Rossi, L.; De Gaetani, C. I.; Pagliari, D.; Realini, E.; Reguzzoni, M.; Pinto, L.

    2018-05-01

    A pure GNSS navigation is often unreliable in urban areas because of the presence of obstructions, thus preventing a correct reception of the satellite signal. The bridging between GNSS outages, as well as the vehicle attitude reconstruction, can be recovered by using complementary information, such as visual data acquired by RGB-D or RGB cameras. In this work, the possibility of integrating low-cost GNSS and visual data by means of an extended Kalman filter has been investigated. The focus is on the comparison between the use of RGB-D or RGB cameras. In particular, a Microsoft Kinect device (second generation) and a mirrorless Canon EOS M RGB camera have been compared. The former is an interesting RGB-D camera because of its low-cost, easiness of use and raw data accessibility. The latter has been selected for the high-quality of the acquired images and for the possibility of mounting fixed focal length lenses with a lower weight and cost with respect to a reflex camera. The designed extended Kalman filter takes as input the GNSS-only trajectory and the relative orientation between subsequent pairs of images. Depending on the visual data acquisition system, the filter is different because RGB-D cameras acquire both RGB and depth data, allowing to solve the scale problem, which is instead typical of image-only solutions. The two systems and filtering approaches were assessed by ad-hoc experimental tests, showing that the use of a Kinect device for supporting a u-blox low-cost receiver led to a trajectory with a decimeter accuracy, that is 15 % better than the one obtained when using the Canon EOS M camera.
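A heavily reduced sketch of the kind of filter described, assuming a constant-velocity model and showing only the GNSS position update (the paper's filter also ingests the relative orientation between image pairs; all noise values here are illustrative):

```python
import numpy as np

def kf_step(x, P, z_gnss, dt=1.0, q=0.1, r=1.0):
    """One predict/update cycle of a constant-velocity Kalman filter per axis.
    x = [position, velocity]; z_gnss is a GNSS position fix."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    H = np.array([[1.0, 0.0]])
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the GNSS position measurement
    y = z_gnss - H @ x
    S = H @ P @ H.T + r
    K = P @ H.T / S
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

During a GNSS outage the update step is simply skipped and the visual measurements keep the state observable, which is the bridging role the abstract describes.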

  7. Jovian thundercloud observation with Jovian orbiter and ground-based telescope

    NASA Astrophysics Data System (ADS)

    Takahashi, Yukihiro; Nakajima, Kensuke; Takeuchi, Satoru; Sato, Mitsuteru; Fukuhara, Tetsuya; Watanabe, Makoto; Yair, Yoav; Fischer, Georg; Aplin, Karen

The latest observational and theoretical studies suggest that thunderstorms in Jupiter's atmosphere are an important subject, not only for understanding the meteorology that may determine large-scale structures such as belts/zones and big ovals, but also for probing the water abundance of the deep atmosphere, which is crucial for constraining the behavior of volatiles in the early solar system. Here we propose a very simple high-speed imager on board a Jovian orbiter, the Optical Lightning Detector (OLD), optimized for detecting optical emissions from lightning discharges on Jupiter. OLD consists of radiation-tolerant CMOS sensors and two H Balmer-alpha line (656.3 nm) filters. In normal sampling mode the frame interval is 29 ms with a full frame format of 512x512 pixels; in high-speed sampling mode the interval can be reduced down to 0.1 ms by concentrating on a limited area of 30x30 pixels. Weight, size and power consumption are about 1 kg, 16x7x5.5 cm (sensor) and 16x12x4 cm (circuit), and 4 W, respectively, though they can be reduced according to the spacecraft resources and required environmental tolerance. We also plan to investigate the optical flashes using a ground-based middle-sized telescope, to be built by Hokkaido University, with a narrow-band high-speed imaging unit using an EM-CCD camera. The observational strategy with these optical lightning detectors and spectral imagers, which enables estimation of the horizontal motion and altitude of clouds, will be introduced.

  8. Imaging Local Ca2+ Signals in Cultured Mammalian Cells

    PubMed Central

    Lock, Jeffrey T.; Ellefsen, Kyle L.; Settle, Bret; Parker, Ian; Smith, Ian F.

    2015-01-01

    Cytosolic Ca2+ ions regulate numerous aspects of cellular activity in almost all cell types, controlling processes as wide-ranging as gene transcription, electrical excitability and cell proliferation. The diversity and specificity of Ca2+ signaling derives from mechanisms by which Ca2+ signals are generated to act over different time and spatial scales, ranging from cell-wide oscillations and waves occurring over the periods of minutes to local transient Ca2+ microdomains (Ca2+ puffs) lasting milliseconds. Recent advances in electron multiplied CCD (EMCCD) cameras now allow for imaging of local Ca2+ signals with a 128 x 128 pixel spatial resolution at rates of >500 frames sec-1 (fps). This approach is highly parallel and enables the simultaneous monitoring of hundreds of channels or puff sites in a single experiment. However, the vast amounts of data generated (ca. 1 Gb per min) render visual identification and analysis of local Ca2+ events impracticable. Here we describe and demonstrate the procedures for the acquisition, detection, and analysis of local IP3-mediated Ca2+ signals in intact mammalian cells loaded with Ca2+ indicators using both wide-field epi-fluorescence (WF) and total internal reflection fluorescence (TIRF) microscopy. Furthermore, we describe an algorithm developed within the open-source software environment Python that automates the identification and analysis of these local Ca2+ signals. The algorithm localizes sites of Ca2+ release with sub-pixel resolution; allows user review of data; and outputs time sequences of fluorescence ratio signals together with amplitude and kinetic data in an Excel-compatible table. PMID:25867132
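One common way to localize release sites with sub-pixel resolution, as the paper's algorithm does, is an intensity-weighted centroid over a small window around each detected event. A minimal sketch (the published Python pipeline is considerably more elaborate):

```python
import numpy as np

def subpixel_centroid(frame, peak, win=3):
    """Refine an integer-pixel event location to sub-pixel precision with an
    intensity-weighted centroid over a (2*win+1)^2 window."""
    r, c = peak
    patch = frame[r - win:r + win + 1, c - win:c + win + 1].astype(float)
    patch = patch - patch.min()          # crude local background removal
    ys, xs = np.mgrid[-win:win + 1, -win:win + 1]
    total = patch.sum()
    return r + (ys * patch).sum() / total, c + (xs * patch).sum() / total
```

The centroid is biased slightly toward the window center by truncation, which is why Gaussian fitting is often preferred when photon counts allow it.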

  9. Flexible scintillator autoradiography for tumor margin inspection using 18F-FDG

    NASA Astrophysics Data System (ADS)

    Vyas, K. N.; Grootendorst, M.; Mertzanidou, T.; Macholl, S.; Stoyanov, D.; Arridge, S. R.; Tuch, D. S.

    2018-03-01

Autoradiography potentially offers high molecular sensitivity and spatial resolution for tumor margin estimation. However, conventional autoradiography requires sectioning the sample, which is destructive and labor-intensive. Here we describe a novel autoradiography technique that uses a flexible ultra-thin scintillator which conforms to the sample surface. Imaging with the flexible scintillator enables direct, high-resolution and high-sensitivity imaging of beta particle emissions from targeted radiotracers. The technique has the potential to identify positive tumor margins in fresh unsectioned samples during surgery, eliminating the processing time demands of conventional autoradiography. We demonstrate the feasibility of the flexible autoradiography approach to directly image the beta emissions from radiopharmaceuticals using lab experiments and GEANT-4 simulations to determine (i) the specificity for 18F compared to 99mTc-labeled tracers and (ii) the sensitivity for detecting signal from various depths within the tissue. We found that an image resolution of 1.5 mm was achievable with a scattering background, and we estimate a minimum detectable activity concentration of 0.9 kBq/ml for 18F. We show that the flexible autoradiography approach has high potential as a technique for molecular imaging of tumor margins using 18F-FDG in a tumor xenograft mouse model imaged with a radiation-shielded EMCCD camera. Due to the advantage of conforming to the specimen, the flexible scintillator showed significantly better image quality in terms of tumor signal to whole-body background noise compared to rigid and optimally thick CaF2:Eu and BC400. The sensitivity of the technique means it is suitable for clinical translation.

  10. Short infrared laser pulses block action potentials in neurons

    NASA Astrophysics Data System (ADS)

    Walsh, Alex J.; Tolstykh, Gleb P.; Martens, Stacey L.; Ibey, Bennett L.; Beier, Hope T.

    2017-02-01

Short infrared laser pulses have many physiological effects on cells, including the ability to stimulate action potentials in neurons. Here we show that short infrared laser pulses can also reversibly block action potentials. Primary rat hippocampal neurons were transfected with the Optopatch2 plasmid, which contains both a blue-light activated channelrhodopsin (CheRiff) and a red-light fluorescent membrane voltage reporter (QuasAr2). This optogenetic platform allows robust stimulation and recording of action potential activity in neurons in a non-contact, low-noise manner. For all experiments, QuasAr2 was imaged continuously on a wide-field fluorescence microscope using a krypton laser (647 nm) as the excitation source and an EMCCD camera operating at 1000 Hz to collect emitted fluorescence. A co-aligned argon laser (488 nm, 5 ms pulses at 10 Hz) provided activation light for CheRiff. A 200 μm fiber delivered infrared light locally to the target neuron. Reversible action potential block in neurons was observed following a short infrared laser pulse (0.26-0.96 J/cm2; 1.37-5.01 ms; 1869 nm), with the block persisting for more than 1 s at exposures greater than 0.69 J/cm2. Action potential block was sustained for 30 s with the short infrared laser pulsed at 1-7 Hz. Full recovery of neuronal activity was observed 5-30 s post-infrared exposure. These results indicate that optogenetics provides a robust platform for the study of action potential block and that short infrared laser pulses can be used for non-contact, reversible action potential block.

  11. Image deblurring in smartphone devices using built-in inertial measurement sensors

    NASA Astrophysics Data System (ADS)

    Šindelář, Ondřej; Šroubek, Filip

    2013-01-01

    Long-exposure handheld photography is degraded with blur, which is difficult to remove without prior information about the camera motion. In this work, we utilize inertial sensors (accelerometers and gyroscopes) in modern smartphones to detect exact motion trajectory of the smartphone camera during exposure and remove blur from the resulting photography based on the recorded motion data. The whole system is implemented on the Android platform and embedded in the smartphone device, resulting in a close-to-real-time deblurring algorithm. The performance of the proposed system is demonstrated in real-life scenarios.
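The core idea, recovering the blur kernel from the recorded motion, can be sketched as accumulating a point-spread function from gyroscope samples under a small-angle approximation. This is an illustrative sketch, not the authors' implementation; the mapping and sizes are simplified:

```python
import numpy as np

def psf_from_gyro(omega_xy, dt, focal_px, size=15):
    """Accumulate a blur kernel from gyroscope samples taken during the
    exposure. Small camera rotations map to image translations of roughly
    f * omega * dt pixels (small-angle approximation)."""
    psf = np.zeros((size, size))
    x = y = 0.0
    for wx, wy in omega_xy:              # rad/s samples over the exposure
        x += focal_px * wy * dt          # yaw  -> horizontal shift
        y += focal_px * wx * dt          # pitch -> vertical shift
        ix, iy = int(round(x)) + size // 2, int(round(y)) + size // 2
        if 0 <= ix < size and 0 <= iy < size:
            psf[iy, ix] += 1.0
    return psf / psf.sum()
```

Once the kernel is known, the sharp image can be recovered with a standard non-blind deconvolution step (e.g. Wiener filtering), which is what makes the overall pipeline fast enough for on-device use.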

  12. Electronic recording of holograms with applications to holographic displays

    NASA Technical Reports Server (NTRS)

    Claspy, P. C.; Merat, F. L.

    1979-01-01

    The paper describes an electronic heterodyne recording which uses electrooptic modulation to introduce a sinusoidal phase shift between the object and reference wave. The resulting temporally modulated holographic interference pattern is scanned by a commercial image dissector camera, and the rejection of the self-interference terms is accomplished by heterodyne detection at the camera output. The electrical signal representing this processed hologram can then be used to modify the properties of a liquid crystal light valve or a similar device. Such display devices transform the displayed interference pattern into a phase modulated wave front rendering a three-dimensional image.

  13. Martian Microscope

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The microscopic imager (circular device in center) is in clear view above the surface at Meridiani Planum, Mars, in this approximate true-color image taken by the panoramic camera on the Mars Exploration Rover Opportunity. The image was taken on the 9th sol of the rover's journey. The microscopic imager is located on the rover's instrument deployment device, or arm. The arrow is pointing to the lens of the instrument. Note the dust cover, which flips out to the left of the lens, is open. This approximated color image was created using the camera's violet and infrared filters as blue and red.

  14. Low-Cost Alternative for Signal Generators in the Physics Laboratory

    ERIC Educational Resources Information Center

    Pathare, Shirish Rajan; Raghavendra, M. K.; Huli, Saurabhee

    2017-01-01

    Recently devices such as the optical mouse of a computer, webcams, Wii remote, and digital cameras have been used to record and analyze different physical phenomena quantitatively. Devices like tablets and smartphones are also becoming popular. Different scientific applications available at Google Play (Android devices) or the App Store (iOS…

  15. Field-Sequential Color Converter

    NASA Technical Reports Server (NTRS)

    Studer, Victor J.

    1989-01-01

    Electronic conversion circuit enables display of signals from field-sequential color-television camera on color video camera. Designed for incorporation into color-television monitor on Space Shuttle, circuit weighs less, takes up less space, and consumes less power than previous conversion equipment. Incorporates state-of-art memory devices, also used in terrestrial stationary or portable closed-circuit television systems.

  16. Dual beam optical interferometer

    NASA Technical Reports Server (NTRS)

    Gutierrez, Roman C. (Inventor)

    2003-01-01

    A dual beam interferometer device is disclosed that enables moving an optics module in a direction, which changes the path lengths of two beams of light. The two beams reflect off a surface of an object and generate different speckle patterns detected by an element, such as a camera. The camera detects a characteristic of the surface.

  17. LSST camera grid structure made out of ceramic composite material, HB-Cesic

    NASA Astrophysics Data System (ADS)

    Kroedel, Matthias R.; Langton, J. Bryan

    2016-08-01

In this paper we present the ceramic design and fabrication of the camera grid structure, which uses the unique manufacturing features of HB-Cesic technology together with a dedicated metrology device in order to meet the challenging flatness requirement of 4 microns over the full array.

  18. Web Camera Use in Developing Biology, Molecular Biology and Biochemistry Laboratories

    ERIC Educational Resources Information Center

    Ogren, Paul J.; Deibel, Michael; Kelly, Ian; Mulnix, Amy B.; Peck, Charlie

    2004-01-01

The use of a network-ready color camera, primarily marketed as a security device, is described for experiments in developmental biology, genetics, and biochemistry laboratories and in special student research projects. Acquiring, analyzing, and archiving images is very important in microscopy, electrophoresis and…

  19. Thermal imaging as a smartphone application: exploring and implementing a new concept

    NASA Astrophysics Data System (ADS)

    Yanai, Omer

    2014-06-01

Today's world is going mobile. Smartphone devices have become an important part of everyday life for billions of people around the globe. Thermal imaging cameras have been around for half a century and are now making their way into our daily lives. Originally built for military applications, thermal cameras are starting to be considered for personal use, enabling enhanced vision and temperature mapping for different groups of professionals. Through a revolutionary concept that turns smartphones into fully functional thermal cameras, we have explored how these two worlds can converge by utilizing the best of each technology. We will present the thought process, design considerations and outcome of our development effort, resulting in a low-power, high-resolution, lightweight USB thermal imaging device that turns Android smartphones into thermal cameras. We will discuss the technological challenges we faced during development of the product, the system design decisions taken during implementation, and some insights we gained along the way. Finally, we will discuss the opportunities that this innovative technology brings to the market.

  20. Analysis of Brown camera distortion model

    NASA Astrophysics Data System (ADS)

    Nowakowski, Artur; Skarbek, Władysław

    2013-10-01

Contemporary image acquisition devices introduce optical distortion into the image. This results in pixel displacement and therefore needs to be compensated for in many computer vision applications. The distortion is usually modeled by the Brown distortion model, whose parameters can be included in the camera calibration task. In this paper we describe the original model and its dependencies, and analyze the orthogonality with regard to radius of its decentering distortion component. We also report experiments with the camera calibration algorithm included in the OpenCV library; in particular, the stability of the distortion parameter estimation is evaluated.
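For reference, the Brown model combines a radial polynomial in r² with the decentering (tangential) terms analyzed in the paper. A sketch on normalized image coordinates, following the coefficient convention also used by OpenCV's calibration routines:

```python
def brown_distort(x, y, k1, k2, k3, p1, p2):
    """Apply the Brown distortion model to normalized image coordinates:
    radial terms k1..k3 plus the decentering (tangential) terms p1, p2."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

Calibration estimates the five coefficients by minimizing reprojection error; compensation then inverts this mapping numerically.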

  1. Stereo electro-optical tracker study for the measurement of model deformations at the National Transonic Facility

    NASA Astrophysics Data System (ADS)

    Hertel, R. J.; Hoilman, K. A.

    1982-01-01

The effects of model vibration, camera and window nonlinearities, and aerodynamic disturbances in the optical path on the measurement of target position are examined. Window distortion, temperature and pressure changes, laminar and turbulent boundary layers, shock waves, target intensity, and target vibration are also studied. A general computer program was developed to trace optical rays through these disturbances. The use of a charge injection device camera as an alternative to the image dissector camera was examined.

2. Assessment of Risk Reduction for Lymphedema Following Sentinel Lymph Node Guided Surgery for Primary Breast Cancer

    DTIC Science & Technology

    2006-10-01

patients with breast cancer underwent scanning with a hybrid camera which combined a dual-head SPECT camera and a low-dose, single-slice CT scanner (GE…investigated a novel approach which combines the output of a dual-head SPECT camera and a low-dose, single-slice CT scanner (GE Hawkeye®). This…scanner (Hawkeye®, GE Medical Systems) is attempted in this study. This device is widely available in the cardiology community and has the potential to

  3. Photogrammetry System and Method for Determining Relative Motion Between Two Bodies

    NASA Technical Reports Server (NTRS)

    Miller, Samuel A. (Inventor); Severance, Kurt (Inventor)

    2014-01-01

    A photogrammetry system and method provide for determining the relative position between two objects. The system utilizes one or more imaging devices, such as high speed cameras, that are mounted on a first body, and three or more photogrammetry targets of a known location on a second body. The system and method can be utilized with cameras having fish-eye, hyperbolic, omnidirectional, or other lenses. The system and method do not require overlapping fields-of-view if two or more cameras are utilized. The system and method derive relative orientation by equally weighting information from an arbitrary number of heterogeneous cameras, all with non-overlapping fields-of-view. Furthermore, the system can make the measurements with arbitrary wide-angle lenses on the cameras.

  4. Overview of Digital Forensics Algorithms in Dslr Cameras

    NASA Astrophysics Data System (ADS)

    Aminova, E.; Trapeznikov, I.; Priorov, A.

    2017-05-01

The widespread use of mobile technologies and the improvement of digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, an important task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera, and the improvement of image formation algorithms. Most research in this area is based on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the imaging process in the camera. This study focuses on the problem of determining unique features of DSLR cameras based on optical subsystem artifacts and sensor noise.
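The "unique sensor trace" idea is usually implemented by correlating noise residuals (PRNU-style fingerprinting). A toy sketch, with a box filter standing in for the wavelet denoisers used in practice; names and thresholds are illustrative:

```python
import numpy as np

def noise_residual(img, k=3):
    """Sensor-trace residual: the image minus a smoothed version of itself,
    leaving mostly high-frequency content that carries the sensor pattern."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    smooth = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            smooth += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return img - smooth / (k * k)

def same_sensor_score(res_a, res_b):
    """Normalized correlation between two residuals; higher values suggest
    the images share a sensor fingerprint."""
    a = res_a - res_a.mean()
    b = res_b - res_b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

In real forensic pipelines the fingerprint is averaged over many images per camera and the correlation is compared against a statistical detection threshold.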

  5. Printed circuit board for a CCD camera head

    DOEpatents

    Conder, Alan D.

    2002-01-01

A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04" for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.

  6. Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.

    PubMed

    Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio

    2009-01-01

3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and are reported in this paper. In particular, two main aspects are treated. The first is the calibration of the distance measurements of the SR-4000 camera, which covers evaluation of the camera warm-up period, assessment of the distance measurement error, and a study of the influence of camera orientation with respect to the observed object on the distance measurements. The second concerns the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high-contrast targets.
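For context, a continuous-wave ToF camera of the SR-4000 class derives range from the phase shift of a modulated signal, and systematic errors enter through this conversion. A minimal sketch; the 30 MHz modulation frequency is an illustrative, typical value:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_rad, f_mod_hz=30e6):
    """Continuous-wave ToF range from the measured phase shift:
    d = c * phi / (4 * pi * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

def ambiguity_range(f_mod_hz=30e6):
    # Phase wraps at 2*pi, so measured ranges repeat every c / (2 * f_mod).
    return C / (2.0 * f_mod_hz)
```

At 30 MHz the non-ambiguity range is about 5 m, which is why such cameras are characterized over short indoor distances.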

  7. Binary pressure-sensitive paint measurements using miniaturised, colour, machine vision cameras

    NASA Astrophysics Data System (ADS)

    Quinn, Mark Kenneth

    2018-05-01

    Recent advances in machine vision technology and capability have led to machine vision cameras becoming applicable for scientific imaging. This study aims to demonstrate the applicability of machine vision colour cameras for the measurement of dual-component pressure-sensitive paint (PSP). The presence of a second luminophore component in the PSP mixture significantly reduces its inherent temperature sensitivity, increasing its applicability at low speeds. All of the devices tested are smaller than the cooled CCD cameras traditionally used and most are of significantly lower cost, thereby increasing the accessibility of such technology and techniques. Comparisons between three machine vision cameras, a three CCD camera, and a commercially available specialist PSP camera are made on a range of parameters, and a detailed PSP calibration is conducted in a static calibration chamber. The findings demonstrate that colour machine vision cameras can be used for quantitative, dual-component, pressure measurements. These results give rise to the possibility of performing on-board dual-component PSP measurements in wind tunnels or on real flight/road vehicles.
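Quantitative PSP measurements of the kind described rest on a Stern-Volmer style calibration relating intensity ratio to pressure. A hedged sketch of a linear calibration fit; the coefficients and function names are illustrative, and a dual-component paint adds a temperature-cancelling reference luminophore, but the pressure channel is still calibrated against known pressures in a chamber as the paper describes:

```python
import numpy as np

def fit_stern_volmer(ratios, pressures, p_ref):
    """Least-squares fit of I_ref / I = A + B * (P / P_ref) to calibration
    data recorded at known pressures."""
    X = np.vstack([np.ones_like(pressures), pressures / p_ref]).T
    (a, b), *_ = np.linalg.lstsq(X, ratios, rcond=None)
    return a, b

def pressure_from_ratio(ratio, a, b, p_ref):
    """Invert the calibration to recover pressure from a measured ratio."""
    return p_ref * (ratio - a) / b
```

Applied pixel-by-pixel to wind-off/wind-on image ratios, this turns the camera images into pressure maps.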

  8. Optical tests for using smartphones inside medical devices

    NASA Astrophysics Data System (ADS)

    Bernat, Amir S.; Acobas, Jennifer K.; Phang, Ye Shang; Hassan, David; Bolton, Frank J.; Levitz, David

    2018-02-01

Smartphones are currently used in many medical applications and are increasingly being integrated into medical imaging devices. The regulatory requirements in existence today, however, particularly the standardization of smartphone imaging through validation and verification testing, only partially cover the imaging characteristics of a smartphone. Specifically, it has been shown that smartphone camera specifications are of sufficient quality for medical imaging, and there are devices that comply with the FDA's regulatory requirements for a medical device, such as field of view, direction of viewing, optical resolution, and optical distortion. However, these regulatory requirements do not specifically call for color testing. Images of the same object taken with automatic settings or under different light sources can show different color composition; experimental results showing such differences are presented. Under some circumstances, such differences in color composition could potentially lead to incorrect diagnoses. It is therefore critical to control the smartphone camera and illumination parameters properly. This paper examines the smartphone camera settings that affect image quality and color composition. To test and select the correct settings, a test methodology is proposed that evaluates image color correctness and white balance settings for mobile phones and LED light sources. Emphasis is placed on color consistency and deviation from gray values, specifically by evaluating ΔC values based on the CIE L*a*b* color space. Results show that such standardization minimizes differences in color composition and thus could reduce the risk of a wrong diagnosis.
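The ΔC evaluation mentioned above can be made concrete: in CIE L*a*b*, chroma is C* = sqrt(a*² + b*²), and ΔC* is the chroma difference between two measurements. A minimal sketch (the paper's full test methodology is not reproduced here):

```python
import math

def delta_c(lab1, lab2):
    """CIE chroma difference ΔC*ab between two L*a*b* colors:
    C* = sqrt(a*^2 + b*^2), ΔC* = C*_2 - C*_1."""
    _, a1, b1 = lab1
    _, a2, b2 = lab2
    return math.hypot(a2, b2) - math.hypot(a1, b1)

def gray_deviation(lab):
    """Distance of a nominally neutral patch from the gray axis
    (a* = b* = 0), one way to score white-balance consistency."""
    _, a, b = lab
    return math.hypot(a, b)
```

Measuring a gray chart under each camera setting and light source, and requiring `gray_deviation` to stay below a tolerance, is one simple realization of the color-consistency check described.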

  9. In-vessel visible inspection system on KSTAR

    NASA Astrophysics Data System (ADS)

    Chung, Jinil; Seo, D. C.

    2008-08-01

To monitor the global formation of the initial plasma and damage to the internal structures of the vacuum vessel, an in-vessel visible inspection system has been installed and operated on the Korean superconducting tokamak advanced research (KSTAR) device. It consists of four inspection illuminators and two visible/H-alpha TV cameras. Each illuminator uses four 150 W metal-halide lamps with separate lamp controllers, and programmable progressive-scan charge-coupled device cameras with 1004×1004 resolution at 48 frames/s and 640×480 resolution at 210 frames/s are used to capture images. In order to provide vessel inspection capability under any operation condition, the lamps and cameras are fully controlled from the main control room and protected by shutters from deposits during plasma operation. In this paper, we describe the design and operation results of the visible inspection system with the images of the KSTAR Ohmic discharges during the first plasma campaign.

  10. The exploration of outer space with cameras: A history of the NASA unmanned spacecraft missions

    NASA Astrophysics Data System (ADS)

    Mirabito, M. M.

    The use of television cameras and other video imaging devices to explore the solar system's planetary bodies with unmanned spacecraft is chronicled. Attention is given to the missions and the imaging devices, beginning with the Ranger 7 moon mission, which featured the first successfully operated electrooptical subsystem, six television cameras with vidicon image sensors. NASA established a network of parabolic, ground-based antennas on the earth (the Deep Space Network) to receive signals from spacecraft travelling farther than 16,000 km into space. The image processing and enhancement techniques used to convert spacecraft data transmissions into black and white and color photographs are described, together with the technological requirements that drove the development of the various systems. Terrestrial applications of the planetary imaging systems are explored, including medical and educational uses. Finally, the implementation and functional characteristics of CCDs are detailed, noting their installation on the Space Telescope.

  11. SU-E-J-72: Design and Study of In-House Web-Camera Based Automatic Continuous Patient Movement Monitoring and Controlling Device for EXRT.

    PubMed

    Senthil Kumar, S; Suresh Babu, S S; Anand, P; Dheva Shantha Kumari, G

    2012-06-01

The purpose of our study was to fabricate an in-house web-camera based device for automatic, continuous patient movement monitoring and control during EXRT. The device consists of a computer, a digital web camera, a mounting system, a breaker circuit, a speaker, and a visual indicator. The computer controls and analyzes patient movement using indigenously developed software. The speaker and the visual indicator are placed in the console room to indicate positional displacement of the patient. Studies were conducted on a phantom and 150 patients with different types of cancer. Our preliminary clinical results indicate that the device is highly reliable and can accurately report small movements of the patients in all directions; it was able to detect patient movements with a sensitivity of about 1 mm. When a patient moves, the receiver activates the circuit and an audible warning sound is produced in the console. Through real-time measurements, an audible alarm can alert the radiation technologist to stop the treatment if the user-defined positional threshold is violated. Simultaneously, the electrical circuit to the teletherapy machine is activated and radiation is halted. Patient movement during the course of radiotherapy was studied; the beam is halted automatically when the threshold level of the system is exceeded. By using the threshold provided in the system, it is possible to monitor the patient continuously within fixed limits. An additional benefit is that the device has reduced the tension and stress of the treatment team associated with treating patients who are not immobilized. It also enables the technologists to work more efficiently, because they do not have to monitor patients continuously with as much scrutiny as was previously required. © 2012 American Association of Physicists in Medicine.
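A webcam-based movement check of this general kind can be sketched with simple frame differencing against a reference image. The thresholds below are illustrative; the paper's in-house software maps its own threshold to roughly 1 mm of patient displacement:

```python
import numpy as np

def movement_exceeded(ref_frame, frame, pixel_thresh=25, area_thresh=0.01):
    """Flag patient movement by frame differencing: trigger when the fraction
    of pixels changing by more than pixel_thresh gray levels exceeds
    area_thresh of the image."""
    diff = np.abs(frame.astype(int) - ref_frame.astype(int))
    moved_fraction = (diff > pixel_thresh).mean()
    return moved_fraction > area_thresh
```

In a monitoring loop, a `True` result would sound the console alarm and open the breaker circuit that halts the beam.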

  12. DataPlay's mobile recording technology

    NASA Astrophysics Data System (ADS)

    Bell, Bernard W., Jr.

    2002-01-01

    A small rotating memory device that uses prerecorded and writeable optical technology has been developed to provide a mobile recording solution for digital cameras, cell phones, music players, PDAs, and hybrid multipurpose devices. The solution encompasses writeable, read-only, and encrypted storage media.

  13. Modulated electron-multiplied fluorescence lifetime imaging microscope: all-solid-state camera for fluorescence lifetime imaging.

    PubMed

    Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted

    2012-12-01

    We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge-coupled device design in a frequency-domain FLIM system. The first stage of evaluation of the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to perform lifetime measurements using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.

  14. Cinematic camera emulation using two-dimensional color transforms

    NASA Astrophysics Data System (ADS)

    McElvain, Jon S.; Gish, Walter

    2015-02-01

    For cinematic and episodic productions, on-set look management is an important component of the creative process, involving iterative adjustments of the set, actors, lighting, and camera configuration. Rather than using the professional motion picture camera to establish a particular look, the use of a smaller-form-factor DSLR is considered for this purpose due to its increased agility. Because the spectral response characteristics differ between the two camera systems, a camera emulation transform is needed to approximate the behavior of the destination camera. Recently, two-dimensional transforms have been shown to provide high-accuracy conversion of raw camera signals to a defined colorimetric state. In this study, the same formalism is used for camera emulation, whereby a Canon 5D Mark III DSLR is used to approximate the behavior of a Red Epic cinematic camera. The spectral response characteristics of both cameras were measured and used to build 2D as well as 3x3 matrix emulation transforms. When tested on multispectral image databases, the 2D emulation transforms outperform their matrix counterparts, particularly for images containing highly chromatic content.
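A minimal sketch of fitting the 3x3 matrix baseline transform by least squares over corresponding raw camera signals (the paper's 2D transforms are more involved); the synthetic data in the usage are assumptions:

```python
import numpy as np

def fit_matrix_emulation(src_rgb, dst_rgb):
    """Least-squares 3x3 matrix M mapping source-camera RGB to
    destination-camera RGB; src_rgb, dst_rgb are (N, 3) arrays of
    corresponding raw signals from the two cameras."""
    M, *_ = np.linalg.lstsq(src_rgb, dst_rgb, rcond=None)
    return M  # apply as src_rgb @ M

def apply_emulation(M, rgb):
    """Emulate the destination camera's response for raw signals rgb."""
    return rgb @ M
```

With a training set of corresponding patches, the fitted matrix is applied per pixel to approximate the destination camera's look.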

  15. Use of body-mounted cameras to enhance data collection: an evaluation of two arthropod sampling techniques

    USDA-ARS?s Scientific Manuscript database

    A study was conducted that compared the effectiveness of a sweepnet versus a vacuum suction device for collecting arthropods in cotton. The study differs from previous research in that body-mounted action cameras (B-MACs) were used to record the activity of the person conducting the collections. The...

  16. The Topological Panorama Camera: A New Tool for Teaching Concepts Related to Space and Time.

    ERIC Educational Resources Information Center

    Gelphman, Janet L.; And Others

    1992-01-01

    Included are the description, operating characteristics, uses, and future plans for the Topological Panorama Camera, which is an experimental, robotic photographic device capable of producing visual renderings of the mathematical characteristics of an equation in terms of position changes of an object or in terms of the shape of the space…

  17. Low-Cost Alternative for Signal Generators in the Physics Laboratory

    NASA Astrophysics Data System (ADS)

    Pathare, Shirish Rajan; Raghavendra, M. K.; Huli, Saurabhee

    2017-05-01

    Recently, devices such as the optical mouse of a computer, webcams, the Wii remote, and digital cameras have been used to record and analyze different physical phenomena quantitatively. Devices like tablets and smartphones are also becoming popular. Different scientific applications available at Google Play (Android devices) or the App Store (iOS devices) make them versatile. One can find many websites that provide information regarding various scientific applications compatible with these systems. A variety of smartphones/tablets are available with different types of embedded sensors. Some of them have sensors capable of measuring intensity of light, sound, and magnetic field. The camera of these devices has been used to study projectile motion, and the same device, along with a sensor, has been used to study the physical pendulum. Accelerometers have been used to study free and damped harmonic oscillations and to measure acceleration due to gravity. Using accelerometers and gyroscopes, angular velocity and centripetal acceleration have been measured. The coefficient of restitution for a ball bouncing on the floor has been measured using the application Oscilloscope on the iPhone. In this article, we present the use of an Android device as a low-cost alternative to a signal generator. We use the Signal Generator application installed on the Android device along with an amplifier circuit.

  18. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point-photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three different brightness simultaneously viewed targets, that is not possible by the CMOS sensor, is achieved by the CAOS-CMOS camera demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
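The quoted dynamic ranges can be related to intensity ratios using the 20·log10 convention common for image sensors (a sketch; the paper's exact definition may differ):

```python
import math

def dynamic_range_db(max_signal, min_signal):
    """Dynamic range in decibels, 20*log10 convention commonly used
    for image-sensor intensity ratios (assumption, not the paper's
    stated definition)."""
    return 20.0 * math.log10(max_signal / min_signal)
```

By this convention, the demonstrated 82.06 dB corresponds to a max/min intensity ratio of roughly 12,700, versus about 370 for the CMOS sensor's rated 51.3 dB.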

  19. High speed photography, videography, and photonics III; Proceedings of the Meeting, San Diego, CA, August 22, 23, 1985

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)

    1985-01-01

    Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.

  20. Manifold-Based Image Understanding

    DTIC Science & Technology

    2010-06-30

    …employs a Texas Instruments digital micromirror device (DMD), which consists of an array of N electrostatically actuated micromirrors. The camera… (image x) is reflected off the DMD array, whose mirror orientations are modulated in the pseudorandom pattern φm supplied by a…

  1. New ultrasensitive pickup device for deep-sea robots: underwater super-HARP color TV camera

    NASA Astrophysics Data System (ADS)

    Maruyama, Hirotaka; Tanioka, Kenkichi; Uchida, Tetsuo

    1994-11-01

    An ultra-sensitive underwater super-HARP color TV camera has been developed. The characteristics -- spectral response, lag, etc. -- of the super-HARP tube had to be designed for use underwater because the propagation of light in water is very different from that in air, and also depends on the light's wavelength. The tubes have new electrostatic focusing and magnetic deflection functions and are arranged in parallel to miniaturize the camera. A deep sea robot (DOLPHIN 3K) was fitted with this camera and used for the first sea test in Sagami Bay, Japan. The underwater visual information was clear enough to promise significant improvements in both deep sea surveying and safety. It was thus confirmed that the Super-HARP camera is very effective for underwater use.

  2. Integration of USB and firewire cameras in machine vision applications

    NASA Astrophysics Data System (ADS)

    Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard

    1999-08-01

    Digital cameras have been around for many years, but a new breed of consumer-market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, such as image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper briefly describes a couple of the consumer digital standards and then discusses some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.

  3. Use of digital micromirror devices as dynamic pinhole arrays for adaptive confocal fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Pozzi, Paolo; Wilding, Dean; Soloviev, Oleg; Vdovin, Gleb; Verhaegen, Michel

    2018-02-01

    In this work, we present a new confocal laser scanning microscope capable of performing sensorless wavefront optimization in real time. The device is a parallelized laser scanning microscope in which the excitation light is structured into a lattice of spots by a spatial light modulator, while a deformable mirror provides aberration correction and scanning. A binary DMD is positioned in an image plane of the detection optical path, acting as a dynamic array of reflective confocal pinholes imaged by a high-performance CMOS camera. A second camera detects images of the light rejected by the pinholes for sensorless aberration correction.
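A dynamic pinhole array of this kind amounts to switching on DMD mirrors at a lattice of positions that shifts with each scan step; the pitch and offset parameters below are hypothetical, not the paper's values:

```python
import numpy as np

def pinhole_pattern(shape, pitch, offset):
    """Binary DMD pattern: 'on' mirrors at a regular lattice of pinhole
    positions, shifted by `offset` (row, col) for the current scan step.
    Pinhole size of one mirror is a simplifying assumption."""
    mask = np.zeros(shape, dtype=bool)
    mask[offset[0]::pitch, offset[1]::pitch] = True
    return mask
```

Cycling `offset` over all (row, col) pairs within one pitch covers the full field, as in a Nipkow-style scan.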

  4. Low Noise Camera for Suborbital Science Applications

    NASA Technical Reports Server (NTRS)

    Hyde, David; Robertson, Bryan; Holloway, Todd

    2015-01-01

    Low-cost, commercial-off-the-shelf- (COTS-) based science cameras are intended for lab use only and are not suitable for flight deployment, as they are difficult to ruggedize and repackage into instruments. COTS implementation may also be unsuitable because mission science objectives are tied to specific measurement requirements that often demand performance beyond what the commercial market requires. Custom camera development for each application is cost prohibitive for International Space Station (ISS) or midrange science payloads due to the nonrecurring expense ($2,000K) of ground-up camera electronics design. While each new science mission has a different suite of requirements for camera performance (detector noise, speed of image acquisition, charge-coupled device (CCD) size, operating temperature, packaging, etc.), the analog-to-digital conversion, power supply, and communications can be standardized to accommodate many different applications. The low noise camera for suborbital applications is a rugged standard camera platform that can accommodate a range of detector types and science requirements for use in inexpensive to midrange payloads supporting Earth science, solar physics, robotic vision, or astronomy experiments. Cameras developed on this platform have demonstrated the performance found in custom flight cameras at a price per camera more than an order of magnitude lower.

  5. Proton radiation damage experiment on P-Channel CCD for an X-ray CCD camera onboard the ASTRO-H satellite

    NASA Astrophysics Data System (ADS)

    Mori, Koji; Nishioka, Yusuke; Ohura, Satoshi; Koura, Yoshiaki; Yamauchi, Makoto; Nakajima, Hiroshi; Ueda, Shutaro; Kan, Hiroaki; Anabuki, Naohisa; Nagino, Ryo; Hayashida, Kiyoshi; Tsunemi, Hiroshi; Kohmura, Takayoshi; Ikeda, Shoma; Murakami, Hiroshi; Ozaki, Masanobu; Dotani, Tadayasu; Maeda, Yukie; Sagara, Kenshi

    2013-12-01

    We report on a proton radiation damage experiment on a P-channel CCD newly developed for an X-ray CCD camera onboard the ASTRO-H satellite. The device was exposed to up to 10⁹ protons cm⁻² at 6.7 MeV. The charge transfer inefficiency (CTI) was measured as a function of radiation dose. In comparison with the CTI measured over 6 years in the CCD camera onboard the Suzaku satellite, we confirmed that the new type of P-channel CCD is radiation tolerant enough for space use. We also confirmed that a charge-injection technique and lowering the operating temperature effectively reduce the CTI for our device. A comparison with other P-channel CCD experiments is also discussed.
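The impact of CTI accumulates over the many pixel-to-pixel transfers needed to read a charge packet out; a minimal model of that accumulation (the CTI value and transfer count in the usage are illustrative, not the measured ones):

```python
def surviving_fraction(cti, n_transfers):
    """Fraction of a charge packet surviving n pixel-to-pixel transfers,
    given a per-transfer charge transfer inefficiency (CTI)."""
    return (1.0 - cti) ** n_transfers
```

For example, a hypothetical CTI of 1e-5 over 1024 transfers leaves about 99% of the charge; radiation damage raises the CTI and so directly degrades the measured X-ray signal.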

  6. Video System for Viewing From a Remote or Windowless Cockpit

    NASA Technical Reports Server (NTRS)

    Banerjee, Amamath

    2009-01-01

    A system of electronic hardware and software synthesizes, in nearly real time, an image of a portion of a scene surveyed by as many as eight video cameras aimed, in different directions, at portions of the scene. This is a prototype of systems that would enable a pilot to view the scene outside a remote or windowless cockpit. The outputs of the cameras are digitized. Direct memory addressing is used to store the data of a few captured images in sequence, and the sequence is repeated in cycles. Cylindrical warping is used in merging adjacent images at their borders to construct a mosaic image of the scene. The mosaic-image data are written to a memory block from which they can be rendered on a head-mounted display (HMD) device. A subsystem in the HMD device tracks the direction of gaze of the wearer, providing data that are used to select, for display, the portion of the mosaic image corresponding to the direction of gaze. The basic functionality of the system has been demonstrated by mounting the cameras on the roof of a van and steering the van by use of the images presented on the HMD device.

  7. A novel near real-time laser scanning device for geometrical determination of pleural cavity surface.

    PubMed

    Kim, Michele M; Zhu, Timothy C

    2013-02-02

    During HPPH-mediated pleural photodynamic therapy (PDT), it is critical to determine the anatomic geometry of the pleural surface quickly, as movement during treatment may change the cavity. We have developed a laser scanning device for this purpose, which has the potential to obtain the surface geometry in real time. A red diode laser with a holographic template to create a pattern and a camera with auto-focusing capability are used to scan the cavity. In conjunction with a calibration against a known surface, we can use triangulation to reconstruct the surface. Using a chest phantom, we are able to obtain a 360-degree scan of the interior in under 1 minute. The chest phantom scan was compared to an existing CT scan to determine its accuracy. The laser-camera separation can be determined through the calibration with 2 mm accuracy. The device is best suited for environments on the scale of a chest cavity (between 10 cm and 40 cm). This technique has the potential to produce cavity geometry in real time during treatment, which would enable PDT treatment dosage to be determined with greater accuracy. Work is ongoing to build a miniaturized device that moves the light source and camera via a fiber-optic bundle, commonly used for endoscopy, with increased accuracy.
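Laser-camera triangulation of this kind can be sketched with a pinhole model: depth follows from the calibrated baseline and the laser spot's shift in the image. The baseline, focal length, and disparity in the usage are hypothetical values, not the device's calibration:

```python
def depth_from_triangulation(baseline_mm, focal_px, disparity_px):
    """Depth of a projected laser spot via triangulation: the spot's
    image shifts by `disparity_px` relative to the calibrated reference;
    depth = baseline * focal / disparity (pinhole-camera sketch)."""
    return baseline_mm * focal_px / disparity_px
```

Repeating this for every spot or line in the holographic pattern yields the 3D surface of the cavity.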

  8. Noncontact simultaneous dual wavelength photoplethysmography: A further step toward noncontact pulse oximetry

    NASA Astrophysics Data System (ADS)

    Humphreys, Kenneth; Ward, Tomas; Markham, Charles

    2007-04-01

    We present a camera-based device capable of capturing two photoplethysmographic (PPG) signals at two different wavelengths simultaneously, in a remote, noncontact manner. The system comprises a complementary metal-oxide-semiconductor camera and a dual-wavelength array of light-emitting diodes (760 and 880 nm). By alternately illuminating a region of tissue with each wavelength of light and detecting the backscattered photons with the camera at a rate of 16 frames/wavelength, two multiplexed PPG waveforms are simultaneously captured. This process is the basis of pulse oximetry, and we describe how, with the inclusion of a calibration procedure, this system could be used as a noncontact pulse oximeter to measure arterial oxygen saturation (SpO2) remotely. Results from an experiment on ten subjects exhibiting normal SpO2 readings demonstrate the instrument's ability to capture signals from a range of subjects under realistic lighting and environmental conditions. We compare the signals captured by the noncontact system to a conventional PPG signal captured concurrently from a finger, and show by means of a J. Bland and D. Altman [Lancet 327, 307 (1986); Statistician 32, 307 (1983)] test that the noncontact device is comparable to a contact device as a monitor of heart rate. We highlight some considerations that should be made when using camera-based "integrative" sampling methods and demonstrate through simulation the suitability of the captured PPG signals for application of existing pulse oximetry calibration procedures.
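Pulse oximetry derives SpO2 from the "ratio of ratios" of the two PPG waveforms, which an empirical calibration (as the abstract notes) then maps to saturation; a sketch, with peak-to-peak extraction of the pulsatile AC component as a simplifying assumption:

```python
import numpy as np

def ratio_of_ratios(ppg_760, ppg_880):
    """Pulse-oximetry 'ratio of ratios'
    R = (AC760/DC760) / (AC880/DC880)
    from two simultaneously captured PPG waveforms."""
    def ac_dc(sig):
        sig = np.asarray(sig, dtype=float)
        dc = sig.mean()                 # mean (non-pulsatile) level
        ac = sig.max() - sig.min()      # peak-to-peak pulsatile component
        return ac, dc
    ac1, dc1 = ac_dc(ppg_760)
    ac2, dc2 = ac_dc(ppg_880)
    return (ac1 / dc1) / (ac2 / dc2)
```

A device-specific calibration, typically a linear fit SpO2 ≈ a − b·R, would convert R to saturation.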

  9. Concept of a photon-counting camera based on a diffraction-addressed Gray-code mask

    NASA Astrophysics Data System (ADS)

    Morel, Sébastien

    2004-09-01

    A new concept of photon-counting camera for fast, low-light-level imaging applications is introduced. The spectrum covered by this camera ranges from visible light to gamma rays, depending on the device used to transform an incoming photon into a burst of visible photons (a photo-event spot) localized in an (x,y) image plane. It is an evolution of the existing "PAPA" (Precision Analog Photon Address) camera that was designed for visible photons, the improvement coming from simplified optics. The new camera transforms, by diffraction, each photo-event spot from an image intensifier or a scintillator into a cross-shaped pattern, which is projected onto a specific Gray-code mask. The photo-event position is then extracted from the signal given by an array of avalanche photodiodes (or, alternatively, photomultiplier tubes) downstream of the mask. After a detailed explanation of this camera concept, which we have called "DIAMICON" (DIffraction Addressed Mask ICONographer), we briefly discuss technical solutions for building such a camera.

  10. Experimental setup for camera-based measurements of electrically and optically stimulated luminescence of silicon solar cells and wafers.

    PubMed

    Hinken, David; Schinke, Carsten; Herlufsen, Sandra; Schmidt, Arne; Bothe, Karsten; Brendel, Rolf

    2011-03-01

    We report in detail on the luminescence imaging setup developed in our laboratory over the last several years. In this setup, the luminescence emission of silicon solar cells or silicon wafers is analyzed quantitatively. Charge carriers are excited electrically (electroluminescence) using a power supply for carrier injection, or optically (photoluminescence) using a laser as the illumination source. The luminescence emission arising from the radiative recombination of the stimulated charge carriers is measured, spatially resolved, using a camera. We give details of the various components, including cameras, optical filters for electro- and photoluminescence, the semiconductor laser, and the four-quadrant power supply. We compare a silicon charge-coupled device (CCD) camera with a back-illuminated silicon CCD camera incorporating electron-multiplier gain and with a complementary metal-oxide-semiconductor indium gallium arsenide camera. For the detection of the luminescence emission of silicon, we analyze the dominant noise sources along with the signal-to-noise ratio of all three cameras at different operating conditions.
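The dominant noise sources in such a comparison combine in the standard shot/dark/read-noise SNR model; a sketch, with the electron counts in the usage chosen as illustrative values, not the paper's measurements:

```python
import math

def snr(signal_e, dark_e, read_noise_e):
    """Signal-to-noise ratio for one exposure, in electrons:
    shot noise of the signal and dark current (Poisson) combined in
    quadrature with the camera's read noise."""
    return signal_e / math.sqrt(signal_e + dark_e + read_noise_e ** 2)
```

For instance, 10,000 signal electrons with 100 dark electrons and 10 e⁻ read noise give an SNR near 99; an electron-multiplying camera trades read noise for excess noise, shifting which term dominates.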

  11. Optical synthesizer for a large quadrant-array CCD camera: Center director's discretionary fund

    NASA Technical Reports Server (NTRS)

    Hagyard, Mona J.

    1992-01-01

    The objective of this program was to design and develop an optical device, an optical synthesizer, that focuses four contiguous quadrants of a solar image on four spatially separated CCD arrays that are part of a unique CCD camera system. This camera and the optical synthesizer will be part of the new NASA-Marshall Experimental Vector Magnetograph, an instrument developed to measure the Sun's magnetic field as accurately as present technology allows. The tasks undertaken in the program are outlined and the final detailed optical design is presented.

  12. NEUTRON RADIATION DAMAGE IN CCD CAMERAS AT JOINT EUROPEAN TORUS (JET).

    PubMed

    Milocco, Alberto; Conroy, Sean; Popovichev, Sergey; Sergienko, Gennady; Huber, Alexander

    2017-10-26

    The neutron and gamma radiation in large fusion reactors is responsible for damage to charge-coupled device (CCD) cameras deployed for applied diagnostics. Based on the ASTM guide E722-09, the 'equivalent 1 MeV neutron fluence in silicon' was calculated for a set of CCD cameras at the Joint European Torus. Such evaluations would be useful for good practice in the operation of the video systems. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Microscope on Mars

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This image taken at Meridiani Planum, Mars by the panoramic camera on the Mars Exploration Rover Opportunity shows the rover's microscopic imager (circular device in center), located on its instrument deployment device, or 'arm.' The image was acquired on the ninth martian day or sol of the rover's mission.

  14. LCD-based digital eyeglass for modulating spatial-angular information.

    PubMed

    Bian, Zichao; Liao, Jun; Guo, Kaikai; Heng, Xin; Zheng, Guoan

    2015-05-04

    Using a programmable aperture to modulate the spatial-angular information of a light field is well known in computational photography and microscopy. Inspired by this concept, we report a digital eyeglass design that adaptively modulates the light field entering human eyes. The main hardware includes a transparent liquid crystal display (LCD) and a mini-camera. The device analyzes the spatial-angular information of the camera image in real time and subsequently sends a command to form a corresponding pattern on the LCD. We show that the eyeglass prototype can adaptively reduce light transmission from bright sources by ~80% while retaining transparency to other, dimmer objects. One application of the reported device is to reduce discomforting glare caused by vehicle headlamps; to this end, we report the preliminary result of using the device in a road test. The reported device may also find applications in military operations (sniper scopes), laser countermeasures, STEM education, and enhancing visual contrast for visually impaired patients and elderly people with low vision.
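The LCD command can be sketched as a per-pixel transmission map thresholded on the camera frame; the brightness threshold, the 20% transmission (~80% attenuation), and the implied one-to-one camera-to-LCD pixel mapping are all assumptions, since the real device needs a spatial calibration between camera and LCD:

```python
import numpy as np

def glare_mask(camera_frame, brightness_threshold=200, transmission=0.2):
    """Per-pixel LCD transmission map: darken pixels that image bright
    sources down to `transmission` (~80% attenuation), leave the rest
    fully transparent."""
    frame = np.asarray(camera_frame)
    mask = np.ones_like(frame, dtype=float)
    mask[frame > brightness_threshold] = transmission
    return mask
```

In operation this map would be recomputed each frame, so a moving headlamp keeps its dark patch while the rest of the scene stays clear.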

  15. Scaling device for photographic images

    NASA Technical Reports Server (NTRS)

    Rivera, Jorge E. (Inventor); Youngquist, Robert C. (Inventor); Cox, Robert B. (Inventor); Haskell, William D. (Inventor); Stevenson, Charles G. (Inventor)

    2005-01-01

    A scaling device projects a known optical pattern into the field of view of a camera, which can be employed as a reference scale in a resulting photograph of a remote object, for example. The device comprises an optical beam projector that projects two or more spaced, parallel optical beams onto a surface of a remotely located object to be photographed. The resulting beam spots or lines on the object are spaced from one another by a known, predetermined distance. As a result, the size of other objects or features in the photograph can be determined through comparison of their size to the known distance between the beam spots. Preferably, the device is a small, battery-powered device that can be attached to a camera and employs one or more laser light sources and associated optics to generate the parallel light beams. In a first embodiment of the invention, a single laser light source is employed, but multiple parallel beams are generated thereby through use of beam splitting optics. In another embodiment, multiple individual laser light sources are employed that are mounted in the device parallel to one another to generate the multiple parallel beams.
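The size comparison described above reduces to a simple ratio; a sketch assuming the feature of interest lies in roughly the same plane as the projected beam spots (the numbers in the test are hypothetical):

```python
def scale_mm_per_px(known_spot_separation_mm, spot_separation_px):
    """Image scale from the projected parallel-beam spots, whose true
    separation is fixed by the device."""
    return known_spot_separation_mm / spot_separation_px

def object_size_mm(object_px, known_spot_separation_mm, spot_separation_px):
    """Real size of a feature measured in pixels, using the beam spots
    as the reference scale."""
    return object_px * scale_mm_per_px(known_spot_separation_mm,
                                       spot_separation_px)
```

Because the beams are parallel, the reference separation is distance-independent, which is what makes the single ratio sufficient.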

  16. Mars Science Laboratory Engineering Cameras

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.
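The quoted FOVs are consistent with pixel scale times array size under a small-angle approximation, which slightly overestimates the 45-degree Navcam figure since the real lens mapping and distortion are ignored:

```python
import math

def fov_degrees(pixel_scale_mrad, n_pixels):
    """Approximate full field of view: per-pixel angular scale times
    pixel count (small-angle sketch, ignores lens distortion)."""
    return math.degrees(pixel_scale_mrad * 1e-3 * n_pixels)
```

For the Navcam, 0.82 mrad/pixel over 1024 pixels gives about 48 degrees (quoted: 45); for the Hazcam, 2.1 mrad/pixel gives about 123 degrees (quoted: 124).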

  17. Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm

    NASA Astrophysics Data System (ADS)

    Lahamy, H.; Lichti, D.

    2011-09-01

    Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images, and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm, and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its capability for hand motion tracking. The evaluation was assessed in terms of position accuracy of the tracking trajectory in the x, y, and z directions in the camera space, and the time difference between image acquisition and image display. Three parameters have been analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study has demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking, but for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
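The simple version of the mean-shift algorithm amounts to iterating toward the intensity-weighted centroid of a search window; this flat-kernel sketch, the window radius, and the synthetic Gaussian target in the test are assumptions, not the study's implementation:

```python
import numpy as np

def mean_shift_step(weights, center, radius):
    """One mean-shift iteration: move `center` (row, col) to the
    weight-centroid of pixels within `radius` (flat kernel)."""
    rows, cols = np.indices(weights.shape)
    dist2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    w = weights * (dist2 <= radius ** 2)
    total = w.sum()
    if total == 0:
        return center  # empty window: stay put
    return (float((rows * w).sum() / total),
            float((cols * w).sum() / total))

def track(weights, start, radius=8, iters=20, tol=1e-3):
    """Iterate mean-shift from `start` until the step size falls
    below `tol`; returns the located mode."""
    c = start
    for _ in range(iters):
        nxt = mean_shift_step(weights, c, radius)
        if np.hypot(nxt[0] - c[0], nxt[1] - c[1]) < tol:
            return nxt
        c = nxt
    return c
```

For hand tracking, `weights` would be a per-frame likelihood map (e.g., skin color or range-gated intensity), re-seeded each frame from the previous position.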

  18. SU-F-BRA-16: Development of a Radiation Monitoring Device Using a Low-Cost CCD Camera Following Radionuclide Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taneja, S; Fru, L Che; Desai, V

    Purpose: It is now commonplace to handle treatments of hyperthyroidism using iodine-131 as an outpatient procedure due to lower costs and less stringent federal regulations. The Nuclear Regulatory Commission has updated release guidelines for these procedures, but there is still a large uncertainty in the dose to the public. Current guidelines to minimize dose to the public require patients to remain isolated after treatment. The purpose of this study was to use a low-cost common device, such as a cell phone, to estimate the exposure emitted from a patient to the general public. Methods: Measurements were performed using an Apple iPhone 3GS and a Cs-137 irradiator. The charge-coupled device (CCD) camera on the phone was irradiated at exposure rates ranging from 0.1 mR/hr to 100 mR/hr, and 30-sec videos were taken during irradiation with the camera lens covered by electrical tape. Interactions were detected as white pixels on a black background in each video. Both single threshold (ST) and colony counting (CC) methods were performed using MATLAB®. Calibration curves were determined by comparing the total pixel intensity output from each method to the known exposure rate. Results: The calibration curve showed a linear relationship above 5 mR/hr for both analysis techniques. The number of events counted per unit exposure rate within the linear region was 19.5 ± 0.7 events/mR and 8.9 ± 0.4 events/mR for the ST and CC methods, respectively. Conclusion: Two algorithms were developed and show a linear relationship between photons detected by a CCD camera and low exposure rates, in the range of 5 mR/hr to 100 mR/hr. Future work aims to refine this model by investigating the dose-rate and energy dependencies of the camera response. This algorithm allows for quantitative monitoring of exposure from patients treated with iodine-131 using a simple device outside of the hospital.
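The ST and CC methods can be sketched as pixel counting versus connected-component counting over a thresholded frame, so that one photon hit spanning several pixels is counted once by CC. This is a pure-Python illustration, not the authors' MATLAB® code, and the threshold is hypothetical:

```python
def single_threshold_count(frame, threshold):
    """ST method: total number of pixels above threshold."""
    return sum(1 for row in frame for v in row if v > threshold)

def colony_count(frame, threshold):
    """CC method: number of 4-connected clusters of above-threshold
    pixels, counted by iterative flood fill."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    clusters = 0
    for i in range(h):
        for j in range(w):
            if frame[i][j] > threshold and not seen[i][j]:
                clusters += 1
                stack = [(i, j)]
                seen[i][j] = True
                while stack:
                    r, c = stack.pop()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < h and 0 <= cc < w
                                and frame[rr][cc] > threshold
                                and not seen[rr][cc]):
                            seen[rr][cc] = True
                            stack.append((rr, cc))
    return clusters
```

Summing either count over a video's frames and dividing by the calibration slope (events/mR) would convert the count to an exposure estimate.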

  19. Estimating the Infrared Radiation Wavelength Emitted by a Remote Control Device Using a Digital Camera

    ERIC Educational Resources Information Center

    Catelli, Francisco; Giovannini, Odilon; Bolzan, Vicente Dall Agnol

    2011-01-01

    The interference fringes produced by a diffraction grating illuminated with radiation from a TV remote control and a red laser beam are, simultaneously, captured by a digital camera. Based on an image with two interference patterns, an estimate of the infrared radiation wavelength emitted by a TV remote control is made. (Contains 4 figures.)
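    Under the small-angle approximation of the grating equation (d sin θ = mλ), the fringe spacing on the sensor is proportional to wavelength for the same grating and camera geometry, so the unknown IR wavelength follows from the ratio of the two measured fringe spacings. A minimal sketch, assuming a He-Ne red laser at 632.8 nm as the reference (the abstract only says "red laser"):

```python
def estimate_ir_wavelength(spacing_ir_px, spacing_red_px, red_wavelength_nm=632.8):
    """Small-angle approximation: fringe spacing (in pixels) is
    proportional to wavelength, so the unknown wavelength is the
    reference wavelength scaled by the ratio of fringe spacings."""
    return red_wavelength_nm * spacing_ir_px / spacing_red_px
```

    For example, an IR fringe spacing 1.5 times that of the red-laser pattern would indicate a wavelength of about 950 nm, typical of remote-control LEDs.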

  20. CMOS Image Sensors: Electronic Camera On A Chip

    NASA Technical Reports Server (NTRS)

    Fossum, E. R.

    1995-01-01

    Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On- chip analog to digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low cost uses.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conder, A.; Mummolo, F. J.

    The goal of the project was to develop a compact, large active area, high spatial resolution, high dynamic range, charge-coupled device (CCD) camera to replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating X-rays. The camera head and controller needed to be capable of operation within a vacuum environment and small enough to be fielded within the small vacuum target chambers at LLNL.

  2. Fluorescence endoscopic video system

    NASA Astrophysics Data System (ADS)

    Papayan, G. V.; Kang, Uk

    2006-10-01

    This paper describes a fluorescence endoscopic video system intended for the diagnosis of diseases of the internal organs. The system operates on the basis of two-channel recording of the video fluxes from a fluorescence channel and a reflected-light channel by means of a high-sensitivity monochrome television camera and a color camera, respectively. Examples are given of the application of the device in gastroenterology.

  3. Passive stand-off terahertz imaging with 1 hertz frame rate

    NASA Astrophysics Data System (ADS)

    May, T.; Zieger, G.; Anders, S.; Zakosarenko, V.; Starkloff, M.; Meyer, H.-G.; Thorwirth, G.; Kreysa, E.

    2008-04-01

    Terahertz (THz) cameras are expected to be a powerful tool for future security applications. To be useful in typical security scenarios (e.g., airport check-in), such a technology has to meet some minimum standards: a THz camera should record images at video rate from a safe (stand-off) distance. Although active cameras are conceivable, a passive system has the benefit of concealed operation. Additionally, from an ethical perspective, the lack of exposure to a radiation source is a considerable advantage for public acceptance. Taking all these requirements into account, only cooled detectors are able to achieve the needed sensitivity. A big leap forward in detector performance and scalability was driven by the astrophysics community: superconducting bolometers and mid-sized arrays of them have been developed and are in routine use. Although devices with many pixels are foreseeable, at present a device with an additional scanning optic is the most direct route to an imaging system with useful resolution. We demonstrate the capabilities of a concept for a passive terahertz video camera based on superconducting technology. The current prototype utilizes a small Cassegrain telescope with a gyrating secondary mirror to record 2-kilopixel THz images at a 1-second frame rate.

  4. Improved Feature Matching for Mobile Devices with IMU.

    PubMed

    Masiero, Andrea; Vettore, Antonio

    2016-08-05

    Thanks to the recent diffusion of low-cost high-resolution digital cameras and to the development of mostly automated procedures for image-based 3D reconstruction, the popularity of photogrammetry for environment surveys has been increasing constantly in recent years. Automatic feature matching is an important step in successfully completing the photogrammetric 3D reconstruction: this step is the fundamental basis for the subsequent estimation of the scene geometry. This paper reconsiders the feature matching problem when dealing with smart mobile devices (e.g., when using the standard camera embedded in a smartphone as the imaging sensor). More specifically, this paper aims at exploiting the information on camera movements provided by the inertial navigation system (INS) in order to make the feature matching step more robust and, possibly, computationally more efficient. First, a revised version of the affine scale-invariant feature transform (ASIFT) is considered: this version reduces the computational complexity of the original ASIFT, while still ensuring an increase of correct feature matches with respect to SIFT. Furthermore, a new two-step procedure for the estimation of the essential matrix E (and the camera pose) is proposed in order to increase its estimation robustness and computational efficiency.
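    The benefit of an INS-supplied rotation can be sketched as follows: once the rotation R between two views is known, the epipolar constraint x2ᵀ[t]ₓR x1 = 0 becomes linear in the translation t, which can be recovered (up to scale) as a null vector via SVD. This is an illustrative simplification, not the two-step procedure actually proposed in the paper:

```python
import numpy as np

def translation_from_rotation(R, pts1, pts2):
    """Given the inter-view rotation R (e.g., from the INS) and matched
    normalized image points, each correspondence gives one linear
    constraint t . ((R x1) x x2) = 0 on the translation t; t is the
    null vector of the stacked constraints (recovered up to scale)."""
    A = np.array([np.cross(R @ x1, x2) for x1, x2 in zip(pts1, pts2)])
    _, _, Vt = np.linalg.svd(A)
    t = Vt[-1]
    return t / np.linalg.norm(t)
```

    The essential matrix then follows as E = [t]ₓR, so the INS prior reduces the five-degree-of-freedom estimation of E to a two-degree-of-freedom problem.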

  5. Real time heart rate variability assessment from Android smartphone camera photoplethysmography: Postural and device influences.

    PubMed

    Guede-Fernandez, F; Ferrer-Mileo, V; Ramos-Castro, J; Fernandez-Chimeno, M; Garcia-Gonzalez, M A

    2015-01-01

    The aim of this paper is to present a smartphone-based system for real-time pulse-to-pulse (PP) interval time series acquisition by frame-to-frame camera image processing. The developed smartphone application acquires image frames from the built-in rear camera at the maximum available rate (30 Hz), and the smartphone GPU is used through the Renderscript API for high-performance frame-by-frame image acquisition and computation in order to obtain the PPG signal and PP interval time series. The relative error of mean heart rate is negligible. In addition, the influence of measurement posture and smartphone model on the beat-to-beat error of heart rate and HRV indices has been analyzed. The standard deviation of the beat-to-beat error (SDE) was 7.81 ± 3.81 ms in the worst case. Furthermore, in the supine measurement posture, a significant device influence on the SDE was found: the SDE is lower with the Samsung S5 than with the Motorola X. This study can be applied to analyze the reliability of different smartphone models for HRV assessment from real-time Android camera frame processing.
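    A minimal sketch of PP-interval extraction from a camera-derived intensity signal (illustrative only; the published pipeline runs on the GPU and a real PPG system would band-pass filter the signal and refine peaks to sub-frame accuracy):

```python
import numpy as np

def pp_intervals(signal, fs=30.0):
    """Detect pulse peaks as local maxima above the signal mean and
    return the pulse-to-pulse intervals in milliseconds. `signal` is a
    1-D per-frame intensity trace (e.g., mean pixel value per frame)."""
    mean = signal.mean()
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > mean and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]]
    return np.diff(peaks) / fs * 1000.0
```

    At a 30 Hz frame rate, peak positions are quantized to about 33 ms, which is one reason the reported beat-to-beat error is on the order of milliseconds.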

  6. Mask-to-wafer alignment system

    DOEpatents

    Sweatt, William C.; Tichenor, Daniel A.; Haney, Steven J.

    2003-11-04

    A modified beam splitter that has a hole pattern that is symmetric in one axis and anti-symmetric in the other can be employed in a mask-to-wafer alignment device. The device is particularly suited for rough alignment using visible light. The modified beam splitter transmits and reflects light from a source of electromagnetic radiation; it includes a substrate that has a first surface facing the source of electromagnetic radiation and a second surface that is reflective of said electromagnetic radiation. The substrate defines a hole pattern about a central line of the substrate. In operation, an input beam from a camera is directed toward the modified beam splitter, and the light from the camera that passes through the holes illuminates the reticle on the wafer. The light beam from the camera also projects an image of a corresponding reticle pattern that is formed on the surface of the mask that is positioned downstream from the camera. Alignment can be accomplished by detecting the radiation that is reflected from the second surface of the modified beam splitter, since the reflected radiation contains both the image of the pattern from the mask and a corresponding pattern on the wafer.

  7. A method of camera calibration in the measurement process with reference mark for approaching observation space target

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Zeng, Luan

    2017-11-01

    Binocular stereoscopic vision can be used for close-range observation of space targets from space-based platforms. In order to solve the problem that a traditional binocular vision system cannot work normally after being disturbed, an online calibration method for a binocular stereo measuring camera with self-reference is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object at the edge of the main optical path, so that it is imaged with the target on the same focal plane; this is equivalent to placing a standard reference in the binocular imaging optical system. When the position of the system and the imaging device parameters are disturbed, the image of the standard reference changes accordingly in the imaging plane, while the physical position of the standard reference object does not change. The camera's external parameters can then be re-calibrated from the visual relationship of the standard reference object. The experimental results show that the maximum mean square error for the same object can be reduced from 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in pitch. This method realizes online calibration of a binocular stereoscopic vision measurement system and can effectively improve the anti-jamming ability of the system.

  8. A Robust Camera-Based Interface for Mobile Entertainment

    PubMed Central

    Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier

    2016-01-01

    Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies to consider the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions; therefore, the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user’s head by processing the frames provided by the mobile device’s front camera, and its position is then used to interact with the mobile apps. First, we evaluate the interface as a pointing device to study its accuracy and different factors to configure, such as the gain or the device’s orientation, as well as the optimal target size for the interface. Second, we present an in-the-wild study to evaluate the usage and the user’s perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts, and we register usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people. PMID:26907288

  9. A novel super-resolution camera model

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming to realize super-resolution (SR) reconstruction of single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this, a driving device such as a piezoelectric ceramic is placed in the camera. By controlling the driving device, a set of consecutive low-resolution (LR) images with random sub-pixel displacements can be obtained and stored in real time. The low-resolution image sequence contains different redundant information and particular prior information, so a super-resolution image can be restored faithfully and effectively. A sampling analysis is used to derive the reconstruction principle of super resolution and the theoretically possible degree of resolution improvement. A learning-based super-resolution algorithm is used to reconstruct single images, and a variational Bayesian algorithm is simulated to reconstruct the low-resolution images with random displacements, modeling the unknown high-resolution image, motion parameters, and unknown model parameters in one hierarchical Bayesian framework. Utilizing a sub-pixel registration method, a super-resolution image of the scene can be reconstructed. The results of a 16-image reconstruction show that this camera model can increase image resolution by a factor of 2, obtaining higher-resolution images with currently available hardware.
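    Why random sub-pixel displacements carry extra information can be illustrated with naive shift-and-add reconstruction (a much simpler scheme than the learning-based and variational Bayesian algorithms used in the paper):

```python
import numpy as np

def shift_and_add(lr_frames, shifts, factor):
    """Naive shift-and-add super-resolution: place each LR pixel onto a
    high-resolution grid at its registered sub-pixel position (rounded
    to the nearest HR pixel) and average the accumulated samples."""
    h, w = lr_frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        ys = (np.arange(h)[:, None] * factor + round(dy * factor)) % (h * factor)
        xs = (np.arange(w)[None, :] * factor + round(dx * factor)) % (w * factor)
        acc[ys, xs] += frame
        cnt[ys, xs] += 1
    cnt[cnt == 0] = 1  # un-hit HR pixels stay zero; interpolate in practice
    return acc / cnt
```

    With four LR frames shifted by half a pixel in each axis, the 2x HR grid is fully sampled and the scene is recovered exactly; with random shifts, registration and interpolation take the place of this ideal sampling.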

  10. Measurement of marine picoplankton cell size by using a cooled, charge-coupled device camera with image-analyzed fluorescence microscopy.

    PubMed Central

    Viles, C L; Sieracki, M E

    1992-01-01

    Accurate measurement of the biomass and size distribution of picoplankton cells (0.2 to 2.0 microns) is paramount in characterizing their contribution to the oceanic food web and global biogeochemical cycling. Image-analyzed fluorescence microscopy, usually based on video camera technology, allows detailed measurements of individual cells to be taken. The application of an imaging system employing a cooled, slow-scan charge-coupled device (CCD) camera to automated counting and sizing of individual picoplankton cells from natural marine samples is described. A slow-scan CCD-based camera was compared to a video camera and was superior for detecting and sizing very small, dim particles such as fluorochrome-stained bacteria. Several edge detection methods for accurately measuring picoplankton cells were evaluated. Standard fluorescent microspheres and a Sargasso Sea surface water picoplankton population were used in the evaluation. Global thresholding was inappropriate for these samples. Methods used previously in image analysis of nanoplankton cells (2 to 20 microns) also did not work well with the smaller picoplankton cells. A method combining an edge detector and an adaptive edge strength operator worked best for rapidly generating accurate cell sizes. A complete sample analysis of more than 1,000 cells averages about 50 min and yields size, shape, and fluorescence data for each cell. With this system, the entire size range of picoplankton can be counted and measured. PMID:1610183
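    As a toy illustration of object-adaptive sizing (the study's actual method combines an edge detector with an adaptive edge-strength operator, which is more involved), a single bright cell can be sized by thresholding at a fraction of its own peak intensity, rather than a global threshold, and converting the enclosed area to an equivalent circular diameter:

```python
import numpy as np

def cell_diameter(img, frac=0.5):
    """Size a single bright cell with an object-adaptive threshold:
    binarize at `frac` of the cell's own peak intensity and report the
    equivalent circular diameter (in pixels) of the enclosed area."""
    mask = img >= frac * img.max()
    return 2.0 * np.sqrt(mask.sum() / np.pi)
```

    Because the threshold scales with each object's brightness, dim fluorochrome-stained cells are not truncated the way they would be under a single global threshold, which is the failure mode the abstract reports.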

  11. A Vision-Based Motion Sensor for Undergraduate Laboratories.

    ERIC Educational Resources Information Center

    Salumbides, Edcel John; Maristela, Joyce; Uy, Alfredson; Karremans, Kees

    2002-01-01

    Introduces an alternative method to determine the mechanics of a moving object that uses computer vision algorithms with a charge-coupled device (CCD) camera as a recording device. Presents two experiments, pendulum motion and terminal velocity, to compare results of the alternative and conventional methods. (YDS)

  12. Smartphone and Curriculum Opportunities for College Faculty

    ERIC Educational Resources Information Center

    Migdalski, Scott T.

    2017-01-01

    The ever-increasing popularity of the smartphone continues to impact many professions. Physicians use the device for medication dosing, professional drivers use the GPS application, mariners use the navigation maps, builders use materials-estimator applications, property appraisers use the camera capability, and students use the device to search…

  13. Infrared cameras are potential traceable "fixed points" for future thermometry studies.

    PubMed

    Yap Kannan, R; Keresztes, K; Hussain, S; Coats, T J; Bown, M J

    2015-01-01

    The National Physical Laboratory (NPL) requires that "fixed points" whose temperatures have been established by the International Temperature Scale of 1990 (ITS-90) be used for device calibration. In practice, a "near" blackbody radiator together with a standard platinum resistance thermometer is accepted as a standard. The aim of this study was to report the correlation and limits of agreement (LOA) of a thermal infrared camera and a non-contact infrared temporal thermometer against each other and against the "near" blackbody radiator. Temperature readings from an infrared thermography camera (FLIR T650sc) and a non-contact infrared temporal thermometer (Hubdic FS-700) were compared to a near blackbody (Hyperion R blackbody model 982) at 0.5 °C increments between 20-40 °C. At each increment, the blackbody cavity temperature was confirmed with the platinum resistance thermometer. Measurements were taken initially with the thermal infrared camera, followed by the infrared thermometer, with each device mounted in turn on a stand at a fixed distance of 20 cm and 5 cm from the blackbody aperture, respectively. The platinum thermometer under-estimated the blackbody temperature by 0.015 °C (95% LOA: -0.08 °C to 0.05 °C), in contrast to the thermal infrared camera and infrared thermometer, which over-estimated the blackbody temperature by 0.16 °C (95% LOA: 0.03 °C to 0.28 °C) and 0.75 °C (95% LOA: -0.30 °C to 1.79 °C), respectively. The infrared thermometer over-estimated the thermal infrared camera measurements by 0.6 °C (95% LOA: -0.46 °C to 1.65 °C). In conclusion, the thermal infrared camera is a potential temperature reference "fixed point" that could substitute for mercury thermometers. However, further repeatability and reproducibility studies will be required with different models of thermal infrared cameras.
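    The 95% limits of agreement quoted above follow the standard Bland-Altman construction, bias ± 1.96 standard deviations of the paired differences; a minimal sketch:

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between two measurement
    methods: mean difference (bias) +/- 1.96 sample standard deviations
    of the paired differences."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

    Applied to paired camera and blackbody readings at each set point, this yields the bias and LOA intervals reported for each device.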

  14. A high-speed digital camera system for the observation of rapid H-alpha fluctuations in solar flares

    NASA Technical Reports Server (NTRS)

    Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.

    1989-01-01

    Researchers developed a prototype digital camera system for obtaining H-alpha images of solar flares with 0.1 s time resolution. They intend to operate this system in conjunction with SMM's Hard X Ray Burst Spectrometer, with x ray instruments which will be available on the Gamma Ray Observatory and eventually with the Gamma Ray Imaging Device (GRID), and with the High Resolution Gamma-Ray and Hard X Ray Spectrometer (HIREGS) which are being developed for the Max '91 program. The digital camera has recently proven to be successful as a one camera system operating in the blue wing of H-alpha during the first Max '91 campaign. Construction and procurement of a second and possibly a third camera for simultaneous observations at other wavelengths are underway as are analyses of the campaign data.

  15. Biplane reconstruction and visualization of virtual endoscopic and fluoroscopic views for interventional device navigation

    NASA Astrophysics Data System (ADS)

    Wagner, Martin G.; Strother, Charles M.; Schafer, Sebastian; Mistretta, Charles A.

    2016-03-01

    Biplane fluoroscopic imaging is an important tool for minimally invasive procedures for the treatment of cerebrovascular diseases. However, finding a good working angle for the C-arms of the angiography system, as well as navigating based on the 2D projection images, can be a difficult task. The purpose of this work is to propose a novel 4D reconstruction algorithm for interventional devices from biplane fluoroscopy images and to propose new techniques for better visualization of the results. The proposed reconstruction method binarizes the fluoroscopic images using a dedicated noise reduction algorithm for curvilinear structures and a global thresholding approach. A topology-preserving thinning algorithm is then applied, and a path search algorithm minimizing the curvature of the device is used to extract the 2D device centerlines. Finally, the 3D device path is reconstructed using epipolar geometry. The point correspondences are determined by a monotonic mapping function that minimizes the reconstruction error. The three-dimensional reconstruction of the device path allows the rendering of virtual fluoroscopy images from arbitrary angles as well as 3D visualizations such as virtual endoscopic views or glass-pipe renderings, where the vessel wall is rendered with a semi-transparent material. This work also proposes a combination of different visualization techniques in order to increase the usability and spatial orientation for the user. A combination of synchronized endoscopic and glass-pipe views is proposed, where the virtual endoscopic camera position is determined based on the device tip location as well as the previous camera position using a Kalman filter in order to create a smooth path. Additionally, vessel centerlines are displayed and the path to the target is highlighted. Finally, the virtual endoscopic camera position is also visualized in the glass-pipe view to further improve the spatial orientation.
The proposed techniques could considerably improve the workflow of minimally invasive procedures for the treatment of cerebrovascular diseases.
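    A minimal per-axis sketch of Kalman-filter smoothing of the virtual camera position (a constant-position model with assumed noise parameters q and r; the paper does not specify its filter design):

```python
import numpy as np

def smooth_path(measurements, q=1e-3, r=1e-1):
    """Constant-position Kalman filter applied independently per axis to
    smooth a noisy position sequence (e.g., the device-tip location
    driving the virtual endoscopic camera). q is the process noise
    variance, r the measurement noise variance."""
    z = np.asarray(measurements, float)
    x = z[0].copy()                  # state estimate
    p = np.ones_like(x)              # estimate variance
    out = [x.copy()]
    for m in z[1:]:
        p = p + q                    # predict (state assumed static)
        k = p / (p + r)              # Kalman gain
        x = x + k * (m - x)          # update toward the measurement
        p = (1 - k) * p
        out.append(x.copy())
    return np.array(out)
```

    The smoothed trajectory follows the measurements with reduced jitter, which is the property needed to keep the virtual endoscopic view from shaking with every noisy tip detection.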

  16. Time-dependent spatial intensity profiles of near-infrared idler pulses from nanosecond optical parametric oscillators

    NASA Astrophysics Data System (ADS)

    Olafsen, L. J.; Olafsen, J. S.; Eaves, I. K.

    2018-06-01

    We report on an experimental investigation of the time-dependent spatial intensity distribution of near-infrared idler pulses from an optical parametric oscillator measured using an infrared (IR) camera, in contrast to beam profiles obtained using traditional knife-edge techniques. Comparisons show that the information gained by utilizing the thermal camera provides more detail than the spatially or temporally averaged measurements from a knife-edge profile. Synchronization, averaging, and thresholding techniques are applied to enhance the acquired images. The additional information obtained can improve the process by which semiconductor devices and other IR lasers are characterized for beam quality and output response, and thereby result in IR devices with higher performance.

  17. A wearable device for emotional recognition using facial expression and physiological response.

    PubMed

    Jangho Kwon; Da-Hye Kim; Wanjoo Park; Laehyun Kim

    2016-08-01

    This paper introduces a glasses-type wearable system to detect the user's emotions using facial expression and physiological responses. The system is designed to acquire facial expression through a built-in camera and physiological responses such as photoplethysmogram (PPG) and electrodermal activity (EDA) in an unobtrusive way. We used emotion-inducing video clips to test the system's suitability in the experiment. The results showed a few meaningful properties that associate emotions with the facial expressions and physiological responses captured by the developed wearable device. We expect that this wearable system with a built-in camera and physiological sensors may be a good solution for monitoring the user's emotional state in daily life.

  18. Aluminum/ammonia heat pipe gas generation and long term system impact for the Space Telescope's Wide Field Planetary Camera

    NASA Technical Reports Server (NTRS)

    Jones, J. A.

    1983-01-01

    In the Space Telescope's Wide Field Planetary Camera (WFPC) project, eight heat pipes (HPs) are used to remove heat from the camera's inner electronic sensors to the spacecraft's outer, cold radiator surface. For proper device functioning and maximization of the signal-to-noise ratios, the charge-coupled devices (CCDs) must be maintained at -95 C or lower. Thermoelectric coolers (TECs) cool the CCDs, and heat pipes deliver each TEC's nominal six to eight watts of heat to the space radiator, which reaches an equilibrium temperature between -15 C and -70 C. An initial problem was related to the difficulty of producing gas-free aluminum/ammonia heat pipes. An investigation was therefore conducted to determine the cause of the gas generation and the impact of this gas on CCD cooling. In order to study the effect of gas slugs in the WFPC system, a separate HP was made. Attention is given to fabrication, testing, and heat pipe gas generation chemistry studies.

  19. Imaging with organic indicators and high-speed charge-coupled device cameras in neurons: some applications where these classic techniques have advantages.

    PubMed

    Ross, William N; Miyazaki, Kenichi; Popovic, Marko A; Zecevic, Dejan

    2015-04-01

    Dynamic calcium and voltage imaging is a major tool in modern cellular neuroscience. Since the beginning of their use over 40 years ago, there have been major improvements in indicators, microscopes, imaging systems, and computers. While cutting edge research has trended toward the use of genetically encoded calcium or voltage indicators, two-photon microscopes, and in vivo preparations, it is worth noting that some questions still may be best approached using more classical methodologies and preparations. In this review, we highlight a few examples in neurons where the combination of charge-coupled device (CCD) imaging and classical organic indicators has revealed information that has so far been more informative than results using the more modern systems. These experiments take advantage of the high frame rates, sensitivity, and spatial integration of the best CCD cameras. These cameras can respond to the faster kinetics of organic voltage and calcium indicators, which closely reflect the fast dynamics of the underlying cellular events.

  20. Automatic portion estimation and visual refinement in mobile dietary assessment

    PubMed Central

    Woo, Insoo; Otsmo, Karl; Kim, SungYe; Ebert, David S.; Delp, Edward J.; Boushey, Carol J.

    2011-01-01

    As concern for obesity grows, the need for automated and accurate methods to monitor nutrient intake becomes essential as dietary intake provides a valuable basis for managing dietary imbalance. Moreover, as mobile devices with built-in cameras have become ubiquitous, one potential means of monitoring dietary intake is photographing meals using mobile devices and having an automatic estimate of the nutrient contents returned. One of the challenging problems of the image-based dietary assessment is the accurate estimation of food portion size from a photograph taken with a mobile digital camera. In this work, we describe a method to automatically calculate portion size of a variety of foods through volume estimation using an image. These “portion volumes” utilize camera parameter estimation and model reconstruction to determine the volume of food items, from which nutritional content is then extrapolated. In this paper, we describe our initial results of accuracy evaluation using real and simulated meal images and demonstrate the potential of our approach. PMID:22242198

  1. Automatic portion estimation and visual refinement in mobile dietary assessment

    NASA Astrophysics Data System (ADS)

    Woo, Insoo; Otsmo, Karl; Kim, SungYe; Ebert, David S.; Delp, Edward J.; Boushey, Carol J.

    2010-01-01

    As concern for obesity grows, the need for automated and accurate methods to monitor nutrient intake becomes essential as dietary intake provides a valuable basis for managing dietary imbalance. Moreover, as mobile devices with built-in cameras have become ubiquitous, one potential means of monitoring dietary intake is photographing meals using mobile devices and having an automatic estimate of the nutrient contents returned. One of the challenging problems of the image-based dietary assessment is the accurate estimation of food portion size from a photograph taken with a mobile digital camera. In this work, we describe a method to automatically calculate portion size of a variety of foods through volume estimation using an image. These "portion volumes" utilize camera parameter estimation and model reconstruction to determine the volume of food items, from which nutritional content is then extrapolated. In this paper, we describe our initial results of accuracy evaluation using real and simulated meal images and demonstrate the potential of our approach.

  2. Incorporating active-learning techniques into the photonics-related teaching in the Erasmus Mundus Master in "Color in Informatics and Media Technology"

    NASA Astrophysics Data System (ADS)

    Pozo, Antonio M.; Rubiño, Manuel; Hernández-Andrés, Javier; Nieves, Juan L.

    2014-07-01

    In this work, we present a teaching methodology using active-learning techniques in the course "Devices and Instrumentation" of the Erasmus Mundus Master's Degree in "Color in Informatics and Media Technology" (CIMET). A part of the course "Devices and Instrumentation" of this Master's is dedicated to the study of image sensors and methods to evaluate their image quality. The teaching methodology that we present consists of incorporating practical activities during the traditional lectures. One of the innovative aspects of this teaching methodology is that students apply the concepts and methods studied in class to real devices. For this, students use their own digital cameras, webcams, or cellphone cameras in class. These activities provide students a better understanding of the theoretical subject given in class and encourage the active participation of students.

  3. Non-contact measurement of helicopter device position in wind tunnels with the use of optical videogrammetry method

    NASA Astrophysics Data System (ADS)

    Kuruliuk, K. A.; Kulesh, V. P.

    2016-10-01

    An optical videogrammetry method using one digital camera for non-contact measurement of the geometric shape parameters, position, and motion of models and structural elements of aircraft in experimental aerodynamics was developed. Tests using this method to measure the six components (three linear and three angular) of the actual position of a helicopter device in the wind tunnel flow were conducted. The distance between the camera and the test object was 15 meters. It was shown in practice that, under the conditions of an aerodynamic experiment, the instrumental measurement error (standard deviation) for angular and linear displacements of the helicopter device does not exceed 0.02° and 0.3 mm, respectively. Analysis of the results shows that at minimum rotor thrust the deviations are systematic and generally within ±0.2°. Deviations of the angle values grow with increasing rotor thrust.

  4. Combining shearography and interferometric fringe projection in a single device for complete control of industrial applications

    NASA Astrophysics Data System (ADS)

    Blain, Pascal; Michel, Fabrice; Piron, Pierre; Renotte, Yvon; Habraken, Serge

    2013-08-01

    Noncontact optical measurement methods are essential tools in many industrial and research domains. A family of new noncontact optical measurement methods based on the polarization-state splitting technique and monochromatic light projection, as a way to overcome ambient lighting for in-situ measurement, has been developed. Recent work on a birefringent element, a Savart plate, allows one to build a more flexible and robust interferometer. This interferometer is a multipurpose metrological device. On the one hand, the interferometer can be set in front of a charge-coupled device (CCD) camera; this optical measurement system, called a shearography interferometer, allows one to measure micro-displacements between two states of the studied object under coherent lighting. On the other hand, by producing and shifting multiple sinusoidal Young's interference patterns with this interferometer and using a CCD camera, it is possible to build a three-dimensional structured-light profilometer.
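    Shifted sinusoidal fringe patterns are typically decoded with an N-step phase-shifting formula; a four-step sketch (an illustration of the general technique, since the abstract does not specify the algorithm used):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Four-step phase shifting: with fringe intensities
    I_k = A + B*cos(phi + delta_k) recorded at shifts
    delta = 0, pi/2, pi, 3*pi/2, the wrapped phase at each pixel is
    phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)
```

    The wrapped phase map is then unwrapped and converted to height via the projector-camera triangulation geometry.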

  5. A stroboscopic technique for using CCD cameras in flow visualization systems for continuous viewing and stop action photography

    NASA Technical Reports Server (NTRS)

    Franke, John M.; Rhodes, David B.; Jones, Stephen B.; Dismond, Harriet R.

    1992-01-01

    A technique for synchronizing a pulse light source to charge coupled device cameras is presented. The technique permits the use of pulse light sources for continuous as well as stop action flow visualization. The technique has eliminated the need to provide separate lighting systems at facilities requiring continuous and stop action viewing or photography.

  6. Camera selection for real-time in vivo radiation treatment verification systems using Cherenkov imaging.

    PubMed

    Andreozzi, Jacqueline M; Zhang, Rongxiao; Glaser, Adam K; Jarvis, Lesley A; Pogue, Brian W; Gladstone, David J

    2015-02-01

    To identify achievable camera performance and hardware needs in a clinical Cherenkov imaging system for real-time, in vivo monitoring of the surface beam profile on patients, as novel visual information, documentation, and possible treatment verification for clinicians. Complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), intensified charge-coupled device (ICCD), and electron-multiplying intensified charge-coupled device (EM-ICCD) cameras were investigated to determine Cherenkov imaging performance in a clinical radiotherapy setting, with an emphasis on the maximum supportable frame rate. Where possible, the image intensifier was synchronized using a pulse signal from the Linac in order to image under room lighting conditions comparable to patient treatment scenarios. A solid water phantom irradiated with a 6 MV photon beam was imaged by the cameras to evaluate the maximum frame rate for adequate Cherenkov detection. Adequate detection was defined as an average electron count in the background-subtracted Cherenkov image region of interest in excess of 0.5% (327 counts) of the 16-bit maximum electron count value. Additionally, an ICCD and an EM-ICCD were each used clinically to image two patients undergoing whole-breast radiotherapy to compare the clinical advantages and limitations of each system. Intensifier-coupled cameras were required for imaging Cherenkov emission on the phantom surface with ambient room lighting; standalone CMOS and CCD cameras were not viable. The EM-ICCD was able to collect images from a single Linac pulse delivering less than 0.05 cGy of dose at 30 frames/s (fps) and a pixel resolution of 512 × 512, compared to an ICCD which was limited to 4.7 fps at 1024 × 1024 resolution. An intensifier with higher quantum efficiency at the entrance photocathode in the red wavelengths [30% quantum efficiency (QE) vs the previous 19%] promises at least 8.6 fps at a resolution of 1024 × 1024 and lower monetary cost than the EM-ICCD. The ICCD with an intensifier better optimized for red wavelengths was found to provide the best potential for real-time display (at least 8.6 fps) of radiation dose on the skin during treatment at a resolution of 1024 × 1024.
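
The adequate-detection criterion above is easy to make concrete: 0.5% of the 16-bit full scale (65535) is about 327 counts, checked against the mean of the background-subtracted region of interest. A small sketch under those definitions (the image sizes and signal levels are invented for illustration):

```python
import numpy as np

# Detection criterion from the abstract: the mean background-subtracted
# count in the ROI must exceed 0.5% of the 16-bit full scale.
FULL_SCALE = 2**16 - 1          # 65535 for a 16-bit camera
THRESHOLD = 0.005 * FULL_SCALE  # ~327 counts

def adequately_detected(frame, background, roi):
    """roi is a (row-slice, col-slice) pair; frames are 16-bit images."""
    signal = frame.astype(float) - background.astype(float)
    return signal[roi].mean() > THRESHOLD

rng = np.random.default_rng(0)
bg = rng.integers(90, 110, size=(64, 64)).astype(np.uint16)
frame = bg + 400  # hypothetical Cherenkov signal of ~400 counts
roi = (slice(16, 48), slice(16, 48))
print(adequately_detected(frame, bg, roi))  # → True: 400 > ~327
```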

  7. Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera.

    PubMed

    Wilkes, Thomas C; McGonigle, Andrew J S; Pering, Tom D; Taggart, Angus J; White, Benjamin S; Bryant, Robert G; Willmott, Jon R

    2016-10-06

    Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements.

  8. Cameras for semiconductor process control

    NASA Technical Reports Server (NTRS)

    Porter, W. A.; Parker, D. L.

    1977-01-01

    The application of X-ray topography to semiconductor process control is described, considering the novel features of the high-speed camera and the difficulties associated with this technique. The most significant results on the effects of material defects on device performance are presented, including results obtained using wafers processed entirely within this institute. Defects were identified using the X-ray camera and correlated with probe data. Temperature-dependent effects of material defects are also included. Recent applications and improvements of X-ray topographs of silicon-on-sapphire and gallium arsenide are presented, with a description of a real-time TV system prototype and of the most recent vacuum chuck design. We also discuss our efforts to promote the use of the camera among semiconductor manufacturers.

  9. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
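
Of the DSP functions listed, auto gain control is the simplest to sketch: scale the frame so its mean level approaches a target, with the gain clamped. This is only an illustrative stand-in, not the paper's algorithm; the target and clamp values are assumptions:

```python
import numpy as np

def auto_gain(frame, target=110, max_gain=4.0):
    """Proportional auto gain control: scale the frame so its mean
    level approaches `target`, clamping the gain to `max_gain`.
    (Both values are illustrative, not taken from the paper.)"""
    gain = min(max_gain, target / max(frame.mean(), 1e-6))
    return np.clip(frame * gain, 0, 255).astype(np.uint8)

dark = np.full((4, 4), 55.0)   # underexposed frame, mean 55
out = auto_gain(dark)
print(out.mean())  # → 110.0 (gain of exactly 2 applied)
```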

  10. Single molecule photobleaching (SMPB) technology for counting of RNA, DNA, protein and other molecules in nanoparticles and biological complexes by TIRF instrumentation.

    PubMed

    Zhang, Hui; Guo, Peixuan

    2014-05-15

    Direct counting of biomolecules within biological complexes or nanomachines is demanding. Single molecule counting using optical microscopy is challenging due to the diffraction limit. The single molecule photobleaching (SMPB) technology for direct counting developed by our team (Shu et al., 2007 [18]; Zhang et al., 2007 [19]) offers a simple and straightforward method to determine the stoichiometry of molecules or subunits within biocomplexes or nanomachines at nanometer scales. Stoichiometry is determined by real-time observation of the number of descending steps resulting from the photobleaching of individual fluorophores. This technology has now been used extensively for single molecule counting of proteins, RNA, and other macromolecules in a variety of complexes or nanostructures. Here, we elucidate the SMPB technology, using the counting of RNA molecules within a bacteriophage phi29 DNA-packaging biomotor as an example. The method described here can be applied to single molecule counting of other molecules in other systems. The construction of a concise, simple and economical single molecule total internal reflection fluorescence (TIRF) microscope combining prism-type and objective-type TIRF is described. The imaging system contains a deep-cooled sensitive EMCCD camera with single fluorophore detection sensitivity, a laser combiner for simultaneous dual-color excitation, and a Dual-View™ imager to split the outcome signals into different detector channels based on their wavelengths. The methodology of the single molecule photobleaching assay used to elucidate the stoichiometry of RNA on the phi29 DNA packaging motor and the mechanism of protein/RNA interaction is described. Different methods for single fluorophore labeling of RNA molecules are reviewed. The process of statistical modeling to reveal the true copy number of the biomolecules based on the binomial distribution is also described. Copyright © 2014 Elsevier Inc. All rights reserved.
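
The binomial modeling step mentioned above can be sketched as follows: given a histogram of observed photobleaching step counts and an assumed labeling efficiency p, the inferred copy number is the n whose binomial(n, p) distribution best matches the histogram. The fitting criterion (least squares) and the example numbers are illustrative, not taken from the paper:

```python
import math
from collections import Counter

def binom_pmf(n, p, k):
    # math.comb(n, k) is 0 for k > n, so out-of-range steps score 0.
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def best_copy_number(step_counts, p, n_max=12):
    """Pick the subunit count n whose binomial(n, p) distribution best
    matches the observed photobleaching-step histogram (least squares).
    p is the assumed fluorophore labeling efficiency."""
    total = sum(step_counts.values())
    obs = {k: c / total for k, c in step_counts.items()}

    def sse(n):
        kmax = max(n, max(obs))
        return sum((obs.get(k, 0.0) - binom_pmf(n, p, k)) ** 2
                   for k in range(kmax + 1))

    return min(range(1, n_max + 1), key=sse)

# Hypothetical step histogram, consistent with 6 subunits labeled at ~80%.
hist = Counter({6: 26, 5: 39, 4: 25, 3: 8, 2: 2})
print(best_copy_number(hist, p=0.8))  # → 6
```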

  11. Joint L1 and Total Variation Regularization for Fluorescence Molecular Tomography

    PubMed Central

    Dutta, Joyita; Ahn, Sangtae; Li, Changqing; Cherry, Simon R.; Leahy, Richard M.

    2012-01-01

    Fluorescence molecular tomography (FMT) is an imaging modality that exploits the specificity of fluorescent biomarkers to enable 3D visualization of molecular targets and pathways in vivo in small animals. Owing to the high degree of absorption and scattering of light through tissue, the FMT inverse problem is inherently ill-conditioned, making image reconstruction highly susceptible to the effects of noise and numerical errors. Appropriate priors or penalties are needed to facilitate reconstruction and to restrict the search space to a specific solution set. Typically, fluorescent probes are locally concentrated within specific areas of interest (e.g., inside tumors). The commonly used L2 norm penalty generates the minimum energy solution, which tends to be spread out in space. Instead, we present here an approach involving a combination of the L1 and total variation norm penalties, the former to suppress spurious background signals and enforce sparsity and the latter to preserve local smoothness and piecewise constancy in the reconstructed images. We have developed a surrogate-based optimization method for minimizing the joint penalties. The method was validated using both simulated and experimental data obtained from a mouse-shaped phantom mimicking tissue optical properties and containing two embedded fluorescent sources. Fluorescence data were collected using a 3D FMT setup that uses an EMCCD camera for image acquisition and a conical mirror for full-surface viewing. A range of performance metrics were utilized to evaluate our simulation results and to compare our method with the L1, L2, and total variation norm penalty based approaches. The experimental results were assessed using Dice similarity coefficients computed after co-registration with a CT image of the phantom. PMID:22390906
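
The joint penalty can be illustrated on a 1-D toy problem: minimize a least-squares data term plus smoothed L1 and total-variation terms by plain gradient descent. This is only a sketch of the objective, not the paper's surrogate-based optimizer; the problem sizes and regularization weights are arbitrary:

```python
import numpy as np

def objective(x, A, b, lam1, lam2, eps=1e-4):
    """Least-squares data fidelity plus the joint L1 + total-variation
    penalty (1-D toy version, smoothed so it is differentiable)."""
    fid = 0.5 * np.sum((A @ x - b) ** 2)
    l1 = np.sum(np.sqrt(x ** 2 + eps))           # smooth |x_i|
    tv = np.sum(np.sqrt(np.diff(x) ** 2 + eps))  # smooth |x_{i+1} - x_i|
    return fid + lam1 * l1 + lam2 * tv

def gradient(x, A, b, lam1, lam2, eps=1e-4):
    g = A.T @ (A @ x - b) + lam1 * x / np.sqrt(x ** 2 + eps)
    d = np.diff(x)
    w = lam2 * d / np.sqrt(d ** 2 + eps)
    g[:-1] -= w  # each difference pulls its left neighbour up...
    g[1:] += w   # ...and its right neighbour down
    return g

# Underdetermined toy problem with a sparse, piecewise-constant truth.
rng = np.random.default_rng(1)
n, m = 40, 25
x_true = np.zeros(n)
x_true[10:15] = 1.0
A = rng.normal(size=(m, n))
b = A @ x_true

x = np.zeros(n)
lam1, lam2, lr = 0.05, 0.05, 1e-3
for _ in range(5000):
    x -= lr * gradient(x, A, b, lam1, lam2)
print(objective(x, A, b, lam1, lam2)
      < objective(np.zeros(n), A, b, lam1, lam2))  # → True
```

The L1 term drives the 25 zero entries toward zero while the TV term keeps the recovered plateau flat, which is exactly the division of labour the abstract describes.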

  12. Biological applications of an LCoS-based programmable array microscope (PAM)

    NASA Astrophysics Data System (ADS)

    Hagen, Guy M.; Caarls, Wouter; Thomas, Martin; Hill, Andrew; Lidke, Keith A.; Rieger, Bernd; Fritsch, Cornelia; van Geest, Bert; Jovin, Thomas M.; Arndt-Jovin, Donna J.

    2007-02-01

    We report on a new-generation commercial prototype of a programmable array optical sectioning fluorescence microscope (PAM) for rapid, light-efficient 3D imaging of living specimens. The stand-alone module, including light source(s) and detector(s), features an innovative optical design and a ferroelectric liquid-crystal-on-silicon (LCoS) spatial light modulator (SLM) instead of the DMD used in the original PAM design. The LCoS PAM (developed in collaboration with Cairn Research, Ltd.) can be attached to a port of a(ny) unmodified fluorescence microscope. The prototype system currently operated at the Max Planck Institute incorporates a 6-position high-intensity LED illuminator, modulated laser and lamp light sources, and an Andor iXon emCCD camera. The module is mounted on an Olympus IX71 inverted microscope with 60-150X objectives and Prior Scientific high-resolution x, y, and z scanning stages. Recent enhancements include: (i) point- and line-wise spectral resolution and (ii) lifetime imaging (FLIM) in the frequency domain. Multiphoton operation and other nonlinear techniques should be feasible. The capabilities of the PAM are illustrated by several examples demonstrating single molecule as well as lifetime imaging in live cells, and the unique capability to perform photoconversion with arbitrary patterns and high spatial resolution. Using quantum dot coupled ligands we show real-time binding and subsequent trafficking of individual ligand-growth factor receptor complexes on and in live cells with a temporal resolution and sensitivity exceeding those of conventional CLSM systems. The combined use of a blue laser and parallel LED or visible laser sources permits photoactivation and rapid kinetic analysis of cellular processes probed by photoswitchable visible fluorescent proteins such as DRONPA.

  13. Latitudinal Variations In Vertical Cloud Structure Of Jupiter As Determined By Ground- based Observation With Multispectral Imaging

    NASA Astrophysics Data System (ADS)

    Sato, T.; Kasaba, Y.; Takahashi, Y.; Murata, I.; Uno, T.; Tokimasa, N.; Sakamoto, M.

    2008-12-01

    We conducted ground-based observation of Jupiter with a liquid crystal tunable filter (LCTF) and an EM-CCD camera in two methane absorption bands (700-757 nm and 872-950 nm at 3 nm steps: 47 wavelengths in total) to derive Jupiter's detailed vertical cloud structure. The 2-meter reflector telescope at Nishi-Harima astronomical observatory in Japan was used for our observation on 26-30 May 2008. After a series of image processing steps (composition of high-quality images at each wavelength and geometry calibration), we converted observed intensity to absolute reflectivity at each pixel using a standard star. As a result, we acquired Jupiter data cubes with high spatial resolution (about 1") and narrow-band imaging (typically 7 nm) in each methane absorption band by superimposing 30 Jupiter images obtained at short exposure times (50 ms per image). These data sets enable us to probe different altitudes of Jupiter from the 100 mbar down to the 1 bar level with higher vertical resolution than conventional interference filters allow. To interpret the observed center-limb profiles, we developed a radiative transfer code based on the adding-doubling algorithm to treat multiple scattering of sunlight, and extracted information on aerosol altitudes and optical properties using a two-cloud model. First, we fit 5 different profiles simultaneously in the continuum data (745-757 nm) to retrieve the optical thickness of the haze and the single scattering albedo of the cloud. Second, we fit 15 different profiles around the 727 nm methane absorption band and 13 different profiles around the 890 nm methane absorption band to retrieve the aerosol altitude and the optical thickness of the cloud. In this presentation, we present the results of these modeling simulations and discuss the latitudinal variations of Jupiter's vertical cloud structure.

  14. Single-cell resolution fluorescence imaging of circadian rhythms detected with a Nipkow spinning disk confocal system.

    PubMed

    Enoki, Ryosuke; Ono, Daisuke; Hasan, Mazahir T; Honma, Sato; Honma, Ken-Ichi

    2012-05-30

    Single-point laser scanning confocal imaging produces signals with high spatial resolution in living organisms. However, photo-induced toxicity, bleaching, and focus drift remain challenges, especially when recording over several days for monitoring circadian rhythms. Bioluminescence imaging is a tool widely used for this purpose, and does not cause photo-induced difficulties. However, bioluminescence signals are dimmer than fluorescence signals, and are potentially affected by levels of cofactors, including ATP, O(2), and the substrate, luciferin. Here we describe a novel time-lapse confocal imaging technique to monitor circadian rhythms in living tissues. The imaging system comprises a multipoint scanning Nipkow spinning disk confocal unit and a high-sensitivity EM-CCD camera mounted on an inverted microscope with auto-focusing function. Brain slices of the suprachiasmatic nucleus (SCN), the central circadian clock, were prepared from transgenic mice expressing a clock gene, Period 1 (Per1), and fluorescence reporter protein (Per1::d2EGFP). The SCN slices were cut out together with membrane, flipped over, and transferred to the collagen-coated glass dishes to obtain signals with a high signal-to-noise ratio and to minimize focus drift. The imaging technique and improved culture method enabled us to monitor the circadian rhythm of Per1::d2EGFP from optically confirmed single SCN neurons without noticeable photo-induced effects or focus drift. Using recombinant adeno-associated virus carrying a genetically encoded calcium indicator, we also monitored calcium circadian rhythms at a single-cell level in a large population of SCN neurons. Thus, the Nipkow spinning disk confocal imaging system developed here facilitates long-term visualization of circadian rhythms in living cells. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Autonomous pedestrian localization technique using CMOS camera sensors

    NASA Astrophysics Data System (ADS)

    Chun, Chanwoo

    2014-09-01

    We present a pedestrian localization technique that does not require infrastructure. The proposed angle-only measurement method uses specially manufactured shoes. Each shoe has two CMOS cameras and two markers, such as LEDs, attached on the inward side. The line-of-sight (LOS) angles towards the two markers on the forward shoe are measured using the two cameras on the rear shoe. Our simulation results show that a pedestrian wearing these shoes while walking through a shopping mall can be accurately guided to the front of a destination store located 100 m away, if the floor plan of the mall is available.
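
The angle-only idea reduces, in 2-D, to intersecting two lines of sight from cameras a known baseline apart. A minimal sketch (the baseline and marker position are invented; the real system works in 3-D with two markers per shoe):

```python
import math

def triangulate(baseline, alpha, beta):
    """Locate a marker from two line-of-sight angles (radians, measured
    from the x-axis) seen by cameras at (0, 0) and (baseline, 0).
    Illustrative 2-D version of the angle-only method."""
    # Ray 1: y = x*tan(alpha); Ray 2: y = (x - baseline)*tan(beta)
    ta, tb = math.tan(alpha), math.tan(beta)
    x = baseline * tb / (tb - ta)
    y = x * ta
    return x, y

# Cameras 10 cm apart; marker actually at (0.2, 0.3) m.
b = 0.10
alpha = math.atan2(0.3, 0.2)      # angle seen by camera at the origin
beta = math.atan2(0.3, 0.2 - b)   # angle seen by camera at (b, 0)
x, y = triangulate(b, alpha, beta)
print(round(x, 6), round(y, 6))  # → 0.2 0.3
```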

  16. Research on the electro-optical assistant landing system based on the dual camera photogrammetry algorithm

    NASA Astrophysics Data System (ADS)

    Mi, Yuhe; Huang, Yifan; Li, Lin

    2015-08-01

    Based on the beacon photogrammetry location technique, a Dual Camera Photogrammetry (DCP) algorithm was used to assist helicopters landing on a ship. In this paper, ZEMAX was used to simulate two Charge Coupled Device (CCD) cameras imaging four beacons on both sides of the helicopter and to output the images to MATLAB. Target, image pixel, world, and camera coordinate systems were established. According to the ideal pin-hole imaging model, the rotation matrix and translation vector between the target and camera coordinate systems could be obtained by using MATLAB to process the image information and solve the linear equations. On this basis, the ambient temperature and the positions of the beacons and cameras were varied in ZEMAX to test the accuracy of the DCP algorithm in complex sea states. The numerical simulation shows that in complex sea states the position measurement accuracy can meet the requirements of the project.
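
Since the beacons lie in a plane, the linear system the abstract refers to can be illustrated with the standard direct linear transform (DLT) for a planar homography; the rotation matrix and translation vector then follow by decomposing H with the camera intrinsics. A numpy sketch with invented beacon and pixel coordinates (the paper's actual MATLAB formulation may differ):

```python
import numpy as np

def homography(obj_pts, img_pts):
    """Estimate the planar homography mapping target-plane coordinates
    to image pixels from 4+ correspondences via the DLT: each point
    contributes two linear equations; the solution is the null vector
    of the stacked system, found with the SVD."""
    rows = []
    for (X, Y), (u, v) in zip(obj_pts, img_pts):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]  # fix the arbitrary scale

# Hypothetical beacons (target frame) and a known ground-truth mapping.
obj = [(0, 0), (1, 0), (1, 1), (0, 1)]
H_true = np.array([[400., 20., 320.], [10., 380., 240.], [0.02, 0.01, 1.]])
img = []
for X, Y in obj:
    w = H_true @ np.array([X, Y, 1.0])
    img.append((w[0] / w[2], w[1] / w[2]))
H = homography(obj, img)
print(np.allclose(H, H_true))  # → True: exact data recovers H exactly
```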

  17. Chrominance watermark for mobile applications

    NASA Astrophysics Data System (ADS)

    Reed, Alastair; Rogers, Eliot; James, Dan

    2010-01-01

    Creating an imperceptible watermark which can be read by a broad range of cell phone cameras is a difficult problem. The problems are caused by the inherently low resolution and noise levels of typical cell phone cameras. The quality limitations of these devices compared to a typical digital camera are caused by the small size of the cell phone and cost trade-offs made by the manufacturer. In order to achieve this, a low resolution watermark is required which can be resolved by a typical cell phone camera. The visibility of a traditional luminance watermark was too great at this lower resolution, so a chrominance watermark was developed. The chrominance watermark takes advantage of the relatively low sensitivity of the human visual system to chrominance changes. This enables a chrominance watermark to be inserted into an image which is imperceptible to the human eye but can be read using a typical cell phone camera. Sample images will be presented showing images with a very low visibility which can be easily read by a typical cell phone camera.

  18. Standard design for National Ignition Facility x-ray streak and framing cameras.

    PubMed

    Kimbrough, J R; Bell, P M; Bradley, D K; Holder, J P; Kalantar, D K; MacPhee, A G; Telford, S

    2010-10-01

    The x-ray streak camera and x-ray framing camera for the National Ignition Facility were redesigned to improve electromagnetic pulse hardening, protect high voltage circuits from pressure transients, and maximize the use of common parts and operational software. Both instruments use the same PC104 based controller, interface, power supply, charge coupled device camera, protective hermetically sealed housing, and mechanical interfaces. Communication is over fiber optics with identical facility hardware for both instruments. Each has three triggers that can be either fiber optic or coax. High voltage protection consists of a vacuum sensor to enable the high voltage and pulsed microchannel plate phosphor voltage. In the streak camera, the high voltage is removed after the sweep. Both rely on the hardened aluminum box and a custom power supply to reduce electromagnetic pulse/electromagnetic interference (EMP/EMI) getting into the electronics. In addition, the streak camera has an EMP/EMI shield enclosing the front of the streak tube.

  19. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
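
The processing chain described, differencing out the pre-illumination image to isolate the laser spot and converting horizontal disparity to range via the stereometric relation Z = f·B/d, can be sketched as follows (the frame sizes, focal length, and baseline are invented):

```python
import numpy as np

def spot_column(before, after, thresh=50):
    """Isolate the laser spot by differencing the pre-illumination image
    from the illuminated one, then return its centroid column."""
    diff = after.astype(int) - before.astype(int)
    ys, xs = np.nonzero(diff > thresh)
    return xs.mean()

def stereo_range(focal_px, baseline_m, disparity_px):
    """Standard stereometric relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Synthetic 8-bit frames: the spot lands at column 120 (left camera)
# and 100 (right camera); focal length 800 px, baseline 0.1 m.
bg = np.full((8, 200), 10, np.uint8)
left = bg.copy()
left[4, 120] = 255
right = bg.copy()
right[4, 100] = 255
d = spot_column(bg, left) - spot_column(bg, right)
print(stereo_range(800, 0.1, d))  # → 4.0 (metres)
```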

  20. Lensless imaging for wide field of view

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Yagi, Yasushi

    2015-02-01

    It is desirable to engineer a small camera with a wide field of view (FOV) because of current developments in the field of wearable cameras and computing products, such as action cameras and Google Glass. However, typical approaches for achieving a wide FOV, such as attaching a fisheye lens or convex mirrors, require a trade-off between optics size and the FOV. We propose camera optics that achieve a wide FOV while remaining small and lightweight. The proposed optics are a completely lensless, catoptric design containing four mirrors: two for wide viewing and two for focusing the image on the camera sensor. The optics are simple and readily miniaturized, since they use only mirrors and are therefore not susceptible to chromatic aberration. We have implemented prototype optics of our lensless concept, attached them to commercial charge-coupled device/complementary metal oxide semiconductor cameras, and conducted experiments to evaluate the feasibility of our proposed optics.

  1. 3D image processing architecture for camera phones

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Goma, Sergio R.; Aleksic, Milivoje

    2011-03-01

    Putting high quality and easy-to-use 3D technology into the hands of regular consumers has become a recent challenge as interest in 3D technology has grown. Making 3D technology appealing to the average user requires that it be made fully automatic and foolproof. Designing a fully automatic 3D capture and display system requires: 1) identifying critical 3D technology issues like camera positioning, disparity control rationale, and screen geometry dependency, 2) designing a methodology to automatically control them. Implementing 3D capture functionality on phone cameras necessitates designing algorithms to fit within the processing capabilities of the device. Various constraints like sensor position tolerances, sensor 3A tolerances, post-processing, 3D video resolution and frame rate should be carefully considered for their influence on the 3D experience. Issues with migrating functions such as zoom and pan from the 2D usage model (both during capture and display) to 3D need to be resolved to ensure the highest level of user experience. It is also very important that the 3D usage scenario (including interactions between the user and the capture/display device) is carefully considered. Finally, both the processing power of the device and the practicality of the scheme need to be taken into account while designing the calibration and processing methodology.

  2. Evaluation of the Intel RealSense SR300 camera for image-guided interventions and application in vertebral level localization

    NASA Astrophysics Data System (ADS)

    House, Rachael; Lasso, Andras; Harish, Vinyas; Baum, Zachary; Fichtinger, Gabor

    2017-03-01

    PURPOSE: Optical pose tracking of medical instruments is often used in image-guided interventions. Unfortunately, compared to commonly used computing devices, optical trackers tend to be large, heavy, and expensive devices. Compact 3D vision systems, such as Intel RealSense cameras, can capture 3D pose information at several orders of magnitude lower cost, size, and weight. We propose to use the Intel SR300 device for applications where it is not practical or feasible to use conventional trackers and limited range and tracking accuracy are acceptable. We also put forward a vertebral level localization application utilizing the SR300 to reduce the risk of wrong-level surgery. METHODS: The SR300 was utilized as an object tracker by extending the PLUS toolkit to support data collection from RealSense cameras. Accuracy of the camera was tested by comparing to a high-accuracy optical tracker. CT images of a lumbar spine phantom were obtained and used to create a 3D model in 3D Slicer. The SR300 was used to obtain a surface model of the phantom. Markers were attached to the phantom and a pointer and tracked using the Intel RealSense SDK's built-in object tracking feature. 3D Slicer was used to align the CT image with the phantom using landmark registration and display the CT image overlaid on the optical image. RESULTS: Accuracy testing of the camera yielded a median position error of 3.3 mm (95th percentile 6.7 mm) and orientation error of 1.6° (95th percentile 4.3°) in a 20 × 16 × 10 cm workspace, constantly maintaining proper marker orientation. The model and surface aligned correctly, demonstrating the vertebral level localization application. CONCLUSION: The SR300 may be usable for pose tracking in medical procedures where limited accuracy is acceptable. Initial results suggest the SR300 is suitable for vertebral level localization.

  3. Application of infrared camera to bituminous concrete pavements: measuring vehicle

    NASA Astrophysics Data System (ADS)

    Janků, Michal; Stryk, Josef

    2017-09-01

    Infrared thermography (IR) has been used for decades in certain fields. However, the technological level of measuring devices has not been sufficient for some applications. In recent years, good-quality thermal cameras with high resolution and very high thermal sensitivity have appeared on the market. Developments in measuring technology have allowed the use of infrared thermography in new fields and by a larger number of users. This article describes research in progress at the Transport Research Centre focused on the use of infrared thermography for diagnostics of bituminous road pavements. A measuring vehicle equipped with a thermal camera, a digital camera, and a GPS sensor was designed for pavement diagnostics. New, highly sensitive thermal cameras make it possible to measure very small temperature differences from a moving vehicle. This study shows the potential of high-speed inspection without lane closures using IR thermography.

  4. Flame Imaging System

    NASA Technical Reports Server (NTRS)

    Barnes, Heidi L. (Inventor); Smith, Harvey S. (Inventor)

    1998-01-01

    A system for imaging a flame and the background scene is discussed. The flame imaging system consists of two charge-coupled device (CCD) cameras. One camera uses an 800 nm long-pass filter, which during overcast conditions blocks enough background light that the hydrogen flame is brighter than the background; the second CCD camera uses a 1100 nm long-pass filter, which blocks the solar background in full sunshine so that the hydrogen flame is brighter than the solar background. Two electronic viewfinders convert the camera signals into a visible image, and the operator selects the appropriately filtered camera for the current light conditions. In addition, a narrow-band-pass-filtered InGaAs sensor at 1360 nm triggers an audible alarm and a flashing LED if it detects a flame, providing additional flame detection so the operator does not overlook a small flame.

  5. Color constancy by characterization of illumination chromaticity

    NASA Astrophysics Data System (ADS)

    Nikkanen, Jarno T.

    2011-05-01

    Computational color constancy algorithms play a key role in achieving desired color reproduction in digital cameras. Failure to estimate illumination chromaticity correctly results in an incorrect overall color cast in the image that is easily detected by human observers. A new algorithm is presented for computational color constancy. Its low computational complexity and low memory requirements make the algorithm suitable for resource-limited camera devices, such as consumer digital cameras and camera phones. The algorithm relies on a characterization of the range of possible illumination chromaticities in terms of camera sensor response. The fact that only the illumination chromaticity is characterized, instead of the full color gamut, for example, increases robustness against variations in sensor characteristics and against failure of the diagonal model of illumination change. Multiple databases are used to demonstrate the good performance of the algorithm in comparison with state-of-the-art color constancy algorithms.
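
The diagonal model the abstract relies on can be illustrated with a simpler estimator: derive the illumination chromaticity (here by the gray-world assumption, a stand-in for the paper's characterization-based method), optionally clip it to a characterized range, and correct the image with per-channel von Kries gains:

```python
import numpy as np

def grayworld_gains(img, gamut=None):
    """Estimate illumination chromaticity via the gray-world assumption
    and return per-channel diagonal-model (von Kries) gains normalized
    so the green channel is unchanged. `gamut`, if given, is a pair of
    (lower, upper) chromaticity bounds to clip the estimate to."""
    mean = img.reshape(-1, 3).mean(axis=0)
    chrom = mean / mean.sum()          # (r, g, b) chromaticity estimate
    if gamut is not None:
        chrom = np.clip(chrom, gamut[0], gamut[1])
    return chrom[1] / chrom            # gains: divide out the cast

# A statistically gray scene under a reddish illuminant.
rng = np.random.default_rng(2)
scene = rng.uniform(0.2, 0.8, size=(64, 64, 3))
img = scene * np.array([1.5, 1.0, 0.7])
corrected = img * grayworld_gains(img)
print(corrected.reshape(-1, 3).mean(axis=0))  # three equal channel means
```

Clipping the estimate to a characterized chromaticity range is what keeps such an estimator from failing on scenes that are not actually gray on average.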

  6. Issues in implementing services for a wireless web-enabled digital camera

    NASA Astrophysics Data System (ADS)

    Venkataraman, Shyam; Sampat, Nitin; Fisher, Yoram; Canosa, John; Noel, Nicholas

    2001-05-01

    The competition in the exploding digital photography market has caused vendors to explore new ways to increase their return on investment. A common view among industry analysts is that increasingly it will be services provided by these cameras, and not the cameras themselves, that will provide the revenue stream. These services will be coupled to e-appliance-based communities. In addition, the rapidly increasing need to upload images to the Internet for photo-finishing services as well as the need to download software upgrades to the camera is driving many camera OEMs to evaluate the benefits of using the wireless web to extend their enterprise systems. Currently, creating a viable e-appliance such as a digital camera coupled with a wireless web service requires more than just a competency in product development. This paper will evaluate the system implications in the deployment of recurring revenue services and enterprise connectivity of a wireless, web-enabled digital camera. These include, among other things, an architectural design approach for services such as device management, synchronization, billing, connectivity, security, etc. Such an evaluation will assist, we hope, anyone designing or connecting a digital camera to the enterprise systems.

  7. Smartphone, tablet computer and e-reader use by people with vision impairment.

    PubMed

    Crossland, Michael D; Silva, Rui S; Macedo, Antonio F

    2014-09-01

    Consumer electronic devices such as smartphones, tablet computers, and e-book readers have become far more widely used in recent years. Many of these devices contain accessibility features such as large print and speech. Anecdotal experience suggests people with vision impairment frequently make use of these systems. Here we survey people with self-identified vision impairment to determine their use of this equipment. An internet-based survey was advertised to people with vision impairment by word of mouth, social media, and online. Respondents were asked demographic information, what devices they owned, what they used these devices for, and what accessibility features they used. One hundred and thirty-two complete responses were received. Twenty-six percent of the sample reported that they had no vision and the remainder reported they had low vision. One hundred and seven people (81%) reported using a smartphone. Those with no vision were as likely to use a smartphone or tablet as those with low vision. Speech was found useful by 59% of smartphone users. Fifty-one percent of smartphone owners used the camera and screen as a magnifier. Forty-eight percent of the sample used a tablet computer, and 17% used an e-book reader. The most frequently cited reason for not using these devices included cost and lack of interest. Smartphones, tablet computers, and e-book readers can be used by people with vision impairment. Speech is used by people with low vision as well as those with no vision. Many of our (self-selected) group used their smartphone camera and screen as a magnifier, and others used the camera flash as a spotlight. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  8. Real-time localization of mobile device by filtering method for sensor fusion

    NASA Astrophysics Data System (ADS)

    Fuse, Takashi; Nagara, Keita

    2017-06-01

    Most applications on mobile devices require self-localization of the device. Because GPS cannot be used in indoor environments, the positions of mobile devices are estimated autonomously using an IMU; since low-cost IMUs have limited accuracy, indoor self-localization remains challenging. Image-based self-localization methods have also been developed, and their accuracy is improving. This paper develops a self-localization method that works without GPS in indoor environments by simultaneously integrating the sensors available on mobile devices, such as the IMU and camera. The proposed method consists of observation, forecasting and filtering steps. The position and velocity of the mobile device are defined as the state vector. Observations correspond to data from the IMU and camera (the observation vector), forecasting to a model of the device's motion (the system model), and filtering to a tracking method based on inertial surveying together with the coplanarity condition and an inverse depth parameterization (the observation model). Positions of the tracked device are first predicted by the system model, which assumes linear motion (forecasting step). The predicted positions are then refined against the new observation data according to their likelihood (filtering step); this optimization corresponds to maximum a posteriori estimation. A particle filter is used to carry out the forecasting and filtering steps. The proposed method is applied to data acquired with mobile devices in an indoor environment, and the experiments confirm its high performance.
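    The forecasting and filtering steps described above follow the standard particle-filter cycle. A minimal one-dimensional sketch, with a hypothetical linear motion model and Gaussian observation likelihood standing in for the paper's IMU/camera models:

```python
import math
import random

def particle_filter_step(particles, observation, motion_noise=0.05, obs_noise=0.1):
    # Forecast: propagate each (position, velocity) particle
    # through a linear motion model with process noise.
    predicted = [(p + v + random.gauss(0.0, motion_noise), v)
                 for p, v in particles]
    # Filter: weight each particle by the Gaussian likelihood
    # of the new observation given its predicted position.
    weights = [math.exp(-(p - observation) ** 2 / (2 * obs_noise ** 2))
               for p, _ in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample in proportion to the weights.
    return random.choices(predicted, weights=weights, k=len(particles))
```

    Repeating this step as each IMU/camera observation arrives yields the tracked trajectory; the noise parameters here are illustrative, not values from the paper.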

  9. Recognizable-image selection for fingerprint recognition with a mobile-device camera.

    PubMed

    Lee, Dongjae; Choi, Kyoungtaek; Choi, Heeseung; Kim, Jaihie

    2008-02-01

    This paper proposes a recognizable-image selection algorithm for fingerprint-verification systems that use a camera embedded in a mobile device. A recognizable image is defined as a fingerprint image that includes characteristics sufficient to discriminate an individual from other people. While general camera systems obtain focused images by using various gradient measures to estimate high-frequency components, mobile cameras cannot acquire recognizable images in the same way, because the obtained images may not be adequate for fingerprint recognition even if they are properly focused. A recognizable image has to meet the following two conditions. First, the valid region of a recognizable image must be sufficiently large compared with that of nonrecognizable images; here, a valid region is a well-focused part in which ridges are clearly distinguishable from valleys. In order to select valid regions, this paper proposes a new focus-measurement algorithm using the second partial derivatives, together with a quality estimation utilizing the coherence and symmetry of the gradient distribution. Second, the rolling and pitching angles of the finger relative to the camera plane must be within certain limits. The position of a core point and the contour of the finger are used to estimate the degrees of rolling and pitching. Experimental results show that the proposed method selects valid regions and estimates the degrees of rolling and pitching properly. In addition, fingerprint-verification performance is improved by detecting the recognizable images.
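    The second-derivative focus measurement mentioned above can be illustrated with a generic sum-of-absolute-second-derivatives score. This is a common focus measure of that family and a sketch only, not the authors' exact algorithm:

```python
def modified_laplacian_focus(image):
    """Sum of absolute second partial derivatives over the image.

    image: 2D list of grayscale values. A sharper (well-focused)
    region yields a larger score; a flat region scores zero.
    """
    h, w = len(image), len(image[0])
    score = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Discrete second derivatives along x and y.
            d2x = image[y][x - 1] - 2 * image[y][x] + image[y][x + 1]
            d2y = image[y - 1][x] - 2 * image[y][x] + image[y + 1][x]
            score += abs(d2x) + abs(d2y)
    return score
```

    Scoring candidate regions this way and keeping only those above a threshold approximates the valid-region selection step.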

  10. Multi-MGy Radiation Hardened Camera for Nuclear Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Girard, Sylvain; Boukenter, Aziz; Ouerdane, Youcef

    There is increasing interest in developing cameras for surveillance systems that monitor nuclear facilities and nuclear waste storage sites. In particular, the increased safety requirements for today's and the next generation of nuclear facilities, following the Fukushima Daiichi disaster, have to be considered. For some applications, radiation tolerance must extend to doses in the MGy(SiO{sub 2}) range, whereas the most tolerant commercial or prototype products based on solid-state image sensors withstand doses only up to a few kGy. The objective of this work is to present the radiation-hardening strategy developed by our research groups to enhance the tolerance of the various subparts of these imaging systems to ionizing radiation by working simultaneously at the component and system design levels. Developing a radiation-hardened camera requires combining several hardening strategies. In our case, we decided not to use the simplest one, the shielding approach: it is efficient, but it limits camera miniaturization and is not compatible with future integration into remote-handling or robotic systems. A hardening-by-component strategy therefore appears mandatory to avoid the failure of one of the camera subparts at doses lower than the MGy. Concerning the image sensor itself, the chosen technology is a CMOS Image Sensor (CIS) designed by the ISAE team, with custom pixel designs that mitigate the total ionizing dose (TID) effects occurring well below the MGy range in classical image sensors (e.g., Charge Coupled Devices (CCD), Charge Injection Devices (CID) and classical Active Pixel Sensors (APS)), such as complete loss of functionality, dark current increase and gain drop. At the conference we will present a comparative study of the radiation responses of these hardened pixels and conventional ones, demonstrating the efficiency of the choices made.
    The targeted strategy for developing the complete radiation-hard camera electronics will also be presented. Another important element of the camera is the optical system that transports the image from the scene to the image sensor. This arrangement of glass-based lenses is affected by radiation through two mechanisms: radiation-induced absorption and radiation-induced refractive index changes. The first limits the signal-to-noise ratio of the image, whereas the second directly affects the resolution of the camera. We will present a coupled simulation/experiment study of these effects for various commercial glasses, together with a vulnerability study of typical optical systems at MGy doses. The last very important part of the camera is the illumination system, which can be based on various emitting-device technologies such as LEDs, SLEDs or lasers; the most promising solutions for high radiation doses will be presented at the conference. In addition to this hardening-by-component approach, the global radiation tolerance of the camera can be drastically improved by working at the system level, combining innovative approaches, e.g., for the optical and illumination systems. We will present the approach developed to extend the camera lifetime up to the MGy dose range. (authors)

  11. An integrated port camera and display system for laparoscopy.

    PubMed

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  12. TASS - The Amateur Sky Survey

    NASA Astrophysics Data System (ADS)

    Droege, T. F.; Albertson, C.; Gombert, G.; Gutzwiller, M.; Molhant, N. W.; Johnson, H.; Skvarc, J.; Wickersham, R. J.; Richmond, M. W.; Rybski, P.; Henden, A.; Beser, N.; Pittinger, M.; Kluga, B.

    1997-05-01

    As a non-astronomer watching Shoemaker-Levy 9 crash into Jupiter through postings on sci.astro, it occurred to me that it might be fun to build a comet-finding machine. After wild speculations on how such a device might be built - I considered a 26" x 40" Fresnel lens and a string of PIN diodes - postings to sci.astro brought me down to earth. I quickly made contact with both professionals and amateurs and found that there was interesting science to be done with an all-sky survey. After several prototype drift scan cameras were built using various CCDs, I determined the real problem was software. How does one get the software written for an all-sky survey? Willie Sutton could tell you: "Go where the programmers are." Our strategy has been to build a bunch of drift scan cameras and just give them away (without software) to programmers found on the Internet. This author reports more success by this technique than when he had a business and hired and paid programmers at a cost of a million or so a year. To date, 22 drift scan cameras have been constructed. Most of these are operated as triplets spaced 15 degrees apart in Right Ascension and with I, V, I filters. The cameras use 135 mm focal length, f/2.8 camera lenses for a plate scale of 14 arc seconds per pixel and reach magnitude 13. With 512 pixels across the drift scan direction and running through the night, a triplet will collect 200 Mb of data on three overlapping areas of 3 x 120 degrees each. To date four of the triplets and one single have taken data. Production has started on 25 second-generation cameras using 2k x 2k devices and a barn door mount.
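    As a sanity check on the quoted optics, the small-angle plate-scale relation can be sketched as follows; the ~9 µm pixel pitch used below is inferred from the stated 14 arcsec/pixel and 135 mm focal length, not given in the record:

```python
def plate_scale_arcsec_per_pixel(pixel_pitch_um, focal_length_mm):
    # Small-angle approximation: 1 radian = 206265 arcseconds,
    # so scale ["/px] = 206.265 * pitch [um] / focal length [mm].
    return 206.265 * pixel_pitch_um / focal_length_mm
```

    With a 9 µm pitch and a 135 mm lens this gives about 13.8 arcsec per pixel, consistent with the quoted 14.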

  13. Image Capture with Synchronized Multiple-Cameras for Extraction of Accurate Geometries

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Delacourt, T.; Boutry, C.

    2016-06-01

    This paper presents a project on recording and modelling tunnels, traffic circles and roads from multiple sensors. The aim is the representation and accurate 3D modelling of a selection of road infrastructures as dense point clouds in order to extract profiles and metrics from them. These models will be used for the sizing of infrastructures in order to simulate routes for exceptional convoy trucks. The objective is to extract directly from the point clouds the heights, widths and lengths of bridges and tunnels and the diameter of traffic circles, and to highlight potential obstacles for a convoy. Light, mobile and fast acquisition approaches based on images and videos from a set of synchronized sensors have been tested in order to obtain usable point clouds. The presented solution is based on a combination of multiple low-cost cameras mounted on an on-board device allowing dynamic captures. The experimental device, containing GoPro Hero4 cameras, has been set up and used for tests in static and mobile acquisitions. Various configurations using multiple synchronized cameras have been tested and are discussed in order to identify the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics is a major factor in the creation of accurate dense point clouds, and in order to reach the best quality available from such cameras, the internal parameters of the cameras' fisheye lenses were estimated. Reference measurements were also made with a 3D terrestrial laser scanner (Faro Focus 3D) to allow accuracy assessment.

  14. Photogrammetric Applications of Immersive Video Cameras

    NASA Astrophysics Data System (ADS)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene with a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This offset causes problems when stitching together the individual video frames from the particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 camera and a GPS device is discussed. The number of panoramas is far higher than needed for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92,000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to measure the area of outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurement. The second approach is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record an interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft PhotoScan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  15. Uav Photogrammetric Solution Using a Raspberry pi Camera Module and Smart Devices: Test and Results

    NASA Astrophysics Data System (ADS)

    Piras, M.; Grasso, N.; Jabbar, A. Abdul

    2017-08-01

    Nowadays, smart technologies are an important part of our actions and lives, in both indoor and outdoor environments. Several smart devices are very easy to set up, can be integrated and embedded with other sensors, and have a very low cost. A Raspberry Pi can host an internal camera, the Raspberry Pi Camera Module, in both RGB and NIR versions. The advantages of this system are its limited cost (< 60 euro), light weight, and simplicity of use and integration. This paper describes research in which a Raspberry Pi with the Camera Module was installed on a UAV hexacopter based on the ArduCopter system, with the purpose of collecting pictures for photogrammetry. First, the system was tested to verify the performance of the RPi camera in terms of frames per second/resolution and its power requirements. Moreover, a GNSS receiver (u-blox M8T) was installed and connected to the Raspberry platform in order to collect real-time positions and raw data, for data processing and to define the time reference. The IMU was also tested to assess the impact of UAV rotor noise on sensors such as the accelerometer, gyroscope and magnetometer. A comparison of the accuracy achieved on check points of the point clouds obtained by the camera is also reported, in order to analyse in more depth the main discrepancies in the generated point cloud and the potential of the proposed approach. In this contribution, the assembly of the system is described, and the acquired dataset and the results obtained are analysed.

  16. Cross-Platform Mobile Application Development: A Pattern-Based Approach

    DTIC Science & Technology

    2012-03-01

    Additionally, developers should be aware of different hardware capabilities such as external SD cards and forward facing cameras. Finally, each...applications are written. ... iTunes library, allowing the user to update software and manage content on each device. However, in iOS5, the PC Free feature removes this constraint

  17. High Information Capacity Quantum Imaging

    DTIC Science & Technology

    2014-09-19

    single-pixel camera [41, 75]. An object is imaged onto a Digital Micromirror Device (DMD), a 2D binary array of individually-addressable mirrors that...reflect light either to a single detector or a dump. Rows of the sensing matrix A consist of random, binary patterns placed sequentially on the DMD...The single-pixel camera concept naturally adapts to imaging correlations by adding a second detector. Consider placing separate DMDs in the near-field
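    The sensing-matrix acquisition described in this record can be sketched as a simulation: each measurement sums the scene over a random binary DMD pattern. The pattern count and scene values are illustrative, and the reconstruction step (e.g., compressed sensing) is not shown:

```python
import random

def single_pixel_measurements(scene, num_patterns, seed=0):
    """Simulate single-pixel camera acquisition.

    scene: flat list of pixel intensities. Each measurement is the
    inner product of the scene with a random 0/1 pattern (one row of
    the sensing matrix A). Returns (patterns, measurements).
    """
    rng = random.Random(seed)
    patterns, measurements = [], []
    for _ in range(num_patterns):
        a = [rng.randint(0, 1) for _ in scene]  # row of A on the DMD
        patterns.append(a)
        measurements.append(sum(ai * xi for ai, xi in zip(a, scene)))
    return patterns, measurements
```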

  18. High signal-to-noise-ratio electro-optical terahertz imaging system based on an optical demodulating detector array.

    PubMed

    Spickermann, Gunnar; Friederich, Fabian; Roskos, Hartmut G; Bolívar, Peter Haring

    2009-11-01

    We present a 64x48 pixel 2D electro-optical terahertz (THz) imaging system using a photonic mixing device time-of-flight camera as an optical demodulating detector array. The combination of electro-optic detection with a time-of-flight camera increases sensitivity drastically, enabling the use of a nonamplified laser source for high-resolution real-time THz electro-optic imaging.

  19. An Architecture to Support Wearables in Education and Wellbeing

    ERIC Educational Resources Information Center

    Luis-Ferreira, Fernando; Artifice, Andreia; McManus, Gary; Sarraipa, João

    2017-01-01

    Technological devices help extending a person's sensory experience of the environment. From sensors to cameras, devices currently use embedded systems that can be used for the main goal they were designed but they can also be used for other objectives without additional costs of material or service subscription. Emotional assessment is a useful…

  20. Innovative Use of Smartphones for a Sound Resonance Tube Experiment

    ERIC Educational Resources Information Center

    Tho, Siew Wei; Yeung, Yau Yuen

    2014-01-01

    A Smartphone is not only a mobile device that is used for communication but is also integrated with a personal digital assistant (PDA) and other technological capabilities such as built-in acceleration, magnetic and light sensors, microphone, camera and Global Positioning System (GPS) unit. This handheld device has become very popular with the…

  1. Watching the TV Audience.

    ERIC Educational Resources Information Center

    Collett, Peter

    Data were collected for this study of the relationship between television watching and family life via a recording device (C-Box) consisting of a television set and a video camera. Designed for the study, this device was installed in 20 homes for one week to record the viewing area in front of the television set together with information on…

  2. A Haptic Glove as a Tactile-Vision Sensory Substitution for Wayfinding.

    ERIC Educational Resources Information Center

    Zelek, John S.; Bromley, Sam; Asmar, Daniel; Thompson, David

    2003-01-01

    A device that relays navigational information using a portable tactile glove and a wearable computer and camera system was tested with nine adults with visual impairments. Paths traversed by subjects negotiating an obstacle course were not qualitatively different from paths produced with existing wayfinding devices and hitting probabilities were…

  3. Adapting smartphones for low-cost optical medical imaging

    NASA Astrophysics Data System (ADS)

    Pratavieira, Sebastião; Vollet-Filho, José D.; Carbinatto, Fernanda M.; Blanco, Kate; Inada, Natalia M.; Bagnato, Vanderlei S.; Kurachi, Cristina

    2015-06-01

    Optical images have been used in several medical situations to improve the diagnosis of lesions or to monitor treatments. However, most systems employ expensive scientific (CCD or CMOS) cameras and need computers to display and save the images, usually resulting in a high final cost for the system. Additionally, operating this sort of apparatus tends to become more complex, requiring increasingly specialized technical knowledge from the operator. Currently, the number of people using smartphone-like devices with built-in high-quality cameras is increasing, which might allow using such devices as efficient, lower-cost, portable imaging systems for medical applications. Thus, we aim to develop methods of adapting those devices to optical medical imaging techniques, such as fluorescence imaging. In particular, smartphone covers were adapted to connect a smartphone-like device to widefield fluorescence imaging systems. These systems were used to detect lesions in different tissues, such as cervix and mouth/throat mucosa, and to monitor ALA-induced protoporphyrin IX formation for photodynamic treatment of cervical intraepithelial neoplasia. This approach may contribute significantly to low-cost, portable and simple clinical optical image collection.

  4. Real-Time View Correction for Mobile Devices.

    PubMed

    Schops, Thomas; Oswald, Martin R; Speciale, Pablo; Yang, Shuoran; Pollefeys, Marc

    2017-11-01

    We present a real-time method for rendering novel virtual camera views from given RGB-D (color and depth) data of a different viewpoint. Missing color and depth information due to incomplete input or disocclusions is efficiently inpainted in a temporally consistent way. The inpainting takes the location of strong image gradients into account as likely depth discontinuities. We present our method in the context of a view correction system for mobile devices, and discuss how to obtain a screen-camera calibration and options for acquiring depth input. Our method has use cases in both augmented and virtual reality applications. We demonstrate the speed of our system and the visual quality of its results in multiple experiments in the paper as well as in the supplementary video.

  5. Nekton Interaction Monitoring System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-03-15

    The software provides a real-time processing system for sonar to detect and track animals and to extract water-column biomass statistics, in order to facilitate continuous monitoring of an underwater environment. The Nekton Interaction Monitoring System (NIMS) extracts and archives tracking and backscatter statistics from a real-time stream of data from a sonar device. NIMS also sends real-time tracking messages over the network that can be used by other systems to generate other metrics or to trigger instruments such as an optical video camera. A web-based user interface provides remote monitoring and control. NIMS currently supports three popular sonar devices: the M3 multi-beam sonar (Kongsberg), the EK60 split-beam echo-sounder (Simrad) and the BlueView acoustic camera (Teledyne).

  6. Robot Towed Shortwave Infrared Camera for Specific Surface Area Retrieval of Surface Snow

    NASA Astrophysics Data System (ADS)

    Elliott, J.; Lines, A.; Ray, L.; Albert, M. R.

    2017-12-01

    Optical grain size and specific surface area (SSA) are key parameters for measuring the atmospheric interactions of snow, as well as for tracking metamorphism and ground-truthing remote sensing data. We describe a device using a shortwave infrared camera with changeable optical bandpass filters (centered at 1300 nm and 1550 nm) that can quickly measure the average SSA over an area of 0.25 m^2. The device and method are compared with calculations made from measurements taken with a field spectral radiometer. The instrument is designed to be towed by a small autonomous ground vehicle, and therefore rides above the snow surface on ultra-high-molecular-weight polyethylene (UHMW) skis.

  7. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
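    The common-pixel elimination step described above can be sketched as a simple frame difference: subtract the pre-illumination frame from the illuminated frame and keep the pixel whose brightness rose the most. The threshold and brightest-pixel heuristic are illustrative simplifications of the patent's processing:

```python
def isolate_laser_spot(before, after, threshold=30):
    """Locate the laser spot by differencing two grayscale frames.

    before/after: 2D lists of pixel values taken without and with the
    laser on. Returns the (row, col) of the pixel with the largest
    brightness increase above `threshold`, or None if no pixel
    changed enough.
    """
    best, best_val = None, threshold
    for r, (row_b, row_a) in enumerate(zip(before, after)):
        for c, (pb, pa) in enumerate(zip(row_b, row_a)):
            diff = pa - pb  # brightness increase at this pixel
            if diff > best_val:
                best, best_val = (r, c), diff
    return best
```

    The disparity between this detected spot and the fixed reference point would then feed the ranging analysis.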

  8. Use and validation of mirrorless digital single light reflex camera for recording of vitreoretinal surgeries in high definition

    PubMed Central

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    Purpose: The purpose of this study is to describe the use of a commercial digital single light reflex (DSLR) camera for vitreoretinal surgery recording and to compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited, and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos, except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching. PMID:29283133
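    A noninferiority check on mean ratings, of the kind this study reports, can be sketched as follows. The margin, critical value, and sample scores below are illustrative assumptions, not the paper's actual data or statistical procedure:

```python
import math
from statistics import mean, stdev

def noninferior(test_scores, ref_scores, margin=1.0, z=1.645):
    """One-sided noninferiority check on the mean rating difference.

    Declares the test camera noninferior if the lower bound of an
    approximate 95% one-sided normal CI for (test - reference) mean
    score lies above -margin.
    """
    d = mean(test_scores) - mean(ref_scores)
    # Standard error of the difference of two independent means.
    se = math.sqrt(stdev(test_scores) ** 2 / len(test_scores)
                   + stdev(ref_scores) ** 2 / len(ref_scores))
    return d - z * se > -margin
```

    With a margin of one rating point, a test camera scoring on par with (or above) the reference passes, while one rated several points lower fails.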

  9. Use and validation of mirrorless digital single light reflex camera for recording of vitreoretinal surgeries in high definition.

    PubMed

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    The purpose of this study is to describe the use of a commercial digital single light reflex (DSLR) camera for vitreoretinal surgery recording and to compare it to a standard 3-chip charge-coupled device (CCD) camera. Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited, and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos, except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching.

  10. Solid state television camera (CCD-buried channel)

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The development of an all solid state television camera, which uses a buried channel charge coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array is utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control (i.e., ALC and AGC) techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

  11. Solid state television camera (CCD-buried channel), revision 1

    NASA Technical Reports Server (NTRS)

    1977-01-01

    An all solid state television camera was designed which uses a buried channel charge coupled device (CCD) as the image sensor. A 380 x 488 element CCD array is utilized to ensure compatibility with 525-line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (1) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (2) techniques for the elimination or suppression of CCD blemish effects, and (3) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

  12. Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera

    PubMed Central

    Wilkes, Thomas C.; McGonigle, Andrew J. S.; Pering, Tom D.; Taggart, Angus J.; White, Benjamin S.; Bryant, Robert G.; Willmott, Jon R.

    2016-01-01

    Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements. PMID:27782054

  13. Solid state, CCD-buried channel, television camera study and design

    NASA Technical Reports Server (NTRS)

    Hoagland, K. A.; Balopole, H.

    1976-01-01

    An investigation of an all solid state television camera design, which uses a buried channel charge-coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array was utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a design which addresses the program requirements for a deliverable solid state TV camera.

  14. Two-Way Communication Using RFID Equipment and Techniques

    NASA Technical Reports Server (NTRS)

    Jedry, Thomas; Archer, Eric

    2007-01-01

Equipment and techniques used in radio-frequency identification (RFID) would be extended, according to a proposal, to enable short-range, two-way communication between electronic products and host computers. In one example of a typical contemplated application, the purpose of the short-range radio communication would be to transfer image data from a user's digital still or video camera to the user's computer for recording and/or processing. The concept is also applicable to consumer electronic products other than digital cameras (for example, cellular telephones, portable computers, or motion sensors in alarm systems), and to a variety of industrial and scientific sensors and other devices that generate data. Until now, RFID has been used to exchange small amounts of mostly static information for identifying and tracking assets. Information pertaining to an asset (typically, an object in inventory to be tracked) is contained in miniature electronic circuitry in an RFID tag attached to the object. Conventional RFID equipment and techniques enable a host computer to read data from and, in some cases, to write data to, RFID tags, but they do not enable such additional functions as sending commands to, or retrieving possibly large quantities of dynamic data from, RFID-tagged devices. The proposal would enable such additional functions. The figure schematically depicts an implementation of the proposal for a sensory device (e.g., a digital camera) that includes circuitry that converts sensory information to digital data. In addition to the basic sensory device, there would be a controller and a memory that would store the sensor data and/or data from the controller. The device would also be equipped with a conventional RFID chipset and antenna, which would communicate with a host computer via an RFID reader.
The controller would function partly as a communication interface, implementing two-way communication protocols at all levels (including RFID if needed) between the sensory device and the memory and between the host computer and the memory. The controller would perform power V

  15. Microgravity

    NASA Image and Video Library

    1991-04-03

    The USML-1 Glovebox (GBX) is a multi-user facility supporting 16 experiments in fluid dynamics, combustion sciences, crystal growth, and technology demonstration. The GBX has an enclosed working space which minimizes the contamination risks to both Spacelab and experiment samples. The GBX supports four charge-coupled device (CCD) cameras (two of which may be operated simultaneously) with three black-and-white and three color camera CCD heads available. The GBX also has a backlight panel, a 35 mm camera, and a stereomicroscope that offers high-magnification viewing of experiment samples. Video data can also be downlinked in real-time. The GBX also provides electrical power for experiment hardware, a time-temperature display, and cleaning supplies.

  16. Microgravity

    NASA Image and Video Library

    1995-08-29

    The USML-1 Glovebox (GBX) is a multi-user facility supporting 16 experiments in fluid dynamics, combustion sciences, crystal growth, and technology demonstration. The GBX has an enclosed working space which minimizes the contamination risks to both Spacelab and experiment samples. The GBX supports four charge-coupled device (CCD) cameras (two of which may be operated simultaneously) with three black-and-white and three color camera CCD heads available. The GBX also has a backlight panel, a 35 mm camera, and a stereomicroscope that offers high-magnification viewing of experiment samples. Video data can also be downlinked in real-time. The GBX also provides electrical power for experiment hardware, a time-temperature display, and cleaning supplies.

  17. Design and evaluation of a filter spectrometer concept for facsimile cameras

    NASA Technical Reports Server (NTRS)

    Kelly, W. L., IV; Jobson, D. J.; Rowland, C. W.

    1974-01-01

    The facsimile camera is an optical-mechanical scanning device which was selected as the imaging system for the Viking '75 lander missions to Mars. A concept which uses an interference filter-photosensor array to integrate a spectrometric capability with the basic imagery function of this camera was proposed for possible application to future missions. This paper is concerned with the design and evaluation of critical electronic circuits and components that are required to implement this concept. The feasibility of obtaining spectroradiometric data is demonstrated, and the performance of a laboratory model is described in terms of spectral range, angular and spectral resolution, and noise-equivalent radiance.

  18. A small field of view camera for hybrid gamma and optical imaging

    NASA Astrophysics Data System (ADS)

    Lees, J. E.; Bugby, S. L.; Bhatia, B. S.; Jambi, L. K.; Alqahtani, M. S.; McKnight, W. R.; Ng, A. H.; Perkins, A. C.

    2014-12-01

    The development of compact low profile gamma-ray detectors has allowed the production of small field of view, hand held imaging devices for use at the patient bedside and in operating theatres. The combination of an optical and a gamma camera, in a co-aligned configuration, offers high spatial resolution multi-modal imaging giving a superimposed scintigraphic and optical image. This innovative introduction of hybrid imaging offers new possibilities for assisting surgeons in localising the site of uptake in procedures such as sentinel node detection. Recent improvements to the camera system along with results of phantom and clinical imaging are reported.

  19. Enhanced Virtual Presence for Immersive Visualization of Complex Situations for Mission Rehearsal

    DTIC Science & Technology

    1997-06-01

taken. We propose to join both these technologies together in a registration device. The registration device would be small and portable and easily...registering the panning of the camera (or other sensing device) and also stitch together the shots to automatically generate panoramic files necessary to...database and as the base information changes each of the linked drawings is automatically updated. Filename Format: A specific naming convention should be

  20. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure having diffractive mirrors and an optical resonance cavity which maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated to realize low resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The novel optical shutter device enables capture of a full HD depth image with depth accuracy at the mm scale, the largest depth-image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent-sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype, and image test results.
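The Time-of-Flight principle behind the abstract can be sketched numerically: depth follows from the phase shift of the 20 MHz modulation between emitted and received light. A minimal Python sketch, assuming the standard four-bucket demodulation (the paper's actual shutter demodulation scheme is not given in the abstract, so the function and sample values are illustrative):

```python
import numpy as np

C = 299_792_458.0      # speed of light, m/s
F_MOD = 20e6           # modulation frequency from the abstract, Hz

def tof_depth(i0, i90, i180, i270):
    """Estimate depth from four phase-shifted intensity samples
    (generic 4-bucket ToF demodulation; illustrative only)."""
    phase = np.arctan2(i270 - i90, i0 - i180) % (2 * np.pi)
    return C * phase / (4 * np.pi * F_MOD)

# A quarter-cycle phase shift corresponds to c / (8 f) ~ 1.87 m
print(tof_depth(100, 50, 100, 150))
# Unambiguous range at 20 MHz is c / (2 f) ~ 7.5 m
print(C / (2 * F_MOD))
```

The 7.5 m unambiguous range illustrates why the modulation frequency is a design trade-off between depth precision and working distance.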

  1. Indirectly Funded Research and Exploratory Development at the Applied Physics Laboratory, Fiscal Year 1978.

    DTIC Science & Technology

    1979-12-01

used to reduce costs). The orbital data from the prototype ion composition telescope will not only be of great scientific interest, providing for...active device whose transfer function may be almost arbitrarily defined, and cost and production trends permit contemplation of networks containing...developing solid-state television camera systems based on CCD imagers. RCA hopes to produce a $500 color camera for consumer use. Fairchild and Texas

  2. Detection of non-classical space-time correlations with a novel type of single-photon camera.

    PubMed

    Just, Felix; Filipenko, Mykhaylo; Cavanna, Andrea; Michel, Thilo; Gleixner, Thomas; Taheri, Michael; Vallerga, John; Campbell, Michael; Tick, Timo; Anton, Gisela; Chekhova, Maria V; Leuchs, Gerd

    2014-07-14

During the last decades, multi-pixel detectors have been developed that are capable of registering single photons. The newly developed hybrid photon detector camera has the remarkable property of offering not only spatial but also temporal resolution. In this work, we apply this device to the detection of non-classical light from spontaneous parametric down-conversion and use two-photon correlations for the absolute calibration of its quantum efficiency.
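The absolute-calibration idea can be stated compactly: SPDC photons arrive in pairs, so the quantum efficiency of one detector is the ratio of coincidence counts to the partner detector's singles (the Klyshko method). A minimal sketch of the principle; the function name and the accidental-coincidence correction are illustrative, not the paper's exact analysis:

```python
def klyshko_efficiency(n_coinc, n_singles_partner, n_accidental=0.0):
    """Absolute quantum efficiency of detector A from photon-pair
    correlations: every photon seen by partner detector B has a twin
    headed at A, so eta_A = (coincidences - accidentals) / B singles."""
    return (n_coinc - n_accidental) / n_singles_partner

print(klyshko_efficiency(1200, 10000, 200))  # -> 0.1
```

No externally calibrated reference detector is needed, which is what makes the method "absolute".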

  3. Miniature wide field-of-view star trackers for spacecraft attitude sensing and navigation

    NASA Technical Reports Server (NTRS)

    Mccarty, William; Curtis, Eric; Hull, Anthony; Morgan, William

    1993-01-01

We introduce a family of miniature, wide field-of-view star trackers for low-cost, high-performance spacecraft attitude determination and navigation applications. These devices, derivatives of the WFOV Star Tracker Camera developed cooperatively by OCA Applied Optics and the Lawrence Livermore National Laboratory for the Brilliant Pebbles program, offer a suite of options addressing a wide range of spacecraft attitude measurement and control requirements. These sensors employ much wider fields than are customary (ranging between 20 and 60 degrees) to assure enough bright stars for quick and accurate attitude determinations without long integration intervals. The key benefits of this approach are light weight, low power, reduced data-processing loads, and high information carrier rates for wide ACS bandwidths. Devices described range from the proven OCA/LLNL WFOV Star Tracker Camera (a low-cost, space-qualified star-field imager utilizing the spacecraft's own computer for centroiding and position-finding), to a new autonomous subsystem design featuring dual-redundant cameras and completely self-contained star-field data processing, with output quaternion solutions accurate to 100 micro-rad, 3 sigma, for stand-alone applications.

  4. Fast imaging diagnostics on the C-2U advanced beam-driven field-reversed configuration device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granstedt, E. M., E-mail: egranstedt@trialphaenergy.com; Petrov, P.; Knapp, K.

    2016-11-15

The C-2U device employed neutral beam injection, end-biasing, and various particle fueling techniques to sustain a Field-Reversed Configuration (FRC) plasma. As part of the diagnostic suite, two fast imaging instruments with radial and nearly axial plasma views were developed using a common camera platform. To achieve the necessary viewing geometry, imaging lenses were mounted behind re-entrant viewports attached to welded bellows. During gettering, the vacuum optics were retracted and isolated behind a gate valve permitting their removal if cleaning was necessary. The axial view incorporated a stainless-steel mirror in a protective cap assembly attached to the vacuum-side of the viewport. For each system, a custom lens-based, high-throughput optical periscope was designed to relay the plasma image about half a meter to a high-speed camera. Each instrument also contained a remote-controlled filter wheel, set between shots to isolate a particular hydrogen or impurity emission line. The design of the camera platform, imaging performance, and sample data for each view is presented.

  5. Geolocating thermal binoculars based on a software defined camera core incorporating HOT MCT grown by MOVPE

    NASA Astrophysics Data System (ADS)

    Pillans, Luke; Harmer, Jack; Edwards, Tim; Richardson, Lee

    2016-05-01

Geolocation is the process of calculating a target position based on bearing and range relative to the known location of the observer. A high performance thermal imager with integrated geolocation functions is a powerful long range targeting device. Firefly is a software defined camera core incorporating a system-on-a-chip processor running the Android™ operating system. The processor has a range of industry standard serial interfaces which were used to interface to peripheral devices including a laser rangefinder and a digital magnetic compass. The core has built in Global Positioning System (GPS) which provides the third variable required for geolocation. The graphical capability of Firefly allowed flexibility in the design of the man-machine interface (MMI), so the finished system can give access to extensive functionality without appearing cumbersome or over-complicated to the user. This paper covers both the hardware and software design of the system, including how the camera core influenced the selection of peripheral hardware, and the MMI design process which incorporated user feedback at various stages.
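The geolocation computation described above is the classic spherical-earth "direct problem": project a target position from the observer's GPS fix, the compass bearing, and the rangefinder distance. A hedged Python sketch of that calculation, not Firefly's actual implementation (function name and earth-radius constant are assumptions):

```python
import math

def geolocate(lat, lon, bearing_deg, range_m, R=6_371_000.0):
    """Project a target position from observer latitude/longitude
    (degrees), compass bearing (degrees), and range (meters), using
    the spherical-earth destination-point formula."""
    lat1, lon1 = math.radians(lat), math.radians(lon)
    brg, d = math.radians(bearing_deg), range_m / R  # angular distance
    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# 1 km due north from the equator is roughly 0.009 degrees of latitude
print(geolocate(0.0, 0.0, 0.0, 1000.0))
```

At typical rangefinder distances the spherical approximation error is negligible compared with compass bearing error.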

  6. Fast imaging diagnostics on the C-2U advanced beam-driven field-reversed configuration device

    NASA Astrophysics Data System (ADS)

    Granstedt, E. M.; Petrov, P.; Knapp, K.; Cordero, M.; Patel, V.

    2016-11-01

    The C-2U device employed neutral beam injection, end-biasing, and various particle fueling techniques to sustain a Field-Reversed Configuration (FRC) plasma. As part of the diagnostic suite, two fast imaging instruments with radial and nearly axial plasma views were developed using a common camera platform. To achieve the necessary viewing geometry, imaging lenses were mounted behind re-entrant viewports attached to welded bellows. During gettering, the vacuum optics were retracted and isolated behind a gate valve permitting their removal if cleaning was necessary. The axial view incorporated a stainless-steel mirror in a protective cap assembly attached to the vacuum-side of the viewport. For each system, a custom lens-based, high-throughput optical periscope was designed to relay the plasma image about half a meter to a high-speed camera. Each instrument also contained a remote-controlled filter wheel, set between shots to isolate a particular hydrogen or impurity emission line. The design of the camera platform, imaging performance, and sample data for each view is presented.

  7. Increasing Electrochemiluminescence Intensity of a Wireless Electrode Array Chip by Thousands of Times Using a Diode for Sensitive Visual Detection by a Digital Camera.

    PubMed

    Qi, Liming; Xia, Yong; Qi, Wenjing; Gao, Wenyue; Wu, Fengxia; Xu, Guobao

    2016-01-19

A wireless electrochemiluminescence (ECL) electrode microarray chip, and a dramatic increase in ECL achieved by embedding a diode in an electromagnetic receiver coil, are reported for the first time. The newly designed device consists of a chip and a transmitter. The chip has an electromagnetic receiver coil, a mini-diode, and a gold electrode array. The mini-diode rectifies alternating current into direct current and thus enhances ECL intensities by a factor of 18,000, enabling sensitive visual detection using common cameras or smartphones as low-cost detectors. The detection limit of hydrogen peroxide using a digital camera is comparable to that using photomultiplier tube (PMT)-based detectors. Coupled with a PMT-based detector, the device can detect luminol with higher sensitivity, with a linear range from 10 nM to 1 mM. Because of its advantages, including high sensitivity, high throughput, low cost, high portability, and simplicity, the device is promising for point-of-care testing, drug screening, and high-throughput analysis.

  8. The High Definition Earth Viewing (HDEV) Payload

    NASA Technical Reports Server (NTRS)

    Muri, Paul; Runco, Susan; Fontanot, Carlos; Getteau, Chris

    2017-01-01

The High Definition Earth Viewing (HDEV) payload enables long-term experimentation with four commercial off-the-shelf (COTS) high-definition video cameras mounted on the exterior of the International Space Station. The payload enables testing of cameras in the space environment. The HDEV cameras transmit imagery continuously to an encoder that then sends the video signal via Ethernet through the space station for downlink. The encoder, cameras, and other electronics are enclosed in a box pressurized to approximately one atmosphere, containing dry nitrogen, to provide a level of protection to the electronics from the space environment. The encoded video format supports streaming live video of Earth for viewing online. Camera sensor types include charge-coupled device and complementary metal-oxide semiconductor. Received imagery data is analyzed on the ground to evaluate camera sensor performance. Since payload deployment, minimal degradation of imagery quality has been observed. The HDEV payload continues to operate by live streaming and analyzing imagery. Results from the experiment reduce risk in the selection of cameras that could be considered for future use on the International Space Station and other spacecraft. This paper discusses the payload development, end-to-end architecture, experiment operation, resulting image analysis, and future work.

  9. Quantifying Gold Nanoparticle Concentration in a Dietary Supplement Using Smartphone Colorimetry and Google Applications

    ERIC Educational Resources Information Center

    Campos, Antonio R.; Knutson, Cassandra M.; Knutson, Theodore R.; Mozzetti, Abbie R.; Haynes, Christy L.; Penn, R. Lee

    2016-01-01

    Spectrophotometry and colorimetry experiments are common in high school and college chemistry courses, and nanotechnology is increasingly common in every day products and new devices. Previous work has demonstrated that handheld camera devices can be used to quantify the concentration of a colored analyte in solution in place of traditional…
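The usual basis for smartphone colorimetry of this kind is the Beer-Lambert law: a pseudo-absorbance computed from a camera color channel relative to a blank scales linearly with analyte concentration. A generic sketch under that assumption (the article's exact procedure is truncated above, so the helper below is illustrative):

```python
import math

def absorbance(channel_value, blank_value):
    """Pseudo-absorbance from a camera color channel relative to a
    blank reference image: A = -log10(I / I0). With Beer-Lambert
    behavior, A is proportional to concentration, so a calibration
    line fit to known standards converts A to concentration."""
    return -math.log10(channel_value / blank_value)

# A channel reading one quarter of the blank gives A ~ 0.602
print(round(absorbance(50, 200), 3))  # -> 0.602
```

A calibration curve from standards of known concentration then turns each measured absorbance into a concentration estimate.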

  10. Technique for improving the quality of images from digital cameras using ink-jet printers and smoothed RGB transfer curves

    NASA Astrophysics Data System (ADS)

    Sampat, Nitin; Grim, John F.; O'Hara, James E.

    1998-04-01

The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor, not the printer, as the target output device. When users print images from a camera, they need to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and ink-jet printer combination. Using Adobe PhotoShop, we generated optimum red, green, and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors. The corrected image also exhibited an improved tonal scale and was visually more pleasing than images captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
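In practice, a transfer curve of this kind reduces to a per-channel lookup table applied to the image. A minimal sketch of applying such curves to an 8-bit RGB image; the helper name and the sample gamma curve are illustrative assumptions (the paper built its curves interactively in Adobe PhotoShop):

```python
import numpy as np

def apply_transfer_curves(image, r_curve, g_curve, b_curve):
    """Apply per-channel 256-entry lookup tables ('transfer curves')
    to an 8-bit RGB image of shape (H, W, 3)."""
    luts = [np.asarray(c, dtype=np.uint8) for c in (r_curve, g_curve, b_curve)]
    out = np.empty_like(image)
    for ch in range(3):
        out[..., ch] = luts[ch][image[..., ch]]  # LUT via integer indexing
    return out

# Example: a mild shadow-lifting gamma curve applied to all channels
x = np.arange(256)
curve = (255 * (x / 255) ** 0.9).astype(np.uint8)
img = np.full((2, 2, 3), 64, dtype=np.uint8)
print(apply_transfer_curves(img, curve, curve, curve)[0, 0])  # -> [73 73 73]
```

Raising dark values (64 to 73 here) is exactly the "more shadow detail" effect the abstract reports.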

  11. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    NASA Astrophysics Data System (ADS)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator considering multiple application needs and limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and their capabilities were explored in the real environment. Data prove that the camera can be used for taking long-exposure (10-100 ms) overview images of the plasma while sub-ms monitoring and even multi-camera correlated edge plasma turbulence measurements of smaller areas can be done in parallel. The latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.
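The on-camera ROI evaluation the abstract describes (minimum/maximum, mean compared to levels, driving the readout process) can be sketched in a few lines; the function names and the trigger rule below are illustrative assumptions, not EDICAM's firmware:

```python
import numpy as np

def roi_stats(frame, roi):
    """Min/max/mean over a Region of Interest (y0, y1, x0, x1) - the
    simple statistics the abstract says EDICAM computes on-camera."""
    y0, y1, x0, x1 = roi
    patch = frame[y0:y1, x0:x1]
    return patch.min(), patch.max(), patch.mean()

def event_trigger(frame, roi, level):
    """Fire when the ROI mean crosses a level - the kind of comparison
    that lets the camera change its readout mid-exposure."""
    _, _, mean = roi_stats(frame, roi)
    return mean > level

frame = np.zeros((1024, 1280), dtype=np.uint16)
frame[100:110, 200:210] = 4000   # a bright, filament-like event
print(event_trigger(frame, (100, 110, 200, 210), 1000))  # -> True
```

Evaluating only small ROIs is what makes sub-ms monitoring possible alongside long full-frame exposures.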

  12. Color reproduction software for a digital still camera

    NASA Astrophysics Data System (ADS)

    Lee, Bong S.; Park, Du-Sik; Nam, Byung D.

    1998-04-01

We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and color-matching between the two devices. The reproduction was performed in three stages: level processing, gamma correction, and color transformation. The image contrast was increased by the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for obtaining the unknown monitor gamma was also proposed. As a result, the level-processed values were adjusted by a look-up table created from the camera and monitor gamma corrections. For the camera's color transformation, a 3-by-3 or 3-by-4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of each test color sample. The various reproduced images, generated according to four illuminants for the camera and three color temperatures for the monitor, were displayed in a dialogue box implemented in our software. A user can easily choose the best reproduced image by comparing them.
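The color-transformation step the abstract describes amounts to a least-squares fit of a 3-by-3 matrix between gamma-corrected camera values and measured tristimulus values of test patches. A minimal sketch of that regression; the patch data and matrix values below are illustrative, not the paper's measurements:

```python
import numpy as np

def fit_color_matrix(gamma_corrected_rgb, measured_xyz):
    """Fit the 3x3 camera-to-XYZ matrix M (XYZ = M @ rgb) by
    least-squares regression over N test patches, given an (N, 3)
    array of gamma-corrected camera values and an (N, 3) array of
    measured tristimulus values."""
    X, *_ = np.linalg.lstsq(gamma_corrected_rgb, measured_xyz, rcond=None)
    return X.T

# Synthetic patches generated from a known (illustrative) matrix;
# the fit should recover it exactly.
A = np.array([[0.41, 0.36, 0.18],
              [0.21, 0.72, 0.07],
              [0.02, 0.12, 0.95]])
rgb = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 1.0]])
xyz = rgb @ A.T
M = fit_color_matrix(rgb, xyz)
print(np.round(M, 2))
```

With real, noisy patch measurements the regression is overdetermined, which is why many patches (and sometimes a 3-by-4 affine matrix) are used.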

  13. Super-Resolution in Plenoptic Cameras Using FPGAs

    PubMed Central

    Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime

    2014-01-01

Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field-programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes. PMID:24841246
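As a toy stand-in for the paper's FPGA algorithm (whose exact formulation is not given in the abstract), super-resolution from sub-pixel-shifted views can be illustrated with naive shift-and-add interleaving onto a finer grid:

```python
import numpy as np

def shift_and_add(views, shifts, factor):
    """Interleave low-resolution views with known integer sub-pixel
    shifts onto a grid 'factor' times finer, averaging where views
    overlap. A toy illustration of super-resolution, not the paper's
    FPGA pipeline."""
    h, w = views[0].shape
    hires = np.zeros((h * factor, w * factor))
    count = np.zeros_like(hires)
    for v, (dy, dx) in zip(views, shifts):
        hires[dy::factor, dx::factor] += v
        count[dy::factor, dx::factor] += 1
    return hires / np.maximum(count, 1)

# Four constant 2x2 views at the four half-pixel offsets interleave
# into one 4x4 grid with no holes.
views = [np.full((2, 2), float(k)) for k in (1, 2, 3, 4)]
hires = shift_and_add(views, [(0, 0), (0, 1), (1, 0), (1, 1)], 2)
print(hires)
```

The per-pixel independence of this interleaving is what maps so naturally onto FPGA parallelism and pipelining.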

  14. Technical and instrumental prerequisites for single-port laparoscopic solo surgery: state of art.

    PubMed

    Kim, Say-June; Lee, Sang Chul

    2015-04-21

With the aid of advanced surgical techniques and instruments, single-port laparoscopic surgery (SPLS) can be accomplished with just two surgical members: an operator and a camera assistant. Under these circumstances, the reasonable replacement of the human camera assistant by a mechanical camera holder has resulted in a new surgical procedure termed single-port solo surgery (SPSS). In SPSS, the fixation and coordinated movement of a camera held by mechanical devices provides fixed and stable operative images that are under the control of the operator. Therefore, SPSS primarily benefits from the operator's own eye-to-hand coordination. Because SPSS is an intuitive modification of SPLS, the indications for SPSS are the same as those for SPLS. Though SPSS requires more actions than surgery with a human assistant, these difficulties seem easily overcome by the provision of more stable operative images and the reduced need for lens cleaning and camera repositioning. When the operation is expected to be difficult and demanding, the SPSS process can be assisted by the addition of another instrument holder besides the camera holder.

  15. Super-resolution in plenoptic cameras using FPGAs.

    PubMed

    Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime

    2014-05-16

Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field-programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes.

  16. Airborne imaging for heritage documentation using the Fotokite tethered flying camera

    NASA Astrophysics Data System (ADS)

    Verhoeven, Geert; Lupashin, Sergei; Briese, Christian; Doneus, Michael

    2014-05-01

Since the beginning of aerial photography, researchers have used all kinds of devices (from pigeons, kites, poles, and balloons to rockets) to take still cameras aloft and remotely gather aerial imagery. To date, many of these unmanned devices are still used for what has been referred to as Low-Altitude Aerial Photography or LAAP. In addition to these more traditional camera platforms, radio-controlled (multi-)copter platforms have recently added a new aspect to LAAP. Although model airplanes have been around for several decades, the decreasing cost and increasing functionality and stability of ready-to-fly multi-copter systems have proliferated their use among non-hobbyists. As such, they became a very popular tool for aerial imaging. The overwhelming number of currently available brands and types (heli-, dual-, tri-, quad-, hexa-, octo-, dodeca-, deca-hexa and deca-octocopters), together with the wide variety of navigation options (e.g. altitude and position hold, waypoint flight) and camera mounts, indicates that these platforms are here to stay for some time. Given the multitude of still camera types and the image quality they are currently capable of, endless combinations of low- and high-cost LAAP solutions are available. In addition, LAAP allows for the exploitation of new imaging techniques, as it is often only a matter of lifting the appropriate device (e.g. video cameras, thermal frame imagers, hyperspectral line sensors). Archaeologists were among the first to adopt this technology, as it provided them with a means to easily acquire essential data from a unique point of view, whether for simple illustration purposes of standing historic structures or to compute three-dimensional (3D) models and orthophotographs from excavation areas. However, even very cheap multi-copter models require certain skills to pilot them safely. Additionally, malfunction or overconfidence might lift these devices to altitudes where they can interfere with manned aircraft.
As such, the safe operation of these devices is still an issue, particularly when flying over locations that can be crowded (such as students on excavations or tourists walking around historic places). As the future of UAS regulation remains unclear, this talk presents an alternative approach to aerial imaging: the Fotokite. Developed at the ETH Zürich, the Fotokite is a tethered flying camera that is essentially a multi-copter connected to the ground with a taut tether to achieve controlled flight. Crucially, it relies solely on onboard IMU (Inertial Measurement Unit) measurements to fly, launches in seconds, and is not classified as a UAS (Unmanned Aerial System), e.g. in the latest FAA (Federal Aviation Administration) UAS proposal. As a result it may be used for imaging cultural heritage in a variety of environments and settings with minimal training by non-experienced pilots. Furthermore, it is subject to less extensive certification, regulation and import/export restrictions, making it a viable solution for use at a greater range of sites than traditional methods. Unlike a balloon or a kite it is not subject to particular weather conditions and, thanks to active stabilization, is capable of a variety of intelligent flight modes. Finally, it is compact and lightweight, making it easy to transport and deploy, and its lack of reliance on GNSS (Global Navigation Satellite System) makes it possible to use in urban, overbuilt areas. After outlining its operating principles, the talk will present some archaeological case studies in which the Fotokite was used, thereby assessing its capabilities compared to the conventional UASs on the market.

  17. Design of a portable imager for near-infrared visualization of cutaneous wounds

    PubMed Central

    Peng, Zhaoqiang; Zhou, Jun; Dacy, Ashley; Zhao, Deyin; Kearney, Vasant; Zhou, Weidong; Tang, Liping; Hu, Wenjing

    2017-01-01

    A portable imager developed for real-time imaging of cutaneous wounds in research settings is described. The imager consists of a high-resolution near-infrared CCD camera, capable of detecting both bioluminescence and fluorescence, with illumination provided by an LED ring carrying a rotatable filter wheel. All external components are integrated into a compact camera attachment. The device is demonstrated to have performance competitive with a commercial animal-imaging enclosure-box setup in beam uniformity and sensitivity. Specifically, the device was used to visualize the bioluminescence associated with increased reactive oxygen species activity during the wound healing process in a cutaneous wound inflammation model. In addition, this device was employed to observe the fluorescence associated with the activity of matrix metalloproteinases in a mouse lipopolysaccharide-induced infection model. Our results support the use of the portable imager design as a noninvasive and real-time imaging tool to assess the extent of wound inflammation and infection. PMID:28114448

  18. Full High-definition three-dimensional gynaecological laparoscopy--clinical assessment of a new robot-assisted device.

    PubMed

    Tuschy, Benjamin; Berlit, Sebastian; Brade, Joachim; Sütterlin, Marc; Hornemann, Amadeus

    2014-01-01

    To investigate the clinical assessment of a full high-definition (HD) three-dimensional robot-assisted laparoscopic device in gynaecological surgery. This study included 70 women who underwent gynaecological laparoscopic procedures. Demographic parameters, type and duration of surgery and perioperative complications were analyzed. Fifteen surgeons were postoperatively interviewed regarding their assessment of this new system with a standardized questionnaire. The clinical assessment revealed that three-dimensional full-HD visualisation is comfortable and improves spatial orientation and hand-to-eye coordination. The majority of the surgeons stated they would prefer a three-dimensional system to a conventional two-dimensional device and stated that the robotic camera arm led to more relaxed working conditions. Three-dimensional laparoscopy is feasible, comfortable and well-accepted in daily routine. The three-dimensional visualisation improves surgeons' hand-to-eye coordination, intracorporeal suturing and fine dissection. The combination of full-HD three-dimensional visualisation with the robotic camera arm results in very high image quality and stability.

  19. A matter of collection and detection for intraoperative and noninvasive near-infrared fluorescence molecular imaging: To see or not to see?

    PubMed Central

    Zhu, Banghe; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2014-01-01

    Purpose: Although fluorescence molecular imaging is rapidly evolving as a new combinational drug/device technology platform for molecularly guided surgery and noninvasive imaging, there remain no performance standards for efficient translation of “first-in-humans” fluorescent imaging agents using these devices. Methods: The authors employed a stable, solid phantom designed to exaggerate the confounding effects of tissue light scattering and to mimic low concentrations (nM–pM) of near-infrared fluorescent dyes expected clinically for molecular imaging, in order to evaluate and compare the commonly used charge coupled device (CCD) camera systems employed in preclinical studies and in human investigational studies. Results: The results show that intensified CCD systems offer greater contrast with larger signal-to-noise ratios in comparison to unintensified CCD systems operated at clinically reasonable, subsecond acquisition times. Conclusions: Camera imaging performance could impact the success of future “first-in-humans” near-infrared fluorescence imaging agent studies. PMID:24506637

  20. A matter of collection and detection for intraoperative and noninvasive near-infrared fluorescence molecular imaging: To see or not to see?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Banghe; Rasmussen, John C.; Sevick-Muraca, Eva M., E-mail: Eva.Sevick@uth.tmc.edu

    2014-02-15

    Purpose: Although fluorescence molecular imaging is rapidly evolving as a new combinational drug/device technology platform for molecularly guided surgery and noninvasive imaging, there remain no performance standards for efficient translation of “first-in-humans” fluorescent imaging agents using these devices. Methods: The authors employed a stable, solid phantom designed to exaggerate the confounding effects of tissue light scattering and to mimic low concentrations (nM–pM) of near-infrared fluorescent dyes expected clinically for molecular imaging, in order to evaluate and compare the commonly used charge coupled device (CCD) camera systems employed in preclinical studies and in human investigational studies. Results: The results show that intensified CCD systems offer greater contrast with larger signal-to-noise ratios in comparison to unintensified CCD systems operated at clinically reasonable, subsecond acquisition times. Conclusions: Camera imaging performance could impact the success of future “first-in-humans” near-infrared fluorescence imaging agent studies.

  1. [Diagnostic use of positron emission tomography in France: from the coincidence gamma-camera to mobile hybrid PET/CT devices].

    PubMed

    Talbot, Jean-Noël

    2010-11-01

    Positron emission tomography (PET) is a well-established medical imaging method. PET is increasingly used for diagnostic purposes, especially in oncology. The most widely used radiopharmaceutical is FDG, a glucose analogue. Other radiopharmaceuticals have recently been registered or are in development. We outline technical improvements of PET machines during more than a decade of clinical use in France. Even though image quality has improved considerably and PET-CT hybrid machines have emerged, spending per examination has remained remarkably constant. Replacement and maintenance costs have remained in the range of 170-190 euros per examination since 1997, whether early CDET gamma cameras or the latest time-of-flight PET/CT devices are used. This is mainly due to shorter acquisition times and more efficient use of FDG. New reimbursement rates for PET/CT are needed in France in order to favor regular acquisition of state-of-the-art devices. One major development is the coupling of PET and MR imaging.

  2. Collaborated measurement of three-dimensional position and orientation errors of assembled miniature devices with two vision systems

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Zhang, Wei; Luo, Yi; Yang, Weimin; Chen, Liang

    2013-01-01

    In the assembly of miniature devices, the position and orientation of the parts to be assembled should be guaranteed during or after assembly. In some cases, the relative position or orientation errors among the parts cannot be measured from only one direction using a visual method, because of visual occlusion or because the features of the parts are located in a three-dimensional way. An automatic assembly system for precise miniature devices is introduced. In this modular assembly system, two machine vision systems were employed for measurement of the three-dimensionally distributed assembly errors. High-resolution CCD cameras and precision stages with high position repeatability were integrated to realize high-precision measurement in a large work space. The two cameras worked in collaboration during the measurement procedure to eliminate the influence of movement errors of the rotational or translational stages. A set of templates was designed for calibration of the vision systems and evaluation of the system's measurement accuracy.

  3. Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lu, Jian

    Technology for capturing panoramic (360-degree) three-dimensional information in a real environment has many applications in fields such as virtual and augmented reality, security, and robot navigation. In this study, we examine an acquisition device constructed of a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between the two sensors: a dense spatio-temporal volume from the camera and distance information from the laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama that has spatial resolution equal to that of the original images acquired using the regular camera, and we also estimate a dense panoramic depth map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selecting camera. Moreover, to determine distance information robustly, we propose a data fusion algorithm embedded in an energy minimization framework that incorporates active depth measurements from the 2D laser range scanner and passive geometry reconstruction from the image sequence obtained using the CCD camera. Thereby, measurement precision and robustness can be improved beyond those available from conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner alone. Experimental results using both synthetic and actual images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.
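
    The flavor of such an energy-minimization fusion can be illustrated with a toy one-dimensional sketch: a dense but noisy passive (stereo) depth profile is combined with sparse but accurate laser ranges plus a smoothness term. The energy weights, the step-size rule, and the plain gradient-descent solver below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def fuse_depths(stereo, laser, conf, lam=1.0, mu=1.0, iters=2000):
    # Minimize  E(d) = sum (d - stereo)^2 + lam * sum conf * (d - laser)^2
    #                + mu * sum (d[i] - d[i+1])^2
    # by plain gradient descent; the step size stays below 1/L (L bounds the
    # gradient's Lipschitz constant), so the iteration cannot diverge.
    d = stereo.astype(float).copy()
    step = 0.9 / (1.0 + lam * float(conf.max()) + 4.0 * mu)
    for _ in range(iters):
        grad = 2.0 * (d - stereo) + 2.0 * lam * conf * (d - laser)
        grad[:-1] += 2.0 * mu * (d[:-1] - d[1:])   # smoothness vs right neighbor
        grad[1:] += 2.0 * mu * (d[1:] - d[:-1])    # smoothness vs left neighbor
        d -= step * grad
    return d
```

    Sparse laser anchors (nonzero `conf`) pull the profile toward the accurate ranges while the smoothness term suppresses high-frequency stereo noise in between.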

  4. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in High-Frame-Rate CCD Camera Having Subwindow Capability (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between the camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host-computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor-transistor-logic (TTL)-level signals from a field-programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential-signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interconnect (PCI) bus.
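
    The per-ROI readout model can be sketched in software terms. The class and method names below are hypothetical (the real camera implements this in FPGA logic); the point is that only ROI pixels are read out, and the ROI list can be replaced between frames, e.g. by a tracking loop:

```python
import numpy as np

class RoiReader:
    """Software sketch of per-ROI readout: only pixels inside the currently
    configured regions of interest are read from each frame, and the ROI
    list can be replaced between frames (e.g. by a target-tracking loop)."""

    def __init__(self, rois):
        self.rois = list(rois)          # each ROI: (row, col, height, width)

    def set_rois(self, rois):
        # Reconfigure at frame rate -- the event-driven part of the design.
        self.rois = list(rois)

    def read(self, frame):
        # Read out only the subwindows, not the full CCD field of view.
        return [frame[r:r + h, c:c + w].copy() for (r, c, h, w) in self.rois]
```

    Reading a few small subwindows instead of the full frame is what buys the higher frame rate in the ROI-based scheme.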

  5. 2D Measurements of the Balmer Series in Proto-MPEX using a Fast Visible Camera Setup

    NASA Astrophysics Data System (ADS)

    Lindquist, Elizabeth G.; Biewer, Theodore M.; Ray, Holly B.

    2017-10-01

    The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device with densities up to 10²⁰ m⁻³ and temperatures up to 20 eV. Broadband spectral measurements show that the visible emission spectra are solely due to the Balmer lines of deuterium. Monochromatic and RGB color Sanstreak SC1 Edgertronic fast visible cameras capture high-speed video of plasmas in Proto-MPEX. The color camera is equipped with a long-pass 450 nm filter and an internal Bayer filter to view the Dα line at 656 nm on the red channel and the Dβ line at 486 nm on the blue channel. The monochromatic camera has a 434 nm narrow bandpass filter to view the Dγ intensity. In the setup, a 50/50 beam splitter is used so that both cameras image the same region of the plasma discharge. Camera images were aligned to each other by viewing a grid, ensuring one-pixel registration between the two cameras. A uniform-intensity calibrated white-light source was used to perform a pixel-to-pixel relative and an absolute intensity calibration for both cameras. Python scripts then combined the dual-camera data, rendering the Dα, Dβ, and Dγ intensity ratios. Observations from Proto-MPEX discharges will be presented. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
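
    The calibration-then-ratio pipeline described above might look roughly like this in NumPy. The function names and the exact calibration chain (dark subtraction, flat-field division, absolute scale factor) are our assumptions, not the authors' scripts:

```python
import numpy as np

def calibrate(raw, flat, dark, k_abs):
    # Pixel-to-pixel relative (flat-field) correction followed by an
    # absolute intensity scaling derived from the calibrated white source.
    return k_abs * (raw - dark) / np.maximum(flat - dark, 1e-9)

def balmer_ratios(d_alpha, d_beta, d_gamma, eps=1e-9):
    # Co-registered, calibrated intensity maps -> per-pixel line ratios.
    return (d_alpha / np.maximum(d_beta, eps),
            d_beta / np.maximum(d_gamma, eps))
```

    The `eps` floor simply guards against division by zero in dark regions of the frame.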

  6. AsteroidFinder - the space-borne telescope to search for NEO Asteroids

    NASA Astrophysics Data System (ADS)

    Hartl, M.; Mosebach, H.; Schubert, J.; Michaelis, H.; Mottola, S.; Kührt, E.; Schindler, K.

    2017-11-01

    This paper presents the mission profile as well as the optical configuration of the space-borne AsteroidFinder telescope. Its main objective is to retrieve asteroids with orbits interior to the Earth's orbit. The instrument requires high sensitivity to detect asteroids with a limiting magnitude of at least 18.5 mag (V band) and an astrometric accuracy of 1 arcsec (1σ). This requires a telescope aperture greater than 400 cm², high image stability, and a detector with high quantum efficiency (peak > 90%) and very low noise, so that performance is limited only by the zodiacal background. The telescope will observe the sky between 30° and 60° in solar elongation. The telescope optics is based on a Cook-type TMA (three-mirror anastigmat). An effective 2° × 2° field of view (FOV) is achieved by a fast F/3.4 telescope with near diffraction-limited performance. The absence of a centre obscuration or spiders, in combination with an accessible intermediate field plane and exit pupil, allows for efficient stray-light mitigation. Design drivers for the telescope are the required point spread function (PSF) values, an extremely efficient stray-light suppression (due to the magnitude requirement mentioned above), the detector performance, and the overall optical and mechanical stability for all orientations of the satellite. To accommodate the passive thermal stabilization scheme and the necessary structural stability, the selection of materials for the telescope main structure and the mirrors is of vital importance. A focal plane with four EMCCD detectors is envisaged. The EMCCD technology permits shorter integration times, which is favorable with regard to the pointing performance of the satellite. The launch of the mission is foreseen for the year 2013, with a subsequent mission lifetime of at least 1 year.
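
    The quoted aperture area and focal ratio pin down the basic first-order geometry. As a sketch, assuming a circular, unobscured pupil at exactly the 400 cm² minimum (the actual pupil shape and size are not given here):

```python
import math

def aperture_diameter_cm(area_cm2):
    # Diameter of a circular, unobscured pupil with the given collecting area.
    return 2.0 * math.sqrt(area_cm2 / math.pi)

diameter_cm = aperture_diameter_cm(400.0)   # the stated 400 cm^2 minimum
focal_length_cm = 3.4 * diameter_cm         # focal length of an F/3.4 system
```

    This gives a pupil of roughly 22.6 cm and a focal length of roughly 77 cm for an F/3.4 system at the minimum collecting area.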

  7. Generic Dynamic Environment Perception Using Smart Mobile Devices.

    PubMed

    Danescu, Radu; Itu, Razvan; Petrovai, Andra

    2016-10-17

    The driving environment is complex and dynamic, and the attention of the driver is continuously challenged; therefore, computer-based assistance achieved by processing image and sensor data may increase traffic safety. While active sensors and stereovision have the advantage of obtaining 3D data directly, monocular vision is easy to set up and can benefit from the increasing computational power of smart mobile devices, and from the fact that almost all of them come with an embedded camera. Several driving-assistance applications are available for mobile devices, but they are mostly targeted at simple scenarios and a limited range of obstacle shapes and poses. This paper presents a technique for generic, shape-independent real-time obstacle detection on mobile devices, based on a dynamic, free-form 3D representation of the environment: the particle-based occupancy grid. Images acquired in real time from the smart mobile device's camera are processed by removing the perspective effect and segmenting the resulting bird's-eye-view image to identify candidate obstacle areas, which are then used to update the occupancy grid. The tracked occupancy-grid cells are grouped into obstacles depicted as cuboids having position, size, orientation and speed. The easy-to-set-up system is able to reliably detect most obstacles in urban traffic, and its measurement accuracy is comparable to that of a stereovision system.

  8. Camera memory study for large space telescope. [charge coupled devices

    NASA Technical Reports Server (NTRS)

    Hoffman, C. P.; Brewer, J. E.; Brager, E. A.; Farnsworth, D. L.

    1975-01-01

    Specifications were developed for a memory system to be used as the storage medium for camera detectors on the Large Space Telescope (LST) satellite. Detectors with limited internal storage time, such as intensified charge-coupled devices and silicon intensified targets, are implied. The general characteristics of different approaches to the memory system are reported, with comparisons made within the guidelines set forth for the LST application. Priority ordering of comparisons is on the basis of cost, reliability, power, and physical characteristics. Specific rationales are provided for the rejection of unsuitable memory technologies. A recommended technology was selected and used to establish specifications for a breadboard memory. Procurement scheduling is provided for delivery of system breadboards in 1976, prototypes in 1978, and space-qualified units in 1980.

  9. Image Information Obtained Using a Charge-Coupled Device (CCD) Camera During an Immersion Liquid Evaporation Process for Measuring the Refractive Index of Solid Particles.

    PubMed

    Niskanen, Ilpo; Sutinen, Veijo; Thungström, Göran; Räty, Jukka

    2018-06-01

    The refractive index is a fundamental physical property of a medium, which can be used for identification and purity assessment of all media. Here we describe a refractive index measurement technique to determine simultaneously the refractive index of different solid particles by monitoring the transmittance of light through a suspension using a charge-coupled device (CCD) camera. An important feature of the measurement is the liquid evaporation process for refractive index matching of the solid particle and the immersion liquid; this was realized by using a pair of volatile and non-volatile immersion liquids. In this study, the refractive indices of calcium fluoride (CaF2) and barium fluoride (BaF2) were determined using the proposed method.
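
    The core of the immersion-matching idea is that the suspension scatters least, i.e. transmits most, when the liquid's index equals the particles' index, so the particle index can be read off at the transmittance peak. A minimal sketch with synthetic data (the peak shape, grid, and the CaF2-like value of about 1.434 are illustrative assumptions):

```python
import numpy as np

def matched_index(n_liquid, transmittance):
    # The suspension is most transparent when the liquid's refractive index
    # equals the particles' index, so report n at the transmittance maximum.
    return n_liquid[np.argmax(transmittance)]

n_liquid = np.linspace(1.40, 1.47, 71)                     # index during evaporation
transmittance = np.exp(-((n_liquid - 1.434) / 0.01) ** 2)  # synthetic matching curve
n_particle = matched_index(n_liquid, transmittance)
```

    In practice the transmittance is tracked frame by frame on the CCD as the volatile component evaporates and the liquid's index sweeps through the matching point.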

  10. Advances in x-ray framing cameras at the National Ignition Facility to improve quantitative precision in x-ray imaging

    DOE PAGES

    Benedetti, L. R.; Holder, J. P.; Perkins, M.; ...

    2016-02-26

    We describe an experimental method to measure the gate profile of an x-ray framing camera and to determine several important functional parameters: relative gain (between strips), relative gain droop (within each strip), gate propagation velocity, gate width, and actual inter-strip timing. Several of these parameters cannot be measured accurately by any other technique. This method is then used to document cross talk-induced gain variations and artifacts created by radiation that arrives before the framing camera is actively amplifying x-rays. Electromagnetic cross talk can cause relative gains to vary significantly as inter-strip timing is varied. This imposes a stringent requirement for gain calibration. If radiation arrives before a framing camera is triggered, it can cause an artifact that manifests as a high-intensity, spatially varying background signal. Furthermore, we have developed a device that can be added to the framing camera head to prevent these artifacts.

  11. Advances in x-ray framing cameras at the National Ignition Facility to improve quantitative precision in x-ray imaging.

    PubMed

    Benedetti, L R; Holder, J P; Perkins, M; Brown, C G; Anderson, C S; Allen, F V; Petre, R B; Hargrove, D; Glenn, S M; Simanovskaia, N; Bradley, D K; Bell, P

    2016-02-01

    We describe an experimental method to measure the gate profile of an x-ray framing camera and to determine several important functional parameters: relative gain (between strips), relative gain droop (within each strip), gate propagation velocity, gate width, and actual inter-strip timing. Several of these parameters cannot be measured accurately by any other technique. This method is then used to document cross talk-induced gain variations and artifacts created by radiation that arrives before the framing camera is actively amplifying x-rays. Electromagnetic cross talk can cause relative gains to vary significantly as inter-strip timing is varied. This imposes a stringent requirement for gain calibration. If radiation arrives before a framing camera is triggered, it can cause an artifact that manifests as a high-intensity, spatially varying background signal. We have developed a device that can be added to the framing camera head to prevent these artifacts.

  12. A method for measuring aircraft height and velocity using dual television cameras

    NASA Technical Reports Server (NTRS)

    Young, W. R.

    1977-01-01

    A unique electronic optical technique, consisting of two closed circuit television cameras and timing electronics, was devised to measure an aircraft's horizontal velocity and height above ground without the need for airborne cooperative devices. The system is intended to be used where the aircraft has a predictable flight path and a height of less than 660 meters (2,000 feet) at or near the end of an air terminal runway, but is suitable for greater aircraft altitudes whenever the aircraft remains visible. Two television cameras, pointed at zenith, are placed in line with the expected path of travel of the aircraft. Velocity is determined by measuring the time it takes the aircraft to travel the measured distance between cameras. Height is determined by correlating this speed with the time required to cross the field of view of either camera. Preliminary tests with a breadboard version of the system and a small model aircraft indicate the technique is feasible.
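
    The two measurements reduce to simple geometry: speed follows from the known baseline between the cameras, and height follows from the time to cross one camera's field of view, whose ground footprint at height h is 2·h·tan(FOV/2) for a zenith-pointing camera. A sketch with illustrative numbers (the baseline and FOV values are assumptions, not from the report):

```python
import math

def aircraft_velocity(baseline_m, transit_time_s):
    # Ground speed from the time taken to cover the known baseline between
    # the two zenith-pointing cameras.
    return baseline_m / transit_time_s

def aircraft_height(velocity_ms, crossing_time_s, fov_deg):
    # A zenith camera with full field of view fov_deg sees a ground-track
    # window of width 2*h*tan(fov/2) at height h; the aircraft crosses that
    # window in crossing_time_s at the measured speed.
    half_angle = math.radians(fov_deg) / 2.0
    return velocity_ms * crossing_time_s / (2.0 * math.tan(half_angle))
```

    For example, a 300 m baseline crossed in 4 s gives 75 m/s, and a 2 s transit of a 40-degree field of view then places the aircraft at about 206 m.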

  13. A survey on consumers' attitude towards storing and end of life strategies of small information and communication technology devices in Spain.

    PubMed

    Bovea, María D; Ibáñez-Forés, Valeria; Pérez-Belis, Victoria; Juan, Pablo

    2018-01-01

    This study analyses current habits and practices towards the storage, repair and second-hand purchase of small electrical and electronic devices belonging to the category of information and communication technology (ICT). To this end, a survey was designed and conducted with a representative sample size of 400 individuals through telephone interviews for the following categories: MP3/MP4 player, video camera, photo camera, mobile phone, tablet, e-book reader, laptop, hard disk drive, GPS navigator, radio/radio alarm clock. According to the results obtained, there is a tendency to store disused small ICT devices at home: on average across all the small ICT categories analysed, 73.91% of the respondents store disused small ICT devices at home. Regarding habits towards repair and second-hand purchase, 65.5% of the respondents have never taken small ICT devices for repair, and 87.6% have never purchased them second-hand. This paper provides useful and hitherto unavailable information about current habits of discarding and reusing ICT devices. It can be concluded that there is a need to implement awareness-raising campaigns to encourage these practices, which are necessary to reach the minimum goals regarding preparation for reuse set out in Directive 2012/19/EU for the category of small electrical and electronic equipment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Student-Built Underwater Video and Data Capturing Device

    NASA Astrophysics Data System (ADS)

    Whitt, F.

    2016-12-01

    The Stockbridge High School Robotics Team's invention is a low-cost underwater video and data capturing device. This system is capable of shooting time-lapse photography and/or video for up to 3 days at a time. It can be used in remote locations without having to change batteries or add additional external hard drives for data storage. The video capturing device has a unique base and mounting system which houses a Pi Drive and a programmable Raspberry Pi with a camera module. This system is powered by two 12-volt batteries, which makes it easier for users to recharge after use. Our data capturing device has the same unique base and mounting system as the underwater camera. The data capturing device consists of an Arduino and an SD card shield that is capable of collecting continuous temperature and pH readings underwater. These data are then logged onto the SD card for easy access and recording. The low-cost underwater video and data capturing device can reach depths of up to 100 meters while recording 36 hours of video on 1 terabyte of storage. It also features night-vision infrared light capabilities. The cost to build the device is $500. The goal of this project was to provide a device that can easily be accessed by marine biologists, teachers, researchers and citizen scientists to capture photographic and water quality data in marine environments over extended periods of time.
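
    The quoted storage figures imply an average video bitrate, which is a quick sanity check when sizing such a system. Assuming a decimal terabyte (10¹² bytes), filled over the stated 36 hours:

```python
def sustained_bitrate_mbps(storage_bytes, record_hours):
    # Average bitrate (megabits per second) that fills the given storage
    # over the given recording time.
    return storage_bytes * 8.0 / (record_hours * 3600.0) / 1e6

rate = sustained_bitrate_mbps(1e12, 36)   # 1 terabyte over 36 hours of video
```

    This works out to roughly 62 Mbit/s sustained, a plausible rate for high-quality video on a Raspberry Pi class recorder.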

  15. Distributing digital video to multiple computers

    PubMed Central

    Murray, James A.

    2004-01-01

    Video is an effective teaching tool, and live video microscopy is especially helpful in teaching dissection techniques and the anatomy of small neural structures. Digital video equipment is more affordable now and allows easy conversion from older analog video devices. I here describe a simple technique for bringing digital video from one camera to all of the computers in a single room. This technique allows students to view and record the video from a single camera on a microscope. PMID:23493464

  16. Comparison of three-dimensional optical coherence tomography and combining a rotating Scheimpflug camera with a Placido topography system for forme fruste keratoconus diagnosis.

    PubMed

    Fukuda, Shinichi; Beheregaray, Simone; Hoshi, Sujin; Yamanari, Masahiro; Lim, Yiheng; Hiraoka, Takahiro; Yasuno, Yoshiaki; Oshika, Tetsuro

    2013-12-01

    To evaluate the ability of parameters measured by three-dimensional (3D) corneal and anterior segment optical coherence tomography (CAS-OCT) and a rotating Scheimpflug camera combined with a Placido topography system (Scheimpflug camera with topography) to discriminate between normal eyes and forme fruste keratoconus. Forty-eight eyes of 48 patients with keratoconus, 25 eyes of 25 patients with forme fruste keratoconus and 128 eyes of 128 normal subjects were evaluated. Anterior and posterior keratometric parameters (steep K, flat K, average K), elevation, topographic parameters, regular and irregular astigmatism (spherical, asymmetry, regular and higher-order astigmatism) and five pachymetric parameters (minimum, minimum-median, inferior-superior, inferotemporal-superonasal, vertical thinnest location of the cornea) were measured using 3D CAS-OCT and a Scheimpflug camera with topography. The area under the receiver operating curve (AUROC) was calculated to assess the discrimination ability. Compatibility and repeatability of both devices were evaluated. Posterior surface elevation showed higher AUROC values in discrimination analysis of forme fruste keratoconus using both devices. Both instruments showed significant linear correlations (p<0.05, Pearson's correlation coefficient) and good repeatability (ICCs: 0.885-0.999) for normal and forme fruste keratoconus. Posterior elevation was the best discrimination parameter for forme fruste keratoconus. Both instruments presented good correlation and repeatability for this condition.
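
    The AUROC values used above for discrimination ability have a simple rank interpretation: the probability that a randomly chosen case (forme fruste keratoconus) scores higher on the parameter than a randomly chosen control. A minimal sketch of that Mann-Whitney form (function name ours):

```python
import numpy as np

def auroc(cases, controls):
    # Mann-Whitney form of the AUROC: the probability that a randomly chosen
    # case scores higher than a randomly chosen control (ties count half).
    cases = np.asarray(cases, dtype=float)
    controls = np.asarray(controls, dtype=float)
    diff = cases[:, None] - controls[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size
```

    A value of 0.5 means no discrimination and 1.0 means perfect separation, which is why posterior elevation's higher AUROC marks it as the best parameter here.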

  17. A new spherical scanning system for infrared reflectography of paintings

    NASA Astrophysics Data System (ADS)

    Gargano, M.; Cavaliere, F.; Viganò, D.; Galli, A.; Ludwig, N.

    2017-03-01

    Infrared reflectography is an imaging technique used to visualize the underdrawings of ancient paintings; it relies on the fact that most pigment layers are quite transparent to infrared radiation in the spectral band between 0.8 μm and 2.5 μm. InGaAs sensor cameras are nowadays the devices most used to visualize underdrawings, but due to the small size of the detectors, these cameras are usually mounted on scanning systems to record high-resolution reflectograms. This work describes a portable scanning-system prototype based on a peculiar spherical scanning geometry, built around a lightweight, low-cost motorized head. The motorized head was built to allow the refocusing adjustment needed to compensate for the variable camera-painting distance during the rotation of the camera. The prototype was tested first in the laboratory and then in situ on the Giotto panel "God the Father with Angels", at a resolution of 256 pixels per inch. The system's performance is comparable with that of other reflectographic devices, with the advantage of extending the scanned area up to 1 m × 1 m with a 40 min scanning time. The present configuration can easily be modified to increase the resolution up to 560 pixels per inch or to extend the scanned area up to 2 m × 2 m.
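
    The size of the refocusing adjustment can be estimated from the thin-lens equation: as the camera-painting distance changes during the spherical sweep, the image plane shifts and the head must compensate. The lens focal length and the distance range below are hypothetical numbers, not the prototype's specifications:

```python
def image_distance_mm(focal_mm, object_mm):
    # Thin-lens equation: 1/f = 1/do + 1/di  =>  di = f * do / (do - f)
    return focal_mm * object_mm / (object_mm - focal_mm)

# Hypothetical numbers: a 50 mm lens with the painting distance swinging
# between 1.0 m and 1.3 m over the spherical scan.
focus_shift_mm = image_distance_mm(50.0, 1000.0) - image_distance_mm(50.0, 1300.0)
```

    Even this modest distance swing moves the image plane by roughly 0.6 mm, well outside the depth of focus at reflectographic resolutions, which is why the motorized refocusing is needed.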

  18. ESR paper on the proper use of mobile devices in radiology.

    PubMed

    2018-04-01

    Mobile devices (smartphones, tablets, etc.) have become key methods of communication, data access and data sharing for the population in the past decade. The technological capabilities of these devices have expanded very rapidly; for example, their in-built cameras have largely replaced conventional cameras. Their processing power is often sufficient to handle the large data sets of radiology studies and to manipulate images and studies directly on hand-held devices. Thus, they can be used to transmit and view radiology studies, often in locations remote from the source of the imaging data. They are not recommended for primary interpretation of radiology studies, but they facilitate sharing of studies for second opinions, viewing of studies and reports by clinicians at the bedside, etc. Other potential applications include remote participation in educational activity (e.g. webinars) and consultation of online educational content, e-books, journals and reference sources. Social-networking applications can be used for exchanging professional information and for teaching. Users of mobile devices must be aware of the vulnerabilities and dangers of their use, in particular the potential for inappropriate sharing of confidential patient information, and must take appropriate steps to protect confidential data. • Mobile devices have revolutionized communication in the past decade, and are now ubiquitous. • Mobile devices have sufficient processing power to manipulate and display large data sets of radiological images. • Mobile devices allow transmission and sharing of radiologic studies for purposes of second opinions, bedside review of images, teaching, etc. • Mobile devices are currently not recommended as tools for primary interpretation of radiologic studies. • The use of mobile devices for image and data transmission carries risks, especially regarding confidentiality, which must be considered.

  19. A comparison between conductive and infrared devices for measuring mean skin temperature at rest, during exercise in the heat, and recovery.

    PubMed

    Bach, Aaron J E; Stewart, Ian B; Disher, Alice E; Costello, Joseph T

    2015-01-01

    Skin temperature assessment has historically been undertaken with conductive devices affixed to the skin. With the development of technology, infrared devices are increasingly utilised in the measurement of skin temperature. Therefore, our purpose was to evaluate the agreement between four skin temperature devices at rest, during exercise in the heat, and recovery. Mean skin temperature ([Formula: see text]) was assessed in thirty healthy males during 30 min rest (24.0 ± 1.2°C, 56 ± 8%), 30 min cycle in the heat (38.0 ± 0.5°C, 41 ± 2%), and 45 min recovery (24.0 ± 1.3°C, 56 ± 9%). [Formula: see text] was assessed at four sites using two conductive devices (thermistors, iButtons) and two infrared devices (infrared thermometer, infrared camera). Bland-Altman plots demonstrated mean bias ± limits of agreement between the thermistors and iButtons as follows (rest, exercise, recovery): -0.01 ± 0.04, 0.26 ± 0.85, -0.37 ± 0.98°C; thermistors and infrared thermometer: 0.34 ± 0.44, -0.44 ± 1.23, -1.04 ± 1.75°C; thermistors and infrared camera (rest, recovery): 0.83 ± 0.77, 1.88 ± 1.87°C. Pairwise comparisons of [Formula: see text] found significant differences (p < 0.05) between thermistors and both infrared devices during resting conditions, and significant differences between the thermistors and all other devices tested during exercise in the heat and recovery. These results indicate poor agreement between conductive and infrared devices at rest, during exercise in the heat, and subsequent recovery. Infrared devices may not be suitable for monitoring [Formula: see text] in the presence of, or following, metabolically and environmentally induced heat stress.
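
The mean bias ± limits of agreement reported above follow the standard Bland-Altman construction. A minimal sketch of that computation on synthetic paired readings (all values illustrative, not data from the study):

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and half-width of the 95% limits of agreement between paired series."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)   # 95% limits of agreement are bias +/- loa
    return bias, loa

# Synthetic paired skin-temperature readings (deg C); a +0.8 C offset is simulated.
rng = np.random.default_rng(0)
thermistor = 34.0 + 0.3 * rng.standard_normal(30)
infrared = thermistor + 0.8 + 0.2 * rng.standard_normal(30)

bias, loa = bland_altman(infrared, thermistor)
print(f"bias = {bias:+.2f} C, limits of agreement = +/- {loa:.2f} C")
```

Agreement is then judged by whether the limits of agreement are acceptably narrow for the application, not by the bias alone.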

  20. Button batteries

    MedlinePlus

    Swallowing batteries ... These devices use button batteries: Calculators Cameras Hearing aids Penlights Watches ... If a person puts the battery up their nose and breathes it further in, ... problems Cough Pneumonia (if the battery goes unnoticed) ...

  1. Measuring Beam Sizes and Ultra-Small Electron Emittances Using an X-ray Pinhole Camera.

    PubMed

    Elleaume, P; Fortgang, C; Penel, C; Tarazona, E

    1995-09-01

    A very simple pinhole camera set-up has been built to diagnose the electron beam emittance of the ESRF. The pinhole is placed in the air next to an Al window. An image is obtained with a CCD camera imaging a fluorescent screen. The emittance is deduced from the size of the image. The relationship between the measured beam size and the electron beam emittance depends upon the lattice functions alpha, beta and eta, the screen resolution, pinhole size and photon beam divergence. The set-up is capable of measuring emittances as low as 5 pm rad and is presently routinely used as both an electron beam imaging device and an emittance diagnostic.
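
The abstract notes that the emittance follows from the measured image size once the lattice functions, screen resolution and dispersion are accounted for. A hedged sketch of that arithmetic, using the conventional relation σ² = εβ + (ησ_δ)² with the resolution subtracted in quadrature (all numbers below are hypothetical, not ESRF values):

```python
# The measured RMS image size combines the true beam size, the pinhole/screen
# resolution, and the dispersive contribution eta * sigma_delta, in quadrature.
sigma_meas = 60e-6       # measured RMS image size at unit magnification (m), assumed
sigma_res = 25e-6        # combined pinhole + screen resolution (m), assumed
beta = 2.5               # beta function at the source point (m), assumed
eta = 0.05               # dispersion at the source point (m), assumed
sigma_delta = 1e-3       # relative energy spread, assumed

sigma_beam_sq = sigma_meas**2 - sigma_res**2 - (eta * sigma_delta)**2
emittance = sigma_beam_sq / beta     # epsilon = sigma_beam^2 / beta  (m rad)
print(f"emittance ~ {emittance * 1e12:.1f} pm rad")
```

The quadrature subtraction is why the setup's resolution floor directly limits the smallest measurable emittance.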

  2. A Third Arm for the Surgeon

    NASA Technical Reports Server (NTRS)

    1994-01-01

    In laparoscopic surgery, tiny incisions are made in the patient's body and a laparoscope (an optical tube with a camera at the end) is inserted. The camera's image is projected onto two video screens, whose views guide the surgeon through the procedure. AESOP, a medical robot developed by Computer Motion, Inc. with NASA assistance, eliminates the need for a human assistant to operate the camera. The surgeon uses a foot pedal control to move the device, allowing him to use both hands during the surgery. Miscommunication is avoided; AESOP's movement is smooth and steady, and the memory vision is invaluable. Operations can be completed more quickly, and the patient spends less time under anesthesia. AESOP has been approved by the FDA.

  3. Top Smartphone Apps to Improve Teaching, Research, and Your Life

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    2011-01-01

    Not long ago, it seemed absurd for academics to carry around a computer, camera, and GPS device everywhere they went. Actually, it still seems absurd. But many professors (and administrators) now do just that in the form of all-in-one devices. Smartphones or tablet computers combine many functions in a hand-held gadget, and some users are…

  4. Investigation into the use of smartphone as a machine vision device for engineering metrology and flaw detection, with focus on drilling

    NASA Astrophysics Data System (ADS)

    Razdan, Vikram; Bateman, Richard

    2015-05-01

    This study investigates the use of a Smartphone and its camera vision capabilities in engineering metrology and flaw detection, with a view to developing a low-cost alternative to Machine vision systems, which are out of reach for small scale manufacturers. A Smartphone has to provide a similar level of accuracy as Machine vision devices like Smart cameras. The objective set out was to develop an App on an Android Smartphone, incorporating advanced computer vision algorithms written in Java. The App could then be used for recording measurements of twist drill bits and hole geometry, and analysing the results for accuracy. A detailed literature review was carried out for an in-depth study of Machine vision systems and their capabilities, including a comparison between the HTC One X Android Smartphone and the Teledyne Dalsa BOA Smart camera. A review of the existing metrology Apps in the market was also undertaken. In addition, the drilling operation was evaluated to establish the key measurement parameters of a twist drill bit, especially flank wear and diameter. The methodology covers the software development of the Android App, including the use of image processing algorithms like Gaussian blur, Sobel and Canny available from the OpenCV software library, as well as designing and developing the experimental set-up for carrying out the measurements. The results obtained from the experimental set-up were analysed for the geometry of twist drill bits and holes, including diametrical measurements and flaw detection. The results show that Smartphones like the HTC One X have the processing power and the camera capability to carry out metrological tasks, although the dimensional accuracy achievable from the Smartphone App is below the level provided by Machine vision devices like Smart cameras. A Smartphone with mechanical attachments, capable of image processing and having a reasonable level of accuracy in dimensional measurement, has the potential to become a handy low-cost Machine vision system for small scale manufacturers, especially in field metrology and flaw detection.
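
As a rough illustration of the kind of measurement such an App performs, the following sketch estimates a hole diameter from a thresholded image via an equivalent-circle area, with a hypothetical pixel-to-mm calibration (this is not the paper's algorithm, which uses OpenCV's Gaussian blur, Sobel and Canny operators on real camera frames):

```python
import numpy as np

MM_PER_PIXEL = 0.02          # assumed calibration from a reference artefact (hypothetical)

# Synthetic binary image of a drilled hole, radius 150 pixels.
y, x = np.mgrid[0:400, 0:400]
hole = (x - 200)**2 + (y - 200)**2 <= 150**2

area_px = hole.sum()
diameter_px = 2.0 * np.sqrt(area_px / np.pi)     # equivalent-circle diameter
diameter_mm = diameter_px * MM_PER_PIXEL
print(f"hole diameter ~ {diameter_mm:.2f} mm")
```

The achievable accuracy is bounded by the mm-per-pixel calibration, which is where a Smart camera with calibrated optics retains its advantage.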

  5. Smartlink - baseline for measurement of benefits.

    DOT National Transportation Integrated Search

    2015-11-16

    The North Carolina Department of Transportation (NCDOT) operates several traffic management centers across the state : along with accompanying field devices such as traffic condition data stations, traffic surveillance cameras, and variable : message...

  6. A Parallel Study between the Resource Typing as Outlined in the American NIMS Document and the Levels of Service Required of the Police Forces of Quebec

    DTIC Science & Technology

    2009-12-01

    characterize specific components and device type by function (e.g., fiber optics camera) General Disruption Tools Explosive tools such as mineral ... flotation device 2 helicopters, 3 passengers. 1 helicopter, 6 passengers. Altitude: between 10 km and 17 km. Turbine-Jet: for the 3 helicopters...No fixed or inflatable flotation device. Aircraft Capabilities VFR SQ Same as Type I Same as Type I Same as Type I Equipment Radios

  7. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the LIGIV laboratory concern the capture, processing, archiving and display of color images, considering the trichromatic nature of the Human Visual System (HVS). Among these projects, one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimize the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimizing consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Regions of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires, firstly, the definition of a reference color space and the definition of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is expected to specify conditions as known from the digital graphic arts. To control image pre-processing and image post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but must additionally consider mesopic viewing conditions.

  8. HeatWave: the next generation of thermography devices

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman; Vidas, Stephen

    2014-05-01

    Energy sustainability is a major challenge of the 21st century. To reduce environmental impact, changes are required not only on the supply side of the energy chain, by introducing renewable energy sources, but also on the demand side, by reducing energy usage and improving energy efficiency. Currently, 2D thermal imaging is used for energy auditing; it measures the thermal radiation from the surfaces of objects and represents it as a set of color-mapped images that can be analysed for the purpose of energy efficiency monitoring. A limitation of such a method for energy auditing is that it lacks information on the geometry and location of objects with reference to each other, particularly across separate images. Such a limitation prevents any quantitative analysis from being done, for example, detecting energy performance changes before and after retrofitting. To address these limitations, we have developed a next-generation thermography device called HeatWave. HeatWave is a hand-held 3D thermography device that consists of a thermal camera, a range sensor and a color camera, and can be used to generate precise 3D models of objects augmented with temperature and visible information. As an operator holding the device smoothly waves it around the objects of interest, HeatWave continuously tracks its own pose in space and integrates new information from the range, thermal and color cameras into a single, precise 3D multi-modal model. Information from multiple viewpoints can be incorporated to improve the accuracy, reliability and robustness of the global model. The approach also makes it possible to reduce systematic errors associated with the estimation of surface temperature from the thermal images.
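
The fusion step described above requires projecting range-sensor points into the thermal image so that each 3D point can be assigned a temperature. A minimal pinhole-projection sketch, with assumed (hypothetical) intrinsics and extrinsics:

```python
import numpy as np

# All parameters are illustrative, not HeatWave's actual calibration.
K = np.array([[400.0,   0.0, 160.0],    # assumed thermal-camera intrinsics
              [  0.0, 400.0, 120.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                           # assumed range-to-thermal rotation
t = np.array([0.05, 0.0, 0.0])          # assumed 5 cm baseline (m)

point = np.array([0.10, -0.05, 1.0])    # 3D point in the range-sensor frame (m)

p_cam = R @ point + t                   # transform into the thermal camera frame
u, v, w = K @ p_cam
u, v = u / w, v / w                     # pixel coordinates in the thermal image
print(f"thermal pixel: ({u:.1f}, {v:.1f})")
```

Sampling the thermal image at (u, v) then attaches a temperature to the point; repeating this over many viewpoints is what lets the model average out per-frame temperature errors.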

  9. Digital X-ray camera for quality evaluation and three-dimensional topographic reconstruction of single crystals of biological macromolecules

    NASA Technical Reports Server (NTRS)

    Borgstahl, Gloria (Inventor); Lovelace, Jeff (Inventor); Snell, Edward Holmes (Inventor); Bellamy, Henry (Inventor)

    2008-01-01

    The present invention provides a digital topography imaging system for determining the crystalline structure of a biological macromolecule, wherein the system employs a charge coupled device (CCD) camera with antiblooming circuitry to directly convert x-ray signals to electrical signals without the use of phosphor and measures reflection profiles from the x-ray emitting source after x-rays are passed through a sample. Methods for using said system are also provided.

  10. Robot Tracer with Visual Camera

    NASA Astrophysics Data System (ADS)

    Jabbar Lubis, Abdul; Dwi Lestari, Yuyun; Dafitri, Haida; Azanuddin

    2017-12-01

    A robot is a versatile tool that can take over human work functions, and it is a device that can be reprogrammed according to user needs. Wireless networking for remote monitoring can be exploited to build a robot whose movements can be monitored, compared against blueprints, and whose chosen path can be tracked; this data is transmitted over a wireless network. For vision, the robot uses a high-resolution camera, which helps the operator control the robot and observe the surrounding environment.

  11. Post-explant visualization of thrombi in outflow grafts and their junction to a continuous-flow total artificial heart using a high-definition miniaturized camera.

    PubMed

    Karimov, Jamshid H; Horvath, David; Sunagawa, Gengo; Byram, Nicole; Moazami, Nader; Golding, Leonard A R; Fukamachi, Kiyotaka

    2015-12-01

    Post-explant evaluation of the continuous-flow total artificial heart in preclinical studies can be extremely challenging because of the device's unique architecture. Determining the exact location of tissue regeneration, neointima formation, and thrombus is particularly important. In this report, we describe our first successful experience with visualizing the Cleveland Clinic continuous-flow total artificial heart using a custom-made high-definition miniature camera.

  12. Using a trichromatic CCD camera for spectral skylight estimation.

    PubMed

    López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier; Olmo, F J; Cazorla, A; Alados-Arboledas, L

    2008-12-01

    In a previous work [J. Opt. Soc. Am. A 24, 942-956 (2007)] we showed how to design an optimum multispectral system aimed at spectral recovery of skylight. Since high-resolution multispectral images of skylight could be interesting for many scientific disciplines, here we also propose a nonoptimum but much cheaper and faster approach to achieve this goal by using a trichromatic RGB charge-coupled device (CCD) digital camera. The camera is attached to a fish-eye lens, hence permitting us to obtain a spectrum of every point of the skydome corresponding to each pixel of the image. In this work we show how to apply multispectral techniques to the sensors' responses of a common trichromatic camera in order to obtain skylight spectra from them. This spectral information is accurate enough to estimate experimental values of some climate parameters or to be used in algorithms for automatic cloud detection, among many other possible scientific applications.
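
The multispectral technique alluded to above can be illustrated as a learned linear mapping from camera responses to spectra. A hedged sketch on synthetic data (a real system would train on measured skylight spectra and the camera's characterized spectral sensitivities):

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_bands = 200, 31                   # e.g. 400-700 nm in 10 nm steps (assumed)

spectra = rng.random((n_train, n_bands))     # training spectra (rows), synthetic
sensitivities = rng.random((n_bands, 3))     # assumed RGB sensor sensitivities
rgb = spectra @ sensitivities                # simulated trichromatic responses

# Least-squares recovery matrix W (3 x n_bands) such that spectra ~ rgb @ W.
W, *_ = np.linalg.lstsq(rgb, spectra, rcond=None)

test_spectrum = rng.random(n_bands)
recovered = (test_spectrum @ sensitivities) @ W   # estimate from RGB alone
err = np.abs(recovered - test_spectrum).mean()
print(f"mean absolute recovery error: {err:.3f}")
```

With unstructured random spectra the residual is large; the method works for skylight precisely because real skylight spectra are smooth and low-dimensional, so three well-chosen responses carry most of the information.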

  13. A time-resolved image sensor for tubeless streak cameras

    NASA Astrophysics Data System (ADS)

    Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji

    2014-03-01

    This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, the device requires high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator in combination with in-pixel logic allows us to create and provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel is designed and implemented using 0.11 µm CMOS image sensor technology. The image array has 30 (vertical) x 128 (memory length) pixels with a pixel pitch of 22.4 µm.

  14. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    PubMed

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show the relevant imaging characteristics and the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.
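
PSP data reduction commonly relies on a Stern-Volmer-type calibration, I_ref/I = A + B·(p/p_ref), obtained in a static chamber like the one described. A sketch with synthetic coefficients (not values from the paper):

```python
import numpy as np

# Synthetic calibration data generated with A = 0.15, B = 0.85 (illustrative only).
p_over_pref = np.array([0.6, 0.8, 1.0, 1.2, 1.4])
i_ref_over_i = 0.15 + 0.85 * p_over_pref

B, A = np.polyfit(p_over_pref, i_ref_over_i, 1)   # linear fit: slope B, intercept A
print(f"A ~ {A:.2f}, B ~ {B:.2f}")

# Invert the calibration to recover pressure (in units of p_ref) from a ratio image:
ratio = 1.0
pressure = (ratio - A) / B
```

In practice each camera under test would be judged by how much its noise and nonlinearity degrade the fitted coefficients and the recovered pressure field.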

  15. Spectroscopic interpretation and velocimetry analysis of fluctuations in a cylindrical plasma recorded by a fast camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oldenbuerger, S.; Brandt, C.; Brochard, F.

    2010-06-15

    Fast visible imaging is used on a cylindrical magnetized argon plasma produced by thermionic discharge in the Mirabelle device. To link the information collected with the camera to a physical quantity, fast camera movies of plasma structures are compared to Langmuir probe measurements. High correlation is found between light fluctuations and plasma density fluctuations. Contributions from neutral argon and ionized argon to the overall light intensity are separated by using interference filters and a light intensifier. Light emitting transitions are shown to involve a metastable neutral argon state that can be excited by thermal plasma electrons, thus explaining the good correlation between light and density fluctuations. The propagation velocity of plasma structures is calculated by adapting velocimetry methods to the fast camera movies. The resulting estimates of instantaneous propagation velocity are in agreement with former experiments. The computation of mean velocities is discussed.
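
A simple form of the velocimetry idea, estimating a propagation velocity from the time lag between light signals at two separated pixels, can be sketched as follows (synthetic signals; the frame rate and pixel separation are assumed, not taken from the experiment):

```python
import numpy as np

fs = 1e5                          # assumed frame rate of the fast camera (Hz)
dx = 0.01                         # assumed separation between the two pixels (m)
t = np.arange(2000) / fs

# Simulated burst at pixel 1, and a copy delayed by 5 samples at pixel 2.
lag_samples = 5
s1 = np.sin(2 * np.pi * 3e3 * t) * np.exp(-((t - 0.01) / 0.002) ** 2)
s2 = np.roll(s1, lag_samples)

xcorr = np.correlate(s2, s1, mode="full")
est_lag = xcorr.argmax() - (len(s1) - 1)          # lag of the correlation peak
velocity = dx / (est_lag / fs)
print(f"estimated lag: {est_lag} samples, velocity ~ {velocity:.0f} m/s")
```

Real movie data would apply this pixel-pair (or spatially resolved) correlation over short windows to obtain instantaneous velocities, then average for mean values.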

  16. Lens and Camera Arrays for Sky Surveys and Space Surveillance

    NASA Astrophysics Data System (ADS)

    Ackermann, M.; Cox, D.; McGraw, J.; Zimmer, P.

    2016-09-01

    In recent years, a number of sky survey projects have chosen to use arrays of commercial cameras coupled with commercial photographic lenses to enable low-cost, wide-area observation. Projects such as SuperWASP, FAVOR, RAPTOR, Lotis, PANOPTES, and DragonFly rely on multiple cameras with commercial lenses to image wide areas of the sky each night. The sensors are usually commercial astronomical charge coupled devices (CCDs) or digital single-lens reflex (DSLR) cameras, while the lenses are large-aperture, high-end consumer items intended for general photography. While much of this equipment is very capable and relatively inexpensive, this approach comes with a number of significant limitations that reduce the sensitivity and overall utility of the image data. The most frequently encountered limitations include lens vignetting, narrow spectral bandpass, and a relatively large point spread function. Understanding these limits helps to assess the utility of the data, and identify areas where advanced optical designs could significantly improve survey performance.
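
The vignetting limitation mentioned above is dominated, in simple lens models, by the cos⁴ falloff of illumination with field angle ("natural vignetting"), on top of any mechanical vignetting the lens adds. A small sketch with illustrative lens and sensor numbers:

```python
import numpy as np

f_mm = 85.0            # assumed lens focal length (mm)
half_diag_mm = 21.6    # half-diagonal of a full-frame sensor (mm)

theta = np.arctan(half_diag_mm / f_mm)   # field angle at the sensor corner
falloff = np.cos(theta) ** 4             # cos^4 law for relative illumination
print(f"corner illumination ~ {falloff:.2f} of the on-axis value")
```

Even this idealized ~12% corner loss translates directly into a shallower limiting magnitude at the field edges, which is why flat-fielding and vignetting models matter for photometric surveys.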

  17. Plenoptic Imager for Automated Surface Navigation

    NASA Technical Reports Server (NTRS)

    Zollar, Byron; Milder, Andrew; Mayo, Michael

    2010-01-01

    An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem comprised of a main aperture lens, a mechanical structure that holds an array of micro lenses at the focal distance from the main lens, and a structure that mates a CMOS imaging sensor at the correct distance from the micro lenses. The demonstrator also featured embedded electronics for camera readout, and a post-processor executing image-processing algorithms to provide ranging information.
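
The ranging principle of a plenoptic camera can be caricatured as microlens-to-microlens triangulation: the disparity of a scene point between adjacent sub-images encodes its range, much as in a stereo pair with a very short baseline. A hedged sketch with illustrative optics parameters (not Nanohmics' actual design values):

```python
# All values are illustrative assumptions for a stereo-like range estimate.
f = 0.050            # main-lens focal length (m), assumed
baseline = 0.0005    # pitch between adjacent microlenses (m), assumed
pixel = 5e-6         # pixel size (m), assumed

disparity_px = 2.0                       # measured sub-image shift, in pixels
disparity = disparity_px * pixel
depth = f * baseline / disparity         # triangulation-style range estimate (m)
print(f"estimated range ~ {depth:.2f} m")
```

The tiny microlens baseline is why plenoptic ranging is most accurate at short range; the sub-pixel disparity resolution of the matching algorithm sets the far limit.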

  18. Impact of New Camera Technologies on Discoveries in Cell Biology.

    PubMed

    Stuurman, Nico; Vale, Ronald D

    2016-08-01

    New technologies can make previously invisible phenomena visible. Nowhere is this more obvious than in the field of light microscopy. Beginning with the observation of "animalcules" by Antonie van Leeuwenhoek, when he figured out how to achieve high magnification by shaping lenses, microscopy has advanced to this day by a continued march of discoveries driven by technical innovations. Recent advances in single-molecule-based technologies have achieved unprecedented resolution, and were the basis of the Nobel prize in Chemistry in 2014. In this article, we focus on developments in camera technologies and associated image processing that have been a major driver of technical innovations in light microscopy. We describe five types of developments in camera technology: video-based analog contrast enhancement, charge-coupled devices (CCDs), intensified sensors, electron multiplying gain, and scientific complementary metal-oxide-semiconductor cameras, which, together, have had major impacts in light microscopy. © 2016 Marine Biological Laboratory.

  19. The multifocus plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Lumsdaine, Andrew

    2012-01-01

    The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For the purpose of digital refocusing (which is one of the important applications) the depth of field needs to be large, but there are fundamental optical limitations to this. The solution to the above problem is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a really wide range of digital refocusing can be achieved. This paper presents our theory and the results of implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.
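
The reason interleaved focal lengths extend the refocusing range follows directly from the thin-lens equation: microlenses sitting at one fixed distance from the sensor but with different focal lengths are conjugate to different planes. A small illustrative sketch (all numbers hypothetical, not the paper's design values):

```python
# Thin-lens relation 1/a + 1/b = 1/f: for a fixed microlens-to-sensor spacing b,
# each focal length f is sharply focused on a different conjugate plane a.
b = 0.0008                        # microlens-to-sensor spacing (m), assumed

def focused_plane(f):
    """Object-side conjugate distance a for a microlens of focal length f."""
    return 1.0 / (1.0 / f - 1.0 / b)

for f in (0.0005, 0.0006):        # two interleaved focal lengths (m), assumed
    print(f"f = {f * 1e3:.2f} mm -> in focus at a = {focused_plane(f) * 1e3:.2f} mm")
```

Interleaving the two populations across the array means every region of the scene is sharply sampled by at least one microlens type, so a focused image can be assembled at either plane and interpolated between them.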

  20. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras

    PubMed Central

    Spinosa, Emanuele; Roberts, David A.

    2017-01-01

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show the relevant imaging characteristics and the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access. PMID:28757553

  1. Spectroscopic interpretation and velocimetry analysis of fluctuations in a cylindrical plasma recorded by a fast camera

    NASA Astrophysics Data System (ADS)

    Oldenbürger, S.; Brandt, C.; Brochard, F.; Lemoine, N.; Bonhomme, G.

    2010-06-01

    Fast visible imaging is used on a cylindrical magnetized argon plasma produced by thermionic discharge in the Mirabelle device. To link the information collected with the camera to a physical quantity, fast camera movies of plasma structures are compared to Langmuir probe measurements. High correlation is found between light fluctuations and plasma density fluctuations. Contributions from neutral argon and ionized argon to the overall light intensity are separated by using interference filters and a light intensifier. Light emitting transitions are shown to involve a metastable neutral argon state that can be excited by thermal plasma electrons, thus explaining the good correlation between light and density fluctuations. The propagation velocity of plasma structures is calculated by adapting velocimetry methods to the fast camera movies. The resulting estimates of instantaneous propagation velocity are in agreement with former experiments. The computation of mean velocities is discussed.

  2. Mach-Zehnder-based optical marker/comb generator for streak camera calibration

    DOEpatents

    Miller, Edward Kirk

    2015-03-03

    This disclosure is directed to a method and apparatus for generating marker and comb indicia in an optical environment using a Mach-Zehnder (M-Z) modulator. High-speed recording devices are configured to record image or other data defining a high-speed event. To calibrate and establish a time reference, the markers or combs are indicia which serve as timing pulses (markers) or a constant-frequency train of optical pulses (comb) to be imaged on a streak camera for accurate time-based calibration and time reference. The system includes a camera, an optical signal generator which provides an optical signal to an M-Z modulator, and biasing and modulation signal generators configured to provide input to the M-Z modulator. An optical reference signal is provided to the M-Z modulator. The M-Z modulator modulates the reference signal to a higher-frequency optical signal which is output through a fiber-coupled link to the streak camera.
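
The comb-generation principle rests on the M-Z modulator's sinusoidal intensity transfer function, I/I₀ = cos²(πV/(2V_π) + φ): biased at its transmission null and driven with a full-swing sinusoid, it emits optical pulses at twice the drive frequency. A sketch with illustrative drive parameters (not values from the patent):

```python
import numpy as np

Vpi = 5.0                                     # assumed half-wave voltage (V)
f_drive = 1e9                                 # assumed 1 GHz RF drive
t = np.linspace(0.0, 4e-9, 4000, endpoint=False)

V = Vpi * np.sin(2 * np.pi * f_drive * t)     # full-swing sinusoidal drive
phase = np.pi * V / (2 * Vpi) + np.pi / 2     # pi/2 offset = biased at the null
transmission = np.cos(phase) ** 2             # M-Z intensity transfer function

# Count pulses as rising crossings of the half-maximum level.
pulses = np.sum((transmission[1:] > 0.5) & (transmission[:-1] <= 0.5))
print(f"pulses in 4 ns of a 1 GHz drive: {pulses}")
```

The frequency doubling (here, 8 pulses from 4 drive cycles) is what makes the null-biased M-Z attractive as a comb source: the optical pulse rate exceeds the RF drive frequency.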

  3. Photodetectors for the Advanced Gamma-ray Imaging System (AGIS)

    NASA Astrophysics Data System (ADS)

    Wagner, Robert G.; Advanced Gamma-ray Imaging System AGIS Collaboration

    2010-03-01

    The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next generation very high energy gamma-ray observatory. Design goals include an order of magnitude better sensitivity, better angular resolution, and a lower energy threshold than existing Cherenkov telescopes. Each telescope is equipped with a camera that detects and records the Cherenkov-light flashes from air showers. The camera is comprised of a pixelated focal plane of blue sensitive and fast (nanosecond) photon detectors that detect the photon signal and convert it into an electrical one. Given the scale of AGIS, the camera must be reliable and cost effective. The Schwarzschild-Couder optical design yields a smaller plate scale than present-day Cherenkov telescopes, enabling the use of more compact, multi-pixel devices, including multianode photomultipliers or Geiger avalanche photodiodes. We present the conceptual design of the focal plane for the camera and results from testing candidate focal plane sensors.

  4. Clinical applications of commercially available video recording and monitoring systems: inexpensive, high-quality video recording and monitoring systems for endoscopy and microsurgery.

    PubMed

    Tsunoda, Koichi; Tsunoda, Atsunobu; Ishimoto, ShinnIchi; Kimura, Satoko

    2006-01-01

    Dedicated charge-coupled device (CCD) camera systems for endoscopes and electronic fiberscopes are in widespread use. However, both are usually stationary in an office or examination room, and a wheeled cart is needed for mobility. The total costs of a CCD camera system and an electronic fiberscopy system are at least US $10,000 and US $30,000, respectively. Recently, the performance of audio and visual instruments has improved dramatically, with a concomitant reduction in their cost. Commercially available CCD video cameras with small monitors have become common. They provide excellent image quality and are much smaller and less expensive than previous models. The authors have developed adaptors for the popular mini-digital video (mini-DV) camera. The camera also provides video and acoustic output signals; therefore, the endoscopic images can be viewed on a large monitor simultaneously. The new system (a mini-DV video camera and an adaptor) costs only US $1,000. Therefore, the system is both cost-effective and useful for the outpatient clinic or casualty setting, or on house calls for the purpose of patient education. In the future, the authors plan to introduce the clinical application of a high-vision camera and an infrared camera as medical instruments for clinical and research situations.

  5. The software architecture of the camera for the ASTRI SST-2M prototype for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Sangiorgi, Pierluca; Capalbi, Milvia; Gimenes, Renato; La Rosa, Giovanni; Russo, Francesco; Segreto, Alberto; Sottile, Giuseppe; Catalano, Osvaldo

    2016-07-01

    The purpose of this contribution is to present the current status of the software architecture of the ASTRI SST-2M Cherenkov Camera. The ASTRI SST-2M telescope is an end-to-end prototype for the Small Size Telescope of the Cherenkov Telescope Array. The ASTRI camera is an innovative instrument based on SiPM detectors and has several internal hardware components. In this contribution we will give a brief description of the hardware components of the camera of the ASTRI SST-2M prototype and of their interconnections. Then we will present the outcome of the software architectural design process that we carried out in order to identify the main structural components of the camera software system and the relationships among them. We will analyze the architectural model that describes how the camera software is organized as a set of communicating blocks. Finally, we will show where these blocks are deployed in the hardware components and how they interact. We will describe in some detail the physical communication ports and external ancillary devices management, the high precision time-tag management, the fast data collection and the fast data exchange between different camera subsystems, and the interfacing with the external systems.

  6. Comparison of three different techniques for camera and motion control of a teleoperated robot.

    PubMed

    Doisy, Guillaume; Ronen, Adi; Edan, Yael

    2017-01-01

    This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, the use of head-tracking in a non-invasive way, without immersive virtual reality devices, was combined and compared with classical control modes for robot movements and camera control. Three control conditions were tested: 1) a condition with classical joystick control of both the movements of the robot and the robot camera, 2) a condition where the robot movements were controlled by a joystick and the robot camera was controlled by the user's head orientation, and 3) a condition where the movements of the robot were controlled by hand gestures and the robot camera was controlled by the user's head orientation. Performance, workload metrics and their evolution as the participants gained experience with the system were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Development of two-framing camera with large format and ultrahigh speed

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaoguo; Wang, Yuan; Wang, Yi

    2012-10-01

    A high-speed imaging facility is important and necessary for building time-resolved measurement systems with multi-framing capability. A framing camera that satisfies the demands of both high speed and large format needs to be specially developed for the ultrahigh-speed research field. A two-framing camera system with high sensitivity and time resolution has been developed and used for the diagnosis of electron beam parameters of the Dragon-I linear induction accelerator (LIA). The camera system, which adopts the principle of light beam splitting in the image space behind a lens with long focal length, mainly consists of a lens-coupled gated image intensifier, a CCD camera and a high-speed shutter trigger device based on a programmable integrated circuit. The fastest gating time is about 3 ns, and the interval between the two frames can be adjusted discretely in steps of 0.5 ns. Both the gating time and the interval time can be independently tuned up to a maximum of about 1 s. Two images, each 1024×1024, can be captured simultaneously with the developed camera. In addition, the camera system possesses good linearity, uniform spatial response and an equivalent background illumination as low as 5 electrons/pix/sec, which fully meets the measurement requirements of the Dragon-I LIA.

  8. System for critical infrastructure security based on multispectral observation-detection module

    NASA Astrophysics Data System (ADS)

    Trzaskawka, Piotr; Kastek, Mariusz; Życzkowski, Marek; Dulski, Rafał; Szustakowski, Mieczysław; Ciurapiński, Wiesław; Bareła, Jarosław

    2013-10-01

    Recent terrorist attacks, and the possibility of such actions in the future, have forced the development of security systems for critical infrastructures that embrace both sensor technologies and the technical organization of systems. The perimeter protection of stationary objects used until now, based on a ring of two-zone fencing and illuminated visual cameras, is being efficiently displaced by multisensor systems that combine: visible technology - day/night cameras registering the optical contrast of a scene; thermal technology - inexpensive bolometric cameras recording the thermal contrast of a scene; and active ground radars - microwave and millimetre wavelengths that detect reflected radiation. Merging these three different technologies into one system requires a methodology for selecting the technical conditions of installation and the parameters of the sensors. This procedure enables the construction of a system with correlated range, resolution, field of view and object identification. An important technical problem connected with the multispectral system is its software, which couples the radar with the cameras. This software can be used for automatic focusing of the cameras, automatic guiding of the cameras to an object detected by the radar, tracking of the object and localization of the object on a digital map, as well as target identification and alerting. Based on a "plug and play" architecture, the system provides flexibility and simple integration of sensors and devices in TCP/IP networks. Using a graphical user interface it is possible to control sensors and monitor streaming video and other data over the network, visualize the results of the data fusion process and obtain detailed information about detected intruders on a digital map. The system reduces operator workload with features such as sensor-to-sensor cueing from detection devices, automatic e-mail notification and alarm triggering.
The paper presents the structure and some elements of a critical infrastructure protection solution based on a modular multisensor security system. The system description focuses mainly on the methodology for selecting sensor parameters. The results of tests in real conditions are also presented.

  9. A laboratory verification sensor

    NASA Technical Reports Server (NTRS)

    Vaughan, Arthur H.

    1988-01-01

    The use of a variant of the Hartmann test is described to sense the coalignment of the 36 primary mirror segments of the Keck 10-meter Telescope. The Shack-Hartmann alignment camera is a surface-tilt-error-sensing device, operable with high sensitivity over a wide range of tilt errors. An interferometer, on the other hand, is a surface-height-error-sensing device. In general, if the surface height error exceeds a few wavelengths of the incident illumination, an interferogram is difficult to interpret and loses utility. The Shack-Hartmann alignment camera is, therefore, likely to be attractive as a development tool for segmented mirror telescopes, particularly at early stages of development in which the surface quality of developmental segments may be too poor to justify interferometric testing. The constraints that define the first-order properties of a Shack-Hartmann alignment camera are examined, and the precision and range of measurement one could expect to achieve with it are investigated. Fundamental constraints arise from consideration of geometrical imaging, diffraction, and the density of sampling of images at the detector array. Geometrical imaging determines the linear size of the image, and depends on the primary mirror diameter and the f-number of a lenslet. Diffraction is another constraint; it depends on the lenslet aperture. Finally, the sampling density at the detector array is important since the number of pixels in the image determines how accurately the centroid of the image can be measured. When these factors are considered under realistic assumptions it is apparent that the first-order design of a Shack-Hartmann alignment camera is completely determined by the constraints considered, and that in the case of a 20-meter telescope with seeing-limited imaging, such a camera, used with a suitable detector array, will achieve useful precision.
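
    The interplay of the three first-order constraints in the abstract (geometric image size, diffraction, and centroid sampling) can be sketched numerically. All parameter values below are illustrative assumptions for the sketch, not figures from the paper:

```python
import math

# Illustrative first-order sizing of a Shack-Hartmann alignment camera.
# Every number here is an assumption chosen for the example.
wavelength = 0.55e-6      # m, visible light
lenslet_aperture = 1e-3   # m
lenslet_focal = 50e-3     # m
seeing = 1.0 / 206265.0   # 1 arcsec of seeing, in radians
pixel_pitch = 10e-6       # m, detector pixel size
snr = 50.0                # assumed per-spot signal-to-noise ratio

# Diffraction constraint: Airy spot diameter set by the lenslet aperture.
diffraction_spot = 2.44 * wavelength * lenslet_focal / lenslet_aperture

# Geometric constraint: seeing-limited image scales with lenslet focal length.
seeing_spot = lenslet_focal * seeing

spot = max(diffraction_spot, seeing_spot)
pixels_per_spot = spot / pixel_pitch

# Sampling constraint: a common rule of thumb puts the centroid error
# near spot size / SNR for a well-sampled spot.
centroid_error_arcsec = (spot / snr) / lenslet_focal * 206265.0

print(f"spot = {spot * 1e6:.1f} um, {pixels_per_spot:.1f} px across")
print(f"centroid precision ~ {centroid_error_arcsec:.2f} arcsec")
```

With these toy numbers the lenslet diffraction term dominates the spot size, which is the regime where making the lenslets larger directly improves centroid precision.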

  10. Measurement of cosmic-ray muons with the Distributed Electronic Cosmic-ray Observatory, a network of smartphones

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, J.; BenZvi, S.; Bravo, S.; Jensen, K.; Karn, P.; Meehan, M.; Peacock, J.; Plewa, M.; Ruggles, T.; Santander, M.; Schultz, D.; Simons, A. L.; Tosi, D.

    2016-04-01

    Solid-state camera image sensors can be used to detect ionizing radiation in addition to optical photons. We describe the Distributed Electronic Cosmic-ray Observatory (DECO), an app and associated public database that enables a network of consumer devices to detect cosmic rays and other ionizing radiation. In addition to terrestrial background radiation, cosmic-ray muon candidate events are detected as long, straight tracks passing through multiple pixels. The distribution of track lengths can be related to the thickness of the active (depleted) region of the camera image sensor through the known angular distribution of muons at sea level. We use a sample of candidate muon events detected by DECO to measure the thickness of the depletion region of the camera image sensor in a particular consumer smartphone model, the HTC Wildfire S. The track length distribution is fit better by a cosmic-ray muon angular distribution than an isotropic distribution, demonstrating that DECO can detect and identify cosmic-ray muons despite a background of other particle detections. Using the cosmic-ray distribution, we measure the depletion thickness to be 26.3 ± 1.4 μm. With additional data, the same method can be applied to additional models of image sensor. Once measured, the thickness can be used to convert track length to incident polar angle on a per-event basis. Combined with a determination of the incident azimuthal angle directly from the track orientation in the sensor plane, this enables direction reconstruction of individual cosmic-ray events using a single consumer device. The results simultaneously validate the use of cell phone camera image sensors as cosmic-ray muon detectors and provide a measurement of a parameter of camera image sensor performance which is not otherwise publicly available.
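
    The per-event direction reconstruction described in the abstract follows from simple geometry: a muon crossing a depletion layer of thickness t at zenith angle theta leaves a horizontal track of length L = t tan(theta), so theta = atan(L / t). The thickness below is the paper's measured value; the track lengths are made-up example inputs:

```python
import math

# Depletion thickness measured in the paper for the HTC Wildfire S sensor.
DEPLETION_UM = 26.3

def polar_angle_deg(track_len_um, thickness_um=DEPLETION_UM):
    """Incident polar (zenith) angle implied by a horizontal track length."""
    return math.degrees(math.atan(track_len_um / thickness_um))

# Example track lengths (illustrative, not DECO data).
for L in (10.0, 26.3, 100.0):
    print(f"L = {L:5.1f} um -> theta = {polar_angle_deg(L):5.1f} deg")
```

A track as long as the depletion layer is thick corresponds to a 45° muon; combining this polar angle with the track's azimuth in the sensor plane gives the full direction, as the abstract notes.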

  11. LabVIEW Graphical User Interface for a New High Sensitivity, High Resolution Micro-Angio-Fluoroscopic and ROI-CBCT System

    PubMed Central

    Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S

    2008-01-01

    A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charge-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography acquisition (DSA), flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface along with the high frame-rate acquisition and display for this unique high-resolution detector should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image guided interventions. (Support: NIH Grants R01NS43924, R01EB002873) PMID:18836570

  12. LabVIEW Graphical User Interface for a New High Sensitivity, High Resolution Micro-Angio-Fluoroscopic and ROI-CBCT System.

    PubMed

    Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S

    2008-01-01

    A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charge-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography acquisition (DSA), flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface along with the high frame-rate acquisition and display for this unique high-resolution detector should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image guided interventions. (Support: NIH Grants R01NS43924, R01EB002873).

  13. System for photometric calibration of optoelectronic imaging devices especially streak cameras

    DOEpatents

    Boni, Robert; Jaanimagi, Paul

    2003-11-04

    A system for the photometric calibration of streak cameras and similar imaging devices provides a precise knowledge of the camera's flat-field response as well as a mapping of the geometric distortions. The system provides the flat-field response, representing the spatial variations in the sensitivity of the recorded output, with a signal-to-noise ratio (SNR) greater than can be achieved in a single submicrosecond streak record. The measurement of the flat-field response is carried out by illuminating the input slit of the streak camera with a signal that is uniform in space and constant in time. This signal is generated by passing a continuous wave source through an optical homogenizer made up of a light pipe or pipes in which the illumination typically makes several bounces before exiting as a spatially uniform source field. The rectangular cross-section of the homogenizer is matched to the usable photocathode area of the streak tube. The flat-field data set is obtained by using a slow streak ramp that may have a period from one millisecond (ms) to ten seconds (s), but may be nominally one second in duration. The system also provides a mapping of the geometric distortions, by spatially and temporarily modulating the output of the homogenizer and obtaining a data set using the slow streak ramps. All data sets are acquired using a CCD camera and stored on a computer, which is used to calculate all relevant corrections to the signal data sets. The signal and flat-field data sets are both corrected for geometric distortions prior to applying the flat-field correction. Absolute photometric calibration is obtained by measuring the output fluence of the homogenizer with a "standard-traceable" meter and relating that to the CCD pixel values for a self-corrected flat-field data set.
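
    The patent's order of operations matters: both the signal and flat-field data sets are corrected for geometric distortion first, and only then is the flat-field correction applied. A minimal sketch on toy "images" (nested lists); the distortion step is a placeholder identity function standing in for the real remapping:

```python
def correct_distortion(img):
    # Placeholder: a real implementation would remap pixels using the
    # measured geometric-distortion map.
    return [row[:] for row in img]

def flat_field_correct(signal, flat, eps=1e-12):
    """Divide out spatial sensitivity variations after distortion correction."""
    sig = correct_distortion(signal)
    ff = correct_distortion(flat)
    # Normalize the flat to unit mean so the correction preserves overall flux.
    mean = sum(sum(r) for r in ff) / (len(ff) * len(ff[0]))
    return [[s / max(f / mean, eps) for s, f in zip(srow, frow)]
            for srow, frow in zip(sig, ff)]

# Toy 2x2 example: a uniform 100-count scene seen through a detector
# whose sensitivity varies by +/-10% across the field.
signal = [[100.0, 90.0], [110.0, 100.0]]
flat = [[1.0, 0.9], [1.1, 1.0]]
corrected = flat_field_correct(signal, flat)
print(corrected)  # every pixel returns to ~100 once the pattern is divided out
```

The high SNR of the patent's flat field comes from the slow streak ramp: averaging over a long, constant illumination beats what a single sub-microsecond streak record can achieve.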

  14. Occupancy models for monitoring marine fish: a bayesian hierarchical approach to model imperfect detection with a novel gear combination.

    PubMed

    Coggins, Lewis G; Bacheler, Nathan M; Gwinn, Daniel C

    2014-01-01

    Occupancy models using incidence data collected repeatedly at sites across the range of a population are increasingly employed to infer patterns and processes influencing population distribution and dynamics. While such work is common in terrestrial systems, fewer examples exist in marine applications. This disparity likely exists because the replicate samples required by these models to account for imperfect detection are often impractical to obtain when surveying aquatic organisms, particularly fishes. We employ simultaneous sampling using fish traps and novel underwater camera observations to generate the requisite replicate samples for occupancy models of red snapper, a reef fish species. Since the replicate samples are collected simultaneously by multiple sampling devices, many typical problems encountered when obtaining replicate observations are avoided. Our results suggest that augmenting traditional fish trap sampling with camera observations not only doubled the probability of detecting red snapper in reef habitats off the Southeast coast of the United States, but supplied the necessary observations to infer factors influencing population distribution and abundance while accounting for imperfect detection. We found that detection probabilities tended to be higher for camera traps than traditional fish traps. Furthermore, camera trap detections were influenced by the current direction and turbidity of the water, indicating that collecting data on these variables is important for future monitoring. These models indicate that the distribution and abundance of this species is more heavily influenced by latitude and depth than by micro-scale reef characteristics lending credence to previous characterizations of red snapper as a reef habitat generalist. 
This study demonstrates the utility of simultaneous sampling devices, including camera traps, in aquatic environments to inform occupancy models and account for imperfect detection when describing factors influencing fish population distribution and dynamics.
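
    The gain from pairing two gears can be sketched with the standard formula for independent detections: if a trap and a camera detect independently with probabilities p_trap and p_cam at an occupied site, at least one of them detects with p = 1 - (1 - p_trap)(1 - p_cam). The probabilities below are illustrative, not the paper's estimates:

```python
def combined_detection(p_trap, p_cam):
    """Probability that at least one of two independent gears detects."""
    return 1.0 - (1.0 - p_trap) * (1.0 - p_cam)

# Illustrative values; the abstract reports cameras tended to detect
# better than traps, so p_cam is set higher here.
p_trap = 0.30
p_cam = 0.55
p_both = combined_detection(p_trap, p_cam)
print(f"trap alone: {p_trap:.2f}, camera alone: {p_cam:.2f}, combined: {p_both:.2f}")
```

Because the two gears sample simultaneously at the same site, they also satisfy the closure assumption of the occupancy model without requiring repeat visits.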

  15. Occupancy Models for Monitoring Marine Fish: A Bayesian Hierarchical Approach to Model Imperfect Detection with a Novel Gear Combination

    PubMed Central

    Coggins, Lewis G.; Bacheler, Nathan M.; Gwinn, Daniel C.

    2014-01-01

    Occupancy models using incidence data collected repeatedly at sites across the range of a population are increasingly employed to infer patterns and processes influencing population distribution and dynamics. While such work is common in terrestrial systems, fewer examples exist in marine applications. This disparity likely exists because the replicate samples required by these models to account for imperfect detection are often impractical to obtain when surveying aquatic organisms, particularly fishes. We employ simultaneous sampling using fish traps and novel underwater camera observations to generate the requisite replicate samples for occupancy models of red snapper, a reef fish species. Since the replicate samples are collected simultaneously by multiple sampling devices, many typical problems encountered when obtaining replicate observations are avoided. Our results suggest that augmenting traditional fish trap sampling with camera observations not only doubled the probability of detecting red snapper in reef habitats off the Southeast coast of the United States, but supplied the necessary observations to infer factors influencing population distribution and abundance while accounting for imperfect detection. We found that detection probabilities tended to be higher for camera traps than traditional fish traps. Furthermore, camera trap detections were influenced by the current direction and turbidity of the water, indicating that collecting data on these variables is important for future monitoring. These models indicate that the distribution and abundance of this species is more heavily influenced by latitude and depth than by micro-scale reef characteristics lending credence to previous characterizations of red snapper as a reef habitat generalist. 
This study demonstrates the utility of simultaneous sampling devices, including camera traps, in aquatic environments to inform occupancy models and account for imperfect detection when describing factors influencing fish population distribution and dynamics. PMID:25255325

  16. Interference of mobile phones and digitally enhanced cordless telecommunications mobile phones in renal scintigraphy.

    PubMed

    Stegmayr, Armin; Fessl, Benjamin; Hörtnagl, Richard; Marcadella, Michael; Perkhofer, Susanne

    2013-08-01

    The aim of the study was to assess the potential negative impact of cellular phones and digitally enhanced cordless telecommunication (DECT) devices on the quality of static and dynamic scintigraphy to avoid repeated testing in infant and teenage patients to protect them from unnecessary radiation exposure. The assessment was conducted by performing phantom measurements under real conditions. A functional renal-phantom acting as a pair of kidneys in dynamic scans was created. Data were collected using the setup of cellular phones and DECT phones placed in different positions in relation to a camera head to test the potential interference of cellular phones and DECT phones with the cameras. Cellular phones reproducibly interfered with the oldest type of gamma camera, which, because of its single-head specification, is the device most often used for renal examinations. Curves indicating the renal function were considerably disrupted; cellular phones as well as DECT phones showed a disturbance concerning static acquisition. Variable electromagnetic tolerance in different types of γ-cameras could be identified. Moreover, a straightforward, low-cost method of testing the susceptibility of equipment to interference caused by cellular phones and DECT phones was generated. Even though some departments use newer models of γ-cameras, which are less susceptible to electromagnetic interference, we recommend testing examination rooms to avoid any interference caused by cellular phones. The potential electromagnetic interference should be taken into account when the purchase of new sensitive medical equipment is being considered, not least because the technology of mobile communication is developing fast, which also means that different standards of wave bands will be issued in the future.

  17. Electronic cameras for low-light microscopy.

    PubMed

    Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith

    2013-01-01

    This chapter introduces electronic cameras, discusses the various parameters considered for evaluating their performance, and describes some of the key features of different camera formats. The chapter also explains the basic functioning of electronic cameras and how their properties can be exploited to optimize image quality under low-light conditions. Although there are many types of cameras available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of the signal-to-noise ratio and their spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video-rate imaging is required, slow-scan CCD cameras need not be considered. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high-intensity illumination. When video-rate imaging is required for very dim specimens, the electron multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. Variable-integration-time video cameras are very attractive options if one needs to acquire images at video-rate acquisition as well as with longer integration times for less bright samples. This flexibility can facilitate many diverse applications with highly varied light levels. Copyright © 2007 Elsevier Inc. All rights reserved.
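
    The chapter's camera-type recommendations follow from a back-of-envelope SNR model: shot noise, background, and read noise for a conventional CCD versus an EM-CCD whose gain g suppresses read noise at the cost of an excess noise factor F ~ sqrt(2). The numbers below are illustrative assumptions, not values from the chapter:

```python
import math

def snr_ccd(signal_e, background_e, read_noise_e):
    """Per-pixel SNR of a conventional CCD (shot + background + read noise)."""
    return signal_e / math.sqrt(signal_e + background_e + read_noise_e ** 2)

def snr_emccd(signal_e, background_e, read_noise_e, gain, enf=math.sqrt(2)):
    """EM-CCD: gain divides the read noise, excess noise multiplies shot noise."""
    eff_read = read_noise_e / gain
    return signal_e / math.sqrt(enf ** 2 * (signal_e + background_e) + eff_read ** 2)

S, B = 5.0, 1.0  # signal and background electrons per pixel per frame (assumed)
print(f"slow-scan CCD (3 e- read):   {snr_ccd(S, B, 3.0):.2f}")
print(f"video-rate CCD (25 e- read): {snr_ccd(S, B, 25.0):.2f}")
print(f"EMCCD (50 e- read, g=1000):  {snr_emccd(S, B, 50.0, 1000.0):.2f}")
```

At a few photons per pixel per frame the EMCCD beats a fast readout CCD despite its large raw read noise, which is exactly the regime where the chapter recommends it.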

  18. Real Time Measures of Effectiveness

    DOT National Transportation Integrated Search

    2003-06-01

    This report describes research that is focused on identifying and determining methods for automatically computing measures of effectiveness (MOEs) when supplied with real time information. The MOEs, along with detection devices such as cameras, roadw...

  19. Evaluation of sensors for inputting data in exergames for the elderly.

    PubMed

    Hors-Fraile, Santiago; Browne, James; Brox, Ellen; Evertsen, Gunn

    2013-01-01

    We aim to determine which off-the-shelf motion-sensor device is the most suitable for extensive use in open-source PC exergames for the elderly. To answer this, we studied the specifications of the market-available sensors to reduce the initial, broad set of sensors to only two candidates: the Nintendo Wii controllers and the Microsoft© Kinect™ camera. The capabilities of these two were tested with a demo implementation. We take into account both the accuracy of the sensors' movement detection and software-related issues. Our outcome indicates that the Microsoft© Kinect™ camera currently provides the best solution for our purpose. This study can help researchers choose the device that best suits their project needs, removing the sensor-selection task from their schedule.

  20. Minimalist identification system based on venous map for security applications

    NASA Astrophysics Data System (ADS)

    Jacinto G., Edwar; Martínez S., Fredy; Martínez S., Fernando

    2015-07-01

    This paper proposes a technique and an algorithm used to build a device for identifying people by processing a low-resolution camera image. The infrared channel is the only information needed: sensing the blood's response at the proper wavelength yields a preliminary snapshot of the vascular map of the back of the hand. The software uses this information to extract the user's characteristics in a limited area (region of interest, ROI), unique to each user, making it applicable to biometric access-control devices. Recognition prototypes of this kind are usually expensive, but in this minimalist design the biometric equipment uses only a low-cost camera and an adapted matrix of IR emitters, yielding an economical and versatile prototype without sacrificing the high effectiveness that characterizes this identification method.

  1. Design, implementation and accuracy of a prototype for medical augmented reality.

    PubMed

    Pandya, Abhilash; Siadat, Mohammad-Reza; Auner, Greg

    2005-01-01

    This paper is focused on prototype development and accuracy evaluation of a medical Augmented Reality (AR) system. The accuracy of such a system is of critical importance for medical use, and is hence considered in detail. We analyze the individual error contributions and the system accuracy of the prototype. A passive articulated arm is used to track a calibrated end-effector-mounted video camera. The live video view is superimposed in real time with the synchronized graphical view of CT-derived segmented object(s) of interest within a phantom skull. The AR accuracy mostly depends on the accuracy of the tracking technology, the registration procedure, the camera calibration, and the image scanning device (e.g., a CT or MRI scanner). The accuracy of the Microscribe arm was measured to be 0.87 mm. After mounting the camera on the tracking device, the AR accuracy was measured to be 2.74 mm on average (standard deviation = 0.81 mm). After using data from a 2-mm-thick CT scan, the AR error remained essentially the same at an average of 2.75 mm (standard deviation = 1.19 mm). For neurosurgery, the acceptable error is approximately 2-3 mm, and our prototype approaches these accuracy requirements. The accuracy could be increased with a higher-fidelity tracking system and improved calibration and object registration. The design and methods of this prototype device can be extrapolated to current medical robotics (due to the kinematic similarity) and neuronavigation systems.
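
    Independent error sources in an AR pipeline like this are often combined in quadrature (root-sum-square). The tracker term below is the abstract's measured value; the calibration and registration terms are made-up residuals chosen only to illustrate how component errors could add up to roughly the observed ~2.74 mm system error:

```python
import math

def rss(*errors_mm):
    """Root-sum-square combination of independent error sources."""
    return math.sqrt(sum(e * e for e in errors_mm))

tracker = 0.87        # mm, measured Microscribe arm accuracy (from the abstract)
camera_cal = 1.8      # mm, assumed camera-calibration residual (illustrative)
registration = 1.9    # mm, assumed object-registration residual (illustrative)

total = rss(tracker, camera_cal, registration)
print(f"predicted system error ~ {total:.2f} mm")
```

The decomposition also shows why the authors point to calibration and registration, not the arm, as the levers for reaching the 2-3 mm neurosurgical requirement.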

  2. THE DARK ENERGY CAMERA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flaugher, B.; Diehl, H. T.; Alvarez, O.

    2015-11-15

    The Dark Energy Camera is a new imager with a 2.°2 diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.″263 pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6–9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
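
    The quoted numbers can be cross-checked with the standard plate-scale relation s = 206265 · p / f, where s is the scale in arcsec/pixel, p the pixel pitch, and f the effective focal length. Inverting it gives the effective focal length of the Blanco prime focus with the corrector (a derived figure, not one stated in the abstract):

```python
ARCSEC_PER_RAD = 206265.0   # arcseconds per radian
pixel_pitch_m = 15e-6       # 15 um pixels (from the abstract)
plate_scale = 0.263         # arcsec per pixel (from the abstract)

# s = ARCSEC_PER_RAD * p / f  =>  f = ARCSEC_PER_RAD * p / s
focal_length_m = ARCSEC_PER_RAD * pixel_pitch_m / plate_scale
print(f"effective focal length ~ {focal_length_m:.2f} m")
```

The result, about 11.8 m, is consistent with a fast (~f/2.9) prime-focus corrector on a 4 m primary.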

  3. Visible camera imaging of plasmas in Proto-MPEX

    NASA Astrophysics Data System (ADS)

    Mosby, R.; Skeen, C.; Biewer, T. M.; Renfro, R.; Ray, H.; Shaw, G. C.

    2015-11-01

    The prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). This machine plans to study plasma-material interaction (PMI) physics relevant to future fusion reactors. Measurements of plasma light emission will be made on Proto-MPEX using fast, visible framing cameras. The cameras utilize a global shutter, which allows a full frame image of the plasma to be captured and compared at multiple times during the plasma discharge. Typical exposure times are ~10-100 microseconds. The cameras are capable of capturing images at up to 18,000 frames per second (fps). However, the frame rate is strongly dependent on the size of the ``region of interest'' that is sampled. The maximum ROI corresponds to the full detector area, of ~1000x1000 pixels. The cameras have an internal gain, which controls the sensitivity of the 10-bit detector. The detector includes a Bayer filter, for ``true-color'' imaging of the plasma emission. This presentation will examine the optimized camera settings for use on Proto-MPEX. This work was supported by the US. D.O.E. contract DE-AC05-00OR22725.

  4. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing mode of a novel action camera, the GoPro Hero 3 camera that can provide still images up to 12 Mp and video up 8 Mp resolution. PMID:25237898

  5. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing mode of a novel action camera, the GoPro Hero 3 camera that can provide still images up to 12 Mp and video up 8 Mp resolution.
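
    The OpenCV-based calibration described above estimates, among other parameters, the radial distortion coefficients of the Brown-Conrady model; the wide-angle lens of an action camera makes these terms large, which is why careful self-calibration is required. A pure-Python sketch of the model itself (the coefficient values are made up, not GoPro values):

```python
def distort(x, y, k1, k2):
    """Map ideal normalized image coordinates to radially distorted ones."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

def undistort(xd, yd, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration (no closed form)."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / factor, yd / factor
    return x, y

k1, k2 = -0.25, 0.03          # barrel distortion, illustrative values
xd, yd = distort(0.4, 0.3, k1, k2)
x, y = undistort(xd, yd, k1, k2)
print(f"distorted: ({xd:.4f}, {yd:.4f}) -> recovered: ({x:.4f}, {y:.4f})")
```

Calibration software fits k1, k2 (plus tangential and intrinsic parameters) so that straight scene lines map to straight image lines; producing the "undistorted scenes" mentioned in the abstract is then just applying the inverse mapping to every pixel.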

  6. Technical and instrumental prerequisites for single-port laparoscopic solo surgery: State of art

    PubMed Central

    Kim, Say-June; Lee, Sang Chul

    2015-01-01

With the aid of advanced surgical techniques and instruments, single-port laparoscopic surgery (SPLS) can be accomplished with just two surgical members: an operator and a camera assistant. Under these circumstances, the reasonable replacement of the human camera assistant by a mechanical camera holder has resulted in a new surgical procedure termed single-port solo surgery (SPSS). In SPSS, the fixation and coordinated movement of a camera held by mechanical devices provides fixed and stable operative images that are under the control of the operator. SPSS therefore primarily benefits from preserving the operator's eye-to-hand coordination. Because SPSS is an intuitive modification of SPLS, the indications for SPSS are the same as those for SPLS. Though SPSS necessitates more actions than surgery with a human assistant, these difficulties seem to be easily overcome by the more stable operative images and the reduced need for lens cleaning and camera repositioning. When the operation is expected to be difficult and demanding, the SPSS process can be assisted by the addition of another instrument holder besides the camera holder. PMID:25914453

  7. ATTICA family of thermal cameras in submarine applications

    NASA Astrophysics Data System (ADS)

    Kuerbitz, Gunther; Fritze, Joerg; Hoefft, Jens-Rainer; Ruf, Berthold

    2001-10-01

Optronics Mast Systems (US: Photonics Mast Systems) are electro-optical devices which enable a submarine crew to observe the scenery above water while submerged. Unlike classical submarine periscopes they are non-hull-penetrating and therefore have no direct viewing capability. Typically they have electro-optical cameras for both the visual and an IR spectral band, with panoramic view and a stabilized line of sight. They can optionally be equipped with laser range-finders, antennas, etc. The brand name ATTICA (Advanced Two-dimensional Thermal Imager with CMOS-Array) characterizes a family of thermal cameras using focal-plane-array (FPA) detectors which can be tailored to a variety of requirements. The modular design of the ATTICA components allows the use of various detectors (InSb, CMT 3...5 μm, CMT 7...11 μm) for specific applications. By means of a microscanner, ATTICA cameras achieve full standard TV resolution using detectors with only 288 X 384 (US: 240 X 320) detector elements. A typical requirement for Optronics-Mast Systems is a Quick-Look-Around capability. For FPA cameras this implies the need for a 'descan' module, which can be incorporated in the ATTICA cameras without complications.

  8. Development of an all-in-one gamma camera/CCD system for safeguard verification

    NASA Astrophysics Data System (ADS)

    Kim, Hyun-Il; An, Su Jung; Chung, Yong Hyun; Kwak, Sung-Woo

    2014-12-01

For the purpose of monitoring and verifying efforts at safeguarding radioactive materials in various fields, a new all-in-one gamma camera/charge-coupled device (CCD) system was developed. This combined system consists of a gamma camera, which gathers energy and position information on gamma-ray sources, and a CCD camera, which identifies the specific location in a monitored area. Therefore, 2-D image information and quantitative information regarding gamma-ray sources can be obtained using fused images. The gamma camera consists of a diverging collimator, a 22 × 22 array of CsI(Na) pixelated scintillation crystals with a pixel size of 2 × 2 × 6 mm³, and a Hamamatsu H8500 position-sensitive photomultiplier tube (PSPMT). The Basler scA640-70gc CCD camera, which delivers 70 frames per second at video graphics array (VGA) resolution, was employed. Performance testing was performed using a Co-57 point source 30 cm from the detector. The measured spatial resolution and sensitivity were 4.77 mm full width at half maximum (FWHM) and 7.78 cps/MBq, respectively. The energy resolution was 18% at 122 keV. These results demonstrate that the combined system has considerable potential for radiation monitoring.
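The resolution and sensitivity figures quoted above follow from two standard definitions; a minimal sketch with illustrative numbers (the measurement details are not given in the abstract):

```python
import math

def fwhm_from_sigma(sigma_mm):
    # For a Gaussian point-spread function, FWHM = 2*sqrt(2*ln 2)*sigma
    # (about 2.355*sigma); the abstract reports 4.77 mm FWHM.
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma_mm

def sensitivity_cps_per_mbq(counts, live_time_s, activity_mbq):
    # Sensitivity figure of merit: measured count rate per unit source
    # activity, in counts per second per MBq (7.78 cps/MBq above).
    return counts / live_time_s / activity_mbq
```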

  9. JPRS Report Science & Technology Japan

    DTIC Science & Technology

    1989-06-02

Electronics: •Superconducting Wiring in LSI •One Wafer Computer •Josephson Devices •SQUID Devices •Infrared Sensor •Magnetic Sensor •Superconducting... Guinier-de Wolff monochromatic focusing camera (CoKα radiation) and with a Philips APD-10 auto-powder diffractometer (CuKα radiation). Pure Si was used as... crystallized and smooth surface. The values indicated in Fig. 2 were the thicknesses monitored by a quartz oscillating sensor located near the

  10. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    NASA Astrophysics Data System (ADS)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology and computer hardware have boosted the field and led to several commercially available ready-to-use cameras. Beyond their popular option of a posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single-camera imagery represents a very beneficial option for certain applications. The paper will first present some fundamentals on the design and history of plenoptic cameras and will describe depth determination from plenoptic camera image data. It will then present an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close range applications, we will focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors in the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for the application fields of real-time robotics like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.
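The quadratic deterioration of depth accuracy with range noted above is what first-order error propagation predicts for any disparity-based depth measurement. A generic triangulation sketch (not the plenoptic camera's actual calibration; all parameter values are illustrative):

```python
def depth_std(z_m, baseline_m, focal_px, sigma_disp_px):
    """For depth from disparity d = f*b/z, first-order error propagation
    gives sigma_z = z**2 / (f*b) * sigma_d: the depth error grows with
    the square of the range, so doubling the distance quadruples the
    expected error."""
    return z_m * z_m / (focal_px * baseline_m) * sigma_disp_px
```

For example, with a fixed disparity noise, the predicted error at 60 m is four times the error at 30 m, consistent with the rapid loss of accuracy at the 30-100 m ranges studied.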

  11. Inferred UV Fluence Focal-Spot Profiles from Soft X-Ray Pinhole Camera Measurements on OMEGA

    NASA Astrophysics Data System (ADS)

    Theobald, W.; Sorce, C.; Epstein, R.; Keck, R. L.; Kellogg, C.; Kessler, T. J.; Kwiatkowski, J.; Marshall, F. J.; Seka, W.; Shvydky, A.; Stoeckl, C.

    2017-10-01

The drive uniformity of OMEGA cryogenic implosions is affected by UV beam-fluence variations on target, which require careful monitoring at full laser power. This is routinely performed with multiple pinhole cameras equipped with charge-injection devices (CIDs) that record the x-ray emission in the 3- to 7-keV photon energy range from an Au-coated target. The technique relies on knowledge of the relation between the x-ray fluence Fx and the UV fluence FUV, Fx ∝ FUV^γ, with a measured γ = 3.42 for the CID-based diagnostic and a 1-ns laser pulse. It is demonstrated here that using a back-thinned charge-coupled-device camera with softer filtration for x-rays with photon energies <2 keV and a well-calibrated pinhole provides a lower γ ≈ 2 and a larger dynamic range in the measured UV fluence. Inferred UV fluence profiles were measured for 100-ps and 1-ns laser pulses and were compared to directly measured profiles from a UV equivalent-target-plane diagnostic. Good agreement between the two techniques is reported for selected beams. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
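Inferring the UV fluence from the measured x-ray fluence amounts to inverting the power law Fx ∝ FUV^γ. A minimal sketch, with an assumed normalization point since the abstract gives only the exponents:

```python
def uv_fluence(f_x, gamma, f_x_ref=1.0, f_uv_ref=1.0):
    """Invert the power law F_x = C * F_UV**gamma, with the constant C
    fixed by a reference point (f_x_ref, f_uv_ref). Reference values
    here are illustrative placeholders."""
    return f_uv_ref * (f_x / f_x_ref) ** (1.0 / gamma)

# A span of D decades in x-ray fluence maps to D/gamma decades in UV
# fluence, so the CCD diagnostic's lower gamma ~ 2 covers a larger UV
# dynamic range than the CID diagnostic's gamma = 3.42.
```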

  12. Deep-UV-sensitive high-frame-rate backside-illuminated CCD camera developments

    NASA Astrophysics Data System (ADS)

    Dawson, Robin M.; Andreas, Robert; Andrews, James T.; Bhaskaran, Mahalingham; Farkas, Robert; Furst, David; Gershstein, Sergey; Grygon, Mark S.; Levine, Peter A.; Meray, Grazyna M.; O'Neal, Michael; Perna, Steve N.; Proefrock, Donald; Reale, Michael; Soydan, Ramazan; Sudol, Thomas M.; Swain, Pradyumna K.; Tower, John R.; Zanzucchi, Pete

    2002-04-01

New applications for ultra-violet imaging are emerging in the fields of drug discovery and industrial inspection. High throughput is critical for these applications, where millions of drug combinations are analyzed in secondary screenings or high-rate inspection of small feature sizes over large areas is required. Sarnoff demonstrated in 1990 a back-illuminated, 1024 X 1024, 18 μm pixel, split-frame-transfer device running at > 150 frames per second with high sensitivity in the visible spectrum. Sarnoff designed, fabricated and delivered cameras based on these CCDs and is now extending this technology to devices with higher pixel counts and higher frame rates through CCD architectural enhancements. The high sensitivities obtained in the visible spectrum are being pushed into the deep UV to support these new medical and industrial inspection applications. Sarnoff has achieved measured quantum efficiencies > 55% at 193 nm, rising to 65% at 300 nm, and remaining almost constant out to 750 nm. Optimization of the sensitivity is being pursued to tailor the quantum efficiency for particular wavelengths. Characteristics of these high frame rate CCDs and cameras will be described and results will be presented demonstrating high UV sensitivity down to 150 nm.

  13. A reference estimator based on composite sensor pattern noise for source device identification

    NASA Astrophysics Data System (ADS)

    Li, Ruizhe; Li, Chang-Tsun; Guan, Yu

    2014-02-01

It has been proved that Sensor Pattern Noise (SPN) can serve as an imaging device fingerprint for source camera identification. Reference SPN estimation is a very important procedure within this application. Most previous works built the reference SPN by averaging the SPNs extracted from 50 images of blue sky. However, this method can be problematic. Firstly, in practice we may face the problem of source camera identification in the absence of the imaging cameras and reference SPNs, which means only natural images with scene details are available for reference SPN estimation rather than blue-sky images. This is challenging because the reference SPN can be severely contaminated by image content. Secondly, the number of available reference images is sometimes too small for existing methods to estimate a reliable reference SPN. In fact, existing methods lack consideration of the number of available reference images, as they were designed for datasets with abundant images. To deal with these problems, a novel reference estimator is proposed in this work. Experimental results show that the proposed method achieves better performance than methods based on the averaged reference SPN, especially when few reference images are used.
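The baseline that the paper improves on is the plain pixel-wise average of noise residuals. A minimal sketch of that baseline (the paper's own estimator is more elaborate and is not reproduced here); residuals are assumed to be image-minus-denoised arrays of equal size:

```python
def reference_spn(residuals):
    """Baseline reference SPN estimator: pixel-wise average of the noise
    residuals (image minus its denoised version) extracted from the
    available reference images. With few or content-heavy reference
    images this average stays contaminated by scene detail, which is
    the problem the paper addresses."""
    n = len(residuals)
    rows, cols = len(residuals[0]), len(residuals[0][0])
    return [[sum(r[i][j] for r in residuals) / n for j in range(cols)]
            for i in range(rows)]
```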

  14. Progress towards a semiconductor Compton camera for prompt gamma imaging during proton beam therapy for range and dose verification

    NASA Astrophysics Data System (ADS)

    Gutierrez, A.; Baker, C.; Boston, H.; Chung, S.; Judson, D. S.; Kacperek, A.; Le Crom, B.; Moss, R.; Royle, G.; Speller, R.; Boston, A. J.

    2018-01-01

    The main objective of this work is to test a new semiconductor Compton camera for prompt gamma imaging. Our device is composed of three active layers: a Si(Li) detector as a scatterer and two high purity Germanium detectors as absorbers of high-energy gamma rays. We performed Monte Carlo simulations using the Geant4 toolkit to characterise the expected gamma field during proton beam therapy and have made experimental measurements of the gamma spectrum with a 60 MeV passive scattering beam irradiating a phantom. In this proceeding, we describe the status of the Compton camera and present the first preliminary measurements with radioactive sources and their corresponding reconstructed images.
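Image reconstruction in a Compton camera rests on standard Compton kinematics: each scatter-absorb coincidence constrains the source to a cone whose opening angle follows from the deposited energies. A minimal sketch (energies below are illustrative, not the measured prompt-gamma spectrum):

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy in keV

def cone_angle_deg(e0_kev, e_scatter_kev):
    """Compton-cone opening angle for a photon of total energy e0 that
    deposits e_scatter in the scatterer and is then fully absorbed:
    cos(theta) = 1 - m_e c^2 * (1/E' - 1/E0), with E' = E0 - e_scatter
    the energy of the scattered photon."""
    e1 = e0_kev - e_scatter_kev
    cos_t = 1.0 - M_E_C2_KEV * (1.0 / e1 - 1.0 / e0_kev)
    return math.degrees(math.acos(cos_t))
```

Intersecting many such cones yields the reconstructed source image reported in the proceeding.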

  15. System Architecture of the Dark Energy Survey Camera Readout Electronics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Theresa; Ballester, Otger

    2010-05-27

The Dark Energy Survey makes use of a new camera, the Dark Energy Camera (DECam). DECam will be installed in the Blanco 4M telescope at Cerro Tololo Inter-American Observatory (CTIO). DECam is presently under construction and is expected to be ready for observations in the fall of 2011. The focal plane will make use of 62 2Kx4K fully depleted Charge-Coupled Devices (CCDs) for imaging and 12 2Kx2K CCDs for guiding, alignment and focus. This paper will describe design considerations of the system, including the entire signal path used to read out the CCDs, the development of a custom crate and backplane, the overall grounding scheme, and early results of system tests.

  16. Laser-Camera Vision Sensing for Spacecraft Mobile Robot Navigation

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Khalil, Ahmad S.; Dorais, Gregory A.; Gawdiak, Yuri

    2002-01-01

The advent of spacecraft mobile robots (free-flying sensor platforms and communications devices intended to accompany astronauts or operate remotely on space missions, both inside and outside a spacecraft) has demanded the development of a simple and effective navigation schema. One such system under exploration involves the use of a laser-camera arrangement to predict the relative positioning of the mobile robot. By projecting laser beams from the robot, a 3D reference frame can be introduced. Thus, as the robot shifts in position, the position reference frame produced by the laser images is correspondingly altered. Using the normalization and camera registration techniques presented in this paper, the relative translation and rotation of the robot in 3D are determined from these reference frame transformations.

  17. The Atacama Cosmology Telescope: Instrument

    NASA Astrophysics Data System (ADS)

    Thornton, Robert J.; Atacama Cosmology Telescope Team

    2010-01-01

    The 6-meter Atacama Cosmology Telescope (ACT) is making detailed maps of the Cosmic Microwave Background at Cerro Toco in northern Chile. In this talk, I focus on the design and operation of the telescope and its commissioning instrument, the Millimeter Bolometer Array Camera. The camera contains three independent sets of optics that operate at 148 GHz, 217 GHz, and 277 GHz with arcminute resolution, each of which couples to a 1024-element array of Transition Edge Sensor (TES) bolometers. I will report on the camera performance, including the beam patterns, optical efficiencies, and detector sensitivities. Under development for ACT is a new polarimeter based on feedhorn-coupled TES devices that have improved sensitivity and are planned to operate at 0.1 K.

  18. Super-resolution in a defocused plenoptic camera: a wave-optics-based approach.

    PubMed

    Sahin, Erdem; Katkovnik, Vladimir; Gotchev, Atanas

    2016-03-01

    Plenoptic cameras enable the capture of a light field with a single device. However, with traditional light field rendering procedures, they can provide only low-resolution two-dimensional images. Super-resolution is considered to overcome this drawback. In this study, we present a super-resolution method for the defocused plenoptic camera (Plenoptic 1.0), where the imaging system is modeled using wave optics principles and utilizing low-resolution depth information of the scene. We are particularly interested in super-resolution of in-focus and near in-focus scene regions, which constitute the most challenging cases. The simulation results show that the employed wave-optics model makes super-resolution possible for such regions as long as sufficiently accurate depth information is available.

  19. Representing videos in tangible products

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones and, increasingly, so-called action cameras mounted on sports devices. The implementation of videos by generating QR codes and relevant pictures out of the video stream via a software implementation was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from the video in order to represent it, the positions in the book, and different design strategies compared to regular books.

  20. A teledentistry system for the second opinion.

    PubMed

    Gambino, Orazio; Lima, Fausto; Pirrone, Roberto; Ardizzone, Edoardo; Campisi, Giuseppina; di Fede, Olga

    2014-01-01

In this paper we present a Teledentistry system aimed at the Second Opinion task. It makes use of a particular camera, called an intra-oral or dental camera, to perform photo shooting and real-time video of the inner part of the mouth. The pictures acquired by the Operator with such a device are sent to the Oral Medicine Expert (OME) by means of a standard File Transfer Protocol (FTP) service, and the real-time video is channeled into a video stream using the VideoLAN client/server (VLC) application. The system is composed of HTML5 web pages generated by PHP and allows the Second Opinion to be performed both when the Operator and the OME are logged in and when one of them is offline.

  1. NPS assessment of color medical displays using a monochromatic CCD camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2012-02-01

    This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. The R, G and B color uniform patterns were shown on the display under study and the images were taken using a high resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B and the dark screen images. Finally the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
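The synthetic intensity image described above is a pixel-wise weighted combination of the four captures. A minimal sketch; the weight vector would come from the colorimeter calibration, and the values used below are illustrative placeholders, not the paper's calibration:

```python
def synthetic_intensity(r_img, g_img, b_img, dark_img, w):
    """Form the synthetic intensity image as the weighted sum of the
    R, G, B and dark-screen captures, with weights w = (w_r, w_g, w_b,
    w_d) taken from a colorimeter calibration. NPS analysis is then run
    on the resulting image."""
    w_r, w_g, w_b, w_d = w
    return [[w_r * r + w_g * g + w_b * b + w_d * d
             for r, g, b, d in zip(rr, gg, bb, dd)]
            for rr, gg, bb, dd in zip(r_img, g_img, b_img, dark_img)]
```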

  2. NPS assessment of color medical image displays using a monochromatic CCD camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2012-10-01

This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. The R, G and B color uniform patterns were shown on the display under study and the images were taken using a high resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B and the dark screen images. Finally the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.

  3. Investigation of solar active regions at high resolution by balloon flights of the solar optical universal polarimeter, extended definition phase

    NASA Technical Reports Server (NTRS)

    Tarbell, Theodore D.

    1993-01-01

    Technical studies of the feasibility of balloon flights of the former Spacelab instrument, the Solar Optical Universal Polarimeter, with a modern charge-coupled device (CCD) camera, to study the structure and evolution of solar active regions at high resolution, are reviewed. In particular, different CCD cameras were used at ground-based solar observatories with the SOUP filter, to evaluate their performance and collect high resolution images. High resolution movies of the photosphere and chromosphere were successfully obtained using four different CCD cameras. Some of this data was collected in coordinated observations with the Yohkoh satellite during May-July, 1992, and they are being analyzed scientifically along with simultaneous X-ray observations.

  4. Laser-induced damage threshold of camera sensors and micro-optoelectromechanical systems

    NASA Astrophysics Data System (ADS)

    Schwarz, Bastian; Ritt, Gunnar; Koerber, Michael; Eberle, Bernd

    2017-03-01

The continuous development of laser systems toward more compact and efficient devices constitutes an increasing threat to electro-optical imaging sensors, such as complementary metal-oxide-semiconductor (CMOS) sensors and charge-coupled devices. These types of electronic sensors are used in day-to-day life but also in military or civil security applications. In camera systems dedicated to specific tasks, micro-optoelectromechanical systems, such as a digital micromirror device (DMD), are part of the optical setup. In such systems, the DMD can be located at an intermediate focal plane of the optics and it is also susceptible to laser damage. The goal of our work is to enhance the knowledge of damaging effects on such devices exposed to laser light. The experimental setup for the investigation of laser-induced damage is described in detail. As laser sources, both pulsed lasers and continuous-wave (CW) lasers are used. The laser-induced damage threshold is determined by the single-shot method, increasing the pulse energy from pulse to pulse or, in the case of CW lasers, increasing the laser power. Furthermore, we investigate the morphology of laser-induced damage patterns and the dependence of the number of destroyed device elements on the laser pulse energy or laser power. In addition to the destruction of single pixels, we observe aftereffects, such as persistent dead columns or rows of pixels in the sensor image.
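Laser-induced damage thresholds are commonly quoted as a peak fluence rather than a pulse energy, since the spot size on the sensor matters. A minimal sketch of the usual Gaussian-beam conversion (the paper's exact metric and beam parameters are assumptions here):

```python
import math

def peak_fluence(pulse_energy_j, w_cm):
    # Peak fluence of a Gaussian beam with 1/e^2 radius w:
    # F = 2 * E / (pi * w**2), in J/cm^2 when E is in joules and w in cm.
    # This is the standard way single-shot damage thresholds are reported.
    return 2.0 * pulse_energy_j / (math.pi * w_cm ** 2)
```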

  5. Experimental Characterization of Close-Emitter Interference in an Optical Camera Communication System

    PubMed Central

    Chavez-Burbano, Patricia; Rabadan, Jose; Perez-Jimenez, Rafael

    2017-01-01

Due to the massive insertion of embedded cameras in a wide variety of devices and the generalized use of LED lamps, Optical Camera Communication (OCC) has been proposed as a practical solution for future Internet of Things (IoT) and smart cities applications. The influence of mobility, weather conditions, solar radiation interference, and external light sources over Visible Light Communication (VLC) schemes has been addressed in previous works. Some authors have studied the spatial intersymbol interference from close emitters within an OCC system; however, it has not been characterized or measured as a function of the different transmitted wavelengths. In this work, this interference has been experimentally characterized, and the Normalized Power Signal to Interference Ratio (NPSIR), for easily determining the interference in other implementations independently of the selected system devices, has also been proposed. A set of experiments in a darkroom, working with RGB multi-LED transmitters and a general purpose camera, was performed in order to obtain the NPSIR values and to validate the deduced equations for 2D pixel representation of real distances. These parameters were used in the simulation of a wireless sensor network scenario in a small office, where the Bit Error Rate (BER) of the communication link was calculated. The experiments show that the interference of other close emitters in terms of the distance and the used wavelength can be easily determined with the NPSIR. Finally, the simulation validates the applicability of the deduced equations for scaling the initial results into real scenarios. PMID:28677613

  6. Experimental Characterization of Close-Emitter Interference in an Optical Camera Communication System.

    PubMed

    Chavez-Burbano, Patricia; Guerra, Victor; Rabadan, Jose; Rodríguez-Esparragón, Dionisio; Perez-Jimenez, Rafael

    2017-07-04

Due to the massive insertion of embedded cameras in a wide variety of devices and the generalized use of LED lamps, Optical Camera Communication (OCC) has been proposed as a practical solution for future Internet of Things (IoT) and smart cities applications. The influence of mobility, weather conditions, solar radiation interference, and external light sources over Visible Light Communication (VLC) schemes has been addressed in previous works. Some authors have studied the spatial intersymbol interference from close emitters within an OCC system; however, it has not been characterized or measured as a function of the different transmitted wavelengths. In this work, this interference has been experimentally characterized, and the Normalized Power Signal to Interference Ratio (NPSIR), for easily determining the interference in other implementations independently of the selected system devices, has also been proposed. A set of experiments in a darkroom, working with RGB multi-LED transmitters and a general purpose camera, was performed in order to obtain the NPSIR values and to validate the deduced equations for 2D pixel representation of real distances. These parameters were used in the simulation of a wireless sensor network scenario in a small office, where the Bit Error Rate (BER) of the communication link was calculated. The experiments show that the interference of other close emitters in terms of the distance and the used wavelength can be easily determined with the NPSIR. Finally, the simulation validates the applicability of the deduced equations for scaling the initial results into real scenarios.

  7. 21 CFR 886.5820 - Closed-circuit television reading system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... of a lens, video camera, and video monitor that is intended for use by a patient who has subnormal vision to magnify reading material. (b) Classification. Class I (general controls). The device is exempt...

  8. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    NASA Astrophysics Data System (ADS)

    Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.

    2016-06-01

Recently, aerial photography with unmanned aerial vehicle (UAV) systems has used UAVs and remote control through a ground control system connected over a roughly 430 MHz radio-frequency (RF) modem. However, this RF-modem approach has limitations in long-distance communication. Using a smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi, we implemented a UAV communication module system and carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image-capturing device for drones, for areas that require image capture, together with software for loading and managing the smart camera. The system is composed of automatic shooting using the smart camera's sensors and a shooting catalog that manages captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using a smart camera as the payload for a photogrammetric UAV system. The open source tools used included Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  9. Acoustic and Cavitation Fields of Shock Wave Therapy Devices

    NASA Astrophysics Data System (ADS)

    Chitnis, Parag V.; Cleveland, Robin O.

    2006-05-01

Extracorporeal shock wave therapy (ESWT) is considered a viable treatment modality for orthopedic ailments. Despite increasing clinical use, the mechanisms by which ESWT devices generate a therapeutic effect are not yet understood. The mechanistic differences in various devices and their efficacies might be dependent on their acoustic and cavitation outputs. We report acoustic and cavitation measurements of a number of different shock wave therapy devices. Two devices were electrohydraulic: one had a large reflector (HMT Ossatron) and the other was a hand-held source (HMT Evotron); the other device was a pneumatically driven device (EMS Swiss DolorClast Vet). Acoustic measurements were made using a fiber-optic probe hydrophone and a PVDF hydrophone. A dual passive cavitation detection system was used to monitor cavitation activity. Qualitative differences between these devices were also highlighted using a high-speed camera. We found that the Ossatron generated focused shock waves with a peak positive pressure around 40 MPa. The Evotron produced peak positive pressure around 20 MPa; however, its acoustic output appeared to be independent of the power setting of the device. The peak positive pressure from the DolorClast was about 5 MPa without a clear shock front. The DolorClast did not generate a focused acoustic field. Shadowgraph images show that the wave propagating from the DolorClast is planar and not focused in the vicinity of the hand-piece. All three devices produced measurable cavitation with a characteristic time (cavitation inception to bubble collapse) that varied between 95 and 209 μs for the Ossatron, between 59 and 283 μs for the Evotron, and between 195 and 431 μs for the DolorClast. The high-speed camera images show that the cavitation activity for the DolorClast is primarily restricted to the contact surface of the hand-piece. These data indicate that the devices studied here vary in acoustic and cavitation output, which may imply that the mechanisms by which they generate therapeutic effects are different.

  10. Predicting Electron Population Characteristics in 2-D Using Multispectral Ground-Based Imaging

    NASA Astrophysics Data System (ADS)

    Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Jahn, Jorg-Micha

    2018-01-01

    Ground-based imaging and in situ sounding rocket data are compared to electron transport modeling for an active inverted-V type auroral event. The Ground-to-Rocket Electrodynamics-Electrons Correlative Experiment (GREECE) mission successfully launched from Poker Flat, Alaska, on 3 March 2014 at 11:09:50 UT and reached an apogee of approximately 335 km over the aurora. Multiple ground-based electron-multiplying charge-coupled device (EMCCD) imagers were positioned at Venetie, Alaska, and aimed toward magnetic zenith. The imagers observed the intensity of different auroral emission lines (427.8, 557.7, and 844.6 nm) at the magnetic foot point of the rocket payload. Emission line intensity data are correlated with electron characteristics measured by the GREECE onboard electron spectrometer. A modified version of the GLobal airglOW (GLOW) model is used to estimate precipitating electron characteristics based on optical emissions. GLOW predicted the electron population characteristics with 20% error given the observed spectral intensities within 10° of magnetic zenith. Predictions are within 30% of the actual values within 20° of magnetic zenith for inverted-V-type aurora. Therefore, it is argued that this technique can be used, at least in certain types of aurora, such as the inverted-V type presented here, to derive 2-D maps of electron characteristics. These can then be used to further derive 2-D maps of ionospheric parameters as a function of time, based solely on multispectral optical imaging data.

  11. Polymer-Based Dense Fluidic Networks for High Throughput Screening with Ultrasensitive Fluorescence Detection

    PubMed Central

    Okagbare, Paul I.; Soper, Steven A.

    2011-01-01

    Microfluidics represents a viable platform for performing High Throughput Screening (HTS) due to its ability to automate fluid handling and to generate fluidic networks with high number densities over small footprints, appropriate for the simultaneous optical interrogation of many screening assays. While most HTS campaigns depend on fluorescence, readers typically use point detection and address the assay results serially, significantly lowering throughput or detection sensitivity due to a low duty cycle. To address this challenge, we present here the fabrication of a high-density microfluidic network packed into the imaging area of a large field-of-view (FoV) ultrasensitive fluorescence detection system. The fluidic channels were 1, 5 or 10 μm wide and 1 μm deep, with a pitch of 1–10 μm, and each fluidic processor was individually addressable. The fluidic chip was produced from a molding tool using hot embossing, with thermal fusion bonding to enclose the fluidic channels. A 40X microscope objective (numerical aperture = 0.75) created a FoV of 200 μm, providing the ability to interrogate ~25 channels in the current fluidic configuration. An ultrasensitive fluorescence detection system with a large FoV was used to transduce fluorescence signals simultaneously from each fluidic processor onto the active area of an electron-multiplying charge-coupled device (EMCCD). The utility of these multichannel networks for HTS was demonstrated by high-throughput monitoring of the activity of the enzyme APE1 as a model screening assay. PMID:20872611
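As a back-of-envelope check on the ~25-channel figure above: the channel count is the FoV divided by the channel pitch (width plus gap). The gap value below is an assumption chosen to be consistent with the reported numbers, not a value from the paper.

```python
# Estimate how many fluidic channels fit inside the imaging field of view.
fov_um = 200.0          # FoV of the 40X / NA 0.75 objective (from the abstract)
channel_width_um = 1.0  # narrowest reported channel width
gap_um = 7.0            # ASSUMED channel-to-channel gap (paper reports pitches of 1-10 um)

n_channels = int(fov_um // (channel_width_um + gap_um))
print(n_channels)  # 25
```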

  12. Tempo-spatially resolved scattering correlation spectroscopy under dark-field illumination and its application to investigate dynamic behaviors of gold nanoparticles in live cells.

    PubMed

    Liu, Heng; Dong, Chaoqing; Ren, Jicun

    2014-02-19

    In this study, a new tempo-spatially resolved fluctuation spectroscopy under dark-field illumination is described, named dark-field illumination-based scattering correlation spectroscopy (DFSCS). DFSCS is a single-particle method whose principle is similar to that of fluorescence correlation spectroscopy (FCS): it correlates the fluctuations of the light scattered from single nanoparticles under dark-field illumination. We developed a theoretical model for the translational diffusion of nanoparticles in the DFSCS system. Computer simulations documented that this model describes well the diffusion behavior of nanoparticles in a uniformly illuminated field. The experimental setup for DFSCS was realized by adding a dark-field condenser to a common bright-field microscope, with an electron-multiplying charge-coupled device (EMCCD) as the array detector. Under optimal conditions, a stack of 500 000 frames was collected simultaneously on 64 detection channels in a single measurement, with an acquisition rate of 0.5 ms per frame. We systematically investigated the effects of factors such as particle concentration, solution viscosity, and the heterogeneity of gold nanoparticle (GNP) samples on DFSCS measurements. The experimental data confirmed the proposed theoretical model. Furthermore, this new method was successfully used to investigate the dynamic behavior of GNPs in live cells. Our preliminary results demonstrate that DFSCS is a practical and affordable tool for ordinary laboratories to investigate the dynamics of nanoparticles in vitro as well as in vivo.
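The core DFSCS computation is an FCS-style autocorrelation of an intensity trace. A minimal NumPy sketch of that step follows; the function name and the synthetic trace are illustrative, not from the paper.

```python
import numpy as np

def scattering_autocorrelation(intensity, max_lag):
    """Normalized fluctuation autocorrelation, as in FCS-style analysis:
    G(tau) = <dI(t) dI(t+tau)> / <I>^2, with dI = I - <I>.
    `intensity` is the scattered-light time trace of one detection channel."""
    I = np.asarray(intensity, dtype=float)
    dI = I - I.mean()
    denom = I.mean() ** 2 * len(I)
    return np.array([np.dot(dI[: len(dI) - tau], dI[tau:]) / denom
                     for tau in range(1, max_lag + 1)])

# Synthetic trace: a slow fluctuation plus shot-noise-like jitter.
rng = np.random.default_rng(0)
t = np.arange(5000)
trace = 100 + 10 * np.sin(2 * np.pi * t / 200) + rng.normal(0, 2, t.size)
G = scattering_autocorrelation(trace, 50)
print(G[0] > G[49])  # correlation decays with lag
```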

  13. Wafer chamber having a gas curtain for extreme-UV lithography

    DOEpatents

    Kanouff, Michael P.; Ray-Chaudhuri, Avijit K.

    2001-01-01

    An EUVL device includes a wafer chamber that is separated from the upstream optics by a barrier having an aperture that is permeable to an inert gas. Maintaining an inert gas curtain in the proximity of a wafer positioned in the chamber of an extreme ultraviolet lithography device can effectively prevent contaminants from reaching the optics, even though solid window filters are not employed between the source of reflected radiation (e.g., the camera) and the wafer. The inert gas removes the contaminants by entrainment.

  14. Human movement activity classification approaches that use wearable sensors and mobile devices

    NASA Astrophysics Data System (ADS)

    Kaghyan, Sahak; Sarukhanyan, Hakob; Akopian, David

    2013-03-01

    Cell phones and other mobile devices have become part of human culture, changing activity and lifestyle patterns. Mobile phone technology continuously evolves and incorporates more and more sensors to enable advanced applications. The latest generations of smartphones incorporate GPS and WLAN location-finding modules, vision cameras, microphones, accelerometers, temperature sensors, etc. The availability of these sensors in mass-market communication devices creates exciting new opportunities for data mining applications. Healthcare applications exploiting the built-in sensors are particularly promising. This paper reviews different approaches to human activity recognition.

  15. Design and Implementation of Pointer-Type Multi Meters Intelligent Recognition Device Based on ARM Platform

    NASA Astrophysics Data System (ADS)

    Cui, Yang; Luo, Wang; Fan, Qiang; Peng, Qiwei; Cai, Yiting; Yao, Yiyang; Xu, Changfu

    2018-01-01

    This paper adopts a low-power ARM HiSilicon mobile processing platform and an OV4689 camera, combined with a new skeleton extraction algorithm based on the distance transform and an improved Hough algorithm, for the real-time reading of multiple pointer-type meters. The design and implementation of the device were completed. Experimental results show that the average measurement error was 0.005 MPa and the average reading time was 5 s. The device had good stability and high accuracy, meeting the needs of practical applications.
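Once a Hough-style fit has located the needle line, the reading follows from a linear angle-to-value map over the dial's arc. A minimal sketch of that last step, with all dial parameters assumed for illustration:

```python
def pointer_reading(needle_angle_deg, angle_min_deg, angle_max_deg,
                    value_min, value_max):
    """Map a needle angle (e.g. from a Hough line fit) to a dial value by
    linear interpolation between the dial's min/max graduations.
    The dial geometry and scale endpoints here are illustrative assumptions."""
    frac = (needle_angle_deg - angle_min_deg) / (angle_max_deg - angle_min_deg)
    return value_min + frac * (value_max - value_min)

# A hypothetical pressure gauge sweeping 0-1.6 MPa over a 270-degree arc:
print(pointer_reading(135.0, 0.0, 270.0, 0.0, 1.6))  # 0.8
```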

  16. Dual-modality imaging with an ultrasound-gamma device for oncology

    NASA Astrophysics Data System (ADS)

    Polito, C.; Pellegrini, R.; Cinti, M. N.; De Vincentis, G.; Lo Meo, S.; Fabbri, A.; Bennati, P.; Cencelli, V. Orsolini; Pani, R.

    2018-06-01

    Recently, dual-modality systems have been developed that aim to correlate anatomical and functional information, improving disease localization and aiding oncological or surgical treatment. Moreover, given the growing interest in handheld detectors for preclinical trials and small-animal imaging, this work proposes a new dual-modality integrated device based on an ultrasound probe and a small field-of-view single photon emission gamma camera.

  17. Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design

    DTIC Science & Technology

    2016-10-01

    study of the resulting videos led to a new prosthetics-use taxonomy that is generalizable to various levels of amputation and terminal devices. The...taxonomy was applied to classification of the recorded videos via custom tagging software with midi controller interface. The software creates...a motion capture studio and video cameras to record accurate and detailed upper body motion during a series of standardized tasks. These tasks are

  18. Respiratory rate estimation from the built-in cameras of smartphones and tablets.

    PubMed

    Nam, Yunyoung; Lee, Jinseok; Chon, Ki H

    2014-04-01

    This paper presents a method for respiratory rate estimation using the camera of a smartphone, an MP3 player or a tablet. The iPhone 4S, iPad 2, iPod 5, and Galaxy S3 were used to estimate respiratory rates from the pulse signal derived from a finger placed on the camera lens of these devices. Prior to estimating respiratory rates, we systematically investigated the optimal signal quality of these four devices by dividing the video camera's resolution into 12 different pixel regions. We also investigated the optimal signal quality among the red, green and blue color bands for each of these 12 pixel regions on all four devices. It was found that the green color band provided the best signal quality for all four devices, and that the left-half VGA pixel region was the best choice only for the iPhone 4S. For the other three devices, smaller 50 × 50 pixel regions were found to provide signal quality better than or comparable to that of the larger pixel regions. Using the green signal and the optimal pixel regions derived for the four devices, we then investigated the suitability of the smartphones, the iPod 5 and the tablet for respiratory rate estimation using three computational methods: the autoregressive (AR) model, variable-frequency complex demodulation (VFCDM), and continuous wavelet transform (CWT) approaches. Specifically, these time-varying spectral techniques were used to identify the frequency and amplitude modulations, as they contain respiratory rate information. To evaluate the performance of the three computational methods and the pixel regions of optimal signal quality, data were collected from 10 healthy subjects. It was found that the VFCDM method provided good estimates of breathing rates in the normal range (12-24 breaths/min). Both the CWT and VFCDM methods provided reasonably good estimates for breathing rates higher than 26 breaths/min, but their accuracy degraded concomitantly with increased respiratory rates. Overall, the VFCDM method provided the best results for accuracy (smaller median error), consistency (smaller interquartile range of the median value), and computational efficiency (less than 0.5 s on 1 min of data using a MATLAB implementation) in extracting breathing rates that varied from 12 to 36 breaths/min. The AR method provided the least accurate respiratory rate estimation of the three methods. This work illustrates that both heart rates and normal breathing rates can be accurately derived from a video signal obtained from smartphones, an MP3 player and tablets, with or without a flashlight.
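The idea of reading a respiratory rate out of a band-limited camera signal can be illustrated with a much simpler spectral-peak sketch. FFT peak picking is a stand-in here, not the paper's AR/VFCDM/CWT methods, and the frame rate and signal are synthetic.

```python
import numpy as np

def breathing_rate_fft(signal, fs):
    """Estimate the dominant respiratory frequency of a camera-derived
    signal by FFT peak picking, restricted to a physiological band
    (0.1-0.7 Hz, i.e. 6-42 breaths/min)."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(x))
    band = (freqs >= 0.1) & (freqs <= 0.7)
    f_peak = freqs[band][np.argmax(spectrum[band])]
    return f_peak * 60.0  # breaths per minute

fs = 30.0  # typical smartphone camera frame rate (assumed)
t = np.arange(0, 60, 1.0 / fs)
sig = np.sin(2 * np.pi * 0.3 * t)  # 0.3 Hz -> 18 breaths/min
print(round(breathing_rate_fft(sig, fs)))  # 18
```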

  19. Handheld Device Adapted to Smartphone Cameras for the Measurement of Sodium Ion Concentrations at Saliva-Relevant Levels via Fluorescence

    PubMed Central

    Lipowicz, Michelle; Garcia, Antonio

    2015-01-01

    The use of saliva sampling as a minimally-invasive means for drug testing and monitoring physiology is a subject of great interest to researchers and clinicians. This study describes a new optical method based on non-axially symmetric focusing of light using an oblate spheroid sample chamber. The device is simple, lightweight, low cost and is easily attached to several different brands/models of smartphones (Apple, Samsung, HTC and Nokia) for the measurement of sodium ion levels at physiologically-relevant saliva concentrations. The sample and fluorescent reagent solutions are placed in a specially-designed, lightweight device that excludes ambient light and concentrates 470-nm excitation light, from a low-power photodiode, within the sample through non-axially-symmetric refraction. The study found that smartphone cameras and post-image processing quantitated sodium ion concentration in water over the range of 0.5–10 mM, yielding best-fit regressions of the data that agree well with a data regression of microplate luminometer results. The data suggest that fluorescence can be used for the measurement of salivary sodium ion concentrations in low-resource or point-of-care settings. With further fluorescent assay testing, the device may find application in a variety of enzymatic or chemical assays. PMID:28955016
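The described workflow, fitting image-derived intensity against known concentrations and inverting the fit for unknown samples, can be sketched as follows. The calibration intensities below are invented for illustration; only the 0.5–10 mM range comes from the text, and the actual study used a best-fit regression whose form is not specified here, so a linear fit is assumed.

```python
import numpy as np

# Hypothetical calibration: fluorescence intensity (arbitrary units from
# post-processed smartphone images) vs. known Na+ concentration.
conc_mM = np.array([0.5, 1.0, 2.0, 5.0, 10.0])   # range from the abstract
intensity = np.array([12.0, 21.0, 40.0, 98.0, 195.0])  # made-up readings

slope, intercept = np.polyfit(conc_mM, intensity, 1)

def estimate_concentration(i_measured):
    """Invert the linear calibration to read an unknown sample's concentration."""
    return (i_measured - intercept) / slope

print(round(estimate_concentration(60.0), 2))
```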

  20. A 5- μ m pitch charge-coupled device optimized for resonant inelastic soft X-ray scattering

    DOE PAGES

    Andresen, N. C.; Denes, P.; Goldschmidt, A.; ...

    2017-08-08

    Here, we have developed a charge-coupled device (CCD) with 5 μm × 45 μm pixels on high-resistivity silicon. The fully depleted 200 μm-thick silicon detector is back-illuminated through a 10 nm-thick in situ doped polysilicon window and is thus highly efficient from soft X-rays through hard X-rays above 8 keV. The device described here is a 1.5-megapixel CCD with 2496 × 620 pixels. The pixel and camera geometry was optimized for Resonant Inelastic X-ray Scattering (RIXS) and is particularly advantageous for spectrometers with limited arm lengths. In this article, we describe the device architecture, construction and operation, and its performance during tests at the Advanced Light Source (ALS) 8.0.1 RIXS beamline. The improved spectroscopic performance, when compared with a current standard commercial camera, is demonstrated with a ~280 eV (C K) X-ray beam on a graphite sample. Readout noise is typically 3-6 electrons, and the point spread function for soft C K X-rays in the 5 μm direction is 4.0 μm ± 0.2 μm. Finally, the measured quantum efficiency of the CCD is greater than 75% in the range from 200 eV to 1 keV.
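For scale, the per-photon signal such a detector sees can be estimated from the ~3.65 eV required to create one electron-hole pair in silicon, a standard figure that is assumed here rather than stated in the abstract:

```python
# Rough per-photon signal estimate for a silicon soft-X-ray CCD.
PAIR_ENERGY_EV = 3.65  # ~eV per electron-hole pair in Si (standard value)

def electrons_per_photon(photon_ev):
    return photon_ev / PAIR_ENERGY_EV

n_e = electrons_per_photon(280.0)  # C K photon energy, from the abstract
snr = n_e / 6.0                    # worst-case read noise of 6 e- rms
print(round(n_e), round(snr, 1))   # 77 12.8
```

At ~77 electrons per C K photon against 3-6 electrons of read noise, single soft-X-ray photons are comfortably detectable, consistent with the spectroscopic use described above.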
