Sample records for gamma camera systems

  1. Mini gamma camera, camera system and method of use

    DOEpatents

    Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.

    2001-01-01

    A gamma camera comprising, essentially and in order from the front outer or gamma-ray-impinging surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position-sensitive, high-resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. Also described is a system wherein the output supplied by the high-resolution, position-sensitive photomultiplier tubes is communicated to: a) a digitizer, b) a computer where it is processed using advanced image processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.
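
    As a rough illustration of the center-of-gravity step mentioned in the abstract above, the following minimal Python sketch computes a weighted centroid over a 2D count image. The function name, the threshold parameter and the example values are illustrative assumptions, not the patented algorithm.

      import numpy as np

      def center_of_gravity(counts, threshold=0.0):
          """Weighted centroid of a 2D count image, in (row, col) pixel units.

          `counts` is a 2D numpy array of per-pixel counts; pixels at or below
          `threshold` are ignored. Generic sketch, not the patented algorithm.
          """
          img = np.where(counts > threshold, counts, 0.0)
          total = img.sum()
          if total == 0:
              raise ValueError("no counts above threshold")
          rows, cols = np.indices(img.shape)
          return (rows * img).sum() / total, (cols * img).sum() / total

      # Example: a small hot spot centered near pixel (5, 7)
      frame = np.zeros((16, 16))
      frame[4:7, 6:9] = [[1, 2, 1], [2, 8, 2], [1, 2, 1]]
      print(center_of_gravity(frame))  # approximately (5.0, 7.0)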

  2. Development of an all-in-one gamma camera/CCD system for safeguard verification

    NASA Astrophysics Data System (ADS)

    Kim, Hyun-Il; An, Su Jung; Chung, Yong Hyun; Kwak, Sung-Woo

    2014-12-01

    For the purpose of monitoring and verifying efforts at safeguarding radioactive materials in various fields, a new all-in-one gamma camera/charge-coupled device (CCD) system was developed. This combined system consists of a gamma camera, which gathers energy and position information on gamma-ray sources, and a CCD camera, which identifies the specific location in a monitored area. Therefore, 2-D image information and quantitative information regarding gamma-ray sources can be obtained from fused images. The gamma camera consists of a diverging collimator, a 22 × 22 array of pixelated CsI(Na) scintillation crystals with a pixel size of 2 × 2 × 6 mm³, and a Hamamatsu H8500 position-sensitive photomultiplier tube (PSPMT). The Basler scA640-70gc CCD camera, which delivers 70 frames per second at video graphics array (VGA) resolution, was employed. Performance testing was carried out using a Co-57 point source placed 30 cm from the detector. The measured spatial resolution and sensitivity were 4.77 mm full width at half maximum (FWHM) and 7.78 cps/MBq, respectively. The energy resolution was 18% at 122 keV. These results demonstrate that the combined system has considerable potential for radiation monitoring.
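
    The record describes fusing the gamma image with the optical (CCD) frame; as a minimal sketch of one way such a fusion can be done, the Python snippet below upsamples a coarse gamma count map to the optical resolution and alpha-blends it over the colour image. The function name, the nearest-neighbour interpolation and the blend weight are assumptions for illustration, not details of the published system.

      import numpy as np

      def fuse_gamma_optical(gamma_map, optical_rgb, alpha=0.4):
          """Overlay a coarse gamma count map onto an optical frame.

          gamma_map:   2D array of pixel counts (e.g. 22 x 22).
          optical_rgb: H x W x 3 float array with values in [0, 1].
          alpha:       blend weight for the gamma overlay (assumed value).
          """
          h, w, _ = optical_rgb.shape
          # Nearest-neighbour upsampling of the gamma map to the optical size.
          ri = np.arange(h) * gamma_map.shape[0] // h
          ci = np.arange(w) * gamma_map.shape[1] // w
          up = gamma_map[np.ix_(ri, ci)].astype(float)
          up = up / up.max() if up.max() > 0 else up
          # Paint the normalised gamma intensity into the red channel.
          fused = optical_rgb.copy()
          fused[..., 0] = (1 - alpha) * fused[..., 0] + alpha * up
          return np.clip(fused, 0.0, 1.0)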

  3. Camera Concepts for the Advanced Gamma-Ray Imaging System (AGIS)

    NASA Astrophysics Data System (ADS)

    Nepomuk Otte, Adam

    2009-05-01

    The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next-generation observatory in ground-based very high energy gamma-ray astronomy. Design goals are ten times better sensitivity, higher angular resolution, and a lower energy threshold than existing Cherenkov telescopes. Each telescope is equipped with a camera that detects and records the Cherenkov-light flashes from air showers. The camera comprises a pixelated focal plane of blue-sensitive, fast (nanosecond) photon detectors that detect the photon signal and convert it into an electrical one. The incorporation of trigger electronics and signal digitization into the camera is under study. Given the size of AGIS, the camera must be reliable, robust, and cost effective. We are investigating several directions, including innovative technologies such as Geiger-mode avalanche photodiodes as a possible detector and switched capacitor arrays for the digitization.

  4. A novel fully integrated handheld gamma camera

    NASA Astrophysics Data System (ADS)

    Massari, R.; Ucci, A.; Campisi, C.; Scopinaro, F.; Soluri, A.

    2016-10-01

    In this paper, we present an innovative, fully integrated handheld gamma camera, designed to combine in a single device the gamma-ray detector, the display and the embedded computing system. Its low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle, since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact and lightweight gamma camera for fast imaging of radiopharmaceuticals. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The proposed prototype consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we developed very low power readout electronics and a dedicated analog-to-digital conversion system. One of the most critical aspects we faced in designing the prototype was the low power consumption, which is mandatory for a battery-operated device. We applied this detection device to lymphoscintigraphy (sentinel lymph node mapping), comparing the results obtained with those of a commercial gamma camera (Philips SKYLight). The results confirm a rapid response of the device and adequate spatial resolution for use in scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. The device is designed for radioguided surgery and small-organ imaging, but it could easily be integrated into surgical navigation systems.

  5. Gamma ray camera

    DOEpatents

    Perez-Mendez, Victor

    1997-01-01

    A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer and comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.

  6. Gamma ray camera

    DOEpatents

    Perez-Mendez, V.

    1997-01-21

    A gamma ray camera is disclosed for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer and comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array. 6 figs.

  7. Quality controls for gamma cameras and PET cameras: development of a free open-source ImageJ program

    NASA Astrophysics Data System (ADS)

    Carlier, Thomas; Ferrer, Ludovic; Berruchon, Jean B.; Cuissard, Regis; Martineau, Adeline; Loonis, Pierre; Couturier, Olivier

    2005-04-01

    Data acquisition and processing for quality control of gamma cameras and Positron Emission Tomography (PET) cameras are commonly performed with dedicated program packages, which run only on the manufacturers' computers and differ from each other, depending on camera company and program version. The aim of this work was to develop a free open-source program (written in the Java language) to analyze data for quality control of gamma cameras and PET cameras. The program is based on the free application software ImageJ and can be easily loaded on any computer operating system (OS), and thus on any type of computer in every nuclear medicine department. Based on standard quality control parameters, this program includes: (1) for gamma cameras, a rotation center control (based on American Association of Physicists in Medicine, AAPM, norms) and two uniformity controls (based on Institute of Physics and Engineering in Medicine, IPEM, and National Electrical Manufacturers Association, NEMA, norms); and (2) for PET systems, three quality controls recently defined by the French Medical Physics Society (SFPM), i.e. spatial resolution and uniformity in a reconstructed slice, and scatter fraction. The determination of spatial resolution (via a Point Spread Function, PSF, acquisition) allows computation of the Modulation Transfer Function (MTF) for both camera modalities. All the control functions are included in a toolbox, a free ImageJ plugin that could soon be downloaded from the Internet. In addition, the program can save the uniformity quality control results in HTML format, and a warning can be set to automatically inform users in case of abnormal results. The architecture of the program allows users to easily add any other specific quality control routine. Finally, this toolkit is an easy and robust tool to perform quality control on gamma cameras and PET cameras based on standard computation parameters, is free, run on
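
    As an illustration of the kind of uniformity figure such a program computes, the sketch below evaluates a NEMA-style integral uniformity for a flood image in Python (the actual toolkit described above is an ImageJ plugin written in Java). The 9-point smoothing kernel and the omission of the useful-field-of-view mask are simplifications assumed here, not the toolkit's exact procedure.

      import numpy as np
      from scipy.ndimage import convolve

      def integral_uniformity(flood):
          """NEMA-style integral uniformity of a flood image, in percent.

          Applies the usual 9-point smoothing kernel before taking
          100 * (max - min) / (max + min); masking of the useful field of
          view is omitted for brevity.
          """
          kernel = np.array([[1, 2, 1],
                             [2, 4, 2],
                             [1, 2, 1]], dtype=float) / 16.0
          smoothed = convolve(flood.astype(float), kernel, mode="nearest")
          return 100.0 * (smoothed.max() - smoothed.min()) / (smoothed.max() + smoothed.min())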

  8. Acquisition of gamma camera and physiological data by computer.

    PubMed

    Hack, S N; Chang, M; Line, B R; Cooper, J A; Robeson, G H

    1986-11-01

    We have designed, implemented, and tested a new Research Data Acquisition System (RDAS) that permits a general purpose digital computer to acquire signals from both gamma camera sources and physiological signal sources concurrently. This system overcomes the limited multi-source, high speed data acquisition capabilities found in most clinically oriented nuclear medicine computers. The RDAS can simultaneously input signals from up to four gamma camera sources with a throughput of 200 kHz per source and from up to eight physiological signal sources with an aggregate throughput of 50 kHz. Rigorous testing has found the RDAS to exhibit acceptable linearity and timing characteristics. In addition, flood images obtained by this system were compared with flood images acquired by a commercial nuclear medicine computer system. National Electrical Manufacturers Association performance standards of the flood images were found to be comparable.

  9. Commissioning of a new SeHCAT detector and comparison with an uncollimated gamma camera.

    PubMed

    Taylor, Jonathan C; Hillel, Philip G; Himsworth, John M

    2014-10-01

    Measurements of SeHCAT (tauroselcholic [(75)Se] acid) retention have been used to diagnose bile acid malabsorption for a number of years. In current UK practice, the vast majority of centres calculate uptake using an uncollimated gamma camera. Because of ever-increasing demands on gamma camera time, a new 'probe' detector was designed, assembled and commissioned. To validate the system, nine patients were scanned at day 0 and day 7 with both the new probe detector and an uncollimated gamma camera. Commissioning results were largely in line with expectations. Spatial resolution (full width at 95% of maximum) at 1 m was 36.6 cm, the background count rate was 24.7 cps and sensitivity at 1 m was 720.8 cps/MBq. The patient comparison study showed a mean absolute difference in retention measurements of 0.8% between the probe and the uncollimated gamma camera, with an SD of 1.8%. The study demonstrated that it is possible to create a simple, reproducible SeHCAT measurement system using a commercially available scintillation detector. Retention results from the probe closely agreed with those from the uncollimated gamma camera.
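
    As a sketch of how a retention figure of this kind is typically derived, the Python snippet below combines background-corrected day-0 and day-7 counts with a physical decay correction for Se-75. The half-life constant, argument names and example numbers are assumptions for illustration, not values taken from the paper.

      import math

      SE75_HALF_LIFE_DAYS = 119.8  # approximate Se-75 physical half-life

      def sehcat_retention(day0_counts, day0_bkg, day7_counts, day7_bkg,
                           interval_days=7.0):
          """Percentage SeHCAT retention from paired count measurements.

          Background-corrects both measurements and corrects the day-7 counts
          for Se-75 physical decay. Geometry and attenuation are assumed
          identical between the two visits, as in a fixed probe set-up.
          """
          decay = math.exp(math.log(2.0) * interval_days / SE75_HALF_LIFE_DAYS)
          net0 = day0_counts - day0_bkg
          net7 = (day7_counts - day7_bkg) * decay
          return 100.0 * net7 / net0

      # e.g. sehcat_retention(12000, 500, 3200, 480) -> roughly 25 %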

  10. The Advanced Gamma-ray Imaging System (AGIS): Camera Electronics Designs

    NASA Astrophysics Data System (ADS)

    Tajima, Hiroyasu; Buckley, J.; Byrum, K.; Drake, G.; Falcone, A.; Funk, S.; Holder, J.; Horan, D.; Krawczynski, H.; Ong, R.; Swordy, S.; Wagner, R.; Wakely, S.; Williams, D.; Camera Electronics Working Group; AGIS Collaboration

    2008-03-01

    AGIS, a next generation of atmospheric Cherenkov telescope arrays, aims to achieve a sensitivity level of a milliCrab for gamma-ray observations in the energy band of 40 GeV to 100 TeV. Such an improvement requires reducing the cost of individual components, while maintaining high reliability, in order to equip the roughly 100 telescopes necessary to achieve the sensitivity goal. We are exploring several design concepts to reduce the cost of the camera electronics while improving their performance. These design concepts include systems based on a multi-channel waveform-sampling ASIC optimized for AGIS, a system based on an IIT (image intensifier tube) for large channel counts (of order 1 million channels), as well as a multiplexed FADC system based on the current VERITAS readout design. Here we present trade-off studies of these design concepts.

  11. Feasibility study of a gamma camera for monitoring nuclear materials in the PRIDE facility

    NASA Astrophysics Data System (ADS)

    Jo, Woo Jin; Kim, Hyun-Il; An, Su Jung; Lee, Chae Young; Song, Han-Kyeol; Chung, Yong Hyun; Shin, Hee-Sung; Ahn, Seong-Kyu; Park, Se-Hwan

    2014-05-01

    The Korea Atomic Energy Research Institute (KAERI) has been developing pyroprocessing technology, in which actinides are recovered together with plutonium. There is no pure plutonium stream in the process, so it offers an advantage in proliferation resistance. Tracking and monitoring of nuclear materials through the pyroprocess can significantly improve the transparency of the operation and its safeguards. An inactive engineering-scale integrated pyroprocess facility, the PyRoprocess Integrated inactive DEmonstration (PRIDE) facility, was constructed to demonstrate engineering-scale processes and the integration of each unit process. The PRIDE facility may be a good test bed to investigate the feasibility of a nuclear material monitoring system. In this study, we designed a gamma camera system for nuclear material monitoring in the PRIDE facility by using a Monte Carlo simulation, and we validated the feasibility of this system. Two scenarios, corresponding to different locations of the gamma camera, were simulated using GATE (GEANT4 Application for Tomographic Emission) version 6. A prototype gamma camera with a diverging-slat collimator was developed, and the simulated and experimental results agreed well with each other. These results indicate that a gamma camera to monitor the nuclear material in the PRIDE facility can be developed.

  12. MONICA: A Compact, Portable Dual Gamma Camera System for Mouse Whole-Body Imaging

    PubMed Central

    Xi, Wenze; Seidel, Jurgen; Karkareka, John W.; Pohida, Thomas J.; Milenic, Diane E.; Proffitt, James; Majewski, Stan; Weisenberger, Andrew G.; Green, Michael V.; Choyke, Peter L.

    2009-01-01

    Introduction: We describe a compact, portable dual-gamma camera system (named “MONICA” for MObile Nuclear Imaging CAmeras) for visualizing and analyzing the whole-body biodistribution of putative diagnostic and therapeutic single photon emitting radiotracers in animals the size of mice. Methods: Two identical, miniature pixelated NaI(Tl) gamma cameras were fabricated and installed “looking up” through the tabletop of a compact portable cart. Mice are placed directly on the tabletop for imaging. Camera imaging performance was evaluated with phantoms and field performance was evaluated in a weeklong In-111 imaging study performed in a mouse tumor xenograft model. Results: Tc-99m performance measurements, using a photopeak energy window of 140 keV ± 10%, yielded the following results: spatial resolution (FWHM at 1-cm), 2.2-mm; sensitivity, 149 cps/MBq (5.5 cps/μCi); energy resolution (FWHM), 10.8%; count rate linearity (count rate vs. activity), r² = 0.99 for 0–185 MBq (0–5 mCi) in the field-of-view (FOV); spatial uniformity, < 3% count rate variation across the FOV. Tumor and whole-body distributions of the In-111 agent were well visualized in all animals in 5-minute images acquired throughout the 168-hour study period. Conclusion: Performance measurements indicate that MONICA is well suited to whole-body single photon mouse imaging. The field study suggests that inter-device communications and user-oriented interfaces included in the MONICA design facilitate use of the system in practice. We believe that MONICA may be particularly useful early in the (cancer) drug development cycle where basic whole-body biodistribution data can direct future development of the agent under study and where logistical factors, e.g. limited imaging space, portability, and, potentially, cost are important. PMID:20346864
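
    The quoted sensitivity is given in both unit systems; the short check below simply confirms that 149 cps/MBq corresponds to about 5.5 cps/µCi (using 1 µCi = 0.037 MBq).

      MBQ_PER_UCI = 0.037   # 1 microcurie = 37 kBq = 0.037 MBq
      cps_per_mbq = 149.0
      print(round(cps_per_mbq * MBQ_PER_UCI, 1))   # -> 5.5 cps/uCi, matching the abstract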

  13. The Advanced Gamma-ray Imaging System (AGIS): Camera Electronics Designs

    NASA Astrophysics Data System (ADS)

    Tajima, H.; Buckley, J.; Byrum, K.; Drake, G.; Falcone, A.; Funk, S.; Holder, J.; Horan, D.; Krawczynski, H.; Ong, R.; Swordy, S.; Wagner, R.; Williams, D.

    2008-04-01

    AGIS, a next generation of atmospheric Cherenkov telescope arrays, aims to achieve a sensitivity level of a milliCrab for gamma-ray observations in the energy band of 40 GeV to 100 TeV. Such an improvement requires reducing the cost of individual components, while maintaining high reliability, in order to equip the roughly 100 telescopes necessary to achieve the sensitivity goal. We are exploring several design concepts to reduce the cost of the camera electronics while improving their performance. These design concepts include systems based on a multi-channel waveform-sampling ASIC optimized for AGIS, a system based on an IIT (image intensifier tube) for large channel counts (of order 1 million channels), as well as a multiplexed FADC system based on the current VERITAS readout design. Here we present trade-off studies of these design concepts.

  14. Development of Camera Electronics for the Advanced Gamma-ray Imaging System (AGIS)

    NASA Astrophysics Data System (ADS)

    Tajima, Hiroyasu

    2009-05-01

    AGIS, a next generation of atmospheric Cherenkov telescope arrays, aims to achieve a sensitivity level of a milliCrab for gamma-ray observations in the energy band of 40 GeV to 100 TeV. Such an improvement requires reducing the cost of individual components, while maintaining high reliability, in order to equip the roughly 100 telescopes necessary to achieve the sensitivity goal. We are exploring several design concepts to reduce the cost of the camera electronics while improving their performance. We have developed test systems for some of these concepts and are evaluating their performance. Here we present results from these test systems.

  15. The Advanced Gamma-ray Imaging System (AGIS) - Camera Electronics Development

    NASA Astrophysics Data System (ADS)

    Tajima, Hiroyasu; Bechtol, K.; Buehler, R.; Buckley, J.; Byrum, K.; Drake, G.; Falcone, A.; Funk, S.; Hanna, D.; Horan, D.; Humensky, B.; Karlsson, N.; Kieda, D.; Konopelko, A.; Krawczynski, H.; Krennrich, F.; Mukherjee, R.; Ong, R.; Otte, N.; Quinn, J.; Schroedter, M.; Swordy, S.; Wagner, R.; Wakely, S.; Weinstein, A.; Williams, D.; Camera Working Group; AGIS Collaboration

    2010-03-01

    AGIS, a next-generation imaging atmospheric Cherenkov telescope (IACT) array, aims to achieve a sensitivity level of about one milliCrab for gamma-ray observations in the energy band of 50 GeV to 100 TeV. Achieving this level of performance will require on the order of 50 telescopes with perhaps as many as 1M total electronics channels. The larger scale of AGIS requires a very different approach from the currently operating IACTs, with lower-cost and lower-power electronics incorporated into camera modules designed for high reliability and easy maintenance. Here we present the concept and development status of the AGIS camera electronics.

  16. Online gamma-camera imaging of 103Pd seeds (OGIPS) for permanent breast seed implantation

    NASA Astrophysics Data System (ADS)

    Ravi, Ananth; Caldwell, Curtis B.; Keller, Brian M.; Reznik, Alla; Pignol, Jean-Philippe

    2007-09-01

    Permanent brachytherapy seed implantation is being investigated as a mode of accelerated partial breast irradiation for early stage breast cancer patients. Currently, the seeds are poorly visualized during the procedure, making it difficult to perform a real-time correction of the implantation if required. The objective was to determine if a customized gamma-camera can accurately localize the seeds during implantation. Monte Carlo simulations of a CZT-based gamma-camera were used to assess whether images of suitable quality could be derived by detecting the 21 keV photons emitted from 74 MBq 103Pd brachytherapy seeds. A hexagonal parallel-hole collimator with a hole length of 38 mm, hole diameter of 1.2 mm and 0.2 mm septa was modeled. The design of the gamma-camera was evaluated on a realistic model of the breast and three layers of the seed distribution (55 seeds) based on a pre-implantation CT treatment plan. The Monte Carlo simulations showed that the gamma-camera was able to localize the seeds with a maximum error of 2.0 mm, using only two views and 20 s of imaging. A gamma-camera can potentially be used as an intra-procedural image guidance system for quality assurance for permanent breast seed implantation.

  17. The spatial resolution of a rotating gamma camera tomographic facility.

    PubMed

    Webb, S; Flower, M A; Ott, R J; Leach, M O; Inamdar, R

    1983-12-01

    An important feature determining the spatial resolution in transverse sections reconstructed by convolution and back-projection is the frequency filter corresponding to the convolution kernel. Equations have been derived giving the theoretical spatial resolution, for a perfect detector and noise-free data, using four filter functions. Experiments have shown that physical constraints will always limit the resolution that can be achieved with a given system. The experiments indicate that the region of the frequency spectrum between K_N/2 and K_N, where K_N is the Nyquist frequency, does not contribute significantly to resolution. In order to investigate the physical effect of these filter functions, the spatial resolution of reconstructed images obtained with a GE 400T rotating gamma camera has been measured. The results obtained serve as an aid to choosing appropriate reconstruction filters for use with a rotating gamma camera system.
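
    To make the role of the reconstruction filter concrete, the Python sketch below builds a simple frequency-domain ramp filter whose cut-off is expressed as a fraction of the Nyquist frequency K_N, so that the band between K_N/2 and K_N discussed above can be suppressed. This is a generic filtered back-projection ingredient, not the specific kernels evaluated in the paper; the function names and the absence of apodization windows are assumptions.

      import numpy as np

      def ramp_filter(n_bins, cutoff_fraction=1.0):
          """Frequency-domain ramp filter for filtered back-projection.

          `cutoff_fraction` is the cut-off as a fraction of the Nyquist
          frequency K_N; setting it to 0.5 zeroes the band between K_N/2
          and K_N. A window (e.g. Hann) could be applied on top of this
          for extra noise suppression.
          """
          freqs = np.fft.fftfreq(n_bins)            # cycles/bin, Nyquist = 0.5
          filt = np.abs(freqs)                      # |k| ramp
          filt[np.abs(freqs) > 0.5 * cutoff_fraction] = 0.0
          return filt

      def filter_projection(projection, cutoff_fraction=1.0):
          """Apply the ramp filter to a single 1-D projection."""
          f = np.fft.fft(projection)
          return np.real(np.fft.ifft(f * ramp_filter(len(projection), cutoff_fraction)))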

  18. A small field of view camera for hybrid gamma and optical imaging

    NASA Astrophysics Data System (ADS)

    Lees, J. E.; Bugby, S. L.; Bhatia, B. S.; Jambi, L. K.; Alqahtani, M. S.; McKnight, W. R.; Ng, A. H.; Perkins, A. C.

    2014-12-01

    The development of compact low profile gamma-ray detectors has allowed the production of small field of view, hand held imaging devices for use at the patient bedside and in operating theatres. The combination of an optical and a gamma camera, in a co-aligned configuration, offers high spatial resolution multi-modal imaging giving a superimposed scintigraphic and optical image. This innovative introduction of hybrid imaging offers new possibilities for assisting surgeons in localising the site of uptake in procedures such as sentinel node detection. Recent improvements to the camera system along with results of phantom and clinical imaging are reported.

  19. [Evaluation of cross-calibration of the (123)I-MIBG H/M ratio, with the IDW scatter correction method, on different gamma camera systems].

    PubMed

    Kittaka, Daisuke; Takase, Tadashi; Akiyama, Masayuki; Nakazawa, Yasuo; Shinozuka, Akira; Shirai, Muneaki

    2011-01-01

    The (123)I-MIBG heart-to-mediastinum activity ratio (H/M) is commonly used as an indicator of relative myocardial (123)I-MIBG uptake. H/M ratios reflect myocardial sympathetic nerve function and are therefore a useful parameter for assessing regional myocardial sympathetic denervation in various cardiac diseases. However, H/M ratio values differ by site, gamma camera system, position and size of the region of interest (ROI), and collimator. In addition to these factors, the 529 keV scatter component may also affect the (123)I-MIBG H/M ratio. In this study, we examined whether the H/M ratio correlates between two different gamma camera systems and sought a formula for converting H/M ratios between them. Moreover, we assessed the feasibility of the (123)I dual window (IDW) method, a scatter correction method, and compared H/M ratios obtained with and without it. The H/M ratio displayed a good correlation between the two gamma camera systems, and we were able to create a new H/M calculation formula. These results indicate that the IDW method is a useful scatter correction method for calculating (123)I-MIBG H/M ratios.
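
    As an illustration of the quantities involved, the Python sketch below computes an H/M ratio from ROI counts after a dual-energy-window style scatter subtraction. The window definitions, the scaling factor k and the linear cross-calibration form are generic assumptions; the paper's own IDW parameters and conversion formula are not reproduced here.

      def idw_corrected_counts(main_window, sub_window, k=1.0):
          """Scatter-corrected counts via a dual-energy-window style correction.

          Counts in the second window, scaled by `k`, estimate the 529 keV
          downscatter contaminating the main 159 keV window. The window
          definitions and k are method- and site-specific (assumed here).
          """
          return max(main_window - k * sub_window, 0.0)

      def heart_to_mediastinum(heart_counts, heart_pixels, medi_counts, medi_pixels):
          """H/M ratio from ROI counts, using mean counts per pixel."""
          return (heart_counts / heart_pixels) / (medi_counts / medi_pixels)

      # Cross-calibration between two cameras is typically a linear mapping,
      # e.g. H/M_cameraB ~= a * H/M_cameraA + b, with a and b fitted from
      # paired phantom or patient measurements (values are site-specific).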

  20. Performance evaluation for pinhole collimators of small gamma camera by MTF and NNPS analysis: Monte Carlo simulation study

    NASA Astrophysics Data System (ADS)

    Jeon, Hosang; Kim, Hyunduk; Cha, Bo Kyung; Kim, Jong Yul; Cho, Gyuseong; Chung, Yong Hyun; Yun, Jong-Il

    2009-06-01

    Presently, gamma camera systems are widely used in various medical diagnostic, industrial and environmental fields. Hence, quantitative and effective evaluation of their imaging performance is essential for design and quality assurance. The National Electrical Manufacturers Association (NEMA) standards for gamma camera evaluation are insufficient for sensitive evaluation. In this study, the modulation transfer function (MTF) and normalized noise power spectrum (NNPS) are suggested for evaluating the performance of a small gamma camera with changeable pinhole collimators, using Monte Carlo simulation. We simulated the system with a cylinder and a disk source, and seven different lead pinhole collimators with pinhole diameters from 1 to 4 mm. The MTF and NNPS data were obtained from output images and compared with full-width at half-maximum (FWHM), sensitivity and differential uniformity. We found that MTF and NNPS are effective alternatives to the conventional NEMA standards for evaluating the imaging performance of gamma cameras.
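
    For readers unfamiliar with the two metrics, the Python sketch below shows one simple way an MTF can be obtained from a sampled point spread function and an NNPS from a flood image. The normalisation choices and the omission of detrending and ROI ensemble averaging are simplifications assumed for brevity, not the exact procedure used in the study.

      import numpy as np

      def mtf_from_psf(psf_1d):
          """Normalised 1-D modulation transfer function from a sampled PSF."""
          otf = np.fft.rfft(psf_1d / psf_1d.sum())
          return np.abs(otf) / np.abs(otf[0])

      def nnps_2d(flood, pixel_size_mm=1.0):
          """Simple normalised noise power spectrum of a flood (uniform) image.

          The flood is mean-subtracted, Fourier transformed, and the squared
          magnitude is scaled by pixel area over image size and by the
          squared mean signal.
          """
          flood = flood.astype(float)
          mean = flood.mean()
          noise = flood - mean
          nps = np.abs(np.fft.fft2(noise)) ** 2
          nps *= (pixel_size_mm ** 2) / (flood.shape[0] * flood.shape[1])
          return nps / mean ** 2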

  1. Slit-Slat Collimator Equipped Gamma Camera for Whole-Mouse SPECT-CT Imaging

    NASA Astrophysics Data System (ADS)

    Cao, Liji; Peter, Jörg

    2012-06-01

    A slit-slat collimator is developed for a gamma camera intended for small-animal imaging (mice). The tungsten housing of a roof-shaped collimator forms a slit opening, and the slats are made of lead foils separated by sparse polyurethane material. Alignment of the collimator with the camera's pixelated crystal is performed by adjusting a micrometer screw while monitoring a Co-57 point source for maximum signal intensity. For SPECT, the collimator forms a cylindrical field-of-view, enabling whole-mouse imaging with transaxial magnification and constant on-axis sensitivity over the entire axial direction. As the gamma camera is part of a multimodal imaging system that also incorporates x-ray CT, five parameters corresponding to the geometric displacements of the collimator, as well as to the mechanical co-alignment between the gamma camera and the CT subsystem, are estimated by means of bimodal calibration sources. To illustrate the performance of the slit-slat collimator and to compare it to a single-pinhole collimator, a Derenzo phantom study is performed. Transaxial resolution along the entire long axis is comparable to that of a pinhole collimator of the same pinhole diameter. Axial resolution of the slit-slat collimator is comparable to that of a parallel-beam collimator. Additionally, data from an in-vivo mouse study are presented.

  2. Optimizing a three-stage Compton camera for measuring prompt gamma rays emitted during proton radiotherapy

    PubMed Central

    Peterson, S W; Robertson, D; Polf, J

    2011-01-01

    In this work, we investigate the use of a three-stage Compton camera to measure secondary prompt gamma rays emitted from patients treated with proton beam radiotherapy. The purpose of this study was (1) to develop an optimal three-stage Compton camera specifically designed to measure prompt gamma rays emitted from tissue and (2) to determine the feasibility of using this optimized Compton camera design to measure and image prompt gamma rays emitted during proton beam irradiation. The three-stage Compton camera was modeled in Geant4 as three high-purity germanium detector stages arranged in parallel-plane geometry. Initially, an isotropic gamma source ranging from 0 to 15 MeV was used to determine lateral width and thickness of the detector stages that provided the optimal detection efficiency. Then, the gamma source was replaced by a proton beam irradiating a tissue phantom to calculate the overall efficiency of the optimized camera for detecting emitted prompt gammas. The overall calculated efficiencies varied from ~10^-6 to ~10^-3 prompt gammas detected per proton incident on the tissue phantom for several variations of the optimal camera design studied. Based on the overall efficiency results, we believe it feasible that a three-stage Compton camera could detect a sufficient number of prompt gammas to allow measurement and imaging of prompt gamma emission during proton radiotherapy. PMID:21048295
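
    Event reconstruction in a Compton camera rests on the Compton kinematics relating the deposited energies to the scattering angle; the Python sketch below evaluates that cone half-angle from the energy deposited in the first stage and the total gamma energy. The function name and the worked example are illustrative assumptions, not part of the Geant4 study itself.

      import math

      ELECTRON_REST_MEV = 0.511  # electron rest mass energy, MeV

      def compton_cone_angle(e_deposited_mev, e_total_mev):
          """Half-angle (radians) of the Compton cone from the first interaction.

          e_deposited_mev: energy deposited in the first detector stage.
          e_total_mev:     total gamma energy (sum of all stage deposits for a
                           fully absorbed event).
          Uses cos(theta) = 1 - m_e c^2 * (1/E_scattered - 1/E_total),
          with E_scattered = E_total - E_deposited.
          """
          e_scattered = e_total_mev - e_deposited_mev
          if e_scattered <= 0:
              raise ValueError("scattered energy must be positive")
          cos_theta = 1.0 - ELECTRON_REST_MEV * (1.0 / e_scattered - 1.0 / e_total_mev)
          if not -1.0 <= cos_theta <= 1.0:
              raise ValueError("kinematically forbidden energy combination")
          return math.acos(cos_theta)

      # e.g. a 4.44 MeV prompt gamma depositing 1.0 MeV in the first stage:
      # compton_cone_angle(1.0, 4.44) -> about 0.26 rad (~15 degrees)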

  3. A versatile calibration procedure for portable coded aperture gamma cameras and RGB-D sensors

    NASA Astrophysics Data System (ADS)

    Paradiso, V.; Crivellaro, A.; Amgarou, K.; de Lanaute, N. Blanc; Fua, P.; Liénard, E.

    2018-04-01

    The present paper proposes a versatile procedure for the geometrical calibration of coded aperture gamma cameras and RGB-D depth sensors, using only one radioactive point source and a simple experimental set-up. Calibration data is then used for accurately aligning radiation images retrieved by means of the γ-camera with the respective depth images computed with the RGB-D sensor. The system resulting from such a combination is thus able to retrieve, automatically, the distance of radioactive hotspots by means of pixel-wise mapping between gamma and depth images. This procedure is of great interest for a wide number of applications, ranging from precise automatic estimation of the shape and distance of radioactive objects to Augmented Reality systems. Incidentally, the corresponding results validated the choice of a perspective design model for a coded aperture γ-camera.

  4. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard Anger camera.

    PubMed

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi; Uchida, Kenji; Igarashi, Yuko; Yokoyama, Tsuyoshi; Takahashi, Masaki; Shiba, Chie; Yoshimura, Mana; Tokuuye, Koichi; Yamashina, Akira

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have recently been introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of a CZT high-speed gamma camera (Discovery NM 530c) with that of a standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest (99m)Tc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera, with a 15-min scan time each for stress and rest. All scans were immediately repeated on a CZT camera, with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong, within narrow Bland-Altman limits of agreement. Using list-mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans and in 100% of the ≥4-min scans. Similarly, for CZT scans at rest, image quality was rated as good or excellent in 94% of the 1-min scans and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, equivalent to standard myocardial single-photon emission computed tomography, despite a scan time of less than half the standard time.

  5. Coded-aperture Compton camera for gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.

    This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.

  6. MO-AB-206-02: Testing Gamma Cameras Based On TG177 WG Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halama, J.

    2016-06-15

    This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging and improvements in reconstruction algorithms that provide options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described. Learning Objectives: Be able to describe basic physics principles of PET and operation of PET scanners. Learn about recent advances in PET scanner hardware technology. Be able to describe advances in reconstruction techniques and improvements. Be able to list relevant performance tests. The second talk will focus on gamma cameras. The Nuclear Medicine subcommittee has charged a task group (TG177) to develop a report on the current state of physics testing of gamma cameras, SPECT, and SPECT/CT systems. The report makes recommendations for performance tests to be done for routine quality assurance, annual physics testing, and acceptance tests, and identifies those needed to satisfy the ACR accreditation program and The Joint Commission imaging standards. The report is also intended to be used as a manual with detailed instructions on how to perform tests under widely varying conditions. Learning Objectives: At the end of the presentation members of the audience will: Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of gamma cameras for planar imaging. Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of SPECT systems. Be familiar with the tests of a SPECT/CT system that include the CT

  7. COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    EPA Science Inventory

    T. Martonen(1) and J. Schroeter(2)

    (1)Experimental Toxicology Division, National Health and Environmental Effects Research Laboratory, U.S. EPA, Research Triangle Park, NC 27711 USA and (2)Curriculum in Toxicology, Unive...

  8. 21 CFR 892.1100 - Scintillation (gamma) camera.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Scintillation (gamma) camera. 892.1100 Section 892.1100 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... analysis and display equipment, patient and equipment supports, radionuclide anatomical markers, component...

  9. 21 CFR 892.1100 - Scintillation (gamma) camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Scintillation (gamma) camera. 892.1100 Section 892.1100 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... analysis and display equipment, patient and equipment supports, radionuclide anatomical markers, component...

  10. 21 CFR 892.1100 - Scintillation (gamma) camera.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Scintillation (gamma) camera. 892.1100 Section 892.1100 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... analysis and display equipment, patient and equipment supports, radionuclide anatomical markers, component...

  11. Compact CdZnTe-based gamma camera for prostate cancer imaging

    NASA Astrophysics Data System (ADS)

    Cui, Yonggang; Lall, Terry; Tsui, Benjamin; Yu, Jianhua; Mahler, George; Bolotnikov, Aleksey; Vaska, Paul; De Geronimo, Gianluigi; O'Connor, Paul; Meinken, George; Joyal, John; Barrett, John; Camarda, Giuseppe; Hossain, Anwar; Kim, Ki Hyun; Yang, Ge; Pomper, Marty; Cho, Steve; Weisman, Ken; Seo, Youngho; Babich, John; LaFrance, Norman; James, Ralph B.

    2011-06-01

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate-specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate, and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs and potentially find cancerous tissue at early stages, but their application in diagnosing prostate cancer has been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with a wide band gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages. The performance tests of this camera

  12. [Results of testing of MINISKAN mobile gamma-ray camera and specific features of its design].

    PubMed

    Utkin, V M; Kumakhov, M A; Blinov, N N; Korsunskiĭ, V N; Fomin, D K; Kolesnikova, N V; Tultaev, A V; Nazarov, A A; Tararukhina, O B

    2007-01-01

    The main results of engineering, biomedical, and clinical testing of MINISKAN mobile gamma-ray camera are presented. Specific features of the camera hardware and software, as well as the main technical specifications, are described. The gamma-ray camera implements a new technology based on reconstructive tomography, aperture encoding, and digital processing of signals.

  13. Development and calibration of a new gamma camera detector using large square Photomultiplier Tubes

    NASA Astrophysics Data System (ADS)

    Zeraatkar, N.; Sajedi, S.; Teimourian Fard, B.; Kaviani, S.; Akbarzadeh, A.; Farahani, M. H.; Sarkar, S.; Ay, M. R.

    2017-09-01

    Large-area scintillation detectors, applied in gamma cameras as well as Single Photon Emission Computed Tomography (SPECT) systems, play a major role in in-vivo functional imaging. Most gamma detectors utilize a hexagonal arrangement of Photomultiplier Tubes (PMTs). In this work, we applied large square-shaped PMTs in a row/column arrangement for positioning. The use of large square PMTs reduces dead zones on the detector surface. However, the conventional center-of-gravity method for positioning may not yield acceptable results. Hence, the digital correlated signal enhancement (CSE) algorithm was optimized to obtain better linearity and spatial resolution in the developed detector. The performance of the developed detector was evaluated based on the NEMA NU-1 2007 standard. The images acquired using this method showed acceptable uniformity and linearity compared to three commercial gamma cameras. The intrinsic and extrinsic spatial resolutions with a low-energy high-resolution (LEHR) collimator at 10 cm from the surface of the detector were 3.7 mm and 7.5 mm, respectively. The energy resolution of the camera was measured to be 9.5%. The performance evaluation demonstrated that the developed detector maintains image quality with a reduced number of PMTs relative to the detection area.

  14. Estimation of bone mineral content using gamma camera: A real possibility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levy, L.M.; Hoory, S.; Bandyopadhyay, D.

    1985-05-01

    Osteopenia and osteoporosis are diseases related to loss of bone mineral. At present, dual photon absorptiometry, using a dedicated specially built scanner along with a high-activity Gd-153 source, is being used as a diagnostic tool for the early detection of bone loss. The present study was undertaken to explore the possibility that gamma cameras, which are widely available in all Nuclear Medicine departments, could be used successfully to evaluate bone mineral content. A Siemens LFOV gamma camera equipped with a converging collimator was used for this purpose. A fixed source (100 mCi) of Gd-153 was placed at the focal point of the collimator. A series of calcium chloride solutions of varying concentrations in plastic vials were placed near the center of the collimator and imaged both in air and in water. Both the 44 keV and 100 keV images were digitized in 128 x 128 matrices and processed in a CD and A Delta system attached to a VAX 11-750 computer. Uniformity corrections for each field of view were applied, and the attenuation coefficients of calcium chloride for both peaks of Gd-153 were evaluated. In addition, due to the high count rate, corrections for dead time losses were also found to be essential. Excellent concordance between the estimated calcium content and that actually present was obtained with this technique. In conclusion, use of a gamma camera for the routine evaluation of osteoporosis appears to be highly promising and worth pursuing.

  15. Development of a high sensitivity pinhole type gamma camera using semiconductors for low dose rate fields

    NASA Astrophysics Data System (ADS)

    Ueno, Yuichiro; Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi; Fujishima, Yasutake; Yoshida, Akira; Umegaki, Kikuo

    2018-06-01

    We developed a pinhole type gamma camera, using a compact detector module based on a pixelated CdTe semiconductor, with suitable sensitivity and quantitative accuracy for low dose rate fields. In order to improve the sensitivity of the pinhole type semiconductor gamma camera, we adopted three methods: a signal processing method that allows a lower discrimination level, a high sensitivity pinhole collimator, and a smoothing image filter that improves the efficiency of source identification. We tested the basic performance of the developed gamma camera and carefully examined the effects of the three methods. From the sensitivity test, we found that the effective sensitivity was about 21 times higher than that of the gamma camera we had previously developed for high dose rate fields. We confirmed that the gamma camera had sufficient sensitivity and high quantitative accuracy; for example, a weak hot spot (0.9 μSv/h) around a tree root could be detected within 45 min in a low dose rate field test, and errors of measured dose rates with point sources were less than 7% in a dose rate accuracy test.

  16. Performance of the prototype LaBr3 spectrometer developed for the JET gamma-ray camera upgrade.

    PubMed

    Rigamonti, D; Muraro, A; Nocente, M; Perseo, V; Boltruczyk, G; Fernandes, A; Figueiredo, J; Giacomelli, L; Gorini, G; Gosk, M; Kiptily, V; Korolczuk, S; Mianowski, S; Murari, A; Pereira, R C; Cippo, E P; Zychor, I; Tardocchi, M

    2016-11-01

    In this work, we describe the solution developed by the gamma-ray camera upgrade enhancement project to improve the spectroscopic properties of the existing JET γ-ray camera. The aim of the project is to enable gamma-ray spectroscopy in JET deuterium-tritium plasmas. A dedicated pilot spectrometer based on a LaBr3 crystal coupled to a silicon photomultiplier has been developed. A pole-zero cancellation network able to shorten the output signal to a length of 120 ns has been implemented, allowing for spectroscopy at MHz count rates. The system has been characterized in the laboratory and shows an energy resolution of 5.5% at Eγ = 0.662 MeV, which extrapolates favorably to the energy range of interest for gamma-ray emission from fast ions in fusion plasmas.

  17. SU-E-E-06: Teaching About the Gamma Camera and Ultrasound Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowe, M; Spiro, A; Vogel, R

    Purpose: Instructional modules on applications of physics in medicine are being developed. The target audience consists of students who have had an introductory undergraduate physics course. This presentation will concentrate on an active learning approach to teach the principles of the gamma camera. There will also be a description of an apparatus to teach ultrasound imaging. Methods: Since a real gamma camera is not feasible in the undergraduate classroom, we have developed two types of optical apparatus that teach the main principles. To understand the collimator, LEDs mimic gamma emitters in the body, and the photons pass through an array of tubes. The distance, spacing, diameter, and length of the tubes can be varied to understand the effect upon the resolution of the image. To determine the positions of the gamma emitters, a second apparatus uses a movable green laser, fluorescent plastic in lieu of the scintillation crystal, acrylic rods that mimic the PMTs, and a photodetector to measure the intensity. The position of the laser is calculated with a centroid algorithm. To teach the principles of ultrasound imaging, we are using the sound head and pulser box of an educational product, a variable gain amplifier, a rotation table, a digital oscilloscope, Matlab software, and phantoms. Results: Gamma camera curriculum materials were implemented in the classroom at Loyola in 2014 and 2015. Written work shows good knowledge retention and a more complete understanding of the material. Preliminary ultrasound imaging materials were run in 2015. Conclusion: Active learning methods add another dimension to descriptions in textbooks and are effective in keeping the students engaged during class time. The teaching apparatus for the gamma camera and ultrasound imaging can be expanded to include more cases, and could potentially improve students’ understanding of artifacts and distortions in the images.

  18. Development of gamma-photon/Cerenkov-light hybrid system for simultaneous imaging of I-131 radionuclide

    NASA Astrophysics Data System (ADS)

    Yamamoto, Seiichi; Suzuki, Mayumi; Kato, Katsuhiko; Watabe, Tadashi; Ikeda, Hayato; Kanai, Yasukazu; Ogata, Yoshimune; Hatazawa, Jun

    2016-09-01

    Although iodine-131 (I-131) is used for radionuclide therapy, high resolution images are difficult to obtain with conventional gamma cameras because of the high energy of I-131 gamma photons (364 keV). Cerenkov-light imaging is a possible method for beta-emitting radionuclides, and I-131 (606 keV maximum beta energy) is a candidate for obtaining high resolution images. We developed a high energy gamma camera system for the I-131 radionuclide and combined it with a Cerenkov-light imaging system to form a gamma-photon/Cerenkov-light hybrid imaging system, in order to compare simultaneously measured images from these two modalities. The high energy gamma imaging detector used 0.85-mm × 0.85-mm × 10-mm thick GAGG scintillator pixels arranged in a 44 × 44 matrix with a 0.1-mm thick reflector, optically coupled to a Hamamatsu 2 in. square position-sensitive photomultiplier tube (PSPMT: H12700 MOD). The gamma imaging detector was encased in a 2 cm thick tungsten shield, and a pinhole collimator was mounted on its top to form a gamma camera system. The Cerenkov-light imaging system was made of a high sensitivity cooled CCD camera. The Cerenkov-light imaging system was combined with the gamma camera using optical mirrors to image the same area of the subject. With this configuration, we simultaneously imaged the gamma photons and the Cerenkov light from I-131 in the subjects. The spatial resolution and sensitivity of the gamma camera system for I-131 were 3 mm FWHM and 10 cps/MBq, respectively, with the high sensitivity collimator at 10 cm from the collimator surface. The spatial resolution of the Cerenkov-light imaging system was 0.64 mm FWHM at 10 cm from the system surface. Thyroid phantom and rat images were successfully obtained with the developed gamma-photon/Cerenkov-light hybrid imaging system, allowing direct comparison of the two modalities. Our developed gamma-photon/Cerenkov-light hybrid imaging system will be useful to evaluate the advantages and disadvantages of these two

  19. Simulation study of the second-generation MR-compatible SPECT system based on the inverted compound-eye gamma camera design

    NASA Astrophysics Data System (ADS)

    Lai, Xiaochun; Meng, Ling-Jian

    2018-02-01

    In this paper, we present simulation studies for the second-generation MRI-compatible SPECT system, MRC-SPECT-II, based on an inverted compound eye (ICE) gamma camera concept. The MRC-SPECT-II system consists of a total of 1536 independent micro-pinhole-camera-elements (MCEs) distributed in a ring with an inner diameter of 6 cm. This system provides a FOV of 1 cm diameter and a peak geometrical efficiency of approximately 1.3%, compared with the typical levels of 0.1%-0.01% found in modern pre-clinical SPECT instrumentation, while maintaining a sub-500 μm spatial resolution. Compared to the first-generation MRC-SPECT system (MRC-SPECT-I) (Cai 2014 Nucl. Instrum. Methods Phys. Res. A 734 147-51) developed in our lab, the MRC-SPECT-II system offers similar resolution with dramatically improved sensitivity and a greatly reduced physical dimension. The latter should allow the system to be placed inside most clinical and pre-clinical MRI scanners for high-performance simultaneous MRI and SPECT imaging.

  20. A high-speed digital camera system for the observation of rapid H-alpha fluctuations in solar flares

    NASA Technical Reports Server (NTRS)

    Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.

    1989-01-01

    Researchers developed a prototype digital camera system for obtaining H-alpha images of solar flares with 0.1 s time resolution. They intend to operate this system in conjunction with SMM's Hard X Ray Burst Spectrometer, with x ray instruments which will be available on the Gamma Ray Observatory and eventually with the Gamma Ray Imaging Device (GRID), and with the High Resolution Gamma-Ray and Hard X Ray Spectrometer (HIREGS) which are being developed for the Max '91 program. The digital camera has recently proven to be successful as a one camera system operating in the blue wing of H-alpha during the first Max '91 campaign. Construction and procurement of a second and possibly a third camera for simultaneous observations at other wavelengths are underway as are analyses of the campaign data.

  1. Sentinel node detection in early breast cancer with intraoperative portable gamma camera: UK experience.

    PubMed

    Ghosh, Debashis; Michalopoulos, Nikolaos V; Davidson, Timothy; Wickham, Fred; Williams, Norman R; Keshtgar, Mohammed R

    2017-04-01

    Access to a nuclear medicine department for sentinel node imaging remains an issue in a number of hospitals in the UK and in many parts of the world. Sentinella® is a portable imaging camera used intra-operatively to produce real-time visual localisation of sentinel lymph nodes. Sentinella® was tested in a controlled laboratory environment at our centre, and we report our experience of the first use of this technology in the UK. In addition, preoperative scintigrams of the axilla were obtained in 144 patients undergoing sentinel node biopsy using a conventional gamma camera (CGC). Sentinella® scans were performed intra-operatively to correlate with the pre-operative scintigram and to determine the presence of any residual hot node after the axilla was deemed to be clear based on the silence of the hand-held gamma probe. Sentinella® detected significantly more nodes than the CGC (p < 0.0001). Sentinella® picked up extra nodes in 5/144 cases after the axilla was found silent using the hand-held gamma probe. In 2/144 cases, the extra nodes detected by Sentinella® contained tumour cells, leading to a complete axillary clearance. Sentinella® is a reliable technique for intra-operative localisation of radioactive nodes. It provides increased nodal visualisation rates compared to static scintigram imaging and proves to be an important tool for harvesting all hot sentinel nodes. This portable gamma camera can definitely replace the use of conventional lymphoscintigrams, saving time and money for both patients and the health system. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Performance of the prototype LaBr3 spectrometer developed for the JET gamma-ray camera upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rigamonti, D., E-mail: davide.rigamonti@mib.infn.it; Nocente, M.; Gorini, G.

    2016-11-15

    In this work, we describe the solution developed by the gamma-ray camera upgrade enhancement project to improve the spectroscopic properties of the existing JET γ-ray camera. The aim of the project is to enable gamma-ray spectroscopy in JET deuterium-tritium plasmas. A dedicated pilot spectrometer based on a LaBr3 crystal coupled to a silicon photomultiplier has been developed. A pole-zero cancellation network able to shorten the output signal to a length of 120 ns has been implemented, allowing for spectroscopy at MHz count rates. The system has been characterized in the laboratory and shows an energy resolution of 5.5% at Eγ = 0.662 MeV, which extrapolates favorably to the energy range of interest for gamma-ray emission from fast ions in fusion plasmas.

  3. TU-H-206-01: An Automated Approach for Identifying Geometric Distortions in Gamma Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, S; Nelson, J; Samei, E

    2016-06-15

    Purpose: To develop a clinically-deployable, automated process for detecting artifacts in routine nuclear medicine (NM) quality assurance (QA) bar phantom images. Methods: An artifact detection algorithm was created to analyze bar phantom images as part of an ongoing QA program. A low-noise, high-resolution reference image was acquired from an x-ray of the bar phantom with a Philips Digital Diagnost system utilizing image stitching. NM bar images, acquired for 5 million counts over a 512×512 matrix, were registered to the template image by maximizing mutual information (MI). The MI index was used as an initial test for artifacts; low values indicate an overall presence of distortions regardless of their spatial location. Images with low MI scores were further analyzed for bar linearity, periodicity, alignment, and compression to locate differences with respect to the template. Findings from each test were spatially correlated, and locations failing multiple tests were flagged as potential artifacts requiring additional visual analysis. The algorithm was initially deployed for GE Discovery 670 and Infinia Hawkeye gamma cameras. Results: The algorithm successfully identified clinically relevant artifacts from both systems previously unnoticed by technologists performing the QA. Average MI indices for artifact-free images are 0.55. Images with MI indices < 0.50 have shown 100% sensitivity and specificity for artifact detection when compared with a thorough visual analysis. Correlation of geometric tests confirms the ability to spatially locate the most likely image regions containing an artifact regardless of initial phantom orientation. Conclusion: The algorithm shows the potential to detect gamma camera artifacts that may be missed by routine technologist inspections. Detection and subsequent correction of artifacts ensures maximum image quality and may help to identify failing hardware before it impacts clinical workflow. Going forward, the algorithm is
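
    A minimal sketch of the screening step described above: compute a mutual-information score between an acquired bar-phantom image and the registered reference template, and flag the image for visual review when the score falls below the 0.50 level quoted in the abstract. The histogram-based MI estimate, bin count and function names are illustrative assumptions, not the authors' implementation; the registration step itself is not shown.

        import numpy as np

        def mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 64) -> float:
            """Histogram-based mutual information between two equally sized images."""
            hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            pxy = hist_2d / hist_2d.sum()            # joint probability
            px = pxy.sum(axis=1, keepdims=True)      # marginal of image a
            py = pxy.sum(axis=0, keepdims=True)      # marginal of image b
            nonzero = pxy > 0
            return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

        def flag_for_review(nm_image: np.ndarray, template: np.ndarray, threshold: float = 0.50) -> bool:
            """Return True when the bar-phantom image should be inspected for artifacts."""
            return mutual_information(nm_image, template) < threshold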

  4. Europe's space camera unmasks a cosmic gamma-ray machine

    NASA Astrophysics Data System (ADS)

    1996-11-01

    The new-found neutron star is the visible counterpart of a pulsating radio source, Pulsar 1055-52. It is a mere 20 kilometres wide. Although the neutron star is very hot, at about a million degrees C, very little of its radiant energy takes the form of visible light. It emits mainly gamma-rays, an extremely energetic form of radiation. By examining it at visible wavelengths, astronomers hope to figure out why Pulsar 1055-52 is the most efficient generator of gamma-rays known so far, anywhere in the Universe. The Faint Object Camera found Pulsar 1055-52 in near-ultraviolet light at 3400 angstroms, a little shorter in wavelength than the violet light at the extremity of the human visual range. Roberto Mignani, Patrizia Caraveo and Giovanni Bignami of the Istituto di Fisica Cosmica in Milan, Italy, report its optical identification in a forthcoming issue of Astrophysical Journal Letters (1 January 1997). The formal name of the object is PSR 1055-52. Evading the glare of an adjacent star: The Italian team had tried since 1988 to spot Pulsar 1055-52 with two of the most powerful ground-based optical telescopes in the Southern Hemisphere. These were the 3.6-metre Telescope and the 3.5-metre New Technology Telescope of the European Southern Observatory at La Silla, Chile. Unfortunately an ordinary star 100,000 times brighter lay in almost the same direction in the sky, separated from the neutron star by only a thousandth of a degree. The Earth's atmosphere defocused the star's light sufficiently to mask the glimmer from Pulsar 1055-52. The astronomers therefore needed an instrument in space. The Faint Object Camera offered the best precision and sensitivity to continue the hunt. Devised by European astronomers to complement the American wide field camera in the Hubble Space Telescope, the Faint Object Camera has a relatively narrow field of view. It intensifies the image of a faint object by repeatedly accelerating electrons from photo-electric films, so as to produce

  5. Feasibility of a high-speed gamma-camera design using the high-yield-pileup-event-recovery method.

    PubMed

    Wong, W H; Li, H; Uribe, J; Baghaei, H; Wang, Y; Yokoyama, S

    2001-04-01

    Higher count-rate gamma cameras than are currently used are needed if the technology is to fulfill its promise in positron coincidence imaging, radionuclide therapy dosimetry imaging, and cardiac first-pass imaging. The present single-crystal design coupled with conventional detector electronics and the traditional Anger-positioning algorithm hinders higher count-rate imaging because of the pileup of gamma-ray signals in the detector and electronics. At an interaction rate of 2 million events per second, the fraction of nonpileup events is < 20% of the total incident events. Hence, the recovery of pileup events can significantly increase the count-rate capability, increase the yield of imaging photons, and minimize image artifacts associated with pileups. A new technology to significantly enhance the performance of gamma cameras in this area is introduced. We introduce a new electronic design called high-yield-pileup-event-recovery (HYPER) electronics for processing the detector signal in gamma cameras so that the individual gamma energies and positions of pileup events, including multiple pileups, can be resolved and recovered despite the mixing of signals. To illustrate the feasibility of the design concept, we have developed a small gamma-camera prototype with the HYPER-Anger electronics. The camera has a 10 × 10 × 1 cm NaI(Tl) crystal with four photomultipliers. Hot-spot and line sources with very high 99mTc activities were imaged. The phantoms were imaged continuously from 60,000 to 3,500,000 counts per second to illustrate the efficacy of the method as a function of counting rate. At 2-3 million events per second, all phantoms were imaged with little distortion, pileup, or dead-time loss. At these counting rates, multiple pileup events (> or = 3 events piling together) were the predominant occurrences, and the HYPER circuit functioned well to resolve and recover these events. The full width at half maximum of the line-spread function at 3,000,000 counts per
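
    The HYPER electronics are a hardware design and are not reproduced here; the following is only a generic software illustration of the underlying remainder-subtraction idea: model each NaI(Tl) pulse as an exponential with a known decay constant and subtract the predicted tail of earlier pulses before integrating the current one. The sampling period, gate length and the crude amplitude estimate are assumed values for the sketch.

        import numpy as np

        TAU = 0.23e-6        # NaI(Tl) scintillation decay constant, ~230 ns
        DT = 10e-9           # assumed digitizer sampling period (100 MS/s)

        def recover_energies(waveform: np.ndarray, trigger_indices, gate: float = 1.0e-6):
            """Estimate per-event energies from a digitized anode waveform with pileup.

            Each pulse is modelled as A * exp(-t / TAU); the modelled tail of every
            earlier pulse is removed before the current pulse is integrated.
            """
            n_gate = int(gate / DT)
            decay = np.exp(-np.arange(len(waveform)) * DT / TAU)
            energies = []
            residual = waveform.astype(float).copy()
            for idx in trigger_indices:
                window = residual[idx:idx + n_gate]
                if window.size == 0:
                    continue
                amp = window[0]                      # crude amplitude estimate at the trigger
                energies.append(window.sum() * DT)   # integrated charge ~ deposited energy
                # subtract this pulse's modelled tail from all later samples
                residual[idx:] -= amp * decay[: len(residual) - idx]
            return energies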

  6. Portable gamma camera guidance in sentinel lymph node biopsy: prospective observational study of consecutive cases.

    PubMed

    Peral Rubio, F; de La Riva, P; Moreno-Ramírez, D; Ferrándiz-Pulido, L

    2015-06-01

    Sentinel lymph node biopsy is the most important tool available for node staging in patients with melanoma. To analyze sentinel lymph node detection and dissection with radio guidance from a portable gamma camera. To assess the number of complications attributable to this biopsy technique. Prospective observational study of a consecutive series of patients undergoing radioguided sentinel lymph node biopsy. We analyzed agreement between nodes detected by presurgical lymphography, those detected by the gamma camera, and those finally dissected. A total of 29 patients (17 women [62.5%] and 12 men [37.5%]) were enrolled. The mean age was 52.6 years (range, 26-82 years). The sentinel node was dissected from all patients; secondary nodes were dissected from some. In 16 cases (55.2%), there was agreement between the number of nodes detected by lymphography, those detected by the gamma camera, and those finally dissected. The only complications observed were seromas (3.64%). No cases of wound dehiscence, infection, hematoma, or hemorrhage were observed. Portable gamma-camera radio guidance may be of use in improving the detection and dissection of sentinel lymph nodes and may also reduce complications. These goals are essential in a procedure whose purpose is melanoma staging. Copyright © 2014 Elsevier España, S.L.U. and AEDV. All rights reserved.

  7. Gate simulation of Compton Ar-Xe gamma-camera for radionuclide imaging in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Dubov, L. Yu; Belyaev, V. N.; Berdnikova, A. K.; Bolozdynia, A. I.; Akmalova, Yu A.; Shtotsky, Yu V.

    2017-01-01

    Computer simulations of a cylindrical Compton Ar-Xe gamma camera are described in the current report. The detection efficiency of a cylindrical Ar-Xe Compton camera with an internal diameter of 40 cm is estimated as 1-3%, which is 10-100 times higher than that of a collimated Anger camera. It is shown that a cylindrical Compton camera can image a Tc-99m radiotracer distribution with a uniform spatial resolution of 20 mm through the whole field of view.

  8. Investigations of Au-198 as radiotracer in laboratory porous media using gamma camera: a preliminary study

    NASA Astrophysics Data System (ADS)

    Othman, N.; Kamal, W. H. B. Wan; Yusof, N. H.; Engku Chik, E. M. F.; Yunos, M. A. S.; Adnan, M. A. K.; Shari, M. R.

    2018-01-01

    A preliminary experiment has been carried out using irradiated Au-198 as a radiotracer inside a laboratory porous medium. The objectives are to check the compatibility of Au-198 as a radiotracer inside the porous medium as well as to provide insights into the fluid hydrodynamics inside the medium using a gamma camera. 198Au is a gamma-emitting isotope with a half-life of 2.7 days and a gamma energy of 0.41 MeV (99%). The porous medium consists of fine sandstone with a grain size of 850 μm, lubricant as a mimic of original oil in place (OOIP) or trapped oil, and a layer of cement on top of the rig as the bedrock. The gamma camera is arranged next to the porous medium in order to capture the movement of the radiotracer, with the acquisition set to 1 minute per frame. Initially, a gold wire containing the isotope 197Au was irradiated inside the rotary rack of the Reactor Triga PUSPATI (RTP) to produce 198Au. RTP, located at Nuclear Malaysia, Bangi, has a power of 750 kW and a neutron flux of 5 × 102 n/cm2/s. 198Au, which is in liquid form, is injected into the porous medium and monitored and recorded by the gamma camera. The gamma camera gives a quantitative determination of local fluid saturations over the area of observation.
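
    For quantitative tracer measurements that span a significant fraction of the 2.7-day half-life quoted above, recorded counts are usually decay-corrected back to the injection time. A small helper under that assumption (function and variable names are illustrative, not from the paper):

        HALF_LIFE_AU198_H = 2.7 * 24.0   # Au-198 half-life in hours (2.7 days)

        def decay_corrected_counts(measured_counts: float, elapsed_hours: float) -> float:
            """Refer measured counts back to injection time using simple exponential decay."""
            decay_factor = 0.5 ** (elapsed_hours / HALF_LIFE_AU198_H)
            return measured_counts / decay_factor

        # Example: counts recorded 24 h after injection are scaled up by about 1.29
        print(decay_corrected_counts(1000.0, 24.0))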

  9. A neutron camera system for MAST.

    PubMed

    Cecconello, M; Turnyanskiy, M; Conroy, S; Ericsson, G; Ronchi, E; Sangaroon, S; Akers, R; Fitzgerald, I; Cullen, A; Weiszflog, M

    2010-10-01

    A prototype neutron camera has been developed and installed at MAST as part of a feasibility study for a multichord neutron camera system, with the aim of measuring the spatially and time-resolved 2.45 MeV neutron emissivity profile. Liquid scintillators coupled to a fast digitizer are used for neutron/gamma-ray digital pulse shape discrimination. The preliminary results obtained clearly show the capability of this diagnostic to measure neutron emissivity profiles with sufficient time resolution to study the effect of fast-ion loss and redistribution due to magnetohydrodynamic activity. A minimum time resolution of 2 ms has been achieved with a modest 1.5 MW of neutral beam injection heating and a measured neutron count rate of a few hundred kHz.
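
    The abstract does not specify which digital pulse-shape discrimination algorithm is used; a common choice for liquid scintillators is the charge-comparison (tail-to-total) method, sketched below for a baseline-subtracted digitized pulse. The gate lengths and the classification threshold are illustrative assumptions.

        import numpy as np

        def tail_to_total(pulse: np.ndarray, peak_idx: int,
                          short_gate: int = 30, long_gate: int = 200) -> float:
            """Charge-comparison PSD figure: tail integral / total integral.

            Neutron (proton-recoil) pulses in liquid scintillator carry a larger slow
            component, so they cluster at higher ratios than gamma pulses.
            """
            total = pulse[peak_idx:peak_idx + long_gate].sum()
            tail = pulse[peak_idx + short_gate:peak_idx + long_gate].sum()
            return float(tail / total) if total > 0 else 0.0

        def classify(pulse: np.ndarray, peak_idx: int, threshold: float = 0.18) -> str:
            """Label a pulse as 'neutron' or 'gamma' with an illustrative PSD cut."""
            return "neutron" if tail_to_total(pulse, peak_idx) > threshold else "gamma"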

  10. Impact of intense x-ray pulses on a NaI(Tl)-based gamma camera

    NASA Astrophysics Data System (ADS)

    Koppert, W. J. C.; van der Velden, S.; Steenbergen, J. H. L.; de Jong, H. W. A. M.

    2018-03-01

    In SPECT/CT systems x-ray and γ-ray imaging is performed sequentially. Simultaneous acquisition may have advantages, for instance in interventional settings. However, this may expose a gamma camera to relatively high x-ray doses and deteriorate its functioning. We studied the NaI(Tl) response to x-ray pulses with a photodiode, a PMT and a gamma camera, respectively. First, we exposed a NaI(Tl)-photodiode assembly to x-ray pulses to investigate potential crystal afterglow. Next, we exposed a NaI(Tl)-PMT assembly to 10 ms LED pulses (mimicking x-ray pulses) and measured the response to flashing LED probe-pulses (mimicking γ-pulses). We then exposed the assembly to x-ray pulses, with detector entrance doses of up to 9 nGy/pulse, and analysed the response for γ-pulse variations. Finally, we studied the response of a Siemens Diacam gamma camera to γ-rays while exposed to x-ray pulses. X-ray exposure of the crystal, read out with a photodiode, revealed a 15% afterglow fraction after 3 ms. The NaI(Tl)-PMT assembly showed disturbances up to 10 ms after 10 ms LED exposure. After x-ray exposure, however, responses showed elevated baselines with a 60 ms decay time. Both for x-ray and LED exposure, and after baseline subtraction, probe-pulse analysis revealed disturbed pulse-height measurements shortly after exposure. X-ray exposure of the Diacam corroborated the elementary experiments. Up to 50 ms after an x-ray pulse, no events are registered, followed by apparent energy elevations up to 100 ms after exposure. Limiting the dose to 0.02 nGy/pulse prevents detrimental effects. Conventional gamma cameras exhibit substantial dead-time and mis-registration of photon energies up to 100 ms after intense x-ray pulses. This is due to PMT limitations and to afterglow in the crystal. Using PMTs with modified circuitry, we show that deteriorative afterglow effects can be reduced without noticeable effects on the PMT performance, up to x-ray pulse doses of 1 nGy.

  11. A novel Compton camera design featuring a rear-panel shield for substantial noise reduction in gamma-ray images

    NASA Astrophysics Data System (ADS)

    Nishiyama, T.; Kataoka, J.; Kishimoto, A.; Fujita, T.; Iwamoto, Y.; Taya, T.; Ohsuka, S.; Nakamura, S.; Hirayanagi, M.; Sakurai, N.; Adachi, S.; Uchiyama, T.

    2014-12-01

    After the Japanese nuclear disaster in 2011, large amounts of radioactive isotopes were released and still remain a serious problem in Japan. Consequently, various gamma cameras are being developed to help identify radiation hotspots and ensure effective decontamination operations. The Compton camera utilizes the kinematics of Compton scattering to construct images without using a mechanical collimator, and features a wide field of view. For instance, we have developed a novel Compton camera that features a small size (13 × 14 × 15 cm3) and light weight (1.9 kg), but which also achieves high sensitivity thanks to Ce:GAGG scintillators optically coupled with MPPC arrays. By definition, in such a Compton camera, gamma rays are expected to scatter in the "scatterer" and then be fully absorbed in the "absorber" (in what is called a forward-scattered event). However, high-energy gamma rays often interact with the detector in the opposite direction - initially scattered in the absorber and then absorbed in the scatterer - in what is called a "back-scattered" event. Any contamination of such back-scattered events is known to substantially degrade the quality of gamma-ray images, but determining the order of gamma-ray interaction based solely on the energy deposits in the scatterer and absorber is quite difficult. For this reason, we propose a novel yet simple Compton camera design that includes a rear-panel shield (a few mm thick) consisting of W or Pb located just behind the scatterer. Since the energy of the scattered gamma rays in back-scattered events is much lower than that in forward-scattered events, we can effectively discriminate and reduce back-scattered events to improve the signal-to-noise ratio in the images. This paper presents our detailed optimization of the rear-panel shield using Geant4 simulation, and describes a demonstration test using our Compton camera.
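
    The rear-panel shield works because a photon leaving a back-scattering interaction carries at most E/(1 + 2E/511 keV), i.e. never more than about 256 keV whatever the primary energy, so a few millimetres of W or Pb absorbs it efficiently while forward-scattered photons are barely affected. A short check of that bound with the standard Compton formula (not the authors' Geant4 model):

        from math import cos, radians

        ME_C2_KEV = 511.0  # electron rest energy

        def scattered_energy(e_kev: float, theta_deg: float) -> float:
            """Compton-scattered photon energy for primary energy e_kev at angle theta."""
            return e_kev / (1.0 + (e_kev / ME_C2_KEV) * (1.0 - cos(radians(theta_deg))))

        for e in (511.0, 662.0, 1330.0):
            # Forward-scattered photons keep most of their energy; backscattered ones
            # are capped below m_e c^2 / 2 ~ 255.5 keV, whatever the primary energy.
            print(f"E={e:6.1f} keV  30 deg: {scattered_energy(e, 30):6.1f}  180 deg: {scattered_energy(e, 180):6.1f}")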

  12. A panoramic coded aperture gamma camera for radioactive hotspots localization

    NASA Astrophysics Data System (ADS)

    Paradiso, V.; Amgarou, K.; Blanc De Lanaute, N.; Schoepff, V.; Amoyal, G.; Mahe, C.; Beltramello, O.; Liénard, E.

    2017-11-01

    A known disadvantage of the coded aperture imaging approach is its limited field-of-view (FOV), which often proves insufficient when analysing complex dismantling scenes such as post-accident scenarios, where multiple measurements are needed to fully characterize the scene. In order to overcome this limitation, a panoramic coded aperture γ-camera prototype has been developed. The system is based on a 1 mm thick CdTe detector directly bump-bonded to a Timepix readout chip, developed by the Medipix2 collaboration (256 × 256 pixels, 55 μm pitch, 14.08 × 14.08 mm2 sensitive area). A MURA-pattern coded aperture is used, allowing for background subtraction without the use of heavy shielding. This system is then combined with a USB color camera. The output of each measurement is a semi-spherical image covering a FOV of 360 degrees horizontally and 80 degrees vertically, rendered in spherical coordinates (θ, φ). The geometrical shapes of the radiation-emitting objects are preserved by first registering and stitching the optical images captured by the prototype, and subsequently applying the same transformations to their corresponding radiation images. Panoramic gamma images generated using the technique proposed in this paper are described and discussed, along with the main experimental results obtained in laboratory campaigns.

  13. Material efficiency studies for a Compton camera designed to measure characteristic prompt gamma rays emitted during proton beam radiotherapy

    PubMed Central

    Robertson, Daniel; Polf, Jerimy C; Peterson, Steve W; Gillin, Michael T; Beddar, Sam

    2011-01-01

    Prompt gamma rays emitted from biological tissues during proton irradiation carry dosimetric and spectroscopic information that can assist with treatment verification and provide an indication of the biological response of the irradiated tissues. Compton cameras are capable of determining the origin and energy of gamma rays. However, prompt gamma monitoring during proton therapy requires new Compton camera designs that perform well at the high gamma energies produced when tissues are bombarded with therapeutic protons. In this study we optimize the materials and geometry of a three-stage Compton camera for prompt gamma detection and calculate the theoretical efficiency of such a detector. The materials evaluated in this study include germanium, bismuth germanate (BGO), NaI, xenon, silicon and lanthanum bromide (LaBr3). For each material, the dimensions of each detector stage were optimized to produce the maximum number of relevant interactions. These results were used to predict the efficiency of various multi-material cameras. The theoretical detection efficiencies of the most promising multi-material cameras were then calculated for the photons emitted from a tissue-equivalent phantom irradiated by therapeutic proton beams ranging from 50 to 250 MeV. The optimized detector stages had a lateral extent of 10 × 10 cm2 with the thickness of the initial two stages dependent on the detector material. The thickness of the third stage was fixed at 10 cm regardless of material. The most efficient single-material cameras were composed of germanium (3 cm) and BGO (2.5 cm). These cameras exhibited efficiencies of 1.15 × 10−4 and 9.58 × 10−5 per incident proton, respectively. The most efficient multi-material camera design consisted of two initial stages of germanium (3 cm) and a final stage of BGO, resulting in a theoretical efficiency of 1.26 × 10−4 per incident proton. PMID:21508442

  14. Design and realization of an AEC&AGC system for the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun

    2015-08-01

    An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. The usual AEC and AGC algorithms are not suitable for an aerial camera, since the camera always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic gamma correction is applied before the image is output so that the image is better suited to viewing and analysis by the human eye. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests showed that the system meets the requirements of the camera system, with fast adjustment speed, high adaptability, and high reliability in severe and complex environments.
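
    The paper does not give the control law; the sketch below is a hypothetical single-step AEC/AGC loop in the spirit described: correct the frame brightness towards a target mean, cap the exposure by a motion-blur limit derived from platform speed, make up the remainder with gain, and apply a display gamma before output. All parameter names and limits are assumptions.

        import numpy as np

        def aec_agc_step(frame: np.ndarray, exposure_s: float, gain: float,
                         target_mean: float = 0.45, max_exposure_s: float = 1e-3,
                         speed_mps: float = 0.0, gsd_m: float = 0.1):
            """One control iteration: prefer shutter changes, cap exposure so image
            motion stays under ~0.5 pixel of blur, then use analog gain for the rest."""
            mean = float(frame.mean()) + 1e-6             # frame normalized to [0, 1]
            correction = target_mean / mean
            blur_limit = 0.5 * gsd_m / max(speed_mps, 1e-6)
            new_exposure = min(exposure_s * correction, max_exposure_s, blur_limit)
            new_gain = np.clip(gain * correction * exposure_s / new_exposure, 1.0, 16.0)
            return new_exposure, float(new_gain)

        def gamma_correct(frame: np.ndarray, gamma: float = 2.2) -> np.ndarray:
            """Display gamma applied before output, as mentioned in the abstract."""
            return np.clip(frame, 0.0, 1.0) ** (1.0 / gamma)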

  15. Photodetectors for the Advanced Gamma-ray Imaging System (AGIS)

    NASA Astrophysics Data System (ADS)

    Wagner, Robert G.; Advanced Gamma-ray Imaging System AGIS Collaboration

    2010-03-01

    The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next generation very high energy gamma-ray observatory. Design goals include an order of magnitude better sensitivity, better angular resolution, and a lower energy threshold than existing Cherenkov telescopes. Each telescope is equipped with a camera that detects and records the Cherenkov-light flashes from air showers. The camera is comprised of a pixelated focal plane of blue-sensitive and fast (nanosecond) photon detectors that detect the photon signal and convert it into an electrical one. Given the scale of AGIS, the camera must be reliable and cost effective. The Schwarzschild-Couder optical design yields a smaller plate scale than present-day Cherenkov telescopes, enabling the use of more compact, multi-pixel devices, including multianode photomultipliers or Geiger avalanche photodiodes. We present the conceptual design of the focal plane for the camera and results from testing candidate focal plane sensors.

  16. A didactic experiment showing the Compton scattering by means of a clinical gamma camera.

    PubMed

    Amato, Ernesto; Auditore, Lucrezia; Campennì, Alfredo; Minutoli, Fabio; Cucinotta, Mariapaola; Sindoni, Alessandro; Baldari, Sergio

    2017-06-01

    We describe a didactic approach aimed at explaining the effect of Compton scattering in nuclear medicine imaging, comparing a didactic experiment performed with a gamma camera with the outcomes of a Monte Carlo simulation of the same experimental apparatus. We employed a 99mTc source emitting 140.5 keV photons, collimated in the upward direction through two pinholes and shielded by 6 mm of lead. An aluminium cylinder was placed over the source at a distance of 50 mm. The energy of the scattered photons was measured on the spectra acquired by the gamma camera. We observed that the measured gamma-ray energy at each step of rotation gradually decreased from the characteristic energy of 140.5 keV at 0° to 102.5 keV at 120°. A comparison between the obtained data and the expected results from the Compton formula and from the Monte Carlo simulation revealed full agreement within the experimental error (relative errors between -0.56% and 1.19%) given by the energy resolution of the gamma camera. The electron rest mass was also evaluated satisfactorily. The experiment was found useful in explaining to nuclear medicine residents the phenomenology of Compton scattering and its importance in nuclear medicine imaging, and it can be profitably proposed during the training of medical physics residents as well. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
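
    The expected energies follow directly from the Compton formula E' = E0 / (1 + (E0/511 keV)(1 - cos θ)); a short script reproducing the predicted curve for the 140.5 keV 99mTc line, against which the measured values (e.g. ~102.5 keV at 120°) can be compared:

        from math import cos, radians

        E0 = 140.5        # 99mTc photon energy, keV
        ME_C2 = 511.0     # electron rest energy, keV

        def compton_energy(theta_deg: float) -> float:
            """Energy of the photon scattered at angle theta (Compton formula)."""
            return E0 / (1.0 + (E0 / ME_C2) * (1.0 - cos(radians(theta_deg))))

        for theta in range(0, 150, 30):
            print(f"{theta:3d} deg -> {compton_energy(theta):6.1f} keV")
        # 0 deg gives 140.5 keV; by 120 deg the prediction drops to roughly 100 keV,
        # consistent within resolution with the ~102.5 keV reported by the gamma camera.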

  17. Development and characterization of a round hand-held silicon photomultiplier based gamma camera for intraoperative imaging

    PubMed Central

    Popovic, Kosta; McKisson, Jack E.; Kross, Brian; Lee, Seungjoon; McKisson, John; Weisenberger, Andrew G.; Proffitt, James; Stolin, Alexander; Majewski, Stan; Williams, Mark B.

    2017-01-01

    This paper describes the development of a hand-held gamma camera for intraoperative surgical guidance that is based on silicon photomultiplier (SiPM) technology. The camera incorporates a cerium doped lanthanum bromide (LaBr3:Ce) plate scintillator, an array of 80 SiPM photodetectors and a two-layer parallel-hole collimator. The field of view is circular with a 60 mm diameter. The disk-shaped camera housing is 75 mm in diameter, approximately 40.5 mm thick and has a mass of only 1.4 kg, permitting either hand-held or arm-mounted use. All camera components are integrated on a mobile cart that allows easy transport. The camera was developed for use in surgical procedures including determination of the location and extent of primary carcinomas, detection of secondary lesions and sentinel lymph node biopsy (SLNB). Here we describe the camera design and its principal operating characteristics, including spatial resolution, energy resolution, sensitivity uniformity, and geometric linearity. The gamma camera has an intrinsic spatial resolution of 4.2 mm FWHM, an energy resolution of 21.1 % FWHM at 140 keV, and a sensitivity of 481 and 73 cps/MBq when using the single- and double-layer collimators, respectively. PMID:28286345

  18. Development of an omnidirectional gamma-ray imaging Compton camera for low-radiation-level environmental monitoring

    NASA Astrophysics Data System (ADS)

    Watanabe, Takara; Enomoto, Ryoji; Muraishi, Hiroshi; Katagiri, Hideaki; Kagaya, Mika; Fukushi, Masahiro; Kano, Daisuke; Satoh, Wataru; Takeda, Tohoru; Tanaka, Manobu M.; Tanaka, Souichi; Uchida, Tomohisa; Wada, Kiyoto; Wakamatsu, Ryo

    2018-02-01

    We have developed an omnidirectional gamma-ray imaging Compton camera for environmental monitoring at low levels of radiation. The camera consisted of only six 3.5 cm CsI(Tl) scintillator cubes, each of which was read out by a super-bialkali photomultiplier tube (PMT). Our camera enables the visualization of the position of gamma-ray sources in all directions (∼4π sr) over a wide energy range between 300 and 1400 keV. The angular resolution (σ) was found to be ∼11°, which was realized using an image-sharpening technique. A high detection efficiency of 18 cps/(µSv/h) for 511 keV (1.6 cps/MBq at 1 m) was achieved, indicating the capability of this camera to visualize hotspots in areas with low-radiation-level contamination, from the order of µSv/h down to natural background levels. Our proposed technique can be easily used as a low-radiation-level imaging monitor in radiation control areas, such as medical and accelerator facilities.

  19. Progress towards a semiconductor Compton camera for prompt gamma imaging during proton beam therapy for range and dose verification

    NASA Astrophysics Data System (ADS)

    Gutierrez, A.; Baker, C.; Boston, H.; Chung, S.; Judson, D. S.; Kacperek, A.; Le Crom, B.; Moss, R.; Royle, G.; Speller, R.; Boston, A. J.

    2018-01-01

    The main objective of this work is to test a new semiconductor Compton camera for prompt gamma imaging. Our device is composed of three active layers: a Si(Li) detector as a scatterer and two high purity Germanium detectors as absorbers of high-energy gamma rays. We performed Monte Carlo simulations using the Geant4 toolkit to characterise the expected gamma field during proton beam therapy and have made experimental measurements of the gamma spectrum with a 60 MeV passive scattering beam irradiating a phantom. In this proceeding, we describe the status of the Compton camera and present the first preliminary measurements with radioactive sources and their corresponding reconstructed images.

  20. The Si/CdTe semiconductor Compton camera of the ASTRO-H Soft Gamma-ray Detector (SGD)

    NASA Astrophysics Data System (ADS)

    Watanabe, Shin; Tajima, Hiroyasu; Fukazawa, Yasushi; Ichinohe, Yuto; Takeda, Shin`ichiro; Enoto, Teruaki; Fukuyama, Taro; Furui, Shunya; Genba, Kei; Hagino, Kouichi; Harayama, Atsushi; Kuroda, Yoshikatsu; Matsuura, Daisuke; Nakamura, Ryo; Nakazawa, Kazuhiro; Noda, Hirofumi; Odaka, Hirokazu; Ohta, Masayuki; Onishi, Mitsunobu; Saito, Shinya; Sato, Goro; Sato, Tamotsu; Takahashi, Tadayuki; Tanaka, Takaaki; Togo, Atsushi; Tomizuka, Shinji

    2014-11-01

    The Soft Gamma-ray Detector (SGD) is one of the instrument payloads onboard ASTRO-H, and will cover a wide energy band (60-600 keV) at a background level 10 times better than instruments currently in orbit. The SGD achieves low background by combining a Compton camera scheme with a narrow field-of-view active shield. The Compton camera in the SGD is realized as a hybrid semiconductor detector system which consists of silicon and cadmium telluride (CdTe) sensors. The design of the SGD Compton camera has been finalized and the final prototype, which has the same configuration as the flight model, has been fabricated for performance evaluation. The Compton camera has overall dimensions of 12 cm×12 cm×12 cm, consisting of 32 layers of Si pixel sensors and 8 layers of CdTe pixel sensors surrounded by 2 layers of CdTe pixel sensors. The detection efficiency of the Compton camera reaches about 15% and 3% for 100 keV and 511 keV gamma rays, respectively. The pixel pitch of the Si and CdTe sensors is 3.2 mm, and the signals from all 13,312 pixels are processed by 208 ASICs developed for the SGD. Good energy resolution is afforded by semiconductor sensors and low noise ASICs, and the obtained energy resolutions with the prototype Si and CdTe pixel sensors are 1.0-2.0 keV (FWHM) at 60 keV and 1.6-2.5 keV (FWHM) at 122 keV, respectively. This results in good background rejection capability due to better constraints on Compton kinematics. Compton camera energy resolutions achieved with the final prototype are 6.3 keV (FWHM) at 356 keV and 10.5 keV (FWHM) at 662 keV, which satisfy the instrument requirements for the SGD Compton camera (better than 2%). Moreover, a low intrinsic background has been confirmed by the background measurement with the final prototype.

  1. Compton camera study for high efficiency SPECT and benchmark with Anger system

    NASA Astrophysics Data System (ADS)

    Fontana, M.; Dauvergne, D.; Létang, J. M.; Ley, J.-L.; Testa, É.

    2017-12-01

    Single photon emission computed tomography (SPECT) is at present one of the major techniques for non-invasive diagnostics in nuclear medicine. The clinical routine is mostly based on collimated cameras, originally proposed by Hal Anger. Due to the presence of mechanical collimation, detection efficiency and energy acceptance are limited and fixed by the system’s geometrical features. In order to overcome these limitations, the application of Compton cameras for SPECT has been investigated for several years. In this study we compare a commercial SPECT-Anger device, the General Electric HealthCare Infinia system with a High Energy General Purpose (HEGP) collimator, and the Compton camera prototype under development by the French collaboration CLaRyS, through Monte Carlo simulations (GATE—GEANT4 Application for Tomographic Emission—version 7.1 and GEANT4 version 9.6, respectively). Given the possible introduction of new radio-emitters at higher energies intrinsically allowed by the Compton camera detection principle, the two detectors are exposed to point-like sources at increasing primary gamma energies, from actual isotopes already suggested for nuclear medicine applications. The Compton camera prototype is first characterized for SPECT application by studying the main parameters affecting its imaging performance: detector energy resolution and random coincidence rate. The two detector performances are then compared in terms of radial event distribution, detection efficiency and final image, obtained by gamma transmission analysis for the Anger system, and with an iterative List Mode-Maximum Likelihood Expectation Maximization (LM-MLEM) algorithm for the Compton reconstruction. The results show for the Compton camera a detection efficiency increased by a factor larger than an order of magnitude with respect to the Anger camera, associated with an enhanced spatial resolution for energies beyond 500 keV. We discuss the advantages of Compton camera application
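
    For reference, a generic list-mode MLEM iteration (not the CLaRyS implementation) has the form λ_j ← (λ_j / s_j) Σ_i a_ij / Σ_k a_ik λ_k, where the sum runs over detected events, a_ij is the probability that event i originated in voxel j, and s_j is the sensitivity image. A compact sketch assuming a precomputed, dense system matrix:

        import numpy as np

        def lm_mlem(a: np.ndarray, sensitivity: np.ndarray, n_iter: int = 20) -> np.ndarray:
            """List-mode MLEM reconstruction.

            a           -- (n_events, n_voxels) system-matrix rows for each detected event
                           (for a Compton camera, each row samples the event's Compton cone)
            sensitivity -- (n_voxels,) detection sensitivity image s_j
            """
            lam = np.ones(a.shape[1])
            for _ in range(n_iter):
                forward = a @ lam                                   # expected value per event
                back = a.T @ (1.0 / np.maximum(forward, 1e-12))     # back-projected ratios
                lam *= back / np.maximum(sensitivity, 1e-12)
            return lam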

  2. Dual-head gamma camera system for intraoperative localization of radioactive seeds

    NASA Astrophysics Data System (ADS)

    Arsenali, B.; de Jong, H. W. A. M.; Viergever, M. A.; Dickerscheid, D. B. M.; Beijst, C.; Gilhuijs, K. G. A.

    2015-10-01

    Breast-conserving surgery is a standard option for the treatment of patients with early-stage breast cancer. This form of surgery may result in incomplete excision of the tumor. Iodine-125 labeled titanium seeds are currently used in clinical practice to reduce the number of incomplete excisions. It seems likely that the number of incomplete excisions can be reduced even further if intraoperative information about the location of the radioactive seed is combined with preoperative information about the extent of the tumor. These can be combined if the location of the radioactive seed is established in a world coordinate system that can be linked to the (preoperative) image coordinate system. With this in mind, we propose a radioactive seed localization system composed of two static ceiling-suspended gamma camera heads and two parallel-hole collimators. Physical experiments and computer simulations mimicking realistic clinical situations were performed to estimate the localization accuracy (defined as trueness and precision) of the proposed system with respect to collimator-source distance (ranging between 50 cm and 100 cm) and imaging time (ranging between 1 s and 10 s). The goal of the study was to determine whether or not a trueness of 5 mm can be achieved if a collimator-source distance of 50 cm and an imaging time of 5 s are used (these specifications were defined by a group of dedicated breast cancer surgeons). The results from the experiments indicate that the location of the radioactive seed can be established with an accuracy of 1.6 mm ± 0.6 mm if a collimator-source distance of 50 cm and an imaging time of 5 s are used (these experiments were performed with a 4.5 cm thick block phantom). Furthermore, the results from the simulations indicate that a trueness of 3.2 mm or less can be achieved if a collimator-source distance of 50 cm and an imaging time of 5 s are used (this trueness was achieved for all 14 breast phantoms which
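
    One simple way to turn the two head readings into a 3D seed position (not necessarily the authors' algorithm) is to back-project each detected centroid along its collimator axis and take the point closest to both lines. A least-squares sketch with illustrative geometry; the coordinates and directions below are made up for the example:

        import numpy as np

        def closest_point_between_lines(p1, d1, p2, d2):
            """Midpoint of the shortest segment between lines p + t*d (non-parallel)."""
            p1, d1, p2, d2 = map(np.asarray, (p1, d1, p2, d2))
            # Solve for the two line parameters minimizing |(p1 + t1 d1) - (p2 + t2 d2)|^2
            a = np.array([[d1 @ d1, -d1 @ d2],
                          [d1 @ d2, -d2 @ d2]])
            b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
            t1, t2 = np.linalg.solve(a, b)
            return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

        # Example: one head looking along +z, one along +x, in a shared world frame (mm)
        seed = closest_point_between_lines([12.0, 30.0, 0.0], [0, 0, 1],
                                           [0.0, 30.2, 45.0], [1, 0, 0])
        print(seed)   # approximately [12.0, 30.1, 45.0]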

  3. A gamma beam profile imager for ELI-NP Gamma Beam System

    NASA Astrophysics Data System (ADS)

    Cardarelli, P.; Paternò, G.; Di Domenico, G.; Consoli, E.; Marziani, M.; Andreotti, M.; Evangelisti, F.; Squerzanti, S.; Gambaccini, M.; Albergo, S.; Cappello, G.; Tricomi, A.; Veltri, M.; Adriani, O.; Borgheresi, R.; Graziani, G.; Passaleva, G.; Serban, A.; Starodubtsev, O.; Variola, A.; Palumbo, L.

    2018-06-01

    The Gamma Beam System of ELI-Nuclear Physics is a high brilliance monochromatic gamma source based on the inverse Compton interaction between an intense high power laser and a bright electron beam with tunable energy. The source, currently being assembled in Magurele (Romania), is designed to provide a beam with tunable average energy ranging from 0.2 to 19.5 MeV, rms energy bandwidth down to 0.5% and flux of about 10^8 photons/s. The system includes a set of detectors for the diagnostic and complete characterization of the gamma beam. To evaluate the spatial distribution of the beam a gamma beam profile imager is required. For this purpose, a detector based on a scintillator target coupled to a CCD camera was designed and a prototype was tested at INFN-Ferrara laboratories. A set of analytical calculations and Monte Carlo simulations were carried out to optimize the imager design and evaluate the performance expected with ELI-NP gamma beam. In this work the design of the imager is described in detail, as well as the simulation tools used and the results obtained. The simulation parameters were tuned and cross-checked with the experimental measurements carried out on the assembled prototype using the beam from an x-ray tube.

  4. Emission computerized axial tomography from multiple gamma-camera views using frequency filtering.

    PubMed

    Pelletier, J L; Milan, C; Touzery, C; Coitoux, P; Gailliard, P; Budinger, T F

    1980-01-01

    Emission computerized axial tomography is achievable in any nuclear medicine department from multiple gamma camera views. Data are collected by rotating the patient in front of the camera. A simple fast algorithm is implemented, known as the convolution technique: first the projection data are Fourier transformed and then an original filter designed for optimizing resolution and noise suppression is applied; finally the inverse transform of the latter operation is back-projected. This program, which can also take into account the attenuation for single photon events, was executed with good results on phantoms and patients. We think that it can be easily implemented for specific diagnostic problems.
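
    The convolution technique described above is a filtered back-projection: Fourier transform each projection, apply a filter, inverse transform and back-project. A compact parallel-beam sketch with a plain ramp filter standing in for the authors' optimized filter (no attenuation correction is included):

        import numpy as np

        def fbp(sinogram: np.ndarray, angles_deg: np.ndarray) -> np.ndarray:
            """Filtered back-projection. sinogram has shape (n_angles, n_bins)."""
            n_angles, n_bins = sinogram.shape
            freqs = np.fft.fftfreq(n_bins)
            ramp = np.abs(freqs)                   # the paper applies a custom filter here
            filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

            # back-project onto an n_bins x n_bins grid centred on the rotation axis
            x = np.arange(n_bins) - n_bins / 2.0
            xx, yy = np.meshgrid(x, x)
            image = np.zeros((n_bins, n_bins))
            for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
                t = xx * np.cos(ang) + yy * np.sin(ang) + n_bins / 2.0
                image += np.interp(t, np.arange(n_bins), proj, left=0.0, right=0.0)
            return image * np.pi / (2 * n_angles)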

  5. Development of an LYSO based gamma camera for positron and scinti-mammography

    NASA Astrophysics Data System (ADS)

    Liang, H.-C.; Jan, M.-L.; Lin, W.-C.; Yu, S.-F.; Su, J.-L.; Shen, L.-H.

    2009-08-01

    In this research, the characteristics of combining PSPMTs (position-sensitive photomultiplier tubes) to form a larger detection area are studied. A home-made linear divider circuit was built for signal merging and readout. Borosilicate glass was chosen for scintillation-light sharing in the crossover region. The deterioration caused by the light guide was characterized, and the influences of the light guide and the crossover region on the separable crystal size were evaluated. According to the test results, a gamma camera with a crystal block covering an area of 90 × 90 mm2, composed of 2 mm LYSO crystal pixels, was designed and fabricated. Measured performance showed that this camera works well for both 511 keV and lower-energy gammas. The light-loss behaviour within the crossover region was analyzed and understood. Count-rate measurements showed that the natural 176Lu background did not severely affect single-photon imaging and accounted for less than 1/3 of all acquired events. These results show that, using light-sharing techniques, multiple PSPMTs can be combined in both the X and Y directions to build a large-area imaging detector. This camera design also retains the capability for both positron and single-photon breast imaging applications. The separable crystal size is currently 2 mm, with 2 mm thick glass applied for light sharing.
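
    The linear divider circuit realizes, in analog form, the usual Anger centroid: the interaction position is the signal-weighted mean of the anode positions, and the summed signal estimates the deposited energy. A digital equivalent for illustration (array names are assumptions):

        import numpy as np

        def anger_position(signals: np.ndarray, x_pos: np.ndarray, y_pos: np.ndarray):
            """Centroid (Anger logic) position estimate from per-anode signals.

            signals      -- (n_anodes,) pulse amplitudes
            x_pos, y_pos -- (n_anodes,) physical anode coordinates in mm
            """
            total = signals.sum()
            x = float((signals * x_pos).sum() / total)
            y = float((signals * y_pos).sum() / total)
            return x, y, float(total)   # total is proportional to deposited energy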

  6. Evaluation of a gamma camera system for the RITS-6 accelerator using the self-magnetic pinch diode

    NASA Astrophysics Data System (ADS)

    Webb, Timothy J.; Kiefer, Mark L.; Gignac, Raymond; Baker, Stuart A.

    2015-08-01

    The self-magnetic pinch (SMP) diode is an intense radiographic source fielded on the Radiographic Integrated Test Stand (RITS-6) accelerator at Sandia National Laboratories in Albuquerque, NM. The accelerator is an inductive voltage adder (IVA) that can operate from 2-10 MV with currents up to 160 kA (at 7 MV). The SMP diode consists of an annular cathode separated from a flat anode, holding the bremsstrahlung conversion target, by a vacuum gap. Until recently the primary imaging diagnostic utilized image plates (storage phosphors), which generally have low DQE at these photon energies, along with other problems. The benefits of using image plates include a high dynamic range, good spatial resolution, and ease of use. A scintillator-based x-ray imaging system or "gamma camera" has been fielded in front of RITS and the SMP diode, and has provided vastly superior images in terms of signal-to-noise ratio, with similar resolution and acceptable dynamic range.

  7. A fast algorithm for computer aided collimation gamma camera (CACAO)

    NASA Astrophysics Data System (ADS)

    Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Franck, D.; Pihet, P.; Ballongue, P.

    2000-08-01

    The computer-aided collimation gamma camera (CACAO) is aimed at breaking the resolution-sensitivity trade-off of the conventional parallel-hole collimator. It uses larger and longer holes, with an added linear movement during the acquisition sequence. A dedicated algorithm including shift and sum, deconvolution, parabolic filtering and rotation is described. Examples of reconstruction are given. This work shows that a simple and fast algorithm, based on a diagonally dominant approximation of the problem, can be derived. It gives a practical solution to the CACAO reconstruction problem.

  8. Microchannel plate streak camera

    DOEpatents

    Wang, Ching L.

    1989-01-01

    An improved streak camera in which a microchannel plate electron multiplier is used in place of, or in combination with, the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (UV to gamma-rays) than the conventional x-ray streak camera, which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1000 keV x-rays.

  9. Characterization of a small CsI(Na)-WSF-SiPM gamma camera prototype using 99mTc

    NASA Astrophysics Data System (ADS)

    Castro, I. F.; Soares, A. J.; Moutinho, L. M.; Ferreira, M. A.; Ferreira, R.; Combo, A.; Muchacho, F.; Veloso, J. F. C. A.

    2013-03-01

    A small field-of-view gamma camera is being developed, aimed at applications in scintimammography, sentinel lymph node detection and small-animal imaging and research. The proposed wavelength-shifting fibre (WSF) gamma camera consists of two perpendicular sets of WSFs covering both sides of a CsI(Na) crystal, such that the fibres positioned at the bottom of the crystal provide the x coordinate and the ones on top the y coordinate of the gamma-photon interaction point. The 2D position is given by highly sensitive photodetectors reading out each WSF, and the energy information is provided by PMTs that cover the full detector area. This concept has the advantage of using N+N instead of N × N photodetectors to cover an identical imaging area, and is here applied for the first time using SiPMs. Previous studies carried out with 57Co have proved the feasibility of this concept using SiPM readout. In this work, we present experimental results from true 2D image acquisitions with a 10+10 SiPM prototype, i.e. 10 × 10 mm2, using a parallel-hole collimator and different samples filled with 99mTc solution. The performance of the small prototype under these conditions is evaluated through the characterization of different gamma camera parameters, such as energy and spatial resolution. Ongoing advances towards a larger prototype of 100+100 SiPMs (10 × 10 cm2) are also presented.

  10. The HURRA filter: An easy method to eliminate collimator artifacts in high-energy gamma camera images.

    PubMed

    Perez-Garcia, H; Barquero, R

    The correct determination and delineation of tumor/organ size is crucial in 2-D imaging in 131I therapy. These images are usually obtained using a system composed of a gamma camera and a high-energy collimator, although this system can produce artifacts in the image. This article analyses these artifacts and describes a correction filter that can eliminate them. Using the free software ImageJ, a central profile of the image is obtained and analyzed. Two components can be seen in the fluctuation of the profile: one associated with the stochastic nature of the radiation plus electronic noise, and the other periodic in space due to the collimator. These frequencies are obtained analytically and compared with the frequencies in the Fourier transform of the profile. A specially developed filter removes the artifacts in the 2D Fourier transform of the DICOM image. This filter is tested using the image of a 15-cm-diameter Petri dish containing 131I radioactive water (large object), the image of a 131I clinical pill (small object), and images of the residual lesions of two patients treated after thyroidectomy with 3.7 GBq (100 mCi) and 4.44 GBq (120 mCi) of 131I, respectively. The artifact is due to the hexagonal periodic structure of the collimator. The use of the filter on large-sized images reduces the fluctuation from 5.8% to 3.5%. In small-sized images, the FWHM can be determined in the filtered image, while this is impossible in the unfiltered image. The definition of the tumor boundary and the visualization of the activity distribution inside patient lesions improve drastically when the filter is applied to the corresponding images obtained with the HE gamma camera. The HURRA filter removes high-energy collimator artifacts in planar images obtained with a gamma camera without reducing the image resolution. It can be applied in any patient quantification study because the number of counts remains invariant. The filter makes
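
    A simplified software sketch of the same idea: suppress narrow peaks at the collimator's characteristic spatial frequencies in the 2D Fourier transform of the planar image. The published HURRA filter derives those frequencies analytically from the hexagonal hole pattern; here they are simply passed in by hand, and leaving the DC component untouched keeps the total counts invariant, as noted above.

        import numpy as np

        def notch_filter(image: np.ndarray, notch_centres, radius: float = 2.0) -> np.ndarray:
            """Suppress narrow frequency peaks (and their mirrors) in a planar image.

            notch_centres -- list of (fy, fx) offsets, in FFT-index units from the
                             spectrum centre, where the collimator pattern peaks sit.
            """
            f = np.fft.fftshift(np.fft.fft2(image))
            ny, nx = image.shape
            yy, xx = np.mgrid[0:ny, 0:nx]
            cy, cx = ny // 2, nx // 2
            mask = np.ones_like(image, dtype=float)
            for fy, fx in notch_centres:
                for sy, sx in ((fy, fx), (-fy, -fx)):      # keep the spectrum Hermitian
                    d2 = (yy - (cy + sy)) ** 2 + (xx - (cx + sx)) ** 2
                    mask[d2 <= radius ** 2] = 0.0
            # DC component is never notched, so the total number of counts is preserved
            return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))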

  11. A data acquisition system for coincidence imaging using a conventional dual head gamma camera

    NASA Astrophysics Data System (ADS)

    Lewellen, T. K.; Miyaoka, R. S.; Jansen, F.; Kaplan, M. S.

    1997-06-01

    A low cost data acquisition system (DAS) was developed to acquire coincidence data from an unmodified General Electric Maxxus dual head scintillation camera. A high impedance pick-off circuit provides position and energy signals to the DAS without interfering with normal camera operation. The signals are pulse-clipped to reduce pileup effects. Coincidence is determined with fast timing signals derived from constant fraction discriminators. A charge-integrating FERA 16 channel ADC feeds position and energy data to two CAMAC FERA memories operated as ping-pong buffers. A Macintosh PowerPC running Labview controls the system and reads the CAMAC memories. A CAMAC 12-channel scaler records singles and coincidence rate data. The system dead-time is approximately 10% at a coincidence rate of 4.0 kHz.

  12. Development of a Compton camera for prompt-gamma medical imaging

    NASA Astrophysics Data System (ADS)

    Aldawood, S.; Thirolf, P. G.; Miani, A.; Böhmer, M.; Dedes, G.; Gernhäuser, R.; Lang, C.; Liprandi, S.; Maier, L.; Marinšek, T.; Mayerhofer, M.; Schaart, D. R.; Lozano, I. Valencia; Parodi, K.

    2017-11-01

    A Compton camera-based detector system for photon detection from nuclear reactions induced by proton (or heavier ion) beams is under development at LMU Munich, targeting the online range verification of the particle beam in hadron therapy via prompt-gamma imaging. The detector is designed to reconstruct the photon source origin not only from the Compton scattering kinematics of the primary photon, but also to allow for tracking of the secondary Compton-scattered electrons, thus enabling γ-source reconstruction also from incompletely absorbed photon events. The Compton camera consists of a monolithic LaBr3:Ce scintillation crystal, read out by a multi-anode PMT acting as absorber, preceded by a stacked array of 6 double-sided silicon strip detectors acting as scatterers. The detector components have been characterized under both offline and online conditions. The LaBr3:Ce crystal exhibits excellent time and energy resolution. Using intense collimated 137Cs and 60Co sources, the monolithic scintillator was scanned on a fine 2D grid to generate a reference library of light-amplitude distributions that allows the photon interaction position to be reconstructed using a k-Nearest Neighbour (k-NN) algorithm. Systematic studies were performed to investigate the performance of the reconstruction algorithm, revealing an improvement of the spatial resolution with increasing photon energy to an optimum value of 3.7(1) mm at 1.33 MeV, achieved with the Categorical Average Pattern (CAP) modification of the k-NN algorithm.
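
    A minimal sketch of the k-NN positioning step described above: compare the measured light-amplitude pattern of an event with a calibration library of patterns recorded at known scan positions, and average the positions of the k most similar references. The CAP refinement is not reproduced; the distance metric, k and array layouts are assumptions.

        import numpy as np

        def knn_position(event: np.ndarray, library: np.ndarray,
                         positions: np.ndarray, k: int = 20) -> np.ndarray:
            """Estimate the 2D interaction position in a monolithic crystal.

            event     -- (n_anodes,) normalized light-amplitude pattern of one photon
            library   -- (n_refs, n_anodes) reference patterns from the calibration scan
            positions -- (n_refs, 2) known (x, y) grid positions of each reference
            """
            # Euclidean distance between the event pattern and every reference pattern
            d = np.linalg.norm(library - event, axis=1)
            nearest = np.argsort(d)[:k]
            return positions[nearest].mean(axis=0)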

  13. An attentive multi-camera system

    NASA Astrophysics Data System (ADS)

    Napoletano, Paolo; Tisato, Francesco

    2014-03-01

    Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be reviewed by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying videos from each camera of the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of a video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that tries to use a model of human visual attention for the dynamic selection of the camera view of a multi-camera system. The proposed method has been tested in a given scenario and has demonstrated its effectiveness with respect to other methods and to manually generated ground truth. The effectiveness has been evaluated in terms of the number of correct best views generated by the method with respect to the camera views manually selected by a human operator.

  14. Advances in Gamma-Ray Imaging with Intensified Quantum-Imaging Detectors

    NASA Astrophysics Data System (ADS)

    Han, Ling

    Nuclear medicine, an important branch of modern medical imaging, is an essential tool for both diagnosis and treatment of disease. As the fundamental element of nuclear medicine imaging, the gamma camera is able to detect gamma-ray photons emitted by radiotracers injected into a patient and form an image of the radiotracer distribution, reflecting biological functions of organs or tissues. Recently, an intensified CCD/CMOS-based quantum detector, called iQID, was developed in the Center for Gamma-Ray Imaging. Originally designed as a novel type of gamma camera, iQID demonstrated ultra-high spatial resolution (< 100 micron) and many other advantages over traditional gamma cameras. This work focuses on advancing this conceptually-proven gamma-ray imaging technology to make it ready for both preclinical and clinical applications. To start with, a Monte Carlo simulation of the key light-intensification device, i.e. the image intensifier, was developed, which revealed the dominating factor(s) that limit energy resolution performance of the iQID cameras. For preclinical imaging applications, a previously-developed iQID-based single-photon-emission computed-tomography (SPECT) system, called FastSPECT III, was fully advanced in terms of data acquisition software, system sensitivity and effective FOV by developing and adopting a new photon-counting algorithm, thicker columnar scintillation detectors, and system calibration method. Originally designed for mouse brain imaging, the system is now able to provide full-body mouse imaging with sub-350-micron spatial resolution. To further advance the iQID technology to include clinical imaging applications, a novel large-area iQID gamma camera, called LA-iQID, was developed from concept to prototype. Sub-mm system resolution in an effective FOV of 188 mm x 188 mm has been achieved. The camera architecture, system components, design and integration, data acquisition, camera calibration, and performance evaluation are presented in

  15. The Advanced Gamma-Ray Imaging System (AGIS)

    NASA Astrophysics Data System (ADS)

    Otte, Nepomuk

    The Advanced Gamma-ray Imaging System (AGIS) is a concept for the next generation of imaging atmospheric Cherenkov telescope arrays. It has the goal of providing an order of magnitude increase in sensitivity for Very High Energy Gamma-ray (100 GeV to 100 TeV) astronomy compared to currently operating arrays such as CANGAROO, HESS, MAGIC, and VERITAS. After an overview of the science such an array would enable, we discuss the development of the components of the telescope system that are required to achieve the sensitivity goal. AGIS stresses improvements in several areas of IACT technology, including component reliability, as well as exploring cost reduction possibilities in order to achieve its goal. We discuss alternatives for the telescopes and positioners: a novel Schwarzschild-Couder telescope offering a wide field of view with a relatively smaller plate scale, and possibilities for rapid slewing in order to address the search for and/or study of Gamma-ray Bursts in the VHE gamma-ray regime. We also discuss options for a high pixel count camera system providing the necessary finer solid angle per pixel and possibilities for a fast topological trigger that would offer improved real-time background rejection and lower energy thresholds.

  16. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software program was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.

  17. Tissue-equivalent TL sheet dosimetry system for X- and gamma-ray dose mapping.

    PubMed

    Nariyama, N; Konnai, A; Ohnishi, S; Odano, N; Yamaji, A; Ozasa, N; Ishikawa, Y

    2006-01-01

    To measure dose distribution for X- and gamma rays simply and accurately, a tissue-equivalent thermoluminescent (TL) sheet-type dosemeter and reader system were developed. The TL sheet is composed of LiF:Mg,Cu,P and ETFE polymer, and the thickness is 0.2 mm. For the TL reading, a square heating plate, 20 cm on each side, was developed, and the temperature distribution was measured with an infrared thermal imaging camera. As a result, linearity within 2% and the homogeneity within 3% were confirmed. The TL signal emitted is detected using a CCD camera and displayed as a spatial dose distribution. Irradiation using synchrotron radiation between 10 and 100 keV and (60)Co gamma rays showed that the TL sheet dosimetry system was promising for radiation dose mapping for various purposes.

  18. Process simulation in digital camera system

    NASA Astrophysics Data System (ADS)

    Toadere, Florin

    2012-06-01

    The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal and the color processing and rendering. We consider the image acquisition system to be linear, shift invariant and axial. The light propagation is orthogonal to the system. We use a spectral image processing algorithm in order to simulate the radiometric properties of a digital camera. In the algorithm we take into consideration the transmittances of the light source, lenses and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution between the different point spread functions of the optical components. We use a Cooke triplet, the aperture, the light fall-off and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion and JPEG compression. We reconstruct the noisy, blurred image by blending differently exposed images in order to reduce the photon shot noise; we also filter the fixed-pattern noise and sharpen the image. Then we have the color processing blocks: white balancing, color correction, gamma correction, and conversion from the XYZ color space to the RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
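
    Two of the listed colour-processing blocks, the XYZ-to-RGB conversion and gamma correction, reduce to a matrix multiply followed by a transfer curve. The sketch below uses the standard sRGB (D65) matrix and transfer function as an illustrative stand-in for whatever the paper's pipeline actually uses.

        import numpy as np

        # Standard XYZ (D65) -> linear sRGB matrix
        XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                                [-0.9689,  1.8758,  0.0415],
                                [ 0.0557, -0.2040,  1.0570]])

        def srgb_gamma(linear: np.ndarray) -> np.ndarray:
            """Piecewise sRGB transfer function (approximately gamma 2.2)."""
            linear = np.clip(linear, 0.0, 1.0)
            return np.where(linear <= 0.0031308,
                            12.92 * linear,
                            1.055 * linear ** (1.0 / 2.4) - 0.055)

        def xyz_image_to_srgb(xyz: np.ndarray) -> np.ndarray:
            """Convert an (H, W, 3) XYZ image to display-ready sRGB in [0, 1]."""
            rgb_linear = xyz @ XYZ_TO_SRGB.T
            return srgb_gamma(rgb_linear)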

  19. Performance Test Data Analysis of Scintillation Cameras

    NASA Astrophysics Data System (ADS)

    Demirkaya, Omer; Mazrou, Refaat Al

    2007-10-01

    In this paper, we present a set of image analysis tools to calculate the performance parameters of gamma camera systems from test data acquired according to the National Electrical Manufacturers Association (NEMA) NU 1-2001 guidelines. The calculation methods are either completely automated or require minimal user interaction, minimizing potential human errors. The developed methods are robust with respect to the varying conditions under which these tests may be performed. The core algorithms have been validated for accuracy and extensively tested on images acquired by gamma cameras from different vendors. All the algorithms are incorporated into a graphical user interface that provides a convenient way to process the data and report the results. The entire application has been developed in the MATLAB programming environment and is compiled to run as a stand-alone program. The developed image analysis tools provide an automated, convenient and accurate means to calculate the performance parameters of gamma cameras and SPECT systems. The developed application is available upon request for personal or non-commercial use. The results of this study have been partially presented at the Society of Nuclear Medicine Annual Meeting as an InfoSNM presentation.

  20. Flexible mini gamma camera reconstructions of extended sources using step and shoot and list mode.

    PubMed

    Gardiazabal, José; Matthies, Philipp; Vogel, Jakob; Frisch, Benjamin; Navab, Nassir; Ziegler, Sibylle; Lasser, Tobias

    2016-12-01

    Hand- and robot-guided mini gamma cameras have been introduced for the acquisition of single-photon emission computed tomography (SPECT) images. Less cumbersome than whole-body scanners, they allow for a fast acquisition of the radioactivity distribution, for example, to differentiate cancerous from hormonally hyperactive lesions inside the thyroid. This work compares acquisition protocols and reconstruction algorithms in an attempt to identify the approach best suited to fast acquisition and efficient image reconstruction for the localization of extended sources, such as lesions inside the thyroid. Our setup consists of a mini gamma camera with precise tracking information provided by a robotic arm, which also provides reproducible positioning for our experiments. Based on a realistic phantom of the thyroid, including hot and cold nodules as well as background radioactivity, the authors compare "step and shoot" (SAS) and continuous data (CD) acquisition protocols in combination with two different statistical reconstruction methods: maximum-likelihood expectation-maximization (ML-EM) for time-integrated count values and list-mode expectation-maximization (LM-EM) for individually detected gamma rays. In addition, the authors simulate lower uptake values by statistically subsampling the experimental data in order to study the behavior of their approach without changing other aspects of the acquired data. All compared methods yield suitable results, resolving the hot nodules and the cold nodule from the background. However, the CD acquisition is twice as fast as the SAS acquisition, while yielding better coverage of the thyroid phantom, resulting in qualitatively more accurate reconstructions of the isthmus between the lobes. For CD acquisitions, the LM-EM reconstruction method is preferable, as it yields image quality comparable to ML-EM at significantly higher speeds, on average by an order of magnitude. This work identifies CD acquisition protocols combined with LM-EM reconstruction as the most suitable approach for fast acquisition and efficient reconstruction of extended sources.
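
    The ML-EM update used for time-integrated counts has a compact generic form, sketched below in Python with a small made-up system matrix standing in for the camera and collimator model; this is not the paper's reconstruction code.

        # Generic ML-EM sketch under simplified assumptions:
        #     x_{k+1} = (x_k / (A^T 1)) * A^T ( y / (A x_k) )
        # where A is the system matrix, y the measured counts, x the activity image.
        import numpy as np

        rng = np.random.default_rng(0)
        n_pix, n_det = 32, 64
        A = rng.random((n_det, n_pix))                   # hypothetical system matrix
        x_true = np.zeros(n_pix)
        x_true[10:14] = 50.0                             # "hot" extended source
        y = rng.poisson(A @ x_true)                      # noisy projection counts

        x = np.ones(n_pix)                               # uniform initial estimate
        sens = A.T @ np.ones(n_det)                      # sensitivity image A^T 1
        for _ in range(100):
            expected = A @ x
            ratio = y / np.maximum(expected, 1e-12)      # avoid division by zero
            x = x / sens * (A.T @ ratio)                 # multiplicative EM update

        print("reconstructed hot-region mean:", x[10:14].mean().round(1))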

  1. Design of Dual-Road Transportable Portal Monitoring System for Visible Light and Gamma-Ray Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karnowski, Thomas Paul; Cunningham, Mark F; Goddard Jr, James Samuel

    2010-01-01

    The use of radiation sensors as portal monitors is increasing due to heightened concerns over the smuggling of fissile material. Transportable systems that can detect significant quantities of fissile material that might be present in vehicular traffic are of particular interest, especially if they can be rapidly deployed to different locations. To serve this application, we have constructed a rapid-deployment portal monitor that uses visible-light and gamma-ray imaging to allow simultaneous monitoring of multiple lanes of traffic from the side of a roadway. The system operation uses machine vision methods on the visible-light images to detect vehicles as they enter and exit the field of view and to measure their position in each frame. The visible-light and gamma-ray cameras are synchronized, which allows the gamma-ray imager to harvest gamma-ray data specific to each vehicle, integrating its radiation signature for the entire time that it is in the field of view. Thus our system creates vehicle-specific radiation signatures and avoids source confusion problems that plague non-imaging approaches to the same problem. Our current prototype instrument was designed for measurement of up to five lanes of freeway traffic with a pair of instruments, one on either side of the roadway. Stereoscopic cameras are used with a third alignment camera for motion compensation and are mounted on a 50-foot deployable mast. In this paper we discuss the design considerations for the machine-vision system, the algorithms used for vehicle detection and position estimates, and the overall architecture of the system. We also discuss system calibration for rapid deployment. We conclude with notes on preliminary performance and deployment.

  2. Design of dual-road transportable portal monitoring system for visible light and gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Karnowski, Thomas P.; Cunningham, Mark F.; Goddard, James S.; Cheriyadat, Anil M.; Hornback, Donald E.; Fabris, Lorenzo; Kerekes, Ryan A.; Ziock, Klaus-Peter; Bradley, E. Craig; Chesser, J.; Marchant, W.

    2010-04-01

    The use of radiation sensors as portal monitors is increasing due to heightened concerns over the smuggling of fissile material. Transportable systems that can detect significant quantities of fissile material that might be present in vehicular traffic are of particular interest, especially if they can be rapidly deployed to different locations. To serve this application, we have constructed a rapid-deployment portal monitor that uses visible-light and gamma-ray imaging to allow simultaneous monitoring of multiple lanes of traffic from the side of a roadway. The system operation uses machine vision methods on the visible-light images to detect vehicles as they enter and exit the field of view and to measure their position in each frame. The visible-light and gamma-ray cameras are synchronized, which allows the gamma-ray imager to harvest gamma-ray data specific to each vehicle, integrating its radiation signature for the entire time that it is in the field of view. Thus our system creates vehicle-specific radiation signatures and avoids source confusion problems that plague non-imaging approaches to the same problem. Our current prototype instrument was designed for measurement of up to five lanes of freeway traffic with a pair of instruments, one on either side of the roadway. Stereoscopic cameras are used with a third "alignment" camera for motion compensation and are mounted on a 50' deployable mast. In this paper we discuss the design considerations for the machine-vision system, the algorithms used for vehicle detection and position estimates, and the overall architecture of the system. We also discuss system calibration for rapid deployment. We conclude with notes on preliminary performance and deployment.
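
    The per-vehicle integration idea can be illustrated with a toy Python sketch: because the two imagers are frame-synchronized, gamma counts binned per frame and position can be credited to the tracked vehicle occupying that position in the same frame. All data structures below are hypothetical and are not the ORNL implementation.

        # Toy per-vehicle gamma-signature accumulation from frame-synchronized data.
        from collections import defaultdict

        # (frame, vehicle_id, lane_position_m) from the machine-vision tracker -- made up
        tracks = [(0, "V1", 3.1), (1, "V1", 9.5), (1, "V2", 2.0),
                  (2, "V1", 16.2), (2, "V2", 8.8), (3, "V2", 15.0)]
        # gamma counts binned per frame and per position bin of the imager -- made up
        gamma_frames = {0: {3: 12}, 1: {9: 15, 2: 10}, 2: {16: 40, 8: 11}, 3: {15: 9}}

        signatures = defaultdict(int)
        for frame, vid, pos in tracks:
            # credit the counts from the position bin the vehicle occupies in this frame
            signatures[vid] += gamma_frames.get(frame, {}).get(int(round(pos)), 0)

        for vid, counts in signatures.items():
            print(f"{vid}: integrated counts = {counts}")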

  3. LSST camera control system

    NASA Astrophysics Data System (ADS)

    Marshall, Stuart; Thaler, Jon; Schalk, Terry; Huffer, Michael

    2006-06-01

    The LSST Camera Control System (CCS) will manage the activities of the various camera subsystems and coordinate those activities with the LSST Observatory Control System (OCS). The CCS comprises a set of modules (nominally implemented in software) which are each responsible for managing one camera subsystem. Generally, a control module will be a long-lived "server" process running on an embedded computer in the subsystem. Multiple control modules may run on a single computer, or a module may be implemented in "firmware" on a subsystem. In any case, control modules must exchange messages and status data with a master control module (MCM). The main features of this approach are: (1) control is distributed to the local subsystem level; (2) the systems follow a "Master/Slave" strategy; (3) coordination will be achieved by the exchange of messages through the interfaces between the CCS and its subsystems. The interface between the camera data acquisition system and its downstream clients is also presented.
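
    A conceptual Python sketch of the master/slave message exchange described above follows; the class and message names are invented for illustration and do not reflect the actual CCS interfaces.

        # Conceptual master/slave messaging: an MCM dispatches commands to
        # per-subsystem control modules and collects their status replies.
        from dataclasses import dataclass

        @dataclass
        class Message:
            target: str      # subsystem name
            command: str     # e.g. "start", "status"
            payload: dict

        class SubsystemModule:
            def __init__(self, name):
                self.name, self.state = name, "idle"
            def handle(self, msg: Message) -> dict:
                if msg.command == "start":
                    self.state = "running"
                return {"subsystem": self.name, "state": self.state}

        class MasterControlModule:
            def __init__(self, modules):
                self.modules = {m.name: m for m in modules}
            def send(self, msg: Message) -> dict:
                return self.modules[msg.target].handle(msg)

        mcm = MasterControlModule([SubsystemModule("shutter"), SubsystemModule("cryostat")])
        print(mcm.send(Message("shutter", "start", {})))
        print(mcm.send(Message("cryostat", "status", {})))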

  4. The Orbiter camera payload system's large-format camera and attitude reference system

    NASA Technical Reports Server (NTRS)

    Schardt, B. B.; Mollberg, B. H.

    1985-01-01

    The Orbiter camera payload system (OCPS) is an integrated photographic system carried into earth orbit as a payload in the Space Transportation System (STS) Orbiter vehicle's cargo bay. The major component of the OCPS is a large-format camera (LFC), a precision wide-angle cartographic instrument capable of producing high-resolution stereophotography of great geometric fidelity in multiple base-to-height ratios. A secondary and supporting system to the LFC is the attitude reference system (ARS), a dual-lens stellar camera array (SCA) and camera support structure. The SCA is a 70 mm film system that is rigidly mounted to the LFC lens support structure and, through the simultaneous acquisition of two star fields with each earth viewing LFC frame, makes it possible to precisely determine the pointing of the LFC optical axis with reference to the earth nadir point. Other components complete the current OCPS configuration as a high-precision cartographic data acquisition system. The primary design objective for the OCPS was to maximize system performance characteristics while maintaining a high level of reliability compatible with rocket launch conditions and the on-orbit environment. The full OCPS configuration was launched on a highly successful maiden voyage aboard the STS Orbiter vehicle Challenger on Oct. 5, 1984, as a major payload aboard the STS-41G mission.

  5. SeHCAT retention values as measured with a collimated and an uncollimated gamma camera: a method comparison study.

    PubMed

    Wright, James W; Lovell, Lesley A; Gemmell, Howard G; McKiddie, Fergus; Staff, Roger T

    2013-07-01

    Tauro-23-(75Se)selena-25-homocholic acid (SeHCAT) retention values are used in the diagnosis of bile acid malabsorption. The standard method for measuring retention uses an uncollimated gamma camera, which can create some logistic difficulties because background sources of activity that are irrelevant when a collimator is used become significant. In this study we compare the retention values obtained with a collimated and an uncollimated gamma camera in phantoms and in 23 patients. Bland-Altman plots were created using the data, which showed a mean bias in retention of 0.10% in the phantom study and 0.55% in the patient study between methods. A Wilcoxon signed-rank test with the null hypothesis of zero median difference between the uncollimated and collimated methods was not statistically significant at the P < 0.05 level in either the patient or the phantom study. In the patient study, using a fixed retention boundary of 10% between positive and negative status, the status of one patient changed from negative (12%) to positive (9%). We conclude that measurement of retention with a collimated gamma camera is similar but not identical to the uncollimated measurement. The clinical significance of this shift is unclear, as the threshold of significance and the method of integrating this measure with other clinical factors into management remain unclear.
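
    The statistics named in this abstract are easy to reproduce on synthetic numbers. The Python sketch below (using invented retention values, not the study data) computes the Bland-Altman bias and limits of agreement and runs a Wilcoxon signed-rank test with SciPy.

        # Bland-Altman bias/limits of agreement and Wilcoxon signed-rank test
        # on synthetic paired retention values (percent).
        import numpy as np
        from scipy.stats import wilcoxon

        rng = np.random.default_rng(1)
        uncollimated = rng.uniform(3, 40, size=23)                  # invented retention values, %
        collimated = uncollimated + rng.normal(0.5, 1.0, size=23)   # small simulated bias

        diff = collimated - uncollimated
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)            # half-width of the 95% limits of agreement
        stat, p = wilcoxon(collimated, uncollimated)

        print(f"mean bias = {bias:.2f}%  limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}]%")
        print(f"Wilcoxon signed-rank p = {p:.3f}")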

  6. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
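
    The 136-year figure is consistent with a time code whose seconds field is a 32-bit counter, as the short Python check below shows; the actual Geo-TimeCode format is not specified here, so this is only a plausibility check.

        # Back-of-the-envelope check: an unsigned 32-bit count of seconds
        # repeats only after 2**32 seconds, i.e. slightly more than 136 years.
        SECONDS_PER_YEAR = 365.25 * 24 * 3600
        wrap_years = 2**32 / SECONDS_PER_YEAR
        print(f"32-bit seconds counter wraps after {wrap_years:.1f} years")   # ~136.1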

  7. Intraoperative Imaging Guidance for Sentinel Node Biopsy in Melanoma Using a Mobile Gamma Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dengel, Lynn T; Judy, Patricia G; Petroni, Gina R

    2011-04-01

    The objective is to evaluate the sensitivity and clinical utility of intraoperative mobile gamma camera (MGC) imaging in sentinel lymph node biopsy (SLNB) in melanoma. The false-negative rate for SLNB for melanoma is approximately 17%, for which failure to identify the sentinel lymph node (SLN) is a major cause. Intraoperative imaging may aid in detection of SLN near the primary site, in ambiguous locations, and after excision of each SLN. The present pilot study reports outcomes with a prototype MGC designed for rapid intraoperative image acquisition. We hypothesized that intraoperative use of the MGC would be feasible and that sensitivity would be at least 90%. From April to September 2008, 20 patients underwent Tc-99m sulfur colloid lymphoscintigraphy, and SLNB was performed with use of a conventional fixed gamma camera (FGC) and gamma probe followed by intraoperative MGC imaging. Sensitivity was calculated for each detection method. Intraoperative logistical challenges were scored. Cases in which the MGC provided clinical benefit were recorded. Sensitivity for detecting SLN basins was 97% for the FGC and 90% for the MGC. A total of 46 SLN were identified: 32 (70%) were identified as distinct hot spots by preoperative FGC imaging, 31 (67%) by preoperative MGC imaging, and 43 (93%) by MGC imaging pre- or intraoperatively. The gamma probe identified 44 (96%) independent of MGC imaging. The MGC provided defined clinical benefit as an addition to standard practice in 5 (25%) of 20 patients. The mean score for MGC logistic feasibility was 2 on a scale of 1-9 (1 = best). Intraoperative MGC imaging provides additional information when standard techniques fail or are ambiguous. Sensitivity is 90% and can be increased. This pilot study has identified ways to improve the usefulness of an MGC for intraoperative imaging, which holds promise for reducing false negatives of SLNB for melanoma.

  8. Color reproduction software for a digital still camera

    NASA Astrophysics Data System (ADS)

    Lee, Bong S.; Park, Du-Sik; Nam, Byung D.

    1998-04-01

    We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and color matching between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased after the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for obtaining the unknown monitor gamma was proposed. As a result, the level-processed values were adjusted by a look-up table created from the camera and monitor gamma correction. For the camera's color transformation, a 3-by-3 or 3-by-4 matrix was used, which was calculated by regression between the gamma-corrected values and the measured tristimulus values of each test color sample. The various reproduced images were displayed in a dialogue box implemented in our software, generated according to four illuminations for the camera and three color temperatures for the monitor. A user can easily choose the best reproduced image by comparing them with one another.
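
    The colour transformation step can be sketched as a least-squares fit of a 3-by-3 matrix between gamma-corrected camera values and measured tristimulus values. The Python example below uses synthetic patch data and an assumed gamma of 2.2; it is not the authors' software.

        # Fit a 3x3 camera-to-XYZ matrix by least squares on synthetic colour patches.
        import numpy as np

        rng = np.random.default_rng(2)
        M_true = np.array([[0.49, 0.31, 0.20],
                           [0.18, 0.81, 0.01],
                           [0.00, 0.01, 0.99]])             # hypothetical "ground truth" matrix
        camera_rgb = rng.uniform(0.05, 1.0, size=(24, 3))   # 24 invented test colour samples
        gamma = 2.2
        linear_rgb = camera_rgb ** gamma                    # gamma-corrected (linearized) values
        measured_xyz = linear_rgb @ M_true.T + rng.normal(0, 0.002, size=(24, 3))

        # Solve measured_xyz ~= linear_rgb @ M.T for M in the least-squares sense.
        M_fit, *_ = np.linalg.lstsq(linear_rgb, measured_xyz, rcond=None)
        print("fitted matrix:\n", np.round(M_fit.T, 3))     # should be close to M_true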

  9. Assessment of right ventricular function with nonimaging first pass ventriculography and comparison of results with gamma camera studies.

    PubMed

    Zhang, Z; Liu, X J; Liu, Y Z; Lu, P; Crawley, J C; Lahiri, A

    1990-08-01

    A new technique has been developed for measuring right ventricular function by nonimaging first pass ventriculography. The right ventricular ejection fraction (RVEF) obtained by nonimaging first pass ventriculography was compared with that obtained by gamma camera first pass and equilibrium ventriculography. The data demonstrated that RVEFs obtained by the nonimaging nuclear cardiac probe and by gamma camera first pass ventriculography in 15 subjects correlated well (r = 0.93). There was also a good correlation between RVEFs obtained by the nonimaging nuclear probe and by equilibrium gated blood pool studies in 33 subjects (r = 0.89). RVEF was significantly reduced in 15 patients with right ventricular and/or inferior myocardial infarction compared with normal subjects (28 +/- 9% vs. 45 +/- 9%). The data suggest that nonimaging probes may be used for assessing right ventricular function accurately.

  10. Imaging system for cardiac planar imaging using a dedicated dual-head gamma camera

    DOEpatents

    Majewski, Stanislaw [Morgantown, VA; Umeno, Marc M [Woodinville, WA

    2011-09-13

    A cardiac imaging system employing dual gamma imaging heads co-registered with one another to provide two dynamic simultaneous views of the heart sector of a patient torso. A first gamma imaging head is positioned in a first orientation with respect to the heart sector and a second gamma imaging head is positioned in a second orientation with respect to the heart sector. An adjustment arrangement is capable of adjusting the distance between the separate imaging heads and the angle between the heads. With the angle between the imaging heads set to 180 degrees and operating in a range of 140-159 keV and at a rate of up to 500 kHz, the imaging heads are co-registered to produce simultaneous dynamic recording of two stereotactic views of the heart. The use of co-registered imaging heads maximizes the uniformity of detection sensitivity of blood flow in and around the heart over the whole heart volume and minimizes radiation absorption effects. A normalization/image fusion technique is implemented pixel-by-corresponding-pixel to increase signal for any cardiac region viewed in two images obtained from the two opposed detector heads for the same time bin. The imaging system is capable of producing enhanced first pass studies, blood pool studies including planar, gated and non-gated EKG studies, planar EKG perfusion studies, and planar hot spot imaging.
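
    One common pixel-by-pixel way to fuse two opposed (180-degree) views is the conjugate-view geometric mean, sketched below in Python on synthetic count maps; the patent's exact normalization and fusion rule may differ.

        # Conjugate-view geometric-mean fusion of two opposed detector heads,
        # which reduces the depth dependence of attenuation (illustrative only).
        import numpy as np

        rng = np.random.default_rng(3)
        anterior = rng.poisson(lam=100, size=(64, 64)).astype(float)    # head 1, same time bin
        posterior = rng.poisson(lam=80, size=(64, 64)).astype(float)    # head 2, same time bin

        fused = np.sqrt(anterior * posterior)       # pixel-by-corresponding-pixel fusion
        print("mean counts  anterior: %.1f  posterior: %.1f  fused: %.1f"
              % (anterior.mean(), posterior.mean(), fused.mean()))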

  11. Electrostatic camera system functional design study

    NASA Technical Reports Server (NTRS)

    Botticelli, R. A.; Cook, F. J.; Moore, R. F.

    1972-01-01

    A functional design study for an electrostatic camera system for application to planetary missions is presented. The electrostatic camera can produce and store a large number of pictures and provide for transmission of the stored information at arbitrary times after exposure. Preliminary configuration drawings and circuit diagrams for the system are illustrated. The camera system's size, weight, power consumption, and performance are characterized. Tradeoffs between system weight, power, and storage capacity are identified.

  12. Development of an intraoperative gamma camera based on a 256-pixel mercuric iodide detector array

    NASA Astrophysics Data System (ADS)

    Patt, B. E.; Tornai, M. P.; Iwanczyk, J. S.; Levin, C. S.; Hoffman, E. J.

    1997-06-01

    A 256-element mercuric iodide (HgI2) detector array has been developed which is intended for use as an intraoperative gamma camera (IOGC). The camera is specifically designed for use in imaging gamma-emitting radiopharmaceuticals (such as 99mTc-labeled Sestamibi) incorporated into brain tumors in the intraoperative surgical environment. The system is intended to improve the success of tumor removal surgeries by allowing more complete removal of subclinical tumor cells without removal of excessive normal tissue. The use of HgI2 detector arrays in this application facilitates construction of an imaging head that is very compact and has a high SNR. The detector is configured as a cross-grid array. Pixel dimensions are 1.25 mm squares separated by 0.25 mm. The overall dimension of the detector is 23.75 mm on a side. The detector thickness is 1 mm which corresponds to over 60% stopping at 140 keV. The array has good uniformity with average energy resolution of 5.2 ± 2.9% FWHM at 140 keV (best resolution was 1.9% FWHM). Response uniformity (±σ) was 7.9%. A study utilizing realistic tumor phantoms (uptake ratio varied from 2:1 to 100:1) in background (1 mCi/l) was conducted. SNRs for the reasonably achievable uptake ratio of 50:1 were 5.61σ with 1 cm of background depth ("normal tissue") and 2.74σ with 4 cm of background for a 6.3 µl tumor phantom (~270 nCi at the time of the measurement).

  13. Gamma-ray imaging system for real-time measurements in nuclear waste characterisation

    NASA Astrophysics Data System (ADS)

    Caballero, L.; Albiol Colomer, F.; Corbi Bellot, A.; Domingo-Pardo, C.; Leganés Nieto, J. L.; Agramunt Ros, J.; Contreras, P.; Monserrate, M.; Olleros Rodríguez, P.; Pérez Magán, D. L.

    2018-03-01

    A compact, portable and large field-of-view gamma camera that is able to identify, locate and quantify gamma-ray emitting radioisotopes in real-time has been developed. The device delivers spectroscopic and imaging capabilities that enable its use in a variety of nuclear waste characterisation scenarios, such as radioactivity monitoring in nuclear power plants and more specifically the decommissioning of nuclear facilities. The technical development of this apparatus and some examples of its application in field measurements are reported in this article. The performance of the presented gamma camera is also benchmarked against other conventional techniques.

  14. Situational Awareness from a Low-Cost Camera System

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased coverage without the blind spots that can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.

  15. Airborne ballistic camera tracking systems

    NASA Technical Reports Server (NTRS)

    Redish, W. L.

    1976-01-01

    An operational airborne ballistic camera tracking system was tested for operational and data reduction feasibility. The acquisition and data processing requirements of the system are discussed. Suggestions for future improvements are also noted. A description of the data reduction mathematics is outlined. Results from a successful reentry test mission are tabulated. The test mission indicated that airborne ballistic camera tracking systems are feasible.

  16. Test of Compton camera components for prompt gamma imaging at the ELBE bremsstrahlung beam

    NASA Astrophysics Data System (ADS)

    Hueso-González, F.; Golnik, C.; Berthel, M.; Dreyer, A.; Enghardt, W.; Fiedler, F.; Heidel, K.; Kormoll, T.; Rohling, H.; Schöne, S.; Schwengner, R.; Wagner, A.; Pausch, G.

    2014-05-01

    In the context of ion beam therapy, particle range verification is a major challenge for the quality assurance of the treatment. One approach is the measurement of the prompt gamma rays resulting from the tissue irradiation. A Compton camera based on several position sensitive gamma ray detectors, together with an imaging algorithm, is expected to reconstruct the prompt gamma ray emission density map, which is correlated with the dose distribution. At OncoRay and Helmholtz-Zentrum Dresden-Rossendorf (HZDR), a Compton camera setup is being developed consisting of two scatter planes: two CdZnTe (CZT) cross strip detectors, and an absorber consisting of one Lu2SiO5 (LSO) block detector. The data acquisition is based on VME electronics and handled by software developed on the ROOT framework. The setup has been tested at the linear electron accelerator ELBE at HZDR, which is used in this experiment to produce bunched bremsstrahlung photons with up to 12.5 MeV energy and a repetition rate of 13 MHz. Their spectrum has similarities with the shape expected from prompt gamma rays in the clinical environment, and the flux is also bunched with the accelerator frequency. The charge sharing effect of the CZT detector is studied qualitatively for different energy ranges. The LSO detector pixel discrimination resolution is analyzed and it shows a trend to improve for high energy depositions. The time correlation between the pulsed prompt photons and the measured detector signals, to be used for background suppression, exhibits a time resolution of 3 ns FWHM for the CZT detector and of 2 ns for the LSO detector. A time walk correction and pixel-wise calibration are applied for the LSO detector, whose resolution then improves to 630 ps. In conclusion, the detector setup is suitable for time-resolved background suppression in pulsed clinical particle accelerators. Ongoing tasks are the quantitative comparison with simulations and the test of imaging algorithms. Experiments at proton

  17. Developments in mercuric iodide gamma ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patt, B.E.; Beyerle, A.G.; Dolin, R.C.

    A mercuric iodide gamma-ray imaging array and camera system previously described has been characterized for spatial and energy resolution. Based on these data, a new camera is being developed to more fully exploit the potential of the array. Characterization results and design criteria for the new camera will be presented. 2 refs., 7 figs.

  18. Automatic multi-camera calibration for deployable positioning systems

    NASA Astrophysics Data System (ADS)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace manual calibration.
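
    The core step, estimating the essential matrix of an intrinsically calibrated camera pair with the 5-point method and recovering the relative pose, is available in OpenCV. The Python sketch below runs it on synthetic correspondences with an assumed intrinsic matrix; it is not the authors' pipeline.

        # Essential-matrix estimation (5-point method via RANSAC) and pose recovery
        # on synthetic point correspondences between two calibrated cameras.
        import cv2
        import numpy as np

        rng = np.random.default_rng(4)
        K = np.array([[1000.0, 0.0, 640.0],
                      [0.0, 1000.0, 360.0],
                      [0.0, 0.0, 1.0]])                     # assumed intrinsic matrix

        # Synthetic 3-D points seen by camera 1 (at origin) and camera 2 (rotated/translated).
        pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], size=(50, 3))
        R_true, _ = cv2.Rodrigues(np.array([0.0, 0.2, 0.0]))
        t_true = np.array([[1.0], [0.0], [0.0]])

        def project(P, R, t):
            cam = (R @ P.T + t).T
            uv = (K @ cam.T).T
            return uv[:, :2] / uv[:, 2:3]

        pts1 = project(pts3d, np.eye(3), np.zeros((3, 1)))
        pts2 = project(pts3d, R_true, t_true)

        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        print("recovered rotation vector (deg):",
              np.degrees(cv2.Rodrigues(R)[0].ravel()).round(2))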

  19. The sequence measurement system of the IR camera

    NASA Astrophysics Data System (ADS)

    Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo

    2011-08-01

    Currently, IR cameras are broadly used in the optic-electronic tracking, optic-electronic measuring, fire control and optic-electronic countermeasure fields, but the output timing sequences of most IR cameras applied in practice are complex, and the timing documents supplied by the manufacturers are not detailed. To meet the requirement of continuous image transmission and image processing systems for a detailed timing sequence of the IR camera, a sequence measurement system for IR cameras was designed, and a detailed sequence measurement procedure for the applied IR camera was carried out. FPGA programming combined with SignalTap online observation has been applied in the sequence measurement system, the precise timing of the IR camera's output signal has been obtained, and detailed documentation of the IR camera has been supplied to the continuous image transmission system, the image processing system, etc. The sequence measurement system of the IR camera includes a CameraLink input interface, an LVDS input interface, an FPGA, a CameraLink output interface, etc.; the FPGA is the key component of the sequence measurement system. Both CameraLink-style and LVDS-style video signals can be accepted by the sequence measurement system, and because image processing and image memory cards usually use CameraLink as their input interface, the output of the sequence measurement system has also been designed as a CameraLink interface. The sequence measurement system performs the IR camera's sequence measurement and meanwhile performs interface conversion for some cameras. Inside the FPGA of the sequence measurement system, the sequence measurement program, pixel clock modification, SignalTap file configuration and SignalTap online observation have been integrated to realize precise measurement of the IR camera. The sequence measurement

  20. Optimal configuration of a low-dose breast-specific gamma camera based on semiconductor CdZnTe pixelated detectors

    NASA Astrophysics Data System (ADS)

    Genocchi, B.; Pickford Scienti, O.; Darambara, DG

    2017-05-01

    Breast cancer is one of the most frequent tumours in women. During the '90s, the introduction of screening programmes allowed the detection of cancer before the palpable stage, reducing its mortality by up to 50%. About 50% of women aged between 30 and 50 years present dense breast parenchyma. This percentage decreases to 30% for women between 50 and 80 years. In these women, mammography has a sensitivity of around 30%, and small tumours are covered by the dense parenchyma and missed in the mammogram. Interestingly, breast-specific gamma cameras based on semiconductor CdZnTe detectors have been shown to be of great interest for early diagnosis. In fact, due to the high energy resolution, spatial resolution, and high sensitivity of CdZnTe, molecular breast imaging has been shown to have a sensitivity of about 90% independently of the breast parenchyma. The aim of this work is to determine the optimal combination of detector pixel size, hole shape, and collimator material in a low-dose dual-head breast-specific gamma camera based on a CdZnTe pixelated detector at 140 keV, in order to achieve a high count rate and the best possible image spatial resolution. The optimal combination has been studied by modelling the system using the Monte Carlo code GATE. Six different pixel sizes from 0.85 mm to 1.6 mm, two hole shapes, hexagonal and square, and two different collimator materials, lead and tungsten, were considered. It was demonstrated that the camera achieved higher count rates and better signal-to-noise ratio when equipped with square holes and large pixels (>1.3 mm). In these configurations, the spatial resolution was worse than with small pixel sizes (<1.3 mm), but remained under 3.6 mm in all cases.
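
    Standard parallel-hole collimator formulas give a feel for the resolution/efficiency trade-off studied here. The Python sketch below uses textbook approximations with assumed hole length, septal thickness and tungsten attenuation; it is not the GATE model used in the paper.

        # Textbook parallel-hole collimator figures: geometric resolution and relative
        # geometric efficiency vs. hole diameter d, for assumed geometry and material.
        def collimator_figures(d_mm, t_mm, L_mm, mu_per_mm, b_mm, K=0.28):
            """K ~ 0.26-0.28 is the usual hole-shape constant (hexagonal vs square)."""
            L_eff = L_mm - 2.0 / mu_per_mm                              # effective hole length
            R_geo = d_mm * (L_eff + b_mm) / L_eff                       # geometric resolution (FWHM)
            g = (K * d_mm / L_eff) ** 2 * (d_mm / (d_mm + t_mm)) ** 2   # geometric efficiency
            return R_geo, g

        mu_tungsten = 3.6   # assumed linear attenuation of tungsten at 140 keV, 1/mm (approx.)
        for d in (0.85, 1.3, 1.6):                                      # hole sizes tracking pixel sizes
            R, g = collimator_figures(d_mm=d, t_mm=0.2, L_mm=25.0,
                                      mu_per_mm=mu_tungsten, b_mm=50.0)
            print(f"d = {d:.2f} mm: resolution ~ {R:.2f} mm, relative efficiency ~ {g:.2e}")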

  1. Electronic camera-management system for 35-mm and 70-mm film cameras

    NASA Astrophysics Data System (ADS)

    Nielsen, Allan

    1993-01-01

    Military and commercial test facilities have been tasked with the need for increasingly sophisticated data collection and data reduction. A state-of-the-art electronic control system for high speed 35 mm and 70 mm film cameras designed to meet these tasks is described. Data collection in today's test range environment is difficult at best. The need for a completely integrated image and data collection system is mandated by the increasingly complex test environment. Instrumentation film cameras have been used on test ranges to capture images for decades. Their high frame rates coupled with exceptionally high resolution make them an essential part of any test system. In addition to documenting test events, today's camera system is required to perform many additional tasks. Data reduction to establish TSPI (time-space-position information) may be performed after a mission and is subject to all of the variables present in documenting the mission. A typical scenario would consist of multiple cameras located on tracking mounts capturing the event along with azimuth and elevation position data. Corrected data can then be reduced using each camera's time and position deltas and calculating the TSPI of the object using triangulation. An electronic camera control system designed to meet these requirements has been developed by Photo-Sonics, Inc. The feedback received from test technicians at range facilities throughout the world led Photo-Sonics to design the features of this control system. These prominent new features include: a comprehensive safety management system, full local or remote operation, frame rate accuracy of less than 0.005 percent, and phase-locking capability to IRIG-B. In fact, IRIG-B phase-lock operation of multiple cameras can reduce the time-distance delta of a test object traveling at Mach 1 to less than one inch during data reduction.
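
    A quick worked check of the timing claim: at Mach 1, keeping the position error from camera-to-camera timing skew below one inch requires synchronization to within roughly 75 microseconds, as the short Python calculation below shows.

        # Allowed camera-to-camera timing skew for a one-inch position uncertainty
        # on an object moving at Mach 1 (~343 m/s at sea level).
        mach1_m_per_s = 343.0
        one_inch_m = 0.0254
        max_skew_s = one_inch_m / mach1_m_per_s
        print(f"allowed timing skew: {max_skew_s * 1e6:.0f} microseconds")   # ~74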

  2. A machine learning method for fast and accurate characterization of depth-of-interaction gamma cameras

    NASA Astrophysics Data System (ADS)

    Pedemonte, Stefano; Pierce, Larry; Van Leemput, Koen

    2017-11-01

    Measuring the depth-of-interaction (DOI) of gamma photons enables increasing the resolution of emission imaging systems. Several design variants of DOI-sensitive detectors have been recently introduced to improve the performance of scanners for positron emission tomography (PET). However, the accurate characterization of the response of DOI detectors, necessary to accurately measure the DOI, remains an unsolved problem. Numerical simulations are, at the current state of the art, imprecise, while measuring the characteristics of DOI detectors directly is hindered by the impossibility of imposing the depth of interaction in an experimental set-up. In this article we introduce a machine learning approach for extracting accurate forward models of gamma imaging devices from simple pencil-beam measurements, using a nonlinear dimensionality reduction technique in combination with a finite mixture model. The method is purely data-driven, not requiring simulations, and is applicable to a wide range of detector types. The proposed method was evaluated both in a simulation study and with data acquired using a monolithic gamma camera designed for PET (the cMiCE detector), demonstrating the accurate recovery of the DOI characteristics. The combination of the proposed calibration technique with maximum a posteriori estimation of the coordinates of interaction provided a depth resolution of ≈1.14 mm for the simulated PET detector and ≈1.74 mm for the cMiCE detector. The software and experimental data are made available at http://occiput.mgh.harvard.edu/depthembedding/.
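
    As a loose illustration of the finite-mixture ingredient only (this is not the authors' DepthEmbedding method), the Python sketch below clusters simulated pencil-beam light-response vectors with a Gaussian mixture so that components can be associated with depth bins; the forward model is invented.

        # Cluster simulated light-response vectors with a Gaussian mixture and check
        # how well the mixture components line up with the true depth bins.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(5)
        n_depth_bins, n_sensors, n_events = 4, 8, 2000
        # Hypothetical forward model: the light spread gets broader with depth.
        depths = rng.integers(0, n_depth_bins, size=n_events)
        centers = np.linspace(0, n_sensors - 1, n_sensors)
        responses = np.array([
            np.exp(-0.5 * ((centers - n_sensors / 2) / (1.0 + 0.7 * d)) ** 2)
            + 0.05 * rng.normal(size=n_sensors)
            for d in depths])

        gmm = GaussianMixture(n_components=n_depth_bins, random_state=0).fit(responses)
        labels = gmm.predict(responses)

        # Crude purity: fraction of events whose component's majority depth matches their depth.
        matches = 0
        for c in range(n_depth_bins):
            members = depths[labels == c]
            if members.size:
                matches += np.bincount(members).max()
        print(f"cluster purity vs. true depth bins: {matches / n_events:.2f}")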

  3. Evaluation of a CdTe semiconductor based compact gamma camera for sentinel lymph node imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russo, Paolo; Curion, Assunta S.; Mettivier, Giovanni

    2011-03-15

    Purpose: The authors assembled a prototype compact gamma-ray imaging probe (MediPROBE) for sentinel lymph node (SLN) localization. This probe is based on a semiconductor pixel detector. Its basic performance was assessed in the laboratory and clinically in comparison with a conventional gamma camera. Methods: The room-temperature CdTe pixel detector (1 mm thick) has 256 × 256 square pixels arranged with a 55 µm pitch (sensitive area 14.08 × 14.08 mm²), coupled pixel-by-pixel via bump-bonding to the Medipix2 photon-counting readout CMOS integrated circuit. The imaging probe is equipped with a set of three interchangeable knife-edge pinhole collimators (0.94, 1.2, or 2.1 mm effective diameter at 140 keV) and its focal distance can be regulated in order to set a given field of view (FOV). A typical FOV of 70 mm at 50 mm skin-to-collimator distance corresponds to a minification factor 1:5. The detector is operated at a single low-energy threshold of about 20 keV. Results: For 99mTc, at 50 mm distance, a background-subtracted sensitivity of 6.5 × 10⁻³ cps/kBq and a system spatial resolution of 5.5 mm FWHM were obtained for the 0.94 mm pinhole; corresponding values for the 2.1 mm pinhole were 3.3 × 10⁻² cps/kBq and 12.6 mm. The dark count rate was 0.71 cps. Clinical images in three patients with melanoma indicate detection of the SLNs with acquisition times between 60 and 410 s with an injected activity of 26 MBq 99mTc and prior localization with standard gamma camera lymphoscintigraphy. Conclusions: The laboratory performance of this imaging probe is limited by the pinhole collimator performance and the necessity of working in minification due to the limited detector size. However, in clinical operative conditions, the CdTe imaging probe was effective in detecting SLNs with adequate resolution and an acceptable sensitivity. Sensitivity is expected to improve with the future availability of a larger CdTe detector permitting operation at

  4. Autocalibration of a projector-camera system.

    PubMed

    Okatani, Takayuki; Deguchi, Koichiro

    2005-12-01

    This paper presents a method for calibrating a projector-camera system that consists of multiple projectors (or multiple poses of a single projector), a camera, and a planar screen. We consider the problem of estimating the homography between the screen and the image plane of the camera or the screen-camera homography, in the case where there is no prior knowledge regarding the screen surface that enables the direct computation of the homography. It is assumed that the pose of each projector is unknown while its internal geometry is known. Subsequently, it is shown that the screen-camera homography can be determined from only the images projected by the projectors and then obtained by the camera, up to a transformation with four degrees of freedom. This transformation corresponds to arbitrariness in choosing a two-dimensional coordinate system on the screen surface and when this coordinate system is chosen in some manner, the screen-camera homography as well as the unknown poses of the projectors can be uniquely determined. A noniterative algorithm is presented, which computes the homography from three or more images. Several experimental results on synthetic as well as real images are shown to demonstrate the effectiveness of the method.

  5. System of Programmed Modules for Measuring Photographs with a Gamma-Telescope

    NASA Technical Reports Server (NTRS)

    Averin, S. A.; Veselova, G. V.; Navasardyan, G. V.

    1978-01-01

    Physical experiments using tracking cameras resulted in hundreds of thousands of stereo photographs of events being received. To process such a large volume of information, automatic and semiautomatic measuring systems are required. At the Institute of Space Research of the Academy of Science of the USSR, a system for processing film information from the spark gamma-telescope was developed. The system is based on a BPS-75 projector in line with the minicomputer Elektronika 1001. The report describes this system. The various computer programs available to the operators are discussed.

  6. A compact neutron scatter camera for field deployment

    DOE PAGES

    Goldsmith, John E. M.; Gerling, Mark D.; Brennan, James S.

    2016-08-23

    Here, we describe a very compact (0.9 m high, 0.4 m diameter, 40 kg) battery operable neutron scatter camera designed for field deployment. Unlike most other systems, the sixteen liquid-scintillator detection cells are arranged to provide omnidirectional (4π) imaging with sensitivity comparable to a conventional two-plane system. Although designed primarily to operate as a neutron scatter camera for localizing energetic neutron sources, it also functions as a Compton camera for localizing gamma sources. In addition to describing the radionuclide source localization capabilities of this system, we demonstrate how it provides neutron spectra that can distinguish plutonium metal from plutonium oxide sources, in addition to the easier task of distinguishing AmBe from fission sources.

  7. An evolution of technologies and applications of gamma imagers in the nuclear cycle industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalil, R. A.; Carrel, F.; Menaa, N.

    The tracking of radiation contamination and distribution has become a high priority in the nuclear cycle industry in order to respect the ALARA principle, which is a main challenge during decontamination and dismantling activities. To support this need, AREVA/CANBERRA and CEA LIST have been actively carrying out research and development on a gamma-radiation imager. In this paper we present the new generation of gamma camera, called GAMPIX. This system is based on the Timepix chip, hybridized with a CdTe substrate. A coded mask can be used in order to increase the sensitivity of the camera. Moreover, due to the USB connection with a standard computer, this gamma camera is immediately operational and user-friendly. The final system is a very compact gamma camera (global weight is less than 1 kg without any shielding) which could be used as a hand-held device for radioprotection purposes. In this article, we present the main characteristics of this new generation of gamma camera and we report experimental results obtained during in situ measurements. Even though we present preliminary results, the final product is in the industrialization phase to address various application specifications. (authors)

  8. The upgrade of the H.E.S.S. cameras

    NASA Astrophysics Data System (ADS)

    Giavitto, Gianluca; Ashton, Terry; Balzer, Arnim; Berge, David; Brun, Francois; Chaminade, Thomas; Delagnes, Eric; Fontaine, Gerard; Füßling, Matthias; Giebels, Berrie; Glicenstein, Jean-Francois; Gräber, Tobias; Hinton, Jim; Jahnke, Albert; Klepser, Stefan; Kossatz, Marko; Kretzschmann, Axel; Lefranc, Valentin; Leich, Holger; Lüdecke, Hartmut; Lypova, Iryna; Manigot, Pascal; Marandon, Vincent; Moulin, Emmanuel; Naurois, Mathieu de; Nayman, Patrick; Ohm, Stefan; Penno, Marek; Ross, Duncan; Salek, David; Schade, Markus; Schwab, Thomas; Simoni, Rachel; Stegmann, Christian; Steppa, Constantin; Thornhill, Julian; Toussnel, Francois

    2017-12-01

    The High Energy Stereoscopic System (HESS) is an array of imaging atmospheric Cherenkov telescopes (IACTs) located in the Khomas highland in Namibia. It was built to detect Very High Energy (VHE > 100 GeV) cosmic gamma rays. Since 2003, HESS has discovered the majority of the known astrophysical VHE gamma-ray sources, opening a new observational window on the extreme non-thermal processes at work in our universe. HESS consists of four 12-m diameter Cherenkov telescopes (CT1-4), which started data taking in 2002, and a larger 28-m telescope (CT5), built in 2012, which lowers the energy threshold of the array to 30 GeV. The cameras of CT1-4 are currently undergoing an extensive upgrade, with the goals of reducing their failure rate, reducing their readout dead time and improving the overall performance of the array. The entire camera electronics has been renewed from the ground up, as well as the power, ventilation and pneumatics systems, and the control and data acquisition software. Only the PMTs and their HV supplies have been kept from the original cameras. Novel technical solutions have been introduced, which will find their way into some of the Cherenkov cameras foreseen for the next-generation Cherenkov Telescope Array (CTA) observatory. In particular, the camera readout system is the first large-scale system based on the analog memory chip NECTAr, which was designed for CTA cameras. The camera control subsystems and the control software framework also pursue an innovative design, exploiting cutting-edge hardware and software solutions which excel in performance, robustness and flexibility. The CT1 camera was upgraded in July 2015 and is currently taking data; CT2-4 were upgraded in fall 2016. Together they will assure continuous operation of HESS at its full sensitivity until and possibly beyond the advent of CTA. This contribution describes the design, the testing and the in-lab and on-site performance of all components of the newly upgraded HESS

  9. Focal Plane Detectors for the Advanced Gamma-Ray Imaging System (AGIS)

    NASA Astrophysics Data System (ADS)

    Otte, A. N.; Byrum, K.; Drake, G.; Falcone, A.; Funk, S.; Horan, D.; Mukherjee, R.; Smith, A.; Tajima, H.; Wagner, R. G.; Williams, D. A.

    2008-12-01

    The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next generation observatory in ground-based very high energy gamma-ray astronomy. Design goals are ten times better sensitivity, higher angular resolution, and a lower energy threshold than existing Cherenkov telescopes. Simulations show that a substantial improvement in angular resolution may be achieved if the pixel diameter is reduced to the order of 0.05 deg, i.e. two to three times smaller than the pixel diameter of current Cherenkov telescope cameras. At these dimensions, photon detectors with smaller physical dimensions can be attractive alternatives to the classical photomultiplier tube (PMT). Furthermore, the operation of an experiment the size of AGIS requires photon detectors that are, among other things, more reliable, more durable, and possibly of higher efficiency. Alternative photon detectors we are considering for AGIS include both silicon photomultipliers (SiPMs) and multi-anode photomultipliers (MAPMTs). Here we present results from laboratory testing of MAPMTs and SiPMs along with results from the first incorporation of these devices into cameras on test bed Cherenkov telescopes.

  10. Effect of different thickness of material filter on Tc-99m spectra and performance parameters of gamma camera

    NASA Astrophysics Data System (ADS)

    Nazifah, A.; Norhanna, S.; Shah, S. I.; Zakaria, A.

    2014-11-01

    This study aimed to investigate the effects of the material filter technique on Tc-99m spectra and the performance parameters of a Philips ADAC Forte dual-head gamma camera. The material filter thickness was selected on the basis of the percentage attenuation of various gamma-ray energies by different thicknesses of zinc. The cylindrical source tank of a NEMA single photon emission computed tomography (SPECT) Triple Line Source Phantom, filled with water into which Tc-99m radionuclide was injected, was used for spectrum, uniformity and sensitivity measurements. A vinyl plastic tube was used as a line source for spatial resolution. Images for uniformity were reconstructed by the filtered back projection method. A Butterworth filter of order 5 and cut-off frequency 0.35 cycles/cm was selected. Chang's attenuation correction method was applied by selecting a linear attenuation coefficient of 0.13/cm. The material filter decreased the count rate in the Compton region of the Tc-99m energy spectrum, and also in the photopeak region. Spatial resolution was improved. However, the uniformity of the tomographic image was equivocal, and the system volume sensitivity was reduced by the material filter. The material filter improved the system's spatial resolution; therefore, the technique may be used for phantom studies to improve image quality.
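
    The reconstruction filter referred to above has a simple closed form. The Python sketch below evaluates a Butterworth window of order 5 with a 0.35 cycles/cm cut-off on a frequency axis derived from an assumed pixel size; this is the standard window definition, not necessarily the vendor's exact implementation.

        # Butterworth window B(f) = 1 / (1 + (f / f_c)**(2 n)) applied to the ramp
        # filter used in filtered back projection.
        import numpy as np

        order, f_cut = 5, 0.35                       # as selected in the study (cycles/cm)
        pixel_cm = 0.4                               # assumed pixel size, for a frequency axis
        f = np.fft.rfftfreq(128, d=pixel_cm)         # spatial frequencies in cycles/cm
        butterworth = 1.0 / (1.0 + (f / f_cut) ** (2 * order))
        ramp = np.abs(f)
        fbp_filter = ramp * butterworth              # windowed ramp filter

        print("window at 0.1, 0.35, 0.7 cycles/cm:",
              np.round(np.interp([0.1, 0.35, 0.7], f, butterworth), 3))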

  11. A prototype small CdTe gamma camera for radioguided surgery and other imaging applications.

    PubMed

    Tsuchimochi, Makoto; Sakahara, Harumi; Hayama, Kazuhide; Funaki, Minoru; Ohno, Ryoichi; Shirahata, Takashi; Orskaug, Terje; Maehlum, Gunnar; Yoshioka, Koki; Nygard, Einar

    2003-12-01

    Gamma probes have been used for sentinel lymph node biopsy in melanoma and breast cancer. However, these probes can provide only radioactivity counts and variable pitch audio output based on the intensity of the detected radioactivity. We have developed a small semiconductor gamma camera (SSGC) that allows visualisation of the size, shape and location of the target tissues. This study is designed to characterise the performance of the SSGC for radioguided surgery of metastatic lesions and for other imaging applications amenable to the smaller format of this prototype imaging system. The detector head had 32 cadmium telluride semiconductor arrays with a total of 1,024 pixels, and with application-specific integrated circuits (ASICs) and a tungsten collimator. The entire assembly was encased in a lead housing measuring 152 mmx166 mmx65 mm. The effective visual field was 44.8 mmx44.8 mm. The energy resolution and imaging aspects were tested. Two spherical 5-mm- and 15-mm-diameter technetium-99m radioactive sources that had activities of 0.15 MBq and 100 MBq, respectively, were used to simulate a sentinel lymph node and an injection site. The relative detectability of these foci by the new detector and a conventional scintillation camera was studied. The prototype was also examined in a variety of clinical applications. Energy resolution [full-width at half-maximum (FWHM)] for a single element at the centre of the field of view was 4.2% at 140 keV (99mTc), and the mean energy resolution of the CdTe detector arrays was approximately 7.8%. The spatial resolution, represented by FWHM, had a mean value of 1.56 +/- 0.05 mm. Simulated node foci could be visualised clearly by the SSGC using a 15-s acquisition time. In preliminary clinical tests, the SSGC successfully imaged diseases in a variety of tissues, including salivary and thyroid glands, temporomandibular joints and sentinel lymph nodes. The SSGC has significant potential for diagnosing diseases and facilitating

  12. Tumor dosimetry for I-131 trastuzumab therapy in a Her2+ NCI N87 xenograft mouse model using the Siemens SYMBIA E gamma camera with a pinhole collimator

    NASA Astrophysics Data System (ADS)

    Lee, Young Sub; Kim, Jin Su; Deuk Cho, Kyung; Kang, Joo Hyun; Moo Lim, Sang

    2015-07-01

    We performed imaging and therapy using I-131 trastuzumab and a pinhole collimator attached to a conventional gamma camera for human use in a mouse model. The conventional clinical gamma camera with a 2-mm-radius pinhole collimator was used for monitoring the animal model after administration of I-131 trastuzumab. The organs receiving the highest and lowest radiation doses were osteogenic cells (0.349 mSv/MBq) and skin (0.137 mSv/MBq), respectively. The mean coefficients of variation (%CV) of the effective dose equivalent and effective dose were 0.091 and 0.093 mSv/MBq, respectively. We showed the feasibility of a pinhole-attached conventional gamma camera for human use for the assessment of dosimetry. Mouse dosimetry and prediction of human dosimetry could be used to provide data for the safety and efficacy of newly developed therapeutic schemes.

  13. Prism-based single-camera system for stereo display

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we deduce the relationship between binocular viewing and the dual-camera system. Thus we can establish the relationship between the prism single-camera system and binocular viewing and obtain the positional relationship of prism, camera, and object that gives the best stereo display. Finally, using NVIDIA active shutter stereo glasses, we realize a three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can make use of the prism single-camera system to simulate the various viewing configurations of a pair of eyes. The stereo imaging system designed by the method proposed in this paper can faithfully restore the 3-D shape of the photographed object.

  14. A comparison between the use of a shadow shield whole body counter and an uncollimated gamma camera in the assessment of the seven-day retention of SeHCAT.

    PubMed

    Hames, T K; Condon, B R; Fleming, J S; Phillips, G; Holdstock, G; Smith, C L; Howlett, P J; Ackery, D

    1984-07-01

    We have compared the 7-day retention of the radioisotope bile salt analogue SeHCAT (75Se-23-selena-25-homotaurocholate), by whole body counting and by uncollimated gamma camera measurement, in phantoms and in 25 patients with inflammatory bowel disease. The results correlate with a linear correlation coefficient of 0.96. An uncollimated gamma camera can be used to assess bile acid malabsorption when a whole body radioactivity monitor is not available.

  15. Design and performance tests of the calorimetric tract of a Compton Camera for small-animals imaging

    NASA Astrophysics Data System (ADS)

    Rossi, P.; Baldazzi, G.; Battistella, A.; Bello, M.; Bollini, D.; Bonvicini, V.; Fontana, C. L.; Gennaro, G.; Moschini, G.; Navarria, F.; Rashevsky, A.; Uzunov, N.; Zampa, G.; Zampa, N.; Vacchi, A.

    2011-02-01

    The bio-distribution and targeting capability of pharmaceuticals may be assessed in small animals by imaging gamma-rays emitted from radio-isotope markers. Detectors that exploit the Compton concept allow higher gamma-ray efficiency compared to conventional Anger cameras employing collimators, and feature sub-millimeter spatial resolution and compact geometry. We are developing a Compton Camera that has to address several requirements: the high rates typical of the Compton concept; detection of gamma-rays of different energies that may range from 140 keV (99mTc) to 511 keV (β+ emitters); presence of gamma and beta radiation with energies up to 2 MeV in case of 188Re. The camera consists of a thin position-sensitive Tracker that scatters the gamma ray, and a second position-sensitive detection system to totally absorb the energy of the scattered photons (Calorimeter). In this paper we present the design and discuss the realization of the calorimetric tract, including the choice of scintillator crystal, pixel size, and detector geometry. Simulations of the gamma-ray trajectories from source to detectors have helped to assess the accuracy of the system and decide on camera design. Crystals of different materials, such as LaBr3, GSO and YAP, and of different size, in continuous or segmented geometry, have been optically coupled to a multi-anode Hamamatsu H8500 detector, allowing measurements of spatial resolution and efficiency.
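
    The kinematics underlying the Compton concept can be written in one line: the scattering angle that defines the back-projected cone follows from the total photon energy and the energy deposited in the tracker. The Python sketch below evaluates it for two illustrative energy pairs.

        # Compton cone angle from measured energies:
        #     cos(theta) = 1 - m_e c^2 * (1/(E0 - E1) - 1/E0),
        # where E0 is the incident photon energy and E1 the energy left in the scatterer.
        import math

        M_E_C2 = 511.0                      # electron rest energy, keV

        def compton_cone_angle(e0_kev, e1_kev):
            """Scattering angle (deg) for a photon of energy e0 depositing e1 in the tracker."""
            cos_theta = 1.0 - M_E_C2 * (1.0 / (e0_kev - e1_kev) - 1.0 / e0_kev)
            return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

        for e0, e1 in [(140.0, 10.0), (511.0, 150.0)]:   # illustrative numbers only
            print(f"E0 = {e0:.0f} keV, E1 = {e1:.0f} keV -> "
                  f"cone half-angle ~ {compton_cone_angle(e0, e1):.1f} deg")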

  16. Combustion pinhole camera system

    DOEpatents

    Witte, Arvel B.

    1984-02-21

    A pinhole camera system utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, an external, variable density light filter which is coupled electronically to the vidicon automatic gain control (agc). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  17. [Evaluation of the efficacy of sentinel node detection in breast cancer: chronological course and influence of the incorporation of an intra-operative portable gamma camera].

    PubMed

    Goñi Gironés, E; Vicente García, F; Serra Arbeloa, P; Estébanez Estébanez, C; Calvo Benito, A; Rodrigo Rincón, I; Camarero Salazar, A; Martínez Lozano, M E

    2013-01-01

    To define the sentinel node identification rate in breast cancer, its chronological evolution, and the influence of the introduction of a portable gamma camera. A retrospective study was conducted using a prospective database of 754 patients who underwent sentinel lymph node biopsy between January 2003 and December 2011. A mixed technique was used in the initial period; subsequently the procedure was performed with the radiotracer administered intra-peritumorally the day before surgery. Until October 2009, excision of the sentinel node was guided by a probe; after that date, a portable gamma camera was introduced for intra-surgical detection. The SN was biopsied in 725 of the 754 patients studied, giving a global effectiveness of the technique of 96.2%. By year of surgical intervention, the identification percentage was 93.5% in 2003, 88.7% in 2004, 94.3% in 2005, 95.7% in 2006, 93.3% in 2007, 98.8% in 2008, 97.1% in 2009 and 99.1% in 2010 and 2011. The difference in the identification proportion before and after the incorporation of the portable gamma camera was significant, at 4.6% (95% CI of the difference 2-7.2%, P = 0.0037). The global identification percentage exceeds the level recommended by current guidelines, and an improvement in this parameter over the study period was observed. These data suggest that the incorporation of the portable gamma camera played an important role. Copyright © 2013 Elsevier España, S.L. and SEMNIM. All rights reserved.
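
    The reported 4.6% difference with a 95% confidence interval and P value is the kind of result a standard two-proportion comparison produces. The sketch below shows that generic test with hypothetical before/after counts, since the abstract does not give the per-period group sizes; it is an illustration of the statistic, not a reproduction of the study's analysis.

    ```python
    import math

    def two_proportion_z(success_a, n_a, success_b, n_b):
        """Two-proportion comparison: difference, 95% CI and two-sided p-value (normal approx.)."""
        p_a, p_b = success_a / n_a, success_b / n_b
        diff = p_b - p_a
        # Unpooled standard error for the confidence interval
        se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
        ci = (diff - 1.96 * se, diff + 1.96 * se)
        # Pooled standard error for the test statistic
        p_pool = (success_a + success_b) / (n_a + n_b)
        se_pool = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = diff / se_pool
        p_value = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
        return diff, ci, p_value

    if __name__ == "__main__":
        # Hypothetical counts before/after introducing the portable gamma camera.
        diff, ci, p = two_proportion_z(470, 500, 250, 254)
        print(f"difference {100*diff:.1f}% (95% CI {100*ci[0]:.1f} to {100*ci[1]:.1f}%), p = {p:.4f}")
    ```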

  18. Real time moving scene holographic camera system

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1973-01-01

    A holographic motion picture camera system producing resolution of front surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts of the system are positioned on the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).

  19. Measurement of total-body cobalt-57 vitamin B12 absorption with a gamma camera.

    PubMed

    Cardarelli, J A; Slingerland, D W; Burrows, B A; Miller, A

    1985-08-01

    Previously described techniques for the measurement of the absorption of [57Co]vitamin B12 by total-body counting have required an iron room equipped with scanning or multiple detectors. The present study uses simplifying modifications which make the technique more available and include the use of static geometry, the measurement of body thickness to correct for attenuation, a simple formula to convert the capsule-in-air count to a 100% absorption count, and finally the use of an adequately shielded gamma camera obviating the need of an iron room.
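
    The abstract mentions a body-thickness attenuation correction and a simple formula converting the capsule-in-air count to a 100% absorption count, but does not give either explicitly. The sketch below is only one plausible form of such a calculation, with the attenuation model, effective attenuation coefficient and geometry factor all labelled as assumptions.

    ```python
    import math

    def percent_absorption(body_count, capsule_in_air_count,
                           body_thickness_cm, mu_eff_per_cm=0.06, geometry_factor=1.0):
        """Crude estimate of total-body Co-57 B12 absorption (%).

        The body count is boosted by an exponential attenuation correction based on
        measured body thickness (assumed model); the capsule-in-air count times an
        assumed geometry factor stands in for the 100%-absorption reference count."""
        attenuation_correction = math.exp(mu_eff_per_cm * body_thickness_cm / 2.0)
        reference_100pct = capsule_in_air_count * geometry_factor
        return 100.0 * body_count * attenuation_correction / reference_100pct

    if __name__ == "__main__":
        # Illustrative counts acquired with identical geometry and counting time.
        print(f"absorption ~ {percent_absorption(12000, 40000, 22.0):.1f} %")
    ```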

  20. Prompt gamma imaging of proton pencil beams at clinical dose rate

    NASA Astrophysics Data System (ADS)

    Perali, I.; Celani, A.; Bombelli, L.; Fiorini, C.; Camera, F.; Clementel, E.; Henrotin, S.; Janssens, G.; Prieels, D.; Roellinghoff, F.; Smeets, J.; Stichelbaut, F.; Vander Stappen, F.

    2014-10-01

    In this work, we present experimental results of a prompt gamma camera for real-time proton beam range verification. The detection system features a pixelated Cerium doped lutetium based scintillation crystal, coupled to Silicon PhotoMultiplier arrays, read out by dedicated electronics. The prompt gamma camera uses a knife-edge slit collimator to produce a 1D projection of the beam path in the target on the scintillation detector. We designed the detector to provide high counting statistics and high photo-detection efficiency for prompt gamma rays of several MeV. The slit design favours the counting statistics and could be advantageous in terms of simplicity, reduced cost and limited footprint. We present the description of the realized gamma camera, as well as the results of the characterization of the camera itself in terms of imaging performance. We also present the results of experiments in which a polymethyl methacrylate phantom was irradiated with proton pencil beams in a proton therapy center. A tungsten slit collimator was used and prompt gamma rays were acquired in the 3-6 MeV energy range. The acquisitions were performed with the beam operated at 100 MeV, 160 MeV and 230 MeV, with beam currents at the nozzle exit of several nA. Measured prompt gamma profiles are consistent with the simulations and we reached a precision (2σ) in shift retrieval of 4 mm with 0.5 × 108, 1.4 × 108 and 3.4 × 108 protons at 100, 160 and 230 MeV, respectively. We conclude that the acquisition of prompt gamma profiles for in vivo range verification of proton beam with the developed gamma camera and a slit collimator is feasible in clinical conditions. The compact design of the camera allows its integration in a proton therapy treatment room and further studies will be undertaken to validate the use of this detection system during treatment of real patients.

  1. The development of large-aperture test system of infrared camera and visible CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Dual-band imaging systems combining an infrared camera and a visible CCD camera are widely used in many types of equipment and applications. If such a system is tested with a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber and computer, which reduces the cost and the time spent on installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator focal position with changing environmental temperature, which also improves the image quality of the wide-field collimator and the test accuracy. Its performance matches that of comparable foreign systems at a much lower cost, so it should find a good market.
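
    The multiple-frame averaging step mentioned above relies on the fact that averaging N frames reduces uncorrelated random noise by roughly 1/sqrt(N). The short numpy sketch below demonstrates this on synthetic frames; the frame count and noise level are illustrative, not values from the paper.

    ```python
    import numpy as np

    def average_frames(frames):
        """Pixel-wise mean of a stack of frames (shape: N x H x W)."""
        return np.mean(np.asarray(frames, dtype=np.float64), axis=0)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        truth = np.full((64, 64), 100.0)                 # flat synthetic target
        n_frames, sigma = 16, 5.0
        frames = truth + rng.normal(0.0, sigma, size=(n_frames, 64, 64))
        avg = average_frames(frames)
        print(f"single-frame noise ~ {np.std(frames[0] - truth):.2f}")
        print(f"{n_frames}-frame average noise ~ {np.std(avg - truth):.2f} "
              f"(expected ~ {sigma / np.sqrt(n_frames):.2f})")
    ```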

  2. Determination of in vivo behavior of mitomycin C-loaded o/w soybean oil microemulsion and mitomycin C solution via gamma camera imaging.

    PubMed

    Kotmakçı, Mustafa; Kantarcı, Gülten; Aşıkoğlu, Makbule; Ozkılıç, Hayal; Ertan, Gökhan

    2013-09-01

    In this study, a microemulsion system was evaluated for delivery of mitomycin C (MMC). To track the distribution of the formulated drug after intravenous administration, radiochemical labeling and gamma scintigraphy imaging were used. The aim was to evaluate a microemulsion system for intravenous delivery of MMC and to compare its in vivo behavior with that of the MMC solution. For microemulsion formulation, soybean oil was used as the oil phase. Lecithin and Tween 80 were surfactants and ethanol was the cosurfactant. To understand the whole body localization of MMC-loaded microemulsion, MMC was labeled with radioactive technetium and gamma scintigraphy was applied for visualization of drug distribution. Radioactivity in the bladder 30 minutes after injection of the MMC solution was observed, according to static gamma camera images. This shows that urinary excretion of the latter starts very soon. On the other hand, no radioactivity appeared in the urinary bladder during the 90 minutes following the administration of MMC-loaded microemulsion. The unabated radioactivity in the liver during the experiment shows that the localization of microemulsion formulation in the liver is stable. In the light of the foregoing, it is suggested that this microemulsion formulation may be an appropriate carrier system for anticancer agents by intravenous delivery in hepatic cancer chemotherapy.

  3. Precision imaging of 4.4 MeV gamma rays using a 3-D position sensitive Compton camera.

    PubMed

    Koide, Ayako; Kataoka, Jun; Masuda, Takamitsu; Mochizuki, Saku; Taya, Takanori; Sueoka, Koki; Tagawa, Leo; Fujieda, Kazuya; Maruhashi, Takuya; Kurihara, Takuya; Inaniwa, Taku

    2018-05-25

    Imaging of nuclear gamma-ray lines in the 1-10 MeV range is far from being established in both medical and physical applications. In proton therapy, 4.4 MeV gamma rays are emitted from the excited nucleus of either 12C* or 11B* and are considered good indicators of dose delivery and/or range verification. Further, in gamma-ray astronomy, 4.4 MeV gamma rays are produced by cosmic ray interactions in the interstellar medium, and can thus be used to probe nucleosynthesis in the universe. In this paper, we present a high-precision image of 4.4 MeV gamma rays taken by a newly developed 3-D position sensitive Compton camera (3D-PSCC). To mimic the situation in proton therapy, we first irradiated water, PMMA and Ca(OH)2 with a 70 MeV proton beam, then identified the various nuclear lines with an HPGe detector. The 4.4 MeV gamma rays constitute a broad peak, including the single and double escape peaks. Thus, by setting the energy window of the 3D-PSCC from 3 to 5 MeV, we show that the gamma ray image concentrates sharply near the Bragg peak, as expected from the minimum energy threshold and the sharp peak profile in the cross section of 12C(p,p')12C*.

  4. Combustion pinhole-camera system

    DOEpatents

    Witte, A.B.

    1982-05-19

    A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, an external, variable density light filter which is coupled electronically to the vidicon automatic gain control (agc). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  5. Combustion pinhole camera system

    DOEpatents

    Witte, A.B.

    1984-02-21

    A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, an external, variable density light filter which is coupled electronically to the vidicon automatic gain control (agc). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor. 2 figs.

  6. Positron emission particle tracking using a modular positron camera

    NASA Astrophysics Data System (ADS)

    Parker, D. J.; Leadbeater, T. W.; Fan, X.; Hausard, M. N.; Ingram, A.; Yang, Z.

    2009-06-01

    The technique of positron emission particle tracking (PEPT), developed at Birmingham in the early 1990s, enables a radioactively labelled tracer particle to be accurately tracked as it moves between the detectors of a "positron camera". In 1999 the original Birmingham positron camera, which consisted of a pair of MWPCs, was replaced by a system comprising two NaI(Tl) gamma camera heads operating in coincidence. This system has been successfully used for PEPT studies of a wide range of granular and fluid flow processes. More recently a modular positron camera has been developed using a number of the bismuth germanate (BGO) block detectors from standard PET scanners (CTI ECAT 930 and 950 series). This camera has flexible geometry, is transportable, and is capable of delivering high data rates. This paper presents simple models of its performance, and initial experience of its use in a range of geometries and applications.
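
    The core of PEPT is locating the tracer from a set of coincidence lines of response: the tracer position is estimated as the point minimizing the summed squared distances to those lines. The sketch below implements only that least-squares step on synthetic lines, omitting the iterative outlier rejection used in practice; it is an illustration of the principle, not the Birmingham algorithm.

    ```python
    import numpy as np

    def locate_tracer(points, directions):
        """Least-squares point closest to a set of 3-D lines of response.

        points: (N, 3) array, one point on each line.
        directions: (N, 3) array of line directions (need not be normalised)."""
        points = np.asarray(points, dtype=float)
        d = np.asarray(directions, dtype=float)
        d = d / np.linalg.norm(d, axis=1, keepdims=True)
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for p, u in zip(points, d):
            proj = np.eye(3) - np.outer(u, u)   # projects onto the plane normal to the line
            A += proj
            b += proj @ p
        return np.linalg.solve(A, b)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        true_pos = np.array([0.10, -0.05, 0.20])                  # metres (synthetic)
        dirs = rng.normal(size=(200, 3))                          # random LOR directions
        pts = true_pos + rng.normal(0.0, 0.002, size=(200, 3))    # points near the tracer
        print("estimated position:", np.round(locate_tracer(pts, dirs), 4))
    ```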

  7. Camera systems in human motion analysis for biomedical applications

    NASA Astrophysics Data System (ADS)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) systems have been a major interest among researchers in computer vision, artificial intelligence, and biomedical engineering and sciences. This is due to their wide and promising biomedical applications, namely bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and the analysis of biomedical signals and images for diagnosis and rehabilitation. This paper provides an extensive review of the camera systems used in HMA and their taxonomy, including camera types, camera calibration and camera configuration. The review focuses on evaluating camera system considerations for HMA systems specifically in biomedical applications. It is important because it provides guidelines and recommendations for researchers and practitioners selecting a camera system for an HMA system for biomedical applications.

  8. Overview of a Hybrid Underwater Camera System

    DTIC Science & Technology

    2014-07-01

    The camera's range gate is adjustable in increments of 200 ps, and the camera is equipped with a 6:1 motorized zoom lens. The LUCIE sub-systems include a precision miniature attitude and heading reference system (AHRS), a pulsed laser, a gated camera, a sonar transducer, and a control and power distribution system. (Proc. of SPIE Vol. 9111)

  9. Development of biostereometric experiments. [stereometric camera system

    NASA Technical Reports Server (NTRS)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  10. Ultra-wide Range Gamma Detector System for Search and Locate Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odell, D. Mackenzie; Harpring, Larry J.; Moore, Frank S. Jr.

    2005-10-26

    Collecting debris samples following a nuclear event requires that operations be conducted from a considerable stand-off distance. An ultra-wide range gamma detector system has been constructed to accomplish both long range radiation search and close range hot sample collection functions. Constructed and tested on a REMOTEC Andros platform, the system has demonstrated reliable operation over six orders of magnitude of gamma dose from 100's of μR/hr to over 100 R/hr. Functional elements include a remotely controlled variable collimator assembly, a NaI(Tl)/photomultiplier tube detector, a proprietary digital radiation instrument, a coaxially mounted video camera, a digital compass, and both local and remote control computers with a user interface designed for long range operations. Long range sensitivity and target location, as well as close range sample selection performance are presented.

  11. A SPECT Scanner for Rodent Imaging Based on Small-Area Gamma Cameras

    NASA Astrophysics Data System (ADS)

    Lage, Eduardo; Villena, José L.; Tapias, Gustavo; Martinez, Naira P.; Soto-Montenegro, Maria L.; Abella, Mónica; Sisniega, Alejandro; Pino, Francisco; Ros, Domènec; Pavia, Javier; Desco, Manuel; Vaquero, Juan J.

    2010-10-01

    We developed a cost-effective SPECT scanner prototype (rSPECT) for in vivo imaging of rodents based on small-area gamma cameras. Each detector consists of a position-sensitive photomultiplier tube (PS-PMT) coupled to a 30 x 30 NaI(Tl) scintillator array and electronics attached to the PS-PMT sockets for adapting the detector signals to an in-house developed data acquisition system. The detector components are enclosed in a lead-shielded case with a receptacle to insert the collimators. System performance was assessed using 99mTc for a high-resolution parallel-hole collimator, and for a 0.75-mm pinhole collimator with a 60° aperture angle and a 42-mm collimator length. The energy resolution is about 10.7% of the photopeak energy. The overall system sensitivity is about 3 cps/μCi/detector and planar spatial resolution ranges from 2.4 mm at 1 cm source-to-collimator distance to 4.1 mm at 4.5 cm with parallel-hole collimators. With pinhole collimators planar spatial resolution ranges from 1.2 mm at 1 cm source-to-collimator distance to 2.4 mm at 4.5 cm; sensitivity at these distances ranges from 2.8 to 0.5 cps/μCi/detector. Tomographic hot-rod phantom images are presented together with images of bone, myocardium and brain of living rodents to demonstrate the feasibility of preclinical small-animal studies with the rSPECT.

  12. LAMOST CCD camera-control system based on RTS2

    NASA Astrophysics Data System (ADS)

    Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng

    2018-05-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and provides a referential solution for full RTS2 introduction to the LAMOST observatory control system.
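
    The master-slave idea described above can be illustrated with a minimal fan-out: one "virtual camera" object that dispatches a single expose command to many real-camera workers in parallel. The class and method names below are hypothetical and do not reflect the RTS2 API; this is a sketch of the design pattern, not the LAMOST implementation.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    class RealCamera:
        """Stand-in for one physical CCD controller (placeholder device call)."""
        def __init__(self, cam_id):
            self.cam_id = cam_id

        def expose(self, seconds):
            return f"camera {self.cam_id:02d}: exposed {seconds:.1f} s"

    class VirtualCamera:
        """Master that presents many slave cameras as a single device."""
        def __init__(self, n_cameras=32):
            self.cameras = [RealCamera(i) for i in range(n_cameras)]

        def expose_all(self, seconds):
            # Fan one command out to all slaves and collect their replies.
            with ThreadPoolExecutor(max_workers=len(self.cameras)) as pool:
                return list(pool.map(lambda cam: cam.expose(seconds), self.cameras))

    if __name__ == "__main__":
        results = VirtualCamera().expose_all(30.0)
        print(results[0], "...", results[-1])
    ```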

  13. Fuzzy logic control for camera tracking system

    NASA Technical Reports Server (NTRS)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
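
    A minimal version of such a fuzzy tracking rule base maps the target's pixel error to a pan rate through triangular membership functions and a weighted-centroid defuzzification. The sketch below is an illustration of that mechanism only; the membership supports, rule outputs and rates are assumptions, not the flight system's rule base.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function peaking at b on support [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_pan_rate(error_px):
        """Map the target's horizontal pixel error to a pan rate (deg/s)."""
        # Rules: (membership of the error, commanded pan rate). Supports are assumed.
        rules = [
            (tri(error_px, -400, -200,   0), -5.0),   # target left  -> pan left
            (tri(error_px, -100,    0, 100),  0.0),   # centred      -> hold
            (tri(error_px,    0,  200, 400),  5.0),   # target right -> pan right
        ]
        num = sum(w * out for w, out in rules)
        den = sum(w for w, _ in rules)
        return num / den if den > 0 else 0.0

    if __name__ == "__main__":
        for e in (-300, -50, 0, 120, 350):
            print(f"error {e:5d} px -> pan rate {fuzzy_pan_rate(e):+.2f} deg/s")
    ```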

  14. Focal Plane Detectors for the Advanced Gamma-Ray Imaging System (AGIS)

    NASA Astrophysics Data System (ADS)

    Wagner, Robert G.; AGIS Photodetector Group; Byrum, K.; Drake, G.; Falcone, A.; Funk, S.; Horan, D.; Mukherjee, R.; Tajima, H.; Williams, D.

    2008-03-01

    The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next generation observatory in ground-based very high energy gamma-ray astronomy. It is being designed to achieve a significant improvement in sensitivity compared to current Imaging Air Cherenkov Telescope (IACT) Arrays. One of the main requirements in order that AGIS fulfill this goal will be to achieve higher angular resolution than current IACTs. Simulations show that a substantial improvement in angular resolution may be achieved if the pixel size is reduced to less than 0.05 deg, i.e. two to three times smaller than the pixel size of current IACT cameras. With finer pixelation and the plan to deploy on the order of 100 telescopes in the AGIS array, the channel count will exceed 1,000,000 imaging pixels. High uniformity and long mean time-to-failure will be important aspects of a successful photodetector technology choice. Here we present alternatives being considered for AGIS, including both silicon photomultipliers (SiPMs) and multi-anode photomultipliers (MAPMTs). Results from laboratory testing of MAPMTs and SiPMs are presented along with results from the first incorporation of these devices in cameras on test bed Cherenkov telescopes.

  15. Gamma Ray Burst Optical Counterpart Search Experiment (GROCSE)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, H.S.; Ables, E.; Bionta, R.M.

    GROCSE (Gamma-Ray Optical Counterpart Search Experiments) is a system of automated telescopes that search for simultaneous optical activity associated with gamma ray bursts in response to real-time burst notifications provided by the BATSE/BACODINE network. The first generation system, GROCSE 1, is sensitive down to Mv ~8.5 and requires an average of 12 seconds to obtain the first images of the gamma ray burst error box defined by the BACODINE trigger. The collaboration is now constructing a second generation system which has a 4 second slewing time and can reach Mv ~14 with a 5 second exposure. GROCSE 2 consists of 4 cameras on a single mount. Each camera views the night sky through a commercial Canon lens (f/1.8, focal length 200 mm) and utilizes a 2K x 2K Loral CCD. Light weight and low noise custom readout electronics were designed and fabricated for these CCDs. The total field of view of the 4 cameras is 17.6 x 17.6 degrees. GROCSE II will be operated by the end of 1995. In this paper, the authors present an overview of the GROCSE system and the results of measurements with a GROCSE 2 prototype unit.

  16. Focal Plane Detectors for the Advanced Gamma-Ray Imaging System (AGIS)

    NASA Astrophysics Data System (ADS)

    Wagner, R. G.; Byrum, K.; Drake, G.; Funk, S.; Otte, N.; Smith, A.; Tajima, H.; Williams, D.

    2009-05-01

    The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next generation observatory in ground-based very high energy gamma-ray astronomy. It is being designed to achieve a significant improvement in sensitivity compared to current Imaging Air Cherenkov Telescope (IACT) Arrays. One of the main requirements in order that AGIS fulfills this goal will be to achieve higher angular resolution than current IACTs. Simulations show that a substantial improvement in angular resolution may be achieved if the pixel size is reduced to 0.05 deg, i.e. two to three times smaller than for current IACT cameras. Here we present results from testing of alternatives being considered for AGIS, including both silicon photomultipliers (SiPMs) and multi-anode photomultipliers (MAPMTs).

  17. The GCT camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Lapington, J. S.; Abchiche, A.; Allan, D.; Amans, J.-P.; Armstrong, T. P.; Balzer, A.; Berge, D.; Boisson, C.; Bousquet, J.-J.; Bose, R.; Brown, A. M.; Bryan, M.; Buchholtz, G.; Buckley, J.; Chadwick, P. M.; Costantini, H.; Cotter, G.; Daniel, M. K.; De Franco, A.; De Frondat, F.; Dournaux, J.-L.; Dumas, D.; Ernenwein, J.-P.; Fasola, G.; Funk, S.; Gironnet, J.; Graham, J. A.; Greenshaw, T.; Hervet, O.; Hidaka, N.; Hinton, J. A.; Huet, J.-M.; Jankowsky, D.; Jegouzo, I.; Jogler, T.; Kawashima, T.; Kraus, M.; Laporte, P.; Leach, S.; Lefaucheur, J.; Markoff, S.; Melse, T.; Minaya, I. A.; Mohrmann, L.; Molyneux, P.; Moore, P.; Nolan, S. J.; Okumura, A.; Osborne, J. P.; Parsons, R. D.; Rosen, S.; Ross, D.; Rowell, G.; Rulten, C. B.; Sato, Y.; Sayede, F.; Schmoll, J.; Schoorlemmer, H.; Servillat, M.; Sol, H.; Stamatescu, V.; Stephan, M.; Stuik, R.; Sykes, J.; Tajima, H.; Thornhill, J.; Tibaldo, L.; Trichard, C.; Varner, G.; Vink, J.; Watson, J. J.; White, R.; Yamane, N.; Zech, A.; Zink, A.; Zorn, J.; CTA Consortium

    2017-12-01

    The Gamma Cherenkov Telescope (GCT) is one of the designs proposed for the Small Sized Telescope (SST) section of the Cherenkov Telescope Array (CTA). The GCT uses dual-mirror optics, resulting in a compact telescope with good image quality and a large field of view with a smaller, more economical, camera than is achievable with conventional single mirror solutions. The photon counting GCT camera is designed to record the flashes of atmospheric Cherenkov light from gamma and cosmic ray initiated cascades, which last only a few tens of nanoseconds. The GCT optics require that the camera detectors follow a convex surface with a radius of curvature of 1 m and a diameter of 35 cm, which is approximated by tiling the focal plane with 32 modules. The first camera prototype is equipped with multi-anode photomultipliers, each comprising an 8×8 array of 6×6 mm2 pixels to provide the required angular scale, adding up to 2048 pixels in total. Detector signals are shaped, amplified and digitised by electronics based on custom ASICs that provide digitisation at 1 GSample/s. The camera is self-triggering, retaining images where the focal plane light distribution matches predefined spatial and temporal criteria. The electronics are housed in the liquid-cooled, sealed camera enclosure. LED flashers at the corners of the focal plane provide a calibration source via reflection from the secondary mirror. The first GCT camera prototype underwent preliminary laboratory tests last year. In November 2015, the camera was installed on a prototype GCT telescope (SST-GATE) in Paris and was used to successfully record the first Cherenkov light of any CTA prototype, and the first Cherenkov light seen with such a dual-mirror optical system. A second full-camera prototype based on Silicon Photomultipliers is under construction. Up to 35 GCTs are envisaged for CTA.

  18. A feasibility study of an integrated NIR/gamma/visible imaging system for endoscopic sentinel lymph node mapping.

    PubMed

    Kang, Han Gyu; Lee, Ho-Young; Kim, Kyeong Min; Song, Seong-Hyun; Hong, Gun Chul; Hong, Seong Jong

    2017-01-01

    The aim of this study is to integrate NIR, gamma, and visible imaging tools into a single endoscopic system to overcome the limitation of NIR using gamma imaging and to demonstrate the feasibility of endoscopic NIR/gamma/visible fusion imaging for sentinel lymph node (SLN) mapping with a small animal. The endoscopic NIR/gamma/visible imaging system consists of a tungsten pinhole collimator, a plastic focusing lens, a BGO crystal (11 × 11 × 2 mm3), a fiber-optic taper (front = 11 × 11 mm2, end = 4 × 4 mm2), a 122-cm long endoscopic fiber bundle, an NIR emission filter, a relay lens, and a CCD camera. A custom-made Derenzo-like phantom filled with a mixture of 99mTc and indocyanine green (ICG) was used to assess the spatial resolution of the NIR and gamma images. The ICG fluorophore was excited using a light-emitting diode (LED) with an excitation filter (723-758 nm), and the emitted fluorescence photons were detected with an emission filter (780-820 nm) for a duration of 100 ms. Subsequently, the 99mTc distribution in the phantom was imaged for 3 min. The feasibility of in vivo SLN mapping with a mouse was investigated by injecting a mixture of 99mTc-antimony sulfur colloid (12 MBq) and ICG (0.1 mL) into the right paw of the mouse (C57/B6) subcutaneously. After one hour, NIR, gamma, and visible images were acquired sequentially. Subsequently, the dissected SLN was imaged in the same way as the in vivo SLN mapping. The NIR, gamma, and visible images of the Derenzo-like phantom can be obtained with the proposed endoscopic imaging system. The NIR/gamma/visible fusion image of the SLN showed a good correlation among the NIR, gamma, and visible images both for the in vivo and ex vivo imaging. We demonstrated the feasibility of the integrated NIR/gamma/visible imaging system using a single endoscopic fiber bundle. In future, we plan to investigate miniaturization of the endoscope head and simultaneous NIR/gamma/visible imaging with

  19. An integrated port camera and display system for laparoscopy.

    PubMed

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  20. Application of infrared uncooled cameras in surveillance systems

    NASA Astrophysics Data System (ADS)

    Dulski, R.; Bareła, J.; Trzaskawka, P.; Piątkowski, T.

    2013-10-01

    The recent need to protect military bases, convoys and patrols has strongly driven the development of multisensor security systems for perimeter protection. Among the most important devices used in such systems are IR cameras. The paper discusses the technical possibilities and limitations of using an uncooled IR camera in a multi-sensor surveillance system for perimeter protection. Effective detection ranges depend on the class of the sensor used and on the observed scene itself. Applying an IR camera increases the probability of intruder detection regardless of the time of day or weather conditions, while simultaneously decreasing the false alarm rate produced by the surveillance system. The role of IR cameras in the system is discussed, as well as the technical possibilities for detecting a human being. Commercially available IR cameras capable of achieving the desired ranges are compared, and the spatial resolution required for detection, recognition and identification is calculated. Detection ranges are simulated using a new model for predicting target acquisition performance based on the Targeting Task Performance (TTP) metric. Like its predecessor, the Johnson criteria, the new model relates range performance to image quality. The scope of the presented analysis is limited to estimating detection, recognition and identification ranges for typical thermal cameras with uncooled microbolometer focal plane arrays, the type most widely used in security systems because of its competitive price-to-performance ratio. Range calculations for detection, recognition and identification were made, and the results for devices with selected technical specifications are compared and discussed.
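
    The paper uses the TTP metric, which is more involved than can be shown here; the sketch below instead illustrates the older, simpler Johnson-criteria style of range estimate that it supersedes: the range at which the required number of resolvable cycles (two pixels per cycle) spans the target's critical dimension for a given detector pitch and focal length. All numeric parameters are illustrative assumptions.

    ```python
    def detection_range_m(target_size_m, pixel_pitch_um, focal_length_mm, cycles_required):
        """Simplified Johnson-criteria range: distance at which 'cycles_required'
        resolvable cycles (2 pixels per cycle) span the target's critical dimension."""
        ifov_rad = (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)
        return target_size_m / (cycles_required * 2.0 * ifov_rad)

    if __name__ == "__main__":
        # Illustrative uncooled-microbolometer parameters (17 um pitch, 50 mm lens).
        for task, n_cycles in (("detection", 1.0), ("recognition", 4.0), ("identification", 8.0)):
            r = detection_range_m(target_size_m=1.8, pixel_pitch_um=17.0,
                                  focal_length_mm=50.0, cycles_required=n_cycles)
            print(f"{task:15s}: ~{r:6.0f} m")
    ```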

  1. Data Acquisition System of Nobeyama MKID Camera

    NASA Astrophysics Data System (ADS)

    Nagai, M.; Hisamatsu, S.; Zhai, G.; Nitta, T.; Nakai, N.; Kuno, N.; Murayama, Y.; Hattori, S.; Mandal, P.; Sekimoto, Y.; Kiuchi, H.; Noguchi, T.; Matsuo, H.; Dominjon, A.; Sekiguchi, S.; Naruse, M.; Maekawa, J.; Minamidani, T.; Saito, M.

    2018-05-01

    We are developing a superconducting camera based on microwave kinetic inductance detectors (MKIDs) to observe 100-GHz continuum with the Nobeyama 45-m telescope. A data acquisition (DAQ) system for the camera has been designed to operate the MKIDs with the telescope. This system is required to connect the telescope control system (COSMOS) to the readout system of the MKIDs (MKID DAQ) which employs the frequency-sweeping probe scheme. The DAQ system is also required to record the reference signal of the beam switching for the demodulation by the analysis pipeline in order to suppress the sky fluctuation. The system has to be able to merge and save all data acquired both by the camera and by the telescope, including the cryostat temperature and pressure and the telescope pointing. A collection of software which implements these functions and works as a TCP/IP server on a workstation was developed. The server accepts commands and observation scripts from COSMOS and then issues commands to MKID DAQ to configure and start data acquisition. We commissioned the MKID camera on the Nobeyama 45-m telescope and successfully obtained scan signals from the atmosphere and from the Moon.
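
    The abstract describes a TCP/IP server that accepts commands and then configures and starts data acquisition. The sketch below is a minimal illustration of such a command server; the commands, port and replies are hypothetical and are not the COSMOS or MKID DAQ protocol.

    ```python
    import socketserver

    class CommandHandler(socketserver.StreamRequestHandler):
        """Handle one newline-terminated command per connection (hypothetical protocol)."""
        def handle(self):
            command = self.rfile.readline().decode().strip().lower()
            if command == "start":
                reply = "OK: acquisition started"
            elif command == "stop":
                reply = "OK: acquisition stopped"
            else:
                reply = f"ERR: unknown command '{command}'"
            self.wfile.write((reply + "\n").encode())

    if __name__ == "__main__":
        HOST, PORT = "127.0.0.1", 9000   # port chosen for illustration only
        with socketserver.TCPServer((HOST, PORT), CommandHandler) as server:
            print(f"listening on {HOST}:{PORT} (Ctrl-C to stop)")
            server.serve_forever()
    ```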

  2. Feasibility evaluation and study of adapting the attitude reference system to the Orbiter camera payload system's large format camera

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A design concept that will implement a mapping capability for the Orbiter Camera Payload System (OCPS) when ground control points are not available is discussed. Through the use of stellar imagery collected by a pair of cameras whose optical axes are structurally related to the large format camera optical axis, such pointing information is made available.

  3. Relative and Absolute Calibration of a Multihead Camera System with Oblique and Nadir Looking Cameras for a Uas

    NASA Astrophysics Data System (ADS)

    Niemeyer, F.; Schima, R.; Grenzdörffer, G.

    2013-08-01

    Numerous unmanned aerial systems (UAS) are currently flooding the market, and UAVs are specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest because the legal restrictions remain manageable while the payload capacities are sufficient for many imaging sensors. A camera system with four oblique and one nadir looking cameras is currently under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as the carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences from the test flights.

  4. Performance Characteristics For The Orbiter Camera Payload System's Large Format Camera (LFC)

    NASA Astrophysics Data System (ADS)

    Mollberg, Bernard H.

    1981-11-01

    The Orbiter Camera Payload System, the OCPS, is an integrated photographic system which is carried into Earth orbit as a payload in the Shuttle Orbiter vehicle's cargo bay. The major component of the OCPS is a Large Format Camera (LFC) which is a precision wide-angle cartographic instrument that is capable of producing high resolution stereophotography of great geometric fidelity in multiple base to height ratios. The primary design objective for the LFC was to maximize all system performance characteristics while maintaining a high level of reliability compatible with rocket launch conditions and the on-orbit environment.

  5. Gamma ray imager on the DIII-D tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pace, D. C., E-mail: pacedc@fusion.gat.com; Taussig, D.; Eidietis, N. W.

    2016-04-15

    A gamma ray camera is built for the DIII-D tokamak [J. Luxon, Nucl. Fusion 42, 614 (2002)] that provides spatial localization and energy resolution of gamma flux by combining a lead pinhole camera with custom-built detectors and optimized viewing geometry. This diagnostic system is installed on the outer midplane of the tokamak such that its 123 collimated sightlines extend across the tokamak radius while also covering most of the vertical extent of the plasma volume. A set of 30 bismuth germanate detectors can be secured in any of the available sightlines, allowing for customizable coverage in experiments with runaway electrons in the energy range of 1–60 MeV. Commissioning of the gamma ray imager includes the quantification of electromagnetic noise sources in the tokamak machine hall and a measurement of the energy spectrum of background gamma radiation. First measurements of gamma rays coming from the plasma provide a suitable testbed for implementing pulse height analysis that provides the energy of detected gamma photons.

  6. Gamma ray imager on the DIII-D tokamak

    DOE PAGES

    Pace, D. C.; Cooper, C. M.; Taussig, D.; ...

    2016-04-13

    A gamma ray camera is built for the DIII-D tokamak [J. Luxon, Nucl. Fusion 42, 614 (2002)] that provides spatial localization and energy resolution of gamma flux by combining a lead pinhole camera with custom-built detectors and optimized viewing geometry. This diagnostic system is installed on the outer midplane of the tokamak such that its 123 collimated sightlines extend across the tokamak radius while also covering most of the vertical extent of the plasma volume. A set of 30 bismuth germanate detectors can be secured in any of the available sightlines, allowing for customizable coverage in experiments with runaway electrons in the energy range of 1-60 MeV. Commissioning of the gamma ray imager includes the quantification of electromagnetic noise sources in the tokamak machine hall and a measurement of the energy spectrum of background gamma radiation. In conclusion, first measurements of gamma rays coming from the plasma provide a suitable testbed for implementing pulse height analysis that provides the energy of detected gamma photons.

  7. Imaging characteristics of photogrammetric camera systems

    USGS Publications Warehouse

    Welch, R.; Halliday, J.

    1973-01-01

    In view of the current interest in high-altitude and space photographic systems for photogrammetric mapping, the United States Geological Survey (U.S.G.S.) undertook a comprehensive research project designed to explore the practical aspects of applying the latest image quality evaluation techniques to the analysis of such systems. The project had two direct objectives: (1) to evaluate the imaging characteristics of current U.S.G.S. photogrammetric camera systems; and (2) to develop methodologies for predicting the imaging capabilities of photogrammetric camera systems, comparing conventional systems with new or different types of systems, and analyzing the image quality of photographs. Image quality was judged in terms of a number of evaluation factors including response functions, resolving power, and the detectability and measurability of small detail. The limiting capabilities of the U.S.G.S. 6-inch and 12-inch focal length camera systems were established by analyzing laboratory and aerial photographs in terms of these evaluation factors. In the process, the contributing effects of relevant parameters such as lens aberrations, lens aperture, shutter function, image motion, film type, and target contrast were assessed, yielding procedures for analyzing image quality and for predicting and comparing performance capabilities. © 1973.

  8. Orbiter Camera Payload System

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Components for an orbiting camera payload system (OCPS) include the large format camera (LFC), a gas supply assembly, and ground test, handling, and calibration hardware. The LFC, a high resolution large format photogrammetric camera for use in the cargo bay of the space transport system, is also adaptable to use on an RB-57 aircraft or on a free flyer satellite. Carrying 4000 feet of film, the LFC is usable over the visible to near IR, at V/h rates of from 11 to 41 milliradians per second, overlap of 10, 60, 70 or 80 percent and exposure times of from 4 to 32 milliseconds. With a 12 inch focal length it produces a 9 by 18 inch format (long dimension in line of flight) with full format low contrast resolution of 88 lines per millimeter (AWAR), full format distortion of less than 14 microns and a complement of 45 Reseau marks and 12 fiducial marks. Weight of the OCPS as supplied, fully loaded is 944 pounds and power dissipation is 273 watts average when in operation, 95 watts in standby. The LFC contains an internal exposure sensor, or will respond to external command. It is able to photograph starfields for inflight calibration upon command.

  9. 131I activity quantification of gamma camera planar images.

    PubMed

    Barquero, Raquel; Garcia, Hugo P; Incio, Monica G; Minguez, Pablo; Cardenas, Alexander; Martínez, Daniel; Lassmann, Michael

    2017-02-07

    A procedure is proposed to estimate the activity in target tissues in patients during therapeutic administration of 131I radiopharmaceuticals for thyroid conditions (hyperthyroidism and differentiated thyroid cancer) using a gamma camera (GC) with a high energy (HE) collimator. Planar images are acquired for lesions of different sizes r and at different distances d in two HE GC systems. Defining a region of interest (ROI) of size r on the image, total counts n_g are measured. The sensitivity S (cps MBq-1) in each acquisition is estimated as the product of the geometric factor G and the intrinsic efficiency η0. The mean fluence of 364 keV photons arriving at the ROI per disintegration, G, is calculated with the MCNPX code, simulating the entire GC and the HE collimator. The intrinsic efficiency η0 is estimated from a calibration measurement of a plane reference source of 131I in air. Values of G and S are calculated for two GC systems, the Philips Skylight and the Siemens e-cam. The total range of possible sensitivity values in thyroidal imaging in the e-cam and Skylight GC runs from 7 cps MBq-1 to 35 cps MBq-1 and from 6 cps MBq-1 to 29 cps MBq-1, respectively. These sensitivity values have been verified with the SIMIND code, with good agreement, and the results have been validated with experimental measurements in air and in a medium with scatter and attenuation. The counts in the ROI can be produced by direct, scattered and penetration photons. The fluence of direct photons is constant for any r and d, but the scatter and penetration contributions vary with the specific r and d values, which explains the large sensitivity differences found. The sensitivity in thyroidal GC planar imaging is therefore strongly dependent on uptake size and distance from the GC. An individual value for the acquisition sensitivity of each lesion can significantly alleviate the level of uncertainty in the measurement of thyroid uptake activity for each
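
    The quantification step implied by the abstract reduces to dividing the ROI count rate by the sensitivity S = G·η0. The sketch below shows only that arithmetic, with illustrative numbers chosen to fall inside the quoted sensitivity range; it is not the authors' full correction chain for scatter and penetration.

    ```python
    def planar_activity_mbq(roi_counts, acquisition_time_s, geometric_factor, intrinsic_efficiency):
        """Estimate activity (MBq) from ROI counts using S = G * eta0 (cps per MBq)."""
        sensitivity_cps_per_mbq = geometric_factor * intrinsic_efficiency
        count_rate_cps = roi_counts / acquisition_time_s
        return count_rate_cps / sensitivity_cps_per_mbq

    if __name__ == "__main__":
        # Illustrative values giving S = 20 cps/MBq, within the 6-35 cps/MBq range quoted above.
        activity = planar_activity_mbq(roi_counts=3.6e5, acquisition_time_s=600.0,
                                       geometric_factor=40.0, intrinsic_efficiency=0.5)
        print(f"estimated activity: {activity:.1f} MBq")
    ```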

  10. Head-coupled remote stereoscopic camera system for telepresence applications

    NASA Astrophysics Data System (ADS)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.

  11. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    PubMed Central

    Shortis, Mark

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172

  12. Mission Report on the Orbiter Camera Payload System (OCPS) Large Format Camera (LFC) and Attitude Reference System (ARS)

    NASA Technical Reports Server (NTRS)

    Mollberg, Bernard H.; Schardt, Bruton B.

    1988-01-01

    The Orbiter Camera Payload System (OCPS) is an integrated photographic system which is carried into earth orbit as a payload in the Space Transportation System (STS) Orbiter vehicle's cargo bay. The major component of the OCPS is a Large Format Camera (LFC), a precision wide-angle cartographic instrument that is capable of producing high resolution stereo photography of great geometric fidelity in multiple base-to-height (B/H) ratios. A secondary, supporting system to the LFC is the Attitude Reference System (ARS), which is a dual lens Stellar Camera Array (SCA) and camera support structure. The SCA is a 70-mm film system which is rigidly mounted to the LFC lens support structure and which, through the simultaneous acquisition of two star fields with each earth-viewing LFC frame, makes it possible to determine precisely the pointing of the LFC optical axis with reference to the earth nadir point. Other components complete the current OCPS configuration as a high precision cartographic data acquisition system. The primary design objective for the OCPS was to maximize system performance characteristics while maintaining a high level of reliability compatible with Shuttle launch conditions and the on-orbit environment. The full-up OCPS configuration was launched on a highly successful maiden voyage aboard the STS Orbiter vehicle Challenger on October 5, 1984, as a major payload aboard mission STS 41-G. This report documents the system design, the ground testing, the flight configuration, and an analysis of the results obtained during the Challenger mission STS 41-G.

  13. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
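
    The two image-processing steps described (differencing a pre-illumination frame against the laser-on frame to isolate the spot, then converting the spot's disparity from a reference point into range) can be illustrated on synthetic frames. The sketch below assumes a laser beam parallel to the optical axis with a known offset, so range follows the same triangulation relation as stereo; all numeric values are illustrative, not from the patent.

    ```python
    import numpy as np

    def find_laser_spot(frame_before, frame_with_laser, threshold=30):
        """Difference the frames to isolate the laser spot; return its centroid (row, col)."""
        diff = frame_with_laser.astype(np.int32) - frame_before.astype(np.int32)
        rows, cols = np.nonzero(diff > threshold)
        if rows.size == 0:
            raise RuntimeError("laser spot not found")
        return rows.mean(), cols.mean()

    def range_from_disparity(disparity_px, focal_length_px, laser_offset_m):
        """Triangulated range for a laser beam parallel to the optical axis."""
        return focal_length_px * laser_offset_m / disparity_px

    if __name__ == "__main__":
        # Synthetic frames: a flat background with a bright spot added in the second frame.
        before = np.full((240, 320), 40, dtype=np.uint8)
        after = before.copy()
        after[118:122, 208:212] = 200                  # laser spot near (120, 210)
        row, col = find_laser_spot(before, after)
        reference_col = 160.0                          # assumed reference (spot position at infinity)
        disparity = abs(col - reference_col)
        rng_m = range_from_disparity(disparity, focal_length_px=800.0, laser_offset_m=0.10)
        print(f"spot at ({row:.1f}, {col:.1f}); range ~ {rng_m:.2f} m")
    ```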

  14. Performance verification of the FlashCam prototype camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Werner, F.; Bauer, C.; Bernhard, S.; Capasso, M.; Diebold, S.; Eisenkolb, F.; Eschbach, S.; Florin, D.; Föhr, C.; Funk, S.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Lahmann, R.; Marszalek, A.; Pfeifer, M.; Principe, G.; Pühlhofer, G.; Pürckhauer, S.; Rajda, P. J.; Reimer, O.; Santangelo, A.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Wolf, D.; Zietara, K.; CTA Consortium

    2017-12-01

    The Cherenkov Telescope Array (CTA) is a future gamma-ray observatory that is planned to significantly improve upon the sensitivity and precision of the current generation of Cherenkov telescopes. The observatory will consist of several dozens of telescopes with different sizes and equipped with different types of cameras. Of these, the FlashCam camera system is the first to implement a fully digital signal processing chain which allows for a traceable, configurable trigger scheme and flexible signal reconstruction. As of autumn 2016, a prototype FlashCam camera for the medium-sized telescopes of CTA nears completion. First results of the ongoing system tests demonstrate that the signal chain and the readout system surpass CTA requirements. The stability of the system is shown using long-term temperature cycling.

  15. Camera Development for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Moncada, Roberto Jose

    2017-01-01

    With the Cherenkov Telescope Array (CTA), the very-high-energy gamma-ray universe, between 30 GeV and 300 TeV, will be probed at an unprecedented resolution, allowing deeper studies of known gamma-ray emitters and the possible discovery of new ones. This exciting project could also confirm the particle nature of dark matter by looking for the gamma rays produced by self-annihilating weakly interacting massive particles (WIMPs). The telescopes will use the imaging atmospheric Cherenkov technique (IACT) to record Cherenkov photons that are produced by the gamma-ray induced extensive air shower. One telescope design features dual-mirror Schwarzschild-Couder (SC) optics that allows the light to be finely focused on the high-resolution silicon photomultipliers of the camera modules starting from a 9.5-meter primary mirror. Each camera module will consist of a focal plane module and front-end electronics, and will have four TeV Array Readout with GSa/s Sampling and Event Trigger (TARGET) chips, giving them 64 parallel input channels. The TARGET chip has a self-trigger functionality for readout that can be used in higher logic across camera modules as well as across individual telescopes, which will each have 177 camera modules. There will be two sites, one in the northern and the other in the southern hemisphere, for full sky coverage, each spanning at least one square kilometer. A prototype SC telescope is currently under construction at the Fred Lawrence Whipple Observatory in Arizona. This work was supported by the National Science Foundation's REU program through NSF award AST-1560016.

  16. Projector-Camera Systems for Immersive Training

    DTIC Science & Technology

    2006-01-01

    The OpenCV library was used for camera calibration, averaging over a sequence of 100 captured, distortion-corrected images; the result was transposed to take into account different matrix conventions between OpenCV and the rendering application [Treskunov, Pair, and Swartout, 2004]. Related publication: Proc. Workshop on Projector-Camera Systems (PROCAMS), Nice, France, IEEE.
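
    In the spirit of the calibration step described, the sketch below shows a standard OpenCV chessboard calibration. The board size, square size and image directory are placeholders, and the report's own averaging and matrix-transposition steps are omitted.

    ```python
    import glob
    import cv2
    import numpy as np

    # Minimal OpenCV chessboard calibration sketch (board size and image glob are placeholders).
    BOARD_COLS, BOARD_ROWS = 9, 6
    SQUARE_SIZE = 0.025  # metres, assumed

    objp = np.zeros((BOARD_ROWS * BOARD_COLS, 3), np.float32)
    objp[:, :2] = np.mgrid[0:BOARD_COLS, 0:BOARD_ROWS].T.reshape(-1, 2) * SQUARE_SIZE

    object_points, image_points, image_size = [], [], None
    for path in glob.glob("calib_images/*.png"):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            continue
        found, corners = cv2.findChessboardCorners(gray, (BOARD_COLS, BOARD_ROWS))
        if found:
            object_points.append(objp)
            image_points.append(corners)
            image_size = gray.shape[::-1]

    if image_points:
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            object_points, image_points, image_size, None, None)
        print("RMS reprojection error:", rms)
        print("camera matrix:\n", K)
    else:
        print("no chessboard corners found - check the image path/board size")
    ```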

  17. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    USGS Publications Warehouse

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 
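
    The abstract says custom Python scripts control the camera and handle timestamped data; the sketch below illustrates what a minimal time-lapse loop of that kind could look like using the picamera library and UTC timestamps in filenames. The interval, resolution and output path are assumptions, and the GPS-disciplined timing described in the paper is omitted for brevity.

    ```python
    import time
    from datetime import datetime, timezone
    from picamera import PiCamera   # available on Raspberry Pi OS

    CAPTURE_INTERVAL_S = 300        # assumed 5-minute cadence
    OUTPUT_DIR = "/home/pi/images"  # placeholder path

    def timestamped_filename():
        """UTC timestamp in the filename so images can be ordered unambiguously."""
        return datetime.now(timezone.utc).strftime(f"{OUTPUT_DIR}/img_%Y%m%d_%H%M%S.jpg")

    def main():
        with PiCamera(resolution=(2592, 1944)) as camera:  # 5 MP sensor (assumed)
            time.sleep(2)                                  # let exposure/gain settle
            while True:
                camera.capture(timestamped_filename())
                time.sleep(CAPTURE_INTERVAL_S)

    if __name__ == "__main__":
        main()
    ```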

  18. Localization and Mapping Using a Non-Central Catadioptric Camera System

    NASA Astrophysics Data System (ADS)

    Khurana, M.; Armenakis, C.

    2018-05-01

    This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find their use in navigation and mapping of robotic platforms, owing to their wide field of view. Having a wider field of view, or rather a potential 360° field of view, allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low cost system which consists of a mirror and a camera. Any perspective camera can be used. A platform was constructed in order to combine the mirror and a camera to build a catadioptric system. A calibration method was developed in order to obtain the relative position and orientation between the two components so that they can be considered as one monolithic system. The mathematical model for localizing the system was determined using conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achieved location and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.
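
    The mapping step built on epipolar geometry ultimately reduces to triangulating object points from two platform poses. The sketch below shows the standard linear (DLT) two-view triangulation on a synthetic example; the projection matrices and point are invented for illustration and do not model the catadioptric mirror geometry used in the paper.

    ```python
    import numpy as np

    def triangulate_point(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point from two views.

        P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]

    if __name__ == "__main__":
        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # first pose at origin
        P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])  # second pose shifted 0.5 m
        X_true = np.array([0.2, -0.1, 3.0, 1.0])                       # synthetic object point
        x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
        x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
        print("recovered point:", np.round(triangulate_point(P1, P2, x1, x2), 3))
    ```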

  19. The Advanced Gamma-ray Imaging System (AGIS): Telescope Mechanical Designs

    NASA Astrophysics Data System (ADS)

    Guarino, V.; Buckley, J.; Byrum, K.; Falcone, A.; Fegan, S.; Finley, J.; Hanna, D.; Horan, D.; Kaaret, P.; Konopelko, A.; Krawczynski, H.; Krennrich, F.; Wagner, R.; Woods, M.; Vassiliev, V.

    2008-04-01

    The concept of a future ground-based gamma-ray observatory, AGIS, in the energy range 40 GeV-100 TeV is based on an array of sim 100 imaging atmospheric Cherenkov telescopes (IACTs). The anticipated improvements of AGIS sensitivity, angular resolution and reliability of operation impose demanding technological and cost requirements on the design of IACTs. The relatively inexpensive Davies-Cotton telescope design has been used in ground-based gamma-ray astronomy for almost fifty years and is an excellent option. We are also exploring alternative designs and in this submission we focus on the recent mechanical design of a two-mirror telescope with a Schwarzschild-Couder (SC) optical system. The mechanical structure provides support points for mirrors and camera. The design was driven by the requirement of minimizing the deflections of the mirror support structures. The structure is also designed to be able to slew in elevation and azimuth at 10 degrees/sec.

  20. Performance of cardiac cadmium-zinc-telluride gamma camera imaging in coronary artery disease: a review from the cardiovascular committee of the European Association of Nuclear Medicine (EANM).

    PubMed

    Agostini, Denis; Marie, Pierre-Yves; Ben-Haim, Simona; Rouzet, François; Songy, Bernard; Giordano, Alessandro; Gimelli, Alessia; Hyafil, Fabien; Sciagrà, Roberto; Bucerius, Jan; Verberne, Hein J; Slart, Riemer H J A; Lindner, Oliver; Übleis, Christopher; Hacker, Marcus

    2016-12-01

    The trade-off between resolution and count sensitivity dominates the performance of standard gamma cameras and dictates the need for relatively high doses of radioactivity of the radiopharmaceuticals used in order to limit image acquisition duration. The introduction of cadmium-zinc-telluride (CZT)-based cameras may overcome some of the limitations of conventional gamma cameras. CZT cameras used for the evaluation of myocardial perfusion have been shown to have a higher count sensitivity compared to conventional single photon emission computed tomography (SPECT) techniques. CZT image quality is further improved by the development of a dedicated three-dimensional iterative reconstruction algorithm, based on maximum likelihood expectation maximization (MLEM), which corrects for the loss in spatial resolution due to the line response function of the collimator. All these innovations significantly reduce imaging time and result in a lower radiation exposure for the patient compared with standard SPECT. To guide current and possible future users of the CZT technique for myocardial perfusion imaging, the Cardiovascular Committee of the European Association of Nuclear Medicine, starting from the experience of its members, has decided to examine the current literature regarding procedures and clinical data on CZT cameras. The committee hereby aims 1) to identify the main acquisition protocols; 2) to evaluate the diagnostic and prognostic value of CZT-derived myocardial perfusion; and finally 3) to determine the impact of CZT on radiation exposure.
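
    For readers unfamiliar with the reconstruction step, the sketch below shows the generic multiplicative MLEM update on a toy system matrix; the vendors' dedicated 3-D implementations additionally model the collimator line response, which is omitted here, and the matrix sizes and iteration count are arbitrary.

        # Generic MLEM update (toy example, not a vendor implementation).
        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.random((50, 20))          # system matrix: detector bins x image voxels (toy)
        x_true = rng.random(20)
        y = rng.poisson(A @ x_true)       # simulated projection counts

        x = np.ones(20)                   # uniform initial image
        sens = A.T @ np.ones(50)          # sensitivity image (column sums of A)
        for _ in range(100):
            proj = A @ x                  # forward projection of current estimate
            ratio = y / np.maximum(proj, 1e-12)
            x *= (A.T @ ratio) / sens     # multiplicative MLEM update

        print("reconstruction error:", np.linalg.norm(x - x_true))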

  1. Target-Tracking Camera for a Metrology System

    NASA Technical Reports Server (NTRS)

    Liebe, Carl; Bartman, Randall; Chapsky, Jacob; Abramovici, Alexander; Brown, David

    2009-01-01

    An analog electronic camera that is part of a metrology system measures the varying direction to a light-emitting diode that serves as a bright point target. In the original application for which the camera was developed, the metrological system is used to determine the varying relative positions of radiating elements of an airborne synthetic aperture-radar (SAR) antenna as the airplane flexes during flight; precise knowledge of the relative positions as a function of time is needed for processing SAR readings. It has been common metrology system practice to measure the varying direction to a bright target by use of an electronic camera of the charge-coupled-device or active-pixel-sensor type. A major disadvantage of this practice arises from the necessity of reading out and digitizing the outputs from a large number of pixels and processing the resulting digital values in a computer to determine the centroid of a target: Because of the time taken by the readout, digitization, and computation, the update rate is limited to tens of hertz. In contrast, the analog nature of the present camera makes it possible to achieve an update rate of hundreds of hertz, and no computer is needed to determine the centroid. The camera is based on a position-sensitive detector (PSD), which is a rectangular photodiode with output contacts at opposite ends. PSDs are usually used in triangulation for measuring small distances. PSDs are manufactured in both one- and two-dimensional versions. Because it is very difficult to calibrate two-dimensional PSDs accurately, the focal-plane sensors used in this camera are two orthogonally mounted one-dimensional PSDs.
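
    The standard relation for a one-dimensional PSD (not the article's specific electronics) recovers the spot position directly from the two end-contact currents, which is what lets the camera skip pixel readout and digitization; the sketch below assumes a linear device of known active length.

        # One-dimensional PSD position estimate (standard relation, illustrative only).
        def psd_position(i1: float, i2: float, length_mm: float) -> float:
            """Return spot position in mm from the PSD centre, given end currents i1, i2."""
            return 0.5 * length_mm * (i2 - i1) / (i1 + i2)

        # Equal currents -> spot at the centre; imbalance shifts it toward the stronger contact.
        print(psd_position(1.0, 1.0, 10.0))   # 0.0 mm
        print(psd_position(0.8, 1.2, 10.0))   # +1.0 mm toward contact 2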

  2. Stability analysis for a multi-camera photogrammetric system.

    PubMed

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-08-18

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, it explains the common ways used for coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.

  3. Stability Analysis for a Multi-Camera Photogrammetric System

    PubMed Central

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-01-01

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, it explains the common ways used for coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction. PMID:25196012
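
    Both records above build on a modification of the collinearity model; for reference, the standard single-camera collinearity equations (not the authors' modified multi-camera form) are sketched below, where (x_0, y_0, c) are the interior orientation parameters, (X_c, Y_c, Z_c) the perspective centre, and m_{ij} the elements of the rotation matrix from object to image space.

        % Standard collinearity equations (reference form; the paper's multi-camera
        % modification is not reproduced here).
        x - x_0 = -c\,\frac{m_{11}(X - X_c) + m_{12}(Y - Y_c) + m_{13}(Z - Z_c)}
                           {m_{31}(X - X_c) + m_{32}(Y - Y_c) + m_{33}(Z - Z_c)},
        \qquad
        y - y_0 = -c\,\frac{m_{21}(X - X_c) + m_{22}(Y - Y_c) + m_{23}(Z - Z_c)}
                           {m_{31}(X - X_c) + m_{32}(Y - Y_c) + m_{33}(Z - Z_c)}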

  4. Design of a Day/Night Star Camera System

    NASA Technical Reports Server (NTRS)

    Alexander, Cheryl; Swift, Wesley; Ghosh, Kajal; Ramsey, Brian

    1999-01-01

    This paper describes the design of a camera system capable of acquiring stars during both the day and night cycles of a high altitude balloon flight (35-42 km). The camera system will be filtered to operate in the R band (590-810 nm). Simulations have been run using the MODTRAN atmospheric code to determine the worst-case sky brightness at 35 km. With a daytime sky brightness of 2×10⁻⁵ W/cm²/sr/µm in the R band, the sensitivity of the camera system will allow acquisition of at least 1-2 stars/sq degree at star magnitude limits of 8.25-9.00. The system will have an F2.8, 64.3 mm diameter lens and a 1340 × 1037 CCD array digitized to 12 bits. The CCD array is comprised of 6.8 × 6.8 micron pixels with a well depth of 45,000 electrons and a quantum efficiency of 0.525 at 700 nm. The camera's field of view will be 6.33 sq degrees and provide attitude knowledge to 8 arcsec or better. A test flight of the system is scheduled for fall 1999.
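
    As a rough consistency check of the quoted geometry, the sketch below recomputes the field of view and plate scale from the stated sensor format, assuming the focal length equals the f-number times the 64.3 mm aperture diameter (an assumption, since the focal length is not stated explicitly).

        # Rough consistency check of the stated optics (focal length is assumed).
        import math

        f_mm = 2.8 * 64.3                      # ~180 mm focal length (assumed)
        pix_um, nx, ny = 6.8, 1340, 1037       # CCD format from the abstract

        def fov_deg(n_pix):
            half = n_pix * pix_um * 1e-3 / 2.0           # half sensor size in mm
            return 2.0 * math.degrees(math.atan(half / f_mm))

        fov = fov_deg(nx) * fov_deg(ny)
        scale = math.degrees(pix_um * 1e-3 / f_mm) * 3600  # arcsec per pixel
        print(f"FOV ~ {fov:.1f} sq deg, plate scale ~ {scale:.1f} arcsec/pixel")
        # ~6.5 sq deg and ~7.8 arcsec/pixel: roughly consistent with the quoted
        # 6.33 sq degrees and 8 arcsec attitude knowledge.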

  5. The imaging system design of three-line LMCCD mapping camera

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da

    2011-08-01

    In this paper, the authors first introduce the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. They then describe several pivotal designs of the imaging system: the focal plane module, video signal processing, the imaging system controller, and synchronous photography between the forward, nadir, and backward cameras and the line-matrix CCD of the nadir camera. Finally, test results of the LMCCD mapping camera imaging system are presented. The precision of synchronous photography between the forward, nadir, and backward cameras is better than 4 ns, as is that with the line-matrix CCD of the nadir camera; the photography interval of the line-matrix CCD of the nadir camera satisfies the buffer requirements of the LMCCD focal plane module; the SNR tested in the laboratory is better than 95 for each CCD image under typical working conditions (solar incidence angle of 30°, earth-surface reflectivity of 0.3); and the temperature of the focal plane module is controlled below 30 °C over a 15-minute working period. These results satisfy the requirements on synchronous photography, focal plane temperature control, and SNR, guaranteeing the precision needed for satellite photogrammetry.

  6. A real-time camera calibration system based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time OpenCV-based camera calibration system developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB and without manual intervention, and that it can be widely used in various computer vision systems.
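
    The paper's own implementation is in C++ under VS2008 and is not reproduced in the record; the Python sketch below illustrates the same OpenCV calibration workflow (chessboard corner detection followed by cv2.calibrateCamera), with the chessboard pattern size and image directory as assumptions.

        # Minimal chessboard calibration with OpenCV (Python sketch).
        import glob
        import numpy as np
        import cv2

        PATTERN = (9, 6)                         # inner chessboard corners (assumed)
        objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

        obj_points, img_points, size = [], [], None
        for path in glob.glob("calib/*.png"):    # assumed image directory
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, PATTERN)
            if found:
                obj_points.append(objp)
                img_points.append(corners)
                size = gray.shape[::-1]

        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
        print("RMS reprojection error:", rms)
        print("camera matrix:\n", K)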

  7. Real-time proton beam range monitoring by means of prompt-gamma detection with a collimated camera

    NASA Astrophysics Data System (ADS)

    Roellinghoff, F.; Benilov, A.; Dauvergne, D.; Dedes, G.; Freud, N.; Janssens, G.; Krimmer, J.; Létang, J. M.; Pinto, M.; Prieels, D.; Ray, C.; Smeets, J.; Stichelbaut, F.; Testa, E.

    2014-03-01

    A prompt-gamma profile was measured at WPE-Essen using 160 MeV protons impinging on a movable PMMA target. A single collimated detector was used with time-of-flight (TOF) to reduce the background due to neutrons. The retrieval precision of the target entrance rise and the Bragg-peak falloff was determined as a function of the number of incident protons by a fitting procedure using independent data sets. Assuming improved sensitivity of this camera design through the use of a greater number of detectors, retrieval precisions of 1 to 2 mm (rms) are expected for a clinical pencil beam. TOF improves the contrast-to-noise ratio and the performance of the method significantly.
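
    The abstract does not specify the fitting function; the sketch below illustrates one common choice, fitting a smoothed-step (error-function) model to a toy 1-D prompt-gamma profile to retrieve the distal falloff position, with the profile shape and noise level as assumptions.

        # Illustrative distal-falloff fit for a 1-D prompt-gamma profile.
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import erf

        def falloff(z, amplitude, z50, sigma, background):
            """Smoothed step: proximal plateau dropping to background around z50."""
            return background + 0.5 * amplitude * (1.0 - erf((z - z50) / (np.sqrt(2) * sigma)))

        z = np.linspace(0, 200, 201)                                        # depth (mm)
        rng = np.random.default_rng(1)
        counts = falloff(z, 1000, 150, 5, 50) + rng.normal(0, 20, z.size)   # toy profile

        p0 = [counts.max() - counts.min(), 140.0, 5.0, counts.min()]
        popt, pcov = curve_fit(falloff, z, counts, p0=p0)
        print(f"retrieved falloff position: {popt[1]:.1f} mm "
              f"(+/- {np.sqrt(pcov[1, 1]):.1f} mm)")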

  8. Optical performance analysis of plenoptic camera systems

    NASA Astrophysics Data System (ADS)

    Langguth, Christin; Oberdörster, Alexander; Brückner, Andreas; Wippermann, Frank; Bräuer, Andreas

    2014-09-01

    Adding an array of microlenses in front of the sensor transforms the capabilities of a conventional camera to capture both spatial and angular information within a single shot. This plenoptic camera is capable of obtaining depth information and providing it for a multitude of applications, e.g. artificial re-focusing of photographs. Without the need of active illumination it represents a compact and fast optical 3D acquisition technique with reduced effort in system alignment. Since the extent of the aperture limits the range of detected angles, the observed parallax is reduced compared to common stereo imaging systems, which results in a decreased depth resolution. Besides, the gain of angular information implies a degraded spatial resolution. This trade-off requires a careful choice of the optical system parameters. We present a comprehensive assessment of possible degrees of freedom in the design of plenoptic systems. Utilizing a custom-built simulation tool, the optical performance is quantified with respect to particular starting conditions. Furthermore, a plenoptic camera prototype is demonstrated in order to verify the predicted optical characteristics.

  9. NEUTRON RADIATION DAMAGE IN CCD CAMERAS AT JOINT EUROPEAN TORUS (JET).

    PubMed

    Milocco, Alberto; Conroy, Sean; Popovichev, Sergey; Sergienko, Gennady; Huber, Alexander

    2017-10-26

    The neutron and gamma radiations in large fusion reactors are responsible for damage to charge-coupled device (CCD) cameras deployed for applied diagnostics. Based on the ASTM guide E722-09, the 'equivalent 1 MeV neutron fluence in silicon' was calculated for a set of CCD cameras at the Joint European Torus. Such evaluations would be useful to good practice in the operation of the video systems. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Upgraded cameras for the HESS imaging atmospheric Cherenkov telescopes

    NASA Astrophysics Data System (ADS)

    Giavitto, Gianluca; Ashton, Terry; Balzer, Arnim; Berge, David; Brun, Francois; Chaminade, Thomas; Delagnes, Eric; Fontaine, Gérard; Füßling, Matthias; Giebels, Berrie; Glicenstein, Jean-François; Gräber, Tobias; Hinton, James; Jahnke, Albert; Klepser, Stefan; Kossatz, Marko; Kretzschmann, Axel; Lefranc, Valentin; Leich, Holger; Lüdecke, Hartmut; Lypova, Iryna; Manigot, Pascal; Marandon, Vincent; Moulin, Emmanuel; de Naurois, Mathieu; Nayman, Patrick; Penno, Marek; Ross, Duncan; Salek, David; Schade, Markus; Schwab, Thomas; Simoni, Rachel; Stegmann, Christian; Steppa, Constantin; Thornhill, Julian; Toussnel, François

    2016-08-01

    The High Energy Stereoscopic System (H.E.S.S.) is an array of five imaging atmospheric Cherenkov telescopes, sensitive to cosmic gamma rays of energies between 30 GeV and several tens of TeV. Four of them started operations in 2003 and their photomultiplier tube (PMT) cameras are currently undergoing a major upgrade, with the goals of improving the overall performance of the array and reducing the failure rate of the ageing systems. With the exception of the 960 PMTs, all components inside the camera have been replaced: these include the readout and trigger electronics, the power, ventilation and pneumatic systems and the control and data acquisition software. New designs and technical solutions have been introduced: the readout makes use of the NECTAr analog memory chip, which samples and stores the PMT signals and was developed for the Cherenkov Telescope Array (CTA). The control of all hardware subsystems is carried out by an FPGA coupled to an embedded ARM computer, a modular design which has proven to be very fast and reliable. The new camera software is based on modern C++ libraries such as Apache Thrift, ØMQ and Protocol buffers, offering very good performance, robustness, flexibility and ease of development. The first camera was upgraded in 2015, the other three cameras are foreseen to follow in fall 2016. We describe the design, the performance, the results of the tests and the lessons learned from the first upgraded H.E.S.S. camera.

  11. 131I activity quantification of gamma camera planar images

    NASA Astrophysics Data System (ADS)

    Barquero, Raquel; Garcia, Hugo P.; Incio, Monica G.; Minguez, Pablo; Cardenas, Alexander; Martínez, Daniel; Lassmann, Michael

    2017-02-01

    A procedure is proposed to estimate the activity in target tissues in patients during the therapeutic administration of 131I radiopharmaceutical treatment for thyroid conditions (hyperthyroidism and differentiated thyroid cancer) using a gamma camera (GC) with a high energy (HE) collimator. Planar images are acquired for lesions of different sizes r, and at different distances d, in two HE GC systems. Defining a region of interest (ROI) on the image of size r, the total counts n_g are measured. The sensitivity S (cps MBq⁻¹) in each acquisition is estimated as the product of the geometric factor G and the intrinsic efficiency η_0. The mean fluence of 364 keV photons arriving at the ROI per disintegration, G, is calculated with the MCNPX code, simulating the entire GC and the HE collimator. The intrinsic efficiency η_0 is estimated from a calibration measurement of a plane reference source of 131I in air. Values of G and S for two GC systems—Philips Skylight and Siemens e-cam—are calculated. The total range of possible sensitivity values in thyroidal imaging in the e-cam and Skylight GCs runs from 7 cps MBq⁻¹ to 35 cps MBq⁻¹, and from 6 cps MBq⁻¹ to 29 cps MBq⁻¹, respectively. These sensitivity values have been verified with the SIMIND code, with good agreement between them. The results have been validated with experimental measurements in air, and in a medium with scatter and attenuation. The counts in the ROI can be produced by direct, scattered and penetration photons. The fluence value for direct photons is constant for any r and d values, but scattered and penetration photons show different values related to specific r and d values, resulting in the large sensitivity differences found. The sensitivity in thyroidal GC planar imaging is strongly dependent on uptake size and distance from the GC. An individual value for the acquisition sensitivity of each lesion can significantly alleviate the level of uncertainty in the measurement of thyroid uptake activity for each patient.
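
    Once a geometry-specific sensitivity S (for the lesion size r and distance d) is in hand, the activity estimate implied by the abstract follows directly from the ROI count rate; the sketch below spells out that relation, with the numerical values purely illustrative.

        # Activity estimate from ROI counts and geometry-specific sensitivity
        # (relation implied by the abstract; numbers are illustrative only).
        def activity_mbq(roi_counts: float, acq_time_s: float, sensitivity_cps_per_mbq: float) -> float:
            """A = (n_g / t) / S, with S = G * eta_0 for the lesion size r and distance d."""
            count_rate = roi_counts / acq_time_s
            return count_rate / sensitivity_cps_per_mbq

        # Example: 210,000 counts in a 600 s planar acquisition with S = 35 cps/MBq
        print(activity_mbq(210_000, 600.0, 35.0))   # -> 10.0 MBq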

  12. Application of gamma imaging techniques for the characterisation of position sensitive gamma detectors

    NASA Astrophysics Data System (ADS)

    Habermann, T.; Didierjean, F.; Duchêne, G.; Filliger, M.; Gerl, J.; Kojouharov, I.; Li, G.; Pietralla, N.; Schaffner, H.; Sigward, M.-H.

    2017-11-01

    A device to characterize position-sensitive germanium detectors has been implemented at GSI. The main component of this so-called scanning table is a gamma camera that is capable of producing online 2D images of the scanned detector by means of a PET technique. To calibrate the gamma camera, Compton imaging is employed. The 2D data can be processed further offline to obtain depth information. Of main interest is the response of the scanned detector in terms of the digitized pulse shapes from the preamplifier. This is an important input for the pulse-shape analysis algorithms used by gamma-tracking arrays in gamma spectroscopy. To validate the scanning table, a comparison of its results with those of a second scanning table implemented at the IPHC Strasbourg is envisaged. For this purpose a pixelated germanium detector has been scanned.

  13. A new high-speed IR camera system

    NASA Technical Reports Server (NTRS)

    Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.

    1994-01-01

    A multi-organizational team at the Goddard Space Flight Center is developing a new far infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array which exhibits responsivity over a broad wavelength band and which is capable of operating at 1000 frames/sec, and consists of a focal plane dewar, a compact camera head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.

  14. Control system for several rotating mirror camera synchronization operation

    NASA Astrophysics Data System (ADS)

    Liu, Ningwen; Wu, Yunfeng; Tan, Xianxiang; Lai, Guoji

    1997-05-01

    This paper introduces a single-chip microcomputer control system for the synchronized operation of several rotating-mirror high-speed cameras. The system consists of four parts: the microcomputer control unit (including the synchronization part, the precise measurement part and the time delay part), the shutter control unit, the motor driving unit and the high voltage pulse generator unit. The control system has been used to control the synchronized operation of GSI cameras (driven by a motor) and FJZ-250 rotating-mirror cameras (driven by a gas-driven turbine). We have obtained films of the same object from different directions, at different speeds or at the same speed.

  15. The added value of a portable gamma camera for intraoperative detection of sentinel lymph node in squamous cell carcinoma of the oral cavity: A case report.

    PubMed

    Mayoral, M; Paredes, P; Sieira, R; Vidal-Sicart, S; Marti, C; Pons, F

    2014-01-01

    The use of sentinel lymph node biopsy in squamous cell carcinoma of the oral cavity is still subject to debate, although some studies have reported its feasibility. The main reason for this debate is probably the high false-negative rate for floor-of-mouth tumors. We report the case of a 54-year-old man with a T1N0 floor-of-mouth squamous cell carcinoma who underwent the sentinel lymph node procedure. Lymphoscintigraphy and SPECT/CT imaging were performed for lymphatic mapping with a conventional gamma camera. Sentinel lymph nodes were identified at right Ib, left IIa and Ia levels. However, these sentinel lymph nodes were difficult to detect intraoperatively with a gamma probe owing to the activity originating from the injection site. The use of a portable gamma camera made it possible to localize and excise all the sentinel lymph nodes. This case demonstrates the usefulness of this tool for improving sentinel lymph node detection in floor-of-mouth tumors, especially those close to the injection area. Copyright © 2013 Elsevier España, S.L. and SEMNIM. All rights reserved.

  16. Applications of a shadow camera system for energy meteorology

    NASA Astrophysics Data System (ADS)

    Kuhn, Pascal; Wilbert, Stefan; Prahl, Christoph; Garsche, Dominik; Schüler, David; Haase, Thomas; Ramirez, Lourdes; Zarzalejo, Luis; Meyer, Angela; Blanc, Philippe; Pitz-Paal, Robert

    2018-02-01

    Downward-facing shadow cameras might play a major role in future energy meteorology. Shadow cameras directly image shadows on the ground from an elevated position. They are used to validate other systems (e.g. all-sky imager based nowcasting systems, cloud speed sensors or satellite forecasts) and can potentially provide short term forecasts for solar power plants. Such forecasts are needed for electricity grids with high penetrations of renewable energy and can help to optimize plant operations. In this publication, two key applications of shadow cameras are briefly presented.

  17. Multi-band infrared camera systems

    NASA Astrophysics Data System (ADS)

    Davis, Tim; Lang, Frank; Sinneger, Joe; Stabile, Paul; Tower, John

    1994-12-01

    The program resulted in an IR camera system that utilizes a unique MOS addressable focal plane array (FPA) with full TV resolution, electronic control capability, and windowing capability. Two systems were delivered, each with two different camera heads: a Stirling-cooled 3-5 micron band head and a liquid nitrogen-cooled, filter-wheel-based, 1.5-5 micron band head. Signal processing features include averaging up to 16 frames, flexible compensation modes, gain and offset control, and real-time dither. The primary digital interface is a standard Hewlett-Packard GPIB (IEEE-488) port that is used to upload and download data. The FPA employs an X-Y addressed PtSi photodiode array, CMOS horizontal and vertical scan registers, horizontal signal line (HSL) buffers followed by a high-gain preamplifier and a depletion NMOS output amplifier. The 640 x 480 MOS X-Y addressed FPA has a high degree of flexibility in operational modes. By changing the digital data pattern applied to the vertical scan register, the FPA can be operated in either an interlaced or noninterlaced format. The thermal sensitivity performance of the second system's Stirling-cooled head was the best of the systems produced.

  18. SPECT detectors: the Anger Camera and beyond

    PubMed Central

    Peterson, Todd E.; Furenlid, Lars R.

    2011-01-01

    The development of radiation detectors capable of delivering spatial information about gamma-ray interactions was one of the key enabling technologies for nuclear medicine imaging and, eventually, single-photon emission computed tomography (SPECT). The continuous NaI(Tl) scintillator crystal coupled to an array of photomultiplier tubes, almost universally referred to as the Anger Camera after its inventor, has long been the dominant SPECT detector system. Nevertheless, many alternative materials and configurations have been investigated over the years. Technological advances as well as the emerging importance of specialized applications, such as cardiac and preclinical imaging, have spurred innovation such that alternatives to the Anger Camera are now part of commercial imaging systems. Increased computing power has made it practical to apply advanced signal processing and estimation schemes to make better use of the information contained in the detector signals. In this review we discuss the key performance properties of SPECT detectors and survey developments in both scintillator and semiconductor detectors and their readouts with an eye toward some of the practical issues at least in part responsible for the continuing prevalence of the Anger Camera in the clinic. PMID:21828904

  19. High-performance dual-speed CCD camera system for scientific imaging

    NASA Astrophysics Data System (ADS)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned with a `camera head' containing the CCD and its support circuitry and a camera controller, which provided analog to digital conversion, timing, control, computer interfacing, and power. A new, unitized high performance scientific CCD camera with dual speed readout at 1 × 10⁶ or 5 × 10⁶ pixels per second, 12 bit digital gray scale, high performance thermoelectric cooling, and built in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remote controlled submersible vehicle. The oceanographic version achieves 16 bit dynamic range at 1.5 × 10⁵ pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real time fiber optic link.

  20. In-flight Video Captured by External Tank Camera System

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In this July 26, 2005 video, Earth slowly fades into the background as the STS-114 Space Shuttle Discovery climbs into space until the External Tank (ET) separates from the orbiter. An External Tank (ET) Camera System featuring a Sony XC-999 model camera provided never-before-seen footage of the launch and tank separation. The camera was installed in the ET LO2 Feedline Fairing. From this position, the camera had a 40° field of view with a 3.5 mm lens. The field of view showed some of the Bipod area, a portion of the LH2 tank and Intertank flange area, and some of the bottom of the shuttle orbiter. Contained in an electronic box, the battery pack and transmitter were mounted on top of the Solid Rocket Booster (SRB) crossbeam inside the ET. The battery pack included 20 Nickel-Metal Hydride batteries (similar to cordless phone battery packs) totaling 28 volts DC and could supply about 70 minutes of video. Located 95 degrees apart on the exterior of the Intertank opposite the orbiter side, there were 2 blade S-Band antennas about 2 1/2 inches long that transmitted a 10 watt signal to the ground stations. The camera turned on approximately 10 minutes prior to launch and operated for 15 minutes following liftoff. The complete camera system weighs about 32 pounds. Marshall Space Flight Center (MSFC), Johnson Space Center (JSC), Goddard Space Flight Center (GSFC), and Kennedy Space Center (KSC) participated in the design, development, and testing of the ET camera system.

  1. Optomechanical System Development of the AWARE Gigapixel Scale Camera

    NASA Astrophysics Data System (ADS)

    Son, Hui S.

    Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.

  2. Accuracy Potential and Applications of MIDAS Aerial Oblique Camera System

    NASA Astrophysics Data System (ADS)

    Madani, M.

    2012-07-01

    Airborne oblique cameras such as the Fairchild T-3A were initially used for military reconnaissance in the 1930s. A modern professional digital oblique camera such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional imagery for users for visualization, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for accessing and reviewing changes to the local government tax base and property valuation assessments, and supports better and more timely decisions on buying and selling residential and commercial property. Oblique imagery is also used for infrastructure monitoring, helping ensure the safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired one MIDAS from TrackAir in 2011. This system consists of four tilted (45 degrees) cameras and one vertical camera connected to a dedicated data acquisition computer system. The 5 digital cameras are based on the Canon EOS 1DS Mark3 with Zeiss lenses. The CCD size is 5,616 by 3,744 (21 MPixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique (28 mm/50 mm) and (50 mm/50 mm)) were flown over downtown Colorado Springs, Colorado. Boresight flights for the 28 mm nadir camera were flown at 600 m and 1,200 m, and for the 50 mm nadir camera at 750 m and 1,500 m. Cameras were calibrated by using a 3D cage and multiple convergent images utilizing the Australis model. In this paper, the MIDAS system is described, a number of real data sets collected during the aforementioned flights are presented together with their associated flight configurations, the data processing workflow, system calibration and quality control workflows are highlighted, and the achievable accuracy is presented in some detail. This study revealed that an expected accuracy of about 1 to 1.5 GSD (Ground Sample Distance) in planimetry and about 2 to 2.5 GSD in the vertical can be achieved. Remaining systematic

  3. The research of adaptive-exposure on spot-detecting camera in ATP system

    NASA Astrophysics Data System (ADS)

    Qian, Feng; Jia, Jian-jun; Zhang, Liang; Wang, Jian-Yu

    2013-08-01

    High-precision acquisition, tracking, and pointing (ATP) is one of the key techniques of laser communication. The spot-detecting camera detects the direction of the beacon in the laser communication link, providing the position information of the communication terminal to the ATP system. The positioning accuracy of the camera directly determines the capability of the laser communication system, so the spot-detecting camera in a satellite-to-earth laser communication ATP system requires high-precision target detection; its positioning accuracy should be better than ±1 μrad. Spot-detecting cameras usually adopt a centroid algorithm to obtain the position of the light spot on the detector. When the intensity of the beacon is moderate, the centroid calculation is precise. However, the beacon intensity changes greatly during communication because of distance, atmospheric scintillation, weather, etc. The output signal of the detector is insufficient when the camera underexposes the beacon at low light intensity; conversely, the output signal is saturated when the camera overexposes the beacon at high light intensity. The accuracy of the centroid algorithm degrades if the spot-detecting camera underexposes or overexposes, and the positioning accuracy of the camera is then reduced markedly. To improve the accuracy, space-based cameras should regulate exposure time in real time according to the light intensity. The algorithm of an adaptive-exposure technique for a spot-detecting camera based on a complementary metal-oxide-semiconductor (CMOS) detector is analyzed. Based on the analytic results, a CMOS camera in a space-based laser communication system is described, which uses the adaptive-exposure algorithm to adapt the exposure time. Test results from an imaging experiment system verify the design. Experimental results prove that this design can restrain the reduction of positioning accuracy for the change
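
    The centroid step mentioned above is the intensity-weighted average of the spot pixels; the sketch below shows that basic computation (the threshold value is an assumption to suppress background) and notes why saturated or starved frames bias it, which is the motivation for the adaptive-exposure scheme.

        # Intensity-weighted centroid of a spot image (illustrative; threshold assumed).
        import numpy as np

        def spot_centroid(img: np.ndarray, threshold: float = 0.0):
            """Return (x, y) centroid in pixel coordinates of pixels above threshold."""
            w = np.where(img > threshold, img.astype(float), 0.0)
            total = w.sum()
            if total == 0.0:
                raise ValueError("no signal above threshold (underexposed frame?)")
            ys, xs = np.indices(img.shape)
            return (xs * w).sum() / total, (ys * w).sum() / total

        # Saturated (overexposed) or near-zero (underexposed) frames bias this estimate,
        # which is why the paper regulates exposure time from the measured intensity.
        frame = np.zeros((64, 64)); frame[30:33, 40:43] = 100.0
        print(spot_centroid(frame, threshold=10.0))   # ~ (41.0, 31.0)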

  4. Semi-autonomous wheelchair system using stereoscopic cameras.

    PubMed

    Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T

    2009-01-01

    This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras has the purpose of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment displayed the effectiveness of this assistive technology.
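
    For reference, the sketch below is a minimal Sum of Absolute Differences block-matching routine for a rectified stereo pair; it illustrates the correspondence step named in the abstract, but the window size, disparity range, and brute-force search are assumptions rather than the wheelchair system's real-time implementation.

        # Minimal SAD block-matching disparity sketch for a rectified stereo pair.
        import numpy as np

        def sad_disparity(left, right, window=5, max_disp=32):
            """Return an integer disparity map using Sum of Absolute Differences."""
            h, w = left.shape
            half = window // 2
            disp = np.zeros((h, w), dtype=np.int32)
            for y in range(half, h - half):
                for x in range(half, w - half):
                    patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
                    best, best_d = np.inf, 0
                    for d in range(min(max_disp, x - half) + 1):
                        cand = right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1].astype(float)
                        cost = np.abs(patch - cand).sum()   # SAD cost for this shift
                        if cost < best:
                            best, best_d = cost, d
                    disp[y, x] = best_d   # larger disparity -> closer object (depth ~ fB/d)
            return disp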

  5. Intraocular camera for retinal prostheses: Refractive and diffractive lens systems

    NASA Astrophysics Data System (ADS)

    Hauer, Michelle Christine

    The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.

  6. Performance and Calibration of H2RG Detectors and SIDECAR ASICs for the RATIR Camera

    NASA Technical Reports Server (NTRS)

    Fox, Ori D.; Kutyrev, Alexander S.; Rapchun, David A.; Klein, Christopher R.; Butler, Nathaniel R.; Bloom, Josh; de Diego, José A.; Simón Farah, Alejandro D.; Gehrels, Neil A.; Georgiev, Leonid

    2012-01-01

    The Reionization And Transient Infra-Red (RATIR) camera has been built for rapid Gamma-Ray Burst (GRB) follow-up and will provide simultaneous optical and infrared photometric capabilities. The infrared portion of this camera incorporates two Teledyne HgCdTe HAWAII-2RG detectors, controlled by Teledyne's SIDECAR ASICs. While other ground-based systems have used the SIDECAR before, this system also utilizes Teledyne's JADE2 interface card and IDE development environment. Together, this setup comprises Teledyne's Development Kit, which is a bundled solution that can be efficiently integrated into future ground-based systems. In this presentation, we characterize the system's read noise, dark current, and conversion gain.
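
    The record does not spell out the characterization method; the sketch below shows the standard photon-transfer estimate of conversion gain from a pair of flat fields and a pair of dark frames, offered as a generic reference rather than the exact RATIR procedure.

        # Conversion gain from a flat-field pair via the photon-transfer method
        # (standard technique; not necessarily the exact RATIR procedure).
        import numpy as np

        def conversion_gain(flat1, flat2, dark1, dark2):
            """Return gain in e-/ADU from two flats and two dark frames of equal exposure."""
            f1 = flat1.astype(float) - dark1.astype(float)
            f2 = flat2.astype(float) - dark2.astype(float)
            mean_signal = 0.5 * (f1.mean() + f2.mean())
            # Differencing the flats removes fixed-pattern noise; var(diff)/2 is the
            # per-frame shot + read variance, from which the read term is subtracted.
            var_signal = (np.var(f1 - f2) - np.var(dark1.astype(float) - dark2.astype(float))) / 2.0
            return mean_signal / var_signal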

  7. Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System

    NASA Astrophysics Data System (ADS)

    Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki

    In this paper, we present an automatic vision-based traffic sign recognition system that can detect and classify traffic signs at long distance under different lighting conditions. To this end, traffic sign recognition is developed in an originally proposed dual-focal active camera system, in which a telephoto camera is equipped as an assistant to a wide-angle camera. The telephoto camera can capture a high-accuracy image of an object of interest in the field of view of the wide-angle camera; this image provides enough information for recognition when the resolution of the traffic sign in the wide-angle image is too low. In the proposed system, traffic sign detection and classification are processed separately on the images from the wide-angle camera and the telephoto camera. In addition, in order to detect traffic signs against complex backgrounds under different lighting conditions, we propose a color transformation that is invariant to lighting changes. This color transformation highlights the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with a cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide-angle camera. After detection, the system actively captures a high-accuracy image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on the information from the wide-angle camera. In classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high-accuracy image from the telephoto camera. Finally, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution under different lighting conditions.

  8. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest or of targets affixed to objects of interest in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white

  9. Portable compton gamma-ray detection system

    DOEpatents

    Rowland, Mark S [Alamo, CA; Oldaker, Mark E [Pleasanton, CA

    2008-03-04

    A Compton scattered gamma-ray detector system. The system comprises a gamma-ray spectrometer and an annular array of individual scintillators. The scintillators are positioned so that they are arrayed around the gamma-ray spectrometer. The annular array of individual scintillators includes a first scintillator. A radiation shield is positioned around the first scintillator. A multi-channel analyzer is operatively connected to the gamma-ray spectrometer and the annular array of individual scintillators.

  10. Miniaturized fundus camera

    NASA Astrophysics Data System (ADS)

    Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.

    2003-07-01

    We present a miniaturized version of a fundus camera. The camera is designed for use in screening for retinopathy of prematurity (ROP). There, but also in other applications, a small, lightweight, digital camera system can be extremely useful. We present a small wide-angle digital camera system. The handpiece is significantly smaller and lighter than in all other systems. The electronics are truly portable, fitting in a standard boardcase. The camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes are presented. The development of the camera system is part of a telemedicine project screening for ROP. Telemedicine is an ideal application for this camera system, exploiting both of its advantages: portability and digital imaging.

  11. 640x480 PtSi Stirling-cooled camera system

    NASA Astrophysics Data System (ADS)

    Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; Coyle, Peter J.; Feder, Howard L.; Gilmartin, Harvey R.; Levine, Peter A.; Sauer, Donald J.; Shallcross, Frank V.; Demers, P. L.; Smalser, P. J.; Tower, John R.

    1992-09-01

    A Stirling-cooled 3-5 micron camera system has been developed. The camera employs a monolithic 640 × 480 PtSi-MOS focal plane array. The camera system achieves an NEDT = 0.10 K at 30 Hz frame rate with f/1.5 optics (300 K background). At a spatial frequency of 0.02 cycles/mrad the vertical and horizontal Minimum Resolvable Temperature are in the range of MRT = 0.03 K (f/1.5 optics, 300 K background). The MOS focal plane array achieves a resolution of 480 TV lines per picture height independent of background level and position within the frame.

  12. "Stereo Compton cameras" for the 3-D localization of radioisotopes

    NASA Astrophysics Data System (ADS)

    Takeuchi, K.; Kataoka, J.; Nishiyama, T.; Fujita, T.; Kishimoto, A.; Ohsuka, S.; Nakamura, S.; Adachi, S.; Hirayanagi, M.; Uchiyama, T.; Ishikawa, Y.; Kato, T.

    2014-11-01

    The Compton camera is a viable and convenient tool used to visualize the distribution of radioactive isotopes that emit gamma rays. After the nuclear disaster in Fukushima in 2011, there is a particularly urgent need to develop "gamma cameras" that can visualize the distribution of such radioisotopes. In response, we propose a portable Compton camera, which comprises 3-D position-sensitive GAGG scintillators coupled with thin monolithic MPPC arrays. The pulse-height ratio of the two MPPC arrays located at both ends of the scintillator block determines the depth of interaction (DOI), which dramatically improves the position resolution of the scintillation detectors. We report on the detailed optimization of the detector design, based on Geant4 simulation. The results indicate that the detection efficiency reaches up to 0.54%, more than 10 times that of other cameras being tested in Fukushima, along with a moderate angular resolution of 8.1° (FWHM). By applying the triangular surveying method, we also propose a new concept for the stereo measurement of gamma rays using two Compton cameras, enabling the 3-D positional measurement of radioactive isotopes for the first time. From simulation data for one point source, we confirmed that the source position and its distance can typically be determined to within 2 meters, and from simulation data for two point sources we confirmed that two or more sources are clearly separated by event selection.
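
    The triangular-surveying idea can be pictured as intersecting the source directions reconstructed by the two cameras; the sketch below computes the midpoint of closest approach between two such rays (camera positions and the test source location are illustrative values, not the paper's setup).

        # Triangular-surveying sketch: intersect source directions from two Compton cameras
        # via the midpoint of the common perpendicular between two rays.
        import numpy as np

        def triangulate(p1, d1, p2, d2):
            """Return the midpoint of closest approach between rays p1+t*d1 and p2+s*d2."""
            d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
            w0 = p1 - p2
            a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
            d, e = d1 @ w0, d2 @ w0
            denom = a * c - b * b
            t = (b * e - c * d) / denom
            s = (a * e - b * d) / denom
            return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

        p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])  # camera positions (m)
        src = np.array([4.0, 6.0, 1.0])                                  # true source, for the test
        print(triangulate(p1, src - p1, p2, src - p2))                   # ~ [4, 6, 1]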

  13. Simulation based evaluation of the designs of the Advanced Gamma-ray Imaging System (AGIS)

    NASA Astrophysics Data System (ADS)

    Bugaev, Slava; Buckley, James; Digel, Seth; Funk, Stephen; Konopelko, Alex; Krawczynski, Henric; Lebohec, Steohan; Maier, Gernot; Vassiliev, Vladimir

    2009-05-01

    The AGIS project, currently under design study, is a large array of imaging atmospheric Cherenkov telescopes for gamma-ray astronomy between 40 GeV and 100 TeV. In this paper we present the ongoing simulation effort to model the considered design approaches as a function of the main parameters, such as array geometry, telescope optics and camera design, in such a way that the gamma-ray observation capabilities can be optimized against the overall project cost.

  14. Development of an Extra-vehicular (EVA) Infrared (IR) Camera Inspection System

    NASA Technical Reports Server (NTRS)

    Gazarik, Michael; Johnson, Dave; Kist, Ed; Novak, Frank; Antill, Charles; Haakenson, David; Howell, Patricia; Pandolf, John; Jenkins, Rusty; Yates, Rusty

    2006-01-01

    Designed to fulfill a critical inspection need for the Space Shuttle Program, the EVA IR Camera System can detect cracks and subsurface defects in the Reinforced Carbon-Carbon (RCC) sections of the Space Shuttle's Thermal Protection System (TPS). The EVA IR Camera performs this detection by taking advantage of the natural thermal gradients induced in the RCC by solar flux and thermal emission from the Earth. This instrument is a compact, low-mass, low-power solution (1.2 cm³, 1.5 kg, 5.0 W) for TPS inspection that exceeds existing requirements for feature detection. Taking advantage of ground-based IR thermography techniques, the EVA IR Camera System provides the Space Shuttle program with a solution that can be accommodated by the existing inspection system. The EVA IR Camera System augments the visible and laser inspection systems and finds cracks and subsurface damage that are not measurable by the other sensors, and thus fills a critical gap in the Space Shuttle's inspection needs. This paper discusses the on-orbit RCC inspection measurement concept and requirements, and then presents a detailed description of the EVA IR Camera System design.

  15. Procurement specification color graphic camera system

    NASA Technical Reports Server (NTRS)

    Prow, G. E.

    1980-01-01

    The performance and design requirements for a Color Graphic Camera System are presented. The system is a functional part of the Earth Observation Department Laboratory System (EODLS) and will be interfaced with Image Analysis Stations. It will convert the output of a raster scan computer color terminal into permanent, high resolution photographic prints and transparencies. The images typically displayed will be remotely sensed LANDSAT scenes.

  16. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Cooke, William

    2016-01-01

    Current optical observations of meteors are commonly limited by systematic uncertainties in photometric calibration at the level of approximately 0.5 mag or higher. Future improvements to meteor ablation models, luminous efficiency models, or emission spectra will hinge on new camera systems and techniques that significantly reduce calibration uncertainties and can reliably perform absolute photometric measurements of meteors. In this talk we discuss the algorithms and tests that NASA's Meteoroid Environment Office (MEO) has developed to better calibrate photometric measurements for the existing All-Sky and Wide-Field video camera networks as well as for a newly deployed four-camera system for measuring meteor colors in Johnson-Cousins BVRI filters. In particular we will emphasize how the MEO has been able to address two long-standing concerns with the traditional procedure, discussed in more detail below.

  17. The next evolution in radioguided surgery: breast cancer related sentinel node localization using a freehandSPECT-mobile gamma camera combination

    PubMed Central

    Engelen, Thijs; Winkel, Beatrice MF; Rietbergen, Daphne DD; KleinJan, Gijs H; Vidal-Sicart, Sergi; Olmos, Renato A Valdés; van den Berg, Nynke S; van Leeuwen, Fijs WB

    2015-01-01

    Accurate pre- and intraoperative identification of the sentinel node (SN) forms the basis of the SN biopsy procedure. Gamma tracing technologies such as a gamma probe (GP), a 2D mobile gamma camera (MGC) or 3D freehandSPECT (FHS) can be used to provide the surgeon with radioguidance to the SN(s). We reasoned that integrated use of these technologies results in the generation of a “hybrid” modality that combines the best that the individual radioguidance technologies have to offer. The sensitivity and resolvability of both 2D-MGC and 3D-FHS-MGC were studied in a phantom setup (at various source-detector depths and using varying injection site-to-SN distances), and in ten breast cancer patients scheduled for SN biopsy. Acquired 3D-FHS-MGC images were overlaid with the position of the phantom/patient. This augmented-reality overview image was then used for navigation to the hotspot/SN in virtual-reality using the GP. Obtained results were compared to conventional gamma camera lymphoscintigrams. Resolution of 3D-FHS-MGC allowed identification of the SNs at a minimum injection site (100 MBq)-to-node (1 MBq; 1%) distance of 20 mm, up to a source-detector depth of 36 mm in 2D-MGC and up to 24 mm in 3D-FHS-MGC. A clinically relevant dose of approximately 1 MBq was clearly detectable up to a depth of 60 mm in 2D-MGC and 48 mm in 3D-FHS-MGC. In all ten patients at least one SN was visualized on the lymphoscintigrams with a total of 12 SNs visualized. 3D-FHS-MGC identified 11 of 12 SNs and allowed navigation to all these visualized SNs; in one patient with two axillary SNs located closely to each other (11 mm), 3D-FHS-MGC was not able to distinguish the two SNs. In conclusion, high sensitivity detection of SNs at an injection site-to-node distance of 20 mm-and-up was possible using 3D-FHS-MGC. In patients, 3D-FHS-MGC showed highly reproducible images as compared to the conventional lymphoscintigrams. PMID:26069857

  18. Digital gamma-gamma coincidence HPGe system for environmental analysis.

    PubMed

    Marković, Nikola; Roos, Per; Nielsen, Sven Poul

    2017-08-01

    The performance of a new gamma-gamma coincidence spectrometer system for environmental samples analysis at the Center for Nuclear Technologies of the Technical University of Denmark (DTU) is reported. Nutech Coincidence Low Energy Germanium Sandwich (NUCLeGeS) system consists of two HPGe detectors in a surface laboratory with a digital acquisition system used to collect the data in time-stamped list mode with 10ns time resolution. The spectrometer is used in both anticoincidence and coincidence modes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. A telephoto camera system with shooting direction control by gaze detection

    NASA Astrophysics Data System (ADS)

    Teraya, Daiki; Hachisu, Takumi; Yendo, Tomohiro

    2015-05-01

    For safe driving, it is important for the driver to check traffic conditions such as traffic lights or traffic signs as early as possible. If an on-vehicle camera captures images of the objects needed to understand traffic conditions from a long distance and shows them to the driver, the driver can grasp the traffic situation earlier. To image distant objects clearly, the focal length of the camera must be long; however, a long focal length leaves the on-vehicle camera with too narrow a field of view to check traffic conditions. Therefore, to obtain the necessary images from a long distance, the camera must combine a long focal length with control of its shooting direction. In a previous study, the driver indicated the shooting direction on a displayed image taken by a wide-angle camera, and a direction-controllable camera then took a telescopic image and displayed it to the driver. However, in that study the driver used a touch panel to indicate the shooting direction, which can disturb driving. We therefore propose a telephoto camera system for driving support whose shooting direction is controlled by the driver's gaze, so that driving is not disturbed. The proposed system is composed of a gaze detector and an active telephoto camera with controllable shooting direction. We adopt a non-wearable gaze-detection method to avoid hindering driving. The gaze detector measures the driver's gaze by image processing. The shooting direction of the active telephoto camera is controlled by galvanometer scanners, and the direction can be switched within a few milliseconds. Experiments confirmed that the proposed system captures images in the direction in which the subject is gazing.

  20. Calibration of a dual-PTZ camera system for stereo vision

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2010-08-01

    In this paper, we propose a calibration process for the intrinsic and extrinsic parameters of dual-PTZ camera systems. The calibration is based on a complete definition of six coordinate systems fixed at the image planes and at the pan and tilt rotation axes of the cameras. Misalignments between estimated and ideal coordinates of image corners are formed into a cost value that is minimized by the Nelder-Mead simplex optimization method. Experimental results show that the system is able to obtain 3D coordinates of objects with a consistent accuracy of 1 mm when the distance between the dual-PTZ camera set and the objects is from 0.9 to 1.1 meters.
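
    A minimal sketch of the optimization step described above: corner misalignments are folded into a scalar cost and minimized with the Nelder-Mead simplex method. The projection function and parameter layout are placeholders, not the authors' model.

    ```python
    # Nelder-Mead search over camera parameters that minimizes the
    # reprojection misalignment of image corners (generic sketch).
    import numpy as np
    from scipy.optimize import minimize

    def cost(params, observed_corners, project_corners):
        predicted = project_corners(params)          # (N, 2) pixel coordinates
        residuals = predicted - observed_corners     # misalignment per corner
        return np.sum(residuals ** 2)                # scalar cost value

    def calibrate(initial_params, observed_corners, project_corners):
        result = minimize(cost, initial_params,
                          args=(observed_corners, project_corners),
                          method="Nelder-Mead",
                          options={"xatol": 1e-6, "fatol": 1e-6, "maxiter": 20000})
        return result.x
    ```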

  1. Background simulations of the wide-field coded-mask camera for X-/Gamma-ray of the French-Chinese mission SVOM

    NASA Astrophysics Data System (ADS)

    Godet, Olivier; Barret, Didier; Paul, Jacques; Sizun, Patrick; Mandrou, Pierre; Cordier, Bertrand

    SVOM (Space Variable Object Monitor) is a French-Chinese mission dedicated to the study of high-redshift GRBs, which is expected to be launched in 2012. The anti-Sun pointing strategy of SVOM along with a strong and integrated ground segment consisting of two wide-field robotic telescopes covering the near-IR and optical will optimise the ground-based GRB follow-ups by the largest telescopes and thus the measurements of spectroscopic redshifts. The central instrument of the science payload will be an innovative wide-field coded-mask camera for X- /Gamma-rays (4-250 keV) responsible for triggering and localising GRBs with an accuracy better than 10 arc-minutes. Such an instrument will be background-dominated so it is essential to estimate the background level expected once in orbit during the early phase of the instrument design in order to ensure good science performance. We present our Monte-Carlo simulator enabling us to compute the background spectrum taking into account the mass model of the camera and the main components of the space environment encountered in orbit by the satellite. From that computation, we show that the current design of the camera CXG will be more sensitive to high-redshift GRBs than the Swift-BAT thanks to its low-energy threshold of 4 keV.

  2. Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera.

    PubMed

    Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G; Nagarkar, Vivek V

    2011-06-01

    Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity are required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is that increased magnification causes some rays to travel through the detection scintillator at steep angles, introducing parallax errors due to variable depth-of-interaction in scintillator material, especially towards the edges of the detector field of view. These parallax errors ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher energy isotopes that can easily penetrate through millimeters of scintillator material. A pixellated, focused-cut (FC) scintillator, with its pixels laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus improve the system resolution. We performed the first experimental evaluation of a newly developed focused-cut scintillator. We scanned a Tc-99m source across the field of view of pinhole gamma camera with a continuous scintillator, a conventional "straight-cut" (SC) pixellated scintillator, and a focused-cut scintillator, each coupled to an electron-multiplying charge coupled device (EMCCD) detector by a fiber-optic taper, and compared the measured full-width half-maximum (FWHM) values. We show that the FWHMs of the focused-cut scintillator projections are comparable to the FWHMs of the thinner SC scintillator, indicating the effectiveness of the focused-cut scintillator in compensating parallax errors.

  3. Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera

    PubMed Central

    Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G.; Nagarkar, Vivek V.

    2011-01-01

    Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity are required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is that increased magnification causes some rays to travel through the detection scintillator at steep angles, introducing parallax errors due to variable depth-of-interaction in scintillator material, especially towards the edges of the detector field of view. These parallax errors ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher energy isotopes that can easily penetrate through millimeters of scintillator material. A pixellated, focused-cut (FC) scintillator, with its pixels laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus improve the system resolution. We performed the first experimental evaluation of a newly developed focused-cut scintillator. We scanned a Tc-99m source across the field of view of pinhole gamma camera with a continuous scintillator, a conventional “straight-cut” (SC) pixellated scintillator, and a focused-cut scintillator, each coupled to an electron-multiplying charge coupled device (EMCCD) detector by a fiber-optic taper, and compared the measured full-width half-maximum (FWHM) values. We show that the FWHMs of the focused-cut scintillator projections are comparable to the FWHMs of the thinner SC scintillator, indicating the effectiveness of the focused-cut scintillator in compensating parallax errors. PMID:21731108

  4. Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera

    NASA Astrophysics Data System (ADS)

    Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G.; Nagarkar, Vivek V.

    2011-06-01

    Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity are required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is that increased magnification causes some rays to travel through the detection scintillator at steep angles, introducing parallax errors due to variable depth-of-interaction in scintillator material, especially towards the edges of the detector field of view. These parallax errors ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher energy isotopes that can easily penetrate through millimeters of scintillator material. A pixellated, focused-cut (FC) scintillator, with its pixels laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus improve the system resolution. We performed the first experimental evaluation of a newly developed focused-cut scintillator. We scanned a Tc-99m source across the field of view of pinhole gamma camera with a continuous scintillator, a conventional “straight-cut” (SC) pixellated scintillator, and a focused-cut scintillator, each coupled to an electron-multiplying charge coupled device (EMCCD) detector by a fiber-optic taper, and compared the measured full-width half-maximum (FWHM) values. We show that the FWHMs of the focused-cut scintillator projections are comparable to the FWHMs of the thinner SC scintillator, indicating the effectiveness of the focused-cut scintillator in compensating parallax errors.

  5. Design and Implementation of a Novel Portable 360° Stereo Camera System with Low-Cost Action Cameras

    NASA Astrophysics Data System (ADS)

    Holdener, D.; Nebiker, S.; Blaser, S.

    2017-11-01

    The demand for capturing indoor spaces is rising with the digitalization trend in the construction industry. An efficient solution for measuring challenging indoor environments is mobile mapping. Image-based systems with 360° panoramic coverage allow a rapid data acquisition and can be processed to georeferenced 3D images hosted in cloud-based 3D geoinformation services. For the multiview stereo camera system presented in this paper, a 360° coverage is achieved with a layout consisting of five horizontal stereo image pairs in a circular arrangement. The design is implemented as a low-cost solution based on a 3D printed camera rig and action cameras with fisheye lenses. The fisheye stereo system is successfully calibrated with accuracies sufficient for the applied measurement task. A comparison of 3D distances with reference data delivers maximal deviations of 3 cm on typical distances in indoor space of 2-8 m. Also the automatic computation of coloured point clouds from the stereo pairs is demonstrated.

  6. Occult Breast Cancer: Scintimammography with High-Resolution Breast-specific Gamma Camera in Women at High Risk for Breast Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rachel F. Brem; Jocelyn A. Rapelyea; Gilat Zisman

    2005-08-01

    To prospectively evaluate a high-resolution breast-specific gamma camera for depicting occult breast cancer in women at high risk for breast cancer but with normal mammographic and physical examination findings. MATERIALS AND METHODS: Institutional Review Board approval and informed consent were obtained. The study was HIPAA compliant. Ninety-four high-risk women (age range, 36-78 years; mean, 55 years) with normal mammographic (Breast Imaging Reporting and Data System [BI-RADS] 1 or 2) and physical examination findings were evaluated with scintimammography. After injection with 25-30 mCi (925-1110 MBq) of technetium 99m sestamibi, patients were imaged with a high-resolution small-field-of-view breast-specific gamma camera in craniocaudal and mediolateral oblique projections. Scintimammograms were prospectively classified according to focal radiotracer uptake as normal (score of 1), with no focal or diffuse uptake; benign (score of 2), with minimal patchy uptake; probably benign (score of 3), with scattered patchy uptake; probably abnormal (score of 4), with mild focal radiotracer uptake; and abnormal (score of 5), with marked focal radiotracer uptake. Mammographic breast density was categorized according to BI-RADS criteria. Patients with normal scintimammograms (scores of 1, 2, or 3) were followed up for 1 year with an annual mammogram, physical examination, and repeat scintimammography. Patients with abnormal scintimammograms (scores of 4 or 5) underwent ultrasonography (US), and those with focal hypoechoic lesions underwent biopsy. If no lesion was found during US, patients were followed up with scintimammography. Specific pathologic findings were compared with scintimammographic findings. RESULTS: Of 94 women, 78 (83%) had normal scintimammograms (score of 1, 2, or 3) at initial examination and 16 (17%) had abnormal scintimammograms (score of 4 or 5). Fourteen (88%) of the 16 patients had either benign findings at biopsy or no focal abnormality at US

  7. The Advanced Gamma-ray Imaging System (AGIS): Schwarzschild-Couder (SC) Telescope Mechanical and Optical System Design

    NASA Astrophysics Data System (ADS)

    Guarino, V.; Vassiliev, V.; Buckley, J.; Byrum, K.; Falcone, A.; Fegan, S.; Finley, J.; Hanna, D.; Kaaret, P.; Konopelko, A.; Krawczynski, H.; Krennrich, F.; Romani, R.; Wagner, R.; Woods, M.

    2009-05-01

    The concept of a future ground-based gamma-ray observatory, AGIS, in the energy range 20 GeV to 200 TeV is based on an array of 50-100 imaging atmospheric Cherenkov telescopes (IACTs). The anticipated improvement of AGIS sensitivity, angular resolution, and reliability of operation imposes demanding technological and cost requirements on the design of IACTs. In this submission, we focus on the optical and mechanical systems for a novel Schwarzschild-Couder two-mirror aplanatic optical system originally proposed by Schwarzschild. Emerging new mirror production technologies based on replication processes, such as cold and hot glass slumping, cured CFRP, and electroforming, provide new opportunities for cost effective solutions for the design of the optical system. We explore capabilities of these mirror fabrication methods for the AGIS project and alignment methods for optical systems. We also study a mechanical structure which will provide support points for mirrors and camera design driven by the requirement of minimizing the deflections of the mirror support structures.

  8. The new camera calibration system at the US Geological Survey

    USGS Publications Warehouse

    Light, D.L.

    1992-01-01

    Modern computerized photogrammetric instruments are capable of utilizing both radial and decentering camera calibration parameters, which can increase plotting accuracy over that of older analog instrumentation technology from previous decades. Also, recent design improvements in aerial cameras have minimized distortions and increased the resolving power of camera systems, which should improve the performance of the overall photogrammetric process. In concert with these improvements, the Geological Survey has adopted the rigorous mathematical model for camera calibration developed by Duane Brown. The Geological Survey's calibration facility and the additional calibration parameters now provided in the USGS calibration certificate are reviewed.
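
    For reference, the Brown model combines radial and decentering (tangential) distortion terms. The sketch below uses the common k1-k3/p1-p2 parameterization; the exact set of parameters reported in a USGS calibration certificate may differ.

    ```python
    # Brown radial + decentering distortion model (generic parameterization).
    def brown_distortion(x, y, k1, k2, k3, p1, p2):
        """Map ideal (distortion-free) normalized image coords to distorted ones."""
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        return x_d, y_d
    ```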

  9. A novel camera localization system for extending three-dimensional digital image correlation

    NASA Astrophysics Data System (ADS)

    Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher

    2018-03-01

    The monitoring of civil, mechanical, and aerospace structures is important especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements achieved in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision systems' extrinsic and intrinsic parameters. It means that the position of the cameras relative to each other (i.e. separation distance, cameras angle, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between the cameras. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitor large-sized structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability in determining the cameras position in space for performing accurate 3D-DIC calibration and measurements.
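
    A sketch of how the two measurements mentioned above could feed a stereo calibration, under the assumption that each IMU reports its camera's orientation as a rotation matrix in a shared world frame and the radar reports the scalar separation of the two cameras; recovering the full translation vector would additionally require a bearing measurement.

    ```python
    # Relative stereo geometry from IMU orientations and a radar distance
    # (illustrative assumption about what each sensor provides).
    import numpy as np

    def relative_pose(R_world_cam1, R_world_cam2, radar_distance_m):
        """Return the relative rotation and the baseline length of the pair."""
        R_rel = R_world_cam2.T @ R_world_cam1     # camera-1 axes in camera-2 frame
        baseline = float(radar_distance_m)        # |t| between optical centers
        return R_rel, baseline
    ```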

  10. Applications of iQID cameras

    NASA Astrophysics Data System (ADS)

    Han, Ling; Miller, Brian W.; Barrett, Harrison H.; Barber, H. Bradford; Furenlid, Lars R.

    2017-09-01

    iQID is an intensified quantum imaging detector developed in the Center for Gamma-Ray Imaging (CGRI). Originally called BazookaSPECT, iQID was designed for high-resolution gamma-ray imaging and preclinical gamma-ray single-photon emission computed tomography (SPECT). With the use of a columnar scintillator, an image intensifier and modern CCD/CMOS sensors, iQID cameras feature outstanding intrinsic spatial resolution. In recent years, many advances have been achieved that greatly boost the performance of iQID, broadening its applications to cover nuclear and particle imaging for preclinical, clinical and homeland security settings. This paper presents an overview of the recent advances of iQID technology and its applications in preclinical and clinical scintigraphy, preclinical SPECT, particle imaging (alpha, neutron, beta, and fission fragment), and digital autoradiography.

  11. Technology Development for AGIS (Advanced Gamma-ray Imaging System).

    NASA Astrophysics Data System (ADS)

    Krennrich, Frank

    2008-04-01

    Next-generation arrays of atmospheric Cherenkov telescopes are at the conceptual planning stage, and each could consist of on the order of 100 telescopes. The two currently discussed projects, AGIS in the US and CTA in Europe, have the potential to achieve an order of magnitude better sensitivity for Very High Energy (VHE) gamma-ray observations than state-of-the-art observatories. These projects require a substantial increase in scale from existing 4-telescope arrays such as VERITAS and HESS. The optimization of a large array requires exploring cost reduction and research and development for the individual elements while maximizing their performance as an array. In this context, the technology development program for AGIS will be discussed. This includes developing new optical designs, evaluating new types of photodetectors, developing fast trigger systems, integrating fast digitizers into highly pixelated cameras, and reliability engineering of the individual components.

  12. Temporal Imaging CeBr3 Compton Camera: A New Concept for Nuclear Decommissioning and Nuclear Waste Management

    NASA Astrophysics Data System (ADS)

    Iltis, A.; Snoussi, H.; Magalhaes, L. Rodrigues de; Hmissi, M. Z.; Zafiarifety, C. Tata; Tadonkeng, G. Zeufack; Morel, C.

    2018-01-01

    During nuclear decommissioning or waste management operations, a camera that could image the contamination field and identify and quantify the contaminants would be a great advance. Compton cameras have been proposed, but their limited efficiency for high-energy gamma rays and their cost have severely limited their application. Our objective is to promote a Compton camera for the 200 keV - 2 MeV energy range that uses fast scintillating crystals and a new concept for locating scintillation events: Temporal Imaging. Temporal Imaging uses monolithic plates of fast scintillators and measures the photon time-of-arrival distribution in order to locate each gamma ray with high precision in space (X, Y, Z), time (T) and energy (E). This provides a native estimate of the depth of interaction (Z) of every detected gamma ray. It also allows a correction for the propagation time of scintillation photons inside the crystal, resulting in excellent time resolution. The high temporal resolution of the system makes it possible to veto background quite efficiently by using a narrow time coincidence window (< 300 ps). It is also possible to reconstruct the direction of propagation of the photons inside the detector using timing constraints. The sensitivity of our system is better than 1 nSv/h in a 60 s acquisition with a 22Na source. The TEMPORAL project is funded by ANDRA/PAI under grant No. RTSCNADAA160019.

  13. Monitoring Kilauea Volcano Using Non-Telemetered Time-Lapse Camera Systems

    NASA Astrophysics Data System (ADS)

    Orr, T. R.; Hoblitt, R. P.

    2006-12-01

    Systematic visual observations are an essential component of monitoring volcanic activity. At the Hawaiian Volcano Observatory, the development and deployment of a new generation of high-resolution, non- telemetered, time-lapse camera systems provides periodic visual observations in inaccessible and hazardous environments. The camera systems combine a hand-held digital camera, programmable shutter-release, and other off-the-shelf components in a package that is inexpensive, easy to deploy, and ideal for situations in which the probability of equipment loss due to volcanic activity or theft is substantial. The camera systems have proven invaluable in correlating eruptive activity with deformation and seismic data streams. For example, in late 2005 and much of 2006, Pu`u `O`o, the active vent on Kilauea Volcano`s East Rift Zone, experienced 10--20-hour cycles of inflation and deflation that correlated with increases in seismic energy release. A time-lapse camera looking into a skylight above the main lava tube about 1 km south of the vent showed an increase in lava level---an indicator of increased lava flux---during periods of deflation, and a decrease in lava level during periods of inflation. A second time-lapse camera, with a broad view of the upper part of the active flow field, allowed us to correlate the same cyclic tilt and seismicity with lava breakouts from the tube. The breakouts were accompanied by rapid uplift and subsidence of shatter rings over the tube. The shatter rings---concentric rings of broken rock---rose and subsided by as much as 6 m in less than an hour during periods of varying flux. Time-lapse imagery also permits improved assessment of volcanic hazards, and is invaluable in illustrating the hazards to the public. In collaboration with Hawaii Volcanoes National Park, camera systems have been used to monitor the growth of lava deltas at the entry point of lava into the ocean to determine the potential for catastrophic collapse.

  14. 3D tomographic imaging with the γ-eye planar scintigraphic gamma camera

    NASA Astrophysics Data System (ADS)

    Tunnicliffe, H.; Georgiou, M.; Loudos, G. K.; Simcox, A.; Tsoumpas, C.

    2017-11-01

    γ-eye is a desktop planar scintigraphic gamma camera (100 mm × 50 mm field of view) designed by BET Solutions as an affordable tool for dynamic, whole body, small-animal imaging. This investigation tests the viability of using γ-eye for the collection of tomographic data for 3D SPECT reconstruction. Two software packages, QSPECT and STIR (software for tomographic image reconstruction), have been compared. Reconstructions have been performed using QSPECT’s implementation of the OSEM algorithm and STIR’s OSMAPOSL (Ordered Subset Maximum A Posteriori One Step Late) and OSSPS (Ordered Subsets Separable Paraboloidal Surrogate) algorithms. Reconstructed images of phantom and mouse data have been assessed in terms of spatial resolution, sensitivity to varying activity levels and uniformity. The effect of varying the number of iterations, the voxel size (1.25 mm default voxel size reduced to 0.625 mm and 0.3125 mm), the point spread function correction and the weight of prior terms were explored. While QSPECT demonstrated faster reconstructions, STIR outperformed it in terms of resolution (as low as 1 mm versus 3 mm), particularly when smaller voxel sizes were used, and in terms of uniformity, particularly when prior terms were used. Little difference in terms of sensitivity was seen throughout.
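
    For orientation, the reconstruction algorithms compared above are all built around the same multiplicative EM update. The sketch below is a generic MLEM iteration with an explicit system matrix, not the STIR or QSPECT implementation; OSEM simply applies this update to subsets of the projection data.

    ```python
    # Generic MLEM reconstruction sketch with an explicit system matrix A
    # (projections x voxels) and measured counts y.
    import numpy as np

    def mlem(A, y, n_iter=20, eps=1e-12):
        x = np.ones(A.shape[1])                  # uniform initial image
        sens = A.T @ np.ones(A.shape[0])         # sensitivity (back-projected ones)
        for _ in range(n_iter):
            proj = A @ x                         # forward projection
            ratio = y / np.maximum(proj, eps)    # measured / estimated counts
            x *= (A.T @ ratio) / np.maximum(sens, eps)   # multiplicative update
        return x
    ```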

  15. Gamma-camera 18F-FDG PET in diagnosis and staging of patients presenting with suspected lung cancer and comparison with dedicated PET.

    PubMed

    Oturai, Peter S; Mortensen, Jann; Enevoldsen, Henriette; Eigtved, Annika; Backer, Vibeke; Olesen, Knud P; Nielsen, Henrik W; Hansen, Hanne; Stentoft, Poul; Friberg, Lars

    2004-08-01

    It is not clear whether high-quality coincidence gamma-PET (gPET) cameras can provide clinical data comparable with data obtained with dedicated PET (dPET) cameras in the primary diagnostic work-up of patients with suspected lung cancer. This study focuses on 2 main issues: direct comparison between foci resolved with the 2 different PET scanners and the diagnostic accuracy compared with final diagnosis determined by the combined information from all other investigations and clinical follow-up. Eighty-six patients were recruited to this study through a routine diagnostic program. They all had changes on their chest radiographs, suggesting a malignant lung tumor. In addition to the standard diagnostic program, each patient had 2 PET scans that were performed on the same day. After administration of 419 MBq (range, 305-547 MBq) of (18)F-FDG, patients were scanned in a dedicated PET scanner about 1 h after FDG administration and in a dual-head coincidence gamma-camera about 3 h after tracer injection. Images from the 2 scans were evaluated in a blinded set-up and compared with the final outcome. Malignant intrathoracic disease was found in 52 patients, and 47 patients had primary lung cancers. dPET detected all patients as having malignancies (sensitivity, 100%; specificity, 50%), whereas gPET missed one patient (sensitivity, 98%; specificity, 56%). For evaluating regional lymph node involvement, sensitivity and specificity rates were 78% and 84% for dPET and 61% and 90% for gPET, respectively. When comparing the 2 PET techniques with clinical tumor stage (TNM), full agreement was obtained in 64% of the patients (Cohen's kappa = 0.56). Comparing categorization of the patients into clinically relevant stages (no malignancy/malignancy suitable for treatment with curative intent/nontreatable malignancy) resulted in full agreement in 81% of patients (Cohen's kappa = 0.71). Comparing results from a recent generation of gPET cameras obtained about 2 h later than those of d
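
    The agreement statistics quoted above follow the standard definition of Cohen's kappa, computed from a confusion matrix of the two classifications; the helper below shows that definition (the study's actual confusion matrices are not reproduced here).

    ```python
    # Cohen's kappa: observed agreement p_o corrected for chance agreement p_e
    # derived from the marginal frequencies of the confusion matrix.
    import numpy as np

    def cohens_kappa(confusion):
        confusion = np.asarray(confusion, dtype=float)
        n = confusion.sum()
        p_o = np.trace(confusion) / n
        p_e = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n ** 2
        return (p_o - p_e) / (1.0 - p_e)
    ```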

  16. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  17. Field-deployable gamma-radiation detectors for DHS use

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sanjoy

    2007-09-01

    Recently, the Department of Homeland Security (DHS) has integrated all nuclear detection research, development, testing, evaluation, acquisition, and operational support into a single office: the Domestic Nuclear Detection Office (DNDO). The DNDO has specific requirements set for all commercial off-the-shelf and government off-the-shelf radiation detection equipment and data acquisition systems. This article investigates several recent developments in field-deployable gamma radiation detectors that are attempting to meet the DNDO specifications. Commercially available, transportable, handheld radio isotope identification devices (RIID) are inadequate for DHS' requirements in terms of sensitivity, resolution, response time, and reach-back capability. The leading commercial vendor manufacturing handheld gamma spectrometers in the United States is Thermo Electron Corporation. Thermo Electron's identiFINDER™, which primarily uses sodium iodide crystals (3.18 x 2.54 cm cylinders) as gamma detectors, has a Full-Width-at-Half-Maximum energy resolution of 7 percent at 662 keV. Thermo Electron has recently introduced a reach-back capability patented as RadReachBack™ that enables emergency personnel to obtain real-time technical analysis of radiation samples they find in the field. The goal of the current project is to build a prototype handheld gamma spectrometer, equipped with a digital camera and an embedded cell phone, to be used as an RIID with higher sensitivity, better resolution, and faster response time (able to detect the presence of gamma-emitting radio isotopes within 5 seconds of approach), which will make it useful as a field-deployable tool. The handheld equipment continuously monitors the ambient gamma radiation, and, if it comes across any radiation anomalies with higher than normal gamma gross counts, it sets an alarm condition. When a substantial alarm level is reached, the system automatically triggers the saving of relevant spectral data and

  18. Harpicon camera for HDTV

    NASA Astrophysics Data System (ADS)

    Tanada, Jun

    1992-08-01

    Ikegami has been involved in broadcast equipment ever since it was established as a company. In conjunction with NHK it has brought forth countless television cameras, from black-and-white cameras to color cameras, HDTV cameras, and special-purpose cameras. In the early days of HDTV (high-definition television, also known as "High Vision") cameras the specifications were different from those for the cameras of the present-day system, and cameras using all kinds of components, having different arrangements of components, and having different appearances were developed into products, with time spent on experimentation, design, fabrication, adjustment, and inspection. But recently the know-how built up thus far in components, printed circuit boards, and wiring methods has been incorporated in camera fabrication, making it possible to make HDTV cameras by methods similar to the present system. In addition, more-efficient production, lower costs, and better after-sales service are being achieved by using the same circuits, components, mechanism parts, and software for both HDTV cameras and cameras that operate by the present system.

  19. Testing of the Apollo 15 Metric Camera System.

    NASA Technical Reports Server (NTRS)

    Helmering, R. J.; Alspaugh, D. H.

    1972-01-01

    Description of tests conducted (1) to assess the quality of Apollo 15 Metric Camera System data and (2) to develop production procedures for total block reduction. Three strips of metric photography over the Hadley Rille area were selected for the tests. These photographs were utilized in a series of evaluation tests culminating in an orbitally constrained block triangulation solution. Results show that film deformations up to 25 and 5 microns are present in the mapping and stellar materials, respectively. Stellar reductions can provide mapping camera orientations with an accuracy that is consistent with the accuracies of other parameters in the triangulation solutions. Pointing accuracies of 4 to 10 microns can be expected for the mapping camera materials, depending on variations in resolution caused by changing sun angle conditions.

  20. A low-cost dual-camera imaging system for aerial applicators

    USDA-ARS?s Scientific Manuscript database

    Agricultural aircraft provide a readily available remote sensing platform as low-cost and easy-to-use consumer-grade cameras are being increasingly used for aerial imaging. In this article, we report on a dual-camera imaging system we recently assembled that can capture RGB and near-infrared (NIR) i...

  1. SHOK—The First Russian Wide-Field Optical Camera in Space

    NASA Astrophysics Data System (ADS)

    Lipunov, V. M.; Gorbovskoy, E. S.; Kornilov, V. G.; Panasyuk, M. I.; Amelushkin, A. M.; Petrov, V. L.; Yashin, I. V.; Svertilov, S. I.; Vedenkin, N. N.

    2018-02-01

    Two fast, fixed, very-wide-field SHOK cameras are installed onboard the Lomonosov spacecraft. The main goal of this experiment is the observation of GRB optical emission before, synchronously with, and after the gamma-ray emission. The field of view of each camera is placed within the gamma-ray burst detection area of the other devices located onboard the "Lomonosov" spacecraft. SHOK provides measurements of optical emission with a magnitude limit of ~9-10m on a single frame with an exposure of 0.2 seconds. The device is designed for continuous sky monitoring at optical wavelengths over a very wide field of view (1000 square degrees per camera), and for detection and localization of fast time-varying (transient) optical sources on the celestial sphere, including provisional and synchronous time recording of optical emission from gamma-ray burst error boxes detected by the BDRG device and triggered by a control signal (alert trigger) from the BDRG. The Lomonosov spacecraft carries two identical devices, SHOK1 and SHOK2. The core of each SHOK device is a high-speed 11-megapixel CCD. Each SHOK device is a monoblock consisting of an optical-emission observation node, an electronics node, elements of the mechanical construction, and the body.

  2. Evaluation of camera-based systems to reduce transit bus side collisions : phase II.

    DOT National Transportation Integrated Search

    2012-12-01

    The sideview camera system has been shown to eliminate blind zones by providing a view to the driver in real time. In order to provide the best integration of these systems, an integrated camera-mirror system (hybrid system) was developed and tes...

  3. Laser-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1995-01-01

    The invention relates generally to systems for determining the range of an object from a reference point and, in one embodiment, to laser-directed ranging systems useful in telerobotics applications. Digital processing techniques are employed which minimize the complexity and cost of the hardware and software for processing range calculations, thereby enhancing the commercial attractiveness of the system for use in relatively low-cost robotic systems. The system includes a video camera for generating images of the target, image digitizing circuitry, and an associated frame grabber circuit. The circuit first captures one of the pairs of stereo video images of the target, and then captures a second video image of the target as it is partly illuminated by the light beam, suitably generated by a laser. The two video images, taken sufficiently close together in time to minimize camera and scene motion, are converted to digital images and then compared. Common pixels are eliminated, leaving only a digital image of the laser-illuminated spot on the target. The centroid of the laser-illuminated spot is then obtained and compared with a reference point, predetermined by design or calibration, which represents the coordinate at the focal plane of the laser illumination at infinite range. Preferably, the laser and camera are mounted on a servo-driven platform which can be oriented to direct the camera and the laser toward the target. In one embodiment the platform is positioned in response to movement of the operator's head. Position and orientation sensors are used to monitor head movement. The disparity between the digital image of the laser spot and the reference point is calculated for determining range to the target. Commercial applications for the system relate to active range-determination systems, such as those used with robotic systems in which it is necessary to determine the range to a workpiece or object to be grasped or acted upon by a robot arm end
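
    A hedged sketch of the processing chain the record describes: subtract the ambient frame from the laser-illuminated frame, take the centroid of the remaining spot, and convert its disparity from the infinite-range reference pixel into range. The triangulation constant and the single-axis disparity are simplifying assumptions, not taken from the patent.

    ```python
    # Laser-spot ranging by frame differencing, centroiding and disparity.
    import numpy as np

    def laser_range(frame_ambient, frame_laser, ref_x, baseline_focal_product):
        diff = frame_laser.astype(float) - frame_ambient.astype(float)
        diff[diff < 0] = 0.0                          # keep only added laser light
        total = diff.sum()
        if total <= 0:
            raise ValueError("no laser spot detected in the difference image")
        ys, xs = np.indices(diff.shape)
        cx = (xs * diff).sum() / total                # centroid column (pixels)
        disparity = abs(cx - ref_x)                   # offset from infinity point
        return baseline_focal_product / disparity     # range ~ f*B / disparity
    ```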

  4. Compton Camera and Prompt Gamma Ray Timing: Two Methods for In Vivo Range Assessment in Proton Therapy

    PubMed Central

    Hueso-González, Fernando; Fiedler, Fine; Golnik, Christian; Kormoll, Thomas; Pausch, Guntram; Petzoldt, Johannes; Römer, Katja E.; Enghardt, Wolfgang

    2016-01-01

    Proton beams are promising means for treating tumors. Such charged particles stop at a defined depth, where the ionization density is maximum. As the dose deposit beyond this distal edge is very low, proton therapy minimizes the damage to normal tissue compared to photon therapy. Nevertheless, inherent range uncertainties cast doubts on the irradiation of tumors close to organs at risk and lead to the application of conservative safety margins. This constrains significantly the potential benefits of protons over photons. In this context, several research groups are developing experimental tools for range verification based on the detection of prompt gammas, a nuclear by-product of the proton irradiation. At OncoRay and Helmholtz-Zentrum Dresden-Rossendorf, detector components have been characterized in realistic radiation environments as a step toward a clinical Compton camera. On the one hand, corresponding experimental methods and results obtained during the ENTERVISION training network are reviewed. On the other hand, a novel method based on timing spectroscopy has been proposed as an alternative to collimated imaging systems. The first tests of the timing method at a clinical proton accelerator are summarized, its applicability in a clinical environment for challenging the current safety margins is assessed, and the factors limiting its precision are discussed. PMID:27148473

  5. A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.

    PubMed

    Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C

    2017-02-07

    The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro™ HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.

  6. Focus collimator press for a collimator for gamma ray cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    York, R.N.; York, D.L.

    A focus collimator press for collimators for gamma ray cameras is described comprising a pivot arm of fixed length mounted on a travelling pivot which is movable in the plane of a spaced apart work table surface in a direction toward and away from the work table. A press plate is carried at the opposite end of the fixed length pivot arm, and is maintained in registration with the same portion of the work table for pressing engagement with each undulating radiation opaque strip as it is added to the top of a collimator stack in process by movement of the travelling pivot inward toward the work table. This enables the press plate to maintain its relative position above the collimator stack and at the same time the angle of the press plate changes, becoming less acute in relation to the work table as the travelling pivot moves inwardly toward the work table. The fixed length of the pivot arm is substantially equal to the focal point of the converging apertures formed by each pair of undulating strips stacked together. Thus, the focal point of each aperture row falls substantially on the axis of the travelling pivot, and since it moves in the plane of the work table surface the focal point of each aperture row is directed to lie in the same common plane. When one of two collimator stacks made in this way is rotated 180 degrees and the two bonded together along their respective first strips, all focal points of every aperture row lie on the central axis of the completed collimator.

  7. A Major Upgrade of the H.E.S.S. Cherenkov Cameras

    NASA Astrophysics Data System (ADS)

    Lypova, Iryna; Giavitto, Gianluca; Ashton, Terry; Balzer, Arnim; Berge, David; Brun, Francois; Chaminade, Thomas; Delagnes, Eric; Fontaine, Gerard; Füßling, Matthias; Giebels, Berrie; Glicenstein, Jean-Francois; Gräber, Tobias; Hinton, Jim; Jahnke, Albert; Klepser, Stefan; Kossatz, Marko; Kretzschmann, Axel; Lefranc, Valentin; Leich, Holger; Lüdecke, Hartmut; Manigot, Pascal; Marandon, Vincent; Moulin, Emmanuel; de Naurois, Mathieu; Nayman, Patrick; Ohm, Stefan; Penno, Marek; Ross, Duncan; Salek, David; Schade, Markus; Schwab, Thomas; Simoni, Rachel; Stegmann, Christian; Steppa, Constantin; Thornhill, Julian; Toussnel, Francois

    2017-03-01

    The High Energy Stereoscopic System (H.E.S.S.) is an array of imaging atmospheric Cherenkov telescopes (IACTs) located in Namibia. It was built to detect Very High Energy (VHE, >100 GeV) cosmic gamma rays, and consists of four 12 m diameter Cherenkov telescopes (CT1-4), built in 2003, and a larger 28 m telescope (CT5), built in 2012. The larger mirror surface of CT5 permits lowering the energy threshold of the array to 30 GeV. The cameras of CT1-4 are currently undergoing an extensive upgrade, with the goals of reducing their failure rate, reducing their readout dead time and improving the overall performance of the array. The entire camera electronics has been renewed from the ground up, as well as the power, ventilation and pneumatics systems, and the control and data acquisition software. Technical solutions foreseen for the next-generation Cherenkov Telescope Array (CTA) observatory have been introduced, most notably a readout based on the NECTAr analog memory chip. The camera control subsystems and the control software framework also pursue an innovative design, increasing the camera performance, robustness and flexibility. The CT1 camera was upgraded in July 2015 and is currently taking data; CT2-4 will be upgraded in Fall 2016. Together they will assure continuous operation of H.E.S.S. at its full sensitivity until and possibly beyond the advent of CTA. This contribution describes the design, the testing and the in-lab and on-site performance of all components of the newly upgraded H.E.S.S. camera.

  8. Innovative Camera and Image Processing System to Characterize Cryospheric Changes

    NASA Astrophysics Data System (ADS)

    Schenk, A.; Csatho, B. M.; Nagarajan, S.

    2010-12-01

    The polar regions play an important role in Earth’s climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images allow for the mapping of features, such as crevasses, flow bands, shear margins, moraines, leads, and different types of sea ice. Tracking features in repeat images produces a dense velocity vector field that can either serve as validation for interferometrically derived surface velocities or constitute a stand-alone product. A multi-modal photogrammetric platform consists of one or more high-resolution commercial color cameras, a GPS/inertial navigation system, and an optional laser scanner. Such a system, using a Canon EOS-1DS Mark II camera, was first flown on the IceBridge missions in Fall 2009 and Spring 2010, capturing hundreds of thousands of images at a frame interval of about one second. While digital images and videos have been used for quite some time for visual inspection, precise 3D measurements with low-cost commercial cameras require special photogrammetric treatment that only became available recently. Calibrating the multi-camera imaging system and geo-referencing the images are absolute prerequisites for all subsequent applications. Commercial cameras are inherently non-metric, that is, their sensor model is only approximately known. Since these cameras are not as rugged as photogrammetric cameras, the interior orientation also changes, due to temperature and pressure changes and aircraft vibration, resulting in large errors in 3D measurements. It is therefore necessary to calibrate the cameras frequently, at least whenever the system is newly installed. Geo-referencing the images is

  9. Three-camera stereo vision for intelligent transportation systems

    NASA Astrophysics Data System (ADS)

    Bergendahl, Jason; Masaki, Ichiro; Horn, Berthold K. P.

    1997-02-01

    A major obstacle in the application of stereo vision to intelligent transportation systems is high computational cost. In this paper, a PC-based three-camera stereo vision system constructed with off-the-shelf components is described. The system serves as a tool for developing and testing robust algorithms which approach real-time performance. We present an edge-based, subpixel stereo algorithm which is adapted to permit accurate distance measurements to objects in the field of view using a compact camera assembly. Once computed, the 3D scene information may be directly applied to a number of in-vehicle applications, such as adaptive cruise control, obstacle detection, and lane tracking. Moreover, since the largest computational cost is incurred in generating the 3D scene information, multiple applications that leverage this information can be implemented in a single system with minimal cost. On-road applications, such as vehicle counting and incident detection, are also possible. Preliminary in-vehicle road trial results are presented.
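
    The underlying range computation for a rectified stereo pair is the usual disparity relation Z = f·B/d; the snippet below is a generic illustration of that relation, not the authors' subpixel algorithm.

    ```python
    # Distance from disparity for a rectified stereo pair with focal length f
    # (in pixels), baseline B (in meters) and disparity d (in pixels).
    def depth_from_disparity(f_pixels, baseline_m, disparity_pixels):
        if disparity_pixels <= 0:
            raise ValueError("disparity must be positive for a finite range")
        return f_pixels * baseline_m / disparity_pixels

    # Example: f = 800 px, B = 0.3 m, d = 8 px  ->  Z = 30 m
    ```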

  10. Camera Systems Rapidly Scan Large Structures

    NASA Technical Reports Server (NTRS)

    2013-01-01

    Needing a method to quickly scan large structures like an aircraft wing, Langley Research Center developed the line scanning thermography (LST) system. LST works in tandem with a moving infrared camera to capture how a material responds to changes in temperature. Princeton Junction, New Jersey-based MISTRAS Group Inc. now licenses the technology and uses it in power stations and industrial plants.

  11. Report on the eROSITA camera system

    NASA Astrophysics Data System (ADS)

    Meidinger, Norbert; Andritschke, Robert; Bornemann, Walter; Coutinho, Diogo; Emberger, Valentin; Hälker, Olaf; Kink, Walter; Mican, Benjamin; Müller, Siegfried; Pietschner, Daniel; Predehl, Peter; Reiffers, Jonas

    2014-07-01

    The eROSITA space telescope is currently developed for the determination of cosmological parameters and the equation of state of dark energy via evolution of clusters of galaxies. Furthermore, the instrument development was strongly motivated by the intention of a first imaging X-ray all-sky survey enabling measurements above 2 keV. eROSITA is a scientific payload on the Russian research satellite SRG. Its destination after launch is the Lagrangian point L2. The observational program of the observatory divides into an all-sky survey and pointed observations and takes in total about 7.5 years. The instrument comprises an array of 7 identical and parallel aligned telescopes. Each of the seven focal plane cameras is equipped with a PNCCD detector, an enhanced type of the XMM-Newton focal plane detector. This instrumentation permits spectroscopy and imaging of X-rays in the energy band from 0.3 keV to 10 keV with a field of view of 1.0 degree. The camera development is done at the Max-Planck-Institute for extraterrestrial physics. Key component of each camera is the PNCCD chip. This silicon sensor is a back-illuminated, fully depleted and column-parallel type of charge coupled device. The image area of the 450 micron thick frame-transfer CCD comprises an array of 384 x 384 pixels, each with a size of 75 micron x 75 micron. Readout of the signal charge that is generated by an incident X-ray photon in the CCD is accomplished by an ASIC, the so-called eROSITA CAMEX. It provides 128 parallel analog signal processing channels but multiplexes the signals finally to one output which feeds the detector signals to a fast 14-bit ADC. The read noise of this system is equivalent to a noise charge of about 2.5 electrons rms. We achieve an energy resolution close to the theoretical limit given by Fano noise (except for very low energies). For example, the FWHM at an energy of 5.9 keV is approximately 140 eV. The complete camera assembly comprises the camera head with the detector as

  12. Optimization and verification of image reconstruction for a Compton camera towards application as an on-line monitor for particle therapy

    NASA Astrophysics Data System (ADS)

    Taya, T.; Kataoka, J.; Kishimoto, A.; Tagawa, L.; Mochizuki, S.; Toshito, T.; Kimura, M.; Nagao, Y.; Kurita, K.; Yamaguchi, M.; Kawachi, N.

    2017-07-01

    Particle therapy is an advanced cancer therapy that uses a feature known as the Bragg peak, in which particle beams suddenly lose their energy near the end of their range. The Bragg peak enables particle beams to damage tumors effectively. To achieve precise therapy, the demand for accurate and quantitative imaging of the beam irradiation region or dosage during therapy has increased. The most common method of particle range verification is imaging of annihilation gamma rays by positron emission tomography. Not only 511-keV gamma rays but also prompt gamma rays are generated during therapy; therefore, the Compton camera is expected to be used as an on-line monitor for particle therapy, as it can image these gamma rays in real time. Proton therapy, one of the most common particle therapies, uses a proton beam of approximately 200 MeV, which has a range of ~25 cm in water. As gamma rays are emitted along the path of the proton beam, quantitative evaluation of the reconstructed images of diffuse sources becomes crucial, but it is far from being fully developed for Compton camera imaging at present. In this study, we first quantitatively evaluated reconstructed Compton camera images of uniformly distributed diffuse sources, and then confirmed that our Compton camera obtained 3% (1σ) and 5% (1σ) uniformity for line and plane sources, respectively. Based on this quantitative study, we demonstrated on-line gamma imaging during proton irradiation. Through these studies, we show that the Compton camera is suitable for future use as an on-line monitor for particle therapy.
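
    The uniformity figures quoted above are one-sigma relative spreads of the reconstructed intensity over a region that should be uniform; a minimal version of that figure of merit is sketched below (the choice of region of interest is an assumption).

    ```python
    # Relative 1-sigma spread of reconstructed values inside a nominally
    # uniform region of interest.
    import numpy as np

    def uniformity_1sigma(image, roi_mask):
        """Return sigma/mean (as a fraction) over the region of interest."""
        vals = image[roi_mask]
        return np.std(vals) / np.mean(vals)

    # Example: a quoted 3% uniformity corresponds to a returned value of ~0.03.
    ```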

  13. The Advanced Gamma-ray Imaging System (AGIS): Topological Array Trigger

    NASA Astrophysics Data System (ADS)

    Smith, Andrew W.

    2010-03-01

    AGIS is a concept for the next-generation ground-based gamma-ray observatory. It will be an array of 36 imaging atmospheric Cherenkov telescopes (IACTs) sensitive in the energy range from 50 GeV to 200 TeV. The required improvements in sensitivity, angular resolution, and reliability of operation relative to present-generation instruments impose demanding technological and cost requirements on the design of the telescopes and on the triggering and readout systems for AGIS. To maximize the capabilities of large arrays of IACTs with a low energy threshold, a wide field of view and a low background rate, a sophisticated array trigger is required. We outline the status of the development of a stereoscopic array trigger that calculates image parameters and correlates them across a subset of telescopes. Field Programmable Gate Arrays (FPGAs) implement the real-time pattern recognition to suppress cosmic rays and night-sky background events. A proof-of-principle system is being developed to run at camera trigger rates up to 10 MHz and array-level rates up to 10 kHz.

  14. Radiation camera motion correction system

    DOEpatents

    Hoffer, P.B.

    1973-12-18

    The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)

  15. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  16. 640 x 480 MWIR and LWIR camera system developments

    NASA Astrophysics Data System (ADS)

    Tower, John R.; Villani, Thomas S.; Esposito, Benjamin J.; Gilmartin, Harvey R.; Levine, Peter A.; Coyle, Peter J.; Davis, Timothy J.; Shallcross, Frank V.; Sauer, Donald J.; Meyerhofer, Dietrich

    1993-01-01

    The performance of a 640 x 480 PtSi, 3-5 micron (MWIR), Stirling-cooled camera system with a minimum resolvable temperature of 0.03 is considered. A preliminary specification of a full-TV-resolution PtSi radiometer was developed using the measured performance characteristics of the Stirling-cooled camera. The radiometer is capable of imaging rapid thermal transients from 25 to 250 °C with better than 1 percent temperature resolution. This performance is achieved using the electronic exposure control capability of the MOS focal plane array (FPA). A liquid-nitrogen-cooled camera with an eight-position filter wheel has been developed using the 640 x 480 PtSi FPA. Low thermal mass packaging for the FPA was developed for Joule-Thomson applications.

  17. 640 x 480 MWIR and LWIR camera system developments

    NASA Astrophysics Data System (ADS)

    Tower, J. R.; Villani, T. S.; Esposito, B. J.; Gilmartin, H. R.; Levine, P. A.; Coyle, P. J.; Davis, T. J.; Shallcross, F. V.; Sauer, D. J.; Meyerhofer, D.

    The performance of a 640 x 480 PtSi, 3-5 micron (MWIR), Stirling-cooled camera system with a minimum resolvable temperature of 0.03 is considered. A preliminary specification of a full-TV-resolution PtSi radiometer was developed using the measured performance characteristics of the Stirling-cooled camera. The radiometer is capable of imaging rapid thermal transients from 25 to 250 °C with better than 1 percent temperature resolution. This performance is achieved using the electronic exposure control capability of the MOS focal plane array (FPA). A liquid-nitrogen-cooled camera with an eight-position filter wheel has been developed using the 640 x 480 PtSi FPA. Low thermal mass packaging for the FPA was developed for Joule-Thomson applications.

  18. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
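
    The abstract lists the DSP stages but not their algorithms; as one hedged illustration of the white-balance stage, the Python sketch below applies a simple gray-world correction to an RGB frame at the 1270 × 792 resolution mentioned above. The method and parameters are assumptions for illustration, not the authors' hardware design.

        import numpy as np

        def gray_world_white_balance(rgb):
            """Scale each channel so its mean matches the overall gray mean."""
            rgb = rgb.astype(np.float32)
            channel_means = rgb.reshape(-1, 3).mean(axis=0)
            gains = channel_means.mean() / channel_means
            return np.clip(rgb * gains, 0, 255).astype(np.uint8)

        frame = (np.random.default_rng(1).random((792, 1270, 3)) * 255).astype(np.uint8)
        print(gray_world_white_balance(frame).shape)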

  19. Quantitative investigation of a novel small field of view hybrid gamma camera (HGC) capability for sentinel lymph node detection

    PubMed Central

    Lees, John E; Bugby, Sarah L; Jambi, Layal K; Perkins, Alan C

    2016-01-01

    Objective: The hybrid gamma camera (HGC) has been developed to enhance the localization of radiopharmaceutical uptake in targeted tissues during surgical procedures such as sentinel lymph node (SLN) biopsy. To assess the capability of the HGC, a lymph node contrast (LNC) phantom was constructed to simulate medical scenarios of varying radioactivity concentrations and SLN size. Methods: The phantom was constructed using two clear acrylic glass plates. The SLNs were simulated by circular wells of diameters ranging from 10 to 2.5 mm (16 wells in total) in one plate. The second plate contains four larger rectangular wells to simulate tissue background activity surrounding the SLNs. The activity used to simulate each SLN ranged between 4 and 0.025 MBq. The activity concentration ratio between the background and the activity injected in the SLNs was 1:10. The LNC phantom was placed at different depths of scattering material ranging between 5 and 40 mm. The collimator-to-source distance was 120 mm. Image acquisition times ranged from 60 to 240 s. Results: Contrast-to-noise ratio analysis and full-width-at-half-maximum (FWHM) measurements of the simulated SLNs were carried out for the images obtained. Over the range of activities used, the HGC detected between 87.5 and 100% of the SLNs through 20 mm of scattering material and 75–93.75% of the SLNs through 40 mm of scattering material. The FWHM of the detected SLNs ranged between 11.93 and 14.70 mm. Conclusion: The HGC is capable of detecting low accumulation of activity in small SLNs, indicating its usefulness as an intraoperative imaging system during surgical SLN procedures. Advances in knowledge: This study investigates the capability of a novel small-field-of-view (SFOV) HGC to detect low activity uptake in small SLNs. The phantom and procedure described are inexpensive and could be easily replicated and applied to any SFOV camera, to provide a comparison between systems with clinically relevant
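
    The abstract reports contrast-to-noise ratio (CNR) analysis without giving the exact region-of-interest definitions; a minimal Python sketch of one common CNR definition, applied to a synthetic node well against a background region, is shown below. The masks and counts are placeholders, not the HGC study's data.

        import numpy as np

        def cnr(image, node_mask, background_mask):
            """CNR = (mean_node - mean_background) / std_background."""
            node, bkg = image[node_mask], image[background_mask]
            return (node.mean() - bkg.mean()) / bkg.std(ddof=1)

        rng = np.random.default_rng(0)
        img = rng.poisson(20.0, size=(64, 64)).astype(float)       # background counts
        img[28:36, 28:36] += 60.0                                  # simulated node well
        node = np.zeros_like(img, dtype=bool); node[28:36, 28:36] = True
        bkg = np.zeros_like(img, dtype=bool); bkg[:10, :] = True
        print(f"CNR = {cnr(img, node, bkg):.1f}")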

  20. Heliostat kinematic system calibration using uncalibrated cameras

    NASA Astrophysics Data System (ADS)

    Burisch, Michael; Gomez, Luis; Olasolo, David; Villasante, Cristobal

    2017-06-01

    The efficiency of the solar field greatly depends on the ability of the heliostats to precisely reflect solar radiation onto a central receiver. To control the heliostats with such precision, accurate knowledge of the motion of each heliostat, modeled as a kinematic system, is required. Determining the parameters of this system for each heliostat by a calibration system is crucial for the efficient operation of the solar field. For small heliostats, being able to perform such a calibration in a fast and automatic manner is imperative, as the solar field may contain tens or even hundreds of thousands of them. A calibration system that can rapidly recalibrate a whole solar field would also allow costs to be reduced. Heliostats are generally designed to provide stability over a long period of time; if this requirement can be relaxed and any resulting error compensated by adapting the parameters of the model, the cost of the heliostats can be reduced. The presented method describes such an automatic calibration system using uncalibrated cameras rigidly attached to each heliostat. The cameras are used to observe targets spread out through the solar field; based on this, the kinematic system of the heliostat can be estimated with high precision. A comparison of this approach with similar solutions shows the viability of the proposed solution.

  1. Development of Automated Tracking System with Active Cameras for Figure Skating

    NASA Astrophysics Data System (ADS)

    Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi

    This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. Video images of figure skating include irregular trajectories, various postures, rapid movements, and varied costume colors, which makes it difficult to determine features useful for image tracking. On the other hand, an ice rink has a limited area and uniform high intensity, and skating is always performed on ice. In the proposed system, the ice rink region is first extracted from a video image by the region growing method, and then a skater region is extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is kept as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed for 10 training scenes show that the skater extraction rate is approximately 98%. Thus, it was concluded that tracking with camera control was successful for almost all the cases considered in the study.
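
    As a hedged sketch of the camera-control step described above (a pinhole small-angle model with a made-up proportional gain, not the authors' controller), the Python snippet below computes pan/tilt increments that move the extracted skater centroid toward the image centre.

        import math

        def pan_tilt_correction(centroid_xy, image_size, focal_length_px, gain=0.8):
            """Return (pan, tilt) increments in degrees for one control step."""
            cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
            dx, dy = centroid_xy[0] - cx, centroid_xy[1] - cy
            pan = gain * math.degrees(math.atan2(dx, focal_length_px))
            tilt = -gain * math.degrees(math.atan2(dy, focal_length_px))
            return pan, tilt

        print(pan_tilt_correction((400, 300), (640, 480), focal_length_px=800.0))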

  2. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WERRY, S.M.

    2000-03-23

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151.

  3. Fisheye Multi-Camera System Calibration for Surveying Narrow and Complex Architectures

    NASA Astrophysics Data System (ADS)

    Perfetti, L.; Polari, C.; Fassi, F.

    2018-05-01

    Narrow spaces and passages are not a rare encounter in cultural heritage; the shape and extension of those areas place a serious challenge on any technique one may choose to survey their 3D geometry, especially techniques that make use of stationary instrumentation like terrestrial laser scanning. The ratio between space extension and cross-section width of many corridors and staircases can easily lead to distortion/drift of the 3D reconstruction because of the propagation of uncertainty. This paper investigates the use of fisheye photogrammetry to produce the 3D reconstruction of such spaces and presents some tests to contain the degrees of freedom of the photogrammetric network, thereby also containing the drift of long data sets. The idea is to employ a multi-camera system composed of several fisheye cameras and to implement distance and relative-orientation constraints, as well as pre-calibration of the internal parameters of each camera, within the bundle adjustment. For the beginning of this investigation, we used the NCTech iSTAR panoramic camera as a rigid multi-camera system. The case study of the Amedeo Spire of the Milan Cathedral, which encloses a spiral staircase, is the stage for all the tests. Comparisons have been made between the results obtained with the multi-camera configuration, the auto-stitched equirectangular images and a data set obtained with a monocular fisheye configuration using a full-frame DSLR. Results show improved accuracy, down to millimetres, using the rigidly constrained multi-camera system.

  4. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
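
    The intra-camera estimates (tilt angle, focal length, camera height) enable the pixel-to-metre conversion mentioned above; a hedged Python sketch of that conversion for a flat ground plane under a pinhole model is given below. The estimation of the parameters themselves from pedestrian detections is not reproduced, and all numbers are placeholders.

        import math

        def ground_distance(v_px, image_height, focal_px, tilt_deg, cam_height_m):
            """Distance along the ground plane to the point imaged at pixel row v_px."""
            cy = image_height / 2.0
            ray_angle = math.radians(tilt_deg) + math.atan2(v_px - cy, focal_px)
            return cam_height_m / math.tan(ray_angle)

        print(round(ground_distance(400, 480, 700.0, tilt_deg=20.0, cam_height_m=4.0), 2), "m")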

  5. Detecting method of subjects' 3D positions and experimental advanced camera control system

    NASA Astrophysics Data System (ADS)

    Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi

    1997-04-01

    Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality or tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements at studios and during sports broadcasts. This time, we have developed a method for detecting subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) Two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates. (2) The system is capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results will also be reported.
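
    The abstract states that the two sensor cameras produce the subject's 3D coordinates; a minimal Python sketch of the underlying step, linear (DLT) triangulation from two calibrated projection matrices, is shown below. The matrices and pixel coordinates are illustrative, not values from the paper.

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """Return the 3D point minimising the algebraic reprojection error."""
            A = np.vstack([x1[0] * P1[2] - P1[0],
                           x1[1] * P1[2] - P1[1],
                           x2[0] * P2[2] - P2[0],
                           x2[1] * P2[2] - P2[1]])
            X = np.linalg.svd(A)[2][-1]
            return X[:3] / X[3]

        K = np.diag([800.0, 800.0, 1.0])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # 1 m baseline
        X_true = np.array([0.2, 0.1, 5.0, 1.0])
        x1, x2 = P1 @ X_true, P2 @ X_true
        print(triangulate(P1, P2, x1[:2] / x1[2], x2[:2] / x2[2]))          # ~[0.2, 0.1, 5.0]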

  6. Prompt-gamma monitoring in hadrontherapy: A review

    NASA Astrophysics Data System (ADS)

    Krimmer, J.; Dauvergne, D.; Létang, J. M.; Testa, É.

    2018-01-01

    Secondary radiation emission induced by nuclear reactions is correlated to the path of ions in matter. Therefore, such penetrating radiation can be used for in vivo control of hadrontherapy treatments, for which the primary beam is absorbed inside the patient. Among secondary radiations, prompt-gamma rays were proposed for real-time verification of ion range. Such verification is desirable to reduce uncertainties in treatment planning. For more than a decade, efforts have been undertaken worldwide to promote prompt-gamma-based devices for use in clinical conditions. Dedicated cameras are necessary to overcome the challenges of a broad and high-energy distribution, a large background, high instantaneous count rates, and compatibility constraints with patient irradiation. Several types of prompt-gamma imaging devices have been proposed, either physically collimated or electronically collimated (Compton cameras). Clinical tests are now under way. Meanwhile, methods other than direct prompt-gamma imaging have been proposed, based on specific counting using either time-of-flight or photon-energy measurements. In the present article, we review and discuss the state of the art for all techniques using prompt-gamma detection to improve quality assurance in hadrontherapy.

  7. Design of microcontroller based system for automation of streak camera.

    PubMed

    Joshi, M J; Upadhyay, J; Deshpande, P P; Sharma, M L; Navathe, C P

    2010-08-01

    A microcontroller-based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8-bit MCS-family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for the various electrodes of the tubes are generated using dc-to-dc converters. A high-voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LabVIEW-based graphical user interface has been developed which enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  8. Design of microcontroller based system for automation of streak camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, M. J.; Upadhyay, J.; Deshpande, P. P.

    2010-08-15

    A microcontroller-based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8-bit MCS-family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for the various electrodes of the tubes are generated using dc-to-dc converters. A high-voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LabVIEW-based graphical user interface has been developed which enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  9. Development of the radial neutron camera system for the HL-2A tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Y. P., E-mail: zhangyp@swip.ac.cn; Yang, J. W.; Liu, Yi

    2016-06-15

    A new radial neutron camera system has been developed and operated recently in the HL-2A tokamak to measure the spatially and time-resolved 2.5 MeV D-D fusion neutrons, enhancing the understanding of energetic-ion physics. The camera mainly consists of a multichannel collimator, liquid-scintillation detectors, shielding systems, and a data acquisition system. Measurements of the D-D fusion neutrons using the camera have been successfully performed during the 2015 HL-2A experiment campaign. The measurements show that the distribution of the fusion neutrons in the HL-2A plasma has a peaked profile, suggesting that the neutral beam injection beam ions in the plasma have a peaked distribution. It also suggests that the neutrons are primarily produced by beam-target reactions in the plasma core region. The measurement results from the neutron camera are consistent with the results of both a standard ²³⁵U fission chamber and NUBEAM neutron calculations. In this paper, the new radial neutron camera system on HL-2A and the first experimental results are described.

  10. Imaging Polarimeter for a Sub-MeV Gamma-Ray All-sky Survey Using an Electron-tracking Compton Camera

    NASA Astrophysics Data System (ADS)

    Komura, S.; Takada, A.; Mizumura, Y.; Miyamoto, S.; Takemura, T.; Kishimoto, T.; Kubo, H.; Kurosawa, S.; Matsuoka, Y.; Miuchi, K.; Mizumoto, T.; Nakamasu, Y.; Nakamura, K.; Oda, M.; Parker, J. D.; Sawano, T.; Sonoda, S.; Tanimori, T.; Tomono, D.; Yoshikawa, K.

    2017-04-01

    X-ray and gamma-ray polarimetry is a promising tool to study the geometry and the magnetic configuration of various celestial objects, such as binary black holes or gamma-ray bursts (GRBs). However, statistically significant polarizations have been detected in few of the brightest objects. Even though future polarimeters using X-ray telescopes are expected to observe weak persistent sources, there are no effective approaches to survey transient and serendipitous sources with a wide field of view (FoV). Here we present an electron-tracking Compton camera (ETCC) as a highly sensitive gamma-ray imaging polarimeter. The ETCC provides powerful background rejection and a high modulation factor over an FoV of up to 2π sr thanks to its excellent imaging based on a well-defined point-spread function. Importantly, we demonstrated for the first time the stability of the modulation factor under realistic conditions of off-axis incidence and huge backgrounds using the SPring-8 polarized X-ray beam. The measured modulation factor of the ETCC was 0.65 ± 0.01 at 150 keV for an off-axis incidence with an oblique angle of 30° and was not degraded compared to the 0.58 ± 0.02 at 130 keV for on-axis incidence. These measured results are consistent with the simulation results. Consequently, we found that the satellite-ETCC proposed in Tanimori et al. would provide all-sky surveys of weak persistent sources of 13 mCrab with 10% polarization for a 10⁷ s exposure and over 20 GRBs down to a 6 × 10⁻⁶ erg cm⁻² fluence and 10% polarization during a one-year observation.
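
    The modulation factor quoted above is commonly obtained by fitting the azimuthal distribution of Compton scattering angles; the Python sketch below illustrates that generic fit, N(φ) = A[1 + μ cos 2(φ − φ0)], on synthetic counts. This is an assumed, simplified stand-in, not the ETCC team's analysis chain.

        import numpy as np
        from scipy.optimize import curve_fit

        def modulation_curve(phi, A, mu, phi0):
            return A * (1.0 + mu * np.cos(2.0 * (phi - phi0)))

        rng = np.random.default_rng(2)
        phi = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
        counts = modulation_curve(phi, 1000.0, 0.65, 0.3) + rng.normal(0.0, 15.0, phi.size)
        (A, mu, phi0), _ = curve_fit(modulation_curve, phi, counts, p0=(900.0, 0.5, 0.0))
        print(f"fitted modulation factor mu = {mu:.2f}")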

  11. A comparison of two prompt gamma imaging techniques with collimator-based cameras for range verification in proton therapy

    NASA Astrophysics Data System (ADS)

    Lin, Hsin-Hon; Chang, Hao-Ting; Chao, Tsi-Chian; Chuang, Keh-Shih

    2017-08-01

    In vivo range verification plays an important role in proton therapy to fully utilize the benefits of the Bragg peak (BP) for delivering a high radiation dose to the tumor while sparing normal tissue. For accurately locating the position of the BP, cameras equipped with collimators (multi-slit and knife-edge) to image the prompt gammas (PG) emitted along the proton tracks in the patient have been proposed for range verification. The aim of this work is to compare the performance of a multi-slit collimator and a knife-edge collimator for non-invasive proton beam range verification. PG imaging was simulated with a validated GATE/GEANT4 Monte Carlo code modeling the spot-scanning proton therapy and a cylindrical PMMA phantom in detail. For each spot, 10⁸ protons were simulated. To investigate the correlation between the acquired PG profile and the proton range, the falloff region of the PG profile was fitted with a 3-line-segment curve function as the range estimate. Factors that may influence the accuracy of range detection, including the energy window setting, proton energy, phantom size, and phantom shift, were studied. Results indicated that both collimator systems achieve reasonable accuracy and respond well to phantom shifts. The accuracy of the range predicted by the multi-slit collimator system is less affected by the proton energy, while the knife-edge collimator system achieves higher detection efficiency, which leads to a smaller deviation in the predicted range. We conclude that both collimator systems have potential for accurate range monitoring in proton therapy. It is noted that neutron contamination has a marked impact on the range prediction of the two systems, especially the multi-slit system. Therefore, a neutron reduction technique for improving the accuracy of range verification in proton therapy is needed.
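
    The study fits the falloff of the PG profile with a 3-line-segment curve; as a simpler, hedged illustration of the same idea, the Python sketch below takes the depth at which a smoothed synthetic PG depth profile drops to half of its proximal plateau as the range estimate. The profile shape and thresholds are assumptions, not the paper's fit.

        import numpy as np

        def range_from_pg_profile(depth_cm, counts, plateau_frac=0.5):
            """Depth at which the smoothed profile falls below plateau_frac of its plateau."""
            smoothed = np.convolve(counts, np.ones(5) / 5.0, mode="same")
            plateau = smoothed[: len(smoothed) // 2].mean()
            above = np.nonzero(smoothed >= plateau_frac * plateau)[0]
            return depth_cm[above.max()]

        depth = np.linspace(0.0, 30.0, 301)                        # cm
        profile = 1.0 / (1.0 + np.exp((depth - 25.0) / 0.4))       # synthetic falloff near 25 cm
        print(f"estimated range: {range_from_pg_profile(depth, profile):.1f} cm")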

  12. Research on the electro-optical assistant landing system based on the dual camera photogrammetry algorithm

    NASA Astrophysics Data System (ADS)

    Mi, Yuhe; Huang, Yifan; Li, Lin

    2015-08-01

    Based on the beacon photogrammetry location technique, the Dual Camera Photogrammetry (DCP) algorithm was used to assist helicopters in landing on a ship. In this paper, ZEMAX was used to simulate two Charge Coupled Device (CCD) cameras imaging four beacons on both sides of the helicopter and to output the images to MATLAB. Target coordinate systems, image pixel coordinate systems, world coordinate systems and camera coordinate systems were established respectively. According to the ideal pin-hole imaging model, the rotation matrix and translation vector between the target coordinate system and the camera coordinate system could be obtained by using MATLAB to process the image information and solve the linear equations. On the basis mentioned above, the ambient temperature and the positions of the beacons and cameras were varied in ZEMAX to test the accuracy of the DCP algorithm in complex sea states. The numerical simulation shows that, in complex sea states, the position measurement accuracy can meet the requirements of the project.
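
    The paper solves the pin-hole equations in MATLAB to recover the rotation matrix and translation vector from the four imaged beacons; the Python sketch below shows the same pose-from-beacons step using OpenCV's solvePnP as a stand-in. The beacon layout, camera matrix and reference pose are made-up illustration values, not the paper's configuration.

        import cv2
        import numpy as np

        beacons_3d = np.array([[-1.0, 0.0, 0.0],      # beacon positions in the target frame (m)
                               [ 1.0, 0.0, 0.0],
                               [-1.0, 0.5, 0.0],
                               [ 1.0, 0.5, 0.0]], dtype=np.float32)
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        # Synthesise image measurements from a known reference pose, then recover it.
        rvec_true = np.array([0.05, -0.10, 0.02])
        tvec_true = np.array([0.3, -0.2, 12.0])       # target ~12 m in front of the camera
        pixels_2d, _ = cv2.projectPoints(beacons_3d, rvec_true, tvec_true, K, None)

        ok, rvec, tvec = cv2.solvePnP(beacons_3d, pixels_2d, K, None)
        R, _ = cv2.Rodrigues(rvec)                    # rotation: target frame -> camera frame
        print(ok, tvec.ravel())                       # ~[0.3, -0.2, 12.0]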

  13. Polymorphic robotic system controlled by an observing camera

    NASA Astrophysics Data System (ADS)

    Koçer, Bilge; Yüksel, Tugçe; Yümer, M. Ersin; Özen, C. Alper; Yaman, Ulas

    2010-02-01

    Polymorphic robotic systems, which are composed of many modular robots that act in coordination to achieve a goal defined on the system level, have been drawing attention of industrial and research communities since they bring additional flexibility in many applications. This paper introduces a new polymorphic robotic system, in which the detection and control of the modules are attained by a stationary observing camera. The modules do not have any sensory equipment for positioning or detecting each other. They are self-powered, geared with means of wireless communication and locking mechanisms, and are marked to enable the image processing algorithm detect the position and orientation of each of them in a two dimensional space. Since the system does not depend on the modules for positioning and commanding others, in a circumstance where one or more of the modules malfunction, the system will be able to continue operating with the rest of the modules. Moreover, to enhance the compatibility and robustness of the system under different illumination conditions, stationary reference markers are employed together with global positioning markers, and an adaptive filtering parameter decision methodology is enclosed. To the best of authors' knowledge, this is the first study to introduce a remote camera observer to control modules of a polymorphic robotic system.

  14. Gamma camera dual imaging with a somatostatin receptor and thymidine kinase after gene transfer with a bicistronic adenovirus in mice.

    PubMed

    Zinn, Kurt R; Chaudhuri, Tandra R; Krasnykh, Victor N; Buchsbaum, Donald J; Belousova, Natalya; Grizzle, William E; Curiel, David T; Rogers, Buck E

    2002-05-01

    To compare two systems for assessing gene transfer to cancer cells and xenograft tumors with noninvasive gamma camera imaging. A replication-incompetent adenovirus encoding the human type 2 somatostatin receptor (hSSTr2) and the herpes simplex virus thymidine kinase (TK) enzyme (Ad-hSSTr2-TK) was constructed. A-427 human lung cancer cells were infected in vitro and mixed with uninfected cells at different ratios. A-427 tumors in nude mice (n = 23) were injected with 1 x 10(6) to 5 x 10(8) plaque-forming units (pfu) of Ad-hSSTr2-TK. The expressed hSSTr2 and TK proteins were imaged owing to internally bound, or trapped, technetium 99m ((99m)Tc)-labeled hSSTr2-binding peptide (P2045) and radioiodinated 2'-deoxy-2'-fluoro-beta-D-arabinofuranosyl-5-iodouracil (FIAU), respectively. Iodine 125 ((125)I)-labeled FIAU was used in vitro and iodine 131 ((131)I)-labeled FIAU, in vivo. The (99m)Tc-labeled P2045 and (125)I- or (131)I-labeled FIAU were imaged simultaneously with different window settings with an Anger gamma camera. Treatment effects were tested with analysis of variance. Infected cells in culture trapped (125)I-labeled FIAU and (99m)Tc-labeled P2045; uptake correlated with the percentage of Ad-hSSTr2-TK-positive cells. For 100% of infected cells, 24% +/- 0.4 (mean +/- SD) of the added (99m)Tc-labeled P2045 was trapped, which is significantly lower (P <.05) than the 40% +/- 2 of (125)I-labeled FIAU that was trapped. For the highest Ad-hSSTr2-TK tumor dose (5 x 10(8) pfu), the uptake of (99m)Tc-labeled P2045 was 11.1% +/- 2.9 of injected dose per gram of tumor (thereafter, dose per gram), significantly higher (P <.05) than the uptake of (131)I-labeled FIAU at 1.6% +/- 0.4 dose per gram. (99m)Tc-labeled P2045 imaging consistently depicted hSSTr2 gene transfer in tumors at all adenovirus doses. Tumor uptake of (99m)Tc-labeled P2045 positively correlated with Ad-hSSTr2-TK dose; (131)I-labeled FIAU tumor uptake did not correlate with vector dose. The hSSTr2 and TK

  15. Measurement of reach envelopes with a four-camera Selective Spot Recognition (SELSPOT) system

    NASA Technical Reports Server (NTRS)

    Stramler, J. H., Jr.; Woolford, B. J.

    1983-01-01

    The basic Selective Spot Recognition (SELSPOT) system is essentially a system which uses infrared LEDs and a 'camera' with an infrared-sensitive photodetector, a focusing lens, and some A/D electronics to produce a digital output representing an X and Y coordinate for each LED for each camera. When the data are synthesized across all cameras with appropriate calibrations, an XYZ set of coordinates is obtained for each LED at a given point in time. Attention is given to the operating modes, a system checkout, and reach envelopes and software. The Video Recording Adapter (VRA) represents the main addition to the basic SELSPOT system. The VRA contains a microprocessor and other electronics which permit user selection of several options and some interaction with the system.

  16. Soft gamma-ray detector for the ASTRO-H Mission

    NASA Astrophysics Data System (ADS)

    Watanabe, Shin; Tajima, Hiroyasu; Fukazawa, Yasushi; Blandford, Roger; Enoto, Teruaki; Kataoka, Jun; Kawaharada, Madoka; Kokubun, Motohide; Laurent, Philippe; Lebrun, François; Limousin, Olivier; Madejski, Greg; Makishima, Kazuo; Mizuno, Tsunefumi; Nakamori, Takeshi; Nakazawa, Kazuhiro; Mori, Kunishiro; Odaka, Hirokazu; Ohno, Masanori; Ohta, Masayuki; Sato, Goro; Sato, Rie; Takeda, Shin'ichiro; Takahashi, Hiromitsu; Takahashi, Tadayuki; Tanaka, Takaaki; Tashiro, Makoto; Terada, Yukikatsu; Uchiyama, Hideki; Uchiyama, Yasunobu; Yamada, Shinya; Yatsu, Yoichi; Yonetoku, Daisuke; Yuasa, Takayuki

    2012-09-01

    ASTRO-H is the next generation JAXA X-ray satellite, intended to carry instruments with broad energy coverage and exquisite energy resolution. The Soft Gamma-ray Detector (SGD) is one of the ASTRO-H instruments and will feature a wide energy band (60-600 keV) at a background level 10 times better than the current instruments on orbit. The SGD is complementary to ASTRO-H's Hard X-ray Imager covering the energy range of 5-80 keV. The SGD achieves low background by combining a Compton camera scheme with a narrow field-of-view active shield, where Compton kinematics is utilized to reject backgrounds. The Compton camera in the SGD is realized as a hybrid semiconductor detector system which consists of silicon and CdTe (cadmium telluride) sensors. Good energy resolution is afforded by the semiconductor sensors, and it results in good background rejection capability due to better constraints on Compton kinematics. Utilization of Compton kinematics also makes the SGD sensitive to gamma-ray polarization, opening up a new window to study the properties of gamma-ray emission processes. In this paper, we present the detailed design of the SGD and the results of the final prototype developments and evaluations. Moreover, we also present the expected performance based on measurements with the prototypes.

  17. Precision of FLEET Velocimetry Using High-Speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

    Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored such as digital binning (similar in concept to on-sensor binning, but done in post-processing), row-wise digital binning of the signal in adjacent pixels and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 meters per second in air and 0.2 meters per second in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision HighSpeed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio primarily because it had the largest pixels.
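
    Two of the processing steps described above are simple to express in code: row-wise digital binning of adjacent pixels and the precision metric (standard deviation of single-shot velocities). The Python sketch below illustrates both under assumed array shapes and synthetic data; it is not NASA's processing pipeline.

        import numpy as np

        def bin_rows(image, factor=8):
            """Sum adjacent image rows in groups of `factor` pixels (digital binning)."""
            rows = (image.shape[0] // factor) * factor
            return image[:rows].reshape(-1, factor, image.shape[1]).sum(axis=1)

        def precision(velocities_mps):
            """Precision defined as the standard deviation of single-shot velocities."""
            return np.std(velocities_mps, ddof=1)

        rng = np.random.default_rng(3)
        frame = rng.poisson(5.0, size=(512, 640)).astype(float)
        print("binned shape:", bin_rows(frame).shape)
        print("precision: %.2f m/s" % precision(10.0 + 0.5 * rng.standard_normal(300)))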

  18. Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

    Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored such as digital binning (similar in concept to on-sensor binning, but done in post-processing), row-wise digital binning of the signal in adjacent pixels and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 micro sec, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio primarily because it had the largest pixels.

  19. Nonholonomic camera-space manipulation using cameras mounted on a mobile base

    NASA Astrophysics Data System (ADS)

    Goodwine, Bill; Seelinger, Michael J.; Skaar, Steven B.; Ma, Qun

    1998-10-01

    The body of work called 'Camera Space Manipulation' is an effective and proven method of robotic control. Essentially, this technique identifies and refines the input-output relationship of the plant using estimation methods and drives the plant open-loop to its target state. 3D 'success' of the desired motion, i.e., the end effector of the manipulator engages a target at a particular location with a particular orientation, is guaranteed when there is camera-space success in two cameras which are adequately separated. Very accurate, sub-pixel positioning of a robotic end effector is possible using this method. To date, however, most efforts in this area have primarily considered holonomic systems. This work addresses the problem of nonholonomic camera space manipulation by considering a nonholonomic robot with two cameras and a holonomic manipulator on board the nonholonomic platform. While perhaps not as common in robotics, such a combination of holonomic and nonholonomic degrees of freedom is ubiquitous in industry: fork lifts and earth moving equipment are common examples of a nonholonomic system with an on-board holonomic actuator. The nonholonomic nature of the system makes the automation problem more difficult for a variety of reasons; in particular, the target location is not fixed in the image planes, as it is for holonomic systems (since the cameras are attached to a moving platform), and there is a fundamental 'path dependent' nature to nonholonomic kinematics. This work focuses on the sensor-space or camera-space-based control laws necessary for effectively implementing an autonomous system of this type.

  20. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley; deNolfo, G. A.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.

    2008-01-01

    The Neutron Imaging Camera (NIC) is based on the Three-dimensional Track Imager (3DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3DTI, a large-volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident direction of fast neutrons, En > 0.5 MeV, is reconstructed from the momenta and energies of the proton and triton fragments resulting from ³He(n,p)³H interactions in the 3DTI volume. The performance of the NIC from laboratory and accelerator tests is presented.

  1. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    PubMed

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double-cameras attached by a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor of the geometrical quality of complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After an on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper subtly uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining the geometric accuracy.

  2. Utilization and viability of biologically-inspired algorithms in a dynamic multiagent camera surveillance system

    NASA Astrophysics Data System (ADS)

    Mundhenk, Terrell N.; Dhavale, Nitin; Marmol, Salvador; Calleja, Elizabeth; Navalpakkam, Vidhya; Bellman, Kirstie; Landauer, Chris; Arbib, Michael A.; Itti, Laurent

    2003-10-01

    In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues in a procedural manner when designing and implementing large-scale complex systems. To advance this hypothesis, we discuss a multi-agent surveillance system in which 12 agent CPUs, each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate's visual system, which allows the animal to operate a real-time selection of the few most conspicuous locations in visual input. This is accomplished by having each camera agent utilize the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real-time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera viewpoints or create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weight of different feature vectors in other cameras, using mechanisms similar to the excitation and suppression that have been documented in electrophysiology, psychophysics and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, allocation of computational time is weighted based upon the history of each camera. A camera agent that has a history of seeing more salient targets is more likely to obtain
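
    As a heavily simplified, hedged stand-in for the saliency computation cited above, the Python sketch below builds a single intensity conspicuity map from a centre-surround difference of two Gaussian blurs; the full Itti-Koch model additionally uses colour, orientation and motion channels across an image pyramid.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def intensity_saliency(gray, sigma_center=2.0, sigma_surround=8.0):
            """Centre-surround intensity conspicuity map, normalised to [0, 1]."""
            center = gaussian_filter(gray.astype(float), sigma_center)
            surround = gaussian_filter(gray.astype(float), sigma_surround)
            sal = np.abs(center - surround)
            return sal / (sal.max() + 1e-9)

        frame = np.zeros((240, 320))
        frame[100:120, 150:170] = 255.0                 # one bright, conspicuous blob
        sal = intensity_saliency(frame)
        print("most salient pixel:", np.unravel_index(np.argmax(sal), sal.shape))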

  3. Monte-Carlo Simulation for Accuracy Assessment of a Single Camera Navigation System

    NASA Astrophysics Data System (ADS)

    Bethmann, F.; Luhmann, T.

    2012-07-01

    The paper describes a simulation-based optimization of an optical tracking system that is used as a 6DOF navigation system for neurosurgery. Compared to classical systems used in clinical navigation, the presented system has two unique properties: firstly, the system will be miniaturized and integrated into an operating microscope for neurosurgery; secondly, due to miniaturization, a single-camera approach has been designed. Single-camera techniques for 6DOF measurements show a special sensitivity to weak geometric configurations between camera and object. In addition, the achievable accuracy potential depends significantly on the geometric properties of the tracked objects (locators). Besides the quality and stability of the targets used on the locator, their geometric configuration is of major importance. In the following, the development and investigation of a simulation program is presented which allows for the assessment and optimization of the system with respect to accuracy. Different system parameters can be altered, as can different scenarios representing the operational use of the system. Measurement deviations are estimated based on the Monte-Carlo method. Practical measurements validate the correctness of the numerical simulation results.
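
    A minimal Monte-Carlo sketch of the approach described above (not the authors' simulator) is given below in Python: ideal image observations are perturbed with Gaussian measurement noise many times, a quantity of interest is re-estimated each time, and the spread of the estimates is reported as the expected measurement deviation. The toy estimator and noise level are placeholders.

        import numpy as np

        def monte_carlo_deviation(estimator, ideal_obs, sigma_px, n_trials=2000, seed=0):
            rng = np.random.default_rng(seed)
            estimates = [estimator(ideal_obs + rng.normal(0.0, sigma_px, ideal_obs.shape))
                         for _ in range(n_trials)]
            return np.std(estimates, ddof=1)

        # Toy estimator: distance between two observed image points (px).
        estimator = lambda obs: np.linalg.norm(obs[0] - obs[1])
        ideal = np.array([[100.0, 100.0], [400.0, 250.0]])
        print("1-sigma deviation: %.3f px" % monte_carlo_deviation(estimator, ideal, 0.2))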

  4. Pothole Detection System Using a Black-box Camera.

    PubMed

    Jo, Youngtae; Ryu, Seungki

    2015-11-19

    Aging roads and poor road-maintenance systems result in a large number of potholes, and their number increases over time. Potholes jeopardize road safety and transportation efficiency. Moreover, they are often a contributing factor to car accidents. To address the problems associated with potholes, the locations and sizes of potholes must be determined quickly. Sophisticated road-maintenance strategies can be developed using a pothole database, which requires a specific pothole-detection system that can collect pothole information at low cost and over a wide area. However, pothole repair has long relied on manual detection efforts. Recent automatic detection systems, such as those based on vibrations or laser scanning, are insufficient to detect potholes correctly and inexpensively owing to the unstable detection of vibration-based methods and the high costs of laser-scanning-based methods. Thus, in this paper, we introduce a new pothole-detection system using a commercial black-box camera. The proposed system detects potholes over a wide area and at low cost. We have developed a novel pothole-detection algorithm specifically designed to work within the embedded computing environments of black-box cameras. Experimental results obtained with the proposed system show that potholes can be detected accurately in real time.

  5. UCam: universal camera controller and data acquisition system

    NASA Astrophysics Data System (ADS)

    McLay, S. A.; Bezawada, N. N.; Atkinson, D. C.; Ives, D. J.

    2010-07-01

    This paper describes the software architecture and design concepts used in the UKATC's generic camera control and data acquisition software system (UCam) which was originally developed for use with the ARC controller hardware. The ARC detector control electronics are developed by Astronomical Research Cameras (ARC), of San Diego, USA. UCam provides an alternative software solution programmed in C/C++ and python that runs on a real-time Linux operating system to achieve critical speed performance for high time resolution instrumentation. UCam is a server based application that can be accessed remotely and easily integrated as part of a larger instrument control system. It comes with a user friendly client application interface that has several features including a FITS header editor and support for interfacing with network devices. Support is also provided for writing automated scripts in python or as text files. UCam has an application centric design where custom applications for different types of detectors and read out modes can be developed, downloaded and executed on the ARC controller. The built-in de-multiplexer can be easily reconfigured to readout any number of channels for almost any type of detector. It also provides support for numerous sampling modes such as CDS, FOWLER, NDR and threshold limited NDR. UCam has been developed over several years for use on many instruments such as the Wide Field Infra Red Camera (WFCAM) at UKIRT in Hawaii, the mid-IR imager/spectrometer UIST and is also used on instruments at SUBARU, Gemini and Palomar.

  6. A multi-camera system for real-time pose estimation

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin

    2007-04-01

    This paper presents a multi-camera system that performs face detection and pose estimation in real time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color, and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates about the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
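
    The abstract's derived equations are not reproduced here, but the underlying spherical-head idea can be illustrated with a hedged one-line relation: a feature at radius R on a sphere rotating about the vertical axis projects to a horizontal image offset of roughly R·sin(yaw), so yaw ≈ arcsin(offset/R). A small Python sketch with placeholder values follows; it is an illustration of the geometry, not the authors' estimator.

        import math

        def yaw_from_offset(offset_px, head_radius_px):
            """Yaw (degrees) from the horizontal feature offset, spherical-head model."""
            ratio = max(-1.0, min(1.0, offset_px / head_radius_px))
            return math.degrees(math.asin(ratio))

        print(f"estimated yaw: {yaw_from_offset(30.0, 80.0):.1f} deg")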

  7. IR-camera methods for automotive brake system studies

    NASA Astrophysics Data System (ADS)

    Dinwiddie, Ralph B.; Lee, Kwangjin

    1998-03-01

    Automotive brake systems are energy conversion devices that convert kinetic energy into heat energy. Several mechanisms, mostly related to noise and vibration problems, can occur during brake operation and are often related to non-uniform temperature distribution on the brake disk. These problems are of significant cost to the industry and are a quality concern to automotive companies and brake system vendors. One such problem is thermo-elastic instabilities in brake system. During the occurrence of these instabilities several localized hot spots will form around the circumferential direction of the brake disk. The temperature distribution and the time dependence of these hot spots, a critical factor in analyzing this problem and in developing a fundamental understanding of this phenomenon, were recorded. Other modes of non-uniform temperature distributions which include hot banding and extreme localized heating were also observed. All of these modes of non-uniform temperature distributions were observed on automotive brake systems using a high speed IR camera operating in snap-shot mode. The camera was synchronized with the rotation of the brake disk so that the time evolution of hot regions could be studied. This paper discusses the experimental approach in detail.

  8. Method used to test the imaging consistency of binocular camera's left-right optical system

    NASA Astrophysics Data System (ADS)

    Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui

    2016-09-01

    For a binocular camera, the consistency of the optical parameters of the left and right optical systems is an important factor that influences the overall imaging consistency. Conventional optical-system testing procedures lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method for measuring the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table and a CMOS camera has been established. First, the left and right optical systems capture images at normal exposure time under the same conditions. Second, a contour image is obtained based on a multiple-threshold segmentation result, and the boundary is determined using the slope of the contour lines near the pseudo-contour line. Third, a gray-level constraint based on the corresponding coordinates of the left and right images is established, and the imaging consistency can be evaluated through the standard deviation σ of the imaging grayscale difference D(x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for carrying out imaging consistency testing for binocular cameras. When the 3σ spread of the imaging gray difference D(x, y) between the left and right optical systems of the binocular camera does not exceed 5%, it is believed that the design requirements have been achieved. This method can be used effectively and paves the way for imaging consistency testing of binocular cameras.
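
    The consistency metric itself is straightforward; a hedged Python sketch computing the standard deviation σ of the grey-level difference D(x, y) between corresponding left and right pixels is shown below. A simple global exposure normalisation is assumed, and the paper's contour-based alignment step is not reproduced.

        import numpy as np

        def imaging_consistency_sigma(left, right):
            """Standard deviation of D(x, y) = left - right after mean normalisation."""
            left = left.astype(float) / left.mean()
            right = right.astype(float) / right.mean()
            return (left - right).std(ddof=1)

        rng = np.random.default_rng(4)
        left = 100.0 + rng.normal(0.0, 2.0, (480, 640))
        right = 100.0 + rng.normal(0.0, 2.0, (480, 640))
        print(f"sigma of D(x, y): {imaging_consistency_sigma(left, right):.4f}")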

  9. Programmable 10 MHz optical fiducial system for hydrodiagnostic cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huen, T.

    1987-07-01

    A solid state light control system was designed and fabricated for use with hydrodiagnostic streak cameras of the electro-optic type. With its use, the film containing the streak images will have on it two time scales simultaneously exposed with the signal. This allows timing and cross timing. The latter is achieved with exposure modulation marking onto the time tick marks. The purpose of using two time scales will be discussed. The design is based on a microcomputer, resulting in a compact and easy to use instrument. The light source is a small red light emitting diode. Time marking can be programmed in steps of 0.1 microseconds, with a range of 255 steps. The time accuracy is based on a precision 100 MHz quartz crystal, giving a divided down 10 MHz system frequency. The light is guided by two small 100 micron diameter optical fibers, which facilitates light coupling onto the input slit of an electro-optic streak camera. Three distinct groups of exposure modulation of the time tick marks can be independently set anywhere onto the streak duration. This system has been successfully used in Fabry-Perot laser velocimeters for over four years in our Laboratory. The microcomputer control section is also being used in providing optical fids to mechanical rotor cameras.

  10. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    PubMed

    Qian, Shuo; Sheng, Yang

    2011-11-01

    Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study aims to present a novel photogrammetry system that can realize simultaneous acquisition of multi-angle head images from a single camera position. By aligning two planar mirrors at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. It is found that the elapsed time of the whole localization procedure is about 3 min, and the camera calibration computation takes about 1 min after the measurement of the calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.

  11. High resolution bone mineral densitometry with a gamma camera

    NASA Technical Reports Server (NTRS)

    Leblanc, A.; Evans, H.; Jhingran, S.; Johnson, P.

    1983-01-01

    A technique by which the regional distribution of bone mineral can be determined in bone samples from small animals is described. The technique employs an Anger camera interfaced to a medical computer. High resolution imaging is possible by producing magnified images of the bone samples. Regional densitometry was performed on femurs from oophorectomised animals to assess bone mineral loss.

  12. Imaging Polarimeter for a Sub-MeV Gamma-Ray All-sky Survey Using an Electron-tracking Compton Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Komura, S.; Takada, A.; Mizumura, Y.

    2017-04-10

    X-ray and gamma-ray polarimetry is a promising tool to study the geometry and the magnetic configuration of various celestial objects, such as binary black holes or gamma-ray bursts (GRBs). However, statistically significant polarizations have been detected in few of the brightest objects. Even though future polarimeters using X-ray telescopes are expected to observe weak persistent sources, there are no effective approaches to survey transient and serendipitous sources with a wide field of view (FoV). Here we present an electron-tracking Compton camera (ETCC) as a highly sensitive gamma-ray imaging polarimeter. The ETCC provides powerful background rejection and a high modulation factor over an FoV of up to 2π sr thanks to its excellent imaging based on a well-defined point-spread function. Importantly, we demonstrated for the first time the stability of the modulation factor under realistic conditions of off-axis incidence and huge backgrounds using the SPring-8 polarized X-ray beam. The measured modulation factor of the ETCC was 0.65 ± 0.01 at 150 keV for an off-axis incidence with an oblique angle of 30° and was not degraded compared to the 0.58 ± 0.02 at 130 keV for on-axis incidence. These measured results are consistent with the simulation results. Consequently, we found that the satellite-ETCC proposed in Tanimori et al. would provide all-sky surveys of weak persistent sources of 13 mCrab with 10% polarization for a 10⁷ s exposure and over 20 GRBs down to a 6 × 10⁻⁶ erg cm⁻² fluence and 10% polarization during a one-year observation.

  13. SU-G-IeP4-12: Performance of In-111 Coincident Gamma-Ray Counting: A Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pahlka, R; Kappadath, S; Mawlawi, O

    2016-06-15

    Purpose: The decay of In-111 results in a non-isotropic gamma-ray cascade, which is normally imaged using a gamma camera. Creating images with a gamma camera using coincident gamma-rays from In-111 has not been previously studied. Our objective was to explore the feasibility of imaging this cascade as coincidence events and to determine the optimal timing resolution and source activity using Monte Carlo simulations. Methods: GEANT4 was used to simulate the decay of the In-111 nucleus and to model the gamma camera. Each photon emission was assigned a timestamp, and the time delay and angular separation for the second gamma-ray in the cascade was consistent with the known intermediate state half-life of 85 ns. The gamma-rays are transported through a model of a Siemens dual head Symbia “S” gamma camera with a 5/8-inch thick crystal and medium energy collimators. A true coincident event was defined as a single 171 keV gamma-ray followed by a single 245 keV gamma-ray within a specified time window (or vice versa). Several source activities (ranging from 10 uCi to 5 mCi) with and without incorporation of background counts were then simulated. Each simulation was analyzed using varying time windows to assess random events. The noise equivalent count rate (NECR) was computed based on the number of true and random counts for each combination of activity and time window. No scatter events were assumed since sources were simulated in air. Results: As expected, increasing the timing window increased the total number of observed coincidences albeit at the expense of true coincidences. A timing window range of 200–500 ns maximizes the NECR at clinically-used source activities. The background rate did not significantly alter the maximum NECR. Conclusion: This work suggests coincident measurements of In-111 gamma-ray decay can be performed with commercial gamma cameras at clinically-relevant activities. Work is ongoing to assess useful clinical applications.
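
    As a hedged sketch of the trade-off the abstract describes (not the authors' simulation code), the NECR can be written as T²/(T + R) in the scatter-free case, with trues growing with the coincidence window according to the 85 ns intermediate-state half-life and randoms growing linearly with the window; the rates and the 2τ·S1·S2 randoms estimate below are illustrative assumptions:

        import numpy as np

        T_HALF_NS = 85.0   # intermediate-state half-life of the In-111 cascade

        def true_rate(tau_ns, cascade_rate):
            """Detected true coincidences: fraction of delayed 245 keV gammas arriving
            within tau of the 171 keV gamma, times the detected cascade rate."""
            return cascade_rate * (1.0 - np.exp(-np.log(2.0) * tau_ns / T_HALF_NS))

        def random_rate(tau_ns, singles_1, singles_2):
            """Standard accidental-coincidence estimate for two singles streams."""
            return 2.0 * tau_ns * 1e-9 * singles_1 * singles_2

        def necr(tau_ns, cascade_rate, singles_1, singles_2):
            T = true_rate(tau_ns, cascade_rate)
            R = random_rate(tau_ns, singles_1, singles_2)
            return T ** 2 / (T + R)

        for tau in (100.0, 200.0, 300.0, 500.0, 1000.0):   # ns
            print(f"tau = {tau:6.0f} ns   NECR = {necr(tau, 1e3, 5e4, 5e4):8.1f} cps")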

  14. Characterization and commissioning of the SST-1M camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Aguilar, J. A.; Bilnik, W.; Błocki, J.; Bogacz, L.; Borkowski, J.; Bulik, T.; Cadoux, F.; Christov, A.; Curyło, M.; della Volpe, D.; Dyrda, M.; Favre, Y.; Frankowski, A.; Grudnik, Ł.; Grudzińska, M.; Heller, M.; Idźkowski, B.; Jamrozy, M.; Janiak, M.; Kasperek, J.; Lalik, K.; Lyard, E.; Mach, E.; Mandat, D.; Marszałek, A.; Medina Miranda, L. D.; Michałowski, J.; Moderski, R.; Montaruli, T.; Neronov, A.; Niemiec, J.; Ostrowski, M.; Paśko, P.; Pech, M.; Porcelli, A.; Prandini, E.; Rajda, P.; Rameez, M.; Schioppa, E., Jr.; Schovanek, P.; Seweryn, K.; Skowron, K.; Sliusar, V.; Sowiński, M.; Stawarz, Ł.; Stodulska, M.; Stodulski, M.; Toscano, S.; Troyano Pujadas, I.; Walter, R.; Wiȩcek, M.; Zagdański, A.; Ziȩtara, K.; Żychowski, P.

    2017-02-01

    The Cherenkov Telescope Array (CTA), the next generation very high energy gamma-ray observatory, will consist of three types of telescopes: large (LST), medium (MST) and small (SST) size telescopes. The SSTs are dedicated to the observation of gamma-rays with energies between a few TeV and a few hundred TeV. The SST array is expected to have 70 telescopes of different designs. The single-mirror small size telescope (SST-1M) is one of the proposed telescope designs under consideration for the SST array. It will be equipped with a 4 m diameter segmented mirror dish and with an innovative camera based on silicon photomultipliers (SiPMs). The challenge is not only to build a telescope with exceptional performance but to do it foreseeing its mass production. To address both of these challenges, the camera adopts innovative solutions for both the optical system and the readout. The Photo-Detection Plane (PDP) of the camera is composed of 1296 pixels, each made of a hollow, hexagonal light guide coupled to a hexagonal SiPM designed by the University of Geneva and Hamamatsu. As no commercial ASIC would satisfy the CTA requirements when coupled to such a large sensor, dedicated preamplifier electronics have been designed. The readout electronics also take an approach that is innovative in gamma-ray astronomy: the readout is fully digital. All signals coming from the PDP are digitized by a 250 MHz fast ADC and stored in ring buffers waiting for a trigger decision to send them to the pre-processing server, where calibration and higher level triggers will decide whether the data are stored. The latest generation of FPGAs is used to achieve high data rates and also to exploit all the flexibility of the system. As an example, each event can be flagged according to its trigger pattern. All of these features have been demonstrated in laboratory measurements on realistic elements and the results of these measurements will be presented in this contribution.

  15. Dual-modality imaging with a ultrasound-gamma device for oncology

    NASA Astrophysics Data System (ADS)

    Polito, C.; Pellegrini, R.; Cinti, M. N.; De Vincentis, G.; Lo Meo, S.; Fabbri, A.; Bennati, P.; Cencelli, V. Orsolini; Pani, R.

    2018-06-01

    Recently, dual-modality systems have been developed, aimed at correlating anatomical and functional information, improving disease localization and helping oncological or surgical treatments. Moreover, due to the growing interest in handheld detectors for preclinical trials or small animal imaging, in this work a new dual-modality integrated device, based on an ultrasound probe and a small field-of-view single photon emission gamma camera, is proposed.

  16. Evaluation of the MSFC facsimile camera system as a tool for extraterrestrial geologic exploration

    NASA Technical Reports Server (NTRS)

    Wolfe, E. W.; Alderman, J. D.

    1971-01-01

    The utility of the Marshall Space Flight Center (MSFC) facsimile camera system for extraterrestrial geologic exploration was investigated during the spring of 1971 near Merriam Crater in northern Arizona. Although the system with its present hard-wired recorder operates erratically, the imagery showed that the camera could be developed into a prime imaging tool for automated missions. Its utility would be enhanced by the development of computer techniques that use digital camera output for construction of topographic maps, and it needs increased resolution for examining near-field details. A supplementary imaging system may be necessary for hand-specimen examination at low magnification.

  17. Single-view 3D reconstruction of correlated gamma-neutron sources

    DOE PAGES

    Monterial, Mateusz; Marleau, Peter; Pozzi, Sara A.

    2017-01-05

    We describe a new method of 3D image reconstruction of neutron sources that emit correlated gammas (e.g. Cf-252, Am-Be). This category includes the vast majority of neutron sources important in nuclear threat search, safeguards and non-proliferation. Rather than requiring multiple views of the source, this technique relies on the source’s intrinsic property of coincident gamma and neutron emission. As a result, only a single-view measurement of the source is required to perform the 3D reconstruction. In principle, any scatter camera sensitive to gammas and neutrons with adequate timing and interaction location resolution can perform this reconstruction. Using a neutron double-scatter technique, we can calculate a conical surface of possible source locations. By including the time to a correlated gamma, we further constrain the source location in three dimensions by solving for the source-to-detector distance along the surface of said cone. As a proof of concept we applied these reconstruction techniques to measurements taken with the Mobile Imager of Neutrons for Emergency Responders (MINER). Two Cf-252 sources measured at 50 and 60 cm from the center of the detector were resolved in their varying depth with an average radial distance relative resolution of 26%. To demonstrate the technique’s potential with an optimized system, we simulated the measurement in MCNPX-PoliMi assuming a timing resolution of 200 ps (from 2 ns in the current system) and a source interaction location resolution of 5 mm (from 3 cm). These simulated improvements in scatter camera performance reduced the radial distance relative resolution to an average of 11%.
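
    As a hedged sketch of the kinematics this reconstruction relies on (generic double-scatter imaging relations under stated assumptions, not the authors' code): the scattered-neutron energy comes from the time of flight between the two scatter cells, the cone half-angle follows from n-p elastic scattering, and the gamma/neutron arrival-time difference fixes the source distance along the cone. All names and example numbers are illustrative:

        import math

        M_N = 939.565e6      # neutron rest mass [eV/c^2]
        C = 299_792_458.0    # speed of light [m/s]

        def neutron_energy_from_tof(cell_separation_m, tof_s):
            """Non-relativistic kinetic energy [eV] of the scattered neutron."""
            v = cell_separation_m / tof_s
            return 0.5 * M_N * (v / C) ** 2

        def cone_half_angle(e_deposited_ev, e_scattered_ev):
            """Opening half-angle of the back-projected cone for n-p elastic scattering."""
            return math.acos(math.sqrt(e_scattered_ev / (e_deposited_ev + e_scattered_ev)))

        def source_distance(dt_neutron_minus_gamma_s, e_incident_ev):
            """Distance at which a prompt gamma (travelling at c) leads the incident
            neutron by the measured arrival-time difference."""
            v_n = C * math.sqrt(2.0 * e_incident_ev / M_N)
            return dt_neutron_minus_gamma_s / (1.0 / v_n - 1.0 / C)

        # Illustrative numbers: 1 MeV deposited in the first cell, 30 cm cell spacing, 15 ns TOF.
        e2 = neutron_energy_from_tof(0.30, 15e-9)
        print(math.degrees(cone_half_angle(1.0e6, e2)), source_distance(20e-9, 1.0e6 + e2))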

  18. Single-view 3D reconstruction of correlated gamma-neutron sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monterial, Mateusz; Marleau, Peter; Pozzi, Sara A.

    We describe a new method of 3D image reconstruction of neutron sources that emit correlated gammas (e.g. Cf-252, Am-Be). This category includes the vast majority of neutron sources important in nuclear threat search, safeguards and non-proliferation. Rather than requiring multiple views of the source, this technique relies on the source’s intrinsic property of coincident gamma and neutron emission. As a result, only a single-view measurement of the source is required to perform the 3D reconstruction. In principle, any scatter camera sensitive to gammas and neutrons with adequate timing and interaction location resolution can perform this reconstruction. Using a neutron double-scatter technique, we can calculate a conical surface of possible source locations. By including the time to a correlated gamma, we further constrain the source location in three dimensions by solving for the source-to-detector distance along the surface of said cone. As a proof of concept we applied these reconstruction techniques to measurements taken with the Mobile Imager of Neutrons for Emergency Responders (MINER). Two Cf-252 sources measured at 50 and 60 cm from the center of the detector were resolved in their varying depth with an average radial distance relative resolution of 26%. To demonstrate the technique’s potential with an optimized system, we simulated the measurement in MCNPX-PoliMi assuming a timing resolution of 200 ps (from 2 ns in the current system) and a source interaction location resolution of 5 mm (from 3 cm). These simulated improvements in scatter camera performance reduced the radial distance relative resolution to an average of 11%.

  19. Mosad and Stream Vision For A Telerobotic, Flying Camera System

    NASA Technical Reports Server (NTRS)

    Mandl, William

    2002-01-01

    Two full custom camera systems using the Multiplexed OverSample Analog to Digital (MOSAD) conversion technology for visible light sensing were built and demonstrated. They include a photo gate sensor and a photo diode sensor. Each system includes the camera assembly, a driver interface assembly, a frame grabber board with an integrated decimator, and Windows 2000 compatible software for real-time image display. An array size of 320×240 with a 16 micron pixel pitch was developed for compatibility with 0.3 inch CCTV optics. With 1.2 micron technology, a 73% fill factor was achieved. Noise measurements indicated 9 to 11 bits in operation, with 13.7 bits in the best case. Measured power was under 10 milliwatts at 400 samples per second. Nonuniformity variation was below the noise floor. Pictures were taken with the different cameras during the characterization study to demonstrate the operable range. The successful conclusion of this program demonstrates the utility of the MOSAD for NASA missions, providing superior performance over CMOS and lower cost and power consumption than CCD. The MOSAD approach also provides a path to radiation hardening for space-based applications.

  20. The Use of Gamma-Ray Imaging to Improve Portal Monitor Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziock, Klaus-Peter; Collins, Jeff; Fabris, Lorenzo

    2008-01-01

    We have constructed a prototype, rapid-deployment portal monitor that uses visible-light and gamma-ray imaging to allow simultaneous monitoring of multiple lanes of traffic from the side of a roadway. Our Roadside Tracker uses automated target acquisition and tracking (TAT) software to identify and track vehicles in visible light images. The field of view of the visible camera overlaps with and is calibrated to that of a one-dimensional gamma-ray imager. The TAT code passes information on when vehicles enter and exit the system field of view and when they cross gamma-ray pixel boundaries. Based on this information, the gamma-ray imager "harvests" the gamma-ray data specific to each vehicle, integrating its radiation signature for the entire time that it is in the field of view. In this fashion we are able to generate vehicle-specific radiation signatures and avoid source confusion problems that plague nonimaging approaches to the same problem.

  1. Mechanically assisted liquid lens zoom system for mobile phone cameras

    NASA Astrophysics Data System (ADS)

    Wippermann, F. C.; Schreiber, P.; Bräuer, A.; Berge, B.

    2006-08-01

    Camera systems with a small form factor are an integral part of today's mobile phones, which recently have come to feature auto-focus functionality. Ready-to-market solutions without moving parts have been developed by using electrowetting technology. Besides virtually no deterioration, easy control electronics, and simple and therefore cost-effective fabrication, this type of liquid lens enables extremely fast settling times compared to mechanical approaches. As a next evolutionary step, mobile phone cameras will be equipped with zoom functionality. We present first-order considerations for the optical design of a miniaturized zoom system based on liquid lenses and compare it to its mechanical counterpart. We propose a design of a zoom lens with a zoom factor of 2.5 considering state-of-the-art commercially available liquid lens products. The lens possesses auto-focus capability and is based on liquid lenses and one additional mechanical actuator. The combination of liquid lenses and a single mechanical actuator enables extremely short settling times of about 20 ms for the auto focus and a simplified mechanical system design, leading to lower production cost and longer lifetime. The camera system has a mechanical outline of 24 mm in length and 8 mm in diameter. The lens, with f/# 3.5, provides market-relevant optical performance and is designed for an image circle of 6.25 mm (1/2.8" format sensor).

  2. Multimodal US-gamma imaging using collaborative robotics for cancer staging biopsies.

    PubMed

    Esposito, Marco; Busam, Benjamin; Hennersperger, Christoph; Rackerseder, Julia; Navab, Nassir; Frisch, Benjamin

    2016-09-01

    The staging of female breast cancer requires detailed information about the level of cancer spread through the lymphatic system. Common practice to obtain this information for patients with early-stage cancer is sentinel lymph node (SLN) biopsy, where LNs are radioactively identified for surgical removal and subsequent histological analysis. Punch needle biopsy is a less invasive approach but suffers from the lack of combined anatomical and nuclear information. We present and evaluate a system that introduces live collaborative robotic 2D gamma imaging in addition to live 2D ultrasound to identify SLNs in the surrounding anatomy. The system consists of a robotic arm equipped with both a gamma camera and a stereoscopic tracking system that monitors the position of an ultrasound probe operated by the physician. The arm cooperatively places the gamma camera parallel to the ultrasound imaging plane to provide live multimodal visualization and guidance. We validate the system by evaluating the target registration errors between fused nuclear and US image data in a phantom consisting of two spheres, one of which is filled with radioactivity. Medical experts performed punch biopsies on agar-gelatine phantoms with complex configurations of hot and cold lesions to provide a qualitative and quantitative evaluation of the system. The average point registration error for the overlay is [Formula: see text] mm. The time of the entire procedure was reduced by 36 %, with 80 % of the biopsies being successful. The users' feedback was very positive, and the system was deemed to be very intuitive, with handling similar to classic US-guided needle biopsy. We present and evaluate the first medical collaborative robotic imaging system. Feedback from potential users for SLN punch needle biopsy is encouraging. Ongoing work investigates clinical feasibility with more complex and realistic phantoms.

  3. Optical design of space cameras for automated rendezvous and docking systems

    NASA Astrophysics Data System (ADS)

    Zhu, X.

    2018-05-01

    Visible cameras are essential components of a space automated rendezvous and docking (AR and D) system, which is utilized in many space missions including crewed or robotic spaceship docking, on-orbit satellite servicing, and autonomous landing and hazard avoidance. Cameras are ubiquitous devices in modern times, with countless lens designs that focus on high resolution and color rendition. In comparison, space AR and D cameras, while not required to have extremely high resolution and color rendition, impose some unique requirements on lenses. Fixed lenses with no moving parts and separate lenses for narrow and wide field-of-view (FOV) are normally used in order to meet high reliability requirements. Cemented lens elements are usually avoided due to the wide temperature swings and outgassing requirements of the space environment. The lenses should be designed with exceptional stray-light performance and minimum lens flare, given the intense sunlight and the lack of atmospheric scattering in space. Furthermore, radiation-resistant glasses should be considered to prevent glass darkening from space radiation. Neptec has designed and built a narrow FOV (NFOV) lens and a wide FOV (WFOV) lens for an AR and D visible camera system. The lenses are designed using the ZEMAX program; the stray-light performance and the lens baffles are simulated using the TracePro program. This paper discusses general requirements for space AR and D camera lenses and the specific measures taken for the lenses to meet the space environmental requirements.

  4. A system for extracting 3-dimensional measurements from a stereo pair of TV cameras

    NASA Technical Reports Server (NTRS)

    Yakimovsky, Y.; Cunningham, R.

    1976-01-01

    Obtaining accurate three-dimensional (3-D) measurements from a stereo pair of TV cameras is a task requiring camera modeling, calibration, and the matching of the two images of a real 3-D point in the two TV pictures. A system which models and calibrates the cameras and pairs the two images of a real-world point in the two pictures, either manually or automatically, was implemented. This system is operational and provides a three-dimensional measurement resolution of + or - mm at distances of about 2 m.

  5. Low power multi-camera system and algorithms for automated threat detection

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Chen, Yang; Van Buer, Darrel J.; Martin, Kevin

    2013-05-01

    A key to any robust automated surveillance system is continuous, wide field-of-view sensor coverage and high-accuracy target detection algorithms. Newer systems typically employ an array of multiple fixed cameras that provide individual data streams, each of which is managed by its own processor. This array can continuously capture the entire field of view, but collecting all of the data and running the back-end detection algorithm consume additional power and increase the size, weight, and power (SWaP) of the package. This is often unacceptable, as many potential surveillance applications have strict system SWaP requirements. This paper describes a wide field-of-view video system that employs multiple fixed cameras and exhibits low SWaP without compromising the target detection rate. We cycle through the sensors, fetch a fixed number of frames, and process them through a modified target detection algorithm. During this time, the other sensors remain powered down, which reduces the required hardware and power consumption of the system. We show that the resulting gaps in coverage and irregular frame rate do not affect the detection accuracy of the underlying algorithms. This reduces the power of an N-camera system by up to approximately N-fold compared to baseline normal operation. This work was applied to Phase 2 of the DARPA Cognitive Technology Threat Warning System (CT2WS) program and used during field testing.
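
    As a hedged sketch of the duty-cycling idea described above (the Camera class, frame count, and detection callback are illustrative assumptions, not the CT2WS implementation), cycling a single powered sensor through the array might look like this:

        import itertools

        FRAMES_PER_VISIT = 8   # fixed number of frames fetched from each sensor per visit

        class Camera:
            """Illustrative stand-in for one fixed sensor in the array (not a real driver)."""
            def __init__(self, cam_id):
                self.cam_id = cam_id
                self.powered = False
            def power_up(self):
                self.powered = True
            def power_down(self):
                self.powered = False
            def grab_frame(self):
                return f"frame-from-camera-{self.cam_id}"   # placeholder frame data

        def surveillance_loop(cameras, detect, visits):
            """Round-robin duty cycling: only the active sensor draws full power."""
            for cam in itertools.islice(itertools.cycle(cameras), visits):
                cam.power_up()
                frames = [cam.grab_frame() for _ in range(FRAMES_PER_VISIT)]
                cam.power_down()
                for target in detect(frames):
                    print(f"camera {cam.cam_id}: detection {target}")

        surveillance_loop([Camera(i) for i in range(4)], detect=lambda frames: [], visits=8)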

  6. Product Plan of New Generation System Camera "OLYMPUS PEN E-P1"

    NASA Astrophysics Data System (ADS)

    Ogawa, Haruo

    "OLYMPUS PEN E-P1", which is new generation system camera, is the first product of Olympus which is new standard "Micro Four-thirds System" for high-resolution mirror-less cameras. It continues good sales by the concept of "small and stylish design, easy operation and SLR image quality" since release on July 3, 2009. On the other hand, the half-size film camera "OLYMPUS PEN" was popular by the concept "small and stylish design and original mechanism" since the first product in 1959 and recorded sale number more than 17 million with 17 models. By the 50th anniversary topic and emotional value of the Olympus pen, Olympus pen E-P1 became big sales. I would like to explain the way of thinking of the product plan that included not only the simple functional value but also emotional value on planning the first product of "Micro Four-thirds System".

  7. Development of low-cost high-performance multispectral camera system at Banpil

    NASA Astrophysics Data System (ADS)

    Oduor, Patrick; Mizuno, Genki; Olah, Robert; Dutta, Achyut K.

    2014-05-01

    Banpil Photonics (Banpil) has developed a low-cost, high-performance multispectral camera system for visible to short-wave infrared (VIS-SWIR) imaging for the most demanding high-sensitivity and high-speed military, commercial and industrial applications. The 640x512 pixel InGaAs uncooled camera system is designed to provide a compact, small form factor within a cubic inch, high sensitivity requiring fewer than 100 electrons, a high dynamic range exceeding 190 dB, high frame rates greater than 1000 frames per second (FPS) at full resolution, and low power consumption below 1 W. These are practically all of the feature benefits highly desirable in military imaging applications to expand deployment to every warfighter, while also maintaining the low-cost structure demanded for scaling into commercial markets. This paper describes Banpil's development of the camera system, including the features of the image sensor with an innovation integrating advanced digital electronics functionality, which has made the confluence of high-performance capabilities on the same imaging platform practical at low cost. It discusses the strategies employed, including innovations in the key components (e.g. the focal plane array (FPA) and Read-Out Integrated Circuitry (ROIC)) within our control while maintaining a fabless model, and strategic collaboration with partners to attain additional cost reductions on optics, electronics, and packaging. We highlight the challenges and potential opportunities for further cost reductions to achieve the goal of a sub-$1000 uncooled high-performance camera system. Finally, a brief overview of emerging military, commercial and industrial applications that will benefit from this high-performance imaging system and their forecast cost structure is presented.

  8. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    PubMed Central

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-01-01

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the influence of the lens focusing effect on the uniform light from an integrating sphere. The linearity range of the radiometric response, the non-linearity response characteristics, the sensitivity, and the dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that panoramas reflect the objective luminance more faithfully. This compensates for the limitation of stitching approaches that make images look realistic only through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
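
    As a hedged illustration of the kind of per-pixel correction such a photometric calibration enables before blending (a generic flat-field pipeline under stated assumptions, not the authors' code; the lookup table, vignetting map, and example values are invented), one sensor module's image could be linearized and de-vignetted as follows:

        import numpy as np

        def correct_image(raw, response_lut, flat_field, dark):
            """Dark subtraction, response linearization and vignetting correction.
            raw:          HxW uint16 image from one sensor module
            response_lut: lookup table mapping raw digital numbers to linear exposure
            flat_field:   HxW vignetting map, normalized to 1.0 at the image centre
            dark:         HxW dark frame for the same exposure time"""
            counts = np.clip(raw.astype(np.int64) - dark, 0, None)
            return response_lut[counts] / flat_field

        # Example with synthetic data: a mildly non-linear response and cos^4-style vignetting.
        h, w = 480, 640
        lut = (np.arange(65536, dtype=np.float64) / 65535.0) ** 0.9
        yy, xx = np.mgrid[0:h, 0:w]
        r = np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)
        flat = np.cos(np.arctan(r)) ** 4
        corrected = correct_image(np.full((h, w), 30000, np.uint16), lut,
                                  flat, np.zeros((h, w), dtype=np.int64))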

  9. Development of the geoCamera, a System for Mapping Ice from a Ship

    NASA Astrophysics Data System (ADS)

    Arsenault, R.; Clemente-Colon, P.

    2012-12-01

    The geoCamera produces maps of the ice surrounding an ice-capable ship by combining images from one or more digital cameras with the ship's position and attitude data. Maps are produced along the ship's path, with the achievable width and resolution depending on camera mounting height as well as camera resolution and lens parameters. Our system has produced maps up to 2000 m wide at 1 m resolution. Once installed and calibrated, the system is designed to operate automatically, producing maps in near real-time and making them available to on-board users via existing information systems. The resulting small-scale maps complement existing satellite-based products as well as on-board observations. Development versions were temporarily deployed in Antarctica on the RV Nathaniel B. Palmer in 2010 and in the Arctic on the USCGC Healy in 2011. A permanent system was deployed during the summer of 2012 on the USCGC Healy. To make the system attractive to other ships of opportunity, design goals include using existing ship systems when practical, using low-cost commercial off-the-shelf components if additional hardware is necessary, automating the process to virtually eliminate adding to the workload of the ship's technicians, and making the software components modular and flexible enough to allow more seamless integration with a ship's particular IT system.

  10. IMAX camera (12-IML-1)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single lens, reflex viewing design with a 15 perforation per frame horizontal pull across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.

  11. Cost effective system for monitoring of fish migration with a camera

    NASA Astrophysics Data System (ADS)

    Sečnik, Matej; Brilly, Mitja; Vidmar, Andrej

    2016-04-01

    Within the European LIFE project Ljubljanica connects (LIFE10 NAT/SI/000142) we have developed a cost-effective solution for monitoring fish migration through fish passes with an underwater camera. In the fish pass at Ambrožev trg and in the fish pass near the Fužine castle we installed a video camera called "Fishcam" to monitor the migration of fish through the fish passes and the success of their reconstruction. A live stream from the fishcams installed in the fish passes is available on our project website (http://ksh.fgg.uni-lj.si/ljubljanicaconnects/ang/12_camera). The fish monitoring system consists of two parts: a waterproof box holding the computer and charger, and the camera itself. We used a highly sensitive Sony analogue camera. The advantage of this camera is its very good sensitivity in low-light conditions, so it can take good quality pictures even at night with minimal additional lighting. For night recording we use an additional IR reflector to illuminate passing fish. The camera is connected to an 8-inch tablet PC. We decided to use a tablet PC because it is small, cheap, relatively fast, and has low power consumption. On the computer we use software with advanced motion detection capabilities, so we can also detect small fish. When a fish is detected by the software, its photograph is automatically saved to the local hard drive and, as a backup, to Google Drive. The system for monitoring fish migration has turned out to work very well. From the beginning of monitoring in June 2015 to the end of the year, more than 100,000 photographs were produced. A first analysis of these photographs has already been prepared, estimating the fish species and how frequently they pass through the fish pass.

  12. Motionless active depth from defocus system using smart optics for camera autofocus applications

    NASA Astrophysics Data System (ADS)

    Amin, M. Junaid; Riza, Nabeel A.

    2016-04-01

    This paper describes a motionless active depth from defocus (DFD) system design suited for long working range camera autofocus applications. The design consists of an active illumination module that projects a scene-illuminating, coherent, conditioned optical radiation pattern which maintains its sharpness over multiple axial distances, allowing an increased DFD working distance range. The imager module of the system, responsible for the actual DFD operation, deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration conducted in the laboratory compares the effectiveness of the coherent conditioned radiation module with a conventional incoherent active light source and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example in the tiny camera housings of smartphones and tablets. Applications for the proposed system include autofocus in modern-day digital cameras.
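
    As a hedged aside on the geometry that depth-from-defocus methods generally exploit (a generic thin-lens relation, not the specific algorithm of this paper; all symbols and numbers below are illustrative), the blur-spot diameter at a known lens setting encodes the object distance and can be inverted for depth:

        import math

        def blur_diameter(u, f, aperture_d, v_sensor):
            """Thin-lens blur-spot diameter for an object at distance u (all lengths in metres)."""
            return aperture_d * v_sensor * abs(1.0 / f - 1.0 / u - 1.0 / v_sensor)

        def depth_from_blur(blur, f, aperture_d, v_sensor, far_side=True):
            """Invert the relation above; far_side selects the solution beyond best focus."""
            sign = 1.0 if far_side else -1.0
            return 1.0 / (1.0 / f - 1.0 / v_sensor - sign * blur / (aperture_d * v_sensor))

        # Round-trip check with made-up numbers: f = 50 mm, 25 mm aperture, sensor at 52 mm.
        u = 2.0
        b = blur_diameter(u, 0.050, 0.025, 0.052)
        print(b, depth_from_blur(b, 0.050, 0.025, 0.052))   # recovers u = 2.0 m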

  13. Intercomparison of SO2 camera systems for imaging volcanic gas plumes

    NASA Astrophysics Data System (ADS)

    Kern, Christoph; Lübcke, Peter; Bobrowski, Nicole; Campion, Robin; Mori, Toshiya; Smekens, Jean-François; Stebel, Kerstin; Tamburello, Giancarlo; Burton, Mike; Platt, Ulrich; Prata, Fred

    2015-07-01

    SO2 camera systems are increasingly being used to image volcanic gas plumes. The ability to derive SO2 emission rates directly from the acquired imagery at high time resolution allows volcanic process studies that incorporate other high time-resolution datasets. Though the general principles behind the SO2 camera have remained the same for a number of years, recent advances in CCD technology and an improved understanding of the physics behind the measurements have driven a continuous evolution of the camera systems. Here we present an intercomparison of seven different SO2 cameras. In the first part of the experiment, the various technical designs are compared and the advantages and drawbacks of individual design options are considered. Though the ideal design was found to be dependent on the specific application, a number of general recommendations are made. Next, a time series of images recorded by all instruments at Stromboli Volcano (Italy) is compared. All instruments were easily able to capture SO2 clouds emitted from the summit vents. Quantitative comparison of the SO2 load in an individual cloud yielded an intra-instrument precision of about 12%. From the imagery, emission rates were then derived according to each group's standard retrieval process. A daily average SO2 emission rate of 61 ± 10 t/d was calculated. Due to differences in spatial integration methods and plume velocity determination, the time-dependent progression of SO2 emissions varied significantly among the individual systems. However, integration over distinct degassing events yielded comparable SO2 masses. Based on the intercomparison data, we find an approximate 1-sigma precision of 20% for the emission rates derived from the various SO2 cameras. Though it may still be improved in the future, this is currently within the typical accuracy of the measurement and is considered sufficient for most applications.
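
    As a hedged sketch of how an emission rate such as the quoted 61 t/d is commonly derived from SO2 camera imagery (integrate the retrieved column densities along a transect perpendicular to the transport direction and multiply by the plume speed; the array names and numbers below are illustrative assumptions, not values from the intercomparison):

        import numpy as np

        MOLAR_MASS_SO2 = 0.064   # kg/mol
        AVOGADRO = 6.022e23      # molecules/mol

        def emission_rate_kg_s(column_densities, pixel_width_m, plume_speed_m_s):
            """column_densities: SO2 columns [molecules/cm^2] sampled along a transect
            drawn across the plume, perpendicular to the transport direction."""
            columns_m2 = np.asarray(column_densities, dtype=float) * 1e4   # -> molecules/m^2
            integrated = columns_m2.sum() * pixel_width_m                  # -> molecules/m
            return integrated * plume_speed_m_s * MOLAR_MASS_SO2 / AVOGADRO

        # Example: a 100-pixel transect with 2e17 molecules/cm^2 per pixel,
        # 1 m pixels and a 5 m/s plume speed -> about 0.11 kg/s (~9 t/d).
        rate = emission_rate_kg_s(np.full(100, 2.0e17), 1.0, 5.0)
        print(rate, rate * 86400 / 1000.0, "t/d")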

  14. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  15. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  16. A Distributed Wireless Camera System for the Management of Parking Spaces

    PubMed Central

    Melničuk, Petr

    2017-01-01

    The importance of detecting parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine the occupancy of parking spaces based on information from multiple cameras. The proposed system uses small camera modules based on the Raspberry Pi Zero and a computationally efficient algorithm for occupancy detection based on the histogram of oriented gradients (HOG) feature descriptor and a support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at a rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. The reliability of the implemented algorithm is evaluated with three different test sets which altogether contain over 700,000 samples of parking spaces. PMID:29283371
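
    As a hedged sketch of a HOG + SVM occupancy classifier of the kind described above (using scikit-image and scikit-learn rather than the authors' implementation; the patch size, HOG parameters, and training data are illustrative assumptions):

        import numpy as np
        from skimage.feature import hog
        from skimage.transform import resize
        from sklearn.svm import LinearSVC

        PATCH_SHAPE = (64, 64)   # one parking-space patch, cropped and resampled from the camera image

        def hog_feature(patch):
            """patch: 2D grayscale array covering a single parking space."""
            patch = resize(patch, PATCH_SHAPE, anti_aliasing=True)
            return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2), feature_vector=True)

        def train(occupied_patches, empty_patches):
            X = np.array([hog_feature(p) for p in occupied_patches + empty_patches])
            y = np.array([1] * len(occupied_patches) + [0] * len(empty_patches))
            clf = LinearSVC(C=1.0)
            clf.fit(X, y)
            return clf

        def is_occupied(clf, patch):
            return bool(clf.predict([hog_feature(patch)])[0])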

  17. A Distributed Wireless Camera System for the Management of Parking Spaces.

    PubMed

    Vítek, Stanislav; Melničuk, Petr

    2017-12-28

    The importance of detecting parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine the occupancy of parking spaces based on information from multiple cameras. The proposed system uses small camera modules based on the Raspberry Pi Zero and a computationally efficient algorithm for occupancy detection based on the histogram of oriented gradients (HOG) feature descriptor and a support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at a rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. The reliability of the implemented algorithm is evaluated with three different test sets which altogether contain over 700,000 samples of parking spaces.

  18. Development of a digital camera tree evaluation system

    Treesearch

    Neil Clark; Daniel L. Schmoldt; Philip A. Araman

    2000-01-01

    Within the Strategic Plan for Forest Inventory and Monitoring (USDA Forest Service 1998), there is a call to "conduct applied research in the use of [advanced technology] towards the end of increasing the operational efficiency and effectiveness of our program". The digital camera tree evaluation system is part of that research, aimed at decreasing field...

  19. Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission

    NASA Astrophysics Data System (ADS)

    Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.

    2018-02-01

    NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.

  20. Design and Development of Multi-Purpose CCD Camera System with Thermoelectric Cooling: Hardware

    NASA Astrophysics Data System (ADS)

    Kang, Y.-W.; Byun, Y. I.; Rhee, J. H.; Oh, S. H.; Kim, D. K.

    2007-12-01

    We designed and developed a multi-purpose CCD camera system for three kinds of CCDs: the KAF-0401E (768×512), KAF-1602E (1536×1024), and KAF-3200E (2184×1472) made by KODAK Co. The system supports a fast USB port as well as a parallel port for data I/O and control signals. The packaging is based on two-stage circuit boards for size reduction and contains a built-in filter wheel. Basic hardware components include the clock pattern circuit, the A/D conversion circuit, the CCD data flow control circuit, and the CCD temperature control unit. The CCD temperature can be controlled with an accuracy of approximately 0.4 °C over a maximum temperature range of Δ33 °C. This CCD camera system has a readout noise of 6 e⁻ and a system gain of 5 e⁻/ADU. A total of 10 CCD camera systems were produced, and our tests show that all of them deliver acceptable performance.
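
    As a hedged note on how a quoted system gain and readout noise are commonly measured (the standard photon-transfer method: gain from the mean and variance of a flat-field pair, read noise from a bias-frame pair; this is generic practice, not the authors' procedure, and the function names are illustrative):

        import numpy as np

        def system_gain_e_per_adu(flat_a, flat_b, bias_a, bias_b):
            """Photon-transfer gain estimate [e-/ADU] from a pair of flat fields and
            a pair of bias frames taken under identical conditions."""
            flat_diff = flat_a.astype(float) - flat_b.astype(float)
            bias_diff = bias_a.astype(float) - bias_b.astype(float)
            signal = 0.5 * (flat_a.mean() + flat_b.mean()) - 0.5 * (bias_a.mean() + bias_b.mean())
            shot_var = (flat_diff.var() - bias_diff.var()) / 2.0
            return signal / shot_var

        def read_noise_electrons(bias_a, bias_b, gain_e_per_adu):
            """Read noise [e-] from the pixel-to-pixel scatter of a bias-frame difference."""
            diff = bias_a.astype(float) - bias_b.astype(float)
            return gain_e_per_adu * diff.std() / np.sqrt(2.0)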

  1. Evaluation of thermal cameras in quality systems according to ISO 9000 or EN 45000 standards

    NASA Astrophysics Data System (ADS)

    Chrzanowski, Krzysztof

    2001-03-01

    According to the international standards ISO 9001-9004 and EN 45001-45003, industrial plants and accreditation laboratories that have implemented quality systems according to these standards are required to evaluate the uncertainty of their measurements. Manufacturers of thermal cameras do not offer any data that would enable estimation of the measurement uncertainty of these imagers. The difficulty of determining the measurement uncertainty is an important limitation of thermal cameras for applications in industrial plants and the cooperating accreditation laboratories that have implemented these quality systems. A set of parameters for the characterization of commercial thermal cameras, a measuring setup, some results of testing these cameras, a mathematical model of uncertainty, and software that enables quick calculation of the uncertainty of temperature measurements with thermal cameras are presented in this paper.

  2. An intelligent space for mobile robot localization using a multi-camera system.

    PubMed

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-08-15

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  3. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    PubMed Central

    Rampinelli, Mariana.; Covre, Vitor Buback.; de Queiroz, Felippe Mendonça.; Vassallo, Raquel Frizera.; Bastos-Filho, Teodiano Freire.; Mazo, Manuel.

    2014-01-01

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization. PMID:25196009

  4. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    PubMed

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-08-30

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned air vehicle (UAV) during the landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flights demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for automatic, accurate UAV landing in Global Position System (GPS)-denied environments.

  5. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.
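
    As a hedged sketch of the zero-point step these calibrations feed into (generic aperture-photometry practice, not the MEO pipeline; all names and numbers are illustrative): after the linearity correction, reference stars with known synthetic EX-band magnitudes define a zero point that converts a meteor's background-subtracted counts into a bandpass magnitude.

        import numpy as np

        def zero_point(star_counts, star_synthetic_mags):
            """Median zero point from reference stars with known synthetic magnitudes."""
            instrumental = -2.5 * np.log10(np.asarray(star_counts, dtype=float))
            return float(np.median(np.asarray(star_synthetic_mags) - instrumental))

        def meteor_magnitude(meteor_counts, zp, linearity=lambda counts: counts):
            """Apply the empirically determined linearity correction, then the zero point."""
            return -2.5 * np.log10(linearity(meteor_counts)) + zp

        zp = zero_point([52000.0, 18500.0, 7300.0], [6.1, 7.2, 8.2])
        print(zp, meteor_magnitude(120000.0, zp))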

  6. System Architecture of the Dark Energy Survey Camera Readout Electronics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Theresa; /FERMILAB; Ballester, Otger

    2010-05-27

    The Dark Energy Survey makes use of a new camera, the Dark Energy Camera (DECam). DECam will be installed in the Blanco 4M telescope at Cerro Tololo Inter-American Observatory (CTIO). DECam is presently under construction and is expected to be ready for observations in the fall of 2011. The focal plane will make use of 62 2K×4K fully depleted charge-coupled devices (CCDs) for imaging and 12 2K×2K CCDs for guiding, alignment and focus. This paper will describe design considerations of the system, including the entire signal path used to read out the CCDs, the development of a custom crate and backplane, the overall grounding scheme, and early results of system tests.

  7. Streak camera receiver definition study

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.

    1990-01-01

    Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.

  8. The LSST Camera 500 watt -130 degC Mixed Refrigerant Cooling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowden, Gordon B.; Langton, Brian J.; /SLAC

    2014-05-28

    The LSST Camera has a higher cryogenic heat load than previous CCD telescope cameras due to its large size (634 mm diameter focal plane, 3.2 gigapixels) and its close-coupled front-end electronics operating at low temperature inside the cryostat. Various refrigeration technologies were considered for this telescope/camera environment, and MMR-Technology’s Mixed Refrigerant technology was chosen. A collaboration with that company was started in 2009. The system, based on a cluster of Joule-Thomson refrigerators running a special blend of mixed refrigerants, is described. Both the advantages and problems of applying this technology to telescope camera refrigeration are discussed. Test results from a prototype refrigerator running in a realistic telescope configuration are reported. Current and future stages of the development program are described.

  9. Gamma-ray tracking method for pet systems

    DOEpatents

    Mihailescu, Lucian; Vetter, Kai M.

    2010-06-08

    Gamma-ray tracking methods for use with granular, position-sensitive detectors identify the sequence of the interactions taking place in the detector and, hence, the position of the first interaction. The improved position resolution in finding the first interaction in the detection system provides a better definition of the direction of the gamma-ray photon and, hence, superior source image resolution. A PET system using such a method will have increased efficiency and position resolution.
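
    As a hedged sketch of the generic idea behind Compton-kinematics sequencing in granular detectors (not the specific patented method): for every candidate ordering of the recorded interactions, compare the scattering angle implied by the deposited energies with the angle implied by the hit geometry, and keep the ordering with the best agreement; its first hit is taken as the first interaction. The figure of merit and variable names below are illustrative assumptions.

        import itertools
        import numpy as np

        M_E_C2 = 511.0   # electron rest energy [keV]

        def figure_of_merit(positions, energies):
            """Sum of squared differences between energy- and geometry-derived cos(theta)
            at the intermediate hits of one candidate ordering (needs >= 3 hits).
            positions: (n, 3) array of hit coordinates; energies: deposited energies [keV]."""
            e_in = float(np.sum(energies))
            fom = 0.0
            for i in range(1, len(energies) - 1):
                e_in -= energies[i - 1]            # photon energy entering hit i
                e_out = e_in - energies[i]         # photon energy leaving hit i
                cos_e = 1.0 - M_E_C2 * (1.0 / e_out - 1.0 / e_in)
                d_in = positions[i] - positions[i - 1]
                d_out = positions[i + 1] - positions[i]
                cos_g = d_in @ d_out / (np.linalg.norm(d_in) * np.linalg.norm(d_out))
                fom += (cos_e - cos_g) ** 2
            return fom

        def first_interaction(positions, energies):
            """Index of the most likely first interaction among all candidate orderings."""
            best = min(itertools.permutations(range(len(energies))),
                       key=lambda order: figure_of_merit(positions[list(order)],
                                                         [energies[k] for k in order]))
            return best[0]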

  10. Utilization of Open Source Technology to Create Cost-Effective Microscope Camera Systems for Teaching.

    PubMed

    Konduru, Anil Reddy; Yelikar, Balasaheb R; Sathyashree, K V; Kumar, Ankur

    2018-01-01

    Open source technologies and mobile innovations have radically changed the way people interact with technology. These innovations and advancements have been used across various disciplines and have already had a significant impact. Microscopy, with its focus on visually appealing, contrasting colors for better appreciation of morphology, forms the core of disciplines such as pathology, microbiology, and anatomy. Here, learning happens with the aid of multi-head microscopes and digital camera systems for teaching larger groups and for organizing interactive sessions for students or faculty of other departments. The cost of original equipment manufacturer (OEM) camera systems is a limiting factor in bringing this useful technology to all locations. To avoid this, we used low-cost technologies such as the Raspberry Pi, Mobile High-Definition Link, and 3D-printed adapters to create portable camera systems. Adopting these open source technologies enabled us to connect any binocular or trinocular microscope to a projector or HD television at a fraction of the cost of OEM camera systems, with comparable quality. These systems, in addition to being cost-effective, also provide the added advantage of portability, thus offering much-needed flexibility at various teaching locations.

  11. Combined use of a priori data for fast system self-calibration of a non-rigid multi-camera fringe projection system

    NASA Astrophysics Data System (ADS)

    Stavroulakis, Petros I.; Chen, Shuxiao; Sims-Waterhouse, Danny; Piano, Samanta; Southon, Nicholas; Bointon, Patrick; Leach, Richard

    2017-06-01

    In non-rigid fringe projection 3D measurement systems, where either the camera or projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate. In fringe projection systems, it is common to use methods developed initially for photogrammetry to calibrate the camera(s) in the system in terms of extrinsic and intrinsic parameters. To calibrate the projector(s), an extra correspondence between a pre-calibrated camera and an image created by the projector is performed. These recalibration steps are usually time-consuming and involve measuring calibrated patterns on planes before measurement of the actual object can continue after a camera or projector has been moved, and hence they do not facilitate fast 3D measurement of objects when frequent experimental setup changes are necessary. By employing and combining a priori information via inverse rendering, on-board sensors and deep learning, and by leveraging a graphics processing unit (GPU), we assess a fine camera pose estimation method which is based on optimising the rendering of a model of the scene and the object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.

  12. Performance of Color Camera Machine Vision in Automated Furniture Rough Mill Systems

    Treesearch

    D. Earl Kline; Agus Widoyoko; Janice K. Wiedenbeck; Philip A. Araman

    1998-01-01

    The objective of this study was to evaluate the performance of color camera machine vision for lumber processing in a furniture rough mill. The study used 134 red oak boards to compare the performance of automated gang-rip-first rough mill yield based on a prototype color camera lumber inspection system developed at Virginia Tech with both estimated optimum rough mill...

  13. Improving Photometric Calibration of Meteor Video Camera Systems.

    PubMed

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-09-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼ 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼ 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
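
    As an illustration of the kind of zero-point determination described above (the array names and the use of a median estimator are assumptions for the sketch, not details from the record):

        # Sketch: derive a photometric zero point from reference stars whose
        # synthetic magnitudes in the camera bandpass are known.
        import numpy as np

        def zero_point(instrumental_counts, synthetic_mags):
            """instrumental_counts: background-subtracted, linearity-corrected
            counts for each reference star; synthetic_mags: catalog-derived
            magnitudes in the camera bandpass."""
            counts = np.asarray(instrumental_counts, dtype=float)
            mags = np.asarray(synthetic_mags, dtype=float)
            inst_mag = -2.5 * np.log10(counts)
            zp = np.median(mags - inst_mag)       # robust zero-point estimate
            scatter = np.std(mags - inst_mag)     # star-to-star systematic spread
            return zp, scatter

        # A meteor's apparent magnitude then follows as m = zp - 2.5*log10(counts).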

  14. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.

  15. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras, and marked reference points are often not available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras during a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, and a point cloud of the interior of a Volkswagen test car is created. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between

  16. SpectraCAM SPM: a camera system with high dynamic range for scientific and medical applications

    NASA Astrophysics Data System (ADS)

    Bhaskaran, S.; Baiko, D.; Lungu, G.; Pilon, M.; VanGorden, S.

    2005-08-01

    A scientific camera system having high dynamic range, designed and manufactured by Thermo Electron for scientific and medical applications, is presented. The newly developed CID820 image sensor with preamplifier-per-pixel technology is employed in this camera system. The 4-megapixel imaging sensor has a raw dynamic range of 82 dB. Each highly transparent pixel is based on a preamplifier-per-pixel architecture and contains two photogates for non-destructive readout (NDRO) of the photon-generated charge. Readout is achieved via parallel row processing with on-chip correlated double sampling (CDS). The imager is capable of true random pixel access with a maximum operating speed of 4 MHz. The camera controller consists of a custom camera signal processor (CSP) with an integrated 16-bit A/D converter and a PowerPC-based CPU running an embedded Linux operating system. The imager is cooled to -40 °C via a three-stage cooler to minimize dark current. The camera housing is sealed and is designed to maintain the CID820 imager in its evacuated chamber for at least 5 years. Thermo Electron has also developed custom software and firmware to drive the SpectraCAM SPM camera. Included in this firmware package is the new Extreme DR algorithm, which is designed to extend the effective dynamic range of the camera by several orders of magnitude, up to a 32-bit dynamic range. The RACID Exposure graphical user interface image analysis software runs on a standard PC that is connected to the camera via Gigabit Ethernet.

  17. Validation of the Microsoft Kinect® camera system for measurement of lower extremity jump landing and squatting kinematics.

    PubMed

    Eltoukhy, Moataz; Kelly, Adam; Kim, Chang-Young; Jun, Hyung-Pil; Campbell, Richard; Kuenze, Christopher

    2016-01-01

    Cost-effective, quantifiable assessment of lower extremity movement represents a potential improvement over standard tools for evaluation of injury risk. Ten healthy participants completed three trials of a drop jump, overhead squat, and single leg squat task. Peak hip and knee kinematics were assessed using an 8-camera BTS Smart 7000DX motion analysis system and the Microsoft Kinect® camera system. The agreement and consistency between both uncorrected and corrected Kinect kinematic variables and the BTS camera system were assessed using intraclass correlation coefficients. Peak sagittal plane kinematics measured using the Microsoft Kinect® camera system explained a significant amount of variance [Range(hip) = 43.5-62.8%; Range(knee) = 67.5-89.6%] in peak kinematics measured using the BTS camera system. Across tasks, peak knee flexion angle and peak hip flexion were found to be consistent and in agreement when the Microsoft Kinect® camera system was directly compared to the BTS camera system, but these values were improved following application of a corrective factor. The Microsoft Kinect® may not be an appropriate surrogate for traditional motion analysis technology, but it may have potential applications as a real-time feedback tool in pathological or high injury risk populations.

  18. Improving the color fidelity of cameras for advanced television systems

    NASA Astrophysics Data System (ADS)

    Kollarits, Richard V.; Gibbon, David C.

    1992-08-01

    In this paper we compare the accuracy of the color information obtained from television cameras using three and five wavelength bands. This comparison is based on real digital camera data. The cameras are treated as colorimeters whose characteristics are not linked to those of the display. The color matrices for both cameras were obtained by identical optimization procedures that minimized the color error. The color error for the five-band camera is 2.5 times smaller than that obtained from the three-band camera. Visual comparison of color matches on a characterized color monitor indicates that the five-band camera is capable of color measurements that produce no significant visual error on the display. Because the outputs from the five-band camera are reduced to the normal three channels conventionally used for display, there need be no increase in signal handling complexity outside the camera. Likewise, it is possible to construct a five-band camera using only three sensors, as in conventional cameras. The principal drawback of the five-band camera is the reduction in effective camera sensitivity by about 3/4 of an f-stop.
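
    A simple way to picture the optimization mentioned above is a least-squares fit of a linear matrix from camera band responses to display tristimulus values; the sketch below (the array shapes and the plain least-squares criterion are assumptions, whereas the published work minimized a perceptual color error) shows the idea:

        # Sketch: fit a 3xN color matrix that maps N camera band signals
        # (N = 3 or 5) to target tristimulus values for a set of test patches.
        import numpy as np

        def fit_color_matrix(camera_signals, target_xyz):
            """camera_signals: (num_patches, N) raw band responses.
            target_xyz: (num_patches, 3) measured tristimulus values.
            Returns M such that camera_signals @ M.T approximates target_xyz."""
            M_T, *_ = np.linalg.lstsq(camera_signals, target_xyz, rcond=None)
            return M_T.T  # shape (3, N)

        def apply_matrix(M, signals):
            return signals @ M.T

        # With five bands the extra degrees of freedom typically reduce the
        # residual color error compared with a 3x3 matrix, as reported above.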

  19. Studies on a silicon-photomultiplier-based camera for Imaging Atmospheric Cherenkov Telescopes

    NASA Astrophysics Data System (ADS)

    Arcaro, C.; Corti, D.; De Angelis, A.; Doro, M.; Manea, C.; Mariotti, M.; Rando, R.; Reichardt, I.; Tescaro, D.

    2017-12-01

    Imaging Atmospheric Cherenkov Telescopes (IACTs) represent a class of instruments dedicated to the ground-based observation of cosmic very-high-energy (VHE) gamma-ray emission, based on the detection of the Cherenkov radiation produced in the interaction of gamma rays with the Earth's atmosphere. One of the key elements of such instruments is a pixelized focal-plane camera consisting of photodetectors. To date, photomultiplier tubes (PMTs) have been the common choice given their high photon detection efficiency (PDE) and fast time response. Recently, silicon photomultipliers (SiPMs) have emerged as an alternative. This rapidly evolving technology has strong potential to become superior to PMT-based technology in terms of PDE, which would further improve the sensitivity of IACTs, while also offering a price reduction per square millimeter of detector area. We are working to develop a SiPM-based module for the focal-plane cameras of the MAGIC telescopes to probe this technology for IACTs with large focal-plane cameras of an area of a few square meters. We describe the solutions we are exploring in order to balance competitive performance with a minimal impact on the overall MAGIC camera design, using ray tracing simulations. We further present a comparative study of the overall light throughput based on Monte Carlo simulations and considering the properties of the major hardware elements of an IACT.

  20. Fisheye camera around view monitoring system

    NASA Astrophysics Data System (ADS)

    Feng, Cong; Ma, Xinjun; Li, Yuanyuan; Wu, Chenchen

    2018-04-01

    A 360-degree around view monitoring system is a key technology of advanced driver assistance systems; it helps the driver cover blind areas and has high application value. In this paper, we study the transformation relationships between multiple coordinate systems in order to generate a panoramic image in a unified car coordinate system. Firstly, the panoramic image is divided into four regions. Using the parameters obtained by calibration, pixels from the four fisheye images corresponding to the four sub-regions are mapped into the constructed panoramic image. On the basis of the 2D around view monitoring system, a 3D version is realized by reconstructing the projection surface. We then compare the 2D and 3D around view schemes in the unified coordinate system; the 3D scheme overcomes shortcomings of the traditional 2D scheme, such as a small visual field and prominent deformation of objects on the ground. Finally, the images collected by the fisheye cameras installed around the car body can be stitched into a 360-degree panoramic image.
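
    A condensed sketch of the per-camera mapping step (OpenCV's fisheye model is used here for illustration; the calibration values K, D and the ground-plane homography H are placeholders that would come from the calibration described above, not values from the record):

        # Sketch: undistort one fisheye view and warp it onto the ground plane
        # of a unified car coordinate system, ready to be pasted into its
        # quadrant of the panoramic bird's-eye image.
        import cv2
        import numpy as np

        def fisheye_to_birdseye(img, K, D, H, out_size):
            """K, D: fisheye intrinsics/distortion from calibration.
            H: homography from the undistorted image plane to the ground plane.
            out_size: (width, height) of this camera's region in the panorama."""
            h, w = img.shape[:2]
            map1, map2 = cv2.fisheye.initUndistortRectifyMap(
                K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
            undistorted = cv2.remap(img, map1, map2,
                                    interpolation=cv2.INTER_LINEAR)
            return cv2.warpPerspective(undistorted, H, out_size)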

  1. Orbital docking system centerline color television camera system test

    NASA Technical Reports Server (NTRS)

    Mongan, Philip T.

    1993-01-01

    A series of tests was run to verify that the design of the centerline color television camera (CTVC) system is adequate optically for the STS-71 Space Shuttle Orbiter docking mission with the Mir space station. In each test, a mockup of the Mir consisting of hatch, docking mechanism, and docking target was positioned above the Johnson Space Center's full fuselage trainer, which simulated the Orbiter with a mockup of the external airlock and docking adapter. Test subjects viewed the docking target through the CTVC under 30 different lighting conditions and evaluated target resolution, field of view, light levels, light placement, and methods of target alignment. Test results indicate that the proposed design will provide adequate visibility through the centerline camera for a successful docking, even with a reasonable number of light failures. It is recommended that the flight deck crew have individual switching capability for docking lights to provide maximum shadow management and that centerline lights be retained to deal with light failures and user preferences. Procedures for light management should be developed and target alignment aids should be selected during simulated docking runs.

  2. IR observations in gamma-ray blazars

    NASA Technical Reports Server (NTRS)

    Mahoney, W. A.; Gautier, T. N.; Ressler, M. E.; Wallyn, P.; Durouchoux, P.; Higdon, J. C.

    1997-01-01

    The infrared photometric and spectral observation of five gamma ray blazars in coordination with the energetic gamma ray experiment telescope (EGRET) onboard the Compton Gamma Ray Observatory is reported. The infrared measurements were made with a Cassegrain infrared camera and the mid-infrared large well imager at the Mt. Palomar 5 m telescope. The emphasis is on the three blazars observed simultaneously by EGRET and the ground-based telescope during viewing period 519. In addition to the acquisition of broadband spectral measurements for direct correlation with the 100 MeV EGRET observations, near infrared images were obtained, enabling a search for intra-day variability to be carried out.

  3. Illumination box and camera system

    DOEpatents

    Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.; Klunder, Gregory L.

    2002-01-01

    A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.

  4. An electronic pan/tilt/zoom camera system

    NASA Technical Reports Server (NTRS)

    Zimmermann, Steve; Martin, H. Lee

    1991-01-01

    A camera system for omnidirectional image viewing applications that provides pan, tilt, zoom, and rotational orientation within a hemispherical field of view (FOV) using no moving parts was developed. The imaging device is based on the principle that the circular image of an entire hemispherical FOV produced by a fisheye lens can be mathematically corrected using high-speed electronic circuitry. An incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical FOV without the need for any mechanical mechanisms. A programmable transformation processor provides flexible control over viewing situations. Multiple images, each with different image magnifications and pan, tilt, and rotation parameters, can be obtained from a single camera. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment.
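
    The dewarping at the heart of such a device can be sketched as a lookup from each output pixel's viewing ray to a fisheye image coordinate; the equidistant projection model and the parameter names below are illustrative assumptions, not the patented transform:

        # Sketch: map a ray direction (after applying the requested pan/tilt)
        # to pixel coordinates in an equidistant fisheye image (r = f * theta).
        import numpy as np

        def ray_to_fisheye_pixel(ray, f_pix, cx, cy):
            """ray: unit 3-vector in camera coordinates, +z along the lens axis.
            f_pix: fisheye focal length in pixels; (cx, cy): image center."""
            x, y, z = ray
            theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle from optical axis
            phi = np.arctan2(y, x)                    # azimuth around the axis
            r = f_pix * theta                         # equidistant fisheye model
            return cx + r * np.cos(phi), cy + r * np.sin(phi)

        # An electronic pan/tilt/zoom view is built by tracing one such ray per
        # output pixel and sampling the fisheye image at the returned coordinates.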

  5. Intelligent person identification system using stereo camera-based height and stride estimation

    NASA Astrophysics Data System (ADS)

    Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo

    2005-05-01

    In this paper, a stereo camera-based intelligent person identification system is suggested. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair using a threshold in the YCbCr color model; by carrying out a correlation between this segmented face area and the right input image, the location coordinates of the target face are acquired, and these values are used to control the pan/tilt system through a modified PID-based recursive controller. Also, by using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system can be calculated through a triangulation method. Using this calculated distance and the pan and tilt angles, the target's real position in world space can be acquired, and from it the height and stride values can finally be extracted. Experiments with video images of 16 moving persons show that a person could be identified with these extracted height and stride parameters.
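
    A bare-bones version of the triangulation and position recovery described above might look as follows (the pinhole, parallel-camera geometry and the variable names are simplifying assumptions for illustration):

        # Sketch: recover the distance to a matched face and its position in a
        # world frame from stereo disparity plus the pan/tilt angles of the head.
        import numpy as np

        def stereo_distance(focal_px, baseline_m, x_left, x_right):
            """Distance along the optical axis for parallel stereo cameras."""
            disparity = float(x_left - x_right)        # pixels
            return focal_px * baseline_m / disparity   # meters

        def world_position(distance, pan_rad, tilt_rad):
            """Convert range plus pan/tilt of the camera head to world XYZ."""
            x = distance * np.cos(tilt_rad) * np.sin(pan_rad)
            y = distance * np.cos(tilt_rad) * np.cos(pan_rad)
            z = distance * np.sin(tilt_rad)
            return np.array([x, y, z])

        # Height follows from the vertical (z) coordinates of head and feet;
        # stride from the horizontal separation of successive foot positions.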

  6. Network-linked long-time recording high-speed video camera system

    NASA Astrophysics Data System (ADS)

    Kimura, Seiji; Tsuji, Masataka

    2001-04-01

    This paper describes a network-oriented, long-recording-time high-speed digital video camera system that utilizes an HDD (Hard Disk Drive) as the recording medium. Semiconductor memories (DRAM, etc.) are the most common image data recording media in existing high-speed digital video cameras. They are extensively used because of their advantage of high-speed writing and reading of picture data. The drawback is that their recording time is limited to only several seconds because the data volume is very large. A recording time of several seconds is sufficient for many applications; however, a much longer recording time is required in applications where an exact prediction of trigger timing is hard to make. In recent years, the recording density of the HDD has been dramatically improved, which has attracted more attention to its value as a long-recording-time medium. We conceived the idea that a compact system capable of long recording times could be built if the HDD were used as the memory unit for high-speed digital image recording. However, the data rate of such a system, capable of recording 640 x 480 pixel resolution pictures at 500 frames per second (fps) with 8-bit grayscale, is 153.6 Mbyte/s, which is far beyond the writing speed of the commonly used HDD. We therefore developed a dedicated image compression system and verified its capability to lower the data rate from the digital camera to match the HDD writing rate.
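
    The stated data rate and the compression ratio it implies can be checked with a few lines of arithmetic (the assumed HDD write speed is an illustrative figure for drives of that era, not a number from the record):

        # Sketch: uncompressed data rate of the camera and the compression ratio
        # needed to sustain recording to an HDD.
        width, height = 640, 480          # pixels
        fps = 500                         # frames per second
        bytes_per_pixel = 1               # 8-bit grayscale

        raw_rate = width * height * fps * bytes_per_pixel / 1e6   # MB/s
        print(f"raw data rate: {raw_rate:.1f} MB/s")              # 153.6 MB/s

        hdd_write_speed = 30.0   # MB/s, assumed sustained rate (illustrative)
        print(f"required compression ratio: {raw_rate / hdd_write_speed:.1f}:1")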

  7. Directional Unfolded Source Term (DUST) for Compton Cameras.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Dean J.; Horne, Steven M.; O'Brien, Sean

    2018-03-01

    A Directional Unfolded Source Term (DUST) algorithm was developed to enable improved spectral analysis capabilities using data collected by Compton cameras. Achieving this objective required modification of the detector response function in the Gamma Detector Response and Analysis Software (GADRAS). Experimental data that were collected in support of this work include measurements of calibration sources at a range of separation distances and cylindrical depleted uranium castings.

  8. FPGA Based Adaptive Rate and Manifold Pattern Projection for Structured Light 3D Camera System

    PubMed Central

    Lee, Sukhan

    2018-01-01

    The quality of the captured point cloud and the scanning speed of a structured light 3D camera system depend on the system's ability to handle object surfaces with large reflectance variation, in a trade-off with the required number of projected patterns. In this paper, we propose and implement a flexible embedded framework that is capable of triggering the camera once or multiple times so that single or multiple projections are captured within a single camera exposure setting. This allows the 3D camera system to synchronize the camera and projector even for mismatched frame rates, so that different types of patterns can be projected for applications with different scan speeds. As a result, the system captures a high-quality 3D point cloud even for surfaces with large reflectance variation while achieving a high scan speed. The proposed framework is implemented on a Field Programmable Gate Array (FPGA), where the camera trigger is adaptively generated such that the position and number of triggers are automatically determined according to the camera exposure settings. In other words, the projection frequency is adaptive to different scanning applications without altering the architecture. In addition, the proposed framework is unique in that it does not require any external memory for storage, because pattern pixels are generated in real time, which minimizes the complexity and size of the application-specific integrated circuit (ASIC) design and implementation. PMID:29642506
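
    The adaptive trigger idea can be pictured with a small scheduling calculation (the parameter names and the even spacing of triggers within the exposure window are assumptions for illustration, not the FPGA implementation itself):

        # Sketch: decide how many projector patterns fit inside one camera
        # exposure and where the corresponding trigger pulses should be placed.
        def schedule_triggers(exposure_ms, pattern_period_ms):
            """Return the number of triggers and their offsets (ms) within the
            exposure window, assuming evenly spaced back-to-back patterns."""
            n = max(1, int(exposure_ms // pattern_period_ms))
            offsets = [i * pattern_period_ms for i in range(n)]
            return n, offsets

        # Example: a 33 ms exposure with 8 ms patterns accommodates 4 projections.
        print(schedule_triggers(33.0, 8.0))   # (4, [0.0, 8.0, 16.0, 24.0])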

  9. The structure, logic of operation and distinctive features of the system of triggers and counting signals formation for gamma-telescope GAMMA-400

    NASA Astrophysics Data System (ADS)

    Topchiev, N. P.; Galper, A. M.; Arkhangelskiy, A. I.; Arkhangelskaja, I. V.; Kheymits, M. D.; Suchkov, S. I.; Yurkin, Y. T.

    2017-01-01

    The scientific project GAMMA-400 (Gamma Astronomical Multifunctional Modular Apparatus) belongs to a new generation of space observatories intended to perform an indirect search for signatures of dark matter in cosmic-ray fluxes and to measure the characteristics of diffuse gamma-ray emission, gamma rays from the Sun during periods of solar activity, gamma-ray bursts, extended and point gamma-ray sources, and electron/positron and cosmic-ray nuclei fluxes up to the TeV energy region. The GAMMA-400 gamma-ray telescope represents the core of the scientific complex. The system of triggers and counting signals formation of the GAMMA-400 gamma-ray telescope constitutes a pipelined processor structure which collects data from the gamma-ray telescope subsystems and produces summary information used in forming the trigger decision for each event. The system design is based on the use of state-of-the-art reconfigurable logic devices and fast data links. The basic structure, logic of operation, and distinctive features of the system are presented.

  10. Intercomparison of SO2 camera systems for imaging volcanic gas plumes

    USGS Publications Warehouse

    Kern, Christoph; Lübcke, Peter; Bobrowski, Nicole; Campion, Robin; Mori, Toshiya; Smekens, Jean-Francois; Stebel, Kerstin; Tamburello, Giancarlo; Burton, Mike; Platt, Ulrich; Prata, Fred

    2015-01-01

    SO2 camera systems are increasingly being used to image volcanic gas plumes. The ability to derive SO2 emission rates directly from the acquired imagery at high time resolution allows volcanic process studies that incorporate other high time-resolution datasets. Though the general principles behind the SO2 camera have remained the same for a number of years, recent advances in CCD technology and an improved understanding of the physics behind the measurements have driven a continuous evolution of the camera systems. Here we present an intercomparison of seven different SO2 cameras. In the first part of the experiment, the various technical designs are compared and the advantages and drawbacks of individual design options are considered. Though the ideal design was found to be dependent on the specific application, a number of general recommendations are made. Next, a time series of images recorded by all instruments at Stromboli Volcano (Italy) is compared. All instruments were easily able to capture SO2 clouds emitted from the summit vents. Quantitative comparison of the SO2 load in an individual cloud yielded an intra-instrument precision of about 12%. From the imagery, emission rates were then derived according to each group's standard retrieval process. A daily average SO2 emission rate of 61 ± 10 t/d was calculated. Due to differences in spatial integration methods and plume velocity determination, the time-dependent progression of SO2 emissions varied significantly among the individual systems. However, integration over distinct degassing events yielded comparable SO2 masses. Based on the intercomparison data, we find an approximate 1-sigma precision of 20% for the emission rates derived from the various SO2 cameras. Though it may still be improved in the future, this is currently within the typical accuracy of the measurement and is considered sufficient for most applications.
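
    The emission-rate retrieval examined in the intercomparison boils down to integrating the SO2 column densities along a cross-section of the plume and multiplying by the plume transport speed; a schematic version (the variable names and unit handling are assumptions for the sketch) is:

        # Sketch: SO2 emission rate from one image cross-section of the plume.
        import numpy as np

        def emission_rate_t_per_day(column_densities, pixel_width_m, plume_speed_ms):
            """column_densities: SO2 column along a line perpendicular to the
            plume axis, in kg/m^2 per pixel; pixel_width_m: ground size of a
            pixel; plume_speed_ms: plume transport speed in m/s."""
            integrated = np.sum(column_densities) * pixel_width_m   # kg/m
            rate_kg_s = integrated * plume_speed_ms                 # kg/s
            return rate_kg_s * 86400.0 / 1000.0                     # tonnes/day

        # Differences in how the cross-section is chosen and how the plume speed
        # is estimated are the main reasons the retrieved time series differ.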

  11. Ringfield lithographic camera

    DOEpatents

    Sweatt, William C.

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  12. Sub-Camera Calibration of a Penta-Camera

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, a dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centre are satisfactorily, but not very strongly, connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters, or in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras with a size exceeding 5 μm, even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors for corresponding
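
    For reference, the radial symmetric distortion discussed above is commonly modelled as an even-order polynomial in the radial distance from the principal point; the sketch below uses that widely used form with hypothetical coefficients, not the calibration values of the DigiCAM:

        # Sketch: apply a radial symmetric distortion correction to image
        # coordinates given relative to the principal point (units: mm).
        def correct_radial(x, y, k1, k2):
            """x, y: image coordinates relative to the principal point.
            k1, k2: radial distortion coefficients (hypothetical values here)."""
            r2 = x**2 + y**2
            factor = 1.0 + k1 * r2 + k2 * r2**2
            return x * factor, y * factor

        # Example with illustrative coefficients for a point near the corner.
        print(correct_radial(10.0, 10.0, k1=1e-5, k2=-1e-9))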

  13. SPARTAN Near-IR Camera | SOAR

    Science.gov Websites

    SPARTAN Near-IR Camera System Overview: The Spartan Infrared Camera is a high spatial resolution near-IR imager at the SOAR telescope. Spartan has a focal plane consisting of four

  14. A survey of camera error sources in machine vision systems

    NASA Astrophysics Data System (ADS)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.

  15. Solid state television camera

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The design, fabrication, and tests of a solid state television camera using a new charge-coupled imaging device are reported. An RCA charge-coupled device arranged in a 512 by 320 format and directly compatible with EIA format standards was the sensor selected. This is a three-phase, sealed surface-channel array that has 163,840 sensor elements, which employs a vertical frame transfer system for image readout. Included are test results of the complete camera system, circuit description and changes to such circuits as a result of integration and test, maintenance and operation section, recommendations to improve the camera system, and a complete set of electrical and mechanical drawing sketches.

  16. Establishment of Imaging Spectroscopy of Nuclear Gamma-Rays based on Geometrical Optics.

    PubMed

    Tanimori, Toru; Mizumura, Yoshitaka; Takada, Atsushi; Miyamoto, Shohei; Takemura, Taito; Kishimoto, Tetsuro; Komura, Shotaro; Kubo, Hidetoshi; Kurosawa, Shunsuke; Matsuoka, Yoshihiro; Miuchi, Kentaro; Mizumoto, Tetsuya; Nakamasu, Yuma; Nakamura, Kiseki; Parker, Joseph D; Sawano, Tatsuya; Sonoda, Shinya; Tomono, Dai; Yoshikawa, Kei

    2017-02-03

    Since the discovery of nuclear gamma rays, their imaging has been limited to pseudo imaging, such as the Compton Camera (CC) and the coded mask. Pseudo imaging does not keep physical information (intensity, or brightness in optics) along a ray, and thus is capable of no more than qualitative imaging of bright objects. To attain quantitative imaging, cameras that realize geometrical optics are essential, which, for nuclear MeV gammas, is only possible via complete reconstruction of the Compton process. Recently we have revealed that the "Electron Tracking Compton Camera" (ETCC) provides a well-defined Point Spread Function (PSF). The information of an incoming gamma is kept along a ray with the PSF, which is equivalent to geometrical optics. Here we present an imaging-spectroscopic measurement with the ETCC. Our results highlight the intrinsic difficulty of CCs in performing accurate imaging, and show that the ETCC surmounts this problem. The imaging capability also helps the ETCC suppress the noise level dramatically, by about 3 orders of magnitude, without a shielding structure. Furthermore, full reconstruction of the Compton process with the ETCC provides spectra free of Compton edges. These results mark the first proper imaging of nuclear gammas based on genuine geometrical optics.
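
    The complete event reconstruction referred to above relies on standard Compton kinematics; a minimal sketch (the function and variable names are illustrative, and the recoil-electron direction handling of the actual ETCC analysis is omitted) is:

        # Sketch: Compton scattering angle from the measured energies of the
        # scattered gamma and the recoil electron (energies in MeV).
        import numpy as np

        M_E_C2 = 0.511  # electron rest energy, MeV

        def scatter_angle(e_gamma_scattered, e_electron):
            """Angle between incident and scattered gamma from energy deposits."""
            e_incident = e_gamma_scattered + e_electron
            cos_theta = 1.0 - M_E_C2 * (1.0 / e_gamma_scattered - 1.0 / e_incident)
            return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

        # With the recoil-electron track measured as well, the incident direction
        # is fixed to a point rather than a cone, which is what yields a true PSF.
        print(scatter_angle(0.4, 0.262))  # e.g. a 662 keV gamma from Cs-137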

  17. Usability of a Wearable Camera System for Dementia Family Caregivers

    PubMed Central

    Matthews, Judith T.; Lingler, Jennifer H.; Campbell, Grace B.; Hunsaker, Amanda E.; Hu, Lu; Pires, Bernardo R.; Hebert, Martial; Schulz, Richard

    2015-01-01

    Health care providers typically rely on family caregivers (CG) of persons with dementia (PWD) to describe difficult behaviors manifested by their underlying disease. Although invaluable, such reports may be selective or biased during brief medical encounters. Our team explored the usability of a wearable camera system with 9 caregiving dyads (CGs: 3 males, 6 females, 67.00 ± 14.95 years; PWDs: 2 males, 7 females, 80.00 ± 3.81 years, MMSE 17.33 ± 8.86) who recorded 79 salient events over a combined total of 140 hours of data capture, from 3 to 7 days of wear per CG. Prior to using the system, CGs assessed its benefits to be worth the invasion of privacy; post-wear privacy concerns did not differ significantly. CGs rated the system easy to learn to use, although cumbersome and obtrusive. Few negative reactions by PWDs were reported or evident in resulting video. Our findings suggest that CGs can and will wear a camera system to reveal their daily caregiving challenges to health care providers. PMID:26288888

  18. Gamma-ray detector guidance of breast cancer therapy

    NASA Astrophysics Data System (ADS)

    Ravi, Ananth

    2009-12-01

    One method to provide intraoperative seed localization is through the use of a gamma-camera system. Monte Carlo simulations were conducted of a Cadmium Zinc Telluride (CZT) gamma-camera system and a realistic model of a breast with 3 layers of seeds distributed according to the pre-implant treatment plan of a typical patient. The simulations showed that a gamma-camera was able to localize the seeds with a maximum error of 2.0 mm within 20 seconds. An experimental prototype was designed and constructed to validate these promising Monte Carlo results. Using a 64-pixel linear-array CZT detector fitted with a custom-built brass collimator, images were acquired of a physical phantom similar to the model used in the Monte Carlo simulations. The experimental prototype was able to reliably detect the seeds within 30 seconds with a median error in localization of 1 mm. The results from this thesis suggest that gamma-ray detecting technology may be able to provide significant improvements in guidance of breast cancer therapies and, thus, potentially improved therapeutic outcomes.

  19. [Diagnostic use of positron emission tomography in France: from the coincidence gamma-camera to mobile hybrid PET/CT devices].

    PubMed

    Talbot, Jean-Noël

    2010-11-01

    Positron emission tomography (PET) is a well-established medical imaging method. PET is increasingly used for diagnostic purposes, especially in oncology. The most widely used radiopharmaceutical is FDG, a glucose analogue. Other radiopharmaceuticals have recently been registered or are in development. We outline technical improvements of PET machines during more than a decade of clinical use in France. Even though image quality has improved considerably and PET-CT hybrid machines have emerged, spending per examination has remained remarkably constant. Replacement and maintenance costs have remained in the range of 170-190 Euros per examination since 1997, whether early CDET gamma cameras or the latest time-of-flight PET/CT devices are used. This is mainly due to shorter acquisition times and more efficient use of FDG. New reimbursement rates for PET/CT are needed in France in order to favor regular acquisition of state-of-the-art devices. One major development is the coupling of PET and MR imaging.

  20. Registration of an on-axis see-through head-mounted display and camera system

    NASA Astrophysics Data System (ADS)

    Luo, Gang; Rensing, Noa M.; Weststrate, Evan; Peli, Eli

    2005-02-01

    An optical see-through head-mounted display (HMD) system integrating a miniature camera that is aligned with the user's pupil is developed and tested. Such an HMD system has a potential value in many augmented reality applications, in which registration of the virtual display to the real scene is one of the critical aspects. The camera alignment to the user's pupil results in a simple yet accurate calibration and a low registration error across a wide range of depth. In reality, a small camera-eye misalignment may still occur in such a system due to the inevitable variations of HMD wearing position with respect to the eye. The effects of such errors are measured. Calculation further shows that the registration error as a function of viewing distance behaves nearly the same for different virtual image distances, except for a shift. The impact of prismatic effect of the display lens on registration is also discussed.

  1. A novel multi-digital camera system based on tilt-shift photography technology.

    PubMed

    Sun, Tao; Fang, Jun-Yong; Zhao, Dong; Liu, Xue; Tong, Qing-Xi

    2015-03-31

    Multi-digital camera systems (MDCSs) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSCs, camera model: Nikon D90), was developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for TSCs, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of TSCs, for example the edge distortion of a TSC, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional MDCS (MADC II) and the proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of the photogrammetric products.
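
    The quoted spatial resolution can be related to flight height through the usual ground sample distance relation; the pixel pitch and focal length below are hypothetical values for illustration only, not specifications taken from the record:

        # Sketch: ground sample distance (GSD) for a nadir-looking frame camera.
        def ground_sample_distance(pixel_pitch_m, flight_height_m, focal_length_m):
            """GSD = pixel pitch * height / focal length (all in meters)."""
            return pixel_pitch_m * flight_height_m / focal_length_m

        # Hypothetical example: a ~5.5 micron pixel and a 28 mm lens at 800 m
        # give a GSD on the order of the 0.15 m resolution reported above.
        print(ground_sample_distance(5.5e-6, 800.0, 0.028))   # ~0.157 m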

  2. Line following using a two camera guidance system for a mobile robot

    NASA Astrophysics Data System (ADS)

    Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.

    1996-10-01

    Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space, and defense. A mobile robot was designed for the 1996 Automated Unmanned Vehicle Society competition, held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for line following. The algorithm images two windows and locates the line centroid in each; with the knowledge that these points lie on the ground plane, a mathematical and geometrical relationship between the image coordinates of the points and their corresponding ground coordinates is established. The angle of the line and its minimum distance from the robot centroid are then calculated and used in the steering control. Two cameras are mounted on the robot, one on each side. One camera guides the robot, and when it loses track of the line on its side, the robot control system automatically switches to the other camera. The test bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.
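
    A compact sketch of that geometry step (the image-to-ground mapping is represented here by a planar homography H, which is an assumption about how the calibration is expressed, not a detail from the paper):

        # Sketch: project two window centroids onto the ground plane and derive
        # the line heading and its lateral offset from the robot centroid.
        import numpy as np

        def to_ground(H, u, v):
            """Map an image point (u, v) to ground coordinates via homography H."""
            gx, gy, gw = H @ np.array([u, v, 1.0])
            return np.array([gx / gw, gy / gw])

        def line_angle_and_offset(H, centroid_a, centroid_b, robot_xy=(0.0, 0.0)):
            p1 = to_ground(H, *centroid_a)
            p2 = to_ground(H, *centroid_b)
            d = p2 - p1
            angle = np.arctan2(d[1], d[0])            # heading of the line
            rx, ry = robot_xy
            # Signed perpendicular distance from the robot centroid to the line.
            offset = (d[0] * (ry - p1[1]) - d[1] * (rx - p1[0])) / np.linalg.norm(d)
            return angle, offset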

  3. Gamma-Ray Background Variability in Mobile Detectors

    NASA Astrophysics Data System (ADS)

    Aucott, Timothy John

    Gamma-ray background radiation significantly reduces detection sensitivity when searching for radioactive sources in the field, such as in wide-area searches for homeland security applications. Mobile detector systems in particular must contend with a variable background that is not necessarily known or even measurable a priori. This work will present measurements of the spatial and temporal variability of the background, with the goal of merging gamma-ray detection, spectroscopy, and imaging with contextual information--a "nuclear street view" of the ubiquitous background radiation. The gamma-ray background originates from a variety of sources, both natural and anthropogenic. The dominant sources in the field are the primordial isotopes potassium-40, uranium-238, and thorium-232, as well as their decay daughters. In addition to the natural background, many artificially-created isotopes are used for industrial or medical purposes, and contamination from fission products can be found in many environments. Regardless of origin, these backgrounds will reduce detection sensitivity by adding both statistical as well as systematic uncertainty. In particular, large detector arrays will be limited by the systematic uncertainty in the background and will suffer from a high rate of false alarms. The goal of this work is to provide a comprehensive characterization of the gamma-ray background and its variability in order to improve detection sensitivity and evaluate the performance of mobile detectors in the field. Large quantities of data are measured in order to study their performance at very low false alarm rates. Two different approaches, spectroscopy and imaging, are compared in a controlled study in the presence of this measured background. Furthermore, there is additional information that can be gained by correlating the gamma-ray data with contextual data streams (such as cameras and global positioning systems) in order to reduce the variability in the background

  4. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  5. A remote camera operation system using a marker attached cap

    NASA Astrophysics Data System (ADS)

    Kawai, Hironori; Hama, Hiromitsu

    2005-12-01

    In this paper, we propose a convenient system to control a remote camera according to the eye-gazing direction of the operator, which is approximately obtained by calculating the face direction by means of image processing. The operator puts a marker-attached cap on his head, and the system takes an image of the operator from above with a single video camera. Three markers are set on the cap; three is the minimum number needed to calculate the tilt angle of the head. The more markers are used, the more robust the system becomes to occlusion and the wider the tolerated moving range of the head. The markers must not lie on any straight line in three-dimensional space. To compensate for changes in marker color due to illumination conditions, the threshold for marker extraction is decided adaptively using a k-means clustering method. The system was implemented with MATLAB on a personal computer, and real-time operation was realized. The experimental results confirmed the robustness of the system, and the tilt and pan angles of the head could be calculated with sufficient accuracy for use.
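
    One way to picture the adaptive thresholding step is to cluster pixel colors and keep the cluster closest to the nominal marker color; the sketch below uses scikit-learn's KMeans and a hypothetical reference color, neither of which is specified in the record (the original work used MATLAB):

        # Sketch: adaptively segment marker pixels by k-means clustering of colors.
        import numpy as np
        from sklearn.cluster import KMeans

        def marker_mask(image_rgb, reference_rgb, k=3):
            """image_rgb: HxWx3 array; reference_rgb: nominal marker color.
            Returns a boolean mask of the cluster whose center is closest to it."""
            pixels = image_rgb.reshape(-1, 3).astype(float)
            km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
            distances = np.linalg.norm(
                km.cluster_centers_ - np.asarray(reference_rgb, dtype=float), axis=1)
            marker_cluster = int(np.argmin(distances))
            return (km.labels_ == marker_cluster).reshape(image_rgb.shape[:2])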

  6. Evaluation of multispectral plenoptic camera

    NASA Astrophysics Data System (ADS)

    Meng, Lingfei; Sun, Ting; Kosoglow, Rich; Berkner, Kathrin

    2013-01-01

    Plenoptic cameras enable capture of a 4D lightfield, allowing digital refocusing and depth estimation from data captured with a compact portable camera. Whereas most of the work on plenoptic camera design has been based on a simplistic geometric-optics characterization of the optical path only, little work has been done on optimizing end-to-end system performance for a specific application. Such design optimization requires design tools that include careful parameterization of the main lens elements as well as the microlens array and sensor characteristics. In this paper we are interested in evaluating the performance of a multispectral plenoptic camera, i.e., a camera with spectral filters inserted into the aperture plane of the main lens. Such a camera enables single-snapshot spectral data acquisition [1-3]. We first describe in detail an end-to-end imaging system model for a spectrally coded plenoptic camera that we briefly introduced in [4]. Different performance metrics are defined to evaluate the spectral reconstruction quality. We then present a prototype developed from a modified DSLR camera containing a lenslet array on the sensor and a filter array in the main lens. Finally, we evaluate the spectral reconstruction performance of the spectral plenoptic camera based on both simulation and measurements obtained from the prototype.

  7. Nuclear medicine imaging system

    DOEpatents

    Bennett, G.W.; Brill, A.B.; Bizais, Y.J.C.; Rowe, R.W.; Zubal, I.G.

    1983-03-11

    It is an object of this invention to provide a nuclear imaging system having the versatility to do positron annihilation studies, rotating single or opposed camera gamma emission studies, and orthogonal gamma emission studies. It is a further object of this invention to provide an imaging system having the capability for orthogonal dual multipinhole tomography. It is another object of this invention to provide a nuclear imaging system in which all available energy data, as well as patient physiological data, are acquired simultaneously in list mode.

  8. A portable high-speed camera system for vocal fold examinations.

    PubMed

    Hertegård, Stellan; Larsson, Hans

    2014-11-01

    In this article, we present a new portable low-cost system for high-speed examinations of the vocal folds. Analysis of glottal vibratory parameters from the high-speed recordings is compared with videostroboscopic recordings. The high-speed system is built around a Fastec 1 monochrome camera, which is used with newly developed software, High-Speed Studio (HSS). The HSS has options for video/image recording, contains a database, and has a set of analysis options. The Fastec/HSS system has been used clinically since 2011 in more than 2000 patient examinations and recordings. The Fastec 1 camera has sufficient time resolution (≥4000 frames/s) and light sensitivity (ISO 3200) to produce images for detailed analyses of parameters pertinent to vocal fold function. The camera can be used with both rigid and flexible endoscopes. The HSS software includes options for analyses of glottal vibrations, such as kymogram, phase asymmetry, glottal area variation, open and closed phase, and angle of vocal fold abduction. It can also be used for separate analysis of the left and right vocal fold movements, including maximum speed during opening and closing, a parameter possibly related to vocal fold elasticity. A blinded analysis of 32 patients with various voice disorders examined with both the Fastec/HSS system and videostroboscopy showed that the high-speed recordings were significantly better for the analysis of glottal parameters (eg, mucosal wave and vibration asymmetry). The monochrome high-speed system can be used in daily clinical work within normal clinical time limits for patient examinations. A detailed analysis can be made of voice disorders and laryngeal pathology at a relatively low cost. Copyright © 2014 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  9. Gamma-Ray Imaging for Explosives Detection

    NASA Technical Reports Server (NTRS)

    deNolfo, G. A.; Hunter, S. D.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.

    2008-01-01

    We describe a gamma-ray imaging camera (GIC) for active interrogation of explosives being developed by NASA/GSFC and NSWC/Carderock. The GIC is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics. The 3-DTI, a large-volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident directions of gamma rays with E_γ > 6 MeV are reconstructed from the momenta and energies of the electron-positron pairs resulting from interactions in the 3-DTI volume. The optimization of the 3-DTI technology for this specific application and the performance of the GIC in laboratory tests are presented.

  10. Breathing synchronized assessment of the chest hemodynamics: application to gamma and MR angiography

    NASA Astrophysics Data System (ADS)

    Eclancher, Bernard; Demangeat, Jean-Louis; Germain, Philippe; Baruthio, Joseph

    2003-05-01

    The project was to assess, by gamma and MR angiography, the bulk variations of chest blood volume related to deep and slow breathing movements. The acquisitions were performed at constant intervals on the widely moving system, without cardiac gating. Two sufficiently fast modalities were used: a gamma-stethoscope working at 30 ms intervals for bulk volumetric detection (of 99Tc-labelled red cells), and MR imaging at 0.5 s intervals, which depicted displacements well but did not yet perform true angiography. The third modality, yielding quantitative imaging, was the scintillation gamma camera, which required 30 s signal acquisitions for each image. Frames were acquired at 1 s intervals for up to 30 breathing cycles and later sorted with double (inspiration and expiration) synchronization for the reconstruction of an average breathing cycle. Convergent results were obtained from the three angiographic modalities, confirming that deep breathing movements produced inspiratory increases in bulk blood volume and caudal-median displacement of the heart and great vessels, and expiratory decreases in blood volume and cranial-left displacement of the heart and great vessels. Deep and slow breathing contributed effectively to thoracic blood pumping. The design of a 64x64 channel collimator has been undertaken to speed up the scintillation camera imaging acquisitions.

  11. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    NASA Astrophysics Data System (ADS)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

    Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product, and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bits per channel, with an exposure time that can also be varied from 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approximately 2 seconds for an image of 2048 by 2048 pixels (12 MByte), 8 seconds for an image of 4096 by 4096 pixels (48 MByte), and 40 seconds for an image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. When the eyelike is used as a camera body, most commercial lenses can be connected to it via existing lens adaptors. On the other hand, the eyelike can be used as a back for most commercial 4" by 5" view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.

  12. Zγγγ → 0 Processes in SANC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bardin, D. Yu., E-mail: bardin@nu.jinr.ru; Kalinovskaya, L. V., E-mail: kalinov@nu.jinr.ru; Uglov, E. D., E-mail: corner@nu.jinr.ru

    2013-11-15

    We describe the analytic and numerical evaluation of the γγ → γZ process cross section and the Z → γγγ decay rate within the SANC system multi-channel approach at the one-loop accuracy level, with all masses taken into account. The corresponding package for numerical calculations is presented. To check the correctness of the results, we make a comparison with other independent calculations.

  13. An on-line calibration algorithm for external parameters of visual system based on binocular stereo cameras

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua

    2014-11-01

    Stereo vision is key in visual measurement, robot vision, and autonomous navigation. Before a stereo vision system is used, the intrinsic parameters of each camera and the external parameters of the system need to be calibrated. In engineering practice, the intrinsic parameters remain unchanged after camera calibration, but the positional relationship between the cameras can change because of vibration, knocks, and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes disabled. A technology providing both real-time checking and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimating the fundamental matrix via the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the feature distribution in actual scene images and introduce a regional weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data prove that the method improves estimation
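
    The three-step process (i)-(iii) can be sketched with standard OpenCV calls; the RANSAC-based fundamental matrix estimator used here stands in for the paper's regionally weighted normalization, which OpenCV does not provide:

        # Sketch: recover the relative pose (R, t up to scale) of a stereo pair
        # from matched feature points and known intrinsics K1, K2.
        import cv2
        import numpy as np

        def recover_external_parameters(pts1, pts2, K1, K2):
            """pts1, pts2: Nx2 float arrays of corresponding image points."""
            # (i) Fundamental matrix from point correspondences.
            F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
            # (ii) Essential matrix from F and the camera intrinsics.
            E = K2.T @ F @ K1
            # (iii) Decompose E into rotation and (unit-norm) translation.
            _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K1)
            return R, t, inliers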

  14. Lytro camera technology: theory, algorithms, performance analysis

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of Lytro camera from a system level perspective, considering the Lytro camera as a black box, and uses our interpretation of Lytro image data saved by the camera. We present our findings based on our interpretation of Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.

  15. System for photometric calibration of optoelectronic imaging devices especially streak cameras

    DOEpatents

    Boni, Robert; Jaanimagi, Paul

    2003-11-04

    A system for the photometric calibration of streak cameras and similar imaging devices provides a precise knowledge of the camera's flat-field response as well as a mapping of the geometric distortions. The system provides the flat-field response, representing the spatial variations in the sensitivity of the recorded output, with a signal-to-noise ratio (SNR) greater than can be achieved in a single submicrosecond streak record. The measurement of the flat-field response is carried out by illuminating the input slit of the streak camera with a signal that is uniform in space and constant in time. This signal is generated by passing a continuous wave source through an optical homogenizer made up of a light pipe or pipes in which the illumination typically makes several bounces before exiting as a spatially uniform source field. The rectangular cross-section of the homogenizer is matched to the usable photocathode area of the streak tube. The flat-field data set is obtained by using a slow streak ramp that may have a period from one millisecond (ms) to ten seconds (s), but may be nominally one second in duration. The system also provides a mapping of the geometric distortions by spatially and temporally modulating the output of the homogenizer and obtaining a data set using the slow streak ramps. All data sets are acquired using a CCD camera and stored on a computer, which is used to calculate all relevant corrections to the signal data sets. The signal and flat-field data sets are both corrected for geometric distortions prior to applying the flat-field correction. Absolute photometric calibration is obtained by measuring the output fluence of the homogenizer with a "standard-traceable" meter and relating that to the CCD pixel values for a self-corrected flat-field data set.
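    The correction order described above (geometric distortion removal first, then flat-field division) can be summarized in a few lines. The sketch below assumes the distortion remap is already available as a callable; the names and the epsilon guard are assumptions for illustration, not part of the patented system.

```python
import numpy as np

def correct_streak_record(signal, flat_field, undistort):
    """Apply geometric correction to both data sets, then divide out the
    flat field (normalized to unit mean so the signal level is preserved)."""
    signal_u = undistort(signal)          # remove geometric distortions
    flat_u = undistort(flat_field)
    flat_norm = flat_u / flat_u.mean()    # flat-field response, unit mean
    return signal_u / np.maximum(flat_norm, 1e-12)  # guard dead pixels
```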

  16. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  17. Development of a Compton camera for medical applications based on silicon strip and scintillation detectors

    NASA Astrophysics Data System (ADS)

    Krimmer, J.; Ley, J.-L.; Abellan, C.; Cachemiche, J.-P.; Caponetto, L.; Chen, X.; Dahoumane, M.; Dauvergne, D.; Freud, N.; Joly, B.; Lambert, D.; Lestand, L.; Létang, J. M.; Magne, M.; Mathez, H.; Maxim, V.; Montarou, G.; Morel, C.; Pinto, M.; Ray, C.; Reithinger, V.; Testa, E.; Zoccarato, Y.

    2015-07-01

    A Compton camera is being developed for the purpose of ion-range monitoring during hadrontherapy via the detection of prompt-gamma rays. The system consists of a scintillating fiber beam tagging hodoscope, a stack of double sided silicon strip detectors (90×90×2 mm³, 2×64 strips) as scatter detectors, as well as bismuth germanate (BGO) scintillation detectors (38×35×30 mm³, 100 blocks) as absorbers. The individual components will be described, together with the status of their characterization.

  18. A new omni-directional multi-camera system for high resolution surveillance

    NASA Astrophysics Data System (ADS)

    Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2014-05-01

    Omni-directional high resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose are based on a parabolic mirror or a fisheye lens, where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems, the image resolution is limited by that of a single image sensor. Recently, the Panoptic camera approach that mimics the eyes of flying insects using multiple imagers has been presented. This approach features a novel solution for constructing a spherically arranged wide-FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high resolution visible spectrum imaging and recording system inspired by the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over 17,700×4,650 pixels (82.3 MP). Real-time video capturing capability is also verified at 30 fps for a resolution over 9,000×2,400 pixels (21.6 MP). The next generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain, such as large-perimeter object tracking, very-high resolution depth map estimation and high dynamic range imaging, which are beyond standard stitching and panorama generation methods.

  19. Radiometric calibration of wide-field camera system with an application in astronomy

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika

    2017-09-01

    The camera response function (CRF) is widely used to describe the relationship between scene radiance and image brightness. The most common application of the CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately of little use with astronomical image data, mostly due to its nature (blur, noise, and long exposures). Therefore, we propose an optimization of selected methods for use in an astronomical imaging application. Results are experimentally verified on a wide-field camera system using a Digital Single Lens Reflex (DSLR) camera.
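    As a point of reference for the class of CRF-estimation methods being compared, the Debevec-style calibration available in OpenCV can be run on a small exposure stack; the file names and exposure times below are placeholders, and this is only one of the algorithm families surveyed, not the authors' optimized method.

```python
import cv2
import numpy as np

# Placeholder exposure stack: 8-bit frames of the same field with known exposures.
frames = [cv2.imread(f"frame_{i}.png") for i in range(4)]
times = np.array([1.0, 5.0, 15.0, 30.0], dtype=np.float32)  # seconds (illustrative)

# Estimate the camera response function (Debevec & Malik style) ...
crf = cv2.createCalibrateDebevec().process(frames, times)

# ... and merge the stack into an HDR radiance map using that CRF.
radiance = cv2.createMergeDebevec().process(frames, times, crf)
```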

  20. ARNICA, the Arcetri Near-Infrared Camera

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Bilotti, V.; Bonaccini, D.; del Vecchio, C.; Gennari, S.; Hunt, L. K.; Marcucci, G.; Stanga, R.

    1996-04-01

    ARNICA (ARcetri Near-Infrared CAmera) is the imaging camera for the near-infrared bands between 1.0 and 2.5 microns that the Arcetri Observatory has designed and built for the Infrared Telescope TIRGO located at Gornergrat, Switzerland. We describe the mechanical and optical design of the camera, and report on the astronomical performance of ARNICA as measured during the commissioning runs at the TIRGO (December, 1992 to December 1993), and an observing run at the William Herschel Telescope, Canary Islands (December, 1993). System performance is defined in terms of efficiency of the camera+telescope system and camera sensitivity for extended and point-like sources.

  1. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    PubMed Central

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is also essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of the flight experiments illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, results of the comparison between the traditional MDCS (MADC II) and the proposed MDCS demonstrate that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of downstream photogrammetric products. PMID:25835187

  2. Camera system resolution and its influence on digital image correlation

    DOE PAGES

    Reu, Phillip L.; Sweatt, William; Miller, Timothy; ...

    2014-09-21

    Digital image correlation (DIC) uses images from a camera and lens system to make quantitative measurements of the shape, displacement, and strain of test objects. This increasingly popular method has had little research on the influence of the imaging system resolution on the DIC results. This paper investigates the entire imaging system and studies how both the camera and lens resolution influence the DIC results as a function of the system Modulation Transfer Function (MTF). It will show that when making spatial resolution decisions (including speckle size) the resolution limiting component should be considered. A consequence of the loss of spatial resolution is that the DIC uncertainties will be increased. This is demonstrated using both synthetic and experimental images with varying resolution. The loss of image resolution and DIC accuracy can be compensated for by increasing the subset size, or better, by increasing the speckle size. The speckle-size and spatial resolution are now a function of the lens resolution rather than the more typical assumption of the pixel size. The study will demonstrate the tradeoffs associated with limited lens resolution.

  3. ProxiScan™: A Novel Camera for Imaging Prostate Cancer

    ScienceCinema

    Ralph James

    2017-12-09

    ProxiScan is a compact gamma camera suited for high-resolution imaging of prostate cancer. Developed by Brookhaven National Laboratory and Hybridyne Imaging Technologies, Inc., ProxiScan won a 2009 R&D 100 Award, sponsored by R&D Magazine to recognize t

  4. Alternative images for perpendicular parking : a usability test of a multi-camera parking assistance system.

    DOT National Transportation Integrated Search

    2004-10-01

    The parking assistance system evaluated consisted of four outward facing cameras whose images could be presented on a monitor on the center console. The images presented varied in the location of the virtual eye point of the camera (the height above ...

  5. Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system.

    PubMed

    Dixon, W E; Dawson, D M; Zergeroglu, E; Behal, A

    2001-01-01

    This paper considers the problem of position/orientation tracking control of wheeled mobile robots via visual servoing in the presence of parametric uncertainty associated with the mechanical dynamics and the camera system. Specifically, we design an adaptive controller that compensates for uncertain camera and mechanical parameters and ensures global asymptotic position/orientation tracking. Simulation and experimental results are included to illustrate the performance of the control law.

  6. Integrated calibration between digital camera and laser scanner from mobile mapping system for land vehicles

    NASA Astrophysics Data System (ADS)

    Zhao, Guihua; Chen, Hong; Li, Xingquan; Zou, Xiaoliang

    The paper presents the concepts of lever arm and boresight angle, the design requirements for calibration sites, and an integrated method for calibrating the boresight angles of a digital camera and a laser scanner. Taking test data collected by Applanix's LandMark system as an example, the camera calibration method is introduced, based on piling up three consecutive stereo images and an OTF-calibration method using ground control points. For the laser scanner, calibration of the boresight angles is proposed using both manual and automatic methods with ground control points. Integrated calibration between the digital camera and the laser scanner is introduced to improve the systematic precision of the two sensors. By analyzing the measurements between ground control points and their corresponding image points in sequential images, the positional errors between camera and images are found to be within about 15 cm in relative terms and 20 cm in absolute terms. By comparing ground control points with their corresponding laser point clouds, the errors are less than 20 cm. From the results of these experiments, the mobile mapping system is shown to be an efficient and reliable system for rapidly generating high-accuracy, high-density road spatial data.

  7. Aerial multi-camera systems: Accuracy and block triangulation issues

    NASA Astrophysics Data System (ADS)

    Rupnik, Ewelina; Nex, Francesco; Toschi, Isabella; Remondino, Fabio

    2015-03-01

    Oblique photography has reached its maturity and has now been adopted for several applications. The number and variety of multi-camera oblique platforms available on the market is continuously growing. So far, few attempts have been made to study the influence of the additional cameras on the behaviour of the image block and comprehensive revisions to existing flight patterns are yet to be formulated. This paper looks into the precision and accuracy of 3D points triangulated from diverse multi-camera oblique platforms. Its coverage is divided into simulated and real case studies. Within the simulations, different imaging platform parameters and flight patterns are varied, reflecting both current market offerings and common flight practices. Attention is paid to the aspect of completeness in terms of dense matching algorithms and 3D city modelling - the most promising application of such systems. The experimental part demonstrates the behaviour of two oblique imaging platforms in real-world conditions. A number of Ground Control Point (GCP) configurations are adopted in order to point out the sensitivity of tested imaging networks and arising block deformations. To stress the contribution of slanted views, all scenarios are compared against a scenario in which exclusively nadir images are used for evaluation.

  8. AGIS -- the Advanced Gamma-ray Imaging System

    NASA Astrophysics Data System (ADS)

    Krennrich, Frank

    2009-05-01

    The Advanced Gamma-ray Imaging System, AGIS, is envisioned to become the follow-up mission of the current generation of very high energy gamma-ray telescopes, namely, H.E.S.S., MAGIC and VERITAS. These instruments have provided a glimpse of the TeV gamma-ray sky, showing more than 70 sources while their detailed studies constrain a wealth of physics and astrophysics. The particle acceleration, emission and absorption processes in these sources permit the study of extreme physical conditions found in galactic and extragalactic TeV sources. AGIS will dramatically improve the sensitivity and angular resolution of TeV gamma-ray observations and therefore provide unique prospects for particle physics, astrophysics and cosmology. This talk will provide an overview of the science drivers, scientific capabilities and the novel technical approaches that are pursued to maximize the performance of the large array concept of AGIS.

  9. Backing collisions: a study of drivers' eye and backing behaviour using combined rear-view camera and sensor systems.

    PubMed

    Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe

    2010-04-01

    Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 years result in death. Technology for assisting drivers when backing has had limited success in preventing backing crashes. Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor camera system; controls were not. Three crash scenarios were introduced. The setting was a parking facility at UMass Amherst, USA. The 46 drivers (33 men, 13 women), average age 29 years, were Massachusetts residents licensed within the USA for an average of 9.3 years. Vehicles were equipped with a rear-view camera and sensor system-based parking aid. The main outcome measures were the subjects' eye fixations while driving and the researchers' observations of collisions with objects during backing. Only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system.

  10. Falling-incident detection and throughput enhancement in a multi-camera video-surveillance system.

    PubMed

    Shieh, Wann-Yun; Huang, Ju-Chin

    2012-09-01

    For most elderly people, unpredictable falling incidents may occur at the corner of stairs or in a long corridor due to body frailty. If the rescue of a fallen elder, who may have fainted, is delayed, more serious injury may follow. Traditional security or video surveillance systems require caregivers to monitor a centralized screen continuously, or require elders to wear sensors to detect falling incidents, which wastes considerable human effort or causes inconvenience for the elders. In this paper, we propose an automatic falling-detection algorithm and implement this algorithm in a multi-camera video surveillance system. The algorithm uses each camera to fetch images from the regions required to be monitored. It then uses a falling-pattern recognition algorithm to determine whether a falling incident has occurred. If so, the system sends short messages to those who need to be notified. The algorithm has been implemented on a DSP-based hardware acceleration board as a proof of functionality. Simulation results show that the accuracy of falling detection reaches at least 90% and that the throughput of a four-camera surveillance system can be improved by about 2.1 times. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
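    The abstract does not spell out the falling-pattern recognition itself; a common stand-in heuristic (not the authors' algorithm) flags a fall when the tracked person's bounding box stays much wider than tall for several consecutive frames, as in the toy sketch below.

```python
def looks_like_fall(bbox_history, ratio_threshold=1.2, frames=5):
    """Toy heuristic, not the published algorithm: report a fall when the
    width/height ratio of the tracked bounding box exceeds the threshold
    for the last `frames` consecutive frames."""
    recent = bbox_history[-frames:]
    if len(recent) < frames:
        return False
    return all(w / float(h) > ratio_threshold for (_, _, w, h) in recent)
```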

  11. Traffic monitoring with distributed smart cameras

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Rosner, Marcin; Ulm, Michael; Schwingshackl, Gert

    2012-01-01

    The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. Today the automated analysis of traffic situations is still in its infancy--the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully captured and interpreted by a vision system. In this work we present steps towards a visual monitoring system which is designed to detect potentially dangerous traffic situations around a pedestrian crossing at a street intersection. The camera system is specifically designed to detect incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system has been field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in a weatherproof housing. Two cameras run vehicle detection and tracking software; one camera runs a pedestrian detection and tracking module based on the HOG detection principle. All 3 cameras use sparse optical flow computation on a low-resolution video stream in order to estimate the motion path and speed of objects. Geometric calibration of the cameras allows us to estimate the real-world co-ordinates of detected objects and to link the cameras together into one common reference system. This work describes the foundation for the different object detection modalities (pedestrians, vehicles), and explains the system setup, its design, and the evaluation results which we have achieved so far.
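    The sparse-optical-flow stage described above can be approximated with standard OpenCV building blocks; the sketch below is a simplified stand-in (corner tracking plus a single calibration scale factor), not the deployed system's code.

```python
import cv2
import numpy as np

def mean_scene_speed(prev_gray, cur_gray, meters_per_pixel, dt):
    """Estimate average motion speed between two low-resolution frames from
    sparse optical flow; meters_per_pixel comes from geometric calibration."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return 0.0
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return 0.0
    disp_px = np.linalg.norm((nxt - pts)[good], axis=2).mean()
    return disp_px * meters_per_pixel / dt   # metres per second
```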

  12. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point-photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of widely different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
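    For orientation, the quoted dynamic-range figures can be converted to linear contrast ratios with the usual 20·log10 convention for imagers; the conversion below is a generic worked example, not taken from the paper.

```python
import math

def dynamic_range_db(i_max, i_min):
    """Imager dynamic range in decibels from the brightest and dimmest
    resolvable irradiance levels (20 * log10 convention)."""
    return 20.0 * math.log10(i_max / i_min)

# The quoted 82.06 dB corresponds to a linear ratio of about
# 10 ** (82.06 / 20) ~= 12,700:1, versus roughly 370:1 for 51.3 dB.
```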

  13. Coordinating High-Resolution Traffic Cameras : Developing Intelligent, Collaborating Cameras for Transportation Security and Communications

    DOT National Transportation Integrated Search

    2015-08-01

    Cameras are used prolifically to monitor transportation incidents, infrastructure, and congestion. Traditional camera systems often require human monitoring and only offer low-resolution video. Researchers for the Exploratory Advanced Research (EAR) ...

  14. Calibration and Testing of Digital Zenith Camera System Components

    NASA Astrophysics Data System (ADS)

    Ulug, Rasit; Halicioglu, Kerem; Tevfik Ozludemir, M.; Albayrak, Muge; Basoglu, Burak; Deniz, Rasim

    2017-04-01

    Starting from the beginning of the new millennium, thanks to Charge-Coupled Device (CCD) technology, fully or partly automatic zenith camera systems have been designed and used to determine astro-geodetic deflection of the vertical components in several countries, including Germany, Switzerland, Serbia, Latvia, Poland, Austria, China and Turkey. The Digital Zenith Camera System (DZCS) of Turkey has performed successful observations, yet it needs to be improved in terms of automation and observation accuracy. In order to optimize the observation time and improve the system, some modifications have been implemented. Through the modification process that started at the beginning of 2016, some DZCS components have been replaced with new ones and some additional components have been installed. In this presentation, the ongoing calibration and testing process of the DZCS is summarized in general. In particular, one of the tested system components, the High Resolution Tiltmeter (HRTM), which enables orientation of the DZCS orthogonal to the direction of the plumb line, is discussed. For the calibration of these components, two tiltmeters with different accuracies (1 nrad and 0.001 mrad) were observed for nearly 30 days. The data recorded under different environmental conditions were divided into hourly, daily, and weekly subsets. In addition to the effects of temperature and humidity, the interoperability of the two tiltmeters was also investigated. Results show that with the integration of the HRTM and the other implementations, the modified DZCS provides higher accuracy for the determination of vertical deflections.

  15. Image dissector camera system study

    NASA Technical Reports Server (NTRS)

    Howell, L.

    1984-01-01

    Various aspects of a rendezvous and docking system using an image dissector detector as compared to a GaAs detector are discussed. Investigation into a gimbaled scanning system is also covered, and the measured video response curves from the image dissector camera are presented. Rendezvous will occur at ranges greater than 100 meters. The maximum range considered was 1000 meters. During docking, the range, range-rate, angle, and angle-rate to each reflector on the satellite must be measured. Docking range will be from 3 to 100 meters. The system consists of a CW laser diode transmitter and an image dissector receiver. The transmitter beam is amplitude modulated with three sine wave tones for ranging. The beam is coaxially combined with the receiver beam. Mechanical deflection of the transmitter beam, ±10 degrees in both X and Y, can be accomplished before or after it is combined with the receiver beam. The receiver will have a field-of-view (FOV) of 20 degrees and an instantaneous field-of-view (IFOV) of two milliradians (mrad) and will be electronically scanned in the image dissector. The increase in performance obtained from the GaAs photocathode is not needed to meet the present performance requirements.
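    The sine-wave tone ranging mentioned above follows the standard phase-shift relation R = c·Δφ/(4π·f); the snippet below is a generic illustration of that relation. The actual tone frequencies are not given in the record, so the 1 MHz figure is only an assumption for the example.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_tone_phase(phase_rad, tone_hz):
    """Range implied by the round-trip phase shift of one ranging tone;
    unambiguous only within c / (2 * tone_hz), which is presumably why
    several tones of different frequency are combined."""
    return C * phase_rad / (4.0 * math.pi * tone_hz)

# Example: a (hypothetical) 1 MHz tone has an unambiguous range of
# c / (2 * 1e6) = about 150 m, so coarser tones would be needed to cover
# the full 1000 m rendezvous range without ambiguity.
```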

  16. A Versatile Time-Lapse Camera System Developed by the Hawaiian Volcano Observatory for Use at Kilauea Volcano, Hawaii

    USGS Publications Warehouse

    Orr, Tim R.; Hoblitt, Richard P.

    2008-01-01

    Volcanoes can be difficult to study up close. Because it may be days, weeks, or even years between important events, direct observation is often impractical. In addition, volcanoes are often inaccessible due to their remote location and (or) harsh environmental conditions. An eruption adds another level of complexity to what already may be a difficult and dangerous situation. For these reasons, scientists at the U.S. Geological Survey (USGS) Hawaiian Volcano Observatory (HVO) have, for years, built camera systems to act as surrogate eyes. With the recent advances in digital-camera technology, these eyes are rapidly improving. One type of photographic monitoring involves the use of near-real-time network-enabled cameras installed at permanent sites (Hoblitt and others, in press). Time-lapse camera systems, on the other hand, provide an inexpensive, easily transportable monitoring option that offers more versatility in site location. While time-lapse systems lack near-real-time capability, they provide higher image resolution and can be rapidly deployed in areas where the use of the sophisticated telemetry required by the networked camera systems is not practical. This report describes the latest generation (as of 2008) of time-lapse camera systems used by HVO for photograph acquisition in remote and hazardous sites on Kilauea Volcano.

  17. STS-37 Breakfast / Ingress / Launch & ISO Camera Views

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The primary objective of the STS-37 mission was to deploy the Gamma Ray Observatory. The mission was launched at 9:22:44 am on April 5, 1991, onboard the space shuttle Atlantis. The mission was led by Commander Steven Nagel. The crew consisted of Pilot Kenneth Cameron and Mission Specialists Jerry Ross, Jay Apt, and Linda Godwin. This videotape shows the crew having breakfast on the launch day, with the narrator introducing them. It then shows the crew's final preparations and entry into the shuttle, while the narrator gives information about each of the crew members. The countdown and launch are shown, including the shuttle's separation from the solid rocket boosters. The launch is then reshown from 17 different camera views, some of them in black and white.

  18. A practical approach for active camera coordination based on a fusion-driven multi-agent system

    NASA Astrophysics Data System (ADS)

    Bustamante, Alvaro Luis; Molina, José M.; Patricio, Miguel A.

    2014-04-01

    In this paper, we propose a multi-agent system architecture to manage spatially distributed active (or pan-tilt-zoom) cameras. Traditional video surveillance algorithms are of no use for active cameras, and we have to look at different approaches. Such multi-sensor surveillance systems have to be designed to solve two related problems: data fusion and coordinated sensor-task management. Generally, architectures proposed for the coordinated operation of multiple cameras are based on the centralisation of management decisions at the fusion centre. However, the existence of intelligent sensors capable of decision making brings with it the possibility of conceiving alternative decentralised architectures. This problem is approached by means of a MAS, integrating data fusion as an integral part of the architecture for distributed coordination purposes. This paper presents the MAS architecture and system agents.

  19. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    USDA-ARS?s Scientific Manuscript database

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  20. A Framework for People Re-Identification in Multi-Camera Surveillance Systems

    ERIC Educational Resources Information Center

    Ammar, Sirine; Zaghden, Nizar; Neji, Mahmoud

    2017-01-01

    People re-identification has been a very active research topic recently in computer vision. It is an important application in surveillance system with disjoint cameras. This paper is focused on the implementation of a human re-identification system. First the face of detected people is divided into three parts and some soft-biometric traits are…

  1. Backing collisions: a study of drivers’ eye and backing behaviour using combined rear-view camera and sensor systems

    PubMed Central

    Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe

    2012-01-01

    Context: Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 years result in death. Technology for assisting drivers when backing has had limited success in preventing backing crashes. Objectives: Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? Design: 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor camera system; controls were not. Three crash scenarios were introduced. Setting: Parking facility at UMass Amherst, USA. Subjects: 46 drivers (33 men, 13 women), average age 29 years, who were Massachusetts residents licensed within the USA for an average of 9.3 years. Interventions: Vehicles equipped with a rear-view camera and sensor system-based parking aid. Main Outcome Measures: Subjects' eye fixations while driving and researchers' observations of collisions with objects during backing. Results: Only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. Conclusions: This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system. PMID:20363812

  2. System Construction of the Stilbene Compact Neutron Scatter Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsmith, John E. M.; Gerling, Mark D.; Brennan, James S.

    This report documents the construction of a stilbene-crystal-based compact neutron scatter camera. This system is essentially identical to the MINER (Mobile Imager of Neutrons for Emergency Responders) system previously built and deployed under DNN R&D funding,1 but with the liquid scintillator in the detection cells replaced by stilbene crystals. The availability of these two systems for side-by-side performance comparisons will enable us to unambiguously identify the performance enhancements provided by the stilbene crystals, which have only recently become commercially available in the large size required (3” diameter, 3” deep).

  3. Gamma-telescopes Fermi/LAT and GAMMA-400 Trigger Systems Event Recognizing Methods Comparison

    NASA Astrophysics Data System (ADS)

    Arkhangelskaja, I. V.; Murchenko, A. E.; Chasovikov, E. N.; Arkhangelskiy, A. I.; Kheymits, M. D.

    Instruments for registering high-energy γ-quanta usually consist of a converter (where γ-quanta produce pairs) and a calorimeter for measuring particle energy, surrounded by an anticoincidence shield used for event identification (whether the incident particle was charged or neutral). The influence of pair production by γ-quanta in the shield and of backsplash (particles moving in the opposite direction, created when high-energy γ-rays interact with the calorimeter) should be taken into account; it decreases both the effective area and the registration efficiency at E > 10 GeV. In the presented article the event-recognition methods used in the Fermi/LAT trigger system are compared with those applied in the counting and trigger-signal formation system of the gamma-telescope GAMMA-400. GAMMA-400 (Gamma Astronomical Multifunctional Modular Apparatus) will be a new high-apogee space γ-observatory. It consists of a converter-tracker based on silicon-strip coordinate detectors interleaved with tungsten foils, an imaging calorimeter made of two layers of double (x, y) silicon-strip coordinate detectors interleaved with planes of CsI(Tl) crystals, and the electromagnetic calorimeter CC2 consisting only of CsI(Tl) crystals. Several plastic detector systems are used as the anticoincidence shield and for estimating particle energies and directions of motion. The main differences of the GAMMA-400 design from that of Fermi/LAT are the time-of-flight system with a 50 cm base and the double-layer structure of the plastic detectors, which provide more effective determination of particle direction and backsplash rejection. Also, the two calorimeters of GAMMA-400 form a total-absorption spectrometer with a total thickness of ∼25 X0 or ∼1.2 λ0 for vertically incident particles and 54 X0 or 2.5 λ0 for laterally incident ones (where λ0 is the nuclear interaction length). This provides an energy resolution of 1-2% for 10 GeV-3.0×10³ GeV events, while the Fermi/LAT energy resolution does not reach such a

  4. A Quasi-Static Method for Determining the Characteristics of a Motion Capture Camera System in a "Split-Volume" Configuration

    NASA Technical Reports Server (NTRS)

    Miller, Chris; Mulavara, Ajitkumar; Bloomberg, Jacob

    2001-01-01

    To confidently report any data collected from a video-based motion capture system, its functional characteristics must be determined, namely accuracy, repeatability and resolution. Many researchers have examined these characteristics with motion capture systems, but they used only two cameras, positioned 90 degrees to each other. Everaert used 4 cameras, but all were aligned along major axes (two in x, one in y and z). Richards compared the characteristics of different commercially available systems set-up in practical configurations, but all cameras viewed a single calibration volume. The purpose of this study was to determine the accuracy, repeatability and resolution of a 6-camera Motion Analysis system in a split-volume configuration using a quasistatic methodology.

  5. Systems and methods for maintaining multiple objects within a camera field-of-view

    DOEpatents

    Gans, Nicholas R.; Dixon, Warren

    2016-03-15

    In one embodiment, a system and method for maintaining objects within a camera field of view include identifying constraints to be enforced, each constraint relating to an attribute of the viewed objects; identifying a priority rank for the constraints such that more important constraints have a higher priority than less important constraints; and determining the set of solutions that satisfy the constraints relative to the order of their priority rank, such that solutions that satisfy lower-ranking constraints are only considered viable if they also satisfy any higher-ranking constraints, each solution providing an indication of how to control the camera to maintain the objects within the camera field of view.
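    The ranked-constraint idea in this claim can be paraphrased as a lexicographic filter over candidate camera controls; the sketch below is one interpretation of the claim language, not the patented implementation, and the predicate interface is assumed.

```python
def filter_by_priority(candidate_controls, constraints):
    """`constraints` is a list of predicates ordered from highest to lowest
    priority. A lower-priority constraint may only narrow the set of
    candidates that already satisfy all higher-priority constraints; if it
    would eliminate every remaining candidate, it is skipped."""
    viable = list(candidate_controls)
    for constraint in constraints:            # highest priority first
        narrowed = [c for c in viable if constraint(c)]
        if narrowed:
            viable = narrowed
    return viable
```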

  6. LSST Camera Optics Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riot, V J; Olivier, S; Bauman, B

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. Optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  7. 3D digital image correlation using single color camera pseudo-stereo system

    NASA Astrophysics Data System (ADS)

    Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang

    2017-10-01

    Three dimensional digital image correlation (3D-DIC) has been widely used by industry to measure the 3D contour and whole-field displacement/strain. In this paper, a novel single color camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip and without reducing the spatial resolution. In addition, similarly to the conventional 3D-DIC system, the center of the two views stands in the center of the CCD chip, which minimizes the image distortion relative to the conventional pseudo-stereo system. The two overlapped views in the CCD are separated by the color domain, and the standard 3D-DIC algorithm can be utilized directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.

  8. Pixel-based characterisation of CMOS high-speed camera systems

    NASA Astrophysics Data System (ADS)

    Weber, V.; Brübach, J.; Gordon, R. L.; Dreizler, A.

    2011-05-01

    Quantifying high-repetition rate laser diagnostic techniques for measuring scalars in turbulent combustion relies on a complete description of the relationship between detected photons and the signal produced by the detector. CMOS-chip based cameras are becoming an accepted tool for capturing high frame rate cinematographic sequences for laser-based techniques such as Particle Image Velocimetry (PIV) and Planar Laser Induced Fluorescence (PLIF) and can be used with thermographic phosphors to determine surface temperatures. At low repetition rates, imaging techniques have benefitted from significant developments in the quality of CCD-based camera systems, particularly with the uniformity of pixel response and minimal non-linearities in the photon-to-signal conversion. The state of the art in CMOS technology displays a significant number of technical aspects that must be accounted for before these detectors can be used for quantitative diagnostics. This paper addresses these issues.

  9. Wired and Wireless Camera Triggering with Arduino

    NASA Astrophysics Data System (ADS)

    Kauhanen, H.; Rönnholm, P.

    2017-10-01

    Synchronous triggering is an important task that allows simultaneous data capture from multiple cameras. Accurate synchronization enables 3D measurements of moving objects or from a moving platform. In this paper, we describe one wired and four wireless variations of Arduino-based low-cost remote trigger systems designed to provide a synchronous trigger signal for industrial cameras. Our wireless systems utilize 315 MHz or 434 MHz frequencies with noise filtering capacitors. In order to validate the synchronization accuracy, we developed a prototype of a rotating trigger detection system (named RoTriDeS). This system is suitable to detect the triggering accuracy of global shutter cameras. As a result, the wired system indicated an 8.91 μs mean triggering time difference between two cameras. Corresponding mean values for the four wireless triggering systems varied between 7.92 and 9.42 μs. Presented values include both camera-based and trigger-based desynchronization. Arduino-based triggering systems appeared to be feasible, and they have the potential to be extended to more complicated triggering systems.

  10. Uav Cameras: Overview and Geometric Calibration Benchmark

    NASA Astrophysics Data System (ADS)

    Cramer, M.; Przybilla, H.-J.; Zurhorst, A.

    2017-08-01

    Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes modified) cameras known from the consumer market. Even though these systems normally fulfil their required mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the process of geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep a constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as has been proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. Especially for such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights over a calibration field have shown how the block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.

  11. A large-area gamma-ray imaging telescope system

    NASA Technical Reports Server (NTRS)

    Koch, D. G.

    1983-01-01

    The concept definition of using the External Tank (ET) of the Space Shuttle as the basis for constructing a large area gamma ray imaging telescope in space is detailed. The telescope will be used to locate and study cosmic sources of gamma rays of energy greater than 100 MeV. Both the telescope properties and the means whereby an ET is used for this purpose are described. A parallel is drawn between those systems that would be common to both a Space Station and this ET application. In addition, those systems necessary for support of the telescope can form the basis for using the ET as part of the Space Station. The major conclusions of this concept definition are that the ET is ideal for making into a gamma ray telescope, and that this telescope will provide a substantial increase in collecting area.

  12. Low-cost conversion of the Polaroid MD-4 land camera to a digital gel documentation system.

    PubMed

    Porch, Timothy G; Erpelding, John E

    2006-04-30

    A simple, inexpensive design is presented for the rapid conversion of the popular MD-4 Polaroid land camera to a high quality digital gel documentation system. Images of ethidium bromide stained DNA gels captured using the digital system were compared to images captured on Polaroid instant film. Resolution and sensitivity were enhanced using the digital system. In addition to the low cost and superior image quality of the digital system, there is also the added convenience of real-time image viewing through the swivel LCD of the digital camera, wide flexibility of gel sizes, accurate automatic focusing, variable image resolution, and consistent ease of use and quality. Images can be directly imported to a computer by using the USB port on the digital camera, further enhancing the potential of the digital system for documentation, analysis, and archiving. The system is appropriate for use as a start-up gel documentation system and for routine gel analysis.

  13. Testing and Performance Validation of a Sensitive Gamma Ray Camera Designed for Radiation Detection and Decommissioning Measurements in Nuclear Facilities-13044

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mason, John A.; Looman, Marc R.; Poundall, Adam J.

    2013-07-01

    This paper describes the measurements, testing and performance validation of a sensitive gamma ray camera designed for radiation detection and quantification in the environment and for decommissioning and hold-up measurements in nuclear facilities. The instrument, which is known as RadSearch, combines a sensitive and highly collimated LaBr3 scintillation detector with an optical (video) camera with controllable zoom and focus and a laser range finder in one detector head. The LaBr3 detector has a typical energy resolution of between 2.5% and 3% at the 662 keV energy of Cs-137, compared to NaI detectors with a resolution of typically 7% to 8% at the same energy. At this energy the tungsten shielding of the detector provides a shielding ratio of greater than 900:1 in the forward direction and 100:1 on the sides and from the rear. The detector head is mounted on a pan/tilt mechanism with a range of motion of ±180 degrees (pan) and ±90 degrees (tilt), equivalent to 4π steradians. The detector head with pan/tilt is normally mounted on a tripod or wheeled cart. It can also be mounted on vehicles or a mobile robot for access to high dose-rate areas and areas with high levels of contamination. Ethernet connects RadSearch to a ruggedized notebook computer from which it is operated and controlled. Power can be supplied either as 24-volt DC from a battery or as 50-volt DC supplied by a small mains (110 or 230 VAC) power supply unit that is co-located with the controlling notebook computer. In the latter case both power and Ethernet are supplied through a single cable that can be up to 80 metres in length. If a local battery supplies power, the unit can be controlled through wireless Ethernet. Both manual operation and automatic scanning of surfaces and objects are available through the software interface on the notebook computer. For each scan element making up a part of an overall scanned area, the unit measures a gamma ray spectrum. Multiple

  14. Precise color images a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High speed imaging systems have been used in a large range of fields in science and engineering. Although high speed camera systems have been improved to high performance, most of their applications are only for obtaining high speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasmas and molten materials. Recent digital high speed video imaging technology should be able to extract such information from those objects. For this purpose, we have already developed a high speed video camera system with three intensified sensors and a cubic prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 X 64 pixels and 4,500 pps at 256 X 256 pixels with 256 (8 bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid state memory. In order to obtain precise color images from this camera system, we need to develop a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of the images taken from the two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, the digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, it was adjusted to within 0.2 pixels by this method.
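    The pixel-based displacement adjustment between sensors amounts to image registration; a generic way to obtain the whole-pixel part of such a shift is FFT phase correlation, sketched below. This is a textbook method offered for illustration, not the technique developed in the paper, and a sub-pixel refinement around the peak would still be needed to reach the quoted 0.2 pixel level.

```python
import numpy as np

def integer_shift(ref, img):
    """Whole-pixel displacement of `img` relative to `ref` (2-D grayscale
    arrays) estimated via phase correlation."""
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    R /= np.abs(R) + 1e-12                     # normalized cross-power spectrum
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:                 # wrap peaks to signed shifts
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```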

  15. Motion Sickness When Driving With a Head-Slaved Camera System

    DTIC Science & Technology

    2003-02-01

    [Only citation fragments survive in this record: Van Erp, J.B.F., Padmos, P. & Tenkink, E., YPR-765 under armour (Report TM-97-A026), Soesterberg, The Netherlands: TNO Human Factors Research Institute; Van Erp, J.B.F., Van den Dobbelsteen, J.J. & Padmos, P. (1998), Improved camera-monitor system for driving YPR-765 under armour (Report TM-98), TNO Human Factors Research Institute.]

  16. Implementation of test for quality assurance in nuclear medicine gamma camera

    NASA Astrophysics Data System (ADS)

    Moreno, A. Montoya; Laguna, A. Rodríguez; Zamudio, Flavio E. Trujillo

    2012-10-01

    In nuclear medicine (NM), over 90% of procedures are performed for diagnostic purposes. Ensuring adequate diagnostic image quality and optimizing the doses that patients receive from the administered radioactive material require regular monitoring of equipment performance through a quality assurance program (QAP). The QAP consists of 15 proposed tomographic and non-tomographic gamma camera (GC) performance tests, and is based on recommendations of international organizations. We describe some results of the QAP performance parameters applied to a Siemens e.cam GC at the Department of NM of the National Cancer Institute of Mexico (INCan). The results were: (1) the average intrinsic spatial resolution (Rin) was 4.67 ± 0.25 mm, at the limit of the acceptance criterion of 4.4 mm; (2) the extrinsic sensitivity (Sext) showed maximum variations of 1.8% (less than the 2% acceptance criterion); (3) for rotational uniformity (Urot), the integral uniformity (IU) values in the useful field of view of the detector (UFOV) showed a maximum percentage change of 0.97%, and monthly variations at equal angles ranged from 0.13% to 0.99%, less than 1%; (4) the displacement of the center of rotation (DCOR) indicated a maximum deviation of 0.155 ± 0.039 mm, less than 4.795 mm, and an absolute deviation of less than 0.5 pixel, where 0.085 pixel is suggested; these criteria correspond to the low-energy high-resolution collimator; (5) for tomographic uniformity (Utomo), the IU (%) and percentage noise level (rms %) were 7.54 ± 1.53 and 4.18 ± 1.69, consistent with the acceptance limits of 7.0-12.0% and 3.0-6.0%, respectively. The smallest cold sphere has a diameter of 11.4 mm. The implementation of a QAP allows for high-quality diagnostic images, optimization of the doses given to patients, a reduction of exposure to occupationally exposed workers (POE, by its Spanish acronym), and generally improves the productivity of the service. This proposal can be used to

  17. Implementation of test for quality assurance in nuclear medicine gamma camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montoya Moreno, A.; Rodriguez Laguna, A.; Trujillo Zamudio, Flavio E

    2012-10-23

    In nuclear medicine (NM), over 90% of procedures are performed for diagnostic purposes. Ensuring adequate diagnostic image quality and optimizing the doses that patients receive from the administered radioactive material require regular monitoring of equipment performance through a quality assurance program (QAP). The QAP consists of 15 proposed tomographic and non-tomographic gamma camera (GC) performance tests, and is based on recommendations of international organizations. We describe some results of the QAP performance parameters applied to a Siemens e.cam GC at the Department of NM of the National Cancer Institute of Mexico (INCan). The results were: (1) the average intrinsic spatial resolution (Rin) was 4.67 ± 0.25 mm, at the limit of the acceptance criterion of 4.4 mm; (2) the extrinsic sensitivity (Sext) showed maximum variations of 1.8% (less than the 2% acceptance criterion); (3) for rotational uniformity (Urot), the integral uniformity (IU) values in the useful field of view of the detector (UFOV) showed a maximum percentage change of 0.97%, and monthly variations at equal angles ranged from 0.13% to 0.99%, less than 1%; (4) the displacement of the center of rotation (DCOR) indicated a maximum deviation of 0.155 ± 0.039 mm, less than 4.795 mm, and an absolute deviation of less than 0.5 pixel, where 0.085 pixel is suggested; these criteria correspond to the low-energy high-resolution collimator; (5) for tomographic uniformity (Utomo), the IU (%) and percentage noise level (rms %) were 7.54 ± 1.53 and 4.18 ± 1.69, consistent with the acceptance limits of 7.0-12.0% and 3.0-6.0%, respectively. The smallest cold sphere has a diameter of 11.4 mm. The implementation of a QAP allows for high-quality diagnostic images, optimization of the doses given to patients, a reduction of exposure to occupationally exposed workers (POE, by its Spanish acronym), and generally improves the productivity of

  18. Intermediate view synthesis algorithm using mesh clustering for rectangular multiview camera system

    NASA Astrophysics Data System (ADS)

    Choi, Byeongho; Kim, Taewan; Oh, Kwan-Jung; Ho, Yo-Sung; Choi, Jong-Soo

    2010-02-01

    A multiview video-based three-dimensional (3-D) video system offers a realistic impression and free view navigation to the user. Efficient compression and intermediate view synthesis are key technologies, since 3-D video systems deal with multiple views. We propose an intermediate view synthesis using a rectangular multiview camera system that is suitable for realizing 3-D video systems. The rectangular multiview camera system not only offers free view navigation both horizontally and vertically, but also can employ three reference views, such as left, right, and bottom, for intermediate view synthesis. The proposed view synthesis method first represents each reference view as meshes and then finds the best disparity for each mesh element using stereo matching between reference views. Before stereo matching, we separate the virtual image to be synthesized into several regions to enhance the accuracy of the disparities. The mesh is classified into foreground and background groups by disparity values and then affine transformed. Experiments confirm that the proposed method synthesizes a high-quality image and is suitable for 3-D video systems.

  19. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    NASA Astrophysics Data System (ADS)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

    Embedded systems have been applied in many fields, including households and industrial sites. User interface technology with a simple on-screen display has been implemented more and more. User demands are increasing and the system has more diverse fields of application due to the high penetration rate of the Internet. Therefore, the demand for embedded systems tends to rise. An embedded system for image tracking was implemented. This system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on the embedded Linux system, real-time broadcasting of video images on the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server. Each frame from the web camera is compared with the previous one to measure the displacement vector, using a block matching algorithm and an edge detection algorithm for fast processing. The displacement vector is then used for pan/tilt motor control through an RS-232 serial cable. The embedded board utilizes the S3C2410 MPU, which uses the ARM920T core from Samsung. An embedded Linux kernel was ported as the operating system and a root file system was mounted. The stored images are sent to the client PC through the web browser, using the network functions of Linux and a program based on the TCP/IP protocol.
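
    The displacement-vector step described above is based on block matching between consecutive frames. The following is a minimal sum-of-absolute-differences (SAD) sketch of that idea, not the authors' implementation; the block and search sizes are illustrative, and the frames are assumed large enough that the search window stays inside the image.

    ```python
    import numpy as np

    def block_match(prev, curr, block=16, search=8):
        """Estimate the displacement vector of the central block between two
        grayscale frames using a sum-of-absolute-differences (SAD) search.
        prev, curr : 2-D arrays of equal shape, at least block + 2*search wide/tall.
        Returns (dy, dx) minimizing the SAD within +/- search pixels.
        """
        h, w = prev.shape
        y0, x0 = (h - block) // 2, (w - block) // 2
        ref = prev[y0:y0 + block, x0:x0 + block].astype(int)
        best, best_dy, best_dx = None, 0, 0
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = curr[y0 + dy:y0 + dy + block,
                            x0 + dx:x0 + dx + block].astype(int)
                sad = np.abs(ref - cand).sum()
                if best is None or sad < best:
                    best, best_dy, best_dx = sad, dy, dx
        return best_dy, best_dx   # feed this vector to the pan/tilt controller
    ```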

  20. Experimental comparison of high-density scintillators for EMCCD-based gamma ray imaging

    NASA Astrophysics Data System (ADS)

    Heemskerk, Jan W. T.; Kreuger, Rob; Goorden, Marlies C.; Korevaar, Marc A. N.; Salvador, Samuel; Seeley, Zachary M.; Cherepy, Nerine J.; van der Kolk, Erik; Payne, Stephen A.; Dorenbos, Pieter; Beekman, Freek J.

    2012-07-01

    Detection of x-rays and gamma rays with high spatial resolution can be achieved with scintillators that are optically coupled to electron-multiplying charge-coupled devices (EMCCDs). These can be operated at typical frame rates of 50 Hz with low noise. In such a set-up, scintillation light within each frame is integrated after which the frame is analyzed for the presence of scintillation events. This method allows for the use of scintillator materials with relatively long decay times of a few milliseconds, not previously considered for use in photon-counting gamma cameras, opening up an unexplored range of dense scintillators. In this paper, we test CdWO4 and transparent polycrystalline ceramics of Lu2O3:Eu and (Gd,Lu)2O3:Eu as alternatives to currently used CsI:Tl in order to improve the performance of EMCCD-based gamma cameras. The tested scintillators were selected for their significantly larger cross-sections at 140 keV (99mTc) compared to CsI:Tl combined with moderate to good light yield. A performance comparison based on gamma camera spatial and energy resolution was done with all tested scintillators having equal (66%) interaction probability at 140 keV. CdWO4, Lu2O3:Eu and (Gd,Lu)2O3:Eu all result in a significantly improved spatial resolution over CsI:Tl, albeit at the cost of reduced energy resolution. Lu2O3:Eu transparent ceramic gives the best spatial resolution: 65 µm full-width-at-half-maximum (FWHM) compared to 147 µm FWHM for CsI:Tl. In conclusion, these ‘slow’ dense scintillators open up new possibilities for improving the spatial resolution of EMCCD-based scintillation cameras.

  1. Concept of a photon-counting camera based on a diffraction-addressed Gray-code mask

    NASA Astrophysics Data System (ADS)

    Morel, Sébastien

    2004-09-01

    A new concept of a photon-counting camera for fast and low-light-level imaging applications is introduced. The spectrum potentially covered by this camera ranges from visible light to gamma rays, depending on the device used to transform an incoming photon into a burst of visible photons (photo-event spot) localized in an (x,y) image plane. It is an evolution of the existing "PAPA" (Precision Analog Photon Address) camera that was designed for visible photons, the improvement coming from simplified optics. The new camera transforms, by diffraction, each photo-event spot from an image intensifier or a scintillator into a cross-shaped pattern, which is projected onto a specific Gray-code mask. The photo-event position is then extracted from the signal given by an array of avalanche photodiodes (or photomultiplier tubes, alternatively) downstream of the mask. After a detailed explanation of this camera concept, which we have called "DIAMICON" (DIffraction Addressed Mask ICONographer), we briefly discuss technical solutions for building such a camera.
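
    The position readout relies on a Gray-code mask, where each mask pattern contributes one bit of the photo-event address. A generic Gray-to-binary decode, shown below as a hedged sketch (it says nothing about the DIAMICON's actual detector electronics), recovers the integer position from such a bit sequence.

    ```python
    def gray_to_index(bits):
        """Convert a Gray-code bit sequence (MSB first) to an integer position.
        In a Gray-code mask camera, each mask stripe pattern contributes one bit
        read out by a photodetector behind the mask (illustrative model)."""
        value = 0
        prev = 0
        for b in bits:
            prev ^= b              # binary bit = previous binary bit XOR Gray bit
            value = (value << 1) | prev
        return value

    # Example: Gray code 1 1 0 -> binary 1 0 0 -> position index 4
    print(gray_to_index([1, 1, 0]))  # 4
    ```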

  2. New generation of meteorology cameras

    NASA Astrophysics Data System (ADS)

    Janout, Petr; Blažek, Martin; Páta, Petr

    2017-12-01

    A new generation of the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) camera includes new features such as monitoring of rain and storm clouds during daytime observation. Development of the new generation of weather monitoring cameras responds to the demand for monitoring sudden weather changes. The new WILLIAM cameras are able to process acquired image data immediately, issue warnings of sudden torrential rain, and send them to the user's cell phone and email. Actual weather conditions are determined from the image data, and the results of image processing are complemented by data from temperature, humidity, and atmospheric pressure sensors. In this paper, we present the architecture and image data processing algorithms of the mentioned monitoring camera and a spatially variant model of the imaging system aberrations based on Zernike polynomials.
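
    The aberration model mentioned above is built from Zernike polynomials. As a rough illustration only (the paper's field-dependent weighting is not reproduced here), the snippet below evaluates a few low-order Zernike terms on the unit pupil using the common Noll-style normalization.

    ```python
    import numpy as np

    def zernike_low_order(rho, theta):
        """A few low-order Zernike terms (Noll-style normalization assumed)
        evaluated at pupil coordinates (rho <= 1, theta); a spatially variant
        aberration model can weight such terms with field-dependent coefficients."""
        return {
            "tilt_x":   2.0 * rho * np.cos(theta),
            "tilt_y":   2.0 * rho * np.sin(theta),
            "defocus":  np.sqrt(3.0) * (2.0 * rho ** 2 - 1.0),
            "astig_45": np.sqrt(6.0) * rho ** 2 * np.sin(2.0 * theta),
            "astig_0":  np.sqrt(6.0) * rho ** 2 * np.cos(2.0 * theta),
            "coma_y":   np.sqrt(8.0) * (3.0 * rho ** 3 - 2.0 * rho) * np.sin(theta),
        }

    # Example: wavefront contribution of pure defocus at the pupil edge
    print(zernike_low_order(1.0, 0.0)["defocus"])
    ```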

  3. Flexible nuclear medicine camera and method of using

    DOEpatents

    Dilmanian, F.A.; Packer, S.; Slatkin, D.N.

    1996-12-10

    A nuclear medicine camera and method of use photographically record radioactive decay particles emitted from a source, for example a small, previously undetectable breast cancer, inside a patient. The camera includes a flexible frame containing a window, a photographic film, and a scintillation screen, with or without a gamma-ray collimator. The frame flexes for following the contour of the examination site on the patient, with the window being disposed in substantially abutting contact with the skin of the patient for reducing the distance between the film and the radiation source inside the patient. The frame is removably affixed to the patient at the examination site for allowing the patient mobility to wear the frame for a predetermined exposure time period. The exposure time may be several days for obtaining early qualitative detection of small malignant neoplasms. 11 figs.

  4. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    NASA Astrophysics Data System (ADS)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

    Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including a stereo camera, a thermal IR camera and a unidirectional microphone array. The thermal IR camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes, which often show as bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist, who can be easily windowed out in the stereo disparity map. If the motorcyclist is detected through his or her 3D body recognition, a motorcycle is detected. Microphones are used to detect motorcycles, which often produce low-frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize the interference of background noise from sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has excellent performance.

  5. The soft gamma-ray detector (SGD) onboard ASTRO-H

    NASA Astrophysics Data System (ADS)

    Watanabe, Shin; Tajima, Hiroyasu; Fukazawa, Yasushi; Blandford, Roger; Enoto, Teruaki; Goldwurm, Andrea; Hagino, Kouichi; Hayashi, Katsuhiro; Ichinohe, Yuto; Kataoka, Jun; Katsuta, Junichiro; Kitaguchi, Takao; Kokubun, Motohide; Laurent, Philippe; Lebrun, François; Limousin, Olivier; Madejski, Grzegorz M.; Makishima, Kazuo; Mizuno, Tsunefumi; Mori, Kunishiro; Nakamori, Takeshi; Nakano, Toshio; Nakazawa, Kazuhiro; Noda, Hirofumu; Odaka, Hirokazu; Ohno, Masanori; Ohta, Masayuki; Saito, Shinya; Sato, Goro; Sato, Rie; Takeda, Shin'ichiro; Takahashi, Hiromitsu; Takahashi, Tadayuki; Tanaka, Takaaki; Tanaka, Yasuyuki; Terada, Yukikatsu; Uchiyama, Hideki; Uchiyama, Yasunobu; Yamaoka, Kazutaka; Yatsu, Yoichi; Yonetoku, Daisuke; Yuasa, Takayuki

    2016-07-01

    The Soft Gamma-ray Detector (SGD) is one of the science instruments onboard ASTRO-H (Hitomi) and features a wide energy band of 60-600 keV with low backgrounds. SGD is an instrument with the novel concept of a "narrow field-of-view" Compton camera, where Compton kinematics is utilized to reject backgrounds that are inconsistent with the field of view defined by the active shield. After several years of development, the flight hardware was fabricated and subjected to subsystem tests and satellite system tests. After the successful ASTRO-H (Hitomi) launch on February 17, 2016 and the critical-phase operation of the satellite and SGD in-orbit commissioning, SGD operation was moved to the nominal observation mode on March 24, 2016. The Compton cameras and BGO-APD shields of SGD worked properly as designed. On March 25, 2016, the Crab nebula observation was performed, and the observation data were successfully obtained.

  6. Multiple window spatial registration error of a gamma camera: 133Ba point source as a replacement of the NEMA procedure.

    PubMed

    Bergmann, Helmar; Minear, Gregory; Raith, Maria; Schaffarich, Peter M

    2008-12-09

    The accuracy of multiple window spatial registration characterises the performance of a gamma camera for dual isotope imaging. In the present study we investigate an alternative method to the standard NEMA procedure for measuring this performance parameter. A long-lived 133Ba point source, with gamma energies close to those of 67Ga, and a single-bore lead collimator were used to measure the multiple window spatial registration error. Calculation of the positions of the point source in the images used the NEMA algorithm. The results were validated against the values obtained by the standard NEMA procedure, which uses a collimated liquid 67Ga source. Of the source-collimator configurations under investigation, an optimum collimator geometry was selected, consisting of a 5 mm thick lead disk with a diameter of 46 mm and a 5 mm central bore. The multiple window spatial registration errors obtained by the 133Ba method showed excellent reproducibility (standard deviation < 0.07 mm). The values were compared with the results from the NEMA procedure obtained at the same locations and showed small differences, with a correlation coefficient of 0.51 (p < 0.05). In addition, the 133Ba point source method proved to be much easier to use. A Bland-Altman analysis showed that the 133Ba and 67Ga methods can be used interchangeably. The 133Ba point source method measures the multiple window spatial registration error with essentially the same accuracy as the NEMA-recommended procedure, but is easier and safer to use and has the potential to replace the current standard procedure.
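
    The NEMA calculation referred to above reduces each energy-window image of the point source to a count-weighted centroid and compares centroid positions between windows. The sketch below is a simplified version of that idea; the real NEMA procedure works on summed X and Y count profiles with count thresholds, which are omitted here.

    ```python
    import numpy as np

    def window_centroid(image):
        """Count-weighted centroid (x, y) of a point-source image acquired in
        one energy window (simplified version of the NEMA calculation)."""
        ys, xs = np.indices(image.shape)
        total = image.sum()
        return (xs * image).sum() / total, (ys * image).sum() / total

    def mwsr_error(image_low, image_high, pixel_mm):
        """Registration error in mm between two energy-window images."""
        x1, y1 = window_centroid(image_low)
        x2, y2 = window_centroid(image_high)
        return np.hypot(x2 - x1, y2 - y1) * pixel_mm
    ```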

  7. Evaluation of 3D Gamma index calculation implemented in two commercial dosimetry systems

    NASA Astrophysics Data System (ADS)

    Xing, Aitang; Arumugam, Sankar; Deshpande, Shrikant; George, Armia; Vial, Philip; Holloway, Lois; Goozee, Gary

    2015-01-01

    The 3D Gamma index is one of the metrics that have been widely used for routine clinical patient-specific quality assurance for IMRT, Tomotherapy and VMAT. The algorithms for calculating the 3D Gamma index using global and local methods implemented in two software tools, PTW VeriSoft® (part of the OCTAVIUS 4D dosimeter system) and 3DVH™ from Sun Nuclear, were assessed. The Gamma index calculated by the two systems was compared with a manual calculation for one data set. The Gamma pass rates calculated by the two systems were compared using 3%/3 mm, 2%/2 mm, 3%/2 mm and 2%/3 mm criteria for two additional data sets. The Gamma indexes calculated by the two systems were accurate, but the Gamma pass rates calculated by the two software tools for the same data set with the same dose threshold differed, due to the different interpolation of the raw dose data and the different implementation of the Gamma index calculation and other modules in the two software tools. The mean difference was -1.3% ± 3.38 (1 SD), with a maximum difference of 11.7%.
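
    For reference, the gamma index combines a dose-difference criterion and a distance-to-agreement criterion: gamma at a reference point is the minimum over evaluated points of sqrt(distance²/DTA² + dose-difference²/DD²). The brute-force sketch below illustrates a global (or local) 3D calculation on small grids; it is not the implementation of either commercial tool, and it omits the interpolation of the evaluated dose that the paper identifies as one source of the observed differences.

    ```python
    import numpy as np

    def gamma_index(ref, eval_, spacing, dd=0.03, dta=3.0, global_norm=True):
        """Brute-force gamma index for small 3-D dose grids (illustrative only;
        production tools limit the search radius, interpolate the evaluated
        dose, and mask low-dose voxels to avoid division by tiny local doses).
        ref, eval_ : 3-D dose arrays on the same grid
        spacing    : isotropic voxel size in mm
        dd         : dose-difference criterion as a fraction (0.03 = 3%)
        dta        : distance-to-agreement criterion in mm
        """
        norm = ref.max()
        coords = np.argwhere(np.ones_like(ref, dtype=bool)) * spacing
        eval_flat = eval_.ravel()
        gammas = np.empty(ref.size)
        for i, (p, d_ref) in enumerate(zip(coords, ref.ravel())):
            dose_tol = dd * (norm if global_norm else d_ref)
            dist2 = ((coords - p) ** 2).sum(axis=1) / dta ** 2
            dose2 = (eval_flat - d_ref) ** 2 / dose_tol ** 2
            gammas[i] = np.sqrt((dist2 + dose2).min())
        return gammas.reshape(ref.shape)   # pass rate = fraction of gammas <= 1
    ```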

  8. Technical Note: Range verification system using edge detection method for a scintillator and a CCD camera system.

    PubMed

    Saotome, Naoya; Furukawa, Takuji; Hara, Yousuke; Mizushima, Kota; Tansho, Ryohei; Saraya, Yuichi; Shirai, Toshiyuki; Noda, Koji

    2016-04-01

    Three-dimensional irradiation with a scanned carbon-ion beam has been performed since 2011 at the authors' facility. The authors have developed a rotating gantry equipped with the scanning irradiation system. The number of combinations of beam properties to measure for commissioning is more than 7200, i.e., 201 energy steps, 3 intensities, and 12 gantry angles. To compress the commissioning time, a quick and simple range verification system is required. In this work, the authors develop a quick range verification system using a scintillator and a charge-coupled device (CCD) camera and estimate the accuracy of the range verification. A cylindrical plastic scintillator block and a CCD camera were installed on the black box. The optical spatial resolution of the system is 0.2 mm/pixel. The camera control system was connected to and communicates with the measurement system that is part of the scanning system. The range was determined by image processing. The reference range for each beam energy was determined by a difference-of-Gaussian (DOG) method and by the 80% distal dose of the depth-dose distribution measured with a large parallel-plate ionization chamber. The authors compared a threshold method and the DOG method. The authors found that the edge detection method (i.e., the DOG method) is best for range detection. The accuracy of range detection using this system is within 0.2 mm, and the reproducibility of the same-energy measurement is within 0.1 mm without setup error. The results of this study demonstrate that the authors' range check system is capable of quick and easy range verification with sufficient accuracy.
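
    The edge detection favoured by the authors is a difference-of-Gaussians (DOG) filter applied to the depth-light profile extracted from the scintillator image. A minimal one-dimensional sketch is shown below; the sigma values and the lack of sub-pixel refinement are assumptions, not the parameters used in the paper.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def dog_range(depth_light, depth_mm, sigma_narrow=2.0, sigma_wide=6.0):
        """Locate the distal edge of a depth-light profile with a
        difference-of-Gaussians (DOG) filter (illustrative parameters only).
        depth_light : 1-D scintillation-light profile along depth
        depth_mm    : matching depth coordinates in mm
        """
        dog = gaussian_filter1d(depth_light, sigma_narrow) - \
              gaussian_filter1d(depth_light, sigma_wide)
        # the sharp distal fall-off produces the strongest DOG extremum
        return depth_mm[int(np.argmax(np.abs(dog)))]
    ```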

  9. Technical Note: Range verification system using edge detection method for a scintillator and a CCD camera system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saotome, Naoya, E-mail: naosao@nirs.go.jp; Furukawa, Takuji; Hara, Yousuke

    Purpose: Three-dimensional irradiation with a scanned carbon-ion beam has been performed since 2011 at the authors' facility. The authors have developed a rotating gantry equipped with the scanning irradiation system. The number of combinations of beam properties to measure for commissioning is more than 7200, i.e., 201 energy steps, 3 intensities, and 12 gantry angles. To compress the commissioning time, a quick and simple range verification system is required. In this work, the authors develop a quick range verification system using a scintillator and a charge-coupled device (CCD) camera and estimate the accuracy of the range verification. Methods: A cylindrical plastic scintillator block and a CCD camera were installed on the black box. The optical spatial resolution of the system is 0.2 mm/pixel. The camera control system was connected to and communicates with the measurement system that is part of the scanning system. The range was determined by image processing. The reference range for each beam energy was determined by a difference-of-Gaussian (DOG) method and by the 80% distal dose of the depth-dose distribution measured with a large parallel-plate ionization chamber. The authors compared a threshold method and the DOG method. Results: The authors found that the edge detection method (i.e., the DOG method) is best for range detection. The accuracy of range detection using this system is within 0.2 mm, and the reproducibility of the same-energy measurement is within 0.1 mm without setup error. Conclusions: The results of this study demonstrate that the authors' range check system is capable of quick and easy range verification with sufficient accuracy.

  10. Omnidirectional Underwater Camera Design and Calibration

    PubMed Central

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-01-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707
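
    The ray-tracing simulator mentioned above has to refract each pixel's ray at the air-housing and housing-water interfaces. The snippet below is a generic vector form of Snell's law under assumed refractive indices; it illustrates the modeling step rather than reproducing the paper's simulator.

    ```python
    import numpy as np

    def refract(incident, normal, n1, n2):
        """Refract a unit ray direction at a planar interface (Snell's law,
        vector form). Returns None on total internal reflection.
        incident : unit vector pointing toward the interface
        normal   : unit surface normal pointing toward the incoming medium
        """
        eta = n1 / n2
        cos_i = -np.dot(normal, incident)
        sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
        if sin2_t > 1.0:
            return None                      # total internal reflection
        cos_t = np.sqrt(1.0 - sin2_t)
        return eta * incident + (eta * cos_i - cos_t) * normal

    # air -> housing -> water for one pixel's ray (illustrative indices)
    ray = np.array([0.3, 0.0, 0.95]); ray /= np.linalg.norm(ray)
    n = np.array([0.0, 0.0, -1.0])
    in_housing = refract(ray, n, 1.00, 1.49)
    in_water = refract(in_housing, n, 1.49, 1.33)
    ```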

  11. Video camera system for locating bullet holes in targets at a ballistics tunnel

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Rummler, D. R.; Goad, W. K.

    1990-01-01

    A system consisting of a single charge-coupled device (CCD) video camera, a computer-controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support development of wind-resistant, ultra match ammunition. The system was designed to take data rapidly (10 s between shots) and automatically with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.

  12. Experimental study of the vidicon system for information recording using the wide-gap spark chamber of gamma - telescope gamma-I

    NASA Technical Reports Server (NTRS)

    Akimov, V. V.; Bazer-Bashv, R.; Voronov, S. A.; Galper, A. M.; Gro, M.; Kalinkin, L. F.; Kerl, P.; Kozlov, V. D.; Koten, F.; Kretol, D.

    1979-01-01

    The development of the gamma-ray telescope is investigated. The wide-gap spark chambers, used to identify the gamma quanta and to determine the directions of their arrival, are examined. Two systems of information recording with the spark chambers, a photographic system and a vidicon system, are compared.

  13. Olfactory system gamma oscillations: the physiological dissection of a cognitive neural system

    PubMed Central

    Rojas-Líbano, Daniel

    2008-01-01

    Oscillatory phenomena have been a focus of dynamical systems research since the time of the classical studies on the pendulum by Galileo. Fast cortical oscillations also have a long and storied history in neurophysiology, and olfactory oscillations have led the way with a depth of explanation not present in the literature of most other cortical systems. From the earliest studies of odor-evoked oscillations by Adrian, many reports have focused on mechanisms and functional associations of these oscillations, in particular for the so-called gamma oscillations. As a result, much information is now available regarding the biophysical mechanisms that underlie the oscillations in the mammalian olfactory system. Recent studies have expanded on these and addressed functionality directly in mammals and in the analogous insect system. Sub-bands within the rodent gamma oscillatory band associated with specific behavioral and cognitive states have also been identified. All this makes oscillatory neuronal networks a unique interdisciplinary platform from which to study neurocognitive and dynamical phenomena in intact, freely behaving animals. We present here a summary of what has been learned about the functional role and mechanisms of gamma oscillations in the olfactory system as a guide for similar studies in other cortical systems. PMID:19003484

  14. Automatic camera to laser calibration for high accuracy mobile mapping systems using INS

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Gautama, Sidharta

    2013-09-01

    A mobile mapping system (MMS) is a mobile multi-sensor platform developed by the geoinformation community to support the acquisition of huge amounts of geodata in the form of georeferenced high resolution images and dense laser clouds. Since data fusion and data integration techniques are increasingly able to combine the complementary strengths of different sensor types, the external calibration of a camera to a laser scanner is a common pre-requisite on today's mobile platforms. The methods of calibration, nevertheless, are often relatively poorly documented, are almost always time-consuming, demand expert knowledge and often require a carefully constructed calibration environment. A new methodology is studied and explored to provide a high quality external calibration for a pinhole camera to a laser scanner which is automatic, easy to perform, robust and foolproof. The method presented here, uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well studied absolute orientation problem needs to be solved. In many cases, the camera and laser sensor are calibrated in relation to the INS system. Therefore, the transformation from camera to laser contains the cumulated error of each sensor in relation to the INS. Here, the calibration of the camera is performed in relation to the laser frame using the time synchronization between the sensors for data association. In this study, the use of the inertial relative movement will be explored to collect more useful calibration data. This results in a better intersensor calibration allowing better coloring of the clouds and a more accurate depth mask for images, especially on the edges of objects in the scene.
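
    The "well studied absolute orientation problem" mentioned above can be solved in closed form once corresponding 3-D points are available in the camera and laser frames. The sketch below uses the SVD-based (Kabsch/Horn-style) solution; the point sets and their association via the ranging-pole tip are assumptions for illustration, not the paper's procedure.

    ```python
    import numpy as np

    def absolute_orientation(P, Q):
        """Least-squares rigid transform (R, t) mapping points P to Q,
        solved with the SVD (Kabsch) method.
        P, Q : (N, 3) arrays of corresponding 3-D points, e.g. ranging-pole
        tip positions observed in the laser frame and in the camera frame."""
        p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
        H = (P - p_bar).T @ (Q - q_bar)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T            # reflection-safe rotation
        t = q_bar - R @ p_bar
        return R, t                   # q_i ~= R @ p_i + t
    ```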

  15. Infrared Camera System for Visualization of IR-Absorbing Gas Leaks

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert; Immer, Christopher; Cox, Robert

    2010-01-01

    embodiment would use a ratioed output signal to better represent the gas column concentration. An alternative approach uses a simpler multiplication of the filtered signal to make the filtered signal equal to the unfiltered signal at most locations, followed by a subtraction to remove all but the wavelength-specific absorption in the unfiltered sample. This signal processing can also reveal the net difference signal representing the leaking gas absorption, and allow rapid leak location, but signal intensity would not relate solely to gas absorption, as raw signal intensity would also affect the displayed signal. A second design choice is whether to use one camera with two images closely spaced in time, or two cameras with essentially the same view and time. The figure shows the two-camera version. This choice involves many tradeoffs that are not apparent until some detailed testing is done. In short, the tradeoffs involve the temporal changes in the field picture versus the pixel sensitivity curves and frame alignment differences with two cameras, and which system would lead to the smaller variations from the uncontrolled variables.
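
    The scale-and-subtract processing described above can be sketched as follows; the single global scale factor and the variable names are assumptions for illustration, and the article's fully ratioed-output variant is not reproduced.

    ```python
    import numpy as np

    def leak_difference_image(unfiltered, filtered, eps=1e-6):
        """Scale the filtered image so it matches the unfiltered image at most
        locations, then subtract, leaving mostly the wavelength-specific
        absorption of the leaking gas. A single global (median-ratio) scale
        factor is a simplification of the processing described in the text."""
        scale = np.median(unfiltered / (filtered + eps))
        return unfiltered - scale * filtered
    ```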

  16. Design criteria for a high energy Compton Camera and possible application to targeted cancer therapy

    NASA Astrophysics Data System (ADS)

    Conka Nurdan, T.; Nurdan, K.; Brill, A. B.; Walenta, A. H.

    2015-07-01

    The proposed research focuses on the design criteria for a Compton camera with high spatial resolution and sensitivity, operating at high gamma energies, and its possible application to molecular imaging. This application mainly concerns the detection and visualization of the pharmacokinetics of tumor-targeting substances specific for particular cancer sites. The expected high resolution (< 0.5 mm) permits monitoring the pharmacokinetics of labeled gene constructs in vivo in small animals with a human tumor xenograft, which is one of the first steps in evaluating the potential utility of a candidate gene. The additional benefit of high-sensitivity detection will be improved cancer treatment strategies in patients based on the use of specific molecules binding to cancer sites for early detection of tumors and identification of metastases, monitoring drug delivery, and radionuclide therapy for optimum cell killing at the tumor site. This new technology can provide high-resolution, high-sensitivity imaging over a wide range of gamma energies and will significantly extend the range of radiotracers that can be investigated and used clinically. The small and compact construction of the proposed camera system allows flexible application, which will be particularly useful for monitoring residual tumor around the resection site during surgery. It is also envisaged for testing the performance of new drug/gene-based therapies in vitro and in vivo for tumor-targeting efficacy using automatic large-scale screening methods.

  17. Camera System MTF: combining optic with detector

    NASA Astrophysics Data System (ADS)

    Andersen, Torben B.; Granger, Zachary A.

    2017-08-01

    MTF is one of the most common metrics used to quantify the resolving power of an optical component. Extensive literature is dedicated to describing methods to calculate the Modulation Transfer Function (MTF) for stand-alone optical components such as a camera lens or telescope, and some literature addresses approaches to determine an MTF for the combination of an optic with a detector. The formulations pertaining to a combined electro-optical system MTF are mostly based on theory and on the assumption that the detector MTF is described only by the pixel pitch, which does not account for wavelength dependencies. When working with real hardware, detectors are often characterized by testing MTF at discrete wavelengths. This paper presents a method to simplify the calculation of a polychromatic system MTF when it is permissible to consider the detector MTF to be independent of wavelength.
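
    The cascade the paper starts from is the product of the optic MTF and the detector MTF, with the ideal square-pixel (footprint) detector term equal to |sinc(f·p)|. A minimal sketch under exactly the wavelength-independence assumption the paper discusses:

    ```python
    import numpy as np

    def detector_mtf(f_cyc_per_mm, pitch_mm):
        """Ideal square-pixel (footprint) MTF: |sinc(f * p)| using numpy's
        normalized sinc, sinc(x) = sin(pi x) / (pi x)."""
        return np.abs(np.sinc(f_cyc_per_mm * pitch_mm))

    def system_mtf(f, optic_mtf, pitch_mm):
        """Cascade an optic MTF curve with the detector footprint MTF.
        optic_mtf : array of optic MTF values sampled at frequencies f.
        The simple product assumes a wavelength-independent detector MTF."""
        return optic_mtf * detector_mtf(f, pitch_mm)

    # Example: 10 cycles/mm with a 10 um pixel pitch and a perfect optic
    f = np.array([10.0])
    print(system_mtf(f, np.array([1.0]), 0.010))
    ```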

  18. High-precision real-time 3D shape measurement based on a quad-camera system

    NASA Astrophysics Data System (ADS)

    Tao, Tianyang; Chen, Qian; Feng, Shijie; Hu, Yan; Zhang, Minliang; Zuo, Chao

    2018-01-01

    Phase-shifting profilometry (PSP) based 3D shape measurement is well established in various applications due to its high accuracy, simple implementation, and robustness to environmental illumination and surface texture. In PSP, higher depth resolution generally requires a higher fringe density of the projected patterns, which, in turn, leads to severe phase ambiguities that must be solved with additional information from phase coding and/or geometric constraints. However, in order to guarantee the reliability of phase unwrapping, available techniques are usually accompanied by an increased number of patterns, reduced fringe amplitude, and complicated post-processing algorithms. In this work, we demonstrate that by using a quad-camera multi-view fringe projection system and carefully arranging the relative spatial positions between the cameras and the projector, it becomes possible to completely eliminate the phase ambiguities in conventional three-step PSP patterns with high fringe density without projecting any additional patterns or embedding any auxiliary signals. Benefiting from the position-optimized quad-camera system, stereo phase unwrapping can be efficiently and reliably performed by flexible phase consistency checks. Besides, the redundant information of multiple phase consistency checks is fully used through a weighted phase difference scheme to further enhance the reliability of phase unwrapping. This paper explains the 3D measurement principle and the basic design of the quad-camera system, and finally demonstrates that in a large measurement volume of 200 mm × 200 mm × 400 mm, the resultant dynamic 3D sensing system can realize real-time 3D reconstruction at 60 frames per second with a depth precision of 50 μm.
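
    For context, the wrapped phase of conventional three-step PSP patterns (phase shifts of 2*pi/3) is recovered as phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3); the paper's contribution is resolving the remaining ambiguity with the quad-camera geometry rather than extra patterns. A minimal sketch of the wrapped-phase step:

    ```python
    import numpy as np

    def three_step_phase(I1, I2, I3):
        """Wrapped phase from three fringe images with 2*pi/3 phase shifts:
        phi = atan2( sqrt(3) * (I1 - I3), 2*I2 - I1 - I3 ).
        Returns values in (-pi, pi]; the absolute phase still requires
        unwrapping, which the quad-camera geometry resolves in the paper."""
        return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
    ```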

  19. In-Situ Cameras for Radiometric Correction of Remotely Sensed Data

    NASA Astrophysics Data System (ADS)

    Kautz, Jess S.

    The atmosphere distorts the spectrum of remotely sensed data, negatively affecting all forms of investigation of Earth's surface. To gather reliable data, it is vital that atmospheric corrections are accurate. The current state of the field of atmospheric correction does not account well for the benefits and costs of different correction algorithms. Ground spectral data are required to evaluate these algorithms better. This dissertation explores using cameras as radiometers as a means of gathering ground spectral data. I introduce techniques to implement a camera system for atmospheric correction using off-the-shelf parts. To aid the design of future camera systems for radiometric correction, methods for estimating the system error prior to construction, calibration and testing of the resulting camera system are explored. Simulations are used to investigate the relationship between the reflectance accuracy of the camera system and the quality of atmospheric correction. In the design phase, read noise and filter choice are found to be the strongest sources of system error. I explain the calibration methods for the camera system, showing the problems of pixel-to-angle calibration and adapting the web camera for scientific work. The camera system is tested in the field to estimate its ability to recover directional reflectance from BRF data. I estimate the error in the system due to the experimental setup, then explore how the system error changes with different cameras, environmental setups and inversions. With these experiments, I learn about the importance of the dynamic range of the camera and the input ranges used for the PROSAIL inversion. Evidence that the camera can perform within the specification set for ELM correction in this dissertation is evaluated. The analysis is concluded by simulating an ELM correction of a scene using various numbers of calibration targets and levels of system error, to find the number of cameras needed for a full
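
    The ELM (empirical line method) correction evaluated in the dissertation fits, per band, a linear mapping from image digital numbers to surface reflectance using calibration targets of known reflectance. The sketch below shows that fit in its simplest least-squares form; it is an illustration, not the dissertation's code.

    ```python
    import numpy as np

    def elm_coefficients(dn, reflectance):
        """Empirical Line Method: per-band linear fit mapping image digital
        numbers to surface reflectance using targets of known reflectance.
        dn, reflectance : 1-D arrays (one value per calibration target).
        Returns (gain, offset) so that reflectance ~= gain * dn + offset."""
        gain, offset = np.polyfit(dn, reflectance, 1)
        return gain, offset

    # Example with two hypothetical targets (dark and bright panels)
    gain, offset = elm_coefficients(np.array([40.0, 210.0]), np.array([0.05, 0.60]))
    corrected = gain * 120.0 + offset   # apply to any pixel's digital number
    ```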

  20. Differences in glance behavior between drivers using a rearview camera, parking sensor system, both technologies, or no technology during low-speed parking maneuvers.

    PubMed

    Kidd, David G; McCartt, Anne T

    2016-02-01

    This study characterized the use of various fields of view during low-speed parking maneuvers by drivers with a rearview camera, a sensor system, a camera and sensor system combined, or neither technology. Participants performed four different low-speed parking maneuvers five times. Glances to different fields of view the second time through the four maneuvers were coded along with the glance locations at the onset of the audible warning from the sensor system and immediately after the warning for participants in the sensor and camera-plus-sensor conditions. Overall, the results suggest that information from cameras and/or sensor systems is used in place of mirrors and shoulder glances. Participants with a camera, sensor system, or both technologies looked over their shoulders significantly less than participants without technology. Participants with cameras (camera and camera-plus-sensor conditions) used their mirrors significantly less compared with participants without cameras (no-technology and sensor conditions). Participants in the camera-plus-sensor condition looked at the center console/camera display for a smaller percentage of the time during the low-speed maneuvers than participants in the camera condition and glanced more frequently to the center console/camera display immediately after the warning from the sensor system compared with the frequency of glances to this location at warning onset. Although this increase was not statistically significant, the pattern suggests that participants in the camera-plus-sensor condition may have used the warning as a cue to look at the camera display. The observed differences in glance behavior between study groups were illustrated by relating it to the visibility of a 12-15-month-old child-size object. These findings provide evidence that drivers adapt their glance behavior during low-speed parking maneuvers following extended use of rearview cameras and parking sensors, and suggest that other technologies which

  1. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    PubMed

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro(®) 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  2. Inexpensive camera systems for detecting martens, fishers, and other animals: guidelines for use and standardization.

    Treesearch

    Lawrence L.C. Jones; Martin G. Raphael

    1993-01-01

    Inexpensive camera systems have been successfully used to detect the occurrence of martens, fishers, and other wildlife species. The use of cameras is becoming widespread, and we give suggestions for standardizing techniques so that comparisons of data can occur across the geographic range of the target species. Details are given on equipment needs, setting up the...

  3. SFR test fixture for hemispherical and hyperhemispherical camera systems

    NASA Astrophysics Data System (ADS)

    Tamkin, John M.

    2017-08-01

    Optical testing of camera systems in volume production environments can often require expensive tooling and test fixturing. Wide field (fish-eye, hemispheric and hyperhemispheric) optical systems create unique challenges because of the inherent distortion, and difficulty in controlling reflections from front-lit high resolution test targets over the hemisphere. We present a unique design for a test fixture that uses low-cost manufacturing methods and equipment such as 3D printing and an Arduino processor to control back-lit multi-color (VIS/NIR) targets and sources. Special care with LED drive electronics is required to accommodate both global and rolling shutter sensors.

  4. Science, conservation, and camera traps

    USGS Publications Warehouse

    Nichols, James D.; Karanth, K. Ullas; O'Connel, Allan F.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  5. Advanced EVA Suit Camera System Development Project

    NASA Technical Reports Server (NTRS)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera that is currently being tested to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Design to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options of the camera placement may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary and these comparisons will be used as further progress is made for the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was

  6. BAE Systems' 17μm LWIR camera core for civil, commercial, and military applications

    NASA Astrophysics Data System (ADS)

    Lee, Jeffrey; Rodriguez, Christian; Blackwell, Richard

    2013-06-01

    Seventeen (17) µm pixel Long Wave Infrared (LWIR) sensors based on vanadium oxide (VOx) micro-bolometers have been in full-rate production at BAE Systems' Night Vision Sensors facility in Lexington, MA for the past five years.[1] We introduce here a commercial camera core product, the Airia-M™ imaging module, in a VGA format that reads out in 30 and 60 Hz progressive modes. The camera core is architected to conserve power, with all-digital interfaces from the readout integrated circuit through the video output. The architecture enables a variety of input/output interfaces including Camera Link, USB 2.0, micro-display drivers and optional RS-170 analog output supporting legacy systems. The modular board architecture of the electronics facilitates hardware upgrades, allowing us to capitalize on the latest high-performance, low-power electronics developed for mobile phones. Software and firmware are field upgradeable through a USB 2.0 port. The USB port also gives users access to up to 100 digitally stored (lossless) images.

  7. A TV Camera System Which Extracts Feature Points For Non-Contact Eye Movement Detection

    NASA Astrophysics Data System (ADS)

    Tomono, Akira; Iida, Muneo; Kobayashi, Yukio

    1990-04-01

    This paper proposes a highly efficient camera system which extracts, irrespective of background, feature points such as the pupil, the corneal reflection image and dot-marks pasted on a human face, in order to detect human eye movement by image processing. Two eye movement detection methods are suggested: one utilizing face orientation as well as pupil position, the other utilizing the pupil and corneal reflection images. A method of extracting these feature points using LEDs as illumination devices and a new TV camera system designed to record eye movement are proposed. Two kinds of infra-red LEDs are used. These LEDs are set up a short distance apart and emit polarized light of different wavelengths. One light source beams from near the optical axis of the lens and the other is some distance from the optical axis. The LEDs are operated in synchronization with the camera. The camera includes 3 CCD image pick-up sensors and a prism system with 2 boundary layers. Incident rays are separated into 2 wavelengths by the first boundary layer of the prism. One set of rays forms an image on CCD-3. The other set is split by the half-mirror layer of the prism and forms an image including the regularly reflected component (by placing a polarizing filter in front of CCD-1) and another image not including that component (by not placing a polarizing filter in front of CCD-2). Thus, three images with different reflection characteristics are obtained by the three CCDs. The experiment shows that two kinds of subtraction operations between the three images output from the CCDs accentuate the three kinds of feature points: the pupil and corneal reflection images and the dot-marks. Since the S/N ratio of the subtracted image is extremely high, the thresholding process is simple and allows reducing the intensity of the infra-red illumination. A high-speed image processing apparatus using this camera system is described. Real-time processing of the subtraction, thresholding and gravity position
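
    The subtraction-and-threshold scheme described above can be sketched generically as below; the fixed threshold and the image pairing are illustrative assumptions, not the paper's apparatus, and the centroid step stands in for the "gravity position" processing it mentions.

    ```python
    import numpy as np

    def extract_feature_mask(img_with_component, img_without_component, thresh):
        """Subtract two near-simultaneous images that differ only in one
        reflection component (e.g. with and without the polarized corneal
        reflection) and threshold the high-S/N difference image."""
        diff = img_with_component.astype(int) - img_without_component.astype(int)
        return diff > thresh

    def centroid(mask):
        """Centre of gravity (row, col) of the thresholded feature region."""
        ys, xs = np.nonzero(mask)
        return ys.mean(), xs.mean()
    ```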

  8. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Paik, Joonki

    2016-01-01

    This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978

  9. Characterization of scintillator crystals for usage as prompt gamma monitors in particle therapy

    NASA Astrophysics Data System (ADS)

    Roemer, K.; Pausch, G.; Bemmerer, D.; Berthel, M.; Dreyer, A.; Golnik, C.; Hueso-González, F.; Kormoll, T.; Petzoldt, J.; Rohling, H.; Thirolf, P.; Wagner, A.; Wagner, L.; Weinberger, D.; Fiedler, F.

    2015-10-01

    Particle therapy in oncology is advantageous compared to classical radiotherapy due to its well-defined penetration depth. In the so-called Bragg peak, the highest dose is deposited; the tissue behind the cancerous area is not exposed. Different factors influence the range of the particle and thus the target area, e.g. organ motion, mispositioning of the patient or anatomical changes. In order to avoid over-exposure of healthy tissue and under-dosage of cancerous regions, the penetration depth of the particle has to be monitored, preferably already during the ongoing therapy session. The verification of the ion range can be performed using prompt gamma emissions, which are produced by interactions between projectile and tissue, and originate from the same location and time of the nuclear reaction. The prompt gamma emission profile and the clinically relevant penetration depth are correlated. Various imaging concepts based on the detection of prompt gamma rays are currently discussed: collimated systems with counting detectors, Compton cameras with (at least) two detector planes, or the prompt gamma timing method, utilizing the particle time-of-flight within the body. For each concept, the detection system must meet special requirements regarding energy, time, and spatial resolution. Nonetheless, the prerequisites remain the same: the gamma energy region (2 to 10 MeV), high counting rates and the stability in strong background radiation fields. The aim of this work is the comparison of different scintillation crystals regarding energy and time resolution for optimized prompt gamma detection.

  10. Flexible nuclear medicine camera and method of using

    DOEpatents

    Dilmanian, F. Avraham; Packer, Samuel; Slatkin, Daniel N.

    1996-12-10

    A nuclear medicine camera 10 and method of use photographically record radioactive decay particles emitted from a source, for example a small, previously undetectable breast cancer, inside a patient. The camera 10 includes a flexible frame 20 containing a window 22, a photographic film 24, and a scintillation screen 26, with or without a gamma-ray collimator 34. The frame 20 flexes for following the contour of the examination site on the patient, with the window 22 being disposed in substantially abutting contact with the skin of the patient for reducing the distance between the film 24 and the radiation source inside the patient. The frame 20 is removably affixed to the patient at the examination site for allowing the patient mobility to wear the frame 20 for a predetermined exposure time period. The exposure time may be several days for obtaining early qualitative detection of small malignant neoplasms.

  11. Theoretical study of the NO gamma system

    NASA Technical Reports Server (NTRS)

    Langhoff, Stephen R.; Bauschlicher, Charles W., Jr.; Partridge, Harry

    1988-01-01

    A systematic study of the NO gamma system with level of correlation treatment was carried out using large Gaussian basis sets to determine the potential curves for the X2Pi and A2Sigma(+) states of NO. It is shown that the A2Sigma-X2Pi electronic transition moment (gamma system) increases monotonically with decreasing internuclear distance and that the increase in the moment as r decreases is correlated with the increasing degree of diffuse character in the X2Pi state. The results of a study of the X2Pi and A2Sigma(+) dipole moment functions showed that the X2Pi vibrationally averaged dipole moments and the (1-0) and (2-0) vibration-rotation band intensities agree well with experimental data.

  12. Mars Science Laboratory Engineering Cameras

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.

  13. Infrared On-Orbit RCC Inspection With the EVA IR Camera: Development of Flight Hardware From a COTS System

    NASA Technical Reports Server (NTRS)

    Gazanik, Michael; Johnson, Dave; Kist, Ed; Novak, Frank; Antill, Charles; Haakenson, David; Howell, Patricia; Jenkins, Rusty; Yates, Rusty; Stephan, Ryan

    2005-01-01

    In November 2004, NASA's Space Shuttle Program approved the development of the Extravehicular (EVA) Infrared (IR) Camera to test the application of infrared thermography to on-orbit reinforced carbon-carbon (RCC) damage detection. A multi-center team composed of members from NASA's Johnson Space Center (JSC), Langley Research Center (LaRC), and Goddard Space Flight Center (GSFC) was formed to develop the camera system and plan a flight test. The initial development schedule called for the delivery of the system in time to support STS-115 in late 2005. At the request of Shuttle Program managers and the flight crews, the team accelerated its schedule and delivered a certified EVA IR Camera system in time to support STS-114 in July 2005 as a contingency. The development of the camera system, led by LaRC, was based on the Commercial-Off-the-Shelf (COTS) FLIR S65 handheld infrared camera. An assessment of the S65 system in regards to space-flight operation was critical to the project. This paper discusses the space-flight assessment and describes the significant modifications required for EVA use by the astronaut crew. The on-orbit inspection technique will be demonstrated during the third EVA of STS-121 in September 2005 by imaging damaged RCC samples mounted in a box in the Shuttle's cargo bay.

  14. Full image-processing pipeline in field-programmable gate array for a small endoscopic camera

    NASA Astrophysics Data System (ADS)

    Mostafa, Sheikh Shanawaz; Sousa, L. Natércia; Ferreira, Nuno Fábio; Sousa, Ricardo M.; Santos, Joao; Wäny, Martin; Morgado-Dias, F.

    2017-01-01

    Endoscopy is an imaging procedure used for diagnosis as well as for some surgical purposes. The camera used for the endoscopy should be small and able to produce a good quality image or video, to reduce discomfort of the patients, and to increase the efficiency of the medical team. To achieve these fundamental goals, a small endoscopy camera with a footprint of 1 mm×1 mm×1.65 mm is used. Due to the physical properties of the sensors and human vision system limitations, different image-processing algorithms, such as noise reduction, demosaicking, and gamma correction, among others, are needed to faithfully reproduce the image or video. A full image-processing pipeline is implemented using a field-programmable gate array (FPGA) to accomplish a high frame rate of 60 fps with minimum processing delay. Along with this, a viewer has also been developed to display and control the image-processing pipeline. The control and data transfer are done by a USB 3.0 end point in the computer. The full developed system achieves real-time processing of the image and fits in a Xilinx Spartan-6LX150 FPGA.
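
    Of the pipeline stages listed above, gamma correction is typically implemented in hardware as a per-pixel lookup table. A minimal sketch of building such a table is shown below; the gamma value and bit depth are assumptions for illustration, not the parameters of this FPGA design.

    ```python
    import numpy as np

    def gamma_lut(gamma=2.2, bits=8):
        """Build a gamma-correction lookup table of the kind typically burned
        into a hardware pipeline stage: out = (in / max) ** (1 / gamma) * max."""
        max_val = 2 ** bits - 1
        x = np.arange(max_val + 1) / max_val
        return np.round((x ** (1.0 / gamma)) * max_val).astype(np.uint16)

    lut = gamma_lut()
    corrected_pixel = lut[128]   # applied per pixel as a simple table lookup
    ```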

  15. Body worn camera

    NASA Astrophysics Data System (ADS)

    Aishwariya, A.; Pallavi Sudhir, Gulavani; Garg, Nemesa; Karthikeyan, B.

    2017-11-01

    A body worn camera is a small video camera worn on the body, typically used by police officers to record arrests and evidence from crime scenes. It helps prevent and resolve complaints brought by members of the public, and strengthens police transparency, performance, and accountability. The main parameters of this type of system are video format, resolution, frame rate, and audio quality. This system records video in .mp4 format with 1080p resolution at 30 frames per second. Another important aspect while designing this system is the amount of power it requires, as battery management becomes very critical. The main design challenges are the size of the video, audio for the video, combining audio and video and saving them in .mp4 format, a battery size sufficient for 8 hours of continuous recording, and security. For prototyping, this system is implemented using a Raspberry Pi Model B.

  16. Accurate and cost-effective MTF measurement system for lens modules of digital cameras

    NASA Astrophysics Data System (ADS)

    Chang, Gao-Wei; Liao, Chia-Cheng; Yeh, Zong-Mu

    2007-01-01

    For many years, the widening use of digital imaging products, e.g., digital cameras, has attracted much attention in the consumer electronics market. It is therefore important to measure and enhance the imaging performance of digital cameras, compared with that of conventional cameras (with photographic film). For example, the effect of diffraction arising from the miniaturization of the optical modules tends to decrease the image resolution. As a figure of merit, the modulation transfer function (MTF) has been broadly employed to estimate image quality. The objective of this paper is therefore to design and implement an accurate and cost-effective MTF measurement system for digital cameras. Once the MTF of the sensor array is provided, that of the optical module can then be obtained. In this approach, a spatial light modulator (SLM) is employed to modulate the spatial frequency of the light emitted from the light source. The modulated light going through the camera under test is consecutively detected by the sensors. The corresponding images formed by the camera are acquired by a computer and then processed by an algorithm for computing the MTF. Finally, an investigation of the measurement accuracy of various methods, such as the bar-target and spread-function methods, shows that our approach gives quite satisfactory results.

  17. Plate refractive camera model and its applications

    NASA Astrophysics Data System (ADS)

    Huang, Longxiang; Zhao, Xu; Cai, Shen; Liu, Yuncai

    2017-03-01

    In real applications, a pinhole camera capturing objects through a planar parallel transparent plate is frequently employed. Due to the refractive effects of the plate, such an imaging system does not comply with the conventional pinhole camera model. Although the system is ubiquitous, it has not been thoroughly studied. This paper aims at presenting a simple virtual camera model, called a plate refractive camera model, which has a form similar to a pinhole camera model and can efficiently model refractions through a plate. The key idea is to employ a pixel-wise viewpoint concept to encode the refraction effects into a pixel-wise pinhole camera model. The proposed camera model realizes an efficient forward projection computation method and has some advantages in applications. First, the model can help to compute the caustic surface to represent the changes of the camera viewpoints. Second, the model has strengths in analyzing and rectifying the image caustic distortion caused by the plate refraction effects. Third, the model can be used to calibrate the camera's intrinsic parameters without removing the plate. Last but not least, the model contributes to plate refractive triangulation methods that solve the plate refractive triangulation problem easily in multiple views. We verify our theory in both synthetic and real experiments.
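
    The pixel-wise viewpoint model itself is not reconstructed here; the sketch below only evaluates the textbook lateral displacement of a ray passing through a parallel plate, which is the refraction effect the model encodes. The plate thickness and refractive index are illustrative values.

        import numpy as np

        def plate_lateral_shift(theta_deg, thickness_mm, n_plate, n_air=1.0):
            """Lateral displacement of a ray refracted through a parallel transparent plate."""
            theta = np.radians(theta_deg)
            sin_inside = (n_air / n_plate) * np.sin(theta)   # Snell's law inside the plate
            return thickness_mm * np.sin(theta) * (
                1.0 - np.cos(theta) / (n_plate * np.sqrt(1.0 - sin_inside**2)))

        # Example: 5 mm glass plate (n = 1.5), ray incident at 30 degrees
        print(plate_lateral_shift(30.0, 5.0, 1.5))   # ~0.97 mm of lateral shift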

  18. Measurement of Separated Flow Structures Using a Multiple-Camera DPIV System. [conducted in the Langley Subsonic Basic Research Tunnel

    NASA Technical Reports Server (NTRS)

    Humphreys, William M., Jr.; Bartram, Scott M.

    2001-01-01

    A novel multiple-camera system for the recording of digital particle image velocimetry (DPIV) images acquired in a two-dimensional separating/reattaching flow is described. The measurements were performed in the NASA Langley Subsonic Basic Research Tunnel as part of an overall series of experiments involving the simultaneous acquisition of dynamic surface pressures and off-body velocities. The DPIV system utilized two frequency-doubled Nd:YAG lasers to generate two coplanar, orthogonally polarized light sheets directed upstream along the horizontal centerline of the test model. A recording system containing two pairs of matched high resolution, 8-bit cameras was used to separate and capture images of illuminated tracer particles embedded in the flow field. Background image subtraction was used to reduce undesirable flare light emanating from the surface of the model, and custom pixel alignment algorithms were employed to provide accurate registration among the various cameras. Spatial cross correlation analysis with median filter validation was used to determine the instantaneous velocity structure in the separating/reattaching flow region illuminated by the laser light sheets. In operation the DPIV system exhibited a good ability to resolve large-scale separated flow structures with acceptable accuracy over the extended field of view of the cameras. The recording system design provided enhanced performance versus traditional DPIV systems by allowing a variety of standard and non-standard cameras to be easily incorporated into the system.
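
    As a generic illustration of the spatial cross-correlation analysis mentioned above (not the authors' processing chain), the integer-pixel displacement between two interrogation windows can be estimated from the peak of an FFT-based cross-correlation:

        import numpy as np

        def piv_displacement(window_a, window_b):
            """Integer-pixel displacement of the particle pattern from window_a to window_b."""
            a = window_a - window_a.mean()
            b = window_b - window_b.mean()
            corr = np.fft.irfft2(np.fft.rfft2(a) * np.conj(np.fft.rfft2(b)), s=a.shape)
            corr = np.fft.fftshift(corr)                    # zero lag at the array center
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            center = np.array(corr.shape) // 2
            return center - np.array(peak)                  # (dy, dx) shift of b relative to a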

  19. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions that could leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  20. Clinical trials of the prototype Rutherford Appleton Laboratory MWPC positron camera at the Royal Marsden Hospital

    NASA Astrophysics Data System (ADS)

    Flower, M. A.; Ott, R. J.; Webb, S.; Leach, M. O.; Marsden, P. K.; Clack, R.; Khan, O.; Batty, V.; McCready, V. R.; Bateman, J. E.

    1988-06-01

    Two clinical trials of the prototype RAL multiwire proportional chamber (MWPC) positron camera were carried out prior to the development of a clinical system with large-area detectors. During the first clinical trial, the patient studies included skeletal imaging using 18F, imaging of brain glucose metabolism using 18F FDG, bone marrow imaging using 52Fe citrate and thyroid imaging with Na 124I. Longitudinal tomograms were produced from the limited-angle data acquisition from the static detectors. During the second clinical trial, transaxial, coronal and sagittal images were produced from the multiview data acquisition. A more detailed thyroid study was performed in which the volume of the functioning thyroid tissue was obtained from the 3D PET image and this volume was used in estimating the radiation dose achieved during radioiodine therapy of patients with thyrotoxicosis. Despite the small field of view of the prototype camera, and the use of smaller than usual amounts of activity administered, the PET images were in most cases comparable with, and in a few cases visually better than, the equivalent planar view using a state-of-the-art gamma camera with a large field of view and routine radiopharmaceuticals.

  1. Camera system for multispectral imaging of documents

    NASA Astrophysics Data System (ADS)

    Christens-Barry, William A.; Boydston, Kenneth; France, Fenella G.; Knox, Keith T.; Easton, Roger L., Jr.; Toth, Michael B.

    2009-02-01

    A spectral imaging system comprising a 39-Mpixel monochrome camera, LED-based narrowband illumination, and acquisition/control software has been designed for investigations of cultural heritage objects. Notable attributes of this system, referred to as EurekaVision, include: streamlined workflow, flexibility, provision of well-structured data and metadata for downstream processing, and illumination that is safer for the artifacts. The system design builds upon experience gained while imaging the Archimedes Palimpsest and has been used in studies of a number of important objects in the LOC collection. This paper describes practical issues that were considered in the design of EurekaVision to address key research questions for the study of fragile and unique cultural objects over a range of spectral bands. The system is intended to capture important digital records for access by researchers, professionals, and the public. The system was first used for spectral imaging of the 1507 world map by Martin Waldseemueller, the first printed map to reference "America." It was also used to image sections of the Carta Marina 1516 map by the same cartographer for comparative purposes. An updated version of the system is now being utilized by the Preservation Research and Testing Division of the Library of Congress.

  2. Study of gamma detection capabilities of the REWARD mobile spectroscopic system

    NASA Astrophysics Data System (ADS)

    Balbuena, J. P.; Baptista, M.; Barros, S.; Dambacher, M.; Disch, C.; Fiederle, M.; Kuehn, S.; Parzefall, U.

    2017-07-01

    REWARD is a novel mobile spectroscopic radiation detector system for Homeland Security applications. The system integrates gamma and neutron detection equipped with wireless communication. A comprehensive simulation study on its gamma detection capabilities in different radioactive scenarios is presented in this work. The gamma detection unit consists of a precise energy resolution system based on two stacked (Cd,Zn)Te sensors working in coincidence sum mode. The volume of each of these CZT sensors is 1 cm3. The investigated energy windows used to determine the detection capabilities of the detector correspond to the gamma emissions from 137Cs and 60Co radioactive sources (662 keV and 1173/1333 keV respectively). Monte Carlo and Technology Computer-Aided Design (TCAD) simulations are combined to determine its sensing capabilities for different radiation sources and estimate the limits of detection of the sensing unit as a function of source activity for several shielding materials.

  3. A design of camera simulator for photoelectric image acquisition system

    NASA Astrophysics Data System (ADS)

    Cai, Guanghui; Liu, Wen; Zhang, Xin

    2015-02-01

    In the process of developing photoelectric image acquisition equipment, its function and performance need to be verified. In order to let the photoelectric device recall previously acquired image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, the image data are saved to NAND flash through a USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the requirements of the system, the pipeline technique and a high-bandwidth bus technique are applied in the design to improve the storage rate. The control logic in the FPGA reads the image data out of the flash and outputs them separately on three different interfaces, Camera Link, LVDS, and PAL, which can provide image data for the debugging and algorithm validation of photoelectric image acquisition equipment. However, because the standard PAL image resolution is 720×576 and differs from the input image resolution, the image is output after resolution conversion. The experimental results demonstrate that the camera simulator outputs the three image-sequence formats correctly, and they can be captured and displayed by a frame grabber. The three-format image data can meet the test requirements of most equipment, shorten debugging time, and improve test efficiency.
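
    As an illustration of the resolution conversion to the 720×576 PAL raster described above, performed here in software with OpenCV rather than in FPGA logic (the input frame size is a placeholder):

        import cv2
        import numpy as np

        # Hypothetical input frame; in the simulator this would come from flash memory
        src = np.random.randint(0, 256, (1024, 1280), dtype=np.uint8)

        # Resize to the PAL raster: dsize is given as (width, height)
        pal_frame = cv2.resize(src, (720, 576), interpolation=cv2.INTER_AREA)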

  4. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera

  5. A novel optical system design of light field camera

    NASA Astrophysics Data System (ADS)

    Wang, Ye; Li, Wenhua; Hao, Chenyang

    2016-01-01

    The structure of main lens - micro lens array (MLA) - imaging sensor is usually adopted in the optical system of a light field camera, and the MLA is the most important part of the optical system, having the function of collecting and recording the amplitude and phase information of the light field. In this paper, a novel optical system structure is proposed. The novel optical system is based on a 4f optical structure, and a micro-aperture array (MAA) is used instead of the MLA to realize the acquisition of the 4D light field information. We analyze the principle by which the novel optical system can realize the acquisition of the light field information. At the same time, a simple MAA, a line grating optical system, is designed with ZEMAX software in this paper. The novel optical system is simulated with a line grating optical system, and multiple images are obtained in the image plane. The imaging quality of the novel optical system is analyzed.

  6. ROBOCAL: An automated NDA (nondestructive analysis) calorimetry and gamma isotopic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurd, J.R.; Powell, W.D.; Ostenak, C.A.

    1989-11-01

    ROBOCAL, which is presently being developed and tested at Los Alamos National Laboratory, is a full-scale, prototype robotic system for remote calorimetric and gamma-ray analysis of special nuclear materials. It integrates a fully automated, multidrawer, vertical stacker-retriever system for staging unmeasured nuclear materials, and a fully automated gantry robot for computer-based selection and transfer of nuclear materials to calorimetric and gamma-ray measurement stations. Since ROBOCAL is designed for minimal operator intervention, a completely programmed user interface is provided to interact with the automated mechanical and assay systems. The assay system is designed to completely integrate calorimetric and gamma-ray data acquisition and to perform state-of-the-art analyses on both homogeneous and heterogeneous distributions of nuclear materials in a wide variety of matrices.

  7. [A Quality Assurance (QA) System with a Web Camera for High-dose-rate Brachytherapy].

    PubMed

    Hirose, Asako; Ueda, Yoshihiro; Oohira, Shingo; Isono, Masaru; Tsujii, Katsutomo; Inui, Shouki; Masaoka, Akira; Taniguchi, Makoto; Miyazaki, Masayoshi; Teshima, Teruki

    2016-03-01

    The quality assurance (QA) system that simultaneously quantifies the position and duration of an (192)Ir source (dwell position and time) was developed, and the performance of this system was evaluated in high-dose-rate brachytherapy. This QA system has two functions, verifying and quantifying dwell position and time using a web camera. The web camera records 30 images per second over a range from 1,425 mm to 1,505 mm. A user verifies the source position from the web camera in real time. The source position and duration were quantified from the movie using in-house software that applies a template-matching technique. This QA system allowed verification of the absolute position in real time and quantification of dwell position and time simultaneously. Verification of the system showed that the mean of the step size errors was 0.31±0.1 mm and that of the dwell time errors was 0.1±0.0 s. Absolute position errors can be determined with an accuracy of 1.0 mm at all dwell points for three step sizes, and dwell time errors with an accuracy of 0.1% for planned times longer than 10.0 s. This system provides quick verification and quantification of the dwell position and time with high accuracy at various dwell positions without depending on the step size.
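
    The in-house software is not publicly described; a minimal sketch of the template-matching step, assuming the camera's horizontal field of view spans the stated 1,425-1,505 mm range, could look as follows. The linear pixel-to-millimetre mapping is an assumption for illustration.

        import cv2
        import numpy as np

        def source_position_mm(frame_gray, template_gray,
                               range_min_mm=1425.0, range_max_mm=1505.0):
            """Locate the source template in a frame and map its center to millimetres."""
            result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
            _, _, _, max_loc = cv2.minMaxLoc(result)              # (x, y) of the best match
            x_center = max_loc[0] + template_gray.shape[1] / 2.0  # center column of the match
            mm_per_px = (range_max_mm - range_min_mm) / frame_gray.shape[1]
            return range_min_mm + x_center * mm_per_px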

  8. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    NASA Astrophysics Data System (ADS)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut D'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20C, the CLASP cameras exceeded the low-noise performance requirements (<= 25 e- read noise and <= 10 e- /sec/pixel dark current), in addition to maintaining a stable gain of ≈ 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST calibrated photodiode were employed to measure the QE of each camera. Three flight cameras and one engineering camera were tested in a high-vacuum chamber, which was configured to operate several tests intended to verify the QE, gain, read noise and dark current of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV, EUV and X-ray science cameras at MSFC.

  9. Homography-based multiple-camera person-tracking

    NASA Astrophysics Data System (ADS)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of
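
    As a generic sketch of the plane-induced homography step (not the paper's full tracker), a homography can be estimated from corresponding feet-point pairs in two overlapping views and used to transfer new detections; the point coordinates below are hypothetical.

        import cv2
        import numpy as np

        # Hypothetical corresponding ground-plane (feet) points "dropped" by tracked targets
        pts_cam1 = np.array([[120, 340], [310, 355], [505, 330],
                             [260, 420], [410, 405], [150, 470]], dtype=np.float32)
        pts_cam2 = np.array([[ 80, 300], [270, 310], [470, 295],
                             [225, 380], [375, 362], [110, 430]], dtype=np.float32)

        # Estimate the plane-induced homography robustly with RANSAC
        H, mask = cv2.findHomography(pts_cam1, pts_cam2, cv2.RANSAC, 3.0)

        # Transfer a feet location detected in camera 1 into camera 2's image plane
        feet_cam1 = np.array([[[300.0, 400.0]]], dtype=np.float32)
        feet_cam2 = cv2.perspectiveTransform(feet_cam1, H)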

  10. 24/7 security system: 60-FPS color EMCCD camera with integral human recognition

    NASA Astrophysics Data System (ADS)

    Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.

    2007-04-01

    An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and monochrome under quarter-moonlight to overcast starlight illumination. Sixty-frame-per-second operation and progressive scanning minimize motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms, to detect/localize/track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.

  11. A direct-view customer-oriented digital holographic camera

    NASA Astrophysics Data System (ADS)

    Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.

    2018-01-01

    In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.

  12. Development of compact Compton camera for 3D image reconstruction of radioactive contamination

    NASA Astrophysics Data System (ADS)

    Sato, Y.; Terasaka, Y.; Ozawa, S.; Nakamura Miyamura, H.; Kaburagi, M.; Tanifuji, Y.; Kawabata, K.; Torii, T.

    2017-11-01

    The Fukushima Daiichi Nuclear Power Station (FDNPS), operated by Tokyo Electric Power Company Holdings, Inc., went into meltdown after the large tsunami caused by the Great East Japan Earthquake of March 11, 2011. Very large amounts of radionuclides were released from the damaged plant. Radiation distribution measurements inside FDNPS buildings are indispensable for executing decommissioning tasks in the reactor buildings. We have developed a compact Compton camera to measure the distribution of radioactive contamination inside the FDNPS buildings three-dimensionally (3D). The total weight of the Compton camera is less than 1.0 kg. The gamma-ray sensor of the Compton camera employs Ce-doped GAGG (Gd3Al2Ga3O12) scintillators coupled with a multi-pixel photon counter. Angular correction of the detection efficiency of the Compton camera was conducted. Moreover, we developed a 3D back-projection method using the multi-angle data measured with the Compton camera. We successfully observed 3D radiation images of two 137Cs radioactive sources, and the image of the 9.2 MBq source appeared stronger than that of the 2.7 MBq source.

  13. A digital underwater video camera system for aquatic research in regulated rivers

    USGS Publications Warehouse

    Martin, Benjamin M.; Irwin, Elise R.

    2010-01-01

    We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m3/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers that work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.

  14. Tests of commercial colour CMOS cameras for astronomical applications

    NASA Astrophysics Data System (ADS)

    Pokhvala, S. M.; Reshetnyk, V. M.; Zhilyaev, B. E.

    2013-12-01

    We present some results of testing commercial colour CMOS cameras for astronomical applications. Colour CMOS sensors allow photometry to be performed in three filters simultaneously, which gives a great advantage compared with monochrome CCD detectors. The Bayer BGR colour system realized in colour CMOS sensors is close to the astronomical Johnson BVR system. The basic camera characteristics: read noise (e^{-}/pix), thermal noise (e^{-}/pix/sec) and electronic gain (e^{-}/ADU) for the commercial digital camera Canon 5D MarkIII are presented. We give the same characteristics for the scientific high performance cooled CCD camera system ALTA E47. Comparison of the test results for the Canon 5D MarkIII and the ALTA E47 CCD shows that present-day commercial colour CMOS cameras can seriously compete with scientific CCD cameras in deep astronomical imaging.
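
    The authors' measurement procedure is not detailed in the abstract; a standard photon-transfer estimate of the electronic gain (e-/ADU) from two flat-field frames, shown below as a generic illustration, neglects read noise for brevity.

        import numpy as np

        def gain_e_per_adu(flat1, flat2, bias_level):
            """Photon-transfer estimate of gain (e-/ADU) from two matched flat-field frames."""
            signal_adu = flat1.mean() + flat2.mean() - 2.0 * bias_level
            diff_var = np.var(flat1.astype(np.float64) - flat2.astype(np.float64))
            return signal_adu / diff_var   # read-noise contribution neglected for brevity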

  15. The Advanced Gamma-ray Imaging System (AGIS): Simulation Studies

    NASA Astrophysics Data System (ADS)

    Fegan, Stephen; Buckley, J. H.; Bugaev, S.; Funk, S.; Konopelko, A.; Maier, G.; Vassiliev, V. V.; Simulation Studies Working Group; AGIS Collaboration

    2008-03-01

    The Advanced Gamma-ray Imaging System (AGIS) is a concept for the next generation instrument in ground-based very high energy gamma-ray astronomy. It has the goal of achieving significant improvement in sensitivity over current experiments. We present the results of simulation studies of various possible designs for AGIS. The primary characteristics of the array performance, collecting area, angular resolution, background rejection, and sensitivity are discussed.

  16. Adjustable control station with movable monitors and cameras for viewing systems in robotics and teleoperations

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1994-01-01

    Real-time video presentations are provided in the field of operator-supervised automation and teleoperation, particularly in control stations having movable cameras for optimal viewing of a region of interest in robotics and teleoperations for performing different types of tasks. Movable monitors to match the corresponding camera orientations (pan, tilt, and roll) are provided in order to match the coordinate systems of all the monitors to the operator internal coordinate system. Automated control of the arrangement of cameras and monitors, and of the configuration of system parameters, is provided for optimal viewing and performance of each type of task for each operator since operators have different individual characteristics. The optimal viewing arrangement and system parameter configuration is determined and stored for each operator in performing each of many types of tasks in order to aid the automation of setting up optimal arrangements and configurations for successive tasks in real time. Factors in determining what is optimal include the operator's ability to use hand-controllers for each type of task. Robot joint locations, forces and torques are used, as well as the operator's identity, to identify the current type of task being performed in order to call up a stored optimal viewing arrangement and system parameter configuration.

  17. Application of Two Phase (Liquid/Gas) Xenon Gamma-Camera for the Detection of Special Nuclear Material and PET Medical Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinsey, Daniel Nicholas

    The McKinsey group at Yale has been awarded a grant from DTRA for the building of a Liquid Xenon Gamma Ray Color Camera (LXe-GRCC), which combines state-of-the-art detection of LXe scintillation light and time projection chamber (TPC) charge readout. The DTRA application requires a movable detector and hence only a single phase (liquid) xenon detector can be considered in this case. We propose to extend the DTRA project to applications that allow a two phase (liquid/gas) xenon TPC. This entails additional (yet minimal) hardware and extension of the research effort funded by DTRA. The two phase detector will have better energy and angular resolution. Such detectors will be useful for PET medical imaging and detection of special nuclear material in stationary applications (e.g. port of entry). The expertise of the UConn group in gas phase TPCs will enhance the capabilities of the Yale group and the synergy between the two groups will be very beneficial for this research project as well as the education and research projects of the two universities. The LXe technology to be used in this project has matured rapidly over the past few years, developed for use in detectors for nuclear physics and astrophysics. This technology may now be applied in a straightforward way to the imaging of gamma rays. According to detailed Monte Carlo simulations recently performed at Yale University, energy resolution of 1% and angular resolution of 3 degrees may be obtained for 1.0 MeV gamma rays, using existing technology. With further research and development, energy resolution of 0.5% and angular resolution of 1.3 degrees will be possible at 1.0 MeV. Because liquid xenon is a high density, high Z material, it is highly efficient for scattering and capturing gamma rays. In addition, this technology scales elegantly to large detector areas, with several square meter apertures possible. The Yale research group is highly experienced in the development and use of noble liquid

  18. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    NASA Astrophysics Data System (ADS)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.

  19. The electronics system for the LBNL positron emission mammography (PEM) camera

    NASA Astrophysics Data System (ADS)

    Moses, W. W.; Young, J. W.; Baker, K.; Jones, W.; Lenox, M.; Ho, M. H.; Weng, M.

    2001-06-01

    Describes the electronics for a high-performance positron emission mammography (PEM) camera. It is based on the electronics for a human brain positron emission tomography (PET) camera (the Siemens/CTI HRRT), modified to use a detector module that incorporates a photodiode (PD) array. An application-specific integrated circuit (ASIC) services the photodiode (PD) array, amplifying its signal and identifying the crystal of interaction. Another ASIC services the photomultiplier tube (PMT), measuring its output and providing a timing signal. Field-programmable gate arrays (FPGAs) and lookup RAMs are used to apply crystal-by-crystal correction factors and measure the energy deposit and the interaction depth (based on the PD/PMT ratio). Additional FPGAs provide event multiplexing, derandomization, coincidence detection, and real-time rebinning. Embedded PC/104 microprocessors provide communication and real-time control, and configure the system. Extensive use of FPGAs makes the overall design extremely flexible, allowing many different functions (or design modifications) to be realized without hardware changes. Incorporation of extensive onboard diagnostics, implemented in the FPGAs, is required by the very high level of integration and density achieved by this system.

  20. Processing the Viking lander camera data

    NASA Technical Reports Server (NTRS)

    Levinthal, E. C.; Tucker, R.; Green, W.; Jones, K. L.

    1977-01-01

    Over 1000 camera events were returned from the two Viking landers during the Primary Mission. A system was devised for processing camera data as they were received, in real time, from the Deep Space Network. This system provided a flexible choice of parameters for three computer-enhanced versions of the data for display or hard-copy generation. Software systems allowed all but 0.3% of the imagery scan lines received on earth to be placed correctly in the camera data record. A second-order processing system was developed which allowed extensive interactive image processing including computer-assisted photogrammetry, a variety of geometric and photometric transformations, mosaicking, and color balancing using six different filtered images of a common scene. These results have been completely cataloged and documented to produce an Experiment Data Record.

  1. SU-F-J-182: Investigation of Systems for Improved Accuracy in Clinical Y-90 Percent Delivered Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McBeth, R; Elder, D; Kesner, A

    2016-06-15

    Purpose: Y-90 Selective Internal Radiation Therapy (SIRT) is used to treat liver tumors, and by nature has variability in the percent of the intended dose that is actually delivered. To determine the quality of the administration, pre and post activity measurements are taken, and used to infer percent delivered. Vendor specifications indicate the use of an ion chamber to take these measurements. In our work, we investigated the accuracy of ion chambers, and compared them to other detector systems. Methods: We have built phantoms, phantom holders, and protocols, which allow us to measure our Y90 doses with varying apparatuses: a dose calibrator, a Geiger-counter, an ion chamber, a crystal based thyroid probe, and a gamma camera. We have set up a system that has enabled us to gather data by measuring clinical Y90 doses as they are used in the clinic using all of the instrumental methods. Five initial doses (25 measurements/acquisitions) have been taken at the time of this abstract submission. Results: Our initial results show that measurements acquired using scintillation based detectors (thyroid probe and gamma camera) correlate better with the gold standard (i.e. the dose calibrator). Pearson correlations between the dose calibrator measurements and the GM counter, Ion chamber, thyroid probe, and gamma camera were found to be 0.88, 0.83, 0.98, 0.99, respectively. More acquisitions and analysis are planned to determine the precision of the systems, as well as optimal energy window settings. Conclusion: It is likely that current standard practice can be improved using scintillation crystal based detectors. Such systems are more sensitive, can integrate signal, and can use energy discrimination. Furthermore, phantoms can be built to integrate with probe and gamma camera systems that are robust and provide reproducibility. Future work will include expanded acquisition and analysis.
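
    As an illustration of the reported comparison (with placeholder readings, not the study's data), the Pearson correlation between a detector's measurements and the dose-calibrator reference can be computed as follows:

        import numpy as np
        from scipy.stats import pearsonr

        dose_calibrator = np.array([1.02, 0.95, 1.10, 0.88, 1.05])   # hypothetical activities
        thyroid_probe = np.array([1.00, 0.97, 1.08, 0.90, 1.04])     # hypothetical readings

        r, p_value = pearsonr(dose_calibrator, thyroid_probe)
        print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")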

  2. The Beagle 2 Stereo Camera System: Scientific Objectives and Design Characteristics

    NASA Astrophysics Data System (ADS)

    Griffiths, A.; Coates, A.; Josset, J.; Paar, G.; Sims, M.

    2003-04-01

    The Stereo Camera System (SCS) will provide wide-angle (48 degree) multi-spectral stereo imaging of the Beagle 2 landing site in Isidis Planitia with an angular resolution of 0.75 milliradians. Based on the SpaceX Modular Micro-Imager, the SCS is composed of twin cameras (with 1024 by 1024 pixel frame transfer CCD) and twin filter wheel units (with a combined total of 24 filters). The primary mission objective is to construct a digital elevation model of the area in reach of the lander’s robot arm. The SCS specifications and following baseline studies are described: Panoramic RGB colour imaging of the landing site and panoramic multi-spectral imaging at 12 distinct wavelengths to study the mineralogy of landing site. Solar observations to measure water vapour absorption and the atmospheric dust optical density. Also envisaged are multi-spectral observations of Phobos & Deimos (observations of the moons relative to background stars will be used to determine the lander’s location and orientation relative to the Martian surface), monitoring of the landing site to detect temporal changes, observation of the actions and effects of the other PAW experiments (including rock texture studies with a close-up-lens) and collaborative observations with the Mars Express orbiter instrument teams. Due to be launched in May of this year, the total system mass is 360 g, the required volume envelope is 747 cm^3 and the average power consumption is 1.8 W. A 10Mbit/s RS422 bus connects each camera to the lander common electronics.

  3. Study of the polarimetric performance of a Si/CdTe semiconductor Compton camera for the Hitomi satellite

    NASA Astrophysics Data System (ADS)

    Katsuta, Junichiro; Edahiro, Ikumi; Watanabe, Shin; Odaka, Hirokazu; Uchida, Yusuke; Uchida, Nagomi; Mizuno, Tsunefumi; Fukazawa, Yasushi; Hayashi, Katsuhiro; Habata, Sho; Ichinohe, Yuto; Kitaguchi, Takao; Ohno, Masanori; Ohta, Masayuki; Takahashi, Hiromitsu; Takahashi, Tadayuki; Takeda, Shin'ichiro; Tajima, Hiroyasu; Yuasa, Takayuki; Itou, Masayoshi; SGD Team

    2016-12-01

    Gamma-ray polarization offers a unique probe into the geometry of the γ-ray emission process in celestial objects. The Soft Gamma-ray Detector (SGD) onboard the X-ray observatory Hitomi is a Si/CdTe Compton camera and is expected to be an excellent polarimeter, as well as a highly sensitive spectrometer due to its good angular coverage and resolution for Compton scattering. A beam test of the final-prototype for the SGD Compton camera was conducted to demonstrate its polarimetric capability and to verify and calibrate the Monte Carlo simulation of the instrument. The modulation factor of the SGD prototype camera, evaluated for the inner and outer parts of the CdTe sensors as absorbers, was measured to be 0.649-0.701 (inner part) and 0.637-0.653 (outer part) at 122.2 keV and 0.610-0.651 (inner part) and 0.564-0.592 (outer part) at 194.5 keV at varying polarization angles with respect to the detector. This indicates that the relative systematic uncertainty of the modulation factor is as small as ∼ 3 % .
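
    The SGD analysis chain is not reproduced here; as a generic sketch, the modulation factor can be obtained by fitting the azimuthal scattering-angle distribution N(phi) = A[1 + Q cos 2(phi - phi0)] to binned counts. The counts below are synthetic.

        import numpy as np
        from scipy.optimize import curve_fit

        def modulation(phi, amplitude, q_factor, phi0):
            """Azimuthal Compton-scattering distribution for a polarized beam."""
            return amplitude * (1.0 + q_factor * np.cos(2.0 * (phi - phi0)))

        # Synthetic azimuthal histogram with Poisson counting noise
        phi = np.radians(np.arange(0, 360, 15))
        counts = np.random.poisson(modulation(phi, 1000.0, 0.65, np.radians(30.0))).astype(float)

        popt, pcov = curve_fit(modulation, phi, counts, p0=[counts.mean(), 0.5, 0.0])
        print(f"modulation factor Q = {popt[1]:.3f}")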

  4. Utilizing ISS Camera Systems for Scientific Analysis of Lightning Characteristics and Comparison with ISS-LIS and GLM

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Lang, Timothy J.; Leake, Skye; Runco, Mario, Jr.; Blakeslee, Richard J.

    2017-01-01

    Video and still frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration.

  5. COBRA ATD multispectral camera response model

    NASA Astrophysics Data System (ADS)

    Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.

    2000-08-01

    A new multispectral camera response model has been developed in support of the US Marine Corps (USMC) Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) Program. This analytical model accurately estimates the response of the five Xybion intensified IMC 201 multispectral cameras used for COBRA ATD airborne minefield detection. The camera model design is based on a series of camera response curves that were generated through optical laboratory tests performed by the Naval Surface Warfare Center, Dahlgren Division, Coastal Systems Station (CSS). Data fitting techniques were applied to these measured response curves to obtain nonlinear expressions that estimate digitized camera output as a function of irradiance, intensifier gain, and exposure. This COBRA Camera Response Model was proven to be very accurate, stable over a wide range of parameters, analytically invertible, and relatively simple. This practical camera model was subsequently incorporated into the COBRA sensor performance evaluation and computational tools for research analysis modeling toolbox in order to enhance COBRA modeling and simulation capabilities. Details of the camera model design and comparisons of modeled response to measured experimental data are presented.
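
    The functional form of the COBRA response model is not given in the abstract; as a generic illustration of fitting a nonlinear response expression to measured points, the sketch below fits digitized output versus irradiance with a saturating curve. The functional form and data values are assumptions, not the COBRA model itself.

        import numpy as np
        from scipy.optimize import curve_fit

        def response(irradiance, dn_max, k, offset):
            """Assumed saturating response: digitized output as a function of irradiance."""
            return dn_max * (1.0 - np.exp(-k * irradiance)) + offset

        # Hypothetical measured response-curve points (irradiance in arbitrary units)
        irradiance = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
        dn_out = np.array([12.0, 410.0, 760.0, 1310.0, 1890.0, 2210.0])

        popt, _ = curve_fit(response, irradiance, dn_out, p0=[2300.0, 0.5, 10.0])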

  6. Heart Imaging System

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Johnson Space Flight Center's device to test astronauts' heart function in microgravity has led to the MultiWire Gamma Camera, which images heart conditions six times faster than conventional devices. Dr. Jeffrey Lacy, who developed the technology as a NASA researcher, later formed Proportional Technologies, Inc. to develop a commercially viable process that would enable use of Tantalum-178 (Ta-178), a radio-pharmaceutical. His company supplies the generator for the radioactive Ta-178 to Xenos Medical Systems, which markets the camera. Ta-178 can only be optimally imaged with the camera. Because the body is subjected to it for only nine minutes, the radiation dose is significantly reduced and the technique can be used more frequently. Ta-178 also enables the camera to be used on pediatric patients who are rarely studied with conventional isotopes because of the high radiation dosage.

  7. Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.

    PubMed

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.

  8. Camera calibration for multidirectional flame chemiluminescence tomography

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Zhang, Weiguang; Zhang, Yuhong; Yu, Xun

    2017-04-01

    Flame chemiluminescence tomography (FCT), which combines computerized tomography theory and multidirectional chemiluminescence emission measurements, can realize instantaneous three-dimensional (3-D) diagnostics for flames with high spatial and temporal resolutions. One critical step of FCT is to record the projections by multiple cameras from different view angles. For high accuracy reconstructions, it requires that extrinsic parameters (the positions and orientations) and intrinsic parameters (especially the image distances) of cameras be accurately calibrated first. Taking the focus effect of the camera into account, a modified camera calibration method was presented for FCT, and a 3-D calibration pattern was designed to solve the parameters. The precision of the method was evaluated by reprojections of feature points to cameras with the calibration results. The maximum root mean square error of the feature points' position is 1.42 pixels and 0.0064 mm for the image distance. An FCT system with 12 cameras was calibrated by the proposed method and the 3-D CH* intensity of a propane flame was measured. The results showed that the FCT system provides reasonable reconstruction accuracy using the camera's calibration results.
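
    As a generic illustration of the reprojection check used to evaluate calibration precision (not the authors' code), the RMS error of the feature points can be computed by projecting the known 3-D pattern points with the calibrated parameters:

        import cv2
        import numpy as np

        def rms_reprojection_error(object_pts, image_pts, rvec, tvec, camera_matrix, dist_coeffs):
            """RMS pixel error between detected feature points and their reprojections."""
            projected, _ = cv2.projectPoints(object_pts, rvec, tvec, camera_matrix, dist_coeffs)
            residuals = projected.reshape(-1, 2) - image_pts.reshape(-1, 2)
            return np.sqrt(np.mean(np.sum(residuals**2, axis=1)))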

  9. Low-cost, portable, robust and high-resolution single-camera stereo-DIC system and its application in high-temperature deformation measurements

    NASA Astrophysics Data System (ADS)

    Chi, Yuxi; Yu, Liping; Pan, Bing

    2018-05-01

    A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital Single Lens Reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue light illumination and coupled bandpass filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single SLR camera-based stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.

  10. Versatile microsecond movie camera

    NASA Astrophysics Data System (ADS)

    Dreyfus, R. W.

    1980-03-01

    A laboratory-type movie camera is described which satisfies many requirements in the range 1 microsec to 1 sec. The camera consists of a He-Ne laser and compatible state-of-the-art components; the primary components are an acoustooptic modulator, an electromechanical beam deflector, and a video tape system. The present camera is distinct in its operation in that submicrosecond laser flashes freeze the image motion while still allowing the simplicity of electromechanical image deflection in the millisecond range. The gating and pulse delay circuits of an oscilloscope synchronize the modulator and scanner relative to the subject being photographed. The optical table construction and electronic control enhance the camera's versatility and adaptability. The instant replay video tape recording allows for easy synchronization and immediate viewing of the results. Economy is achieved by using off-the-shelf components, optical table construction, and short assembly time.

  11. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  12. Super-Resolution in Plenoptic Cameras Using FPGAs

    PubMed Central

    Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime

    2014-01-01

    Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field-programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes. PMID:24841246

  13. Super-resolution in plenoptic cameras using FPGAs.

    PubMed

    Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime

    2014-05-16

    Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field-programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes.

  14. HIGH SPEED CAMERA

    DOEpatents

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera having this short a resolution time is thereby possible.

  15. Systems for increasing the sensitivity of gamma-ray imagers

    DOEpatents

    Mihailescu, Lucian; Vetter, Kai M.; Chivers, Daniel H.

    2012-12-11

    Systems that increase the position resolution and granularity of double sided segmented semiconductor detectors are provided. These systems increase the imaging resolution capability of such detectors, either used as Compton cameras, or as position sensitive radiation detectors in imagers such as SPECT, PET, coded apertures, multi-pinhole imagers, or other spatial or temporal modulated imagers.

  16. Automated Meteor Detection by All-Sky Digital Camera Systems

    NASA Astrophysics Data System (ADS)

    Suk, Tomáš; Šimberová, Stanislava

    2017-12-01

    We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.

  17. Schwarzschild camera

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The fabrication procedures for the primary and secondary mirrors for a Schwarzschild camera are summarized. The achieved wave front for the telescope was 1/2 wave at .63 microns. Interferograms of the two mirrors as a system are given and the mounting procedures are outlined.

  18. Establishment of Imaging Spectroscopy of Nuclear Gamma-Rays based on Geometrical Optics

    PubMed Central

    Tanimori, Toru; Mizumura, Yoshitaka; Takada, Atsushi; Miyamoto, Shohei; Takemura, Taito; Kishimoto, Tetsuro; Komura, Shotaro; Kubo, Hidetoshi; Kurosawa, Shunsuke; Matsuoka, Yoshihiro; Miuchi, Kentaro; Mizumoto, Tetsuya; Nakamasu, Yuma; Nakamura, Kiseki; Parker, Joseph D.; Sawano, Tatsuya; Sonoda, Shinya; Tomono, Dai; Yoshikawa, Kei

    2017-01-01

    Since the discovery of nuclear gamma-rays, their imaging has been limited to pseudo imaging, such as the Compton camera (CC) and the coded mask. Pseudo imaging does not preserve physical information (intensity, or brightness in optics) along a ray, and thus is capable of no more than qualitative imaging of bright objects. To attain quantitative imaging, cameras that realize geometrical optics are essential, which for nuclear MeV gammas is only possible via complete reconstruction of the Compton process. Recently we have shown that the “Electron Tracking Compton Camera” (ETCC) provides a well-defined Point Spread Function (PSF). The information on an incoming gamma is kept along a ray with the PSF, which is equivalent to geometrical optics. Here we present an imaging-spectroscopic measurement with the ETCC. Our results highlight the intrinsic difficulty that CCs have in performing accurate imaging, and show that the ETCC surmounts this problem. The imaging capability also helps the ETCC suppress the noise level dramatically, by ~3 orders of magnitude, without a shielding structure. Furthermore, full reconstruction of the Compton process with the ETCC provides spectra free of Compton edges. These results mark the first proper imaging of nuclear gammas based on genuine geometrical optics. PMID:28155870
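
    The step that distinguishes the ETCC from a conventional CC is that the recoil-electron track is measured, so each event constrains the incident direction far more tightly than a full Compton cone. A minimal sketch of the underlying kinematics, using only the standard Compton relations rather than the authors' reconstruction pipeline, is given below; the example energies are arbitrary.

        import numpy as np

        ME_C2 = 511.0  # electron rest energy, keV

        def compton_angles(e_gamma_scattered, e_electron):
            """Gamma scattering angle and electron recoil angle (radians), both
            measured from the incident gamma direction, computed from the two
            deposited energies via standard Compton kinematics."""
            e0 = e_gamma_scattered + e_electron           # incident gamma energy
            cos_phi = 1.0 - ME_C2 * (1.0 / e_gamma_scattered - 1.0 / e0)
            cos_alpha = (1.0 + ME_C2 / e0) * np.sqrt(
                e_electron / (e_electron + 2.0 * ME_C2))
            return (np.arccos(np.clip(cos_phi, -1.0, 1.0)),
                    np.arccos(np.clip(cos_alpha, -1.0, 1.0)))

        # Example: a 662 keV gamma leaving 200 keV on the recoil electron.
        phi, alpha = compton_angles(462.0, 200.0)
        print(np.degrees(phi), np.degrees(alpha))         # ~48.3 and ~44.2 degrees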

  19. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    NASA Technical Reports Server (NTRS)

    Champey, Patrick; Kobayashi, Ken; Winebarger, Amy; Cirtain, Jonathan; Hyde, David; Robertson, Bryan; Beabout, Brent; Beabout, Dyana; Stewart, Mike

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and less than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-alpha wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to perform several tests verifying the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.
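
    As a hedged sketch of the arithmetic behind a QE measurement against a NIST-calibrated photodiode (not the MSFC test procedure itself), the function below converts the diode photocurrent into a photon flux and compares it with the electrons collected per second in the CCD; all numerical values are placeholders.

        # Planck constant (J s) and speed of light (m/s).
        H = 6.62607015e-34
        C = 2.99792458e8

        def quantum_efficiency(diode_current_a, diode_responsivity_a_per_w,
                               wavelength_m, ccd_electrons_per_s,
                               beam_fraction_on_ccd=1.0):
            """QE = electrons detected per photon incident on the CCD."""
            optical_power_w = diode_current_a / diode_responsivity_a_per_w
            photon_energy_j = H * C / wavelength_m
            photons_per_s = beam_fraction_on_ccd * optical_power_w / photon_energy_j
            return ccd_electrons_per_s / photons_per_s

        # Example at the Lyman-alpha wavelength with made-up numbers.
        print(quantum_efficiency(diode_current_a=2.0e-9,
                                 diode_responsivity_a_per_w=0.1,
                                 wavelength_m=121.6e-9,
                                 ccd_electrons_per_s=6.0e9))   # ~0.49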

  20. Performance and field tests of a handheld Compton camera using 3-D position-sensitive scintillators coupled to multi-pixel photon counter arrays

    NASA Astrophysics Data System (ADS)

    Kishimoto, A.; Kataoka, J.; Nishiyama, T.; Fujita, T.; Takeuchi, K.; Okochi, H.; Ogata, H.; Kuroshima, H.; Ohsuka, S.; Nakamura, S.; Hirayanagi, M.; Adachi, S.; Uchiyama, T.; Suzuki, H.

    2014-11-01

    After the nuclear disaster in Fukushima, radiation decontamination has become particularly urgent. To help identify radiation hotspots and ensure effective decontamination operations, we have developed a novel Compton camera based on Ce-doped Gd3Al2Ga3O12 scintillators and multi-pixel photon counter (MPPC) arrays. Although its sensitivity is several times better than that of other cameras being tested in Fukushima, we introduce a depth-of-interaction (DOI) method to further improve the angular resolution. For gamma rays, the DOI information, in addition to the 2-D position, is obtained by measuring the pulse-height ratio of the MPPC arrays coupled to both ends of the scintillator. We present the detailed performance and results of various field tests conducted in Fukushima with the prototype 2-D and DOI Compton cameras. Moreover, we demonstrate stereo measurement of gamma rays that enables measurement of not only the direction but also the approximate distance to radioactive hotspots.
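
    The DOI scheme reads out both ends of each scintillator and uses the ratio of the two pulse heights to locate the interaction along the bar. A minimal sketch of that idea follows; the exponential light-sharing model, bar length, and attenuation length are assumptions for illustration, not the authors' calibration.

        import numpy as np

        def depth_from_ratio(ph_top, ph_bottom, length_mm=10.0, atten_mm=20.0):
            """Estimate interaction depth (measured from the top end) from the
            pulse-height ratio of the two ends, assuming the light collected at
            each end falls off exponentially with distance travelled:
            ph_top / ph_bottom = exp((length - 2 z) / atten)."""
            r = np.asarray(ph_top, dtype=float) / np.asarray(ph_bottom, dtype=float)
            z = 0.5 * (length_mm - atten_mm * np.log(r))
            return np.clip(z, 0.0, length_mm)

        # A ratio of 1 places the interaction at the center of the bar.
        print(depth_from_ratio(100.0, 100.0))   # 5.0 mm
        print(depth_from_ratio(130.0, 80.0))    # smaller z: closer to the top end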

  1. Gamma Band Activity in the Reticular Activating System

    PubMed Central

    Urbano, Francisco J.; Kezunovic, Nebojsa; Hyde, James; Simon, Christen; Beck, Paige; Garcia-Rill, Edgar

    2012-01-01

    This review considers recent evidence showing that cells in three regions of the reticular activating system (RAS) exhibit gamma band activity, and describes the mechanisms behind this manifestation. Specifically, we discuss how cells in the mesopontine pedunculopontine nucleus (PPN), intralaminar parafascicular nucleus (Pf), and pontine subcoeruleus nucleus dorsalis (SubCD) all fire in the beta/gamma band range when maximally activated, but no higher. The mechanisms behind this ceiling effect have recently been elucidated. We describe recent findings showing that every cell in the PPN has high-threshold, voltage-dependent P/Q-type calcium channels that are essential to gamma band activity, while N-type calcium channels are permissive. In every Pf cell, P/Q-type and N-type calcium channels are likewise responsible for this activity. On the other hand, every SubCD cell exhibited sodium-dependent subthreshold oscillations. A novel mechanism for sleep–wake control based on well-known transmitter interactions, electrical coupling, and gamma band activity is described. The data presented here on inherent gamma band activity demonstrate the global nature of sleep–wake oscillation orchestrated by brainstem–thalamic mechanisms, and question the undue importance given to the hypothalamus for the regulation of sleep–wakefulness. The discovery of gamma band activity in the RAS follows recent reports of such activity in other subcortical regions like the hippocampus and cerebellum. We hypothesize that, rather than participating in the temporal binding of sensory events as seen in the cortex, gamma band activity manifested in the RAS may help stabilize coherence related to arousal, providing a stable activation state during waking and paradoxical sleep. Most of our thoughts and actions are driven by pre-conscious processes. We speculate that continuous sensory input will induce gamma band activity in the RAS that could participate in the processes of

  2. Miniaturized Autonomous Extravehicular Robotic Camera (Mini AERCam)

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven E.

    2001-01-01

    The NASA Johnson Space Center (JSC) Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a low-volume, low-mass free-flying camera system. AERCam project team personnel recently initiated development of a miniaturized version of AERCam known as Mini AERCam. The Mini AERCam target design is a spherical "nanosatellite" free-flyer 7.5 inches in diameter and weighing 10 pounds. Mini AERCam builds on the success of the AERCam Sprint STS-87 flight experiment by adding new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving enhanced capability in a smaller package depends on applying miniaturization technology across virtually all subsystems. Technology innovations being incorporated include micro electromechanical system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, a rechargeable xenon gas propulsion system, a rechargeable lithium ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximately flight-like configuration for demonstration on an air-bearing table. A pilot-in-the-loop and hardware-in-the-loop simulation of on-orbit navigation and dynamics will complement the air-bearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides beneficial on-orbit views unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by EVA crewmembers.

  3. Experimental Characterization of Close-Emitter Interference in an Optical Camera Communication System.

    PubMed

    Chavez-Burbano, Patricia; Guerra, Victor; Rabadan, Jose; Rodríguez-Esparragón, Dionisio; Perez-Jimenez, Rafael

    2017-07-04

    Due to the massive insertion of embedded cameras in a wide variety of devices and the generalized use of LED lamps, Optical Camera Communication (OCC) has been proposed as a practical solution for future Internet of Things (IoT) and smart cities applications. The influence of mobility, weather conditions, solar radiation interference, and external light sources on Visible Light Communication (VLC) schemes has been addressed in previous works. Some authors have studied the spatial intersymbol interference from close emitters within an OCC system; however, it has not been characterized or measured as a function of the different transmitted wavelengths. In this work, this interference has been experimentally characterized, and the Normalized Power Signal to Interference Ratio (NPSIR), which allows the interference to be determined in other implementations independently of the selected system devices, has also been proposed. A set of experiments in a darkroom, working with RGB multi-LED transmitters and a general purpose camera, was performed in order to obtain the NPSIR values and to validate the deduced equations for the 2D pixel representation of real distances. These parameters were used in the simulation of a wireless sensor network scenario in a small office, where the Bit Error Rate (BER) of the communication link was calculated. The experiments show that the interference of other close emitters, in terms of the distance and the wavelength used, can be easily determined with the NPSIR. Finally, the simulation validates the applicability of the deduced equations for scaling the initial results to real scenarios.

  4. Development of polarization-controlled multi-pass Thomson scattering system in the GAMMA 10 tandem mirror

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshikawa, M.; Morimoto, M.; Shima, Y.

    2012-10-15

    In the GAMMA 10 tandem mirror, the typical electron density is comparable to that of the peripheral plasma of torus-type fusion devices. Therefore, an effective method to increase Thomson scattering (TS) signals is required in order to improve signal quality. In GAMMA 10, the yttrium-aluminum-garnet (YAG) TS system comprises a laser, incident optics, light collection optics, signal detection electronics, and a data recording system. We have been developing a polarization-based multi-pass TS method built on the GAMMA 10 YAG-TS system. To evaluate the effectiveness of the polarization-based configuration, the multi-pass system was installed in the GAMMA 10 YAG-TS system, which is capable of double-pass scattering. We carried out a Rayleigh scattering experiment and applied this double-pass scattering system to the GAMMA 10 plasma. The double-pass system made the integrated scattering signal approximately twice as large.

  5. IMAX camera in payload bay

    NASA Image and Video Library

    1995-12-20

    STS074-361-035 (12-20 Nov 1995) --- This medium close-up view centers on the IMAX Cargo Bay Camera (ICBC) and its associated IMAX Camera Container Equipment (ICCE) at its position in the cargo bay of the Earth-orbiting Space Shuttle Atlantis. With its own "space suit" or protective covering to protect it from the rigors of space, this version of the IMAX was able to record scenes not accessible with the in-cabin cameras. For docking and undocking activities involving Russia's Mir Space Station and the Space Shuttle Atlantis, the camera joined a variety of in-cabin camera hardware in recording the historical events. IMAX's secondary objectives were to film Earth views. The IMAX project is a collaboration between NASA, the Smithsonian Institution's National Air and Space Museum (NASM), IMAX Systems Corporation, and the Lockheed Corporation to document significant space activities and promote NASA's educational goals using the IMAX film medium.

  6. Coincidence ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong field dissociative double ionization of methyl iodide.
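
    The multi-hit pairing step described above, associating camera ion spots with time-of-flight peaks by comparing spot intensity to peak height, can be framed as a small assignment problem. The sketch below uses SciPy's Hungarian solver on normalised intensities; it is an illustration of the idea rather than the authors' real-time algorithm.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def pair_hits(spot_intensities, peak_heights):
            """Match camera ion spots to TOF peaks by similarity of their
            normalised intensities; returns a list of (spot_idx, peak_idx)."""
            s = np.asarray(spot_intensities, float)
            p = np.asarray(peak_heights, float)
            s = s / s.sum()
            p = p / p.sum()
            cost = np.abs(s[:, None] - p[None, :])    # dissimilarity matrix
            rows, cols = linear_sum_assignment(cost)
            return list(zip(rows.tolist(), cols.tolist()))

        # Two spots on one frame and two TOF peaks (e.g., CH3+ and I+).
        print(pair_hits([420.0, 980.0], [0.31, 0.67]))   # -> [(0, 0), (1, 1)]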

  7. Coincidence ion imaging with a fast frame camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei

    2014-12-15

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong field dissociative double ionization of methyl iodide.

  8. HIGH SPEED KERR CELL FRAMING CAMERA

    DOEpatents

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10^-8 seconds during any such time interval of an occurring event. The invention utilizes in particular an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length, in whole multiples of the first channel's optical path length, into which the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  9. The gamma-ray Cherenkov telescope for the Cherenkov telescope array

    NASA Astrophysics Data System (ADS)

    Tibaldo, L.; Abchiche, A.; Allan, D.; Amans, J.-P.; Armstrong, T. P.; Balzer, A.; Berge, D.; Boisson, C.; Bousquet, J.-J.; Brown, A. M.; Bryan, M.; Buchholtz, G.; Chadwick, P. M.; Costantini, H.; Cotter, G.; Daniel, M. K.; De Franco, A.; De Frondat, F.; Dournaux, J.-L.; Dumas, D.; Ernenwein, J.-P.; Fasola, G.; Funk, S.; Gironnet, J.; Graham, J. A.; Greenshaw, T.; Hervet, O.; Hidaka, N.; Hinton, J. A.; Huet, J.-M.; Jankowsky, D.; Jegouzo, I.; Jogler, T.; Kraus, M.; Lapington, J. S.; Laporte, P.; Lefaucheur, J.; Markoff, S.; Melse, T.; Mohrmann, L.; Molyneux, P.; Nolan, S. J.; Okumura, A.; Osborne, J. P.; Parsons, R. D.; Rosen, S.; Ross, D.; Rowell, G.; Rulten, C. B.; Sato, Y.; Sayède, F.; Schmoll, J.; Schoorlemmer, H.; Servillat, M.; Sol, H.; Stamatescu, V.; Stephan, M.; Stuik, R.; Sykes, J.; Tajima, H.; Thornhill, J.; Trichard, C.; Vink, J.; Watson, J. J.; White, R.; Yamane, N.; Zech, A.; Zink, A.; Zorn, J.; CTA Consortium

    2017-01-01

    The Cherenkov Telescope Array (CTA) is a forthcoming ground-based observatory for very-high-energy gamma rays. CTA will consist of two arrays of imaging atmospheric Cherenkov telescopes in the Northern and Southern hemispheres, and will combine telescopes of different types to achieve unprecedented performance and energy coverage. The Gamma-ray Cherenkov Telescope (GCT) is one of the small-sized telescopes proposed for CTA to explore the energy range from a few TeV to hundreds of TeV with a field of view ≳ 8° and angular resolution of a few arcminutes. The GCT design features dual-mirror Schwarzschild-Couder optics and a compact camera based on densely-pixelated photodetectors as well as custom electronics. In this contribution we provide an overview of the GCT project with focus on prototype development and testing that is currently ongoing. We present results obtained during the first on-telescope campaign in late 2015 at the Observatoire de Paris-Meudon, during which we recorded the first Cherenkov images from atmospheric showers with the GCT multi-anode photomultiplier camera prototype. We also discuss the development of a second GCT camera prototype with silicon photomultipliers as photosensors, and plans toward a contribution to the realisation of CTA.

  10. Design of optical system for binocular fundus camera.

    PubMed

    Wu, Jun; Lou, Shiliang; Xiao, Zhitao; Geng, Lei; Zhang, Fang; Wang, Wen; Liu, Mengjia

    2017-12-01

    A non-mydriatic optical system for a binocular fundus camera has been designed in this paper. It can capture two images of the same fundus retinal region from different angles at the same time, and can be used to achieve three-dimensional reconstruction of the fundus. It is composed of an imaging system and an illumination system. In the imaging system, the Gullstrand-Le Grand eye model is used to simulate the normal human eye, and a schematic eye model is used to test the influence of ametropia of the human eye on imaging quality. An annular aperture and a black dot board are added to the illumination system so that it can eliminate stray light produced by corneal-reflected light and the ophthalmoscopic lens. Simulation results show that the MTF of each field of view at the cut-off frequency of 90 lp/mm is greater than 0.2, the system distortion value is -2.7%, the field curvature is less than 0.1 mm, and the radius of the Airy disc is 3.25 μm. This system has strong chromatic aberration correction and focusing capability, and can image the human fundus clearly over a diopter range from -10 D to +6 D (1 D = 1 m^-1).

  11. CameraHRV: robust measurement of heart rate variability using a camera

    NASA Astrophysics Data System (ADS)

    Pai, Amruta; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2018-02-01

    The inter-beat interval (the time period of the cardiac cycle) changes slightly with every heartbeat; this variation is measured as Heart Rate Variability (HRV). HRV is presumed to arise from interactions between the parasympathetic and sympathetic nervous systems, and is therefore sometimes used as an indicator of an individual's stress level. HRV also reveals some clinical information about cardiac health. Currently, HRV is accurately measured using contact devices such as a pulse oximeter. However, recent research in the field of non-contact imaging photoplethysmography (iPPG) has made vital sign measurements possible using just a video recording of any exposed skin (such as a person's face). The current signal processing methods for extracting HRV using peak detection perform well for contact-based systems but perform poorly on iPPG signals, mainly because they are sensitive to the large noise sources often present in iPPG data and are not robust to the motion artifacts common in iPPG systems. We developed a new algorithm, CameraHRV, for robustly extracting HRV even at the low SNR common in iPPG recordings. CameraHRV uses spatial combination and frequency demodulation to obtain HRV from the instantaneous frequency of the iPPG signal, and it outperforms other current methods of HRV estimation. Ground truth data were obtained from an FDA-approved pulse oximeter for validation. On iPPG data, CameraHRV showed an error of 6 milliseconds for low-motion and varying skin tone scenarios, a 14% improvement; for high-motion scenarios such as reading, watching, and talking, the error was 10 milliseconds.
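
    The central idea, recovering HRV from the instantaneous frequency of the pulse waveform, can be illustrated with Hilbert-transform frequency demodulation of a band-passed signal. The sketch below is a generic implementation of that idea with placeholder filter settings and a synthetic trace, not the CameraHRV algorithm itself (which also performs spatial combination over skin pixels).

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def instantaneous_heart_rate(ppg, fs):
            """Instantaneous heart rate (beats per minute) from a PPG trace via
            band-pass filtering and Hilbert-transform frequency demodulation."""
            b, a = butter(3, [0.7 / (fs / 2), 3.0 / (fs / 2)], btype="band")
            x = filtfilt(b, a, ppg)
            phase = np.unwrap(np.angle(hilbert(x)))
            inst_freq_hz = np.diff(phase) * fs / (2.0 * np.pi)
            return 60.0 * inst_freq_hz

        # Synthetic 30 fps "camera" PPG: ~1.2 Hz pulse with a slowly varying rate.
        fs = 30.0
        t = np.arange(0, 60, 1 / fs)
        rate = 1.2 + 0.05 * np.sin(2 * np.pi * 0.1 * t)      # simulated HRV
        ppg = np.sin(2 * np.pi * np.cumsum(rate) / fs) + 0.05 * np.random.randn(t.size)
        print(instantaneous_heart_rate(ppg, fs).mean())      # close to 72 bpm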

  12. Explosive Transient Camera (ETC) Program

    NASA Technical Reports Server (NTRS)

    Ricker, George

    1991-01-01

    Since the inception of the ETC program, a wide range of new technologies was developed to support this astronomical instrument. The prototype unit was installed at ETC Site 1. The first partially automated observations were made and some major renovations were later added to the ETC hardware. The ETC was outfitted with new thermoelectrically-cooled CCD cameras and a sophisticated vacuum manifold, which, together, made the ETC a much more reliable unit than the prototype. The ETC instrumentation and building were placed under full computer control, allowing the ETC to operate as an automated, autonomous instrument with virtually no human intervention necessary. The first fully-automated operation of the ETC was performed, during which the ETC monitored the error region of the repeating soft gamma-ray burster SGR 1806-21.

  13. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have been researching spatial imaging and interaction systems, and have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen, and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world when watching the screen of a see-through 3D viewer. The goal of our research is a display system in which, when users see the real world through the mobile viewer, they are presented with virtual 3D images floating in the air, which they can touch and interact with much as children play with modeling clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by the improved parallax-barrier 3D display. Here the authors discuss the method for measuring the pose of the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors give the geometric analysis of the proposed measuring method, which is the simplest approach in that it uses a single camera rather than a stereo camera, and present the results of the viewer system.
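
    The paper's own geometric analysis is not given in the abstract; as a hedged illustration of how a single camera can recover the viewer's pose from a few point markers at known positions, the sketch below uses OpenCV's solvePnP on an invented marker layout with invented camera intrinsics.

        import numpy as np
        import cv2

        # Hypothetical 3-D positions of four infrared LED markers (mm, workspace frame).
        object_points = np.array([[0, 0, 0], [100, 0, 0], [100, 100, 0], [0, 100, 0]],
                                 dtype=np.float64)
        # Their detected centroids in the camera image (pixels, also invented).
        image_points = np.array([[320, 240], [420, 242], [418, 338], [322, 336]],
                                dtype=np.float64)
        # Assumed pinhole intrinsics for a VGA camera with no lens distortion.
        K = np.array([[500.0, 0.0, 320.0],
                      [0.0, 500.0, 240.0],
                      [0.0, 0.0, 1.0]])
        dist = np.zeros(5)

        ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
        R, _ = cv2.Rodrigues(rvec)
        print("viewer pose (rotation):\n", R)
        print("viewer pose (translation, mm):", tvec.ravel())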

  14. Color calibration of an RGB camera mounted in front of a microscope with strong color distortion.

    PubMed

    Charrière, Renée; Hébert, Mathieu; Trémeau, Alain; Destouches, Nathalie

    2013-07-20

    This paper aims to show that color calibration of an RGB camera can be achieved even when the optical system in front of the camera introduces strong color distortion. In the present case, the optical system is a microscope containing a halogen lamp, with nonuniform irradiance on the viewed surface. The calibration method proposed in this work is based on an existing method, but it is preceded by a three-step preprocessing of the RGB images aimed at extracting relevant color information from the strongly distorted images, taking particular account of the nonuniform irradiance map and the perturbing texture due to the surface topology of standard color calibration charts when observed at the micrometric scale. The proposed color calibration process consists of first computing the average color of the color-chart patches viewed under the microscope; then applying white balance, gamma correction, and saturation enhancement; and finally applying a third-order polynomial regression color calibration transform. Despite the unusual conditions for color calibration, fairly good performance is achieved with a 48-patch Lambertian color chart, with an average CIE-94 color difference over the chart colors of less than 2.5 units.
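
    The final step, a third-order polynomial regression from measured patch colors to reference values, reduces to a least-squares fit once a polynomial feature expansion is chosen. The sketch below uses one common third-order expansion; the exact terms used in the paper are not listed in the abstract, so this is an assumption, and the patch data here are random placeholders.

        import numpy as np

        def poly_features(rgb):
            """Third-order polynomial expansion of RGB values (one common choice)."""
            r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
            cols = [np.ones_like(r), r, g, b,
                    r * r, g * g, b * b, r * g, r * b, g * b,
                    r ** 3, g ** 3, b ** 3, r * g * b]
            return np.stack(cols, axis=1)

        def fit_color_transform(measured_rgb, reference_values):
            """Least-squares fit of polynomial color-correction coefficients."""
            A = poly_features(np.asarray(measured_rgb, float))
            coeffs, *_ = np.linalg.lstsq(A, np.asarray(reference_values, float),
                                         rcond=None)
            return coeffs

        def apply_color_transform(rgb, coeffs):
            return poly_features(np.asarray(rgb, float)) @ coeffs

        # Fit on the 48 averaged patch colors, then correct new measurements.
        measured = np.random.rand(48, 3)         # placeholder patch averages
        reference = np.random.rand(48, 3)        # placeholder reference values
        coeffs = fit_color_transform(measured, reference)
        print(apply_color_transform(measured[:2], coeffs))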

  15. 7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA INSIDE CAMERA CAR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  16. Development of a 32-detector CdTe matrix for the SVOM ECLAIRs x/gamma camera: tests results of first flight models

    NASA Astrophysics Data System (ADS)

    Lacombe, K.; Dezalay, J.-P.; Houret, B.; Amoros, C.; Atteia, J.-L.; Aubaret, K.; Billot, M.; Bordon, S.; Cordier, B.; Delaigue, S.; Galliano, M.; Gevin, O.; Godet, O.; Gonzalez, F.; Guillemot, Ph.; Limousin, O.; Mercier, K.; Nasser, G.; Pons, R.; Rambaud, D.; Ramon, P.; Waegebaert, V.

    2016-07-01

    ECLAIRs, a 2-D coded-mask imaging camera on board the Sino-French SVOM space mission, will detect and locate gamma-ray bursts in near real time in the 4 - 150 keV energy band over a large field of view. The design of ECLAIRs has been driven by the objective of reaching an unprecedented low-energy threshold of 4 keV. The detection plane is an assembly of 6400 Schottky CdTe detectors of size 4x4x1 mm3, biased from -200 V to -500 V and operated at -20°C. The low-energy threshold is achieved thanks to an innovative hybrid module composed of a thick-film ceramic holding 32 CdTe detectors ("Detectors Ceramics"), associated with an HTCC ceramic housing a low-noise 32-channel ASIC ("ASIC Ceramics"). We manage the coupling between the Detectors Ceramics and the ASIC Ceramics in order to achieve the best performance and ensure the uniformity of the detection plane. In this paper, we describe the complete XRDPIX hybrid, of which 50 flight models have been manufactured by the SAGEM company. We then show test results obtained on the Detectors Ceramics, on the ASIC Ceramics, and on the modules once assembled. Next, we compare detector leakage currents and ASIC equivalent noise charge (ENC) with the energy threshold values and FWHM measured on XRDPIX modules at a temperature of -20°C using a calibrated radioactive source of 241Am. Finally, we study the homogeneity of the spectral properties of the 32-detector hybrid matrices and conclude on the general performance of more than 1000 detection channels, which may reach the low-energy threshold of 4 keV required for the future ECLAIRs space camera.

  17. Method and System for Gamma-Ray Localization Induced Spacecraft Navigation Using Celestial Gamma-Ray Sources

    NASA Technical Reports Server (NTRS)

    Hisamoto, Chuck (Inventor); Arzoumanian, Zaven (Inventor); Sheikh, Suneel I. (Inventor)

    2015-01-01

    A method and system for spacecraft navigation using distant celestial gamma-ray bursts which offer detectable, bright, high-energy events that provide well-defined characteristics conducive to accurate time-alignment among spatially separated spacecraft. Utilizing assemblages of photons from distant gamma-ray bursts, relative range between two spacecraft can be accurately computed along the direction to each burst's source based upon the difference in arrival time of the burst emission at each spacecraft's location. Correlation methods used to time-align the high-energy burst profiles are provided. The spacecraft navigation may be carried out autonomously or in a central control mode of operation.
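
    The range computation described here reduces to cross-correlating the burst profiles recorded at the two spacecraft: the best-fit lag, multiplied by the speed of light, is the component of the inter-spacecraft baseline along the direction to the burst. The sketch below demonstrates that reduction on synthetic profiles; the sampling interval and pulse shape are placeholders.

        import numpy as np

        C_KM_S = 299792.458  # speed of light, km/s

        def relative_range_km(profile_a, profile_b, dt_s):
            """Relative range of spacecraft B with respect to A along the burst
            direction, from the lag maximising the cross-correlation of the two
            recorded burst profiles (positive lag: the burst reached B later)."""
            a = profile_a - profile_a.mean()
            b = profile_b - profile_b.mean()
            corr = np.correlate(b, a, mode="full")
            lag_s = (np.argmax(corr) - (len(a) - 1)) * dt_s
            return C_KM_S * lag_s   # = baseline projected onto the burst direction

        # Synthetic burst profile sampled at 1 ms, arriving 10 ms later at B.
        dt = 1e-3
        t = np.arange(0.0, 2.0, dt)
        pulse = np.exp(-((t - 0.5) / 0.05) ** 2)
        print(relative_range_km(pulse, np.roll(pulse, 10), dt))   # ~2998 km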

  18. The first demonstration of the concept of "narrow-FOV Si/CdTe semiconductor Compton camera"

    NASA Astrophysics Data System (ADS)

    Ichinohe, Yuto; Uchida, Yuusuke; Watanabe, Shin; Edahiro, Ikumi; Hayashi, Katsuhiro; Kawano, Takafumi; Ohno, Masanori; Ohta, Masayuki; Takeda, Shin`ichiro; Fukazawa, Yasushi; Katsuragawa, Miho; Nakazawa, Kazuhiro; Odaka, Hirokazu; Tajima, Hiroyasu; Takahashi, Hiromitsu; Takahashi, Tadayuki; Yuasa, Takayuki

    2016-01-01

    The Soft Gamma-ray Detector (SGD), to be deployed on board the ASTRO-H satellite, has been developed to provide the highest-sensitivity observations of celestial sources in the 60-600 keV energy band by employing a detector concept in which a Compton camera's field of view is restricted to a few degrees by a BGO shield (a narrow-FOV Compton camera). In this concept, the background from outside the FOV can be heavily suppressed by requiring that the incident direction of the gamma ray reconstructed by the Compton camera be consistent with the narrow FOV. We demonstrate, for the first time, the validity of the concept using background data taken on the ground during the thermal vacuum test and the low-temperature environment test of the SGD flight model. We show that the measured background level is suppressed to less than 10% by combining event rejection using the anti-coincidence trigger of the active BGO shield with Compton event reconstruction techniques. More than 75% of the signals from the field of view are retained against the background rejection, which clearly demonstrates the improvement in signal-to-noise ratio. The estimated effective area of 22.8 cm2 meets the mission requirement even though not all of the operational parameters of the instrument have been fully optimized yet.
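
    The event selection sketched below illustrates the narrow-FOV idea: an event is kept only if its Compton cone of possible incident directions passes within the field of view around the telescope axis. The energies, geometry, and 2-degree tolerance are placeholders, and the flight selection also relies on the BGO anti-coincidence veto described above, which is not shown here.

        import numpy as np

        ME_C2 = 511.0  # keV

        def passes_fov_cut(hit1, hit2, e1_kev, e2_kev,
                           fov_half_angle_deg=2.0, axis=(0.0, 0.0, 1.0)):
            """True if the event's Compton cone of possible incident directions
            comes within the FOV half-angle of the telescope axis."""
            # Back-projected cone axis: from the absorption hit toward the
            # scattering hit, i.e. pointing back toward the possible sources.
            cone_axis = np.asarray(hit1, float) - np.asarray(hit2, float)
            cone_axis /= np.linalg.norm(cone_axis)
            axis = np.asarray(axis, float) / np.linalg.norm(axis)
            # Compton scattering angle from the two deposited energies
            # (e1: recoil electron at hit1, e2: photon absorbed at hit2).
            cos_phi = 1.0 - ME_C2 * (1.0 / e2_kev - 1.0 / (e1_kev + e2_kev))
            phi = np.arccos(np.clip(cos_phi, -1.0, 1.0))
            psi = np.arccos(np.clip(np.dot(cone_axis, axis), -1.0, 1.0))
            # Closest angular approach of the cone to the telescope axis.
            return abs(psi - phi) <= np.radians(fov_half_angle_deg)

        # A 662 keV event scattered straight along the axis fails the cut,
        # because its cone opens ~69 degrees away from the axis.
        print(passes_fov_cut((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), 300.0, 362.0))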

  19. Caught on Camera.

    ERIC Educational Resources Information Center

    Milshtein, Amy

    2002-01-01

    Describes the benefits of and rules to be followed when using surveillance cameras for school security. Discusses various camera models, including indoor and outdoor fixed position cameras, pan-tilt zoom cameras, and pinhole-lens cameras for covert surveillance. (EV)

  20. Three-dimensional sensor system using multistripe laser and stereo camera for environment recognition of mobile robots

    NASA Astrophysics Data System (ADS)

    Kim, Min Young; Cho, Hyung Suck; Kim, Jae H.

    2002-10-01

    In recent years, intelligent autonomous mobile robots have drawn tremendous interest as service robots for serving humans or as industrial robots for replacing humans. To carry out their tasks, robots must be able to sense and recognize the 3D space in which they live or work. In this paper, we address a 3D sensing system for the environment recognition of mobile robots. Structured lighting is utilized as the basis of the 3D visual sensor system because of its robustness to the nature of the navigation environment and the easy extraction of the feature information of interest. The proposed sensing system is a trinocular vision system composed of a flexible multi-stripe laser projector and two cameras. The principle of extracting the 3D information is the optical triangulation method. By modeling the projector as another camera and using the epipolar constraints formed among the three views, the point-to-point correspondence between the line feature points in each image is established. In this work, the principle of this sensor is described in detail, and a series of experimental tests is performed to show the simplicity, efficiency, and accuracy of this sensor system for 3D environment sensing and recognition.
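
    As a hedged illustration of the optical triangulation principle underlying the sensor (not the authors' trinocular calibration), the sketch below recovers a 3D point from matched image points in two calibrated pinhole views using standard linear (DLT) triangulation; the camera matrices and the point are invented for the example.

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """Linear (DLT) triangulation of one 3-D point from two pixel
            correspondences x1, x2 and 3x4 camera projection matrices P1, P2."""
            A = np.vstack([x1[0] * P1[2] - P1[0],
                           x1[1] * P1[2] - P1[1],
                           x2[0] * P2[2] - P2[0],
                           x2[1] * P2[2] - P2[1]])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]

        # Two hypothetical calibrated cameras with a 100 mm baseline along x.
        K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])

        # Project a known point to generate matched pixels, then recover it.
        X_true = np.array([50.0, 20.0, 800.0])
        x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
        x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
        print(triangulate(P1, P2, x1, x2))   # ~ [50, 20, 800]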