Science.gov

Sample records for camera based positron

  1. Detection of occult disease in breast cancer using fluorodeoxyglucose camera-based positron emission tomography.

    PubMed

    Pecking, A P; Mechelany-Corone, C; Bertrand-Kermorgant, F; Alberini, J L; Floiras, J L; Goupil, A; Pichon, M F

    2001-10-01

    An isolated increase of the blood tumor marker CA 15.3 in breast cancer is considered a sensitive indicator of occult metastatic disease but is not by itself sufficient to initiate therapeutic intervention. We investigated the potential of camera-based positron emission tomography (PET) imaging using [18F]-fluorodeoxyglucose (FDG) to detect clinically occult recurrences in 132 female patients (age, 35-69 years) treated for breast cancer, all presenting with an isolated increase in CA 15.3 without any other evidence of metastatic disease. FDG results were correlated with pathology results or with a sequentially guided conventional imaging method. One hundred nineteen patients were eligible for correlations. Positive FDG scans were obtained for 106 patients, including 89 with a single lesion and 17 with 2 or more lesions. There were 92 true-positive and 14 false-positive cases, 10 of which became true positive within 1 year. Among the 13 negative cases, 7 were false negative and 6 were true negative. Camera-based PET using FDG identified clinically occult disease with an overall sensitivity of 93.6% and a positive predictive value of 96.2%. The smallest detected lesion was a 6 mm lymph node metastasis (tumor to nontumor ratio, 4:2). FDG camera-based PET localized tumors in 85.7% of cases suspected of clinically occult metastatic disease on the basis of a significant increase in the blood tumor marker. A positive FDG scan associated with an elevated CA 15.3 level is most consistent with metastatic relapse of breast cancer.
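
The quoted sensitivity and positive predictive value follow from the counts in the abstract once the 10 false positives that converted to true positives within one year are reclassified; a quick check (my arithmetic, not the authors' code):

```python
# Sensitivity = TP / (TP + FN); PPV = TP / (TP + FP).
# Assumption (mine): the reported 93.6% / 96.2% count the 10 false
# positives that became true positive within one year as true positives.
def sensitivity(tp, fn):
    return tp / (tp + fn)

def ppv(tp, fp):
    return tp / (tp + fp)

tp, fp, fn = 92, 14, 7      # counts as initially scored
tp += 10                    # 10 FP reclassified as TP within one year
fp -= 10

print(f"sensitivity = {100 * sensitivity(tp, fn):.1f}%")  # -> 93.6%
print(f"PPV         = {100 * ppv(tp, fp):.1f}%")          # -> 96.2%
```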

  2. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  3. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  4. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  5. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  6. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  7. Clinical applications with the HIDAC positron camera

    NASA Astrophysics Data System (ADS)

    Frey, P.; Schaller, G.; Christin, A.; Townsend, D.; Tochon-Danguy, H.; Wensveen, M.; Donath, A.

    1988-06-01

    A high density avalanche chamber (HIDAC) positron camera has been used for positron emission tomographic (PET) imaging in three different human studies, including patients presenting with: (I) thyroid diseases (124 cases); (II) clinically suspected malignant tumours of the pharynx or larynx (ENT) region (23 cases); and (III) clinically suspected primary malignant and metastatic tumours of the liver (9 cases, 19 PET scans). The positron emitting radiopharmaceuticals used for the three studies were Na124I (4.2 d half-life) for the thyroid, 55Co-bleomycin (17.5 h half-life) for the ENT region and 68Ga-colloid (68 min half-life) for the liver. Tomographic imaging was performed: (I) 24 h after oral Na124I administration to the thyroid patients, (II) 18 h after intravenous administration of 55Co-bleomycin to the ENT patients and (III) 20 min following the intravenous injection of 68Ga-colloid to the liver tumour patients. Three different imaging protocols were used with the HIDAC positron camera to perform appropriate tomographic imaging in each patient study. Promising results were obtained in all three studies, particularly in tomographic thyroid imaging, where the PET technique makes a significant clinical contribution to diagnosis and therapy planning. In the other two PET studies encouraging results were obtained for the detection and precise localisation of malignant tumour disease, including an in vivo estimate of the functional liver volume based on the reticulo-endothelial system (RES) of the liver and the three-dimensional display of liver PET data using shaded graphics techniques. The clinical significance of the overall results obtained in both the ENT and the liver PET study, however, is still uncertain, and the respective role of PET as a new imaging modality in these applications is not yet clearly established. The clinical impact of PET in liver and ENT malignant tumour staging needs further investigation.

  8. Investigational study of iodine-124 with a positron camera

    SciTech Connect

    Lambrecht, R.M.; Woodhouse, N.; Phillips, R.; Wolczak, D.; Qureshi, A.; Reyes, E.D.; Graser, C.; Al-Yanbawi, S.; Al-Rabiah, A.; Meyer, W.

    1988-01-01

    A case is presented in which I-124 produced by a clinical cyclotron was used clinically with a positron emission tomography (PET) camera. This represents the first report of the use of this radionuclide with this modality. We feel the increased spatial resolution of PET should be of value in examining thyroid disease.

  9. Initial performance studies of a wearable brain positron emission tomography camera based on autonomous thin-film digital Geiger avalanche photodiode arrays.

    PubMed

    Schmidtlein, Charles R; Turner, James N; Thompson, Michael O; Mandal, Krishna C; Häggström, Ida; Zhang, Jiahan; Humm, John L; Feiglin, David H; Krol, Andrzej

    2017-01-01

    Using analytical and Monte Carlo modeling, we explored performance of a lightweight wearable helmet-shaped brain positron emission tomography (PET), or BET camera, based on thin-film digital Geiger avalanche photodiode arrays with Lutetium-yttrium oxyorthosilicate (LYSO) or [Formula: see text] scintillators for imaging in vivo human brain function of freely moving and acting subjects. We investigated a spherical cap BET and cylindrical brain PET (CYL) geometries with 250-mm diameter. We also considered a clinical whole-body (WB) LYSO PET/CT scanner. The simulated energy resolutions were 10.8% (LYSO) and 3.3% ([Formula: see text]), and the coincidence window was set at 2 ns. The brain was simulated as a water sphere of uniform F-18 activity with a radius of 100 mm. We found that BET achieved [Formula: see text] better noise equivalent count (NEC) performance relative to the CYL and [Formula: see text] than WB. For 10-mm-thick [Formula: see text] equivalent mass systems, LYSO (7-mm thick) had [Formula: see text] higher NEC than [Formula: see text]. We found that [Formula: see text] scintillator crystals achieved [Formula: see text] full-width-half-maximum spatial resolution without parallax errors. Additionally, our simulations showed that LYSO generally outperformed [Formula: see text] for NEC unless the timing resolution for [Formula: see text] was considerably smaller than that presently used for LYSO, i.e., well below 300 ps.
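
The noise equivalent count (NEC) rate used above as the figure of merit has a standard definition; a minimal sketch, with illustrative rates that are my assumptions rather than values from the paper:

```python
# NEC = T^2 / (T + S + R): the count rate of an ideal (scatter- and
# randoms-free) scanner giving the same statistical image quality.
def nec(trues, scatters, randoms):
    return trues**2 / (trues + scatters + randoms)

# Illustrative rates in counts/s (assumed, not from the paper):
t, s, r = 100e3, 30e3, 20e3
print(f"NEC = {nec(t, s, r):.0f} counts/s")
```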

  10. The Rutherford Appleton laboratory's mark I multiwire proportional counter positron camera

    NASA Astrophysics Data System (ADS)

    Bateman, J. E.; Connolly, J. F.; Stephenson, R.; Tappern, G. J.; Flesher, A. C.

    1984-08-01

    A small (30 cm × 30 cm) model of a proposed large aperture positron camera has been developed at Rutherford Appleton Laboratory. Based on multiwire proportional counter technology, it uses lead foil cathodes which function simultaneously as converters for the 511 keV gamma rays and as readout electrodes for a delay line readout system. The detectors have been built up into a portable imaging system complete with a dedicated computer for data taking, processing and display. This has permitted evaluation of this type of positron imaging system in the clinical environment using both cyclotron generated isotopes (15O, 11C, 18F, 124I) and available isotopic generator systems (82Rb, 68Ga). At RAL we provided a complete hardware system and sufficient software to permit our hospital-based colleagues to generate useful images with the minimum of effort. A complete description of the system is given with performance figures, and some of the images obtained in three hospital visits are presented. Some detailed studies of the imaging performance of the positron camera are reported which have bearing on the design of future, improved systems.

  11. The electronics system for the LBNL positron emission mammography (PEM) camera

    NASA Astrophysics Data System (ADS)

    Moses, W. W.; Young, J. W.; Baker, K.; Jones, W.; Lenox, M.; Ho, M. H.; Weng, M.

    2001-06-01

    This paper describes the electronics for a high-performance positron emission mammography (PEM) camera. It is based on the electronics for a human brain positron emission tomography (PET) camera (the Siemens/CTI HRRT), modified to use a detector module that incorporates a photodiode (PD) array. An application-specific integrated circuit (ASIC) services the PD array, amplifying its signal and identifying the crystal of interaction. Another ASIC services the photomultiplier tube (PMT), measuring its output and providing a timing signal. Field-programmable gate arrays (FPGAs) and lookup RAMs are used to apply crystal-by-crystal correction factors and to measure the energy deposit and the interaction depth (based on the PD/PMT ratio). Additional FPGAs provide event multiplexing, derandomization, coincidence detection, and real-time rebinning. Embedded PC/104 microprocessors provide communication, real-time control, and system configuration. Extensive use of FPGAs makes the overall design extremely flexible, allowing many different functions (or design modifications) to be realized without hardware changes. The incorporation of extensive onboard diagnostics, implemented in the FPGAs, is required by the very high level of integration and density achieved by this system.
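
The depth-of-interaction measurement from the PD/PMT ratio can be sketched as follows; the linear light-sharing model, the signal values and the crystal length are my illustrative assumptions, not the actual HRRT calibration:

```python
# Depth of interaction (DOI) from the photodiode/photomultiplier signal
# ratio: the closer the interaction to the PD face of the crystal, the
# larger the PD share of the scintillation light (assumed linear here).
def doi_from_ratio(pd_signal, pmt_signal, crystal_length_mm=20.0):
    pd_fraction = pd_signal / (pd_signal + pmt_signal)
    return pd_fraction * crystal_length_mm

print(doi_from_ratio(40.0, 60.0))   # -> 8.0 (mm from the PD face)
```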

  12. A new gamma camera for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Schotanus, Paul

    1988-06-01

    The detection of annihilation radiation absorbed in a barium fluoride (BaF2) crystal is described. The resulting scintillation light is detected in a multiwire proportional chamber filled with a photosensitive vapor. The combination of a high density, fast scintillator with a low pressure wire chamber offers good detection efficiency and permits high count rates because of the small dead time. The physical background of the detection mechanism is explored and the performance parameters of a gamma camera using this principle are determined. The scintillation mechanism and physical characteristics of the BaF2 scintillator are examined. Ultraviolet scintillation materials consisting of rare earth doped fluorides are introduced.

  13. Design considerations for a high-spatial-resolution positron camera with dense-drift-space MWPC's

    NASA Astrophysics Data System (ADS)

    Delguerra, A.; Perez-Mendez, V.; Schwartz, G.; Nelson, W. R.

    1982-10-01

    A multiplane positron camera is proposed, made of six MWPC modules arranged to form the lateral surface of a hexagonal prism. Each module (50 × 50 cm2) has a 2 cm thick lead-glass tube converter on both sides of a MWPC pressurized to 2 atm. Experimental measurements are presented to show how to reduce the parallax error by determining in which of the two converter layers the photon has interacted. The results of a detailed Monte Carlo calculation of the efficiency of this type of converter are shown to be in excellent agreement with the experimental measurements. The expected performance of the positron camera is presented: a true coincidence rate of 56,000 counts/s (with an equal accidental coincidence rate and a 30% Compton scatter contamination) and a spatial resolution better than 5.0 mm (FWHM) for a 400 μCi point-like source embedded in a 10 cm radius water phantom.
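
The accidental coincidence rate quoted alongside the 56,000 counts/s true rate follows from the standard randoms formula; the singles rates and coincidence window below are assumptions chosen for illustration, not values from the paper:

```python
# Accidental (random) coincidences between two detectors with singles
# rates S1, S2 and a coincidence window tau: R = 2 * tau * S1 * S2.
def randoms_rate(singles1, singles2, tau_s):
    return 2.0 * tau_s * singles1 * singles2

s1 = s2 = 1.0e6   # singles/s per detector bank (assumed)
tau = 28e-9       # 28 ns coincidence window (assumed)
print(f"{randoms_rate(s1, s2, tau):.0f} accidentals/s")   # -> 56000
```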

  14. Development of a high resolution beta camera for a direct measurement of positron distribution on brain surface

    SciTech Connect

    Yamamoto, S.; Seki, C.; Kashikura, K.

    1996-12-31

    We have developed and tested a high resolution beta camera for direct measurement of positron distribution on the brain surface of animals. The beta camera consists of a thin CaF2(Eu) scintillator, a tapered fiber optics plate (taper fiber) and a position sensitive photomultiplier tube (PSPMT). The taper fiber is the key component of the camera. We have developed two types of beta cameras: one with a 20 mm diameter field of view for imaging the brain surface of cats, and one with a 10 mm diameter field of view for that of rats. The spatial resolutions of the beta cameras for cats and rats were 0.8 mm FWHM and 0.5 mm FWHM, respectively. We confirmed that the developed beta cameras may overcome the limitation of the spatial resolution of positron emission tomography (PET).

  15. Figures of merit for different detector configurations utilized in high resolution positron cameras

    SciTech Connect

    Eriksson, L.; Bergstrom, M.; Rohm, C.; Holte, S.; Kesselberg, M.

    1986-02-01

    A new positron camera system is currently being designed. The goal is an instrument that can measure the whole brain with a spatial resolution of 5 mm FWHM in all directions. In addition to the high spatial resolution, the system must be able to handle count rates of 0.5 MHz or more in order to perform accurate fast dynamic function studies such as the determination of cerebral blood flow and cerebral oxygen consumption following a rapid bolus. An overall spatial resolution of 5 mm requires crystal dimensions of 6 × 6 × L mm3, or less, L being the length of the crystal. Timing and energy requirements necessitate high performance photomultipliers. The identification of the small size scintillator crystals can currently only be handled in schemes based on the Anger technique, in the future possibly with photodiodes. In the present work different crystal identification schemes have been investigated. The investigations have involved efficiency measurements of different scintillators, line spread function studies and the evaluation of different coding schemes in order to identify small crystals.

  16. Design of POSICAM: A high resolution multislice whole body positron camera

    SciTech Connect

    Mullani, N.A.; Wong, W.H.; Hartz, R.K.; Bristow, D.; Gaeta, J.M.; Yerian, K.; Adler, S.; Gould, K.L.

    1985-01-01

    A high resolution (6 mm), multislice (21) whole body positron camera has been designed with an innovative detector and septa arrangement for 3-D imaging and tracer quantitation. An object of interest such as the brain or the heart is optimally imaged by the 21 simultaneous image planes, which have 12 mm resolution and are separated by 5.5 mm to provide adequate sampling in the axial direction. The detector geometry and the electronics are flexible enough to allow BaF2, BGO, GSO or time-of-flight BaF2 scintillators. The mechanical gantry has been designed for clinical applications and incorporates several features for patient handling and comfort. A large patient opening of 58 cm diameter with a tilt of ±30° and a rotation of ±20° permits imaging from different positions without moving the patient. Multiprocessor computing systems and user-friendly software make the POSICAM a powerful 3-D imaging device.

  17. A CF4 based positron trap

    NASA Astrophysics Data System (ADS)

    Marjanović, Srdjan; Banković, Ana; Cassidy, David; Cooper, Ben; Deller, Adam; Dujko, Saša; Petrović, Zoran Lj

    2016-11-01

    All buffer-gas positron traps in use today rely on N2 as the primary trapping gas due to its conveniently placed a1Π electronic excitation cross-section. The energy loss per excitation in this process is 8.5 eV, which is sufficient to capture positrons from low-energy moderated beams into a Penning-trap configuration of electric and magnetic fields. However, the energy range over which this cross-section is accessible overlaps with that for positronium (Ps) formation, resulting in inevitable losses and setting an intrinsic upper limit on the overall trapping efficiency of ∼25%. In this paper we present a numerical simulation of a device that uses CF4 as the primary trapping gas, exploiting vibrational excitation as the main inelastic capture process. The threshold for such excitations is far below that for Ps formation and hence, in principle, a CF4 trap can be highly efficient; our simulations indicate that it may be possible to achieve trapping efficiencies as high as 90%. We also report the results of an attempt to re-purpose an existing two-stage N2-based buffer-gas positron trap. Operating the device using CF4 proved unsuccessful, which we attribute to back scattering and expansion of the positron beam following interactions with the CF4 gas, and an unfavourably broad longitudinal beam energy spread arising from the magnetic field differential between the source and trap regions. The observed performance was broadly consistent with subsequent simulations that included parameters specific to the test system, and we outline the modifications that would be required to realise efficient positron trapping with CF4. However, additional losses appear to be present which require further investigation through both simulation and experiment.

  18. Performance analysis of a new positron camera geometry for high speed, fine particle tracking

    NASA Astrophysics Data System (ADS)

    Sovechles, J. M.; Boucher, D.; Pax, R.; Leadbeater, T.; Sasmito, A. P.; Waters, K. E.

    2017-09-01

    A new positron camera arrangement was assembled using 16 ECAT951 modular detector blocks. A closely packed, cross-pattern arrangement was selected to produce a highly sensitive cylindrical region for tracking particles with low activities and high speeds. To determine the capabilities of this system, a comprehensive analysis of the tracking performance was conducted to determine the 3D location error and location frequency as a function of tracer activity and speed. The 3D error was found to range from 0.54 mm for a stationary particle, consistent for all tracer activities, up to 4.33 mm for a tracer with an activity of 3 MBq and a speed of 4 m·s⁻¹. For lower activity tracers (<10⁻² MBq), the error was more sensitive to increases in speed, increasing to 28 mm (at 4 m·s⁻¹), indicating that under these conditions a reliable trajectory is not possible. These results expanded on, but correlated well with, previous literature that only contained location errors for tracer speeds up to 1.5 m·s⁻¹. The camera was also used to track directly activated mineral particles inside a two-inch hydrocyclone and a 142 mm diameter flotation cell. A detailed trajectory, inside the hydrocyclone, of a -212 +106 µm (10⁻¹ MBq) quartz particle displayed the expected spiralling motion towards the apex. This was the first time a mineral particle of this size had been successfully traced within a hydrocyclone; however, more work is required to develop detailed velocity fields.
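
Locating a tracer from a set of measured LORs reduces to finding the point closest to a bundle of 3D lines; a minimal least-squares version of that step (real positron emission particle tracking implementations also iteratively discard corrupted LORs, which this sketch omits):

```python
import numpy as np

# Find the point minimising the summed squared distance to a set of
# LORs, each given by a point p on the line and a unit direction d.
def locate(points, directions):
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        proj = np.eye(3) - np.outer(d, d)  # projector normal to the line
        A += proj
        b += proj @ p
    return np.linalg.solve(A, b)

# Two perpendicular LORs that intersect at (1, 2, 3):
pts = np.array([[1.0, 2.0, 0.0], [0.0, 2.0, 3.0]])
dirs = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
print(locate(pts, dirs))   # -> [1. 2. 3.]
```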

  19. Investigation of Positron Moderator Materials for Electron-Linac-Based Slow Positron Beamlines

    NASA Astrophysics Data System (ADS)

    Suzuki, Ryoichi; Ohdaira, Toshiyuki; Uedono, Akira; Cho, Yang; Yoshida, Sadafumi; Ishida, Yuuki; Ohshima, Takeshi; Itoh, Hisayoshi; Chiwaki, Mitsukuni; Mikado, Tomohisa; Yamazaki, Tetsuo; Tanigawa, Shoichiro

    1998-08-01

    Positron re-emission properties were studied on moderator materials in order to improve the positron moderation system of electron-linac-based intense slow positron beamlines. The re-emitted positron fraction was measured on tungsten, SiC, GaN, SrTiO3, and hydrogen-terminated Si with a variable-energy pulsed positron beam. The results suggested that tungsten is the best material for the primary moderator of the positron beamlines, while epitaxially grown n-type 6H SiC is the best material for the secondary moderator. Defect characterization by monoenergetic positron beams and surface characterization by Auger electron spectroscopy were carried out to clarify the mechanism of tungsten moderator degradation induced by high-energy electron irradiation. The characterization experiments revealed that the degradation is due to both radiation-induced vacancy clusters and surface carbon impurities. For the restoration of degraded tungsten moderators, oxygen treatment at ~900°C is effective. Furthermore, it was found that oxygen at the tungsten surface inhibits positronium formation; as a result, it can increase the positron re-emission fraction.

  20. Optimization of positrons generation based on laser wakefield electron acceleration

    NASA Astrophysics Data System (ADS)

    Wu, Yuchi; Han, Dan; Zhang, Tiankui; Dong, Kegong; Zhu, Bin; Yan, Yonghong; Gu, Yuqiu

    2016-08-01

    Laser-based positron sources represent a new type of particle source with short pulse duration and high charge density. Positron production based on laser wakefield electron acceleration (LWFA) is investigated theoretically in this paper. Analytical expressions for positron spectra and yield have been obtained through a combination of LWFA and cascade shower theories. The maximum positron yield and corresponding converter thickness have been optimized as a function of driving laser power. Under the optimal conditions, a high energy (>100 MeV) positron yield of up to 5×10¹¹ can be produced by high power femtosecond lasers at ELI-NP. The percentage of positrons shows that a quasineutral electron-positron jet can be generated by setting the converter thickness greater than 5 radiation lengths.

  1. First platinum moderated positron beam based on neutron capture

    NASA Astrophysics Data System (ADS)

    Hugenschmidt, C.; Kögel, G.; Repper, R.; Schreckenbach, K.; Sperr, P.; Triftshäuser, W.

    2002-12-01

    A positron beam based on absorption of high energy prompt γ-rays from thermal neutron capture in 113Cd was installed at a neutron guide of the high flux reactor at the ILL in Grenoble. Measurements were performed for various source geometries, dependent on converter mass, moderator surface and extraction voltages. The results led to an optimised design of the in-pile positron source which will be implemented at the Munich research reactor FRM-II. The positron source consists of platinum foils acting as γ-e⁺e⁻ converter and positron moderator. Due to the negative positron work function, moderation in heated platinum leads to the emission of monoenergetic positrons. The positron work function of polycrystalline platinum was determined to be 1.95(5) eV. After acceleration to several keV by four electrical lenses, the beam was magnetically guided in a solenoid field of 7.5 mT to a NaI detector in order to detect the 511 keV γ-radiation of the annihilating positrons. The positron beam, with a diameter of less than 20 mm, yielded an intensity of 3.1×10⁴ moderated positrons per second. The total moderation efficiency of the positron source was about ε=1.06(16)×10⁻⁴. Within the first 20 h of operation a degradation of the moderation efficiency of 30% was observed. An annealing procedure at 873 K in air recovers the platinum moderator.
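
The two quoted figures together fix the implied primary (unmoderated) positron production rate; a one-line check (my arithmetic, not a figure stated in the abstract):

```python
# Primary positron production rate implied by the measured beam
# intensity and the total moderation efficiency.
beam_rate = 3.1e4          # moderated positrons per second (quoted)
efficiency = 1.06e-4       # total moderation efficiency (quoted)
primary_rate = beam_rate / efficiency
print(f"{primary_rate:.1e} primary positrons/s")   # -> 2.9e+08
```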

  2. Industrial positron-based imaging: Principles and applications

    NASA Astrophysics Data System (ADS)

    Parker, D. J.; Hawkesworth, M. R.; Broadbent, C. J.; Fowles, P.; Fryer, T. D.; McNeil, P. A.

    1994-09-01

    Positron Emission Tomography (PET) has great potential as a non-invasive flow imaging technique in engineering, since 511 keV gamma-rays can penetrate a considerable thickness of (e.g.) steel. The RAL/Birmingham multiwire positron camera was constructed in 1984, with the initial goal of observing the lubricant distribution in operating aero-engines, automotive engines and gearboxes, and has since been used in a variety of industrial fields. The major limitation of the camera for conventional tomographic PET studies is its restricted logging rate, which limits the frequency with which images can be acquired. Tracking a single small positron-emitting tracer particle provides a more powerful means of observing high speed motion using such a camera. Following a brief review of the use of conventional PET in engineering, and the capabilities of the Birmingham camera, this paper describes recent developments in the Positron Emission Particle Tracking (PEPT) technique, and compares the results obtainable by PET and PEPT using, as an example, a study of axial diffusion of particles in a rolling cylinder.

  3. Fast 3D-EM reconstruction using Planograms for stationary planar positron emission mammography camera.

    PubMed

    Motta, A; Guerra, A Del; Belcari, N; Moehrs, S; Panetta, D; Righi, S; Valentini, D

    2005-12-01

    At the University of Pisa we are building a PEM prototype, the YAP-PEM camera, consisting of two opposing 6 × 6 × 3 cm3 detector heads of 30 × 30 YAP:Ce finger crystals, 2 × 2 × 30 mm3 each. The camera will be equipped with breast compressors, and the acquisition will be stationary. Compared with a whole body PET scanner, a planar Positron Emission Mammography (PEM) camera allows better, easier and more flexible positioning around the breast in the vicinity of the tumor: this increases the sensitivity and solid angle coverage, and reduces cost. To avoid software rejection of data during the reconstruction, which would reduce sensitivity, we adopted a 3D-EM reconstruction which uses all of the collected Lines Of Response (LORs). This avoids the PSF distortion introduced by data rebinning procedures and/or Fourier methods. The traditional 3D-EM reconstruction requires repeated computation of the LOR-voxel correlation matrix, or probability matrix {p(ij)}, and is therefore highly time-consuming. We use the sparseness and symmetry properties of the matrix {p(ij)} to perform fast 3D-EM reconstruction. Geometrically, a 3D grid of cubic voxels (FOV) is crossed by several divergent 3D line sets (LORs). Symmetries occur when tracing different LORs produces the same p(ij) value. Parallel LORs of different sets cross the FOV in the same way, and the repetition of p(ij) values depends on the ratio between the tube and voxel sizes. By optimizing this ratio, the occurrence of symmetries is increased. We identify a nucleus of symmetry of LORs: for each set of symmetrical LORs we choose just one LOR to be put in the nucleus, while the others lie outside. All of the possible p(ij) values are obtainable by tracking only the LORs of this nucleus. The coordinates of the voxels of all of the other LORs are given by means of simple translation rules. Before making the reconstruction, we trace the LORs of the nucleus to find the intersecting voxels, whose p(ij) values are computed and
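
The EM reconstruction that the symmetry scheme accelerates is the standard multiplicative MLEM update; a toy version with an invented 3×3 system matrix (the real {p(ij)} is the sparse LOR-voxel matrix described in the abstract):

```python
import numpy as np

# MLEM update: x <- x * P^T(y / (P x)) / sum_i p_ij, where P is the
# probability (system) matrix and y the measured LOR counts.
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])       # invented 3x3 system matrix
x_true = np.array([10.0, 5.0, 1.0])   # "true" voxel activities
y = P @ x_true                        # noiseless measured LOR counts

x = np.ones(3)                        # uniform initial estimate
sens = P.sum(axis=0)                  # per-voxel sensitivity sum_i p_ij
for _ in range(300):
    x *= (P.T @ (y / (P @ x))) / sens # multiplicative EM update

print(np.round(x, 3))                 # converges toward [10, 5, 1]
```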

  4. Monoenergetic positron beam at the reactor based positron source at FRM-II

    NASA Astrophysics Data System (ADS)

    Hugenschmidt, C.; Kögel, G.; Repper, R.; Schreckenbach, K.; Sperr, P.; Straßer, B.; Triftshäuser, W.

    2002-05-01

    The principle of the in-pile positron source at the Munich research reactor FRM-II is based on absorption of high energy prompt γ-rays from thermal neutron capture in 113Cd. For this purpose, a cadmium cap is placed inside the tip of the inclined beam tube SR-11 in the moderator tank of the reactor, where an undisturbed thermal neutron flux of up to 2×10¹⁴ n cm⁻² s⁻¹ is expected. Inside the cadmium cap a structure of platinum foils is placed for converting high energy γ-radiation into positron-electron pairs. Due to the negative positron work function, moderation in annealed platinum leads to emission of monoenergetic positrons. Therefore, platinum will also be used as moderator, since its moderation property seems to yield long-term stability under reactor conditions and it is much easier to handle than tungsten. Model calculations were performed with SIMION-7.0w to optimise the geometry and potentials of the Pt-foils and electrical lenses. It could be shown that the potentials between the Pt-foils must be chosen in the range of 1-10 V to extract moderated positrons. After successive acceleration to 5 keV by four electrical lenses the beam is magnetically guided in a solenoid field of 7.5 mT, resulting in a beam diameter of about 25 mm. An intensity of about 10¹⁰ slow positrons per second is expected in the primary positron beam. Outside of the reactor shield a W(100) single crystal remoderation stage will lead to an improvement of the positron beam brilliance before the positrons are guided to the experimental facilities.

  5. Undulator-based production of polarized positrons

    NASA Astrophysics Data System (ADS)

    Alexander, G.; Barley, J.; Batygin, Y.; Berridge, S.; Bharadwaj, V.; Bower, G.; Bugg, W.; Decker, F.-J.; Dollan, R.; Efremenko, Y.; Flöttmann, K.; Gharibyan, V.; Hast, C.; Iverson, R.; Kolanoski, H.; Kovermann, J. W.; Laihem, K.; Lohse, T.; McDonald, K. T.; Mikhailichenko, A. A.; Moortgat-Pick, G. A.; Pahl, P.; Pitthan, R.; Pöschl, R.; Reinherz-Aronis, E.; Riemann, S.; Schälicke, A.; Schüler, K. P.; Schweizer, T.; Scott, D.; Sheppard, J. C.; Stahl, A.; Szalata, Z.; Walz, D. R.; Weidemann, A.

    2009-11-01

    Full exploitation of the physics potential of a future International Linear Collider will require the use of polarized electron and positron beams. Experiment E166 at the Stanford Linear Accelerator Center (SLAC) has demonstrated a scheme in which an electron beam passes through a helical undulator to generate photons (whose first-harmonic spectrum extended to 7.9 MeV) with circular polarization, which are then converted in a thin target to generate longitudinally polarized positrons and electrons. The experiment was carried out with a 1-m-long, 400-period, pulsed helical undulator in the Final Focus Test Beam (FFTB) operated at 46.6 GeV. Measurements of the positron polarization have been performed at five positron energies from 4.5 to 7.5 MeV. In addition, the electron polarization has been determined at 6.7 MeV, and the effect of operating the undulator with a ferrofluid was also investigated. To compare the measurements with expectations, detailed simulations were made with an upgraded version of GEANT4 that includes the dominant polarization-dependent interactions of electrons, positrons, and photons with matter. The measurements agree with calculations, corresponding to 80% polarization for positrons near 6 MeV and 90% for electrons near 7 MeV.

  6. Testing of a nuclear-reactor-based positron beam

    NASA Astrophysics Data System (ADS)

    van Veen, A.; Labohm, F.; Schut, H.; de Roode, J.; Heijenga, T.; Mijnarends, P. E.

    1997-05-01

    This paper describes the testing of a positron beam which is primarily based on copper activation near the core of a nuclear reactor and extraction of the positrons through a beam guide tube. An out-of-core test with a 22Na source and an in-core test with the reactor at reduced power have been performed. Both tests indicated a high reflectivity of moderated positrons at the tungsten surfaces of the moderation discs, which enhanced the expected yield. Secondary electrons generated in the source materials during the in-core test caused electrical field distortions in the electrode system by charging the insulators. At 100 kW reactor power during one hour, positrons were observed with an intensity of 4.4 × 10^4 e+ s^-1, of which 90% was due to positrons created by pair formation and 10% by copper activation.

  7. Camera-based driver assistance systems

    NASA Astrophysics Data System (ADS)

    Grimm, Michael

    2013-04-01

    In recent years, camera-based driver assistance systems have taken an important step: from laboratory setup to series production. This tutorial gives a brief overview of the technology behind driver assistance systems, presents the most significant functionalities, and focuses on the process of developing camera-based systems for series production. We highlight the critical points that need to be addressed when camera-based driver assistance systems are sold in their thousands worldwide, and the safety benefits that result.

  8. Van de Graaff based positron source production

    NASA Astrophysics Data System (ADS)

    Lund, Kasey Roy

    The anti-matter counterpart of the electron, the positron, can be used for a myriad of scientific research projects, including materials research, energy storage, and deep-space flight propulsion. Currently there is a demand for large numbers of positrons to aid in these research projects. There are different methods of producing and harvesting positrons, but all require radioactive sources or large facilities. Positron beams produced by relatively small accelerators are attractive because they are easily shut down, and small accelerators are readily available. A 4 MV Van de Graaff accelerator was used to induce the nuclear reaction 12C(d,n)13N in order to produce an intense beam of positrons. 13N is an isotope of nitrogen that decays with a 10-minute half-life into 13C, a positron, and an electron neutrino. This radioactive gas is frozen in a cryogenic freezer, from which it is then channeled to form an antimatter beam. The beam is then guided using axial magnetic fields into a superconducting magnet with a field strength of up to 7 T, where it will be stored in a newly designed Micro-Penning-Malmberg trap. Several source geometries were tested, and a maximum positron flux of greater than 0.55×10^6 e+ s^-1 was achieved. This beam was produced using a solid rare-gas moderator composed of krypton. Due to geometric restrictions on this setup, only 0.1-1.0% of the antimatter was frozen at the desired locations. Simulations and preliminary experiments suggest that a new geometry, currently under testing, will produce a beam of 10^7 e+ s^-1 or more.
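The 10-minute half-life quoted above sets how quickly a collected 13N inventory decays into positron-emitting events; a minimal sketch of that arithmetic (the half-life value is taken from the text, the function name is ours):

```python
T_HALF_S = 10 * 60  # 13N half-life from the text: ~10 minutes, in seconds

def remaining_fraction(t_seconds):
    """Fraction of the collected 13N still undecayed after t seconds."""
    return 2.0 ** (-t_seconds / T_HALF_S)

# Each decay yields 13C, a positron, and an electron neutrino, so after
# one half-life half of the frozen inventory has emitted its positron.
print(remaining_fraction(600.0))  # -> 0.5
```

This also shows why the gas must be channeled to the beam line promptly: after three half-lives (30 min) only about 12% of the harvested activity remains.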

  9. Methods and applications of positron-based medical imaging

    NASA Astrophysics Data System (ADS)

    Herzog, H.

    2007-02-01

    Positron emission tomography (PET) is a diagnostic imaging method to examine metabolic functions and their disorders. Dedicated ring systems of scintillation detectors measure the 511 keV γ-radiation produced in the course of the positron emission from radiolabelled metabolically active molecules. A great number of radiopharmaceuticals labelled with 11C, 13N, 15O, or 18F positron emitters have been applied both for research and clinical purposes in neurology, cardiology and oncology. The recent success of PET with rapidly increasing installations is mainly based on the use of [18F]fluorodeoxyglucose (FDG) in oncology where it is most useful to localize primary tumours and their metastases.

  10. Potential advantages of a cesium fluoride scintillator for a time-of-flight positron camera.

    PubMed

    Allemand, R; Gresset, C; Vacher, J

    1980-02-01

    In order to improve the quality of positron tomographic imaging, a time-of-flight technique combined with a classical reconstruction method has been investigated. The decay time of NaI(Tl) and bismuth germanate (BGO) scintillators is too long for this application, and efficiency of the plastic scintillators is too low. Cesium fluoride appears to be a very promising detector material. This paper presents preliminary results obtained with a time-of-flight technique using CsF scintillators. The expected advantages were realized.

  11. Plasma and trap-based techniques for science with positrons

    NASA Astrophysics Data System (ADS)

    Danielson, J. R.; Dubin, D. H. E.; Greaves, R. G.; Surko, C. M.

    2015-01-01

    In recent years, there has been a wealth of new science involving low-energy antimatter (i.e., positrons and antiprotons) at energies ranging from 10^2 to less than 10^-3 eV. Much of this progress has been driven by the development of new plasma-based techniques to accumulate, manipulate, and deliver antiparticles for specific applications. This article focuses on the advances made in this area using positrons. However, many of the resulting techniques are relevant to antiprotons as well. An overview is presented of relevant theory of single-component plasmas in electromagnetic traps. Methods are described to produce intense sources of positrons and to efficiently slow the typically energetic particles thus produced. Techniques are described to trap positrons efficiently and to cool and compress the resulting positron gases and plasmas. Finally, the procedures developed to deliver tailored pulses and beams (e.g., in intense, short bursts, or as quasimonoenergetic continuous beams) for specific applications are reviewed. The status of development in specific application areas is also reviewed. One example is the formation of antihydrogen atoms for fundamental physics [e.g., tests of invariance under charge conjugation, parity inversion, and time reversal (the CPT theorem), and studies of the interaction of gravity with antimatter]. Other applications discussed include atomic and materials physics studies and the study of the electron-positron many-body system, including both classical electron-positron plasmas and the complementary quantum system in the form of Bose-condensed gases of positronium atoms. Areas of future promise are also discussed. The review concludes with a brief summary and a list of outstanding challenges.

  12. Foale in Base Block with camera

    NASA Image and Video Library

    1997-11-03

    STS086-405-008 (25 Sept-6 Oct 1997) --- Astronaut C. Michael Foale, wearing attire representing the STS-86 crew after four months aboard Russia's Mir Space Station, operates a video camera in Mir's Base Block module. Photo credit: NASA

  13. Multimodal sensing-based camera applications

    NASA Astrophysics Data System (ADS)

    Bordallo López, Miguel; Hannuksela, Jari; Silvén, J. Olli; Vehviläinen, Markku

    2011-02-01

    The increased sensing and computing capabilities of mobile devices can provide for enhanced mobile user experience. Integrating the data from different sensors offers a way to improve application performance in camera-based applications. A key advantage of using cameras as an input modality is that it enables recognizing the context. Therefore, computer vision has been traditionally utilized in user interfaces to observe and automatically detect the user actions. The imaging applications can also make use of various sensors for improving the interactivity and the robustness of the system. In this context, two applications fusing the sensor data with the results obtained from video analysis have been implemented on a Nokia Nseries mobile device. The first solution is a real-time user interface that can be used for browsing large images. The solution enables the display to be controlled by the motion of the user's hand using the built-in sensors as complementary information. The second application is a real-time panorama builder that uses the device's accelerometers to improve the overall quality, providing also instructions during the capture. The experiments show that fusing the sensor data improves camera-based applications especially when the conditions are not optimal for approaches using camera data alone.

  14. Recent progress in tailoring trap-based positron beams

    SciTech Connect

    Natisin, M. R.; Hurst, N. C.; Danielson, J. R.; Surko, C. M.

    2013-03-19

    Recent progress is described on two approaches to specially tailor trap-based positron beams. Experiments and simulations are presented to understand the limits on the energy spread and pulse duration of positron beams extracted from a Penning-Malmberg (PM) trap after the particles have been buffer-gas cooled (or heated) in the temperature range 1000 K ≥ T ≥ 300 K. These simulations are also used to predict beam performance for cryogenically cooled positrons. Experiments and simulations are also presented to understand the properties of beams formed when plasmas are tailored in a PM trap in a 5 tesla magnetic field, then non-adiabatically extracted from the field using a specially designed high-permeability grid to create a new class of electrostatically guided beams.

  15. Camera calibration based on parallel lines

    NASA Astrophysics Data System (ADS)

    Li, Weimin; Zhang, Yuhai; Zhao, Yu

    2015-01-01

    Nowadays, computer vision is widely used in daily life, and obtaining reliable information from images requires camera calibration. Traditional camera calibration is often impractical because accurate coordinates of the reference control points are not available. In this article, we present a camera calibration algorithm that determines the intrinsic parameters together with the extrinsic parameters. The algorithm is based on parallel lines, which are common in everyday photographs, so both sets of parameters can be recovered from ordinary images. We use two pairs of parallel lines to compute the vanishing points; in particular, when the pairs are perpendicular, the two vanishing points are conjugate with respect to the image of the absolute conic (IAC), and several views (at least 5) suffice to determine the IAC. The intrinsic parameters then follow from a Cholesky factorization of the IAC matrix. Furthermore, the line connecting a vanishing point with the camera optical center is parallel to the original lines in the scene plane, from which the extrinsic parameters R and T can be recovered. Both the simulation and the experimental results meet our expectations.
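The vanishing-point step described above is conveniently done in homogeneous coordinates, where the line through two points and the intersection of two lines are both cross products; a small illustrative sketch (the point values are made up, not from the paper):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points given as (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(l1_pts, l2_pts):
    """Intersection of the images of two parallel scene lines."""
    v = np.cross(line_through(*l1_pts), line_through(*l2_pts))
    return v / v[2]  # normalize; assumes a finite vanishing point

# Two image lines, y = x and y = 4 - x, meet at (2, 2):
v = vanishing_point(((0, 0), (1, 1)), ((0, 4), (4, 0)))  # vanishing point (2, 2)
```

With vanishing points from several views in hand, the IAC can be fitted and its Cholesky factor inverted to yield the intrinsic matrix, as the abstract outlines.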

  16. The AOTF-based NO2 camera

    NASA Astrophysics Data System (ADS)

    Dekemper, Emmanuel; Vanhamel, Jurgen; Van Opstal, Bert; Fussen, Didier

    2016-12-01

    The abundance of NO2 in the boundary layer relates to air quality and pollution source monitoring. Observing the spatiotemporal distribution of NO2 above well-delimited (flue gas stacks, volcanoes, ships) or more extended sources (cities) allows for applications such as monitoring emission fluxes or studying the plume's dynamic chemistry and transport. So far, most attempts to map the NO2 field from the ground have been made with visible-light scanning grating spectrometers. While they benefit from a high retrieval accuracy, they only achieve a relatively low spatiotemporal resolution that hampers the detection of dynamic features. We present a new type of passive remote sensing instrument aiming at the measurement of the 2-D distribution of NO2 slant column densities (SCDs) with a high spatiotemporal resolution. The measurement principle has strong similarities with the popular filter-based SO2 camera, as it relies on spectral images taken at wavelengths where the molecule's absorption cross section differs. Contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. The NO2 camera's capabilities are demonstrated by imaging the NO2 abundance in the plume of a coal-fired power plant. During this experiment, the 2-D distribution of the NO2 SCD was retrieved with a temporal resolution of 3 min and a spatial sampling of 50 cm (over a 250 × 250 m^2 area). The detection limit was close to 5 × 10^16 molecules cm^-2, with a maximum detected SCD of 4 × 10^17 molecules cm^-2. Illustrating the added value of the NO2 camera measurements, the data reveal the dynamics of the NO-to-NO2 conversion in the early plume with an unprecedented resolution: from its release into the air, and for 100 m upwards, the observed NO2 plume concentration increased at a rate of 0.75-1.25 g s^-1. In joint campaigns with SO2 cameras, the NO2 camera could also help in removing the bias introduced by the
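The two-wavelength principle mentioned above reduces, per pixel, to Beer-Lambert differential absorption; a hedged sketch of that step (the cross-section value and array names are illustrative assumptions, not the instrument's calibration or retrieval chain):

```python
import numpy as np

DELTA_SIGMA = 2.0e-19  # assumed differential NO2 cross section, cm^2/molecule

def slant_column_density(img_on, img_off):
    """Per-pixel SCD (molecules/cm^2) from two co-registered images taken
    at an absorbing ('on') and a weakly absorbing ('off') wavelength."""
    tau = np.log(np.asarray(img_off, float) / np.asarray(img_on, float))
    return tau / DELTA_SIGMA

# A pixel attenuated by ~2% at the 'on' wavelength maps to ~1e17 cm^-2,
# i.e. between the detection limit and maximum SCD quoted in the abstract:
scd = slant_column_density([np.exp(-0.02)], [1.0])
```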

  17. Conceptual design of an intense positron source based on an LIA

    NASA Astrophysics Data System (ADS)

    Long, Ji-Dong; Yang, Zhen; Dong, Pan; Shi, Jin-Shui

    2012-04-01

    Accelerator-based positron sources are widely used due to their high intensity; most of these accelerators are RF accelerators. An LIA (linear induction accelerator) is a kind of high-current pulsed accelerator used for radiography. A conceptual design of an intense pulsed positron source based on an LIA is presented in this paper. One advantage of an LIA is that its pulsed power is higher than that of conventional accelerators, which means a larger number of primary electrons for positron generation per pulse. Another advantage is that an LIA is well suited to decelerating the positron bunch generated by the bremsstrahlung pair process, owing to its ability to adjustably shape the voltage pulse. By implementing LIA cavities to decelerate the positron bunch before it is moderated, the positron yield could be greatly increased. These features may make the LIA-based positron source a high-intensity pulsed positron source.

  18. Interface circuit design and control system programming for an EMCCD camera based on Camera Link

    NASA Astrophysics Data System (ADS)

    Li, Bin-hua; Rao, Xiao-hui; Yan, Jia; Li, Da-lun; Zhang, Yi-gong

    2013-08-01

    This paper presents a solution for self-developed EMCCD cameras based on Camera Link. A new interface circuit, used to connect an embedded Nios II processor to the serial communication port of Camera Link in the camera, is designed, and a simplified structure diagram is shown. To implement the functions of the circuit, on the hardware side it is necessary to add a universal serial communication component to the Nios II when building the processor and its peripheral components in the Altera SOPC development environment. On the software side, we use C to write a UART interrupt response routine for receiving and transmitting instructions and data, together with a camera control program, in the slave computer (Nios II), and employ the Sapera LT development library and VC++ to write a serial communication routine and a camera control and image acquisition program in the host computer. The developed camera can be controlled by the host PC, the camera status can be returned to the PC, and a large amount of image data can be uploaded at high speed through a Camera Link cable. A flow chart of the serial communication and camera control program in the Nios II is given, and two operating interfaces on the PC are shown. Some design and application techniques are described in detail. The test results indicate that the interface circuit and the control programs that we have developed are feasible and reliable.

  19. Calibration method for misaligned catadioptric camera based on planar conic

    NASA Astrophysics Data System (ADS)

    Zhu, Qidan; Xu, Congying; Cai, Chengtao

    2013-03-01

    Building on conventional camera calibration methods, a flexible approach for a misaligned catadioptric camera based on planar conics is presented. The catadioptric camera is composed of a perspective camera and a hyperboloid mirror. The projection model of the misaligned catadioptric camera is built and the projection functions are derived. With the camera parameters known, this method only requires the camera to observe a model plane containing three concentric conics on the back of the mirror of revolution; the mirror position and attitude relative to the camera can then be calculated linearly. The center of the concentric conics lies on the mirror axis. This technique is easy to use, and all computations are matrix operations in linear algebra; the closed-form solution is obtained without nonlinear optimization. Experiments are conducted on real images to evaluate the correctness and feasibility of the method.

  20. A new scheme to accumulate positrons in a Penning-Malmberg trap with a linac-based positron pulsed source

    NASA Astrophysics Data System (ADS)

    Dupré, P.

    2013-03-01

    The Gravitational Behaviour of Antimatter at Rest experiment (GBAR) is designed to perform a direct measurement of the weak equivalence principle on antimatter by measuring the acceleration of anti-hydrogen atoms in the gravitational field of the Earth. The experimental scheme requires a high-density positronium (Ps) cloud as a target for antiprotons, which are provided by the Antiproton Decelerator (AD) - Extra Low Energy Antiproton Ring (ELENA) facility at CERN. The Ps target will be produced by a pulse of a few 10^10 positrons injected onto a positron-positronium converter. For this purpose, a slow positron source using an electron linac has been constructed at Saclay. The present flux is comparable with that of 22Na-based sources using a solid neon moderator. A new positron accumulation scheme with a Penning-Malmberg trap has been proposed, taking advantage of the pulsed time structure of the beam. In the trap, the positrons are cooled by interaction with a dense electron plasma. The overall trapping efficiency has been estimated to be ~70% by numerical simulations.

  1. Exploring positron characteristics utilizing two new positron-electron correlation schemes based on multiple electronic structure calculation methods

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Shuai; Gu, Bing-Chuan; Han, Xiao-Xi; Liu, Jian-Dang; Ye, Bang-Jiao

    2015-10-01

    We make a gradient correction to a new local density approximation form of the positron-electron correlation. Positron lifetimes and affinities are then probed by using these two approximation forms with three electronic-structure calculation methods: the full-potential linearized augmented plane wave (FLAPW) plus local orbitals approach, the atomic superposition (ATSUP) approach, and the projector augmented wave (PAW) approach. The differences between lifetimes calculated with the FLAPW and ATSUP methods are clearly interpreted in terms of positron and electron transfers. We further find that a well-implemented PAW method can give near-perfect agreement with the FLAPW method on both positron lifetimes and affinities, and that the competitiveness of the ATSUP method against the FLAPW/PAW methods is reduced in the best calculations. Comparison with experimental data shows that the newly introduced gradient-corrected correlation form is competitive for positron lifetime and affinity calculations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11175171 and 11105139).

  2. Design analysis and performance evaluation of a two-dimensional camera for accelerated positron-emitter beam injection by computer simulation

    SciTech Connect

    Llacer, J.; Chatterjee, A.; Batho, E.K.; Poskanzer, J.A.

    1982-05-01

    The characteristics and design of a high-accuracy, high-sensitivity two-dimensional camera for measuring the end-point of the trajectory of accelerated heavy-ion beams of positron-emitting isotopes are described. Computer simulation methods were used to ensure that the design would meet the demanding criteria: the ability to locate the centroid of a point source in the X-Y plane with errors smaller than 1 mm, with an activity of 100 nCi, in a counting time of 5 s or less. A computer program that can be developed into a general-purpose analysis tool for a large number of positron-emitter camera configurations is described in its essential parts. The validation of basic simulation results against simple measurements is reported, and the use of the program to generate simulated images that include important second-order effects due to detector material, geometry, septa, etc., is demonstrated. Comparison between simulated images and initial results with the completed instrument shows that the desired specifications have been met.
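The sub-millimetre centroid criterion is easy to sanity-check with a toy Monte Carlo (this is our illustration, not the paper's simulator; the source location, blur width, and count number are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate the centroid of a point source from detected event positions
# blurred by an assumed 4 mm Gaussian detector response.
true_xy = np.array([12.0, -3.0])                          # mm, assumed
events = true_xy + rng.normal(0.0, 4.0, size=(5000, 2))   # detected hits

centroid = events.mean(axis=0)
stderr = events.std(axis=0, ddof=1) / np.sqrt(len(events))
# With ~5000 counts the statistical centroid error is ~4/sqrt(5000),
# i.e. roughly 0.06 mm, comfortably inside a 1 mm specification.
```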

  3. Design Study of Linear Accelerator-Based Positron Re-Emission Microscopy

    NASA Astrophysics Data System (ADS)

    Ogawa, Hiroshi; Kinomura, Atsushi; Oshima, Nagayasu; Suzuki, Ryoichi; O'Rourke, Brian E.

    In order to shorten the acquisition time of positron re-emission microscopy (PRM), a linear accelerator (LINAC)-based PRM system has been studied. The beam focusing system was designed to obtain a high-brightness positron beam on the PRM sample. The beam size at the sample was calculated to be 0.8 mm (FWHM), and the positron intensity within the field of view of the PRM was more than one order of magnitude higher than in previous studies.

  4. Operator-based homogeneous coordinates: application in camera document scanning

    NASA Astrophysics Data System (ADS)

    Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.

    2017-07-01

    An operator-based approach to the study of homogeneous coordinates and projective geometry is proposed. First, some basic geometrical concepts and properties of the operators are investigated in the one- and two-dimensional cases. Then, the pinhole camera model is derived, and a simple method for homography estimation and camera calibration is explained. The usefulness of the analyzed theoretical framework is exemplified by addressing the perspective correction problem in a camera document scanning application. Several experimental results are provided for illustrative purposes. The proposed approach is expected to provide practical insights for inexperienced students of camera calibration, computer vision, and optical metrology, among other fields.
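A homography of the kind used for such perspective correction can be estimated with the standard direct linear transform; a sketch under assumed data (the corner coordinates below are hypothetical, and this is the textbook DLT rather than the paper's operator formulation):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (4+ point pairs)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)       # null-space vector = homography entries
    return H / H[2, 2]

# Map the skewed document corners seen in a photo onto an upright
# A4-proportioned rectangle (millimetres); applying H rectifies the page.
corners = [(10, 20), (200, 35), (190, 300), (5, 280)]   # hypothetical
target = [(0, 0), (210, 0), (210, 297), (0, 297)]
H = homography_dlt(corners, target)
```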

  5. Video-Based Point Cloud Generation Using Multiple Action Cameras

    NASA Astrophysics Data System (ADS)

    Teo, T.

    2015-05-01

    Due to the development of action cameras, the use of video technology for collecting geo-spatial data has become an important trend. The objective of this study is to compare the image mode and video mode of multiple action cameras for 3D point cloud generation. Frame images are acquired from discrete camera stations, while videos are taken along continuous trajectories. The proposed method includes five major parts: (1) camera calibration, (2) video conversion and alignment, (3) orientation modelling, (4) dense matching, and (5) evaluation. As action cameras usually have a large FOV in wide viewing mode, camera calibration plays an important role in correcting the lens distortion before image matching. Once the cameras had been calibrated, they were used to take video in an indoor environment, and the videos were converted into multiple frame images based on the frame rate. To overcome the time synchronization issue between videos from different viewpoints, an additional timer app was used to determine the time shift between cameras for alignment. A structure from motion (SfM) technique was utilized to obtain the image orientations, and the semi-global matching (SGM) algorithm was adopted to obtain dense 3D point clouds. The preliminary results indicate that the 3D points from 4K video are similar to those from 12 MP images, but the data acquisition performance of 4K video is more efficient than that of 12 MP digital images.
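The time-alignment step described above reduces to mapping a timestamp through the measured clock offset and the frame rate; a minimal sketch (the function and numbers are our illustration, not the paper's code):

```python
def matching_frame(t_a, shift_ab, fps_b):
    """Frame index in video B showing the same instant as time t_a (s)
    in video A, given the clock offset shift_ab = t_b - t_a (s)."""
    return round((t_a + shift_ab) * fps_b)

# If camera B's clock runs 1.2 s behind camera A's and B records at 30 fps,
# the event at t = 10 s in video A appears near frame 264 of video B:
print(matching_frame(10.0, -1.2, 30.0))  # -> 264
```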

  6. Theodolite-camera videometrics system based on total station

    NASA Astrophysics Data System (ADS)

    Zhu, Zhao-kun; Yuan, Yun; Zhang, Xiao-hu

    2011-08-01

    A novel measuring system, the Theodolite-camera Videometrics System (TVS) based on a total station, is introduced in this paper, together with the concept of the theodolite-camera, the key component of TVS: it generally consists of a non-metric camera and a rotation platform, and can rotate horizontally and vertically. TVS based on a total station is free of field control points, and the fields of view of its theodolite-cameras are not fixed, so TVS is qualified for targets with a wide moving range or large structures. The theodolite-camera model is analyzed and presented in detail. The adopted calibration strategy is demonstrated to be accurate and feasible on both simulated and real data, and TVS is shown to be a valid, reliable, and precise measuring system, living up to expectations.

  7. New light field camera based on physical based rendering tracing

    NASA Astrophysics Data System (ADS)

    Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung

    2014-03-01

    Even though light field technology was first invented more than 50 years ago, it did not gain popularity due to the limitations imposed by the computation technology of the time. With the rapid advancement of computer technology over the last decade, those limitations have been lifted and light field technology has quickly returned to the spotlight of the research stage. In this paper, PBRT (Physical Based Rendering Tracing) was introduced to overcome the limitations of using a traditional optical simulation approach to study light field camera technology. More specifically, the traditional optical simulation approach can only present light energy distributions and typically lacks the capability to present pictures of realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation in creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes; in other words, our approach provides a way to link virtual scenes with real measurement results. Several images developed with the above-mentioned approaches are analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It is shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. Detailed operational constraints, performance metrics, required computation resources, etc., associated with this newly developed light field camera technique are presented in detail.

  8. Spectral Camera based on Ghost Imaging via Sparsity Constraints

    PubMed Central

    Liu, Zhentao; Tan, Shiyu; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2016-01-01

    The image information acquisition ability of a conventional camera is usually much lower than the Shannon limit, since it does not make use of the correlation between pixels of the image data. Applying a random phase modulator to code the spectral images and combining this with compressive sensing (CS) theory, a spectral camera based on true thermal light ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. The GISC spectral camera can acquire information at a rate significantly below the Nyquist rate, and the resolution of the cells in the three-dimensional (3D) spectral image data-cube can be achieved with a two-dimensional (2D) detector in a single exposure. For the first time, the GISC spectral camera opens the way to approaching the Shannon limit determined by information theory in optical imaging instruments. PMID:27180619

  9. A Novel Camera Calibration Method Based on Polar Coordinate

    PubMed Central

    Gai, Shaoyan; Da, Feipeng; Fang, Xu

    2016-01-01

    A novel calibration method based on polar coordinates is proposed. The world coordinates are expressed in the form of polar coordinates, which are converted to rectangular world coordinates during the calibration process. First, the calibration points are obtained in polar coordinates. By transformation between polar and rectangular coordinates, the points are converted to rectangular form and matched with the corresponding image coordinates. Finally, the parameters are obtained by objective function optimization. With the proposed method, the relationships between objects and cameras are easily expressed in polar coordinates, which makes it suitable for multi-camera calibration: cameras can be calibrated with fewer points, and the calibration images can be positioned according to the locations of the cameras. The experimental results demonstrate that the proposed method is efficient, and that cameras are calibrated conveniently and with high accuracy. PMID:27798651
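The polar-to-rectangular transformation at the heart of the pipeline is one line per axis; a minimal sketch (function name is ours):

```python
import math

def polar_to_rect(r, theta):
    """Convert a calibration point (r, theta) to rectangular (x, y)."""
    return (r * math.cos(theta), r * math.sin(theta))

# A point 2 units from the origin at 90 degrees lies on the y-axis:
print(polar_to_rect(2.0, math.pi / 2))  # -> approximately (0.0, 2.0)
```

After this conversion the points can be matched with image coordinates and fed to the optimization exactly as with a conventional rectangular calibration target.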

  10. Helical Undulator Based Production of Polarized Positrons and Status of the E166 Experiment

    NASA Astrophysics Data System (ADS)

    Laihem, K.

    2005-08-01

    This paper describes the status of the E166 experiment, which is dedicated to testing the helical-undulator-based polarized positron source for the International Linear Collider. The physics motivation for having both electrons and positrons polarized in collisions is outlined, and a demonstration experiment for the undulator-based production of polarized positrons is summarized. The E166 experiment uses a 1-meter-long helical undulator in the 50 GeV Final Focus Test Beam at SLAC to provide MeV photons with circular polarization. These photons are then converted in a thin (0.5 radiation length, X0) target into positrons (and electrons) with about a 50% degree of longitudinal polarization. In this experiment, the polarization of both photons and positrons is measured simultaneously using photon transmission polarimetry.

  11. An Undulator Based Polarized Positron Source for CLIC

    SciTech Connect

    Liu, Wanming; Gai, Wei; Rinolfi, Louis; Sheppard, John; /SLAC

    2012-07-02

    A viable positron source scheme is proposed that uses circularly polarized gamma rays generated from the main 250 GeV electron beam. The beam passes through a helical superconducting undulator with a magnetic field of ~1 T and a period of 1.15 cm. The gamma rays produced in the undulator, in the energy range between ~3 MeV and 100 MeV, are directed onto a titanium target to produce polarized positrons. The positrons are then captured, accelerated, and transported to a Pre-Damping Ring (PDR). Detailed parameter studies of this scheme, including the positron yield and the undulator parameter dependence, are presented. Effects on the 250 GeV CLIC main beam, including emittance growth and energy loss from the beam passing through the undulator, are also discussed.

  12. Compact and robust hyperspectral camera based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Žídek, K.; Denk, O.; Hlubuček, J.; Václavík, J.

    2016-11-01

    The spectrum of light emitted or reflected by an object carries an immense amount of information about it; a simple piece of evidence is the importance of color sensing for human vision. Combining image acquisition with efficient measurement of the light spectrum for each detected pixel, referred to as hyperspectral imaging, is therefore an important imaging problem. We demonstrate the construction of a compact and robust hyperspectral camera for the visible and near-IR spectral region. The camera was designed largely from off-the-shelf optics, yet extensive optimization and the addition of three customized parts enabled a design featuring a low f-number (F/3.9) and fully concentric optics. We employ a novel compressed-sensing approach, namely coded aperture snapshot spectral imaging (CASSI). Compressed sensing makes it possible to computationally extract the encoded hyperspectral information from a single camera exposure. Owing to this technique the camera has no moving or scanning parts, yet records the full image and spectral information in a single snapshot. Moreover, unlike the commonly used compressed-sensing table-top apparatuses, the camera is a portable device able to work outside a lab. We demonstrate the spectro-temporal reconstruction of recorded scenes based on 90×90 random matrix encoding. Finally, we discuss the potential of compressed sensing in hyperspectral cameras.
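
    A minimal sketch of the CASSI measurement principle the abstract relies on: a coded aperture masks the scene, a dispersive element shears the spectral bands, and the detector integrates them into one 2D snapshot. The cube size, random mask, and one-pixel-per-band shear below are illustrative assumptions, not the camera's actual parameters.

```python
import numpy as np

# Toy CASSI forward model: code the scene with a random binary aperture,
# shift each spectral band by one pixel (dispersion), and sum all bands
# onto a single 2D detector frame.

rng = np.random.default_rng(0)
H, W, L = 90, 90, 8                               # spatial size, number of bands
cube = rng.random((H, W, L))                      # hyperspectral scene (unknown in practice)
mask = (rng.random((H, W)) > 0.5).astype(float)   # coded aperture

def cassi_forward(cube, mask):
    """Single-snapshot measurement: code, disperse (shift), and integrate."""
    H, W, L = cube.shape
    y = np.zeros((H, W + L - 1))
    for l in range(L):
        y[:, l:l + W] += mask * cube[:, :, l]     # band l sheared by l pixels
    return y

snapshot = cassi_forward(cube, mask)
print(snapshot.shape)    # one slightly widened 2D frame encodes the whole cube
```

    Reconstruction then inverts this underdetermined linear map using a sparsity prior, which is the computational step the abstract alludes to.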

  13. Study of Trade Wind Clouds Using Ground Based Stereo Cameras

    NASA Astrophysics Data System (ADS)

    Porter, J.

    2010-12-01

    We employ ground-based stereo cameras to derive the three-dimensional positions of trade wind cloud features, using both traditional and novel methods. The stereo cameras are calibrated for orientation using the sun as a geo-reference point at several times throughout the day. Spatial correlation is used to detect the same cloud feature in both camera images, and a system of simultaneous equations is solved to obtain the best cloud position for the given rays from the cameras to the cloud feature. Once the cloud positions are known in three-dimensional space, upper-level wind speed and direction can also be derived by tracking cloud positions in space and time. The vector winds can be obtained at many locations and heights in a cone-shaped region over the surface site. The measurement accuracy depends on the camera separation, with a trade-off between camera separation and cloud range. The system design and performance will be discussed along with field observations. This approach provides a new way to study clouds for climate change efforts, as well as an inexpensive way to measure upper-level wind fields in cloudy regions.
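
    The core triangulation step can be sketched as follows: given one ray from each calibrated camera toward the same cloud feature, the feature's 3D position is taken as the point closest to both rays in the least-squares sense (the midpoint method). The camera positions and cloud location below are invented for illustration.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Closest point to two rays, each given by origin o and direction d."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for ray parameters t1, t2 minimizing |(o1 + t1 d1) - (o2 + t2 d2)|^2
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    p1, p2 = o1 + t1 * d1, o2 + t2 * d2
    return (p1 + p2) / 2.0               # midpoint of the shortest segment

# Two cameras 1 km apart, both sighting a cloud feature at (400, 2000, 1500) m.
cloud = np.array([400.0, 2000.0, 1500.0])
o1, o2 = np.array([0.0, 0.0, 0.0]), np.array([1000.0, 0.0, 0.0])
p = triangulate_midpoint(o1, cloud - o1, o2, cloud - o2)
print(np.round(p, 6))                    # recovers the cloud position
```

    With noisy real rays the two rays are skew, and the midpoint of their shortest connecting segment is the standard compromise estimate.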

  14. Monte-Carlo-based studies of a polarized positron source for International Linear Collider (ILC)

    NASA Astrophysics Data System (ADS)

    Dollan, Ralph; Laihem, Karim; Schälicke, Andreas

    2006-04-01

    The full exploitation of the physics potential of an International Linear Collider (ILC) requires the development of a polarized positron beam. New concepts for polarized positron sources are based on the development of circularly polarized photon sources: the polarized photons create electron-positron pairs in a thin target and transfer their polarization state to the outgoing leptons. To achieve a high level of positron polarization, an understanding of the production mechanisms in the target is crucial. Therefore, a general framework for the simulation of polarized processes with GEANT4 is under development. In this contribution the current status of the project and its application to a study of the positron production process for the ILC are presented.

  15. Benchmarking the Optical Resolving Power of Uav Based Camera Systems

    NASA Astrophysics Data System (ADS)

    Meißner, H.; Cramer, M.; Piltz, B.

    2017-08-01

    UAV-based imaging and 3D object point generation is an established technology. Some UAV users aim at (very) high-accuracy applications, i.e., inspection or monitoring scenarios. To guarantee such a level of detail and accuracy, high-resolving imaging systems are mandatory. Furthermore, image quality considerably impacts photogrammetric processing, as the tie-point transfer, mandatory for forming the block geometry, fully relies on the radiometric quality of the images. Thus, empirical testing of radiometric camera performance is an important issue, in addition to the standard (geometric) calibration that is normally covered first. Within this paper the resolving power of ten different camera/lens installations has been investigated. The selected systems represent different camera classes, such as DSLRs, system cameras, larger-format cameras and proprietary systems. As the systems have been tested under well-controlled laboratory conditions and objective quality measures have been derived, individual performance can be compared directly, thus representing a first benchmark on the radiometric performance of UAV cameras. The results show that not only the selection of an appropriate lens and camera body has an impact; the image pre-processing, i.e., the use of a specific debayering method, also significantly influences the final resolving power.

  16. Global Calibration of Multiple Cameras Based on Sphere Targets

    PubMed Central

    Sun, Junhua; He, Huabin; Zeng, Debing

    2016-01-01

    Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and flexible, especially for onsite multi-camera systems without a common field of view. PMID:26761007

  17. Fuzzy-rule-based image reconstruction for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Mondal, Partha P.; Rajan, K.

    2005-09-01

    Positron emission tomography (PET) and single-photon emission computed tomography have revolutionized the field of medicine and biology. Penalized iterative algorithms based on maximum a posteriori (MAP) estimation eliminate noisy artifacts by utilizing available prior information in the reconstruction process but often result in a blurring effect. MAP-based algorithms fail to determine the density class in the reconstructed image and hence penalize the pixels irrespective of the density class. Reconstruction with better edge information is often difficult because prior knowledge is not taken into account. The recently introduced median-root-prior (MRP)-based algorithm preserves the edges, but a steplike streaking effect is observed in the reconstructed image, which is undesirable. A fuzzy approach is proposed for modeling the nature of interpixel interaction in order to build an artifact-free edge-preserving reconstruction. The proposed algorithm consists of two elementary steps: (1) edge detection, in which fuzzy-rule-based derivatives are used for the detection of edges in the nearest neighborhood window (which is equivalent to recognizing nearby density classes), and (2) fuzzy smoothing, in which penalization is performed only for those pixels for which no edge is detected in the nearest neighborhood. Both of these operations are carried out iteratively until the image converges. Analysis shows that the proposed fuzzy-rule-based reconstruction algorithm is capable of producing qualitatively better reconstructed images than those reconstructed by MAP and MRP algorithms. The reconstructed images are sharper, with small features being better resolved owing to the nature of the fuzzy potential function.
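
    The two-step iteration described above can be sketched with a much simpler stand-in for the fuzzy rules: flag a pixel as an edge when a nearest-neighbour difference exceeds a threshold, and smooth only the non-edge pixels. This toy version, with an arbitrary threshold and iteration count, illustrates the edge-preserving idea, not the paper's actual fuzzy potential function.

```python
import numpy as np

def edge_preserving_smooth(img, thresh=0.3, iters=5):
    """Iteratively smooth pixels whose 4-neighbourhood shows no edge."""
    img = img.astype(float).copy()
    for _ in range(iters):
        p = np.pad(img, 1, mode="edge")
        nbrs = np.stack([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
        edge = np.abs(nbrs - img).max(axis=0) > thresh   # step 1: edge detection
        img = np.where(edge, img, nbrs.mean(axis=0))     # step 2: smooth non-edges
    return img

# A noisy two-density phantom: the step edge survives, flat regions smooth out.
rng = np.random.default_rng(1)
step = np.where(np.arange(32)[None, :] < 16, 0.0, 1.0)
phantom = step + 0.05 * rng.standard_normal((32, 32))
out = edge_preserving_smooth(phantom)
print(out[:, :14].std() < phantom[:, :14].std())   # noise reduced away from the edge
```

    Because pixels straddling the step exceed the threshold, they are never averaged across, which is the behaviour that avoids the blurring of MAP priors.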

  18. FPGA-Based Pulse Parameter Discovery for Positron Emission Tomography.

    PubMed

    Haselman, Michael; Hauck, Scott; Lewellen, Thomas K; Miyaoka, Robert S

    2009-10-24

    Modern Field Programmable Gate Arrays (FPGAs) are capable of performing complex digital signal processing algorithms at clock rates well above 100 MHz. This, combined with FPGAs' low cost and ease of use, makes them an ideal technology for the data acquisition system of a positron emission tomography (PET) scanner. The University of Washington is producing a series of high-resolution, small-animal PET scanners that utilize FPGAs as the core of the front-end electronics. For these next-generation scanners, functions that are typically performed in dedicated circuits, or offline, are being migrated to the FPGA. This will not only simplify the electronics; the features of modern FPGAs can also be utilized to add significant signal processing power to produce higher-resolution images. In this paper we report how we utilize the reconfigurable property of an FPGA to calibrate itself and determine the pulse parameters necessary for some of the pulse processing steps. Specifically, we show how the FPGA can generate a reference pulse based on actual pulse data instead of a model. We also report how other properties of the photodetector pulse (baseline, pulse length, average pulse energy and event triggers) can be determined automatically by the FPGA.
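
    The reference-pulse idea, generating a template from actual pulse data rather than a model, can be sketched offline as baseline subtraction, area normalization, and averaging. The synthetic pulse shape and all parameters below are our own illustrative assumptions, not the scanner's real waveforms.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(64, dtype=float)

def synth_pulse():
    """Synthetic detector pulse: baseline + fast-rise/slow-decay shape + noise."""
    amp, base = rng.uniform(0.5, 2.0), rng.uniform(-0.1, 0.1)
    shape = np.where(t >= 10, np.exp(-(t - 10) / 12.0) - np.exp(-(t - 10) / 2.0), 0.0)
    return base + amp * shape + 0.01 * rng.standard_normal(t.size)

def reference_pulse(pulses, n_baseline=8):
    """Average of baseline-subtracted, area-normalized pulses."""
    ref = np.zeros(t.size)
    for p in pulses:
        p = p - p[:n_baseline].mean()      # baseline from pre-trigger samples
        ref += p / p.sum()                 # normalize out the pulse energy
    return ref / len(pulses)

ref = reference_pulse([synth_pulse() for _ in range(200)])
print(abs(ref.sum() - 1.0) < 1e-6)         # the template is area-normalized
```

    On the FPGA the same accumulation is done in fixed-point on live events, which is the self-calibration the abstract describes.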

  19. A cooperative control algorithm for camera based observational systems.

    SciTech Connect

    Young, Joseph G.

    2012-01-01

    Over the last several years, there has been considerable growth in camera-based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras: quite often, economic or physical restrictions prevent us from adding cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. To accomplish this goal, we present a new cooperative control algorithm for a camera-based observational system. Specifically, we present a receding-horizon controller in which the underlying optimal control problem is modeled as a mixed integer linear program. The benefit of this design is that we can coordinate the actions of the cameras while simultaneously respecting the kinematics of each. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but also use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.
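
    The Kalman-filter component that supplies the prediction and the uncertainty estimates can be sketched as a standard constant-velocity filter. The noise covariances below are invented for illustration, and the MILP controller itself is not shown.

```python
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]], float)     # constant-velocity dynamics
H = np.array([[1.0, 0.0]])                 # only position is observed
Q = 0.01 * np.eye(2)                       # process noise
R = np.array([[0.25]])                     # measurement noise

def kalman_step(x, P, z=None):
    """One predict step, plus an update step when a measurement z is available."""
    x, P = F @ x, F @ P @ F.T + Q          # predict (grows P for unseen targets)
    if z is not None:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + (K @ (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 0.0]), np.eye(2)
for k in range(1, 11):                     # track an object moving at unit velocity
    x, P = kalman_step(x, P, z=np.array([float(k)]))
print(round(x[1], 2))                      # estimated velocity approaches 1.0
```

    Calling `kalman_step` with `z=None` models an unobserved target: its covariance P grows each step, which is exactly the signal the controller can use to schedule a revisit.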

  20. Observation of Polarized Positrons from an Undulator-Based Source

    NASA Astrophysics Data System (ADS)

    Alexander, G.; Barley, J.; Batygin, Y.; Berridge, S.; Bharadwaj, V.; Bower, G.; Bugg, W.; Decker, F.-J.; Dollan, R.; Efremenko, Y.; Gharibyan, V.; Hast, C.; Iverson, R.; Kolanoski, H.; Kovermann, J.; Laihem, K.; Lohse, T.; McDonald, K. T.; Mikhailichenko, A. A.; Moortgat-Pick, G. A.; Pahl, P.; Pitthan, R.; Pöschl, R.; Reinherz-Aronis, E.; Riemann, S.; Schälicke, A.; Schüler, K. P.; Schweizer, T.; Scott, D.; Sheppard, J. C.; Stahl, A.; Szalata, Z. M.; Walz, D.; Weidemann, A. W.

    2008-05-01

    An experiment (E166) at the Stanford Linear Accelerator Center has demonstrated a scheme in which a multi-GeV electron beam passed through a helical undulator to generate multi-MeV, circularly polarized photons which were then converted in a thin target to produce positrons (and electrons) with longitudinal polarization above 80% at 6 MeV. The results are in agreement with Geant4 simulations that include the dominant polarization-dependent interactions of electrons, positrons, and photons in matter.

  1. Observation of polarized positrons from an undulator-based source.

    PubMed

    Alexander, G; Barley, J; Batygin, Y; Berridge, S; Bharadwaj, V; Bower, G; Bugg, W; Decker, F-J; Dollan, R; Efremenko, Y; Gharibyan, V; Hast, C; Iverson, R; Kolanoski, H; Kovermann, J; Laihem, K; Lohse, T; McDonald, K T; Mikhailichenko, A A; Moortgat-Pick, G A; Pahl, P; Pitthan, R; Pöschl, R; Reinherz-Aronis, E; Riemann, S; Schälicke, A; Schüler, K P; Schweizer, T; Scott, D; Sheppard, J C; Stahl, A; Szalata, Z M; Walz, D; Weidemann, A W

    2008-05-30

    An experiment (E166) at the Stanford Linear Accelerator Center has demonstrated a scheme in which a multi-GeV electron beam passed through a helical undulator to generate multi-MeV, circularly polarized photons which were then converted in a thin target to produce positrons (and electrons) with longitudinal polarization above 80% at 6 MeV. The results are in agreement with GEANT4 simulations that include the dominant polarization-dependent interactions of electrons, positrons, and photons in matter.

  2. Observation of Polarized Positrons from an Undulator-Based Source

    SciTech Connect

    Alexander, G; Barley, J.; Batygin, Y.; Berridge, S.; Bharadwaj, V.; Bower, G.; Bugg, W.; Decker, F.-J.; Dollan, R.; Efremenko, Y.; Gharibyan, V.; Hast, C.; Iverson, R.; Kolanoski, H.; Kovermann, J.; Laihem, K.; Lohse, T.; McDonald, K.T.; Mikhailichenko, A.A.; Moortgat-Pick, G.A.; Pahl, P.; /Tel Aviv U. /Cornell U., Phys. Dept. /SLAC /Tennessee U. /Humboldt U., Berlin /DESY /Yerevan Phys. Inst. /Aachen, Tech. Hochsch. /DESY, Zeuthen /Princeton U. /Durham U. /Daresbury

    2008-03-06

    An experiment (E166) at the Stanford Linear Accelerator Center (SLAC) has demonstrated a scheme in which a multi-GeV electron beam passed through a helical undulator to generate multi-MeV, circularly polarized photons which were then converted in a thin target to produce positrons (and electrons) with longitudinal polarization above 80% at 6 MeV. The results are in agreement with Geant4 simulations that include the dominant polarization-dependent interactions of electrons, positrons and photons in matter.

  3. Performance of the (n,{gamma})-Based Positron Beam Facility NEPOMUC

    SciTech Connect

    Schreckenbach, K.; Hugenschmidt, C.; Piochacz, C.; Stadlbauer, M.; Loewe, B.; Maier, J.; Pikart, P.

    2009-01-28

    The in-pile positron source NEPOMUC at the neutron source Heinz Maier-Leibnitz (FRM II) provides at the experimental site an intense beam of monoenergetic positrons with selectable energy between 15 eV and 3 keV. The principle of the source is based on neutron-capture gamma rays produced by cadmium in a beam-tube tip close to the reactor core. Gamma-ray absorption in platinum produces positrons, which are moderated and formed into the beam. An unprecedented beam intensity of 9×10⁸ e⁺/s is achieved (at 1 keV). The performance and applications of the facility are presented.

  4. An interactive, computer-based atlas of neurologic positron emission tomographic studies for use in teaching.

    PubMed

    Berlangieri, S U; Schifter, T; Hoffman, J M; Hawk, T C; Hamblen, S M; Coleman, R E

    1993-06-01

    Recent developments in personal computer hardware and software allow the manipulation of radiologic images. We developed an interactive, computer-based atlas of clinical neurologic positron emission tomographic studies for use as an educational resource. A personal computer and multimedia software were used to assemble the clinical case studies. For each clinical case, the user had available the clinical history, positron emission tomographic and correlative anatomic images, study interpretation, discussion, and references. The clinical cases were selected for their educational value, either as a representative example of an abnormality or for their ability to illustrate a common pitfall in positron emission tomographic imaging of the brain.

  5. Dual camera-based split shutter for high-rate and long-distance optical camera communications

    NASA Astrophysics Data System (ADS)

    Cahyadi, Willy Anugrah; Kim, Yong-hyeon; Chung, Yeon-ho

    2016-11-01

    Smartphones are highly versatile communication devices. Cameras fitted in smartphones are primarily used to capture photographs and for related multimedia functions, but have recently been studied for short-range wireless communications. Camera-based optical wireless communication, termed optical camera communication (OCC), is a form of visible light communication in which a camera is employed as the receiver, while either a light-emitting diode or a liquid crystal display (LCD) is used as the transmitter. This paper proposes a downlink OCC scheme in which a dual camera (receiver) and an LCD (transmitter) are employed, using a unique capture method called the split shutter. The split shutter combined with a dual camera offers a significantly increased capture rate as well as increased coverage distance. Experiments were conducted to verify the proposed dual camera-based split shutter OCC scheme. Compared with a standard camera shutter, the proposed method effectively doubles the transmission rate under identical experimental conditions, achieving a maximum of 11,520 bits per second and a distance of up to 2 m at a camera resolution of 1280×720 pixels. Compared with conventional rolling-shutter OCC, the proposed method offers four times the coverage distance at an identical resolution. It is also demonstrated that the data rate can be further increased when a larger number of transmitters is employed.

  6. Design of microcontroller based system for automation of streak camera.

    PubMed

    Joshi, M J; Upadhyay, J; Deshpande, P P; Sharma, M L; Navathe, C P

    2010-08-01

    A microcontroller-based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8-bit MCS-family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for the various electrodes of the tube are generated using dc-to-dc converters. A high-voltage ramp signal is generated by a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LabVIEW-based graphical user interface enables the user to program the camera settings and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  7. Design of microcontroller based system for automation of streak camera

    SciTech Connect

    Joshi, M. J.; Upadhyay, J.; Deshpande, P. P.; Sharma, M. L.; Navathe, C. P.

    2010-08-15

    A microcontroller-based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8-bit MCS-family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for the various electrodes of the tube are generated using dc-to-dc converters. A high-voltage ramp signal is generated by a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LabVIEW-based graphical user interface enables the user to program the camera settings and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  8. Extrinsic Calibration of Camera Networks Based on Pedestrians

    PubMed Central

    Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life. PMID:27171080
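
    The orthogonal Procrustes step at the heart of the pairwise calibration can be sketched with the Kabsch algorithm: given the same 3D points (e.g. head and feet positions) expressed in two camera coordinate systems, recover the rotation and translation between them. The point sets below are synthetic, and the RANSAC outlier-rejection wrapper the paper uses is omitted.

```python
import numpy as np

def procrustes_rigid(A, B):
    """Find rotation R and translation t with B ≈ R @ A + t; A, B shape (3, N)."""
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((B - cb) @ (A - ca).T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # guard against reflections
    R = U @ D @ Vt
    return R, (cb - R @ ca).ravel()

rng = np.random.default_rng(3)
A = rng.random((3, 10))                              # points in camera 1's frame
angle = 0.5
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, -2.0, 0.5])
B = R_true @ A + t_true[:, None]                     # same points in camera 2's frame
R, t = procrustes_rigid(A, B)
print(np.allclose(R, R_true), np.allclose(t, t_true))   # → True True
```

    In the paper's pipeline this is run on RANSAC-selected inliers per camera pair, and the resulting transforms are then refined by minimizing the reprojection error.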

  9. Extrinsic Calibration of Camera Networks Based on Pedestrians.

    PubMed

    Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2016-05-09

    In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life.

  10. A Robust Camera-Based Interface for Mobile Entertainment

    PubMed Central

    Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier

    2016-01-01

    Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies to consider the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions; therefore, the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user’s head by processing the frames provided by the mobile device’s front camera, and the head position is then used to interact with mobile apps. First, we evaluate the interface as a pointing device to study its accuracy and its configurable factors, such as the gain or the device’s orientation, as well as the optimal target size for the interface. Second, we present an in-the-wild study to evaluate usage and user perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts, and we register usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people. PMID:26907288

  11. A Robust Camera-Based Interface for Mobile Entertainment.

    PubMed

    Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier

    2016-02-19

    Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies to consider the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions; therefore, the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user's head by processing the frames provided by the mobile device's front camera, and the head position is then used to interact with mobile apps. First, we evaluate the interface as a pointing device to study its accuracy and its configurable factors, such as the gain or the device's orientation, as well as the optimal target size for the interface. Second, we present an in-the-wild study to evaluate usage and user perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts, and we register usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people.

  12. Analysis of unstructured video based on camera motion

    NASA Astrophysics Data System (ADS)

    Abdollahian, Golnaz; Delp, Edward J.

    2007-01-01

    Although considerable work has been done on the management of "structured" video, such as movies, sports, and television programs with known scene structures, "unstructured" video analysis is still a challenging problem due to its unrestricted nature. The purpose of this paper is to address issues in the analysis of unstructured video, in particular video shot by a typical unprofessional user (i.e., home video). We describe how camera motion information can be used for unstructured video analysis. A new concept, "camera viewing direction," is introduced as the building block of home video analysis. Motion displacement vectors are employed to temporally segment the video based on this concept. We then find the correspondence between the camera behavior and the subjective importance of the information in each segment, and describe how different patterns in the camera motion can indicate levels of interest in a particular object or scene. By extracting these patterns, the most representative frames (keyframes) for the scenes are determined and aggregated to summarize the video sequence.
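
    The temporal-segmentation idea can be sketched in miniature: summarize each frame's motion displacement vectors by their mean, classify the dominant camera motion, and cut wherever the classification changes. The labels and thresholds below are illustrative stand-ins for the paper's camera-viewing-direction analysis.

```python
def classify(mean_dx, still_thresh=0.2):
    """Label a frame's dominant horizontal camera motion."""
    if abs(mean_dx) < still_thresh:
        return "still"
    return "pan-right" if mean_dx > 0 else "pan-left"

def segment(per_frame_dx):
    """Cut the sequence wherever the motion label changes."""
    labels = [classify(dx) for dx in per_frame_dx]
    segments, start = [], 0
    for i in range(1, len(labels)):
        if labels[i] != labels[i - 1]:
            segments.append((start, i - 1, labels[start]))
            start = i
    segments.append((start, len(labels) - 1, labels[start]))
    return segments

# Synthetic mean x-displacements: pan right, hold still, pan left.
dx = [1.1, 0.9, 1.0, 0.05, -0.1, 0.0, -1.2, -0.8]
print(segment(dx))
# → [(0, 2, 'pan-right'), (3, 5, 'still'), (6, 7, 'pan-left')]
```

    A keyframe would then be picked per segment, e.g. the frame where the camera dwells longest on one viewing direction.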

  13. Activity-based costing evaluation of a [(18)F]-fludeoxyglucose positron emission tomography study.

    PubMed

    Krug, Bruno; Van Zanten, Annie; Pirson, Anne-Sophie; Crott, Ralph; Borght, Thierry Vander

    2009-10-01

    The aim of this study is to use the activity-based costing approach to gain better insight into the actual cost structure of a positron emission tomography procedure (FDG-PET) by defining its constituent components and by simulating the impact of possible resource or practice changes. The cost data were obtained from the hospital administration, personnel and vendor interviews, as well as structured questionnaires. A process map separates the procedure into 16 patient- and non-patient-related activities, to which the detailed cost data are related. One-way sensitivity analyses show to what degree of uncertainty the different parameters affect the individual cost, and evaluate the impact of possible resource or practice changes such as the acquisition of a hybrid PET/CT device, the patient throughput, or the sales price of a 370 MBq (18)F-FDG patient dose. The PET centre spends 73% of its time on clinical activities, and the resting time after injection of the tracer (42%) is the single largest departmental cost element. The tracer cost and the operational time have the greatest influence on the cost per procedure. The analysis shows a total cost per FDG-PET ranging from 859 Euro for a BGO PET camera to 1142 Euro for a 16-slice PET/CT system, with the resource costs distributed in decreasing order: materials (44%), equipment (24%), wage (16%), space (6%) and hospital overhead (10%). The cost of FDG-PET is mainly influenced by the cost of the radiopharmaceutical; therefore, the latter, rather than the operational time, should be reduced in order to improve its cost-effectiveness.
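
    The reported cost breakdown can be reproduced with simple arithmetic by applying the resource shares from the abstract to the 859 Euro total for the BGO PET camera. The per-resource Euro amounts printed here are our own derived figures, not numbers taken from the paper.

```python
# Activity-based costing aggregation: resource shares (from the abstract)
# applied to the reported per-procedure total for the BGO PET camera.

shares = {
    "materials": 0.44,        # dominated by the 18F-FDG radiopharmaceutical
    "equipment": 0.24,
    "wage":      0.16,
    "space":     0.06,
    "overhead":  0.10,
}
total_bgo = 859.0             # Euro per FDG-PET procedure, BGO PET camera

assert abs(sum(shares.values()) - 1.0) < 1e-9   # shares must account for the whole cost
for resource, share in shares.items():
    print(f"{resource:9s} {share * total_bgo:7.2f} Euro")
```

    The materials line, dominated by the tracer, is the lever the authors single out for improving cost-effectiveness.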

  14. Formation of buffer-gas-trap based positron beams

    SciTech Connect

    Natisin, M. R.; Danielson, J. R.; Surko, C. M.

    2015-03-15

    Presented here are experimental measurements, analytic expressions, and simulation results for pulsed, magnetically guided positron beams formed using a Penning-Malmberg style buffer gas trap. In the relevant limit, particle motion can be separated into motion along the magnetic field and gyro-motion in the plane perpendicular to the field. Analytic expressions are developed which describe the evolution of the beam energy distributions, both parallel and perpendicular to the magnetic field, as the beam propagates through regions of varying magnetic field. Simulations of the beam formation process are presented, with the parameters chosen to accurately replicate experimental conditions. The initial conditions and ejection parameters are varied systematically in both experiment and simulation, allowing the relevant processes involved in beam formation to be explored. These studies provide new insights into the underlying physics, including significant adiabatic cooling, due to the time-dependent beam-formation potential. Methods to improve the beam energy and temporal resolution are discussed.
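
    The evolution of the parallel and perpendicular beam energies through a region of varying magnetic field can be sketched from two conservation laws: the magnetic moment E⊥/B is an adiabatic invariant, and the magnetic field does no work, so the total kinetic energy is fixed. The example numbers below are illustrative, not measured beam parameters from the paper.

```python
def transport(e_par, e_perp, b_in, b_out):
    """Energies [eV] after slow (adiabatic) transport from field b_in to b_out [T]."""
    e_perp_out = e_perp * (b_out / b_in)          # mu = E_perp / B conserved
    e_par_out = e_par + (e_perp - e_perp_out)     # total kinetic energy conserved
    return e_par_out, e_perp_out

# A positron with 1 eV parallel and 25 meV perpendicular energy leaving a
# 0.1 T trap region into a 1 mT guiding field: the perpendicular energy
# spread drops a hundredfold, reappearing as parallel energy.
e_par, e_perp = transport(1.0, 0.025, 0.1, 0.001)
print(e_par, e_perp)
```

    The same bookkeeping, applied in reverse, shows why extracting a beam into a stronger field heats the perpendicular distribution.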

  15. EAST FACE OF REACTOR BASE. COMING TOWARD CAMERA IS EXCAVATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    EAST FACE OF REACTOR BASE. COMING TOWARD CAMERA IS EXCAVATION FOR MTR CANAL. CAISSONS FLANK EACH SIDE. COUNTERFORT (SUPPORT PERPENDICULAR TO WHAT WILL BE THE LONG WALL OF THE CANAL) RESTS ATOP LEFT CAISSON. IN LOWER PART OF VIEW, DRILLERS PREPARE TRENCHES FOR SUPPORT BEAMS THAT WILL LIE BENEATH CANAL FLOOR. INL NEGATIVE NO. 739. Unknown Photographer, 10/6/1950 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  16. Development of mini linac-based positron source and an efficient positronium convertor for positively charged antihydrogen production

    NASA Astrophysics Data System (ADS)

    Muranaka, T.; Debu, P.; Dupré, P.; Liszkay, L.; Mansoulie, B.; Pérez, P.; Rey, J. M.; Ruiz, N.; Sacquin, Y.; Crivelli, P.; Gendotti, U.; Rubbia, A.

    2010-04-01

    In November 2008 we installed at Saclay a facility for an intense positron source. It is based on a compact 5.5 MeV electron linac connected to a reaction chamber containing a tungsten target to produce positrons via pair production. The expected production rate for fast positrons is 5 × 10^11 per second. The study of the moderation of fast positrons and the construction of a slow-positron trap are underway. In parallel, we have investigated an efficient positron-positronium convertor using porous silica materials. These studies are part of a project to produce positively charged antihydrogen ions, aiming to demonstrate the feasibility of a free-fall antigravity measurement on neutral antihydrogen.

  17. A trap-based pulsed positron beam optimised for positronium laser spectroscopy

    SciTech Connect

    Cooper, B. S.; Alonso, A. M.; Deller, A.; Wall, T. E.; Cassidy, D. B.

    2015-10-15

    We describe a pulsed positron beam that is optimised for positronium (Ps) laser-spectroscopy experiments. The system is based on a two-stage Surko-type buffer gas trap that produces 4 ns wide pulses containing up to 5 × 10^5 positrons at a rate of 0.5-10 Hz. By implanting positrons from the trap into a suitable target material, a dilute positronium gas with an initial density of the order of 10^7 cm^-3 is created in vacuum. This is then probed with pulsed (ns) laser systems, where various Ps-laser interactions have been observed via changes in Ps annihilation rates using a fast gamma ray detector. We demonstrate the capabilities of the apparatus and detection methodology via the observation of Rydberg positronium atoms with principal quantum numbers ranging from 11 to 22 and the Stark broadening of the n = 2 → 11 transition in electric fields.

  18. Camera-based system for contactless monitoring of respiration.

    PubMed

    Bartula, Marek; Tigges, Timo; Muehlsteff, Jens

    2013-01-01

    Reliable, remote measurement of respiration rate is still an unmet need in clinical and home settings. Although the predictive power of respiratory rate for a patient's health status is well known, this vital sign is often measured inaccurately or not at all. In this paper we propose a camera-based monitoring system to reliably measure respiration rate without any body contact. A computationally efficient algorithm to extract raw breathing signals from the video stream has been developed and implemented. Additionally, a camera offers easy access to motion information in the analyzed scene, which significantly improves subsequent breath-to-breath classification. The performance of the sensor system was evaluated using data acquired with healthy volunteers, as well as with a mechanical phantom, under laboratory conditions covering a large range of challenging measurement situations.
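
As a minimal illustration of turning such a raw breathing signal into a respiration rate (a generic spectral-peak approach, not the authors' algorithm), assuming a 30 Hz camera and a synthetic 15 breaths-per-minute motion signal:

```python
import numpy as np

# Sketch: estimate respiration rate as the dominant spectral peak of a
# breathing-band signal. The 30 Hz frame rate and the synthetic signal are
# assumptions for illustration.
fs = 30.0                                  # assumed camera frame rate, Hz
t = np.arange(0, 60, 1 / fs)               # 60 s of samples
signal = np.sin(2 * np.pi * 0.25 * t)      # synthetic chest motion, 15 breaths/min
signal += 0.1 * np.random.default_rng(0).standard_normal(t.size)  # sensor noise

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Restrict to a plausible respiration band (0.1-0.7 Hz, i.e. 6-42 breaths/min).
band = (freqs >= 0.1) & (freqs <= 0.7)
rate_hz = freqs[band][np.argmax(spectrum[band])]
breaths_per_min = 60 * rate_hz
```

A real system would add the motion-based breath-to-breath classification the abstract describes; this only shows the rate-extraction step.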

  19. Visual homing with a pan-tilt based stereo camera

    NASA Astrophysics Data System (ADS)

    Nirmal, Paramesh; Lyons, Damian M.

    2013-01-01

    Visual homing is a navigation method based on comparing a stored image of the goal location with the current image (current view) to determine how to navigate to the goal location. It is theorized that insects, such as ants and bees, employ visual homing methods to return to their nest. Visual homing has been applied to autonomous robot platforms using two main approaches: holistic and feature-based. Both methods aim at determining the distance and direction to the goal location. Navigational algorithms using Scale Invariant Feature Transforms (SIFT) have gained great popularity in recent years due to the robustness of the feature operator. Churchill and Vardy have developed a visual homing method using scale change information (Homing in Scale Space, HiSS) from SIFT. HiSS uses SIFT feature scale change information to determine the distance between the robot and the goal location. Since the scale component is discrete with a small range of values, the result is a rough measurement with limited accuracy. We have developed a method that uses stereo data, resulting in better homing performance. Our approach utilizes a pan-tilt based stereo camera, which is used to build composite wide-field images. We use the wide-field images combined with stereo data obtained from the stereo camera to extend the keypoint vector with a new parameter, depth (z). Using this information, our algorithm determines the distance and orientation from the robot to the goal location. We compare our method with HiSS in a set of indoor trials using a Pioneer 3-AT robot equipped with a BumbleBee2 stereo camera. We evaluate the performance of both methods using a set of performance measures described in this paper.
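
A hypothetical sketch of how depth-augmented keypoints can yield a homing vector (illustrative only, not the authors' exact algorithm): with matched landmarks expressed as 3-D points in both the goal view and the current view, the mean displacement approximates the translation to the goal when rotation is neglected.

```python
import numpy as np

# Toy example: three landmarks seen from the goal pose, and the same landmarks
# seen from a current pose displaced by an (unknown to the robot) offset.
goal_pts = np.array([[1.0, 0.0, 4.0],
                     [-0.5, 0.2, 3.0],
                     [0.3, -0.1, 5.0]])    # (x, y, depth z) in the goal view
offset = np.array([0.8, 0.0, -1.5])        # true robot displacement, for the demo
current_pts = goal_pts - offset            # same landmarks from the current pose

# Pure translation assumed: the mean 3-D displacement points toward the goal.
homing_vector = np.mean(goal_pts - current_pts, axis=0)
distance = np.linalg.norm(homing_vector)
```

The continuous depth value is what replaces the coarse, discrete SIFT scale change used by HiSS, which is the source of the accuracy gain the abstract claims.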

  20. 78 FR 68475 - Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-14

    ... COMMISSION Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...-based driver assistance system cameras and components thereof by reason of infringement of certain... assistance system cameras and components thereof by reason of infringement of one or more of claims 1, 2,...

  1. Positron source investigation by using CLIC drive beam for Linac-LHC based e+p collider

    NASA Astrophysics Data System (ADS)

    Arıkan, Ertan; Aksakal, Hüsnü

    2012-08-01

    Three different methods, namely the conventional, Compton backscattering, and undulator-based methods, were employed for the production of positrons. The positrons to be used for e+p collisions in a Linac-LHC (Large Hadron Collider) based collider have been studied. The number of produced positrons as a function of drive beam energy and optimum target thickness has been determined. Three different targets, W75-Ir25, W75-Ta25, and W75-Re25, have been used for each of the three methods. The number of positrons has been estimated with the FLUKA simulation code. The produced positrons are then passed through an adiabatic matching device (AMD) and the capture efficiency is determined. Finally, the e+p collider luminosity corresponding to each of the methods mentioned above has been calculated with the CAIN code.

  2. Estimation of Cometary Rotation Parameters Based on Camera Images

    NASA Technical Reports Server (NTRS)

    Spindler, Karlheinz

    2007-01-01

    The purpose of the Rosetta mission is the in situ analysis of a cometary nucleus using both remote sensing equipment and scientific instruments delivered to the comet surface by a lander and transmitting measurement data to the comet-orbiting probe. Following a tour of planets including one Mars swing-by and three Earth swing-bys, the Rosetta probe is scheduled to rendezvous with comet 67P/Churyumov-Gerasimenko in May 2014. The mission poses various flight dynamics challenges, both in terms of parameter estimation and maneuver planning. Along with spacecraft parameters, the comet's position, velocity, attitude, angular velocity, inertia tensor and gravitational field need to be estimated. The measurements on which the estimation process is based are ground-based measurements (range and Doppler), yielding information on the heliocentric spacecraft state, and images taken by an on-board camera, yielding information on the comet state relative to the spacecraft. The image-based navigation depends on the identification of cometary landmarks (whose body coordinates also need to be estimated in the process). The paper will describe the estimation process involved, focusing on the phase when, after orbit insertion, the task arises to estimate the cometary rotational motion from camera images on which individual landmarks begin to become identifiable.

  3. Linac-based positron source and generation of a high density positronium cloud for the GBAR experiment

    NASA Astrophysics Data System (ADS)

    Liszkay, L.; Comini, P.; Corbel, C.; Debu, P.; Dupré, P.; Grandemange, P.; Pérez, P.; Rey, J.-M.; Ruiz, N.; Sacquin, Y.

    2013-06-01

    The aim of the recently approved GBAR (Gravitational Behaviour of Antihydrogen at Rest) experiment is to measure the acceleration of neutral antihydrogen atoms in the gravitational field of the Earth. The experimental scheme requires a high density positronium cloud as a target for antiprotons, provided by the Antiproton Decelerator (AD) - Extra Low Energy Antiproton Ring (ELENA) facility at CERN. We introduce briefly the experimental scheme and present the ongoing efforts at IRFU CEA Saclay to develop the positron source and the positron-positronium converter, which are key parts of the experiment. We have constructed a slow positron source in Saclay, based on a low energy (4.3 MeV) linear electron accelerator (linac). By using an electron target made of tungsten and a stack of thin W meshes as positron moderator, we reached a slow positron intensity that is comparable with that of 22Na-based sources using a solid neon moderator. The source feeds positrons into a high field (5 T) Penning-Malmberg trap. Intense positron pulses from the trap will be converted to slow ortho-positronium (o-Ps) by a converter structure. Mesoporous silica films appear to date to be the best candidates as converter material. We discuss our studies to find the optimal pore configuration for the positron-positronium converter.

  4. Noninvasive particle sizing using camera-based diffuse reflectance spectroscopy.

    PubMed

    Abildgaard, Otto Højager Attermann; Frisvad, Jeppe Revall; Falster, Viggo; Parker, Alan; Christensen, Niels Jørgen; Dahl, Anders Bjorholm; Larsen, Rasmus

    2016-05-10

    Diffuse reflectance measurements are useful for noninvasive inspection of optical properties such as reduced scattering and absorption coefficients. Spectroscopic analysis of these optical properties can be used for particle sizing. Systems based on optical fiber probes are commonly employed, but their low spatial resolution limits their validity ranges for the coefficients. To cover a wider range of coefficients, we use camera-based spectroscopic oblique incidence reflectometry. We develop a noninvasive technique for acquisition of apparent particle size distributions based on this approach. Our technique is validated using stable oil-in-water emulsions with a wide range of known particle size distributions. We also measure the apparent particle size distributions of complex dairy products. These results show that our tool, in contrast to those based on fiber probes, can deal with a range of optical properties wide enough to track apparent particle size distributions in a typical industrial process.

  5. Camera-based forecasting of insolation for solar systems

    NASA Astrophysics Data System (ADS)

    Manger, Daniel; Pagel, Frank

    2015-02-01

    With the transition towards renewable energies, electricity suppliers are faced with huge challenges. In particular, the increasing integration of solar power systems into the grid becomes more and more complicated because of their dynamic feed-in capacity. To assist in stabilizing the grid, the feed-in capacity of a solar power system within the next hours, minutes and even seconds should be known in advance. In this work, we present a consumer-camera-based system for forecasting the feed-in capacity of a solar system over a horizon of 10 seconds. A camera is targeted at the sky, and clouds are segmented, detected and tracked. A quantitative prediction of the insolation is made based on the tracked clouds. Image data as well as ground-truth data for the feed-in capacity were synchronously collected at 1 Hz using a small solar panel, a resistor and a measuring device. Preliminary results demonstrate both the applicability and the limits of the proposed system.

  6. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest, or of targets affixed to objects of interest, in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the camera's field of view. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white
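
The pixel-to-coordinate step can be illustrated with a plain pinhole model. The focal length and principal point below are assumed values for illustration, not the system's actual calibration:

```python
# Pinhole back-projection sketch: with focal length f (in pixels), principal
# point (cx, cy), and a known target depth z, pixel coordinates (u, v) map to
# camera-frame x, y in the same units as z. All parameters here are assumed.
def pixel_to_camera(u, v, z, f=800.0, cx=640.0, cy=512.0):
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return x, y, z

# A target imaged 80 px right of the principal point at 2 m depth
# lies 0.2 m to the right of the optical axis.
x, y, z = pixel_to_camera(720.0, 512.0, 2.0)
```

A deployed system like the one described would also correct lens distortion and calibrate f, cx, cy rather than assume them.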

  7. Goal-oriented rectification of camera-based document images.

    PubMed

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

    Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface on the plane is guided only by the textual content's appearance in the document image while incorporating a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied on the word level aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology using a consistent evaluation methodology that encounters OCR accuracy and a newly introduced measure using a semi-automatic procedure.

  8. Evolutionary Fuzzy Block-Matching-Based Camera Raw Image Denoising.

    PubMed

    Yang, Chin-Chang; Guo, Shu-Mei; Tsai, Jason Sheng-Hong

    2017-09-01

    An evolutionary fuzzy block-matching-based image denoising algorithm is proposed to remove noise from a camera raw image. Recently, variance stabilization transforms have been widely used to stabilize the noise variance, so that a Gaussian denoising algorithm can be used to remove the signal-dependent noise in camera sensors. However, in the stabilized domain, existing denoising algorithms may blur too much detail. To provide a better estimate of the noise-free signal, a new block-matching approach is proposed to find similar blocks using a type-2 fuzzy logic system (FLS). These similar blocks are then averaged with weightings determined by the FLS. Finally, an efficient differential evolution is used to further improve the performance of the proposed denoising algorithm. The experimental results show that the proposed algorithm effectively improves the performance of image denoising. Furthermore, the average performance of the proposed method is better than those of two state-of-the-art image denoising algorithms in both subjective and objective measures.

  9. Whole blood glucose analysis based on smartphone camera module.

    PubMed

    Devadhasan, Jasmine Pramila; Oh, Hyunhee; Choi, Cheol Soo; Kim, Sanghyo

    2015-11-01

    Complementary metal oxide semiconductor (CMOS) image sensors have received great attention for their high efficiency in biological applications. The present work describes a CMOS image sensor-based whole blood glucose monitoring system through a point-of-care (POC) approach. A simple poly-ethylene terephthalate (PET) chip was developed to carry out the enzyme kinetic reaction at various concentrations (110-586 mg/dL) of mouse blood glucose. In this technique, assay reagent is immobilized onto amine functionalized silica (AFSiO2) nanoparticles as an electrostatic attraction in order to achieve glucose oxidation on the chip. The assay reagent immobilized AFSiO2 nanoparticles develop a semi-transparent reaction platform, which is technically a suitable chip to analyze by a camera module. The oxidized glucose then produces a green color according to the glucose concentration and is analyzed by the camera module as a photon detection technique; the photon number decreases when the glucose concentration increases. The combination of these components, the CMOS image sensor and enzyme immobilized PET film chip, constitute a compact, accurate, inexpensive, precise, digital, highly sensitive, specific, and optical glucose-sensing approach for POC diagnosis.

  10. Whole blood glucose analysis based on smartphone camera module

    NASA Astrophysics Data System (ADS)

    Devadhasan, Jasmine Pramila; Oh, Hyunhee; Choi, Cheol Soo; Kim, Sanghyo

    2015-11-01

    Complementary metal oxide semiconductor (CMOS) image sensors have received great attention for their high efficiency in biological applications. The present work describes a CMOS image sensor-based whole blood glucose monitoring system through a point-of-care (POC) approach. A simple poly-ethylene terephthalate (PET) chip was developed to carry out the enzyme kinetic reaction at various concentrations (110-586 mg/dL) of mouse blood glucose. In this technique, assay reagent is immobilized onto amine functionalized silica (AFSiO2) nanoparticles as an electrostatic attraction in order to achieve glucose oxidation on the chip. The assay reagent immobilized AFSiO2 nanoparticles develop a semi-transparent reaction platform, which is technically a suitable chip to analyze by a camera module. The oxidized glucose then produces a green color according to the glucose concentration and is analyzed by the camera module as a photon detection technique; the photon number decreases when the glucose concentration increases. The combination of these components, the CMOS image sensor and enzyme immobilized PET film chip, constitute a compact, accurate, inexpensive, precise, digital, highly sensitive, specific, and optical glucose-sensing approach for POC diagnosis.

  11. Securing quality of camera-based biomedical optics

    NASA Astrophysics Data System (ADS)

    Guse, Frank; Kasper, Axel; Zinter, Bob

    2009-02-01

    As sophisticated optical imaging technologies move into clinical applications, manufacturers need to guarantee that their products meet the required performance criteria over long lifetimes and in very different environmental conditions. Consistent quality management identifies critical component features derived from end-user requirements in a top-down approach. Careful risk analysis in the design phase defines the sample sizes for production tests, whereas first-article inspection assures the reliability of the production processes. We demonstrate the application of these basic quality principles to camera-based biomedical optics for a variety of examples including molecular diagnostics, dental imaging, ophthalmology and digital radiography, covering a wide range of CCD/CMOS chip sizes and resolutions. Novel concepts in fluorescence detection and structured illumination are also highlighted.

  12. Cardiac cameras.

    PubMed

    Travin, Mark I

    2011-05-01

    Cardiac imaging with radiotracers plays an important role in patient evaluation, and the development of suitable imaging instruments has been crucial. While initially performed with the rectilinear scanner that slowly transmitted, in a row-by-row fashion, cardiac count distributions onto various printing media, the Anger scintillation camera allowed electronic determination of tracer energies and of the distribution of radioactive counts in 2D space. Increased sophistication of cardiac cameras and the development of powerful computers to analyze, display, and quantify data have been essential to making radionuclide cardiac imaging a key component of the cardiac work-up. Newer processing algorithms and solid state cameras, fundamentally different from the Anger camera, show promise to provide higher counting efficiency and resolution, leading to better image quality, greater patient comfort, and potentially lower radiation exposure. While the focus has been on myocardial perfusion imaging with single-photon emission computed tomography, increased use of positron emission tomography is broadening the field to include molecular imaging of the myocardium and of the coronary vasculature. Further advances may require integrating cardiac nuclear cameras with other imaging devices, i.e., hybrid imaging cameras. The goal is to image the heart and its physiological processes as accurately as possible, to prevent and cure disease processes.

  13. Design of high speed camera based on CMOS technology

    NASA Astrophysics Data System (ADS)

    Park, Sei-Hun; An, Jun-Sick; Oh, Tae-Seok; Kim, Il-Hwan

    2007-12-01

    The capability of a high-speed camera to capture high-speed images has been evaluated using CMOS image sensors. There are two types of image sensors, namely CCD and CMOS sensors. A CMOS sensor consumes less power than a CCD sensor and can capture images more rapidly. High-speed cameras with built-in CMOS sensors are widely used in vehicle crash tests and airbag controls, in golf training aids, and in bullet direction measurement in the military. The high-speed camera system built in this study has the following components: a CMOS image sensor that can capture about 500 frames per second at a resolution of 1280×1024; an FPGA and DDR2 memory that control the image sensor and store images; a Camera Link module that transmits the stored data to a PC; and an RS-422 communication function that enables control of the camera from a PC.
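
A quick back-of-envelope calculation shows why on-board FPGA/DDR2 buffering is needed before readout over Camera Link. The bytes-per-pixel figure is an assumption about the sensor's output format, not a value from the paper:

```python
# Raw data rate of a 1280x1024 sensor at 500 fps. The 1.25 bytes/pixel
# (e.g. 10-bit packed pixels) is an assumed figure for illustration.
width, height = 1280, 1024
fps = 500
bytes_per_pixel = 1.25

pixels_per_second = width * height * fps
data_rate_mb_s = pixels_per_second * bytes_per_pixel / 1e6   # megabytes/s
```

At roughly 800 MB/s of raw sensor output, the stream must be staged in local DDR2 memory and read out to the PC more slowly than it is captured.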

  14. MO-AB-206-02: Testing Gamma Cameras Based On TG177 WG Report.

    PubMed

    Halama, J

    2016-06-01

    This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics, and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging, and improvements in reconstruction algorithms that provide for options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described.

  15. A subwavelength resolution microwave/6.3 GHz camera based on a metamaterial absorber

    NASA Astrophysics Data System (ADS)

    Xie, Yunsong; Fan, Xin; Chen, Yunpeng; Wilson, Jeffrey D.; Simons, Rainee N.; Xiao, John Q.

    2017-01-01

    The design, fabrication and characterization of a novel metamaterial-absorber-based camera with subwavelength spatial resolution are investigated. The proposed camera features a simple and lightweight design, easy portability, low cost, high resolution and sensitivity, and minimal interference with, or distortion of, the original field distribution. The imaging capability of the proposed camera was characterized in both the near-field and far-field ranges. The experimental and simulated near-field images both reveal that the camera produces qualitatively accurate images with negligible distortion of the original field distribution. The far-field demonstration was done by coupling the camera with a microwave convex lens. The far-field results further demonstrate that the camera can capture a quantitatively accurate electromagnetic wave distribution at the diffraction limit. The proposed camera can be used in applications such as non-destructive imaging and beam-direction tracing.

  16. A subwavelength resolution microwave/6.3 GHz camera based on a metamaterial absorber

    PubMed Central

    Xie, Yunsong; Fan, Xin; Chen, Yunpeng; Wilson, Jeffrey D.; Simons, Rainee N.; Xiao, John Q.

    2017-01-01

    The design, fabrication and characterization of a novel metamaterial-absorber-based camera with subwavelength spatial resolution are investigated. The proposed camera features a simple and lightweight design, easy portability, low cost, high resolution and sensitivity, and minimal interference with, or distortion of, the original field distribution. The imaging capability of the proposed camera was characterized in both the near-field and far-field ranges. The experimental and simulated near-field images both reveal that the camera produces qualitatively accurate images with negligible distortion of the original field distribution. The far-field demonstration was done by coupling the camera with a microwave convex lens. The far-field results further demonstrate that the camera can capture a quantitatively accurate electromagnetic wave distribution at the diffraction limit. The proposed camera can be used in applications such as non-destructive imaging and beam-direction tracing. PMID:28071734

  17. Camera calibration method of binocular stereo vision based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, achieving higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering distortion of the camera lens. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
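
The distortion model mentioned above (radial plus decentering terms, the same Brown-Conrady form OpenCV estimates as coefficients k1, k2, p1, p2) can be written out directly for normalized image coordinates:

```python
# Brown-Conrady lens distortion applied to normalized image coordinates (x, y):
# radial terms k1, k2 and tangential (decentering) terms p1, p2.
def distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

Calibration in OpenCV fits these coefficients (together with the intrinsic matrix) to observed checkerboard corners; with all coefficients zero the mapping reduces to the identity.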

  18. A subwavelength resolution microwave/6.3 GHz camera based on a metamaterial absorber.

    PubMed

    Xie, Yunsong; Fan, Xin; Chen, Yunpeng; Wilson, Jeffrey D; Simons, Rainee N; Xiao, John Q

    2017-01-10

    The design, fabrication and characterization of a novel metamaterial-absorber-based camera with subwavelength spatial resolution are investigated. The proposed camera features a simple and lightweight design, easy portability, low cost, high resolution and sensitivity, and minimal interference with, or distortion of, the original field distribution. The imaging capability of the proposed camera was characterized in both the near-field and far-field ranges. The experimental and simulated near-field images both reveal that the camera produces qualitatively accurate images with negligible distortion of the original field distribution. The far-field demonstration was done by coupling the camera with a microwave convex lens. The far-field results further demonstrate that the camera can capture a quantitatively accurate electromagnetic wave distribution at the diffraction limit. The proposed camera can be used in applications such as non-destructive imaging and beam-direction tracing.

  19. An autonomous sensor module based on a legacy CCTV camera

    NASA Astrophysics Data System (ADS)

    Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.

    2016-10-01

    A UK MoD funded programme into autonomous sensor arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. This paper reports on the development of a SAPIENT-compliant sensor module using a legacy closed-circuit television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with the zoom level automatically optimized for human detection at the appropriate range. Open source algorithms (using OpenCV) are used to automatically detect pedestrians; their real-world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation, a "follow" mode is implemented in which the camera maintains the detected person within its field of view without requiring an end-user to directly control the camera with a joystick.
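
One common way a real-world position can be estimated from a single calibrated, tilted camera is a flat-ground assumption. This is an illustrative sketch, not the programme's documented method, and the camera height, tilt and focal length are made-up values:

```python
import math

# Flat-ground range estimate: a camera at height h (metres) with downward tilt
# sees a pedestrian's feet at image row v; the ray angle below horizontal gives
# the ground distance. All parameters are assumed for illustration.
def ground_distance(v, h=4.0, tilt_deg=10.0, f=1000.0, cy=540.0):
    angle = math.radians(tilt_deg) + math.atan((v - cy) / f)  # below horizontal
    return h / math.tan(angle)

d = ground_distance(740.0)   # feet imaged 200 px below the principal point
```

Rows further down the image (larger v) map to shorter ranges, which is also how the zoom level can be chosen to keep a pedestrian at a usable size in the frame.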

  20. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer.

    PubMed

    Shen, Bailey Y; Mukai, Shizuo

    2017-01-01

    Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133 mm × 91 mm × 45 mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.

  1. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer

    PubMed Central

    Shen, Bailey Y.

    2017-01-01

    Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white-light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133 mm × 91 mm × 45 mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient. PMID:28396802

  2. Positron microscopy

    SciTech Connect

    Hulett, L.D. Jr.; Xu, J.

    1995-02-01

    The negative work function property that some materials have for positrons makes possible the development of positron reemission microscopy (PRM). Because of the low energies with which the positrons are emitted, some unique applications, such as the imaging of defects, are possible. The history of the concept of PRM and its present state of development will be reviewed. The potential of positron microprobe techniques will also be discussed.

  3. The E166 experiment: Development of an Undulator-Based Polarized Positron Source for the International Linear Collider

    SciTech Connect

    Kovermann, J.; Stahl, A.; Mikhailichenko, A.A.; Scott, D.; Moortgat-Pick, G.A.; Gharibyan, V.; Pahl, P.; Poschl, R.; Schuler, K.P.; Laihem, K.; Riemann, S.; Schalicke, A.; Dollan, R.; Kolanoski, H.; Lohse, T.; Schweizer, T.; McDonald, K.T.; Batygin, Y.; Bharadwaj, V.; Bower, G.; Decker, F.J.; /SLAC /Tel Aviv U. /Tennessee U.

    2011-11-14

    A longitudinally polarized positron beam is foreseen for the international linear collider (ILC). A proof-of-principle experiment has been performed in the final focus test beam at SLAC to demonstrate the production of polarized positrons for implementation at the ILC. The E166 experiment uses a 1 m long helical undulator in a 46.6 GeV electron beam to produce few-MeV photons with a high degree of circular polarization. These photons are then converted in a thin target to generate longitudinally polarized e{sup +} and e{sup -}. The positron polarization is measured using a Compton transmission polarimeter. The data analysis has shown asymmetries in the expected vicinity of 3.4% and {approx}1% for photons and positrons, respectively, and the expected positron longitudinal polarization covers a range from 50% to 90%. The full exploitation of the physics potential of an international linear collider (ILC) will require the development of polarized positron beams. Having both e{sup +} and e{sup -} beams polarized will provide new insight into the structures of couplings and thus give access to physics beyond the standard model [1]. The concept for a polarized positron source is based on circularly polarized photon sources. These photons are then converted to longitudinally polarized e{sup +} and e{sup -} pairs. While an experiment at KEK [1a] uses Compton backscattering [2], the E166 experiment uses a helical undulator to produce polarized photons. An undulator-based positron source for the ILC has been proposed in [3,4]. The proposed scheme for an ILC positron source is illustrated in figure 1. In this scheme, a 150 GeV electron beam passes through a 120 m long helical undulator to produce an intense photon beam with a high degree of circular polarization. These photons are converted in a thin target to e{sup +} e{sup -} pairs. The polarized positrons are then collected, pre-accelerated to the damping ring and injected into the main linac. The E166 experiment is

  4. A Bionic Camera-Based Polarization Navigation Sensor

    PubMed Central

    Wang, Daobin; Liang, Huawei; Zhu, Hui; Zhang, Shuai

    2014-01-01

    Navigation and positioning technology is closely related to our daily activities, from travel to aerospace. Recently it has been found that Cataglyphis (a genus of desert ant) is able to detect the polarization direction of skylight and navigate according to this information. This paper presents a real-time bionic camera-based polarization navigation sensor. The sensor has two working modes: a single-point measurement mode and a multi-point measurement mode. An indoor calibration experiment was performed under a beam of standard polarized light. The results show that after noise reduction the accuracy of the sensor can reach 0.3256°. The sensor was also compared with GPS and INS (Inertial Navigation System) in the single-point measurement mode through an outdoor experiment. With time compensation and location compensation, the sensor can be a useful alternative to GPS and INS. In addition, the sensor can also measure the polarization distribution pattern when working in multi-point measurement mode. PMID:25051029

  5. A bionic camera-based polarization navigation sensor.

    PubMed

    Wang, Daobin; Liang, Huawei; Zhu, Hui; Zhang, Shuai

    2014-07-21

    Navigation and positioning technology is closely related to our daily activities, from travel to aerospace. Recently it has been found that Cataglyphis (a genus of desert ant) is able to detect the polarization direction of skylight and navigate according to this information. This paper presents a real-time bionic camera-based polarization navigation sensor. The sensor has two working modes: a single-point measurement mode and a multi-point measurement mode. An indoor calibration experiment was performed under a beam of standard polarized light. The results show that after noise reduction the accuracy of the sensor can reach 0.3256°. The sensor was also compared with GPS and INS (Inertial Navigation System) in the single-point measurement mode through an outdoor experiment. With time compensation and location compensation, the sensor can be a useful alternative to GPS and INS. In addition, the sensor can also measure the polarization distribution pattern when working in multi-point measurement mode.
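    The core computation of such a sensor can be illustrated with the standard four-channel method: intensities measured behind linear polarizers at 0°, 45°, 90° and 135° give the linear Stokes parameters Q and U, from which the angle of polarization follows. This is a minimal sketch of the general principle, not the paper's calibrated pipeline; the numbers are illustrative.

```python
import math

def polarization_angle(i0, i45, i90, i135):
    """Recover the angle of polarization from intensities behind linear
    polarizers at 0, 45, 90 and 135 degrees (Stokes Q = I0-I90, U = I45-I135)."""
    q = i0 - i90
    u = i45 - i135
    return 0.5 * math.atan2(u, q)   # radians

def measured(theta, aop, i_total=1.0, dolp=1.0):
    """Malus-law model of the intensity behind a polarizer at angle theta
    for light with angle of polarization aop and degree of polarization dolp."""
    return 0.5 * i_total * (1.0 + dolp * math.cos(2.0 * (theta - aop)))

# simulate a skylight patch polarized at 30 degrees and recover the angle
true_aop = math.radians(30.0)
ints = [measured(math.radians(a), true_aop) for a in (0, 45, 90, 135)]
est = polarization_angle(*ints)   # -> 30 degrees, in radians
```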

  6. A hemispherical electronic eye camera based on compressible silicon optoelectronics.

    PubMed

    Ko, Heung Cho; Stoykovich, Mark P; Song, Jizhou; Malyarchuk, Viktor; Choi, Won Mook; Yu, Chang-Jae; Geddes, Joseph B; Xiao, Jianliang; Wang, Shuodao; Huang, Yonggang; Rogers, John A

    2008-08-07

    The human eye is a remarkable imaging device, with many attractive design features. Prominent among these is a hemispherical detector geometry, similar to that found in many other biological systems, that enables a wide field of view and low aberrations with simple, few-component imaging optics. This type of configuration is extremely difficult to achieve using established optoelectronics technologies, owing to the intrinsically planar nature of the patterning, deposition, etching, materials growth and doping methods that exist for fabricating such systems. Here we report strategies that avoid these limitations, and implement them to yield high-performance, hemispherical electronic eye cameras based on single-crystalline silicon. The approach uses wafer-scale optoelectronics formed in unusual, two-dimensionally compressible configurations and elastomeric transfer elements capable of transforming the planar layouts in which the systems are initially fabricated into hemispherical geometries for their final implementation. In a general sense, these methods, taken together with our theoretical analyses of their associated mechanics, provide practical routes for integrating well-developed planar device technologies onto the surfaces of complex curvilinear objects, suitable for diverse applications that cannot be addressed by conventional means.

  7. Interactive facial caricaturing system based on eye camera

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Tsuyoshi; Tominaga, Masafumi; Koshimizu, Hiroyasu

    2003-04-01

    The face is the most effective visual medium for supporting human interface and communication. We have previously proposed a typical KANSEI machine vision system to generate facial caricatures. The basic principle of this system uses the "mean face assumption" to extract the individual features of a given face. That system did not provide for feedback from the viewer of the caricature; to allow such feedback, in this paper we propose a caricaturing system that uses the KANSEI visual information acquired from an eye camera mounted on the viewer's head, since it is well known that the gaze distribution represents not only where but also how a person is looking at the face. The caricatures created in this way can be based on several measures derived from the distribution of the number of fixations on the facial parts, the number of times the gaze returned to a particular area of the face, and the matrix of transitions from one facial region to another. These measures of the viewer's KANSEI information were used to create caricatures with feedback from the viewer.

  8. Only Image Based for the 3d Metric Survey of Gothic Structures by Using Frame Cameras and Panoramic Cameras

    NASA Astrophysics Data System (ADS)

    Pérez Ramos, A.; Robleda Prieto, G.

    2016-06-01

    The indoor Gothic apse provides a complex environment for virtualization using imaging techniques due to its light conditions and architecture. Light entering through large windows, in combination with the apse shape, makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external high-quality images are often used to build high-resolution textures for these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making the task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 mm lens commonly used for close-range photogrammetry and adds another using an HDR 360° camera with four lenses that makes the task easier and faster than the previous one. Fieldwork and office work are simplified. The proposed methodology provides photographs of sufficient quality for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate more deliverables without extra time spent in the field, for instance, immersive virtual tours. To verify the usefulness of the method, it was applied to the apse, since this is considered one of the most complex elements of Gothic churches, and the approach could be extended to the whole building.

  9. A real-time camera calibration system based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time camera calibration system based on OpenCV, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB and without manual intervention, and it can be widely used in various computer vision systems.
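    The planar-target step underlying OpenCV-style calibration can be sketched without the library: a homography between a calibration plane and its image is fit by the Direct Linear Transform, which is the computation behind calls such as cv2.findHomography. This is a numpy-only illustration on synthetic, noiseless correspondences, not the paper's system.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: fit H (3x3, dst ~ H @ src, up to scale)
    from at least 4 point correspondences between a plane and its image."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)   # null-space vector holds the H entries

# synthetic check: project points through a known homography and refit it
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
src = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.2)]
dst = []
for x, y in src:
    p = H_true @ np.array([x, y, 1.0])
    dst.append((p[0] / p[2], p[1] / p[2]))
H = estimate_homography(src, dst)
H = H / H[2, 2]   # fix the arbitrary scale for comparison
```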

  10. 3D measurement and camera attitude estimation method based on trifocal tensor

    NASA Astrophysics Data System (ADS)

    Chen, Shengyi; Liu, Haibo; Yao, Linshen; Yu, Qifeng

    2016-11-01

    To simultaneously perform 3D measurement and camera attitude estimation, an efficient and robust method based on the trifocal tensor is proposed in this paper, which employs only the intrinsic parameters and positions of three cameras. The initial trifocal tensor is obtained using a heteroscedastic errors-in-variables (HEIV) estimator, and the initial relative poses of the three cameras are acquired by decomposing the tensor. The initial attitude of the cameras is then obtained from the known positions of the three cameras. Next, the camera attitude and the image positions of the points of interest are optimized according to the trifocal tensor constraint using the HEIV method. Finally, the spatial positions of the points are obtained using an intersection measurement method. Both simulation and real-image experiment results suggest that the proposed method achieves the same precision as the Bundle Adjustment (BA) method while being more efficient.
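    The final intersection measurement can be illustrated with a linear (DLT) triangulation: given the 3x4 projection matrices of the cameras and a point's image positions, the spatial position is the null-space solution of a stacked linear system. This is a numpy sketch with synthetic cameras, not the authors' HEIV-optimized implementation.

```python
import numpy as np

def triangulate(points2d, projections):
    """Linear intersection: recover a 3D point from its image positions
    in two or more cameras with known 3x4 projection matrices."""
    rows = []
    for (u, v), P in zip(points2d, projections):
        rows.append(u * P[2] - P[0])   # u * (P row 3) - (P row 1) = 0
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                          # homogeneous solution
    return X[:3] / X[3]

# three synthetic cameras (translated along x) observing one point
K = np.array([[1000.0, 0, 500], [0, 1000.0, 400], [0, 0, 1]])
X_true = np.array([0.3, -0.2, 5.0])
Ps, obs = [], []
for tx in (-1.0, 0.0, 1.0):
    P = K @ np.hstack([np.eye(3), np.array([[tx], [0.0], [0.0]])])
    x = P @ np.append(X_true, 1.0)
    obs.append((x[0] / x[2], x[1] / x[2]))
    Ps.append(P)
X = triangulate(obs, Ps)   # recovers X_true on noiseless data
```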

  11. Ultra Fast X-ray Streak Camera for TIM Based Platforms

    SciTech Connect

    Marley, E; Shepherd, R; Fulkerson, E S; James, L; Emig, J; Norman, D

    2012-05-02

    Ultra-fast x-ray streak cameras are a staple for time-resolved x-ray measurements. There is a need for a ten-inch manipulator (TIM) based streak camera that can be fielded in a newer large-scale laser facility. The LLNL ultra-fast streak camera's drive electronics have been upgraded and redesigned to fit inside a TIM tube. The camera also has a new user interface that allows for remote control and data acquisition. The system has been outfitted with a new sensor package that gives the user more operational awareness and control.

  12. Development of a treatment planning system for BNCT based on positron emission tomography data: preliminary results

    NASA Astrophysics Data System (ADS)

    Cerullo, N.; Daquino, G. G.; Muzi, L.; Esposito, J.

    2004-01-01

    Present standard treatment planning (TP) for glioblastoma multiforme (GBM - a kind of brain tumor), used in all boron neutron capture therapy (BNCT) trials, requires the construction (based on CT and/or MRI images) of a 3D model of the patient head, in which several regions, corresponding to different anatomical structures, are identified. The model is then employed by a computer code to simulate radiation transport in human tissues. The assumption is always made that considering a single value of boron concentration for each specific region will not lead to significant errors in dose computation. The concentration values are estimated "indirectly", on the basis of previous experience and blood sample analysis. This paper describes an original approach, with the introduction of data on the in vivo boron distribution, acquired by a positron emission tomography (PET) scan after labeling the BPA (borono-phenylalanine) with the positron emitter 18F. The feasibility of this approach was first tested with good results using the code CARONTE. Now a complete TPS is under development. The main features of the first version of this code are described and the results of a preliminary study are presented. Significant differences in dose computation arise when the two different approaches ("standard" and "PET-based") are applied to the TP of the same GBM case.

  13. Dual-Nuclide Radiopharmaceuticals for Positron Emission Tomography Based Dosimetry in Radiotherapy.

    PubMed

    Wurzer, Alexander; Seidl, Christof; Morgenstern, Alfred; Bruchertseifer, Frank; Schwaiger, Markus; Wester, Hans-Jürgen; Notni, Johannes

    2017-08-21

    Improvement of the accuracy of dosimetry in radionuclide therapy has the potential to increase patient safety and therapeutic outcomes. Although positron emission tomography (PET) is ideally suited for acquisition of dosimetric data because PET is inherently quantitative and offers high sensitivity and spatial resolution, it is not directly applicable for this purpose because common therapeutic radionuclides lack the necessary positron emission. This work reports on the synthesis of dual-nuclide labeled radiopharmaceuticals with therapeutic and PET functionality, which are based on common and widely available metal radionuclides. Dual-chelator conjugates, featuring interlinked cyclen- and triazacyclononane-based polyphosphinates DOTPI and TRAP, allow for strictly regioselective complexation of therapeutic (e.g., (177)Lu, (90)Y, or (213)Bi) and PET (e.g., (68)Ga) radiometals in the same molecular framework by exploiting the orthogonal metal ion selectivity of these chelators (DOTPI: large cations, such as lanthanide(III) ions; TRAP: small trivalent ions, such as Ga(III)). Such DOTPI-TRAP conjugates were decorated with 3 Gly-urea-Lys (KuE) motifs for targeting prostate-specific membrane antigen (PSMA), employing Cu-catalyzed (CuAAC) as well as strain-promoted (SPAAC) click chemistry. These were labeled with (177)Lu or (213)Bi and (68)Ga and used for in vivo imaging of LNCaP (human prostate carcinoma) tumor xenografts in SCID mice by PET, thus proving the practical applicability of the concept. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. A four-lens based plenoptic camera for depth measurements

    NASA Astrophysics Data System (ADS)

    Riou, Cécile; Deng, Zhiyuan; Colicchio, Bruno; Lauffenburger, Jean-Philippe; Kohler, Sophie; Haeberlé, Olivier; Cudel, Christophe

    2015-04-01

    In previous works, we extended the principles of "variable homography", defined by Zhang and Greenspan, for measuring the height of emergent fibers on glass and non-woven fabrics. This method was defined for working with fabric samples progressing on a conveyor belt, and triggered acquisition of two successive images was needed to perform the 3D measurement. In this work, we have retained the advantages of variable homography for measurements along the Z axis, but we have reduced the number of acquisitions to a single one by developing an acquisition device characterized by four lenses placed in front of a single image sensor. The idea is to obtain four projected sub-images on a single CCD sensor. The device thus becomes a plenoptic or light-field camera, capturing multiple views on the same image sensor. We have adapted the variable homography formulation for this device and we propose a new formulation to calculate depth with plenoptic cameras. With these results, we have transformed our plenoptic camera into a depth camera, and the first results are very promising.
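    For a multi-lens device of this kind, the depth calculation ultimately reduces to the pinhole relation between the disparity of a feature across sub-images and its distance. A minimal sketch follows; the focal length and lens baseline are illustrative assumptions, not the paper's variable-homography formulation.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole depth from the disparity (in pixels) of a feature between
    two sub-images whose lens centres are baseline_m apart; the focal
    length f_px is expressed in pixels. Returns depth in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# illustrative values: 2000 px focal length, 10 mm lens spacing, 40 px shift
z = depth_from_disparity(f_px=2000.0, baseline_m=0.01, disparity_px=40.0)
# 2000 * 0.01 / 40 = 0.5 m
```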

  15. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    USDA-ARS?s Scientific Manuscript database

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  16. Multi-camera synchronization core implemented on USB3 based FPGA platform

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply for each of the cameras. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor their line and frame periods and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of 3D stereo vision equipment smaller than 3 mm in diameter for medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
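    The regulation idea can be sketched as a simple proportional control loop: the measured line period is compared with the desired (master) period and the supply voltage is nudged to close the error. The linear plant model and gains below are purely illustrative; the real sensor's voltage-frequency relation is not published here.

```python
def synchronize(target_period_us, v0=1.8, gain=0.005, steps=200):
    """Toy proportional control loop: adjust a slave camera's supply
    voltage until its simulated line period matches the master's."""
    def line_period(v):
        # assumed plant model: line period shortens as voltage rises
        return 40.0 - 10.0 * (v - 1.8)   # microseconds
    v = v0
    for _ in range(steps):
        err = line_period(v) - target_period_us
        v += gain * err                  # raise voltage to shorten the period
    return v, line_period(v)

v, period = synchronize(target_period_us=38.5)
# converges to the fixed point where line_period(v) == 38.5 us
```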

  17. The determination of the intrinsic and extrinsic parameters of virtual camera based on OpenGL

    NASA Astrophysics Data System (ADS)

    Li, Suqi; Zhang, Guangjun; Wei, Zhenzhong

    2006-11-01

    OpenGL is the international standard for 3D graphics. 3D image generation by OpenGL is similar to image formation by a camera. This paper focuses on the application of OpenGL to computer vision, with the OpenGL 3D image regarded as a virtual camera image. Firstly, the imaging mechanism of OpenGL is analyzed from the viewpoint of the perspective projection transformation of a computer vision camera. Then, the relationship between the intrinsic and extrinsic parameters of a camera and the function parameters in OpenGL is analyzed, and the transformation formulas are deduced, thereby realizing computer vision simulation. The effectiveness of the method for determining the intrinsic and extrinsic parameters of the OpenGL-based virtual camera is verified by comparing actual CCD camera images with virtual camera images (with the virtual camera's parameters set to those of the actual camera) and by the experimental results of a stereo vision 3D reconstruction simulation.
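    One common form of such a transformation maps pinhole intrinsics (fx, fy, cx, cy, in pixels) to an OpenGL clip matrix. The sketch below assumes one frequently used convention (camera looking down -Z, pixel origin at the top-left, column-vector math) rather than the paper's exact formulas, and checks the matrix against the pinhole model numerically.

```python
import numpy as np

def gl_projection(fx, fy, cx, cy, width, height, near, far):
    """Build an OpenGL-style projection matrix from pinhole intrinsics.
    Convention assumed: camera looks down -Z, image y points down."""
    return np.array([
        [2 * fx / width, 0.0, 1.0 - 2 * cx / width, 0.0],
        [0.0, 2 * fy / height, 2 * cy / height - 1.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near),
         -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

P = gl_projection(500, 500, 320, 240, 640, 480, 0.1, 100.0)
clip = P @ np.array([0.32, -0.24, -1.0, 1.0])   # point 1 m in front
ndc = clip[:3] / clip[3]
# pinhole check: u = 320 + 500*0.32/1.0 = 480  ->  ndc_x = 2*480/640 - 1 = 0.5
#                v = 240 + 500*0.24/1.0 = 360  ->  ndc_y = 1 - 2*360/480 = -0.5
```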

  18. Pixelated CdTe detectors to overcome intrinsic limitations of crystal based positron emission mammographs

    NASA Astrophysics Data System (ADS)

    De Lorenzo, G.; Chmeissani, M.; Uzun, D.; Kolstein, M.; Ozsahin, I.; Mikhaylova, E.; Arce, P.; Cañadas, M.; Ariño, G.; Calderón, Y.

    2013-01-01

    A positron emission mammograph (PEM) is an organ dedicated positron emission tomography (PET) scanner for breast cancer detection. State-of-the-art PEMs employing scintillating crystals as detection medium can provide metabolic images of the breast with significantly higher sensitivity and specificity with respect to standard whole body PET scanners. Over the past few years, crystal PEMs have dramatically increased their importance in the diagnosis and treatment of early stage breast cancer. Nevertheless, designs based on scintillators are characterized by an intrinsic deficiency of the depth of interaction (DOI) information from relatively thick crystals constraining the size of the smallest detectable tumor. This work shows how to overcome such intrinsic limitation by substituting scintillating crystals with pixelated CdTe detectors. The proposed novel design is developed within the Voxel Imaging PET (VIP) Pathfinder project and evaluated via Monte Carlo simulation. The volumetric spatial resolution of the VIP-PEM is expected to be up to 6 times better than standard commercial devices with a point spread function of 1 mm full width at half maximum (FWHM) in all directions. Pixelated CdTe detectors can also provide an energy resolution as low as 1.5% FWHM at 511 keV for a virtually pure signal with negligible contribution from scattered events.

  19. Pixelated CdTe detectors to overcome intrinsic limitations of crystal based positron emission mammographs.

    PubMed

    De Lorenzo, G; Chmeissani, M; Uzun, D; Kolstein, M; Ozsahin, I; Mikhaylova, E; Arce, P; Cañadas, M; Ariño, G; Calderón, Y

    2013-01-01

    A positron emission mammograph (PEM) is an organ dedicated positron emission tomography (PET) scanner for breast cancer detection. State-of-the-art PEMs employing scintillating crystals as detection medium can provide metabolic images of the breast with significantly higher sensitivity and specificity with respect to standard whole body PET scanners. Over the past few years, crystal PEMs have dramatically increased their importance in the diagnosis and treatment of early stage breast cancer. Nevertheless, designs based on scintillators are characterized by an intrinsic deficiency of the depth of interaction (DOI) information from relatively thick crystals constraining the size of the smallest detectable tumor. This work shows how to overcome such intrinsic limitation by substituting scintillating crystals with pixelated CdTe detectors. The proposed novel design is developed within the Voxel Imaging PET (VIP) Pathfinder project and evaluated via Monte Carlo simulation. The volumetric spatial resolution of the VIP-PEM is expected to be up to 6 times better than standard commercial devices with a point spread function of 1 mm full width at half maximum (FWHM) in all directions. Pixelated CdTe detectors can also provide an energy resolution as low as 1.5% FWHM at 511 keV for a virtually pure signal with negligible contribution from scattered events.

  20. Pixelated CdTe detectors to overcome intrinsic limitations of crystal based positron emission mammographs

    PubMed Central

    De Lorenzo, G.; Chmeissani, M.; Uzun, D.; Kolstein, M.; Ozsahin, I.; Mikhaylova, E.; Arce, P.; Cañadas, M.; Ariño, G.; Calderón, Y.

    2013-01-01

    A positron emission mammograph (PEM) is an organ dedicated positron emission tomography (PET) scanner for breast cancer detection. State-of-the-art PEMs employing scintillating crystals as detection medium can provide metabolic images of the breast with significantly higher sensitivity and specificity with respect to standard whole body PET scanners. Over the past few years, crystal PEMs have dramatically increased their importance in the diagnosis and treatment of early stage breast cancer. Nevertheless, designs based on scintillators are characterized by an intrinsic deficiency of the depth of interaction (DOI) information from relatively thick crystals constraining the size of the smallest detectable tumor. This work shows how to overcome such intrinsic limitation by substituting scintillating crystals with pixelated CdTe detectors. The proposed novel design is developed within the Voxel Imaging PET (VIP) Pathfinder project and evaluated via Monte Carlo simulation. The volumetric spatial resolution of the VIP-PEM is expected to be up to 6 times better than standard commercial devices with a point spread function of 1 mm full width at half maximum (FWHM) in all directions. Pixelated CdTe detectors can also provide an energy resolution as low as 1.5% FWHM at 511 keV for a virtually pure signal with negligible contribution from scattered events. PMID:23750176

  1. Inspection focus technology of space tridimensional mapping camera based on astigmatic method

    NASA Astrophysics Data System (ADS)

    Wang, Zhi; Zhang, Liping

    2010-10-01

    The CCD plane of a space tridimensional mapping camera can deviate from the focal plane (including deviation due to a change in camera focal length) under space environmental conditions and under the vibration and shock of satellite launch, and image resolution is then degraded by the resulting defocus. For a tridimensional mapping camera, variations in the principal point position and focal length affect the positioning accuracy of ground targets. The conventional solution is to calibrate, under vacuum and within the focusing range, the position of the CCD plane against the code of a photoelectric encoder; when the camera defocuses in orbit, the magnitude and direction of the defocus are obtained from the photoelectric encoder, and a focusing mechanism driven by a stepper motor compensates the defocus of the CCD plane. However, if the camera focal length itself changes under space environmental conditions or launch vibration and shock, this focusing method becomes meaningless. Therefore, a measuring and focusing method based on astigmation is put forward: a quadrant detector is adopted to measure the astigmation caused by the deviation of the CCD plane, and, by reference to the calibrated relation between the CCD plane position and the astigmation, the deviation vector of the CCD plane can be obtained. This method covers all factors causing deviation of the CCD plane. Experimental results show that the focusing resolution of the mapping camera focusing mechanism based on the astigmatic method can reach 0.25 μm.
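    The quadrant-detector measurement at the heart of the astigmatic method can be sketched as the classic focus error signal formed from the four quadrant intensities: in focus the spot is circular and the signal is zero, while defocus makes the spot elliptical and the signal's sign gives the direction. This illustrates the principle only, not the paper's calibrated relation.

```python
def focus_error(a, b, c, d):
    """Astigmatic-method focus error signal from quadrant intensities
    A..D (diagonal pairing): FES = ((A + C) - (B + D)) / (A + B + C + D)."""
    total = a + b + c + d
    if total == 0:
        raise ValueError("no light on the detector")
    return ((a + c) - (b + d)) / total

fes_focused = focus_error(1.0, 1.0, 1.0, 1.0)   # circular spot -> 0.0
fes_defocus = focus_error(2.0, 1.0, 2.0, 1.0)   # elliptical spot -> 1/3
```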

  2. External Mask Based Depth and Light Field Camera

    DTIC Science & Technology

    2013-12-08

    ...improve many computer vision problems such as segmentation, stabilization and material classification. However, the current light field cameras have... material classification and recognition. We note that currently our design is applicable only for static scenes since we cycle through a 5×5 array of...

  3. Base Intrusion Schottky Barrier IR Assessment Camera Study.

    DTIC Science & Technology

    1981-09-01

    ...detection line sensors. The program includes coverage studies to determine requirements for array size and camera complexity to provide cost-effective... addition, hardware studies are being conducted to determine design requirements and specifications for development and for future field testing of an... Since the early 1970s, RCA has been actively engaged in the development of IRI Schottky barrier line and area FPAs for the Air Force RADC Deputy for

  4. Streak camera based SLR receiver for two color atmospheric measurements

    NASA Technical Reports Server (NTRS)

    Varghese, Thomas K.; Clarke, Christopher; Oldham, Thomas; Selden, Michael

    1993-01-01

    To realize accurate two-color differential measurements, an image digitizing system with variable spatial resolution was designed, built, and integrated with a photon-counting picosecond streak camera, yielding a temporal scan resolution better than 300 femtoseconds/pixel. The streak camera is configured to operate with 3 spatial channels; two of these support green (532 nm) and UV (355 nm) while the third accommodates reference pulses (764 nm) for real-time calibration. Critical parameters affecting differential timing accuracy, such as pulse width and shape, number of received photons, streak camera/imaging system nonlinearities, dynamic range, and noise characteristics, were investigated to optimize the system for accurate differential delay measurements. The streak camera output image consists of three image fields; each field is 1024 pixels along the time axis and 16 pixels across the spatial axis. Each of the image fields may be independently positioned across the spatial axis. Two of the image fields are used for the two wavelengths used in the experiment; the third window measures the temporal separation of a pair of diode laser pulses which verify the streak camera sweep speed for each data frame. The sum of the 16 pixel intensities across each of the 1024 temporal positions for the three data windows is used to extract the three waveforms. The waveform data is processed using an iterative three-point running average filter (10 to 30 iterations are used) to remove high-frequency structure. The pulse-pair separations are determined using half-max and centroid analyses. Rigorous experimental verification has demonstrated that this simplified process provides the best measurement accuracy. To calibrate the receiver system sweep, two laser pulses with precisely known temporal separation are scanned along the full length of the sweep axis. The experimental measurements are then modeled using polynomial regression to obtain a best fit to the data. Data
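    The described filtering and pulse-location steps are easy to sketch: an iterated three-point running average suppresses high-frequency structure, and a centroid gives the pulse position in pixel units, from which pulse-pair separations follow. The waveform below is illustrative data, not the experiment's.

```python
def smooth(waveform, iterations=20):
    """Iterated three-point running average, as used to remove
    high-frequency structure before pulse analysis. Endpoints are kept."""
    w = list(waveform)
    for _ in range(iterations):
        w = [w[0]] + [(w[i - 1] + w[i] + w[i + 1]) / 3.0
                      for i in range(1, len(w) - 1)] + [w[-1]]
    return w

def centroid(waveform):
    """Centroid position of a pulse, in sample (pixel) units."""
    total = sum(waveform)
    return sum(i * v for i, v in enumerate(waveform)) / total

data = [0, 0, 1, 4, 9, 4, 1, 0, 0]       # symmetric pulse centred at pixel 4
sep = centroid(smooth(data, iterations=10))   # stays at 4.0 by symmetry
```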

  5. Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task.

    PubMed

    Bott, Nicholas T; Lange, Alex; Rentz, Dorene; Buffalo, Elizabeth; Clopton, Paul; Zola, Stuart

    2017-01-01

    Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive "window on the brain," and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and the 3 FPS built-in web camera at each of the three visits (r = 0.88-0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81-0.88). There were strong relationships on VPC mean novelty preference score between the 10, 5, and 3 FPS training sets (r = 0.88-0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high-frame-rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as fixation points.
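
    As a minimal illustration of the statistic underlying these results, a stdlib-only Pearson correlation between two paired lists of novelty preference scores (the data below are invented, not from the study) might look like:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between paired score lists,
    e.g. automated 60 FPS scores vs. manual 3 FPS web camera scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```
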

  6. Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task

    PubMed Central

    Bott, Nicholas T.; Lange, Alex; Rentz, Dorene; Buffalo, Elizabeth; Clopton, Paul; Zola, Stuart

    2017-01-01

    Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive “window on the brain,” and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and the 3 FPS built-in web camera at each of the three visits (r = 0.88–0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81–0.88). There were strong relationships on VPC mean novelty preference score between the 10, 5, and 3 FPS training sets (r = 0.88–0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high-frame-rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as fixation points.

  7. Design of an infrared camera based aircraft detection system for laser guide star installations

    SciTech Connect

    Friedman, H.; Macintosh, B.

    1996-03-05

    There have been incidents in which the irradiance resulting from laser guide stars has temporarily blinded pilots or passengers of aircraft. An aircraft detection system based on passive near-infrared cameras (instead of active radar) is described in this report.

  8. The Feasibility of Performing Particle Tracking Based Flow Measurements with Acoustic Cameras

    DTIC Science & Technology

    2017-08-01

    ERDC/CHL SR-17-1. Dredging Operations and Environmental Research Program. The Feasibility of Performing Particle-Tracking-Based Flow Measurements with Acoustic Cameras. David L. Young and Brian C. McFall, Coastal and Hydraulics Laboratory, U.S. Army Engineer Research and Development Center.

  9. Ultrashort megaelectronvolt positron beam generation based on laser-accelerated electrons

    SciTech Connect

    Xu, Tongjun; Shen, Baifei; Xu, Jiancai; Li, Shun; Yu, Yong; Li, Jinfeng; Lu, Xiaoming; Wang, Cheng; Wang, Xinliang; Liang, Xiaoyan; Leng, Yuxin; Li, Ruxin; Xu, Zhizhan

    2016-03-15

    Experimental generation of ultrashort MeV positron beams with high intensity and high density using a compact laser-driven setup is reported. A high-density gas jet is employed to generate MeV electrons with high charge; a charge-neutralized MeV positron beam with high density is then obtained as the laser-accelerated electrons irradiate high-Z solid targets. This is a novel electron–positron source for the study of laboratory astrophysics. Moreover, the MeV positron beam is pulsed with an ultrashort duration of tens of femtoseconds and has a high peak intensity of 7.8 × 10²¹ s⁻¹, thus allowing specific studies of fast kinetics in millimeter-thick materials with high time resolution and showing potential for applications in positron annihilation spectroscopy.

  10. One high-accuracy camera calibration algorithm based on computer vision images

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    Camera calibration is the first step of computer vision and one of the most active research fields today. In order to improve measurement precision, the internal parameters of the camera should be accurately calibrated, so a high-accuracy camera calibration algorithm is proposed based on images of planar or tridimensional targets. Using this algorithm, the internal parameters of the camera are calibrated against the existing planar target in the vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is markedly improved compared with the conventional linear algorithm, the general Tsai algorithm, and Zhang Zhengyou's calibration algorithm. The proposed algorithm satisfies the needs of computer vision and provides a reference for precise measurement of relative position and attitude.

  11. Multitarget visual tracking based effective surveillance with cooperation of multiple active cameras.

    PubMed

    Huang, Cheng-Ming; Fu, Li-Chen

    2011-02-01

    This paper presents a tracking-based surveillance system capable of tracking multiple moving objects, with almost real-time response, through the effective cooperation of multiple pan-tilt cameras. To construct this surveillance system, a distributed camera agent, which tracks multiple moving objects independently, is first developed. The particle filter is extended with a target depth estimate to track multiple targets that may overlap with one another. A strategy to select the suboptimal camera action is then proposed for a camera mounted on a pan-tilt platform that has been assigned to track multiple targets simultaneously within its limited field of view. This strategy is based on mutual information and the Monte Carlo method to maintain coverage of the tracked targets. Finally, so that a surveillance system with a small number of active cameras can effectively monitor a wide space, the system aims to maximize the number of targets tracked. We further propose a hierarchical camera selection and task assignment strategy, known as the online position strategy, to integrate all of the distributed camera agents. The overall performance of the multicamera surveillance system has been verified with computer simulations and extensive experiments.
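
    A particle filter like the one at the heart of each camera agent alternates prediction, weighting, and resampling. A generic systematic-resampling step (a common choice, not necessarily the authors' exact implementation) can be sketched as:

```python
import random

def systematic_resample(particles, weights):
    """One systematic-resampling step of a particle filter: draw N
    particles with probability proportional to their weights, using a
    single random offset and evenly spaced pointers through the
    cumulative weight distribution."""
    n = len(particles)
    cumulative, total = [], 0.0
    for w in weights:
        total += w
        cumulative.append(total)
    step = total / n
    u = random.uniform(0, step)
    resampled, j = [], 0
    for _ in range(n):
        while cumulative[j] < u:
            j += 1
        resampled.append(particles[j])
        u += step
    return resampled
```
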

  12. A method of diameter measurement for spur gear based on camera calibration

    NASA Astrophysics Data System (ADS)

    Wu, Ziyue; Geng, Jinfeng; Xu, Zhe

    2012-04-01

    Camera calibration is the basis of putting computer vision technology into practice. This paper proposes a new method, based on camera calibration, for measuring the diameter of a spur gear, and analyses the errors arising from calibration and measurement. The method first obtains the intrinsic and extrinsic parameters by camera calibration, then transforms the feature points extracted from the image plane of the gear into the 3D world coordinate system, and finally computes the distances between the feature points to yield diameter values. The experimental results demonstrate that the method is simple, quick, easy to implement, highly precise, and rarely limited by the size of the target.
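
    The projective mapping from image to world coordinates that underlies such a measurement can be illustrated for the planar case; the homography values below are hypothetical stand-ins for a real calibration result:

```python
def apply_homography(H, point):
    """Map an image point (u, v) to planar world coordinates (x, y)
    with a 3x3 homography H, standing in for the calibrated
    image-to-world transform."""
    u, v = point
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

def world_distance(H, p1, p2):
    """Diameter estimate: Euclidean distance between two feature
    points after projection into the world plane (units of H, e.g. mm)."""
    (x1, y1), (x2, y2) = apply_homography(H, p1), apply_homography(H, p2)
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
```
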

  13. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    PubMed

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using two cameras attached to a high-rigidity support, along with push broom imaging, is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor in the geometric quality of complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After an on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper subtly uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining geometric accuracy.

  14. Facial skin color measurement based on camera colorimetric characterization

    NASA Astrophysics Data System (ADS)

    Yang, Boquan; Zhou, Changhe; Wang, Shaoqing; Fan, Xin; Li, Chao

    2016-10-01

    The objective measurement of facial skin color and its variance is of great significance, as much information can be obtained from it. In this paper, we developed a new skin color measurement procedure with the following parts: first, a new skin tone color checker made from the Pantone Skin Tone Color Checker was designed for camera colorimetric characterization; second, the chromaticity of the light source was estimated via a new scene illumination estimation method that draws on several previous algorithms; third, chromatic adaptation was used to convert the input facial image into an output facial image that appears to have been taken under a canonical light; finally, the validity and accuracy of our method were verified by comparing the results obtained by our procedure with those obtained by a spectrophotometer.
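
    The chromatic adaptation step can be illustrated with a simple diagonal (von Kries-style) transform; working directly in RGB, and the illuminant values used here, are simplifying assumptions rather than the paper's exact pipeline:

```python
def von_kries_adapt(rgb, src_illuminant, dst_illuminant):
    """Diagonal chromatic adaptation: scale each channel by the ratio
    of the canonical (destination) illuminant to the estimated scene
    illuminant. A full pipeline would typically adapt in a
    cone-response space such as Bradford rather than raw RGB."""
    return tuple(c * d / s for c, s, d in zip(rgb, src_illuminant, dst_illuminant))
```
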

  15. Camera-based curvature measurement of a large incandescent object

    NASA Astrophysics Data System (ADS)

    Ollikkala, Arttu V. H.; Kananen, Timo P.; Mäkynen, Anssi J.; Holappa, Markus

    2013-04-01

    The goal of this work was to implement a low-cost machine vision system to help the roller operator estimate the amount of strip camber during the rolling process. The machine vision system, comprising a single camera, a standard PC and a LabVIEW-written program using straightforward image analysis, determines the magnitude and direction of camber and presents the results in both numerical and graphical form on the computer screen. The system was calibrated with an LED set-up, which was also used to validate the accuracy of the system by mimicking strip curvatures. The validation showed that the maximum difference between the true and measured values was less than ±4 mm (k=0.95) within the 22-meter-long test pattern.
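
    The camber computation itself reduces to the deviation of detected edge points from a chord; a minimal sketch (coordinates hypothetical) is:

```python
def camber(points):
    """Maximum signed perpendicular deviation of strip-edge points from
    the chord joining the first and last point: the magnitude gives the
    amount of camber and the sign its direction."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    worst = 0.0
    for x, y in points[1:-1]:
        d = ((x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)) / length
        if abs(d) > abs(worst):
            worst = d
    return worst
```
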

  16. FPGA-based data acquisition system for a Compton camera

    NASA Astrophysics Data System (ADS)

    Nurdan, K.; Çonka-Nurdan, T.; Besch, H. J.; Freisleben, B.; Pavel, N. A.; Walenta, A. H.

    2003-09-01

    A data acquisition (DAQ) system with custom back-plane and custom readout boards has been developed for a Compton camera prototype. The DAQ system consists of two layers. The first layer has units for parallel high-speed analog-to-digital conversion and online data pre-processing. The second layer has a central board to form a general event trigger and to build the data structure for the event. This modularity and the use of field programmable gate arrays make the whole DAQ system highly flexible and adaptable to modified experimental setups. The design specifications, the general architecture of the Trigger and DAQ system and the implemented readout protocols are presented in this paper.

  17. An Educational PET Camera Model

    ERIC Educational Resources Information Center

    Johansson, K. E.; Nilsson, Ch.; Tegner, P. E.

    2006-01-01

    Positron emission tomography (PET) cameras are now in widespread use in hospitals. A model of a PET camera has been installed in Stockholm House of Science and is used to explain the principles of PET to school pupils as described here.

  18. An Educational PET Camera Model

    ERIC Educational Resources Information Center

    Johansson, K. E.; Nilsson, Ch.; Tegner, P. E.

    2006-01-01

    Positron emission tomography (PET) cameras are now in widespread use in hospitals. A model of a PET camera has been installed in Stockholm House of Science and is used to explain the principles of PET to school pupils as described here.

  19. Person re-identification across aerial and ground-based cameras by deep feature fusion

    NASA Astrophysics Data System (ADS)

    Schumann, Arne; Metzler, Jürgen

    2017-05-01

    Person re-identification is the task of correctly matching visual appearances of the same person in image or video data while distinguishing appearances of different persons. The traditional setup for re-identification is a network of fixed cameras. However, in recent years mobile aerial cameras mounted on unmanned aerial vehicles (UAV) have become increasingly useful for security and surveillance tasks. Aerial data has many characteristics different from typical camera network data. Thus, re-identification approaches designed for a camera network scenario can be expected to suffer a drop in accuracy when applied to aerial data. In this work, we investigate the suitability of features, which were shown to give robust results for re-identification in camera networks, for the task of re-identifying persons between a camera network and a mobile aerial camera. Specifically, we apply hand-crafted region covariance features and features extracted by convolutional neural networks which were learned on separate data. We evaluate their suitability for this new and as yet unexplored scenario. We investigate common fusion methods to combine the hand-crafted and learned features and propose our own deep fusion approach which is already applied during training of the deep network. We evaluate features and fusion methods on our own dataset. The dataset consists of fourteen people moving through a scene recorded by four fixed ground-based cameras and one mobile camera mounted on a small UAV. We discuss strengths and weaknesses of the features in the new scenario and show that our fusion approach successfully leverages the strengths of each feature and outperforms all single features significantly.

  20. Minicyclotron-based technology for the production of positron-emitting labelled radiopharmaceuticals

    SciTech Connect

    Barrio, J.R.; Bida, G.; Satyamurthy, N.; Padgett, H.C.; MacDonald, N.S.; Phelps, M.E.

    1983-01-01

    The use of short-lived positron emitters such as carbon 11, fluorine 18, nitrogen 13, and oxygen 15, together with positron-emission tomography (PET) for probing the dynamics of physiological and biochemical processes in the normal and diseased states in man is presently an active area of research. One of the pivotal elements for the continued growth and success of PET is the routine delivery of the desired positron emitting labelled compounds. To date, the cyclotron remains the accelerator of choice for production of medically useful radionuclides. The development of the technology to bring the use of cyclotrons to a clinical setting is discussed. (ACR)

  1. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-08-31

    Gaze tracking is the technology that identifies the region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth of field (DOF) of a gaze tracking camera differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers may also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers may be able to implement an optimal gaze tracking system more easily. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest.
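
    The depth-of-field trade-off discussed above can be illustrated with the standard thin-lens relations; the focal length, aperture, and circle of confusion below are illustrative values, not the study's lens parameters:

```python
def depth_of_field(focal_mm, f_number, focus_mm, coc_mm=0.015):
    """Near and far limits of acceptable focus under a thin-lens model,
    given focal length, f-number, focus distance, and circle of
    confusion (all in mm; values here are hypothetical)."""
    hyper = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
    near = hyper * focus_mm / (hyper + (focus_mm - focal_mm))
    if focus_mm >= hyper:
        return near, float('inf')
    far = hyper * focus_mm / (hyper - (focus_mm - focal_mm))
    return near, far
```

    Stopping the lens down (a larger f-number) widens the usable DOF at the cost of light, which is one axis of the design trade-off such a study must measure against real head movements.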

  2. Localization-based super-resolution microscopy with an sCMOS camera part II: experimental methodology for comparing sCMOS with EMCCD cameras.

    PubMed

    Long, Fan; Zeng, Shaoqun; Huang, Zhen-Li

    2012-07-30

    There is currently a lively debate among industry and academic researchers over whether the newly developed scientific-grade Complementary Metal Oxide Semiconductor (sCMOS) cameras could become the image sensors of choice in localization-based super-resolution microscopy. To help researchers find answers to this question, here we report an experimental methodology for quantitatively comparing the performance of low-light cameras in single-molecule detection (characterized via image SNR) and localization (via localization accuracy). We found that a newly launched sCMOS camera can deliver superior imaging performance to a popular Electron Multiplying Charge Coupled Device (EMCCD) camera over a signal range (15-12000 photons/pixel) more than sufficient for typical localization-based super-resolution microscopy.
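
    A simple per-pixel SNR model makes the comparison concrete: EM gain roughly doubles the shot-noise variance (excess noise factor ≈ √2), while an sCMOS pixel instead adds a small read-noise term. The numbers here are illustrative assumptions, not the paper's measured values:

```python
def image_snr(signal, background, read_noise, excess_noise_factor=1.0):
    """Single-pixel SNR: signal (photoelectrons) over the combined shot
    noise of signal + background and the read noise, with an optional
    excess noise factor (~sqrt(2) under EM gain) inflating the
    shot-noise variance."""
    f2 = excess_noise_factor ** 2
    noise = (f2 * (signal + background) + read_noise ** 2) ** 0.5
    return signal / noise
```

    Under this model, at high photon counts a low-read-noise sCMOS pixel overtakes an EMCCD pixel whose excess noise dominates, consistent with the trend the paper reports.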

  3. MARS: a mouse atlas registration system based on a planar x-ray projector and an optical camera

    NASA Astrophysics Data System (ADS)

    Wang, Hongkai; Stout, David B.; Taschereau, Richard; Gu, Zheng; Vu, Nam T.; Prout, David L.; Chatziioannou, Arion F.

    2012-10-01

    This paper introduces a mouse atlas registration system (MARS), composed of a stationary top-view x-ray projector and a side-view optical camera, coupled to a mouse atlas registration algorithm. This system uses the x-ray and optical images to guide a fully automatic co-registration of a mouse atlas with each subject, in order to provide anatomical reference for small animal molecular imaging systems such as positron emission tomography (PET). To facilitate the registration, a statistical atlas that accounts for inter-subject anatomical variations was constructed based on 83 organ-labeled mouse micro-computed tomography (CT) images. The statistical shape model and conditional Gaussian model techniques were used to register the atlas with the x-ray image and optical photo. The accuracy of the atlas registration was evaluated by comparing the registered atlas with the organ-labeled micro-CT images of the test subjects. The results showed excellent registration accuracy of the whole-body region, and good accuracy for the brain, liver, heart, lungs and kidneys. In its implementation, the MARS was integrated with a preclinical PET scanner to deliver combined PET/MARS imaging, and to facilitate atlas-assisted analysis of the preclinical PET images.

  4. MARS: a mouse atlas registration system based on a planar x-ray projector and an optical camera.

    PubMed

    Wang, Hongkai; Stout, David B; Taschereau, Richard; Gu, Zheng; Vu, Nam T; Prout, David L; Chatziioannou, Arion F

    2012-10-07

    This paper introduces a mouse atlas registration system (MARS), composed of a stationary top-view x-ray projector and a side-view optical camera, coupled to a mouse atlas registration algorithm. This system uses the x-ray and optical images to guide a fully automatic co-registration of a mouse atlas with each subject, in order to provide anatomical reference for small animal molecular imaging systems such as positron emission tomography (PET). To facilitate the registration, a statistical atlas that accounts for inter-subject anatomical variations was constructed based on 83 organ-labeled mouse micro-computed tomography (CT) images. The statistical shape model and conditional Gaussian model techniques were used to register the atlas with the x-ray image and optical photo. The accuracy of the atlas registration was evaluated by comparing the registered atlas with the organ-labeled micro-CT images of the test subjects. The results showed excellent registration accuracy of the whole-body region, and good accuracy for the brain, liver, heart, lungs and kidneys. In its implementation, the MARS was integrated with a preclinical PET scanner to deliver combined PET/MARS imaging, and to facilitate atlas-assisted analysis of the preclinical PET images.

  5. Comparison of FDG PET and positron coincidence detection imaging using a dual-head gamma camera with 5/8-inch NaI(Tl) crystals in patients with suspected body malignancies.

    PubMed

    Boren, E L; Delbeke, D; Patton, J A; Sandler, M P

    1999-04-01

    The purpose of this study was to compare the diagnostic accuracy of fluorine-18 fluorodeoxyglucose (FDG) images obtained with (a) a dual-head coincidence gamma camera (DHC) equipped with 5/8-inch-thick NaI(Tl) crystals and parallel slit collimators and (b) a dedicated positron emission tomograph (PET) in a series of 28 patients with known or suspected malignancies. Twenty-eight patients with known or suspected malignancies underwent whole-body FDG PET imaging (Siemens ECAT 933) after injection of approximately 10 mCi of 18F-FDG. FDG DHC images were then acquired for 30 min over the regions of interest using a dual-head gamma camera (VariCam, Elscint). The images were reconstructed in the normal mode, using photopeak/photopeak, photopeak/Compton, and Compton/photopeak coincidence events. FDG PET imaging found 45 lesions ranging in size from 1 cm to 7 cm in the 28 patients. FDG DHC imaging detected 35/45 (78%) of these lesions. Of the ten lesions not seen with FDG DHC imaging, eight were less than 1.5 cm in size, and two were located centrally within the abdomen, where they suffered marked attenuation effects. The lesions were classified into three categories: thorax (n=24), liver (n=12), and extrahepatic abdominal (n=9). FDG DHC imaging identified 100% of lesions above 1.5 cm in the thorax group and 78% of those below 1.5 cm, for an overall total of 83%. FDG DHC imaging identified 100% of lesions above 1.5 cm in the liver and 43% of lesions below 1.5 cm, for an overall total of 67%. FDG DHC imaging identified 78% of lesions above 1.5 cm in the extrahepatic abdominal group; there were no lesions below 1.5 cm in this group. Overall, FDG coincidence imaging using a dual-head gamma camera detected 90% of lesions greater than 1.5 cm. These data suggest that DHC imaging can be used clinically in well-defined diagnostic situations to differentiate benign from malignant lesions.

  6. Defining habitat covariates in camera-trap based occupancy studies.

    PubMed

    Niedballa, Jürgen; Sollmann, Rahel; bin Mohamed, Azlan; Bender, Johannes; Wilting, Andreas

    2015-11-24

    In species-habitat association studies, both the type and spatial scale of habitat covariates need to match the ecology of the focal species. We assessed the potential of high-resolution satellite imagery for generating habitat covariates using camera-trapping data from Sabah, Malaysian Borneo, within an occupancy framework. We tested the predictive power of covariates generated from satellite imagery at different resolutions and extents (focal patch sizes, 10-500 m around sample points) on estimates of occupancy patterns of six small- to medium-sized mammal species/species groups. High-resolution land cover information had considerably more model support for small, patchily distributed habitat features, whereas it had no advantage for large, homogeneous habitat features. A comparison of different focal patch sizes including remote sensing data and an in-situ measure showed that patches with a 50-m radius had most support for the target species. Thus, high-resolution satellite imagery proved to be particularly useful in heterogeneous landscapes, and can be used as a surrogate for certain in-situ measures, reducing field effort in logistically challenging environments. Additionally, remotely sensed data provide more flexibility in defining appropriate spatial scales, which we show to impact estimates of wildlife-habitat associations.
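
    Extracting a focal-patch covariate from classified imagery amounts to averaging a binary habitat grid within a radius of each camera-trap location; a toy sketch (grid and radius hypothetical) is:

```python
def focal_patch_proportion(grid, row, col, radius_cells):
    """Proportion of habitat cells (value 1) within a circular focal
    patch of the given radius (in cells) around a camera-trap location
    on a binary land-cover grid."""
    inside = habitat = 0
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if (r - row) ** 2 + (c - col) ** 2 <= radius_cells ** 2:
                inside += 1
                habitat += grid[r][c]
    return habitat / inside
```

    Computed at several radii (e.g. the 10-500 m range above), such proportions become candidate covariates whose model support can be compared within the occupancy framework.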

  7. Defining habitat covariates in camera-trap based occupancy studies

    PubMed Central

    Niedballa, Jürgen; Sollmann, Rahel; Mohamed, Azlan bin; Bender, Johannes; Wilting, Andreas

    2015-01-01

    In species-habitat association studies, both the type and spatial scale of habitat covariates need to match the ecology of the focal species. We assessed the potential of high-resolution satellite imagery for generating habitat covariates using camera-trapping data from Sabah, Malaysian Borneo, within an occupancy framework. We tested the predictive power of covariates generated from satellite imagery at different resolutions and extents (focal patch sizes, 10–500 m around sample points) on estimates of occupancy patterns of six small- to medium-sized mammal species/species groups. High-resolution land cover information had considerably more model support for small, patchily distributed habitat features, whereas it had no advantage for large, homogeneous habitat features. A comparison of different focal patch sizes including remote sensing data and an in-situ measure showed that patches with a 50-m radius had most support for the target species. Thus, high-resolution satellite imagery proved to be particularly useful in heterogeneous landscapes, and can be used as a surrogate for certain in-situ measures, reducing field effort in logistically challenging environments. Additionally, remotely sensed data provide more flexibility in defining appropriate spatial scales, which we show to impact estimates of wildlife-habitat associations. PMID:26596779

  8. Multi-camera calibration based on openCV and multi-view registration

    NASA Astrophysics Data System (ADS)

    Deng, Xiao-ming; Wan, Xiong; Zhang, Zhi-min; Leng, Bi-yan; Lou, Ning-ning; He, Shuai

    2010-10-01

    For multi-camera calibration systems, a method combining OpenCV-based calibration with multi-view registration is proposed. First, using Zhang's calibration plate (an 8×8 chessboard pattern) and three industrial-grade CCD cameras, nine groups of images are shot from different angles, and OpenCV is used to quickly calibrate the intrinsic parameters of each camera. Second, based on the correspondences between the camera views, the computation of the rotation and translation matrices is formulated as a constrained optimization problem. According to the Kuhn-Tucker theorem and the properties of the derivative of a matrix-valued function, formulae for the rotation and translation matrices are deduced using a singular value decomposition algorithm. Afterwards, an iterative method is used to obtain the full coordinate transformation between pair-wise views; precise multi-view registration can thus be conveniently achieved, yielding the relative camera positions (the extrinsic parameters). Experimental results show that the method is practical for multi-camera calibration.
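
    The SVD-based recovery of a rotation and translation between two views of common points can be sketched with a standard Kabsch-style solution (a generic formulation, not the paper's exact derivation), assuming NumPy is available and the point sets are hypothetical:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that
    dst ≈ R @ src + t, via SVD of the centered cross-covariance
    (the Kabsch algorithm); the same decomposition underlies
    pair-wise view registration."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1).
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```
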

  9. Medium Format Camera Evaluation Based on the Latest Phase One Technology

    NASA Astrophysics Data System (ADS)

    Tölg, T.; Kemper, G.; Kalinski, D.

    2016-06-01

    In early 2016, Phase One Industrial launched a new high resolution camera with a 100 MP CMOS sensor. CCD sensors excel at ISOs up to 200, but in lower light conditions exposure time must be increased and Forward Motion Compensation (FMC) has to be employed to avoid smearing the images. The CMOS sensor has an ISO range of up to 6400, which enables short exposures instead of using FMC. This paper aims to evaluate the strengths of each of the sensor types based on real missions over a test field in Speyer, Germany, used for airborne camera calibration. The test field area has about 30 Ground Control Points (GCPs), which provide an ideal scenario for a proper geometric evaluation of the cameras. The test field includes both a Siemens star and scale bars to reveal any blurring caused by forward motion. The comparison showed that both cameras offer high accuracy photogrammetric results with post-processing, including triangulation, calibration, orthophoto and DEM generation. The forward motion effect can be compensated by a fast shutter speed and the higher ISO range of the CMOS-based camera. The results showed no significant differences between the cameras.

  10. The E166 Experiment: Undulator-Based Production of Polarized Positrons

    NASA Astrophysics Data System (ADS)

    Mikhailichenko, A.; Alexander, G.; Batygin, Y.; Berridge, S.; Bharadwaj, V.; Bower, G.; Bugg, W.; Decker, F.-J.; Dollan, R.; Efrimenko, Y.; Gharibyan, V.; Hast, C.; Iverson, R.; Kolanoski, H.; Kovermann, J.; Laihem, K.; Lohse, T.; McDonald, K. T.; Moortgat-Pick, G. A.; Pahl, P.; Pitthan, R.; Pöschl, R.; Reinherz-Aronis, E.; Riemann, S.; Schälicke, A.; Schüler, K. P.; Schweizer, T.; Scott, D.; Sheppard, J. C.; Stahl, A.; Szalata, Z.; Walz, D.; Weidemann, A.

    2007-06-01

    A proof-of-principle experiment has been carried out in the Final Focus Test Beam (FFTB) at SLAC to demonstrate production of polarized positrons in a manner suitable for implementation at the ILC. A helical undulator of 2.54 mm period and 1-m length produced circularly polarized photons of first harmonic endpoint energy of 8 MeV when traversed by a 46.6 GeV electron beam. The polarized photons were converted to polarized positrons in a 0.2-radiation-length tungsten target. The polarization of these positrons was measured at several energies, with a peak value of ≈ 80% according to a preliminary analysis of the transmission polarimetry of photons obtained on reconversion of the positrons in a second tungsten target.
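As a sanity check on the quoted numbers, the on-axis first-harmonic photon energy of a helical undulator is E1 = 2γ²hc / (λu(1 + K²)). The beam energy and undulator period are from the abstract; the undulator strength K = 0.17 is an assumed value.

```python
# First-harmonic endpoint energy of a helical undulator, on axis:
#   E1 = 2 * gamma^2 * h*c / (lambda_u * (1 + K^2))
hc_eV_m = 1.23984193e-6          # Planck constant times c, in eV*m
gamma = 46.6e9 / 0.511e6         # 46.6 GeV beam / 0.511 MeV electron rest energy
K = 0.17                         # undulator strength (assumed, not in the abstract)
lambda_u = 2.54e-3               # undulator period, m
E1_eV = 2 * gamma**2 * hc_eV_m / (lambda_u * (1 + K**2))  # close to the quoted 8 MeV
```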

  11. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  12. Preclinical positron emission tomography scanner based on a monolithic annulus of scintillator: initial design study.

    PubMed

    Stolin, Alexander V; Martone, Peter F; Jaliparthi, Gangadhar; Raylman, Raymond R

    2017-01-01

    Positron emission tomography (PET) scanners designed for imaging of small animals have transformed translational research by reducing the necessity to invasively monitor physiology and disease progression. Virtually all of these scanners are based on the use of pixelated detector modules arranged in rings. This design, while generally successful, has some limitations. Specifically, use of discrete detector modules to construct PET scanners reduces detection sensitivity and can introduce artifacts in reconstructed images, requiring the use of correction methods. To address these challenges, and facilitate measurement of photon depth-of-interaction in the detector, we investigated a small animal PET scanner (called AnnPET) based on a monolithic annulus of scintillator. The scanner was created by placing 12 flat facets around the outer surface of the scintillator to accommodate placement of silicon photomultiplier arrays. Its performance characteristics were explored using Monte Carlo simulations and sections of the NEMA NU4-2008 protocol. Results from this study revealed that AnnPET's reconstructed spatial resolution is predicted to be [Formula: see text] full width at half maximum in the radial, tangential, and axial directions. Peak detection sensitivity is predicted to be 10.1%. Images of simulated phantoms (mini-hot rod and mouse whole body) yielded promising results, indicating the potential of this system for enhancing PET imaging of small animals.

  13. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography

    SciTech Connect

    Saha, Krishnendu; Straus, Kenneth J.; Glick, Stephen J.; Chen, Yu.

    2014-08-28

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.
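The computational payoff of the block-circulant structure can be sketched with a single circulant block: its product with a vector reduces to an element-wise multiplication in the FFT domain, so only the first column needs storing. This is a toy illustration of the structure, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
c = rng.random(N)                      # first column defines the whole circulant block
x = rng.random(N)                      # object in a polar-voxel basis (toy stand-in)

# Dense reference: C[i, j] = c[(i - j) mod N], requiring N^2 stored entries
C = np.array([[c[(i - j) % N] for j in range(N)] for i in range(N)])
direct = C @ x

# FFT route: O(N log N) work with only the first column stored
via_fft = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
```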

  14. 18F-Labeled Silicon-Based Fluoride Acceptors: Potential Opportunities for Novel Positron Emitting Radiopharmaceuticals

    PubMed Central

    Bernard-Gauthier, Vadim; Wängler, Carmen; Wängler, Bjoern; Schirrmacher, Ralf

    2014-01-01

    Background. Over the recent years, radiopharmaceutical chemistry has experienced a wide variety of innovative pushes towards finding both novel and unconventional radiochemical methods to introduce fluorine-18 into radiotracers for positron emission tomography (PET). These “nonclassical” labeling methodologies based on silicon-, boron-, and aluminium-18F chemistry deviate from commonplace bonding of an [18F]fluorine atom (18F) to either an aliphatic or aromatic carbon atom. One method in particular, the silicon-fluoride-acceptor isotopic exchange (SiFA-IE) approach, invalidates a dogma in radiochemistry that has been widely accepted for many years: the inability to obtain radiopharmaceuticals of high specific activity (SA) via simple IE. Methodology. The most advantageous feature of IE labeling in general is that labeling precursor and labeled radiotracer are chemically identical, eliminating the need to separate the radiotracer from its precursor. SiFA-IE chemistry proceeds in dipolar aprotic solvents at room temperature and below, entirely avoiding the formation of radioactive side products during the IE. Scope of Review. A great plethora of different SiFA species have been reported in the literature ranging from small prosthetic groups and other compounds of low molecular weight to labeled peptides and most recently affibody molecules. Conclusions. The literature over the last years (from 2006 to 2014) shows unambiguously that SiFA-IE and other silicon-based fluoride acceptor strategies relying on 18F− leaving group substitutions have the potential to become a valuable addition to radiochemistry. PMID:25157357

  15. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography

    NASA Astrophysics Data System (ADS)

    Saha, Krishnendu; Straus, Kenneth J.; Chen, Yu.; Glick, Stephen J.

    2014-08-01

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.

  16. Research of aerial camera focal plane micro-displacement measurement system based on Michelson interferometer

    NASA Astrophysics Data System (ADS)

    Wang, Shu-juan; Zhao, Yu-liang; Li, Shu-jun

    2014-09-01

    The position of the aerial camera focal plane is critical to imaging quality. In order to measure the focal-plane displacement introduced during maintenance, a new micro-displacement measuring system for the aerial camera focal plane, based on a Michelson interferometer, has been designed in this paper; it relies on the phase-modulation principle and uses interference to measure the focal-plane micro-displacement. The system takes a He-Ne laser as the light source and uses the Michelson interference arrangement to produce interference fringes; as the focal plane moves, the fringes shift periodically, and recording the periodic change of the fringes yields the focal-plane displacement. Taking a linear CCD and its driving system as the fringe pick-up tool and relying on a frequency-conversion and differentiating system, the system determines the moving direction of the focal plane. After data collection, filtering, amplification, threshold comparison, and counting, the CCD video signals of the interference fringes are sent to a computer and processed automatically, and the focal-plane micro-displacement is output. As a result, the focal-plane micro-displacement can be measured automatically by this system. Using a linear CCD as the fringe pick-up tool greatly improves the counting accuracy and almost eliminates manual counting error, improving the measurement accuracy of the system. The experiments demonstrate that the focal-plane displacement measurement accuracy is 0.2 nm, while laboratory tests and flights show that the focal-plane positioning is accurate and can satisfy the imaging requirements of the aerial camera.
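The fringe-counting principle reduces to displacement = N·λ/2 for N counted fringe periods; a minimal sketch with the He-Ne wavelength (the function name and fringe count are illustrative):

```python
HE_NE_WAVELENGTH_M = 632.8e-9   # He-Ne laser line used as the light source

def displacement_from_fringes(fringe_count, wavelength_m=HE_NE_WAVELENGTH_M):
    """Michelson interferometer: each full fringe period corresponds to a
    mirror (here: focal-plane) displacement of half a wavelength."""
    return fringe_count * wavelength_m / 2

# 100 counted fringe periods of focal-plane travel
d = displacement_from_fringes(100)
```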

  17. Hydrogenated amorphous silicon (a-Si:H) based gamma camera--Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Lee, Hyoung-Koo; Drewery, John S.; Hong, Wan S.; Jing, Tao; Kaplan, Selig N.; Mireshghi, Ali; Perez-Mendez, Victor

    1994-05-01

    A new gamma camera using a-Si:H photodetectors has been designed for the imaging of the heart and other small organs. In this new design the photomultiplier tubes and the position sensing circuitry are replaced by a 2D array of a-Si:H p-i-n pixel photodetectors and readout circuitry built on a substrate. Without the photomultiplier tubes this camera is lightweight and hence can be made portable. To predict the characteristics and performance of this new gamma camera we performed Monte Carlo simulations. In the simulations, 128 × 128 imaging arrays of various pixel sizes were used. 99mTc (140 keV) and 201Tl (70 keV) were used as radiation sources. From the simulations we could obtain the resolution of the camera and the overall system, and the blurring effects due to scattering in the phantom. Using the Wiener filter for image processing, restoration of the blurred image could be achieved. Simulation results of the a-Si:H based gamma camera were compared with those of a conventional gamma camera.
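The Wiener restoration step mentioned above can be sketched in the frequency domain; the kernel, image size, and regularization constant below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def wiener_deconvolve(blurred, h, k=1e-4):
    """Frequency-domain Wiener filter W = H* / (|H|^2 + k); the constant k
    stands in for the noise-to-signal power ratio (assumed flat here)."""
    H = np.fft.fft2(h, s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Point source blurred by a 3x3 box kernel (circular convolution via FFT)
img = np.zeros((32, 32)); img[16, 16] = 1.0
h = np.ones((3, 3)) / 9.0
H = np.fft.fft2(h, s=img.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
restored = wiener_deconvolve(blurred, h)
```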

  18. Extrinsic calibration of a non-overlapping camera network based on close-range photogrammetry.

    PubMed

    Dong, Shuai; Shao, Xinxing; Kang, Xin; Yang, Fujun; He, Xiaoyuan

    2016-08-10

    In this paper, an extrinsic calibration method for a non-overlapping camera network is presented based on close-range photogrammetry. The method does not require calibration targets or the cameras to be moved. The visual sensors are relatively motionless and do not see the same area at the same time. The proposed method combines the multiple cameras using some arbitrarily distributed encoded targets. The calibration procedure consists of three steps: reconstructing the three-dimensional (3D) coordinates of the encoded targets using a hand-held digital camera, performing the intrinsic calibration of the camera network, and calibrating the extrinsic parameters of each camera with only one image. A series of experiments, including 3D reconstruction, rotation, and translation, are employed to validate the proposed approach. The results show that the relative error for the 3D reconstruction is smaller than 0.003%, the relative errors of both rotation and translation are less than 0.066%, and the re-projection error is only 0.09 pixels.
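Once each camera's pose is known in the common photogrammetric frame, the extrinsics between two non-overlapping cameras follow by composing 4×4 homogeneous transforms; the poses below are illustrative, not the paper's data.

```python
import numpy as np

def pose(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def Rz(a):
    """Rotation about the z axis by angle a (radians)."""
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

T_w_c1 = pose(Rz(0.2), np.array([1.0, 0.0, 0.0]))   # world -> camera 1 coordinates
T_w_c2 = pose(Rz(0.7), np.array([0.0, 2.0, 0.0]))   # world -> camera 2 coordinates
T_c1_c2 = T_w_c2 @ np.linalg.inv(T_w_c1)            # camera 1 -> camera 2 extrinsics

p_world = np.array([0.3, -0.2, 1.5, 1.0])           # a homogeneous world point
```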

  19. Positron emission tomography for the assessment of myocardial viability: an evidence-based analysis.

    PubMed

    2010-01-01

    In July 2009, the Medical Advisory Secretariat (MAS) began work on Non-Invasive Cardiac Imaging Technologies for the Assessment of Myocardial Viability, an evidence-based review of the literature surrounding different cardiac imaging modalities to ensure that appropriate technologies are accessed by patients undergoing viability assessment. This project came about when the Health Services Branch at the Ministry of Health and Long-Term Care asked MAS to provide an evidentiary platform on the effectiveness and cost-effectiveness of non-invasive cardiac imaging modalities. After an initial review of the strategy and consultation with experts, MAS identified five key non-invasive cardiac imaging technologies that can be used for the assessment of myocardial viability: positron emission tomography, cardiac magnetic resonance imaging, dobutamine echocardiography, dobutamine echocardiography with contrast, and single photon emission computed tomography. A 2005 review conducted by MAS determined that positron emission tomography was more sensitive than dobutamine echocardiography and single photon emission computed tomography and dominated the other imaging modalities from a cost-effectiveness standpoint. However, there was inadequate evidence to compare positron emission tomography and cardiac magnetic resonance imaging. Thus, this report focuses on this comparison only. For both technologies, an economic analysis was also completed. The Non-Invasive Cardiac Imaging Technologies for the Assessment of Myocardial Viability is made up of the following reports, which can be publicly accessed at the MAS website at: www.health.gov.on.ca/mas or at www.health.gov.on.ca/english/providers/program/mas/mas_about.html: POSITRON EMISSION TOMOGRAPHY FOR THE ASSESSMENT OF MYOCARDIAL VIABILITY: An Evidence-Based Analysis; MAGNETIC RESONANCE IMAGING FOR THE ASSESSMENT OF MYOCARDIAL VIABILITY: An Evidence-Based Analysis. The objective of this analysis is to assess the effectiveness and safety of positron

  20. Simulation-based camera navigation training in laparoscopy-a randomized trial.

    PubMed

    Nilsson, Cecilia; Sorensen, Jette Led; Konge, Lars; Westen, Mikkel; Stadeager, Morten; Ottesen, Bent; Bjerrum, Flemming

    2017-05-01

    Inexperienced operating assistants are often tasked with the important role of handling camera navigation during laparoscopic surgery. Incorrect handling can lead to poor visualization, increased operating time, and frustration for the operating surgeon-all of which can compromise patient safety. The objectives of this trial were to examine how to train laparoscopic camera navigation and to explore the transfer of skills to the operating room. A randomized, single-center superiority trial with three groups: The first group practiced simulation-based camera navigation tasks (camera group), the second group practiced performing a simulation-based cholecystectomy (procedure group), and the third group received no training (control group). Participants were surgical novices without prior laparoscopic experience. The primary outcome was assessment of camera navigation skills during a laparoscopic cholecystectomy. The secondary outcome was technical skills after training, using a previously developed model for testing camera navigational skills. The exploratory outcome measured participants' motivation toward the task as an operating assistant. Thirty-six participants were randomized. No significant difference was found in the primary outcome between the three groups (p = 0.279). The secondary outcome showed no significant difference between the intervention groups, total time 167 s (95% CI, 118-217) and 194 s (95% CI, 152-236) for the camera group and the procedure group, respectively (p = 0.369). Both intervention groups were significantly faster than the control group, 307 s (95% CI, 202-412), p = 0.018 and p = 0.045, respectively. On the exploratory outcome, the control group scored higher on two dimensions: interest/enjoyment (p = 0.030) and perceived choice (p = 0.033). Simulation-based training improves the technical skills required for camera navigation, regardless of practicing camera navigation or the procedure itself. Transfer to the

  1. Status of the photomultiplier-based FlashCam camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Pühlhofer, G.; Bauer, C.; Eisenkolb, F.; Florin, D.; Föhr, C.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Koziol, J.; Lahmann, R.; Manalaysay, A.; Marszalek, A.; Rajda, P. J.; Reimer, O.; Romaszkan, W.; Rupinski, M.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Weitzel, Q.; Winiarski, K.; Zietara, K.

    2014-07-01

    The FlashCam project is preparing a camera prototype around a fully digital FADC-based readout system, for the medium sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The FlashCam design is the first fully digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components for digitization and triggering, and a high performance camera server as back end. It provides the option to easily implement different types of trigger algorithms as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. The readout of the front end modules into the camera server is Ethernet-based using standard Ethernet switches and a custom, raw Ethernet protocol. In the current implementation of the system, data transfer and back end processing rates of 3.8 GB/s and 2.4 GB/s have been achieved, respectively. Together with the dead-time-free front end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, high voltage-, control-, and monitoring systems, is a self-contained unit, mechanically detached from the front end modules. It interfaces to the digital readout system via analogue signal transmission. The horizontal integration of FlashCam is expected not only to be more cost efficient, but also to allow PDPs with different types of photon detectors to be adapted to the FlashCam readout system. By now, a 144-pixel "mini-camera" setup, fully equipped with photomultipliers, PDP electronics, and digitization/trigger electronics, has been realized and extensively tested. Preparations of the mechanics and a cooling system for a full-scale, 1764-pixel camera are ongoing. The paper describes the status of the project.
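A back-of-envelope check shows the quoted bandwidth is consistent with trigger rates of several tens of kHz; the samples-per-event and bytes-per-sample values below are assumptions, not FlashCam specifications.

```python
# Maximum sustained trigger rate supported by the 3.8 GB/s front-end readout
# for the full 1764-pixel camera, under assumed event-size parameters.
pixels = 1764
samples_per_event = 16      # assumed readout window length per trigger
bytes_per_sample = 2        # assumed (e.g. 12-bit samples stored in 16 bits)
event_bytes = pixels * samples_per_event * bytes_per_sample
max_rate_hz = 3.8e9 / event_bytes   # tens of kHz, matching the abstract's claim
```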

  2. A novel image reconstruction methodology based on inverse Monte Carlo analysis for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Kudrolli, Haris A.

    2001-04-01

    A three dimensional (3D) reconstruction procedure for Positron Emission Tomography (PET) based on inverse Monte Carlo analysis is presented. PET is a medical imaging modality which employs a positron emitting radio-tracer to give functional images of an organ's metabolic activity. This makes PET an invaluable tool in the detection of cancer and for in-vivo biochemical measurements. There are a number of analytical and iterative algorithms for image reconstruction of PET data. Analytical algorithms are computationally fast, but the assumptions intrinsic in the line integral model limit their accuracy. Iterative algorithms can apply accurate models for reconstruction and give improvements in image quality, but at an increased computational cost. These algorithms require the explicit calculation of the system response matrix, which may not be easy to calculate. This matrix gives the probability that a photon emitted from a certain source element will be detected in a particular detector line of response. The "Three Dimensional Stochastic Sampling" (SS3D) procedure implements iterative algorithms in a manner that does not require the explicit calculation of the system response matrix. It uses Monte Carlo techniques to simulate the process of photon emission from a source distribution and interaction with the detector. This technique has the advantage of being able to model complex detector systems and also take into account the physics of gamma ray interaction within the source and detector systems, which leads to an accurate image estimate. A series of simulation studies was conducted to validate the method using the Maximum Likelihood - Expectation Maximization (ML-EM) algorithm. The accuracy of the reconstructed images was improved by using an algorithm that required a priori knowledge of the source distribution. Means to reduce the computational time for reconstruction were explored by using parallel processors and algorithms that had faster convergence rates.
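The ML-EM algorithm used for validation has the familiar multiplicative update x ← x · Aᵀ(y / Ax) / Aᵀ1. Below is a minimal dense-matrix sketch of that update on a toy system, not the SS3D procedure itself.

```python
import numpy as np

def mlem(A, y, n_iters=500):
    """Maximum Likelihood - Expectation Maximization for emission tomography:
    x_{k+1} = x_k * A^T(y / (A x_k)) / (A^T 1).  A is the system response
    matrix discussed in the abstract; here a small dense stand-in."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity image A^T 1
    for _ in range(n_iters):
        proj = A @ x
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return x

# Toy problem: random well-conditioned system, noiseless projection data
rng = np.random.default_rng(2)
A = rng.random((40, 10))
x_true = rng.random(10) + 0.5
x_hat = mlem(A, A @ x_true)
```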

  3. Performance evaluation of dual-crystal APD-based detector modules for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Pepin, Catherine M.; Bérard, Philippe; Cadorette, Jules; Tétrault, Marc-André; Leroux, Jean-Daniel; Michaud, Jean-Baptiste; Robert, Stéfan; Dautet, Henri; Davies, Murray; Fontaine, Réjean; Lecomte, Roger

    2006-03-01

    Positron Emission Tomography (PET) scanners dedicated to small animal studies have seen swift development in recent years. Higher spatial resolution, greater sensitivity and faster scanning procedures are the leading factors driving further improvements. The new LabPET™ system is a second-generation APD-based animal PET scanner that combines avalanche photodiode (APD) technology with a highly integrated, fully digital, parallel electronic architecture. This work reports on the performance characteristics of the LabPET quad detector module, which consists of LYSO/LGSO phoswich assemblies individually coupled to reach-through APDs. Individual crystals 2 × 2 × ~10 mm³ in size are optically coupled in pairs along one long side to form the phoswich detectors. Although the LYSO and LGSO photopeaks partially overlap, the good energy resolution and decay time difference allow for efficient crystal identification by pulse-shape discrimination. Conventional analog discrimination techniques result in significant misidentification, but advanced digital signal processing methods make it possible to circumvent this limitation, achieving virtually error-free decoding. Timing resolution results of 3.4 ns and 4.5 ns FWHM have been obtained for LYSO and LGSO, respectively, using analog CFD techniques. However, test bench measurements with digital techniques have shown that resolutions in the range of 2 to 4 ns FWHM can be achieved.
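Pulse-shape discrimination between the two phoswich crystals can be sketched with the charge-comparison method: the tail-to-total integral ratio separates decay constants. The decay times, splitting point, and ideal exponential pulses below are illustrative assumptions.

```python
import numpy as np

def tail_to_total(pulse, split):
    """Charge-comparison pulse-shape discrimination: the fraction of the pulse
    integral arriving after `split` samples distinguishes decay constants."""
    return pulse[split:].sum() / pulse.sum()

t = np.arange(0, 300, 1.0)           # ns, 1 ns sampling (assumed)
lyso = np.exp(-t / 40.0)             # ~40 ns scintillation decay (LYSO, approx.)
lgso = np.exp(-t / 65.0)             # ~65 ns decay (assumed LGSO value)
r_lyso = tail_to_total(lyso, split=50)
r_lgso = tail_to_total(lgso, split=50)   # slower crystal -> larger tail fraction
```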

  4. FPGA-Based Front-End Electronics for Positron Emission Tomography.

    PubMed

    Haselman, Michael; Dewitt, Don; McDougald, Wendy; Lewellen, Thomas K; Miyaoka, Robert; Hauck, Scott

    2009-02-22

    Modern Field Programmable Gate Arrays (FPGAs) are capable of performing complex discrete signal processing algorithms with clock rates above 100 MHz. This, combined with FPGAs' low cost, ease of use, and selection of dedicated hardware, makes them an ideal technology for a data acquisition system for positron emission tomography (PET) scanners. Our laboratory is producing a high-resolution, small-animal PET scanner that utilizes FPGAs as the core of the front-end electronics. For this next-generation scanner, functions that are typically performed in dedicated circuits, or offline, are being migrated to the FPGA. This will not only simplify the electronics, but the features of modern FPGAs can be utilized to add significant signal processing power to produce higher resolution images. In this paper two such processes, sub-clock rate pulse timing and event localization, are discussed in detail. We show that timing performed in the FPGA can achieve a resolution that is suitable for small-animal scanners, and will outperform the analog version given a low enough sampling period for the ADC. We also show that the position of events in the scanner can be determined in real time using a statistically based positioning algorithm.
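The sub-clock timing idea, interpolating between the two ADC samples that bracket a threshold crossing, can be sketched as follows; the sampling period and waveform are illustrative, not the scanner's actual parameters.

```python
import numpy as np

def threshold_crossing_time(samples, threshold, dt):
    """Sub-clock-rate timing: linearly interpolate between the two ADC samples
    that bracket the threshold crossing, as an FPGA can do in fixed point."""
    idx = np.argmax(samples >= threshold)         # first sample at/above threshold
    s0, s1 = samples[idx - 1], samples[idx]
    frac = (threshold - s0) / (s1 - s0)           # fractional sample position
    return (idx - 1 + frac) * dt

# Toy leading edge sampled with an assumed ADC period dt
dt = 15.4e-9
samples = np.array([0.0, 0.1, 0.35, 0.7, 1.0])
t_cross = threshold_crossing_time(samples, 0.5, dt)
```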

  5. Low background high efficiency radiocesium detection system based on positron emission tomography technology

    SciTech Connect

    Yamamoto, Seiichi; Ogata, Yoshimune

    2013-09-15

    After the 2011 nuclear power plant accident at Fukushima, radiocesium contamination in food became a serious concern in Japan. However, low background and high efficiency radiocesium detectors are expensive and huge, including semiconductor germanium detectors. To solve this problem, we developed a radiocesium detector by employing positron emission tomography (PET) technology. Because ¹³⁴Cs emits two gamma photons (795 and 605 keV) within 5 ps, they can selectively be measured with coincidence. Such major environmental gamma photons as ⁴⁰K (1.46 MeV) are single photon emitters and a coincidence measurement reduces the detection limit of radiocesium detectors. We arranged eight sets of Bi₄Ge₃O₁₂ (BGO) scintillation detectors in double rings (four for each ring) and measured the coincidence between these detectors using PET data acquisition system. A 50 × 50 × 30 mm BGO was optically coupled to a 2 in. square photomultiplier tube (PMT). By measuring the coincidence, we eliminated most single gamma photons from the energy distribution and only detected those from ¹³⁴Cs at an average efficiency of 12%. The minimum detectable concentration of the system for the 100 s acquisition time is less than half of the food monitor requirements in Japan (25 Bq/kg). These results show that the developed radiocesium detector based on PET technology is promising to detect low level radiocesium.
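The coincidence selection that suppresses the single-photon background can be sketched as a window-based sorter over time-ordered hits; the window width and hit list below are illustrative.

```python
def find_coincidences(hits, window_ns=10.0):
    """Keep hit pairs from different detectors whose timestamps fall within a
    coincidence window, rejecting single-photon background (e.g. 40K).
    hits: list of (time_ns, detector_id), assumed sorted by time."""
    pairs = []
    for i in range(len(hits) - 1):
        t0, d0 = hits[i]
        t1, d1 = hits[i + 1]
        if d0 != d1 and (t1 - t0) <= window_ns:
            pairs.append((hits[i], hits[i + 1]))
    return pairs

# Two coincident pairs (2 ns and 3 ns apart) and one isolated single
hits = [(100.0, 0), (102.0, 5), (500.0, 3), (900.0, 1), (903.0, 2)]
pairs = find_coincidences(hits)
```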

  6. Low background high efficiency radiocesium detection system based on positron emission tomography technology

    NASA Astrophysics Data System (ADS)

    Yamamoto, Seiichi; Ogata, Yoshimune

    2013-09-01

    After the 2011 nuclear power plant accident at Fukushima, radiocesium contamination in food became a serious concern in Japan. However, low background and high efficiency radiocesium detectors are expensive and huge, including semiconductor germanium detectors. To solve this problem, we developed a radiocesium detector by employing positron emission tomography (PET) technology. Because 134Cs emits two gamma photons (795 and 605 keV) within 5 ps, they can selectively be measured with coincidence. Such major environmental gamma photons as 40K (1.46 MeV) are single photon emitters and a coincidence measurement reduces the detection limit of radiocesium detectors. We arranged eight sets of Bi4Ge3O12 (BGO) scintillation detectors in double rings (four for each ring) and measured the coincidence between these detectors using PET data acquisition system. A 50 × 50 × 30 mm BGO was optically coupled to a 2 in. square photomultiplier tube (PMT). By measuring the coincidence, we eliminated most single gamma photons from the energy distribution and only detected those from 134Cs at an average efficiency of 12%. The minimum detectable concentration of the system for the 100 s acquisition time is less than half of the food monitor requirements in Japan (25 Bq/kg). These results show that the developed radiocesium detector based on PET technology is promising to detect low level radiocesium.

  7. Low background high efficiency radiocesium detection system based on positron emission tomography technology.

    PubMed

    Yamamoto, Seiichi; Ogata, Yoshimune

    2013-09-01

    After the 2011 nuclear power plant accident at Fukushima, radiocesium contamination in food became a serious concern in Japan. However, low background and high efficiency radiocesium detectors are expensive and huge, including semiconductor germanium detectors. To solve this problem, we developed a radiocesium detector by employing positron emission tomography (PET) technology. Because (134)Cs emits two gamma photons (795 and 605 keV) within 5 ps, they can selectively be measured with coincidence. Such major environmental gamma photons as (40)K (1.46 MeV) are single photon emitters and a coincidence measurement reduces the detection limit of radiocesium detectors. We arranged eight sets of Bi4Ge3O12 (BGO) scintillation detectors in double rings (four for each ring) and measured the coincidence between these detectors using PET data acquisition system. A 50 × 50 × 30 mm BGO was optically coupled to a 2 in. square photomultiplier tube (PMT). By measuring the coincidence, we eliminated most single gamma photons from the energy distribution and only detected those from (134)Cs at an average efficiency of 12%. The minimum detectable concentration of the system for the 100 s acquisition time is less than half of the food monitor requirements in Japan (25 Bq/kg). These results show that the developed radiocesium detector based on PET technology is promising to detect low level radiocesium.

  8. Magnetic resonance-based motion correction for positron emission tomography imaging.

    PubMed

    Ouyang, Jinsong; Li, Quanzheng; El Fakhri, Georges

    2013-01-01

    Positron emission tomography (PET) image quality is limited by patient motion. Emission data are blurred owing to cardiac and/or respiratory motion. Although spatial resolution is 4 mm for standard clinical whole-body PET scanners, the effective resolution can be as low as 1 cm owing to motion. Additionally, the deformation of attenuation medium causes image artifacts. Previously, gating has been used to "freeze" the motion, but led to significantly increased noise level. Simultaneous PET/magnetic resonance (MR) modality offers a new way to perform PET motion correction. MR can be used to measure 3-dimensional motion fields, which can then be incorporated into the iterative PET reconstruction to obtain motion-corrected PET images. In this report, we present MR imaging techniques to acquire dynamic images, a nonrigid image registration algorithm to extract motion fields from acquired MR images, and a PET reconstruction algorithm with motion correction. We also present results from both phantom and in vivo animal PET/MR studies. We demonstrate that MR-based PET motion correction using simultaneous PET/MR improves image quality and lesion detectability compared with gating and no motion correction. Copyright © 2013 Elsevier Inc. All rights reserved.
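The core idea, warping each gated frame to a common position with the measured motion field before averaging, can be sketched with integer-pixel shifts standing in for the MR-derived nonrigid motion; all images and shift values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
ref = np.zeros((32, 32))
ref[10:14, 10:14] = 1.0                              # simulated "lesion"
shifts = [0, 3, 6]                                   # per-gate axial motion, pixels

# Each gated frame: lesion displaced by its motion, plus acquisition noise
frames = [np.roll(ref, s, axis=0) + 0.2 * rng.standard_normal(ref.shape)
          for s in shifts]

blurred = np.mean(frames, axis=0)                    # no motion correction: smeared
corrected = np.mean([np.roll(f, -s, axis=0)          # warp each gate back, then average
                     for f, s in zip(frames, shifts)], axis=0)
```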

  9. Positron Physics

    NASA Technical Reports Server (NTRS)

    Drachman, Richard J.

    2003-01-01

    I will give a review of the history of low-energy positron physics, experimental and theoretical, concentrating on the type of work pioneered by John Humberston and the positronics group at University College. This subject became a legitimate subfield of atomic physics under the enthusiastic direction of the late Sir Harrie Massey, and it attracted a diverse following throughout the world. At first purely theoretical, the subject has now expanded to include high brightness beams of low-energy positrons, positronium beams, and, lately, experiments involving anti-hydrogen atoms. The theory requires a certain type of persistence in its practitioners, as well as an eagerness to try new mathematical and numerical techniques. I will conclude with a short summary of some of the most interesting recent advances.

  11. Optimization of light field display-camera configuration based on display properties in spectral domain.

    PubMed

    Bregović, Robert; Kovács, Péter Tamás; Gotchev, Atanas

    2016-02-08

    The visualization capability of a light field display is uniquely determined by its angular and spatial resolution, referred to as the display passband. In this paper we use a multidimensional sampling model to describe the display-camera channel. Based on the model, for a given display passband, we propose a methodology for determining the optimal distribution of ray generators in a projection-based light field display. We also discuss the camera setup required to provide data with the necessary amount of detail for such a display, maximizing the visual quality and minimizing the amount of data.

  12. Narrow Field-of-View Visual Odometry Based on a Focused Plenoptic Camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2015-03-01

    In this article we present a new method for visual odometry based on a focused plenoptic camera. This method fuses the depth data gained by a monocular Simultaneous Localization and Mapping (SLAM) algorithm with that received from a focused plenoptic camera. Our algorithm uses the depth data and the totally focused images supplied by the plenoptic camera to run a real-time semi-dense direct SLAM algorithm. Based on this combined approach, the scale ambiguity of a monocular SLAM system can be overcome. Furthermore, the additional light-field information greatly improves the tracking capabilities of the algorithm. Thus, visual odometry is possible even for narrow field of view (FOV) cameras. We show that not only tracking profits from the additional light-field information: by accumulating the depth information over multiple tracked images, the depth accuracy of the focused plenoptic camera can also be greatly improved. This novel approach reduces the depth error by an order of magnitude compared to a single light-field image.
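The depth-accumulation claim rests on a standard statistical fact: averaging N independent measurements shrinks the standard error roughly as 1/sqrt(N). A minimal simulation with an assumed per-frame depth noise (the depth and noise values are illustrative, not from the paper):

```python
import random
import statistics

random.seed(42)
true_depth = 2.0      # metres, illustrative
sigma_single = 0.10   # assumed per-image depth noise of the plenoptic camera

def depth_estimate(n_frames):
    """Average n_frames noisy per-frame depth measurements."""
    samples = [random.gauss(true_depth, sigma_single) for _ in range(n_frames)]
    return statistics.mean(samples)

# Empirical spread of the fused estimate for 1 vs 100 accumulated frames.
spread_1 = statistics.pstdev([depth_estimate(1) for _ in range(2000)])
spread_100 = statistics.pstdev([depth_estimate(100) for _ in range(2000)])
print(round(spread_1 / spread_100, 1))  # close to sqrt(100) = 10
```

Accumulating 100 tracked frames thus plausibly accounts for the order-of-magnitude depth-error reduction reported.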

  13. Submap joining smoothing and mapping for camera-based indoor localization and mapping

    NASA Astrophysics Data System (ADS)

    Bjärkefur, J.; Karlsson, A.; Grönwall, C.; Rydell, J.

    2011-06-01

    Personnel positioning is important for safety in, e.g., emergency response operations. In GPS-denied environments, possible positioning solutions include systems based on radio frequency communication, inertial sensors, and cameras. Many camera-based systems create a map and localize themselves relative to it. The computational complexity of most such solutions grows rapidly with the size of the map. One way to reduce the complexity is to divide the visited region into submaps. This paper presents a novel method for merging conditionally independent submaps (generated using e.g. EKF-SLAM) by means of smoothing. Using this approach it is possible to build large maps in close to linear time. The method is demonstrated in two indoor scenarios, where data was collected with a trolley-mounted stereo vision camera.

  14. Secure chaotic map based block cryptosystem with application to camera sensor networks.

    PubMed

    Guo, Xianfeng; Zhang, Jiashu; Khan, Muhammad Khurram; Alghathbar, Khaled

    2011-01-01

    Recently, Wang et al. presented an efficient logistic-map-based block encryption system. The encryption system employs ciphertext feedback to make the sub-keys plaintext-dependent. Unfortunately, we discovered that their scheme cannot withstand a keystream attack. To improve its security, this paper proposes a novel chaotic-map-based block cryptosystem. At the same time, a secure architecture for a camera sensor network is constructed. The network comprises a set of inexpensive camera sensors to capture images, a sink node equipped with sufficient computation and storage capabilities, and a data processing server. Transmission security between the sink node and the server is achieved using the improved cipher. Both theoretical analysis and simulation results indicate that the improved algorithm overcomes the flaws while retaining all the merits of the original cryptosystem. In addition, the computational cost and efficiency of the proposed scheme are encouraging for practical implementation in real environments as well as in camera sensor networks.
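Neither Wang et al.'s cipher nor the proposed improvement is specified in the abstract, so the sketch below only illustrates the general idea of a logistic-map keystream and why plain keystream reuse is a weakness; the map parameter, skip count and byte extraction are illustrative choices, not any published scheme.

```python
def logistic_keystream(seed, r=3.99, n=16, skip=100):
    """Derive n pseudo-random bytes from logistic-map iterates x -> r*x*(1-x).
    seed in (0, 1) acts as the key; early iterates are skipped to decorrelate."""
    x = seed
    for _ in range(skip):
        x = r * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def xor_cipher(data, key_seed):
    ks = logistic_keystream(key_seed, n=len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

msg = b"camera sensor net"
ct = xor_cipher(msg, 0.3141592653589793)
assert xor_cipher(ct, 0.3141592653589793) == msg  # XOR is its own inverse

# Keystream-reuse weakness: XOR of two ciphertexts under the same key
# cancels the keystream, leaking plaintext structure -- the kind of flaw
# that plaintext-dependent sub-keys are meant to prevent.
ct2 = xor_cipher(b"another message!!", 0.3141592653589793)
leak = bytes(a ^ b for a, b in zip(ct, ct2))
assert leak == bytes(a ^ b for a, b in zip(msg, b"another message!!"))
```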

  15. Development of an angled Si-PM-based detector unit for positron emission mammography (PEM) system

    NASA Astrophysics Data System (ADS)

    Nakanishi, Kouhei; Yamamoto, Seiichi

    2016-11-01

    Positron emission mammography (PEM) systems have higher sensitivity than clinical whole-body PET systems because they have a smaller ring diameter. However, the spatial resolution of PEM systems is not high enough to detect early-stage breast cancer. To solve this problem, we developed a silicon photomultiplier (Si-PM) based detector unit for a PEM system. Since Si-PM channels are small, Si-PM-based detectors can resolve small scintillator pixels, improving the spatial resolution. Si-PM-based detectors also have inherently high timing resolution and can reduce random coincidence events by narrowing the time window. We used 1.5 × 1.9 × 15 mm LGSO scintillation pixels and arranged them in an 8 × 24 matrix to form scintillator blocks. Four scintillator blocks were optically coupled to Si-PM arrays with an angled light guide to form a detector unit. Since the light guide has an angle of 5.625°, 64 scintillator blocks can be arranged in a nearly circular shape (a regular 64-sided polygon) using 16 detector units. We clearly resolved the pixels of the scintillator blocks in a 2-dimensional position histogram, where the averages of the peak-to-valley ratios (P/Vs) were 3.7 ± 0.3 and 5.7 ± 0.8 in the transverse and axial directions, respectively. The average energy resolution was 14.2 ± 2.1% full-width at half-maximum (FWHM). With the temperature-dependent gain control electronics included, the photo-peak channel shifts were kept within ±1.5% over temperatures from 23 °C to 28 °C. Given these results, in addition to the potentially high timing performance of Si-PM-based detectors, our detector unit is promising for the development of a high-resolution PEM system.
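The ring geometry follows directly from the 5.625° light-guide angle: 64 × 5.625° = 360°, so 64 blocks close the polygon exactly. A quick check (the ring radius below is a made-up value, not a dimension from the paper):

```python
import math

n_blocks = 64
angle_deg = 5.625                      # light-guide angle per block, from the abstract
assert n_blocks * angle_deg == 360.0   # the ring closes exactly

# Illustrative ring radius; compute each block's centre position on the polygon.
radius_mm = 100.0  # hypothetical
centres = [(radius_mm * math.cos(math.radians(i * angle_deg)),
            radius_mm * math.sin(math.radians(i * angle_deg)))
           for i in range(n_blocks)]
print(len(centres))
```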

  16. A cryogenically cooled, ultra-high-energy-resolution, trap-based positron beam

    SciTech Connect

    Natisin, M. R. Danielson, J. R.; Surko, C. M.

    2016-01-11

    A technique is described to produce a pulsed, magnetically guided positron beam with significantly improved beam characteristics over those available previously. A pulsed, room-temperature positron beam from a buffer gas trap is used as input to a trap that captures the positrons, compresses them both radially and axially, and cools them to 50 K on a cryogenic CO buffer gas before ejecting them as a pulsed beam. The total energy spread of the beam formed using this technique is 6.9 ± 0.7 meV FWHM, which is a factor of ∼5 better than the previous state-of-the-art, while simultaneously having sub-microsecond temporal resolution and millimeter spatial resolution. Possible further improvements in beam quality are discussed.

  17. Calculation of positron observables using a finite-element-based approach

    SciTech Connect

    Klein, B. M.; Pask, J. E.; Sterne, P.

    1998-11-04

    We report the development of a new method for calculating positron observables using a finite-element approach to the solution of the Schrödinger equation. This method combines the advantages of both basis-set and real-space-grid approaches. The strict locality in real space of the finite-element basis functions results in a method that is well suited to calculating large systems of a thousand or more atoms, as required for calculations of extended defects such as dislocations. In addition, the method is variational in nature and its convergence can be controlled systematically. The calculation of positron observables is straightforward due to the real-space nature of the method. We illustrate its power with positron lifetime calculations on defects and defect-free materials, using overlapping atomic charge densities.
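The finite-element approach can be illustrated on a 1-D particle-in-a-box Schrödinger problem (atomic units), an assumption for illustration only; the paper treats 3-D solids. Piecewise-linear "hat" basis functions yield strictly local stiffness and mass matrices, and the generalized eigenproblem Ku = EMu converges systematically under mesh refinement:

```python
import numpy as np

# -(1/2) u'' = E u on [0, 1] with u(0) = u(1) = 0; exact E_n = (n*pi)^2 / 2.
n_el = 200                   # number of linear elements
h = 1.0 / n_el
n_dof = n_el - 1             # interior nodes only (Dirichlet boundary conditions)

K = np.zeros((n_dof, n_dof))  # stiffness: (1/2) * integral of u' v'
M = np.zeros((n_dof, n_dof))  # mass: integral of u v
for e in range(n_el):
    ke = 0.5 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness
    me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])     # element (consistent) mass
    for a in range(2):
        for b in range(2):
            i, j = e - 1 + a, e - 1 + b   # global interior-node indices
            if 0 <= i < n_dof and 0 <= j < n_dof:
                K[i, j] += ke[a, b]
                M[i, j] += me[a, b]

# Generalized symmetric eigenproblem K u = E M u via the M^(-1/2) transformation.
w_m, v_m = np.linalg.eigh(M)
m_inv_half = v_m @ np.diag(w_m ** -0.5) @ v_m.T
energies = np.sort(np.linalg.eigvalsh(m_inv_half @ K @ m_inv_half))

print(energies[0])  # approaches pi^2 / 2 ≈ 4.9348 as the mesh refines
```

The element matrices touch only neighbouring nodes, which is the "strict locality" that makes the method scale to very large systems.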

  18. Demonstration of a Novel Positron Source Based on a Plasma Wiggler

    SciTech Connect

    Johnson, D. K.; Clayton, C. E.; Huang, C.; Joshi, C.; Lu, W.; Marsh, K. A.; Mori, W. B.; Zhou, M.; Blumenfeld, I.; Barnes, C. D.; Decker, F. J.; Emma, P.; Hogan, M. J.; Ischebeck, R.; Iverson, R.; Kirby, N.; Krejcik, P.; O'Connell, C. L.; Siemann, R. H.; Walz, D.

    2006-11-27

    A new method for generating positrons has been proposed that uses betatron X-rays emitted by an electron beam in a high-K plasma wiggler. The plasma wiggler is an ion column produced by the head of the beam when the peak beam density exceeds the plasma density. The radial electric field of the beam blows out the plasma electrons transversely, creating an ion column. The focusing electric field of the ion column causes the beam electrons to execute betatron oscillations about the ion column axis. If the beam energy and the plasma density are high enough, these oscillations lead to synchrotron radiation in the 1-50 MeV range. A significant amount of electron energy can be lost to these radiated X-ray photons. These photons strike a thin (0.5 X0), high-Z target and create e+/e- pairs. The experiment was performed at the Stanford Linear Accelerator Center (SLAC) where a 28.5 GeV electron beam with σr ≈ 10 μm and σz ≈ 25 μm was propagated through a neutral Lithium vapor (Li). The radial electric field of the dense beam was large enough to field ionize the Li vapor to form a plasma. The positron yield was measured as a function of plasma density, ion column length and electron beam pulse length. A computational model was written to match the experimental data with theory. The measured positron spectra are in excellent agreement with those expected from the calculated X-ray spectral yield from the plasma wiggler. After matching the model with the experimental results, it was used to design a more efficient positron source, giving positron yields of 0.44 e+/e-, a number that is close to the target goal of 1-2 e+/e- for future positron sources.

  19. Defocus compensation system of long focal aerial camera based on auto-collimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-ye; Zhao, Yu-liang; Xu, Zhao-lin

    2010-10-01

    Modern aerial reconnaissance cameras emphasize shooting performance at high altitude or over the long distances of oblique photography. To obtain larger-scale pictures, which are easier to interpret, the camera needs a long focal length. However, a long-focal-length camera is easily affected by environmental conditions, which can greatly shift the lens' back focus and sharply reduce its resolution. Precise defocus compensation is therefore required for a long-focal-length aerial camera system. To realize it, a defocus compensation system based on auto-collimation is designed. First, the causes of defocus in a long-focal-length camera are discussed: factors such as changes in atmospheric pressure, temperature and oblique photographic distance are identified, and a mathematical equation for computing the camera's defocus amount is presented. Second, electro-optical auto-collimation, offering a high degree of automation and intelligence, is adopted. Before shooting, the focal surface is located by the electro-optical auto-collimation focus detection mechanism, and the airplane's altitude is imported through the electronic control system. The defocus amount is computed, and a correction signal is sent to the focusing control motor. An efficient, improved hill-climbing search algorithm is adopted for locating the focal surface during correction. To confirm the direction of the focus curve, the improved algorithm considers four consecutive sample points in addition to two successive focusing results: if four consecutive points keep rising, the curve is judged to be rising; if four consecutive points keep falling, it is judged to be falling. In this way, a local peak appearing between two focusing steps is avoided. The defocusing compensation system consists of optical component and precise
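The four-point direction test can be sketched as follows; the sharpness curve, stepping policy and peak back-off are illustrative assumptions, since the abstract describes the algorithm only qualitatively.

```python
def confirmed_direction(samples):
    """Return +1 / -1 only when four consecutive sharpness samples all rise /
    all fall; otherwise 0 (direction not yet confirmed)."""
    if len(samples) < 4:
        return 0
    last4 = samples[-4:]
    diffs = [b - a for a, b in zip(last4, last4[1:])]
    if all(d > 0 for d in diffs):
        return 1
    if all(d < 0 for d in diffs):
        return -1
    return 0

def hill_climb_focus(sharpness, start=0):
    """Step through focus positions; stop once a confirmed rise is followed by
    a confirmed fall, i.e. the focal peak has been passed."""
    pos, history = start, []
    rising_seen = False
    while pos < len(sharpness):
        history.append(sharpness[pos])
        d = confirmed_direction(history)
        if d > 0:
            rising_seen = True
        if rising_seen and d < 0:
            # The peak is the first of the four confirmed-falling samples.
            return pos - 3
        pos += 1
    return pos - 1

# Sharpness curve with a small local dip before the true peak at index 7;
# the single dip (1.8) is not four falling points, so it does not mislead.
curve = [1, 2, 1.8, 3, 4, 5, 6, 7, 6, 5, 4, 3]
print(hill_climb_focus(curve))  # focal peak at index 7
```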

  20. Crystal identification in positron emission tomography using nonrigid registration to a Fourier-based template

    PubMed Central

    Chaudhari, Abhijit J.; Joshi, Anand A.; Bowen, Spencer L.; Leahy, Richard M.; Cherry, Simon R.; Badawi, Ramsey D.

    2009-01-01

    Modern Positron Emission Tomography (PET) detectors typically are made from 2D modular arrays of scintillation crystals. Their characteristic flood field response (or flood histogram) must be segmented in order to correctly determine the crystal of annihilation photon interaction in the system. Crystal identification information thus generated is also needed for accurate system modeling as well as for detailed detector characterization and performance studies. In this paper, we present a semi-automatic general purpose template-guided scheme for segmentation of flood histograms. We first generate a template image that exploits the spatial frequency information in the given flood histogram using Fourier-space analysis. This template image is a lower order approximation of the flood histogram, and can be segmented with horizontal and vertical lines drawn midway between adjacent peaks in the histogram. The template is then registered to the given flood histogram by a diffeomorphic polynomial-based warping scheme that is capable of iteratively minimizing intensity differences. The displacement field thus calculated is applied to the segmentation of the template resulting in a segmentation of the given flood histogram. We evaluate our segmentation scheme for a photomultiplier tube-based PET detector, a detector with readout by a position-sensitive avalanche photodiode (PSAPD) and a detector consisting of a stack of photomultiplier tubes and scintillator arrays. Further, we quantitatively compare the performance of the proposed method to that of a manual segmentation scheme using reconstructed images of a line source phantom. We also present an adaptive method for distortion reduction in flood histograms obtained for PET detectors that use PSAPDs. PMID:18723924
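The template-construction step exploits the near-periodicity of the flood histogram; the dominant crystal pitch can be estimated from the strongest Fourier component of a summed profile. A stdlib-only sketch on a simulated one-dimensional profile (the Gaussian-peak model and all dimensions are illustrative assumptions):

```python
import math

def dominant_period(profile):
    """Estimate the dominant spatial period (in samples) of a 1-D profile
    via a naive discrete Fourier transform."""
    n = len(profile)
    mean = sum(profile) / n
    centred = [p - mean for p in profile]
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):
        re = sum(c * math.cos(2 * math.pi * k * i / n) for i, c in enumerate(centred))
        im = sum(c * math.sin(2 * math.pi * k * i / n) for i, c in enumerate(centred))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return n / best_k

# Simulated flood profile: 16 Gaussian crystal peaks across 256 samples (pitch 16).
n = 256
profile = [math.exp(-((i % 16) - 8) ** 2 / 4.0) for i in range(n)]
print(dominant_period(profile))  # 16.0
```

The recovered pitch fixes the grid spacing of the lower-order template, which is then warped onto the real, distorted flood histogram.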

  1. Linking the near-surface camera-based phenological metrics with leaf chemical and spectroscopic properties

    NASA Astrophysics Data System (ADS)

    Yang, X.; Tang, J.; Mustard, J. F.; Schmitt, J.

    2012-12-01

    Plant phenology is an important indicator of climate change. Near-surface cameras provide a way to continuously monitor plant canopy development at the scale of several hundred meters, which is rarely feasible with either traditional phenological monitoring methods or remote sensing. Thus, digital cameras are being deployed in national networks such as the National Ecological Observatory Network (NEON) and PhenoCam. However, it is unclear how camera-based phenological metrics are linked with plant physiology as measured from leaf chemical and spectroscopic properties throughout the growing season. We used the temporal trajectories of leaf chemical properties (chlorophyll a and b, carotenoids, leaf water content, leaf carbon/nitrogen content) and leaf reflectance/transmittance (300 to 2500 nm) to understand the temporal changes of camera-based phenological metrics (e.g., relative greenness), acquired from our Standalone Phenological Observation System installed on a tower on the island of Martha's Vineyard, MA (dominant species: Quercus alba). Leaf chemical and spectroscopic properties of three oak trees near the tower were measured weekly from June to November 2011. We found that the chlorophyll concentration showed temporal trajectories similar to the relative greenness. However, the change in chlorophyll concentration lagged behind the change in relative greenness by about 20 days in both the spring and the fall. The relative redness is a better indicator of leaf senescence in the fall than the relative greenness. We derived relative greenness from leaf spectroscopy and found that the relative greenness from the camera matched that from spectroscopy well in mid-summer, but this relationship faded as leaves started to fall, exposing the branches and soil background.
This work suggests that camera-based phenological metrics should be interpreted with caution, and that relative redness could be a useful indicator of fall senescence.
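Relative greenness in PhenoCam-style analyses is commonly computed as the green chromatic coordinate, with relative redness defined analogously; whether this exact formula matches the authors' metric is an assumption. A minimal sketch on made-up canopy values:

```python
def chromatic_coordinates(r, g, b):
    """Relative greenness (GCC) and redness (RCC): each channel's share of
    total brightness, which damps overall illumination changes."""
    total = r + g + b
    if total == 0:
        return 0.0, 0.0
    return g / total, r / total

# Mean digital numbers for a hypothetical canopy region of interest.
summer_rgb = (90, 140, 70)    # green canopy
autumn_rgb = (150, 100, 60)   # senescing leaves

gcc_summer, rcc_summer = chromatic_coordinates(*summer_rgb)
gcc_autumn, rcc_autumn = chromatic_coordinates(*autumn_rgb)
print(gcc_summer > gcc_autumn)  # greenness drops in autumn: True
print(rcc_autumn > rcc_summer)  # redness rises in autumn: True
```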

  2. Medium field of view multiflat panel-based portable gamma camera

    NASA Astrophysics Data System (ADS)

    Giménez, M.; Benlloch, J. M.; Cerdá, J.; Escat, B.; Fernández, M.; Giménez, E. N.; Lerche, Ch. W.; Martínez, J. D.; Mora, F. J.; Pavón, N.; Sánchez, F.; Sebastià, A.

    2004-06-01

    A portable gamma camera based on multianode technology has been built and tested. The camera consists of four "Flat Panel" H8500 PSPMTs optically coupled to a 100 × 100 × 4 mm³ CsI(Na) continuous scintillation crystal. The camera measures 17 × 12 × 12 cm³ including the pinhole collimator and weighs a total of 2 kg. Its average spatial resolution is 2 mm, its energy resolution is about 15%, and it has a field of view of 95 mm. Because of its portability, its FOV and its cost, it is a convenient choice for osteological, renal, mammary, and endocrine (thyroid, parathyroid and suprarenal) scintigraphies, as well as other important applications such as intraoperative detection of lymph nodes and surgical oncology. We describe the simulations performed, which explain the choice of crystal, the mechanical design of the camera, and the calibration method and algorithms used for position, energy and uniformity correction. We present images taken from phantoms. We plan to increase the camera sensitivity by using a four-hole collimator in combination with the MLEM algorithm, in order to decrease the exploration time and reduce the dose given to the patient.

  3. Clinical CT-based calculations of dose and positron emitter distributions in proton therapy using the FLUKA Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Parodi, K.; Ferrari, A.; Sommerer, F.; Paganetti, H.

    2007-07-01

    Clinical investigations on post-irradiation PET/CT (positron emission tomography/computed tomography) imaging for in vivo verification of treatment delivery and, in particular, beam range in proton therapy are underway at Massachusetts General Hospital (MGH). Within this project, we have developed a Monte Carlo framework for CT-based calculation of dose and irradiation-induced positron emitter distributions. Initial proton beam information is provided by a separate Geant4 Monte Carlo simulation modelling the treatment head. Particle transport in the patient is performed in the CT voxel geometry using the FLUKA Monte Carlo code. The implementation uses a discrete number of different tissue types with composition and mean density deduced from the CT scan. Scaling factors are introduced to account for the continuous Hounsfield unit dependence of the mass density and of the relative stopping power ratio to water used by the treatment planning system (XiO (Computerized Medical Systems Inc.)). The resulting Monte Carlo dose distributions are generally found to be in good correspondence with calculations of the treatment planning program, except in a few cases (e.g. in the presence of air/tissue interfaces). Whereas dose is computed using standard FLUKA utilities, positron emitter distributions are calculated by internally combining proton fluence with experimental and evaluated cross-sections yielding 11C, 15O, 14O, 13N, 38K and 30P. Simulated positron emitter distributions yield PET images in good agreement with measurements. In this paper, we describe in detail the specific implementation of the FLUKA calculation framework, which may be easily adapted to handle arbitrary phase spaces of proton beams delivered by other facilities or include more reaction channels based on additional cross-section data. 
Further, we demonstrate the effects of different acquisition time regimes (e.g., PET imaging during or after irradiation) on the intensity and spatial distribution of the irradiation

  4. Shape Function-Based Estimation of Deformation with Moving Cameras Attached to the Deforming Body

    NASA Astrophysics Data System (ADS)

    Jokinen, O.; Ranta, I.; Haggrén, H.; Rönnholm, P.

    2016-06-01

    The paper presents a novel method to measure the 3-D deformation of a large metallic frame structure of a crane under loading from one to several images, when the cameras need to be attached to the self-deforming body, the structure sways during loading, and the imaging geometry is not optimal due to physical limitations. The solution is based on modeling the deformation with adequate shape functions and taking into account that the cameras move depending on the frame deformation. It is shown that the deformation can be estimated even from a single image of targeted points if the 3-D coordinates of the points are known or have been measured before loading using multiple cameras or some other measuring technique. The precision of the method is evaluated to be 1 mm at best, corresponding to 1:11400 of the average distance to the target.

  5. EGS5 simulations to design a Ce:GAGG scintillator based Compton camera.

    PubMed

    Malik, Azhar H; Shimazoe, Kenji; Takahashi, Hiroyuki

    2013-01-01

    Ce(3+):Gd3Al2Ga3O12 (Ce:GAGG) is expected to be a promising scintillator for PET, SPECT, and gamma camera applications because of its attractive properties. We designed a Compton camera based on Ce:GAGG, used both as scatterer and absorber, for imaging and radioactivity measurement of point sources. The two important parameters, sensitivity and spatial resolution, are determined for 4 × 4 pixel arrays, each pixel of size 1 × 1 cm², for both scatterer and absorber. Our main focus in this paper is imaging a distant source, for which sensitivity is of prime importance. High sensitivity and light weight are two important advantages of a Compton camera for distant-source imaging, and the availability of Ce:GAGG in 3 × 3 mm² pixel sizes is expected to give a spatial resolution of ~5 mm for medical applications as well.
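Compton-camera event reconstruction relies on Compton kinematics relating the energies deposited in the scatterer and absorber to the scattering angle. A sketch assuming the scattered photon is fully absorbed (the 662 keV example energies are illustrative, not from the paper):

```python
import math

ELECTRON_MASS_KEV = 511.0

def compton_angle_deg(e_scatter_kev, e_absorb_kev):
    """Scattering angle from the energy deposited in the scatterer (e_scatter)
    and absorber (e_absorb), assuming full absorption of the scattered photon:
    cos(theta) = 1 - m_e*c^2 * (1/E' - 1/E0), with E0 = e_scatter + e_absorb
    and E' = e_absorb."""
    e0 = e_scatter_kev + e_absorb_kev
    cos_theta = 1.0 - ELECTRON_MASS_KEV * (1.0 / e_absorb_kev - 1.0 / e0)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.degrees(math.acos(cos_theta))

# A 662 keV photon (Cs-137) depositing 200 keV in the Ce:GAGG scatterer:
print(round(compton_angle_deg(200.0, 462.0), 1))
```

Each event constrains the source to a cone of this half-angle about the scatter-absorber axis; images form where many cones intersect.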

  6. Mach-zehnder based optical marker/comb generator for streak camera calibration

    DOEpatents

    Miller, Edward Kirk

    2015-03-03

    This disclosure is directed to a method and apparatus for generating marker and comb indicia in an optical environment using a Mach-Zehnder (M-Z) modulator. High speed recording devices are configured to record image or other data defining a high speed event. To calibrate and establish time reference, the markers or combs are indicia which serve as timing pulses (markers) or a constant-frequency train of optical pulses (comb) to be imaged on a streak camera for accurate time based calibration and time reference. The system includes a camera, an optic signal generator which provides an optic signal to an M-Z modulator and biasing and modulation signal generators configured to provide input to the M-Z modulator. An optical reference signal is provided to the M-Z modulator. The M-Z modulator modulates the reference signal to a higher frequency optical signal which is output through a fiber coupled link to the streak camera.

  7. An electron energy loss spectrometer based streak camera for time resolved TEM measurements.

    PubMed

    Ali, Hasan; Eriksson, Johan; Li, Hu; Jafri, S Hassan M; Kumar, M S Sharath; Ögren, Jim; Ziemann, Volker; Leifer, Klaus

    2017-05-01

    We propose an experimental setup based on a streak-camera approach inside an energy filter to measure time-resolved properties of materials in the transmission electron microscope (TEM). To implement the streak camera, a beam sweeper was built inside an energy filter. After the TEM sample is excited, the beam is swept across the CCD camera of the filter. We describe the different parts of the setup using a magnetic measurement as an example. The setup is capable of acquiring time-resolved diffraction patterns, electron energy loss spectra (EELS) and images with total streaking times in the range between 100 ns and 10 μs. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Omnidirectional stereo vision sensor based on single camera and catoptric system.

    PubMed

    Zhou, Fuqiang; Chai, Xinghua; Chen, Xin; Song, Ya

    2016-09-01

    An omnidirectional stereo vision sensor based on a single camera and a catoptric system is proposed. As crucial components, one camera and two pyramid mirrors are used for imaging. Omnidirectional measurement in different directions in the horizontal field can be performed by four pairs of virtual cameras, with perfect synchronism and improved compactness. Moreover, perspective projection invariance is ensured in the imaging process, which avoids the imaging distortion introduced by curved mirrors. In this paper, the structure model of the sensor was established and a sensor prototype was designed. The influences of the structural parameters on the field of view and the measurement accuracy are also discussed. In addition, real experiments and analyses were performed to evaluate the performance of the proposed sensor in a measurement application. The results proved the feasibility of the sensor and exhibited considerable accuracy in 3D coordinate reconstruction.

  9. Portable Positron Measurement System (PPMS)

    SciTech Connect

    2011-01-01

    Portable Positron Measurement System (PPMS) is an automated, non-destructive inspection system based on positron annihilation, which characterizes a material's in situ atomic-level properties during the manufacturing processes of formation, solidification, and heat treatment. Simultaneous manufacturing and quality monitoring are now possible. Learn more about the lab's project on our facebook site http://www.facebook.com/idahonationallaboratory.

  10. Portable Positron Measurement System (PPMS)

    ScienceCinema

    None

    2016-07-12

    Portable Positron Measurement System (PPMS) is an automated, non-destructive inspection system based on positron annihilation, which characterizes a material's in situ atomic-level properties during the manufacturing processes of formation, solidification, and heat treatment. Simultaneous manufacturing and quality monitoring are now possible. Learn more about the lab's project on our facebook site http://www.facebook.com/idahonationallaboratory.

  11. MTR STACK, TRA710, DETAIL OF BASE. CAMERA FACING NORTH. SIGN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    MTR STACK, TRA-710, DETAIL OF BASE. CAMERA FACING NORTH. SIGN SAYS "DANGER, DO NOT USE THIS LADDER." TRA-605, PROCESS WATER BUILDING, IN VIEW AT LEFT. INL NEGATIVE NO. HD52-1-3. Mike Crane, Photographer, 5/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  12. Cost Effective Paper-Based Colorimetric Microfluidic Devices and Mobile Phone Camera Readers for the Classroom

    ERIC Educational Resources Information Center

    Koesdjojo, Myra T.; Pengpumkiat, Sumate; Wu, Yuanyuan; Boonloed, Anukul; Huynh, Daniel; Remcho, Thomas P.; Remcho, Vincent T.

    2015-01-01

    We have developed a simple and direct method to fabricate paper-based microfluidic devices that can be used for a wide range of colorimetric assay applications. With these devices, assays can be performed within minutes to allow for quantitative colorimetric analysis by use of a widely accessible iPhone camera and an RGB color reader application…

  14. Development of a high-resolution Si-PM-based gamma camera system.

    PubMed

    Yamamoto, Seiichi; Watabe, Hiroshi; Kanai, Yasukazu; Imaizumi, Masao; Watabe, Tadashi; Shimosegawa, Eku; Hatazawa, Jun

    2011-12-07

    A silicon photomultiplier (Si-PM) is a promising photodetector for PET, especially for combined PET/MRI systems, due to its high gain, small size, and lower sensitivity to static magnetic fields. These properties are also attractive for gamma camera systems for single-photon imaging. We developed an ultra-high-resolution Si-PM-based compact gamma camera system for small animals. Y2SiO5:Ce (YSO) was selected as the scintillator because of its high light output and lack of natural radioactivity. The gamma camera consists of 0.6 mm × 0.6 mm × 6 mm YSO pixels combined with a 0.1 mm thick reflector to form a 17 × 17 matrix that was optically coupled to a Si-PM array (Hamamatsu multi-pixel photon counter S11064-050P) with a 2 mm thick light guide. The YSO block size was 12 mm × 12 mm. The YSO gamma camera was encased in a 5 mm thick gamma shield, and a parallel-hole collimator was mounted in front of the camera (0.5 mm holes, 0.7 mm separation, 5 mm thick). The two-dimensional distribution of the Co-57 gamma photons (122 keV) was almost completely resolved. The energy resolution was 24.4% full-width at half-maximum (FWHM) for the Co-57 gamma photons. The spatial resolution at 1.5 mm from the collimator surface was 1.25 mm FWHM, measured using a 1 mm diameter Co-57 point source. Phantom and small-animal images were successfully obtained. We conclude that a Si-PM-based gamma camera is promising for molecular imaging research.
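The reported 1.25 mm FWHM can be sanity-checked against the textbook parallel-hole collimator model: geometric resolution Rg ≈ d(L+z)/L combined in quadrature with the detector's intrinsic resolution. The intrinsic value below is an assumed placeholder, not a number from the abstract, and septal penetration is ignored:

```python
import math

def system_resolution_mm(hole_d, coll_len, source_dist, intrinsic):
    """Geometric resolution of a parallel-hole collimator, Rg = d*(L+z)/L,
    combined in quadrature with the detector's intrinsic resolution.
    Septal penetration and the effective-length correction are ignored."""
    rg = hole_d * (coll_len + source_dist) / coll_len
    return math.sqrt(rg ** 2 + intrinsic ** 2)

# Collimator numbers from the abstract; intrinsic resolution is assumed.
r = system_resolution_mm(hole_d=0.5, coll_len=5.0, source_dist=1.5, intrinsic=1.0)
print(round(r, 2))  # in the neighbourhood of the measured 1.25 mm FWHM
```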

  15. Positron and Ion Migrations and the Attractive Interactions between like Ion Pairs in the Liquids: Based on Studies with Slow Positron Beam

    NASA Astrophysics Data System (ADS)

    Kanazawa, I.; Sasaki, T.; Yamada, K.; Imai, E.

    2014-04-01

    We have discussed positron and ion diffusion in liquids by using the gauge-invariant effective Lagrangian density with the spontaneously broken density (the hedgehog-like density) with internal non-linear gauge fields (Yang-Mills gauge fields), and have presented the relation to the Hubbard-Onsager theory.

  16. A novel method based on two cameras for accurate estimation of arterial oxygen saturation.

    PubMed

    Liu, He; Ivanov, Kamen; Wang, Yadong; Wang, Lei

    2015-05-30

    Camera-based photoplethysmographic imaging (PPGi) allows acquiring the photoplethysmogram and measuring physiological parameters such as pulse rate, respiration rate and perfusion level. It has also shown potential for estimation of arterial oxygen saturation (SaO2). However, there are some technical limitations, such as optical shunting, different camera sensitivity to different light spectra, different AC-to-DC ratios (the peak-to-peak amplitude to baseline ratio) of the PPGi signal for different portions of the sensor surface area, the low sampling rate and the inconsistency of contact force between the fingertip and camera lens. In this paper, we take full account of the above-mentioned design challenges and present an accurate SaO2 estimation method based on two cameras. The hardware system we used consisted of an FPGA development board (XC6SLX150T-3FGG676 from Xilinx) with two commercial cameras and an SD card connected to it. The two cameras were placed back to back; one camera acquired the PPGi signal from the right index fingertip under 660 nm illumination while the other acquired the PPGi signal from the thumb fingertip under 800 nm illumination. Both PPGi signals were captured simultaneously, recorded in a text file on the SD card and processed offline using MATLAB®. The calculation of SaO2 was based on the principle of pulse oximetry. The AC-to-DC ratio was obtained as the ratio of the powers of the AC and DC components of the PPGi signal in the time-frequency domain, using the smoothed pseudo Wigner-Ville distribution. The calibration curve required for SaO2 measurement was obtained by linear regression analysis. The results of our estimation method from 12 subjects showed high correlation and agreement with those of conventional pulse oximetry over the range from 90 to 100%. Our method is suitable for mobile applications implemented in smartphones, which could allow SaO2 measurement in a pervasive environment.
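
    The pulse-oximetry principle behind this two-camera setup can be sketched as a ratio-of-ratios computation. The sketch below is illustrative only: it uses a simple time-domain AC/DC estimate instead of the paper's smoothed pseudo Wigner-Ville power ratio, and the calibration coefficients `a` and `b` are hypothetical placeholders for the fitted regression.

```python
import numpy as np

def ac_dc_ratio(sig):
    """Simple time-domain stand-in for the paper's time-frequency AC/DC power
    ratio: peak-to-peak amplitude over the baseline (mean) level."""
    sig = np.asarray(sig, dtype=float)
    return (sig.max() - sig.min()) / sig.mean()

def sao2_from_ratio(r, a=110.0, b=25.0):
    """Linear calibration SaO2 = a - b*R; a and b are hypothetical placeholders
    for the coefficients the study fits by linear regression."""
    return a - b * r

# synthetic pulsatile signals: DC baseline plus a ~72 bpm AC component
t = np.linspace(0.0, 10.0, 1000)
red = 100.0 + 0.6 * np.sin(2 * np.pi * 1.2 * t)   # 660 nm channel
nir = 120.0 + 1.2 * np.sin(2 * np.pi * 1.2 * t)   # 800 nm channel
r = ac_dc_ratio(red) / ac_dc_ratio(nir)           # ratio of ratios
print(round(sao2_from_ratio(r), 1))
```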

  17. Monte Carlo-based evaluation of S-values in mouse models for positron-emitting radionuclides

    NASA Astrophysics Data System (ADS)

    Xie, Tianwu; Zaidi, Habib

    2013-01-01

    In addition to being a powerful clinical tool, Positron emission tomography (PET) is also used in small laboratory animal research to visualize and track certain molecular processes associated with diseases such as cancer, heart disease and neurological disorders in living small animal models of disease. However, dosimetric characteristics in small animal PET imaging are usually overlooked, though the radiation dose may not be negligible. In this work, we constructed 17 mouse models of different body mass and size based on the realistic four-dimensional MOBY mouse model. Particle (photons, electrons and positrons) transport using the Monte Carlo method was performed to calculate the absorbed fractions and S-values for eight positron-emitting radionuclides (C-11, N-13, O-15, F-18, Cu-64, Ga-68, Y-86 and I-124). Among these radionuclides, O-15 emits positrons with high energy and frequency and produces the highest self-absorbed S-values in each organ, while Y-86 emits γ-rays with high energy and frequency which results in the highest cross-absorbed S-values for non-neighbouring organs. Differences between S-values for self-irradiated organs were between 2% and 3%/g difference in body weight for most organs. For organs irradiating other organs outside the splanchnocoele (i.e. brain, testis and bladder), differences between S-values were lower than 1%/g. These appealing results can be used to assess variations in small animal dosimetry as a function of total-body mass. The generated database of S-values for various radionuclides can be used in the assessment of radiation dose to mice from different radiotracers in small animal PET experiments, thus offering quantitative figures for comparative dosimetry research in small animal models.
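
    The dosimetric use of such S-values follows the MIRD formalism, in which the dose to a target organ sums contributions from every source organ. A minimal sketch, with hypothetical S-values and cumulated activities (not taken from the paper's database):

```python
# Hypothetical S-values (mGy per MBq*s) and cumulated activities (MBq*s)
# for a two-organ toy model; the numbers are illustrative, not the paper's.
s_values = {                       # S(target <- source)
    ("liver", "liver"): 1.2e-3,
    ("liver", "kidneys"): 4.0e-5,
    ("kidneys", "kidneys"): 2.5e-3,
    ("kidneys", "liver"): 3.8e-5,
}
cumulated_activity = {"liver": 50.0, "kidneys": 20.0}

def absorbed_dose(target):
    """MIRD formalism: D(target) = sum over sources of A~(source) * S(target <- source)."""
    return sum(cumulated_activity[src] * s_values[(target, src)]
               for src in cumulated_activity)

print(absorbed_dose("liver"))      # self-dose dominates the cross-dose term
```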

  18. Note: A manifold ranking based saliency detection method for camera

    NASA Astrophysics Data System (ADS)

    Zhang, Libo; Sun, Yihan; Luo, Tiejian; Rahman, Mohammad Muntasir

    2016-09-01

    Research focused on salient object regions in natural scenes has attracted a lot of attention in computer vision and has been widely used in many applications such as object detection and segmentation. However, accurately focusing on the salient region while taking photographs of real-world scenery is still a challenging task. To deal with this problem, this paper presents a novel approach based on the human visual system, which works better through the use of both a background prior and a compactness prior. In the proposed method, we eliminate unsuitable boundaries with a fixed threshold to optimize the image boundary selection, which provides more precise estimations. Then, object detection, optimized with the compactness prior, is obtained by ranking with background queries. Salient objects are generally grouped together into connected areas that have compact spatial distributions. Experimental results on three public datasets demonstrate that the precision and robustness of the proposed algorithm are clearly improved.
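
    Ranking with background queries typically uses the closed-form manifold ranking solution f = (I - alpha*S)^-1 y on a graph of image elements. A toy sketch on a four-node graph (the affinity matrix and alpha value are illustrative, not the paper's superpixel graph):

```python
import numpy as np

def manifold_rank(W, y, alpha=0.99):
    """Closed-form manifold ranking f = (I - alpha*S)^-1 y with the
    symmetrically normalized affinity S = D^-1/2 W D^-1/2."""
    d = W.sum(axis=1)
    Dih = np.diag(1.0 / np.sqrt(d))
    S = Dih @ W @ Dih
    return np.linalg.solve(np.eye(len(W)) - alpha * S, y)

# toy 4-node graph: nodes 0-1 tightly coupled, 2-3 tightly coupled
W = np.array([[0., 1., .1, .1],
              [1., 0., .1, .1],
              [.1, .1, 0., 1.],
              [.1, .1, 1., 0.]])
y = np.array([1., 0., 0., 0.])       # node 0 acts as the query
f = manifold_rank(W, y)
print(f[1] > f[2])    # node 1, strongly tied to the query, ranks higher
```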

  19. A method of color correction of camera based on HSV model

    NASA Astrophysics Data System (ADS)

    Zhao, Rujin; Wang, Jin; Yu, Guobing; Zhong, Jie; Zhou, Wulin; Li, Yihao

    2014-09-01

    A novel color correction method for cameras based on the HSV (Hue, Saturation, and Value) model is proposed in this paper, which addresses the problem that the spectral response of a camera differs from the CIE criterion, so that image colors are aberrant. Firstly, the image color is corrected in the HSV model, to which the image is transformed from the RGB model; as a result, the image color accords with human vision, owing to the coherence between the HSV model and human perception. Secondly, a color checker with 24 patches under a standard light source is used to compute the correction coefficient matrix, which reconciles the camera's spectral response with the CIE criterion. Furthermore, the 24-patch color checker improves the applicability of the color correction coefficient matrix to different images. The experimental results show that the color difference between the corrected colors and the color checker is lower with the proposed method, and the corrected image colors are consistent with human vision.
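
    The correction-coefficient-matrix step can be sketched as a least-squares fit between measured and reference patch colors, followed by an HSV view of the result. The patch values below are invented for illustration; a real checker has 24 patches:

```python
import colorsys
import numpy as np

# Illustrative measured vs. reference patch colors (a real checker has 24)
measured = np.array([[0.80, 0.10, 0.10],    # camera's "red"
                     [0.10, 0.75, 0.15],    # camera's "green"
                     [0.15, 0.10, 0.85],    # camera's "blue"
                     [0.70, 0.70, 0.60]])   # camera's "gray"
reference = np.array([[1.00, 0.00, 0.00],
                      [0.00, 1.00, 0.00],
                      [0.00, 0.00, 1.00],
                      [0.75, 0.75, 0.75]])

# least-squares 3x3 correction matrix: reference ~= measured @ M
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
corrected = np.clip(measured @ M, 0.0, 1.0)

h, s, v = colorsys.rgb_to_hsv(*corrected[0])   # HSV view, as in the paper's pipeline
err_before = np.abs(measured - reference).max()
err_after = np.abs(corrected - reference).max()
print(err_after < err_before)
```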

  20. A Kinect™ camera based navigation system for percutaneous abdominal puncture

    NASA Astrophysics Data System (ADS)

    Xiao, Deqiang; Luo, Huoling; Jia, Fucang; Zhang, Yanfang; Li, Yong; Guo, Xuejun; Cai, Wei; Fang, Chihua; Fan, Yingfang; Zheng, Huimin; Hu, Qingmao

    2016-08-01

    Percutaneous abdominal puncture is a popular interventional method for the management of abdominal tumors. Image-guided puncture can help interventional radiologists improve targeting accuracy. Since the second generation of Kinect™ was released recently, we developed an optical navigation system to investigate its feasibility for guiding percutaneous abdominal puncture, and to compare its performance on needle insertion guidance with that of the first-generation Kinect™. For physical-to-image registration in this system, two surfaces extracted from preoperative CT and intraoperative Kinect™ depth images were matched using an iterative closest point (ICP) algorithm. A 2D shape image-based correspondence searching algorithm was proposed for generating a close initial position before ICP matching. Evaluation experiments were conducted on an abdominal phantom and on six beagles in vivo. For the phantom study, a two-factor experiment was designed to evaluate the effect of the operator's skill and trajectory on target positioning error (TPE). A total of 36 needle punctures were tested on a Kinect™ for Windows version 2 (Kinect™ V2). The target registration error (TRE), user error, and TPE are 4.26 ± 1.94 mm, 2.92 ± 1.67 mm, and 5.23 ± 2.29 mm, respectively. No statistically significant differences in TPE regarding operator's skill and trajectory are observed. Additionally, a Kinect™ for Windows version 1 (Kinect™ V1) was tested with 12 insertions, and the TRE evaluated with the Kinect™ V1 is statistically significantly larger than that with the Kinect™ V2. For the animal experiment, fifteen artificial liver tumors were punctured under guidance of the navigation system. The TPE was evaluated as 6.40 ± 2.72 mm, and its lateral and longitudinal components were 4.30 ± 2.51 mm and 3.80 ± 3.11 mm, respectively. This study demonstrates that the navigation accuracy of the proposed system is acceptable
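
    The ICP surface matching at the core of the registration can be sketched as follows: nearest-neighbour correspondences alternate with a closed-form SVD (Kabsch) rigid alignment. This is a generic point-to-point ICP, not the authors' implementation:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Kabsch/SVD step inside ICP: rigid R, t minimizing ||R p_i + t - q_i||."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cQ - R @ cP

def icp(P, Q, iters=10):
    """Minimal point-to-point ICP with brute-force nearest neighbours."""
    P = P.copy()
    for _ in range(iters):
        d = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid_transform(P, Q[d.argmin(axis=1)])
        P = P @ R.T + t
    return P

# cube corners, slightly rotated and shifted, stand in for the two surfaces
Q = np.array([[x, y, z] for x in (0., 1.) for y in (0., 1.) for z in (0., 1.)])
th = 0.05
Rz = np.array([[np.cos(th), -np.sin(th), 0], [np.sin(th), np.cos(th), 0], [0, 0, 1.]])
P = Q @ Rz.T + np.array([0.02, -0.01, 0.01])
aligned = icp(P, Q)
print(np.abs(aligned - Q).max() < 1e-6)
```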

  1. Development of a compact scintillator-based high-resolution Compton camera for molecular imaging

    NASA Astrophysics Data System (ADS)

    Kishimoto, A.; Kataoka, J.; Koide, A.; Sueoka, K.; Iwamoto, Y.; Taya, T.; Ohsuka, S.

    2017-02-01

    The Compton camera, which images the gamma-ray distribution by utilizing the kinematics of Compton scattering, is a promising detector capable of imaging across a wide energy range. In this study, we aim to construct a small-animal molecular imaging system covering a wide energy range by using the Compton camera. We developed a compact medical Compton camera based on a Ce-doped Gd3Al2Ga3O12 (Ce:GAGG) scintillator and multi-pixel photon counters (MPPCs). Basic performance tests confirmed that, at 662 keV, the typical energy resolution was 7.4% (FWHM) and the angular resolution was 4.5° (FWHM). We then used the medical Compton camera to conduct imaging experiments based on a 3-D image reconstruction algorithm using the multi-angle data acquisition method. The results confirmed that, for a 137Cs point source at a distance of 4 cm, the image had a spatial resolution of 3.1 mm (FWHM). Furthermore, we succeeded in producing a 3-D multi-color image of simultaneously present sources of different energies (22Na [511 keV], 137Cs [662 keV], and 54Mn [834 keV]).
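
    The Compton kinematics the camera relies on fix the scattering angle from the two deposited energies, defining a cone on which the source must lie. A sketch of the cone-angle computation (the energies in keV are an example, not measured data):

```python
import math

ME_C2_KEV = 511.0   # electron rest energy

def compton_cone_angle_deg(e_scatter, e_absorb):
    """Cone opening angle from deposited energies E1 (scatterer) and E2
    (absorber): cos(theta) = 1 - me*c^2 * (1/E2 - 1/(E1 + E2))."""
    cos_t = 1.0 - ME_C2_KEV * (1.0 / e_absorb - 1.0 / (e_scatter + e_absorb))
    return math.degrees(math.acos(cos_t))

# example: a 662 keV (137Cs) photon depositing 200 keV in the scatterer
print(round(compton_cone_angle_deg(200.0, 462.0), 1))
```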

  2. A novel multi slit X-ray backscatter camera based on synthetic aperture focusing

    NASA Astrophysics Data System (ADS)

    Wieder, Frank; Ewert, Uwe; Vogel, Justus; Jaenisch, Gerd-Rüdiger; Bellon, Carsten

    2017-02-01

    A special slit collimator was developed earlier for fast acquisition of X-ray backscatter images. The design was based on twisted slits (ruled surfaces) in a tungsten block. Comparison with alternative techniques such as flying-spot and coded-aperture pinhole imaging could not confirm the expected higher contrast sensitivity. In analogy to the coded aperture technique, a novel multi-slit camera was therefore designed and tested, with several twisted slits arranged in parallel in a metal block. The CAD designs of different multi-slit cameras were evaluated and optimized with the computer simulation packages aRTist and McRay. The camera projects a set of equal images, one per slit, onto the digital detector array, where they overlay each other. Afterwards, the aperture is corrected with a deconvolution algorithm to focus the overlaying projections into a single representation of the object. Furthermore, a correction of the geometric distortions due to the slit geometry is performed. The expected increase of the contrast-to-noise ratio is proportional to the square root of the number of parallel slits in the camera. However, additional noise originating from the deconvolution operation has to be considered. The slit design, functional principle, and the expected limits of this technique are discussed.
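
    The claimed CNR scaling can be illustrated with a toy noise model in which each slit contributes an independent noisy copy of the same projection, so the summed signal grows linearly while the noise grows as the square root of the slit count (deconvolution noise is ignored here):

```python
import numpy as np

rng = np.random.default_rng(1)

def cnr(n_slits, trials=2000, signal=1.0, noise=1.0):
    """Summing n independent noisy copies of one projection: the signal grows
    as n, the noise as sqrt(n), so the contrast-to-noise ratio grows as sqrt(n)."""
    samples = signal * n_slits + np.sqrt(n_slits) * rng.normal(0.0, noise, trials)
    return samples.mean() / samples.std()

c1, c9 = cnr(1), cnr(9)
print(round(c9 / c1))      # close to sqrt(9) = 3
```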

  3. Analysis on the detection capability of the space-based camera for the space debris

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Fugang; Ye, Zhao; Ge, Xianying; Yin, Huan; Cao, Qipeng; Zhu, Jun

    2016-10-01

    The detection capability of a space-based camera for space debris is analyzed in this paper on the basis of the maximum detection range. We apply a grid generation method to the debris target and analyze the shadowing effects among the grids, then build a geometric model of a cone-shaped target. A calculation model of the optical-infrared characteristics is established, taking into consideration the target's self-radiation and the radiation reflection characteristics of the surface material. In the simulation presented here, the radiation energy of the target depends only on the reflection of Earth's radiation and its self-radiation. Based on the maximum detection range formula, the numerical simulation shows that when the target radiation intensity is 21.54 W/sr and the optical system aperture is 0.5 m, the maximum detection range is 17279 km. The simulation results theoretically contribute to the estimation of camera parameters and the analysis of detection capability.
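
    A point-source link budget reproduces numbers of this order. In the sketch below the detection-threshold flux is a hypothetical value chosen so that the abstract's example (21.54 W/sr, 0.5 m aperture, roughly 17,000 km) comes out; it is not taken from the paper:

```python
import math

def max_range(intensity_w_sr, aperture_d_m, tau=1.0, flux_min_w=1.42e-14):
    """Point-source link budget: flux at the detector is
    I / R^2 * (pi * D^2 / 4) * tau; solving flux = flux_min for R gives R_max.
    flux_min here is a hypothetical threshold, not a value from the paper."""
    area = math.pi * aperture_d_m ** 2 / 4.0
    return math.sqrt(intensity_w_sr * area * tau / flux_min_w)

r_km = max_range(21.54, 0.5) / 1000.0   # the abstract's example target and aperture
print(round(r_km))
```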

  4. Table-top laser-based source of femtosecond, collimated, ultrarelativistic positron beams.

    PubMed

    Sarri, G; Schumaker, W; Di Piazza, A; Vargas, M; Dromey, B; Dieckmann, M E; Chvykov, V; Maksimchuk, A; Yanovsky, V; He, Z H; Hou, B X; Nees, J A; Thomas, A G R; Keitel, C H; Zepf, M; Krushelnick, K

    2013-06-21

    The generation of ultrarelativistic positron beams with short duration (τ(e+) ≃ 30 fs), small divergence (θ(e+) ≃ 3 mrad), and high density (n(e+) ≃ 10^14-10^15 cm^-3) from a fully optical setup is reported. The detected positron beam propagates with a high-density electron beam and γ rays of similar spectral shape and peak energy, thus closely resembling the structure of an astrophysical leptonic jet. It is envisaged that this experimental evidence, besides its intrinsic relevance to laser-driven particle acceleration, may open the pathway for the small-scale study of astrophysical leptonic jets in the laboratory.

  5. Data acquisition system based on the Nios II for a CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Hu, Keliang; Wang, Chunrong; Liu, Yangbing; He, Chun

    2006-06-01

    The FPGA with Avalon Bus architecture and Nios soft-core processor developed by Altera Corporation is an advanced embedded solution for control and interface systems. A CCD data acquisition system with an Ethernet terminal port based on the TCP/IP protocol was implemented at NAOC. It is composed of an interface board with an Altera FPGA, 32 MB of SDRAM and some other accessory devices integrated on it, and two packages of control software running in the Nios II embedded processor and on the remote host PC, respectively. The system replaces a 7200-series image acquisition card inserted in a control and data acquisition PC: it downloads commands to an existing CCD camera and collects image data from the camera to the PC. The embedded chip in the system is a Cyclone FPGA with a configurable Nios II soft-core processor. The hardware structure of the system, the configuration of the embedded soft-core processor, and the peripherals of the processor in the FPGA are described. The C program run in the Nios II embedded system is built with the Nios II IDE kit and the C++ program used on the PC is developed in Microsoft's Visual C++ environment. Some key techniques in the design and implementation of the C and VC++ programs are presented, including the downloading of camera commands, initialization of the camera, DMA control, TCP/IP communication and UDP data uploading.

  6. Development and calibration of the Moon-based EUV camera for Chang'e-3

    NASA Astrophysics Data System (ADS)

    Chen, Bo; Song, Ke-Fei; Li, Zhao-Hui; Wu, Qing-Wen; Ni, Qi-Liang; Wang, Xiao-Dong; Xie, Jin-Jiang; Liu, Shi-Jie; He, Ling-Ping; He, Fei; Wang, Xiao-Guang; Chen, Bin; Zhang, Hong-Ji; Wang, Xiao-Dong; Wang, Hai-Feng; Zheng, Xin; E, Shu-Lin; Wang, Yong-Cheng; Yu, Tao; Sun, Liang; Wang, Jin-Ling; Wang, Zhi; Yang, Liang; Hu, Qing-Long; Qiao, Ke; Wang, Zhong-Su; Yang, Xian-Wei; Bao, Hai-Ming; Liu, Wen-Guang; Li, Zhe; Chen, Ya; Gao, Yang; Sun, Hui; Chen, Wen-Chang

    2014-12-01

    The process of development and calibration of the first Moon-based extreme ultraviolet (EUV) camera to observe Earth's plasmasphere is introduced, and the design, test and calibration results are presented. The EUV camera is composed of a multilayer film mirror, a thin film filter, a photon-counting imaging detector, a mechanism that can adjust the pointing direction in two dimensions, a protective cover, an electronic unit and a thermal control unit. The center wavelength of the EUV camera is 30.2 nm with a bandwidth of 4.6 nm. The field of view is 14.7° with an angular resolution of 0.08°, and the sensitivity of the camera is 0.11 counts s^-1 Rayleigh^-1. The geometric calibration, the absolute photometric calibration and the relative photometric calibration were carried out at different temperatures before launch to obtain a matrix that corrects geometric distortion and a matrix for relative photometric correction, which are used for in-orbit correction of the images to ensure their accuracy.

  7. Performance Analysis of a Low-Cost Triangulation-Based 3d Camera: Microsoft Kinect System

    NASA Astrophysics Data System (ADS)

    Chow, J. C. K.; Ang, K. D.; Lichti, D. D.; Teskey, W. F.

    2012-07-01

    Recent technological advancements have made active imaging sensors popular for 3D modelling and motion tracking. The 3D coordinates of signalised targets are traditionally estimated by matching conjugate points in overlapping images. Current 3D cameras can acquire point clouds at video frame rates from a single exposure station. In the area of 3D cameras, Microsoft and PrimeSense have collaborated and developed an active 3D camera based on the triangulation principle, known as the Kinect system. This off-the-shelf system costs less than 150 USD and has drawn a lot of attention from the robotics, computer vision, and photogrammetry disciplines. In this paper, the prospect of using the Kinect system for precise engineering applications was evaluated. The geometric quality of the Kinect system as a function of the scene (i.e. variation of depth, ambient light conditions, incidence angle, and object reflectivity) and the sensor (i.e. warm-up time and distance averaging) were analysed quantitatively. This system's potential in human body measurements was tested against a laser scanner and 3D range camera. A new calibration model for simultaneously determining the exterior orientation parameters, interior orientation parameters, boresight angles, leverarm, and object space features parameters was developed and the effectiveness of this calibration approach was explored.

  8. Pixel-based characterisation of CMOS high-speed camera systems

    NASA Astrophysics Data System (ADS)

    Weber, V.; Brübach, J.; Gordon, R. L.; Dreizler, A.

    2011-05-01

    Quantifying high-repetition rate laser diagnostic techniques for measuring scalars in turbulent combustion relies on a complete description of the relationship between detected photons and the signal produced by the detector. CMOS-chip based cameras are becoming an accepted tool for capturing high frame rate cinematographic sequences for laser-based techniques such as Particle Image Velocimetry (PIV) and Planar Laser Induced Fluorescence (PLIF) and can be used with thermographic phosphors to determine surface temperatures. At low repetition rates, imaging techniques have benefitted from significant developments in the quality of CCD-based camera systems, particularly with the uniformity of pixel response and minimal non-linearities in the photon-to-signal conversion. The state of the art in CMOS technology displays a significant number of technical aspects that must be accounted for before these detectors can be used for quantitative diagnostics. This paper addresses these issues.

  9. Fuzzy System-Based Target Selection for a NIR Camera-Based Gaze Tracker

    PubMed Central

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Park, Kang Ryoung

    2017-01-01

    Gaze-based interaction (GBI) techniques have been a popular subject of research in the last few decades. Among other applications, GBI can be used by persons with disabilities to perform everyday tasks, can serve as a game interface, and can play a pivotal role in the human-computer interface (HCI) field. While gaze tracking systems have shown high accuracy in GBI, detecting a user's gaze for target selection is a challenging problem that needs to be considered while using a gaze detection system. Past research has used eye blinking for this purpose as well as dwell-time-based methods, but these techniques are either inconvenient for the user or require a long time for target selection. Therefore, in this paper, we propose a fuzzy system-based target selection method for near-infrared (NIR) camera-based gaze trackers. The results of the performed experiments, together with tests of usability and on-screen keyboard use, show that the proposed method is better than previous ones. PMID:28420114

  10. Positron emission tomography (PET) imaging with 18F-based radiotracers

    PubMed Central

    Alauddin, Mian M

    2012-01-01

    Positron emission tomography (PET) is a nuclear medicine imaging technique that is widely used in early detection and treatment follow-up of many diseases, including cancer. This modality requires positron-emitting isotope-labeled biomolecules, which are synthesized prior to performing imaging studies. Fluorine-18 is one of several isotopes of fluorine that is routinely used in radiolabeling of biomolecules for PET because of its positron-emitting property and favorable half-life of 109.8 min. The biologically active molecule most commonly used for PET is 2-deoxy-2-18F-fluoro-β-D-glucose (18F-FDG), an analogue of glucose, used for early detection of tumors. The concentrations of tracer accumulation (PET image) demonstrate the metabolic activity of tissues in terms of regional glucose metabolism and accumulation. Other tracers are also used in PET to image tissue concentration. In this review, information on fluorination and radiofluorination reactions, radiofluorinating agents, and radiolabeling of various compounds and their application in PET imaging is presented. PMID:23133802
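
    A practical consequence of the 109.8 min half-life is routine decay correction of measured activities. A minimal sketch:

```python
T_HALF_F18 = 109.8  # minutes

def decay_factor(minutes):
    """Fraction of 18F activity remaining: A(t) = A0 * 2**(-t / T_half)."""
    return 2.0 ** (-minutes / T_HALF_F18)

# one half-life, and a typical 60-minute uptake period
print(round(decay_factor(109.8), 3), round(decay_factor(60.0), 3))
```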

  11. Segmentation-based attenuation correction in positron emission tomography/magnetic resonance: erroneous tissue identification and its impact on positron emission tomography interpretation.

    PubMed

    Brendle, Cornelia; Schmidt, Holger; Oergel, Anja; Bezrukov, Ilja; Mueller, Mark; Schraml, Christina; Pfannenberg, Christina; la Fougère, Christian; Nikolaou, Konstantin; Schwenzer, Nina

    2015-05-01

    The objective of this study was to evaluate the frequency and characteristics of artifacts in segmentation-based attenuation correction maps (μ-maps) of positron emission tomography/magnetic resonance (PET/MR) and their impact on PET interpretation and the standardized uptake value (SUV) quantification in normal tissue and lesions. The study was approved by the local institutional review board. Attenuation maps of 100 patients with PET/MR and preceding PET/computed tomography examination were retrospectively inspected for artifacts (tracers: 2-deoxy-2-[¹⁸F]fluoro-D-glucose (¹⁸F-FDG), ¹¹C-Choline, ⁶⁸Ga-DOTATOC, ⁶⁸Ga-DOTATATE, ¹¹C-Methionine). The artifacts were subdivided into 9 different groups on the basis of their localization and appearance. The impact of μ-map artifacts in normal tissue and lesions on PET interpretation was evaluated qualitatively via visual analysis in synopsis with the non-attenuation-corrected (NAC) PET as well as quantitatively by comparing the SUV in artifact regions to reference regions. Attenuation map artifacts were found in 72% of the head/neck data sets, 61% of the thoracic data sets, 25% of the upper abdominal data sets, and 26% of the pelvic data sets. The most frequent localizations of the overall 276 artifacts were around metal implants (16%), in the lungs (19%), and outer body contours (31%). Twenty-one percent of all PET-avid lesions (38 of 184 lesions) were affected by artifacts in the majority without further consequences for visual PET interpretation. However, 9 PET-avid lung lesions were masked owing to μ-map artifacts and, thus, were only detectable on the NAC PET or additional MR imaging sequences. Quantitatively, μ-map artifacts led to significant SUV changes in areas with erroneous assignment of air instead of soft tissue (ie, metal artifacts) and of soft tissue instead of lung. Nevertheless, no change in diagnosis would have been caused by μ-map artifacts. Attenuation map artifacts that occur in a

  12. Radiobiological Modeling Based on 18F-Fluorodeoxyglucose Positron Emission Tomography Data for Esophageal Cancer

    PubMed Central

    Guerrero, Mariana; Tan, Shan; Lu, Wei

    2014-01-01

    Background We investigated the relationship of standardized uptake values (SUVs) to radiobiological parameters, such as tumor control probability (TCP), to allow for quantitative prediction of tumor response based on SUVs from 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET) before and after treatment for esophageal cancer. Methods We analyzed data from 20 esophageal cancer patients treated with chemoradiotherapy (CRT) followed by surgery. Tumor pathologic response to CRT was assessed in surgical specimens. Patients underwent 18F-FDG PET imaging before and after CRT. Rigid image registration was performed between both images. Because TCP in a heterogeneous tumor is a function of average cell survival, we modeled TCP as a function of the post- to pre-treatment SUV ratio (SUVR), a possible surrogate for average cell survival. TCP was represented by a sigmoid function with two parameters: SUVR50, the SUVR at which TCP = 0.5, and γ50, the slope of the curve at SUVR50. The two parameters and their confidence intervals (CIs) were estimated using the maximum-likelihood method. The correlation between SUV before CRT and SUV change was also studied. Results A TCP model as a function of SUV before and after treatment was developed for esophageal cancer patients. The maximum-likelihood estimate of SUVR50 was 0.47 (90% CI, 0.30-0.61) and of γ50 was 1.62 (90% CI, 0-4.2). High initial SUV and larger metabolic response (larger SUV decrease) were correlated, and this correlation was stronger among responders. Conclusions Our TCP model indicates that SUVR is a possible surrogate for cell survival in esophageal cancer patients. Although CIs are large as a result of the small patient sample, parameters for a TCP curve can be derived and an individualized TCP can be calculated for future patients. Initial SUV does not predict response, whereas a correlation is found between surrogates for initial tumor burden and
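
    One common sigmoid parameterization with a normalized midpoint slope can illustrate how the fitted values would be used; the exact functional form chosen by the authors may differ, so this is a sketch, not their model:

```python
import math

SUVR50, GAMMA50 = 0.47, 1.62    # maximum-likelihood estimates from the abstract

def tcp(suvr):
    """Sigmoid TCP decreasing with the post/pre SUV ratio; midpoint SUVR50,
    normalized midpoint slope gamma50. The authors' exact form may differ."""
    return 1.0 / (1.0 + math.exp(4.0 * GAMMA50 * (suvr / SUVR50 - 1.0)))

print(round(tcp(0.47), 2))      # 0.5 at the midpoint by construction
```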

  13. Cloud Base Height Measurements at Manila Observatory: Initial Results from Constructed Paired Sky Imaging Cameras

    NASA Astrophysics Data System (ADS)

    Lagrosas, N.; Tan, F.; Antioquia, C. T.

    2014-12-01

    Fabricated all-sky imagers are efficient and cost-effective instruments for cloud detection and classification. Continuous operation of such an instrument allows determination of cloud occurrence and, for a paired system, cloud base heights. In this study, a fabricated paired sky imaging system - consisting of two commercial digital cameras (Canon Powershot A2300) enclosed in weatherproof containers - was developed at Manila Observatory for the purpose of determining cloud base heights over the Manila Observatory area. One of the cameras is placed on the rooftop of Manila Observatory and the other on the rooftop of the university dormitory, 489 m from the first camera. The cameras are programmed to simultaneously take pictures every 5 min. Continuous operation of these cameras has been implemented since the end of May 2014, although data collection started at the end of October 2013. The data were processed following the algorithm proposed by Kassianov et al. (2005). The processing involves the calculation of a merit function that quantifies the overlap of the two pictures: when the two pictures are overlapped, the minimum of the merit function corresponds to the pixel column positions at which the pictures have the best overlap. Pictures of overcast sky proved difficult to process for cloud base height and were excluded. The figure below shows the initial results of the hourly average of cloud base heights from data collected from November 2013 to July 2014. Measured cloud base heights ranged from 250 m to 1.5 km. These are the heights of the cumulus and nimbus clouds that are dominant in this part of the world. Cloud base heights are low in the early hours of the day, indicating weak convection during these times; the increase in atmospheric convection can be deduced from higher cloud base heights in the afternoon. The decrease of cloud base heights after 15:00 follows the trend of decreasing solar
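
    The merit-function idea can be sketched as a search over column shifts followed by small-angle parallax triangulation. The angular pixel scale below is a hypothetical value and the images are synthetic; only the 489 m baseline comes from the text:

```python
import numpy as np

BASELINE_M = 489.0       # camera separation given in the text
RAD_PER_PIXEL = 0.02     # hypothetical angular pixel scale of the imagers

def merit(img_a, img_b, shift):
    """Mean squared difference over the overlap after a column shift."""
    return ((img_a[:, shift:] - img_b[:, :img_b.shape[1] - shift]) ** 2).mean()

def cloud_base_height(img_a, img_b, max_shift=50):
    """The shift minimizing the merit function gives the parallax in pixels;
    for small angles, h ~ B / (shift * rad_per_pixel)."""
    best = min(range(1, max_shift), key=lambda s: merit(img_a, img_b, s))
    return BASELINE_M / (best * RAD_PER_PIXEL)

# synthetic cloud edge seen 10 pixels apart by the two cameras
img1 = np.zeros((60, 120)); img1[:, 40:60] = 1.0
img2 = np.zeros((60, 120)); img2[:, 50:70] = 1.0
print(cloud_base_height(img2, img1))    # metres
```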

  14. Volcano geodesy at Santiaguito using ground-based cameras and particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Johnson, J.; Andrews, B. J.; Anderson, J.; Lyons, J. J.; Lees, J. M.

    2012-12-01

    The active Santiaguito dome in Guatemala is an exceptional field site for ground-based optical observations owing to the bird's-eye viewing perspective from neighboring Santa Maria Volcano. From the summit of Santa Maria, the frequent (about one per hour) explosions and continuous lava flow effusion may be observed from a vantage point at a ~30 degree elevation angle, 1200 m above and 2700 m distant from the active vent. At these distances both video cameras and SLR cameras fitted with high-power lenses can effectively track blocky features translating and uplifting on the surface of Santiaguito's dome. We employ particle image velocimetry in the spatial frequency domain to map movements of ~10x10 m^2 surface patches with better than 10 cm displacement resolution. During three field campaigns to Santiaguito in 2007, 2009, and 2012 we used cameras to measure dome surface movements over a range of time scales. In 2007 and 2009 we used video cameras recording at 30 fps to track repeated rapid uplift (more than 1 m within 2 s) of the 30,000 m^2 dome associated with the onset of eruptive activity. We inferred that these uplift events were responsible for both a seismic long-period response and a bimodal infrasound pulse. In 2012 we returned to Santiaguito to quantify dome surface movements over hour-to-day-long time scales by recording time-lapse imagery at one-minute intervals. These longer time scales reveal dynamic structure in the uplift and subsidence trends, effusion rate, and surface flow patterns that is related to internal conduit pressurization. In 2012 we also performed particle image velocimetry with multiple, spatially separated cameras in order to reconstruct 3-dimensional surface movements.
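
    Particle image velocimetry in the spatial frequency domain amounts to locating the peak of an FFT-based cross-correlation between successive frames. A minimal sketch on synthetic texture:

```python
import numpy as np

def displacement(frame_a, frame_b):
    """Frequency-domain cross-correlation, as in PIV: the location of the
    correlation peak is the integer-pixel shift of frame_a relative to frame_b."""
    corr = np.fft.ifft2(np.fft.fft2(frame_a) * np.conj(np.fft.fft2(frame_b))).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    n, m = frame_a.shape
    return dy - n if dy > n // 2 else dy, dx - m if dx > m // 2 else dx

rng = np.random.default_rng(2)
a = rng.random((64, 64))                # textured dome-surface patch
b = np.roll(a, (3, -5), axis=(0, 1))    # patch moved 3 px down, 5 px left
print(displacement(b, a))
```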

  15. The Japanese Positron Factory

    NASA Astrophysics Data System (ADS)

    Okada, S.; Sunaga, H.; Kaneko, H.; Takizawa, H.; Kawasuso, A.; Yotsumoto, K.; Tanaka, R.

    1999-06-01

    The Positron Factory has been planned at the Japan Atomic Energy Research Institute (JAERI). The factory is expected to produce linac-based monoenergetic positron beams with the world's highest intensities of more than 10^10 e+/sec, which will be applied to R&D in materials science, biotechnology and basic physics and chemistry. In this article, results of the design studies are presented for the following essential components of the facilities: 1) conceptual design of a high-power electron linac with 100 MeV beam energy and 100 kW average beam power; 2) performance tests of the RF window of the high-power klystron and of the electron beam window; 3) development of a self-driven rotating electron-to-positron converter and its performance tests; 4) proposal of a multi-channel beam generation system for monoenergetic positrons, with a series of moderator assemblies based on a newly developed Monte Carlo simulation and a demonstrative experiment; 5) proposal of highly efficient moderator structures; 6) conceptual design of a local shield to suppress the surrounding radiation and activation levels.

  16. Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System

    PubMed Central

    2014-01-01

    We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To cope with variations of the camera pitch angle due to vehicle motion and road inclination, the proposed method estimates the virtual horizon from the size and position of vehicles in the captured image at run time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not visible on crowded roads. For the experiments, a vision-based forward collision warning system was implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with manually identified horizons, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results in both highway and urban traffic environments. PMID:24558344
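Under a flat-road pinhole-camera assumption, the role of the horizon row in monocular range estimation can be sketched as follows. This is the textbook geometry, not necessarily the paper's exact algorithm; the focal length and camera height used below are hypothetical parameters:

```python
def range_from_horizon(f_px, cam_height_m, y_contact_px, y_horizon_px):
    """Flat-road pinhole model: the distance to a vehicle whose tires
    touch the road at image row y_contact_px is
        Z = f * H / (y_contact - y_horizon),
    where y_horizon is the (virtual) horizon row and image rows grow
    downward. f_px: focal length in pixels; cam_height_m: camera height."""
    dy = y_contact_px - y_horizon_px
    if dy <= 0:
        raise ValueError("contact point must lie below the horizon")
    return f_px * cam_height_m / dy
```

For example, with f = 1000 px and the camera 1.5 m above the road, a contact point 60 rows below the horizon gives 25 m. An error of a few rows in the horizon estimate shifts the range noticeably, which is why run-time horizon estimation matters.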

  17. Detection of pointing errors with CMOS-based camera in intersatellite optical communications

    NASA Astrophysics Data System (ADS)

    Yu, Si-yuan; Ma, Jing; Tan, Li-ying

    2005-01-01

    For very high data rates, intersatellite optical communications hold a potential performance edge over microwave communications. The acquisition and tracking problem is critical because of the narrow transmit beam. In some systems a single array detector performs both spatial acquisition and tracking to detect pointing errors, so both a wide field of view and a high update rate are required. Past systems tended to employ CCD-based cameras with complex readout arrangements, but the additional complexity reduces the applicability of the array-based tracking concept. With the development of CMOS arrays, CMOS-based cameras can implement the single-array-detector concept. The area-of-interest feature of a CMOS-based camera allows a PAT system to read out only a portion of the array, and the maximum allowed frame rate increases as the size of the area of interest decreases under certain conditions. A commercially available CMOS camera with 105 fps @ 640×480 is employed in our PAT simulation system, in which only a subset of the pixels is actually read out. Beam angles varying within the field of view can be detected after passing through a Cassegrain telescope and an optical focusing system. Spot pixel values (8 bits per pixel) read out from the CMOS sensor are transmitted to a DSP subsystem via an IEEE 1394 bus, and pointing errors are computed with the centroid equation. Tests showed that: (1) 500 fps @ 100×100 is available in acquisition when the field of view is 1 mrad; (2) 3k fps @ 10×10 is available in tracking when the field of view is 0.1 mrad.
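The centroid computation mentioned for the pointing-error readout can be sketched as a generic intensity-weighted centroid over the area-of-interest window (variable names are ours, not the paper's):

```python
import numpy as np

def spot_centroid(window):
    """Intensity-weighted centroid (row, col) of a detector window;
    the offset of this centroid from the window centre gives the
    pointing error in pixels."""
    w = np.asarray(window, dtype=float)
    total = w.sum()
    if total == 0:
        raise ValueError("empty window: no spot detected")
    rows = np.arange(w.shape[0])
    cols = np.arange(w.shape[1])
    cy = (w.sum(axis=1) * rows).sum() / total   # row coordinate
    cx = (w.sum(axis=0) * cols).sum() / total   # column coordinate
    return cy, cx
```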

  18. Camera-Based Control for Industrial Robots Using OpenCV Libraries

    NASA Astrophysics Data System (ADS)

    Seidel, Patrick A.; Böhnke, Kay

    This paper describes a control system for industrial robots whose reactions are based on the analysis of images provided by a camera mounted on top of the robot. We show that such a control system can be designed and implemented with an open-source image processing library and inexpensive hardware. Using one specific robot as an example, we demonstrate the structure of a possible control algorithm running on a PC and its interaction with the robot.

  19. Using ground-based stereo cameras to derive cloud-level wind fields.

    PubMed

    Porter, John N; Cao, Guang Xia

    2009-08-15

    Upper-level wind fields are obtained by tracking the motion of cloud features as seen in calibrated ground-based stereo cameras. By tracking many cloud features, it is possible to obtain horizontal wind speed and direction over a cone area throughout the troposphere. Preliminary measurements were made at the Mauna Loa Observatory, and resulting wind measurements are compared with winds from the Hilo, Hawaii radiosondes.

  20. Evaluation of Compton gamma camera prototype based on pixelated CdTe detectors

    PubMed Central

    Calderón, Y.; Chmeissani, M.; Kolstein, M.; De Lorenzo, G.

    2014-01-01

    A proposed Compton camera prototype based on pixelated CdTe is simulated and evaluated in order to establish its feasibility and expected performance in real laboratory tests. The system is based on module units containing a 2×4 array of square CdTe detectors of 10×10 mm^2 area and 2 mm thickness. The detectors are pixelated and stacked forming a 3D detector with voxel sizes of 2 × 1 × 2 mm^3. The camera performance is simulated with the Geant4-based Architecture for Medicine-Oriented Simulations (GAMOS), and the Origin Ensemble (OE) algorithm is used for the image reconstruction. The simulation shows that the camera can operate with up to 10^4 Bq source activities with equal efficiency and is completely saturated at 10^9 Bq. The efficiency of the system is evaluated using a simulated 18F point source phantom in the center of the Field-of-View (FOV), achieving an intrinsic efficiency of 0.4 counts per second per kilobecquerel. The spatial resolution measured from the point spread function (PSF) shows a FWHM of 1.5 mm along the direction perpendicular to the scatterer, making it possible to distinguish two points at 3 mm separation with a peak-to-valley ratio of 8. PMID:24932209

  1. Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras

    PubMed Central

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-01-01

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions which do not affect the robot's movement from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. A multiple relevance vector machine (RVM) classifier is then designed to classify obstacles into four possible classes based on the terrain traversability and geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679

  2. PETIROC2 based readout electronics optimization for Gamma Cameras and PET detectors

    NASA Astrophysics Data System (ADS)

    Monzo, J. M.; Aguilar, A.; González-Montoro, A.; Lamprou, E.; González, A. J.; Hernández, L.; Mazur, D.; Colom, R. J.; Benlloch, J. M.

    2017-02-01

    Developing front-end electronics that improve charge detection and time resolution in gamma-ray detectors is one of the main tasks in improving the performance of new multimodal imaging systems that merge information from Magnetic Resonance Imaging and Gamma Camera or PET tomographs. The aim of this work is to study the behaviour and optimize the performance of an ASIC for PET and Gamma Camera applications based on SiPM detectors. PETIROC2 is a commercial ASIC developed by Weeroc to provide accurate charge and time coincidence resolutions. It has 32 analog input channels that are managed independently. Each channel is divided into two signals, one for time stamping using a TDC and another for charge measurement. In this work, PETIROC2 is evaluated in an experimental setup composed of two detectors based on pixelated LYSO crystals, each coupled to a Hamamatsu 4×4 SiPM array. Both detectors work in coincidence at an adjustable separation distance. In the present work, an energy resolution of 13.6% FWHM and a time coincidence resolution of 815 ps FWHM have been obtained. These results will be useful to optimize and improve PETIROC2-based PET and Gamma Camera systems.

  3. Design optimisation of a TOF-based collimated camera prototype for online hadrontherapy monitoring

    NASA Astrophysics Data System (ADS)

    Pinto, M.; Dauvergne, D.; Freud, N.; Krimmer, J.; Letang, J. M.; Ray, C.; Roellinghoff, F.; Testa, E.

    2014-12-01

    Hadrontherapy is an innovative radiation therapy modality whose main advantage is the target conformality allowed by the physical properties of ion species. However, to fully exploit its potential, online monitoring is required to assess the treatment quality, namely monitoring devices relying on the detection of secondary radiations. Herein, a method based on Monte Carlo simulations is presented to optimise a multi-slit collimated camera employing time-of-flight selection of prompt-gamma rays for use in a clinical scenario. In addition, an analytical tool is developed based on the Monte Carlo data to predict the expected precision for a given geometrical configuration. Such a method follows the clinical workflow requirements for a solution that is simultaneously accurate and fast. Two different camera designs are proposed, considering different endpoints based on the trade-off between camera detection efficiency and spatial resolution, to be used in a proton therapy treatment with active dose delivery and assuming a homogeneous target.

  4. Shadow detection in camera-based vehicle detection: survey and analysis

    NASA Astrophysics Data System (ADS)

    Barcellos, Pablo; Gomes, Vitor; Scharcanski, Jacob

    2016-09-01

    The number of vehicles in circulation in modern urban centers has greatly increased, which motivates the development of automatic traffic monitoring systems. Consequently, camera-based traffic monitoring systems are becoming more widely used, since they offer important technological advantages in comparison with traditional traffic monitoring systems (e.g., simpler maintenance and more flexibility for the design of practical configurations). The segmentation of the foreground (i.e., vehicles) is a fundamental step in the workflow of a camera-based traffic monitoring system. However, foreground segmentation can be negatively affected by vehicle shadows. This paper discusses the types of shadow detection methods available in the literature, their advantages, disadvantages, and in which situations these methods can improve camera-based vehicle detection for traffic monitoring. In order to compare the performance of these different types of shadow detection methods, experiments are conducted with typical methods of each category using publicly available datasets. This work shows that shadow detection definitely can improve the reliability of traffic monitoring systems, but the choice of the type of shadow method depends on the system specifications (e.g., tolerated error), the availability of computational resources, and prior information about the scene and its illumination in regular operation conditions.

  5. Obstacle classification and 3D measurement in unstructured environments based on ToF cameras.

    PubMed

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-06-18

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions which do not affect robot's movement from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both 3D information and intensity image obtained by the ToF camera. Consequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on the terrain traversability and geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with the existing obstacle recognition methods, the new approach is more accurate and efficient.

  6. The computation of cloud base height from paired whole-sky imaging cameras

    SciTech Connect

    Allmen, M.C.; Kegelmeyer, W.P. Jr.

    1994-03-01

    A major goal for global change studies is to improve the accuracy of general circulation models (GCMs) capable of predicting the timing and magnitude of greenhouse gas-induced global warming. Research has shown that cloud radiative feedback is the single most important effect determining the magnitude of possible climate responses to human activity. Of particular value in reducing the uncertainties associated with cloud-radiation interactions is the measurement of cloud base height (CBH), both because it is a dominant factor in determining the infrared radiative properties of clouds with respect to the earth's surface and lower atmosphere and because CBHs are essential to measuring cloud cover fraction. We have developed a novel approach to the extraction of cloud base height from pairs of whole sky imaging (WSI) cameras. The core problem is to spatially register cloud fields from widely separated WSI cameras; once this registration is complete, triangulation provides the CBH measurements. The wide camera separation (necessary to cover the desired observation area) and the self-similarity of clouds defeat all standard matching algorithms when applied to static views of the sky. To address this, our approach is based on optical flow methods that exploit the fact that modern WSIs provide sequences of images. We describe the algorithm and present its performance as evaluated both on real data validated by ceilometer measurements and on a variety of simulated cases.
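Once two cameras see the same cloud feature, the triangulation itself is simple plane geometry. As a sketch in our own notation, assuming each feature's zenith angle has already been projected onto the baseline direction:

```python
def cloud_base_height(baseline_m, tan_a, tan_b):
    """Height of a matched cloud feature above two ground cameras
    separated by baseline_m. tan_a and tan_b are the signed tangents of
    the feature's projected zenith angle as seen from cameras A and B
    (positive toward camera B), so h = baseline / (tan_a - tan_b)."""
    parallax = tan_a - tan_b
    if parallax <= 0:
        raise ValueError("matched feature must show positive parallax")
    return baseline_m / parallax
```

For instance, cameras 1000 m apart viewing a feature 2000 m up and 400 m along the baseline from camera A see tangents of 0.2 and -0.3, recovering the 2000 m height.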

  7. Evaluation of Compton gamma camera prototype based on pixelated CdTe detectors.

    PubMed

    Calderón, Y; Chmeissani, M; Kolstein, M; De Lorenzo, G

    2014-06-01

    A proposed Compton camera prototype based on pixelated CdTe is simulated and evaluated in order to establish its feasibility and expected performance in real laboratory tests. The system is based on module units containing a 2×4 array of square CdTe detectors of 10×10 mm(2) area and 2 mm thickness. The detectors are pixelated and stacked forming a 3D detector with voxel sizes of 2 × 1 × 2 mm(3). The camera performance is simulated with Geant4-based Architecture for Medicine-Oriented Simulations(GAMOS) and the Origin Ensemble(OE) algorithm is used for the image reconstruction. The simulation shows that the camera can operate with up to 10(4) Bq source activities with equal efficiency and is completely saturated at 10(9) Bq. The efficiency of the system is evaluated using a simulated (18)F point source phantom in the center of the Field-of-View (FOV) achieving an intrinsic efficiency of 0.4 counts per second per kilobecquerel. The spatial resolution measured from the point spread function (PSF) shows a FWHM of 1.5 mm along the direction perpendicular to the scatterer, making it possible to distinguish two points at 3 mm separation with a peak-to-valley ratio of 8.

  8. Design optimisation of a TOF-based collimated camera prototype for online hadrontherapy monitoring.

    PubMed

    Pinto, M; Dauvergne, D; Freud, N; Krimmer, J; Letang, J M; Ray, C; Roellinghoff, F; Testa, E

    2014-12-21

    Hadrontherapy is an innovative radiation therapy modality for which one of the main key advantages is the target conformality allowed by the physical properties of ion species. However, in order to maximise the exploitation of its potentialities, online monitoring is required in order to assert the treatment quality, namely monitoring devices relying on the detection of secondary radiations. Herein is presented a method based on Monte Carlo simulations to optimise a multi-slit collimated camera employing time-of-flight selection of prompt-gamma rays to be used in a clinical scenario. In addition, an analytical tool is developed based on the Monte Carlo data to predict the expected precision for a given geometrical configuration. Such a method follows the clinical workflow requirements to simultaneously have a solution that is relatively accurate and fast. Two different camera designs are proposed, considering different endpoints based on the trade-off between camera detection efficiency and spatial resolution to be used in a proton therapy treatment with active dose delivery and assuming a homogeneous target.

  9. An enhanced high-resolution EMCCD-based gamma camera using SiPM side detection.

    PubMed

    Heemskerk, J W T; Korevaar, M A N; Huizenga, J; Kreuger, R; Schaart, D R; Goorden, M C; Beekman, F J

    2010-11-21

    Electron-multiplying charge-coupled devices (EMCCDs) coupled to scintillation crystals can be used for high-resolution imaging of gamma rays in scintillation counting mode. However, the detection of false events as a result of EMCCD noise deteriorates the spatial and energy resolution of these gamma cameras and creates a detrimental background in the reconstructed image. In order to improve the performance of an EMCCD-based gamma camera with a monolithic scintillation crystal, arrays of silicon photomultipliers (SiPMs) can be mounted on the sides of the crystal to detect escaping scintillation photons, which are otherwise neglected. This will provide a priori knowledge about the correct number and energies of gamma interactions that are to be detected in each CCD frame. This information can be used as an additional detection criterion, e.g. for the rejection of otherwise falsely detected events. The method was tested using a gamma camera based on a back-illuminated EMCCD, coupled to a 3 mm thick continuous CsI:Tl crystal. Twelve SiPMs have been mounted on the sides of the CsI:Tl crystal. When the information of the SiPMs is used to select scintillation events in the EMCCD image, the background level for (99m)Tc is reduced by a factor of 2. Furthermore, the SiPMs enable detection of (125)I scintillations. A hybrid SiPM-/EMCCD-based gamma camera thus offers great potential for applications such as in vivo imaging of gamma emitters.

  10. Positron emission mammography imaging

    SciTech Connect

    Moses, William W.

    2003-10-02

    This paper examines current trends in Positron Emission Mammography (PEM) instrumentation and the performance tradeoffs inherent in them. The most common geometry is a pair of parallel planes of detector modules. They subtend a larger solid angle around the breast than conventional PET cameras, and so have both higher efficiency and lower cost. Extensions to this geometry include encircling the breast, measuring the depth of interaction (DOI), and dual-modality imaging (PEM and x-ray mammography, as well as PEM and x-ray guided biopsy). The ultimate utility of PEM may not be decided by instrument performance, but by biological and medical factors, such as the patient to patient variation in radiotracer uptake or the as yet undetermined role of PEM in breast cancer diagnosis and treatment.

  11. Cramer-Rao lower bound optimization of an EM-CCD-based scintillation gamma camera.

    PubMed

    Korevaar, Marc A N; Goorden, Marlies C; Beekman, Freek J

    2013-04-21

    Scintillation gamma cameras based on low-noise electron multiplication (EM-)CCDs can reach high spatial resolutions. For further improvement of these gamma cameras, more insight is needed into how various parameters that characterize these devices influence their performance. Here, we use the Cramer-Rao lower bound (CRLB) to investigate the sensitivity of the energy and spatial resolution of an EM-CCD-based gamma camera to several parameters. The gamma camera setup consists of a 3 mm thick CsI(Tl) scintillator optically coupled by a fiber optic plate to the E2V CCD97 EM-CCD. For this setup, the position and energy of incoming gamma photons are determined with a maximum-likelihood detection algorithm. To serve as the basis for the CRLB calculations, accurate models for the depth-dependent scintillation light distribution are derived and combined with a previously validated statistical response model for the EM-CCD. The sensitivity of the lower bounds for energy and spatial resolution to the EM gain and the depth-of-interaction (DOI) is calculated and compared to experimentally obtained values. Furthermore, calculations of the influence of the number of detected optical photons and noise sources in the image area on the energy and spatial resolution are presented. Trends predicted by CRLB calculations agree with experiments, although experimental values for spatial and energy resolution are typically a factor of 1.5 above the calculated lower bounds. Calculations and experiments both show that an intermediate EM gain setting results in the best possible spatial or energy resolution and that the spatial resolution of the gamma camera degrades rapidly as a function of the DOI. Furthermore, calculations suggest that a large improvement in gamma camera performance is achieved by an increase in the number of detected photons or a reduction of noise in the image area. A large noise reduction, as is possible with a new generation of EM-CCD electronics, may improve the camera's performance even further.
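For intuition, the simplest such bound can be worked out by hand: for N photons drawn from a Gaussian light distribution with no background noise, the photon-limited CRLB on the position estimate scales as sigma/sqrt(N). This is a textbook special case, far simpler than the depth-dependent model used in the paper:

```python
import math

def crlb_position_fwhm(sigma_mm, n_photons):
    """Photon-limited lower bound on the achievable position-estimate
    FWHM: N photons from a Gaussian of width sigma give Fisher
    information N / sigma^2, hence a standard-deviation bound of
    sigma / sqrt(N)."""
    std_bound = sigma_mm / math.sqrt(n_photons)
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * std_bound  # std -> FWHM
```

This toy bound reproduces the qualitative conclusion above: quadrupling the number of detected photons halves the achievable resolution bound.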

  12. Monte Carlo simulations of compact gamma cameras based on avalanche photodiodes.

    PubMed

    Després, Philippe; Funk, Tobias; Shah, Kanai S; Hasegawa, Bruce H

    2007-06-07

    Avalanche photodiodes (APDs), and in particular position-sensitive avalanche photodiodes (PSAPDs), are an attractive alternative to photomultiplier tubes (PMTs) for reading out scintillators for PET and SPECT. These solid-state devices offer high gain and quantum efficiency, and can potentially lead to more compact and robust imaging systems with improved spatial and energy resolution. In order to evaluate this performance improvement, we have conducted Monte Carlo simulations of gamma cameras based on avalanche photodiodes. Specifically, we investigated the relative merit of discrete APDs and PSAPDs in a simple continuous-crystal gamma camera. The simulated camera was composed of either a 4 x 4 array of four-channel 8 x 8 mm2 PSAPDs or an 8 x 8 array of 4 x 4 mm2 discrete APDs. These configurations, each requiring 64 readout channels, were used to read the scintillation light from a 6 mm thick continuous CsI:Tl crystal covering the entire 3.6 x 3.6 cm2 photodiode array. The simulations, conducted with GEANT4, accounted for the optical properties of the materials, the noise characteristics of the photodiodes and the nonlinear charge division in PSAPDs. The performance of the simulated camera was evaluated in terms of spatial resolution, energy resolution and spatial uniformity at 99mTc (140 keV) and 125I (approximately 30 keV) energies. Intrinsic spatial resolutions of 1.0 and 0.9 mm were obtained for the APD- and PSAPD-based cameras respectively for 99mTc, and corresponding values of 1.2 and 1.3 mm FWHM for 125I. The simulations yielded maximal energy resolutions of 7% and 23% for 99mTc and 125I, respectively. PSAPDs also provided better spatial uniformity than APDs in the simple system studied. These results suggest that APDs constitute an attractive technology especially suitable for building compact, small field of view gamma cameras dedicated, for example, to small animal or organ imaging.

  13. Treatment modification of yttrium-90 radioembolization based on quantitative positron emission tomography/CT imaging.

    PubMed

    Chang, Ted T; Bourgeois, Austin C; Balius, Anastasia M; Pasciak, Alexander S

    2013-03-01

    Treatment activity for yttrium-90 ((90)Y) radioembolization when calculated by using the manufacturer-recommended technique is only partially patient-specific and may result in a subtumoricidal dose in some patients. The authors describe the use of quantitative (90)Y positron emission tomography/computed tomography as a tool to provide patient-specific optimization of treatment activity and evaluate this new method in a patient who previously received traditional (90)Y radioembolization. The modified treatment resulted in a 40-Gy increase in absorbed dose to tumor and complete resolution of disease in the treated area within 3 months.

  14. Low-frequency vibration measurement based on camera-projector system

    NASA Astrophysics Data System (ADS)

    Lyu, Chengang; Liu, Yuxiang; Gao, Shuang; Bao, Zhiqiang; Chang, Yuqing; Gao, Jiale; Yang, Jiachen; Jin, Jie

    2017-09-01

    A low-frequency vibration measurement method based on a camera-projector system is proposed and verified experimentally at frequencies of a few hertz. A CCD camera gathers deformed fringes modulated by a vibrating trumpet at a constant sampling rate. Information on amplitude and frequency is contained in the series of observed images. The height value of each pixel in an image can be recovered using Fourier transform profilometry. By linking the height values of successive sampling frames into a motion trajectory, the frequency and amplitude of the vibrating object can be obtained from the trajectory curve. The experiment recovered the vibration process of a trumpet over half a cycle at a vibration frequency of 3 Hz. Subsequently, measurements at different frequencies and amplitudes were comparatively studied. The results show that slight vibrations of 35 µm and above can be detected by the system, which confirms the feasibility of the proposed system for industrial vibration measurement.
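The Fourier-transform-profilometry step can be sketched for a single fringe row. This is a minimal toy under strong assumptions (a known carrier frequency `carrier_bin`, no phase unwrapping); real systems filter a 2-D band around the carrier and unwrap the phase:

```python
import numpy as np

def fringe_phase(row, carrier_bin):
    """Wrapped phase of one fringe-image row via Fourier transform
    profilometry: isolate a band around the carrier peak in the
    spectrum, inverse transform, and take the argument of the
    resulting analytic signal."""
    spec = np.fft.fft(row)
    filt = np.zeros_like(spec)
    lo, hi = carrier_bin // 2, carrier_bin + carrier_bin // 2 + 1
    filt[lo:hi] = spec[lo:hi]        # keep only the positive carrier band
    return np.angle(np.fft.ifft(filt))
```

The height at each pixel then follows from the phase difference between the deformed pattern and a reference pattern, via the system's triangulation geometry.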

  15. 3D point cloud registration based on the assistant camera and Harris-SIFT

    NASA Astrophysics Data System (ADS)

    Zhang, Yue; Yu, HongYang

    2016-07-01

    3D (three-dimensional) point cloud registration is a hot topic in the field of 3D reconstruction, but most registration methods are neither real-time nor effective. This paper proposes a point cloud registration method for 3D reconstruction based on Harris-SIFT and an assistant camera. The assistant camera is used to pinpoint the mobile 3D reconstruction device. Feature points in the images are detected with the Harris operator, the main orientation of each feature point is calculated, and finally the feature point descriptors are generated after rotating the descriptor coordinates to each feature point's main orientation. Experimental results demonstrate the effectiveness of the proposed method.
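The Harris detection step can be sketched as follows. This is a compact numpy version with a 3×3 structure-tensor window and circular boundary handling, and k = 0.04 is the usual empirical constant; the paper's actual implementation details are not given:

```python
import numpy as np

def box3(a):
    """Sum over each pixel's 3x3 neighbourhood (circular boundaries)."""
    out = np.zeros_like(a)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    return out

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is
    the structure tensor of image gradients summed over a 3x3 window."""
    img = np.asarray(img, dtype=float)
    iy, ix = np.gradient(img)
    sxx, syy, sxy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    return (sxx * syy - sxy * sxy) - k * (sxx + syy) ** 2
```

Corners give large positive R, edges negative R, and flat regions near zero; keeping local maxima above a threshold yields the feature points to be described.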

  16. Optical character recognition of camera-captured images based on phase features

    NASA Astrophysics Data System (ADS)

    Diaz-Escobar, Julia; Kober, Vitaly

    2015-09-01

    Nowadays, most digital information is acquired using mobile devices, especially smartphones. In particular, this brings the opportunity for optical character recognition in camera-captured images, and many recognition applications have recently been developed, such as recognition of license plates, business cards, receipts and street signs; document classification; augmented reality; and language translation. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadows and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase carries much important information independently of the Fourier magnitude. In this work we therefore propose a phase-based recognition system exploiting phase-congruency features for illumination and scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.

  17. Improved camera calibration method based on perpendicularity compensation for binocular stereo vision measurement system.

    PubMed

    Jia, Zhenyuan; Yang, Jinghao; Liu, Wei; Wang, Fuji; Liu, Yang; Wang, Lingli; Fan, Chaonan; Zhao, Kai

    2015-06-15

    High-precision calibration of binocular vision systems plays an important role in accurate dimensional measurements. In this paper, an improved camera calibration method is proposed. First, an accurate intrinsic parameters calibration method based on active vision with perpendicularity compensation is developed. Compared to the previous work, this method eliminates the effect of non-perpendicularity of the camera motion on calibration accuracy. The principal point, scale factors, and distortion factors are calculated independently in this method, thereby allowing the strong coupling of these parameters to be eliminated. Second, an accurate global optimization method with only 5 images is presented. The results of calibration experiments show that the accuracy of the calibration method can reach 99.91%.

  18. Removal of parasitic image due to metal specularity based on digital micromirror device camera

    NASA Astrophysics Data System (ADS)

    Zhao, Shou-Bo; Zhang, Fu-Min; Qu, Xing-Hua; Chen, Zhe; Zheng, Shi-Wei

    2014-06-01

    Visual inspection of highly reflective surfaces commonly faces a serious limitation: useful information on geometric structure and textural defects is masked by a parasitic image caused by specular highlights. To solve this problem, we propose an effective method for removing the parasitic image. Specifically, a digital micromirror device (DMD) camera for programmable imaging is first described; the strength of this optical system is that it processes scene rays before image formation. Based on the DMD camera, an iterative algorithm of modulated-region selection, precise region mapping, and multi-modulation removes the parasitic image and reconstructs a corrected image. Finally, experimental results show the performance of the proposed approach.

  19. Dynamic characteristics of laser-induced vapor bubble formation in water based on high speed camera

    NASA Astrophysics Data System (ADS)

    Zhang, Xian-zeng; Guo, Wenqing; Zhan, Zhenlin; Xie, Shusen

    2013-08-01

    In clinical practice, laser ablation usually takes place in a liquid environment such as water, blood or their mixture. Laser-induced vapor channel or bubble formation and the subsequent bubble dynamics are believed to have an important influence on tissue ablation. In this paper, the dynamic process of vapor bubble formation and eventual collapse induced by a pulsed Ho:YAG laser in static water was investigated using a high-speed camera. The results showed that a vapor channel / bubble can be produced with a pulsed Ho:YAG laser, and the whole dynamic process of vapor bubble formation, pulsation and eventual collapse can be monitored with the high-speed camera. The dynamic characteristics of the vapor bubble, such as the pulsation period and the maximum depth and width, were determined, and the dependence of these dynamic parameters on the incident radiant exposure is presented. On this basis, the influence of the vapor bubble on hard tissue ablation is discussed.

  20. Characterization of a CCD-camera-based system for measurement of the solar radial energy distribution

    NASA Astrophysics Data System (ADS)

    Gambardella, A.; Galleano, R.

    2011-10-01

    Charge-coupled device (CCD)-camera-based measurement systems offer the possibility to gather information on the solar radial energy distribution (sunshape). Sunshape measurements are very useful in designing high concentration photovoltaic systems and heliostats as they collect light only within a narrow field of view, the dimension of which has to be defined in the context of several different system design parameters. However, in this regard the CCD camera response needs to be adequately characterized. In this paper, uncertainty components for optical and other CCD-specific sources have been evaluated using indoor test procedures. We have considered CCD linearity and background noise, blooming, lens aberration, exposure time linearity and quantization error. Uncertainty calculation showed that a 0.94% (k = 2) combined expanded uncertainty on the solar radial energy distribution can be assumed.

  1. Secure Chaotic Map Based Block Cryptosystem with Application to Camera Sensor Networks

    PubMed Central

    Guo, Xianfeng; Zhang, Jiashu; Khan, Muhammad Khurram; Alghathbar, Khaled

    2011-01-01

    Recently, Wang et al. presented an efficient logistic-map-based block encryption system. The encryption system employs ciphertext feedback to make the sub-keys depend on the plaintext. Unfortunately, we discovered that their scheme is unable to withstand a key stream attack. To improve its security, this paper proposes a novel chaotic-map-based block cryptosystem. At the same time, a secure architecture for camera sensor networks is constructed. The network comprises a set of inexpensive camera sensors that capture the images, a sink node equipped with sufficient computation and storage capabilities, and a data processing server. Transmission security between the sink node and the server is obtained by utilizing the improved cipher. Both theoretical analysis and simulation results indicate that the improved algorithm overcomes the flaws while maintaining all the merits of the original cryptosystem. In addition, the computational cost and efficiency of the proposed scheme are encouraging for practical implementation in real environments as well as in camera sensor networks. PMID:22319371
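    To illustrate the general idea of logistic-map-based encryption (this is a generic textbook sketch, not Wang et al.'s scheme nor the improved cipher of this paper), the chaotic iteration x <- r*x*(1-x) can be quantized into a keystream and XORed with the data. Note that a keystream derived only from a fixed key, as below, is exactly the kind of construction vulnerable to keystream reuse, which is why the paper's design makes sub-keys plaintext-dependent:

```python
def logistic_keystream(x0, r, n, burn_in=100):
    """Generate n keystream bytes from the logistic map x <- r*x*(1-x)."""
    x = x0
    for _ in range(burn_in):            # discard the transient iterations
        x = r * x * (1.0 - x)
    out = bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)  # quantize map state to one byte
    return bytes(out)

def xor_cipher(data, key_x0=0.3141, r=3.99):
    """Encrypt/decrypt by XOR with the chaotic keystream (symmetric)."""
    ks = logistic_keystream(key_x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

msg = b"camera sensor network payload"
ct = xor_cipher(msg)
pt = xor_cipher(ct)   # XOR with the same keystream recovers the plaintext
```

The initial condition x0 and parameter r play the role of the secret key; r close to 4 keeps the map in its chaotic regime.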

  2. Method for calibration accuracy improvement of projector-camera-based structured light system

    NASA Astrophysics Data System (ADS)

    Nie, Lei; Ye, Yuping; Song, Zhan

    2017-07-01

    Calibration is a critical step for the projector-camera-based structured light system (SLS). Conventional SLS calibration methods usually use the calibrated camera to calibrate the projector device, and optimize the calibration parameters by minimizing the two-dimensional (2-D) reprojection errors. Here, a three-dimensional (3-D) method is proposed for the optimization of SLS calibration parameters. The system is first calibrated with traditional calibration methods to obtain the primary calibration parameters. Then, a reference plane with precisely printed markers is used to optimize the primary calibration results. Three metric error criteria are introduced to evaluate the 3-D reconstruction accuracy of the reference plane. By treating all the system parameters as a global optimization problem and using the primary calibration parameters as initial values, a nonlinear multiobjective optimization problem can be established and solved. Compared with conventional calibration methods that adopt the 2-D reprojection errors for the camera and projector separately, a globally optimal calibration result can be obtained by the proposed procedure. Experimental results showed that, with the optimized calibration parameters, the measurement accuracy and 3-D reconstruction quality of the system can be greatly improved.
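    The abstract does not define its three metric error criteria, but plane-based criteria typically quantify how flat the reconstructed reference plane is. A hedged sketch of two plausible such metrics (least-squares plane-fit RMS residual and peak-to-valley flatness), computed via SVD:

```python
import numpy as np

def plane_fit_metrics(points):
    """Fit a least-squares plane to Nx3 points via SVD and return
    (rms_residual, peak_to_valley) of the signed point-to-plane distances."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = (points - centroid) @ normal   # signed distances to the fitted plane
    return float(np.sqrt(np.mean(d ** 2))), float(d.max() - d.min())

# Synthetic "reconstruction" of a tilted reference plane (noise-free).
rng = np.random.default_rng(0)
xy = rng.uniform(-50, 50, size=(500, 2))
z = 0.3 * xy[:, 0] - 0.1 * xy[:, 1] + 5.0
pts = np.column_stack([xy, z])
rms, pv = plane_fit_metrics(pts)   # both ~0 for an ideal reconstruction
```

In a calibration-refinement loop, metrics like these (evaluated on the real reconstructed plane) would form the objective terms of the nonlinear multiobjective optimization.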

  3. Reference point detection for camera-based fingerprint image based on wavelet transformation.

    PubMed

    Khalil, Mohammed S

    2015-04-30

    Fingerprint recognition systems essentially require core-point detection prior to fingerprint matching. The core point is used as a reference point to align the fingerprint with a template database, and when processing a larger fingerprint database it is necessary to consider the core point during feature extraction. Numerous core-point detection methods are available and have been reported in the literature. However, these methods are generally applied to scanner-based images. Hence, this paper explores the feasibility of applying a core-point detection method to fingerprint images obtained using a camera phone. The proposed method utilizes a discrete wavelet transform to extract the ridge information from a color image. Its performance is evaluated in terms of accuracy and consistency, two indicators calculated automatically by comparing the method's output with the defined core points. The method was tested on two data sets, collected in controlled and uncontrolled environments from 13 different subjects. In the controlled environment, the proposed method achieved a detection rate of 82.98%; in the uncontrolled environment, it yielded a detection rate of 78.21%. The proposed method thus gives promising results on the collected image database and outperforms an existing method.

  4. AOTF-based NO2 camera, results from the AROMAT-2 campaign

    NASA Astrophysics Data System (ADS)

    Dekemper, Emmanuel; Fussen, Didier; Vanhamel, Jurgen; Van Opstal, Bert; Maes, Jeroen; Merlaud, Alexis; Stebel, Kerstin; Schuettemeyer, Dirk

    2016-04-01

    A hyperspectral imager based on an acousto-optical tunable filter (AOTF) has been developed in the frame of the ALTIUS mission (Atmospheric Limb Tracker for the Investigation of the Upcoming Stratosphere). ALTIUS is a three-channel (UV, VIS, NIR) space-borne limb sounder aiming at the retrieval of concentration profiles of important trace species (O3, NO2, aerosols and more) with good vertical resolution. An optical breadboard was built from the VIS channel concept and now serves as a ground-based remote sensing instrument. Its good spectral resolution (0.6 nm), coupled with its natural imaging capabilities (a 6° square field of view sampled by a 512 x 512-pixel sensor), makes it suitable for the measurement of 2-D fields of NO2, similar to what is nowadays achieved with SO2 cameras. Our NO2 camera was one of the instruments that took part in the second Airborne ROmanian Measurements of Aerosols and Trace gases (AROMAT-2) campaign in August 2015. It was pointed at the smokestacks of the coal- and oil-burning power plant of Turceni (Romania) in order to image the emitted NO2 field and derive slant columns and instantaneous emission fluxes. The ultimate goal of the AROMAT campaigns is to prepare the validation of TROPOMI onboard Sentinel-5P. We briefly describe the instrumental concept of the NO2 camera, its heritage from the ALTIUS mission, and its advantages compared with previous attempts at reaching the same goal. Key results obtained with the camera during the AROMAT-2 campaign are presented and further improvements are discussed.

  5. Infrared line cameras based on linear arrays for industrial temperature measurement

    NASA Astrophysics Data System (ADS)

    Drogmoeller, Peter; Hofmann, Guenter; Budzier, Helmut; Reichardt, Thomas; Zimmerhackl, Manfred

    2002-03-01

    The PYROLINE/MikroLine cameras provide continuous, non-contact measurement of linear temperature distributions. Operation in conjunction with the IR_LINE software provides data recording, real-time graphical analysis, process integration and camera-control capabilities. One system is based on pyroelectric line sensors with either 128 or 256 elements, operating at frame rates of 128 and 544 Hz, respectively. Temperatures between 0 and 1300 °C are measurable in four distinct spectral ranges: 8-14 μm for low temperatures, 3-5 μm for medium temperatures, 4.8-5.2 μm for glass-temperature applications and 1.4-1.8 μm for high temperatures. A newly developed IR line camera (HRP 250), based upon a thermoelectrically cooled, 160-element PbSe detector array operating in the 3-5 μm spectral range, permits the thermal gradients of fast-moving targets to be measured in the range 50-180 °C at a maximum frequency of 18 kHz. This system was used to measure temperature distributions on rotating tires at velocities of more than 300 km/h (190 mph), and a modified version was used for real-time measurement of disk-brake rotors under load. Another line camera, consisting of a 256-element InGaAs array, was developed for the 1.4-1.8 μm spectral range to detect impurities of polypropylene and polyethylene in raw cotton at frequencies of 2.5-5 kHz.

  6. Intense source of slow positrons

    NASA Astrophysics Data System (ADS)

    Perez, P.; Rosowsky, A.

    2004-10-01

    We describe a novel design for an intense source of slow positrons based on pair production, with a beam of electrons from a 10 MeV accelerator hitting a thin target at a low incidence angle. The positrons are collected with a set of coils adapted to the large production angle, and the collection system is designed to inject the positrons into a Greaves-Surko trap (Phys. Rev. A 46 (1992) 5696). Such a source could be the basis for a series of experiments in fundamental and applied research, and would also be a prototype source for industrial applications in the field of defect characterization on the nanometer scale.

  7. Development of plenoptic infrared camera using low dimensional material based photodetectors

    NASA Astrophysics Data System (ADS)

    Chen, Liangliang

    Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns, and are widely used in military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rate, low resolution, temperature dependence and high cost, while low-dimensional-material nanotechnology based on the unusual carbon nanotube (CNT) has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing: not only for a fundamental understanding of the processes underlying the CNT photoresponse, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, a sandwich-structured sensor was fabricated between two polymer layers: the polyimide substrate isolated the sensor from background noise, and a top parylene packing blocked humid environmental factors. At the same time, the fabrication process was optimized by dielectrophoresis with real-time electrical detection and by multiple annealing steps, improving fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized with digital microscopy and a precise linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to realize the nano-sensor IR camera. To explore more of the infrared light field, we employ compressive sensing algorithms in light field sampling, 3-D imaging and compressive video sensing. The redundancy of the whole light field, including angular images for the light field, binocular images for the 3-D camera and temporal information of video streams, are extracted and

  8. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-01-01

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitation of previous studies on body-based person recognition, which use only visible light images, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps reduce the effects of noise, background, and variation in the appearance of the human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method enhances recognition accuracy compared to systems that use only visible light or only thermal images of the human body. PMID:28300783

  9. Measurement of food volume based on single 2-D image without conventional camera calibration.

    PubMed

    Yue, Yaofeng; Jia, Wenyan; Sun, Mingui

    2012-01-01

    Food portion size measurement, combined with a database of calories and nutrients, is important in the study of metabolic disorders such as obesity and diabetes. In this work, we present a convenient and accurate approach to the calculation of food volume by measuring several dimensions from a single 2-D image as the input. This approach does not require conventional checkerboard-based camera calibration, which is burdensome in practice. The only prior requirements of our approach are: 1) a circular container of known size, such as a plate, a bowl or a cup, is present in the image; and 2) the picture is taken under the reasonable assumption that the camera is held level with respect to its left and right sides, with its lens tilted down towards the food on the dining table. We show that, under these conditions, our approach provides a closed-form solution to camera calibration, allowing convenient measurement of food portion size using digital pictures.
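    The paper's closed-form calibration is not spelled out in the abstract, but the core idea of exploiting a known circular container can be sketched: under an orthographic approximation, a circle of known diameter projects to an ellipse whose major axis preserves the true diameter, giving a pixel-to-millimeter scale, while the minor/major ratio gives the tilt angle. All numbers below are illustrative, not from the paper:

```python
import math

def scale_and_tilt(major_axis_px, minor_axis_px, plate_diameter_mm):
    """Estimate the mm-per-pixel scale and camera tilt from the elliptical
    image of a circular plate (orthographic approximation; illustrative)."""
    mm_per_px = plate_diameter_mm / major_axis_px
    # A circle viewed at tilt angle t is foreshortened by cos(t) along one axis.
    tilt_deg = math.degrees(math.acos(minor_axis_px / major_axis_px))
    return mm_per_px, tilt_deg

mm_per_px, tilt = scale_and_tilt(300.0, 150.0, 270.0)
food_width_mm = 100.0 * mm_per_px   # convert a measured food dimension to mm
```

The scale and tilt together let image-plane measurements be mapped back to real dimensions on the table, from which volume follows.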

  10. Camera-Based Lock-in and Heterodyne Carrierographic Photoluminescence Imaging of Crystalline Silicon Wafers

    NASA Astrophysics Data System (ADS)

    Sun, Q. M.; Melnikov, A.; Mandelis, A.

    2015-06-01

    Carrierographic (spectrally gated photoluminescence) imaging of a crystalline silicon wafer using an InGaAs camera and two spread super-bandgap illumination laser beams is introduced in both low-frequency lock-in and high-frequency heterodyne modes. Lock-in carrierographic images of the wafer up to 400 Hz modulation frequency are presented. To overcome the frame rate and exposure time limitations of the camera, a heterodyne method is employed for high-frequency carrierographic imaging which results in high-resolution near-subsurface information. The feasibility of the method is guaranteed by the typical superlinearity behavior of photoluminescence, which allows one to construct a slow enough beat frequency component from nonlinear mixing of two high frequencies. Intensity-scan measurements were carried out with a conventional single-element InGaAs detector photocarrier radiometry system, and the nonlinearity exponent of the wafer was found to be around 1.7. Heterodyne images of the wafer up to 4 kHz have been obtained and qualitatively analyzed. With the help of the complementary lock-in and heterodyne modes, camera-based carrierographic imaging in a wide frequency range has been realized for fundamental research and industrial applications toward in-line nondestructive testing of semiconductor materials and devices.
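    The heterodyne mode above relies on the superlinear photoluminescence response mixing two high excitation frequencies into a slow beat component a camera can follow. A numerical sketch using the reported nonlinearity exponent of 1.7 (the frequencies and modulation depths are illustrative):

```python
import numpy as np

fs = 10_000                          # sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)      # 1 s record -> 1 Hz FFT bins
f1, f2 = 1000.0, 980.0               # two excitation frequencies (illustrative)

# Total excitation from the two modulated beams (kept strictly positive).
excitation = (1.0 + 0.4 * np.sin(2 * np.pi * f1 * t)
                  + 0.4 * np.sin(2 * np.pi * f2 * t))

# Superlinear photoluminescence response, PL ~ I**1.7, mixes the two tones.
pl = excitation ** 1.7

spectrum = np.abs(np.fft.rfft(pl))
beat_bin = int(f1 - f2)   # strong line at |f1 - f2| = 20 Hz
```

A purely linear response (exponent 1) would contain no difference-frequency line; the beat appears only because of the superlinearity, which is why the exponent measurement matters.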

  11. Validity and repeatability of a depth camera-based surface imaging system for thigh volume measurement.

    PubMed

    Bullas, Alice M; Choppin, Simon; Heller, Ben; Wheat, Jon

    2016-10-01

    Complex anthropometrics, such as area and volume, can identify changes in body size and shape that are not detectable with traditional anthropometrics of lengths, breadths, skinfolds and girths. However, taking these complex measurements with manual techniques (tape measurement and water displacement) is often unsuitable. Three-dimensional (3D) surface imaging systems are quick and accurate alternatives to manual techniques, but their use is restricted by cost, complexity and limited access. We have developed a novel low-cost, accessible and portable 3D surface imaging system based on consumer depth cameras. The aim of this study was to determine the validity and repeatability of the system in the measurement of thigh volume. The thigh volumes of 36 participants were measured with the depth camera system and with a high-precision commercially available 3D surface imaging system (3dMD). The depth camera system used within this study is highly repeatable (technical error of measurement (TEM) of <1.0% intra-calibration and ~2.0% inter-calibration) but systematically overestimates (~6%) thigh volume when compared to the 3dMD system. This suggests poor absolute agreement yet a close relationship which, once corrected, can yield a usable thigh volume measurement.

  12. Physical Activity Recognition Based on Motion in Images Acquired by a Wearable Camera

    PubMed Central

    Zhang, Hong; Li, Lu; Jia, Wenyan; Fernstrom, John D.; Sclabassi, Robert J.; Mao, Zhi-Hong; Sun, Mingui

    2011-01-01

    A new technique to extract and evaluate physical activity patterns from image sequences captured by a wearable camera is presented in this paper. Unlike standard activity recognition schemes, the video data captured by our device do not include the wearer him/herself. The physical activity of the wearer, such as walking or exercising, is analyzed indirectly through the camera motion extracted from the acquired video frames. Two key tasks, pixel correspondence identification and motion feature extraction, are studied to recognize activity patterns. We utilize a multiscale approach to identify pixel correspondences. When compared with existing methods such as the Good Features detector and the Speeded-Up Robust Features (SURF) detector, our technique is more accurate and computationally efficient. Once the pixel correspondences are determined, defining representative motion vectors, we build a set of activity pattern features based on motion statistics in each frame. Finally, the physical activity of the person wearing the camera is determined according to the global motion distribution in the video. Our algorithms are tested using different machine learning techniques, including K-nearest neighbor (KNN), naive Bayes and support vector machine (SVM) classifiers. The results show that many types of physical activities can be recognized from field-acquired real-world video. Our results also indicate that, with a design of specific motion features in the input vectors, different classifiers can be used successfully with similar performances. PMID:21779142

  13. Vibration extraction based on fast NCC algorithm and high-speed camera.

    PubMed

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to perform vibration measurement in real time and to avoid the mass loading introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera that can capture up to 1000 frames per second. To process the captured images on the computer, a normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly: the modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
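    As an illustration of the NCC score underlying the tracking above, a minimal exhaustive-search sketch (not the paper's accelerated local-search or subpixel variant):

```python
import numpy as np

def ncc_match(image, template):
    """Return the (row, col) position maximizing the zero-mean normalized
    cross-correlation between the template and image patches, plus the score."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0:
                continue          # skip constant patches (undefined NCC)
            score = float((p * t).sum() / denom)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

rng = np.random.default_rng(1)
img = rng.standard_normal((40, 40))
tpl = img[12:20, 7:15].copy()      # template cut from the image itself
pos, score = ncc_match(img, tpl)   # -> (12, 7), score ~ 1.0
```

Because NCC is invariant to local brightness offset and gain, the peak location tracks the target robustly; a local search around the previous frame's peak (as in the paper) avoids the full scan at each frame.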

  14. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-03-16

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitation of previous studies on body-based person recognition, which use only visible light images, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps reduce the effects of noise, background, and variation in the appearance of the human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method enhances recognition accuracy compared to systems that use only visible light or only thermal images of the human body.

  15. Automatic control of a robot camera for broadcasting based on cameramen's techniques and subjective evaluation and analysis of reproduced images.

    PubMed

    Kato, D; Katsuura, T; Koyama, H

    2000-03-01

    With the goal of achieving an intelligent robot camera system that can take dynamic images automatically through humanlike, natural camera work, we analyzed how images were shot, subjectively evaluated reproduced images, and examined effects of camerawork, using camera control technique as a parameter. It was found that (1) A high evaluation is obtained when human-based data are used for the position adjusting velocity curve of the target; (2) Evaluation scores are relatively high for images taken with feedback-feedforward camera control method for target movement in one direction; (3) Keeping the target within the image area using the control method that imitates human camera handling becomes increasingly difficult when the target changes both direction and velocity and becomes bigger and faster, and (4) The mechanical feedback method can cope with rapid changes in the target's direction and velocity, constantly keeping the target within the image area, though the viewer finds the image rather mechanical as opposed to humanlike.

  16. Spin polarized low-energy positron source

    NASA Astrophysics Data System (ADS)

    Petrov, V. N.; Samarin, S. N.; Sudarshan, K.; Pravica, L.; Guagliardo, P.; Williams, J. F.

    2015-06-01

    This paper presents an investigation of the spin polarization of positrons from a source based on the decay of the 22Na isotope. Positrons are moderated by transmission through a tungsten film and electrostatically focused and transported through a 90° deflector to produce a slow positron beam with the polarization vector normal to the linear momentum. The polarization of the beam was determined to be about 10% by comparison with polarized electron scattering asymmetries from a thin Fe film on W(110) at 10^-10 Torr. Low-energy electron emission from an Fe layer on W(100) surfaces under positron impact is explored, and it is shown that the intensity asymmetry of the electron emission as a function of incident positron energy can be used to estimate the polarization of the positron beam. Several materials with long mean free paths for spin relaxation are also considered as possible moderators with increased polarization of the emergent positrons.

  17. Random versus Game Trail-Based Camera Trap Placement Strategy for Monitoring Terrestrial Mammal Communities

    PubMed Central

    Cusack, Jeremy J.; Dickman, Amy J.; Rowcliffe, J. Marcus; Carbone, Chris; Macdonald, David W.; Coulson, Tim

    2015-01-01

    Camera trap surveys exclusively targeting features of the landscape that increase the probability of photographing one or several focal species are commonly used to draw inferences on the richness, composition and structure of entire mammal communities. However, these studies ignore expected biases in species detection arising from sampling only a limited set of potential habitat features. In this study, we test the influence of camera trap placement strategy on community-level inferences by carrying out two spatially and temporally concurrent surveys of medium to large terrestrial mammal species within Tanzania’s Ruaha National Park, employing either strictly game trail-based or strictly random camera placements. We compared the richness, composition and structure of the two observed communities, and evaluated what makes a species significantly more likely to be caught at trail placements. Observed communities differed marginally in their richness and composition, although differences were more noticeable during the wet season and for low levels of sampling effort. Lognormal models provided the best fit to rank abundance distributions describing the structure of all observed communities, regardless of survey type or season. Despite this, carnivore species were more likely to be detected at trail placements relative to random ones during the dry season, as were larger bodied species during the wet season. Our findings suggest that, given adequate sampling effort (> 1400 camera trap nights), placement strategy is unlikely to affect inferences made at the community level. However, surveys should consider more carefully their choice of placement strategy when targeting specific taxonomic or trophic groups. PMID:25950183

  18. A digital auto-focusing method based on CCD mosaicing for aerial camera

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Liu, Yang; Chen, Xing-lin

    2011-08-01

    A novel focusing method for remote sensing cameras is proposed in this paper. To evaluate the quality of images obtained by an aerial camera, an assessment function was constructed based on the wavelet transform, and the CCD mosaicing structure was exploited to solve the problem that evaluation values cannot be directly compared because the images captured by the aerial camera vary from frame to frame. On the basis of the wavelet evaluation function, image quality was assessed; the CCD mosaicing structure was then used to perform the auto-focusing process, and a simulation was carried out to validate the method. The paper makes three major contributions. First, the weights of the wavelet coefficients in the evaluation function were set according to the characteristics of the wavelet transform and the human visual system (HVS), so that the assessment result is close to subjective perception and insensitive to noise. Second, to make the function adaptive to images with different high-frequency content, the properties of the wavelet basis were analyzed; by comparing the evaluation effect on different images, the best-performing wavelet basis was found to be symlet2 with three levels of decomposition. Finally, by making use of the CCD mosaicing structure, the problem that auto-focusing of an aerial camera cannot directly use digital image processing was solved, and the region with the highest-frequency content was chosen as the evaluation area to improve the sensitivity of the function.
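    The idea of scoring focus by the energy of high-frequency wavelet subbands can be shown with a simplified single-level Haar transform (the paper uses symlet2 at three levels with HVS-based weights; Haar is used here only to keep the sketch dependency-free):

```python
import numpy as np

def haar_focus_measure(img):
    """Single-level 2-D Haar transform of an even-sized image; returns the
    total energy of the three high-frequency subbands as a sharpness score."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    lh = (a - b + c - d) / 2.0        # column-difference detail band
    hl = (a + b - c - d) / 2.0        # row-difference detail band
    hh = (a - b - c + d) / 2.0        # diagonal detail band
    return float((lh ** 2).sum() + (hl ** 2).sum() + (hh ** 2).sum())

rng = np.random.default_rng(2)
sharp = rng.standard_normal((64, 64))
# Crude 2x2 box blur: defocus suppresses high-frequency energy.
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(np.roll(sharp, 1, 0), 1, 1)) / 4.0
score_sharp = haar_focus_measure(sharp)
score_blur = haar_focus_measure(blurred)   # lower than score_sharp
```

Maximizing such a score over focus positions is the essence of wavelet-based auto-focusing; the deeper decomposition and coefficient weighting in the paper make the score more noise-robust and closer to subjective sharpness.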

  19. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    PubMed Central

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), was developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS; the study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between a traditional MDCS (MADC II) and the proposed system demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of downstream photogrammetric products. PMID:25835187

  20. Camera characterization using back-propagation artificial neural network based on Munsell system

    NASA Astrophysics Data System (ADS)

    Liu, Ye; Yu, Hongfei; Shi, Junsheng

    2008-02-01

    The camera output RGB signals do not directly correspond to the tristimulus values of the CIE standard colorimetric observer; that is, camera RGB is a device-dependent color space. To achieve accurate color information we need color characterization, which derives a transformation between camera RGB values and CIE XYZ values. In this paper we set up a back-propagation (BP) artificial neural network to realize the mapping from camera RGB to CIE XYZ. We used the Munsell Book of Color, with 1267 patches in total, as color samples. Each patch of the Munsell Book of Color was recorded by the camera to obtain its RGB values. The patches were photographed in a light booth with a dark surround, using 0/45 viewing/illuminating geometry and a D65 illuminant; the lighting on the reference target needs to be as uniform as possible. The BP network was a five-layer (3-10-10-10-3) network, selected through our experiments. From the 1267 samples, 1000 training samples were selected randomly and the remaining 267 served as testing samples. Experimental results show that the mean color difference between the reproduced and target colors is 0.5 CIELAB color-difference units, smaller than the maximum acceptable color difference of 2 CIELAB units. The results satisfy applications requiring more accurate color measurement, such as medical diagnostics, cosmetics production and color reproduction across different media.
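    The quoted 0.5-unit and 2-unit figures are CIELAB color differences; the simplest such metric is the CIE76 ΔE*ab, the Euclidean distance in L*a*b* space. A minimal computation with illustrative Lab triples (not values from the paper):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference between two CIELAB triples (L*, a*, b*)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

target = (52.0, 24.5, -8.0)          # illustrative target color
reproduced = (52.3, 24.1, -8.2)      # illustrative network-predicted color
de = delta_e_ab(target, reproduced)  # ~0.54, under the 2.0 acceptability limit
```

Averaging this distance over the test patches gives the mean color difference used to judge the characterization.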

  1. Camera on Vessel: A Camera-Based System to Measure Change in Water Volume in a Drinking Glass.

    PubMed

    Ayoola, Idowu; Chen, Wei; Feijs, Loe

    2015-09-18

    A major problem related to chronic health is patients' "compliance" with new lifestyle changes, medical prescriptions, recommendations, or restrictions. Heart-failure and hemodialysis patients are usually placed on fluid restrictions due to their hemodynamic status. A holistic approach to managing fluid imbalance will incorporate the monitoring of salt-water intake, body-fluid retention, and fluid excretion in order to provide effective intervention at an early stage. Such an approach creates a need to develop a smart device that can monitor the drinking activities of the patient. This paper employs an empirical approach to infer the real water level in a conically shaped glass and the volume difference due to changes in water level. The method uses a low-resolution miniaturized camera to obtain images using an Arduino microcontroller. The images are processed in MATLAB. Conventional segmentation techniques (such as a Sobel filter to obtain a binary image) are applied to extract the level gradient, and an ellipsoidal fitting helps to estimate the size of the cup. The fitting (using a least-squares criterion) between the derived measurements in pixels and the real measurements shows a low covariance between the estimated measurement and the mean. The correlation of the estimated results to ground truth produced a variation of 3% from the mean.
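    Once the water level in a conical glass is known, the consumed volume follows from the frustum formula, assuming the glass radius varies linearly with height (the dimensions below are illustrative, not from the paper):

```python
import math

def volume_to_level(h, H, r_bottom, r_top):
    """Water volume (cm^3) up to height h in a conical glass of total
    height H whose radius varies linearly from r_bottom to r_top."""
    r_h = r_bottom + (r_top - r_bottom) * h / H   # radius at the water surface
    # Frustum between the base (radius r_bottom) and the surface (radius r_h).
    return math.pi * h / 3.0 * (r_bottom ** 2 + r_bottom * r_h + r_h ** 2)

H, rb, rt = 12.0, 3.0, 4.0            # glass geometry in cm (illustrative)
before = volume_to_level(9.0, H, rb, rt)
after = volume_to_level(6.0, H, rb, rt)
sipped = before - after               # volume removed by drinking, cm^3
```

The ellipse fit in the paper serves to recover the cup radii and level in real units, after which a volume model of this kind converts level changes to intake.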

  2. Camera on Vessel: A Camera-Based System to Measure Change in Water Volume in a Drinking Glass

    PubMed Central

    Ayoola, Idowu; Chen, Wei; Feijs, Loe

    2015-01-01

    A major problem related to chronic health is patients’ “compliance” with new lifestyle changes, medical prescriptions, recommendations, or restrictions. Heart-failure and hemodialysis patients are usually placed on fluid restrictions due to their hemodynamic status. A holistic approach to managing fluid imbalance will incorporate the monitoring of salt-water intake, body-fluid retention, and fluid excretion in order to provide effective intervention at an early stage. Such an approach creates a need to develop a smart device that can monitor the drinking activities of the patient. This paper employs an empirical approach to infer the real water level in a conically shaped glass and the volume difference due to changes in water level. The method uses a low-resolution miniaturized camera to obtain images using an Arduino microcontroller. The images are processed in MATLAB. Conventional segmentation techniques (such as a Sobel filter to obtain a binary image) are applied to extract the level gradient, and an ellipsoidal fitting helps to estimate the size of the cup. The fitting (using a least-squares criterion) between derived measurements in pixels and the real measurements shows a low covariance between the estimated measurement and the mean. Comparison of the estimated results to ground truth produced a variation of 3% from the mean. PMID:26393600

  3. Note: Tormenta: An open source Python-powered control software for camera based optical microscopy

    NASA Astrophysics Data System (ADS)

    Barabas, Federico M.; Masullo, Luciano A.; Stefani, Fernando D.

    2016-12-01

    Until recently, PC control and synchronization of scientific instruments was only possible through closed-source expensive frameworks like National Instruments' LabVIEW. Nowadays, efficient cost-free alternatives are available in the context of a continuously growing community of open-source software developers. Here, we report on Tormenta, a modular open-source software for the control of camera-based optical microscopes. Tormenta is built on Python, works on multiple operating systems, and includes some key features for fluorescence nanoscopy based on single molecule localization.

  4. Note: Tormenta: An open source Python-powered control software for camera based optical microscopy.

    PubMed

    Barabas, Federico M; Masullo, Luciano A; Stefani, Fernando D

    2016-12-01

    Until recently, PC control and synchronization of scientific instruments was only possible through closed-source expensive frameworks like National Instruments' LabVIEW. Nowadays, efficient cost-free alternatives are available in the context of a continuously growing community of open-source software developers. Here, we report on Tormenta, a modular open-source software for the control of camera-based optical microscopes. Tormenta is built on Python, works on multiple operating systems, and includes some key features for fluorescence nanoscopy based on single molecule localization.

  5. Electronics for the camera of the First G-APD Cherenkov Telescope (FACT) for ground based gamma-ray astronomy

    NASA Astrophysics Data System (ADS)

    Anderhub, H.; Backes, M.; Biland, A.; Boller, A.; Braun, I.; Bretz, T.; Commichau, V.; Djambazov, L.; Dorner, D.; Farnier, C.; Gendotti, A.; Grimm, O.; von Gunten, H. P.; Hildebrand, D.; Horisberger, U.; Huber, B.; Kim, K.-S.; Köhne, J.-H.; Krähenbühl, T.; Krumm, B.; Lee, M.; Lenain, J.-P.; Lorenz, E.; Lustermann, W.; Lyard, E.; Mannheim, K.; Meharga, M.; Neise, D.; Nessi-Tedaldi, F.; Overkemping, A.-K.; Pauss, F.; Renker, D.; Rhode, W.; Ribordy, M.; Rohlfs, R.; Röser, U.; Stucki, J.-P.; Thaele, J.; Tibolla, O.; Viertel, G.; Vogler, P.; Walter, R.; Warda, K.; Weitzel, Q.

    2012-01-01

    Within the FACT project, we construct a new type of camera based on Geiger-mode avalanche photodiodes (G-APDs). Compared to photomultipliers, G-APDs are more robust, need a lower operation voltage and have the potential of higher photon-detection efficiency and lower cost, but were never fully tested in the harsh environments of Cherenkov telescopes. The FACT camera consists of 1440 G-APD pixels and readout channels, based on the DRS4 (Domino Ring Sampler) analog pipeline chip and commercial Ethernet components. Preamplifiers, trigger system, digitization, slow control and power converters are integrated into the camera.

  6. Positron trapping at grain boundaries

    SciTech Connect

    Dupasquier, A. ); Romero, R.; Somoza, A. )

    1993-10-01

    The standard positron trapping model has often been applied, as a simple approximation, to the interpretation of positron lifetime spectra in situations of diffusion-controlled trapping. This paper shows that this approximation is not sufficiently accurate, and presents a model based on the correct solution of the diffusion equation, in the version appropriate for studying positron trapping at grain boundaries. The model is used for the analysis of new experimental data on positron lifetime spectra in a fine-grained Al-Ca-Zn alloy. Previous results on similar systems are also discussed and reinterpreted. The analysis yields effective diffusion coefficients not far from the values known for the base metals of the alloys.
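
    For orientation, the standard (rate-limited) trapping model that the paper takes as its baseline can be sketched numerically. In the two-state model, free positrons annihilate at rate lam_b and are trapped at rate kappa; trapped positrons annihilate at rate lam_t, and the mean lifetime has the closed form (1 + kappa/lam_t)/(lam_b + kappa). The rate values below are illustrative, not taken from the paper.

```python
import math

def stm_mean_lifetime(lam_b, lam_t, kappa):
    """Closed-form mean positron lifetime of the standard trapping model.
    Rates in 1/ns; returns lifetime in ns."""
    return (1.0 + kappa / lam_t) / (lam_b + kappa)

def stm_mean_lifetime_numeric(lam_b, lam_t, kappa, dt=1e-4, t_max=20.0):
    # Euler integration of the two-state rate equations:
    #   dn_b/dt = -(lam_b + kappa) * n_b
    #   dn_t/dt =  kappa * n_b - lam_t * n_t
    n_b, n_t = 1.0, 0.0
    mean, t = 0.0, 0.0
    while t < t_max:
        decay = lam_b * n_b + lam_t * n_t   # annihilation rate (lifetime p.d.f.)
        mean += t * decay * dt              # accumulate E[t]
        dn_b = -(lam_b + kappa) * n_b
        dn_t = kappa * n_b - lam_t * n_t
        n_b += dn_b * dt
        n_t += dn_t * dt
        t += dt
    return mean
```

The point of the paper is precisely that for diffusion-controlled trapping at grain boundaries kappa is not a constant, so this closed form is only an approximation.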

  7. Positron microanalysis with high intensity beams

    SciTech Connect

    Hulett, L.D. Jr.; Donohue, D.L.

    1990-01-01

    One of the more common applications for a high-intensity slow-positron facility will be microanalysis of solid materials. In the first section of this paper some examples are given of procedures that can be developed. Since most of the attendees of this workshop are experts in positron spectroscopy, comprehensive descriptions are omitted. With the exception of positron emission microscopy, most of the procedures are based on those already in common use with broad beams. The utility of these methods has been demonstrated, but materials scientists use very few of them because positron microbeams are not generally available. A high-intensity positron facility will make microbeams easier to obtain and partially alleviate this situation. All the microanalysis techniques listed below share a common requirement: the ability to locate the microscopic detail or area of interest and to focus the positron beam exclusively on it. The last section of this paper suggests how a high-intensity positron facility might be designed with this capability built in. The method involves locating the specimen by scanning it with the positron microbeam and inducing a secondary-electron image that immediately reveals whether or not the positron beam is striking the proper portion of the specimen. This 'scanning positron microscope' will be a somewhat prosaic analog of the conventional SEM. It will, however, be an indispensable utility that enhances the practicality of positron microanalysis techniques. 6 refs., 1 fig.

  8. Optimum design of the carbon fiber thin-walled baffle for the space-based camera

    NASA Astrophysics Data System (ADS)

    Yan, Yong; Song, Gu; Yuan, An; Jin, Guang

    2011-08-01

    Designing the thin-walled baffle of a space-based camera is an important task in lightweight space camera development because of stringent mass requirements and a harsh mechanical environment, especially for a baffle built from carbon fiber. This paper describes the design process for such a carbon fiber thin-walled baffle, which is instructive for the design of other thin-walled baffles for space cameras. Through finite element analysis of the sensitivity of structural stiffness and strength to the wall parameters, the designers obtained the design margin that the baffle structure can tolerate within its development requirements, and from this established an optimization criterion for the geometric parameters of the thin-walled baffle. This guiding principle is significant for the optimum design of thin-walled baffles for space cameras: for a carbon fiber structure whose stiffness and strength can be tailored, the effect of the optimization is more remarkable when the design parameters are chosen appropriately. Combining manufacturing process constraints with the design requirements, the structural scheme of the thin-walled baffle was selected and the carbon fiber fabrication technology was optimized through FEM-based optimization, effectively reducing processing cost and cycle time. Meanwhile, the mass of the thin-walled baffle was reduced significantly while meeting the structural design requirements. Engineering evaluation shows that the baffle satisfies the practical needs of the space-based camera: its mass was reduced by about 20%, and the final assessment indices were significantly better than the overall design requirements. The design

  9. A pixellated γ-camera based on CdTe detectors clinical interests and performances

    NASA Astrophysics Data System (ADS)

    Chambron, J.; Arntz, Y.; Eclancher, B.; Scheiber, Ch; Siffert, P.; Hage Hali, M.; Regal, R.; Kazandjian, A.; Prat, V.; Thomas, S.; Warren, S.; Matz, R.; Jahnke, A.; Karman, M.; Pszota, A.; Nemeth, L.

    2000-07-01

    A mobile gamma camera dedicated to nuclear cardiology, based on a 15 cm×15 cm detection matrix of 2304 CdTe detector elements, 2.83 mm×2.83 mm×2 mm, has been developed with European Community support by academic and industrial research centres. The intrinsic properties of the semiconductor crystals (low ionisation energy, high energy resolution, high attenuation coefficient) are potentially attractive for improving gamma-camera performance. But their use as γ detectors for medical imaging at high resolution requires the production of high-grade materials and large quantities of sophisticated read-out electronics. The decision was taken to use CdTe rather than CdZnTe because the manufacturer (Eurorad, France) has extensive experience in producing high-grade materials with good homogeneity and stability, whose transport properties, characterised by the mobility-lifetime product, are at least 5 times greater than those of CdZnTe. The detector matrix is divided into 9 square units; each unit is composed of 256 detectors arranged in 16 modules. Each module consists of a thin ceramic plate holding a line of 16 detectors, in four groups of four for easy replacement, together with a special 16-channel integrated circuit designed by CLRC (UK). Detection and acquisition logic based on a DSP card and a PC has been programmed by Eurorad for spectral and counting acquisition modes. LEAP and LEHR collimators of commercial design, the mobile gantry and the clinical software were provided by Siemens (Germany). The γ-camera head housing, its general mounting and the electrical connections were carried out by the Phase Laboratory (CNRS, France). The compactness of the γ-camera head (thin detector matrix, electronic readout and collimator) facilitates the detection of close γ sources with the advantage of high spatial resolution. Such equipment is intended for bedside explorations. There is a growing clinical requirement in nuclear cardiology to assess early the extent of an

  10. Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3

    NASA Astrophysics Data System (ADS)

    Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.

    2014-12-01

    The "Immersive Virtual Moon Scene" system shows a virtual environment of the lunar surface in an immersive setting. Utilizing stereo 360-degree imagery from the panoramic camera of the Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, a stereo 360-degree panorama stitched from 112 images is projected onto the inside surface of a sphere according to the panorama orientation coordinates and camera parameters to build the virtual scene. Stars can be seen from the Moon at any time, so we render the Sun, planets and stars on the sphere as background according to the time and the rover's location, based on the Hipparcos catalogue. Immersed in the stereo virtual environment created by this image-based rendering technique, the operator can zoom and pan to interact with the virtual Moon scene and mark interesting objects. The hardware of the immersive virtual Moon system comprises four high-lumen projectors and a large curved screen 31 meters long and 5.5 meters high. This system, which takes all available panoramic camera data and uses it to create an immersive environment in which the operator can interact with the scene and mark objects of interest, contributed heavily to the establishment of science mission goals in the Chang'E-3 mission. After the Chang'E-3 mission, the lab housing this system will be open to the public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be shown to the public on the large screen in the lab. Based on lunar exploration data, we will make more immersive virtual Moon scenes and animations to help the public learn more about the Moon in the future.
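
    The core of projecting a stitched panorama onto the inside of a sphere is the mapping from panorama pixel coordinates to view directions. The sketch below assumes an equirectangular panorama layout (an assumption for illustration; the actual Chang'E-3 pipeline uses the panorama orientation coordinates and camera parameters, which are not given in the abstract).

```python
import math

def equirect_to_direction(u, v, width, height):
    """Map an equirectangular panorama pixel (u, v) to a unit view vector.
    Convention (assumed): u in [0, width) spans azimuth 0..2*pi,
    v in [0, height) spans polar angle 0..pi measured from the zenith."""
    azimuth = 2 * math.pi * u / width
    polar = math.pi * v / height
    x = math.sin(polar) * math.cos(azimuth)
    y = math.sin(polar) * math.sin(azimuth)
    z = math.cos(polar)
    return x, y, z
```

Rendering engines evaluate this mapping per vertex of the sphere mesh so that each texel of the panorama lands on the correct direction as seen from the rover.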

  11. Deconvolution based attenuation correction for time-of-flight positron emission tomography

    NASA Astrophysics Data System (ADS)

    Lee, Nam-Yong

    2017-10-01

    For an accurate quantitative reconstruction of the radioactive tracer distribution in positron emission tomography (PET), we need to take into account the attenuation of the photons by the tissues. For this purpose, we propose an attenuation correction method for the case when a direct measurement of the attenuation distribution in the tissues is not available. The proposed method can determine the attenuation factor up to a constant multiple by exploiting the consistency condition that the exact deconvolution of a noise-free time-of-flight (TOF) sinogram must satisfy. Simulation studies show that the proposed method corrects attenuation artifacts quite accurately for TOF sinograms over a wide range of temporal resolutions and noise levels, and improves the image reconstruction for TOF sinograms of higher temporal resolution by providing more accurate attenuation correction.
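
    As background for the attenuation factor the paper seeks to recover: in PET, the attenuation along a line of response (LOR) follows the Beer-Lambert law and, unlike SPECT, is independent of where on the line the annihilation occurred, since the two photons jointly traverse the whole line. A generic sketch (not the paper's consistency-condition method) for a discretized attenuation map:

```python
import math

def attenuation_factor(mu_samples, step_cm):
    """Attenuation factor for one PET line of response:
    a = exp(-sum(mu_i) * ds), with mu in 1/cm and step in cm.
    The factor multiplies the true (unattenuated) sinogram value."""
    return math.exp(-sum(mu_samples) * step_cm)
```

For example, 20 cm of water (mu roughly 0.096 /cm at 511 keV) attenuates a coincidence line to about 15% of its unattenuated value, which is why the correction matters quantitatively.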

  12. Novel fundus camera design

    NASA Astrophysics Data System (ADS)

    Dehoog, Edward A.

    A fundus camera is a complex optical system that uses the principle of reflex-free indirect ophthalmoscopy to image the retina. Despite being in existence since as early as the 1900s, little has changed in the design of the fundus camera, and there is minimal information about the design principles utilized. The parameters and specifications involved in the design of a fundus camera are determined, and their effect on system performance is discussed. Fundus cameras incorporating different design methods are modeled, and a performance evaluation based on design parameters is used to determine the effectiveness of each design strategy. By determining the design principles involved in the fundus camera, new cameras can be designed to include specific imaging modalities such as optical coherence tomography, imaging spectroscopy and imaging polarimetry, to gather additional information about the properties and structure of the retina. Design principles utilized to incorporate such modalities into fundus camera systems are discussed. The design, implementation and testing of a snapshot polarimeter fundus camera are demonstrated.

  13. Virus-based nanomaterials as positron emission tomography and magnetic resonance contrast agents: from technology development to translational medicine.

    PubMed

    Shukla, Sourabh; Steinmetz, Nicole F

    2015-01-01

    Viruses have recently emerged as ideal protein scaffolds for a new class of contrast agents that can be used in medical imaging procedures such as positron emission tomography (PET) and magnetic resonance imaging (MRI). Whereas synthetic nanoparticles are difficult to produce as homogeneous formulations due to the inherently stochastic nature of the synthesis process, virus-based nanoparticles are genetically encoded and are therefore produced as homogeneous and monodisperse preparations with a high degree of quality control. Because the virus capsids have a defined chemical structure that has evolved to carry cargoes of nucleic acids, they can be modified to carry precisely defined cargoes of contrast agents and can be decorated with spatially defined contrast reagents on the internal or external surfaces. Viral nanoparticles can also be genetically programmed or conjugated with targeting ligands to deliver contrast agents to specific cells, and the natural biocompatibility of viruses means that they are cleared rapidly from the body. Nanoparticles based on bacteriophages and plant viruses are safe for use in humans and can be produced inexpensively in large quantities as self-assembling recombinant proteins. Based on these considerations, a new generation of contrast agents has been developed using bacteriophages and plant viruses as scaffolds to carry positron-emitting radioisotopes such as [(18)F]fluorodeoxyglucose for PET imaging and iron oxide or Gd(3+) for MRI. Although challenges such as immunogenicity, loading efficiency, and regulatory compliance remain to be addressed, virus-based nanoparticles represent a promising new enabling technology for a new generation of highly biocompatible and biodegradable targeted imaging reagents. © 2015 Wiley Periodicals, Inc.

  14. A clinical gamma camera-based pinhole collimated system for high resolution small animal SPECT imaging.

    PubMed

    Mejia, J; Galvis-Alonso, O Y; Castro, A A de; Braga, J; Leite, J P; Simões, M V

    2010-12-01

    The main objective of the present study was to upgrade a clinical gamma camera to obtain high resolution tomographic images of small animal organs. The system is based on a clinical gamma camera to which we have adapted a special-purpose pinhole collimator and a device for positioning and rotating the target based on a computer-controlled step motor. We developed a software tool to reconstruct the target's three-dimensional distribution of emission from a set of planar projections, based on the maximum likelihood algorithm. We present details on the hardware and software implementation. We imaged phantoms and heart and kidneys of rats. When using pinhole collimators, the spatial resolution and sensitivity of the imaging system depend on parameters such as the detector-to-collimator and detector-to-target distances and pinhole diameter. In this study, we reached an object voxel size of 0.6 mm and spatial resolution better than 2.4 and 1.7 mm full width at half maximum when 1.5- and 1.0-mm diameter pinholes were used, respectively. Appropriate sensitivity to study the target of interest was attained in both cases. Additionally, we show that as few as 12 projections are sufficient to attain good quality reconstructions, a result that implies a significant reduction of acquisition time and opens the possibility for radiotracer dynamic studies. In conclusion, a high resolution single photon emission computed tomography (SPECT) system was developed using a commercial clinical gamma camera, allowing the acquisition of detailed volumetric images of small animal organs. This type of system has important implications for research areas such as Cardiology, Neurology or Oncology.
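
    The abstract notes that pinhole resolution and sensitivity depend on the pinhole diameter and the collimator geometry. The textbook approximation below (standard nuclear-medicine formulas, not taken from this paper) combines the geometric pinhole resolution with the demagnified intrinsic detector resolution; all distances are illustrative.

```python
import math

def pinhole_system_resolution(d, z, l, r_intrinsic):
    """Approximate pinhole SPECT system resolution (FWHM, same units as d).
    d: effective pinhole diameter, z: pinhole-to-object distance,
    l: pinhole-to-detector distance, r_intrinsic: intrinsic detector
    resolution. Textbook approximation: R_geom = d * (1 + z / l), and the
    intrinsic term is referred to the object plane via the magnification
    M = l / z."""
    m = l / z
    r_geom = d * (1 + z / l)
    return math.sqrt(r_geom ** 2 + (r_intrinsic / m) ** 2)
```

The formula makes the trade-off in the abstract explicit: shrinking the pinhole from 1.5 mm to 1.0 mm improves resolution (2.4 mm to 1.7 mm FWHM in the study) at the cost of sensitivity.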

  15. Comparison of the temperature accuracy between smart phone based and high-end thermal cameras using a temperature gradient phantom

    NASA Astrophysics Data System (ADS)

    Klaessens, John H.; van der Veen, Albert; Verdaasdonk, Rudolf M.

    2017-03-01

    Recently, low-cost smartphone-based thermal cameras have been considered for use in a clinical setting for monitoring physiological temperature responses such as body temperature change, local inflammation, perfusion changes or (burn) wound healing. These thermal cameras contain uncooled micro-bolometers with an internal calibration check and have a temperature resolution of 0.1 degree. For clinical applications, a fast quality measurement before use is required (absolute temperature check), and quality control (stability, repeatability, absolute temperature, absolute temperature differences) should be performed regularly. Therefore, a calibrated temperature phantom has been developed, based on thermistor heating at both ends of a black-coated metal strip, to create a controllable temperature gradient from room temperature (26 °C) up to 100 °C. The absolute temperatures on the strip are determined with five software-controlled PT-1000 sensors using lookup tables. In this study, three FLIR-ONE cameras and one high-end camera were checked with this temperature phantom. The results show relatively good agreement between both the low-cost and high-end cameras and the phantom temperature gradient, with temperature differences of 1 degree up to 6 degrees between the cameras and the phantom. The measurements were repeated to assess absolute temperature and temperature stability over the sensor area. Both low-cost and high-end thermal cameras measured relative temperature changes with high accuracy and absolute temperatures with constant deviations. Low-cost smartphone-based thermal cameras can be a good alternative to high-end thermal cameras for routine clinical measurements appropriate to the research question, provided regular calibration checks are performed for quality control.
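
    The phantom's reference temperatures come from PT-1000 sensors read through lookup tables. An equivalent closed-form sketch uses the Callendar-Van Dusen equation with the IEC 60751 coefficients, the standard characterization of platinum RTDs for temperatures at or above 0 °C (a generic alternative to the paper's lookup tables, not its actual implementation).

```python
import math

R0 = 1000.0      # PT-1000 nominal resistance at 0 degC (ohm)
A = 3.9083e-3    # IEC 60751 coefficients, valid for T >= 0 degC
B = -5.775e-7

def pt1000_resistance(t_degc):
    # Callendar-Van Dusen equation: R(T) = R0 * (1 + A*T + B*T^2)
    return R0 * (1 + A * t_degc + B * t_degc ** 2)

def pt1000_temperature(r_ohm):
    # Invert the quadratic for the physical (T >= 0 degC) root
    return (-A + math.sqrt(A ** 2 - 4 * B * (1 - r_ohm / R0))) / (2 * B)
```

Over the phantom's 26-100 °C range the quadratic term contributes well under a degree, which is why simple lookup tables also work in practice.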

  16. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.

    PubMed

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-08-31

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in a simulation and real environments.
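
    The optimization step minimizes Mahalanobis distance errors, i.e. reprojection residuals weighted by each feature's uncertainty. A minimal sketch of the distance itself (the covariance values are illustrative, not from the paper):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance sqrt((x - mean)^T cov^-1 (x - mean)).
    Features with larger covariance (more uncertain map points)
    contribute smaller distances for the same residual."""
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.sqrt(d @ np.linalg.solve(np.asarray(cov, dtype=float), d)))
```

With an identity covariance the distance reduces to the Euclidean residual; inflating the covariance along one axis down-weights errors along that axis, which is exactly how the probabilistic map tempers unreliable features.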

  17. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera

    PubMed Central

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-01-01

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in a simulation and real environments. PMID:26404284

  18. New Stereo Vision Digital Camera System for Simultaneous Measurement of Cloud Base Height and Atmospheric Visibility

    NASA Astrophysics Data System (ADS)

    Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.

    2013-12-01

    Clouds play an important role in many aspects of everyday life. They affect both the local weather and the global climate, and are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models, which make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure those parameters. Such instruments should also be automated and robust, since they may be deployed in remote places and be subject to adverse weather conditions. Although clouds are very important in environmental systems, they are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as in most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on the height of the cloud base, cloud cover and atmospheric visibility that ensure the safety of pilots and planes. Although instruments to measure those parameters are available on the market, their relatively high cost makes them unavailable to many local aerodromes. In this work we present a new prototype, recently developed and deployed in a local aerodrome as a proof of concept. It is composed of two digital cameras that capture photographs of the sky and allow the measurement of the cloud height from the parallax effect. The new development consists of a new geometry that allows the simultaneous measurement of cloud base height, wind speed at cloud base height and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry for measuring the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after measuring the contrast between a set of dark objects and the background sky. The prototype includes the latest hardware developments that
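
    The visibility estimate described above follows from the Lambert-Beer attenuation of contrast with distance, which leads to Koschmieder's relation. A minimal sketch (generic formulation, assuming Koschmieder's classic 2% contrast threshold; the prototype's exact calibration is not given in the abstract):

```python
import math

CONTRAST_THRESHOLD = 0.02   # Koschmieder's 2 % perception threshold

def extinction_from_contrast(apparent_contrast, distance_m, inherent_contrast=1.0):
    """Extinction coefficient sigma (1/m) from the apparent contrast of a
    dark object against the horizon sky at a known distance, using
    Lambert-Beer: C(x) = C0 * exp(-sigma * x)."""
    return -math.log(apparent_contrast / inherent_contrast) / distance_m

def visibility_m(sigma):
    # Koschmieder's law: range at which contrast falls to the threshold,
    # approximately 3.912 / sigma
    return -math.log(CONTRAST_THRESHOLD) / sigma
```

Measuring the contrast of several dark objects at known distances lets the extinction coefficient, and hence the visibility, be estimated by a simple fit.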

  19. CCD-camera-based diffuse optical tomography to study ischemic stroke in preclinical rat models

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Jing; Niu, Haijing; Liu, Yueming; Su, Jianzhong; Liu, Hanli

    2011-02-01

    Stroke, due to ischemia or hemorrhage, is a neurological deficit of the cerebral vasculature and is the third leading cause of death in the United States. More than 80 percent of strokes are ischemic, caused by blockage of an artery in the brain by thrombosis or arterial embolism. Hence, the development of an imaging technique to monitor cerebral ischemia and the effect of anti-stroke therapy is highly desirable. Near-infrared (NIR) optical tomography has great potential as a non-invasive imaging tool (due to its low cost and portability) for imaging embedded abnormal tissue, such as a dysfunctional area caused by ischemia. Moreover, NIR tomographic techniques have been successfully demonstrated in studies of cerebrovascular hemodynamics and brain injury. Compared to a fiber-based diffuse optical tomographic system, a CCD-camera-based system is more suitable for preclinical animal studies due to its simpler setup and lower cost. In this study, we have utilized the CCD-camera-based technique to image embedded inclusions based on tissue-phantom experimental data. We obtain good reconstructed images with two recently developed algorithms: (1) a depth compensation algorithm (DCA) and (2) a globally convergent method (GCM). We demonstrate volumetric tomographic reconstructions from tissue phantoms; the approach has great potential for determining and monitoring the effect of anti-stroke therapies.

  20. Development of NEMA-based software for gamma camera quality control.

    PubMed

    Rova, Andrew; Celler, Anna; Hamarneh, Ghassan

    2008-06-01

    We have developed a cross-platform software application that implements all of the basic standardized nuclear medicine scintillation camera quality control analyses, thus serving as an independent complement to camera manufacturers' software. Our application allows direct comparison of data and statistics from different cameras through its ability to uniformly analyze a range of file types. The program has been tested using multiple gamma cameras, and its results agree with comparable analysis by the manufacturers' software.
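
    Among the standardized analyses such software implements, the simplest is flood-field integral uniformity. The sketch below computes it as defined in the NEMA gamma camera standard, 100 * (max - min) / (max + min) over pixels in the useful field of view; note that the standard also low-pass filters the flood image first, which is omitted here for brevity.

```python
def integral_uniformity(counts):
    """NEMA-style integral uniformity (percent) of a flood image, given
    pixel counts inside the useful field of view as a 2D list. The NEMA
    standard additionally smooths the image with a 9-point filter before
    taking the extrema; that step is omitted in this sketch."""
    flat = [c for row in counts for c in row]
    hi, lo = max(flat), min(flat)
    return 100.0 * (hi - lo) / (hi + lo)
```

Implementing such metrics independently of the vendor software is precisely what allows direct comparison of statistics across cameras and file formats, as the abstract describes.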

  1. Development and characterization of a round hand-held silicon photomultiplier based gamma camera for intraoperative imaging

    PubMed Central

    Popovic, Kosta; McKisson, Jack E.; Kross, Brian; Lee, Seungjoon; McKisson, John; Weisenberger, Andrew G.; Proffitt, James; Stolin, Alexander; Majewski, Stan; Williams, Mark B.

    2017-01-01

    This paper describes the development of a hand-held gamma camera for intraoperative surgical guidance that is based on silicon photomultiplier (SiPM) technology. The camera incorporates a cerium doped lanthanum bromide (LaBr3:Ce) plate scintillator, an array of 80 SiPM photodetectors and a two-layer parallel-hole collimator. The field of view is circular with a 60 mm diameter. The disk-shaped camera housing is 75 mm in diameter, approximately 40.5 mm thick and has a mass of only 1.4 kg, permitting either hand-held or arm-mounted use. All camera components are integrated on a mobile cart that allows easy transport. The camera was developed for use in surgical procedures including determination of the location and extent of primary carcinomas, detection of secondary lesions and sentinel lymph node biopsy (SLNB). Here we describe the camera design and its principal operating characteristics, including spatial resolution, energy resolution, sensitivity uniformity, and geometric linearity. The gamma camera has an intrinsic spatial resolution of 4.2 mm FWHM, an energy resolution of 21.1 % FWHM at 140 keV, and a sensitivity of 481 and 73 cps/MBq when using the single- and double-layer collimators, respectively. PMID:28286345

  2. Development and Characterization of a Round Hand-Held Silicon Photomultiplier Based Gamma Camera for Intraoperative Imaging

    NASA Astrophysics Data System (ADS)

    Popovic, Kosta; McKisson, Jack E.; Kross, Brian; Lee, Seungjoon; McKisson, John; Weisenberger, Andrew G.; Proffitt, James; Stolin, Alexander; Majewski, Stan; Williams, Mark B.

    2014-06-01

    This paper describes the development of a hand-held gamma camera for intraoperative surgical guidance that is based on silicon photomultiplier (SiPM) technology. The camera incorporates a cerium doped lanthanum bromide (LaBr3:Ce) plate scintillator, an array of 80 SiPM photodetectors and a two-layer parallel-hole collimator. The field of view is circular with a 60 mm diameter. The disk-shaped camera housing is 75 mm in diameter, approximately 40.5 mm thick and has a mass of only 1.4 kg, permitting either hand-held or arm-mounted use. All camera components are integrated on a mobile cart that allows easy transport. The camera was developed for use in surgical procedures, including determination of the location and extent of primary carcinomas, detection of secondary lesions, and sentinel lymph node biopsy (SLNB). Here, we describe the camera design and its principal operating characteristics, including spatial resolution, energy resolution, sensitivity uniformity, and geometric linearity. The gamma camera has an intrinsic spatial resolution of 4.2 mm FWHM, an energy resolution of 21.1% FWHM at 140 keV, and a sensitivity of 481 and 73 cps/MBq when using the single- and double-layer collimators, respectively.

  3. Development and characterization of a round hand-held silicon photomultiplier based gamma camera for intraoperative imaging.

    PubMed

    Popovic, Kosta; McKisson, Jack E; Kross, Brian; Lee, Seungjoon; McKisson, John; Weisenberger, Andrew G; Proffitt, James; Stolin, Alexander; Majewski, Stan; Williams, Mark B

    2014-05-01

    This paper describes the development of a hand-held gamma camera for intraoperative surgical guidance that is based on silicon photomultiplier (SiPM) technology. The camera incorporates a cerium-doped lanthanum bromide (LaBr3:Ce) plate scintillator, an array of 80 SiPM photodetectors and a two-layer parallel-hole collimator. The field of view is circular with a 60 mm diameter. The disk-shaped camera housing is 75 mm in diameter, approximately 40.5 mm thick and has a mass of only 1.4 kg, permitting either hand-held or arm-mounted use. All camera components are integrated on a mobile cart that allows easy transport. The camera was developed for use in surgical procedures including determination of the location and extent of primary carcinomas, detection of secondary lesions and sentinel lymph node biopsy (SLNB). Here we describe the camera design and its principal operating characteristics, including spatial resolution, energy resolution, sensitivity uniformity, and geometric linearity. The gamma camera has an intrinsic spatial resolution of 4.2 mm FWHM, an energy resolution of 21.1% FWHM at 140 keV, and a sensitivity of 481 and 73 cps/MBq when using the single- and double-layer collimators, respectively.

  4. Analytically based photon scatter modeling for a multipinhole cardiac SPECT camera.

    PubMed

    Pourmoghaddas, Amir; Wells, R Glenn

    2016-11-01

    Dedicated cardiac SPECT scanners have improved performance over standard gamma cameras, allowing reductions in acquisition times and/or injected activity. One approach to improving performance has been to use pinhole collimators, but this can cause position-dependent variations in attenuation, sensitivity, and spatial resolution. CT attenuation correction (AC) and an accurate system model can compensate for many of these effects; however, scatter correction (SC) remains an outstanding issue. In addition, in cameras using cadmium-zinc-telluride-based detectors, a large portion of unscattered photons is detected with reduced energy (low-energy tail). Consequently, application of energy-based SC approaches in these cameras leads to a higher increase in noise than with standard cameras due to the subtraction of true counts detected in the low-energy tail. Model-based approaches with parallel-hole collimator systems accurately calculate scatter based on the physics of photon interactions in the patient and camera, and generate lower-noise estimates of scatter than energy-based SC. In this study, the accuracy of a model-based SC method was assessed using physical phantom studies on the GE-Discovery NM530c, and its performance was compared to a dual energy window (DEW)-SC method. The analytical photon distribution (APD) method was used to calculate the distribution of probabilities that emitted photons will scatter in the surrounding scattering medium and be subsequently detected. APD scatter calculations for (99m)Tc-SPECT (140 ± 14 keV) were validated with point-source measurements and with 15 anthropomorphic cardiac-torso phantom experiments at varying levels of extra-cardiac activity causing scatter inside the heart. The activity inserted into the myocardial compartment of the phantom was first measured using a dose calibrator. CT images were acquired on an Infinia Hawkeye (GE Healthcare) SPECT/CT and coregistered with emission data for AC. For comparison, DEW scatter
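    The dual energy window (DEW) technique used for comparison is a standard energy-based correction. A minimal sketch of the classic Jaszczak-style DEW estimate, with illustrative window widths and scaling factor k (not the settings used in this study):

```python
import numpy as np

def dew_scatter_correct(photopeak_counts, scatter_window_counts,
                        photopeak_width_keV, scatter_width_keV, k=0.5):
    """Classic dual-energy-window (DEW) scatter correction: scatter in the
    photopeak window is approximated from counts in a lower scatter window,
    scaled by the window-width ratio and an empirical factor k."""
    scatter_est = k * scatter_window_counts * (photopeak_width_keV / scatter_width_keV)
    # Subtract, clipping at zero so noise cannot drive counts negative.
    return np.clip(photopeak_counts - scatter_est, 0, None)

# Toy pixel: 1000 photopeak counts, 400 counts in the scatter window,
# both windows 28 keV wide (e.g. 140 keV +/- 10% photopeak).
corrected = dew_scatter_correct(1000.0, 400.0, 28.0, 28.0)
```

The subtraction of the (noisy) scatter-window term is exactly what raises the noise level compared to model-based estimates such as APD.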

  5. The E166 experiment: Development of an undulator-based polarized positron source for the international linear collider

    NASA Astrophysics Data System (ADS)

    Kovermann, J.; Stahl, A.; Mikhailichenko, A. A.; Scott, D.; Moortgat-Pick, G. A.; Gharibyan, V.; Pahl, P.; Põschl, R.; Schüler, K. P.; Laihem, K.; Riemann, S.; Schälicke, A.; Dollan, R.; Kolanoski, H.; Lohse, T.; Schweizer, T.; McDonald, K. T.; Batygin, Y.; Bharadwaj, V.; Bower, G.; Decker, F.-J.; Hast, C.; Iverson, R.; Sheppard, J. C.; Szalata, Z.; Walz, D.; Weidemann, A.; Alexander, G.; Reinherz-Aronis, E.; Berridge, S.; Bugg, W.; Efrimenko, Y.

    2007-12-01

    A longitudinally polarized positron beam is foreseen for the international linear collider (ILC). A proof-of-principle experiment has been performed in the final focus test beam at SLAC to demonstrate the production of polarized positrons for implementation at the ILC. The E166 experiment uses a 1 m long helical undulator in a 46.6 GeV electron beam to produce few-MeV photons with a high degree of circular polarization. These photons are then converted in a thin target to generate longitudinally polarized e^+ and e^-. The positron polarization is measured using a Compton transmission polarimeter. The data analysis has shown asymmetries near the expected values of 3.4% and ~1% for photons and positrons, respectively, and the inferred positron longitudinal polarization covers a range from 50% to 90%.

  6. Real-time implementation of camera positioning algorithm based on FPGA & SOPC

    NASA Astrophysics Data System (ADS)

    Yang, Mingcao; Qiu, Yuehong

    2014-09-01

    In recent years, with the development of positioning algorithms and FPGAs, real-time, rapid, and accurate camera-based positioning on an FPGA has become feasible. Through an in-depth study of embedded hardware and dual-camera positioning systems, this work sets up an infrared optical positioning system based on an FPGA and an SOPC, which enables real-time positioning of marker points in space. The work comprises: (1) using a CMOS sensor, driven by FPGA hardware, to capture images of the target points, realized here as visible-light LEDs mounted on the instrument; (2) median filtering of the images prior to feature point extraction, to suppress noise introduced by the platform; (3) extraction of the marker coordinates in an FPGA hardware circuit, using a new iterative threshold selection method to segment the images into binary label images, from which the feature point coordinates of the probe are computed by the center-of-gravity method; and (4) three-dimensional reconstruction of spatial coordinates from the planar-array CMOS system using the direct linear transformation (DLT) and epipolar constraints. An SOPC (system on a chip) with a dual-core architecture is used to run the matching and coordinate operations separately, thus increasing processing speed.
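    The center-of-gravity step in (3) can be sketched in NumPy (illustrative only; the paper implements it as an FPGA hardware circuit operating on the segmented binary image):

```python
import numpy as np

def centroid_of_gravity(binary_img):
    """Center of gravity (centroid) of the foreground pixels in a
    segmented binary image, as used to locate an LED marker point."""
    ys, xs = np.nonzero(binary_img)
    return xs.mean(), ys.mean()

# Synthetic 2x2 LED blob with its top-left corner at column 3, row 5:
# the centroid is at (3.5, 5.5).
img = np.zeros((10, 10), dtype=np.uint8)
img[5:7, 3:5] = 1
cx, cy = centroid_of_gravity(img)
```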

  7. Camera based low-cost system to monitor hydrological parameters in small catchments

    NASA Astrophysics Data System (ADS)

    Eltner, Anette; Sardemann, Hannes; Kröhnert, Melanie; Schwalbe, Ellen

    2017-04-01

    Gauging stations to measure hydrological parameters in small catchments are usually installed at only a few selected locations. Thus, extreme events that can evolve rapidly, particularly in small catchments (especially in mountainous areas), potentially causing severe damage, are insufficiently documented, which eventually leads to difficulties in modeling and forecasting these events. A conceptual approach using a low-cost camera-based alternative is introduced to measure water level, flow velocity and changing river cross sections. Synchronized cameras are used for 3D reconstruction of the water surface, enabling the location of flow velocity vectors measured in video sequences. Furthermore, water levels are measured automatically using an image-based approach originally developed for smartphone applications. Additional integration of a thermal sensor can increase the speed and reliability of the water level extraction. Finally, the reconstruction of the water surface as well as the surrounding topography allows for the detection of changing morphology. The introduced approach can help to increase the density of monitoring systems for hydrological parameters in (remote) small catchments and subsequently might be used as a warning system for extreme events.
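    Measuring flow velocity from video sequences typically amounts to estimating the displacement of surface patterns between frames. A minimal 1-D cross-correlation sketch of that principle (the actual system, not detailed in this abstract, would correlate 2-D image patches and convert pixel shifts to metric velocities via the 3-D water-surface reconstruction):

```python
import numpy as np

def estimate_shift(profile_t, profile_t1):
    """Estimate the displacement (in pixels) between two intensity
    profiles by locating the peak of their cross-correlation."""
    corr = np.correlate(profile_t1 - profile_t1.mean(),
                        profile_t - profile_t.mean(), mode="full")
    return np.argmax(corr) - (len(profile_t) - 1)

# A tracer pattern on the water surface, shifted by 3 pixels between frames.
rng = np.random.default_rng(0)
frame_t = rng.random(100)
frame_t1 = np.roll(frame_t, 3)
shift = estimate_shift(frame_t, frame_t1)  # pixels per frame interval
```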

  8. Development of a ground-based automatic camera network for NLC observations: first steps

    NASA Astrophysics Data System (ADS)

    Dalin, P.; Kirkwood, S.; Pertsev, N.; Romejko, V.

    Noctilucent clouds (NLC) are the highest clouds in the Earth's atmosphere, occurring around the mesopause at 80-90 km altitudes. They can be seen at night during summer time, from May until September. These night clouds are comprised of small ice particles that scatter sunlight, and thus NLC are readily seen against the dark twilight arc. The basic physics of NLC formation is well understood at present. However, questions concerning secular trends in NLC characteristics, the relationship between NLC and solar activity as well as global change effects, differences in the NLC statistical behavior between different observational sites, and many others are still unanswered. In particular, there is not sufficient information on the NLC distribution around the globe and on what their characteristic scales are. A little knowledge is obtained with model simulations and with NLC observations from space. However, there are certain natural limitations to observing NLC from space, and such observations are of a low spatial resolution (several hundreds of km). On the other hand, all available ground-based NLC observations are uncorrelated and are conducted by different techniques, and therefore there is little chance to get a comprehensive representation of the NLC distribution above the Earth's surface. That is why there is a need to develop a ground-based network for NLC observations using standard digital cameras. These cameras are intended to be combined in a common network controlled by a standard program, and should work continuously from May until September. It is desired to

  9. Enhancing spatial resolution of (18)F positron imaging with the Timepix detector by classification of primary fired pixels using support vector machine.

    PubMed

    Wang, Qian; Liu, Zhen; Ziegler, Sibylle I; Shi, Kuangyu

    2015-07-07

    Position-sensitive positron cameras using silicon pixel detectors have been applied for some preclinical and intraoperative clinical applications. However, the spatial resolution of a positron camera is limited by positron multiple scattering in the detector. An incident positron may fire a number of successive pixels on the imaging plane. It is still impossible to capture the primary fired pixel along a particle trajectory by hardware or to perceive the pixel firing sequence by direct observation. Here, we propose a novel data-driven method to improve the spatial resolution by classifying the primary pixels within the detector using support vector machine. A classification model is constructed by learning the features of positron trajectories based on Monte-Carlo simulations using Geant4. Topological and energy features of pixels fired by (18)F positrons were considered for the training and classification. After applying the classification model on measurements, the primary fired pixels of the positron tracks in the silicon detector were estimated. The method was tested and assessed for [(18)F]FDG imaging of an absorbing edge protocol and a leaf sample. The proposed method improved the spatial resolution from 154.6 ± 4.2 µm (energy weighted centroid approximation) to 132.3 ± 3.5 µm in the absorbing edge measurements. For the positron imaging of a leaf sample, the proposed method achieved lower root mean square error relative to phosphor plate imaging, and higher similarity with the reference optical image. The improvements of the preliminary results support further investigation of the proposed algorithm for the enhancement of positron imaging in clinical and preclinical applications.
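    The classification step can be sketched as a linear soft-margin SVM trained by Pegasos-style stochastic subgradient descent on the hinge loss. The two features and their distributions below are hypothetical and separable by construction; the paper's real topological and energy features come from Geant4 simulations:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Hypothetical per-pixel features: deposited energy (keV) and distance
# to the cluster centroid (pixels).
X = np.vstack([
    np.column_stack([rng.normal(15, 3, n), rng.normal(3.0, 0.5, n)]),  # primary
    np.column_stack([rng.normal(45, 5, n), rng.normal(1.0, 0.5, n)]),  # later pixels
])
y = np.concatenate([np.ones(n), -np.ones(n)])  # +1 = primary fired pixel

# Pegasos-style SGD on:  lam/2 ||w||^2 + mean(max(0, 1 - y_i w.x_i))
Xs = (X - X.mean(axis=0)) / X.std(axis=0)      # standardized features
w, lam = np.zeros(Xs.shape[1]), 1e-3
for t in range(1, 2001):
    i = rng.integers(len(y))
    eta = 1.0 / (lam * t)
    margin = y[i] * (Xs[i] @ w)
    w *= 1.0 - eta * lam                       # regularization shrink
    if margin < 1:                             # hinge active: push margin up
        w += eta * y[i] * Xs[i]

train_acc = float(np.mean(np.sign(Xs @ w) == y))
```

On such well-separated synthetic data the classifier reaches near-perfect training accuracy; the paper reports the effect of the learned model on measured resolution instead.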

  10. Obstacle Detection and Avoidance System Based on Monocular Camera and Size Expansion Algorithm for UAVs

    PubMed Central

    Al-Kaff, Abdulla; García, Fernando; Martín, David; De La Escalera, Arturo; Armingol, José María

    2017-01-01

    One of the most challenging problems in the domain of autonomous aerial vehicles is the design of a robust real-time obstacle detection and avoidance system. This problem is complex, especially for micro and small aerial vehicles, due to their Size, Weight and Power (SWaP) constraints. Therefore, using lightweight sensors (i.e., a digital camera) can be the best choice compared with other sensors such as laser or radar. For real-time applications, different works are based on stereo cameras in order to obtain a 3D model of the obstacles, or to estimate their depth. Instead, in this paper, a method that mimics the human behavior of detecting the collision state of approaching obstacles using a monocular camera is proposed. The key of the proposed algorithm is to analyze the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points from consecutive frames. During the Unmanned Aerial Vehicle (UAV) motion, the detection algorithm estimates the changes in the size of the area of the approaching obstacles. First, the method detects the feature points of the obstacles, then extracts the obstacles that have the probability of getting close toward the UAV. Secondly, by comparing the area ratio of the obstacle and the position of the UAV, the method decides if the detected obstacle may cause a collision. Finally, by estimating the obstacle 2D position in the image and combining with the tracked waypoints, the UAV performs the avoidance maneuver. The proposed algorithm was evaluated by performing real indoor and outdoor flights, and the obtained results show the accuracy of the proposed algorithm compared with other related works. PMID:28481277

  11. Obstacle Detection and Avoidance System Based on Monocular Camera and Size Expansion Algorithm for UAVs.

    PubMed

    Al-Kaff, Abdulla; García, Fernando; Martín, David; De La Escalera, Arturo; Armingol, José María

    2017-05-07

    One of the most challenging problems in the domain of autonomous aerial vehicles is the design of a robust real-time obstacle detection and avoidance system. This problem is complex, especially for micro and small aerial vehicles, due to their Size, Weight and Power (SWaP) constraints. Therefore, using lightweight sensors (i.e., a digital camera) can be the best choice compared with other sensors such as laser or radar. For real-time applications, different works are based on stereo cameras in order to obtain a 3D model of the obstacles, or to estimate their depth. Instead, in this paper, a method that mimics the human behavior of detecting the collision state of approaching obstacles using a monocular camera is proposed. The key of the proposed algorithm is to analyze the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points from consecutive frames. During the Unmanned Aerial Vehicle (UAV) motion, the detection algorithm estimates the changes in the size of the area of the approaching obstacles. First, the method detects the feature points of the obstacles, then extracts the obstacles that have the probability of getting close toward the UAV. Secondly, by comparing the area ratio of the obstacle and the position of the UAV, the method decides if the detected obstacle may cause a collision. Finally, by estimating the obstacle 2D position in the image and combining with the tracked waypoints, the UAV performs the avoidance maneuver. The proposed algorithm was evaluated by performing real indoor and outdoor flights, and the obtained results show the accuracy of the proposed algorithm compared with other related works.
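    The convex-hull expansion ratio at the heart of the algorithm can be sketched with SciPy (a minimal illustration; feature detection, matching, and the collision-decision thresholds are separate steps not shown here):

```python
import numpy as np
from scipy.spatial import ConvexHull

def expansion_ratio(points_prev, points_curr):
    """Ratio of convex-hull areas of matched feature points in two
    consecutive frames; a ratio well above 1 suggests an approaching
    obstacle.  (In 2-D, scipy's ConvexHull.volume is the enclosed area.)"""
    return ConvexHull(points_curr).volume / ConvexHull(points_prev).volume

# Feature points that appear 1.5x larger in the next frame: the hull
# area grows by 1.5^2 = 2.25.
pts = np.array([[0, 0], [4, 0], [4, 3], [0, 3], [2, 1.5]])
ratio = expansion_ratio(pts, pts * 1.5)
```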

  12. Single-Camera-Based Method for Step Length Symmetry Measurement in Unconstrained Elderly Home Monitoring.

    PubMed

    Cai, Xi; Han, Guang; Song, Xin; Wang, Jinkuan

    2017-01-16

    Single-camera-based gait monitoring is unobtrusive, inexpensive and easy to use for monitoring the daily gait of seniors in their homes. However, most studies require subjects to walk perpendicularly to the camera's optical axis or along specified routes, which limits application in elderly home monitoring. To build unconstrained monitoring environments, we propose a method to measure the step length symmetry ratio (a useful gait parameter representing gait symmetry without a significant relationship with age) from unconstrained straight walking using a single camera, without strict restrictions on walking directions or routes. According to projective geometry theory, we first develop a calculation formula of the step length ratio for the case of unconstrained straight-line walking. Then, to adapt to general cases, we propose to modify noncollinear footprints, and accordingly provide a general procedure for step length ratio extraction from unconstrained straight walking. Our method achieves a mean absolute percentage error (MAPE) of 1.9547% for 15 subjects' normal and abnormal side-view gaits, and also obtains satisfactory MAPEs for non-side-view gaits (2.4026% for 45°-view gaits and 3.9721% for 30°-view gaits). The performance is much better than that of a well-established monocular gait measurement system suitable only for side-view gaits, which has a MAPE of 3.5538%. Independently of walking directions, our method can accurately estimate step length ratios from unconstrained straight walking. This demonstrates that our method is applicable for elders' daily gait monitoring to provide valuable information for elderly health care, such as abnormal gait recognition, fall risk assessment, etc.
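    The MAPE figures quoted above follow the standard definition; a quick sketch with hypothetical symmetry-ratio values:

```python
import numpy as np

def mape(estimated, reference):
    """Mean absolute percentage error (%), the metric used above to
    compare estimated step length symmetry ratios against references."""
    estimated = np.asarray(estimated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(estimated - reference) / np.abs(reference))

# Hypothetical symmetry ratios from three walks vs. reference measurements.
err = mape([0.98, 1.03, 1.01], [1.00, 1.00, 1.00])  # 2.0 %
```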

  13. Positron emitting [68Ga]Ga-based imaging agents: chemistry and diversity.

    PubMed

    Velikyan, Irina

    2011-09-01

    The positron emission tomography (PET) field, and in particular the utilization of the (68)Ga radiometal, is gaining momentum. The development of new imaging agents for targeted, pre-targeted and non-targeted imaging and their clinical applications is accelerating worldwide. The pharmacopoeia monographs regarding the generator-produced (68)Ga radionuclide and (68)Ga-labeled somatostatin (SST) analogues are in progress. The number of commercial generators and automated synthesizers for (68)Ga-labeling chemistry is increasing constantly. Development of a molecular imaging agent is a complex process including identification of the biological target and the respective lead compound, synthesis of the imaging agent, its chemical characterization, and pre-clinical and clinical evaluation. The introduction of new radiopharmaceuticals and their accessibility are important factors determining the expansion of clinical nuclear medicine for early disease detection and personalized medicine with higher therapeutic efficiency. Further, the availability of the technology for GMP-compliant automated tracer production can facilitate the introduction of new radiopharmaceuticals due to the ability to conduct standardized and harmonized multi-center studies for regulatory approval. This review reflects on the current status of (68)Ga in the PET field, with a focus on achievements in chemistry as well as the diversity and potential of the resulting tracers.

  14. Fluorodeoxyglucose-based positron emission tomography imaging to monitor drug responses in solid tumors.

    PubMed

    Newbold, Andrea; Martin, Ben P; Cullinane, Carleen; Bots, Michael

    2014-10-01

    Positron emission tomography (PET) is used to monitor the uptake of the labeled glucose analogue fluorodeoxyglucose (¹⁸F-FDG) by solid tumor cells, a process generally believed to reflect viable tumor cell mass. The use of ¹⁸F-FDG exploits the high demand for glucose in tumor cells, and serves to document over time the response of a solid tumor to an inducer of apoptosis. The apoptosis inducer crizotinib is a small-molecule inhibitor of c-Met, a receptor tyrosine kinase that is often dysregulated in human tumors. In this protocol, we describe how to monitor the response of a solid tumor to crizotinib. Human gastric tumor cells (GTL-16 cells) are injected into recipient mice and, on tumor formation, the mice are treated with crizotinib. The tracer ¹⁸F-FDG is then injected into the mice at several time points, and its uptake is monitored using PET. Because ¹⁸F-FDG uptake varies widely among different tumor models, preliminary experiments should be performed with each new model to determine its basal level of ¹⁸F-FDG uptake. Verifying that the basal level of uptake is sufficiently above background levels will assure accurate quantitation. Because ¹⁸F-FDG uptake is not a direct measure of apoptosis, it is advisable to carry out an additional direct method to show the presence of apoptotic cells. © 2014 Cold Spring Harbor Laboratory Press.
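    ¹⁸F-FDG uptake in small-animal PET studies like this one is commonly normalized for injected dose and body size; a sketch of the generic standardized uptake value (SUV) formula is shown below. The numbers are illustrative, and the protocol itself may report a different normalization (e.g. %ID/g); the unit conversion assumes a tissue density of 1 g/mL:

```python
def suv(roi_activity_kbq_per_ml, injected_dose_mbq, body_weight_g):
    """Standardized uptake value:
    SUV = tissue activity concentration / (injected dose / body weight).
    Assumes 1 g/mL tissue density so kBq/mL and kBq/g are interchangeable."""
    return roi_activity_kbq_per_ml / (injected_dose_mbq * 1000.0 / body_weight_g)

# Hypothetical mouse: tumor ROI at 370 kBq/mL after a 7.4 MBq injection
# into a 20 g animal -> SUV of 1.0 (uptake equal to the whole-body average).
value = suv(370.0, 7.4, 20.0)
```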

  15. Radiation defects induced by helium implantation in gold-based alloys investigated by positron annihilation spectroscopy

    NASA Astrophysics Data System (ADS)

    Thome, T.; Grynszpan, R. I.

    2006-06-01

    The formation of gas bubbles in metallic materials may result in drastic degradation of in-service properties. In order to investigate this effect in high-density, medium-to-low melting temperature (T_M) alloys, positron annihilation spectroscopy measurements were performed on helium-implanted gold-silver solid solutions after isochronal annealing treatments. Three recovery stages are observed, attributed to the migration and elimination of defects not stabilized by helium atoms, helium bubble nucleation, and bubble growth. Similarities with other metals are found for the recovery stages involving bubble nucleation and growth processes. Lifetime measurements indicate that He implantation leads to the formation of small and over-pressurized bubbles that generate internal stresses in the material. A comprehensive picture is drawn for possible mechanisms of helium bubble evolution. Two values of activation energy (0.26 and 0.53 eV) are determined below and above 0.7 T_M, respectively, from the variation of the helium bubble radius during the bubble growth stage. The migration and coalescence mechanism, which accounts for these very low activation energies, controls the helium bubble growth.
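    Extracting an activation energy from a thermally activated growth stage is typically done via an Arrhenius plot: if the growth rate follows rate = A·exp(-Ea/(kB·T)), then ln(rate) vs. 1/T is a line of slope -Ea/kB. A sketch with synthetic data generated from the 0.26 eV value quoted above (the temperatures and prefactor are made up for illustration):

```python
import numpy as np

KB_EV = 8.617333e-5                 # Boltzmann constant in eV/K
Ea_true = 0.26                      # eV, one of the values reported above
T = np.array([500.0, 600.0, 700.0, 800.0])       # K (hypothetical)
rate = 1e3 * np.exp(-Ea_true / (KB_EV * T))      # Arrhenius growth rates

# Linear fit of ln(rate) against 1/T; slope = -Ea/kB.
slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)
Ea_fit = -slope * KB_EV             # recovers ~0.26 eV
```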

  16. Fluorodeoxyglucose-based positron emission tomography imaging to monitor drug responses in hematological tumors.

    PubMed

    Newbold, Andrea; Martin, Ben P; Cullinane, Carleen; Bots, Michael

    2014-10-01

    Positron emission tomography (PET) can be used to monitor the uptake of the labeled glucose analog fluorodeoxyglucose (¹⁸F-FDG), a process that is generally believed to reflect viable tumor cell mass. The use of ¹⁸F-FDG PET can be helpful in documenting over time the reduction in tumor mass volume in response to anticancer drug therapy in vivo. In this protocol, we describe how to monitor the response of murine B-cell lymphomas to an inducer of apoptosis, the anticancer drug vorinostat (a histone deacetylase inhibitor). B-cell lymphoma cells are injected into recipient mice and, on tumor formation, the mice are treated with vorinostat. The tracer ¹⁸F-FDG is then injected into the mice at several time points, and its uptake is monitored using PET. Because the uptake of ¹⁸F-FDG is not a direct measure of apoptosis, an additional direct method proving that apoptotic cells are present should also be performed.

  17. Image mosaic based on the camera self-calibration of combining two vanishing points and pure rotational motion

    NASA Astrophysics Data System (ADS)

    Duan, Shaoli; Zang, Huaping; Zhang, Xiaofang; Gong, Qiaoxia; Tian, Yongzhi; Wang, Junqiao; Liang, Erjun; Liu, Xiaomin; Zhao, Shujun

    2016-10-01

    Camera calibration is one of the indispensable processes for obtaining 3D depth information from 2D images in the field of computer vision. Camera self-calibration is more convenient and flexible, especially in applications with large depths of field, wide fields of view, and scene conversion, as well as on other occasions like zooming. In this paper, two self-calibration methods, based respectively on two vanishing points and on homography, are studied, finally realizing image mosaicking based on self-calibration of a camera purely rotating around its optical center. The geometric characteristics of the vanishing points formed by two groups of orthogonal parallel lines are applied to the self-calibration based on two vanishing points. By using the orthogonality of the vectors connecting the optical center with the vanishing points, constraint equations on the camera intrinsic parameters are established. By this method, four internal parameters of the camera can be solved from only four images taken from different viewpoints in a scene. Compared with the other self-calibration method based on homography, the method based on two vanishing points has a more convenient calibration process and a simpler algorithm. To check the quality of the self-calibration, we create a spherical mosaic of the images that were used for the self-calibration based on homography. Compared with the experimental results of two methods respectively based on a calibration plate and on the self-calibration method of the machine vision software Halcon, the practicability and effectiveness of the self-calibration methods respectively based on two vanishing points and homography are verified.
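    The core orthogonality constraint behind the two-vanishing-point method can be sketched for a simplified special case: square pixels, zero skew, and a known principal point, where one pair of orthogonal vanishing points determines the focal length via (v1 - c)·(v2 - c) + f² = 0. (The paper solves for four intrinsic parameters from four images; this one-image case recovers only f.)

```python
import numpy as np

def focal_from_vanishing_points(v1, v2, principal_point):
    """Focal length from the vanishing points of two orthogonal parallel-
    line sets, assuming square pixels, zero skew, and a known principal
    point c: the back-projected rays are orthogonal, which gives
    (v1 - c).(v2 - c) + f^2 = 0."""
    c = np.asarray(principal_point, dtype=float)
    d = np.dot(np.asarray(v1, float) - c, np.asarray(v2, float) - c)
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return np.sqrt(-d)

# Synthetic check: a camera with f = 800 px and principal point (320, 240);
# the orthogonal 3-D directions (1, 0, 1) and (1, 0, -1) project to these
# vanishing points.
f = focal_from_vanishing_points((1120.0, 240.0), (-480.0, 240.0), (320.0, 240.0))
```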

  18. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement

    PubMed Central

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous studies implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not provide it publicly. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of the user's head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest. PMID:27589768
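    The DOF that a candidate lens delivers over the expected range of head positions can be estimated from standard thin-lens formulas (hyperfocal distance H = f²/(N·c) + f, then near/far sharp limits). The lens parameters below are illustrative, not the ones chosen in the paper:

```python
def depth_of_field(f_mm, N, coc_mm, subject_dist_mm):
    """Near and far limits of acceptable sharpness for a lens of focal
    length f, aperture number N and circle of confusion coc, focused at
    subject_dist (standard thin-lens depth-of-field formulas)."""
    H = f_mm**2 / (N * coc_mm) + f_mm          # hyperfocal distance
    s = subject_dist_mm
    near = s * (H - f_mm) / (H + s - 2 * f_mm)
    far = s * (H - f_mm) / (H - s) if s < H else float("inf")
    return near, far

# Hypothetical gaze camera: 12 mm lens at f/1.4, 0.01 mm circle of
# confusion, user's eye region expected around 600 mm away.
near, far = depth_of_field(12.0, 1.4, 0.01, 600.0)
```

If the measured head-movement range exceeds the (near, far) interval, a smaller aperture or different focal length would be indicated.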

  19. Modelling Positron Interactions with Matter

    NASA Astrophysics Data System (ADS)

    Garcia, G.; Petrovic, Z.; White, R.; Buckman, S.

    2011-05-01

    In this work we link fundamental measurements of positron interactions with biomolecules, with the development of computer codes for positron transport and track structure calculations. We model positron transport in a medium from a knowledge of the fundamental scattering cross section for the atoms and molecules comprising the medium, combined with a transport analysis based on statistical mechanics and Monte-Carlo techniques. The accurate knowledge of the scattering is most important at low energies, a few tens of electron volts or less. The ultimate goal of this work is to do this in soft condensed matter, with a view to ultimately developing a dosimetry model for Positron Emission Tomography (PET). The high-energy positrons first emitted by a radionuclide in PET may well be described by standard formulas for energy loss of charged particles in matter, but it is incorrect to extrapolate these formulas to low energies. Likewise, using electron cross-sections to model positron transport at these low energies has been shown to be in serious error due to the effects of positronium formation. Work was supported by the Australian Research Council, the Serbian Government, and the Ministerio de Ciencia e Innovación, Spain.
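    The transport idea described above can be caricatured by a tiny 1-D Monte-Carlo random walk: exponentially distributed free paths between collisions and an energy loss per collision, tracked until the positron drops below a cutoff (where processes such as positronium formation dominate). The constant mean free path and fractional loss here are made up purely for illustration; real codes sample measured, energy-dependent cross sections:

```python
import numpy as np

rng = np.random.default_rng(42)

def track_positron(e0_ev=1000.0, cutoff_ev=10.0, mfp_um=0.1, loss_frac=0.1):
    """Return (net depth in um, final energy in eV) of one simulated positron."""
    energy, depth = e0_ev, 0.0
    while energy > cutoff_ev:
        # Exponential free path with a random 1-D direction (crude scattering),
        # then a fixed fractional energy loss at the collision.
        depth += rng.choice([-1.0, 1.0]) * rng.exponential(mfp_um)
        energy *= 1.0 - loss_frac
    return depth, energy

depths, finals = zip(*(track_positron() for _ in range(500)))
```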

  20. 3D reconstruction method based on time-division multiplexing using multiple depth cameras

    NASA Astrophysics Data System (ADS)

    Kang, Ji-Hoon; Lee, Dong-Su; Park, Min-Chul; Lee, Kwang-Hoon

    2014-06-01

    This article proposes a 3D reconstruction method using multiple depth cameras. Since a depth camera acquires depth information from a single viewpoint, it is inadequate for 3D reconstruction on its own. In order to solve this problem, we used multiple depth cameras. For 3D scene reconstruction, the depth information is acquired from different viewpoints with multiple depth cameras. However, when using multiple depth cameras, it is difficult to acquire accurate depth information because of interference among the cameras. To solve this problem, we propose a time-division multiplexing method in which the depth information is acquired from the different cameras sequentially. After acquiring the depth images, we extracted features using the Fast Point Feature Histogram (FPFH) descriptor. Then, we performed 3D registration with Sample Consensus Initial Alignment (SAC-IA). We reconstructed 3D human bodies with our system and measured body sizes to evaluate the accuracy of the 3D reconstruction.
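    The time-division multiplexing idea is simply that the cameras' active-illumination exposures never overlap in time. A minimal scheduling sketch with hypothetical timing values (the paper does not give its slot timings):

```python
def tdm_schedule(n_cameras, exposure_ms, n_rounds):
    """Time-division multiplexing slots: each camera is triggered in turn
    so that no two depth cameras project their IR patterns at the same
    time, avoiding the mutual interference described above.
    Returns (camera_id, start_ms, end_ms) tuples."""
    slots, t = [], 0.0
    for _ in range(n_rounds):
        for cam in range(n_cameras):
            slots.append((cam, t, t + exposure_ms))
            t += exposure_ms
    return slots

# Three cameras at 33 ms exposure each: one full round takes 99 ms.
slots = tdm_schedule(3, 33.0, 2)
```

The cost of the scheme is temporal: a full multi-view snapshot takes n_cameras x exposure, which matters for moving subjects.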

  1. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment

    PubMed Central

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-01-01

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during a landing process. The system mainly includes three novel parts: (1) a cooperative long-range optical imaging module based on an infrared camera array and near-infrared laser lamps; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flight demonstrate that our infrared camera array system has the unique ability to guide the UAV landing safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for UAV automatic accurate landing in Global Positioning System (GPS)-denied environments. PMID:27589755

  2. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    PubMed

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-08-30

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during a landing process. The system mainly includes three novel parts: (1) a cooperative long-range optical imaging module based on an infrared camera array and near-infrared laser lamps; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flight demonstrate that our infrared camera array system has the unique ability to guide the UAV landing safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for UAV automatic accurate landing in Global Positioning System (GPS)-denied environments.
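    Turning laser-marker detections from a calibrated camera array into a 3-D position requires a triangulation step; the abstract does not give the tracker's internals, so the standard linear (DLT) two-view triangulation is sketched here with made-up camera matrices:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from its pixel
    positions in two calibrated cameras.  P1, P2 are 3x4 projection
    matrices; x1, x2 are (u, v) pixel coordinates."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)          # null vector of A = homogeneous X
    X = vt[-1]
    return X[:3] / X[3]

# Two toy cameras 1 m apart looking down +Z, f = 1000 px (hypothetical).
K = np.array([[1000.0, 0, 0], [0, 1000.0, 0], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 10.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With more than two cameras, additional rows are simply appended to A, which is what makes a long-baseline array accurate at 1000 m ranges.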

  3. Early sinkhole detection using a drone-based thermal camera and image processing

    NASA Astrophysics Data System (ADS)

    Lee, Eun Ju; Shin, Sang Young; Ko, Byoung Chul; Chang, Chunho

    2016-09-01

Accurate advance detection of sinkholes, which now occur more frequently, is an important way of preventing human fatalities and property damage. Unlike naturally occurring sinkholes, human-induced ones in urban areas are typically due to groundwater disturbances and leaks of water and sewage caused by large-scale construction. Although many sinkhole detection methods have been developed, it is still difficult to predict sinkholes that form at depth. In addition, conventional methods are inappropriate for scanning a large area because of their high cost. Therefore, this paper uses a drone combined with a thermal far-infrared (FIR) camera to detect potential sinkholes over a large area based on computer vision and pattern classification techniques. To make a standard dataset, we dug eight holes of depths 0.5-2 m in increments of 0.5 m, each with a maximum width of 1 m, and filmed these using the drone-based FIR camera at a height of 50 m. We first detect candidate regions by analysing cold spots in the thermal images, based on the fact that a sinkhole typically has lower thermal energy than its background. These regions are then classified into sinkhole and non-sinkhole classes using a pattern classifier. In this study, we ensemble the classification results based on a light convolutional neural network (CNN) with those based on a Boosted Random Forest (BRF) using handcrafted features. We apply the proposed ensemble method successfully to sinkhole data of various sizes and depths in different environments, and show that the ensemble of the CNN and the BRF with handcrafted features detects sinkholes better than other classifiers or a standalone CNN.
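The fusion of the two classifiers can be sketched at the score level as a weighted average of their per-region probabilities; the weight and the example scores below are hypothetical stand-ins for the trained CNN and BRF outputs, not the paper's models:

```python
import numpy as np

def ensemble_scores(cnn_probs, brf_probs, w_cnn=0.5):
    """Score-level ensemble of two sinkhole classifiers.

    cnn_probs, brf_probs: arrays of P(sinkhole) per candidate region,
    one from the light CNN, one from the BRF on handcrafted features.
    Returns fused probabilities and hard labels (threshold 0.5).
    """
    fused = w_cnn * np.asarray(cnn_probs) + (1.0 - w_cnn) * np.asarray(brf_probs)
    return fused, fused >= 0.5
```

Averaging complementary classifiers typically reduces the variance of either one alone, which is consistent with the ensemble outperforming the standalone CNN here.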

  4. Fast time-of-flight camera based surface registration for radiotherapy patient positioning

    SciTech Connect

    Placht, Simon; Stancanello, Joseph; Schaller, Christian; Balda, Michael; Angelopoulou, Elli

    2012-01-15

Purpose: This work introduces a rigid registration framework for patient positioning in radiotherapy, based on real-time surface acquisition by a time-of-flight (ToF) camera. Dynamic properties of the system are also investigated for future gating/tracking strategies. Methods: A novel preregistration algorithm, based on translation and rotation-invariant features representing surface structures, was developed. Using these features, corresponding three-dimensional points were computed in order to determine initial registration parameters. These parameters became a robust input to an accelerated version of the iterative closest point (ICP) algorithm for the fine-tuning of the registration result. Distance calibration and Kalman filtering were used to compensate for ToF-camera dependent noise. Additionally, the advantage of using the feature based preregistration over an "ICP only" strategy was evaluated, as well as the robustness of the rigid-transformation-based method to deformation. Results: The proposed surface registration method was validated using phantom data. A mean target registration error (TRE) for translations and rotations of 1.62 ± 1.08 mm and 0.07° ± 0.05°, respectively, was achieved. There was a temporal delay of about 65 ms in the registration output, which can be seen as negligible considering the dynamics of biological systems. Feature based preregistration allowed for accurate and robust registrations even at very large initial displacements. Deformations affected the accuracy of the results, necessitating particular care in cases of deformed surfaces. Conclusions: The proposed solution is able to solve surface registration problems with an accuracy suitable for radiotherapy cases where external surfaces offer primary or complementary information to patient positioning. The system shows promising dynamic properties for its use in gating/tracking applications. The overall system is competitive with commonly-used surface
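The fine-tuning stage is an ICP variant, and every ICP iteration contains a closed-form rigid alignment of the currently corresponded point pairs. A sketch of that inner step (the standard SVD/Kabsch solution), not the authors' accelerated implementation:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding surface points.
    This is the closed-form step solved inside each ICP iteration.
    """
    c_src, c_dst = src.mean(0), dst.mean(0)
    # Cross-covariance of the centred point sets.
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

ICP alternates this solve with nearest-neighbour correspondence search; the feature-based preregistration matters because ICP only converges to the correct pose from a good enough initial guess.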

  5. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera

    PubMed Central

    Sim, Sungdae; Sock, Juil; Kwak, Kiho

    2016-01-01

LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm contributes in two ways to improving the calibration accuracy. First, we apply different weights to the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop-closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method performs better than the other approaches. PMID:27338416
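The objective being minimized is a weighted sum of distances between projected LiDAR features and their corresponding image lines. A minimal sketch of evaluating that cost for one candidate extrinsic hypothesis, assuming homogeneous line coefficients; the weighting scheme here is a placeholder for the paper's correspondence-accuracy weights:

```python
import numpy as np

def point_line_cost(points_cam, K, lines, weights):
    """Weighted sum of distances between projected LiDAR points and image lines.

    points_cam: (N, 3) LiDAR points already transformed by the candidate
                extrinsics (R, t) into the camera frame.
    K: 3x3 camera intrinsic matrix.
    lines: (N, 3) corresponding image lines (a, b, c) with ax + by + c = 0.
    weights: per-correspondence weights reflecting feature reliability.
    """
    uvw = (K @ points_cam.T).T
    uv1 = np.column_stack([uvw[:, 0] / uvw[:, 2],
                           uvw[:, 1] / uvw[:, 2],
                           np.ones(len(uvw))])
    # Point-to-line distance: |a*u + b*v + c| / sqrt(a^2 + b^2).
    num = np.abs(np.sum(lines * uv1, axis=1))
    dist = num / np.hypot(lines[:, 0], lines[:, 1])
    return float(np.sum(weights * dist))
```

An outer optimizer then searches over (R, t) for the minimum of this cost; the penalizing function mentioned in the abstract would replace the plain weighted sum to damp outlier terms.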

  6. Ground-based analysis of volcanic ash plumes using a new multispectral thermal infrared camera approach

    NASA Astrophysics Data System (ADS)

    Williams, D.; Ramsey, M. S.

    2015-12-01

Volcanic plumes are complex mixtures of mineral, lithic and glass fragments of varying size, together with multiple gas species. These plumes vary in size depending on a number of factors, including vent diameter, magma composition and the quantity of volatiles within a melt. However, determining the chemical and mineralogical properties of a volcanic plume immediately after an eruption is a great challenge. Thermal infrared (TIR) satellite remote sensing of these plumes is routinely used to calculate volcanic ash particle size variations and sulfur dioxide concentration. These analyses are commonly performed using high temporal, low spatial resolution satellites, which can only reveal large-scale trends. What is lacking is a high spatial resolution study specifically of the properties of the proximal plumes. Using the emissive properties of volcanic ash, a new method has been developed to determine a plume's particle size and petrology in spaceborne and ground-based TIR data. A multispectral adaptation of a FLIR TIR camera has been developed that simulates the TIR channels found on several current orbital instruments. Using this instrument, data of volcanic plumes from Fuego and Santiaguito volcanoes in Guatemala were recently obtained. Preliminary results indicate that the camera is capable of detecting silicate absorption features in the emissivity spectra over the TIR wavelength range, which can be linked to both mineral chemistry and particle size. It is hoped that this technique can be expanded to isolate different volcanic species within a plume, to validate the orbital data, and ultimately to use the results to better inform eruption dynamics modelling.

  7. Real-time 3D measurement based on structured light illumination considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, ShiLing

    2014-12-01

Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. In traditional 3-D measurement systems, where processing time is not a key factor, camera lens distortion correction is performed directly. For time-critical high-speed applications, however, the time-consuming correction algorithm cannot be run directly within the real-time process. To cope with this issue, here we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection that takes camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction, and a lookup table (LUT) method is introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled once the distortions are eliminated. Moreover, owing to the merit of the LUT, the 3-D reconstruction can be achieved at 92.34 frames per second.
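The pre-computed pixel mapping idea can be sketched with a one-coefficient radial distortion model and a nearest-neighbour lookup; both are simplifications of the paper's LUT method, and the parameter names are illustrative:

```python
import numpy as np

def build_undistort_lut(w, h, k1, fx, fy, cx, cy):
    """Precompute, once, the source pixel for every corrected pixel.

    Simple one-coefficient radial model x_d = x_u * (1 + k1 * r^2);
    the maps are stored and reused every frame, so the per-frame cost
    is a single table lookup per pixel.
    """
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xn, yn = (xs - cx) / fx, (ys - cy) / fy   # normalized coordinates
    r2 = xn * xn + yn * yn
    factor = 1.0 + k1 * r2
    map_x = xn * factor * fx + cx
    map_y = yn * factor * fy + cy
    return map_x, map_y

def undistort(img, map_x, map_y):
    """Per-frame correction: nearest-neighbour lookup through the LUT."""
    h, w = img.shape
    xi = np.clip(np.round(map_x).astype(int), 0, w - 1)
    yi = np.clip(np.round(map_y).astype(int), 0, h - 1)
    return img[yi, xi]
```

Moving the distortion arithmetic out of the frame loop is exactly what makes the 92.34 fps figure plausible: the expensive part runs once at startup.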

  8. Calibration and disparity maps for a depth camera based on a four-lens device

    NASA Astrophysics Data System (ADS)

    Riou, Cécile; Colicchio, Bruno; Lauffenburger, Jean Philippe; Haeberlé, Olivier; Cudel, Christophe

    2015-11-01

We propose a model of depth camera based on a four-lens device. This device is used for validating alternate approaches for calibrating multiview cameras and also for computing disparity or depth images. The calibration method arises from previous works, where principles of variable homography were extended for three-dimensional (3-D) measurement. Here, calibration is performed between two contiguous views obtained on the same image sensor. This leads us to propose a new method for simplifying calibration by using the properties of the variable homography. The second part addresses new principles for obtaining disparity images without any matching. A fast algorithm using contour propagation is proposed, requiring neither structured nor random pattern projection. These principles are proposed in the framework of quality control by vision, for inspection under natural illumination. Because scene photometry is preserved, other standard controls, such as caliper measurement, shape recognition, or barcode reading, can be performed conjointly with 3-D measurements. The approaches presented here are evaluated. First, we show that rapid calibration is relevant for devices mounted with multiple lenses. Second, synthetic and real experiments validate our method for computing depth images.

  9. Stereo camera-based intelligent UGV system for path planning and navigation

    NASA Astrophysics Data System (ADS)

    Lee, Jung-Suk; Ko, Jung-Hwan; Chungb, Dal-Do

    2006-08-01

In this paper, a new real-time, intelligent mobile robot system for path planning and navigation using a stereo camera embedded in a pan/tilt system is proposed. In the proposed system, the face area of a moving person is detected from a sequence of stereo image pairs using the YCbCr color model, and depth information is obtained from the disparity map computed from the left and right images captured by the pan/tilt-controlled stereo camera system. The distance between the mobile robot and the face of the moving person can then be calculated from the detected depth information. Based on the analysis of these data, three-dimensional objects can be detected. Finally, using these detected data, a 2-D spatial map is constructed for a visually guided robot that can plan paths, navigate around surrounding objects, and explore an indoor environment. Experiments on target tracking with 480 frames of sequential stereo images show that the error ratio between the calculated and measured values of the relative position is very low, 1.4% on average. The proposed target tracking system also achieves a high speed of 0.04 s/frame for target detection and 0.06 s/frame for target tracking.
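Recovering distance from the disparity map follows the usual rectified pinhole stereo relation Z = fB/d; a minimal helper (the focal length, baseline, and disparity in the example are illustrative, not this system's parameters):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a matched point from rectified stereo: Z = f * B / d.

    disparity_px: horizontal pixel offset of the match between views.
    focal_px: focal length in pixels; baseline_m: camera separation in metres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px
```

The inverse relation means depth resolution degrades quadratically with distance, which is why face-scale tracking works best at close indoor ranges.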

  10. A positioning system for forest diseases and pests based on GIS and PTZ camera

    NASA Astrophysics Data System (ADS)

    Wang, Z. B.; Wang, L. L.; Zhao, F. F.; Wang, C. B.

    2014-03-01

Forest diseases and pests cause enormous economic losses and ecological damage every year in China. The key to preventing and controlling them is to obtain accurate information in a timely manner. In order to improve the monitoring coverage rate and economize on manpower, a cooperative investigation model for forest diseases and pests is put forward. It is composed of a video positioning system and manual reconnaissance with mobile GIS embedded in a PDA. The video system is used to scan the disaster area and is particularly effective where trees are withered. Forest disease prevention and control workers can check the disaster area with the PDA system. To support this investigation model, we developed a positioning algorithm and a positioning system. The positioning algorithm is based on a DEM and a PTZ camera, and its accuracy is validated. The software consists of a 3D GIS subsystem, a 2D GIS subsystem, a video control subsystem and a disaster positioning subsystem. The 3D GIS subsystem makes positioning visual and easy to operate, and the 2D GIS subsystem can output disaster thematic maps. The video control subsystem can remotely change the pan/tilt/zoom of a digital camera to focus on a suspected area, and the disaster positioning subsystem implements the positioning algorithm. Practical application by forest departments has shown that the positioning system can observe forest diseases and pests effectively.
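A DEM-plus-PTZ positioning algorithm of this kind can be sketched as marching the camera's viewing ray, derived from the pan and tilt angles, across the terrain grid until it meets the surface. The stepping scheme, coordinate conventions, and parameter names below are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def locate_on_dem(cam_pos, pan_rad, tilt_rad, dem, cell_size,
                  step=1.0, max_range=5000.0):
    """March a viewing ray from a PTZ camera across a DEM and return the
    first ground intersection (x, y, z) in DEM coordinates, or None.

    cam_pos: (x, y, z) of the camera. dem: 2D array of terrain heights,
    dem[j, i] at (i*cell_size, j*cell_size). pan measured from the +x axis,
    tilt negative when looking down.
    """
    d = np.array([np.cos(tilt_rad) * np.cos(pan_rad),
                  np.cos(tilt_rad) * np.sin(pan_rad),
                  np.sin(tilt_rad)])
    p = np.array(cam_pos, dtype=float)
    for _ in range(int(max_range / step)):
        p = p + d * step
        i = int(round(p[0] / cell_size))
        j = int(round(p[1] / cell_size))
        if not (0 <= i < dem.shape[1] and 0 <= j < dem.shape[0]):
            return None  # ray left the DEM coverage
        if p[2] <= dem[j, i]:  # ray has reached the terrain
            return p[0], p[1], dem[j, i]
    return None
```

The step length trades positioning accuracy against computation; a production system would refine the hit point by interpolating between the last two samples.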

  11. The cooling control system for focal plane assembly of astronomical satellite camera based on TEC

    NASA Astrophysics Data System (ADS)

    He, Yuqing; Du, Yunfei; Gao, Wei; Li, Baopeng; Fan, Xuewu; Yang, Wengang

    2017-02-01

The dark current noise in the CCD of an astronomical observation camera seriously degrades its performance, and reducing the working temperature of the CCD can effectively suppress the dark current. By analyzing the relationship between the CCD chip temperature and the dark current noise, the optimum working temperature of the red-band CCD focal plane was identified as -75 °C. According to this refrigeration temperature, a cooling control system for the focal plane based on a thermoelectric cooler (TEC) was designed, with the requirement that it achieve high-precision temperature control of the target. In the cooling control system, an 80C32 microcontroller is used as the core processor, and an advanced PID control algorithm is adopted to control the temperature of the top end of the TEC. The bottom end of the TEC is set to a constant value, chosen according to the target temperature, to assist the upper TEC in controlling the temperature. The experimental results show that the cooling system satisfies the requirements of the focal plane of the astronomical observation camera: it reaches the working temperature of -75 °C with an accuracy of ±2 °C.
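The PID loop driving the top TEC stage can be sketched as a discrete controller; the gains, the first-order plant used in the test, and the setpoint handling are illustrative, not the authors' tuning:

```python
class PID:
    """Discrete PID controller of the kind used to drive the TEC current.

    update() returns the control effort for one sample period; the
    integral term removes steady-state error so the focal plane can
    settle on the setpoint despite the constant heat load.
    """
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, measured):
        err = self.setpoint - measured
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

In the test below the controller regulates a simple first-order thermal plant (ambient 20 °C, hypothetical time constant and actuator gain) down to the -75 °C setpoint.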

  12. Geolocating thermal binoculars based on a software defined camera core incorporating HOT MCT grown by MOVPE

    NASA Astrophysics Data System (ADS)

    Pillans, Luke; Harmer, Jack; Edwards, Tim; Richardson, Lee

    2016-05-01

Geolocation is the process of calculating a target position based on bearing and range relative to the known location of the observer. A high performance thermal imager with integrated geolocation functions is a powerful long range targeting device. Firefly is a software defined camera core incorporating a system-on-a-chip processor running the Android™ operating system. The processor has a range of industry standard serial interfaces which were used to interface to peripheral devices including a laser rangefinder and a digital magnetic compass. The core has built in Global Positioning System (GPS) which provides the third variable required for geolocation. The graphical capability of Firefly allowed flexibility in the design of the man-machine interface (MMI), so the finished system can give access to extensive functionality without appearing cumbersome or over-complicated to the user. This paper covers both the hardware and software design of the system, including how the camera core influenced the selection of peripheral hardware, and the MMI design process which incorporated user feedback at various stages.
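Geolocation as defined above combines the observer's GPS fix with a compass bearing and a laser range. A flat-earth sketch of that computation, adequate at the few-kilometre ranges of a handheld device but not Firefly's actual implementation:

```python
import math

def geolocate(lat_deg, lon_deg, alt_m, bearing_deg, elevation_deg, range_m):
    """Target position from an observer GPS fix, compass bearing, and range.

    Small-offset flat-earth approximation with a spherical Earth radius;
    bearing is measured clockwise from true north, elevation up from the
    horizontal. Returns (lat, lon, alt) of the target.
    """
    R_EARTH = 6371000.0
    b, e = math.radians(bearing_deg), math.radians(elevation_deg)
    horiz = range_m * math.cos(e)            # ground-plane distance
    north, east = horiz * math.cos(b), horiz * math.sin(b)
    dlat = math.degrees(north / R_EARTH)
    dlon = math.degrees(east / (R_EARTH * math.cos(math.radians(lat_deg))))
    return lat_deg + dlat, lon_deg + dlon, alt_m + range_m * math.sin(e)
```

The three sensor inputs map directly onto the three arguments after the observer position, which is why the abstract calls GPS "the third variable required for geolocation."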

  13. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera.

    PubMed

    Sim, Sungdae; Sock, Juil; Kwak, Kiho

    2016-06-22

LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm contributes in two ways to improving the calibration accuracy. First, we apply different weights to the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop-closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method performs better than the other approaches.

  14. Retinal oximetry based on nonsimultaneous image acquisition using a conventional fundus camera.

    PubMed

    Kim, Sun Kwon; Kim, Dong Myung; Suh, Min Hee; Kim, Martha; Kim, Hee Chan

    2011-08-01

To measure retinal arteriole and venule oxygen saturation (SO(2)) using a conventional fundus camera, retinal oximetry based on nonsimultaneous image acquisition was developed and evaluated. Two retinal images were sequentially acquired using a conventional fundus camera with two bandpass filters (568 nm: isosbestic; 600 nm: non-isosbestic wavelength), one after another, instead of the built-in green filter. The images were registered to compensate for the differences caused by eye movements during image acquisition. Retinal SO(2) was measured using two-wavelength oximetry. To evaluate the sensitivity of the proposed method, arteriole and venule SO(2) values before and after inhalation of 100% O(2) were compared in 11 healthy subjects. After inhalation of 100% O(2), SO(2) increased from 96.0 ± 6.0% to 98.8 ± 7.1% in the arterioles (p=0.002) and from 54.0 ± 8.0% to 66.7 ± 7.2% in the venules (p=0.005) (paired t-test, n=11). The reproducibility of the method was 2.6% and 5.2% in the arterioles and venules, respectively (average standard deviation of five measurements, n=11).
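Two-wavelength oximetry typically maps the ratio of vessel optical densities at the oxygen-sensitive and isosbestic wavelengths linearly to SO(2). A sketch of that computation; the calibration constants `a` and `b` are placeholders, since the paper's calibration is not reproduced here:

```python
import math

def oxygen_saturation(i_vessel_568, i_bg_568, i_vessel_600, i_bg_600,
                      a=1.28, b=-1.35):
    """Two-wavelength retinal oximetry from vessel/background intensities.

    OD = log10(I_background / I_vessel) at each wavelength; 568 nm is the
    isosbestic reference and 600 nm the oxygen-sensitive wavelength.
    SO2 is modelled as approximately linear in the optical density ratio
    ODR = OD600 / OD568; a and b must be calibrated against known samples.
    """
    od568 = math.log10(i_bg_568 / i_vessel_568)
    od600 = math.log10(i_bg_600 / i_vessel_600)
    odr = od600 / od568
    return a + b * odr
```

Because oxyhemoglobin absorbs less at 600 nm, a brighter vessel at 600 nm (smaller OD600) yields a higher estimated saturation, which the test checks.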

  15. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results.

    PubMed

    Benetazzo, Flavia; Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

    2014-09-01

Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning whether a change is due to the respiratory act or to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, respiratory rate measurements from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons seated in front of the camera. The first test aimed to choose a suitable sampling frequency. The second test compared the performance of the proposed system with the gold standard under ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm's performance under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative for monitoring the respiratory activity of a person without using invasive sensors.
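One simple way to realize the described measurement is to track the mean chest depth per frame and take the dominant frequency of the detrended signal as the breathing frequency; this is a simplification of the authors' pipeline, offered as a sketch:

```python
import numpy as np

def respiratory_rate(depth_signal, fs):
    """Breaths per minute from the mean chest-to-camera distance signal.

    depth_signal: 1D array, one mean chest depth per frame.
    fs: camera frame rate in Hz.
    The dominant spectral peak of the detrended signal inside a
    physiologically plausible band is taken as the respiratory frequency.
    """
    x = np.asarray(depth_signal, dtype=float)
    x = x - x.mean()                       # remove the static chest distance
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 1.0)  # roughly 6-60 breaths/min
    f_resp = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f_resp
```

Restricting the search band is also a crude way of rejecting the whole-body movements the abstract says must be discerned from breathing.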

  16. Bio-inspired motion detection in an FPGA-based smart camera module.

    PubMed

    Köhler, T; Röchter, F; Lindemann, J P; Möller, R

    2009-03-01

Flying insects, despite their relatively coarse vision and tiny nervous system, are capable of carrying out elegant and fast aerial manoeuvres. Studies of the fly visual system have shown that this is accomplished by the integration of signals from a large number of elementary motion detectors (EMDs) in just a few global flow detector cells. We developed an FPGA-based smart camera module with more than 10,000 single EMDs, which is closely modelled after insect motion-detection circuits with respect to overall architecture, resolution and inter-receptor spacing. Input to the EMD array is provided by a CMOS camera with a high frame rate. Designed as an adaptable solution for different engineering applications and as a testbed for biological models, the EMD detector type and parameters such as the EMD time constants, the motion-detection directions and the angle between correlated receptors are reconfigurable online. This allows flexible and simultaneous detection of complex motion fields such as translation, rotation and looming, so that various tasks, e.g., obstacle avoidance, height/distance control or speed regulation, can be performed by the same compact device.
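A single software EMD of the correlation (Reichardt) type can be sketched as below; the FPGA runs more than 10,000 of these in parallel, and the first-order low-pass form and time constant here are illustrative choices, not the module's configured parameters:

```python
import numpy as np

def emd_response(left, right, dt, tau):
    """Correlation-type elementary motion detector (Reichardt model).

    left, right: luminance time series from two neighbouring photoreceptors.
    Each signal is low-pass filtered (time constant tau) and multiplied
    with the undelayed signal of the opposite arm; the difference of the
    two mirror-symmetric half-detectors is direction selective.
    """
    alpha = dt / (tau + dt)        # first-order low-pass coefficient
    lp_l = lp_r = 0.0
    out = np.zeros(len(left))
    for i in range(len(left)):
        lp_l += alpha * (left[i] - lp_l)
        lp_r += alpha * (right[i] - lp_r)
        out[i] = lp_l * right[i] - lp_r * left[i]
    return out
```

For a drifting sinusoidal grating the time-averaged output is positive for one motion direction and negative for the other, which is the property the flow detector cells sum over the array.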

  17. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results

    PubMed Central

    Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

    2014-01-01

Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning whether a change is due to the respiratory act or to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, respiratory rate measurements from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons seated in front of the camera. The first test aimed to choose a suitable sampling frequency. The second test compared the performance of the proposed system with the gold standard under ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm's performance under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative for monitoring the respiratory activity of a person without using invasive sensors. PMID:26609383

  18. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. These data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity through neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
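The error metric described, the distance from known to digitized grid coordinates, can be sketched as follows; expressing it as a percentage of the grid diagonal is an assumption about the normalization, since the abstract does not state what the 8 percent is relative to:

```python
import numpy as np

def distortion_errors(known_pts, measured_pts, field_diag):
    """Per-point lens-distortion error as a percentage of the field diagonal.

    known_pts, measured_pts: (N, 2) true vs digitized grid coordinates.
    field_diag: diagonal length of the calibration grid, used to normalize
    the Euclidean point errors into percentages.
    """
    d = np.linalg.norm(np.asarray(measured_pts) - np.asarray(known_pts), axis=1)
    return 100.0 * d / field_diag
```

Plotting these percentages against radial position in the image would show the growth toward the lens edges that motivates avoiding the outermost regions.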

  19. Pedestrian mobile mapping system for indoor environments based on MEMS IMU and range camera

    NASA Astrophysics Data System (ADS)

    Haala, N.; Fritsch, D.; Peter, M.; Khosravani, A. M.

    2011-12-01

    This paper describes an approach for the modeling of building interiors based on a mobile device, which integrates modules for pedestrian navigation and low-cost 3D data collection. Personal navigation is realized by a foot mounted low cost MEMS IMU, while 3D data capture for subsequent indoor modeling uses a low cost range camera, which was originally developed for gaming applications. Both steps, navigation and modeling, are supported by additional information as provided from the automatic interpretation of evacuation plans. Such emergency plans are compulsory for public buildings in a number of countries. They consist of an approximate floor plan, the current position and escape routes. Additionally, semantic information like stairs, elevators or the floor number is available. After the user has captured an image of such a floor plan, this information is made explicit again by an automatic raster-to-vector-conversion. The resulting coarse indoor model then provides constraints at stairs or building walls, which restrict the potential movement of the user. This information is then used to support pedestrian navigation by eliminating drift effects of the used low-cost sensor system. The approximate indoor building model additionally provides a priori information during subsequent indoor modeling. Within this process, the low cost range camera Kinect is used for the collection of multiple 3D point clouds, which are aligned by a suitable matching step and then further analyzed to refine the coarse building model.

  20. The new SCOS-based EGSE of the EPIC flight-spare on-ground cameras

    NASA Astrophysics Data System (ADS)

    La Palombara, N.; Abbey, A.; Insinga, F.; Calderon-Riano, P.; Martin, J.; Palazzo, M.; Poletti, M.; Sembay, S.; Vallejo, J.

    2014-07-01

After almost 15 years since its launch, the instruments on board the XMM-Newton observatory continue to operate smoothly. However, since the mission was originally planned for 10 years, progressive ageing and/or failures of the on-board instruments can be expected. Dealing with them could require substantial changes in the operating software and the command & telemetry database, which must be tested with the on-ground flight-spare cameras. To this aim, the original Electrical Ground Support Equipment (EGSE) has been replaced with a new one based on SCOS2000, the same tool used by ESA for controlling the spacecraft. This was a demanding task, since it required both recovering the specialised knowledge regarding the original EGSE and adapting SCOS for a special use. Very recently this work was completed by fully replacing the EGSE of one of the two cameras, which is now ready to be used by ESA. Here we describe the scope and purpose of this activity, the problems faced during its execution, the adopted solutions, and the tests performed to demonstrate the effectiveness of the new EGSE.

  1. A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes

    PubMed Central

    Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-yung

    2016-01-01

Wilderness search and rescue entails performing a wide range of work in complex environments and large regions. Given the limited distribution of rescue resources across such large regions, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location, and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency. PMID:27792156

  2. A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes.

    PubMed

    Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-Yung

    2016-10-25

Wilderness search and rescue entails performing a wide range of work in complex environments and large regions. Given the limited distribution of rescue resources across such large regions, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location, and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency.

  3. A non-linear camera calibration with modified teaching-learning-based optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Buyang; Yang, Hua; Yang, Shuo

    2015-12-01

In this paper, we put forward a novel approach based on a hierarchical teaching-learning-based optimization (HTLBO) algorithm for nonlinear camera calibration. The algorithm simulates the teaching-learning interaction between teachers and learners in a classroom. Different from traditional calibration approaches, the proposed technique can find a near-optimal solution without accurate initial parameter estimation (only very loose parameter bounds are required). With the introduction of a cascade of teaching, the convergence speed is rapid and the global search ability is improved. Results from our study demonstrate the excellent performance of the proposed technique in terms of convergence, accuracy, and robustness. The HTLBO can also be used to solve many other complex non-linear calibration optimization problems owing to its good portability.
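A sketch of the basic TLBO iteration (teacher phase, then learner phase) applied to a bound-constrained objective; the paper's hierarchical cascade is omitted, and the population size, iteration count, and test objective are illustrative:

```python
import numpy as np

def tlbo(f, bounds, n_pop=20, n_iter=150, rng=None):
    """Minimal teaching-learning-based optimization (basic TLBO, not the
    paper's hierarchical HTLBO variant) minimizing f within bounds.

    bounds: (lo, hi) arrays. In the teacher phase, learners move toward
    the best learner and away from the class mean; in the learner phase,
    each learner moves toward a better random classmate.
    """
    rng = np.random.default_rng(rng)
    lo, hi = map(np.asarray, bounds)
    pop = rng.uniform(lo, hi, size=(n_pop, len(lo)))
    cost = np.apply_along_axis(f, 1, pop)
    for _ in range(n_iter):
        teacher = pop[np.argmin(cost)]
        mean = pop.mean(axis=0)
        for i in range(n_pop):
            tf = rng.integers(1, 3)  # teaching factor in {1, 2}
            new = np.clip(pop[i] + rng.random(len(lo)) * (teacher - tf * mean),
                          lo, hi)
            c = f(new)
            if c < cost[i]:
                pop[i], cost[i] = new, c
            j = rng.integers(n_pop)
            if j != i:
                step = pop[i] - pop[j] if cost[i] < cost[j] else pop[j] - pop[i]
                new = np.clip(pop[i] + rng.random(len(lo)) * step, lo, hi)
                c = f(new)
                if c < cost[i]:
                    pop[i], cost[i] = new, c
    return pop[np.argmin(cost)], cost.min()
```

Because only loose bounds are needed, the same routine could wrap a reprojection-error objective over the camera parameters, which is the calibration use described in the abstract.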

  4. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, which are also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
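
    The block-match component credited with the 30% reduction in motion estimation time can be sketched as an exhaustive sum-of-absolute-differences (SAD) search; the CS recovery pipeline itself is omitted, and the frame size, block size, and search radius below are illustrative:

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Exhaustive block matching: for each block in `cur`, find the
    displacement (within +/- search pixels) that best matches `ref`
    by sum of absolute differences (SAD)."""
    h, w = cur.shape
    motion = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            blk = cur[by:by + block, bx:bx + block]
            best, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate window falls outside the frame
                    sad = np.abs(ref[y:y + block, x:x + block] - blk).sum()
                    if sad < best:
                        best, best_dv = sad, (dy, dx)
            motion[by // block, bx // block] = best_dv
    return motion

# A frame whose content moved by (2, 3) pixels should yield motion
# vectors near (2, 3) for interior blocks.
rng = np.random.default_rng(1)
ref = rng.random((32, 32))
cur = np.roll(ref, shift=(-2, -3), axis=(0, 1))  # cur[y, x] == ref[y+2, x+3]
mv = block_match(ref, cur)
```

    Multi-frame motion estimation then repeats this search against several reference frames and keeps the best match per block.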

  5. Traffic camera system development

    NASA Astrophysics Data System (ADS)

    Hori, Toshi

    1997-04-01

    The intelligent transportation system has generated a strong need for the development of intelligent camera systems to meet the requirements of sophisticated applications, such as electronic toll collection (ETC), traffic violation detection and automatic parking lot control. In order to achieve the highest levels of accuracy in detection, these cameras must have high speed electronic shutters, high resolution, high frame rate, and communication capabilities. A progressive scan interline transfer CCD camera, with its high speed electronic shutter and resolution capabilities, provides the basic functions to meet the requirements of a traffic camera system. Unlike most industrial video imaging applications, traffic cameras must deal with harsh environmental conditions and an extremely wide range of lighting conditions. Optical character recognition is a critical function of a modern traffic camera system, with detection and accuracy heavily dependent on the camera function. In order to operate under demanding conditions, communication and functional optimization is implemented to control cameras from a roadside computer. The camera operates with a shutter speed faster than 1/2000 sec. to capture highway traffic both day and night. Consequently, camera gain, pedestal level, shutter speed and gamma functions are controlled by a look-up table containing various parameters based on environmental conditions, particularly lighting. Lighting conditions are studied carefully to focus only on the critical license plate surface. A unique light sensor permits accurate reading under a variety of conditions, such as a sunny day, evening, twilight, storms, etc. These camera systems are being deployed successfully in major ETC projects throughout the world.

  6. Comparison of Positron Emission Tomography Quantification Using Magnetic Resonance- and Computed Tomography-Based Attenuation Correction in Physiological Tissues and Lesions: A Whole-Body Positron Emission Tomography/Magnetic Resonance Study in 66 Patients.

    PubMed

    Seith, Ferdinand; Gatidis, Sergios; Schmidt, Holger; Bezrukov, Ilja; la Fougère, Christian; Nikolaou, Konstantin; Pfannenberg, Christina; Schwenzer, Nina

    2016-01-01

    Attenuation correction (AC) in fully integrated positron emission tomography (PET)/magnetic resonance (MR) systems plays a key role for the quantification of tracer uptake. The aim of this prospective study was to assess the accuracy of standardized uptake value (SUV) quantification using MR-based AC in direct comparison with computed tomography (CT)-based AC of the same PET data set on a large patient population. Sixty-six patients (22 female; mean [SD], 61 [11] years) were examined sequentially by means of combined PET/CT and PET/MR (11C-choline, 18F-FDG, or 68Ga-DOTATATE). Positron emission tomography images from PET/MR examinations were corrected with MR-derived AC based on tissue segmentation (PET(MR)). The same PET data were corrected using CT-based attenuation maps (μ-maps) derived from PET/CT after nonrigid registration of the CT to the MR-based μ-map (PET(MRCT)). Positron emission tomography SUVs were quantified placing regions of interest or volumes of interest in 6 different body regions as well as PET-avid lesions, respectively. The relative differences of quantitative PET values when using MR-based AC versus CT-based AC varied depending on the organs and body regions assessed. In detail, the mean (SD) relative differences of PET SUVs were as follows: -7.8% (11.5%), blood pool; -3.6% (5.8%), spleen; -4.4% (5.6%)/-4.1% (6.2%), liver; -0.6% (5.0%), muscle; -1.3% (6.3%), fat; -40.0% (18.7%), bone; 1.6% (4.4%), liver lesions; -6.2% (6.8%), bone lesions; and -1.9% (6.2%), soft tissue lesions. In 10 liver lesions, distinct overestimations greater than 5% were found (up to 10%). In addition, overestimations were found in 2 bone lesions and 1 soft tissue lesion adjacent to the lung (up to 28.0%). Results obtained using different PET tracers show that MR-based AC is accurate in most tissue types, with SUV deviations generally of less than 10%. In bone, however, underestimations can be pronounced, potentially leading to inaccurate SUV quantifications.
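
    The per-region deviations reported above are relative differences between the MR-corrected and CT-corrected SUVs of the same PET data; a minimal sketch of that comparison (the -40% bone value used in the example is the reported mean bias, not a property of the formula):

```python
def suv_relative_difference(suv_mr, suv_ct):
    """Relative SUV deviation (%) of MR-based attenuation correction
    versus the CT-based reference on the same PET data."""
    return 100.0 * (suv_mr - suv_ct) / suv_ct

# E.g. a bone region reading SUV 3.0 with MR-based AC but 5.0 with
# CT-based AC corresponds to a -40% deviation.
dev = suv_relative_difference(3.0, 5.0)
```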

  7. Camera Optics.

    ERIC Educational Resources Information Center

    Ruiz, Michael J.

    1982-01-01

    The camera presents an excellent way to illustrate principles of geometrical optics. Basic camera optics of the single-lens reflex camera are discussed, including interchangeable lenses and accessories available to most owners. Several experiments are described and results compared with theoretical predictions or manufacturer specifications.…

  9. Fluorodeoxyglucose-positron emission tomography in carcinoma nasopharynx: Can we predict outcomes and tailor therapy based on postradiotherapy fluorodeoxyglucose-positron emission tomography?

    PubMed Central

    Laskar, Sarbani Ghosh; Baijal, Gunjan; Rangarajan, Venkatesh; Purandare, Nilendu; Sengar, Manju; Shah, Sneha; Gupta, Tejpal; Budrukkar, Ashwini; Murthy, Vedang; Pai, Prathamesh S.; D’Cruz, A. K.; Agarwal, J. P.

    2016-01-01

    Background: Positron emission tomography-computed tomography (PET-CT) is an emerging modality for staging and response evaluation in carcinoma nasopharynx. This study was conducted to evaluate the impact of PET-CT in assessing response and outcomes in carcinoma nasopharynx. Materials and Methods: Forty-five patients of nonmetastatic carcinoma nasopharynx who underwent PET-CT for response evaluation at 10-12 weeks posttherapy between 2004 and 2009 were evaluated. Patients were classified as responders (Group A) if there was a complete response on PET-CT or as nonresponders (Group B) if there was any uptake above the background activity. Data regarding demographics, treatment, and outcomes were collected from their records and compared across the Groups A and B. Results: The median age was 41 years. 42 out of 45 (93.3%) patients had WHO Grade 2B disease (undifferentiated squamous carcinoma). 24.4%, 31.1%, 15.6%, and 28.8% of patients were in American Joint Committee on Cancer stage IIb, III, IVa, and IVb, respectively. All patients were treated with neoadjuvant chemotherapy followed by concomitant chemoradiotherapy. Of the 45 patients, 28 (62.2%) were classified as responders, whereas 17 (37.8%) were classified as nonresponders. There was no significant difference in the age, sex, WHO grade, and stage distribution between the groups. Compliance to treatment was comparable across both groups. The median follow-up was 25.3 months (759 days). The disease-free survival (DFS) of the group was 57.3% at 3 years. The DFS at 3 years was 87.3% and 19.7% for Group A and B, respectively (log-rank test, P < 0.001). Univariate and multivariate analysis revealed group to be the only significant factor predicting DFS (P = 0.002 and P < 0.001, respectively). In Group B, the most common site of disease failure was distant (9 patients, 53%). Conclusion: PET-CT can be used to evaluate response and as a tool to identify patients at higher risk of distant failure. Further, this could be exploited to identify

  10. Ventilation/Perfusion Positron Emission Tomography—Based Assessment of Radiation Injury to Lung

    SciTech Connect

    Siva, Shankar; Hardcastle, Nicholas; Kron, Tomas; Bressel, Mathias; Callahan, Jason; MacManus, Michael P.; Shaw, Mark; Plumridge, Nikki; Hicks, Rodney J.; Steinfort, Daniel; Ball, David L.; Hofman, Michael S.

    2015-10-01

    Purpose: To investigate ⁶⁸Ga-ventilation/perfusion (V/Q) positron emission tomography (PET)/computed tomography (CT) as a novel imaging modality for assessment of perfusion, ventilation, and lung density changes in the context of radiation therapy (RT). Methods and Materials: In a prospective clinical trial, 20 patients underwent 4-dimensional (4D)-V/Q PET/CT before, midway through, and 3 months after definitive lung RT. Eligible patients were prescribed 60 Gy in 30 fractions with or without concurrent chemotherapy. Functional images were registered to the RT planning 4D-CT, and isodose volumes were averaged into 10-Gy bins. Within each dose bin, relative loss in standardized uptake value (SUV) was recorded for ventilation and perfusion, and loss in air-filled fraction was recorded to assess RT-induced lung fibrosis. A dose-effect relationship was described using both linear and 2-parameter logistic fit models, and goodness of fit was assessed with Akaike Information Criterion (AIC). Results: A total of 179 imaging datasets were available for analysis (1 scan was unrecoverable). An almost perfectly linear negative dose-response relationship was observed for perfusion and air-filled fraction (r²=0.99, P<.01), with ventilation strongly negatively linear (r²=0.95, P<.01). Logistic models did not provide a better fit as evaluated by AIC. Perfusion, ventilation, and the air-filled fraction decreased 0.75 ± 0.03%, 0.71 ± 0.06%, and 0.49 ± 0.02%/Gy, respectively. Within high-dose regions, higher baseline perfusion SUV was associated with greater rate of loss. At 50 Gy and 60 Gy, the rate of loss was 1.35% (P=.07) and 1.73% (P=.05) per SUV, respectively. Of the 8/20 patients with peritumoral reperfusion/reventilation during treatment, 7 did not sustain this effect after treatment. Conclusions: Radiation-induced regional lung functional deficits occur in a dose-dependent manner and can be estimated by simple linear models with 4D-V/Q PET
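
    The linear dose-effect modelling and AIC-based model comparison described above can be sketched on synthetic data as follows; the slope and noise level are illustrative, not the study's values:

```python
import numpy as np

def fit_linear(dose, loss):
    """Least-squares line loss = a*dose + b; returns (a, b) and residual SS."""
    A = np.column_stack([dose, np.ones_like(dose)])
    params, *_ = np.linalg.lstsq(A, loss, rcond=None)
    rss = np.sum((loss - A @ params) ** 2)
    return params, rss

def aic(rss, n, k):
    """Akaike Information Criterion for a Gaussian-error least-squares fit
    with n data points and k free parameters."""
    return n * np.log(rss / n) + 2 * k

# Synthetic perfusion loss of ~0.75 %/Gy across 10-Gy dose bins.
dose = np.arange(5.0, 65.0, 10.0)          # bin centres: 5, 15, ..., 55 Gy
rng = np.random.default_rng(0)
loss = 0.75 * dose + rng.normal(0, 0.5, dose.size)
params, rss = fit_linear(dose, loss)
slope, intercept = params
aic_linear = aic(rss, dose.size, k=2)
```

    A 2-parameter logistic curve would be fitted to the same bins and the model with the lower AIC preferred; here the linear model suffices, as the study found.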

  11. Cross-ratio-based line scan camera calibration using a planar pattern

    NASA Astrophysics Data System (ADS)

    Li, Dongdong; Wen, Gongjian; Qiu, Shaohua

    2016-01-01

    A flexible new technique is proposed to calibrate the geometric model of line scan cameras. In this technique, the line scan camera is rigidly coupled to a calibrated frame camera to establish a pair of stereo cameras. The linear displacements and rotation angles between the two cameras are fixed but unknown. This technique only requires the pair of stereo cameras to observe a specially designed planar pattern shown at a few (at least two) different orientations. At each orientation, a stereo pair is obtained including a linear array image and a frame image. Radial distortion of the line scan camera is modeled. The calibration scheme includes two stages. First, point correspondences are established from the pattern geometry and the projective invariance of cross-ratio. Second, with a two-step calibration procedure, the intrinsic parameters of the line scan camera are recovered from several stereo pairs together with the rigid transform parameters between the pair of stereo cameras. Both computer simulation and real data experiments are conducted to test the precision and robustness of the calibration algorithm, and very good calibration results have been obtained. Compared with classical techniques which use three-dimensional calibration objects or controllable moving platforms, our technique is affordable and flexible in close-range photogrammetric applications.
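
    The projective invariant that the first calibration stage exploits is the cross-ratio of four collinear points, which any 1-D homography (and hence any camera projection of the line) preserves; a minimal numerical check with arbitrary homography coefficients:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio ((c-a)/(c-b)) / ((d-a)/(d-b)) of four collinear
    points given by their 1-D coordinates along the line."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

def project(x, h=(2.0, 1.0, 0.5, 3.0)):
    """A 1-D projective map x -> (h0*x + h1) / (h2*x + h3); the
    coefficients are arbitrary but non-degenerate (h0*h3 != h1*h2)."""
    h0, h1, h2, h3 = h
    return (h0 * x + h1) / (h2 * x + h3)

pts = [0.0, 1.0, 2.0, 4.0]
cr_before = cross_ratio(*pts)
cr_after = cross_ratio(*(project(x) for x in pts))
```

    Because the cross-ratio survives projection, known point spacings on the planar pattern identify their images in the linear array, which is how the point correspondences are established.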

  12. Automatic camera-based identification and 3-D reconstruction of electrode positions in electrocardiographic imaging.

    PubMed

    Schulze, Walther H W; Mackens, Patrick; Potyagaylo, Danila; Rhode, Kawal; Tülümen, Erol; Schimpf, Rainer; Papavassiliu, Theano; Borggrefe, Martin; Dössel, Olaf

    2014-12-01

    Electrocardiographic imaging (ECG imaging) is a method to depict electrophysiological processes in the heart. It is an emerging technology with the potential of making the therapy of cardiac arrhythmia less invasive, less expensive, and more precise. A major challenge for integrating the method into clinical workflow is the seamless and correct identification and localization of electrodes on the thorax and their assignment to recorded channels. This work proposes a camera-based system, which can localize all electrode positions at once and to an accuracy of approximately 1 ± 1 mm. A system for automatic identification of individual electrodes is implemented that overcomes the need for manual annotation. For this purpose, a system of markers is suggested, which facilitates a precise localization to subpixel accuracy and robust identification using an error-correcting code. The accuracy of the presented system in identifying and localizing electrodes is validated in a phantom study. Its overall capability is demonstrated in a clinical scenario.

  13. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-27

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, access control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems.

  14. Noctilucent clouds: modern ground-based photographic observations by a digital camera network.

    PubMed

    Dubietis, Audrius; Dalin, Peter; Balčiūnas, Ričardas; Černis, Kazimieras; Pertsev, Nikolay; Sukhodoev, Vladimir; Perminov, Vladimir; Zalcik, Mark; Zadorozhny, Alexander; Connors, Martin; Schofield, Ian; McEwan, Tom; McEachran, Iain; Frandsen, Soeren; Hansen, Ole; Andersen, Holger; Grønne, Jesper; Melnikov, Dmitry; Manevich, Alexander; Romejko, Vitaly

    2011-10-01

    Noctilucent, or "night-shining," clouds (NLCs) are a spectacular optical nighttime phenomenon that is very often neglected in the context of atmospheric optics. This paper gives a brief overview of current understanding of NLCs by providing a simple physical picture of their formation, relevant observational characteristics, and scientific challenges of NLC research. Modern ground-based photographic NLC observations, carried out in the framework of automated digital camera networks around the globe, are outlined. In particular, the obtained results refer to studies of single quasi-stationary waves in the NLC field. These waves exhibit specific propagation properties (high localization, robustness, and long lifetime) that are the essential requisites of solitary waves.

  15. Indoor Scene Point Cloud Registration Algorithm Based on RGB-D Camera Calibration

    PubMed Central

    Huang, Chih-Hung

    2017-01-01

    With the increasing popularity of RGB-depth (RGB-D) sensor, research on the use of RGB-D sensors to reconstruct three-dimensional (3D) indoor scenes has gained more and more attention. In this paper, an automatic point cloud registration algorithm is proposed to efficiently handle the task of 3D indoor scene reconstruction using pan-tilt platforms on a fixed position. The proposed algorithm aims to align multiple point clouds using extrinsic parameters of the RGB-D camera obtained from every preset pan-tilt control point. A computationally efficient global registration method is proposed based on transformation matrices formed by the offline calibrated extrinsic parameters. Then, a local registration method, which is an optional operation in the proposed algorithm, is employed to refine the preliminary alignment result. Experimental results validate the quality and computational efficiency of the proposed point cloud alignment algorithm by comparing it with two state-of-the-art methods. PMID:28809787
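
    The global registration step, mapping each cloud into a common frame using the offline-calibrated extrinsic transform of the pan-tilt control point it was captured from before any optional local refinement, can be sketched as follows (the points and transforms are synthetic):

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def register_clouds(clouds, extrinsics):
    """Map every cloud (N x 3, in its own camera frame) into the common
    frame via its calibrated camera-to-world transform, then merge."""
    merged = []
    for pts, T in zip(clouds, extrinsics):
        homog = np.hstack([pts, np.ones((len(pts), 1))])
        merged.append((homog @ T.T)[:, :3])
    return np.vstack(merged)

# Two views of the same points; the second camera is rotated 90 degrees
# about z and translated, so its extrinsics undo that pose.
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
world_pts = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 1.0]])
t = np.array([0.5, -1.0, 0.0])
view2 = (world_pts - t) @ Rz          # the points expressed in camera-2's frame
T1 = make_transform(np.eye(3), np.zeros(3))
T2 = make_transform(Rz, t)
merged = register_clouds([world_pts, view2], [T1, T2])
```

    With accurate extrinsics the two halves of `merged` coincide, which is why this purely matrix-based alignment is computationally cheap compared with iterative registration.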

  16. Design and fabrication of MEMS-based thermally-actuated image stabilizer for cell phone camera

    NASA Astrophysics Data System (ADS)

    Lin, Chun-Ying; Chiou, Jin-Chern

    2012-11-01

    A micro-electro-mechanical system (MEMS)-based image stabilizer is proposed to counteract shaking in cell phone cameras. The proposed stabilizer (dimensions, 8.8 × 8.8 × 0.2 mm³) includes a two-axis decoupling XY stage and has sufficient strength to suspend the image sensor (IS) used for the anti-shaking function. The XY stage is designed to send electrical signals from the suspended IS by using eight signal springs and 24 signal outputs. The maximum actuating distance of the stage is larger than 25 μm, which is sufficient to resolve the shaking problem. Accordingly, the applied voltage for the 25 μm moving distance is lower than 20 V; the dynamic resonant frequency of the actuating device is 4485 Hz, and the rising time is 21 ms.

  17. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology

    PubMed Central

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-01-01

    Manual surface defect detection and dimension measurement of automotive bevel gears are costly, inefficient, slow, and inaccurate. In order to solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM) and Fast Rotation-Position (FRP) are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity or repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40–50 μm. One inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production. PMID:27571078
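
    The Neighborhood Average Difference (NAD) detector is named but not specified in this record; one plausible reading, flagging pixels whose value deviates strongly from the mean of their local neighborhood, can be sketched as (window size and threshold are assumed values):

```python
import numpy as np

def neighborhood_average_difference(img, win=3, thresh=40.0):
    """Hedged sketch of an NAD-style defect detector: flag pixels whose
    value differs from the mean of their win x win neighborhood by more
    than `thresh`. This is an illustrative interpretation, not the
    paper's exact formulation."""
    img = img.astype(float)
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    # Neighborhood mean via a shifted-window sum.
    means = np.zeros_like(img)
    for dy in range(win):
        for dx in range(win):
            means += padded[dy:dy + h, dx:dx + w]
    means /= win * win
    return np.abs(img - means) > thresh

# A uniform surface with one bright "defect" pixel.
surface = np.full((10, 10), 100.0)
surface[4, 6] = 220.0
mask = neighborhood_average_difference(surface)
```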

  18. Portable profilometer based on low-coherence interferometry and smart pixel camera

    NASA Astrophysics Data System (ADS)

    Salbut, Leszek; Pakuła, Anna; Tomczewski, Sławomir; Styk, Adam

    2010-09-01

    Although low coherence interferometers are commercially available (e.g., white light interferometers), they are generally quite bulky, expensive, and offer limited flexibility. In this paper a new portable profilometer based on low coherence interferometry is presented. In the device, a white light diode with a controlled spectrum shape is used to increase the zero-order fringe contrast, which allows for faster and more reliable localization of that fringe. For image analysis, a special type of CMOS matrix (called a smart pixel camera), synchronized with the reference mirror transducer, is applied. Because the fringe contrast analysis is realized in hardware, independently in each pixel, the measurement time decreases significantly. High-speed processing together with a compact design allows the profilometer to be used as a portable device for both indoor and outdoor measurements. The capabilities of the designed profilometer are well illustrated by a few application examples.

  19. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, access control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems. PMID:26828487

  20. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology.

    PubMed

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-08-25

    Manual surface defect detection and dimension measurement of automotive bevel gears are costly, inefficient, slow, and inaccurate. In order to solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM) and Fast Rotation-Position (FRP) are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity or repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40-50 μm. One inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production.

  1. Estimating the spatial position of marine mammals based on digital camera recordings

    PubMed Central

    Hoekendijk, Jeroen P A; de Vries, Jurre; van der Bolt, Krissy; Greinert, Jens; Brasseur, Sophie; Camphuysen, Kees C J; Aarts, Geert

    2015-01-01

    Estimating the spatial position of organisms is essential to quantify interactions between the organism and the characteristics of its surroundings, for example, predator–prey interactions, habitat selection, and social associations. Because marine mammals spend most of their time under water and may appear at the surface only briefly, determining their exact geographic location can be challenging. Here, we developed a photogrammetric method to accurately estimate the spatial position of marine mammals or birds at the sea surface. Digital recordings containing landscape features with known geographic coordinates can be used to estimate the distance and bearing of each sighting relative to the observation point. The method can correct for frame rotation, estimates pixel size based on the reference points, and can be applied to scenarios with and without a visible horizon. A set of R functions was written to process the images and obtain accurate geographic coordinates for each sighting. The method is applied to estimate the spatiotemporal fine-scale distribution of harbour porpoises in a tidal inlet. Video recordings of harbour porpoises were made from land, using a standard digital single-lens reflex (DSLR) camera, positioned at a height of 9.59 m above mean sea level. Porpoises were detected up to a distance of ∼3136 m (mean 596 m), with a mean location error of 12 m. The method presented here allows for multiple detections of different individuals within a single video frame and for tracking movements of individuals based on repeated sightings. In comparison with traditional methods, this method only requires a digital camera to provide accurate location estimates. It especially has great potential in regions with ample data on local (a)biotic conditions, to help resolve functional mechanisms underlying habitat selection and other behaviors in marine mammals in coastal areas. PMID:25691982
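
    The basic ranging geometry behind such shore-based measurements can be sketched for the flat-sea case: a sighting that appears a known angular amount below the horizon, seen from camera height h, lies at range roughly h/tan(dip). The per-pixel angular scale below is an assumed illustrative value, and the full method additionally uses reference landmarks, corrects for frame rotation, and handles frames without a visible horizon:

```python
import math

def distance_from_dip(cam_height_m, pixels_below_horizon, rad_per_pixel):
    """Flat-sea ranging sketch: a sighting `pixels_below_horizon` pixels
    under the horizon line is depressed by angle dip = pixels * rad_per_pixel,
    giving range d = h / tan(dip). Earth curvature and refraction are
    ignored here; the paper's reference-landmark approach can absorb them."""
    dip = pixels_below_horizon * rad_per_pixel
    return cam_height_m / math.tan(dip)

# Camera at 9.59 m; assume ~0.1 mrad per pixel (illustrative, not from the paper).
d_near = distance_from_dip(9.59, pixels_below_horizon=300, rad_per_pixel=1e-4)
d_far = distance_from_dip(9.59, pixels_below_horizon=30, rad_per_pixel=1e-4)
```

    Note how strongly range sensitivity grows near the horizon: a few pixels of dip correspond to kilometres, which is consistent with location error increasing with distance.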

  2. Estimating the spatial position of marine mammals based on digital camera recordings.

    PubMed

    Hoekendijk, Jeroen P A; de Vries, Jurre; van der Bolt, Krissy; Greinert, Jens; Brasseur, Sophie; Camphuysen, Kees C J; Aarts, Geert

    2015-02-01

    Estimating the spatial position of organisms is essential to quantify interactions between the organism and the characteristics of its surroundings, for example, predator-prey interactions, habitat selection, and social associations. Because marine mammals spend most of their time under water and may appear at the surface only briefly, determining their exact geographic location can be challenging. Here, we developed a photogrammetric method to accurately estimate the spatial position of marine mammals or birds at the sea surface. Digital recordings containing landscape features with known geographic coordinates can be used to estimate the distance and bearing of each sighting relative to the observation point. The method can correct for frame rotation, estimates pixel size based on the reference points, and can be applied to scenarios with and without a visible horizon. A set of R functions was written to process the images and obtain accurate geographic coordinates for each sighting. The method is applied to estimate the spatiotemporal fine-scale distribution of harbour porpoises in a tidal inlet. Video recordings of harbour porpoises were made from land, using a standard digital single-lens reflex (DSLR) camera, positioned at a height of 9.59 m above mean sea level. Porpoises were detected up to a distance of ∼3136 m (mean 596 m), with a mean location error of 12 m. The method presented here allows for multiple detections of different individuals within a single video frame and for tracking movements of individuals based on repeated sightings. In comparison with traditional methods, this method only requires a digital camera to provide accurate location estimates. It especially has great potential in regions with ample data on local (a)biotic conditions, to help resolve functional mechanisms underlying habitat selection and other behaviors in marine mammals in coastal areas.

  3. Ontological Representation of Light Wave Camera Data to Support Vision-Based AmI

    PubMed Central

    Serrano, Miguel Ángel; Gómez-Romero, Juan; Patricio, Miguel Ángel; García, Jesús; Molina, José Manuel

    2012-01-01

    Recent advances in technologies for capturing video data have opened a vast amount of new application areas in visual sensor networks. Among them, the incorporation of light wave cameras on Ambient Intelligence (AmI) environments provides more accurate tracking capabilities for activity recognition. Although the performance of tracking algorithms has quickly improved, symbolic models used to represent the resulting knowledge have not yet been adapted to smart environments. This lack of representation prevents systems from taking advantage of the semantic quality of the information provided by new sensors. This paper advocates for the introduction of a part-based representational level in cognitive-based systems in order to accurately represent the novel sensors' knowledge. The paper also reviews the theoretical and practical issues in part-whole relationships, proposing a specific taxonomy for computer vision approaches. General part-based patterns for the human body and transitive part-based representation and inference are incorporated into a previous ontology-based framework to enhance scene interpretation in the area of video-based AmI. The advantages and new features of the model are demonstrated in a Social Signal Processing (SSP) application for the elaboration of live market research.

  4. Lock-in camera based heterodyne holography for ultrasound-modulated optical tomography inside dynamic scattering media

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Shen, Yuecheng; Ma, Cheng; Shi, Junhui; Wang, Lihong V.

    2016-06-01

    Ultrasound-modulated optical tomography (UOT) images optical contrast deep inside scattering media. Heterodyne holography based UOT is a promising technique that uses a camera for parallel speckle detection. In previous works, the speed of data acquisition was limited by the low frame rates of conventional cameras. In addition, when the signal-to-background ratio was low, these cameras wasted most of their bits representing an informationless background, resulting in extremely low efficiencies in the use of bits. Here, using a lock-in camera, we increase the bit efficiency and reduce the data transfer load by digitizing only the signal after rejecting the background. Moreover, compared with the conventional four-frame based amplitude measurement method, our single-frame method is more immune to speckle decorrelation. Using lock-in camera based UOT with an integration time of 286 μs, we imaged an absorptive object buried inside a dynamic scattering medium exhibiting a speckle correlation time (τc) as short as 26 μs. Since our method can tolerate speckle decorrelation faster than that found in living biological tissue (τc ∼ 100-1000 μs), it is promising for in vivo deep tissue non-invasive imaging.
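
    The conventional four-frame amplitude measurement that the single-frame lock-in scheme is compared against recovers the modulation amplitude from four intensity samples stepped by 90 degrees, which is why it is slower and more vulnerable to speckle decorrelation between frames:

```python
import math

def four_frame_amplitude(i1, i2, i3, i4):
    """Conventional four-frame (0, 90, 180, 270 degree) phase-stepping
    amplitude estimate: A = sqrt((I1-I3)^2 + (I2-I4)^2) / 2."""
    return math.sqrt((i1 - i3) ** 2 + (i2 - i4) ** 2) / 2.0

# Intensities I_k = B + A*cos(phi + k*pi/2): background B, modulation
# amplitude A, unknown phase phi. The estimator recovers A exactly.
B, A, phi = 10.0, 2.5, 0.7
frames = [B + A * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = four_frame_amplitude(*frames)
```

    The lock-in camera instead performs this demodulation per pixel in hardware within a single exposure, so the speckle only needs to stay correlated for one integration time.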

  5. Lock-in camera based heterodyne holography for ultrasound-modulated optical tomography inside dynamic scattering media.

    PubMed

    Liu, Yan; Shen, Yuecheng; Ma, Cheng; Shi, Junhui; Wang, Lihong V

    2016-06-06

    Ultrasound-modulated optical tomography (UOT) images optical contrast deep inside scattering media. Heterodyne holography based UOT is a promising technique that uses a camera for parallel speckle detection. In previous works, the speed of data acquisition was limited by the low frame rates of conventional cameras. In addition, when the signal-to-background ratio was low, these cameras wasted most of their bits representing an informationless background, resulting in extremely low efficiencies in the use of bits. Here, using a lock-in camera, we increase the bit efficiency and reduce the data transfer load by digitizing only the signal after rejecting the background. Moreover, compared with the conventional four-frame based amplitude measurement method, our single-frame method is more immune to speckle decorrelation. Using lock-in camera based UOT with an integration time of 286 μs, we imaged an absorptive object buried inside a dynamic scattering medium exhibiting a speckle correlation time (τc) as short as 26 μs. Since our method can tolerate speckle decorrelation faster than that found in living biological tissue (τc ∼ 100-1000 μs), it is promising for in vivo deep tissue non-invasive imaging.

  6. KEK-IMSS Slow Positron Facility

    NASA Astrophysics Data System (ADS)

    Hyodo, T.; Wada, K.; Yagishita, A.; Kosuge, T.; Saito, Y.; Kurihara, T.; Kikuchi, T.; Shirakawa, A.; Sanami, T.; Ikeda, M.; Ohsawa, S.; Kakihara, K.; Shidara, T.

    2011-12-01

    The Slow Positron Facility at the Institute of Material Structure Science (IMSS) of the High Energy Accelerator Research Organization (KEK) is a user dedicated facility with an energy tunable (0.1 - 35 keV) slow positron beam produced by a dedicated 55 MeV linac. The present beam line branches have been used for positronium time-of-flight (Ps-TOF) measurements, the transmission positron microscope (TPM) and the photo-detachment of Ps negative ions (Ps−). During 2010, a reflection high-energy positron diffraction (RHEPD) measurement station is being installed. The slow positron generator (converter/moderator) system will be modified to obtain a higher slow positron intensity, and a new user-friendly beam line power-supply control and vacuum monitoring system is being developed. Another plan for this year is the transfer of a 22Na-based slow positron beam from RIKEN. This machine will be used for continuous slow positron beam applications and for the orientation training of those who are interested in beginning research with a slow positron beam.

  7. Improvement of the GRACE star camera data based on the revision of the combination method

    NASA Astrophysics Data System (ADS)

    Bandikova, Tamara; Flury, Jakob

    2014-11-01

    The new release of the sensor and instrument data (Level-1B release 02) of the Gravity Recovery and Climate Experiment (GRACE) had a substantial impact on the improvement of the overall accuracy of the gravity field models. This implies that improvements at the sensor data level can still bring the solutions significantly closer to the GRACE baseline accuracy. Recent analysis of the GRACE star camera data (SCA1B RL02) revealed unexpectedly high noise. As the star camera (SCA) data are essential for the processing of the K-band ranging data and the accelerometer data, a thorough investigation of the data set was needed. We fully reexamined the SCA data processing from Level-1A to Level-1B with focus on the method for combining the data delivered by the two SCA heads. In the first step, we produced and compared our own combined attitude solutions by applying two different combination methods to the SCA Level-1A data. The first method introduces the information about the anisotropic accuracy of the star camera measurement in terms of a weighting matrix; this method was applied in the official processing as well. The alternative method merges only the well-determined SCA boresight directions, and was implemented on the GRACE SCA data for the first time. Both methods were expected to provide an optimal solution characterized by full accuracy about all three axes, which was confirmed. In the second step, we analyzed the differences between the official SCA1B RL02 data generated by the Jet Propulsion Laboratory (JPL) and our solution. SCA1B RL02 contains systematically higher noise, by a factor of about 3-4. The data analysis revealed that the reason is an incorrect implementation of the algorithms in the JPL processing routines. After correct implementation of the combination method, significant improvement within the whole spectrum was achieved. Based on these results, official reprocessing of the SCA data is suggested, as the SCA attitude data

  8. Positron emission tomography: physics, instrumentation, and image analysis.

    PubMed

    Porenta, G

    1994-01-01

    Positron emission tomography (PET) is a noninvasive diagnostic technique that permits reconstruction of cross-sectional images of the human body which depict the biodistribution of PET tracer substances. A large variety of physiological PET tracers, mostly based on isotopes of carbon, nitrogen, oxygen, and fluorine, is available and allows the in vivo investigation of organ perfusion, metabolic pathways, and biomolecular processes in normal and diseased states. PET cameras utilize the physical characteristics of positron decay to derive quantitative measurements of tracer concentrations, a capability that has so far been elusive for conventional SPECT (single photon emission computed tomography) imaging techniques. Due to the short half-lives of most PET isotopes, an on-site cyclotron and a radiochemistry unit are necessary to provide an adequate supply of PET tracers. While operating a PET center in the past was a complex procedure restricted to a few academic centers with ample resources, PET technology has rapidly advanced in recent years and has entered the commercial nuclear medicine market. To date, the availability of compact cyclotrons with remote computer control, automated synthesis units for PET radiochemistry, high-performance PET cameras, and user-friendly analysis workstations permits installation of a clinical PET center within most nuclear medicine facilities. This review provides simple descriptions of important aspects concerning physics, instrumentation, and image analysis in PET imaging which should be understood by medical personnel involved in the clinical operation of a PET imaging center.

  9. A novel virtual four-ocular stereo vision system based on single camera for measuring insect motion parameters

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Zhang, Guangjun; Chen, Dazhi

    2005-11-01

    A novel virtual four-ocular stereo measurement system based on a single high-speed camera is proposed for measuring the double beating wings of a high-speed flapping insect. The principle of the virtual monocular system, consisting of a few planar mirrors and a single high-speed camera, is introduced. The stereo vision measurement principle, based on optical triangulation, is explained. The wing kinematics parameters are measured. Results show that this virtual stereo system not only greatly reduces system cost but is also effective for insect motion measurement.

  10. Method for measuring stereo camera depth accuracy based on stereoscopic vision

    NASA Astrophysics Data System (ADS)

    Kytö, Mikko; Nuutinen, Mikko; Oittinen, Pirkko

    2011-03-01

    We present a method to evaluate stereo camera depth accuracy in human-centered applications. It enables the comparison between stereo camera depth resolution and human depth resolution. Our method uses a multilevel test target which can be easily assembled and used in various studies. Binocular disparity enables humans to perceive relative depths accurately, making a multilevel test target applicable for evaluating stereo camera depth accuracy when the accuracy requirements come from stereoscopic vision. The method for measuring stereo camera depth accuracy was validated with a stereo camera built of two SLRs (single-lens reflex cameras). The depth resolution of the SLRs was better than normal stereo acuity at all measured distances, ranging from 0.7 m to 5.8 m. The method was then used to evaluate the accuracy of a lower-quality stereo camera. Two parameters, focal length and baseline, were varied. Focal length had a larger effect on the stereo camera's depth accuracy than baseline. The tests showed that normal stereo acuity was achieved only using a tele lens. However, a user's depth resolution in a video see-through system differs from direct naked-eye viewing. The same test target was used to evaluate this by mixing the levels of the test target randomly and asking users to sort the levels according to their depth. The comparison between stereo camera depth resolution and perceived depth resolution was done by calculating the maximum erroneous classification of levels.
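    The role of focal length and baseline follows from the standard stereo depth-error relation ΔZ ≈ Z²·Δd/(f·b). A small sketch with illustrative numbers (not the paper's camera parameters); note the relation itself is symmetric in f and b, so the asymmetry the authors observed comes from the practical ranges over which each can be varied:

```python
def depth_error(z_m, focal_px, baseline_m, disparity_err_px=1.0):
    """Approximate stereo depth resolution: dZ = Z^2 * dd / (f * b),
    with focal length in pixels, baseline and depth in metres."""
    return z_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Doubling the focal length halves the depth error at a given depth,
# as does doubling the baseline; error grows quadratically with depth.
e_wide = depth_error(z_m=3.0, focal_px=1500, baseline_m=0.1)
e_tele = depth_error(z_m=3.0, focal_px=3000, baseline_m=0.1)
print(e_wide, e_tele)  # 0.06 0.03
```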

  11. Maximum-likelihood scintillation detection for EM-CCD based gamma cameras.

    PubMed

    Korevaar, Marc A N; Goorden, Marlies C; Heemskerk, Jan W T; Beekman, Freek J

    2011-08-07

    Gamma cameras based on charge-coupled devices (CCDs) coupled to continuous scintillation crystals can combine a good detection efficiency with high spatial resolutions with the aid of advanced scintillation detection algorithms. A previously developed analytical multi-scale algorithm (MSA) models the depth-dependent light distribution but does not take statistics into account. Here we present and validate a novel statistical maximum-likelihood algorithm (MLA) that combines a realistic light distribution model with an experimentally validated statistical model. The MLA was tested for an electron multiplying CCD optically coupled to CsI(Tl) scintillators of different thicknesses. For (99m)Tc imaging, the spatial resolution (for perpendicular and oblique incidence), energy resolution and signal-to-background counts ratio (SBR) obtained with the MLA were compared with those of the MSA. Compared to the MSA, the MLA improves the energy resolution by more than a factor of 1.6 and the SBR is enhanced by more than a factor of 1.3. For oblique incidence (approximately 45°), the depth-of-interaction corrected spatial resolution is improved by a factor of at least 1.1, while for perpendicular incidence the MLA resolution does not consistently differ significantly from the MSA result for all tested scintillator thicknesses. For the thickest scintillator (3 mm, interaction probability 66% at 141 keV) a spatial resolution (perpendicular incidence) of 147 µm full width at half maximum (FWHM) was obtained with an energy resolution of 35.2% FWHM. These results of the MLA were achieved without prior calibration of scintillations as is needed for many statistical scintillation detection algorithms. We conclude that the MLA significantly improves the gamma camera performance compared to the MSA.
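    A maximum-likelihood scintillation position estimate of this kind amounts to maximizing a Poisson log-likelihood over candidate interaction positions, given a model of the expected light distribution on the sensor. The 1-D Gaussian spread model and all numbers below are illustrative assumptions, not the authors' depth-dependent light model or statistical model:

```python
import math, random

def expected_counts(x0, pixels, total=500.0, sigma=1.2):
    """Illustrative 1-D light distribution: mean photon count per pixel
    for a scintillation at position x0 (Gaussian spread model)."""
    w = [math.exp(-0.5 * ((p - x0) / sigma) ** 2) for p in pixels]
    s = sum(w)
    return [total * wi / s for wi in w]

def ml_position(counts, pixels, grid):
    """Grid-search the Poisson log-likelihood
    sum_i n_i*log(mu_i(x)) - mu_i(x) over candidate positions x."""
    def loglik(x):
        mu = expected_counts(x, pixels)
        return sum(n * math.log(m) - m for n, m in zip(counts, mu))
    return max(grid, key=loglik)

random.seed(1)
pixels = list(range(16))
true_x = 7.3
# Noisy observed counts (Gaussian approximation to Poisson, clamped at 0).
counts = [max(0.0, random.gauss(m, math.sqrt(m)))
          for m in expected_counts(true_x, pixels)]
grid = [i / 10 for i in range(0, 160)]
est = ml_position(counts, pixels, grid)
print(est)  # estimate lands near the true position 7.3
```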

  12. Observation of Passive and Explosive Emissions at Stromboli with a Ground-based Hyperspectral TIR Camera

    NASA Astrophysics Data System (ADS)

    Smekens, J. F.; Mathieu, G.

    2015-12-01

    Scientific imaging techniques have progressed at a fast pace in recent years, thanks in part to great improvements in detector technology and to our ability to process large amounts of complex data using sophisticated software. Broadband thermal cameras are ubiquitously used for permanent monitoring of volcanic activity and have been used in a multitude of scientific applications, from tracking ballistics to studying the thermal evolution of lava flow fields and volcanic plumes. In parallel, UV cameras are now used at several volcano observatories to quantify daytime sulfur dioxide (SO2) emissions at very high frequency. In this work we present the results of the first deployment of a ground-based Thermal Infrared (TIR) Hyperspectral Imaging System (Telops Hyper-Cam LW) for the study of passive and explosive volcanic activity at Stromboli volcano, Italy. The instrument uses a Michelson spectrometer and Fourier Transform Infrared Spectrometry to produce hyperspectral datacubes of a scene (320x256 pixels) in the range 7.7-11.8 μm, with a spectral resolution of up to 0.25 cm-1 and at frequencies of ~10 Hz. The activity at Stromboli is characterized by explosions of small magnitude, often containing significant amounts of gas and ash, separated by periods of quiescent degassing of 10-60 minutes. With our dataset, spanning about 5 days of monitoring, we are able to detect and track temporal variations of SO2 and ash emissions during both daytime and nighttime. This ultimately allows for the quantification of the mass of gas and ash ejected during and between explosive events. Although the high price and power consumption of the instrument are obstacles to its deployment as a monitoring tool, this type of data set offers unprecedented insight into the dynamic processes taking place at Stromboli, and could lead to a better understanding of the eruptive mechanisms at persistently active systems in general.

  13. Method for validating cloud mask obtained from satellite measurements using ground-based sky camera.

    PubMed

    Letu, Husi; Nagao, Takashi M; Nakajima, Takashi Y; Matsumae, Yoshiaki

    2014-11-01

    Error propagation into Earth's atmospheric, oceanic, and land surface parameters of satellite products caused by misclassification of the cloud mask is a critical issue for improving the accuracy of satellite products. Thus, characterizing the accuracy of the cloud mask is important for investigating the influence of the cloud mask on satellite products. In this study, we proposed a method for validating cloud masks derived from multiwavelength satellite data using ground-based sky camera (GSC) data. First, a cloud cover algorithm for GSC data was developed using a sky index and a brightness index. Then, cloud masks derived from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data by two cloud-screening algorithms (i.e., MOD35 and CLAUDIA) were validated using the GSC cloud mask. The results indicate that MOD35 is likely to classify ambiguous pixels as "cloudy," whereas CLAUDIA is likely to classify them as "clear." Furthermore, the influence of error propagation caused by misclassification of the MOD35 and CLAUDIA cloud masks on MODIS-derived reflectance, brightness temperature, and normalized difference vegetation index (NDVI) in clear and cloudy pixels was investigated using sky camera data. The results show that the influence of the error propagation by the MOD35 cloud mask on the MODIS-derived monthly mean reflectance, brightness temperature, and NDVI for clear pixels is significantly smaller than that by the CLAUDIA cloud mask; the influence of the error propagation by the CLAUDIA cloud mask on MODIS-derived monthly mean cloud products for cloudy pixels is significantly smaller than that by the MOD35 cloud mask.
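    Ground-based sky cameras commonly classify pixels with simple RGB ratio indices. The sketch below uses one common formulation, a sky index SI = (B − R)/(B + R), with an illustrative threshold; the paper's actual index definitions and thresholds may differ:

```python
def sky_index(r, b):
    """Sky index from the red and blue channels: clear sky scatters
    blue strongly (SI near 1), while cloud is grey (SI near 0)."""
    return (b - r) / (b + r) if (b + r) else 0.0

def classify_pixel(r, g, b, threshold=0.12):
    # Illustrative threshold; real algorithms tune it (and combine it
    # with a brightness test) against labeled training imagery.
    return "clear" if sky_index(r, b) > threshold else "cloudy"

print(classify_pixel(60, 120, 200))   # deep blue pixel -> clear
print(classify_pixel(180, 185, 190))  # grey pixel -> cloudy
```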

  14. Temperature dependent operation of PSAPD-based compact gamma camera for SPECT imaging

    PubMed Central

    Kim, Sangtaek; McClish, Mickel; Alhassen, Fares; Seo, Youngho; Shah, Kanai S.; Gould, Robert G.

    2011-01-01

    We investigated the dependence of image quality on the temperature of a position sensitive avalanche photodiode (PSAPD)-based small animal single photon emission computed tomography (SPECT) gamma camera with a CsI:Tl scintillator. Currently, nitrogen gas cooling is preferred to operate PSAPDs in order to minimize the dark current shot noise. Being able to operate a PSAPD at a relatively high temperature (e.g., 5 °C) would allow a more compact and simpler cooling system for the PSAPD. In our investigation, the temperature of the PSAPD was controlled by varying the flow of cold nitrogen gas through the PSAPD module and was varied from −40 °C to 20 °C. Three experiments were performed to demonstrate the performance variation over this temperature range. The point spread function (PSF) of the gamma camera was measured at various temperatures, showing the variation of the full-width-at-half-maximum (FWHM) of the PSF. In addition, a 99mTc-pertechnetate (140 keV) flood source was imaged and the visibility of the scintillator segmentation (16×16 array, 8 mm × 8 mm area, 400 μm pixel size) at different temperatures was evaluated. Comparison of image quality was made at −25 °C and 5 °C using a mouse heart phantom filled with an aqueous solution of 99mTc-pertechnetate and imaged using a 0.5 mm pinhole collimator made of tungsten. The reconstructed image quality of the mouse heart phantom at 5 °C was degraded in comparison to the reconstructed image quality at −25 °C. However, the defect and structure of the mouse heart phantom were clearly observed, showing the feasibility of operating PSAPDs for SPECT imaging at 5 °C, a temperature that would not require nitrogen cooling. All PSAPD evaluations were conducted with an applied bias voltage that allowed the highest gain at a given temperature. PMID:24465051

  15. An energy-optimized collimator design for a CZT-based SPECT camera

    PubMed Central

    Weng, Fenghua; Bagchi, Srijeeta; Zan, Yunlong; Huang, Qiu; Seo, Youngho

    2015-01-01

    In single photon emission computed tomography, it is a challenging task to maintain reasonable performance using only one specific collimator for radiotracers over a broad spectrum of diagnostic photon energies, since photon scatter and penetration in a collimator differ with the photon energy. Frequent collimator exchanges are inevitable in daily clinical SPECT imaging, which hinders throughput while subjecting the camera to operational errors and damage. Our objective is to design a collimator which, independent of the photon energy, performs reasonably well for commonly used radiotracers with low- to medium-energy levels of gamma emissions. Using the Geant4 simulation toolkit, we simulated and evaluated a parallel-hole collimator mounted to a CZT detector. With the pixel-geometry-matching collimation, the pitch of the collimator hole was fixed to match the pixel size of the CZT detector throughout this work. Four variables, hole shape, hole length, hole radius/width and the source-to-collimator distance, were carefully studied. Scatter and penetration of the collimator, and the sensitivity and spatial resolution of the system, were assessed for four radionuclides including 57Co, 99mTc, 123I and 111In, with respect to the aforementioned four variables. An optimal collimator was then decided upon such that it maximized the total relative sensitivity (TRS) for the four considered radionuclides while other performance parameters, such as scatter, penetration and spatial resolution, were benchmarked to prevalent commercial scanners and collimators. Digital phantom studies were also performed to validate the system with the optimal square-hole collimator (23 mm hole length, 1.28 mm hole width, 0.32 mm septal thickness) in terms of contrast, contrast-to-noise ratio and recovery ratio. This study demonstrates the promise of our proposed energy-optimized collimator for use in a CZT-based gamma camera, with comparable or even better imaging performance versus commercial collimators.
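    The geometric trade-off being optimized can be sketched with the textbook parallel-hole collimator relations; the hole-shape constant, the attenuation coefficient, and the plugged-in numbers below are illustrative approximations, not values from the paper's Geant4 model:

```python
def collimator_metrics(hole_width_mm, hole_length_mm, septa_mm,
                       source_dist_mm, mu_per_mm=2.3):
    """Textbook parallel-hole collimator estimates.
    Resolution: R = d*(L_eff + z)/L_eff, with L_eff = L - 2/mu
    (septal-penetration correction; mu ~ 2.3/mm for lead near 140 keV).
    Geometric efficiency: g ~ (K*d/L_eff)^2 * (d/(d+t))^2, with
    K ~ 0.28 used here as an approximate square-hole shape factor."""
    d, L, t, z = hole_width_mm, hole_length_mm, septa_mm, source_dist_mm
    l_eff = L - 2.0 / mu_per_mm
    resolution = d * (l_eff + z) / l_eff
    efficiency = (0.28 * d / l_eff) ** 2 * (d / (d + t)) ** 2
    return resolution, efficiency

# Values resembling the optimized design in the abstract
# (23 mm length, 1.28 mm width, 0.32 mm septa) at 100 mm distance.
res, eff = collimator_metrics(1.28, 23.0, 0.32, 100.0)
print(round(res, 2), f"{eff:.2e}")
```

    The formulas make the energy trade-off explicit: a higher photon energy lowers μ, shortening L_eff and degrading both resolution and septal stopping power, which is why a single collimator struggles across 57Co to 111In energies.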

  16. An energy-optimized collimator design for a CZT-based SPECT camera

    NASA Astrophysics Data System (ADS)

    Weng, Fenghua; Bagchi, Srijeeta; Zan, Yunlong; Huang, Qiu; Seo, Youngho

    2016-01-01

    In single photon emission computed tomography, it is a challenging task to maintain reasonable performance using only one specific collimator for radiotracers over a broad spectrum of diagnostic photon energies, since photon scatter and penetration in a collimator differ with the photon energy. Frequent collimator exchanges are inevitable in daily clinical SPECT imaging, which hinders throughput while subjecting the camera to operational errors and damage. Our objective is to design a collimator which, independent of the photon energy, performs reasonably well for commonly used radiotracers with low- to medium-energy levels of gamma emissions. Using the Geant4 simulation toolkit, we simulated and evaluated a parallel-hole collimator mounted to a CZT detector. With the pixel-geometry-matching collimation, the pitch of the collimator hole was fixed to match the pixel size of the CZT detector throughout this work. Four variables, hole shape, hole length, hole radius/width and the source-to-collimator distance, were carefully studied. Scatter and penetration of the collimator, and the sensitivity and spatial resolution of the system, were assessed for four radionuclides including 57Co, 99mTc, 123I and 111In, with respect to the aforementioned four variables. An optimal collimator was then decided upon such that it maximized the total relative sensitivity (TRS) for the four considered radionuclides while other performance parameters, such as scatter, penetration and spatial resolution, were benchmarked to prevalent commercial scanners and collimators. Digital phantom studies were also performed to validate the system with the optimal square-hole collimator (23 mm hole length, 1.28 mm hole width, and 0.32 mm septal thickness) in terms of contrast, contrast-to-noise ratio and recovery ratio. This study demonstrates the promise of our proposed energy-optimized collimator for use in a CZT-based gamma camera, with comparable or even better imaging performance versus commercial

  17. Human Detection Based on the Generation of a Background Image and Fuzzy System by Using a Thermal Camera

    PubMed Central

    Jeon, Eun Som; Kim, Jong Hyun; Hong, Hyung Gil; Batchuluun, Ganbayar; Park, Kang Ryoung

    2016-01-01

    Recently, human detection has been used in various applications. Although visible light cameras are usually employed for this purpose, human detection based on visible light cameras has limitations due to darkness, shadows, sunlight, etc. An approach using a thermal (far infrared light) camera has been studied as an alternative for human detection; however, the performance of human detection by thermal cameras is degraded in cases of a low temperature difference between the human and the background. To overcome these drawbacks, we propose a new method for human detection using thermal camera images. The main contribution of our research is that the thresholds for creating the binarized difference image between the input and background (reference) images can be adaptively determined based on fuzzy systems, using information derived from the background image and the difference values between the background and input images. Using our method, human areas can be correctly detected irrespective of the varying conditions of the input and background (reference) images. For the performance evaluation of the proposed method, experiments were performed with 15 datasets captured under different weather and light conditions. In addition, experiments with an open database were also performed. The experimental results confirm that the proposed method can robustly detect human shapes in various environments. PMID:27043564
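    The core step, binarizing the difference between input and background frames with an adaptively chosen threshold, can be sketched as follows. The fuzzy inference the paper uses to pick the threshold is replaced here by a simple statistics-based rule, purely for illustration:

```python
def binarize_difference(frame, background, k=1.0):
    """Threshold |frame - background| at mean + k*std of the difference
    image. The paper derives the threshold from a fuzzy system instead;
    this statistics-based rule is an illustrative stand-in."""
    diff = [abs(f - b) for f, b in zip(frame, background)]
    n = len(diff)
    mean = sum(diff) / n
    std = (sum((d - mean) ** 2 for d in diff) / n) ** 0.5
    thresh = mean + k * std
    return [1 if d > thresh else 0 for d in diff]

# Toy 1-D thermal scan: flat background, warm object at pixels 4-5.
background = [30] * 8
frame = [30, 31, 29, 30, 90, 92, 30, 31]
print(binarize_difference(frame, background))  # [0, 0, 0, 0, 1, 1, 0, 0]
```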

  18. Multi-Kinect v2 Camera Based Monitoring System for Radiotherapy Patient Safety.

    PubMed

    Santhanam, Anand P; Min, Yugang; Kupelian, Patrick; Low, Daniel

    2016-01-01

    3D Kinect camera systems are essential for real-time imaging of a 3D treatment space that consists of both the patient anatomy and the treatment equipment setup. In this paper, we present the technical details of a 3D treatment room monitoring system that employs a scalable number of calibrated and coregistered Kinect v2 cameras. The monitoring system tracks radiation gantry and treatment couch positions, and tracks the patient and immobilization accessories. The number and positions of the cameras were selected to avoid line-of-sight issues and to adequately cover the treatment setup. The cameras were calibrated with a calibration error of 0.1 mm. Our tracking system evaluation shows that both gantry and patient motion could be acquired at a rate of 30 frames per second. The transformations between the cameras yielded a 3D treatment space accuracy of < 2 mm error in a radiotherapy setup within 500 mm around the isocenter.

  19. Design of an Event-Driven Random-Access-Windowing CCD-Based Camera

    NASA Technical Reports Server (NTRS)

    Monacos, Steve P.; Lam, Raymond K.; Portillo, Angel A.; Ortiz, Gerardo G.

    2003-01-01

    Commercially available cameras are not designed for the combination of single-frame and high-speed streaming digital video with real-time control of the size and location of multiple regions-of-interest (ROIs). A new control paradigm is defined to eliminate the tight coupling between the camera logic and the host controller. This functionality is achieved by defining the indivisible pixel readout operation on a per-ROI basis with in-camera timekeeping capability. This methodology provides a Random Access, Real-Time, Event-driven (RARE) camera for adaptive camera control and is well suited for target-tracking applications requiring autonomous control of multiple ROIs. It additionally provides reduced ROI readout time and higher frame rates compared to the original architecture by avoiding external control intervention during the ROI readout process.

  20. BroCam: a versatile PC-based CCD camera system

    NASA Astrophysics Data System (ADS)

    Klougart, Jens

    1995-03-01

    At the Copenhagen University, we have developed a compact CCD camera system for single and mosaic CCDs. Camera control and data acquisition are performed by a 486-type PC via a frame buffer located in one ISA-bus slot, communicating with the camera electronics over two optical fibers. The PC can run special-purpose DOS programs as well as operate in a more general mode under LINUX, a UNIX-like operating system. In the latter mode, standard software packages such as SAOimage and Gnuplot are utilized extensively, thereby reducing the amount of camera-specific software. At the same time the observer feels at ease with the system in an IRAF-like environment. Finally, the LINUX version enables the camera to be remotely controlled.

  1. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    SciTech Connect

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-06-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
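    The triangulation step, recovering a 3D point from its projections under two camera poses, can be sketched with the midpoint method: find the point halfway between the two viewing rays at their closest approach. The geometry below is synthetic and noise-free for clarity; the authors' pipeline additionally estimates the camera poses themselves from tracked surface points:

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: for rays p = c + t*d from two camera
    centres, solve the 2x2 normal equations for the ray parameters
    (t1, t2) at closest approach and return the midpoint."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    r = [x - y for x, y in zip(c2, c1)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e, f = dot(d1, r), dot(d2, r)
    den = a * c - b * b          # near zero for (near-)parallel rays
    t1 = (e * c - b * f) / den
    t2 = (b * e - a * f) / den
    p1 = [ci + t1 * di for ci, di in zip(c1, d1)]
    p2 = [ci + t2 * di for ci, di in zip(c2, d2)]
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two camera centres viewing the point (1, 2, 10); ray directions are
# taken straight from the known geometry.
target = [1.0, 2.0, 10.0]
c1, c2 = [0.0, 0.0, 0.0], [2.0, 0.0, 0.0]
d1 = [t - c for t, c in zip(target, c1)]
d2 = [t - c for t, c in zip(target, c2)]
print(triangulate_midpoint(c1, d1, c2, d2))  # [1.0, 2.0, 10.0]
```

    With noisy ray directions the two rays no longer intersect, and the accuracy of the midpoint degrades with the triangulation angle, which is one reason small endoscope baselines make accurate 3D point recovery difficult.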

  2. A study of defects in iron-based binary alloys by the Mössbauer and positron annihilation spectroscopies

    SciTech Connect

    Idczak, R. Konieczny, R.; Chojcan, J.

    2014-03-14

    The room temperature positron annihilation lifetime spectra and {sup 57}Fe Mössbauer spectra were measured for pure Fe as well as for iron-based Fe{sub 1−x}Re{sub x}, Fe{sub 1−x}Os{sub x}, Fe{sub 1−x}Mo{sub x}, and Fe{sub 1−x}Cr{sub x} solid solutions, where x is in the range between 0.01 and 0.05. The measurements were performed in order to check whether the theoretical calculations on the interactions between vacancies and solute atoms in iron, known from the literature, can be supported by experimental data. The vacancies were created during formation and further mechanical processing of the iron systems under consideration, so the spectra mentioned above were collected at least twice for each studied sample synthesized in an arc furnace: after cold rolling to a thickness of about 40 μm as well as after subsequent annealing at 1270 K for 2 h. It was found that only in Fe and the Fe-Cr system are the isolated vacancies thermally generated at high temperatures not observed at room temperature; there, cold rolling of the materials leads to the creation of another type of vacancies, which are associated with edge dislocations. In the case of the other cold-rolled systems, positrons detect vacancies of the two types mentioned above and the Mössbauer nuclei “see” the vacancies mainly in the vicinity of non-iron atoms. This speaks in favour of the suggestion that in an iron matrix the solute atoms of Os, Re, and Mo interact attractively with vacancies, as predicted by theoretical computations, and that the energy of the interaction is large enough for vacancy-solute atom pairs to exist at room temperature. On the other hand, the corresponding interaction for Cr atoms is either repulsive, or attractive but smaller than that for Os, Re, and Mo atoms. The latter is in agreement with the theoretical calculations.

  3. Polarization encoded color camera.

    PubMed

    Schonbrun, Ethan; Möller, Guðfríður; Di Caprio, Giuseppe

    2014-03-15

    Digital cameras would be colorblind if they did not have pixelated color filters integrated into their image sensors. Integration of conventional fixed filters, however, comes at the expense of an inability to modify the camera's spectral properties. Instead, we demonstrate a micropolarizer-based camera that can reconfigure its spectral response. Color is encoded into a linear polarization state by a chiral dispersive element and then read out in a single exposure. The polarization encoded color camera is capable of capturing three-color images at wavelengths spanning the visible to the near infrared.

  4. Target detection for low cost uncooled MWIR cameras based on empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Piñeiro-Ave, José; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando; Artés-Rodríguez, Antonio

    2014-03-01

    In this work, a novel method for detecting low-intensity fast-moving objects with low-cost Medium Wavelength Infrared (MWIR) cameras is proposed. The method is based on background subtraction in a video sequence obtained with a low-density Focal Plane Array (FPA) of the newly available uncooled lead selenide (PbSe) detectors. Thermal instability, along with the lack of specific electronics and mechanical devices for canceling the effect of distortion, makes background image identification very difficult. As a result, the identification of targets is performed in low signal-to-noise ratio (SNR) conditions, which may considerably restrict the sensitivity of the detection algorithm. These problems are addressed in this work by means of a new technique based on the empirical mode decomposition, which accomplishes drift estimation and target detection. Given that background estimation is the most important stage for detection, a preliminary denoising step enabling better drift estimation is designed. Comparisons are conducted against a denoising technique based on the wavelet transform and also with traditional drift estimation methods such as Kalman filtering and the running average. The results reported by the simulations show that the proposed scheme has superior performance.
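    One of the baseline drift estimators the authors compare against, the running average, can be sketched in a few lines (illustrative parameters and toy data; the paper's EMD-based drift estimation replaces this slow-adapting background model):

```python
def running_average_detect(frames, alpha=0.05, k=4.0):
    """Exponential running-average background model: the background is
    updated as bg <- (1-alpha)*bg + alpha*frame, and a pixel is flagged
    when it exceeds the current background estimate by more than k."""
    bg = list(frames[0])
    detections = []
    for frame in frames[1:]:
        hits = [i for i, (p, b) in enumerate(zip(frame, bg)) if p - b > k]
        detections.append(hits)
        bg = [(1 - alpha) * b + alpha * p for p, b in zip(frame, bg)]
    return detections

# Slowly drifting background with a fast transient target at pixel 3
# in frame t=5: the slow model tracks the drift but flags the target.
frames = [[10 + 0.1 * t + (20 if (t == 5 and i == 3) else 0)
           for i in range(6)] for t in range(8)]
det = running_average_detect(frames)
print(det)  # only frame 5 (index 4) reports a hit, at pixel 3
```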

  5. Towards the development of a SiPM-based camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Ambrosi, G.; Bissaldi, E.; Di Venere, L.; Fiandrini, E.; Giglietto, N.; Giordano, F.; Ionica, M.; Paoletti, R.; Simone, D.; Vagelli, V.

    2017-03-01

    The Italian National Institute for Nuclear Physics (INFN) is involved in the development of a prototype for a camera based on Silicon Photomultipliers (SiPMs) for the Cherenkov Telescope Array (CTA), a new generation of telescopes for ground-based gamma-ray astronomy. In this framework, an R&D program within the `Progetto Premiale TElescopi CHErenkov made in Italy (TECHE.it)' for the development of SiPMs suitable for Cherenkov light detection in the Near-Ultraviolet (NUV) has been carried out. The developed device is a NUV High-Density (NUV-HD) SiPM based on a micro cell of 30 μm × 30 μm and an area of 6 mm × 6 mm, produced by Fondazione Bruno Kessler (FBK). A full characterization of the single NUV-HD SiPM will be presented. A matrix of 8 × 8 single NUV-HD SiPMs will be part of the focal plane of the Schwarzschild- Couder Telescope prototype (pSCT) for CTA. An update on recent tests on the detectors arranged in this matrix configuration and on the front-end electronics will be given.

  6. Cluster analysis for identifying sub-types of tinnitus: a positron emission tomography and voxel-based morphometry study.

    PubMed

    Schecklmann, Martin; Lehner, Astrid; Poeppl, Timm B; Kreuzer, Peter M; Hajak, Göran; Landgrebe, Michael; Langguth, Berthold

    2012-11-16

    Tinnitus is a heterogeneous disorder with respect to its etiology and phenotype. Thus, the identification of sub-types is of high relevance for treatment recommendations. To this aim, we used cluster analysis of patients for whom clinical data, positron emission tomography (PET) data and voxel-based morphometry (VBM) data were available. 44 patients with chronic tinnitus were included in this analysis. On a phenotypical level, we used tinnitus distress, duration, and laterality for clustering. To correct PET and VBM data for age, gender, and hearing, we built a design matrix including these variables as regressors and extracted the residuals. We applied Ward's clustering method and forced the cluster analysis to divide the data into two groups for both the imaging and the phenotypical data. On a phenotypical level the clustered groups differed only in tinnitus laterality (uni- vs. bilateral tinnitus), but not in tinnitus duration, distress, age, gender, and hearing. For grey matter volume, groups differed mainly in frontal, cingulate, temporal, and thalamic areas. For glucose metabolism, groups differed in temporal and parietal areas. The correspondence between the classifications from the three data set clusterings was near chance level. Thus, we showed that clustering according to imaging data is feasible and might depict a new approach for identifying tinnitus sub-types. However, it remains an open question to what extent the phenotypical and imaging levels may be interrelated. This article is part of a Special Issue entitled: Tinnitus Neuroscience.
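    Regressing covariates out of imaging data before clustering amounts to fitting the design matrix by least squares and keeping the residuals. A single-covariate sketch with synthetic numbers (the paper's design matrix has age, gender, and hearing as regressors; one covariate is used here only to keep the example short):

```python
def residualize(y, covariate):
    """Remove the linear effect of one covariate (plus intercept) from y
    by ordinary least squares, returning the residuals for clustering."""
    n = len(y)
    mx = sum(covariate) / n
    my = sum(y) / n
    sxy = sum((x - mx) * (v - my) for x, v in zip(covariate, y))
    sxx = sum((x - mx) ** 2 for x in covariate)
    slope = sxy / sxx
    return [v - (my + slope * (x - mx)) for x, v in zip(covariate, y)]

age = [35, 42, 50, 58, 66]
gmv = [0.9, 0.86, 0.81, 0.78, 0.72]   # synthetic grey-matter values
res = residualize(gmv, age)
# By construction the residuals have zero mean and are uncorrelated
# with the covariate, so subsequent clustering is not driven by age.
```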

  7. Vacancy trapping by solute atoms during quenching in Cu-based dilute alloys studied by positron annihilation spectroscopy

    NASA Astrophysics Data System (ADS)

    Yabuuchi, A.; Yamamoto, Y.; Ohira, J.; Sugita, K.; Mizuno, M.; Araki, H.; Shirai, Y.

    2009-11-01

    Frozen-in vacancies and their recovery have been investigated in several Cu-based dilute alloys by positron annihilation lifetime spectroscopy. Cu-0.5at%Sb, Cu-0.5at%Sn and Cu-0.5at%In dilute bulk alloys were quenched into ice water from 1223 K. A pure-Cu specimen was also quenched from the same temperature. No frozen-in vacancies were detected in the as-quenched pure-Cu specimen. In contrast, the as-quenched Cu-0.5at%Sb alloy contained frozen-in thermal-equilibrium vacancies at a concentration of 3 × 10⁻⁵. Furthermore, these frozen-in vacancies in the Cu-0.5at%Sb alloy were stable up to 473 K and began to migrate at 523 K; the alloy had recovered to the fully annealed state by 823 K. This thermal stability clearly implies that an interaction exists between a vacancy and an Sb atom and that, owing to this interaction, thermal-equilibrium vacancies are trapped by Sb atoms during quenching.

  8. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline that takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken with the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant property, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to demonstrate the feasibility of the proposed system. Imaged objects included a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantities of antiques stored in museums.

  9. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    PubMed Central

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline that takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken with the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant property, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to demonstrate the feasibility of the proposed system. Imaged objects included a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantities of antiques stored in museums. PMID:23112656

  10. Uas Based Tree Species Identification Using the Novel FPI Based Hyperspectral Cameras in Visible, NIR and SWIR Spectral Ranges

    NASA Astrophysics Data System (ADS)

    Näsi, R.; Honkavaara, E.; Tuominen, S.; Saari, H.; Pölönen, I.; Hakala, T.; Viljanen, N.; Soukkamäki, J.; Näkki, I.; Ojanen, H.; Reinikainen, J.

    2016-06-01

    Unmanned airborne systems (UAS) based remote sensing offers a flexible tool for environmental monitoring. Novel lightweight Fabry-Perot interferometer (FPI) based, frame-format hyperspectral imaging covering the spectral range from 400 to 1600 nm was used for identifying different species of trees in a forest area. To the best of the authors' knowledge, this was the first study in which stereoscopic hyperspectral VIS, NIR and SWIR data were collected for tree species identification using UAS. The first results of the analysis, based on the fusion of two FPI-based hyperspectral imagers and an RGB camera, showed that the novel FPI hyperspectral technology provided accurate geometric, radiometric and spectral information in a forested scene and is operational for environmental remote sensing applications.

  11. Positron-alkali atom scattering

    NASA Technical Reports Server (NTRS)

    Mceachran, R. P.; Horbatsch, M.; Stauffer, A. D.; Ward, S. J.

    1990-01-01

    Positron-alkali atom scattering was recently investigated both theoretically and experimentally in the energy range from a few eV up to 100 eV. On the theoretical side, calculations of the integrated elastic and excitation cross sections, as well as total cross sections, for Li, Na and K were based upon either the close-coupling method or the modified Glauber approximation. These theoretical results are in good agreement with experimental measurements of the total cross section for both Na and K. Resonance structures were also found in the L = 0, 1 and 2 partial waves for positron scattering from the alkalis. The structure of these resonances appears to be quite complex and, as expected, they occur in conjunction with the atomic excitation thresholds. Currently, both theoretical and experimental work is in progress on positron-Rb scattering in the same energy range.

  12. Positron spectroscopy for materials characterization

    SciTech Connect

    Schultz, P.J.; Snead, C.L. Jr.

    1988-01-01

    One of the more active areas of research on materials involves the observation and characterization of defects. The discovery of positron localization in vacancy-type defects in solids in the 1960s initiated a vast number of experimental and theoretical investigations which continue to this day. Traditional positron annihilation spectroscopic techniques, including lifetime studies, angular correlation, and Doppler broadening of annihilation radiation, are still being applied to new problems in the bulk properties of simple metals and their alloys. In addition, new techniques based on tunable sources of monoenergetic positron beams have, in the last 5 years, expanded the horizons to studies of surfaces, thin films, and interfaces. In the present paper we briefly review these experimental techniques, illustrating with some of the important accomplishments of the field. 40 refs., 19 figs.

  13. a Uav-Based Low-Cost Stereo Camera System for Archaeological Surveys - Experiences from Doliche (turkey)

    NASA Astrophysics Data System (ADS)

    Haubeck, K.; Prinz, T.

    2013-08-01

    The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and -objects using UAV-attached digital small frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs, but when choppy or windy weather prevents an accurate nadir-waypoint flight, two single aerial images do not always achieve the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from slightly different angles at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g., the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photobase of the applied stereo camera and the resulting base-to-height ratio, however, the accuracy of the DTM depends directly on the UAV flight altitude.

  14. Car speed estimation based on cross-ratio using video data of car-mounted camera (black box).

    PubMed

    Han, Inhwan

    2016-12-01

    This paper proposes several methods for using footage from a car-mounted camera (car black box) to estimate the speed of the camera-equipped car or of other cars. This enables estimating car velocities directly from recorded footage, without knowing the specific physical locations of the cars shown in the recorded material. To achieve this, the study collected 96 black box recordings and classified them for analysis based on factors such as travel circumstances and directions. With these data, several case studies were conducted on estimating the speed of the camera-mounted car and of other cars in the footage, both while the camera-mounted car is stationary and while it is moving. Additionally, a rough method for estimating the speed of other cars moving along a curvilinear path, and its analysis results, are described for practical use. Speed estimates made using the cross-ratio were compared with results of the traditional footage-analysis method and with GPS-derived speeds for the camera-mounted cars, demonstrating the method's applicability.
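
The cross-ratio idea behind this speed estimation can be illustrated with a short sketch. The cross-ratio of four collinear points is invariant under perspective projection, so three reference points with known road positions let us recover the fourth point (the car) from its image position, and hence the speed from two frames. All numbers below (marker spacing, pixel rows, frame interval) are hypothetical:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (A,B;C,D) of four collinear 1-D coordinates."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def solve_fourth_point(a, b, c, cr):
    """Recover the real-world coordinate d from the cross-ratio
    measured in the image and three known real positions a, b, c."""
    k = cr * (c - b) / (c - a)
    return (b - k * a) / (1 - k)

# Hypothetical example: three lane markers at 0 m, 5 m and 10 m along
# the road, imaged at pixel rows 400, 300 and 250 (perspective
# foreshortening). A car appears at pixel row 220 in one frame and
# 210 in the next, 0.5 s apart.
markers_world = (0.0, 5.0, 10.0)
markers_px = (400.0, 300.0, 250.0)

positions = []
for car_px in (220.0, 210.0):
    cr = cross_ratio(*markers_px, car_px)
    positions.append(solve_fourth_point(*markers_world, cr))

speed_mps = (positions[1] - positions[0]) / 0.5
print(f"estimated speed: {speed_mps:.1f} m/s")  # → estimated speed: 4.5 m/s
```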

  15. Instrumentation optimization for positron emission mammography

    SciTech Connect

    Moses, William W.; Qi, Jinyi

    2003-06-05

    The past several years have seen designs for PET cameras optimized to image the breast, commonly known as Positron Emission Mammography or PEM cameras. The guiding principle behind PEM instrumentation is that a camera whose field of view is restricted to a single breast has higher performance and lower cost than a conventional PET camera. The most common geometry is a pair of parallel planes of detector modules, although geometries that encircle the breast have also been proposed. The ability of the detector modules to measure the depth of interaction (DOI) is also a relevant feature. This paper finds that while both the additional solid angle coverage afforded by encircling the breast and the decreased blurring afforded by the DOI measurement improve performance, the ability to measure DOI is more important than the ability to encircle the breast.

  16. Development of an Optical Fiber-Based MR Compatible Gamma Camera for SPECT/MRI Systems

    NASA Astrophysics Data System (ADS)

    Yamamoto, Seiichi; Watabe, Tadashi; Kanai, Yasukazu; Watabe, Hiroshi; Hatazawa, Jun

    2015-02-01

    Optical fiber is a promising material for integrated positron emission tomography (PET) and magnetic resonance imaging (MRI) PET/MRI systems. Because the material is plastic, it causes no interference with MRI. However, it was unclear whether this material can also be used for a single photon emission computed tomography (SPECT)/MRI system. For this purpose, we developed an optical fiber-based block detector for a SPECT/MRI system and tested its performance. We combined 1.2 × 1.2 × 6 mm Y2SiO5 (YSO) pixels into a 15 × 15 block and coupled it to an optical fiber image guide made of 0.5-mm-diameter, 80-cm-long double-clad fibers. The image guide had a 22 × 22 mm rectangular input and an equally sized output. The input of the optical fiber image guide was bent at 90 degrees, and the output was optically coupled to a 1-in square high quantum efficiency position sensitive photomultiplier tube (HQE-PSPMT). A parallel-hole, 7-mm-thick collimator made of tungsten plastic was mounted on the YSO block. The diameter of the collimator holes was 0.8 mm, positioned in one-to-one coupling with the YSO pixels. We evaluated the intrinsic and system performance. We resolved most of the YSO pixels in a two-dimensional histogram for Co-57 gamma photons (122 keV) with an average peak-to-valley ratio of 1.5. The energy resolution was 38% full-width at half-maximum (FWHM). The system resolution was 1.7-mm FWHM at 1.5 mm from the collimator surface, and the sensitivity was 0.06%. Images of a Co-57 point source were successfully obtained inside a 0.3 T MRI without serious interference. We conclude that the developed optical fiber-based YSO block detector is promising for SPECT/MRI systems.

  17. Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera

    PubMed Central

    Wilkes, Thomas C.; McGonigle, Andrew J. S.; Pering, Tom D.; Taggart, Angus J.; White, Benjamin S.; Bryant, Robert G.; Willmott, Jon R.

    2016-01-01

    Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements. PMID:27782054

  18. Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera.

    PubMed

    Wilkes, Thomas C; McGonigle, Andrew J S; Pering, Tom D; Taggart, Angus J; White, Benjamin S; Bryant, Robert G; Willmott, Jon R

    2016-10-06

    Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements.

  19. Research on simulation and verification system of satellite remote sensing camera video processor based on dual-FPGA

    NASA Astrophysics Data System (ADS)

    Ma, Fei; Liu, Qi; Cui, Xuenan

    2014-09-01

    To satisfy the need to test the video processors of satellite remote sensing cameras, this paper presents a simulation and verification system for satellite remote sensing camera video processors based on dual FPGAs. The correctness of the video processor FPGA logic can be verified even without CCD signals or an analog-to-digital converter. Two Xilinx Virtex FPGAs form the central unit, and the logic for A/D data generation and data processing is developed in VHDL. An RS-232 interface receives commands from the host computer, and different types of data are generated and output depending on the commands. Experimental results show that the simulation and verification system is flexible and works well. The system meets the requirements for testing the video processors of several different types of satellite remote sensing cameras.

  20. New camera-based microswitch technology to monitor small head and mouth responses of children with multiple disabilities.

    PubMed

    Lancioni, Giulio E; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N; O'Reilly, Mark F; Green, Vanessa A; Furniss, Fred

    2014-06-01

    This study assessed a new camera-based microswitch technology that did not require the use of color marks on the participants' faces. Two children with extensive multiple disabilities participated. The responses selected for them consisted of small lateral head movements and mouth closing or opening. The intervention was carried out according to a multiple probe design across responses. The technology involved a computer with a CPU using a 2-GHz clock, a USB video camera with a 16-mm lens, a USB cable connecting the camera and the computer, and a special software program written in ISO C++. The new technology was used satisfactorily with both children. Large increases in their responding were observed during the intervention periods (i.e., when the responses were followed by preferred stimulation). The new technology may be an important resource for persons with multiple disabilities and minimal motor behavior.

  1. Design and implementation of non-contact detection system for catenary based on double linear array cameras

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Xu, Cheng; Li, Zhiqi; Xu, Zisang

    2017-04-01

    The detection of catenary geometric parameters is a very important problem in railway infrastructure measurement. In this paper, a high-resolution linear-array CCD is used as the photosensitive element and an FPGA as the main controller in the data acquisition system of a linear-array camera, and a non-contact detection system for catenary geometric parameters based on double linear-array cameras is implemented. An adaptive exposure-adjustment algorithm driven by the intensity of the linear-array CCD output signal changes the optical integration time in real time. To work around the bandwidth limitation of the data transmission, the pixel-position calibration algorithm is implemented in the FPGA, which improves the response speed of the system. The experimental results show that the scan frequency of the linear-array camera can reach 6 MHz and that the measurement error is kept at the millimeter level, with good accuracy and stability.
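
The adaptive exposure adjustment can be illustrated with a minimal proportional controller. The paper does not give its control law, so the gain, target level, and integration-time limits below are assumptions:

```python
def adjust_integration_time(t_current_us, mean_level, target=128.0,
                            t_min_us=5.0, t_max_us=1000.0, gain=0.5):
    """One step of a proportional exposure controller: scale the CCD
    integration time toward the target mean output level, clamped to
    the sensor's valid range. (Illustrative sketch only; the paper's
    algorithm runs inside the FPGA and its exact control law is not given.)"""
    if mean_level <= 0:
        return t_max_us  # no signal at all: open up fully
    # Error relative to target, applied as a damped multiplicative update.
    ratio = target / mean_level
    t_new = t_current_us * (1.0 + gain * (ratio - 1.0))
    return max(t_min_us, min(t_max_us, t_new))

# Underexposed frame (mean level 64) -> integration time increases.
print(adjust_integration_time(100.0, 64.0))   # 150.0
# Overexposed frame (mean level 255) -> integration time decreases.
print(adjust_integration_time(100.0, 255.0))
```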

  2. A depth camera for natural human-computer interaction based on near-infrared imaging and structured light

    NASA Astrophysics Data System (ADS)

    Liu, Yue; Wang, Liqiang; Yuan, Bo; Liu, Hao

    2015-08-01

    The design of a novel depth camera is presented, targeting close-range (20-60 cm) natural human-computer interaction, especially for mobile terminals. To achieve high precision throughout the working range, a two-step method is employed to map the near-infrared intensity image to absolute depth in real time. First, structured light produced by an 808 nm laser diode and a Dammann grating is used to coarsely quantize the output space of depth values into discrete bins. Then, a learning-based classification forest algorithm predicts the depth distribution over these bins for each pixel in the image. The quantitative experimental results show that this depth camera achieves 1% precision over the 20-60 cm range, indicating that the camera suits resource-limited and low-cost applications.
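
The two-step quantize-then-classify scheme can be sketched as follows. The data are synthetic, and a nearest-centroid soft classifier stands in for the paper's classification forest; the structure (discrete depth bins, a per-pixel distribution over bins, expected-bin-centre decoding) follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-pixel NIR intensity features with ground-truth
# depth over the 20-60 cm working range (illustrative data only).
features = rng.uniform(size=(500, 4))
depth_cm = 20.0 + 40.0 * features[:, 0]

# Step 1: coarsely quantize the output space of depths into bins.
bin_edges = np.linspace(20.0, 60.0, 9)            # eight 5-cm bins
bins = np.clip(np.digitize(depth_cm, bin_edges) - 1, 0, 7)

# Step 2 (stand-in for the paper's classification forest): a
# nearest-centroid classifier gives a soft distribution over bins;
# the expected bin centre yields a continuous depth estimate.
centroids = np.stack([features[bins == b].mean(axis=0) for b in range(8)])
centres = (bin_edges[:-1] + bin_edges[1:]) / 2.0

def predict_depth(x):
    d2 = ((centroids - x) ** 2).sum(axis=1)       # squared distances
    w = np.exp(-d2 / d2.mean())                   # soft weights per bin
    w /= w.sum()
    return float(w @ centres)                     # expected bin centre

est = predict_depth(features[0])
print(20.0 <= est <= 60.0)  # True: estimate stays in the working range
```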

  3. A field-based technique for the longitudinal profiling of ultrarelativistic electron or positron bunches down to lengths of {le}10 microns

    SciTech Connect

    Tatchyn, R.

    1993-05-01

    Present and future generations of particle accelerating and storage machines are expected to develop ever-decreasing electron/positron bunch lengths, down to 100 μm and beyond. In this paper a method for measuring the longitudinal profiles of ultrashort (1000 μm down to ≈10 μm) bunches is outlined, based on: (1) the extreme field compaction attained by ultrarelativistic particles, and (2) the reduction of the group velocity of a visible light pulse in a suitably chosen dielectric medium.

  4. Pedestrian Detection Based on Adaptive Selection of Visible Light or Far-Infrared Light Camera Image by Fuzzy Inference System and Convolutional Neural Network-Based Verification.

    PubMed

    Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung

    2017-07-08

    A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to the varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by inputting both the visible light and the FIR camera images into the CNN as the input. This, however, takes a longer time to process, and makes the system structure more complex as the CNN needs to process both camera images. This research adaptively selects a more appropriate candidate between two pedestrian images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors using visible light and FIR cameras. The results showed that the proposed method performs better than the previously reported methods.
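
A minimal sketch of the fuzzy-inference-based candidate selection, assuming hypothetical triangular membership functions and a toy rule base (the paper's actual FIS inputs and rules are not given, and the CNN verification stage is omitted):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def select_camera(visible_brightness, thermal_contrast):
    """Toy Mamdani-style rule base (hypothetical membership shapes):
    prefer the visible-light candidate when the scene is bright,
    the FIR candidate when thermal contrast is high."""
    bright = tri(visible_brightness, 0.3, 1.0, 1.7)   # "scene is bright"
    dark = tri(visible_brightness, -0.7, 0.0, 0.7)    # "scene is dark"
    hot = tri(thermal_contrast, 0.3, 1.0, 1.7)        # "high thermal contrast"
    flat = tri(thermal_contrast, -0.7, 0.0, 0.7)      # "low thermal contrast"

    vis_score = max(bright, min(bright, flat))        # rules favoring visible
    fir_score = max(hot, min(dark, hot))              # rules favoring FIR
    return "visible" if vis_score >= fir_score else "FIR"

print(select_camera(0.9, 0.2))  # daytime scene -> visible
print(select_camera(0.1, 0.9))  # night scene   -> FIR
```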

  5. Pedestrian Detection Based on Adaptive Selection of Visible Light or Far-Infrared Light Camera Image by Fuzzy Inference System and Convolutional Neural Network-Based Verification

    PubMed Central

    Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung

    2017-01-01

    A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to the varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by inputting both the visible light and the FIR camera images into the CNN as the input. This, however, takes a longer time to process, and makes the system structure more complex as the CNN needs to process both camera images. This research adaptively selects a more appropriate candidate between two pedestrian images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors using visible light and FIR cameras. The results showed that the proposed method performs better than the previously reported methods. PMID:28698475

  6. Fabrication and Characterization of 640x486 GaAs Based Quantum Well Infrared Photodetector (QWIP) Snapshot Camera

    NASA Technical Reports Server (NTRS)

    Gunapala, S. D.; Bandara, S. V.; Liu, J. K.; Hong, W.; Sundaram, M.; Carralejo, R.; Shott, C. A.; Maker, P. D.; Miller, R. E.

    1997-01-01

    In this paper, we discuss the development of a very sensitive long wavelength infrared (LWIR) camera based on a GaAs/AlGaAs QWIP focal plane array (FPA) and its performance in quantum efficiency, NE(delta)T, uniformity, and operability.

  7. Camera-Based Microswitch Technology for Eyelid and Mouth Responses of Persons with Profound Multiple Disabilities: Two Case Studies

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff

    2010-01-01

    These two studies assessed camera-based microswitch technology for eyelid and mouth responses of two persons with profound multiple disabilities and minimal motor behavior. This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on the participants' face but only small color…

  8. Two Persons with Multiple Disabilities Use Camera-Based Microswitch Technology to Control Stimulation with Small Mouth and Eyelid Responses

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Lang, Russell

    2012-01-01

    Background: A camera-based microswitch technology was recently developed to monitor small facial responses of persons with multiple disabilities and allow those responses to control environmental stimulation. This study assessed such a technology with 2 new participants using slight variations of previous responses. Method: The technology involved…

  9. Camera-Based Microswitch Technology to Monitor Mouth, Eyebrow, and Eyelid Responses of Children with Profound Multiple Disabilities

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Lang, Russell; Didden, Robert

    2011-01-01

    A camera-based microswitch technology was recently used to successfully monitor small eyelid and mouth responses of two adults with profound multiple disabilities (Lancioni et al., Res Dev Disab 31:1509-1514, 2010a). This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on…

  14. Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor.

    PubMed

    Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung

    2017-08-30

    Unmanned aerial vehicles (UAVs), commonly known as drones, have proved useful not only on battlefields where manned flight is considered too risky or difficult, but also in everyday life for purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers during the navigation and control loop, which allows for smart GPS features of drone navigation. However, problems arise when drones operate in areas with no GPS signal, so it is important to research the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments.

  15. Product quality-based eco-efficiency applied to digital cameras.

    PubMed

    Park, Pil-Ju; Tahara, Kiyotaka; Inaba, Atsushi

    2007-04-01

    When calculating eco-efficiency, there is considerable confusion and controversy about what the product value is and how it should be quantified. We have proposed a quantification method for eco-efficiency that derives the ratio of the product quality multiplied by the life span of a product to its whole environmental impact based on Life Cycle Assessment (LCA). In this study, product quality was used as the product value and quantified in three steps: (1) normalization based on a value function, (2) determination of the subjective weighting factors of the attributes, and (3) calculation of the product quality of the chosen products. The applicability of the proposed method to an actual product was evaluated using digital cameras. The results show that the eco-efficiency values of products equipped with rechargeable batteries were higher than those of products that use alkaline batteries, because of higher quality values and lower environmental impacts. The sensitivity analysis shows that the proposed method is superior to existing methods, because it enables identification of the quality level of the chosen products by considering all products in the market that have the same functions, and because, when a new product is added, the quality values already calculated with the proposed method do not have to be changed.
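    The ratio described above is straightforward to compute once the three inputs are known. A minimal sketch, assuming hypothetical quality scores and LCA impact values (all numbers below are illustrative, not from the paper):

    ```python
    def eco_efficiency(quality: float, lifespan_years: float, impact: float) -> float:
        """Eco-efficiency as the ratio of (product quality x life span) to the
        whole life-cycle environmental impact, per the abstract's definition.
        `impact` is an LCA-derived score; its units are whatever the LCA
        normalization yields."""
        if impact <= 0:
            raise ValueError("environmental impact must be positive")
        return quality * lifespan_years / impact

    # Hypothetical cameras: one with a rechargeable battery (higher quality,
    # lower impact) and one using alkaline batteries.
    rechargeable = eco_efficiency(quality=0.8, lifespan_years=5, impact=40.0)
    alkaline = eco_efficiency(quality=0.6, lifespan_years=5, impact=55.0)
    assert rechargeable > alkaline  # matches the direction the abstract reports
    ```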

  16. Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor

    PubMed Central

    Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung

    2017-01-01

    Unmanned aerial vehicles (UAVs), commonly known as drones, have proved useful not only on battlefields, where manned flight is considered too risky or difficult, but also in everyday applications such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones use global positioning system (GPS) receivers in the navigation and control loop, which enables smart GPS-based navigation features. However, problems arise when drones operate in heterogeneous areas with no GPS signal, so it is important to research the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments. PMID:28867775

  17. Secondary caries detection with a novel fluorescence-based camera system in vitro

    NASA Astrophysics Data System (ADS)

    Brede, Olivier; Wilde, Claudia; Krause, Felix; Frentzen, Matthias; Braun, Andreas

    2010-02-01

    The aim of the study was to assess the ability of a fluorescence-based optical system to detect secondary caries. The optical detection system (VistaProof) illuminates the tooth surfaces with blue light emitted by high-power GaN LEDs at 405 nm. Employing this almost monochromatic excitation, fluorescence is analyzed using an RGB camera chip and encoded in color graduations (blue - red - orange - yellow) by software (DBSWIN), indicating the degree of carious destruction. Thirty-one freshly extracted teeth with existing fillings and secondary caries were cleaned, excavated, and refilled with the same kind of restorative material: 19 were refilled with amalgam and 12 with a composite resin. Each step was analyzed with the respective software and evaluated statistically; differences were considered statistically significant at p<0.05. There was no difference between measurements at baseline and after cleaning (Mann-Whitney, p>0.05). There was a significant difference between baseline measurements of the teeth primarily filled with composite resins and the refilled situation (p=0.014), and also between the non-excavated and the excavated groups (composite p=0.006, amalgam p=0.018). The in vitro study showed that the fluorescence-based system can detect secondary caries next to composite resin fillings but not next to amalgam restorations. Cleaning of the teeth is not necessary if there is no visible plaque. Further studies have to show whether the system yields the same promising results in vivo.

  18. A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera

    PubMed Central

    Takizawa, Hotaka; Orita, Kazunori; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji

    2017-01-01

    The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling their memories related to important locations, called spots, that they visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale invariant feature transform. The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems except smartphones and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the global positioning system. The proposed system has been evaluated by two experiments: image matching tests and a user study. The experimental results suggested the effectiveness of the system to help visually impaired individuals, including blind individuals, recall information about regularly-visited spots. PMID:28165403

  19. A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera.

    PubMed

    Takizawa, Hotaka; Orita, Kazunori; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji

    2017-02-04

    The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling their memories related to important locations, called spots, that they visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale invariant feature transform. The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems except smartphones and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the global positioning system. The proposed system has been evaluated by two experiments: image matching tests and a user study. The experimental results suggested the effectiveness of the system to help visually impaired individuals, including blind individuals, recall information about regularly-visited spots.
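    The spot-to-spot correspondence described above reduces to matching feature descriptors between the stored image and the live one. A minimal sketch of the standard nearest-neighbour ratio test (Lowe's criterion) commonly used with SIFT descriptors; descriptor extraction itself is assumed to have been done elsewhere, and the toy 2-D vectors below are illustrative:

    ```python
    import math

    def ratio_test_matches(desc_a, desc_b, ratio=0.75):
        """Match two sets of feature descriptors with the nearest-neighbour
        ratio test: accept a match only if the closest descriptor in desc_b
        is clearly closer than the second-closest. Descriptors are plain
        lists of floats here for simplicity."""
        def dist(u, v):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))
        matches = []
        for i, d in enumerate(desc_a):
            ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
            best, second = ranked[0], ranked[1]
            if dist(d, desc_b[best]) < ratio * dist(d, desc_b[second]):
                matches.append((i, best))
        return matches

    # A near-identical descriptor matches; an ambiguous one is rejected.
    a = [[0.0, 1.0], [5.0, 5.0]]
    b = [[0.1, 1.0], [4.0, 4.0], [4.1, 4.1]]
    print(ratio_test_matches(a, b))  # -> [(0, 0)]
    ```

    A high count of surviving matches between the stored spot image and the current camera frame would then indicate that the user has returned to a recorded spot.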

  20. A wearable smartphone-enabled camera-based system for gait assessment.

    PubMed

    Kim, Albert; Kim, Junyoung; Rietdyk, Shirley; Ziaie, Babak

    2015-07-01

    Quantitative assessment of gait parameters provides valuable diagnostic and prognostic information. However, most gait analysis systems are bulky, expensive, and designed to be used indoors or in laboratory settings. Recently, wearable systems have attracted considerable attention due to their lower cost and portability. In this paper, we present a simple wearable smartphone-enabled camera-based system (SmartGait) for measurement of spatiotemporal gait parameters. We assess the concurrent validity of SmartGait against a commercially available pressure-sensing walkway (GaitRite). Fifteen healthy young adults (25.8 ± 2.6 years) were instructed to walk at slow, preferred, and fast speeds. Step length (SL), step width (SW), step time (ST), gait speed, double support time (DS), and their variability were assessed for agreement between the two systems; absolute error and intra-class correlation coefficients (ICCs) were determined. Measured gait parameters had modest to excellent agreement (ICCs between 0.731 and 0.982). Overall, SmartGait offers many advantages and is a strong alternative wearable system for laboratory and community-based gait assessment. Copyright © 2015 Elsevier B.V. All rights reserved.
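    The agreement statistic reported above, ICC(2,1), can be computed directly from paired readings. A minimal sketch of the two-way random-effects, absolute-agreement form; the step-length readings below are invented for illustration, not data from the study:

    ```python
    def icc_2_1(data):
        """ICC(2,1): two-way random-effects, absolute-agreement intra-class
        correlation between k measurement systems over n subjects.
        `data` is a list of n rows, each with k readings."""
        n, k = len(data), len(data[0])
        grand = sum(sum(row) for row in data) / (n * k)
        row_m = [sum(row) / k for row in data]
        col_m = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
        msr = k * sum((r - grand) ** 2 for r in row_m) / (n - 1)       # subjects
        msc = n * sum((c - grand) ** 2 for c in col_m) / (k - 1)       # systems
        sse = sum((data[i][j] - row_m[i] - col_m[j] + grand) ** 2
                  for i in range(n) for j in range(k))
        mse = sse / ((n - 1) * (k - 1))                                # residual
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Hypothetical step lengths (cm) from two systems agreeing up to a tiny offset:
    readings = [[50.0, 50.1], [62.0, 62.1], [48.0, 48.2], [55.0, 55.1]]
    print(round(icc_2_1(readings), 3))  # close to 1 => excellent agreement
    ```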

  1. A high-sensitivity 2x2 multi-aperture color camera based on selective averaging

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Kagawa, Keiichiro; Takasawa, Taishi; Seo, Min-Woong; Yasutomi, Keita; Kawahito, Shoji

    2015-03-01

    To demonstrate the low-noise performance of a multi-aperture imaging system using a selective averaging method, an ultra-high-sensitivity multi-aperture color camera with 2×2 apertures is being developed. In low-light conditions, random telegraph signal (RTS) noise and dark-current white defects become visible, greatly degrading image quality. To reduce these kinds of noise as well as to increase the number of incident photons, a multi-aperture imaging system composed of an array of lenses and CMOS image sensors (CIS) is used, together with selective averaging, which minimizes the synthetic sensor noise at every pixel. Simulation verifies that the effective noise at the peak of the noise histogram is reduced from 1.44 e- to 0.73 e- in a 2×2-aperture system, where RTS noise and dark-current white defects are successfully removed. In this work, a prototype based on low-noise color sensors with 1280×1024 pixels fabricated in 0.18 µm CIS technology is considered. The pixel pitch is 7.1 µm × 7.1 µm. The sensor noise is around 1 e- thanks to folding-integration and cyclic column ADCs, and low-voltage differential signaling (LVDS) is used to improve noise immunity. The synthetic F-number of the prototype is 0.6.
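    The selective-averaging idea can be sketched per pixel position: among the co-located pixels of the N apertures, average only the subset whose combined variance is smallest, so that an RTS-affected pixel is excluded. A simplified sketch under that assumption (the variance figures are illustrative, not from the paper):

    ```python
    def selective_average_noise(variances):
        """For one pixel position, pick the subset of aperture pixels whose
        average minimizes the synthesized noise variance. Averaging the k
        lowest-variance pixels gives variance sum(v[:k]) / k**2; scan k and
        keep the best. RTS-affected pixels (large variance) drop out."""
        v = sorted(variances)
        best_k = min(range(1, len(v) + 1), key=lambda k: sum(v[:k]) / k ** 2)
        return v[:best_k], sum(v[:best_k]) / best_k ** 2

    # Three quiet pixels (~1 e-^2) and one RTS-affected pixel (100 e-^2):
    kept, var = selective_average_noise([1.0, 1.2, 100.0, 0.9])
    print(kept)  # the RTS pixel is excluded from the average
    ```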

  2. Automated Ground-based Time-lapse Camera Monitoring of West Greenland ice sheet outlet Glaciers: Challenges and Solutions

    NASA Astrophysics Data System (ADS)

    Ahn, Y.; Box, J. E.; Balog, J.; Lewinter, A.

    2008-12-01

    Monitoring Greenland outlet glaciers using remotely sensed data has drawn great attention in the earth science communities for decades, and time-series analysis of sensor data has provided important information on the variability of glacier flow by detecting speed and thickness changes, tracking features, and acquiring model input. Thanks to advances in commercial digital camera technology and increased solid-state storage, we activated automatic ground-based time-lapse camera stations with high spatial/temporal resolution at west Greenland outlet glaciers and collected one-hour-interval data continuously for more than one year at some, but not all, sites. We believe that important information on ice dynamics is contained in these data and that terrestrial mono-/stereo-photogrammetry can provide the theoretical and practical fundamentals for data processing along with digital image processing techniques. Time-lapse images over periods in west Greenland reveal various phenomena. Problems include rain, snow, fog, shadows, freezing of water on the camera enclosure window, image over-exposure, camera motion, sensor platform drift, fox chewing of instrument cables, and the pecking of the plastic window by ravens. Other challenges include feature identification, camera orientation, image registration, feature matching in image pairs, and feature tracking. A further obstacle is that non-metric digital cameras exhibit large distortions that must be compensated for precise photogrammetric use. Moreover, a massive number of images needs to be processed in a way that is sufficiently computationally efficient. We meet these challenges by 1) identifying problems in possible photogrammetric processes, 2) categorizing them based on feasibility, and 3) clarifying limitations and alternatives, while emphasizing displacement computation and analyzing regional/temporal variability.
    We experiment with mono- and stereo-photogrammetric techniques with the aid of automatic correlation matching for efficiently handling the enormous

  3. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contours. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), from the tumour volume, the tumour peak-to-background SUV ratio, and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contours to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications in radiation oncology.
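    The accuracy metric the decision trees are trained to predict, the Dice similarity coefficient, is simple to compute from two binary masks. A minimal sketch, representing masks as sets of voxel coordinates (the toy masks are illustrative):

    ```python
    def dice(mask_a, mask_b):
        """Dice similarity coefficient between two binary segmentation masks:
        DSC = 2|A n B| / (|A| + |B|). Masks are sets of voxel coordinates."""
        a, b = set(mask_a), set(mask_b)
        if not a and not b:
            return 1.0  # both empty: perfect (vacuous) agreement
        return 2 * len(a & b) / (len(a) + len(b))

    seg = {(0, 0), (0, 1), (1, 0)}      # a candidate PET-AS contour
    truth = {(0, 0), (0, 1), (1, 1)}    # the known true contour
    print(dice(seg, truth))  # 2*2/(3+3) ~ 0.667
    ```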

  4. Highly multiplexed signal readout for a time-of-flight positron emission tomography detector based on silicon photomultipliers.

    PubMed

    Cates, Joshua W; Bieniosek, Matthew F; Levin, Craig S

    2017-01-01

    Maintaining excellent timing resolution in the next generation of silicon photomultiplier (SiPM)-based time-of-flight positron emission tomography (TOF-PET) systems requires a large number of high-speed, high-bandwidth electronic channels and components. To minimize the cost and complexity of a system's back-end architecture and data acquisition, many analog signals are often multiplexed to fewer channels using techniques that encode timing, energy, and position information. With progress in the development of SiPMs having lower dark noise, afterpulsing, and cross talk, along with higher photodetection efficiency, a coincidence timing resolution (CTR) well below 200 ps FWHM is now easily achievable in single-pixel, bench-top setups using 20-mm-length, lutetium-based inorganic scintillators. However, multiplexing the output of many SiPMs to a single channel will significantly degrade CTR without appropriate signal processing. We test the performance of a PET detector readout concept that multiplexes 16 SiPMs to two channels. One channel provides timing information with fast comparators, and the second channel encodes both position and energy information in a time-over-threshold-based pulse sequence. This multiplexing readout concept was constructed with discrete components to process signals from a [Formula: see text] array of SensL MicroFC-30035 SiPMs coupled to [Formula: see text] Lu1.8Gd0.2SiO5 (LGSO):Ce (0.025 mol. %) scintillators. This readout method yielded a calibrated, global energy resolution of 15.3% FWHM at 511 keV with a CTR of [Formula: see text] FWHM between the 16-pixel multiplexed detector array and a [Formula: see text] LGSO-SiPM reference detector. In summary, the results indicate that this multiplexing scheme is a scalable readout technique that provides excellent coincidence timing performance.

  5. Fast and compact internal scanning CMOS-based hyperspectral camera: the Snapscan

    NASA Astrophysics Data System (ADS)

    Pichette, Julien; Charle, Wouter; Lambrechts, Andy

    2017-02-01

    Imec has developed a process for the monolithic integration of optical filters on top of CMOS image sensors, leading to compact, cost-efficient, and faster hyperspectral cameras. Linescan cameras are typically used in remote sensing or for conveyor-belt applications, but translation of the target is not always possible for large objects or in many medical applications. Therefore, we introduce a novel camera, the Snapscan (patent pending), exploiting internal movement of a linescan sensor to enable fast and convenient acquisition of high-resolution hyperspectral cubes (up to 2048×3652×150 in the spectral range 475-925 nm). The Snapscan combines the spectral and spatial resolutions of a linescan system with the convenience of a snapshot camera.

  6. Undulator-Based Production of Polarized Positrons, A Proposal for the 50-GeV Beam in the FFTB

    SciTech Connect

    G. Alexander; P. Anthony; V. Bharadwaj; Yu.K. Batygin; T. Behnke; S. Berridge; G.R. Bower; W. Bugg; R. Carr; E. Chudakov; J.E. Clendenin; F.J. Decker; Yu. Efremenko; T. Fieguth; K. Flottmann; M. Fukuda; V. Gharibyan; T. Handler; T. Hirose; R.H. Iverson; Yu. Kamyshkov; H. Kolanoski; T. Lohse; Chang-guo Lu; K.T. McDonald; N. Meyners; R. Michaels; A.A. Mikhailichenko; K. Monig; G. Moortgat-Pick; M. Olson; T. Omori; D. Onoprienko; N. Pavel; R. Pitthan; M. Purohit; L. Rinolfi; K.P. Schuler; J.C. Sheppard; S. Spanier; A. Stahl; Z.M. Szalata; J. Turner; D. Walz; A. Weidemann; J. Weisend

    2003-06-01

    The full exploitation of the physics potential of future linear colliders such as the JLC, NLC, and TESLA will require the development of polarized positron beams. In the proposed scheme of Balakin and Mikhailichenko [1] a helical undulator is employed to generate photons of several MeV with circular polarization which are then converted in a relatively thin target to generate longitudinally polarized positrons. This experiment, E-166, proposes to test this scheme to determine whether such a technique can produce polarized positron beams of sufficient quality for use in future linear colliders. The experiment will install a meter-long, short-period, pulsed helical undulator in the Final Focus Test Beam (FFTB) at SLAC. A low-emittance 50-GeV electron beam passing through this undulator will generate circularly polarized photons with energies up to 10 MeV. These polarized photons are then converted to polarized positrons via pair production in thin targets. Titanium and tungsten targets, which are both candidates for use in linear colliders, will be tested. The experiment will measure the flux and polarization of the undulator photons, and the spectrum and polarization of the positrons produced in the conversion target, and compare the measurement results to simulations. Thus the proposed experiment directly tests for the first time the validity of the simulation programs used for the physics of polarized pair production in finite matter, in particular the effects of multiple scattering on polarization. Successful comparison of the experimental results to the simulations will lead to greater confidence in the proposed designs of polarized positron sources for the next generation of linear colliders. This experiment requests six weeks of time in the FFTB beam line: three weeks for installation and setup and three weeks of beam for data taking. A 50-GeV beam with about twice the SLC emittance at a repetition rate of 30 Hz is required.

  7. Undulator-Based Production of Polarized Positrons, A Proposal for the 50-GeV Beam in the FFTB

    SciTech Connect

    Alexander, G

    2004-03-25

    The full exploitation of the physics potential of future linear colliders such as the JLC, NLC, and TESLA will require the development of polarized positron beams. In the proposed scheme of Balakin and Mikhailichenko [1] a helical undulator is employed to generate photons of several MeV with circular polarization which are then converted in a relatively thin target to generate longitudinally polarized positrons. This experiment, E-166, proposes to test this scheme to determine whether such a technique can produce polarized positron beams of sufficient quality for use in future linear colliders. The experiment will install a meter-long, short-period, pulsed helical undulator in the Final Focus Test Beam (FFTB) at SLAC. A low-emittance 50-GeV electron beam passing through this undulator will generate circularly polarized photons with energies up to 10 MeV. These polarized photons are then converted to polarized positrons via pair production in thin targets. Titanium and tungsten targets, which are both candidates for use in linear colliders, will be tested. The experiment will measure the flux and polarization of the undulator photons, and the spectrum and polarization of the positrons produced in the conversion target, and compare the measurement results to simulations. Thus the proposed experiment directly tests for the first time the validity of the simulation programs used for the physics of polarized pair production in finite matter, in particular the effects of multiple scattering on polarization. Successful comparison of the experimental results to the simulations will lead to greater confidence in the proposed designs of polarized positron sources for the next generation of linear colliders. This experiment requests six weeks of time in the FFTB beam line: three weeks for installation and setup and three weeks of beam for data taking. A 50-GeV beam with about twice the SLC emittance at a repetition rate of 30 Hz is required.

  8. Using commercial photo camera's RAW-based images in optical-digital correlator for pattern recognition

    NASA Astrophysics Data System (ADS)

    Starikov, Sergey N.; Konnik, Mikhail V.

    2008-03-01

    In optical-digital correlators for pattern recognition, linear registration of correlation signals is important both for recognition reliability and for possible input image restoration. This is usually achieved with calibrated scientific cameras, but most commercial digital cameras now offer RAW data output. With appropriate software and processing parameters, it is possible to obtain linearized image data from a photo camera's RAW file. Using such cameras makes optical-digital systems cheaper and more flexible and encourages their wider adoption. For linear registration of correlation signals, Dave Coffin's open-source RAW converter DCRAW was used in this work. Data from the photo camera were linearized by the DCRAW converter in "totally RAW" document mode with 16-bit output. Experimental results comparing linearized and non-linearized correlation signals and digitally restored input scene images are presented. It is shown that the applied linearization increases the linear dynamic range of the Canon EOS 400D camera used by more than a factor of 3.
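    One way to verify the radiometric linearity the correlator needs is to fit sensor response against exposure and check the coefficient of determination: linear RAW-derived data should fit a line almost perfectly, while gamma-encoded (e.g. JPEG-like) data should not. An illustrative sketch, not part of the DCRAW toolchain, with invented response values:

    ```python
    def linearity_r2(exposures, responses):
        """R^2 of a least-squares line fit of sensor response vs exposure.
        Values near 1.0 over a wide exposure span indicate a linear
        radiometric response; a gamma-encoded response scores lower."""
        n = len(exposures)
        mx = sum(exposures) / n
        my = sum(responses) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(exposures, responses))
        sxx = sum((x - mx) ** 2 for x in exposures)
        slope = sxy / sxx
        intercept = my - slope * mx
        ss_res = sum((y - (slope * x + intercept)) ** 2
                     for x, y in zip(exposures, responses))
        ss_tot = sum((y - my) ** 2 for y in responses)
        return 1 - ss_res / ss_tot

    exp = [1, 2, 4, 8, 16]
    raw = [10, 20, 40, 80, 160]                # linear RAW-derived response
    jpeg = [x ** (1 / 2.2) * 60 for x in exp]  # gamma-encoded response
    print(linearity_r2(exp, raw) > linearity_r2(exp, jpeg))  # True
    ```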

  9. piscope - A Python based software package for the analysis of volcanic SO2 emissions using UV SO2 cameras

    NASA Astrophysics Data System (ADS)

    Gliss, Jonas; Stebel, Kerstin; Kylling, Arve; Solvejg Dinger, Anna; Sihler, Holger; Sudbø, Aasmund

    2017-04-01

    UV SO2 cameras have become a common method for monitoring SO2 emission rates from volcanoes. Scattered solar UV radiation is measured in two wavelength windows, typically around 310 nm and 330 nm (distinct/weak SO2 absorption), using interference filters. The data analysis comprises the retrieval of plume background intensities (to calculate plume optical densities), camera calibration (to convert optical densities into SO2 column densities), and the retrieval of gas velocities within the plume as well as plume distances. SO2 emission rates are then typically retrieved along a projected plume cross section, for instance a straight line perpendicular to the plume propagation direction. Today, several alternatives exist for most of the required analysis steps, due to ongoing developments and improvements of the measurement technique. We present piscope, a cross-platform, open-source software toolbox for the analysis of UV SO2 camera data. The code is written in the Python programming language and emerged from the idea of a common analysis platform incorporating a selection of the most prevalent methods found in the literature. piscope includes several routines for plume background retrieval and routines for cell- and DOAS-based camera calibration, including two individual methods to identify the DOAS field of view (shape and position) within the camera images. Gas velocities can be retrieved either from an optical flow analysis or using signal cross-correlation. A correction for signal dilution (due to atmospheric scattering) can be performed based on topographic features in the images; this requires retrieving the distances to the topographic features used for the correction. These distances can be retrieved automatically on a per-pixel basis using intersections of individual pixel viewing directions with the local topography. The main features of piscope are presented based on a dataset recorded at Mt. Etna, Italy, in September 2015.
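    The cross-correlation velocity retrieval mentioned above amounts to finding the time lag that maximizes the correlation between column-density time series taken at two plume cross sections; the velocity is then the cross-section separation divided by (lag × frame interval). A minimal pure-Python sketch of the lag search, independent of the piscope API, with invented signals:

    ```python
    def best_lag(sig_a, sig_b, max_lag):
        """Return the lag (in frames) maximizing the Pearson correlation of
        sig_a against sig_b shifted by that lag -- a simple form of the
        signal cross-correlation used for plume-velocity retrieval."""
        def corr(lag):
            pairs = [(sig_a[i], sig_b[i + lag]) for i in range(len(sig_a) - lag)]
            n = len(pairs)
            ma = sum(p[0] for p in pairs) / n
            mb = sum(p[1] for p in pairs) / n
            num = sum((x - ma) * (y - mb) for x, y in pairs)
            da = sum((x - ma) ** 2 for x, _ in pairs) ** 0.5
            db = sum((y - mb) ** 2 for _, y in pairs) ** 0.5
            return num / (da * db) if da and db else 0.0
        return max(range(max_lag + 1), key=corr)

    # Downstream cross-section sees the upstream signal delayed by 3 frames:
    up = [0, 1, 4, 9, 4, 1, 0, 0, 0, 0, 0, 0]
    down = [0, 0, 0, 0, 1, 4, 9, 4, 1, 0, 0, 0]
    print(best_lag(up, down, 5))  # -> 3
    ```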

  10. Smartphone-based fundus camera device (MII Ret Cam) and technique with ability to image peripheral retina.

    PubMed

    Sharma, Ashish; Subramaniam, Saranya Devi; Ramachandran, K I; Lakshmikanthan, Chinnasamy; Krishna, Soujanya; Sundaramoorthy, Selva K

    2016-01-01

    To demonstrate an inexpensive smartphone-based fundus camera device (MII Ret Cam) and a technique with the ability to capture peripheral retinal pictures. A fundus camera was designed in the form of a device with slots to fit a smartphone (built-in camera and flash) and a 20-D lens. With the help of the device and an innovative imaging technique, high-quality fundus videos were taken, with easy extraction of images. The MII Ret Cam and imaging technique were able to capture high-quality images of the peripheral retina, such as the ora serrata and pars plana, in addition to central fundus pictures. Our smartphone-based fundus camera can help clinicians monitor diseases affecting both the central and peripheral retina. It can help patients understand their disease and help clinicians convince their patients of the need for treatment, especially in cases of peripheral lesions. Imaging the peripheral retina has not been demonstrated with existing smartphone-based fundus imaging techniques. The device can also be an inexpensive tool for mass screening.

  11. Positron beam studies of transients in semiconductors

    NASA Astrophysics Data System (ADS)

    Beling, C. D.; Ling, C. C.; Cheung, C. K.; Naik, P. S.; Zhang, J. D.; Fung, S.

    2006-02-01

    Vacancy-sensing positron deep level transient spectroscopy (PDLTS) is a positron beam-based technique that seeks to provide information on the electronic ionization levels of vacancy defects probed by the positron through the monitoring of thermal transients. The experimental discoveries leading to the concept of vacancy-sensing PDLTS are first reviewed. The major problem associated with this technique is then discussed, namely the strong electric fields established in the near-surface region of the sample during the thermal transient, which tend to sweep positrons into the contact with negligible defect trapping. New simulations are presented which suggest that under certain conditions a sufficient fraction of positrons may be trapped at ionizing defects, rendering the PDLTS technique workable. Some suggestions are made for techniques that might circumvent the problematic electric fields, such as optical PDLTS, where deep levels are populated using light, and the use of high forward-bias currents for trap filling.

  12. Evaluation of a CdTe semiconductor based compact gamma camera for sentinel lymph node imaging

    SciTech Connect

    Russo, Paolo; Curion, Assunta S.; Mettivier, Giovanni; Esposito, Michela; Aurilio, Michela; Caraco, Corradina; Aloj, Luigi; Lastoria, Secondo

    2011-03-15

    Purpose: The authors assembled a prototype compact gamma-ray imaging probe (MediPROBE) for sentinel lymph node (SLN) localization. This probe is based on a semiconductor pixel detector. Its basic performance was assessed in the laboratory and clinically in comparison with a conventional gamma camera. Methods: The room-temperature CdTe pixel detector (1 mm thick) has 256×256 square pixels arranged with a 55 µm pitch (sensitive area 14.08×14.08 mm²), coupled pixel-by-pixel via bump-bonding to the Medipix2 photon-counting readout CMOS integrated circuit. The imaging probe is equipped with a set of three interchangeable knife-edge pinhole collimators (0.94, 1.2, or 2.1 mm effective diameter at 140 keV) and its focal distance can be regulated in order to set a given field of view (FOV). A typical FOV of 70 mm at 50 mm skin-to-collimator distance corresponds to a minification factor of 1:5. The detector is operated at a single low-energy threshold of about 20 keV. Results: For 99mTc, at 50 mm distance, a background-subtracted sensitivity of 6.5×10⁻³ cps/kBq and a system spatial resolution of 5.5 mm FWHM were obtained for the 0.94 mm pinhole; corresponding values for the 2.1 mm pinhole were 3.3×10⁻² cps/kBq and 12.6 mm. The dark count rate was 0.71 cps. Clinical images in three patients with melanoma indicate detection of the SLNs with acquisition times between 60 and 410 s with an injected activity of 26 MBq 99mTc and prior localization with standard gamma camera lymphoscintigraphy. Conclusions: The laboratory performance of this imaging probe is limited by the pinhole collimator performance and the necessity of working in minification due to the limited detector size. However, in clinical operative conditions, the CdTe imaging probe was effective in detecting SLNs with adequate resolution and an acceptable sensitivity.
    Sensitivity is expected to improve with the future availability of a larger CdTe detector permitting operation at shorter

  13. A robust method, based on a novel source, for performance and diagnostic capabilities assessment of the positron emission tomography system.

    PubMed

    Samartzis, Alexandros P; Fountos, George P; Kandarakis, Ioannis S; Kounadi, Evangelia P; Zoros, Emmanuel N; Skoura, Evangelia; Datseris, Ioannis E; Nikiforides, George H

    2014-01-01

    The aim of our work was to provide a robust method for evaluating the imaging performance of positron emission tomography (PET) systems, and particularly to estimate the modulation transfer function (MTF) using the line spread function (LSF) method. A novel plane source was prepared using thin layer chromatography (TLC) of a fluorine-18-fluorodeoxyglucose ((18)F-FDG) solution. The source was placed within a phantom and imaged using the whole-body (WB) two-dimensional (2D) and three-dimensional (3D) standard imaging protocols on a GE Discovery ST hybrid PET/CT scanner. The MTF was evaluated by determining the LSF for various reconstruction methods and filters. The proposed MTF measurement method was validated against the conventional method based on the point spread function (PSF). Higher MTF values were obtained with the 3D scanning protocol and the 3D iterative reconstruction algorithm. All MTFs obtained using 3D reconstruction algorithms showed better preservation of higher frequencies than the 2D algorithms, and also exhibited better contrast and resolution. MTFs derived from the LSF were more precise than those obtained from the PSF, since their reproducibility was better in all cases, with a mean standard deviation of 0.0043, in contrast to 0.0405 for the PSF method. In conclusion, the proposed method is novel and easy to implement for characterizing the signal transfer properties and image quality of PET/computed tomography (CT) systems. It provides an easy way to evaluate the frequency response of each available reconstruction kernel. The proposed method requires cheap and easily accessible materials, available to the medical physicist in the nuclear medicine department. Furthermore, it is robust to aliasing and, being based on the LSF, more resilient to noise than conventional PSF-integration techniques due to greater data averaging.
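    The LSF-to-MTF step relies on a standard relation: the MTF is the magnitude of the Fourier transform of the LSF, normalized to unity at zero frequency. A minimal sketch using a direct DFT (the sample profiles are illustrative, not measured data):

    ```python
    import cmath

    def mtf_from_lsf(lsf):
        """MTF as the normalized magnitude of the discrete Fourier transform
        of a sampled line spread function. Returns values for frequency
        bins 0..n/2, with MTF(0) = 1."""
        n = len(lsf)
        spectrum = []
        for k in range(n // 2 + 1):
            s = sum(lsf[x] * cmath.exp(-2j * cmath.pi * k * x / n)
                    for x in range(n))
            spectrum.append(abs(s))
        return [m / spectrum[0] for m in spectrum]

    # A broader LSF (worse spatial resolution) rolls off faster in frequency:
    narrow = [0, 0, 1, 4, 1, 0, 0, 0]
    broad = [0, 1, 2, 3, 2, 1, 0, 0]
    print(mtf_from_lsf(narrow)[1] > mtf_from_lsf(broad)[1])  # True
    ```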

  14. Photon-counting versus an integrating CCD-based gamma camera: important consequences for spatial resolution.

    PubMed

    Beekman, Freek J; de Vree, Gerralt A

    2005-06-21

    Charge-coupled devices (CCDs) coupled to scintillation crystals can be used for high-resolution imaging with x-rays and gamma-rays. When the CCD images can be read out fast enough, the energy and interaction position of individual gamma quanta can be estimated by real-time image analysis of the scintillation light flashes ('photon counting mode'). We tested a set-up in which an electron-multiplying CCD was coupled to a 1 mm thick columnar CsI crystal by means of a fibre-optic taper. We found that, compared to light integration, photon counting improves the intrinsic spatial resolution by a factor of about 3 to 6. Applying our set-up to Tc-99m and I-125 imaging, we were able to obtain intrinsic resolutions below 60 μm (full width at half maximum). Counting losses due to overlapping light flashes are negligible at event rates typical for biomedical radionuclide imaging, but depend strongly on energy window settings. Energy resolution was estimated to be approximately 35 keV FWHM for a 1:1 taper. We conclude that CCD-based gamma cameras have great potential for applications such as in vivo imaging of gamma emitters.

  15. Design Considerations for the Next-Generation MAPMT-Based Monolithic Scintillation Camera

    PubMed Central

    Salçın, Esen; Barber, H. Bradford; Furenlid, Lars R.

    2015-01-01

    Multi-anode photomultiplier tubes (MAPMTs) offer high spatial resolution with their small anodes, which may number from 64 to 1024 per tube. To increase detector size, MAPMT modules can be arranged in arrays and combined into a single modular scintillation camera. However, the large number of channels requiring amplification and digitization then becomes impractical unless the signals are combined or reduced in some manner. Conventional approaches use resistive charge-division readouts with a centroid algorithm (or a variant of it) for simplicity of the electronic circuitry and fast execution. However, coupling signals from many anodes may cause significant information loss and limit achievable resolution. In this study, a new approach to optimizing readout-electronics design for MAPMTs, based on an analysis of the information content in the signals, is presented. An adaptive readout scheme to be used with maximum-likelihood estimation methods is proposed. This scheme achieves precision in estimating event parameters that is close to what is achieved by retaining all signals. PMID:26347497
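For reference, the conventional centroid (Anger) estimate that the paper contrasts with maximum-likelihood readout reduces the full anode-signal vector to two weighted means; a sketch might look like this (names illustrative):

```python
import numpy as np

def centroid_estimate(signals, anode_x, anode_y):
    """Classic centroid (Anger) position estimate: signal-weighted
    mean of the anode coordinates. It keeps only two moments of the
    full signal vector, which is the information loss the adaptive
    ML-based readout described above is designed to avoid."""
    s = np.asarray(signals, dtype=float)
    total = s.sum()
    return (s * np.asarray(anode_x)).sum() / total, \
           (s * np.asarray(anode_y)).sum() / total
```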

  16. A regional density distribution based wide dynamic range algorithm for infrared camera systems

    NASA Astrophysics Data System (ADS)

    Park, Gyuhee; Kim, Yongsung; Joung, Shichang; Shin, Sanghoon

    2014-10-01

    Forward-Looking InfraRed (FLIR) imaging systems are widely used for both military and civilian purposes. Military applications include target acquisition and tracking and night vision; civilian applications include thermal efficiency analysis, short-range wireless communication, weather forecasting and various other uses. The dynamic range of a FLIR imaging system is larger than that of a commercial display, so automatic gain control and contrast enhancement algorithms are generally applied. In IR imaging systems, histogram equalization and plateau equalization are the usual contrast enhancement methods; however, they offer no remedy for excessive enhancement when the luminance histogram is concentrated in a narrow region. In this paper, we propose a regional density distribution based wide dynamic range (WDR) algorithm for infrared camera systems. The result of WDR processing differs considerably depending on the implementation. Our approach is a single-frame WDR algorithm that enhances the contrast of both dark and bright detail in real time without losing histogram bins. The large luminance changes introduced by conventional contrast enhancement methods may cause luminance saturation and failure in object tracking; the proposed method guarantees both effective contrast enhancement and successful object tracking. Moreover, since the proposed method does not use multiple images for WDR, computational complexity is significantly reduced in software/hardware implementations. Experimental results show that the proposed method outperforms conventional contrast enhancement methods.
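Plateau equalization, the baseline the paper improves on, can be sketched as follows (a simplified illustration for an 8-bit image; this is the conventional method, not the paper's proposed algorithm):

```python
import numpy as np

def plateau_equalization(img, plateau, levels=256):
    """Plateau histogram equalization: clip each histogram bin at a
    plateau value before building the mapping CDF, which tempers the
    over-enhancement plain histogram equalization produces when the
    histogram is concentrated in a narrow band."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    clipped = np.minimum(hist, plateau)    # cap dominant bins
    cdf = np.cumsum(clipped).astype(float)
    cdf /= cdf[-1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]                        # apply the lookup table
```

Setting the plateau to infinity recovers ordinary histogram equalization; small plateau values flatten the mapping and suppress over-enhancement.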

  17. Realization of the FPGA based TDI algorithm in digital domain for CMOS cameras

    NASA Astrophysics Data System (ADS)

    Tao, Shuping; Jin, Guang; Zhang, Xuyan; Qu, Hongsong

    2012-10-01

    In order to make CMOS image sensors suitable for high-resolution space imaging applications, a new method realizing TDI (time delay integration) in the digital domain on an FPGA is proposed in this paper, which improves the imaging mode for area-array CMOS sensors. The TDI algorithm accumulates the corresponding pixels of adjoining frames in the digital domain, so the gray values increase by a factor of M, where M is the integration number, and the image's signal-to-noise ratio is improved. In addition, optimizations of the TDI algorithm are discussed. First, signal storage is optimized with two slices of external RAM, using memory-depth expansion and a ping-pong operation mechanism. Second, a FIFO operation mechanism reduces the number of memory read and write operations by M×(M-1), saving signal transfer time proportional to the square of the integration number (M²), so that the frame frequency can be increased greatly. Finally, a CMOS camera based on digital-domain TDI was developed, and the algorithm was validated by experiments on it.
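The digital-domain TDI accumulation described above can be modeled in a few lines (a simplified sketch assuming the scene shifts by exactly one row per frame; names are ours, and the FPGA implementation of course works on streaming data rather than stored frames):

```python
import numpy as np

def digital_tdi(frames, m):
    """Digital-domain TDI: accumulate M successive frames, shifting
    each by one row so pixels that imaged the same ground strip add
    up. Signal grows by M while uncorrelated noise grows by sqrt(M),
    improving SNR by roughly sqrt(M)."""
    h, w = frames[0].shape
    acc = np.zeros((h - m + 1, w))
    for k in range(m):
        acc += frames[k][k:k + h - m + 1, :]   # row-shifted add
    return acc
```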

  18. Efficient Smart CMOS Camera Based on FPGAs Oriented to Embedded Image Processing

    PubMed Central

    Bravo, Ignacio; Baliñas, Javier; Gardel, Alfredo; Lázaro, José L.; Espinosa, Felipe; García, Jorge

    2011-01-01

    This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high speed 1.2 megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families makes it possible to incorporate microprocessors into these reconfigurable devices, and these are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal Power PC (PPC) microprocessor. In turn, this contains a standalone system which manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to visualize and configure system operation and captured and/or processed images remotely. PMID:22163739

  19. Single camera absolute motion based digital elevation mapping for a next generation planetary lander

    NASA Astrophysics Data System (ADS)

    Feetham, Luke M.; Aouf, Nabil; Bourdarias, Clement; Voirin, Thomas

    2014-05-01

    Robotic planetary surface exploration missions are becoming much more ambitious in their science goals as they attempt to answer the bigger questions relating to the possibility of life elsewhere in our solar system. Answering these questions will require scientifically rich landing sites. Such sites are unlikely to be located in relatively flat regions that are free from hazards, therefore there is a growing need for next generation entry descent and landing systems to possess highly sophisticated navigation capabilities coupled with active hazard avoidance that can enable a pin-point landing. As a first step towards achieving these goals, a multi-source, multi-rate data fusion algorithm is presented that combines single camera recursive feature-based structure from motion (SfM) estimates with measurements from an inertial measurement unit in order to overcome the scale ambiguity problem by directly estimating the unknown scale factor. This paper focuses on accurate estimation of absolute motion parameters, as well as the estimation of sparse landing site structure to provide a starting point for hazard detection. We assume no prior knowledge of the landing site terrain structure or of the landing craft motion in order to fully assess the capabilities of the proposed algorithm to allow a pin-point landing on distant solar system bodies where accurate knowledge of the desired landing site may be limited. We present results using representative synthetic images of deliberately challenging landing scenarios, which demonstrate that the proposed method has great potential.

  20. Potential of Uav-Based Laser Scanner and Multispectral Camera Data in Building Inspection

    NASA Astrophysics Data System (ADS)

    Mader, D.; Blaskow, R.; Westfeld, P.; Weller, C.

    2016-06-01

    Conventional building inspection of bridges, dams or large constructions in general is rather time consuming and often expensive due to traffic closures and the need for special heavy vehicles such as under-bridge inspection units or other large lifting platforms. In view of this, an unmanned aerial vehicle (UAV) can be more reliable and efficient as well as less expensive and simpler to operate. The utility of UAVs as an assisting tool in building inspection is obvious. Furthermore, lightweight special sensors such as infrared and thermal cameras as well as laser scanners are available and well suited for use on unmanned aircraft systems. Such a flexible low-cost system is realized in the ADFEX project with the goal of time-efficient object exploration, monitoring and damage detection. For this purpose, a fleet of UAVs, equipped with several sensors for navigation, obstacle avoidance and 3D object-data acquisition, has been developed and constructed. This contribution deals with the potential of UAV-based data in building inspection. An overview of the ADFEX project, sensor specifications and the general requirements of building inspections is given. On the basis of results achieved in practical studies, the applicability and potential of the UAV system in building inspection are presented and discussed.

  1. Hyperspectral characterization of fluorophore diffusion in human skin using a sCMOS based hyperspectral camera

    NASA Astrophysics Data System (ADS)

    Hernandez-Palacios, J.; Haug, I. J.; Grimstad, Ø.; Randeberg, L. L.

    2011-07-01

    Hyperspectral fluorescence imaging is a modality combining high spatial and spectral resolution with increased sensitivity for low photon counts. The main objective of the current study was to investigate whether this technique is a suitable tool for characterization of diffusion properties in human skin. This was done by imaging fluorescence from Alexa 488 in ex vivo human skin samples using an sCMOS based hyperspectral camera. Pre-treatment with acetone, DMSO and mechanical micro-needling of the stratum corneum created variation in epidermal permeability between the measured samples. Selected samples were also stained using fluorescence-labelled biopolymers. The effect of fluorescence enhancers on transdermal diffusion could be documented from the collected data. Acetone was found to have an enhancing effect on the transport, and the results indicate that the biopolymers might have a similar effect. The enhancement from these compounds was, however, not as prominent as the effect of mechanical penetration of the sample using a micro-needling device. Hyperspectral fluorescence imaging has thus proven to be an interesting tool for characterization of fluorophore diffusion in ex vivo skin samples. Further work will include repetition of the measurements on a shorter time scale and mathematical modeling of the diffusion process to determine the diffusivity in skin for the compounds in question.

  2. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System.

    PubMed

    Furferi, Rocco; Governi, Lapo; Volpe, Yary; Carfagni, Monica

    2016-08-31

    One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. The carding process is the one devoted to transforming a "fuzzy mass" of tufted fibers into a regular mass of untwisted fibers, named "tow". During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, thus leading to yarns whose color differs from the reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real time.

  3. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    PubMed

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by a separate denoising step. This strategy generates many noise-caused color artifacts during demosaicking, which are hard to remove in the subsequent denoising. Few denoising schemes that work directly on CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can offer advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
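A stripped-down version of PCA-based shrinkage denoising on flattened grayscale patches (omitting the paper's CFA-specific supporting-window machinery; the function name and the Wiener-style coefficient gain are our assumptions) could look like:

```python
import numpy as np

def pca_shrink_patches(patches, noise_var):
    """Project patches onto their principal components and apply
    Wiener-like shrinkage per coefficient: directions whose variance
    barely exceeds the noise floor are attenuated toward zero."""
    mean = patches.mean(axis=0)
    x = patches - mean
    cov = x.T @ x / len(x)
    w, v = np.linalg.eigh(cov)                 # eigenvalues ascending
    coeffs = x @ v                             # PCA coefficients
    signal_var = np.maximum(w - noise_var, 0.0)
    gain = signal_var / (signal_var + noise_var + 1e-12)
    return coeffs * gain @ v.T + mean          # shrink and reproject
```

On synthetic rank-deficient data plus white noise, this reliably lowers the mean squared error relative to the noisy input.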

  4. Image-based correction of the light dilution effect for SO2 camera measurements

    NASA Astrophysics Data System (ADS)

    Campion, Robin; Delgado-Granados, Hugo; Mori, Toshiya

    2015-07-01

    Ultraviolet SO2 cameras are increasingly used in volcanology because of their ability to remotely measure the 2D distribution of SO2 in volcanic plumes, at a high frequency. However, light dilution, i.e., the scattering of ambient photons within the instrument's field of view (FoV) on air parcels located between the plume and the instrument, induces a systematic underestimation of the measurements, whose magnitude increases with distance, SO2 content, atmospheric pressure and turbidity. Here we describe a robust and straightforward method to quantify and correct this effect. We retrieve atmospheric scattering coefficients based on the contrast attenuation between the sky and the increasingly distant slope of the volcanic edifice. We illustrate our method with a case study at Etna volcano, where the difference between corrected and uncorrected emission rates amounts to 40% to 80%, and investigate the temporal variations of the scattering coefficient during 1 h of measurements on Etna. We validate the correction method at Popocatépetl volcano by performing measurements of the same plume at different distances from the volcano. Finally, we report the atmospheric scattering coefficients for several volcanoes at different latitudes and altitudes.
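The contrast-attenuation retrieval can be sketched as a log-linear fit to the assumed exponential model C(d) = C0·exp(-ε·d), where C is the observed contrast of a terrain feature at distance d and ε is the scattering coefficient (a minimal illustration; the function name is ours):

```python
import numpy as np

def scattering_coefficient(distances, contrasts):
    """Fit ln C = ln C0 - eps * d by least squares and return
    (eps, C0); eps is the atmospheric scattering coefficient used
    to correct light dilution."""
    slope, intercept = np.polyfit(np.asarray(distances, float),
                                  np.log(np.asarray(contrasts, float)), 1)
    return -slope, np.exp(intercept)
```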

  5. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System

    PubMed Central

    Furferi, Rocco; Governi, Lapo; Volpe, Yary; Carfagni, Monica

    2016-01-01

    One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers, is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. Carding process is the one devoted to transform a “fuzzy mass” of tufted fibers into a regular mass of untwisted fibers, named “tow”. During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, thus leading to yarns whose color differs from the one used for reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real-time. PMID:27589765

  6. Design of motion adjusting system for space camera based on ultrasonic motor

    NASA Astrophysics Data System (ADS)

    Xu, Kai; Jin, Guang; Gu, Song; Yan, Yong; Sun, Zhiyuan

    2011-08-01

    The drift angle is the transverse intersection angle of the image-motion vector of a space camera; adjusting this angle reduces its influence on image quality. An ultrasonic motor (USM) is a new type of actuator driven by ultrasonic waves excited in piezoelectric ceramics, with many advantages over conventional electromagnetic motors. In this paper, improvements to the control system of a drift-adjusting mechanism are presented. A drift-adjusting system was designed around the T-60 ultrasonic motor, composed of the drift-adjusting mechanical frame, the ultrasonic motor, its driver, a photoelectric encoder and the drift-adjusting controller. A TMS320F28335 DSP was adopted as the computation and control processor, the photoelectric encoder served as the sensor of the position closed loop, and a voltage driving circuit was designed as the ultrasonic wave generator. A mathematical model of the drive circuit of the T-60 ultrasonic motor was built using MATLAB modules. To verify the validity of the drift-adjusting system, a disturbance source was introduced and simulation analysis was carried out. The motor-drive control system for drift adjustment was designed with improved PID control. The drift-angle adjusting system has advantages such as small volume, simple configuration, high position-control precision, fine repeatability, a self-locking property and low power consumption. The results showed that the system accomplishes the drift-angle adjusting mission well.
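The improved PID control is not detailed in the abstract; a plain discrete PID position loop of the kind such a system builds on might be sketched as follows (gains, names and the simulated plant are illustrative, not the paper's):

```python
class PID:
    """Discrete PID controller for a position closed loop: the
    control output is kp*e + ki*integral(e) + kd*de/dt, computed at
    a fixed sample interval dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt           # accumulate error
        deriv = (err - self.prev_err) / self.dt  # finite difference
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

Driving a simple integrator plant (position rate proportional to control output) with this loop converges on the commanded drift angle.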

  7. Contourlet domain multiband deblurring based on color correlation for fluid lens cameras.

    PubMed

    Tzeng, Jack; Liu, Chun-Chen; Nguyen, Truong Q

    2010-10-01

    Due to the novel fluid optics, unique image processing challenges are presented by the fluidic lens camera system. Developed for surgical applications, unique properties, such as no moving parts while zooming and better miniaturization than traditional glass optics, are advantages of the fluid lens. Despite these abilities, sharp color planes and blurred color planes are created by the nonuniform reaction of the liquid lens to different color wavelengths. Severe axial color aberrations are caused by this reaction. In order to deblur color images without estimating a point spread function, a contourlet filter bank system is proposed. Information from sharp color planes is used by this multiband deblurring method to improve blurred color planes. Compared to traditional Lucy-Richardson and Wiener deconvolution algorithms, significantly improved sharpness and reduced ghosting artifacts are produced by a previous wavelet-based method. Directional filtering is used by the proposed contourlet-based system to adjust to the contours of the image. An image is produced by the proposed method which has a similar level of sharpness to the previous wavelet-based method and has fewer ghosting artifacts. Conditions for when this algorithm will reduce the mean squared error are analyzed. While improving the blue color plane by using information from the green color plane is the primary focus of this paper, these methods could be adjusted to improve the red color plane. Many multiband systems such as global mapping, infrared imaging, and computer assisted surgery are natural extensions of this work. This information sharing algorithm is beneficial to any image set with high edge correlation. Improved results in the areas of deblurring, noise reduction, and resolution enhancement can be produced by the proposed algorithm.

  8. Automated cloud classification using a ground based infra-red camera and texture analysis techniques

    NASA Astrophysics Data System (ADS)

    Rumi, Emal; Kerr, David; Coupland, Jeremy M.; Sandford, Andrew P.; Brettle, Mike J.

    2013-10-01

    Clouds play an important role in influencing the dynamics of local and global weather and climate conditions. Continuous monitoring of clouds is vital for weather forecasting and for air-traffic control. Convective clouds such as Towering Cumulus (TCU) and Cumulonimbus clouds (CB) are associated with thunderstorms, turbulence and atmospheric instability. Human observers periodically report the presence of CB and TCU clouds during operational hours at airports and observatories; however such observations are expensive and time limited. Robust, automatic classification of cloud type using infrared ground-based instrumentation offers the advantage of continuous, real-time (24/7) data capture and the representation of cloud structure in the form of a thermal map, which can greatly help to characterise certain cloud formations. The work presented here utilised a ground based infrared (8-14 μm) imaging device mounted on a pan/tilt unit for capturing high spatial resolution sky images. These images were processed to extract 45 separate textural features using statistical and spatial frequency based analytical techniques. These features were used to train a weighted k-nearest neighbour (KNN) classifier in order to determine cloud type. Ground truth data were obtained by inspection of images captured simultaneously from a visible wavelength colour camera at the same installation, with approximately the same field of view as the infrared device. These images were classified by a trained cloud observer. Results from the KNN classifier gave an encouraging success rate. A Probability of Detection (POD) of up to 90% with a Probability of False Alarm (POFA) as low as 16% was achieved.
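The weighted k-NN decision stage could be sketched as follows (feature extraction omitted; inverse-distance weighting is one common choice and may differ from the authors' weighting scheme):

```python
import numpy as np

def weighted_knn(train_x, train_y, query, k=3):
    """Distance-weighted k-NN vote over texture feature vectors:
    the k nearest training samples each vote for their label with
    weight inversely proportional to their distance."""
    d = np.linalg.norm(train_x - query, axis=1)
    idx = np.argsort(d)[:k]
    weights = 1.0 / (d[idx] + 1e-9)        # closer neighbours count more
    votes = {}
    for label, w in zip(train_y[idx], weights):
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)
```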

  9. Positron-rubidium scattering

    NASA Technical Reports Server (NTRS)

    Mceachran, R. P.; Horbatsch, M.; Stauffer, A. D.

    1990-01-01

    A 5-state close-coupling calculation (5s-5p-4d-6s-6p) was carried out for positron-Rb scattering in the energy range 3.7 to 28.0 eV. In contrast to the results of similar close-coupling calculations for positron-Na and positron-K scattering, the (effective) total integrated cross section has an energy dependence that is contrary to recent experimental measurements.

  10. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  11. Generalized free-space diffuse photon transport model based on the influence analysis of a camera lens diaphragm.

    PubMed

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

    2010-10-10

    The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in the existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance theorem based model demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space.

  12. Nonuniformity correction based on focal plane array temperature in uncooled long-wave infrared cameras without a shutter.

    PubMed

    Liang, Kun; Yang, Cailan; Peng, Li; Zhou, Bo

    2017-02-01

    In uncooled long-wave IR camera systems, the temperature of the focal plane array (FPA) varies with the environmental temperature as well as the operating time. The spatial nonuniformity of the FPA, which is partly governed by the FPA temperature, changes accordingly, reducing image quality. This study presents a real-time nonuniformity correction algorithm based on FPA temperature to compensate for nonuniformity caused by FPA temperature fluctuation. First, gain coefficients are calculated using a two-point correction technique. Then offset parameters at different FPA temperatures are obtained and stored in tables. When the camera operates, the offset tables are used to update the current offset parameters via a temperature-dependent interpolation. Finally, the gain coefficients and offset parameters are used to correct the output of the IR camera in real time. The proposed algorithm is evaluated and compared with two representative shutterless algorithms [minimizing the sum of the squares of errors algorithm (MSSE), template-based solution algorithm (TBS)] using IR images captured by a 384×288 pixel uncooled IR camera with a 17 μm pitch. Experimental results show that this method can quickly trace the response drift of the detector units when the FPA temperature changes. The correction quality of the proposed algorithm is as good as that of MSSE, while its processing time is as short as that of TBS, which means the proposed algorithm is well suited to real-time operation while achieving a high-quality correction.
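The offset-table interpolation step might look like the following sketch (a minimal model with fixed per-pixel gains and temperature-indexed offset tables; function and parameter names are ours):

```python
import numpy as np

def correct_frame(raw, gain, table_temps, offset_tables, fpa_temp):
    """Shutterless NUC sketch: per-pixel two-point gain coefficients
    plus per-pixel offsets linearly interpolated between tables
    stored at known FPA temperatures."""
    t = np.clip(fpa_temp, table_temps[0], table_temps[-1])
    hi = np.searchsorted(table_temps, t)
    hi = min(max(hi, 1), len(table_temps) - 1)   # bracket t
    lo = hi - 1
    frac = (t - table_temps[lo]) / (table_temps[hi] - table_temps[lo])
    offset = (1 - frac) * offset_tables[lo] + frac * offset_tables[hi]
    return gain * raw + offset                   # corrected frame
```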

  13. Calibration of a dual-PTZ-camera system for stereo vision based on parallel particle swarm optimization method

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Wang, Huai-Ming; Lee, Shih-Tseng; Wu, Chieh-Tsai; Hsu, Ming-Hsi

    2014-02-01

    This work investigates the calibration of a stereo vision system based on two PTZ (Pan-Tilt-Zoom) cameras. As the accuracy of the system depends not only on intrinsic parameters, but also on the geometric relationships between rotation axes of the cameras, the major concern is the development of an effective and systematic way to obtain these relationships. We derived a complete geometric model of the dual-PTZ-camera system and proposed a calibration procedure for the intrinsic and external parameters of the model. The calibration method is based on Zhang's approach using an augmented checkerboard composed of eight small checkerboards, and is formulated as an optimization problem to be solved by an improved particle swarm optimization (PSO) method. Two Sony EVI-D70 PTZ cameras were used for the experiments. The root-mean-square errors (RMSE) of corner distances in the horizontal and vertical direction are 0.192 mm and 0.115 mm, respectively. The RMSE of overlapped points between the small checkerboards is 1.3958 mm.
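A bare-bones global-best PSO of the kind underlying the calibration could be sketched as follows (the paper uses an improved, parallel PSO variant and a reprojection-error cost; everything here, including the inertia and acceleration constants, is illustrative):

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, seed=0):
    """Minimize cost f over a box: particles are pulled toward their
    personal best and the swarm's global best each iteration."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)               # keep inside the box
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```

For calibration, `f` would measure the discrepancy between observed and model-predicted checkerboard corners for a candidate parameter vector.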

  14. Mini Compton Camera Based on an Array of Virtual Frisch-Grid CdZnTe Detectors

    SciTech Connect

    Lee, Wonho; Bolotnikov, Aleksey; Lee, Taewoong; Camarda, Giuseppe; Cui, Yonggang; Gul, Rubi; Hossain, Anwar; Utpal, Roy; Yang, Ge; James, Ralph

    2016-02-15

    In this study, we constructed a mini Compton camera based on an array of CdZnTe detectors and assessed its spectral and imaging properties. The entire array consisted of 6×6 Frisch-grid CdZnTe detectors, each with a size of 6×6×15 mm3. Since it is easier and more practical to grow small CdZnTe crystals rather than large monolithic ones, constructing a mosaic array of parallelepiped crystals can be an effective way to build a more efficient, large-volume detector. With the fully operational CdZnTe array, we measured the energy spectra for 133Ba, 137Cs and 60Co radiation sources; we also located these sources using a Compton imaging approach. Although the Compton camera was small enough to hand-carry, its intrinsic efficiency was several orders of magnitude higher than those achieved in previous studies using spatially separated arrays, because our camera measured the interactions inside the CZT detector array, wherein the detector elements were positioned very close to each other. Lastly, the performance of our camera was compared with that based on a pixelated detector.

  15. Mini Compton Camera Based on an Array of Virtual Frisch-Grid CdZnTe Detectors

    DOE PAGES

    Lee, Wonho; Bolotnikov, Aleksey; Lee, Taewoong; ...

    2016-02-15

    In this study, we constructed a mini Compton camera based on an array of CdZnTe detectors and assessed its spectral and imaging properties. The entire array consisted of 6×6 Frisch-grid CdZnTe detectors, each with a size of 6×6×15 mm3. Since it is easier and more practical to grow small CdZnTe crystals rather than large monolithic ones, constructing a mosaic array of parallelepiped crystals can be an effective way to build a more efficient, large-volume detector. With the fully operational CdZnTe array, we measured the energy spectra for 133Ba, 137Cs and 60Co radiation sources; we also located these sources using a Compton imaging approach. Although the Compton camera was small enough to hand-carry, its intrinsic efficiency was several orders of magnitude higher than those achieved in previous studies using spatially separated arrays, because our camera measured the interactions inside the CZT detector array, wherein the detector elements were positioned very close to each other. Lastly, the performance of our camera was compared with that based on a pixelated detector.

  16. Temperament, character and serotonin activity in the human brain: a positron emission tomography study based on a general population cohort.

    PubMed

    Tuominen, L; Salo, J; Hirvonen, J; Någren, K; Laine, P; Melartin, T; Isometsä, E; Viikari, J; Cloninger, C R; Raitakari, O; Hietala, J; Keltikangas-Järvinen, L

    2013-04-01

    The psychobiological model of personality by Cloninger and colleagues originally hypothesized that interindividual variability in the temperament dimension 'harm avoidance' (HA) is explained by differences in the activity of the brain serotonin system. We assessed brain serotonin transporter (5-HTT) density in vivo with positron emission tomography (PET) in healthy individuals with high or low HA scores using an 'oversampling' study design. Method: Subjects consistently in either upper or lower quartiles for the HA trait were selected from a population-based cohort in Finland (n = 2075) with pre-existing Temperament and Character Inventory (TCI) scores. A total of 22 subjects free of psychiatric and somatic disorders were included in the matched high- and low-HA groups. The main outcome measure was regional 5-HTT binding potential (BPND) in high- and low-HA groups estimated with PET and [11C]N,N-dimethyl-2-(2-amino-4-methylphenylthio)benzylamine ([11C]MADAM). In secondary analyses, 5-HTT BPND was correlated with other TCI dimensions. 5-HTT BPND did not differ between high- and low-HA groups in the midbrain or any other brain region. This result remained the same even after adjusting for other relevant TCI dimensions. Higher 5-HTT BPND in the raphe nucleus predicted higher scores in 'self-directedness'. This study does not support an association between the temperament dimension HA and serotonin transporter density in healthy subjects. However, we found a link between high serotonin transporter density and high 'self-directedness' (ability to adapt and control one's behaviour to fit situations in accord with chosen goals and values). We suggest that biological factors are more important in explaining variability in character than previously thought.

  17. Study of material properties important for an optical property modulation-based radiation detection method for positron emission tomography.

    PubMed

    Tao, Li; Daghighian, Henry M; Levin, Craig S

    2017-01-01

    We compare the performance of two detector materials, cadmium telluride (CdTe) and bismuth silicon oxide (BSO), for an optical property modulation-based radiation detection method for positron emission tomography (PET), which is a potential new direction to dramatically improve the annihilation photon pair coincidence time resolution. We have shown that the induced current flow in the detector crystal resulting from ionizing radiation determines the strength of the optical modulation signal. A larger resistivity is favorable for reducing the dark current (noise) in the detector crystal, and thus the higher-resistivity BSO crystal has a lower (50% lower on average) noise level than CdTe. The CdTe and BSO crystals can achieve the same sensitivity under laser diode illumination at the same crystal bias voltage, while the BSO crystal is not as sensitive to 511-keV photons as the CdTe crystal under the same crystal bias voltage. The amplitude of the modulation signal induced by 511-keV photons in the BSO crystal is around 30% of that induced in the CdTe crystal under the same bias condition. In addition, we have found that the optical modulation strength increases linearly with crystal bias voltage before saturation. The modulation signal with CdTe tends to saturate at bias voltages higher than 1500 V due to its lower resistivity (and thus larger dark current), while the modulation signal strength with BSO still increases beyond 3500 V. Further increasing the bias voltage for BSO could potentially further enhance the modulation strength and thus the sensitivity.

  18. Underwater camera with depth measurement

    NASA Astrophysics Data System (ADS)

    Wang, Wei-Chih; Lin, Keng-Ren; Tsui, Chi L.; Schipf, David; Leang, Jonathan

    2016-04-01

    The objective of this study is to develop an RGB-D (video + depth) camera that provides three-dimensional image data for use in the haptic feedback of a robotic underwater ordnance recovery system. Two camera systems were developed and studied. The first depth camera relies on structured light (as used by the Microsoft Kinect), where the displacement of an object is determined by variations in the geometry of a projected pattern. The other camera system is based on a Time of Flight (ToF) depth camera. The results for the structured light camera system show that it requires a stronger light source with a similar operating wavelength and bandwidth to achieve a desirable working distance in water. This approach might not be robust enough for our proposed underwater RGB-D camera system, as it would require a complete re-design of the light source component. The ToF camera system, instead, allows an arbitrary placement of light source and camera. The intensity output of the broadband LED light source in the ToF camera system can be increased by putting the LEDs into an array configuration, and the LEDs can be modulated comfortably with any waveform and frequency required by the ToF camera. In this paper, both cameras were evaluated and experiments were conducted to demonstrate the versatility of the ToF camera.
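    In the ToF approach, depth follows from the phase shift of the amplitude-modulated illumination; a minimal sketch, with an optional refractive index to account for the slower speed of light underwater (the index value is an assumption for illustration, not a calibration from the paper):

```python
import math

C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(phase_shift_rad, mod_freq_hz, refractive_index=1.0):
    """Distance recovered from the phase shift of amplitude-modulated light.
    Use refractive_index ~ 1.33 for water; the unambiguous range is
    (c / n) / (2 * f_mod), after which the phase wraps around."""
    c = C_VACUUM / refractive_index
    return c * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)
```

At a 20 MHz modulation frequency, a half-cycle phase shift (pi radians) corresponds to about 3.75 m in air and about 2.82 m in water.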

  19. Preliminary considerations of an intense slow positron facility based on a 78Kr loop in the High Flux Isotope Reactor

    SciTech Connect

    Hulett, L.D. Jr.; Donohue, D.L.; Peretz, F.J.; Montgomery, B.H.; Hayter, J.B.

    1990-01-01

    Suggestions have been made to the National Steering Committee for the Advanced Neutron Source (ANS) by Mills that provisions be made to install a high-intensity slow positron facility, based on a {sup 78}Kr loop, that would be available to the general community of scientists interested in this field. The flux of thermal neutrons calculated for the ANS is 10{sup 15} s{sup {minus}1} m{sup {minus}2}, from which Mills has estimated a 5 mm beam of slow positrons with a current of about 10{sup 12} s{sup {minus}1}. The intensity of such a beam will be at least 3 orders of magnitude greater than those presently available. The construction of the ANS is not anticipated to be complete until the year 2000. In order to properly plan the design of the ANS, strong consideration is being given to a proof-of-principle experiment, using the presently available High Flux Isotope Reactor, to test the {sup 78}Kr loop technique. The positron current from the HFIR facility is expected to be about 10{sup 10} s{sup {minus}1}, which is 2 orders of magnitude greater than any other available. If the experiment succeeds, a very valuable facility will be established, and important information will be generated on how the ANS should be designed. 3 refs., 1 fig.

  20. Human detection based on the generation of a background image by using a far-infrared light camera.

    PubMed

    Jeon, Eun Som; Choi, Jong-Suk; Lee, Ji Hoon; Shin, Kwang Yong; Kim, Yeong Gon; Le, Toan Thanh; Park, Kang Ryoung

    2015-03-19

    The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance and monitoring systems. However, performance enhancement of human detection based on visible light cameras is limited by factors such as nonuniform illumination, shadows and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared light) cameras has been considered as an alternative. However, its performance is influenced by factors such as the low image resolution, low contrast and large noise of thermal images. It is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, a background image is generated by median and average filtering, and additional filtering procedures based on maximum gray level, size filtering and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images; the thresholds for the difference images are adaptively determined based on the brightness of the generated background image, and noise components are removed by component labeling, a morphological operation and size filtering. Third, detected areas that may contain more than two human regions are merged or separated based on the information in the horizontal and vertical histograms of the detected area; this procedure is also adapted to the brightness of the generated background image. Fourth, a further procedure for the separation and removal of candidate human regions is performed based on the size and the height-to-width ratio of the candidate regions, considering the camera viewing direction and perspective
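    The first two steps (median-filtered background generation and a brightness-adaptive difference threshold) can be sketched as follows; the function names, the 8-bit gray-level normalization and the omission of the further max-gray/size/region filtering are simplifying assumptions, not the paper's implementation:

```python
import numpy as np

def background_image(frames):
    """Pixel-wise median over a stack of thermal frames: transient human
    regions are suppressed, leaving an approximate background image."""
    return np.median(np.stack(frames), axis=0)

def candidate_mask(frame, background, base_thresh=20.0):
    """Pixel-difference mask whose threshold adapts to the brightness of
    the generated background (8-bit gray levels assumed)."""
    scale = background.mean() / 128.0  # brighter background -> higher threshold
    diff = np.abs(frame.astype(float) - background.astype(float))
    return diff > base_thresh * scale
```

A moving warm region present in only one of several frames is removed by the median, and shows up in the mask when that frame is differenced against the background.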

  1. Human Detection Based on the Generation of a Background Image by Using a Far-Infrared Light Camera

    PubMed Central

    Jeon, Eun Som; Choi, Jong-Suk; Lee, Ji Hoon; Shin, Kwang Yong; Kim, Yeong Gon; Le, Toan Thanh; Park, Kang Ryoung

    2015-01-01

    The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance and monitoring systems. However, performance enhancement of human detection based on visible light cameras is limited by factors such as nonuniform illumination, shadows and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared light) cameras has been considered as an alternative. However, its performance is influenced by factors such as the low image resolution, low contrast and large noise of thermal images. It is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, a background image is generated by median and average filtering, and additional filtering procedures based on maximum gray level, size filtering and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images; the thresholds for the difference images are adaptively determined based on the brightness of the generated background image, and noise components are removed by component labeling, a morphological operation and size filtering. Third, detected areas that may contain more than two human regions are merged or separated based on the information in the horizontal and vertical histograms of the detected area; this procedure is also adapted to the brightness of the generated background image. Fourth, a further procedure for the separation and removal of candidate human regions is performed based on the size and the height-to-width ratio of the candidate regions, considering the camera viewing direction and perspective

  2. Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera

    NASA Astrophysics Data System (ADS)

    Dziri, Aziz; Duranton, Marc; Chapuis, Roland

    2016-07-01

    Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.
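    A common building block of such tracking pipelines is greedy IoU-based association between existing tracks and new detections; the sketch below illustrates that step only (the paper's occlusion handling and full pipeline are not reproduced):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, thresh=0.3):
    """Greedy association: each track claims its best unmatched detection
    with IoU above `thresh`; returns {track_id: detection_index}."""
    assignments, used = {}, set()
    for tid, box in tracks.items():
        best, best_iou = None, thresh
        for i, det in enumerate(detections):
            if i not in used and iou(box, det) > best_iou:
                best, best_iou = i, iou(box, det)
        if best is not None:
            assignments[tid] = best
            used.add(best)
    return assignments
```

Unmatched detections would then seed new tracks, and tracks unmatched for several frames would be dropped.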

  3. Synthetic neutron camera and spectrometer in JET based on AFSI-ASCOT simulations

    NASA Astrophysics Data System (ADS)

    Sirén, P.; Varje, J.; Weisen, H.; Koskela, T.; JET contributors

    2017-09-01

    The ASCOT Fusion Source Integrator (AFSI) has been used to calculate neutron production rates and spectra corresponding to the JET 19-channel neutron camera (KN3) and the time-of-flight spectrometer (TOFOR) as ideal diagnostics, without detector-related effects. AFSI calculates fusion product distributions in 4D, based on Monte Carlo integration from arbitrary reactant distribution functions. The distribution functions were calculated by the ASCOT Monte Carlo particle orbit following code for thermal, NBI and ICRH particle reactions. Fusion cross-sections were defined based on the Bosch-Hale model and both DD and DT reactions have been included. Neutrons generated by AFSI-ASCOT simulations have already been applied as a neutron source of the Serpent neutron transport code in ITER studies. Additionally, AFSI has been selected to be a main tool as the fusion product generator in the complete analysis calculation chain: ASCOT - AFSI - SERPENT (neutron and gamma transport Monte Carlo code) - APROS (system and power plant modelling code), which encompasses the plasma as an energy source, heat deposition in plant structures as well as cooling and balance-of-plant in DEMO applications and other reactor relevant analyses. This conference paper presents the first results and validation of the AFSI DD fusion model for different auxiliary heating scenarios (NBI, ICRH) with very different fast particle distribution functions. Both calculated quantities (production rates and spectra) have been compared with experimental data from KN3 and synthetic spectrometer data from ControlRoom code. No unexplained differences have been observed. In future work, AFSI will be extended for synthetic gamma diagnostics and additionally, AFSI will be used as part of the neutron transport calculation chain to model real diagnostics instead of ideal synthetic diagnostics for quantitative benchmarking.

  4. Human Visual System-Based Fundus Image Quality Assessment of Portable Fundus Camera Photographs.

    PubMed

    Wang, Shaoze; Jin, Kai; Lu, Haitong; Cheng, Chuming; Ye, Juan; Qian, Dahong

    2016-04-01

    Telemedicine and the medical "big data" era in ophthalmology highlight the use of non-mydriatic ocular fundus photography, which has given rise to indispensable applications of portable fundus cameras. However, in the case of portable fundus photography, non-mydriatic image quality is more vulnerable to distortions, such as uneven illumination, color distortion, blur, and low contrast. Such distortions are called generic quality distortions. This paper proposes an algorithm capable of selecting images of fair generic quality that would be especially useful to assist inexperienced individuals in collecting meaningful and interpretable data with consistency. The algorithm is based on three characteristics of the human visual system--multi-channel sensation, just noticeable blur, and the contrast sensitivity function to detect illumination and color distortion, blur, and low contrast distortion, respectively. A total of 536 retinal images, 280 from proprietary databases and 256 from public databases, were graded independently by one senior and two junior ophthalmologists, such that three partial measures of quality and generic overall quality were classified into two categories. Binary classification was implemented by the support vector machine and the decision tree, and receiver operating characteristic (ROC) curves were obtained and plotted to analyze the performance of the proposed algorithm. The experimental results revealed that the generic overall quality classification achieved a sensitivity of 87.45% at a specificity of 91.66%, with an area under the ROC curve of 0.9452, indicating the value of applying the algorithm, which is based on the human vision system, to assess the image quality of non-mydriatic photography, especially for low-cost ophthalmological telemedicine applications.
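    As a stand-in for the just-noticeable-blur channel, a simple no-reference sharpness score such as the variance of a Laplacian response illustrates the idea; this is a common proxy chosen for illustration, not the paper's metric:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian response over the image
    interior: a standard no-reference sharpness score. Low values
    suggest a blurred photograph (e.g. a defocused fundus image)."""
    g = gray.astype(float)
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()
```

A flat image scores zero, while any image with edge content scores higher; in practice a threshold on this score would be tuned on graded images such as the 536 used in the paper.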

  5. Dry imaging cameras.

    PubMed

    Indrajit, Ik; Alam, Aftab; Sahni, Hirdesh; Bhatia, Mukul; Sahu, Samaresh

    2011-04-01

    Dry imaging cameras are important hard copy devices in radiology. Using a dry imaging camera, multiformat images of digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes in areas of diverse sciences such as computing, mechanics, thermal science, optics, electricity and radiography. Broadly, hard copy devices are classified as laser-based and non-laser-based technology. When compared with the working knowledge and technical awareness of different modalities in radiology, the understanding of a dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow.

  6. Dry imaging cameras

    PubMed Central

    Indrajit, IK; Alam, Aftab; Sahni, Hirdesh; Bhatia, Mukul; Sahu, Samaresh

    2011-01-01

    Dry imaging cameras are important hard copy devices in radiology. Using a dry imaging camera, multiformat images of digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes in areas of diverse sciences such as computing, mechanics, thermal science, optics, electricity and radiography. Broadly, hard copy devices are classified as laser-based and non-laser-based technology. When compared with the working knowledge and technical awareness of different modalities in radiology, the understanding of a dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow. PMID:21799589

  7. Kinetic model-based factor analysis of dynamic sequences for 82-rubidium cardiac positron emission tomography.

    PubMed

    Klein, R; Beanlands, R S; Wassenaar, R W; Thorn, S L; Lamoureux, M; DaSilva, J N; Adler, A; deKemp, R A

    2010-08-01

    Factor analysis has been pursued as a means to decompose dynamic cardiac PET images into different tissue types based on their unique temporal signatures to improve quantification of physiological function. In this work, the authors present a novel kinetic model-based (MB) method that includes physiological models of factor relationships within the decomposition process. The physiological accuracy of MB decomposed (82)Rb cardiac PET images is evaluated using simulated and experimental data. Precision of myocardial blood flow (MBF) measurement is also evaluated. A gamma-variate model was used to describe the transport of (82)Rb in arterial blood from the right to left ventricle, and a one-compartment model to describe the exchange between blood and myocardium. Simulations of canine and rat heart imaging were performed to evaluate parameter estimation errors. Arterial blood sampling in rats and (11)CO blood pool imaging in dogs were used to evaluate factor and structure accuracy. Variable infusion duration studies in canines were used to evaluate MB structure and global MBF reproducibility. All results were compared to a previously published minimal structure overlap (MSO) method. Canine heart simulations demonstrated that MB has lower root-mean-square error (RMSE) than MSO for both factor (0.2% vs 0.5%, p < 0.001, MB vs MSO, respectively) and structure (3.0% vs 4.7%, p < 0.001) estimations, as with rat heart simulations (factors: 0.2% vs 0.9%, p < 0.001 and structures: 3.0% vs 6.7%, p < 0.001). MB blood factors compared to arterial blood samples in rats had lower RMSE than MSO (1.6% vs 2.2%, p = 0.025). There was no difference in the RMSE of blood structures compared to a (11)CO blood pool image in dogs (8.5% vs 8.8%, p = 0.23). Myocardial structures were more reproducible with MB than with MSO (RMSE = 3.9% vs 6.2%, p < 0.001), as were blood structures (RMSE = 4.9% vs 5.6%, p = 0.006). Finally, MBF values tended to be more reproducible with MB compared to MSO (CV = 10% vs 18
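    The gamma-variate bolus model used to describe the first-pass transport of (82)Rb can be written down directly; the parameter names below are illustrative, not the authors' notation:

```python
import math

def gamma_variate(t, t0, amplitude, alpha, beta):
    """Gamma-variate bolus model: zero before the arrival time t0, then
    A * (t - t0)**alpha * exp(-(t - t0) / beta). The peak occurs at
    t0 + alpha * beta, which is useful for sanity-checking fits."""
    if t <= t0:
        return 0.0
    dt = t - t0
    return amplitude * dt ** alpha * math.exp(-dt / beta)
```

In a factor-analysis setting, a curve of this shape would constrain the blood factors while a one-compartment model constrains the myocardial factor.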

  8. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ in VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
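    Retina-like sensors are commonly laid out in a log-polar pattern, so displaying them on a rectangular grid needs a coordinate transform plus sub-pixel interpolation; the sketch below assumes a geometric ring spacing for illustration (the actual sensor geometry is not given in the abstract):

```python
import math

def logpolar_to_cartesian(ring, spoke, n_spokes, r0, growth):
    """Center of a retina-like pixel: the radius grows geometrically with
    the ring index, the angle uniformly with the spoke index."""
    r = r0 * growth ** ring
    theta = 2.0 * math.pi * spoke / n_spokes
    return r * math.cos(theta), r * math.sin(theta)

def bilinear(img, x, y):
    """Sub-pixel sample of a 2-D array (list of rows) by bilinear
    interpolation, the sub-pixel step the abstract mentions."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0][x0]
            + fx * (1 - fy) * img[y0][x0 + 1]
            + (1 - fx) * fy * img[y0 + 1][x0]
            + fx * fy * img[y0 + 1][x0 + 1])
```

Each retina-like pixel is filled by sampling the rectangular image at its mapped Cartesian coordinate with `bilinear`.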

  9. The DSLR Camera

    NASA Astrophysics Data System (ADS)

    Berkó, Ernő; Argyle, R. W.

    Cameras have developed significantly in the past decade; in particular, digital Single-Lens Reflex (DSLR) cameras have appeared. As a consequence we can buy cameras with higher and higher pixel counts, and mass production has greatly reduced prices. CMOS sensors used for imaging are increasingly sensitive, and the electronics in the cameras allow images to be taken with much less noise. The software background is developing in a similar way: intelligent programs are created for post-processing and other supplementary work. Nowadays we can find a digital camera in almost every household, and most of these are DSLRs. These can be used very well for astronomical imaging, which is nicely demonstrated by the quantity and quality of the spectacular astrophotos appearing in different publications. These examples also show how much post-processing software contributes to the rise in the standard of the pictures. To sum up, the DSLR camera serves as a cheap alternative to the CCD camera, with somewhat weaker technical characteristics. In the following, I will introduce how we can measure the main parameters (position angle and separation) of double stars, based on the methods, software and equipment I use. Others can easily apply these to their own circumstances.
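    Given the pixel offset of the secondary star relative to the primary and the image scale, position angle and separation follow from elementary trigonometry; the sketch below assumes +y = north and +x = east, a convention that must be checked against the actual camera orientation:

```python
import math

def pa_and_separation(dx_px, dy_px, arcsec_per_px):
    """Double-star position angle (degrees, measured from north through
    east) and separation (arcsec) from the pixel offset of the secondary
    relative to the primary. Assumes +y = north, +x = east."""
    pa = math.degrees(math.atan2(dx_px, dy_px)) % 360.0
    sep = math.hypot(dx_px, dy_px) * arcsec_per_px
    return pa, sep
```

A secondary 10 px due north of the primary at 0.5 arcsec/px gives PA 0 degrees and a 5 arcsec separation; due east gives PA 90 degrees.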

  10. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    PubMed

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a biometric trait and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem, arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble construction. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for the ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to state-of-the-art systems.

  11. A New Approach for Combining Time-of-Flight and RGB Cameras Based on Depth-Dependent Planar Projective Transformations

    PubMed Central

    Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel

    2015-01-01

    Image registration for sensor fusion is a valuable technique for acquiring 3D and colour information for a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on the extrinsic parameter computation of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for sensor registration that uses non-common features and avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed within sets of ground control points of 104 images. Since the distances from the control points to the ToF camera are known, the working distance of each element of the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method. PMID:26404315

  12. A New Approach for Combining Time-of-Flight and RGB Cameras Based on Depth-Dependent Planar Projective Transformations.

    PubMed

    Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel

    2015-09-23

    Image registration for sensor fusion is a valuable technique for acquiring 3D and colour information for a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on the extrinsic parameter computation of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for sensor registration that uses non-common features and avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed within sets of ground control points of 104 images. Since the distances from the control points to the ToF camera are known, the working distance of each element of the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method.
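    One simple way to use a depth-keyed homography lookup table is to blend linearly between the two nearest calibrated working distances; linear blending is an assumed scheme for illustration, as the abstract does not specify how table entries are combined:

```python
import numpy as np

def homography_for_depth(hlut, depth):
    """Blend the two calibrated homographies nearest to `depth`.
    `hlut` maps working distance -> 3x3 array; the query depth is
    clamped to the calibrated range."""
    depths = sorted(hlut)
    depth = min(max(depth, depths[0]), depths[-1])
    lo = max(d for d in depths if d <= depth)
    hi = min(d for d in depths if d >= depth)
    if lo == hi:
        return np.asarray(hlut[lo], float)
    w = (depth - lo) / (hi - lo)
    return (1.0 - w) * np.asarray(hlut[lo], float) + w * np.asarray(hlut[hi], float)

def warp_point(H, x, y):
    """Apply a homography to an image point (homogeneous division)."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Each ToF pixel would then be warped into the RGB frame with the homography selected by its own measured depth.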

  13. Fast time-lens-based line-scan single-pixel camera with multi-wavelength source

    PubMed Central

    Guo, Qiang; Chen, Hongwei; Weng, Zhiliang; Chen, Minghua; Yang, Sigang; Xie, Shizhong

    2015-01-01

    A fast time-lens-based line-scan single-pixel camera with a multi-wavelength source is proposed and experimentally demonstrated in this paper. A multi-wavelength laser instead of a mode-locked laser is used as the optical source. With a diffraction grating and dispersion compensating fibers, the spatial information of an object is converted into temporal waveforms which are then randomly encoded, temporally compressed and captured by a single-pixel photodetector. Two image reconstruction algorithms are employed: a dictionary learning algorithm and a discrete cosine transform (DCT)-based algorithm. Results show that the dictionary learning algorithm has greater capability to reduce the number of compressive measurements than the DCT-based algorithm. The effective imaging frame rate increases from 200 kHz to 1 MHz, which shows a significant improvement in imaging speed over conventional single-pixel cameras. PMID:26417527

  14. The Complementary Pinhole Camera.

    ERIC Educational Resources Information Center

    Bissonnette, D.; And Others

    1991-01-01

    Presents an experiment based on the principles of rectilinear motion of light operating in a pinhole camera that projects the image of an illuminated object through a small hole in a sheet to an image screen. (MDH)
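    The rectilinear-propagation principle behind the pinhole camera reduces to similar triangles: the image on the screen scales by the ratio of the pinhole-to-screen and pinhole-to-object distances. A one-line sketch:

```python
def pinhole_image_height(object_height, object_dist, screen_dist):
    """Similar triangles for rectilinear propagation through a pinhole:
    the (inverted) image on the screen scales by screen_dist / object_dist.
    All three arguments share the same length unit."""
    return object_height * screen_dist / object_dist
```

For example, a 2 m object at 4 m, imaged on a screen 0.2 m behind the pinhole, projects a 0.1 m image.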

  15. PTZ Camera-Based Displacement Sensor System with Perspective Distortion Correction Unit for Early Detection of Building Destruction

    PubMed Central

    Jeong, Yoosoo; Park, Daejin; Park, Kil Houm

    2017-01-01

    This paper presents a pan-tilt-zoom (PTZ) camera-based displacement measurement system, specifically built around a perspective distortion correction technique, for the early detection of building destruction. The proposed PTZ-based vision system rotates the camera to monitor specific targets at various distances and controls the zoom level of the lens to keep a constant field of view (FOV). The proposed approach adopts perspective distortion correction to expand the measurable range when monitoring the displacement of the target structure. The implemented system successfully obtains displacement information for structures that are not easily accessible on a remote site. We manually measured the displacement acquired from markers attached to sample structures covering a wide geographic region. Our approach using a PTZ-based camera reduces the perspective distortion, so that the improved system can overcome the limitations of previous displacement measurement work. Evaluation results show that a PTZ-based displacement sensor system with the proposed distortion correction unit is a potentially cost-effective and easy-to-install solution for commercialization. PMID:28241464
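    Perspective distortion correction of this kind typically rests on estimating a homography from point correspondences (for instance, the attached markers); below is a standard direct-linear-transform sketch, offered as background rather than the paper's implementation:

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct linear transform: the 3x3 homography mapping four (or more)
    src points to dst points, recovered as the null vector of the DLT
    system via SVD and normalized so that H[2, 2] == 1."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

Warping marker coordinates through the inverse of this homography removes the perspective component, so displacements can be read off in the corrected (fronto-parallel) frame.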

  16. Target Volume Delineation in Dynamic Positron Emission Tomography Based on Time Activity Curve Differences

    NASA Astrophysics Data System (ADS)

    Teymurazyan, Artur

    Tumor volume delineation plays a critical role in radiation treatment planning and simulation, since inaccurately defined treatment volumes may lead to the overdosing of normal surrounding structures and potentially missing the cancerous tissue. However, the imaging modality almost exclusively used to determine tumor volumes, X-ray Computed Tomography (CT), does not readily exhibit a distinction between cancerous and normal tissue. It has been shown that CT data augmented with PET can improve radiation treatment plans by providing functional information not available otherwise. Presently, static PET scans account for the majority of procedures performed in clinical practice. In the radiation therapy (RT) setting, these scans are visually inspected by a radiation oncologist for the purpose of tumor volume delineation. This approach, however, often results in significant interobserver variability when comparing contours drawn by different experts on the same PET/CT data sets. For this reason, a search for more objective contouring approaches is underway. The major drawback of conventional tumor delineation in static PET images is the fact that two neighboring voxels of the same intensity can exhibit markedly different overall dynamics. Therefore, equal-intensity voxels in a static analysis of a PET image may be falsely classified as belonging to the same tissue. Dynamic PET allows the evaluation of image data in the temporal domain, which often describes specific biochemical properties of the imaged tissues. Analysis of dynamic PET data can be used to improve classification of the imaged volume into cancerous and normal tissue. In this thesis we present a novel tumor volume delineation approach for dynamic PET based on TAC (Time Activity Curve) differences (the Single Seed Region Growing algorithm in 4D (dynamic) PET, or SSRG/4D-PET). A partially-supervised approach is pursued in order to allow an expert reader to utilize the information available from other imaging
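    A single-seed region growing over time-activity curves can be sketched as follows; the Euclidean TAC distance and 4-neighbour connectivity are illustrative simplifications, not the SSRG/4D-PET criteria from the thesis:

```python
import numpy as np
from collections import deque

def tac_region_grow(tacs, seed, thresh):
    """Grow a region from `seed` over a (rows, cols, frames) grid of
    time-activity curves: a 4-neighbour joins the region when the
    Euclidean distance between its TAC and the seed's TAC is <= thresh."""
    h, w, _ = tacs.shape
    seed_tac = tacs[seed]
    grown = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in grown:
                if np.linalg.norm(tacs[nr, nc] - seed_tac) <= thresh:
                    grown.add((nr, nc))
                    queue.append((nr, nc))
    return grown
```

Because the similarity test compares whole curves rather than single-frame intensities, voxels that happen to share an intensity at one time point but follow different dynamics are kept apart.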

  17. An upgraded camera-based imaging system for mapping venous blood oxygenation in human skin tissue

    NASA Astrophysics Data System (ADS)

    Li, Jun; Zhang, Xiao; Qiu, Lina; Leotta, Daniel F.

    2016-07-01

    A camera-based imaging system was previously developed for mapping venous blood oxygenation in human skin. However, several limitations were realized in later applications, which could lead to either significant bias in the estimated oxygen saturation value or poor spatial resolution in the map of the oxygen saturation. To overcome these issues, an upgraded system was developed using improved modeling and image processing algorithms. In the modeling, Monte Carlo (MC) simulation was used to verify the effectiveness of the ratio-to-ratio method for semi-infinite and two-layer skin models, and then the relationship between the venous oxygen saturation and the ratio-to-ratio was determined. The improved image processing algorithms included surface curvature correction and motion compensation. The curvature correction is necessary when the imaged skin surface is uneven. The motion compensation is critical for the imaging system because surface motion is inevitable when the venous volume alteration is induced by cuff inflation. In addition to the modeling and image processing algorithms in the upgraded system, a ring light guide was used to achieve perpendicular and uniform incidence of light. Cross-polarization detection was also adopted to suppress surface specular reflection. The upgraded system was applied to mapping of venous oxygen saturation in the palm, opisthenar and forearm of human subjects. The spatial resolution of the oxygenation map achieved is much better than that of the original system. In addition, the mean values of the venous oxygen saturation for the three locations were verified with a commercial near-infrared spectroscopy system and were consistent with previously published data.
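
    As an illustration of the two-wavelength principle behind a ratio-based oximetry method, a generic Beer-Lambert inversion can be sketched as follows (an assumption for illustration only: the paper's actual relation between venous oxygen saturation and the ratio-to-ratio was derived from Monte Carlo simulation of layered skin models, and `venous_so2_from_ratio` with its arguments is a hypothetical name):

```python
def venous_so2_from_ratio(R, eps_hb, eps_hbo2):
    """Invert R = dA(lambda1)/dA(lambda2) for oxygen saturation S under a
    plain Beer-Lambert model, where dA(l) is proportional to
    [eps_hbo2(l)*S + eps_hb(l)*(1 - S)] times the common path factor.
    eps_hb, eps_hbo2: extinction coefficients at (lambda1, lambda2)."""
    ed1, ed2 = eps_hb
    eo1, eo2 = eps_hbo2
    return (R * ed2 - ed1) / ((eo1 - ed1) - R * (eo2 - ed2))
```

    The path-length factor cancels in the ratio, which is why ratio methods are robust to unknown illumination and geometry; the curvature and motion corrections in the upgraded system address the residual geometry dependence.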

  18. A self-calibrating, camera-based eye tracker for the recording of rodent eye movements.

    PubMed

    Zoccolan, Davide; Graham, Brett J; Cox, David D

    2010-01-01

    Much of neurophysiology and vision science relies on careful measurement of a human or animal subject's gaze direction. Video-based eye trackers have emerged as an especially popular option for gaze tracking, because they are easy to use and are completely non-invasive. However, video eye trackers typically require a calibration procedure in which the subject must look at a series of points at known gaze angles. While it is possible to rely on innate orienting behaviors for calibration in some non-human species, other species, such as rodents, do not reliably saccade to visual targets, making this form of calibration impossible. To overcome this problem, we developed a fully automated infrared video eye-tracking system that is able to quickly and accurately calibrate itself without requiring co-operation from the subject. This technique relies on the optical geometry of the cornea and uses computer-controlled motorized stages to rapidly estimate the geometry of the eye relative to the camera. The accuracy and precision of our system was carefully measured using an artificial eye, and its capability to monitor the gaze of rodents was verified by tracking spontaneous saccades and evoked oculomotor reflexes in head-fixed rats (in both cases, we obtained measurements that are consistent with those found in the literature). Overall, given its fully automated nature and its intrinsic robustness against operator errors, we believe that our eye-tracking system enhances the utility of existing approaches to gaze-tracking in rodents and represents a valid tool for rodent vision studies.

  19. A Self-Calibrating, Camera-Based Eye Tracker for the Recording of Rodent Eye Movements

    PubMed Central

    Zoccolan, Davide; Graham, Brett J.; Cox, David D.

    2010-01-01

    Much of neurophysiology and vision science relies on careful measurement of a human or animal subject's gaze direction. Video-based eye trackers have emerged as an especially popular option for gaze tracking, because they are easy to use and are completely non-invasive. However, video eye trackers typically require a calibration procedure in which the subject must look at a series of points at known gaze angles. While it is possible to rely on innate orienting behaviors for calibration in some non-human species, other species, such as rodents, do not reliably saccade to visual targets, making this form of calibration impossible. To overcome this problem, we developed a fully automated infrared video eye-tracking system that is able to quickly and accurately calibrate itself without requiring co-operation from the subject. This technique relies on the optical geometry of the cornea and uses computer-controlled motorized stages to rapidly estimate the geometry of the eye relative to the camera. The accuracy and precision of our system was carefully measured using an artificial eye, and its capability to monitor the gaze of rodents was verified by tracking spontaneous saccades and evoked oculomotor reflexes in head-fixed rats (in both cases, we obtained measurements that are consistent with those found in the literature). Overall, given its fully automated nature and its intrinsic robustness against operator errors, we believe that our eye-tracking system enhances the utility of existing approaches to gaze-tracking in rodents and represents a valid tool for rodent vision studies. PMID:21152259

  20. Relativistic Positron Creation Using Ultraintense Short Pulse Lasers

    SciTech Connect

    Chen Hui; Wilks, Scott C.; Bonlie, James D.; Price, Dwight F.; Beiersdorfer, Peter; Liang, Edison P.; Myatt, Jason; Meyerhofer, David D.

    2009-03-13

    We measure up to 2×10^10 positrons per steradian ejected out the back of ~mm-thick gold targets when illuminated with short (~1 ps) ultraintense (~1×10^20 W/cm^2) laser pulses. Positrons are produced predominantly by the Bethe-Heitler process and have an effective temperature of 2-4 MeV, with the distribution peaking at 4-7 MeV. The angular distribution of the positrons is anisotropic. Modeling based on the measurements indicates a positron density of ~10^16 positrons/cm^3, the highest ever created in the laboratory.

  1. Streak camera receiver definition study

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.

    1990-01-01

    Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.

  2. Positrons in surface physics

    NASA Astrophysics Data System (ADS)

    Hugenschmidt, Christoph

    2016-12-01

    Within the last decade powerful methods have been developed to study surfaces using bright low-energy positron beams. These novel analysis tools exploit the unique properties of positron interaction with surfaces, which comprise the absence of exchange interaction, repulsive crystal potential and positron trapping in delocalized surface states at low energies. By applying reflection high-energy positron diffraction (RHEPD) one can benefit from the phenomenon of total reflection below a critical angle that is not present in electron surface diffraction. Therefore, RHEPD allows the determination of the atom positions of (reconstructed) surfaces with outstanding accuracy. The main advantages of positron annihilation induced Auger-electron spectroscopy (PAES) are the missing secondary electron background in the energy region of Auger-transitions and its topmost layer sensitivity for elemental analysis. In order to enable the investigation of the electron polarization at surfaces low-energy spin-polarized positrons are used to probe the outermost electrons of the surface. Furthermore, in fundamental research the preparation of well defined surfaces tailored for the production of bound leptonic systems plays an outstanding role. In this report, it is envisaged to cover both the fundamental aspects of positron surface interaction and the present status of surface studies using modern positron beam techniques.

  3. Field-programmable gate array-based hardware architecture for high-speed camera with KAI-0340 CCD image sensor

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Yan, Su; Zhou, Zuofeng; Cao, Jianzhong; Yan, Aqi; Tang, Linao; Lei, Yangjie

    2013-08-01

    We present a field-programmable gate array (FPGA)-based hardware architecture for a high-speed camera with fast auto-exposure control and colour filter array (CFA) demosaicing. The proposed hardware architecture comprises the charge-coupled device (CCD) drive circuits, the image processing circuits, and the power supply circuits. The CCD drive circuits translate the TTL (transistor-transistor logic) timing sequences produced by the image processing circuits into the timing sequences under which the CCD image sensor outputs analog image signals. The image processing circuits convert the analog signals to digital signals for subsequent processing; TTL timing generation, auto-exposure control, CFA demosaicing, and gamma correction are all accomplished in this module. The power supply circuits power the whole system, which is very important for image quality: power noise affects image quality directly, and we suppress it effectively in hardware. The CCD used is the KAI-0340, which can output 210 full-resolution frames per second, and our camera works well in this mode. Because traditional auto-exposure control algorithms converge to a proper exposure level too slowly for this frame rate, we present a new auto-exposure algorithm suited to high-speed cameras. Colour demosaicing is critical for digital cameras because it converts the Bayer sensor mosaic output to a full-colour image, which determines the output image quality of the camera. Complex algorithms achieve high quality but cannot be implemented in hardware, so a low-complexity demosaicing method is presented that can be implemented in hardware while satisfying the quality requirements. Experimental results are given at the end of the paper.
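
    A hardware-friendly demosaicing step of the kind the abstract targets can be illustrated by the simplest possible variant, a per-2x2-cell nearest-neighbour demosaic for an RGGB Bayer pattern (far simpler than the paper's unspecified low-complexity method; note the output resolution is halved):

```python
import numpy as np

def demosaic_nearest(raw):
    """Minimal per-2x2-cell demosaic for an RGGB Bayer mosaic.
    raw: (2M, 2N) array -> rgb: (M, N, 3). Each output pixel takes the
    cell's R and B samples directly and averages its two G samples."""
    r = raw[0::2, 0::2].astype(float)
    g = (raw[0::2, 1::2].astype(float) + raw[1::2, 0::2]) / 2
    b = raw[1::2, 1::2].astype(float)
    return np.stack([r, g, b], axis=-1)
```

    Only adds and shifts are involved, which is the property that makes such schemes attractive for FPGA pipelines at 210 fps.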

  4. Wavelet power spectrum-based autofocusing algorithm for time delayed and integration charge coupled device space camera.

    PubMed

    Tao, Shuping; Jin, Guang; Zhang, Xuyan; Qu, Hongsong; An, Yuan

    2012-07-20

    A novel autofocusing algorithm using the directional wavelet power spectrum is proposed for time delayed and integration charge coupled device (TDI CCD) space cameras, which overcomes the difficulty of focus measurement under real-time changes of the imaged scene. By using the multiresolution and band-pass characteristics of the wavelet transform to improve on the power spectrum based on the fast Fourier transform (FFT), the wavelet power spectrum is less sensitive to scene variance. Moreover, the new focus measure can effectively eliminate the impact of image-motion mismatch through its directional selection. We test the proposed method's performance on synthetic images as well as in a real ground experiment with a TDI CCD prototype camera, and compare it with a focus measure based on the conventional FFT spectrum. The simulation results show that the new focus measure can effectively characterize the defocus state of real remote sensing images. Its error ratio is only 0.112, while that of the prevalent FFT-spectrum algorithm is as high as 0.4. Compared with the FFT-based method, the proposed algorithm performs with high reliability in the real imaging experiments, reducing the instability from 0.600 to 0.161. The two experimental results demonstrate that the proposed algorithm has good monotonicity, high sensitivity, and accuracy, and can satisfy the autofocusing requirements of TDI CCD space cameras.
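
    The band-pass focus-measure idea can be sketched with a single-level Haar decomposition in plain NumPy, scoring focus as the energy in the detail sub-bands (the paper's measure additionally applies a directional selection to suppress image-motion mismatch, which this simplified sketch omits):

```python
import numpy as np

def haar_focus_measure(img):
    """Single-level 2-D Haar decomposition; the focus measure is the
    energy in the detail (band-pass) sub-bands, which is less sensitive
    to scene content than a raw FFT spectrum."""
    img = img[:img.shape[0] & ~1, :img.shape[1] & ~1].astype(float)
    # row-wise averaging/differencing
    lo = (img[:, ::2] + img[:, 1::2]) / 2
    hi = (img[:, ::2] - img[:, 1::2]) / 2
    # column-wise averaging/differencing -> LL, LH, HL, HH sub-bands
    lh = (lo[::2] - lo[1::2]) / 2
    hl = (hi[::2] + hi[1::2]) / 2
    hh = (hi[::2] - hi[1::2]) / 2
    return np.sum(lh**2) + np.sum(hl**2) + np.sum(hh**2)
```

    A sharply focused frame yields more detail-band energy than a defocused one, so sweeping focus and maximizing this measure locates best focus.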

  5. Positrons for linear colliders

    SciTech Connect

    Ecklund, S.

    1987-11-01

    The requirements of a positron source for a linear collider are briefly reviewed, followed by methods of positron production and production of photons by electromagnetic cascade showers. Cross sections for the electromagnetic cascade shower processes of positron-electron pair production and Compton scattering are compared. A program used for Monte Carlo analysis of electromagnetic cascades is briefly discussed, and positron distributions obtained from several runs of the program are discussed. Photons from synchrotron radiation and from channeling are also mentioned briefly, as well as positron collection, transverse focusing techniques, and longitudinal capture. Computer ray tracing is then briefly discussed, followed by space-charge effects and thermal heating and stress due to showers.

  6. Research on radiometric calibration of interline transfer CCD camera based on TDI working mode

    NASA Astrophysics Data System (ADS)

    Wu, Xing-xing; Liu, Jin-guo

    2010-10-01

    An interline transfer CCD camera can be designed to work in a time delay and integration (TDI) mode, similar to a TDI CCD, to obtain higher responsivity and spatial resolution under poor illumination. However, laboratory radiometric calibration experiments revealed that the outputs of some pixels were much lower than the others' when the interline transfer CCD camera worked in TDI mode. As a result, the photo response non-uniformity (PRNU) and signal-to-noise ratio (SNR) of the system deteriorated. The mechanism of this phenomenon was analyzed, and improved PRNU and SNR algorithms for interline transfer CCD cameras were developed to solve the problem: the TDI stage count is used as a variable in the PRNU and SNR algorithms, which improves system performance noticeably with little impact on use. In validation experiments, the improved algorithms were applied to the radiometric calibration of a camera with a KAI-0340 detector. The results proved that the improved algorithms can effectively improve the SNR and lower the PRNU of the system, while better reflecting its characteristics. Working with 16 TDI stages, the PRNU was reduced from 2.25% to 0.82% and the SNR improved by about 2%.
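
    The PRNU and SNR figures quoted above are standard flat-field statistics; a generic way to compute them from a stack of calibration frames taken at one fixed TDI stage setting might look like this (a textbook definition, not the paper's TDI-stage-dependent algorithm; function and variable names are illustrative):

```python
import numpy as np

def prnu_snr(frames):
    """Estimate PRNU and SNR from flat-field frames, shape (N, H, W).
    PRNU: spatial spread of the per-pixel mean relative to the signal.
    SNR: mean signal over mean temporal (frame-to-frame) noise."""
    per_pixel_mean = frames.mean(axis=0)        # fixed-pattern component
    temporal_noise = frames.std(axis=0).mean()  # mean temporal noise
    mean_signal = per_pixel_mean.mean()
    prnu = per_pixel_mean.std() / mean_signal   # spatial non-uniformity
    snr = mean_signal / temporal_noise if temporal_noise > 0 else np.inf
    return prnu, snr
```

    Averaging over frames first separates the fixed spatial pattern (PRNU) from the temporal noise (SNR), which is why the low-output pixels in TDI mode show up as a PRNU degradation.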

  7. A Lyapunov-Based Method for Estimation of Euclidean Position of Static Features Using a Single Camera

    DTIC Science & Technology

    2006-01-01

    environment from 2D images have applications in such diverse areas as autonomous vehicle navigation, visual servoing, 3D modeling, and geospatial mapping...calibrated, and hence, the intrinsic calibration parameters are available. The kinematics of camera motion is formulated based on 2 1/2D visual ...factor by using techniques such as least squares minimization [12]. This approach is appropriate for visual servoing in structured environments

  8. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  9. Observing low-level stratiform clouds and determining its base height at night by sky camera measurements

    NASA Astrophysics Data System (ADS)

    Kolláth, Kornél; Kolláth, Zoltán

    2017-04-01

    The amount and base height of low-level clouds are critical parameters in aviation meteorology. New techniques which can extend the geographic area coverage and characterization of the cloudiness could be beneficial. In recent years, sky camera systems have become more and more popular as a meteorological observation tool, and recent commercial digital cameras with increasingly sensitive sensors provide cheap opportunities for luminance measurements of the night sky. We introduce a new observation method for determining cloud base height, analogous to the triangulation principle of the searchlight ceilometer. We show that light pollution (the upward component of artificial lights) can be used passively as a cloud ceiling projector in various environments. The method was tested over a one-year period from one observation site in central Budapest, allowing comparison with the Budapest airport cloud observation data. In the case of homogeneous stratus cloud sheets, we found that the base height could be estimated with reasonable accuracy via the illumination of the clouds by the stronger ornamental lights in the city. Case studies with different local light pollution characteristics (e.g. smaller settlements, different observation distances) will be presented, and limitations of the method will be discussed. The main problem to be addressed is how to assimilate nighttime sky camera data into the other routine meteorological observations of low-level clouds available at night.
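
    The searchlight-ceilometer triangulation principle referred to above reduces to simple trigonometry: with a ground light at a known horizontal distance, the elevation angle of the illuminated spot on the cloud base gives the base height. A minimal sketch (names are illustrative):

```python
import math

def cloud_base_height(distance_to_light_m, elevation_deg):
    """Searchlight-ceilometer triangulation: a ground light at known
    horizontal distance illuminates a spot on the cloud base; the spot's
    elevation angle seen from the camera gives the base height."""
    return distance_to_light_m * math.tan(math.radians(elevation_deg))
```

    In the passive variant described in the abstract, the "searchlight" is a known city light source, and the elevation angle is read off the calibrated sky-camera image.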

  10. Image enhancement in positron emission mammography

    NASA Astrophysics Data System (ADS)

    Slavine, Nikolai V.; Seiler, Stephen; McColl, Roderick W.; Lenkinski, Robert E.

    2017-02-01

    Purpose: To evaluate an efficient iterative deconvolution method (RSEMD) for improving the quantitative accuracy of breast images previously reconstructed by a commercial positron emission mammography (PEM) scanner. Materials and Methods: The RSEMD method was tested on breast phantom data and clinical PEM imaging data. Data acquisition was performed on a commercial Naviscan Flex Solo II PEM camera. The method was applied to patient breast images previously reconstructed with Naviscan software (MLEM) to determine improvements in resolution, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). Results: In all of the patients' breast studies the post-processed images proved to have higher resolution and lower noise as compared with images reconstructed by conventional methods. In general, the values of SNR reached a plateau at around 6 iterations, with an improvement factor of about 2 for post-processed Flex Solo II PEM images. Improvements in image resolution after the application of RSEMD have also been demonstrated. Conclusions: A rapidly converging, iterative deconvolution algorithm with a novel resolution subsets-based approach (RSEMD) that operates on patient DICOM images has been used for quantitative improvement in breast imaging. The RSEMD method can be applied to clinical PEM images to improve image quality to diagnostically acceptable levels and will be crucial in facilitating diagnosis of tumor progression at the earliest stages. The RSEMD method can be considered an extended Richardson-Lucy algorithm with multiple resolution levels (resolution subsets).
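
    RSEMD is described as an extension of Richardson-Lucy; the classic Richardson-Lucy iteration it builds on can be sketched in plain NumPy as follows (a symmetric PSF and periodic boundaries are assumed for simplicity, so the transpose step reuses the same kernel; this is not the authors' resolution-subsets variant):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=6):
    """Classic Richardson-Lucy deconvolution via FFT-based circular
    convolution. `psf` is a small symmetric kernel; `observed` is the
    blurred, non-negative image."""
    # embed the normalized PSF with its centre at index (0, 0)
    kernel = np.zeros_like(observed, dtype=float)
    kernel[:psf.shape[0], :psf.shape[1]] = psf / psf.sum()
    kernel = np.roll(kernel, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), (0, 1))
    kft = np.fft.rfft2(kernel)
    estimate = np.full(observed.shape, float(observed.mean()))
    for _ in range(n_iter):
        blurred = np.fft.irfft2(kft * np.fft.rfft2(estimate), observed.shape)
        ratio = observed / np.maximum(blurred, 1e-12)
        # symmetric PSF: correlation step uses the same kernel transform
        estimate *= np.fft.irfft2(kft * np.fft.rfft2(ratio), observed.shape)
    return estimate
```

    Each iteration preserves total counts with a normalized kernel, which is one reason the method suits quantitative PEM work; the SNR plateau around 6 iterations reported above corresponds to the usual stopping-rule trade-off between sharpening and noise amplification.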

  11. Compact pnCCD-based X-ray camera with high spatial and energy resolution: a color X-ray camera.

    PubMed

    Scharf, O; Ihle, S; Ordavo, I; Arkadiev, V; Bjeoumikhov, A; Bjeoumikhova, S; Buzanich, G; Gubzhokov, R; Günther, A; Hartmann, R; Kühbacher, M; Lang, M; Langhoff, N; Liebel, A; Radtke, M; Reinholz, U; Riesemeier, H; Soltau, H; Strüder, L; Thünemann, A F; Wedell, R

    2011-04-01

    For many applications there is a requirement for nondestructive analytical investigation of the elemental distribution in a sample. With the improvement of X-ray optics and spectroscopic X-ray imagers, full field X-ray fluorescence (FF-XRF) methods are feasible. A new device for high-resolution X-ray imaging, an energy- and spatially-resolving X-ray camera, is presented. The basic idea behind this so-called "color X-ray camera" (CXC) is to combine an energy dispersive array detector for X-rays, in this case a pnCCD, with polycapillary optics. Imaging is achieved using multiframe recording of the energy and the point of impact of single photons. The camera was tested using a laboratory 30 μm microfocus X-ray tube and synchrotron radiation from BESSY II at the BAMline facility. These experiments demonstrate the suitability of the camera for X-ray fluorescence analysis. The camera simultaneously records 69,696 spectra with an energy resolution of 152 eV for manganese K(α) and a spatial resolution of 50 μm over an imaging area of 12.7 × 12.7 mm(2). It is sensitive to photons in the energy region between 3 and 40 keV, limited by the 50 μm beryllium window and the 450 μm sensitive thickness of the chip. Online preview of the sample is possible, as the software updates the sums of the counts for selected energy channel ranges during the measurement and displays 2-D false-color maps as well as spectra of selected regions. The complete data cube of 264 × 264 spectra is saved for further qualitative and quantitative processing.
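
    The online preview described above, 2-D false-color maps built from selected energy channel ranges, can be illustrated with a minimal sketch over a single-photon event list (the data layout and function name are assumptions for illustration; the actual CXC software is more elaborate):

```python
import numpy as np

def energy_window_map(events, shape, e_lo, e_hi):
    """Build a 2-D elemental map from a single-photon event list of
    (x, y, energy_keV) tuples by counting photons whose energy falls in
    one window, e.g. around a fluorescence line of interest."""
    img = np.zeros(shape, dtype=np.int64)
    for x, y, e in events:
        if e_lo <= e < e_hi:
            img[y, x] += 1
    return img
```

    Summing one such map per element line (e.g. Mn K-alpha near 5.9 keV) and colorizing each gives the false-color overview, while the full 264 × 264 spectral cube remains available for quantitative fits.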

  12. Photometric-based recovery of illuminant-free color images using a red-green-blue digital camera

    NASA Astrophysics Data System (ADS)

    Luis Nieves, Juan; Plata, Clara; Valero, Eva M.; Romero, Javier

    2012-01-01

    Albedo estimation has traditionally been used to make computational simulations of real objects under different conditions, but as yet no device is capable of measuring albedo directly. The aim of this work is to introduce a photometric-based color imaging framework that can estimate albedo and reproduce the appearance of images, both indoors and outdoors, under different lights and illumination geometries. Using a calibration sample set composed of chips made of the same material but different colors and textures, we compare two photometric-stereo techniques, one of them avoiding the effect of shadows and highlights in the image and the other ignoring this constraint. We combined a photometric-stereo technique and a color-estimation algorithm that directly relates the camera sensor outputs with the albedo values. The proposed method can produce illuminant-free images with good color accuracy when a three-channel red-green-blue (RGB) digital camera is used, even outdoors under solar illumination.
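
    The classic Lambertian photometric-stereo solve that underlies albedo estimation of this kind can be sketched as a per-pixel least-squares problem (a textbook formulation; the paper's shadow/highlight-aware variant and its camera color model are more involved):

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Lambertian photometric stereo: per-pixel least-squares solve of
    I = L @ (rho * n). `intensities`: (K, P) for K lights and P pixels;
    `light_dirs`: (K, 3) unit vectors. Returns albedo rho (P,) and
    unit surface normals (3, P)."""
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # (3, P)
    rho = np.linalg.norm(g, axis=0)          # albedo = magnitude of g
    normals = np.divide(g, rho, out=np.zeros_like(g), where=rho > 0)
    return rho, normals
```

    With three or more non-coplanar lights the system is fully determined per pixel; the shadow/highlight-robust variant effectively discards observations that violate the Lambertian model before the solve.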

  13. Infrared Camera

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A sensitive infrared camera that observes the blazing plumes from the Space Shuttle or expendable rocket lift-offs is capable of scanning for fires, monitoring the environment and providing medical imaging. The hand-held camera uses highly sensitive arrays in infrared photodetectors known as quantum well infrared photo detectors (QWIPS). QWIPS were developed by the Jet Propulsion Laboratory's Center for Space Microelectronics Technology in partnership with Amber, a Raytheon company. In October 1996, QWIP detectors pointed out hot spots of the destructive fires speeding through Malibu, California. Night vision, early warning systems, navigation, flight control systems, weather monitoring, security and surveillance are among the duties for which the camera is suited. Medical applications are also expected.

  14. Infrared camera based thermometry for quality assurance of superficial hyperthermia applicators.

    PubMed

    Müller, Johannes; Hartmann, Josefin; Bert, Christoph

    2016-04-07

    The purpose of this work was to provide a feasible and easy to apply phantom-based quality assurance (QA) procedure for superficial hyperthermia (SHT) applicators by means of infrared (IR) thermography. The VarioCAM hr head (InfraTec, Dresden, Germany) was used to investigate the SA-812, the SA-510 and the SA-308 applicators (all: Pyrexar Medical, Salt Lake City, UT, USA). Probe referencing and thermal equilibrium procedures were applied to determine the emissivity of the muscle-equivalent agar phantom. Firstly, the disturbing potential of thermal conduction on the temperature distribution inside the phantom was analyzed through measurements after various heating times (5-50 min). Next, the influence of the temperature of the water bolus between the SA-812 applicator and the phantom's surface was evaluated by varying its temperature. The results are presented in terms of characteristic values (extremal temperatures, percentiles and effective field sizes (EFS)) and temperature-area-histograms (TAH). Lastly, spiral antenna applicators were compared by the introduced characteristics. The emissivity of the used phantom was found to be ε  =  0.91  ±  0.03, the results of both methods coincided. The influence of thermal conduction with regard to heating time was smaller than expected; the EFS of the SA-812 applicator had a size of (68.6  ±  6.7) cm(2), averaged group variances were  ±3.0 cm(2). The TAHs show that the influence of the water bolus is mostly limited to depths of  <3 cm, yet it can greatly enhance or reduce heat generation in this regime: at a depth of 1 cm, measured maximal temperature rises were 14.5 °C for T Bolus  =  30 °C and 8.6 °C for T Bolus  =  21 °C, respectively. The EFS was increased, too. The three spiral antenna applicators generated similar heat distributions. Generally, the procedure proved to yield informative insights into applicator characteristics, thus making the application
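
    The effective field size (EFS) and temperature-area-histogram (TAH) characteristics used above can be computed from a temperature-rise map in a few lines (the 50%-of-maximum EFS criterion is the common convention in superficial hyperthermia QA and is assumed here; names are illustrative):

```python
import numpy as np

def effective_field_size(temp_rise_map, pixel_area_cm2):
    """Effective field size (EFS): the area over which the temperature
    rise is at least 50% of the maximum rise."""
    threshold = 0.5 * temp_rise_map.max()
    return np.count_nonzero(temp_rise_map >= threshold) * pixel_area_cm2

def temperature_area_histogram(temp_rise_map, pixel_area_cm2, temps):
    """TAH: heated area (cm^2) reaching at least each rise in `temps`."""
    return [np.count_nonzero(temp_rise_map >= t) * pixel_area_cm2
            for t in temps]
```

    Both reduce an IR thermogram to scalar QA characteristics, which is what makes the camera-based procedure comparable across applicators and water-bolus settings.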

  15. Infrared camera based thermometry for quality assurance of superficial hyperthermia applicators

    NASA Astrophysics Data System (ADS)

    Müller, Johannes; Hartmann, Josefin; Bert, Christoph

    2016-04-01

    The purpose of this work was to provide a feasible and easy to apply phantom-based quality assurance (QA) procedure for superficial hyperthermia (SHT) applicators by means of infrared (IR) thermography. The VarioCAM hr head (InfraTec, Dresden, Germany) was used to investigate the SA-812, the SA-510 and the SA-308 applicators (all: Pyrexar Medical, Salt Lake City, UT, USA). Probe referencing and thermal equilibrium procedures were applied to determine the emissivity of the muscle-equivalent agar phantom. Firstly, the disturbing potential of thermal conduction on the temperature distribution inside the phantom was analyzed through measurements after various heating times (5-50 min). Next, the influence of the temperature of the water bolus between the SA-812 applicator and the phantom’s surface was evaluated by varying its temperature. The results are presented in terms of characteristic values (extremal temperatures, percentiles and effective field sizes (EFS)) and temperature-area-histograms (TAH). Lastly, spiral antenna applicators were compared by the introduced characteristics. The emissivity of the used phantom was found to be ɛ  =  0.91  ±  0.03, the results of both methods coincided. The influence of thermal conduction with regard to heating time was smaller than expected; the EFS of the SA-812 applicator had a size of (68.6  ±  6.7) cm2, averaged group variances were  ±3.0 cm2. The TAHs show that the influence of the water bolus is mostly limited to depths of  <3 cm, yet it can greatly enhance or reduce heat generation in this regime: at a depth of 1 cm, measured maximal temperature rises were 14.5 °C for T Bolus  =  30 °C and 8.6 °C for T Bolus  =  21 °C, respectively. The EFS was increased, too. The three spiral antenna applicators generated similar heat distributions. Generally, the procedure proved to yield informative insights into applicator characteristics, thus making the application

  16. Focal thyroid incidentaloma on whole body fluorodeoxyglucose positron emission tomography/computed tomography in known cancer patients: A case-based discussion with a series of three examples.

    PubMed

    Targe, Mangala; Basu, Sandip

    2015-01-01

    The importance, imaging characteristics and outcome of focal thyroid incidentaloma on fluorodeoxyglucose-positron emission tomography/computed tomography (FDG-PET/CT) have been illustrated in this report. This is drawn from a series of three case examples of proven malignancy at different locations, with three different thyroid cytopathological diagnoses. Subsequently, a case-based discussion on present consensus of the management of this entity has been undertaken including certain specific aspects of PET-CT interpretation and its role in this setting.

  17. Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor

    NASA Astrophysics Data System (ADS)

    Dragone, A.; Kenney, C.; Lozinskaya, A.; Tolbanov, O.; Tyazhev, A.; Zarubin, A.; Wang, Zhehui

    2016-11-01

    A multilayer stacked X-ray camera concept is described. This type of technology is called a `4H' X-ray camera, where 4H stands for high-Z (Z>30) sensor, high resolution (less than 300 micron pixel pitch), high speed (above 100 MHz), and high energy (above 30 keV in photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use an integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on a modification of the ePix family of ASICs is discussed. Applications in X-ray free electron lasers (XFELs), synchrotrons, medicine and non-destructive testing are possible.

  18. Use of a smart phone based thermo camera for skin prick allergy testing: a feasibility study (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Barla, Lindi; Verdaasdonk, Rudolf M.; Rustemeyer, Thomas; Klaessens, John; van der Veen, Albert

    2016-02-01

    Allergy testing is usually performed by exposing the skin of the inner forearm to small quantities of potential allergens and scratching the protective epidermis to increase exposure. After 15 minutes the dermatologist performs a visual check for swelling and erythema, which is subjective and difficult for, e.g., dark skin types. A small smartphone-based thermal camera (FLIR One) was used to obtain quantitative images in a feasibility study of 17 patients. Directly after allergen exposure on the forearm, thermal images were captured at 30-second intervals and processed into a time-lapse movie over 15 minutes. Taking the 'subjective' reading of the dermatologist as the gold standard, in 11/17 patients (65%) the dermatologist's evaluation was confirmed by the thermal camera, including 5 of 6 patients without an allergic response. In 7 patients thermal imaging showed additional spots. Of the 342 sites tested, the dermatologist detected 47 allergies, of which 28 (60%) were confirmed by thermal imaging, while thermal imaging showed 12 additional spots. The method can be improved with dedicated acquisition software and better registration between normal and thermal images. The lymphatic reaction seems to shift from the original puncture site. The interpretation of the thermal images is still subjective, since collecting quantitative data is difficult due to patient motion during the 15 minutes. Although not yet conclusive, thermal imaging appears promising for improving the sensitivity and selectivity of allergy testing using a smartphone-based camera.

  19. Cross-calibration of the Rosetta Navigation Camera based on images of the 67P comet nucleus

    NASA Astrophysics Data System (ADS)

    Statella, Thiago; Geiger, Bernhard

    2017-07-01

    The Rosetta spacecraft carried a Navigation Camera (NavCam) for optical navigation in the vicinity of the comet. In order to facilitate the use of the data for quantitative scientific work, we performed a cross-calibration study based on images taken with the OSIRIS narrow-angle camera. For this purpose, we selected sets of images acquired roughly simultaneously on 2014 August 1 during comet approach at small phase angles. We employed two procedures, the first based on the average signal over the nucleus and the second considering histograms of signal values within the images. Both methods delivered consistent results for the radiometric calibration factor. As a first application and further consistency check, we applied the calibration procedure to an extended set of NavCam images acquired at phase angles ranging from ˜1° to 55° in order to study the nucleus reflectance properties. From empirical model fits to the phase angle dependence we obtained values of 0.065 ± 0.003 for the geometric albedo and 0.019 ± 0.001 for the Bond albedo in the broad spectral sensitivity band of the camera.
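
    The extrapolation from phase-angle coverage to geometric albedo can be illustrated with the simplest empirical model, an exponential phase-darkening law fitted linearly in log space (the paper's actual empirical model fits are not specified, so this is an assumption for illustration; function and symbol names are hypothetical):

```python
import numpy as np

def geometric_albedo_fit(phase_deg, reflectance):
    """Fit ln(r) = ln(p) - beta * alpha (exponential phase darkening)
    and extrapolate to phase angle alpha = 0 to obtain the geometric
    albedo p and the phase coefficient beta (per degree)."""
    slope, ln_p = np.polyfit(phase_deg, np.log(reflectance), 1)
    return np.exp(ln_p), -slope
```

    Real phase curves also show an opposition surge at very small angles, which is why more elaborate empirical models are normally fitted; the log-linear fit is only the zeroth-order version of the procedure.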

  20. Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor

    DOE PAGES

    Dragone, Angelo; Kenney, Chris; Lozinskaya, Anastassiya; ...

    2016-11-29

    Here, we describe a multilayer stacked X-ray camera concept. This type of technology is called a `4H' X-ray camera, where 4H stands for high-Z (Z > 30) sensor, high resolution (less than 300 micron pixel pitch), high speed (above 100 MHz), and high energy (above 30 keV photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use the integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on modifications to the ePix family of ASICs is discussed. Applications in X-ray free electron lasers (XFELs), synchrotrons, medicine and non-destructive testing are possible.

  1. Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor

    SciTech Connect

    Dragone, Angelo; Kenney, Chris; Lozinskaya, Anastassiya; Tolbanov, Oleg; Tyazhev, Anton; Zarubin, Andrei; Wang, Zhehui

    2016-11-29

    Here, we describe a multilayer stacked X-ray camera concept. This type of technology is called a `4H' X-ray camera, where 4H stands for high-Z (Z > 30) sensor, high resolution (less than 300 micron pixel pitch), high speed (above 100 MHz), and high energy (above 30 keV photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use the integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on modifications to the ePix family of ASICs is discussed. Applications in X-ray free electron lasers (XFELs), synchrotrons, medicine and non-destructive testing are possible.

  2. A passive terahertz video camera based on lumped element kinetic inductance detectors.

    PubMed

    Rowe, Sam; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Wood, Ken; Ade, Peter A R; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; Grainger, William; House, Julian; Mauskopf, Philip; Moseley, Paul; Spencer, Locke; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian

    2016-03-01

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)--designed originally for far-infrared astronomy--as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  3. Super-resolution in a defocused plenoptic camera: a wave-optics-based approach.

    PubMed

    Sahin, Erdem; Katkovnik, Vladimir; Gotchev, Atanas

    2016-03-01

    Plenoptic cameras enable the capture of a light field with a single device. However, with traditional light field rendering procedures, they can provide only low-resolution two-dimensional images. Super-resolution is considered to overcome this drawback. In this study, we present a super-resolution method for the defocused plenoptic camera (Plenoptic 1.0), where the imaging system is modeled using wave optics principles and utilizing low-resolution depth information of the scene. We are particularly interested in super-resolution of in-focus and near in-focus scene regions, which constitute the most challenging cases. The simulation results show that the employed wave-optics model makes super-resolution possible for such regions as long as sufficiently accurate depth information is available.

  4. Design of belief propagation based on FPGA for the multistereo CAFADIS camera.

    PubMed

    Magdaleno, Eduardo; Lüke, Jonás Philipp; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel

    2010-01-01

    In this paper we describe a fast, specialized hardware implementation of the belief propagation algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel, pipelined architecture that implements the algorithm without external memory. Although the BRAM usage of the device increases considerably, we can meet real-time constraints through extremely high-performance signal processing: exploiting parallelism and accessing several memories simultaneously. Quantitative results with 16-bit precision show that performance is very close to that of the original Matlab implementation of the algorithm.
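
    The paper's FPGA/VHDL implementation is not reproduced here, but the underlying algorithm can be illustrated with a minimal software sketch: min-sum belief propagation on a 1D chain of pixels (where it is exact), choosing a depth label per pixel by trading a data cost against a smoothness cost between neighbors. All names and cost values below are illustrative, not the CAFADIS pipeline.

```python
import numpy as np

def bp_chain_depth(data_cost, smooth_weight=1.0):
    """Min-sum belief propagation on a 1D pixel chain.
    data_cost[i, d]: cost of assigning depth label d to pixel i.
    Returns per-pixel labels minimising data + |d_i - d_j| smoothness cost."""
    n, L = data_cost.shape
    d = np.arange(L)
    pairwise = smooth_weight * np.abs(d[:, None] - d[None, :])
    fwd = np.zeros((n, L))
    bwd = np.zeros((n, L))
    for i in range(1, n):           # left-to-right messages
        fwd[i] = ((data_cost[i - 1] + fwd[i - 1])[:, None] + pairwise).min(axis=0)
    for i in range(n - 2, -1, -1):  # right-to-left messages
        bwd[i] = ((data_cost[i + 1] + bwd[i + 1])[:, None] + pairwise).min(axis=0)
    # Belief = local data cost plus incoming messages from both sides
    return (data_cost + fwd + bwd).argmin(axis=1)

# A chain of 5 pixels, 3 depth labels; pixel 2 has a noisy data term
cost = np.full((5, 3), 2.0)
cost[:, 1] = 0.0
cost[2] = [2.0, 1.5, 0.0]
print(bp_chain_depth(cost))  # smoothness overrules the noisy pixel: [1 1 1 1 1]
```

    On a 2D image grid the same message update runs iteratively over four neighbors; the FPGA design in the paper pipelines exactly these per-label min-reductions.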

  5. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations.

    PubMed

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-11-28

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.

  6. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    NASA Astrophysics Data System (ADS)

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-11-01

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.

  7. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    PubMed Central

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-01-01

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision. PMID:27892454

  8. Design of Belief Propagation Based on FPGA for the Multistereo CAFADIS Camera

    PubMed Central

    Magdaleno, Eduardo; Lüke, Jonás Philipp; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel

    2010-01-01

    In this paper we describe a fast, specialized hardware implementation of the belief propagation algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel, pipelined architecture that implements the algorithm without external memory. Although the BRAM usage of the device increases considerably, we can meet real-time constraints through extremely high-performance signal processing: exploiting parallelism and accessing several memories simultaneously. Quantitative results with 16-bit precision show that performance is very close to that of the original Matlab implementation of the algorithm. PMID:22163404

  9. A passive terahertz video camera based on lumped element kinetic inductance detectors

    SciTech Connect

    Rowe, Sam; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Ade, Peter A. R.; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; House, Julian; Moseley, Paul; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian; Wood, Ken; Grainger, William; Mauskopf, Philip; Spencer, Locke

    2016-03-15

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  10. Thresholding schemes for visible light communications with CMOS camera using entropy-based algorithms.

    PubMed

    Liang, Kevin; Chow, Chi-Wai; Liu, Yang; Yeh, Chien-Hung

    2016-10-31

    Recent visible light communication (VLC) studies have mainly used positive-intrinsic-negative (PIN) photodiodes and avalanche photodiodes (APDs). VLC using an embedded complementary metal-oxide-semiconductor (CMOS) camera is attractive. Exploiting the rolling shutter effect of a CMOS camera can increase the VLC data rate, and different techniques have been proposed for improving the demodulation of the rolling shutter pattern. Important steps in demodulating the rolling shutter pattern are smoothing and the application of efficient thresholding to distinguish the data logic levels. Here, we propose and demonstrate for the first time two entropy thresholding algorithms: maximum entropy thresholding and minimum cross entropy thresholding. An experimental evaluation comparing their bit-error-rate (BER) performance and efficiency is also performed.
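
    One of the two proposed algorithms, maximum entropy (Kapur) thresholding, can be sketched as follows: choose the threshold that maximizes the summed entropies of the histogram portions below and above it. The toy bimodal signal below stands in for a smoothed rolling-shutter column profile; it is not the authors' data, and the bin count is an assumption.

```python
import numpy as np

def max_entropy_threshold(samples, bins=256):
    """Kapur's maximum-entropy threshold for a 1D signal: maximize the
    sum of the entropies of the two normalized sub-histograms."""
    hist, edges = np.histogram(samples, bins=bins)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
            - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
        if h > best_h:
            best_t, best_h = t, h
    return edges[best_t]

# Bimodal toy signal: dark stripes near 50, bright stripes near 200
rng = np.random.default_rng(0)
sig = np.concatenate([rng.normal(50, 5, 500), rng.normal(200, 5, 500)])
thr = max_entropy_threshold(sig)
print(50 < thr < 200)  # → True: threshold lands between the two modes
```

    Minimum cross entropy thresholding follows the same loop but minimizes the cross entropy between the signal and its binarized version instead of maximizing the class entropies.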

  11. On-orbit calibration approach for star cameras based on the iteration method with variable weights.

    PubMed

    Wang, Mi; Cheng, Yufeng; Yang, Bo; Chen, Xiao

    2015-07-20

    To perform efficient on-orbit calibration of star cameras, we developed an attitude-independent calibration approach for global optimization and noise removal by least-squares estimation using multiple star images, with which the optimal principal point, focal length, and high-order focal-plane distortion can be obtained in one step, in full consideration of the interaction among star camera parameters. To address the problem that stars can be misidentified in star images, an iteration method with variable weights is introduced to eliminate the influence of misidentified star pairs. The approach increases the precision of the least-squares estimation while requiring fewer star images. The proposed approach was verified to be precise and robust in three experiments.
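
    A common realization of such an iteration with variable weights is iteratively re-weighted least squares (IRLS), sketched below under the assumption of a simple 1/|residual| weighting (the paper's exact weighting scheme is not reproduced here): observations with large residuals, such as misidentified star pairs, receive small weights and stop distorting the estimate.

```python
import numpy as np

def irls_fit(A, b, iters=20, eps=1e-6):
    """Least squares with iteratively re-weighted residuals: each pass
    solves a weighted problem, then sets weight ~ 1/|residual| so that
    gross outliers are progressively down-weighted."""
    w = np.ones(len(b))
    x = None
    for _ in range(iters):
        s = np.sqrt(w)
        x, *_ = np.linalg.lstsq(A * s[:, None], b * s, rcond=None)
        r = np.abs(b - A @ x)
        w = 1.0 / np.maximum(r, eps)  # eps floor avoids division by zero
    return x

# Toy linear calibration y = f*x + c with one gross outlier
x_obs = np.array([0.0, 1, 2, 3, 4])
A = np.c_[x_obs, np.ones_like(x_obs)]
b = 2.0 * x_obs + 1.0
b[2] += 50.0  # a "misidentified" star pair
print(np.round(irls_fit(A, b), 3))  # → [2. 1.]
```

    An ordinary least-squares fit of the same data would be dragged toward the outlier; the variable weights recover the true parameters without having to discard the bad pair explicitly.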

  12. A risk-based coverage model for video surveillance camera control optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhou; Du, Zhiguo; Zhao, Xingtao; Li, Peiyue; Li, Dehua

    2015-12-01

    A visual surveillance system for law enforcement or police case investigation differs from traditional applications, as it is designed to monitor pedestrians, vehicles, or potential accidents. In the present work, visual surveillance risk is defined as the uncertainty of visual information about the monitored targets and events, and risk entropy is introduced to model the requirements of police surveillance tasks on the quality and quantity of video information. The proposed coverage model is applied to calculate the preset field-of-view (FoV) positions of PTZ cameras.

  13. MOEMS-based time-of-flight camera for 3D video capturing

    NASA Astrophysics Data System (ADS)

    You, Jang-Woo; Park, Yong-Hwa; Cho, Yong-Chul; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Lee, Seung-Wan

    2013-03-01

    We suggest a Time-of-Flight (TOF) video camera capturing real-time depth images (a.k.a. depth maps), which are generated from fast-modulated IR images utilizing a novel MOEMS modulator with a switching speed of 20 MHz. In general, 3 or 4 independent IR (e.g. 850 nm) images are required to generate a single frame of the depth image. Captured video of a moving object frequently shows motion drag between sequentially captured IR images, which results in the so-called `motion blur' problem even when the frame rate of the depth image is fast (e.g. 30 to 60 Hz). We propose a novel `single shot' TOF 3D camera architecture generating a single depth image out of synchronously captured IR images. The imaging system consists of a 2x2 imaging lens array, MOEMS optical shutters (modulators) placed on each lens aperture, and a standard CMOS image sensor. The IR light reflected from the object is modulated by the optical shutters on the apertures of the 2x2 lens array, and the transmitted images are captured on the image sensor, resulting in 2x2 sub-IR images. As a result, the depth image is generated from those four simultaneously captured independent sub-IR images, hence the motion blur problem is eliminated. The resulting performance makes the 3D camera useful in human-machine interaction applications such as user interfaces for TVs, monitors, or handheld devices, and in motion capture of the human body. In addition, we show that the presented 3D camera can be modified to capture color together with the depth image simultaneously at the `single shot' frame rate.
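
    The standard 4-phase demodulation that such TOF cameras rely on can be sketched as follows: four IR images sampled at 0/90/180/270 degree shifts of the modulation signal yield the round-trip phase, and hence the distance. The modulation frequency and the cosine signal model below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_depth(a0, a1, a2, a3, f_mod=20e6):
    """4-phase ToF demodulation: recover the round-trip modulation phase
    from four phase-shifted samples, then convert phase to distance."""
    phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)
    return C * phase / (4 * np.pi * f_mod)

# Simulate a target at 3 m with a 20 MHz modulation
f = 20e6
d_true = 3.0
phi = 4 * np.pi * f * d_true / C          # round-trip phase of the return
k = np.arange(4)
samples = 1.0 + 0.5 * np.cos(phi + k * np.pi / 2)  # four shuttered exposures
print(round(tof_depth(*samples, f_mod=f), 3))  # → 3.0
```

    The `single shot' architecture in the paper captures these four samples simultaneously through the 2x2 lens array instead of sequentially, which is why motion between exposures no longer corrupts the phase estimate. Note the unambiguous range at 20 MHz is c/(2f) = 7.5 m; farther targets wrap around.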

  14. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates.

    PubMed

    Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke

    2016-01-01

    Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a 'control' and a 'treatment' survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or the temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p > 0.05). The numbers of photographic captures were also similar for the control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys (although estimates derived using non-spatial methods (7.28-9.28 leopards/100 km2) were considerably higher than estimates from spatially explicit methods (3.40-3.65 leopards/100 km2)). The precision of estimates from the control and treatment surveys was also comparable, for both non-spatial and spatial methods of estimation. Our findings suggest that, at least in the context of leopard research in productive habitats, the use of lures is not warranted.

  15. Color correction for projected image on colored-screen based on a camera

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Chul; Lee, Tae-Hyoung; Choi, Myong-Hui; Ha, Yeong-Ho

    2011-01-01

    Recently, the projector has become one of the most common display devices, not only for presentations in offices and classrooms but also for entertainment at home and in theaters. Mobile projectors extend these applications to meetings in the field and presentations in arbitrary locations. Accordingly, projection onto a white screen is not always guaranteed, causing some color distortion. Several algorithms have been suggested to correct the projected color on lightly colored screens. These are limited by the need for measurement equipment that cannot always be carried along, and they lack accuracy because the transform matrix is obtained from a small number of patches. In this paper, a color correction method using an ordinary still camera as a convenient measurement device is proposed to match the colors between white and colored screens. A patch containing 9 ramps per channel is first projected on the white and the lightly colored screen and captured by the camera. Next, digital values are obtained from the captured image for each ramp patch on both screens, yielding different values for the same patch. After that, we check which ramp patch on the colored screen has the same digital value as on the white screen, repeating this procedure for all ramp patches. The difference between corresponding ramp patches reveals th